

January 21st, 2010 21:00

NS-120 FC LUN Numbering

I'm curious why the LUN numbers I'm assigning in NaviSphere are not being detected correctly by the hosts I assign the LUNs to.

For example, I created two LUNs with LUN IDs 20 and 21 and presented them to two VMware ESX hosts.

The VMware ESX hosts see the LUNs as 0 and 1.

(screenshots: 1-21-2010 11-19-14 PM.png, 1-21-2010 11-17-57 PM.png)

4 Operator


8.6K Posts

January 22nd, 2010 08:00

It might be a bit hidden - it's also called "host ID" in the NaviSphere GUI.

See here for examples: http://corpusweb130.corp.emc.com/ns/common/post_install/Config_storage_FC_NaviManager.htm

147 Posts

January 21st, 2010 23:00

Probably because the Clariion has two LUN numbers: an ALU (20 in your case), which exists only once per Clariion,

and an HLU, which is set when you put the LUN into a storage group. That one is only unique per SG and gets automatically assigned starting from 0, but you can modify it using the properties.

For a LUN used by the Celerra it's important that this HLU is set manually to values larger than 16.
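Rainer's ALU/HLU distinction can be sketched as a toy Python model (purely illustrative - the `StorageGroup` class and its behavior are my own simplification, not EMC code):

```python
# Toy model of Clariion LUN numbering (illustrative only, not EMC code).
# ALU: array-wide LUN number, unique per Clariion.
# HLU: host-visible number, unique only within one storage group.

class StorageGroup:
    def __init__(self, name, reserve_below=0):
        self.name = name
        self.hlu_to_alu = {}                # HLU -> ALU map the host sees
        self.reserve_below = reserve_below  # low HLUs reserved (Celerra case)

    def add_lun(self, alu, hlu=None):
        if hlu is None:
            # Navisphere-style auto-assignment: lowest free HLU, counting up
            hlu = self.reserve_below
            while hlu in self.hlu_to_alu:
                hlu += 1
        if hlu < self.reserve_below:
            raise ValueError(f"HLU {hlu} is reserved in {self.name}")
        if hlu in self.hlu_to_alu:
            raise ValueError(f"HLU {hlu} already used in {self.name}")
        self.hlu_to_alu[hlu] = alu
        return hlu

# ESX storage group: ALUs 20 and 21 show up to the hosts as HLU 0 and 1
esx = StorageGroup("ESX_SG")
print(esx.add_lun(alu=20))   # 0
print(esx.add_lun(alu=21))   # 1

# Celerra storage group: per this thread, data LUNs need an HLU above 16
celerra = StorageGroup("Celerra_SG", reserve_below=17)
print(celerra.add_lun(alu=30, hlu=17))   # 17
```

The auto-assignment from 0 is exactly what turned ALUs 20 and 21 into HLUs 0 and 1 in the original question.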

Rainer

January 22nd, 2010 06:00

I'm not seeing where I can change the HLU in the LUN or Storage Group properties.

This is not a showstopper, but I am used to being able to dictate what the LUN numbers will be as the hosts see them.  The Celerra seems to automatically assign these numbers starting from 0 on up in the host presentation, yet the actual LUN IDs I assigned do not correspond, which will make it a bit confusing to track the LUN utilization.

Jas

4 Operator


1.5K Posts

January 22nd, 2010 08:00

In Navisphere, go to the Storage Group properties page and click the "LUN" tab. In the Selected LUNs area, scroll right to find the column named "Host ID". You may not see any number associated with the newly added LUNs - simply click under that Host ID column for the respective LUN and you will be able to set the Host ID (HLU) value for it. For Celerra data LUNs it has to be greater than 16, as Rainer already mentioned earlier.
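As a rough sketch of what that Host ID edit amounts to (hypothetical Python, not Navisphere internals - the `sg_luns` map and `set_host_id` helper are made up for illustration):

```python
# Illustrative sketch only: re-mapping a LUN's Host ID (HLU) in a storage
# group, the way the Navisphere "Host ID" column edit does.

sg_luns = {0: 20, 1: 21}   # current Host ID -> LUN ID map of the storage group

def set_host_id(sg, alu, new_hlu, celerra_data=False):
    # Per this thread, Celerra data LUNs need a Host ID greater than 16
    if celerra_data and new_hlu <= 16:
        raise ValueError("Celerra data LUNs need a Host ID greater than 16")
    if new_hlu in sg:
        raise ValueError(f"Host ID {new_hlu} already in use in this group")
    # Drop the LUN's old mapping, if any, then add the new one
    for hlu, lun in list(sg.items()):
        if lun == alu:
            del sg[hlu]
    sg[new_hlu] = alu

set_host_id(sg_luns, alu=20, new_hlu=17, celerra_data=True)
print(sg_luns)   # {1: 21, 17: 20}
```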

My 2 cents

Sandip

January 22nd, 2010 09:00

Many thanks to both of you.  I see now where to assign Host IDs.  This was successful.

I have created four FC LUNs (6, 7, 8, and 9) and assigned them to a storage group and two ESX hosts successfully.  I did not assign them to the Celerra CS.

Would one of you be able to briefly explain what a Celerra user LUN is, where the Host ID and LUN number need to be 16 or greater?  I understand it's a LUN which is provisioned for use with Celerra Manager; however, I don't see the need for this since the CSA is able to provision iSCSI, NFS, and CIFS disk into storage pools without the use of NaviSphere.

If I place the two ESX hosts into the Celerra CS storage group and assign all the disks to that storage group, I receive several warnings in Celerra Manager about disk trespassing.  I did not perform any reads or writes.  I immediately moved the ESX hosts into their own storage group.

Thanks again, you have been a tremendous help.

Jas

4 Operator


8.6K Posts

January 22nd, 2010 10:00

If I place the two ESX hosts into the Celerra CS storage group and assign all the disks to that storage group, I receive several warnings in Celerra Manager about disk trespassing.  I did not perform any reads or writes.  I immediately moved the ESX hosts into their own storage group.

you were just lucky there

the Celerra does write a signature (diskmark) at the end of each LUN - if there is some data or structure there that ESX needs, you're in trouble
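The effect Rainer describes can be pictured with a toy byte-array "LUN" (the real diskmark format is not public - this only illustrates why a signature stamped at the tail of a shared device is dangerous):

```python
# Toy illustration of why sharing a LUN between Celerra and ESX is risky.
# The real Celerra diskmark format is proprietary; this only shows the
# effect of a signature written at the tail of a shared device.

LUN_SIZE = 64            # pretend the LUN is 64 bytes
lun = bytearray(LUN_SIZE)

# The host (e.g. ESX) writes data that happens to reach the end of the device
lun[-8:] = b"VMFSDATA"

# The Celerra stamps its signature (diskmark) at the end of the same LUN
signature = b"DISKMARK"
lun[-len(signature):] = signature

# The host's trailing data is now gone
assert lun[-8:] == b"DISKMARK"
```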

9 Legend


20.4K Posts

January 22nd, 2010 10:00

Only if you rescan the Celerra while those LUNs are presented in the storage group... otherwise it should be OK, right? It's not like the Celerra constantly polls the Clariion for new LUNs.

4 Operator


8.6K Posts

January 22nd, 2010 10:00

If you are fine with what the Storage Provisioning Wizard creates for use with the Celerra NAS piece, then there is no need for NaviSphere there.

You would only need it if you want to configure something the SPW can't do, like using only part of a RAID group or LUNs of a specific size.

Also, the SPW has only been available for half a year or so - before that you would have had to use Navi for configuring Celerra LUNs on an FC-enabled system,

as well as for a gateway Celerra.

The reason for host ID >16 there is that the HLUs below are reserved for Celerra OS use - so we won't map them as data disks, in order to avoid accidental overwrite.

In terms of VMware, I would also suggest taking a look at two TechBooks for VMware available from Powerlink:

https://powerlink.emc.com/nsepn/webapps/btg548664833igtcuup4826/km/appmanager/km/secureDesktop?internalId=0b0140668014ac57

the Host Connectivity Guide

https://powerlink.emc.com/nsepn/webapps/btg548664833igtcuup4826/km/live1/en_US/Offering_Technical/Technical_Documentation/300-002-304.pdf

and Clariion white papers

https://powerlink.emc.com/nsepn/webapps/btg548664833igtcuup4826/km/live1/en_US/Offering_Technical/White_Paper/H1416-emc-clariion-intgtn-vmware-wp.pdf

https://powerlink.emc.com/nsepn/webapps/btg548664833igtcuup4826/km/live1/en_US/Offering_Technical/White_Paper/H5534-intro-emc-clariion-cx4-series-feat-ultraflex-tech-wp.pdf

and yes - each host or cluster needs to have its own storage group

Rainer

January 22nd, 2010 11:00

ral67 wrote:

If I place the two ESX hosts into the Celerra CS storage group and assign all the disks to that storage group, I receive several warnings in Celerra Manager about disk trespassing.  I did not perform any reads or writes.  I immediately moved the ESX hosts into their own storage group.

you were just lucky there

the Celerra does write a signature (diskmark) at the end of each LUN - if there is some data or structure there that ESX needs, you're in trouble


The LUNs didn't have any ESX data on them, so that wasn't a concern.  Those ESX LUNs were just for testing and have since been deleted.

My immediate concern was the RAID groups that the Celerra CS owned, which were trespassed.  Everything still appears to be intact.  The ESX hosts hadn't detected them yet.  I ungrouped them as soon as I saw the warning messages about the trespassing.

(screenshot: 1-22-2010 1-36-37 PM.jpg)

147 Posts

January 22nd, 2010 13:00

Most probably a side effect of different multipathing -

ESX accessing the Celerra LUNs over a non-preferred path and causing a trespass, I guess.

I would advise not to play around with storage groups - especially the one containing the Celerra LUNs.

