
November 12th, 2015 06:00

Multi X-Brick XtremIO clusters and VMware vSphere LUN/path limits

Hello,

VMware vSphere ESXi 5.5 and 6.0 have a limit of 256 LUNs per host:

https://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf

https://www.vmware.com/pdf/vsphere6/r60/vsphere-60-configuration-maximums.pdf

For traditional storage arrays (if I can call them that) like VNX, where there are two storage controllers and the VMware best practice is to have two HBAs connected to the two fabrics, the host will have four paths to a LUN: 2 controllers x 2 HBAs = 4 paths.

When we use the same logic with EMC XtremIO clusters of two or more X-Bricks (the current maximum is eight), we should also consider another limit: the 'Number of total paths on a server', which is 1024.

If you have a cluster of two X-Bricks, you have four controllers; multiply by two HBAs and you have eight paths to a LUN. Therefore the maximum number of LUNs will be 1024 / 8 = 128.

If you go to the extreme and configure your XtremIO with eight X-Bricks, you have 16 controllers. Again, with two HBAs per host, the maximum number of LUNs you can attach to an ESXi host will be 1024 / 32 = 32...
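The arithmetic above can be sketched as a quick calculation (a minimal illustration; the 1024-path and 256-LUN figures are the vSphere 5.5/6.0 maximums cited above, and the sketch assumes every HBA sees every storage controller):

```python
# Path-budget arithmetic for the vSphere 5.5/6.0 per-host limits.
# Assumes full fan-out: every HBA is zoned to every storage controller (SC).

MAX_PATHS_PER_HOST = 1024   # vSphere 5.5/6.0 total-paths-per-host limit
MAX_LUNS_PER_HOST = 256     # vSphere 5.5/6.0 LUNs-per-host limit

def max_luns(x_bricks, hbas=2):
    """Max LUNs a host can see before hitting the path limit.

    Each X-Brick has two storage controllers, so with full fan-out
    paths per LUN = (2 * x_bricks) * hbas.
    """
    paths_per_lun = 2 * x_bricks * hbas
    return min(MAX_PATHS_PER_HOST // paths_per_lun, MAX_LUNS_PER_HOST)

print(max_luns(2))  # 1024 / 8  = 128
print(max_luns(8))  # 1024 / 32 = 32
```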

I understand that other operating systems may have different limits than a VMware environment, so this logic may not apply there.

We currently have an XtremIO array with two X-Bricks (four controllers) that is used for an Oracle T&D environment with multiple RDM-mapped LUNs, and we have hit the limit of 1024 paths (1024 / 4 controllers / 2 HBAs = 128 LUNs).

How would you resolve this issue?

We can disable certain paths on the VMware ESXi host (manually or through a script). But what logic should we apply as the selection criterion?

Can we do something clever in EMC PowerPath/VE to limit the number of paths it will use, etc.?
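One possible selection criterion for scripted path disabling would be to keep only the paths whose target port belongs to an X-Brick assigned to the host, and mark the rest for disabling. A minimal sketch, assuming a hypothetical list of (path, target port) records and a made-up target-to-brick mapping (this is not real esxcli or PowerPath output):

```python
# Sketch of a path-selection criterion: disable every path whose target
# port does not belong to the X-Brick assigned to this host.
# Target-port names and the brick mapping are illustrative only.

TARGET_TO_BRICK = {
    "X1-SC1-FC1": 1, "X1-SC2-FC1": 1,
    "X2-SC1-FC1": 2, "X2-SC2-FC1": 2,
}

def paths_to_disable(paths, assigned_brick):
    """paths: list of (path_name, target_port) tuples.

    Returns the path names that should be turned off because their
    target belongs to a different X-Brick than the one assigned."""
    return [name for name, target in paths
            if TARGET_TO_BRICK[target] != assigned_brick]

paths = [("vmhba1:C0:T0:L0", "X1-SC1-FC1"),
         ("vmhba1:C0:T1:L0", "X2-SC1-FC1"),
         ("vmhba2:C0:T0:L0", "X1-SC2-FC1")]
print(paths_to_disable(paths, assigned_brick=1))  # ['vmhba1:C0:T1:L0']
```

The actual disabling would then be done per path on the ESXi host (manually or scripted); the point of the sketch is only the selection logic.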

Thank you.

Mark Strong

www.vStrong.info

522 Posts

November 12th, 2015 06:00

This might not be exactly what you are looking for, but one way to potentially decrease that is to use only four paths per LUN by essentially allocating a host to a brick and round-robining the hosts between the bricks. It is kind of a pain from a management perspective, since the customer has to be mindful of that type of host/brick balancing, but it is supported and is the way I have approached it with some of my customers because of the pathing limits you mention:
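The host-to-brick round-robin described above can be sketched like this (hypothetical host names; assumes two SCs per X-Brick and two HBAs per host, as discussed in the thread):

```python
# Sketch of round-robin host/brick balancing: each host is zoned to a
# single X-Brick, so paths per LUN drop to 2 SCs * 2 HBAs = 4.
# Host names and counts are illustrative, not from a real configuration.

def assign_hosts_to_bricks(hosts, n_bricks):
    """Round-robin hosts across X-Bricks; returns {host: brick number}."""
    return {host: (i % n_bricks) + 1 for i, host in enumerate(hosts)}

def max_luns_one_brick(hbas=2, scs_per_brick=2, path_limit=1024):
    # Zoned to one brick only, the 1024-path budget allows
    # 1024 / 4 = 256 LUNs, which is the per-host LUN maximum anyway.
    return path_limit // (scs_per_brick * hbas)

print(assign_hosts_to_bricks(["esx01", "esx02", "esx03", "esx04"], 2))
print(max_luns_one_brick())  # 256
```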

[Attached diagram: paths.png]

727 Posts

November 12th, 2015 06:00

Which XtremIO version are you running?

You don’t necessarily need to map all the LUNs to all the controllers on the XtremIO array. See the user guide for some recommendations on zoning with multiple X-Brick arrays.

41 Posts

November 12th, 2015 08:00

Thank you, Avi and echolaughmk.

We are on 3.0.1.11

I will review the current zoning in line with the Host Configuration Guide and get back to you.

41 Posts

November 16th, 2015 07:00

Thank you all for your help.

What would be good practice for configuring zoning?

fcalias name CLUSTER01_hostc01n01 vsan 10
  member device-alias hostc01n01_vhba0 initiator
exit

fcalias name CLUSTER01_hostc01n01_XIO_X1 vsan 10
  member device-alias X1_SC1_FC1 target
  member device-alias X1_SC2_FC1 target
exit

fcalias name CLUSTER01_hostc01n02 vsan 10
  member device-alias hostc01n02_vhba0 initiator
exit

fcalias name CLUSTER01_hostc01n02_XIO_X2 vsan 10
  member device-alias X2_SC1_FC1 target
  member device-alias X2_SC2_FC1 target
exit

zone name CLUSTER01_hostc01n01 vsan 10
  member fcalias CLUSTER01_hostc01n01
  member fcalias CLUSTER01_hostc01n01_XIO_X1
exit

zone name CLUSTER01_hostc01n02 vsan 10
  member fcalias CLUSTER01_hostc01n02
  member fcalias CLUSTER01_hostc01n02_XIO_X2
exit

zoneset name zs_vsan10 vsan 10
  member CLUSTER01_hostc01n01
  member CLUSTER01_hostc01n02
exit

zoneset activate name zs_vsan10 vsan 10
zone commit vsan 10
end
copy run start

Or would it be better to have an alias for each X-Brick and re-use it for zoning?

Like this:

=============== Host 1 --> X-Brick 1 ==============================

fcalias name CLUSTER01_hostc01n01 vsan 10
  member device-alias hostc01n01_vhba0 initiator
exit

fcalias name XIO_XBrick1 vsan 10
  member device-alias X1_SC1_FC1 target
  member device-alias X1_SC2_FC1 target
exit

zone name CLUSTER01_hostc01n01 vsan 10
  member fcalias CLUSTER01_hostc01n01
  member fcalias XIO_XBrick1
exit

=============== Host 2 --> X-Brick 2 ==============================

fcalias name CLUSTER01_hostc01n02 vsan 10
  member device-alias hostc01n02_vhba0 initiator
exit

fcalias name XIO_XBrick2 vsan 10
  member device-alias X2_SC1_FC1 target
  member device-alias X2_SC2_FC1 target
exit

zone name CLUSTER01_hostc01n02 vsan 10
  member fcalias CLUSTER01_hostc01n02
  member fcalias XIO_XBrick2
exit

XIO_XBrick1 and XIO_XBrick2 can then be re-used for zoning other hosts where a specific host-->XBrick relationship needs to be established?

Thank you.

Much appreciated.

41 Posts

November 17th, 2015 06:00

Would it be supported if we zone the host like this:

HBA0   -    Host 1     -       HBA1

SAN0                              SAN1

X1-SC1&2                        X2-SC1&2

2 Intern • 20.4K Posts

November 17th, 2015 07:00

This is what I do:

Fabric A

zone name oracle-h0-xio-x1 vsan 109

  pwwn 10:00:00:00:c9:7e:00:d3 [oracle-h0]  init

  pwwn 51:4f:0c:50:0c:ef:4c:00 [xio-x1-sc1-fc1]  target

  pwwn 51:4f:0c:50:0c:ef:4c:04 [xio-x1-sc2-fc1]  target

Fabric B

zone name oracle-h1-xio-x1 vsan 110

  pwwn 10:00:00:00:c9:7e:01:08 [oracle-h1]  init

  pwwn 51:4f:0c:50:0c:ef:4c:01 [xio-x1-sc1-fc2]  target

  pwwn 51:4f:0c:50:0c:ef:4c:05 [xio-x1-sc2-fc2]  target

41 Posts

November 17th, 2015 08:00

Thank you, dynamox.


You zone the host in exactly the same way as in the Host Configuration Guide, i.e. host 1 is zoned to a single X-Brick (x01). Would it be better if we zoned the second HBA (hba1 / h1) on the second SAN switch (vsan 110) to the second X-Brick?

Something like this:


Fabric B

zone name oracle-h1-ndb1xio-x1 vsan 110

  pwwn 10:00:00:00:c9:7e:01:08 [oracle-h1]  init

  pwwn 51:4f:0c:50:0c:ef:XX:XX [xio-x2-sc1-fc2]  target

  pwwn 51:4f:0c:50:0c:ef:XX:XX [xio-x2-sc2-fc2]  target

35 Posts

November 17th, 2015 12:00

Is XtremIO compatible with Smart Zones? I was told by Professional Services to keep it one-to-one. It would make life easier for me.

35 Posts

November 18th, 2015 07:00

Mark,

I initially set up my zoning splitting HBAs across SCs on different X-Bricks, since we are set up with four bricks, for redundancy, but it was strongly recommended to follow best practice, so I changed it. Pro Services recommends that I zone all HBAs from a host to all SCs in a brick. It had something to do with thumbprint generation and its placement on SCs.

Now I have zoned all the even-numbered hosts to all SCs on even-numbered X-Bricks and odd-numbered hosts to all SCs on odd-numbered X-Bricks. It is the only way I could design for HBA failure, SC failure and X-Brick failure while keeping to best practices.

This design won't really matter if two SCs fail simultaneously, as the cluster will automatically shut down to preserve the data.

2 Intern • 20.4K Posts

November 18th, 2015 08:00

Mark_Strong wrote:

Thank you dynamox


You zone the host in exactly the same way as in the Host Configuration Guide, i.e. host 1 is zoned to a single X-Brick (x01). Would it be better if we zoned the second HBA (hba1 / h1) on the second SAN switch (vsan 110) to the second X-Brick?

Something like this:


Fabric B

zone name oracle-h1-ndb1xio-x1 vsan 110

  pwwn 10:00:00:00:c9:7e:01:08 [oracle-h1]  init

  pwwn 51:4f:0c:50:0c:ef:XX:XX [xio-x2-sc1-fc2]  target

  pwwn 51:4f:0c:50:0c:ef:XX:XX [xio-x2-sc2-fc2]  target

In my opinion it does not buy you anything but confusion. I simply rotate host zoning between X-Bricks: Host 1 to X-Brick 1, Host 2 to X-Brick 2, Host 3 to X-Brick 1, and so on.

2 Intern • 20.4K Posts

November 18th, 2015 08:00

No issues with Smart Zones whatsoever. PS is sticking to a 15-year-old practice of single-target, single-initiator zoning... they need to move on.

35 Posts

November 19th, 2015 07:00

Cool, I will set this up in DR and see how it goes. So much easier.

41 Posts

December 4th, 2015 02:00

dynamox wrote:

Mark_Strong wrote:

Thank you dynamox


You zone the host in exactly the same way as in the Host Configuration Guide, i.e. host 1 is zoned to a single X-Brick (x01). Would it be better if we zoned the second HBA (hba1 / h1) on the second SAN switch (vsan 110) to the second X-Brick?

Something like this:


Fabric B

zone name oracle-h1-ndb1xio-x1 vsan 110

  pwwn 10:00:00:00:c9:7e:01:08 [oracle-h1]  init

  pwwn 51:4f:0c:50:0c:ef:XX:XX [xio-x2-sc1-fc2]  target

  pwwn 51:4f:0c:50:0c:ef:XX:XX [xio-x2-sc2-fc2]  target

In my opinion it does not buy you anything but confusion. I simply rotate host zoning between X-Bricks: Host 1 to X-Brick 1, Host 2 to X-Brick 2, Host 3 to X-Brick 1, and so on.

Thank you dynamox

As per saj.hou's message: "I initially set up my zoning splitting HBAs across SCs on different X-Bricks, since we are set up with four bricks, for redundancy, but it was strongly recommended to follow best practice, so I changed it. Pro Services recommends that I zone all HBAs from a host to all SCs in a brick. It had something to do with thumbprint generation and its placement on SCs.

Now I have zoned all the even-numbered hosts to all SCs on even-numbered X-Bricks and odd-numbered hosts to all SCs on odd-numbered X-Bricks. It is the only way I could design for HBA failure, SC failure and X-Brick failure while keeping to best practices."

I have followed the best practice and documented the zoning configuration: VMware Host zoning for multi X-Brick EMC XtremIO storage array | vStrong.info

I would like to thank everybody for the advice and recommendations.

Mark Strong

http://www.vStrong.info

2 Intern • 20.4K Posts

December 6th, 2015 16:00

If an entire X-Brick fails, you are toast.

727 Posts

December 7th, 2015 11:00

dynamox - if the entire X-Brick is offline (because of some dual failure in the array), then the array will shut down anyway. So planning the zoning so that you can survive a complete X-Brick being offline is not going to buy you much.
