2 Iron

Multi X-Brick XtremIO clusters and VMware vSphere LUN/path limits


Hello,

VMware vSphere ESXi 5.5 and 6.0 have a limit of 256 LUNs per host:

https://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf

https://www.vmware.com/pdf/vsphere6/r60/vsphere-60-configuration-maximums.pdf

For traditional (if I can call them that) storage arrays like VNX, which have two storage controllers, and with the VMware best practice of two HBAs connected to two fabrics, a host will have four paths to a LUN: 2 controllers x 2 HBAs = 4 paths.

When we apply the same logic to EMC XtremIO clusters (two or more X-Bricks; the current maximum is 8), we also need to consider another limit, the 'Number of total paths on a server', which is 1024.

If you have a cluster of two X-Bricks, you have four controllers; multiply by two HBAs and you have eight paths to a LUN. Therefore the maximum number of LUNs will be 1024 / 8 = 128.

If you go to the extreme and configure your XtremIO with eight X-Bricks, you have 16 controllers. Again, with two HBAs per host that is 32 paths per LUN, and the maximum number of LUNs you can attach to an ESXi host will be 1024 / 32 = 32...

I understand that different OSes may have different limits than a VMware environment, and this logic may not apply there.

We currently have an XtremIO array with two X-Bricks (four controllers) that is used for an Oracle T&D environment with multiple RDM-mapped LUNs, and we have hit the 1024-path limit (1024 paths / 4 controllers / 2 HBAs = 128 LUNs).

How would you resolve this issue?

We can disable certain paths on the VMware ESXi host (manually or through a script), but what logic should we apply to the selection criteria?

Can we do something clever in EMC PowerPath/VE to limit the number of paths it will use, etc.?
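
For illustration, this is the sort of thing I had in mind (the device and path names below are just placeholders, not from our environment):

# Rough total path count on the host (each path entry has a "Runtime Name" line)
esxcli storage core path list | grep -c "Runtime Name"

# Show the paths for one device, then turn a specific path off
esxcli storage core path list -d naa.xxxxxxxxxxxxxxxx
esxcli storage core path set --state=off --path=vmhba1:C0:T2:L10

# Or, with PowerPath/VE, move some paths to standby instead of active
# (arguments from memory, to be checked against the PowerPath documentation)
powermt display dev=all
powermt set mode=standby hba=1 dev=all

Though I suspect paths that are disabled or in standby still count towards the 1024-path maximum, so the real fix is probably in the zoning/masking rather than on the host.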

Thank you.

Mark Strong

www.vStrong.info

2 Iron

So why have all these storage controllers and complicate the zoning if you are limited by a single X-Brick failure? Wouldn't it save power and space to just have one X-Brick with expandable storage shelves and add controllers as needed, or to engineer a way to make use of the other storage controllers when an X-Brick goes down, since they are all interconnected via InfiniBand?

I do not like the fact that the cluster shuts down automatically on an X-Brick failure. That means all hosts connected to the good X-Bricks lose storage visibility, which cripples the business. Not a good design at all.

4 Tellurium

The probability of having such a catastrophic failure taking out an entire X-brick is very, very low.

If you're still worried about such an event, you need to be thinking about splitting into failure domains.

2 Bronze

XtremIO employs a scale-out architecture where capacity and performance are expanded linearly, as opposed to a scale-up design where capacity is added to existing controller infrastructure. This allows the array to provide more predictable and linear performance gains when the system is expanded (through the addition of an X-Brick).

As performance and capacity are shared across the entire cluster, there is no need to design your infrastructure around a single "controller" - your data is spread across the cluster and can be accessed from any of the X-Bricks.

7 Thorium

Avi wrote:

dynamox - if the entire X-Brick is offline (because of some dual failure in the array), then the array will shut down anyway. So, planning the zoning so that you can survive a complete X-Brick being offline is not going to buy you much.

Where did you see me recommend planning for X-Brick failure?

2 Iron

Is XtremIO compatible with Smart Zones? I was told by Professional Services to keep it one-to-one. It would make life easier for me.

7 Thorium

No issues with Smart Zones whatsoever. PS is sticking to the 15-year-old practice of single-target/single-initiator zoning... they need to move on.
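
As a rough sketch, a smart zone on MDS looks something like this (the alias names here are placeholders, and smart zoning has to be enabled on the VSAN first):

zone smart-zoning enable vsan 10
zone name ESX_CLUSTER01_XIO vsan 10
  member device-alias esx01_hba0 initiator
  member device-alias esx02_hba0 initiator
  member device-alias xio_x1_sc1_fc1 target
  member device-alias xio_x1_sc2_fc1 target
exit

The switch only programs initiator-to-target pairs, so you keep the effect of single-target/single-initiator zoning without maintaining dozens of tiny zones.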

2 Iron

Cool, I will set this up in DR and see how it goes. So much easier.

4 Tellurium

Which XtremIO version are you running?

You don’t necessarily need to map all the LUNs to all the controllers on the XtremIO array. See the user guide for some recommendations on zoning with multiple X-Brick arrays.

2 Iron

Thank you.

We are on 3.0.1.11

I will review the current zoning in line with the Host Configuration Guide and get back to you.

2 Iron

Thank you all for your help.

What would be good practice for configuring zoning?

fcalias name CLUSTER01_hostc01n01 vsan 10
  member device-alias hostc01n01_vhba0 initiator
exit
fcalias name CLUSTER01_hostc01n01_XIO_X1 vsan 10
  member device-alias X1_SC1_FC1 target
  member device-alias X1_SC2_FC1 target
exit
fcalias name CLUSTER01_hostc01n02 vsan 10
  member device-alias hostc01n02_vhba0 initiator
exit
fcalias name CLUSTER01_hostc01n02_XIO_X2 vsan 10
  member device-alias X2_SC1_FC1 target
  member device-alias X2_SC2_FC1 target
exit
zone name CLUSTER01_hostc01n01 vsan 10
  member fcalias CLUSTER01_hostc01n01
  member fcalias CLUSTER01_hostc01n01_XIO_X1
exit
zone name CLUSTER01_hostc01n02 vsan 10
  member fcalias CLUSTER01_hostc01n02
  member fcalias CLUSTER01_hostc01n02_XIO_X2
exit
zoneset name zs_vsan10 vsan 10
  member CLUSTER01_hostc01n01
  member CLUSTER01_hostc01n02
exit
zoneset activate name zs_vsan10 vsan 10
zone commit vsan 10
end
copy run start

Or should we have an alias for each X-Brick and re-use it when zoning?

Like this:

=============== Host 1 --> X-Brick 1 ==============================

fcalias name CLUSTER01_hostc01n01 vsan 10
  member device-alias hostc01n01_vhba0 initiator
exit
fcalias name XIO_XBrick1 vsan 10
  member device-alias X1_SC1_FC1 target
  member device-alias X1_SC2_FC1 target
exit
zone name CLUSTER01_hostc01n01 vsan 10
  member fcalias CLUSTER01_hostc01n01
  member fcalias XIO_XBrick1
exit

=============== Host 2 --> X-Brick 2 ==============================

fcalias name CLUSTER01_hostc01n02 vsan 10
  member device-alias hostc01n02_vhba0 initiator
exit
fcalias name XIO_XBrick2 vsan 10
  member device-alias X2_SC1_FC1 target
  member device-alias X2_SC2_FC1 target
exit
zone name CLUSTER01_hostc01n02 vsan 10
  member fcalias CLUSTER01_hostc01n02
  member fcalias XIO_XBrick2
exit

XIO_XBrick1 and XIO_XBrick2 can then be re-used for zoning other hosts where a specific host --> X-Brick relationship needs to be established?
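
For example, a hypothetical third host that needs to sit on X-Brick 1 (hostc01n03 is made up purely to illustrate the re-use) would only get its own initiator alias and re-use XIO_XBrick1:

fcalias name CLUSTER01_hostc01n03 vsan 10
  member device-alias hostc01n03_vhba0 initiator
exit
zone name CLUSTER01_hostc01n03 vsan 10
  member fcalias CLUSTER01_hostc01n03
  member fcalias XIO_XBrick1
exit
zoneset name zs_vsan10 vsan 10
  member CLUSTER01_hostc01n03
exit
zoneset activate name zs_vsan10 vsan 10
zone commit vsan 10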

Thank you.

Much appreciated.
