November 12th, 2015 06:00

Multi X-Brick XtremIO clusters and VMware vSphere LUN/path limits

Hello,

VMware vSphere ESXi 5.5 and 6.0 have a limit of 256 LUNs per host:

https://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf

https://www.vmware.com/pdf/vsphere6/r60/vsphere-60-configuration-maximums.pdf

For the traditional (if I can call them that) storage arrays like VNX, where there are two Storage Controllers and the VMware best practice is to have two HBAs connected to the two fabrics, the host will have four paths to a LUN: 2 controllers x 2 HBAs = 4 paths.

When we apply the same logic to EMC XtremIO clusters (two or more X-Bricks; the maximum is currently 8), we should also consider another limit, the 'Number of total paths on a server', which is 1024.

If you have a cluster of two X-Bricks, you have four controllers; multiply by two HBAs and you have eight paths to a LUN. Therefore the maximum number of LUNs will be 1024 / 8 = 128.

If you go to the extreme and configure your XtremIO with eight X-Bricks, you have 16 controllers. Again with two HBAs per host, that is 32 paths per LUN, so the maximum number of LUNs you can attach to an ESXi host will be 1024 / 32 = 32...

I understand that different OSes may have different limits than a VMware environment, so this logic may not apply elsewhere.

We currently have an XtremIO array with two X-Bricks (four controllers) that is used for an Oracle T&D environment with multiple RDM-mapped LUNs, and we have hit the limit of 1024 paths (1024 / (4 controllers x 2 HBAs) = 128 LUNs).
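To make the arithmetic concrete, here is a small Python sketch of the calculation (the helper function and its constants are my own illustration built from the vSphere maximums linked above, not any VMware or EMC tool):

```python
# Path/LUN ceiling for an ESXi host attached to an XtremIO cluster.
# Constants come from the vSphere 5.5/6.0 configuration maximums above;
# the function itself is only an illustration of the arithmetic.

MAX_PATHS_PER_HOST = 1024  # 'Number of total paths on a server'
MAX_LUNS_PER_HOST = 256    # 'LUNs per host'

def max_luns(x_bricks: int, hbas_per_host: int = 2) -> int:
    """Max LUNs per host before the 1024-path limit is hit."""
    controllers = 2 * x_bricks                  # two controllers per X-Brick
    paths_per_lun = controllers * hbas_per_host
    return min(MAX_LUNS_PER_HOST, MAX_PATHS_PER_HOST // paths_per_lun)

for bricks in (1, 2, 4, 8):
    print(f"{bricks} X-Brick(s): {max_luns(bricks)} LUNs per host")
# 1 X-Brick(s): 256 LUNs per host
# 2 X-Brick(s): 128 LUNs per host
# 4 X-Brick(s): 64 LUNs per host
# 8 X-Brick(s): 32 LUNs per host
```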

How would you resolve this issue?

We can disable (manually or through a script) certain paths on the VMware ESXi host, as in the sketch below. But what logic should we apply to the selection criteria?
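For illustration, here is a rough Python sketch of what such a script could look like. It assumes the default text layout of `esxcli storage core path list`, and the one-path-per-HBA budget is only my starting assumption (so that the surviving paths still span both fabrics); it prints the `esxcli` commands rather than executing them, so the selection can be reviewed first:

```python
# Rough sketch (my own logic, not a VMware/EMC tool): keep a budget of
# paths per device on an ESXi host and print "esxcli storage core path set
# --state off" commands for the surplus paths.
import subprocess
from collections import defaultdict

KEEP_PER_HBA = 1  # keep one path per HBA per device, so both fabrics survive

def parse_path_blocks(text):
    """Yield one {field: value} dict per path block in the esxcli output."""
    block = {}
    for line in text.splitlines():
        if line and not line.startswith(" "):  # unindented line = new path UID
            if block:
                yield block
            block = {"UID": line.strip()}
        elif ":" in line:
            key, value = line.split(":", 1)
            block[key.strip()] = value.strip()
    if block:
        yield block

raw = subprocess.run(["esxcli", "storage", "core", "path", "list"],
                     capture_output=True, text=True, check=True).stdout

# Group paths by (device, adapter) and keep the first KEEP_PER_HBA of each.
kept = defaultdict(int)
for path in parse_path_blocks(raw):
    device, adapter = path.get("Device"), path.get("Adapter")
    if not device or not adapter:
        continue
    kept[(device, adapter)] += 1
    if kept[(device, adapter)] > KEEP_PER_HBA:
        # Print instead of executing, so the commands can be reviewed.
        print(f"esxcli storage core path set --state off --path {path['UID']}")
```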

Can we do something clever in EMC PowerPath/VE to limit the number of paths it will use, etc.?

Thank you.

Mark Strong

www.vStrong.info

35 Posts

December 9th, 2015 06:00

So why have all these storage controllers and complicated zoning if you are limited by a single brick failure? Wouldn't it save power and space if you just had one X-Brick with expandable storage shelves and added controllers as needed, or engineered a way to make use of the other storage controllers when an X-Brick goes down? They are all interconnected via InfiniBand.

I do not like the fact that the cluster shuts down automatically on an X-Brick failure. That means all hosts connected to the good X-Bricks lose storage visibility, which cripples the business. Not a good design at all.

December 9th, 2015 23:00

The probability of having such a catastrophic failure taking out an entire X-Brick is very, very low.

If you're still worried about such an event, you need to be thinking about splitting into failure domains.

1 Rookie • 20.4K Posts

December 10th, 2015 11:00

Avi wrote:

dynamox - if the entire X-Brick is offline (because of some dual failure in the array), then the array will shut down anyway. So, planning the zoning so that you can survive a complete X-Brick being offline is not going to buy you much.

where did you see me recommend planning for X-Brick failure?

12 Posts

December 13th, 2015 20:00

XtremIO employs a scale-out architecture where capacity and performance are expanded linearly, as opposed to a scale-up design where capacity is added to existing controller infrastructure. This allows the array to provide more predictable and linear performance gains when the system is expanded (through the addition of an X-Brick).

As performance and capacity are shared across the entire cluster, there is no need to design your infrastructure around a single "controller" - your data is spread across the cluster and can be accessed from any of the X-Bricks.
