
XtremIO Zoning Best Practices

April 27th, 2015 09:00

Today we had our XtremIO two-brick storage array configured. I'm looking for some guidance on zoning best practices for our FC switches. With the two X-Bricks we have a total of eight Fibre Channel connections: four to Fabric A and four to Fabric B. Below are the aliases defined on Fabric A and Fabric B for the server HBA and the XtremIO FC ports. How would I implement zoning for a new server that has two HBAs, one per fabric? For example, server UCS01 with "UCS01_HBA1" on Fabric A and "UCS01_HBA2" on Fabric B. Is it best to zone each HBA one-to-one, or to create one big zone containing all four XtremIO FC connections on that fabric plus the server HBA?

Do our servers see storage from both X-Bricks? If so, would we just zone one server to one X-Brick? Looking for guidance, whitepapers, best practices, etc.

Switch "Fabric A"

aliases 

"UCS01_HBA1"   

"X1_SC1_A"

"X1_SC2_A"

"X2_SC1_A"

"X2_SC2_A"

Switch "Fabric B"

aliases

"UCS01_HBA2"

"X1_SC1_B"

"X1_SC2_B"

"X2_SC1_B"

"X2_SC2_B"

Thank You

Terry

2 Intern • 20.4K Posts

March 8th, 2017 14:00

Hi Avi,

Are there any substantial benefits to zoning one host to two X-Bricks versus one X-Brick (other than front-end balancing)?

Thanks

13 Posts

March 8th, 2017 15:00

@dynamox

I believe so, but please correct me if I am wrong here. Say you have four HBA ports, for example (two to Fabric A, two to Fabric B). You can create one initiator group, HostA-1, with one set of ports/initiators, and a second initiator group, HostA-2, with the second set of ports/initiators. I have not tried this myself, though.

I know that for VPLEX it is common to see two initiator groups on the XtremIO. That is because of the same concern about the volume limit per initiator group.

2 Intern • 20.4K Posts

March 8th, 2017 18:00

With 4 HBAs per server, sure, I can see it providing better queuing since you have more "lanes on the highway," but with two HBAs I'm wondering what we actually gain by connecting to two X-Bricks.

727 Posts

March 8th, 2017 19:00

Yes, reducing the front-end queueing on the storage controllers is the main benefit you are after. At a high level, the goal is to spread the workload as evenly across the storage controllers as possible. You don't HAVE to do it, but it gives you the best performance because you are not potentially overloading one of the storage controllers (as you could if you connected to only one controller).

Also, in the previous version of the connectivity diagram (the one pasted earlier in this thread), we had Host 1 connecting to X-Brick 1 and Host 2 connecting to X-Brick 2. That was just an example, but some people understood it to mean that they need to dedicate one host to one X-Brick only, which is obviously not correct.

@Hernan - The basic rule we need to follow is that if the host is connected to one storage controller on any X-Brick, it should also be connected to the other storage controller of that same X-Brick. For example, you cannot connect only to SC1 on all the X-Bricks in the array. I will check internally on why the health check returned an error in the scenario you described.
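As a rough sketch of how that rule translates to switch zoning, here is what Fabric A could look like using the aliases from the original post. This assumes Cisco MDS-style syntax with device-aliases and a placeholder VSAN 10; your alias type and VSAN will differ, and Brocade syntax is different again:

zone name UCS01_HBA1_X1 vsan 10
  member device-alias UCS01_HBA1
  member device-alias X1_SC1_A
  member device-alias X1_SC2_A
zone name UCS01_HBA1_X2 vsan 10
  member device-alias UCS01_HBA1
  member device-alias X2_SC1_A
  member device-alias X2_SC2_A
zoneset name FABRIC_A vsan 10
  member UCS01_HBA1_X1
  member UCS01_HBA1_X2
zoneset activate name FABRIC_A vsan 10

Fabric B would mirror this with UCS01_HBA2 and the _B targets. The point is simply that the same initiator sees both SC1 and SC2 of every X-Brick it is zoned to.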

1 Rookie • 63 Posts

March 9th, 2017 07:00

Avi -

"..if the host is connected to one storage controller on any X-brick, it should also be connected to the other storage controller on that same X-brick"

Your statement is understood, but I would add that it seems EMC's expectation, based on the output of our recent health check, is the following:

"if the host is connected to one storage controller on any X-brick, THAT SAME INITIATOR should be also connected to the other storage controller on that same X-brick" - We are not zoned in this fashion.

This is our zoning - we zone each host HBA across X-Bricks, so in our case the SAME INITIATOR is not zoned to the other storage controller of a given X-Brick; the OTHER INITIATOR is.

However, while this meets your stated requirement above, it does NOT meet the requirement applied by the Dell EMC XIO health check, hence "does not comply with best practices".

Fabric A
HBA0 to X1-SC2-FC1
HBA0 to X2-SC2-FC1

Fabric B
HBA1 to X1-SC1-FC2
HBA1 to X2-SC1-FC2

727 Posts

March 9th, 2017 09:00

Hernan - I stand corrected. I should have said "same initiator" and not "same host" in my earlier response.

I have confirmed that the health check script checks that each initiator is zoned to at least two different targets, spread across both storage controllers of the same X-Brick. This is what is stated in the Host Configuration Guide as well.
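To illustrate with the port names from Hernan's environment, a Fabric A zone for HBA0 that would satisfy the check could look something like the sketch below. This is only a sketch: it assumes Cisco MDS-style syntax with device-aliases, a placeholder VSAN 10, "HBA0" standing in for whatever alias the host initiator actually has, and that the SC1 FC1 ports are also cabled to Fabric A:

zone name HBA0_X1 vsan 10
  member device-alias HBA0
  member device-alias X1-SC1-FC1
  member device-alias X1-SC2-FC1
zone name HBA0_X2 vsan 10
  member device-alias HBA0
  member device-alias X2-SC1-FC1
  member device-alias X2-SC2-FC1

That way the same initiator (HBA0) sees both storage controllers of each X-Brick it touches; Fabric B would do the same for HBA1 with the FC2 ports.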

1 Rookie • 63 Posts

March 9th, 2017 09:00

Avi - so the question remains: why is the zoning we have presented not best practice?

Why must a single HBA be zoned to both storage controllers of the same X-Brick?

How is this any more resilient than zoning across X-Bricks (a configuration we actually worked on with Dell EMC on-site, and one that was supported)?

Is the reasoning more performance related?

Bottom line - we are not in production yet, but when we move into production it looks like we may have to change our zoning. We just need to understand the reasoning. Thanks.

64 Posts

March 9th, 2017 14:00

dynamox wrote:

any substantial benefits of zoning one host to two X-Bricks versus to one X-Brick (other than front-end balancing) ?

Presuming what you're asking is the difference between what we originally had in the Host Configuration Guide (and thus in the earlier posts in this thread) and the current recommendation, the answer is "A little, but not much".  If you're already configured using the "old" layout, then there's absolutely no need to change. For new installs, we would recommend the newer layout, although the old one would still be considered fully valid.

The only difference between the old and new layouts is what happens when a storage controller is down (e.g., failed). In the old layout, you would lose half of the paths to the host (or none, depending on which host and which SC went down). With the new layout, you would lose one path to the host.

In both cases things will stay up and running - it's just down to what percentage of the total paths/bandwidth you lose.
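To put rough numbers on it (purely as an illustration, assuming a two-HBA host with four paths in total): in the old layout both HBAs point at the same X-Brick (HBA0 to X1-SC1 and X1-SC2 on Fabric A, HBA1 to the same pair on Fabric B), so the failure of either X1 controller removes two of the four paths. In the new layout HBA0 is zoned to both controllers of X1 and HBA1 to both controllers of X2, so any single controller failure removes only one of the four paths.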

727 Posts

March 10th, 2017 11:00

It has to do with how failures of software modules are handled internally in the code. Engineering provided the recommendation to have the same initiator see both the storage controllers on the same X-Brick.

31 Posts

March 13th, 2017 07:00

Nice attention to detail

4 Posts

March 14th, 2017 06:00

Hi Hernan,

To provide a little more detail, the recommendation to zone to both storage controllers in an X-Brick is to avoid service interruption.

The array is fully redundant; however, if both storage controllers in the same X-Brick were to go down, the cluster would stop. By zoning to both storage controllers in the same X-Brick, you maintain connectivity with the cluster unless the cluster itself goes down.

If you are zoned to only one storage controller on each X-Brick, then in the worst case you could lose connectivity with the array while it is still online and servicing I/O (each X-Brick can survive with only one storage controller). For example, a host zoned only to SC1 on each X-Brick would lose all of its paths if both of those SC1 controllers were down, even though each X-Brick still has a working SC2 and the cluster keeps serving I/O.

Neither of these scenarios is likely to happen, but connecting to both storage controllers in a single X-Brick gives us the same redundancy as connecting to both storage controllers across multiple X-Bricks (just with less target balancing).
