

March 12th, 2016 20:00

XtremIO device path recommendation

We have a 4-Brick XtremIO cluster and we are now considering how to connect FC paths to the servers.

I found the following general guideline in the Host Configuration Guide.

========================

The optimal number of paths depends on the operating system and server information.

To avoid multipathing performance degradation, do not use more than 16 paths per device.

It is recommended to use 8 paths.

========================

I have two questions.

1) For a 4-Brick XtremIO, is it also recommended to use 8 paths, or should we use all 16 paths?

What is the reason that 8 paths are recommended in general?

2) I found the logical connection topology for 8 paths in the Host Configuration Guide.

[Image: 8 path.jpg]

In this diagram, Host1 connects to X1 and X2, and Host2 connects to X3 and X4.

Why do these two hosts connect to different bricks? I thought connecting to all bricks would improve high availability.

Does the topology below have any problems (such as limitations)?

[Image: 8 path v2.jpg]

727 Posts

March 15th, 2016 20:00

XtremIO gives the best performance when multipathing across every single port on the array. On smaller arrays (with 4 ports) this is generally possible. As the number of bricks grows, so does the number of ports.

Most multipathing software begins to have issues once you get beyond about 8 or possibly 16 ports, which is why we recommend those numbers. Using fewer paths will work, but it will impact performance, both in terms of the maximum bandwidth available and because the array will end up a little unbalanced. If you do use fewer paths, it's best practice to spread the paths from multiple hosts over different ports on the array to help remove any potential bottlenecks and balance things better.

I am checking with a few folks on the second part of your question and will get back to you asap.

522 Posts

March 16th, 2016 07:00

I thought at one point some of the documentation strongly suggested (or required) that a host HBA see both controllers of an X-Brick, and that is what led to the topology best practice in the first picture, taken from the Host Configuration Guide above. I have had customers ask about the latter picture (the one you modified) as well, and I explained it as supported but not optimal for access locality and HA. However, if this has now changed it would be good to note, since the host will technically be zoned to meet the requirement below (just through different HBAs). I guess the question you are probably researching, Avi, is whether this type of dispersed zoning buys you anything. From a management perspective, when talking about quad bricks and paths, I usually have customers stop at 4 or 8 paths, and an easy way to deal with that on the zoning side is to simply round-robin bricks or pairs of bricks as in the first pic. Management aside, it would be good to know if there are any negatives to doing it in a dispersed setup.

• A host should be zoned to at least one X-Brick and both storage controllers on the given X-Brick

1 Rookie • 20.4K Posts

March 16th, 2016 10:00

I am zoning one host to one X-Brick and rotating them around, i.e. in a 2 X-Brick config:

Fabric A

host1-hba1-x1-sc1-fc1

host1-hba1-x1-sc2-fc1

host2-hba1-x2-sc1-fc1

host2-hba1-x2-sc2-fc1

host3-hba1-x1-sc1-fc1

host3-hba1-x1-sc2-fc1

Fabric B

host1-hba2-x1-sc1-fc2

host1-hba2-x1-sc2-fc2

host2-hba2-x2-sc1-fc2

host2-hba2-x2-sc2-fc2

host3-hba2-x1-sc1-fc2

host3-hba2-x1-sc2-fc2
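To illustrate the rotation above, here is a minimal Python sketch that emits the same style of zone names by round-robining hosts across X-Bricks. The host/port naming and the one-port-per-SC-per-fabric layout are just assumptions that mirror this listing, not an official tool.

# Minimal sketch: round-robin hosts across X-Bricks, zoning each HBA to both
# storage controllers of its assigned brick (one FC port per SC per fabric).
# Names like "x1-sc1-fc1" simply mirror the labels used in this thread.

def build_zones(hosts, num_bricks):
    """Return {fabric: [zone, ...]} with each host rotated onto one X-Brick."""
    fabrics = {"Fabric A": ("hba1", "fc1"), "Fabric B": ("hba2", "fc2")}
    zones = {name: [] for name in fabrics}
    for i, host in enumerate(hosts):
        brick = (i % num_bricks) + 1          # rotate hosts across bricks
        for fabric, (hba, port) in fabrics.items():
            for sc in (1, 2):                 # both storage controllers of the brick
                zones[fabric].append(f"{host}-{hba}-x{brick}-sc{sc}-{port}")
    return zones

if __name__ == "__main__":
    for fabric, zone_list in build_zones(["host1", "host2", "host3"], num_bricks=2).items():
        print(fabric)
        for zone in zone_list:
            print(" ", zone)

Running it for three hosts and two X-Bricks reproduces the Fabric A / Fabric B listing above.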

522 Posts

March 16th, 2016 11:00

That is how I do it as well, with my setup and my customers'.

1 Rookie • 20.4K Posts

March 16th, 2016 13:00

Keith,

Sounds like you do a lot of implementations. Have you seen any customers who "legitimately" needed more than 4 paths per host to XtremIO?

280 Posts

March 16th, 2016 19:00

Hi All,

So the best practice is:

- A host HBA should connect to all array ports.

- Multipath software begins to have issues, and CPU usage may increase, once an HBA sees more than about 8 (or at most 16) ports.

Most of my customers use 4 or 8 paths per host, so I would like to know if a host really needs to see 16 paths in a 4 X-Brick config.
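For reference, a back-of-the-envelope way to count the logical paths a host would see, assuming the usual 2 storage controllers per X-Brick and zoning one FC port per storage controller per HBA; this is just illustrative arithmetic, not from the HCG.

# Back-of-the-envelope path count: paths = HBAs per host x target ports zoned per HBA.
# Assumes 2 storage controllers per X-Brick and 2 FC ports per storage controller.

def paths_per_host(bricks_zoned, scs_per_brick=2, ports_per_sc_per_hba=1, hbas=2):
    """Logical paths a host sees for a given zoning layout."""
    return hbas * bricks_zoned * scs_per_brick * ports_per_sc_per_hba

# 1 X-Brick, 2 HBAs, one port per SC per HBA  -> 4 paths
print(paths_per_host(bricks_zoned=1))
# 2 X-Bricks, 2 HBAs, one port per SC per HBA -> 8 paths
print(paths_per_host(bricks_zoned=2))
# 4 X-Bricks, 2 HBAs, one port per SC per HBA -> 16 paths (every port on the array)
print(paths_per_host(bricks_zoned=4))

So seeing all 16 paths on a 4 X-Brick cluster means each HBA is zoned to one port on every storage controller in the cluster.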

In the second picture, Host1 connects to the FC1 ports on all storage controllers, and Host2 connects to the FC2 ports on all storage controllers.

Is it better to connect to both the FC1 and FC2 ports for local HA?

727 Posts

March 16th, 2016 20:00

Hi Keith,

You got it right, we are going to add a note in the Host Configuration Guide which states that each zone from the host should have a path to both Storage Controllers in any X-Brick. You still need to balance the hosts between the Storage Controllers to provide a distributed load across all target ports. But each host initiator group requires at least one path to two Storage Controllers from the same X-Brick.

This is done from an HA perspective and this is also the reason for going with the configuration shown in the first screenshot (and in the HCG). As I said earlier, we are updating the HCG to clarify this guidance.

Thanks!

727 Posts

March 16th, 2016 20:00

No – a host does NOT need to connect to all ports on the array. What I said earlier was that each zone should have connections to both storage controllers, and not to only a single storage controller in the X-Brick. The first pic (from the HCG) follows this rule, but the second pic does not.

For example, the solid red line is a single zone and is connected to both storage controllers of X-Bricks X1 and X2 in the first pic. However, this rule is not satisfied in the second pic.

280 Posts

March 17th, 2016 03:00

Thanks!

I was a little confused, but now I understand that "each zone" should connect to both storage controllers.

522 Posts

March 17th, 2016 05:00

Thanks Avi!

dynamox - I haven't seen many that really need more than 4 paths. I have two setups where hosts have, and probably benefit from, 8 paths; those servers are crushing I/O, so we went with 8 paths for both performance and availability. Otherwise, I feel the vast majority out there are well balanced with 4 paths. When I do see customers with 8 paths, or have customers that go with 8 paths, it is usually on something larger than a dual-brick setup and can be for one of a few reasons:

1. Just from a management perspective, with a dual-brick for example, it becomes "easier" to zone a host with 2 HBAs to all ports in a fabric (usually split), which makes it quick and easy from a zoning perspective. That ends up with 8 paths; no harm, no foul if the host can tolerate that many paths.

2. Some powerful hosts with more than 2 HBAs could end up with more than 4 paths to drive more I/O, and in these situations you can find an array "dedicated" to an application, so 8 paths is probably not uncommon here either - again, usually more for management, but there are some performance-sensitive applications where I can see this benefiting them.

3. With the larger XtremIO clusters, management is broken into pairs of bricks, and that can lead to 8 paths as well, but this is not usually related to performance, IMO; it is more about an easier way to think about round-robining hosts like you have in your example (which is how I usually have it laid out).

I have never gone with more than 8 paths, and I recommend against it for many of the reasons discussed here. From a speeds-and-feeds perspective, I feel 4 paths can satisfy the majority of setups out there, since it is unlikely that the bottleneck with this array will be the front-end channels, as it might have been with other arrays where I/O concurrency needed to be increased by adding more host channels.

Let me know if that is along the same lines as what you were thinking.

HTH,

-Keith

1 Rookie • 20.4K Posts

March 17th, 2016 10:00

Good stuff, Keith. My thinking is the same as yours. I have one huge AIX server with 4 HBAs, currently connected to a dedicated VMAX 20K. If I were to move something like that to XtremIO, I would try to go wide and give it more paths (as I do today with FAs). At least in my shop, I can't think of any systems/applications (with 2 HBAs) that would benefit from more than 4 logical paths. Maybe there are some workloads out there that queue so much in the operating system that giving them more logical paths would be beneficial.

10 Posts

April 13th, 2016 09:00

Avi,

If I'm understanding the above correctly, a given HBA needs to be zoned to an FC port on each of the SCs of each X-Brick it is attached to in a 2+ X-Brick environment?

In the HCG on page 18, there's a graphic that I've got below:

[Image: hcg_single_xbrick.jpg]

The single-HBA-per-SAN-fabric configuration on the left appears to abide by this rule; however, the configuration on the right does not.

Does the rule of connecting an HBA to an FC port on each SC of an X-Brick apply only in a 2+ X-Brick environment?

I.e., if the customer were to expand this into a multi-brick cluster, would the existing zones need to be reworked?

Thanks,

Brian

727 Posts

April 14th, 2016 14:00

The rule is that if you are connecting to one storage controller of any X-Brick, you should also be connected to the other storage controller of the same X-Brick. This does NOT mean that you need to be connected to every storage controller in every X-Brick. In general, we don't recommend going beyond 8 (or 16 at most) paths from a host perspective.

1 Rookie • 20.4K Posts

April 14th, 2016 21:00

Avi wrote:

The rule is that if you are connecting to one storage controller of any X-Brick, you should also be connected to another storage controller of the same X-Brick.

Avi,

can you please draw a picture (or modify one from the Host Configuration Guide) that shows a 1 X-Brick configuration + 1 host with 2 HBAs that is INVALID?

Your statement can be interpreted in several ways:

If I have a host with 2 HBAs, are we saying that HBA1 must be connected to ports SC1-FC1 and SC2-FC1, and HBA2 connected to SC1-FC2 and SC2-FC2?

OR

If I have a host with 2 HBAs, are we saying that HBA1 must be connected to port SC1-FC1 and HBA2 connected to SC2-FC2?

Performance/HA aside, which config is actually invalid?

Thank you

727 Posts

April 15th, 2016 11:00

The first example in your post is recommended. In that scenario, each HBA is connected to two storage controllers from the same X-Brick and therefore satisfies the requirements.

If you look at the other examples given in the HCG, each pattern represents a zone - for example, a solid blue line is connected to both storage controllers of the SAME X-Brick, a solid red line is always connected to both storage controllers of the same X-Brick, etc.

An invalid configuration would be one where the solid blue line connects to two FC ports on the same storage controller but is not connected to the other storage controller of the same X-Brick.
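To make the rule concrete, here is a minimal Python sketch of how such a check could look for a single zone's target ports. The (brick, SC, FC port) labels just mirror the naming used in this thread; this is an illustration, not an official validation tool.

# Minimal sketch: a zone is valid per the rule above if, for every X-Brick it
# touches, its target ports reach BOTH storage controllers of that X-Brick.
# Target ports are labelled ("X1", "SC1", "FC1") etc., mirroring this thread.

def zone_is_valid(target_ports):
    """target_ports: iterable of (brick, storage_controller, fc_port) tuples."""
    scs_per_brick = {}
    for brick, sc, _port in target_ports:
        scs_per_brick.setdefault(brick, set()).add(sc)
    return all(len(scs) == 2 for scs in scs_per_brick.values())

# Valid: both storage controllers of X1 are reached.
print(zone_is_valid([("X1", "SC1", "FC1"), ("X1", "SC2", "FC1")]))   # True
# Invalid: two ports on the same SC, the other SC of X1 is never reached.
print(zone_is_valid([("X1", "SC1", "FC1"), ("X1", "SC1", "FC2")]))   # False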
