VNXe3300 and iSCSI multipathing

Unsolved

August 11th, 2011 11:00

When configuring the iSCSI servers on a VNXe3300, I notice each has to be assigned to an SP. Can the underlying storage be accessed by a host through either iSCSI server?

Meaning, can I create iSCSI-ServerA on SPA and iSCSI-ServerB on SPB, and then have my host use multipathing to access the same iSCSI share through each iSCSI server's IP? Or do I have to assign two IPs to a single iSCSI server in order to multipath?

Thanks

727 Posts

August 11th, 2011 11:00

You cannot access one share through two iSCSI servers. You will need to provide two IP addresses to one iSCSI server for multipathing. VNXe systems allow that.

5 Posts

August 11th, 2011 12:00

So if I'd like to use all 4 network ports on SPA for iSCSI, I'll either need to aggregate them all together and give the iSCSI server two IPs, or not aggregate and use 4 IPs, one per network port?

This configuration would be mirrored on SPB.

727 Posts

September 1st, 2011 15:00

Remember that if you are not using any extra IO modules, you have only four ports available per SP. You will be able to aggregate eth2 and eth3 together, but you will not be able to aggregate eth4 and eth5 together. Do you want to use all four ports for the same iSCSI server? If not, you can create two iSCSI servers per SP and assign two IP addresses/eth ports to each. That way you can use all four ports for iSCSI while still having two IP addresses per iSCSI server.
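
As a rough sketch of that layout (eth port names as above; the IP addresses and subnets are placeholders):

SPA: iSCSI-ServerA1 -> eth2 (10.0.1.11) + eth3 (10.0.2.11)
SPA: iSCSI-ServerA2 -> eth4 (10.0.1.12) + eth5 (10.0.2.12)
SPB: iSCSI-ServerB1 -> eth2 (10.0.1.21) + eth3 (10.0.2.21)
SPB: iSCSI-ServerB2 -> eth4 (10.0.1.22) + eth5 (10.0.2.22)

Each piece of storage you create is then attached to exactly one of those iSCSI servers.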

14 Posts

September 8th, 2011 20:00

Avi,

Where is EMC's documentation on this? I can't believe there is nothing from EMC on how to configure multipathing on a VNXe with VMware. If EMC is pushing this as an array that integrates with VMware better than any other, then it needs to publish documentation so that people can configure it correctly. A Google search on the subject turns up a ton of people looking for guidance, and also a ton of people doing it incorrectly because EMC doesn't have any docs on it.

Not sure if you are an EMC employee or what. If you are, please push to publish a whitepaper on configuring VNXe with iSCSI multipathing for VMware.

727 Posts

September 9th, 2011 06:00

EMC documentation on the high availability features in VNXe will be available shortly in a whitepaper. The whitepaper will also contain guidance for creating an HA configuration using VNXe systems. Thanks for bringing this up, and for your patience.

727 Posts

September 9th, 2011 08:00

I don't have a target date currently, but I will definitely post the link here once it becomes available on Powerlink or www.emc.com.

14 Posts

September 9th, 2011 08:00

In the meantime, can you speak to the correct configuration for optimal multipathing on VNXe with VMware?

I see 2 scenarios that most people are using.

The first is to split the ports on an SP across separate physical switches. Port 0 of each SP gets connected to physical switch 0 and is configured in subnet X. Port 1 of each SP gets connected to physical switch 1 and is configured in subnet Y. VMware is then configured with a physical link to both physical switch 0 and physical switch 1, and has two VMkernel interfaces, one in subnet X and one in subnet Y. I suppose all interfaces could also be configured in the same subnet in this scenario.

The VNX Techbook for VMware recommends the above method: SP ports split across physical switches, with all ports in the same subnet. Does this recommendation hold true for VNXe as well?

The other approach people seem to be using is connecting all ports from SP-A to one physical switch and all ports from SP-B to the other physical switch, with all SP ports in the same subnet. Is it appropriate to use LACP here? (Both layouts are sketched below.)
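
Roughly, the two layouts I mean (port/switch numbering as above):

Scenario 1: SPA port 0 + SPB port 0 -> switch 0 (subnet X); SPA port 1 + SPB port 1 -> switch 1 (subnet Y); the host has one vmnic/VMkernel interface per switch/subnet.

Scenario 2: all SPA ports -> switch 0; all SPB ports -> switch 1; everything in one subnet.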

Other questions: should the two physical switches be configured with links between them? I know that with CLARiiON CX and AX the switches were not supposed to be connected. Also, page 50 of the attached VNX Techbook for VMware does not show the physical switches being connected.

Another question: it is often said that the VNXe is Active/Active (kind of). While each LUN is bound to and "owned" by a particular SP, there is connectivity between the SPs inside the array that can be used to transfer IO received for a LUN on the "non-owning" SP over to the owning SP. That sounds like ALUA, but I have never seen anyone classify the VNXe as an official ALUA array. Is the VNXe officially ALUA or not? If it is not, what is the difference between the ALUA VNX and the "non-ALUA" VNXe?

How does all of this impact multipathing policies for VMware? Traditional active/passive arrays mandate fixed or MRU policies, because round robin can create a path-thrashing scenario. If there really is inter-SP connectivity inside the array (like ALUA), that enables round robin without path thrashing. What is the specific EMC recommendation for VNXe?
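
For what it's worth, on ESX/ESXi 4.x you can at least see how the array is being claimed and set the policy per device. This is just a sketch and doesn't answer which policy EMC recommends; the naa. device ID is a placeholder:

# Tech Support Mode, ESX/ESXi 4.x
esxcli nmp device list
# shows which SATP claimed each device, e.g. VMW_SATP_ALUA vs VMW_SATP_DEFAULT_AA
esxcli nmp device setpolicy --device naa.6006016012345678 --psp VMW_PSP_RR
# switches that one device to round robin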

It would be great to get an authoritative answer on all of these questions so we can put this to rest . I'm not looking for an "it depends," kind of answer. Very simply put, what is the best practice for configuring the VNXe array SPs and ports in relation to physical switches, LACP or not, separate subnets or not, ALUA or not, and what kind of VMware multipathing policy.

Thanks for your attention and responses thus far! I really appreciate it!

14 Posts

September 9th, 2011 08:00

Avi,

Is there a target release date for these papers? And can you post links back to this thread once they are released, so we are all notified?

Thanks for the response.

5 Posts

September 9th, 2011 09:00

Think of the VNXe as a pair of Data Movers rather than a pair of Storage Processors.

For multipathing in VMware, the best approach is to create an iSCSI Server on the VNXe and assign it two network ports, one on subnet A and the other on subnet B. Create multiple iSCSI Servers if you'd like to load balance storage across more than one iSCSI Server, but bear in mind that when you create storage you must attach it to one of the iSCSI Servers. You cannot access the same storage through two different iSCSI Servers.

With each iSCSI server you create, physically connect the port assigned to subnet A to switch 1 and the port assigned to subnet B to switch 2. We want to create two separate "fabrics," if you will, just like what we'd do with Fibre Channel.

In VMware, create two VMkernel port groups, one on subnet A and the other on subnet B. Connect subnet A to switch 1 and subnet B to switch 2. If you're using the Software iSCSI Initiator, be sure you use the esxcli command via Tech Support Mode to link your vmk's to the Software iSCSI Initiator.
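
For example, on ESX/ESXi 4.1 the binding looks roughly like this (vmhba33, vmk1 and vmk2 are example names; check yours first):

# Tech Support Mode
esxcfg-scsidevs -a
# lists your adapters; the Software iSCSI Initiator is often vmhba33
esxcli swiscsi nic add -n vmk1 -d vmhba33
# bind the subnet A VMkernel port
esxcli swiscsi nic add -n vmk2 -d vmhba33
# bind the subnet B VMkernel port
esxcli swiscsi nic list -d vmhba33
# verify both vmk's are bound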

Your two switches do not have to be stacked, but they can be if you'd like easier management.

Make sure that you plan the ports your iSCSI Servers are going to use accordingly. If you want to use Jumbo Frames, be aware that due to the way the VNXe fails over, if you enable JF on eth2 you're enabling it on both SPs' eth2 simultaneously. If you're using VLANs, your switches must be configured so that if one SP dies and all its services move to the other SP, you can still access all those services.
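
One quick way to sanity-check Jumbo Frames end to end from the ESX host (the IP is a placeholder for one of your iSCSI server addresses):

vmkping -d -s 8972 10.0.1.50
# -d sets "don't fragment"; 8972 = 9000-byte MTU minus IP/ICMP headers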

Also be aware that if you want to use Jumbo Frames or add/remove ports on an iSCSI Server, do so before you create the iSCSI Server and create storage. If you try to go back and change things after the fact, I've found that the iSCSI Server will not function properly and your hosts won't be able to see the storage behind it. So plan ahead so you only need to create the iSCSI Servers once. If you decide to change Jumbo Frames after you've created an iSCSI Server on the ports you enabled it on, you'll have to destroy the iSCSI Server and all storage attached to it and recreate it all over again!

Hope this helps!

727 Posts

September 12th, 2011 13:00

The same thing is being discussed at https://community.emc.com/message/564371#564371

14 Posts

October 20th, 2011 09:00

All,

EMC has released an excellent whitepaper on how to properly configure VNXe networking for HA configurations, both iSCSI and NFS.

Find the paper here: https://supportbeta.emc.com/docu35554_White-Paper:-EMC-VNXe-High-Availability.pdf?language=en_US

Will
