msasse
5 Posts
0
VNXe3300 and iSCSI multipathing
When configuring the iSCSI servers on a VNXe3300, I notice each has to be assigned to an SP. Can the underlying storage be accessed by a host through either iSCSI server?
Meaning, can I create iSCSI-ServerA on SPA and iSCSI-ServerB on SPB, and then have my host use multipathing to access the same iSCSI share through each iSCSI server's IP? Or do I have to assign two IPs to a single iSCSI server in order to multipath?
Thanks
Kumar_A
727 Posts
1
August 11th, 2011 11:00
You cannot access one share through two iSCSI servers. You will need to provide two IP addresses to one iSCSI server for multipathing. VNXe systems allow that.
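On the host side, that two-IPs-on-one-iSCSI-server setup looks roughly like this on ESXi 5.x. This is only a sketch: the adapter name vmhba33, the VMkernel NICs vmk1/vmk2, and the portal addresses are all placeholders, not VNXe-specific values.

```shell
# Bind two VMkernel NICs to the software iSCSI adapter so each of the
# iSCSI server's two IPs gets its own path (ESXi 5.x syntax).
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2

# Point the initiator at both portal IPs of the single iSCSI server
# (illustrative addresses).
esxcli iscsi adapter discovery sendtarget add --adapter vmhba33 --address 10.0.1.10:3260
esxcli iscsi adapter discovery sendtarget add --adapter vmhba33 --address 10.0.2.10:3260

# Rescan so the second path to each LUN shows up.
esxcli storage core adapter rescan --adapter vmhba33
```

With both portals discovered, each LUN should then show two paths under Manage Paths in the vSphere Client.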
msasse
5 Posts
0
August 11th, 2011 12:00
So if I'd like to use all 4 network ports on SPA for iSCSI, I'll either need to aggregate them all together and give the iSCSI server two IPs, or not aggregate and use 4 IPs, one per network port?
This configuration would be mirrored on SPB.
Kumar_A
727 Posts
1
September 1st, 2011 15:00
Remember that if you are not using any extra IO modules, you have only four ports available per SP. You will be able to aggregate eth2 and eth3 together, but you will not be able to aggregate eth4 and eth5 together. Do you want to use all four ports for the same iSCSI server? If not, you can create two iSCSI servers per SP and assign two IP addresses/eth ports per iSCSI server. That way you can use all four ports for iSCSI and still have two IP addresses per iSCSI server.
huberw1
14 Posts
0
September 8th, 2011 20:00
Avi,
Where is EMC's documentation on this? I can't believe there is nothing from EMC on how to configure multipathing on a VNXe with VMware... if EMC is pushing this as an array that integrates with VMware better than any other, then they need to publish documentation so that people can configure it correctly. If you do a Google search on the subject, there are a ton of people out there looking for guidance, and also a ton of people doing it incorrectly because EMC doesn't have any docs on it.
Not sure if you are an EMC employee or what. If you are, please push to publish a whitepaper on configuring VNXe with iSCSI multipathing for VMware.
Kumar_A
727 Posts
0
September 9th, 2011 06:00
EMC documentation on the high-availability features in VNXe will be available shortly in a whitepaper. This whitepaper will also contain guidance for creating an HA configuration using VNXe systems. Thanks for bringing this up, and thanks for your patience.
msasse
5 Posts
0
September 9th, 2011 08:00
Kumar_A
727 Posts
0
September 9th, 2011 08:00
I don't have a target date currently, but I can definitely post the link here once it becomes available on Powerlink or www.emc.com
huberw1
14 Posts
0
September 9th, 2011 08:00
In the meantime, can you speak to the correct configuration for optimal multipathing on VNXe with VMware?
I see 2 scenarios that most people are using.
The first is to split the ports on an SP across separate physical switches. Port 0 of each SP gets connected to physical switch 0 and is configured in subnet X. Port 1 of each SP gets connected to physical switch 1 and is configured in subnet Y. VMware is then configured with a physical link to both physical switch 0 and physical switch 1, and has two VMkernel interfaces, one in subnet X and one in subnet Y. I suppose all interfaces could also be configured in the same subnet in this scenario.
The VNX Techbook for VMware uses the above method as the recommended method. SP ports get split across physical switches and all ports are in the same subnet. Does this recommendation hold true for VNXe as well?
The other approach people seem to be using is connecting all ports from SP-A to one physical switch and all ports on SP-B to the other physical switch, with all SP ports in the same subnet. Is it appropriate to use LACP here?
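For what it's worth, the first scenario maps to an ESXi 5.x network layout roughly like this. It is a sketch only; the vSwitch names, uplinks, portgroup names, and addresses are made up for illustration and would need to match your own environment.

```shell
# One vSwitch/uplink per physical switch, one VMkernel interface per subnet.
# Subnet X side (physical switch 0):
esxcli network vswitch standard add --vswitch-name vSwitch1
esxcli network vswitch standard uplink add --vswitch-name vSwitch1 --uplink-name vmnic1
esxcli network vswitch standard portgroup add --vswitch-name vSwitch1 --portgroup-name iSCSI-X
esxcli network ip interface add --interface-name vmk1 --portgroup-name iSCSI-X
esxcli network ip interface ipv4 set --interface-name vmk1 --ipv4 10.0.1.21 --netmask 255.255.255.0 --type static

# Subnet Y side (physical switch 1):
esxcli network vswitch standard add --vswitch-name vSwitch2
esxcli network vswitch standard uplink add --vswitch-name vSwitch2 --uplink-name vmnic2
esxcli network vswitch standard portgroup add --vswitch-name vSwitch2 --portgroup-name iSCSI-Y
esxcli network ip interface add --interface-name vmk2 --portgroup-name iSCSI-Y
esxcli network ip interface ipv4 set --interface-name vmk2 --ipv4 10.0.2.21 --netmask 255.255.255.0 --type static
```

Each VMkernel interface ends up with exactly one active uplink, which is what iSCSI port binding expects.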
Other questions... should the two physical switches be configured with links between them? I know with CLARiiON CX and AX the switches were not supposed to be connected. Also, page 50 of the attached VNX Techbook for VMware does not show the physical switches being connected.
Another question I have is that the VNXe is referenced as active/active (kind of). While each LUN is bound to and "owned" by a particular SP, there is connectivity between the SPs inside the array that can be used to transfer IO received for a LUN on the "non-owning" SP over to the owning SP, which sounds like ALUA, but I have never seen anyone classify the VNXe as an official ALUA array. Is the VNXe officially ALUA or not? If it is not, what is the difference between the ALUA VNX and the "non-ALUA" VNXe?
How does all of this impact multipathing policies for VMware? Traditional active/passive arrays mandate that you use fixed or MRU policies, because using round robin can create a path thrashing scenario. If it is true that there is inter-SP connectivity inside the array (like ALUA), that enables the ability to use round robin without path thrashing. What is the specific EMC recommendation for VNXe?
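For reference, if (and only if) EMC confirms round robin is supported, changing the policy on ESXi 5.x would look roughly like this. The SATP name and the device NAA ID below are placeholders; which SATP the VNXe actually claims is exactly the open question here.

```shell
# Make Round Robin the default PSP for a given SATP (placeholder SATP name).
esxcli storage nmp satp set --satp VMW_SATP_ALUA --default-psp VMW_PSP_RR

# Or change the policy on a single existing device (placeholder NAA ID).
esxcli storage nmp device set --device naa.60060160abcdef00 --psp VMW_PSP_RR

# Check which SATP/PSP a device is actually using.
esxcli storage nmp device list --device naa.60060160abcdef00
```

The `device list` output shows the claimed SATP, which would also answer the ALUA-or-not question for a given LUN.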
It would be great to get an authoritative answer on all of these questions so we can put this to rest. I'm not looking for an "it depends" kind of answer. Very simply put: what is the best practice for configuring the VNXe array SPs and ports in relation to physical switches, LACP or not, separate subnets or not, ALUA or not, and what kind of VMware multipathing policy?
Thanks for your attention and responses thus far! I really appreciate it!
huberw1
14 Posts
0
September 9th, 2011 08:00
Avi,
Is there a target release date for these papers? And can you post links back to this thread once they are released so we all are notified?
Thanks for the response.
msasse
5 Posts
1
September 9th, 2011 09:00
Hope this helps!
Kumar_A
727 Posts
0
September 12th, 2011 13:00
The same thing is being discussed at https://community.emc.com/message/564371#564371
huberw1
14 Posts
0
October 20th, 2011 09:00
All,
EMC has released an excellent whitepaper on how to properly configure VNXe networking for HA configurations, both iSCSI and NFS.
Find the paper here: https://supportbeta.emc.com/docu35554_White-Paper:-EMC-VNXe-High-Availability.pdf?language=en_US
Will