pie8ter

MD3220i, ESXi hosts and PowerConnect Switch question.

We have the following inventories:

5 DELL PE R710 servers
1 DELL MD3220i
2 iSCSI optimized DELL powerconnect switches

We plan to set up vSphere 4.0. Each ESXi 4.1 host (R710) has a total of six network ports:

3 ports = Storage
2 ports = LAN
1 port = Management (vCenter)

The three storage ports from each ESXi host will be connected to the two iSCSI-optimized PowerConnect switches. The MD3220i, with its dual RAID controllers, will also connect to both switches.

I have the following questions.

1) The MD3220i admin guide says I need to assign each iSCSI port on the controllers to a different subnet. For example:

Controller 0, port 0 => 192.168.1.1 /24
Controller 0, port 1 => 192.168.2.1 /24
Controller 0, port 2 => 192.168.3.1 /24
Controller 0, port 3 => 192.168.4.1 /24

Controller 1, port 0 => 192.168.1.2 /24
Controller 1, port 1 => 192.168.2.2 /24
Controller 1, port 2 => 192.168.3.2 /24
Controller 1, port 3 => 192.168.4.2 /24

a) What's the purpose of assigning a different IP subnet to each port? Does it help MPIO? Can I just put all the iSCSI ports in a single subnet? I want all the ESXi hosts to see all the LUNs in the storage array for vMotion and VMware HA.

b) I will be NIC teaming the three physical Ethernet ports in each host to a vSwitch for iSCSI. Does this mean I need to create one VMkernel port for each subnet on the storage unit (four VMkernel ports in total) and attach them to the vSwitch?

2) What's a host group? The guide says host groups are used only for Microsoft Cluster. Can I put all the ESXi hosts into one host group and assign it to all the targets?

3) When I NIC team the storage NICs in ESXi, which load-balancing algorithm should I select (IP Port, MAC Hash, or IP Hash)? How will this affect ESXi's multipath I/O?

 

Thanks

6 Replies
cjtompsett

Re: MD3220i, ESXi hosts and PowerConnect Switch question.

1) The MD3220i admin guide says I need to assign each iSCSI port on the controllers to a different subnet. For example:

Controller 0, port 0 => 192.168.1.1 /24
Controller 0, port 1 => 192.168.2.1 /24
Controller 0, port 2 => 192.168.3.1 /24
Controller 0, port 3 => 192.168.4.1 /24

Controller 1, port 0 => 192.168.1.2 /24
Controller 1, port 1 => 192.168.2.2 /24
Controller 1, port 2 => 192.168.3.2 /24
Controller 1, port 3 => 192.168.4.2 /24

a) What's the purpose of assigning a different IP subnet to each port? Does it help MPIO? Can I just put all the iSCSI ports in a single subnet? I want all the ESXi hosts to see all the LUNs in the storage array for vMotion and VMware HA.

Having the different subnets keeps the number of established iSCSI sessions down. MPIO will work fine with the iSCSI ports all on one subnet or spread across several.
In your configuration, if everything is on one subnet, each iSCSI NIC on each server will establish a session to every iSCSI port on the array.
With all of the NICs on one subnet you would have:
5 servers * 3 iSCSI NICs per server * 8 iSCSI ports on the MD3220i = 120 iSCSI sessions.
With multiple subnets, each NIC can only reach the array ports on its own subnet, so far fewer sessions are established.
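A quick sketch of that session arithmetic. The two-ports-per-subnet figure in the multi-subnet case is my assumption, based on one port per subnet on each of the two controllers:

```python
# Each bound iSCSI NIC logs in to every array port it can reach, so the
# total session count is a simple product. Port counts below come from
# this thread's hardware (5 hosts, 3 iSCSI NICs each, 8 array ports).

def iscsi_sessions(hosts, nics_per_host, reachable_ports_per_nic):
    """Total iSCSI sessions the array must maintain."""
    return hosts * nics_per_host * reachable_ports_per_nic

# Single subnet: every NIC reaches all 8 array ports.
single_subnet = iscsi_sessions(5, 3, 8)

# Separate subnets (assumption): each NIC sits on one subnet and reaches
# only the two array ports on that subnet, one per controller.
multi_subnet = iscsi_sessions(5, 3, 2)

print(single_subnet, multi_subnet)  # 120 vs 30 sessions
```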

b) I will be NIC teaming the three physical Ethernet ports in each host to a vSwitch for iSCSI. Does this mean I need to create one VMkernel port for each subnet on the storage unit (four VMkernel ports in total) and attach them to the vSwitch?

With iSCSI in VMware you do not want to team the NICs. Instead, configure a separate VMkernel port for each vmnic. A 1 VMkernel to 1 vmnic configuration is preferred and recommended.
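A rough sketch of that 1:1 binding on ESX/ESXi 4.1 from the command line. The names vSwitch1, iSCSI1-3, the 192.168.x.11 addresses, and the vmhba35 adapter are hypothetical; substitute your own (check with `esxcfg-vswitch -l` and `esxcli swiscsi nic list`):

```shell
# One VMkernel port group per iSCSI NIC, each on its own subnet:
esxcfg-vswitch -A iSCSI1 vSwitch1
esxcfg-vmknic -a -i 192.168.1.11 -n 255.255.255.0 iSCSI1
# (repeat for iSCSI2 / 192.168.2.11 and iSCSI3 / 192.168.3.11)

# In the vSphere Client, override NIC teaming on each port group so it
# has exactly one Active vmnic and the rest are Unused. Then bind each
# VMkernel port to the software iSCSI adapter:
esxcli swiscsi nic add -n vmk1 -d vmhba35
esxcli swiscsi nic add -n vmk2 -d vmhba35
esxcli swiscsi nic add -n vmk3 -d vmhba35

# Verify the bindings:
esxcli swiscsi nic list -d vmhba35
```

This is a host configuration fragment, not a runnable script; the teaming override step still has to be done per port group in the client.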

This document should provide more answers
PowerVault MD32xxi Deployment Guide for VMware ESX 4.1
http://www.dell.com/downloads/global/products/pvaul/en/powervault-md32xxi-deployment-guide-for-vmwar...

2) What's a host group? The guide says host groups are used only for Microsoft Cluster. Can I put all the ESXi hosts into one host group and assign it to all the targets?

A host group is used when you want multiple servers to see the same LUNs. It can be used with a Microsoft cluster, a Linux cluster, or a VMware HA cluster. The VMware deployment guide linked above has information about this starting on page 7.

3) When I NIC team the storage NICs in ESXi, which load-balancing algorithm should I select (IP Port, MAC Hash, or IP Hash)? How will this affect ESXi's multipath I/O?

As stated above, you do not want to team the NICs. Multipathing will take care of the I/O across the paths to the storage. After ESX sees the presented LUNs, right-click each LUN and set the multipath policy to Round Robin.
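For reference, a rough command-line equivalent of that Round Robin change on ESX/ESXi 4.x looks like the sketch below; the naa.* device ID is a placeholder, so list your own devices first:

```shell
# List the LUNs the host sees, with their current path selection policies:
esxcli nmp device list

# Switch one LUN's path selection policy to Round Robin
# (naa.6002... is a placeholder -- substitute your LUN's device ID):
esxcli nmp device setpolicy --device naa.6002219000000000000000000000 --psp VMW_PSP_RR

# Confirm the change took effect:
esxcli nmp device list --device naa.6002219000000000000000000000
```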

Regards,
cjtompsett

 

JOHNADCO

Re: MD3220i, ESXi hosts and PowerConnect Switch question.

Some SANs' internal routing gets confused if the ports are not all on different subnets. At a minimum, the ports on a given controller should be on different subnets if the SAN's install guide says to use them. Other SAN install guides will tell you to put everything in one subnet; it depends on the routing inside the SAN itself.

 

As for SANs that call for different subnets: generally they treat each network connection as a separate network. If two or more ports share the same subnet, that is when the SAN's internal routing can get confused. Performance can suffer even if everything appears to work.

cjtompsett

Re: MD3220i, ESXi hosts and PowerConnect Switch question.

In the Dell deployment guide, the configuration has all of the iSCSI ports and iSCSI NICs on the same subnet.

PowerVault MD32xxi Deployment Guide for VMware ESX 4.1
http://www.dell.com/downloads/global/products/pvaul/en/powervault-md32xxi-deployment-guide-for-vmwar...

-cjtompsett

pie8ter

Re: MD3220i, ESXi hosts and PowerConnect Switch question.

Thanks for the link.

That deployment guide is exactly what I was looking for. I had called Dell support asking for a white paper on the MD32xxi in an ESXi environment and was told there isn't one.

I suspect the deployment guide uses a single subnet for all the iSCSI ports just for demo purposes. I decided to set up multiple subnets as the MD3220i Admin Guide suggests.

mrokkam

Re: MD3220i, ESXi hosts and PowerConnect Switch question.

Having multiple subnets also helps ensure the full throughput of all ports is actually used.

If only one subnet is used and the source IP for the iSCSI sessions is left at the default or not set up correctly, all iSCSI traffic can go out a single port. This is difficult to debug and hurts performance. One more reason separate subnets are recommended.

Thanks,

Mohan

cjtompsett

Re: MD3220i, ESXi hosts and PowerConnect Switch question.

With all IPs from the storage target and initiator on the same subnet, you will not have issues with it using all of the paths. If you do use one subnet, make sure any switches involved are either stacked or trunked so you do not get a split horizon. Multipathing software, be it VMware native multipathing, MS MPIO, Linux RDAC, Linux multipathd, EMC PowerPath, EQL MEM, etc., ensures that each path to the iSCSI target is used. I have personally set up multiple iSCSI storage products in both single- and multiple-subnet configurations.

When discovery of the array is complete, the VMware iSCSI initiator will log in to all ports on the array from all VMkernel iSCSI IPs. The only difference between the IPs being on one subnet or multiple subnets is the number of iSCSI sessions/paths established to the target.

Since the MD32xxi arrays have two RAID controllers, not all of the paths to the iSCSI target will show Active. The Active paths will be the ones connected to the controller that currently owns the LUN. To make use of all the paths to the owning controller, change the multipathing policy to Round Robin. At the bottom of the Dell TechCenter page linked below are screenshots of what needs to be set.

From the www.delltechcenter.com - http://www.delltechcenter.com/page/VMware+ESX+4.0+and+PowerVault+MD3000i

12. To configure the Round Robin multipathing policy using the VI Client GUI, for each LUN exposed to the ESX server, change the default path selection policy to Round Robin (VMware). This enables load balancing over the two active paths to the LUN (the two paths through the controller that owns the LUN; the other two paths should be standby).

a. Right-click on the device and choose Manage Paths.


b. In the Path Selection drop-down, select Round Robin.
Before this selection is made, one path has a status of Active, another is Active (I/O), and the other two paths are in standby.


c. After selecting Round Robin, two paths have a status of Active (I/O) and the other two paths are in standby.


d. Repeat the process for each iSCSI LUN presented to the ESX 4 server from the Dell PowerVault MD3000i.

 

-cjtompsett
