star_it
March 5th, 2008 14:00
MD3000i and ISCSI Network IP Ranges
I have exactly the hardware and network setup shown on page 3 of this link:
http://www.dell.com/downloads/global/solutions/md3000i_esx_deploy_guide.pdf
On the MD3000i I have set up the controllers like this:
controller 0, port 0 - 192.168.130.101
controller 0, port 1 - 192.168.131.101
controller 1, port 0 - 192.168.130.102
controller 1, port 1 - 192.168.131.102
On the ESX servers, each server has an IP pool with an address in both the 192.168.130.x and 192.168.131.x ranges.
Everything is working fine.
A colleague asked why I have the separate ranges, to which I answered "I don't know".
Is it best practice to keep 2 IP ranges or is that just adding complexity to the network setup?
Could I set up my controllers to be something like:
controller 0, port 0 - 192.168.130.101
controller 0, port 1 - 192.168.130.102
controller 1, port 0 - 192.168.130.103
controller 1, port 1 - 192.168.130.104
And the ESX servers to have just 192.168.130.* IP addresses.
Which config would you suggest and why?
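To make the difference between the two layouts concrete, here is a small sketch using Python's ipaddress module. The addresses are the ones from the post; the dictionary and helper names are made up for illustration.

```python
# Sketch: how many /24 subnets does each addressing scheme span?
# Addresses are from the post; names are hypothetical.
import ipaddress

dual_subnet = {
    "ctrl0-port0": "192.168.130.101",
    "ctrl0-port1": "192.168.131.101",
    "ctrl1-port0": "192.168.130.102",
    "ctrl1-port1": "192.168.131.102",
}

single_subnet = {
    "ctrl0-port0": "192.168.130.101",
    "ctrl0-port1": "192.168.130.102",
    "ctrl1-port0": "192.168.130.103",
    "ctrl1-port1": "192.168.130.104",
}

def subnets_used(ports, prefix=24):
    """Return the set of /24 networks the given port IPs fall into."""
    return {
        ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
        for ip in ports.values()
    }

print(subnets_used(dual_subnet))    # spans two /24 networks
print(subnets_used(single_subnet))  # spans one /24 network
```

The first layout spans two isolated /24 networks; the second puts all four array ports on one.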
virtualTodd
March 5th, 2008 15:00
If you are not worried about losing a switch or worried about losing connectivity to the storage for a period of time, then you could just run it all across one network. This would make total sense for a test environment for example.
Thanks - Todd
star_it
March 5th, 2008 20:00
That all sounds good, and that is how I've got my SAN configured at the moment, but the theory behind it is still bugging me...
Assume everything is on the same subnet and I have 2 switches (which I do). If a switch died, I still wouldn't lose connectivity to the storage array. If a storage processor went, failover should work there too. On paper this actually offers more redundant paths than having multiple subnets, so I don't see how I would lose connectivity to the storage array.
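That path-count argument can be sketched in a few lines, assuming 2 host NICs, 4 array ports, and no routing between the iSCSI subnets. All names and subnet labels here are hypothetical illustration, not from any vendor tool.

```python
# Back-of-the-envelope path count: a NIC can reach an array port only
# when both sit on the same subnet (no routing between iSCSI subnets).
from itertools import product

# Single-subnet layout: every NIC and every port on "130".
nics_single = {"nic0": "130", "nic1": "130"}
ports_single = {"c0p0": "130", "c0p1": "130", "c1p0": "130", "c1p1": "130"}

# Dual-subnet layout: one NIC and two ports per subnet.
nics_dual = {"nic0": "130", "nic1": "131"}
ports_dual = {"c0p0": "130", "c0p1": "131", "c1p0": "130", "c1p1": "131"}

def reachable_paths(nics, ports):
    """List every (NIC, port) pair that shares a subnet."""
    return [
        (n, p)
        for (n, sn), (p, sp) in product(nics.items(), ports.items())
        if sn == sp
    ]

print(len(reachable_paths(nics_single, ports_single)))  # 8 paths
print(len(reachable_paths(nics_dual, ports_dual)))      # 4 paths
```

So a flat subnet does expose more raw paths (8 vs 4), which is the trade-off the thread goes on to weigh against isolation and switch-resource concerns.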
I'm used to working with fibre SANs, which is why I separated the subnets in the first place; however, the more I think about it, the more it seems a single subnet still ensures connectivity and would be easier to configure.
Am I missing something? What scenarios can you see where I might lose connectivity to the storage, if everything is on the same subnet?
virtualTodd
March 7th, 2008 11:00
"It does not matter whether you have separate subnets or the same subnet; it's really user preference. Some folks like to run separate subnets to isolate I/O traffic from each other. You can do the same with VLANs.
What it boils down to is the switch configuration and type of switches.
Some switches are full port duplex, meaning each port gets full speed no matter how many connections the switch has. Others use a "port cluster" architecture, where a cluster or group of ports shares resources and, under heavy I/O, can be saturated by a single port in the cluster.
Other switches share resources across all the ports in the switch (this is really the worst case): saturation can occur with a select few ports running really heavy I/O. If one app is killing the performance of the SAN by hogging switch resources, that's a problem.
From a storage perspective he can use either configuration described, but if he uses a single subnet with redundant switches he'll want to ISL (inter-switch link) the switches, so that if one link goes down somewhere in the mix he doesn't lose performance."
Thanks,
Todd
Dell-Jeff G
March 9th, 2008 12:00
There are several reasons I like separate subnets for Windows hosts... I'm still working to come to a conclusion for ESX.
If you are using the ESX iSCSI initiator, you may want to have all IP addresses on the same subnet and team the NICs, but it seems either way will work. There have been some posts on the Dell Community Forum regarding this. If you are using the Windows iSCSI initiator, I would use separate subnets.
First, Windows typically complains when 2 NICs are on the same subnet unless they are teamed. Keep in mind, teaming is not supported by Microsoft for iSCSI traffic. Having redundant isolated networks increases reliability by limiting any broadcast storms to a single subnet. Another reason to place the NICs on separate subnets is for troubleshooting purposes. If you have 2 NICs on the same subnet, PING will only work on the first NIC in the binding order.
Dave_T1
March 10th, 2008 09:00
virtualTodd
March 11th, 2008 13:00
If you miss the chat the transcript will be posted the following day.
Todd
svein.
May 1st, 2008 15:00
controller 0, port 0 - 192.168.130.101/255.255.255.0
controller 0, port 1 - 192.168.131.101/255.255.255.0
controller 1, port 0 - 192.168.130.102/255.255.255.0
controller 1, port 1 - 192.168.131.102/255.255.255.0
This is the preferred configuration for the Microsoft iSCSI initiator, but I am still not sure how to set this up in VMware ESX 3.5. I have a PE2950 with 6 pNICs and would use 2 pNICs for iSCSI. I have two PC5448 switches for iSCSI. Should I configure two vSwitches, like this:
controller 0, port 0 --pSwitch0--pNIC0--vSwitch0 (VMkernel0, ServiceConsole0)
controller 1, port 0 -/
controller 0, port 1 --pSwitch1--pNIC1--vSwitch1 (VMkernel1, ServiceConsole1)
controller 1, port 1 -/
VMkernel0: 192.168.130.229/255.255.255.0
VMkernel1: 192.168.131.229/255.255.255.0
I could also use a team for vSwitch0 (pNIC0, pNIC2) and for vSwitch1 (pNIC1, pNIC3), but I would like to use pNIC2 and pNIC3 for a DMZ. pNIC4 and pNIC5 are used for the LAN.
Is this a reasonable setup or am I missing something?
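For reference, a layout like the one sketched above could be built from the ESX 3.5 service console roughly as follows. This is a hedged sketch, not a verified procedure: the vSwitch, portgroup, and vmnic names are assumptions, the IPs are the ones from the post, and you should adapt everything to your own host.

```sh
# Hypothetical ESX 3.5 service-console sketch: one vSwitch per iSCSI subnet,
# one pNIC uplinked to each, with a VMkernel port for iSCSI traffic.
esxcfg-vswitch -a vSwitch0                 # create vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0          # uplink pNIC0
esxcfg-vswitch -A iSCSI0 vSwitch0          # add a portgroup (name assumed)
esxcfg-vmknic -a -i 192.168.130.229 -n 255.255.255.0 iSCSI0

esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -A iSCSI1 vSwitch1
esxcfg-vmknic -a -i 192.168.131.229 -n 255.255.255.0 iSCSI1
```

With one VMkernel port per subnet, each physical switch carries one independent path to the array.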
Dell-Jeff G
May 2nd, 2008 09:00
You could also consider establishing the iSCSI connections from within the guest OS. The current version of ESX only supports failover, not multipathing, to a particular LUN. For example, if your LUN was on controller 0, I/O would only use VMkernel0 to ctrl 0-0. If you used the MS iSCSI initiator in the guest OS, you could establish 2 sessions, one from NIC1 to ctrl 0-0 and the other from NIC2 to ctrl 1-0. MDSD would configure MPIO to use round robin by default, so you might get better performance, though I have not tested it.
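As a toy illustration of why two sessions plus round-robin MPIO can help, the sketch below alternates I/Os between the two paths described above instead of sending everything down one. The session labels are made up; in reality the MPIO DSM selects the path.

```python
# Toy round-robin sketch: I/Os alternate between two iSCSI sessions,
# so both NIC-to-controller paths carry traffic instead of one idling.
from itertools import cycle

# Hypothetical session labels matching the two paths described in the post.
sessions = ["nic1->ctrl0-port0", "nic2->ctrl1-port0"]
round_robin = cycle(sessions)

ios = [f"io{i}" for i in range(6)]
assignments = [(io, next(round_robin)) for io in ios]
for io, path in assignments:
    print(io, "via", path)
```

With a failover-only policy, every entry would name the same path; round robin spreads them evenly.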
svein.
May 2nd, 2008 12:00
What I don’t understand is why DELL/VMware does not have a better guide for MPIO configuration. This http://www.dell.com/downloads/global/solutions/md3000i_esx_deploy_guide.pdf setup cannot be used with MS iSCSI MPIO.
I would like to upload some screenshots to make this easier to understand, but it looks like that's not possible in this forum.
Dell-Jeff G
May 2nd, 2008 13:00
svein.
May 2nd, 2008 15:00
Dell-Jeff G
May 4th, 2008 11:00
The comment that Windows hosts have to be on separate subnets is not 100% true; separate subnets are only "recommended" for the MD3000i. If you choose to put all iSCSI NICs on the same subnet, you will have to explicitly call out each source-NIC-to-target-NIC connection for it to use each NIC; otherwise it will only use the first NIC in the binding order for I/O and use the second NIC only for failover. Dell's other iSCSI products, such as Dell | EqualLogic, recommend a single subnet (with redundant switches) based on their architecture; for Dell | EMC products there are documents explaining how to set up either method, but separate subnets are recommended.
Your comment that NIC teaming is not supported by MS for iSCSI is correct; however, you could team NICs at the ESX layer and have MS use the virtual switch. There really isn't any benefit to doing this, since you can always just run more sessions from additional NICs.
So, in conclusion, you should be able to use either a single subnet or two subnets with both VMWare and Microsoft. If it were me setting it up, I would use 2 separate subnets. Depending on whether you need the guest OS to reside on the iSCSI LUN or just its data, you could use the ESX iSCSI initiator or the MS iSCSI initiator from within the guest OS. If the guest OS needs to be on the iSCSI LUN, then you will have to use the ESX iSCSI initiator.
Hope this helps.
svein.
May 5th, 2008 18:00
"For the Dell | EMC products, the guides have just been updated to support multiple subnets for iSCSI with VMWare." Where can I find this guide?
Dell-Jeff G
May 7th, 2008 07:00
I tried searching for the document title, but it was not in the top results. :(
You can also browse to the document at: Home > Support > Technical Documentation and Advisories > Host Connectivity/HBAs > Installation/Configuration.
Below are specific pages to look at:
"Single iSCSI subnet configuration" on pages 127-134
"Multiple iSCSI subnet configuration" on pages 134-142