9 Posts

March 5th, 2008 14:00

MD3000i and iSCSI Network IP Ranges

I have exactly the hardware and network setup shown on page 3 of this link:
http://www.dell.com/downloads/global/solutions/md3000i_esx_deploy_guide.pdf
On the MD3000i I have set up the controllers like this:
controller 0, port 0 - 192.168.130.101
controller 0, port 1 - 192.168.131.101
controller 1, port 0 - 192.168.130.102
controller 1, port 1 - 192.168.131.102
On the ESX servers, each server has an IP pool with an address in each of the 192.168.130.x and 192.168.131.x ranges.
Everything is working fine.
A colleague has asked why I have the separate ranges, to which I answered "I don't know".
Is it best practice to keep 2 IP ranges or is that just adding complexity to the network setup?
Could I set up my controllers to be something like:
controller 0, port 0 - 192.168.130.101
controller 0, port 1 - 192.168.130.102
controller 1, port 0 - 192.168.130.103
controller 1, port 1 - 192.168.130.104
And the ESX servers to have just 192.168.130.* IP addresses.
Which config would you suggest and why?

112 Posts

March 5th, 2008 15:00

The reason for the multiple network subnets is to allow for redundant networks. This is a best practice to ensure connectivity to the storage. If you don't actually have redundant network switches, you can use VLANs for now and be ready to add the switches later. The redundant networks are very similar to how most fibre channel SANs are set up with redundant fabrics.

If you are not worried about losing a switch, or about losing connectivity to the storage for a period of time, then you could just run it all across one network. This would make total sense for a test environment, for example.

Thanks - Todd

9 Posts

March 5th, 2008 20:00

Thanks Todd,
That all sounds good, and that is how I've got my SAN configured at the moment, but the theory behind it is still bugging me...
Assume everything is on the same subnet and I have 2 switches (which I do). If a switch died, I still wouldn't lose connectivity to the storage array. If a storage processor went, failover should work there too. On paper it actually offers more redundant paths than having multiple network subnets, so I don't see how I would lose connectivity to the storage array.
I'm used to working with fibre SANs, which is why I separated the subnets in the first place. However, the more I think about it, a single subnet still seems to ensure connectivity and would be easier to configure.
Am I missing something? What scenarios can you see where I might lose connectivity to the storage, if everything is on the same subnet?
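
To put numbers on the "more paths" point, here is a quick sketch (Python, purely for illustration). It assumes any two addresses on the same /24 can reach each other through the switches, and the host addresses are only examples, not my real ones:

    # Count host-NIC -> controller-port pairs that share a subnet under
    # each addressing scheme. Illustration only - it ignores switches,
    # ISLs and failover logic entirely.
    from ipaddress import ip_interface

    def paths(host_nics, ctrl_ports):
        return [(h, c) for h in host_nics for c in ctrl_ports
                if ip_interface(h).network == ip_interface(c).network]

    # Two-subnet layout (what I run today); example host addresses.
    host_two = ["192.168.130.229/24", "192.168.131.229/24"]
    ctrl_two = ["192.168.130.101/24", "192.168.131.101/24",
                "192.168.130.102/24", "192.168.131.102/24"]

    # Single-subnet layout (what my colleague suggests).
    host_one = ["192.168.130.229/24", "192.168.130.230/24"]
    ctrl_one = ["192.168.130.101/24", "192.168.130.102/24",
                "192.168.130.103/24", "192.168.130.104/24"]

    print(len(paths(host_two, ctrl_two)))  # 4 host/port pairs, 2 per NIC
    print(len(paths(host_one, ctrl_one)))  # 8 host/port pairs, 4 per NIC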

112 Posts

March 7th, 2008 11:00

You bring up a good point which I hadn't really thought through before. So I asked a couple of iSCSI experts here at Dell, and the info below is what I got back:

"It does not matter if you have separate subnets or the same subnet it's really user preference. Some folks like to run separate subnets to isolate I/O traffic from each other. You can do the same with VLANs.

What it boils down to is the switch configuration and type of switches.
Some switches have full port duplex, meaning they have full port speed no matter how many connections to the switch. Others utilize a "port cluster" architecture, where a cluster or group of ports share resources and, with heavy I/O, can be saturated by a single port in the cluster.
Other switches share resources across all the ports in the switch (this is really the worst case). Saturation can be caused by a select few ports running really heavy I/O. If one app is killing the performance of the SAN because it is hogging switch resources, then it's a problem.

From a storage perspective he can use either configuration described, but if he uses a single subnet with redundant switches he'll want to ISL (inter-switch link) the switches so that if one link goes down somewhere in the mix, he doesn't lose performance."

Thanks,
Todd

156 Posts

March 9th, 2008 12:00

There are a lot of different theories as to what the best practice is. iSCSI is very versatile and can be set up and supported in whatever way works best for each environment.
There are several reasons I like separate subnets for Windows hosts... I'm still working to come to a conclusion for ESX.
If you are using the ESX iSCSI initiator, you may want to have all IP addresses on the same subnet and team the NICs, but it seems either way will work. There have been some posts on the Dell Community Forum regarding this. If you are using the Windows iSCSI initiator, I would use separate subnets.
First, Windows typically complains when 2 NICs are on the same subnet unless they are teamed. Keep in mind, teaming is not supported by Microsoft for iSCSI traffic. Having redundant isolated networks increases reliability by limiting any broadcast storms to a single subnet. Another reason to place the NICs on separate subnets is for troubleshooting purposes. If you have 2 NICs on the same subnet, PING will only work on the first NIC in the binding order.
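
To illustrate that last troubleshooting point, here is a small sketch (the addresses are only examples): with separate subnets every target address matches exactly one local NIC, so you always know which port a ping or session leaves on; on a shared subnet both NICs match and the binding order silently decides.

    # Which local NICs are candidates for reaching each iSCSI target?
    # Illustration only - a real OS consults its routing table and, when
    # several NICs match, falls back to the binding order.
    from ipaddress import ip_address, ip_interface

    def candidate_nics(local_nics, target):
        t = ip_address(target)
        return [n for n in local_nics if t in ip_interface(n).network]

    # Separate subnets: each target maps to exactly one NIC.
    nics = ["192.168.130.10/24", "192.168.131.10/24"]
    for target in ("192.168.130.101", "192.168.131.101"):
        print(target, "->", candidate_nics(nics, target))

    # Same subnet: both NICs match, so you can't tell from the address
    # alone which NIC a ping (or an iSCSI session) will actually use.
    nics = ["192.168.130.10/24", "192.168.130.11/24"]
    print("192.168.130.101", "->", candidate_nics(nics, "192.168.130.101"))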

7 Posts

March 10th, 2008 09:00

For ESX with two switches, I think you need to use separate subnets unless you connect the switches. We ran into problems with this in our failover testing. We found that ESX wouldn't fail over to a secondary card unless that card could see the same iSCSI controller addresses on the storage that the first card could see. We solved this by connecting our two switches, but I believe you can get this to work without interconnected switches if you have two subnets.

112 Posts

March 11th, 2008 13:00

We are having a chat today at 3 Central to discuss VMware and iSCSI - http://www.delltechcenter.com/page/TechCenter+Chat

If you miss the chat the transcript will be posted the following day.

Todd

4 Posts

May 1st, 2008 15:00

I would like to use this configuration on the MD3000i for use with the Microsoft iSCSI initiator and VMware ESX 3.5:
controller 0, port 0 - 192.168.130.101/255.255.255.0
controller 0, port 1 - 192.168.131.101/255.255.255.0
controller 1, port 0 - 192.168.130.102/255.255.255.0
controller 1, port 1 - 192.168.131.102/255.255.255.0

This is the preferred configuration for the Microsoft iSCSI initiator, but I am still not sure how to set this up in VMware ESX 3.5. I have a PE2950 with 6 pNICs. I would use 2 pNICs for iSCSI. I have two PC5448 switches for iSCSI. Should I configure two vSwitches? Like this:
controller 0, port 0--pSwitch0--pNIC0--vSwitch0(VMkernel0, ServiceConsole0)
controller 1, port 0--/

controller 0, port 1--pSwitch1--pNIC1--vSwitch1(VMkernel1, ServiceConsole1)
controller 1, port 1--/

VMkernel0: 192.168.130.229/255.255.255.0
VMkernel1: 192.168.131.229/255.255.255.0

I could also use a team for vSwitch0 (pNIC0, pNIC2) and for vSwitch1 (pNIC1, pNIC3), but I would like to use pNIC2 and pNIC3 for the DMZ. pNIC4 and pNIC5 are used for the LAN.

Is this a reasonable setup or am I missing something?

156 Posts

May 2nd, 2008 09:00

dell-svein, everything you described looks correct.
You could also consider establishing the iSCSI connections from within the guest OS. The current version of ESX only supports failover, not multipathing, to a particular LUN. For example, if your LUN was on Controller 0, I/O would only use VMkernel0 to Ctrl 0-0. If you used the MS iSCSI initiator in the guest OS, you could establish 2 sessions, one from NIC1 to ctrl 0-0 and the other from NIC2 to ctrl 1-0. MDSD would configure MPIO to use Round Robin by default. So you might get better performance, though I have not tested it.

156 Posts

May 2nd, 2008 09:00

Dave T is correct here. If there is not a physical connection between the 2 switches, they have to be on separate subnets. This applies to direct-attached storage as well. The IP routing wouldn't know which NIC to use; it would use the first NIC in the binding order on the subnet. Using a separate subnet lets the OS know it has to use the other NIC.

4 Posts

May 2nd, 2008 12:00

I have been testing this configuration for failover today to be absolutely sure it’s working. Four ping commands to each controller/port from ESX/Linux were started to see what was going on. Network cables were removed and reinserted on my server pNICs and the MD3000i pNICs. The failover worked better than expected :-) On vSwitch0 and vSwitch1 I also used NIC teaming. I removed pNIC cables (3) until only one was left on my ESX server. Even when I removed the last pNIC cable and inserted another pNIC cable on a different subnet, the ongoing large-file-copy process on the iSCSI storage succeeded. Of course there was a delay for a while when no network cable was attached to the server for iSCSI. I did the same thing on the MD3000i. The failover was better than expected. I am now comfortable about putting this into production. This configuration will also work with the Microsoft iSCSI initiator with MPIO. (Who would exclude this possibility when setting up the MD3000i?)
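
For anyone who wants to repeat the test, here is roughly what my monitoring amounted to, written up as a small Python sketch instead of the four parallel ping commands I actually ran (Linux ping flags assumed; the targets are the controller ports from my config above):

    #!/usr/bin/env python
    # Ping every MD3000i controller port once per second and log which
    # ones answer while cables are pulled and reinserted.
    import subprocess
    import time

    TARGETS = ["192.168.130.101", "192.168.131.101",
               "192.168.130.102", "192.168.131.102"]

    def alive(ip):
        # One ICMP echo request with a 1-second timeout (Linux ping flags).
        return subprocess.call(
            ["ping", "-c", "1", "-W", "1", ip],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0

    while True:
        status = " ".join(
            "%s=%s" % (ip, "up" if alive(ip) else "DOWN") for ip in TARGETS)
        print(time.strftime("%H:%M:%S"), status)
        time.sleep(1)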

What I don’t understand is why Dell/VMware does not have a better guide for MPIO configuration. The setup in http://www.dell.com/downloads/global/solutions/md3000i_esx_deploy_guide.pdf cannot be used with MS iSCSI MPIO.

I would like to upload some screenshots to make this easier to understand, but it looks like that’s not possible in this forum.

156 Posts

May 2nd, 2008 13:00

"What I don’t understand is why DELL/VMware does not have a better guide for MPIO configuration. This http://www.dell.com/downloads/global/solutions/md3000i_esx_deploy_guide.pdf setup cannot be used with MS iSCSI MPIO.
"
That guide is focused on ESX. If you are deploying using the guest OS, it would be configured the same as if it were a physical machine.

4 Posts

May 2nd, 2008 15:00

If you follow the guide you will end up with one subnet using only pNIC teaming (see the first post by Acrobat). If you have configured the MD3000i with only the 192.168.130.* net, then MPIO for the MS iSCSI initiator is not possible, because the two pNICs in an MS Windows pHost need to be on different subnets. Teaming is not supported for the MS iSCSI initiator with MPIO. You cannot use more than one network configuration at the same time on the MD3000i. To be clear: I am talking about using the MD3000i from an ESX host AND from a physical MS Windows host at the same time (physical file server, Exchange, MS SQL, etc.).

156 Posts

May 4th, 2008 11:00

"If you follow the guide you will end up with one subnet using only pNIC teaming (see the first post by Acrobat). If you have configured MD3000i with only the 192.168.130.* net then MPIO for MS iSCSI initiator is not possible because the two pNICs in a MS Windows pHost need to be on different subnet. Teaming is not supported for MS iSCSI initiator with MPIO. You cannot use more than one network configuration at the same time on the MD3000i. To be clear: I talking about using MD3000i from ESX host AND from physical MS Windows host at the same time (physical fileserver, Exchange, MS SQL etc)."
I'm not an ESX expert, but my understanding is that previous versions ESX didn't support multiple subnets for iSCSI, but it does now. All the testing I have done it seems to work just fine on separate subnets. I'm not sure what testing Dell has done with the MD3000i and multiple subnets. For the Dell | EMC products, the guides have just been updated to support multiple subnets for iSCSI with VMWare .
The comment that for Windows hosts have to be on separate subnets is not 100% true. It is only "recommended" that they are on separate subnets for the MD3000i. If you choose to put all iSCSI NICs on the same subnet, you will have to explicitly call out each source NIC to target NIC connections for it to use each NIC; otherwise, it would only use the first NIC in the binding order for I/O and then use the second NIC only for failover. Dell's other iSCSI products such as Dell | EqualLogic recommends a single subnet (redundant switches) based on its architecture; for Dell |EMC products there are documents explaining setting up either method, but separate subnets are recommended.
Your comment about NIC teaming with MS is not supported is correct, however you could team NICs at the ESX layer and have MS use the virtual switch. There really is not any benefit to doing this, you can always just have more sessions from additional NICs.
So, in conclusion, you should be able to either use a single subnet or two subnets with both VMWare and Microsoft. If this were me setting it up, I would use 2 separate subnets. Based if you need the guest OS to reside on the iSCSI LUN or just its data, you could use the ESX iSCSI initiator or the MS iSCSI initiator from within the guest OS. If the guest OS needs to be on the iSCSI LUN, then you will have to use the ESX iSCSI initiator.

Hope this helps.

4 Posts

May 5th, 2008 18:00

Thanks for the useful information. I also think that separate subnets are the best configuration. Then I can have full control over which pSwitch is used. Ideally the pSwitches are not connected to each other or to the LAN, to ensure maximum iSCSI throughput. This configuration requires users to be careful when connecting network cables; wrong patching on interconnected switches will result in data being transferred across the uplink.

“For the Dell | EMC products, the guides have just been updated to support multiple subnets for iSCSI with VMware.” – where can I find this guide?

156 Posts

May 7th, 2008 07:00

"“For the Dell | EMC products, the guides have just been updated to support multiple subnets for iSCSI with VMWare .” – where can I find this guide?
"
If you search PowerLink for "300-000-603", one of the results should be: "EMC® Host Connectivity Guide for Windows".
I tried searching for the document title, but it was not in the top results. :(

You can also browse to the document at: Home > Support > Technical Documentation and Advisories > Host Connectivity/HBAs > Installation/Configuration.

Below are specific pages to look at:

"Single iSCSI subnet configuration" on page 127 - 134
“Multiple iSCSI subnet configuration" on page 134 - 142