July 31st, 2013 03:00

EqualLogic and MD3200i in the same environment

Hi

I have an existing VMware vSphere 4.1 infrastructure with 3 hosts and an EqualLogic iSCSI SAN. Each host has 8 pNICs, of which 2 are dedicated to iSCSI. iSCSI traffic also runs on dedicated switches.

We now have a spare Dell MD3200i SAN available which I would like to integrate into the existing storage network. I thought this would be a fairly painless process of just configuring all the ports on the MD for the existing iSCSI subnet, creating some new VMkernel ports on each host and binding pNICs to the relevant vmk ports. After much investigation, this doesn't appear to be the optimum config.

Dell state in their MD3200i deployment guide for vSphere 4.1 that:

“It is recommended that you have different ports on different subnets due to throughput and pathing considerations….

Multiple subnets should be allocated by the number of array ports per controller. With the MD3200i you only get an active path to multiple ports on the same controller if they are on different subnets. Likewise, with VLANs, for multiple target ports on the same controller you only get an active path if the target ports are on different VLANs. Since the MD3220i has 4 ports per controller you get your best throughput with 4 subnets."

Conversely, the recommended config for the EqualLogic is all ports on the same subnet (which I believe is also generally the VMware stance).

I seem to have a number of options: 

  1. Continue as planned and place all ports on the same subnet (probably not a realistic option, as according to the above I will only have 1 active path at a time)
  2. Create new subnets for the MD, create new VMkernel ports on the new subnets, bind pNICs to the vmks, and separate the traffic using VLANs (see the sketch after this list)
  3. Free up another 2 pNICs on each host and use these for the MD iSCSI networks, thus separating the 2 SANs
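
For reference, the mechanics I have in mind for option 2 on a 4.1 host are roughly as follows - a sketch only, where vSwitch1, VLAN 20, the 10.10.20.0/24 subnet, vmk3 and vmhba33 are placeholder names, and the steps would repeat for each additional MD subnet:

    # Illustrative sketch - placeholder names; repeat per new MD subnet/VLAN
    esxcfg-vswitch -A "iSCSI-MD-VLAN20" vSwitch1          # new port group
    esxcfg-vswitch -v 20 -p "iSCSI-MD-VLAN20" vSwitch1    # tag it with VLAN 20
    esxcfg-vmknic -a -i 10.10.20.11 -n 255.255.255.0 "iSCSI-MD-VLAN20"
    esxcli swiscsi nic add -n vmk3 -d vmhba33             # bind to the sw iSCSI HBA (4.x syntax)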

Does anyone have any experience of running an MD3200i or similar in a mixed SAN environment? 

Thanks in advance for any advice.

Cheers

Nick

August 1st, 2013 07:00

Nick,

 Adding two more pNICs for iSCSI on each host will improve performance simply by reducing the oversubscription on the existing iSCSI pNICs.  However, I would ADD two NICs per host rather than reduce functionality elsewhere on the vSphere host.  In most vSphere configurations using IP-based storage, I recommend at least 8 x 1 Gb NICs on the host, on redundant hardware.

On the R710/720 servers that have 4 onboard NICs, I add either one quad-port or two dual-port PCIe cards to meet the required 8 pNICs.

For the 4 primary vSphere network services, use pNIC pairs from different locations (Onboard + PCIe); see the sketch after this list:

 vSphere Management - *I prefer separate ports and VLAN for management

 vMotion  - *Use dedicated ports and VLAN for vMotion, as it is not encrypted

 VM Networks (tagged guest VLANs)

 Storage - iSCSI/NFS
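
As a quick illustration of pairing across hardware, assuming vmnic0 is an onboard port and vmnic4 sits on a PCIe card (the numbering is hypothetical and depends on the host):

    # Pair one onboard and one PCIe uplink per vSwitch so a single card
    # (or the onboard controller) failing never takes down a service
    esxcfg-vswitch -L vmnic0 vSwitch0   # e.g. management vSwitch - onboard port
    esxcfg-vswitch -L vmnic4 vSwitch0   # same vSwitch - PCIe port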

Of course, this changes with 10 Gb and converged networks, but in your scenario we are talking 1 Gb.  If you want to break out the iSCSI for the MD3200i onto separate NICs, add another PCIe card to each host rather than restrict the services above.

My two cents.

-Tim Antonowicz

Dell TechCenter RockStar, vExpert


July 31st, 2013 09:00

Hello,

Overall, running mixed storage isn't recommended. It's not something that's certified.

Since the MD line uses multiple subnets, if you are going to try this, create entirely new subnets for the MD storage.  Do not share them with the EQL subnet.  

July 31st, 2013 10:00

Nick,

  There are two approaches to a mixed environment like this, and I have configured both in the past.

1) Simple design, low performance.

  If you use a single port on the MD3200i, you can keep it on the same VLAN as your EqualLogic array. This requires minimal configuration on your part and in vSphere, but it limits you to only 1 Gb of bandwidth for the volumes on that storage.

2) Complex design, high performance.

  In order to configure your MD3200i for maximum performance, you will need to create an additional 4 VLANs on your SAN switches, one dedicated to each port on the MD controller. Configure the outbound SAN ports as access ports on the indicated VLANs. You will also need to add 4 more VMkernel ports on your iSCSI vSwitch, one per VLAN; split them out and dedicate them to the physical NICs as indicated in the diagram below, which spreads the connections across those two physical ports. Connections between the vSphere pNICs and the SAN switches should be trunk ports, with the indicated VLANs tagged for that port. As you build volumes and mount datastores to your vSphere hosts, you will need to balance the connections manually between the 4 ports to even out the bandwidth.
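
To make that concrete, the host side would look something like this from the 4.1 CLI - a sketch only, where the VLAN IDs 21-24, the 10.10.21.x address, vSwitch1, vmk3 and vmhba33 are placeholders:

    # Sketch: one tagged port group + VMkernel port per MD3200i port/VLAN
    for VLAN in 21 22 23 24 ; do
        esxcfg-vswitch -A "iSCSI-MD-VLAN${VLAN}" vSwitch1
        esxcfg-vswitch -v ${VLAN} -p "iSCSI-MD-VLAN${VLAN}" vSwitch1
    done
    esxcfg-vmknic -a -i 10.10.21.11 -n 255.255.255.0 "iSCSI-MD-VLAN21"
    # ...repeat with an address on each remaining subnet, then bind each
    # new vmk to the software iSCSI HBA (ESX/ESXi 4.x syntax) and verify:
    esxcli swiscsi nic add -n vmk3 -d vmhba33
    esxcli swiscsi nic list -d vmhba33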

  While either configuration will work with vSphere, the second one will give you the best performance for the datastores residing on the MD3200i without disrupting your existing EqualLogic storage. It is more complex, but in the long run you will get the most performance out of the storage devices and, ultimately, your VMs and client services.

-Tim Antonowicz

Dell TechCenter RockStar, vExpert


August 1st, 2013 01:00

Thanks Tim.

The complex design you have explained above is exactly what I was thinking for scenario 2 in my original post.

I am increasingly leaning towards scenario 3 in the OP: utilising 4 pNICs for iSCSI - 2 for the EQL and 2 for the MD3200i (using VLANs as per your post).

Would this additional level of separation between the SANs have any performance benefit, in your opinion?

Thanks for your help.

Nick



August 1st, 2013 07:00

Tim,

Thanks for taking the time to advise me.

The vSphere infrastructure is currently configured exactly as you state above - 8 NICs paired for management, vMotion, VMs and iSCSI.

I'm currently weighing up the cost of an additional 2 NICs against the performance of management and vMotion, as I could bind those 2 networks to the same NIC pair, active/standby on opposite NICs.
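
Per port group that would be something along these lines - shown with the newer esxcli syntax (ESXi 5.x) for illustration, since on 4.1 I'd set the failover order in the vSphere Client NIC Teaming tab; vmnic0/vmnic1 are placeholder names:

    # Management active on vmnic0 with vmnic1 standby; vMotion mirrored
    esxcli network vswitch standard portgroup policy failover set \
        -p "Management Network" -a vmnic0 -s vmnic1
    esxcli network vswitch standard portgroup policy failover set \
        -p "vMotion" -a vmnic1 -s vmnic0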

But if I can get another dual-port card for each host, then all the better.

Thanks for all your help.

Regards

Nick
