
Dell MD3600i & multipath with vSphere 4/5

December 29th, 2011 00:00

Dual controller MD3600i with separate subnets (as per manual), connecting via separate isolated switches to two separate HBAs in ESX 4/5

What am I supposed to see (have) with multipath working correctly?

Active (I/O) - Active (this is what I have showing currently)

or

Active (I/O) - Active (I/O)
(I believe it should be the latter)

on each controller for each target (LUN)

The manual states:

Increasing Bandwidth With Multiple iSCSI Sessions

The PowerVault MD3600i Series storage array in a duplex configuration supports two active/active asymmetric redundant controllers. Each controller has two 10G Ethernet ports that support iSCSI. The bandwidth of the two ports on the same controller can be aggregated to provide optimal performance. A host can be configured to simultaneously use the bandwidth of both the ports on a controller to access virtual disks owned by the controller. The multi-path failover driver that Dell provides for the MD3600i Series storage array can be used to configure the storage array so that all ports are used for simultaneous I/O access. If the multi-path driver detects multiple paths to the same virtual disk through the ports on the same controller, it load-balances I/O access from the host across all ports on the controller.

There is no driver for ESX, so how would one configure it?

I am NOT using the targets (LUNs) for VMFS, but as RDMs for my VM OS

Thanks

Seb

4 Operator

9.3K Posts

December 30th, 2011 07:00

This is an active/passive array; each virtual disk is owned by a single controller. Therefore only the paths to the controller that owns the virtual disk in question will be able to process I/O; the paths to the non-owning controller are there for failover. This is why using 2 or more virtual disks is recommended, so that each controller can own its own virtual disks (with the other controller acting as its failover).
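For what it's worth, you can see that ownership split from the ESXi shell as well. A rough sketch (the naa identifier is a placeholder; substitute one of your own virtual disks from the device list):

# ESXi 5.0: show which SATP/PSP claimed the LUN and the state of each path
esxcli storage nmp device list --device=naa.6xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
esxcli storage nmp path list --device=naa.6xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# ESX/ESXi 4.x: same information, older command namespace
esxcli nmp device list
esxcli nmp path list

The paths through the owning controller should report an active group state; the paths through the non-owning controller will show standby.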

Active/Active SANs put you in a completely different price range than an MD3600i.

Just FYI: if you check the MD3600i support matrix ( ), you'll see that Qlogic HBAs aren't supported. This means that if you have connectivity issues, there is no support from Dell on these HBAs. You can check the support matrix to see which options are available that offload iSCSI.

847 Posts

December 30th, 2011 11:00

In MDSM, change the preferred ownership on the affected LUNs to whatever the current ownership is.

If that stops it, there has to be something in the switch or switch config causing it.

If you switch one manually when everything is green, you can watch the paths switch in the VI Client, and it should be pretty seamless.

I did my last MD3220i cluster on ESXi 4.1 and it ran well for me, no HBAs though. We recently tried some Qlogic HBAs with our older MD3000i arrays and they were bad, all sorts of issues. So bad that the storage cloud server manufacturer customized their software initiators to work 100% properly with the MD3000i arrays for us.

4 Operator

9.3K Posts

December 29th, 2011 12:00

You mentioned 2 HBAs. What are you using for the iSCSI connectivity?

Normally you would see 4 paths per virtual disk: 2 active and 2 standby. By default the path selection policy is 'Fixed', I think (causing only 1 active path to show "(I/O)"), but I'm pretty sure Round Robin is supported. Once you change to Round Robin, both active paths will show (I/O).
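If it helps, this is roughly what that change looks like from the command line instead of the vSphere Client. A sketch only; the naa identifier is a placeholder for your virtual disk:

# ESXi 5.0: switch a device's path selection policy to Round Robin
esxcli storage nmp device set --device=naa.6xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx --psp=VMW_PSP_RR

# ESX/ESXi 4.x syntax for the same change
esxcli nmp device setpolicy --device naa.6xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx --psp VMW_PSP_RR

Even with Round Robin, only the two paths to the owning controller will show (I/O); the two paths to the other controller stay standby.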

185 Posts

December 29th, 2011 14:00

What are you using for the iSCSI connectivity?

2 x Qlogic DQL4060C per host, each on separate subnet

I now have 4 paths per target, with RR, 2 are Active (I/O) & 2 are Standby (after selecting multipath RR in RDM disk properties)

I expected this array to give me EACH path (FOUR per target) as Active (I/O)

Or am I dreaming?

Seb

185 Posts

December 30th, 2011 09:00

I would also feel good if the speed was as expected (which is definitely not the case yet)

Still need to figure out how to get rid of the "Virtual Disk Not On Preferred Path" error on the array (obviously I do NOT want to access ALL VDs via the same controller, as that is a waste of resources!)

I think it should NOT be that difficult...

Seb

847 Posts

December 30th, 2011 09:00

If two say active and two say standby, all should be fine with RR. But, with that said, ESXi 5.0 does have issues with a fair number of iSCSI SANs out there regarding this. I think there was an update that helped with at least some of the iSCSI SANs afflicted.

If two say active and two say standby, it takes an AVT event to make that switch: either both active paths became inaccessible, or the host issued the AVT command for the controller ownership to change from preferred.

As far as ESX goes, if the SAN is on the HCL and the HBA is on the HCL, I'd feel pretty good about it all.
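If you want to check from the host side whether any paths are currently down (which would point at the first cause above), a quick look from the shell (just a sketch; look for anything not reported as active):

# ESXi 5.0: list every path with its current state
esxcli storage core path list

# ESX/ESXi 4.x equivalent
esxcfg-mpath -l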

847 Posts

December 30th, 2011 11:00

PS: You should absolutely be able to have independent LUNs on each controller. It is how you load balance MD iSCSI SANs, and it works really well.
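If you do spread virtual disks across both controllers, one option is to make Round Robin the default policy for the plugin that claims the MD-series LUNs, so new virtual disks pick it up automatically instead of being changed one by one. A sketch, assuming the LUNs are claimed by VMW_SATP_LSI (confirm the SATP name in your device list output first):

# ESXi 5.0: make Round Robin the default PSP for devices claimed by this SATP
esxcli storage nmp satp set --satp=VMW_SATP_LSI --default-psp=VMW_PSP_RR

# ESX/ESXi 4.x equivalent
esxcli nmp satp setdefaultpsp --satp VMW_SATP_LSI --psp VMW_PSP_RR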

185 Posts

December 30th, 2011 12:00

Lucky you, having the manufacturer customize their software for you.

I cannot hope for that to happen.

Yes, changing the ownership of the VD works OK (really seamless).

The paths change in vCenter, or I can disable paths in vCenter & the other controller's paths kick in instantly (and when I re-enable them, they stay as Standby for this VD).

So this part works fine now.
