October 31st, 2014 10:00

VMware round-robin multipathing mixed with last-used on CLARiiON.

VMware people:

This is a VMware question from a Storage Guy.

We recently stood up a SQL Server VMware farm connected to a CLARiiON array.  Each ESX server has two HBAs to the SAN, so each host sees four "paths" to the array (two paths per HBA/fabric).

The SQL Server VMware farm hosts multiple SQL Server "clusters" on multiple ESX hosts running vSphere 5.1.  Microsoft Clustering on vSphere 5.1 requires that you use RDMs as cluster disks, and furthermore you must access them with "fixed" ESX multipathing, whatever that's called.  It has SCSI interlock implications.  If they were running vSphere 5.5, they could use VMFS files for Microsoft cluster disks.

So, they designed the entire VMware farm to use RDMs for all data volumes.  (C: and D: drives of individual instances are VMFS.)  And they ALL use "fixed" or "last-used" multipathing.  The end result is that they beat up on one path on one HBA to the storage array and don't use the other paths at all.

Can they set up their system so that the MSCS cluster volumes are RDMs with "fixed" or "last-used" multipathing, while the rest of the volumes (the data RDMs) use "round-robin"?  At a minimum that would improve performance.

    Stuart

November 3rd, 2014 01:00

I assume that the initiators are registered on the CLARiiON array with FailoverMode=4 (ALUA Mode)?

If so, yes: the cluster RDMs can be set to Fixed mode. To manually balance the load, ensure that half of the RDM devices have a preferred path going to SPA and the other half use SPB as their preferred path (coordinate this with the default owner of each LUN on the array side). The non-cluster RDMs can use RR.
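
For example, a minimal sketch on one host (the naa ID and the vmhba runtime path below are placeholders; pick a preferred path that lands on the LUN's default-owner SP):

    # Set the Fixed PSP on a cluster RDM (placeholder device ID)
    esxcli storage nmp device set --device naa.60060160aaaabbbbccccdddd00000001 --psp VMW_PSP_FIXED

    # Pin its preferred path (placeholder runtime path name)
    esxcli storage nmp psp fixed deviceconfig set --device naa.60060160aaaabbbbccccdddd00000001 --preferred-path vmhba1:C0:T0:L10

    # Verify the preferred path took effect
    esxcli storage nmp psp fixed deviceconfig get --device naa.60060160aaaabbbbccccdddd00000001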

To use the Round Robin policy for the rest of the regular data devices, you will need to set the NMP policy per device (using the naa identifier).

See https://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vcli.examples.doc_50%2Fcli_manage_storage.6.5.html for details of the command syntax. The same setup should be configured for all ESXi hosts in the cluster.
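
As a rough example of the syntax (placeholder naa ID again):

    # Switch a data RDM to Round Robin on this host
    esxcli storage nmp device set --device naa.60060160aaaabbbbccccdddd00000002 --psp VMW_PSP_RR

    # Confirm the active PSP and path list for the device
    esxcli storage nmp device list --device naa.60060160aaaabbbbccccdddd00000002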

If you have a large number of hosts/LUNs, a PowerCLI script would be the fastest way to make the above changes on multiple hosts. (Plenty of examples of PowerCLI scripts for setting NMP policy can be found by googling.)
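
A sketch of what such a script might look like (the cluster-RDM exclusion list is hypothetical; populate it with your own naa IDs so the MSCS LUNs stay on Fixed):

    # Hypothetical list of MSCS cluster RDMs that must remain on Fixed
    $clusterRdms = @("naa.60060160aaaabbbbccccdddd00000001")

    # Flip every other Fixed-policy disk to Round Robin across all hosts
    Get-VMHost | Get-ScsiLun -LunType disk |
        Where-Object { $_.MultipathPolicy -eq "Fixed" -and $clusterRdms -notcontains $_.CanonicalName } |
        Set-ScsiLun -MultipathPolicy "RoundRobin"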

Before you set off changing policies, have a read of the following article: http://cormachogan.com/2013/11/20/vsphere-5-5-storage-enhancements-part-3-mscs-updates/

ESXi 5.5 now supports the RR NMP policy for MSCS cluster LUNs, so if you are planning to migrate to ESXi 5.5, it's worth bearing in mind.

Hope this helps

Conor

November 3rd, 2014 01:00

Hi Stuart,

Yes, they could use different NMP settings on a per-device basis.

The one thing they should keep in mind is to use identical NMP settings for a device on every ESXi server that can access that device.
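
For instance, a quick way to spot-check that a device carries the same policy on every host (placeholder naa ID):

    # List the multipath policy for one device across all hosts
    Get-VMHost | Get-ScsiLun -CanonicalName naa.60060160aaaabbbbccccdddd00000002 |
        Select-Object VMHost, CanonicalName, MultipathPolicy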

By the way, EMC PowerPath/VE does support virtual MSCS with pRDMs, so it would avoid running into the limitations of native VMware NMP.

EMC® PowerPath/VE® for VMware vSphere

Regards,

Ralf
