May 16th, 2014 07:00

Adding more VMAX FE ports to VMware Cluster

What are the best practices for adding more VMAX FE ports to a VMware 5.1 cluster?

I have a VMware cluster with 10 hosts connected to 4 VMAX FE ports (1E0/2E0/3E0/4E0). I would like to add 4 more FE ports (1H0/2H0/3H0/4H0).

Do I mask all 8 FE ports to all 10 hosts, or split them 4 FE ports per 5 hosts?

2.2K Posts

May 16th, 2014 09:00


Beware that adding that many ports to your ESX hosts will increase the path count per volume and decrease the number of volumes each host can support.

So I would ask: what are you trying to address by adding more ports?

32 Posts

May 16th, 2014 10:00

There is high queue depth on the existing 4 FEs serving the VMware cluster. There are 26 LUNs (datastores) on the cluster.

One other note: this VMAX is the target of RecoverPoint LUN replication, and we have seen response times of 60 ms on the RecoverPoint destination LUNs and journal LUNs.

2.2K Posts

May 19th, 2014 11:00

Regarding the RP replication targets: are the 4 FAs in use also used by RP for the target replicas? If so, you may be better off moving the VMware cluster to different FAs: isolate your DR replication traffic from your production traffic by putting those workloads on separate port groups.

Also, what multipathing are you using on your ESX hosts: VMware NMP (Round Robin for VMAX) or PowerPath? If using NMP, what is your IO Operations Limit parameter set to (the default is 1000)?
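If you want to check that on a host, NMP settings can be inspected and changed with esxcli on ESXi 5.x. This is just a sketch; the `naa.` device ID below is a placeholder for one of your VMAX LUNs:

```shell
# List NMP devices and their current path selection policy
esxcli storage nmp device list

# Show the Round Robin settings (including the IO operations limit)
# for one device -- replace the naa ID with one of your own LUNs
esxcli storage nmp psp roundrobin deviceconfig get --device=naa.xxxxxxxxxxxxxxxx

# Lower the IO operations limit from the default 1000 to 1,
# switching paths after every IO (a common tuning for VMAX LUNs)
esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 \
    --device=naa.xxxxxxxxxxxxxxxx
```

Note the setting is per device, so you would loop over each VMAX LUN on each host.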

Splitting the 10-host cluster between different port groups may make cluster management more complicated than you want. Instead, consider one of the following options:

  • Add 4 new FAs to the existing Port Group
  • Create two separate masking views for the cluster with each masking view using a separate port group with 4 ports in it
    • Balance your LUNs across the two masking views
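If you went with the second option, the masking could be sketched roughly like this with SYMCLI. The SID, group, and view names here are all made up for illustration, and I'm assuming the existing initiator group can be shared by both views:

```shell
# Two port groups: one on the existing FE ports, one on the new ports
symaccess -sid 1234 create pg -name vmware_pg_A -dirport 1E:0,2E:0,3E:0,4E:0
symaccess -sid 1234 create pg -name vmware_pg_B -dirport 1H:0,2H:0,3H:0,4H:0

# Two masking views sharing the cluster's initiator group, each with
# its own storage group so the 26 LUNs can be balanced across them
symaccess -sid 1234 create view -name vmware_mv_A \
    -sg vmware_sg_A -pg vmware_pg_A -ig vmware_ig
symaccess -sid 1234 create view -name vmware_mv_B \
    -sg vmware_sg_B -pg vmware_pg_B -ig vmware_ig
```

Each host then sees half the LUNs on the E ports and half on the H ports, keeping 4 paths per LUN.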

The drawback to the first option is that you would now have 8 paths per LUN, and this reduces the overall number of LUNs per host (there is a 1024-paths-per-host limit). The benefit is that if you are using PowerPath you have more active paths across which the workload is balanced. If you are using NMP this may not bring a big improvement in performance, unless the benefit is simply that some of the existing FAs are seeing higher utilization from the competing workloads.
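The path-count math behind that limit is quick to check, taking 1024 as the per-host path ceiling:

```shell
# Max LUNs per host = per-host path limit / paths per LUN
echo $(( 1024 / 8 ))   # 8 paths per LUN (option 1)
echo $(( 1024 / 4 ))   # 4 paths per LUN (today)
```

So going from 4 to 8 paths per LUN halves the theoretical LUN ceiling from 256 to 128 per host. With only 26 LUNs in this cluster that is not an immediate problem, but it is worth knowing before you grow.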

The drawback to the second option is that you are adding complexity to the management of the cluster.

Generally, 4 paths per LUN on a VMware cluster connected to a VMAX is enough for decent performance. Before making a lot of changes to the cluster, I would look at what is causing contention on those FAs. Is it just the amount of IO coming from the VMware cluster itself? If so, then adding more FAs will help. Is it the competing workload from RP? Then, while you would be reducing the amount of IO on those highly utilized FAs, you may be better off just isolating the VMware cluster from the RP traffic. If the RP targets are DR datastores for the VMware cluster, then you could have a DR target masking view and a production masking view for the cluster.
