July 16th, 2013 11:00

Multi-Engine Cluster(ed) Host Zoning Best Practices

Hello Everyone-

We just upgraded one of our VPlex arrays to four engines.  We are discussing how we should zone our clustered hosts (e.g., SQL clusters) to the new configuration, and there are decent arguments for a few different layouts.

Our internal standard is four (4) paths per host.

When encapsulating a two node SQL Cluster, what is the best-practice for zoning each node?

The options we have come up with are:

1. Each host is isolated from the other on VPlex frontend ports across all four engines.

2. Each host shares the same frontend ports across all four engines.

3. Each host is isolated from the other across frontend ports on two of the engines, saving frontend ports on the other two engines for other use.

4. Each host shares the same frontend ports on two of the engines, saving frontend ports on the other two engines for other use.

I attached a .jpg for those of us who are more visually inclined.

Thoughts?  What have you implemented?

1 Attachment


August 6th, 2013 10:00

Overtow,

This discussion comes down to weighing availability against performance.  In addition, you have to consider the entire environment and not just focus on a single host cluster.

You also have to take into consideration requirements around NDU and system limits.

Keeping within these boundaries, performance considerations dictate using no more than four directors on two engines per host platform or host cluster node.  This has the performance benefit of the highest possibility of a cache read hit while adhering to NDU requirements.

Continuous availability, by contrast, dictates utilizing all directors across all engines, allowing for an N-1 architecture that can survive seven director failures out of the eight available without losing access to data.  Even with this consideration, I would not recommend routinely laying out hosts across all directors, though there may be some exceptions to this rule.

The continuous-availability layout reduces the possibility of a cache read hit from a maximum of 25% to 12.5%, while consuming additional IT nexuses against the system limit of 3,200.  Attaching a host platform with dual initiators to four directors instead of eight would allow a total of eight hundred host platforms to be attached, vs. only four hundred when connected to all eight directors.
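The arithmetic above can be sketched as follows. This is a back-of-the-envelope model, not an official sizing tool; it assumes one IT nexus per initiator-to-director path and that a dual-initiator host's initiators each see half of the attached directors (one initiator per fabric):

```python
# Back-of-the-envelope math for the limits described above.
# Assumptions: one IT nexus per initiator-to-director path, a system limit
# of 3,200 IT nexuses, and a dual-initiator host whose initiators each see
# half of the attached directors (one initiator per fabric).
IT_NEXUS_LIMIT = 3200
INITIATORS_PER_HOST = 2


def max_host_platforms(directors_attached: int) -> int:
    # IT nexuses per host = initiators x directors seen per initiator.
    nexuses_per_host = INITIATORS_PER_HOST * (directors_attached // 2)
    return IT_NEXUS_LIMIT // nexuses_per_host


def cache_read_hit_chance(directors_attached: int) -> float:
    # With I/O spread evenly, the chance a read lands on the director
    # already holding the cached block scales as 1/N directors.
    return 1 / directors_attached


print(max_host_platforms(4), cache_read_hit_chance(4))  # 800 0.25
print(max_host_platforms(8), cache_read_hit_chance(8))  # 400 0.125
```

This reproduces the figures in the post: four directors yields 800 attachable host platforms and a 25% maximum cache read-hit chance; eight directors halves both.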

Host clustering allows for any single node to be connected to four directors across two engines while another node of the cluster would be connected to a different four directors across the other two engines.  This would keep the total IT nexuses to a minimum while meeting the NDU requirement and would also allow for failover within the host clusters to handle multiple director failures before losing access to data.
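One way to sketch that layout, with each cluster node on a different engine pair and four directors per node. The engine and director labels here are illustrative only, not actual VPLEX object names:

```python
# Sketch: assign each node of a host cluster to a different engine pair,
# four directors per node, per the NDU requirement described above.
# Engine/director labels are illustrative, not real VPLEX object names.
ENGINE_PAIRS = [("engine-1", "engine-2"), ("engine-3", "engine-4")]


def directors_for_node(node_index: int) -> list:
    # Alternate cluster nodes across engine pairs; two directors (A and B)
    # per engine gives four directors per node.
    pair = ENGINE_PAIRS[node_index % len(ENGINE_PAIRS)]
    return [f"{engine}-director-{d}" for engine in pair for d in ("A", "B")]


print(directors_for_node(0))  # node 0 on engines 1-2
print(directors_for_node(1))  # node 1 on engines 3-4
```

Each node gets four directors across two engines (meeting NDU), and the two nodes' director sets do not overlap, so a multi-director failure on one engine pair can be handled by failing the cluster over to the other node.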

Bottom line is that any single node must meet the NDU requirements of four directors across two engines.

For port considerations, each frontend port supports four hundred host initiators connected at the same time.  I can't imagine this being much of a consideration for designing a simple host cluster.  I would be more apt to use this consideration in the bigger picture when attaching hundreds of hosts, to help isolate the workload.  Think about the 80/20 rule, where 20% of the hosts are doing 80% of the workload.  Isolate these different groupings such that they don't impact the heavy hitters.
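A trivial sanity check against that per-port limit might look like the following; the port names are hypothetical placeholders:

```python
# Sketch: flag frontend ports whose logged-in host initiator count exceeds
# the stated limit of 400 initiators per port. Port names are illustrative.
PORT_INITIATOR_LIMIT = 400


def ports_over_limit(initiators_per_port: dict) -> list:
    # Return the frontend ports over the 400-initiator fan-in limit.
    return [port for port, count in initiators_per_port.items()
            if count > PORT_INITIATOR_LIMIT]


print(ports_over_limit({"A0-FC00": 120, "B0-FC01": 410}))  # ['B0-FC01']
```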

--Mike


August 9th, 2013 20:00

If you are using multiple SQL instances and running them on separate nodes, I would use the top design in your diagram, as it allows workload isolation at the port and engine level.  Due to the nature of MSCS SQL clustering, you would not need to be concerned with the impact to cache, since no two nodes would ever request the same block of data except after a failover of the DB services.  In that case there would be a period of time during which cache would need to be populated on the new directors in use.

Otherwise, if it is one DB instance, I would probably go with your bottom design, as it fits our typical best practices and allows you to rotate between engine pairs as you add additional hosts, balancing system usage and connectivity.  You may consider separating the nodes across different ports in this design.  If you are not concerned with the application overrunning a director, then this design with port isolation, using one director from each of two separate engines, would suffice even for the multiple-DB-instance workloads; i.e., you do not get director isolation, but you do get port isolation for the workload.

Mike correct me if I am wrong, but the IT nexus consumption would be the same in all scenarios outlined, so that really should not be a factor either. 
