Configuring Microsoft Cluster Sets on Dell PowerEdge

Summary: Cluster Sets on Windows Server 2019


Article Content


Symptoms

Cluster Sets, introduced in Windows Server 2019 (WS19), improves SDDC (Software-Defined Data Center) flexibility and resilience. A Cluster Set is a technology that allows administrators to combine multiple Windows Server 2019 clusters under a single umbrella.

Existing failover clusters can accommodate a maximum of 64 nodes. Cluster Sets technology combines multiple WS19 clusters in a single domain, with each of those clusters supporting up to 64 WS19 nodes. A Cluster Set is also more resilient than a single failover cluster. For example, a 4-node failover cluster can survive a 2-node failure. If the same four nodes are instead divided into two 2-node clusters joined in a Cluster Set, the set can survive the failure of one entire cluster plus one node in the remaining cluster; that is, three node failures altogether.

For an overview of the Cluster Sets feature in Windows Server 2019, refer to Microsoft's "Introduction to cluster sets in Windows Server 2019" and "Cluster sets" documentation. Cluster Sets gains its flexibility from an underlying technology called the Infrastructure Scale-Out File Server (SOFS), which also eases cross-cluster migration of VMs within the Cluster Set.

Lab Setup for Deploying Cluster Set on PowerEdge

Servers used: two PowerEdge R730xd servers and two PowerEdge R740xd servers

Created the first cluster from the two R730xd servers and named it S2D13G54 (Member Cluster 1).

Created the second cluster from the two R740xd servers and named it S2D14G54 (Member Cluster 2).

Created two CSV (Cluster Shared Volume) volumes on each of the clusters created above.

Created a VM 'vm1' on Member Cluster 1 and a VM 'vm2' on Member Cluster 2, then combined these two VMs to create a Management Cluster (named mgClus54) for the Cluster Set. No shared storage is required when creating the Management Cluster.
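The Management Cluster itself can be built with the standard New-Cluster cmdlet. The following is a sketch using this lab's names; the -AdministrativeAccessPoint Dns option is an assumption, not necessarily the setting used here:

```powershell
# Form the Management Cluster from the two VMs; no shared storage is required
New-Cluster -Name mgClus54 -Node vm1, vm2 -AdministrativeAccessPoint Dns
```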

Installed the File Services role on each node of Member Cluster 1, Member Cluster 2, and the Management Cluster:

Install-WindowsFeature File-Services -IncludeAllSubFeature -IncludeManagementTools -Restart
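Since the role must be present on every node across both member clusters and the Management Cluster, one way to script the installation remotely is sketched below; the node names in $nodes are illustrative, not the lab's actual hostnames:

```powershell
# Install File Services on every node remotely (node names are hypothetical)
$nodes = 'R730-1','R730-2','R740-1','R740-2','vm1','vm2'
Invoke-Command -ComputerName $nodes -ScriptBlock {
    Install-WindowsFeature File-Services -IncludeAllSubFeature -IncludeManagementTools -Restart
}
```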

Created an Infrastructure Scale-Out File Server (SOFS) on Member Cluster 1, Member Cluster 2, and the Management Cluster:

Add-ClusterScaleOutFileServerRole -Name <Name of the Infrastructure SOFS> -Infrastructure
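For example, on Member Cluster 1 the role could be created as follows; the SOFS name is hypothetical, and the -Cluster parameter is one way to target the cluster remotely:

```powershell
# Create the Infrastructure SOFS role on Member Cluster 1 (SOFS name is illustrative)
Add-ClusterScaleOutFileServerRole -Name 'SOFS-S2D13G54' -Infrastructure -Cluster S2D13G54
```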

 


Created a Cluster Set named CLUSSET54:

New-ClusterSet -Name CLUSSET54 -NamespaceRoot <Management Cluster SOFS Name> -CimSession <CIM session to Management Cluster>
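For illustration, assuming the Management Cluster's Infrastructure SOFS was named SOFS-MGCLUS54 (a hypothetical name), the call might look like this:

```powershell
# CIM session to the Management Cluster
$cs = New-CimSession -ComputerName mgClus54
New-ClusterSet -Name CLUSSET54 -NamespaceRoot SOFS-MGCLUS54 -CimSession $cs
```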

 

Then added the created clusters S2D14G54 and S2D13G54 to the Cluster Set:

Add-ClusterSetMember -ClusterName S2D14G54 -CimSession <Cim Session to ClusterSet> -InfraSOFSName <Name of SOFS created on S2D14G54 cluster>

 

Add-ClusterSetMember -ClusterName S2D13G54 -CimSession <Cim Session to ClusterSet> -InfraSOFSName <Name of SOFS created on S2D13G54 cluster>



 

Then I deployed two VMs, V213G and V214G, on Member Cluster 1 and Member Cluster 2 respectively, and registered the VMs with the Cluster Set:

Get-ClusterSetMember -ClusterName <Cluster Name> | Register-ClusterSetVM -VMName <VM Name>
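With this lab's names, the registration commands would look like the following (assuming the VM names match the Hyper-V virtual machine names exactly):

```powershell
Get-ClusterSetMember -ClusterName S2D13G54 | Register-ClusterSetVM -VMName V213G
Get-ClusterSetMember -ClusterName S2D14G54 | Register-ClusterSetVM -VMName V214G
```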

 
To test live migration across clusters, I migrated the VM "V213G" to Member Cluster 2. Before performing migration across clusters, the following points need to be considered:

    1. In the VM settings, Processor Compatibility should be enabled.
    2. Configure Kerberos constrained delegation (KCD) between all pairs of cross-cluster nodes:
        a. Constrained delegation guidance from the Microsoft Hyper-V product team is useful in setting this up.
        b. Configure the cross-cluster virtual machine live migration authentication type to Kerberos on each node in the Cluster Set:

# $hosts is an array holding the names of all nodes in the Cluster Set
foreach($h in $hosts){ Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos -ComputerName $h }

    3. Add the management cluster to the local administrators group on each node in the Cluster Set:

foreach($h in $hosts){ Invoke-Command -ComputerName $h -ScriptBlock {Net localgroup administrators /add <management_cluster_name>$} }
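Once the Kerberos configuration is in place, the cross-cluster live migration itself can be driven with Move-ClusterSetVM. The target node name below is hypothetical:

```powershell
# Live-migrate V213G to a node of Member Cluster 2 (node name is illustrative)
$cs = New-CimSession -ComputerName CLUSSET54
Move-ClusterSetVM -CimSession $cs -VMName 'V213G' -Node 'R740-Node1'
```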


 

To perform any maintenance activity on a cluster in a Cluster Set, first migrate all the VMs hosted on that cluster to the other clusters in the Cluster Set, and then remove the cluster from the Cluster Set:

 

Remove-ClusterSetMember -ClusterName <ClusterName> -CimSession <Session created for ClusterSet>
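Putting the maintenance steps together, a sketch using this lab's names (the target node name is hypothetical, and it assumes the remaining cluster has capacity for the migrated VMs):

```powershell
$cs = New-CimSession -ComputerName CLUSSET54
# Live-migrate each registered VM off the cluster being serviced
Move-ClusterSetVM -CimSession $cs -VMName 'V213G' -Node 'R740-Node1'
# Then remove the drained cluster from the Cluster Set
Remove-ClusterSetMember -ClusterName S2D13G54 -CimSession $cs
```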

 

After completing the maintenance activity, add the cluster back to the Cluster Set.

 

In case of an unexpected failure of a member cluster, the Cluster Set does not handle failover automatically. In Windows Server 2019, only manual movement of resources from one cluster to another is supported, although automatic VM failover continues to function within the scope of a single member cluster.


This blog was written by Dell engineer AS Nithya Priya.


Article Properties


Affected Product

PowerEdge, Microsoft Windows Server 2019

Last Published Date

04 Oct 2023

Version

4

Article Type

Solution