Please submit all your questions about:
- Cluster Enabler multi-site clusters (SRDF, MirrorView, RecoverPoint)
- any other questions about Windows Server Failover Clusters on EMC storage
Great session so far.
My question is about SRDF/CE with older DMX and newer VMAX arrays. When expanding a disk that is already replicated with SRDF, I think the question of active/passive comes in here, especially with the DMX (if I remember correctly): the cluster needs to be brought down on one node, then the disk expanded (and the BCVs etc. if it is a striped meta) and re-presented, during which the cluster is failed over, resources brought up, and so on.
Is the setup still considered active/passive, since a node needs to be brought down?
I am also curious whether this is still the case with the VMAX (it has been some time since I worked on SRDF/CE on a VMAX) and whether there have been any improvements.
A very good question.
Extending a volume in an SRDF/CE cluster is quite a simple process, and the process is no different whether you are running on a VMAX or the older DMX.
The process is described in https://support.emc.com/kb/15140
After the above steps, the cluster should pick up the extended device, after which you need to extend the partition and the NTFS volume on the host.
If Windows does not pick up the larger size of the devices, the cluster nodes might require a reboot.
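As a rough sketch of that host-side extension step (the volume number and script file name here are illustrative assumptions, not taken from the KB article), on a Windows 2008-era node that currently owns the disk you could drive diskpart from PowerShell:

```powershell
# Illustrative sketch: rescan the bus and extend the partition/NTFS volume
# into the newly available space. Run elevated on the node owning the disk.
# Volume number 3 is an assumption; check the 'list volume' output first.
@"
rescan
list volume
select volume 3
extend
"@ | Set-Content -Path extend.txt

diskpart /s extend.txt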
Let me detail one of the many questions which come in from time to time. Many times I have been asked to provide an explanation for the statement:
"Cluster Enabler failed over the cluster groups"
Let's start with a statement: Cluster Enabler will not make any decisions (with one exception), will not cause the cluster groups to fail over (with one exception), and will not actively fail over any cluster groups itself.
Apart from those exceptions, which I will come back to further down, Cluster Enabler is merely a resource in the cluster, and it follows the groups and the cluster wherever they want to "go".
Microsoft Windows Server Failover Cluster is in control: all the normal rules for cluster failover are honoured, and the cluster determines the failover based on its own configured error-detection properties.
The cluster still behaves as a normal cluster and performs its own error detection, using the usual three methods: the periodic LooksAlive resource check, the more thorough IsAlive resource check, and the inter-node heartbeat.
Based on the results of those error-detection routines, it is the cluster that determines whether a group needs to fail over and where it should move to.
As the cluster is a highly customisable engine in its own right, many factors influence its failover decision; the most common ones are the preferred and possible owners of a group, the failover threshold and failover period, and the failback settings.
All of this is done by the cluster: the error detection, the decision to fail over (or not), and the choice of where to move to.
Cluster Enabler does not come into play in this logic; it is merely a resource in the cluster.
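The failover behaviour described here is driven by standard cluster properties, which you can inspect and tune directly. A hedged PowerShell sketch (the group and node names are illustrative, not from this thread), using the built-in FailoverClusters module:

```powershell
Import-Module FailoverClusters

# Illustrative group name; these properties belong to the cluster itself,
# not to Cluster Enabler.
$group = Get-ClusterGroup -Name "SQL Group"

# How many failures within FailoverPeriod (hours) before the group is left failed
$group.FailoverThreshold = 2
$group.FailoverPeriod    = 6

# Failback behaviour (0 = prevent failback, 1 = allow failback)
$group.AutoFailbackType  = 0

# Restrict and order the nodes the group may move to (preferred owners)
Set-ClusterOwnerNode -Group "SQL Group" -Owners "NodeA","NodeB"
```

Whatever values you set here, it is the cluster that evaluates them and moves the group; Cluster Enabler simply follows along as a resource.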
I hope this clears up the situation of "Cluster Enabler failed over the group"; the answer is that the cluster detected an error, and the cluster failed over the group. That should give you a good start when looking into those situations.
I mentioned two exceptions:
And there you have it: failover in a multi-site cluster using Cluster Enabler.
It is basically that, and not much more than any other cluster!
All the normal cluster logic applies.
Please let me know if you have any questions
Great discussion, everyone! Help us promote this debate by sharing the following tweet:
Join the discussion NOW! Ask the Expert: EMC Cluster Enabler multi-site clustering. http://bit.ly/1jLn8wq 4/9 - 5/2 #EMCATE
Need some urgent help with a SAN migration for Windows 2008 clusters that use the EMC Cluster Enabler plug-in. We're migrating SAN disks for W2k8 clusters (the clusters use a file share witness) from EMC CX4 to VNX using SAN Copy. The EMC components we're running on the cluster nodes are listed below.
EMC cluster enabler mirrorview plug-in 184.108.40.206
EMC powerpath 5.5
EMC solutions enabler 7.3.0
EMC cluster enabler base component 4.1.0
There are 2 sites connected by a stretched VLAN; both sites are in the same location. It is a 2-node W2k8 cluster with 1 node in each site, and each site has a CX4 and a VNX array, using MirrorView/S replication.
Would appreciate if you could help with the SAN disk migration procedure - more from the EMC cluster enabler perspective. Thank you.
Here are the high-level steps we plan to follow. For the EMC Cluster Enabler part, I think we only need to add the new LUN to the existing resource group for the service (the cluster resource group for DHCP, for example) in the Cluster Enabler Manager console, delete the old LUN from Enabler Manager, and make sure the EMC Cluster Enabler resource in the cluster management console is added as a dependency for the new disk.
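For the cluster-side part of those steps, a hedged PowerShell sketch on Windows 2008 R2 with the FailoverClusters module (the disk, group, and CE resource names are illustrative assumptions; the exact CE resource name will differ in your cluster):

```powershell
Import-Module FailoverClusters

# Bring the new (SAN Copy target) LUN into the cluster as a disk resource
Get-ClusterAvailableDisk | Add-ClusterDisk

# Move the new disk resource into the service's group ("DHCP" is illustrative)
Move-ClusterResource -Name "Cluster Disk 2" -Group "DHCP"

# Make the new disk depend on the EMC Cluster Enabler resource, as the old
# disk did, so CE is consulted before the disk is brought online.
# "EMC_CE_Resource" is a placeholder for your actual CE resource name.
Add-ClusterResourceDependency -Resource "Cluster Disk 2" -Provider "EMC_CE_Resource"
```

This only sketches the cluster dependency change; the Cluster Enabler Manager steps (adding the new LUN to the CE group and removing the old one) would still be done in the CE console as described above.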
I am new to SRDF/CE, and we have Cluster Enabler in our environment. Two weeks back we had DR testing, and CE did not fail over a few of the devices; they stayed write-disabled.
After checking, we found that those devices had not been added to the cluster group. We now need to add those devices to the cluster group and would like to know the procedure. I was given the KB article below, but it is slightly confusing.
Since they are existing devices and the mount points are already created, I want to know if we can add the disks without unmounting the drives. A critical application is running, so we cannot get a long outage window. Please help me with this.
We have Windows 2008 servers.
The article I referred to was: