October 21st, 2013 14:00

SRDF/Cluster enabler 4-node upgrade procedure?

Hello,

I'm replacing two DMXs (with SRDF) with two VMAX10Ks (also SRDF).

The arrays are used by a 4-node "SRDF/Cluster Enabler" cluster.

In order to support the VMAX10K microcode 5876, I have to upgrade the SRDF/CE nodes to Solutions Enabler 7.6 and SRDF/CE 4.1.4.

The Product Guide mentions the following first step in the upgrade procedure:

"Step1: Move all cluster groups to node A"

I guess this should be "Move all cluster groups to SITE A (R1 site)"? Because each site has 2 nodes, and one node is not powerful enough to run all applications.

thanks for any advice,

Francis

January 31st, 2014 02:00

Hi Tomasz,

To upgrade your Cluster Enabler and Solutions Enabler software, here is the process.
For completeness I have included a short version and a long version.
The long version incorporates the best practices.

Preparation:
- download and read the "SRDF Cluster Enabler Plug In Product Guide": https://support.emc.com/docu44842_SRDF-Cluster-Enabler-Plug-in-Product-Guide.pdf?language=en_US
- download and read the "Cluster Enabler Base Component Release Notes": https://support.emc.com/docu44840_Cluster-Enabler-Base-Component-Release-Notes.pdf?language=en_US
- download and read the "SRDF Cluster Enabler Plug In Release Notes": https://support.emc.com/docu44841_SRDF-Cluster-Enabler-Plug-in-Release-Notes.pdf?language=en_US

Short Version:

- Download the Solutions Enabler software
- Download the Cluster Enabler software

- First Upgrade Solutions Enabler, using KB: https://support.emc.com/kb/7735

To upgrade Solutions Enabler on a SRDF/CE cluster, follow these steps:

- Use Cluster Administrator to move all cluster resources to Node1 (if necessary).
- On Node2, stop the cluster service and then stop the SRDF/CE (or Cluster Enabler) service. Also stop any other services that are related to Solutions Enabler.
- On Node2, use Task Manager to kill the wmiprvse.exe process running under the SYSTEM account.
- On Node2, install the new version of Solutions Enabler.
- On Node2, open a command prompt and issue a symcfg discover command to update the local SymAPI database.
- On Node2, open the SRDF/CE GUI and issue a discover by right-clicking on SRDF/CE icon and choosing discover.
- On Node2, start the SRDF/CE service, then start the cluster service.
- Use Cluadmin to move all cluster resources to Node2.
- Repeat steps 2 through 7 on Node1.
- Test moving all groups between the nodes.
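As a sketch, the per-node part of the steps above could be scripted from an elevated command prompt. Note the service names "EMC Cluster Enabler" and storapid here are assumptions; list the actual service names on your nodes first.

```shell
rem Run on the passive node (Node2) from an elevated command prompt.
rem Service names below are assumptions -- list yours first with:
rem   sc query state= all | findstr /i "clus emc stor"

net stop ClusSvc
net stop "EMC Cluster Enabler"
net stop storapid

rem Kill the WMI provider host running as SYSTEM so no SE files stay locked
taskkill /F /IM wmiprvse.exe

rem ... install the new Solutions Enabler version here ...

rem Refresh the local SymAPI database, then bring the node back
symcfg discover
net start "EMC Cluster Enabler"
net start ClusSvc
```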

- Follow the Upgrade Process "Upgrading the Base Component along with the plug-ins" as detailed in the "SRDF Cluster Enabler Plug In Product Guide" on page 121

1. Move all cluster groups to node A.
2. Perform the following actions on all other cluster nodes:
a. Copy the setup.exe, EMC_CE_Base.msi, and .msi files for the plug-ins to the same local folder on your host.
b. Click setup.exe to launch the installation.
c. A Plug-in Selection dialog box displays the available plug-in modules. Select your desired plug-in modules to be installed.
d. Complete the steps in the InstallShield wizard, being sure to select the Upgrade path.
e. When prompted to restart your system, click Yes.
f. After the node has finished rebooting, log onto the node. Using the Cluster Manager, verify that the cluster service is up.
3. After all other nodes are up, move all groups from node A to one of the other nodes. If using a shared quorum cluster model, verify that the quorum group comes online on the other node before continuing.
4. Repeat step 2 on node A.
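On MSCS clusters of that era, the group moves in steps 1 and 3 can also be done from the command line with cluster.exe; the group and node names below are placeholders.

```shell
rem Show all cluster groups and their current owner nodes
cluster group

rem Step 1: move each group to node A ("App Group" and NODEA are placeholders)
cluster group "App Group" /moveto:NODEA
cluster group "Cluster Group" /moveto:NODEA

rem Step 2f: after each node reboots, confirm its cluster service is up
cluster node NODEB /status
```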

- Test your installation and cluster operations

Long Version:

Additional Preparation:
- Look at KB: https://support.emc.com/kb/91194
- Download the Solutions Enabler software
- Open a Service Request with EMC Customer service to get the latest Cluster Enabler software and mention KB 91194
-- for Windows 2003: Base:v4.1.3.15 PlugIn:v4.1.3.19
-- for all other (later) windows: Base:v4.1.4.23 PlugIn:v4.1.4.19

Upgrade Process:
- follow/check the "PREPARE STORAGE" section of KB 91194
- follow/check the "PREPARE OPERATING SYSTEM" section of KB 91194

- First Upgrade Solutions Enabler, using KB: https://support.emc.com/kb/7735

To upgrade Solutions Enabler on a SRDF/CE cluster, follow these steps:
- Use Cluster Administrator to move all cluster resources to Node1 (if necessary).
- On Node2, stop the cluster service and then stop the SRDF/CE (or Cluster Enabler) service. Also stop any other services that are related to Solutions Enabler.
- On Node2, use Task Manager to kill the wmiprvse.exe process running under the SYSTEM account.
- On Node2, install the new version of Solutions Enabler.
- On Node2, open a command prompt and issue a symcfg discover command to update the local SymAPI database.
- On Node2, open the SRDF/CE GUI and issue a discover by right-clicking on SRDF/CE icon and choosing discover.
- On Node2, start the SRDF/CE service, then start the cluster service.
- Use Cluadmin to move all cluster resources to Node2.
- Repeat steps 2 through 7 on Node1.
- Test moving all groups between the nodes.

- Follow the Upgrade Process "Upgrading the Base Component along with the plug-ins" as detailed in the "SRDF Cluster Enabler Plug In Product Guide" on page 121

1. Move all cluster groups to node A.
2. Perform the following actions on all other cluster nodes:
a. Copy the setup.exe, EMC_CE_Base.msi, and .msi files for the plug-ins to the same local folder on your host.
b. Click setup.exe to launch the installation.
c. A Plug-in Selection dialog box displays the available plug-in modules. Select your desired plug-in modules to be installed.
d. Complete the steps in the InstallShield wizard, being sure to select the Upgrade path.
e. When prompted to restart your system, click Yes.
f. After the node has finished rebooting, log onto the node. Using the Cluster Manager, verify that the cluster service is up.
3. After all other nodes are up, move all groups from node A to one of the other nodes. If using a shared quorum cluster model, verify that the quorum group comes online on the other node before continuing.
4. Repeat step 2 on node A.

- Test your installation and cluster operations
- follow/check the "FINE TUNE" section of KB 91194
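Once both versions are upgraded, a quick sanity check on each node could look like this (standard symcli commands; output details vary by environment):

```shell
rem Print the installed Solutions Enabler / SYMCLI version
symcli

rem Rediscover the arrays and confirm they are visible
symcfg discover
symcfg list

rem Confirm the SRDF device pairs are in the expected state
symrdf list pd
```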


HTH,
Edwin

If this answer was useful, please "Mark as Answer" and "Like"


October 23rd, 2013 07:00

For most procedures, you can substitute "Node X" with "a node at Site X".

When you are migrating storage and upgrading versions, my recommendation would be:

1. Deconfigure CE

2. Uninstall SRDF/CE plugin & base and reboot.

3. Upgrade Solutions Enabler (if needed)

4. Install SRDF/CE base & plugin (reboot again)

5. Perform storage migration

6. Verify everything is OK on the cluster. Consider performing a "manual" DR failover to verify that everything is working properly.

7. Run the Configure CE wizard to add the Cluster Enabler bits back into the cluster

8. Perform full failover testing
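For the "manual" DR failover in step 6, the SRDF side of such a test might be sketched like this (the device group name CEgroup is a placeholder; applications should be offline during the test):

```shell
rem Verify the RDF pairs are fully synchronized before testing
symrdf -g CEgroup verify -synchronized

rem Fail over to the R2 side, check the state, then fail back
symrdf -g CEgroup failover
symrdf -g CEgroup query
symrdf -g CEgroup failback
```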

I hope this helps. If you need more details, feel free to open a service request and we can discuss this process further.


October 24th, 2013 04:00

Hello John,

Thanks for your reply.

I managed to do the upgrade with some procedures I found in the EMC knowledge base and with the procedure from the SRDF/CE Product Guide, without unconfiguring SRDF/CE:

 

How to upgrade Solutions Enabler in a SRDF/CE for MSCS cluster (KB 7735)

1. Use Cluster Administrator to move all cluster resources to Node1 (if necessary). Also move quorum disk to Node1.

2. On Node2, stop cluster service and then stop the SRDF/CE (or Cluster Enabler) service. Also,
stop any other services that are related to Solutions Enabler (like storapid). 

3. On Node2, install the new version of Solutions Enabler.

4. On Node2, open a command prompt and issue a symcfg discover command to update the local SymAPI
database.

5. On Node2, start the SRDF/CE service, then start the cluster service.

6. Use Cluadmin to move all cluster resources to Node2.

7. Repeat steps 2 through 6 on Node1.

Then Upgrade SRDF/CE:

Use ‘mstsc /admin /v: ’ to connect to cluster nodes

1. Move all cluster groups to site A, and ALSO the quorum disk (stop/start the cluster service on the node that owns the quorum to force it off. How do you control the quorum location in a 4-node cluster?)

2. Perform the following actions on all other cluster nodes:

a. Copy the setup.exe, EMC_CE_Base.msi, and .msi files for the plug-ins to the same local folder on your host.

b. Click setup.exe to launch the installation.

c. A Plug-in Selection dialog box displays the available plug-in modules. Select your desired plug-in modules to be installed.

d. Complete the steps in the InstallShield wizard, being sure to select the Upgrade path.

e. When prompted to restart your system, click Yes.

f. After the node has finished rebooting, log onto the node. Using the Cluster Manager verify that the cluster service is up.

3. After all other nodes are up, move all groups from node A to one of the other nodes. If using a shared quorum cluster model, verify that the quorum group comes online on the other node before continuing.
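On the quorum question in step 1: rather than stopping and starting the cluster service, the quorum normally follows the group that contains the quorum resource (usually "Cluster Group"), so it can be moved directly with cluster.exe; the node name below is a placeholder.

```shell
rem Find which group owns the quorum resource
cluster resource

rem Move the quorum group to the desired node at site A
cluster group "Cluster Group" /moveto:NODEA1
```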

               


January 29th, 2014 04:00

Dear John,

We are preparing an upgrade on 2-node clusters due to a VMAX upgrade to code 76.

Could you provide the proper way of upgrading to SRDF/CE 4.1 & SE 7.5?

In our current environment we are using SRDF/CE 4.0.1.14 & SE version V7.2-1108.

Regards,

Tomasz K.


April 6th, 2014 05:00


Hello Edwin,

Your procedure was correct and the upgrade went very well.

Best regards,

Tomasz K.


April 6th, 2014 19:00

Nobody knows SRDF/CE better than Edwin & John


January 15th, 2017 16:00

Hi Edwin,

do you have a step-by-step guide for installing and configuring SRDF/CE on a 2-node failover cluster?

thanks in advance.

January 15th, 2017 23:00

Hi russel242,

I would suggest you read the Product Guide.

If you have any questions about the guide, let us know.

Rgds,

Edwin.


January 16th, 2017 00:00

Hi,

adding a support article for the online Cluster Enabler upgrade on 2 nodes:

support.emc.com/kb/466576

Regards,

Oleh
