
SAN Copy migration of a Red Hat Cluster that's using GFS

April 11th, 2014 18:00

I am doing several data migrations using SAN Copy.  On the Red Hat Linux systems (mostly RHEL 5.8), I am using the following procedure (a command-level sketch follows the list):

1. Initial SAN Copy

2. unmount SAN filesystems

3. vgchange -an

4. vgexport

5. Comment out FS in /etc/fstab

6. Shutdown host

7. Incremental SAN Copy

8. Boot host

9. pvscan

10. vgimport

11. Uncomment /etc/fstab

12. Mount filesystem
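In command terms, steps 2-4 and 9-12 look roughly like this (the VG name "datavg" and mount point "/u01" are placeholders for illustration):

    umount /u01                  # step 2: unmount the SAN filesystem
    vgchange -an datavg          # step 3: deactivate the volume group
    vgexport datavg              # step 4: export it so it's cleanly detached

    # ... comment out the fstab entry, shut the host down, run the
    # incremental SAN Copy, boot the host ...

    pvscan                       # step 9: rescan so LVM sees the copied LUNs
    vgimport datavg              # step 10: re-import the volume group
    vgchange -ay datavg          # reactivate it before mounting
    # uncomment the fstab entry (step 11), then:
    mount /u01                   # step 12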

This works fine on a single node.  However, I'm wondering whether it would work in a clustered environment.  My thinking is to shut down the applications and then the cluster between steps 1 and 2 above, then start the cluster back up and bring up the apps after step 10.

Any thoughts on this?

Thanks


April 18th, 2014 07:00

Hi,

Yes, you have to shut down the application first, then the cluster, and then follow steps 3 through 10. Once done, you can power the hosts back up one by one.
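For reference, on RHEL 5 with Red Hat Cluster Suite the stop order on each node would be roughly the following (assuming the stock init script names; substitute gfs2 for gfs if that's what the cluster uses):

    # stop the application first, then:
    service rgmanager stop   # stops cluster-managed services/apps
    service gfs stop         # unmounts GFS cluster filesystems
    service clvmd stop       # deactivates clustered volume groups
    service cman stop        # node leaves the cluster
    # reverse the order to bring everything back up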

Jithin


April 22nd, 2014 14:00

Well, I answered my own question, but I thought I'd post the answer here.  Part of my procedure was taken from a Red Hat mailing-list post: http://www.redhat.com/archives/linux-cluster/2011-June/msg00012.html.  All of this is very dependent on which version of Red Hat Cluster Suite you are using.


One concern was the quorum drive, and whether the Linux cluster would recognize the SAN Copy version of that device.  So I decided to create a new quorum drive on the new array rather than migrate it.  It's possible the SAN Copy version would have been recognized, but I didn't want to leave that to chance.

The only filesystems in this cluster were the GFS2 cluster filesystems, which are NOT mounted via /etc/fstab.  Performing the vgexport/vgimport before shutting down the hosts wasn't necessary, or really even possible.

The cluster deactivates the volume groups when you shut down all the cluster services, so you can't vgexport the volume groups manually.  As it turns out, at least with LVM2, LVM recognizes the SAN Copy volumes as the original volumes just fine.
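A quick way to confirm that LVM sees the copies as the originals: since SAN Copy is a block-level copy, the PV and VG metadata (including the UUIDs) travels with it, so something like this should show the original names on the new LUNs:

    pvscan                              # rescan for physical volumes
    pvs -o pv_name,pv_uuid,vg_name      # PV UUIDs should match the source LUNs
    vgs                                 # the original VG names should appear
    lvs                                 # ...along with their logical volumes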

So the procedure that worked for me was (a sketch of the quorum-disk commands follows the list):

1. Initial SAN Copy (all volumes except for the Quorum)

2. Create new device for the quorum on target array, present it to cluster hosts

3. Shut down cluster apps and all cluster services

4. Mark new device as quorum device using mkqdisk (mkqdisk -L to verify)

5. Edit cluster.conf to point to new quorum device

6. Distribute cluster.conf to all nodes

7. Bring up the cluster and test; clustat will show the new quorum device

8. Shut down the cluster and shut down all nodes

9. Perform final SAN Copy

10. Remove hosts from old storage groups

11. Remove zones to old storage

12. Add hosts to new storage group for cluster

13. Boot all nodes

14. Test
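For the quorum-disk steps (2 and 4 through 7), the commands were along these lines (the device path and label below are placeholders):

    mkqdisk -c /dev/mapper/new_qdisk -l myqdisk   # step 4: label the new LUN
    mkqdisk -L                                    # verify the label is visible

    # step 5: point cluster.conf at the new label, e.g.
    #   <quorumd interval="1" tko="10" votes="1" label="myqdisk"/>
    # and bump config_version as usual

    ccs_tool update /etc/cluster/cluster.conf     # step 6: push to all nodes

    clustat                                       # step 7: after startup, the
                                                  # new quorum disk is listed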
