January 28th, 2012 10:00

Looking for a clean cutover for migrating ESX with SRDF

Hello champions,

We are looking for a procedure for a clean cutover of an ESX host being migrated with SRDF.

We were migrating an ESX OS with SRDF in an R21 configuration and ran into two issues:

  1. In an 8-node cluster, only 1 node saw the data and came up at the new location. The original production ESX server was up; however, the VMs were down for the R1 split.
    1. The workaround was to use vMotion and then have a node see the new device. We had to do this for each node until they all saw their data.
  2. None of the nodes in the cluster saw their new devices.
    1. We ran symcfg -sid 574 show pool FC -T2 -P1 p-Thin -gb -detail for each array and looked at the line for our R1 -> R21 -> R2 devices (see the command sketch below).
    2. We did not see the same number of gigabytes on the target (R2) as we did on the R1 or the R21 devices.
    3. We re-replicated the R21 to the R2, then re-replicated the R1 to the R21 and the R21 to the R2 again; the "show pool" output still showed a discrepancy.
    4. Rebooting the host on the source side allowed us to see the new devices, and we could then also see the correct amount of storage on the target.
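
For reference, here is a rough sketch of the kind of commands involved in these checks; the SID is the one above, but the pool name, device group, and HBA number are placeholders for illustration, not our actual values:

  # Thin pool usage on each array (pool name is a placeholder)
  symcfg -sid 574 show -pool FC_T2_P1_Thin -thin -detail -gb

  # SRDF pair state for the device group holding the migrated devices (group name is a placeholder)
  symrdf -g esx_mig_dg query

  # Rescan storage on the ESX host after presenting the R2 devices (adapter number is a placeholder)
  esxcfg-rescan vmhba1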

Thank you

George

January 28th, 2012 22:00

George, since you have an R21 configuration, I assume the R1 and R2 arrays are at different sites. Are the 8 ESX nodes you mentioned located at the data center along with the R2 array? What did you mean by the other 7 nodes not being able to see the data? Are the devices listed when you issue a symdev list/show command, or is it only the ESX datastore rescan that cannot find them?
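
For example, something along these lines (the SID, device number, and adapter are placeholders):

  # On the Solutions Enabler host: are the R2 devices configured and ready?
  symdev -sid <R2_SID> list
  symdev -sid <R2_SID> show <R2_dev>

  # On each ESX node: rescan the HBAs, then list the SCSI devices the host can see
  esxcfg-rescan vmhba1
  esxcfg-scsidevs -l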

Even if the disk size is different from the source, you can still get the VM up on the target site, right? It sounds like you used SRDF on thin devices, which can throw off the amount of GB displayed on the R2. I found a Primus article, "After migrating from thick to thin devices using SE 7.2 (Enginuity 5875) the thin pool shows a difference in allocated and written tracks for the migrated device." You can check whether emc265267 applies to your scenario. Similar migration LUN size discrepancies were also seen with PPME or HP-UX native commands.
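
If you want to compare allocated versus written tracks for the migrated thin devices yourself, the per-device thin report is a quick check (a sketch; the SID is a placeholder):

  # Compare the allocated and written columns for the R2 thin devices (SID is a placeholder)
  symcfg -sid <R2_SID> list -tdev -gb -detail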

January 28th, 2012 23:00

George, I found the post "Experience and considerations when migrating VMs across storage" written by David Hanacek. It may explain your first issue. You can check whether resignaturing your R2 devices solves the multiple-node awareness issue.

Those are the facts shared:

Since ESX 4.1, you are able to do a resignature or a persistent mount per datastore when presenting a “cloned” VMFS datastore to an ESX host.

1: When using force mount, you can only mount the datastore on one ESX(i) host at a time. Mounting it on multiple hosts simultaneously is only possible after resignaturing the disk.

2: The resignature changes the metadata. ESX recognizes that the disk has the same content but is accessed over a different path, which is why it identifies the disk as a duplicate and won’t allow it to be force mounted on more than one host unless you perform a resignature.

https://community.emc.com/thread/131539?tstart=0
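
For ESX 4.1, a rough sketch of what that looks like from the service console; the label/UUID is whatever esxcfg-volume -l reports for your replicated datastore:

  # List VMFS copies that the host has detected as snapshots/replicas
  esxcfg-volume -l

  # Option 1: persistently force-mount the copy (only one host at a time can do this)
  esxcfg-volume -M <VMFS_label_or_UUID>

  # Option 2: resignature the copy so every node in the cluster can mount it
  esxcfg-volume -r <VMFS_label_or_UUID>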
