How to migrate Windows Cluster VMs (incl. RDMs) from old to new storage with minimal downtime?
Hello community,
we are looking for some best practices around "How to migrate Windows Cluster VMs (incl. RDMs) from old to new storage with minimal downtime?".
Who has done this and what are your suggestions from your experience?
Thank you!
David
EricDeWitte1
February 1st, 2012 07:00
Hi David,
I've been there several times and maybe I can point you to some options.
First, as usual, the answer will be: "it depends" and "your mileage will vary".
Assumptions:
- cluster spanning physical hosts => physical RDMs
You will always need to remap your RDMs when migrating between arrays.
-> What are your expectations in terms of "minimum downtime"? ; )
Options:
1) SAN-based tools (assumption: you're using a CX/VNX type array as destination)
- take note of the RDM mapping to the VMs (note the SCSI ID assigned to each RDM in the VM configuration)
- shut down the VMs of that virtual cluster
- unmap the RDMs
- create the LUNs on the destination array
- use SAN Copy to copy to the new array
- present the new LUN to ESX (rescan, etc.)
- use Storage vMotion to move the VM to the new datastores
- remap the new LUN as a pRDM with the same SCSI ID
- don't forget to set the bus sharing if it disappeared
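The rescan/remap steps above can be sketched from the ESXi shell. This is only a dry-run sketch that prints the commands instead of executing them; the LUN ID, datastore, and file names are placeholders, not values from this thread:

```shell
# Dry-run sketch of the Option 1 remap step (prints commands, runs nothing).
# NEW_LUN and RDM_PTR are hypothetical placeholders.
NEW_LUN="naa.600601604444000000000000cafe0001"
RDM_PTR="/vmfs/volumes/NewDatastore/TestVM/TestVM_RDM1.vmdk"

# Rescan so the host sees the freshly presented LUN (ESXi 5.x syntax):
echo "esxcli storage core adapter rescan --all"
# Recreate the passthrough RDM mapping file against the new device:
echo "vmkfstools -z /vmfs/devices/disks/${NEW_LUN} ${RDM_PTR}"
```

After recreating the pointer file, reattach it to the VM at the original SCSI ID and re-check the bus sharing setting, exactly as in the steps above.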
Pros:
- using array-based tooling to do the data copy => no load on the ESX host.
Cons (?):
- need EMC SAN Copy
- need to configure your SAN accordingly
2) VMware tools based
- take note of the RDM mapping to the VMs (note the SCSI ID assigned to each RDM in the VM configuration)
- shut down the VMs of that virtual cluster
- unmap the RDMs
- create the LUNs on the destination array (they must be at least as large as the source!)
- present to the ESX host (rescan, ...)
- use Storage vMotion to move the VM to the new datastores
- use the ESX CLI vmkfstools to copy the RDM content to the new LUN:
Create the new RDM pointer:
vmkfstools -z /vmfs/devices/disks/disk_ID nameofdisk.vmdk
Copy to the new RDM:
vmkfstools -i srcfile.vmdk -d rdmp:/vmfs/devices/disks/disk_ID nameofdisk.vmdk
or copy to a regular VMDK:
vmkfstools -i srcfile.vmdk nameofdisk.vmdk
PS: this is purely CLI based - you can do some of it using the GUI (like creating the RDM mapping file).
PS2: I need to do some testing and come back to you with a real example to make this very clear.
- remap the new LUN as a pRDM with the same SCSI ID
- don't forget to set the bus sharing if it disappeared
Pros:
- using VMware-based tooling to do the data copy => no specific SAN configuration needed.
- it will still go over your storage network, so it should be quite fast (though it runs in the service console, so resources are limited)
- big fun if you like CLI
Cons (?):
- got to love CLI
- load on the server
- risk of human error (which disk ID maps to which LUN...)
MSCS clusters are very touchy beasts. I have managed to migrate/fail over a lot of them in the past. It is quite easy when you know how to handle them, but it goes bad quickly if you miss some steps.
Hope this helps.
Eric.
christopher_ime
February 5th, 2012 00:00
Eric,
Awesome feedback. I wanted to bring up one more option, which could be to use SRM as a migration strategy. Since SRM v1.0 there has been full support for RDMs. Of course it would require an SRM license and array-based replication (unless using v5); however, I only bring it up because you explicitly mentioned "minimal downtime", and the risk of things possibly "going bad quickly" if steps aren't followed properly, as pointed out by Eric, could be minimized. As we all know, you also have the ability to test/rehearse the recovery plan before committing. Probably an unlikely candidate, as I'm thinking this would have already been considered if it were available.
EricDeWitte1
February 7th, 2012 02:00
David,
As promised, a more detailed and tested CLI walkthrough (thanks Vlabs!):
This approach works in all cases: whether you stay on the same storage array, migrate to another array from the same vendor, or migrate from one vendor to another.
Command:
ls -lh /vmfs/devices/disks/
Sample result:
-rw------- 1 root root 8.0G Feb 2 13:28 mpx.vmhba1:C0:T0:L0
-rw------- 1 root root 4.0M Feb 2 13:28 mpx.vmhba1:C0:T0:L0:1
-rw------- 1 root root 5.0G Feb 2 13:28 naa.60060160444400006449ed60a14de111
-rw------- 1 root root 6.0G Feb 2 13:28 naa.6006016044440000cab17497a14de111
lrwxrwxrwx 1 root root 19 Feb 2 13:28 vml.0000000000766d686261313a303a30 -> mpx.vmhba1:C0:T0:L0
lrwxrwxrwx 1 root root 21 Feb 2 13:28 vml.0000000000766d686261313a303a30:1 -> mpx.vmhba1:C0:T0:L0:1
lrwxrwxrwx 1 root root 36 Feb 2 13:28 vml.020000000060060160444400006449ed60a14de111565241494420 -> naa.60060160444400006449ed60a14de111
lrwxrwxrwx 1 root root 36 Feb 2 13:28 vml.02000100006006016044440000cab17497a14de111565241494420 -> naa.6006016044440000cab17497a14de111
Notice the two vml.xxx entries that point to naa.xxx devices. Those are my LUNs.
Query existing RDMs
Command :
vmkfstools -q Test_VM.vmdk
result :
Disk Test_VM.vmdk is a Passthrough Raw Device Mapping
Maps to: vml.020000000060060160444400006449ed60a14de111565241494420
Copy the old RDM to the new RDM
Command:
vmkfstools -i source.vmdk -d rdmp:device destination.vmdk
Note: this will automatically create the new RDM vmdk pointer file (destination.vmdk)
Example :
vmkfstools -i TestVM_RDM1.vmdk -d rdmp:/vmfs/devices/disks/vml.02000100006006016044440000f8b164674b51e111565241494420 TestVM_NewRDM.vmdk
result :
Destination disk format: pass-through raw disk mapping to '/vmfs/devices/disks/vml.02000100006006016044440000f8b164674b51e111565241494420'
Cloning disk 'TestVM_RDM1.vmdk'...
Clone: 100% done.
Before:
Command:
fdisk -l /vmfs/devices/disks/vml.02000100006006016044440000f8b164674b51e111565241494420
Result:
Disk /vmfs/devices/disks/vml.02000100006006016044440000f8b164674b51e111565241494420: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /vmfs/devices/disks/vml.02000100006006016044440000f8b164674b51e111565241494420 doesn't contain a valid partition table
After:
Command:
fdisk -l /vmfs/devices/disks/vml.02000100006006016044440000f8b164674b51e111565241494420
Result:
Disk /vmfs/devices/disks/vml.02000100006006016044440000f8b164674b51e111565241494420: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/vmfs/devices/disks/vml.02000100006006016044440000f8b164674b51e111565241494420p1 1 522 4192933+ 87 NTFS volume set
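Putting the tested pieces together, the whole sequence can be collected into one short dry-run script. It only echoes the commands (the file names are Eric's test names, the vml ID is from the example output above), so nothing is touched until you run them yourself:

```shell
# Dry-run summary of the tested sequence above; prints each step's command.
SRC_PTR="TestVM_RDM1.vmdk"   # existing RDM pointer file
DST_DEV="/vmfs/devices/disks/vml.02000100006006016044440000f8b164674b51e111565241494420"
DST_PTR="TestVM_NewRDM.vmdk" # new RDM pointer file to be created

echo "ls -lh /vmfs/devices/disks/"                              # 1. find the new LUN
echo "vmkfstools -q ${SRC_PTR}"                                 # 2. check the old mapping
echo "vmkfstools -i ${SRC_PTR} -d rdmp:${DST_DEV} ${DST_PTR}"   # 3. clone to the new pRDM
echo "fdisk -l ${DST_DEV}"                                      # 4. verify the partition table
```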
Enjoy,
Eric.
EricDeWitte1
February 7th, 2012 02:00
Hi Christopher,
You are right: if you can leverage SRM, that would be a good approach.
Provided, of course, that you remain within compatible arrays. ; )
Eric.
christopher_ime
February 7th, 2012 20:00
Eric,
Thanks for sharing!
In response to "within compatible arrays": I made a subtle comment above, "require an SRM license and array-based replication (unless using v5)", and when typing it up I was thinking about VR (vSphere Replication)/HBR (Host-Based Replication), introduced in 5.0.
Then again, I like your feedback; your options would be a solution for the vast majority of users, and I'm probably just complicating things.
hpeskens
February 8th, 2012 04:00
You could also use the EMC Open Migrator/LM software to migrate physical RDMs (also within a cluster).
For this you need to install the software and reboot, both beforehand and to swap drive letters (and again to uninstall), but the data migration itself is online. However, during the migration only one node can be active.
christopher_ime
February 8th, 2012 09:00
Good point, and unlike the other options noted above, OM will let you fix the NTFS partition alignment of the volume in the guest image if needed. Keep in mind, though, that for clusters the latest version does not support rebooting to swap drive letters. You must use the OM interface and select "Complete migration", as noted in the Administrator Guide.
dynamox
February 11th, 2012 10:00
if the RDM is > 2TB, you can't convert it to a VMDK
christopher_ime
February 11th, 2012 10:00
Another scenario, if eligible, might be to use this opportunity to convert RDMs to VMDKs via Storage vMotion (or Cold Migration). Of course, first verify the p/vRDM requirements; for instance, with regard to MSCS clusters and ESX, refer to the following KB article:
VMware KB: Microsoft Cluster Service (MSCS) support on ESX/ESXi
http://kb.vmware.com/kb/1004617
Then depending on the version of ESX(i), search for the "Supported Shared Storage Configurations" section in the corresponding PDF which lists the possible combinations and the support of pass-through (pRDM), non-pass through (vRDM), and virtual disks.
There is a very good blog post that lists the RDM-to-VMDK migration combinations (Storage vMotion/Cold Migration). Please note the scenarios (when you don't select a format to convert to) where you are only moving the mapping file; then again, that would be obvious, as the migration would complete in very little time since it is not moving data off of the original LUN.
Migrating RDMs, and a question for RDM Users
http://blogs.vmware.com/vsphere/2012/02/migrating-rdms-and-a-question-for-rdm-users.html
When following along in the blog post:
1) If selecting "Same format as source", it will only move the mapping file
2) If selecting either "Thin provisioned format" or "Thick format", it will convert the RDM to a VMDK
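For completeness, the same RDM-to-VMDK conversion can also be done offline with vmkfstools instead of the migration wizard. A dry-run sketch (file names and datastore path are placeholders, not from this thread); `-d thin` yields a thin-provisioned disk, `-d zeroedthick` a thick one:

```shell
# Dry-run sketch: offline RDM-to-VMDK conversion (prints the command only).
SRC_PTR="TestVM_RDM1.vmdk"  # RDM pointer file (placeholder name)
DST_VMDK="/vmfs/volumes/NewDatastore/TestVM/TestVM_Disk1.vmdk"  # placeholder

echo "vmkfstools -i ${SRC_PTR} -d thin ${DST_VMDK}"
```

The VM must be powered off for this, after which you detach the RDM and attach the new VMDK in its place.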
And please respond to his request for feedback at the end of the post and support a pRDM to pRDM migration tool within vSphere. I think we all agree it would be useful.
[...]
How useful would you find an RDM -> RDM migration tool, i.e. the ability to move data from one LUN to another LUN via the vSphere migration wizard?
[...]
christopher_ime
February 11th, 2012 11:00
Good point. Thanks for the feedback dynamox!
Even with VMFS-5, which supports larger LUNs/datastores (without extents), individual files (VMDKs) still have a 2TB - 512 byte limit. Good reminder.
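That ceiling is easy to sanity-check numerically. A small sketch that computes the 2TB - 512 byte limit and compares a hypothetical 3TB RDM against it:

```shell
# Classic VMDK size ceiling: 2 TB minus 512 bytes.
LIMIT=$(( 2 * 1024 * 1024 * 1024 * 1024 - 512 ))  # = 2199023255040 bytes
RDM_SIZE=$(( 3 * 1024 * 1024 * 1024 * 1024 ))     # hypothetical 3 TB RDM

echo "limit: ${LIMIT} bytes"
if [ "${RDM_SIZE}" -gt "${LIMIT}" ]; then
  echo "too large: this RDM cannot be converted to a VMDK"
fi
```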
krkl4d
December 7th, 2016 07:00
Hi,
Though this is quite an old post, I'm working on a plan for a similar migration of Windows failover cluster VMware VMs with physical RDM disks from an old data center to a new data center.
The server and storage will be new but similar models.
There is also array-level replication between the two data centers.
VMware SRM is also set up.
But I need help with the actual steps of migrating the WFC VMs to the new DC using SRM and array-level replication.
As these are clustered servers, is the process different from the typical SRM procedure?
Even if SRM migrates the VMs, how does the cluster come up at the new DC?
Any help appreciated.
Thanks
krk