
VM boots without RDMs after mirrorview failover

Dear forum people, I have two VNX storage arrays, one at each of the company's two locations. On VNX1 I have a vCenter managing a VMFS datastore containing a VM. This VM also has 2 RDMs. Each of the three LUNs is replicated to VNX2 using MirrorView. At location 2 a second vCenter is running. After promoting the LUNs on VNX2 and trying to boot the VM at the second location (from VNX2), the replicas of the original RDMs aren't visible to the VM. I can add them manually, though. But suppose I have 500 VMs and hundreds of RDMs: how can I tell which RDM belongs to which VM after everything has failed over from VNX1 to VNX2?
1 Reply

Other than documenting the WWNs of the source and target LUNs and matching them to the VMs at the DR site, I couldn't come up with any solution. Another tip is to create a small txt file on each RDM from the OS level of the VM, noting the name of the VM, the drive letter, and any other properties you'll want to know when you need to manually attach RDMs to VMs. If you then accidentally attach the wrong RDM to a VM, at least you can see which VM that RDM actually belongs to.
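To make the documenting-and-matching idea concrete, here is a minimal sketch in Python of the bookkeeping involved. All WWNs, VM names, and drive letters below are hypothetical examples; in practice you would record your VM-to-source-LUN inventory and the MirrorView source-to-target pairs from your own arrays (for example via PowerCLI and Navisphere/Unisphere reports), then join the two lists the same way this function does.

```python
# Hypothetical, pre-documented inventory recorded BEFORE failover:
# which source-LUN WWN each RDM maps to, which VM owns it, and the
# guest drive letter. (Example data only, not real array output.)
rdm_inventory = {
    # source LUN WWN            (VM name,   drive letter)
    "60:06:01:60:aa:aa:aa:01": ("SQLVM01", "E:"),
    "60:06:01:60:aa:aa:aa:02": ("SQLVM01", "F:"),
    "60:06:01:60:aa:aa:aa:03": ("FILEVM02", "G:"),
}

# Documented MirrorView pairs: source LUN WWN -> target (replica) LUN WWN.
mirror_pairs = {
    "60:06:01:60:aa:aa:aa:01": "60:06:01:60:bb:bb:bb:01",
    "60:06:01:60:aa:aa:aa:02": "60:06:01:60:bb:bb:bb:02",
    "60:06:01:60:aa:aa:aa:03": "60:06:01:60:bb:bb:bb:03",
}

def map_replica_to_vm(rdm_inventory, mirror_pairs):
    """Join the two documented lists: return {target WWN: (VM, drive)}
    so that after failover you know which promoted replica LUN to
    attach to which VM, and as which disk."""
    return {
        mirror_pairs[src_wwn]: vm_info
        for src_wwn, vm_info in rdm_inventory.items()
        if src_wwn in mirror_pairs
    }

if __name__ == "__main__":
    for target_wwn, (vm, drive) in map_replica_to_vm(
        rdm_inventory, mirror_pairs
    ).items():
        print(f"{target_wwn} -> attach to {vm} as {drive}")
```

The point of the sketch is only that the join key is the source LUN WWN: as long as both the VM-to-RDM inventory and the mirror pairs are kept current, reattaching hundreds of RDMs at the DR site becomes a lookup instead of guesswork.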

 

Or buy a product like SRM (Site Recovery Manager). I'm not sure whether it handles all of this for you, but if it does, it will save you a headache when you have many, many VMs with oh so many RDMs.
