Migrate SQL Cluster LUNs from one array to another
We plan to migrate SQL cluster LUNs from one VNX array to another using EMC Open Migrator. The steps are below; can you please let me know if they are correct? We will also migrate the quorum drive with Open Migrator.
Open Migrator version: 3.12
SQL / Windows Server: 2012
- Create and assign new LUNs on the new VNX SAN
- Format the new drives (64K allocation unit for SQL; see the sketch at the end of this post)
- Migrate data on the active node from the old VNX to the new VNX (drives D, E, F, M, N, K, S, Q, R, T; K is the quorum disk)
- Before rebooting the active node, shut down the passive node
- Reboot the active node
- Ensure all disks are visible with the correct drive letters on the active node
- Start the passive node and check that the cluster is working
Are the above steps OK?
What would the rollback plan be if there are issues? Can we remove the new VNX LUNs, re-assign the drive letters of the old VNX LUNs, and reboot the servers?
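For step 2, this is roughly what I have in mind for each new LUN, from PowerShell on the active node (a sketch only; the disk number, drive letter, label and partition style below are placeholders, and 64K is the usual SQL Server allocation unit recommendation):
# List the disks first and confirm which number the new LUN got.
Get-Disk | Sort-Object Number | Format-Table Number, FriendlyName, Size, PartitionStyle
# Example for one new LUN that shows up as Disk 1 and should become drive M:
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter M
Format-Volume -DriveLetter M -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "SQL_DATA" -Confirm:$false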
umichklewis
February 17th, 2016 12:00
All of your steps look correct, but I have two questions.
First, why not simply define a new Quorum disk? MSCS will let you select a new disk for the Quorum drive and will use it on its own. It just seems simpler than having a migration tool do it.
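As a rough example of what I mean, from PowerShell on one of the nodes (a sketch only; "Cluster Disk 9" is a made-up resource name, so check what name the new witness LUN actually gets after you add it to the cluster):
# Add the new LUN as an available cluster disk, see what resource name it gets, then point the quorum at it.
Get-ClusterAvailableDisk | Add-ClusterDisk
Get-ClusterResource
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 9"
Get-ClusterQuorum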
Second, have you considered using PowerPath Migration Enabler? If you're using PowerPath on your servers, PPME has a simpler rollback plan and allows you to move your devices one at a time or in groups. PPME is free, but will use CPU resources on the host to move the data. We were able to move a 2TB SQL LUN in about 90 minutes with the default settings and no noticeable impact on the server. You can always throttle a migration to reduce CPU impact or reduce the time needed to complete the migration.
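For the throttling, it is something along these lines (the handle number is just an example taken from powermig info, and the throttle value is illustrative; check the user guide for how the value range maps to copy speed on your version):
# List migrations and their handles, adjust the copy rate for one of them, then check progress.
powermig info -all
powermig throttle -throttleValue 5 -handle 1
powermig query -handle 1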
Let us know if that helps!
Karl
Agiyapal
February 18th, 2016 01:00
Thanks Karl,
Yes, we can create a new quorum drive on the new VNX. So am I right that we first move the quorum drive to the new VNX and then carry out steps 3 to 7?
I have not used PPME, but I can try that.
umichklewis
February 18th, 2016 03:00
The documentation is a good start, and there are plenty of articles on EMC Support for various issues that can arise. You can start with the user guide - https://support.emc.com/docu56488_PowerPath-Migration-Enabler-6.x-User-Guide.pdf?language=en_US
Let us know if that helps!
Karl
Agiyapal
February 24th, 2016 09:00
Hi,
I have tested PPME and it works with no issues; we will use the migrator to move the data. Just one question.
The disk numbers assigned are different on the two cluster nodes; will that cause any issues?
i.e. when I run diskpart on the active node:
Disk 1 to Disk 11 are the newly assigned LUNs
Disk 12 to Disk 22 are the old LUNs
so I will copy Disk 12 to Disk 1 with: powermig setup -techType hostcopy -src harddisk12 -tgt harddisk1 -cluster -no (the full sequence I plan to run is sketched after the DISKPART output below)
However, on the passive node it is the reverse: Disk 1 to Disk 11 are the old LUNs and Disk 12 to Disk 22 are the newly assigned LUNs.
The DISKPART output is below.
ACTIVE NODE
DISKPART> list disk
  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online          100 GB      0 B
  Disk 1    Offline        1024 GB  1024 GB
  Disk 2    Offline         900 GB   900 GB
  Disk 3    Offline        1250 GB  1250 GB
  Disk 4    Offline        1350 GB  1350 GB
  Disk 5    Offline        1025 GB  1025 GB
  Disk 6    Reserved       2048 GB      0 B        *
  Disk 7    Offline         100 GB   100 GB
  Disk 8    Offline         100 GB   100 GB
  Disk 9    Offline         512 MB   512 MB
  Disk 10   Offline         300 GB   299 GB
  Disk 11   Offline         100 GB   100 GB
  Disk 12   Reserved       1024 GB  1024 KB
  Disk 13   Reserved        750 GB  1024 KB
  Disk 14   Reserved       1325 GB  1024 KB
  Disk 15   Reserved        100 GB  1024 KB
  Disk 16   Reserved        300 GB  1024 KB
  Disk 17   Offline         200 GB  1024 KB
  Disk 18   Reserved       1024 GB      0 B
  Disk 19   Reserved        900 GB  1024 KB
  Disk 20   Reserved        500 MB  1920 KB        *
  Disk 21   Reserved        100 GB  1024 KB
  Disk 22   Reserved        100 GB  1024 KB
DISKPART>
PASSIVE NODE
DISKPART> list disk
  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online          100 GB      0 B
  Disk 1    Reserved       2048 GB      0 B        *
  Disk 2    Reserved       1024 GB  1024 KB
  Disk 3    Reserved        750 GB  1024 KB
  Disk 4    Reserved       1325 GB  1024 KB
  Disk 5    Reserved        100 GB  1024 KB
  Disk 6    Reserved        300 GB  1024 KB
  Disk 7    Reserved       1024 GB      0 B
  Disk 8    Reserved        900 GB  1024 KB
  Disk 9    Reserved        500 MB  1920 KB        *
  Disk 10   Reserved        100 GB  1024 KB
  Disk 11   Reserved        100 GB  1024 KB
  Disk 12   Offline         200 GB  1024 KB
  Disk 13   Offline        1024 GB  1024 GB
  Disk 14   Offline         900 GB   900 GB
  Disk 15   Offline        1250 GB  1250 GB
  Disk 16   Offline        1350 GB  1350 GB
  Disk 17   Offline        1025 GB  1025 GB
  Disk 18   Offline         100 GB   100 GB
  Disk 19   Offline         100 GB   100 GB
  Disk 20   Offline         512 MB   512 MB
  Disk 21   Offline         300 GB   299 GB
  Disk 22   Offline         100 GB   100 GB
DISKPART>
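So for that first pair (old Disk 12 to new Disk 1), the sequence I plan to run is something like this (based on my reading of the PPME user guide; the handle number returned by setup is illustrative, and I am keeping the -cluster option exactly as above):
# Set up a hostcopy migration for the Disk 12 -> Disk 1 pair.
powermig setup -techType hostcopy -src harddisk12 -tgt harddisk1 -cluster -no
# Start the bulk copy and keep checking until the pair reaches the sourceSelected state.
powermig syncStart -handle 1
powermig query -handle 1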
umichklewis
February 24th, 2016 11:00
Unless you're doing something very specific, Windows doesn't care about the logical disk number; it cares about the disk signature. After you fail cluster disks over from Node A to Node B, Node B activates each cluster disk based on its signature, not its logical disk number. If the disks are under cluster control, be sure to follow the steps for handling cluster disks, as per the guide.
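If you want to see that for yourself, compare signatures rather than disk numbers on both nodes, along these lines (a PowerShell sketch; for GPT disks look at the Guid property instead of Signature):
# Run on each node and compare: the Number column can differ between nodes, but Signature/Guid must match.
Get-Disk | Sort-Object Number | Format-Table Number, Signature, Guid, Size, IsClustered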
Let us know how it goes!
Karl
chrismahon
February 24th, 2016 12:00
Your steps for Open Migrator are correct. I would add that attaching the Open Migrator filter driver to the source and target LUNs requires a reboot, and uninstalling OM requires another reboot. I see Karl directed you to PPME, which would be preferred, as you can significantly reduce the downtime involved in this migration. Also, SAN Copy would do the job if you needed an array-based approach; I almost prefer a SAN Copy incremental push over OM because of the number of outages I need to incur with OM.
Agiyapal
March 2nd, 2016 03:00
Hi,
We tested the migration using PPME and it works with no issues; we tested it between VNX and XtremIO (to and fro).
We checked with EMC and I was told that XtremIO is supported.
One question: can I skip the powermig cleanup command and remove the abandoned drives first, before carrying out the -force cleanup?
If we need to reverse the migration for any reason, can we re-attach the old source drives, since they will still have the data if we do not clean up?
Also, as it is a Microsoft cluster, can we leave the sync running in sourceSelected mode for as long as we want?
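For reference, the end-of-migration flow I am asking about looks like this, as I understand it from the user guide (the handle number is illustrative, and only one of commit or abort would actually be run):
# After the sync completes, the pair sits in sourceSelected; switch I/O to the new LUN to test it.
powermig selectTarget -handle 1
# If the new array misbehaves, switch back without losing anything.
powermig selectSource -handle 1
# When happy, make the move permanent and then release the old source LUN.
powermig commit -handle 1
powermig cleanup -handle 1
# Or, at any point before commit, back the whole migration out.
powermig abort -handle 1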
umichklewis
March 2nd, 2016 12:00
Is there a reason you need to use -force with your powermig cleanup? I only ask because I can't remember the circumstances in which it's required. In my experience, I have always put LUNs into a committed state before taking any further action. I would not recommend removing drives once committed; that's what I would use powermig cleanup for.
If you're concerned about an issue that would force you to reverse the migration, I would go into TargetSelected for an extended period of time, and leave things like that for a while. We let a server stay this way for 30 days, to see the impact of month-end processing, then committed and cleaned up the migration. If we had issues during month-end processing, we were planning to use powermig abort and scrap the whole project.