laibhari
19 Posts
August 13th, 2008 19:00
Sorry, I forgot to mention the details of my setup. I have the following:
- 3 existing SANs - AX150i
- 1 new SAN - AX150i
- 5 cluster pair servers - PE1955
Dev Mgr
4 Operator
9.3K Posts
August 14th, 2008 11:00
Do not use dynamic disks for this, for a couple of reasons:
- dynamic disks are not supported on iSCSI, and even if you only want to use one for the migration, once you go dynamic you can't go back without first deleting all partitions off the disk (see page 14 of the iSCSI initiator's user guide)
- a dynamic disk should really only be used if you need RAID but can't budget for a hardware RAID controller
I'd suggest planning for a complete outage and doing the following:
- shut down all but 1 cluster node
- shut down and (temporarily) disable all services you may be running in the cluster other than the cluster service itself (e.g. SQL, Exchange, Oracle, etc)
- add the new AX150i to this server (so that this server can see the existing SANs and the new SAN)
- partition the new disks and format them (use volume labels or little files in the root to help identify which new disk is supposed to get which data)
- now you copy the data from the old disks to the new disks (drag and drop, or some kind of sync tool; check Microsoft's SyncToy)
- at this time you remove all but 1 of the existing disks from the cluster (go to the cluster resource group and delete the disk); leave the Quorum disk for now
NOTE: depending on your cluster setup, this may require removing some dependencies, so be sure to write down all dependencies on each disk
- now you check disk management to see if the disks you just removed from the cluster are fully readable (the cluster disk driver should have released them). If not, perform a reboot.
- Now you change the drive letters on the existing disks to something else to free up the drive letters for the 'new' disks.
- Now change the drive letters on the new disks to become the same as the old
- At this time you add the new disks to the cluster as a resource
- restore any dependencies in the cluster that you had to remove previously to be able to remove the disks
- now you move the Quorum to one of these new disks (right-click the cluster, go to Properties, then the Quorum tab, and select another drive letter)
- At this time you remove the disk that held the Quorum from the cluster
- change the drive letter on your Quorum disk
- change the drive letter of the new Quorum disk to be the old drive's letter
- add this disk as a cluster resource
- move the Quorum back to the old drive letter (now the new LUN on the new SAN)
- bring up all services and such to verify everything is working in the cluster
- go into Navisphere for the old SANs and remove access to the virtual disks/LUNs from all servers (don't delete the virtual disks for a while, until you're 100% comfortable that all data was copied properly)
- reboot the server and verify everything comes up properly
Now you power up 1 of the other servers and verify it joins the cluster properly. If so, test failover.
Now do the same for the other servers 1 at a time.
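The "verify all data was copied properly" part can be scripted before you remove access to the old virtual disks. A minimal sketch in Python, assuming a size-only comparison is good enough (swap in hashes for a stricter check; the drive letters in the example are placeholders):

```python
import os

def compare_trees(src_root, dst_root):
    """Walk src_root and report files that are missing from dst_root or
    whose size differs; a cheap sanity check before deleting the old
    virtual disks (use hashes instead of sizes for a stricter check)."""
    problems = []
    for dirpath, _dirnames, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        for name in filenames:
            src_file = os.path.join(dirpath, name)
            dst_file = os.path.join(dst_root, rel, name)
            if (not os.path.exists(dst_file)
                    or os.path.getsize(src_file) != os.path.getsize(dst_file)):
                problems.append(os.path.normpath(os.path.join(rel, name)))
    return problems

# Example (old volume E:, new volume K: -- placeholders):
# leftovers = compare_trees("E:\\", "K:\\")
```

An empty result means every file on the old volume exists on the new one with the same size; anything listed should be re-copied before you delete the old LUNs.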
This list is probably missing some steps, but it should give you a general idea of what's involved.
The main things are: you can't use dynamic disks, and, because you're using iSCSI, you can't use SAN Copy to 'push' the LUNs to another SAN (the fibre channel AX150 does offer this).
laibhari
19 Posts
August 14th, 2008 17:00
Thanks for your time; I appreciate your comments. Here is one small thing that I probably did not include in my first post:
Our idea for this data migration is to get all the data onto LUNs on the new SAN, take that SAN to a new location (a different city), expose those LUNs to new servers, and start using the same data.
We have tried certain folder copy tools (Windows copy, ViceVersa, etc.), but the problem is that they eat up memory and slow down the copy process, and we only have a weekend for the cutover. The data we are trying to copy is cross-referenced, so this cycle will only complete at the new location if all the LUNs carry their data over without any problem.
I have exposed same-size LUNs from the new SAN to the same server that holds the source LUNs, so it will be a drive-to-drive copy on the same machine for all data types. How can I get maximum speed? Both drives are SAN drives, so any file transfer between them should be block level (correct me if I am wrong).
Sorry for posting it like a story; I am just trying to explain as much as possible so you get an overall idea of our situation.
Dev Mgr
4 Operator
9.3K Posts
August 14th, 2008 18:00
The AX150i doesn't offer this; a block-level transfer would require the fibre channel version of the array (which offers SAN Copy).
You're going to have to do a file-level copy.
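A file-level copy can still be fast if each file is streamed through a large buffer instead of many small reads and writes. A minimal Python sketch of the idea, where the function name, buffer size, and example paths are illustrative assumptions, not a specific tool recommendation:

```python
import shutil

def fast_copy(src_path, dst_path, buf_mb=16):
    """Stream one file through a large buffer; fewer, bigger reads and
    writes keep two iSCSI-backed volumes busy instead of paying per-call
    overhead on millions of small operations."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        shutil.copyfileobj(src, dst, length=buf_mb * 1024 * 1024)

# Example (paths are placeholders):
# fast_copy("E:\\data\\big.mdf", "K:\\data\\big.mdf")
```

The same principle applies whichever copy tool you pick: large sequential I/O per file is what gets you close to the wire speed of the iSCSI links.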
In this case, a different setup is possible:
- connect the existing cluster nodes to the new SAN
- determine which server owns which original virtual disk
- give each new LUN to the server that owns the LUN containing the original data (e.g. if the LUN with the SQL data is on cluster node 3, present the new SAN's LUN that will receive a copy of that data to cluster node 3 as well)
- rescan in device manager for hardware changes and then go to disk management and verify you can see the disk(s)
- partition and format if needed
- use some kind of sync tool to copy the data
By having each server do part of the job (the part for which that server owns the resource at the time), you can maybe speed up the process.
Remember: servers cannot share disk space unless they are clustered, so present any given virtual disk on the new SAN to only 1 cluster node.
Another method is to just move the new SAN to the new location, hook up the server(s), and then restore 'last night's' tape backup of your production data.
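The divide-the-work idea above can be sketched as one copy job per source/destination volume pair running concurrently. In this Python sketch, `shutil.copytree` stands in for whatever sync tool actually performs the per-LUN copy, and the drive letters are placeholders:

```python
import shutil
from concurrent.futures import ThreadPoolExecutor

def sync_tree(pair):
    """Copy one source tree to its destination; a stand-in for whatever
    sync tool actually performs the per-LUN copy."""
    src, dst = pair
    shutil.copytree(src, dst, dirs_exist_ok=True)
    return dst

def sync_all(pairs, workers=2):
    """Run the per-volume copy jobs concurrently, mirroring the idea of
    each cluster node copying only the LUNs it owns at the same time."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(sync_tree, pairs))

# Example (drive letters are placeholders):
# sync_all([("E:\\", "K:\\"), ("F:\\", "L:\\")])
```

Running the pairs in parallel only helps while the SAN links, not the server, are the bottleneck, which is why spreading the pairs across the cluster nodes (each node copying its own LUNs) scales better than one node doing everything.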