dkakljkml's Posts

We did check all the parameters and did all the required upgrades prior to the migration. It still doesn't make sense why the new devices become inaccessible when we remove the old drives from the system.
No, I didn't configure the quorum drive. I was making sure everything came up properly on the active node first; once that was done, I was going to configure the quorum drive and recreate the resource group.
OM version: 3.12; OS: Windows 2003 R2 SQL cluster.
Hi Talijic,

I followed the steps below with OM version 3.12 for this SQL cluster migration using Open Migrator. Steps 1 to 5 were implemented successfully, but as soon as we removed the old SAN from the host we lost connectivity to the new volumes. Do you see anything I missed that would cause the migration to fail?

1. Assign the new SAN to both nodes of the cluster (active/passive).
2. Reboot the active node, which has OM installed, after the filter drivers are attached.
3. After the reboot, start the migration.
4. Once the migration is done, complete the migration and shut down the passive node so the active node can be rebooted.
5. Once the active node comes up, the drive letters will be swapped to the new volumes.
6. Then rebuild the quorum disk.

(A quick way to verify step 5 before pulling the old SAN is sketched after this list.)
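Not part of the OM procedure itself, but one way to sanity-check step 5 before removing the old SAN is to compare the drive-letter-to-volume-GUID mapping before and after the cutover. A minimal Python sketch, assuming it runs on the active node and that the standard Windows `mountvol` command is available (it is on Windows 2003):

    import subprocess

    # Hedged sketch: after the post-migration reboot, dump the
    # drive-letter-to-volume mapping so you can confirm each letter now
    # points at a new-SAN volume GUID rather than an old one. Capture the
    # GUIDs once before the cutover to have something to compare against.
    def drive_letter_map():
        """Parse `mountvol` output into {drive letter: volume GUID path}."""
        out = subprocess.run(["mountvol"], capture_output=True, text=True).stdout
        mapping, guid = {}, None
        for raw in out.splitlines():
            line = raw.strip()
            if line.startswith("\\\\?\\Volume{"):
                guid = line                        # volume GUID path line
            elif len(line) == 3 and line.endswith(":\\") and guid:
                mapping[line[0]] = guid            # e.g. "Q:\" -> its volume GUID
        return mapping

    for letter, guid in sorted(drive_letter_map().items()):
        print(letter, "->", guid)

If a letter still maps to an old-SAN GUID after the reboot, that would explain losing access the moment the old devices are unmasked from the host.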
Can you please let me know the steps to follow for a cluster migration using OM? Do we have to reboot the active node, with the passive node shut down, after we hit "complete migration" in OM? Please assist.
I had an issue last week while trying to migrate from the old SAN to the new one using Open Migrator. Since this is a SQL cluster, the passive node was shut down before the server reboot after the migration, so that the new LUNs would take over the old SAN drive letters. As per the process, the new SAN did get the drive letters, but as soon as we removed the old SAN we lost connectivity to the new one. Has anyone come across an issue like this before? I have also listed the steps I followed; please advise if I am wrong anywhere.

1. Assign the new SAN to both nodes of the cluster (active/passive).
2. Reboot the active node, which has OM installed, after the filter drivers are attached.
3. After the reboot, start the migration.
4. Once the migration is done, complete the migration and shut down the passive node so the active node is rebooted.
5. Then rebuild the quorum disk and recreate the cluster resources.

Please advise. (A sketch for checking resource health after step 5 follows below.)
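One thing that can help at step 5 is confirming that the quorum and physical disk resources actually came online after the final reboot, before recreating anything. A minimal Python sketch, assuming the standard Windows 2003 `cluster.exe` CLI is on the path; it simply greps the `cluster res` listing for unhealthy rows rather than parsing the columns:

    import subprocess

    # Hedged sketch: list any cluster resources that `cluster res` reports
    # as Offline or Failed after the post-migration reboot. `cluster res`
    # prints one row per resource with its group, owner node, and status.
    def unhealthy_resources():
        out = subprocess.run(["cluster", "res"], capture_output=True, text=True).stdout
        return [row for row in out.splitlines()
                if "Offline" in row or "Failed" in row]

    for row in unhealthy_resources():
        print(row)

An empty result would suggest the disk resources survived the cutover and the loss of access points at the storage layer (masking/zoning) rather than the cluster layer.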
Thanks, Seamus, for the response. In my scenario I have added the management host, which is directly connected to the array, to the console. Once this host was added I could see all the other hosts connected to the array in the console, but in Storage Scope I can only see the management host I added, not the other hosts connected to the array. I suspect we have to install the master agent on the other hosts to populate their details into Storage Scope. Please correct me if I am wrong. (A quick connectivity check is sketched below.)
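If it would help to rule out basic connectivity before installing or reinstalling agents, here is a minimal Python sketch that checks whether each host answers on the agent's TCP port from the Storage Scope server. The port number and host names below are placeholders, not confirmed ECC values; substitute the port your master agent install actually uses:

    import socket

    AGENT_PORT = 5799             # placeholder - use your ECC master agent's real port
    HOSTS = ["host-a", "host-b"]  # hypothetical names of hosts attached to the array

    for host in HOSTS:
        try:
            # A completed TCP connect suggests something is listening there.
            with socket.create_connection((host, AGENT_PORT), timeout=3):
                print(host, ": agent port reachable")
        except OSError:
            print(host, ": not reachable - master agent may be missing or stopped")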
Going through this discussion, I have a question. Do we need to install master agents on all the hosts that are now visible in the ECC console after adding the array, so that the information from these hosts is populated into Storage Scope?
I have some questions to verify:

1. Do all the hosts connected to the storage array (VMAX) have to have the master agent installed in order to show up in Storage Scope?
2. Do we have to install the master agent at the VM level or on the ESX host?