
February 2nd, 2010 05:00

Upgrade CX3-20 disks from 2Gb to 4Gb

Hello...

We currently have a CX3-20 disk box (all 2Gb 10k disks) and we want to replace all the disks with new 15k 4Gb drives, so we can run at 4Gb.

We purchased 15 x 15k 4Gb disks.

What is the best method for replacing the 10k disks? Just start from slot 0, replace the first disk, let it fully rebuild, then replace the next one? Or should I start with the non-FLARE OS disks (i.e. not the first 5), replace them first, and only do the FLARE OS disks at the end?

Or, since we actually purchased the new disks with a DAE, should we plug that into the loop and then somehow move the FLARE OS across?

Any help would be appreciated.

Cheers

Anthony

9.3K Posts

February 2nd, 2010 06:00

Before I post an answer, I want to be sure I understand your setup fully, as this is important.

 

Do you only have the DAE-OS (bus 0, enclosure 0) with the 2Gb/s drives, and no other disk enclosures attached?

 

One note: replacing the drives isn't the only step; you also have to run the backend bus speed wizard, which requires the whole SAN to go down (both SPs reboot at the same time), so a complete downtime window will be needed eventually.

4 Posts

February 2nd, 2010 06:00

Yep, no other DAE is attached. The new 15k 4Gb disks did come in a DAE, but we are probably going to sell it along with all the older 10k 2Gb disks.

So we just need to replace the existing 15 x 10k 2Gb drives with the new 15 x 15k 4Gb drives while maintaining the integrity of the storage (i.e. the LUNs) and keeping it running.

On the backend bus speed wizard: will the new drives function correctly without running it? That is, will everything remain at 2Gb with no problems until I can schedule a time to take the system down?

Cheers

4 Posts

February 2nd, 2010 07:00

Yep, it is a DAE3P, so no problems.

Thanks for the process. I am in the middle of backing everything up, so the whole process is going to take some time.

Cheers

9.3K Posts

February 2nd, 2010 07:00

You have a CX300 tag on your post. The following steps will only work if your DAE-OS is a DAE3P; if it's a DAE2 or DAE2P, none of this will work and you may be stuck (I'll explain if that turns out to be the case).

 

If you don't have any data LUNs on the FLARE drives (disks 0 to 4 in the DAE-OS), make a raid group there and bind a small LUN of a redundant raid type (e.g. a 5GB raid 5 LUN). If you already have LUNs on those drives, you can skip this step.

 

Assuming you want to do this with data in place, I would recommend the following steps:

- run a backup (just in case) and verify this backup is 100% good

- unbind any hotspare you may have

- disable email home (no need to have Dell call you each time you pull a drive to replace it)

- run SPcollects and transfer them off the array (save them on your desktop, for example), just in case

Note: swapping out the first 5 drives will disable write cache, so you do take a performance hit during the first stage of this migration.

- replace disk 0 with one of the new 15k drives (I assume it's the same size or larger)

- wait for the rebuild to finish completely (check each LUN in the raid group that contains this drive and verify it's 100% rebuilt; the big T is a good way to tell, but checking the individual LUNs is the 100% certain way)

- using these same steps, replace one disk at a time, working from the left side to the right side of the enclosure (don't forget the hotspare). Note: if even one drive/device is only 2Gbit/s capable, you cannot change to 4Gbit/s.

- run a setsniffer against all raid groups (naviseccli -h ip_of_SPA setsniffer -rg raidgroupnumber -bv -bvtime ASAP)

- give the setsniffer about 1 minute per GB on a single disk in the raid group (so with 146GB drives, allow about 2.5 hours)

- run a getsniffer to verify there are no uncorrectable errors (naviseccli -h ip_of_SPA getsniffer -rg raidgroupnumber)
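The 1-minute-per-GB waiting rule above is easy to turn into a quick calculation. A minimal sketch (the rule is just this thread's estimate, not an official formula):

```python
# Rough wait-time estimate for a background verify (setsniffer) pass,
# based on the rule of thumb above: about 1 minute per GB on a single
# disk in the raid group.

def sniff_wait_minutes(disk_size_gb: float, minutes_per_gb: float = 1.0) -> float:
    """Suggested time to allow one raid group's verify to complete."""
    return disk_size_gb * minutes_per_gb

if __name__ == "__main__":
    minutes = sniff_wait_minutes(146)  # 146GB drives, as in the example above
    # 146 minutes is about 2.4 hours; rounded up to 2.5 hours to be safe.
    print(f"Allow about {minutes:.0f} minutes (~{minutes / 60:.1f} hours)")
```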

 

Then, when you can shut down all servers that are using storage on the array you:

- run another backup (it has probably been at least a couple of days since you started swapping drives)

- pull another set of SPcollects

- shut down all connected servers

- in Navisphere (under Tools, if I remember correctly), select the backend bus speed wizard and go through it. This will end up rebooting the array.

- wait for the array to come back up

- check in Navisphere, under Physical and then the enclosure properties, whether the enclosure now shows it is running at 4Gb/s

- you can power up your servers again
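The final check (confirming the enclosure reports 4Gb/s) can be scripted against text copied out of the enclosure properties dialog. This is only a sketch: the "Current Speed" label below is an assumed, illustrative field name, and the real Navisphere output may word it differently.

```python
import re
from typing import Optional

# Sketch: scan enclosure-properties text for the reported bus speed.
# "Current Speed" is an assumed label for illustration only; check the
# exact wording in your Navisphere version.

def enclosure_speed_gbps(properties_text: str) -> Optional[int]:
    """Return the reported backend speed in Gb/s, or None if not found."""
    match = re.search(r"Current Speed:\s*(\d+)\s*Gb", properties_text, re.IGNORECASE)
    return int(match.group(1)) if match else None

sample = """Enclosure: Bus 0 Enclosure 0
Current Speed: 4Gbps
"""
print("Backend speed:", enclosure_speed_gbps(sample), "Gb/s")
```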

4 Posts

February 2nd, 2010 07:00

Also, just one more thing: is it possible to test one of the new hard drives in a non-FLARE OS slot first (i.e. not one of the first 5 slots), just as a precaution? Will that affect anything?

9.3K Posts

February 2nd, 2010 07:00

If you want to test a drive, do the hotspare first.

 

As the hotspare itself isn't redundant, do the following:

- note the hotspare raid group number and LUN number (it's a private LUN, so you have to expand the private LUN section in the raid group)

- unbind the hotspare LUN

- destroy the hotspare raid group

- swap the disk

- recreate the raid group

- recreate the hotspare

 

Remember to unbind this hotspare before you start swapping drives, or it'll increase the time you need (the array will first rebuild to the hotspare and, once that's finished, equalize back to the new disk).
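The time penalty described above (rebuild to the hotspare, then equalize back to the new disk) roughly doubles the per-disk wait. A small sketch of that arithmetic, assuming (for illustration only) that an equalize takes about as long as a rebuild:

```python
# Sketch of why unbinding the hotspare first saves time. The assumption
# that an equalize takes about as long as a rebuild is illustrative, not
# a measured figure.

def swap_time_hours(disks: int, rebuild_hours: float, hotspare_bound: bool) -> float:
    """Total time spent waiting on rebuilds while swapping `disks` drives."""
    per_disk = rebuild_hours * (2 if hotspare_bound else 1)  # rebuild + equalize
    return disks * per_disk

# Example: 15 drives at a hypothetical 3 hours per rebuild.
print("hotspare bound:  ", swap_time_hours(15, 3.0, True), "hours")   # 90.0
print("hotspare unbound:", swap_time_hours(15, 3.0, False), "hours")  # 45.0
```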

 

Also: with just one disk enclosure I highly doubt you'll notice any difference in speed between a 2Gbit/s and a 4Gbit/s backend, but once you start adding more enclosures (be sure to only buy 4Gbit/s drives; you can get SATA, 10k and 15k 4Gbit/s drives now), it'll give more breathing room for all the drives combined.

Any immediate performance difference you see is due to the increase in spindle speed (10k vs 15k).
