
January 16th, 2011 03:00

Move DAEs

Hi All,

We're planning to move 10 FC DAEs from a CX3-80 onto our new CX4-960 system.

The CX4-960 is running Release 29 and has 5 DAEs with IDs 0_0, 0_1, 1_0, 2_0, 3_0.

The CX3-80 is running Release 26, patch 31.

1. What pre-checks need to be completed before the move?

2. After mounting the DAEs on the CX4-960, will it automatically pick up the enclosure IDs, or do they need to be set manually? What would be the best practice for doing this?

3. Any other considerations?

All inputs will be appreciated.

regards,

samir

542 Posts

January 16th, 2011 07:00

One of the first things to check is whether there are any 2Gb disks in the DAEs you are moving. If there are, you will want to isolate those DAEs to one bus so the other buses can run at 4Gb. Also, are the DAEs you're moving all DAE2P and DAE3P?

Also make sure that you unbind all LUNs, RAID groups and hot spares on those CX3-80 DAEs before removing them.

How are your current DAEs racked? Can I assume that, starting from the bottom, they are 0_0, 1_0, 2_0, 3_0, 0_1? If so, then I would recommend the following order so that you don't have to purchase longer back-end cables:

1_1, 2_1, 3_1, 0_2, 1_2, 2_2, 3_2, 0_3, 1_3, 2_3. So your rack will look like this from the back (a small sketch of this ordering is at the end of this post):

TOP

2_3

1_3

0_3

3_2

2_2

1_2

0_2

3_1

2_1

1_1

0_1  Current

3_0  Current

2_0  Current

1_0  Current

0_0  Current

BOTTOM

You will need to set the enclosure ID for each. I would recommend that you do them one at a time and use the NST/USM. Rack the DAE and apply power to one side. Do not attach any back-end cables yet. Set the enclosure ID, and then attach the back-end cables.

I would also disable email home on the CX4-960 until you're done, to avoid a lot of emails.
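For what it's worth, here is a rough Python sketch of that ordering logic, purely illustrative: the starting bus and the enclosures already present are taken from this thread, and plan_enclosure_ids is just a made-up helper name, not anything shipped with the array.

# Round-robin planner for new enclosure IDs across the back-end buses,
# reproducing the ordering suggested above (illustrative only).
def plan_enclosure_ids(existing, new_dae_count, buses=4):
    """existing maps bus number -> highest enclosure number already in use."""
    next_encl = {bus: existing.get(bus, -1) + 1 for bus in range(buses)}
    plan = []
    bus = 1  # bus 0 already carries two enclosures (0_0 and 0_1), so start on bus 1
    while len(plan) < new_dae_count:
        plan.append(f"{bus}_{next_encl[bus]}")
        next_encl[bus] += 1
        bus = (bus + 1) % buses
    return plan

# The CX4-960 in this thread already has 0_0, 0_1, 1_0, 2_0 and 3_0:
print(plan_enclosure_ids({0: 1, 1: 0, 2: 0, 3: 0}, new_dae_count=10))
# -> ['1_1', '2_1', '3_1', '0_2', '1_2', '2_2', '3_2', '0_3', '1_3', '2_3']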

1 Rookie

 • 

20.4K Posts

January 16th, 2011 07:00

Are they all DAE3P/DAE4P enclosures? You will need to set the enclosure ID with a pencil or something sharp. Use NST and it will walk you through connecting all the cables and such. I would also decommission those 10 DAEs cleanly before you disconnect them (delete all LUNs, RAID groups and hot spares that use those DAEs).
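If you want to double-check that before pulling cables, here is a minimal pre-check sketch along those lines. It assumes Navisphere Secure CLI (naviseccli) is installed and reachable from a management host and uses only the standard getdisk verb; the SP address, bus list and exact output field values are assumptions you would need to adjust for your environment and FLARE release.

# Pre-check sketch: flag any disk on the buses being emptied that does not
# report an Unbound/Empty state, i.e. something is still using it.
import subprocess

SP_IP = "10.0.0.1"            # placeholder: SPA address of the CX3-80
BUSES_TO_REMOVE = {"2", "3"}  # placeholder: back-end buses being emptied

def navi(*args):
    """Run a naviseccli command against the SP and return its stdout."""
    result = subprocess.run(["naviseccli", "-h", SP_IP, *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

# getdisk prints one block per disk, each starting with a
# "Bus X Enclosure Y Disk Z" header followed by fields such as "State:".
current_disk, current_bus = None, None
for line in navi("getdisk").splitlines():
    if line.startswith("Bus "):
        current_disk = line.strip()
        current_bus = line.split()[1]
    elif line.strip().startswith("State:") and current_bus in BUSES_TO_REMOVE:
        state = line.split(":", 1)[1].strip()
        if state not in ("Unbound", "Empty", "Removed"):
            print(f"Still in use: {current_disk} -> {state}")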

234 Posts

January 16th, 2011 21:00

Thanks a lot Kenn and dynamox, these inputs are really helpful. The CX4 rack houses two 5300 switches at the top, so I would need to fit fewer DAEs there; we will also use the CX3-80 rack. I was thinking of disconnecting the CX3-80 SPE, SPS and DAE-OS and installing some DAEs in that space. Will that work?

We won't be able to move the DAE-OS, but is it possible to move the remaining 146GB 15K disks from that enclosure and install them in an enclosure consisting of 300GB 15K disks?

All DAEs are DAE3P.

regards,

Samir

1 Rookie

 • 

20.4K Posts

January 17th, 2011 06:00

So the CX3 is going away completely? I am sure you can re-use the EMC 40U rack for CX4 expansion. Yes, you can put different-capacity FC drives in the same DAE; obviously you would not want to bind them in a RAID group with the 300GB drives.

5 Practitioner

 • 

274.2K Posts

January 17th, 2011 18:00

Samir, in addition to the tips already given to you, before you move any disks between a CX3 and a CX4 array please read and understand Primus solution emc251613:

http://csgateway.emc.com/primus.asp?id=emc251613

jim

1 Rookie

 • 

20.4K Posts

January 17th, 2011 19:00

Jim,

so what happens during an in-place hardware upgrade of a CX3 to a CX4?

542 Posts

January 17th, 2011 20:00

Jim, that support article is a little weird. First, none of the scenarios show going from CX3 to CX4; all the issues with the zero mark are from CX4 to CX3. But at the end it says that moving disks from CX3 to CX4 (or vice versa) can lead to data loss.

I can tell you from personal experience: I just finished a 6-month migration project in which I moved 12 FC and SATA DAEs from a CX3-80 and placed them on two CX4s, then re-carved them, and the LUNs have been in production for months without any COH or soft media errors.

I think that article has a typo in it. I understand that taking a disk that was zero-marked on a CX4 and putting it in a CX3 would be bad.

234 Posts

January 17th, 2011 20:00

Hi Jim,

We're going to move the disks and DAEs from the CX3-80 onto the CX4-960 after destroying any RAID groups on the CX3-80; the CX3-80 will be decommissioned.

regards,

Samir

234 Posts

January 20th, 2011 00:00

The number of FC DAE3Ps to move is now 13, and there are two DS5300 switches in the CX4-960 rack already.

I need your input on the enclosure IDs to set, now that the DAEs will be split between the two racks.

regards,

Samir

1 Rookie

 • 

20.4K Posts

January 20th, 2011 05:00

The enclosure ID depends on where you are going to place them on the bus; I would try to balance them between the available buses. Hopefully you have cables long enough to stretch between the two racks.

51 Posts

January 23rd, 2011 16:00

If you need to use a 2Gb bus, "downgrade" the bus before you take the CX4 into production. The operation reboots both SPs at the same time.

--

Jussi

159 Posts

January 24th, 2011 18:00

Be careful using disks from the old array in the new DAEs alongside existing disks. You need to make sure that you are not mixing 4Gb and 2Gb disks, or you will need to downgrade the bus, and that is probably not what you want to do.

234 Posts

January 26th, 2011 23:00

Hi all,

The disks to be moved are all 300GB FC 4Gb/s 15K in DAE3Ps. I've destroyed the hot spares, storage groups, LUNs, metaLUNs, private LUNs and all RAID groups; the disks are all in the UNBOUND state on the CX3-80.

I hope we can now move the DAEs onto the CX4-960 and use NST/USM to connect them.

regards,

Samir

234 Posts

February 1st, 2011 23:00

Thanks for all your inputs, the task was completed successfully.

regards,

Samir

4.5K Posts

February 2nd, 2011 11:00

Just a closing note on this topic. The CX4-960 can use a lot of system cache, up to about 10GB for write cache, but there is a requirement that the disks and DAEs on bus 0 enclosure 0 must be running at 4Gb in order to use the maximum cache. If the bus is running at 2Gb, then the maximum write cache is limited to about 8GB. This is due to the time required to dump the cache to the vault in case of an SP failure: at 2Gb bus speed you can't dump 10GB before the SPS batteries run out, but you can dump 8GB.

This is covered in the latest Best Practices for R30.

EMC CLARiiON Performance and Availability Release 30 Firmware Update Applied Best Practices.pdf

http://powerlink.emc.com/km/live1/en_US/Offering_Technical/White_Paper/h5773-clariion-best-practices-performance-availability-wp.pdf
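As a very rough back-of-envelope illustration of that constraint (the usable-throughput figures below are assumptions for illustration only, roughly what a 2Gb and a 4Gb FC bus can sustain after encoding overhead, not official numbers; the real limit is whether the dump finishes within the SPS hold-up time):

# Back-of-envelope: how long a vault dump takes at each back-end speed.
ASSUMED_MBPS = {"2Gb": 200, "4Gb": 400}   # assumed usable MB/s per bus speed

for speed, mbps in ASSUMED_MBPS.items():
    for cache_gb in (8, 10):
        seconds = cache_gb * 1024 / mbps
        print(f"{cache_gb}GB write cache at {speed}: ~{seconds:.0f}s to dump")
# At 2Gb, a 10GB dump takes roughly 25% longer than an 8GB dump, which is
# why the array caps write cache at about 8GB when bus 0 enclosure 0 runs at 2Gb.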

glen
