
Unsolved



1 Rookie • 358 Posts


June 18th, 2014 07:00

Move DAE from one NX4 to another

What, roughly, is involved in moving a DAE from one NX4 to another? Can it be done "hot"?

Assuming the filesystems and LUNs were removed from the disks in that DAE beforehand, is it plug and play? The other NX4 already has a DAE on it, so I assume it has the "enabler" to accept more DAEs.

I would imagine that if you can do it "hot", you would then log into the storage processors, discover the new disks, create LUNs and present the storage.
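For what it's worth, on the Celerra side the "discover the new disks" step usually happens from the Control Station once LUNs have been bound and added to the Celerra's storage group on the backend. A minimal sketch, assuming DART 6.0 syntax (check the man pages on your release):

    # Ask the Data Movers to rescan the backend for new devices
    server_devconfig ALL -create -scsi -all

    # Confirm the new disk volumes (d-vols) show up
    nas_disk -list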

1 Rookie • 358 Posts

June 19th, 2014 04:00

Celerra NX4 with a CLARiiON AX4-5F8 backend, to another of the same model and configuration.

Both are running 6.0.60-2.

The CLARiiON reports version 2.23.50.5.711,6.23.8 (1.2).

Source: NX4 SN SL7E90917000150000, AX4-5F8 SN SL7E9091700015. The DAE containing 12 x 600 GB 15k SAS drives will be taken from here.

Destination: NX4 SN SL7E91030001110000, AX4-5F8 SN SL7E9103000111. The DAE with those 12 drives will be added here.

It's most important that the destination stays up the whole time; otherwise it will have to be done after hours, or perhaps a few critical VMs vMotioned to local storage and back when done.

June 19th, 2014 04:00

This may be possible; however, there is a Procedure Generator document for this. I would strongly suggest getting support's help with it.

June 19th, 2014 04:00

May I ask which NX4 model you are planning to move the DAE from, and which NX4 model you are moving it to?

1 Rookie • 358 Posts

June 23rd, 2014 07:00

OK, I opened a case, so hopefully I hear back today, because I will actually be at the other site tomorrow and it would be great to get this done this week rather than wait another month until I'm there again.

I found the LUNs on the disks in "Enclosure 2", and now I am curious: is there a command to see which LUNs each filesystem is using?

We use it unified (NFS filesystems exported to VMware).

I only have one filesystem that I do not want to disturb. The others can be wiped out.

1 Rookie • 358 Posts

June 23rd, 2014 07:00

Actually I found a post here:

How to identify where a file system is residing on a backend array

This is great because I can see that RAID groups 5 and 6 are on this enclosure, and both of those have 3 LUNs each; they match up with what I see on the back end, and those filesystems are empty, so I am safe to remove these disks.
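For anyone searching later, the mapping can be walked roughly like this from the Control Station, with the backend check assuming Navisphere CLI is usable against the AX4 SPs (the filesystem name, LUN ID and RAID group number below are just the ones from this thread; confirm the exact syntax on your DART/FLARE release):

    # Which disk volumes (d-vols) back a given filesystem
    nas_fs -info vm4-sas

    # Which backend LUN and array each d-vol maps to
    nas_disk -list

    # On the backend, details for a LUN (including its RAID group) and for the RAID group itself
    naviseccli -h <SPA_IP> getlun 30
    naviseccli -h <SPA_IP> getrg 5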

The only thing I still need from support is how to inform the system that once this DAE is gone, it's OK... meaning do not keep sending alerts and put the array into an alarmed state because the DAE is missing.

So my thoughts are (rough CLI equivalents are sketched after this list):

- Remove these two filesystems from the VMware hosts.
- Remove these two filesystems' NFS exports.
- Remove these two filesystems.
- On the backend, destroy RAID groups 5 and 6.
- Somehow inform the system that it is OK if this DAE is removed.
- Remove the DAE.
- Install the DAE at the new site.
- On the backend, initialize the storage.
- Create RAID groups, a hot spare and LUNs.
- Create filesystem(s).
- Attach the filesystem(s) to the VMware hosts at this site.
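A rough sketch of what those steps could look like from the CLI, assuming Navisphere CLI works against the AX4 SPs and reusing the filesystem and LUN names from this thread; the switches, disk IDs, Data Mover name and storage group name are placeholders, so confirm everything against the Procedure Generator output (and support) before running any of it:

    # --- Source side: remove an expendable filesystem (example name vm5-sas) ---
    server_export server_2 -unexport -perm /vm5-sas    # drop the NFS export
    server_umount server_2 -perm vm5-sas               # unmount it from the Data Mover
    nas_fs -delete vm5-sas                             # delete the filesystem
    # ...repeat for the other filesystem sitting on RAID groups 5 and 6...

    # --- Source side: free the backend LUNs (IDs 30-35) and drop the RAID groups ---
    naviseccli -h <SPA_IP> storagegroup -removehlu -gname <celerra_sg> -hlu <hlu#>
    naviseccli -h <SPA_IP> unbind 30 -o                 # unbind each of LUNs 30-35
    naviseccli -h <SPA_IP> removerg 5                   # then removerg 6

    # --- Destination side, after the DAE is installed ---
    naviseccli -h <SPA_IP> createrg 5 0_2_0 0_2_1 0_2_2 0_2_3 0_2_4 0_2_5   # example bus_enclosure_disk IDs
    naviseccli -h <SPA_IP> bind r5 30 -rg 5                                 # bind each LUN
    naviseccli -h <SPA_IP> storagegroup -addhlu -gname <celerra_sg> -hlu <hlu#> -alu 30

    # Then rescan from the Control Station (server_devconfig / nas_disk -list, as noted
    # earlier in the thread), create the new filesystem(s) and export them to the hosts.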

Enclosure 2

Disk 0, 1, 2, 3, 4, 5 - LUN IDs 30, 31, 32 - RAID Group 5 - 2682.498 GB

Disk 6 - Hot Spare

Disk 7, 8, 9, 10, 11 - LUN IDs 33, 34, 35 - RAID Group 6 - 2145.999 GB

Backend RAID group          LUN ID   d-vol   Filesystem
SL7E9091700015-0005         30       d20     vm4-sas
                            32       d21     vm4-sas
                            31       d19     vm5-sas
SL7E9091700015-0006         35       d23     vm4-sas
                            34       d24     vm4-sas
                            33       d22     vm5-sas

1 Rookie • 358 Posts

June 23rd, 2014 08:00

I found this post and have everything done except for physically removing the DAE.

Re: Removing DAE

The reason I haven't removed it yet is that I cannot get to Navisphere on SPA. SPA is not responding to the web GUI, but SPB works fine. I need to make sure I can "restart the management server on both SPs at the same time", but I can only get to SPB.

Do you know the command-line equivalent, since the httpd service is hosed on my SPA?

I did all of the work of removing the LUNs from the storage group, unbinding the LUNs and the hot spare, and deleting the RAID groups on this DAE. All 12 disks are now free and "unbound".
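As a quick way to tell whether it is only the web service or the whole SP agent that is wedged, the basic agent query from Navisphere CLI (assuming naviseccli is installed somewhere that can reach the SPs) is:

    # If this returns agent/FLARE revision info, the SP agent is alive and it is
    # probably just the management/web component that needs restarting
    naviseccli -h <SPA_IP> getagent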

1 Rookie • 358 Posts

June 23rd, 2014 17:00

I was able to use the LYNX text-based browser to restart the management server on SPA, and that freed it up so it could be reached from other browsers again. So I removed the DAE and restarted the management servers. It is still showing Enclosure 2 as faulted. Support said to reboot each SP one at a time. I did that and it still shows Enclosure 2 faulted.

Also, after rebooting the SPs, for some strange reason a hot spare disk is all of a sudden "removed" (not sure how that happened), and two power supplies, Shelf 0/2 Power Supply A and B, show errors, but I think those belong to the DAE I removed.

Anyway, the latest SP Collects are uploaded to my case.

So for anyone reading: my advice is to engage EMC technical support. When you remove a DAE or reboot SPs it's going to call home anyway!
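For anyone following the same path, this is roughly what the text-browser workaround and the post-removal status checks looked like; the IP is a placeholder, the /setup path is the standard Navisphere SP setup page, and the naviseccli commands assume Navisphere CLI is enabled against the AX4 SPs:

    # Reach SPA's setup page with a text browser from the Control Station,
    # then choose "Restart Management Server" on that page (do the same for SPB)
    lynx http://<SPA_IP>/setup

    # After pulling the DAE, check enclosure, power supply and disk state from the CLI
    naviseccli -h <SPA_IP> getcrus     # enclosure/LCC/power supply/fan status
    naviseccli -h <SPA_IP> getdisk     # per-disk details, including state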
