January 7th, 2009 13:00

Adding/modifying additional storage to a NS500G

We have an NS500G running 5.5.33-2 that gets its storage from a CX500. This is a fairly old system and I forget the original details, but it appears to have LUNs from both a RAID5 (3+1) and a RAID5 (4+1) group.

I had to create a new file system, and when I attempted to add a couple of LUNs, the one from the RAID5 (4+1) group was recognized without any problem using the server_devconfig command.

However the LUN from the RAID5 (3+1) group could not be created because the Celerra didn't recognize the 3+1 RAID group.

"... stor_dev=0x0011, RAID5(3+1), doesn't match any storage profile"

So my two questions are:

1. How could the original LUNs from the RAID5 (3+1) have been created and recognized by the Celerra?

2. If I expand the RAID5 (3+1) group on the CX500 to make it a RAID5 (4+1) will it automatically be recognized by the NS500G without affecting the existing LUNs on the system?

The only thing that changed since the original configuration was implemented was the upgrade of the FLARE OS and the NAS code.

Thanks.

674 Posts

January 8th, 2009 00:00

1. How could the original LUNs from the RAID5 (3+1) have been created and recognized by the Celerra?


They may have been created manually. There are additional plausibility checks in the newer NAS codes, so "unsupported" LUN configurations are no longer accepted.

2. If I expand the RAID5 (3+1) group on the CX500 to make it a RAID5 (4+1) will it automatically ...


Normally there are 2 (or 4) LUNs of the same size in a Celerra RAID group. If you extend the RAID group while keeping the original 2 LUNs, you would have to create a third LUN in that RAID group to use the additional space. If you want to use the Automatic Volume Manager (AVM), you should avoid creating a third LUN in a RAID group.
So I would recommend first deleting the LUNs (on the 3+1 disks) from the Celerra (only possible if there is no file system on them!), then building a 4+1 disk group with 2 LUNs of the same size, and then rediscovering the LUNs on the Celerra. For FC disks, one of these LUNs should be owned by SP A and the other by SP B; for ATA disks they must both be owned by the same SP.
When putting these LUNs into the Celerra storage group, please assign them an HLU >= 16.
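The binding and storage-group steps might look roughly like this with Navisphere CLI (a sketch only; the SP address, RAID group ID, LUN numbers, capacity, and storage group name are all placeholders, and option order can vary between navicli/naviseccli releases):

```shell
# bind two equal-sized RAID5 LUNs in the new 4+1 RAID group
# (FC disks: split ownership across SPs; ATA: keep both on one SP)
navicli -h <SPA_address> bind r5 20 -rg 10 -sq gb -cap 200 -sp a
navicli -h <SPA_address> bind r5 21 -rg 10 -sq gb -cap 200 -sp b

# present them to the Celerra storage group with HLU >= 16
navicli -h <SPA_address> storagegroup -addhlu -gname <celerra_sg> -hlu 16 -alu 20
navicli -h <SPA_address> storagegroup -addhlu -gname <celerra_sg> -hlu 17 -alu 21
```

After that, a rediscovery on the Celerra side should pick the new LUNs up.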

9 Legend • 20.4K Posts

January 8th, 2009 04:00

Peter,

I think this question was asked already, but I can't find the thread. Do you still have to put LUNs on the same SP if you are using the new dual-ported SATA II drives?

674 Posts

January 8th, 2009 06:00

Dynamox,

although it should no longer be necessary for dual-ported SATA II drives, we still recommend:
"For ATA disks, all LUNs in a RAID group must belong to the same SP."

9 Legend • 20.4K Posts

January 8th, 2009 06:00

thank you Peter


January 8th, 2009 07:00

I had also opened a service request in case this was a known issue. As you mentioned, after the NAS code upgrade there are additional checks that prevent the use of RAID5 (3+1). Since there are file systems on the LUNs, one option mentioned was to use the file system copy command to migrate the data, shut down the NAS clients, and then export the new LUN to the clients. I will try that to minimize the time the servers have to be without their NAS storage.

Thanks for the response.

4 Operator • 8.6K Posts

January 8th, 2009 07:00

That would certainly be the cleanest option if you temporarily have the extra storage.

Using full + incremental fs_copy you should be able to keep the downtime pretty short.
If you're not that familiar with the CLI, you could ask your EMC sales contact for a Replicator loan/eval license.

Rainer
P.S.: Technically, under very controlled circumstances and with support approval, you could either expand the RAID group or use LUN migration, but then the Celerra would still "think" this file system was created on 3+1, which would make expanding it more cumbersome.
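A rough sketch of the full + incremental fs_copy flow (file system and checkpoint names are placeholders, and the exact options depend on the NAS code release, so check the fs_copy and fs_ckpt man pages first):

```shell
# take a checkpoint of the source and run the full (baseline) copy
# to the new file system while clients stay online
fs_ckpt src_fs -name src_ckpt1 -Create
fs_copy -start src_ckpt1 dst_fs -option monitor=off

# at cutover: stop client access, take a second checkpoint,
# then copy only the changes since the baseline
fs_ckpt src_fs -name src_ckpt2 -Create
fs_copy -start src_ckpt2 dst_fs -fromfs src_ckpt1 -option monitor=off
```

The second pass only moves the delta, which is what keeps the outage window short.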
