29 Posts
0
856
January 7th, 2009 13:00
Adding/modifying additional storage to a NS500G
We have a NS500G running 5.5.33-2 getting storage from a CX500. This is a fairly old system and I forget the original details, but it appears that it has LUNs from a RAID5 (3+1) and a RAID5 (4+1) group.
I had to create a new file system and when I attempted to add a couple of LUNs, the one from the RAID5 (4+1) group was recognized without any problem using the server_devconfig command.
However the LUN from the RAID5 (3+1) group could not be created because the Celerra didn't recognize the 3+1 RAID group.
"... stor_dev=0x0011, RAID5(3+1), doesn't match any storage profile"
So my two questions are:
1. How could the original LUNs from the RAID5 (3+1) have been created and recognized by the Celerra?
2. If I expand the RAID5 (3+1) group on the CX500 to make it a RAID5 (4+1), will it automatically be recognized by the NS500G without affecting the existing LUNs on the system?
The only thing that changed since the original configuration was implemented was the upgrade of the FLARE OS and the NAS code.
Thanks.
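For reference, the discovery step that produced the error is the standard back-end rescan; a sketch (the exact output varies by NAS code version):

```shell
# Rescan the back-end storage on all Data Movers
server_devconfig ALL -create -scsi -all

# List the disks the Celerra now recognizes
nas_disk -list
```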


Peter_EMC
674 Posts
0
January 8th, 2009 00:00
They were maybe created manually. There are additional plausibility checks in the newer NAS codes, so it will no longer use "unsupported" LUN configurations.
Normally there are 2 (or 4) LUNs of the same size in the Celerra RAID group. If you extend the RAID group with the original 2 LUNs, you have to create a third LUN in this RAID group in order to use the additional space. If you want to use the automated volume manager (AVM), you should avoid creating a third LUN in a RAID group.
So I would recommend deleting the LUNs (of the 3+1 disks) from the Celerra first (this is only possible if there is no FS on them!), then building a 4+1 disk group with 2 LUNs of the same size, and then rediscovering the LUNs on the Celerra. If these are FC disks, then one of these LUNs should be owned by SPA and the other by SPB; for ATA they have to be owned by the same SP.
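A rough sketch of that sequence, assuming the LUNs carved from the 3+1 group show up on the Celerra as d7 and d8 (hypothetical disk names):

```shell
# Verify the disks from the 3+1 group are unused (no file systems on them)
nas_disk -list

# Remove them from the Celerra's view (only possible when unused)
nas_disk -delete d7 -perm
nas_disk -delete d8 -perm

# After rebuilding the group as 4+1 with two equal-size LUNs on the CX500,
# rediscover the storage on all Data Movers
server_devconfig ALL -create -scsi -all
```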
When putting these LUNs into the Celerra storage group, please assign an HLU >= 16 for them.
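The HLU assignment can be done from Navisphere CLI when adding the LUNs to the storage group; a sketch, assuming the storage group is named Celerra_NS500G and the new array LUNs are 20 and 21 (all hypothetical names/numbers):

```shell
# Map array LUN (ALU) 20 to host LUN (HLU) 16, and ALU 21 to HLU 17
naviseccli -h spa_hostname storagegroup -addhlu -gname Celerra_NS500G -hlu 16 -alu 20
naviseccli -h spa_hostname storagegroup -addhlu -gname Celerra_NS500G -hlu 17 -alu 21
```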
dynamox
9 Legend
•
20.4K Posts
0
January 8th, 2009 04:00
I think this question was asked already, but I can't find the thread. Do you still have to put LUNs on the same SP if you are using the new dual-ported SATA II drives?
Peter_EMC
674 Posts
0
January 8th, 2009 06:00
Although it should no longer be necessary for dual-ported SATA II drives, we still recommend:
"For ATA disks, all LUNs in a RAID group must belong to the same SP. "
g.srinivasan
29 Posts
0
January 8th, 2009 07:00
Thanks for the response.
Rainer_EMC
4 Operator
•
8.6K Posts
1
January 8th, 2009 07:00
Using full + incremental fs_copy you should be able to keep the downtime pretty short.
If you're not that familiar with the CLI, you could ask your EMC sales contact for a Replicator loan/eval license.
Rainer
P.S.: Technically, under very controlled circumstances and with support approval, you could either expand the RAID group
or use LUN migration, but then the Celerra would still "think" this file system was created using 3+1, which makes expanding it more cumbersome.
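For anyone following the fs_copy route, the full-plus-incremental pattern looks roughly like this. This is a sketch from memory; the checkpoint and file-system names are hypothetical, and exact option syntax varies by NAS code version, so check the fs_copy man page before running it:

```shell
# Full copy: take a checkpoint of the source FS and copy it while users stay online
fs_ckpt src_fs -name src_ckpt1 -Create
fs_copy -start src_ckpt1 dst_fs

# Short outage: quiesce, take a second checkpoint, and copy only the delta
fs_ckpt src_fs -name src_ckpt2 -Create
fs_copy -start src_ckpt2 dst_fs -fromfs src_ckpt1
```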