EMC has to update the DART code on the Celerra; be advised it will most likely also need a FLARE code update on the Clariion portion as well (two separate upgrades). Once you configure the disks on the Clariion, the setup_clariion command isn't going to show you that space, as it only configures unbound disk space.
For an NS20 (without FC) on 5.5 code I suggest getting in touch with your local EMC technical contact.
Mainly what I am trying to do is get our storage "back." However it's configured now, I have 1.1 TB. There are fifteen 300 GB HDDs and I only have 1.1 TB of storage? It did not make sense to us, so we figured we would change it to something a little more reasonable. So, if I delete the file systems, I still have the "where did all my disk space go?" issue. I know that you lose disk space with RAID 5, but I see no reason why it would be 75% of my total space.
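A quick back-of-the-envelope check (plain Python, all numbers approximate) shows RAID 5 parity alone cannot explain losing ~75% of the raw space:

```python
# Rough sanity check of the numbers above (approximate - this toy
# arithmetic ignores vault drives, hot spares and filesystem metadata
# that a real Clariion/Celerra setup reserves).

disks = 15
disk_gb = 300

raw_gb = disks * disk_gb
print(f"raw capacity: {raw_gb} GB")          # 4500 GB

# RAID 5 parity only costs one disk per group, e.g. a hypothetical
# 4+1 group loses 1/5 of its raw capacity:
r5_usable_gb = 4 * disk_gb
print(f"one 4+1 RAID 5 group: {r5_usable_gb} GB usable of {5 * disk_gb} GB raw")

# So parity overhead alone is roughly 11-20% depending on group width -
# nowhere near the ~75% loss described, so unbound disks or
# unconfigured space are the more likely explanation.
```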
I have the 1 TB of performance storage, but the economy pool is nowhere to be found. I tried creating a new storage pool, but it says there are no volumes to choose from.
dynamox
9 Legend
•
20.4K Posts
1
August 17th, 2009 06:00
start out by looking over NS20 documentation at
Home > Support > Product and Diagnostic Tools > Celerra Tools > NS20, NS20FC
specifically Step 5 - Configure for Production
http://corpusweb130.corp.emc.com/ns/common/post_install/Configure_storage_Non_FC.pdf
Rainer_EMC
4 Operator
•
8.6K Posts
1
August 17th, 2009 08:00
what DART version are you using?
do you have a NS20 or a NS20FC?
I suggest upgrading to 5.6.45 - that way you can use the provision storage wizard in the Celerra Manager GUI to configure disks the way Celerra likes them.
You can also delete LUNs and RAID groups via the Celerra Manager GUI and CLI.
In order to get an idea of the terminology and how Celerra uses storage, I suggest taking a look at the
Managing EMC Celerra Volumes and File Systems with Automatic Volume Management
manual, which is on the Celerra documentation CD or available via Powerlink.
Basically the way it works:
- on the Clariion, disks are configured into a RAID group - note that Celerra supports and recognizes only a subset of the RAID levels and sizes possible with Clariion
- LUNs are created (bound) from a RAID group and put into the Celerra storage group
- each LUN is seen by the Celerra as a disk volume (dvol)
- Celerra AVM puts LUNs with similar performance characteristics into a storage pool - like clar_r5_performance
- the AVM volume manager cuts slices from these dvols and combines them with other slices via striping or concatenation into a metavolume
- on top of that metavolume a file system is built
so a storage pool would be (loosely) similar to an aggregate (not completely, since an aggregate is a hard concept and a storage pool is only a management grouping),
and a file system would be a volume
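The layering above can be sketched as a toy model - plain Python dictionaries for illustration only, with hypothetical dvol and LUN names, not actual AVM code:

```python
# Toy model of the Clariion/Celerra storage layering - illustration only.

# 1. Clariion: disks -> RAID group -> LUNs bound from that group
raid_group = {"raid": "4+1 R5", "luns": ["LUN16", "LUN17"]}

# 2. Each LUN presented to the Celerra becomes a disk volume (dvol);
#    the d7/d8 names here are made up for the example
dvols = [{"name": f"d{i + 7}", "lun": lun}
         for i, lun in enumerate(raid_group["luns"])]

# 3. AVM groups dvols with similar performance into a storage pool
pool = {"name": "clar_r5_performance",
        "members": [d["name"] for d in dvols]}

# 4. AVM slices the dvols and stripes/concatenates slices into a metavolume
metavolume = {"name": "mtv1", "stripe_of": pool["members"]}

# 5. The file system is built on top of the metavolume
filesystem = {"name": "fs01", "on": metavolume["name"]}

print(filesystem)   # {'name': 'fs01', 'on': 'mtv1'}
```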
as far as why the Celerra didn't see your new dvols, the most likely causes are:
- wrong RAID level or number of disks
- HLU <16
- LUN not in Celerra storage group
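Those three checks can be written up as a hypothetical checklist function (a sketch only - the supported-layout table below is a simplification for the example, not the real Celerra support matrix):

```python
# Simplified example layouts, NOT the full Celerra support matrix:
# e.g. 4+1 RAID5, 8+1 RAID5, mirrored pair.
SUPPORTED_LAYOUTS = {("RAID5", 5), ("RAID5", 9), ("RAID1", 2)}

def dvol_visibility_problems(raid_type, disk_count, hlu, in_celerra_group):
    """Return the reasons (if any) a new LUN would not show up as a dvol."""
    problems = []
    if (raid_type, disk_count) not in SUPPORTED_LAYOUTS:
        problems.append("unsupported RAID level / disk count")
    if hlu < 16:
        problems.append("HLU below 16 - data LUNs are expected at HLU 16+")
    if not in_celerra_group:
        problems.append("LUN not in the Celerra storage group")
    return problems

print(dvol_visibility_problems("RAID5", 5, 17, True))    # [] -> should be visible
print(dvol_visibility_problems("RAID5", 7, 3, False))    # all three problems
```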
take a look at the Powerlink location that dynamox quoted - it will give you details on how to configure the disks for Celerra use
setup_clariion will most likely not help - it only works if you have a NAS-only system (not an NS20FC), and it will only do fixed shelf-by-shelf configs
dynamox
9 Legend
•
20.4K Posts
0
August 17th, 2009 17:00
/nas/sbin/model
Chainsaw1
12 Posts
0
August 17th, 2009 17:00
/nas/sbin/setup_clariion -init
Found CLARIION(s) APMxxxxxxxx260
Setup CLARiiON APMxxxxxxxx260 storage device...
System xxx.xxx.xxx.28 is up
System xxx.xxx.xxx.29 is up
Clariion Array: APMxxxxxxxx260 Model: CX3-10 Memory: 1024
No matching template found.
Failed
Setup of CLARiiON APMxxxxxxxx260 storage device not completed.
Chainsaw1
12 Posts
0
August 17th, 2009 18:00
I tried using Navisphere to upgrade the firmware like you mentioned, but it told me to contact a support rep.
Chainsaw1
12 Posts
0
August 17th, 2009 18:00
Chainsaw1
12 Posts
0
August 17th, 2009 20:00
CLARIION(s) list config APMxxxxxxxx260 will be setup.
Setup CLARiiON list storage device...
Enter the ip address for A_list:
dynamox
9 Legend
•
20.4K Posts
0
August 17th, 2009 20:00
setup_clariion2 list config APMxxxxxxxx260
RobertDudley
2 Intern
•
448 Posts
0
August 18th, 2009 04:00
Rainer_EMC
4 Operator
•
8.6K Posts
0
August 18th, 2009 05:00
It can get a bit complicated to change the RAID setup there.
However, unless you want to change the RAID levels, you might not have to.
If you just want to start clean, just delete all file systems.
In terms of upgrading to 5.6, that still has to be done by EMC or partner personnel.
Just open a support case (if you have a maintenance contract).
hope that helps
Rainer
Chainsaw1
12 Posts
0
August 18th, 2009 05:00
Rainer_EMC
4 Operator
•
8.6K Posts
0
August 18th, 2009 06:00
you need to run setup_clariion2 as root, and you might have to cd into /nas/sbin/setup_backend first
Rainer_EMC
4 Operator
•
8.6K Posts
0
August 18th, 2009 07:00
15x300GB would normally be set up as 4+1 R5 + hot spare + 8+1 R5,
so I would expect around 3 TB - 1 TB in clar_r5_performance and 2 TB in clar_r5_economy
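The arithmetic behind that estimate, as a quick sketch (decimal GB, ignoring vault and metadata overhead):

```python
# 15 disks split as 4+1 RAID5, one hot spare, and 8+1 RAID5.
disk_gb = 300

perf_gb = 4 * disk_gb      # 4+1 R5: four data disks, one disk's worth of parity
econ_gb = 8 * disk_gb      # 8+1 R5: eight data disks, one disk's worth of parity

print(f"clar_r5_performance: ~{perf_gb} GB")   # 1200 GB
print(f"clar_r5_economy:     ~{econ_gb} GB")   # 2400 GB

# After binary (GiB) conversion and Celerra metadata overhead, this
# lands close to the ~1 TB + ~2 TB figures quoted above; the hot spare
# contributes no usable capacity.
```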
Chainsaw1
12 Posts
0
August 18th, 2009 16:00
Rainer_EMC
4 Operator
•
8.6K Posts
0
August 19th, 2009 11:00
in terms of pools - these are system pools that get created automatically
you can check what you have with "nas_pools -size -all"