ringman
5 Posts
0
November 25th, 2009 07:00
RAID 1/0 on Celerra NS20
We have a Celerra NS20 with 2 DAEs, one Fibre Channel and the other SATA. The FC DAE has a full set of 260 GB Fibre Channel drives. Drives 0-4 are the RAID 5 set pre-configured by EMC with the NAS OS installed; the Celerra sees these drives as a clar_r5_performance storage pool. Drive 5 is an FC hot spare. From the next 5 drives (6-10) I created a RAID 5 RAID group with Navisphere and bound one 1 TB LUN from it. I then presented the LUN to the Celerra and created a storage pool from it. This config has been working since July.

Now I want to take the last 4 FC drives and create a RAID 1/0 to present to the Celerra. I referenced article emc138143 (something I wasn't familiar with until after I had created the RAID 5), and it says RAID 1/0 is supported with 4 disks: create 2 LUNs, each owned by a different SP. The article says that after I scan for the new disks, the Celerra should create a storage pool named clar_r10. Any foreseen problems with this? I was getting ready to do it today, but I have seen some discussions saying Celerras don't fully support RAID 1/0, and that you should instead create two RAID 1 groups on the CLARiiON and stripe them in the Celerra, so I'm second-guessing myself. We are on DART 5.6.43-8.
Thoughts?
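For reference, here is roughly what I was planning to run on the CLARiiON side with Navisphere CLI, per the article. This is only a sketch; the RAID group ID, LUN numbers, and disk positions are my assumptions (I'm assuming the FC DAE is bus 0, enclosure 0, so the last four drives would be 0_0_11 through 0_0_14):

# create the RAID group from the last four FC drives
naviseccli -h <SPA_IP> createrg 2 0_0_11 0_0_12 0_0_13 0_0_14

# bind two LUNs, one owned by each SP (the RAID type is set at first bind)
naviseccli -h <SPA_IP> bind r1_0 20 -rg 2 -cap 260 -sq gb -sp a
naviseccli -h <SPA_IP> bind r1_0 21 -rg 2 -cap 260 -sq gb -sp b

After adding the LUNs to the Celerra's storage group and rescanning, the article says the clar_r10 pool should appear.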



gbarretoxx1
366 Posts
0
November 25th, 2009 09:00
Hi,
The NS20, which is based on the CX3 series, does not support RAID 1/0; it supports RAID 1 instead.
The procedure you mentioned is a general rule, but each model (or series) has its own supported configurations.
In your case, if you really need mirroring, use two RAID 1 RAID groups (see the sketch below), and I recommend creating your file systems manually rather than using AVM.
The manual "Managing EMC Celerra Volumes and File Systems with Automatic Volume Management", available on Powerlink, has a table of the supported configurations per platform.
But, if you want to share, we can discuss the reasons why you want to use RAID 1 on a NAS.
Most NAS workloads have a read-to-write ratio heavily skewed toward reads, so the benefit of Raid 1 is rarely visible in a NAS context.
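A rough sketch of the two-group approach on the CLARiiON side (the RAID group IDs, LUN numbers, and disk positions here are examples only, assuming the last four drives are 0_0_11 through 0_0_14):

# two RAID 1 groups of two disks each
naviseccli -h <SPA_IP> createrg 2 0_0_11 0_0_12
naviseccli -h <SPA_IP> createrg 3 0_0_13 0_0_14

# one full-capacity LUN per group, default ownership alternating between the SPs
naviseccli -h <SPA_IP> bind r1 20 -rg 2 -sp a
naviseccli -h <SPA_IP> bind r1 21 -rg 3 -sp b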
Gustavo Barreto.
Peter_EMC
674 Posts
0
November 25th, 2009 22:00
With fewer than 5 disk drives, the only way to go is RAID 1.
With your 4 drives, create 2 RAID 1 groups and stripe your file system across them.
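Once the two mirrored LUNs show up as dvols on the Celerra (run nas_disk -list to see their names; d7 and d8 below are just placeholders), the manual stripe looks something like this:

# stripe across the two RAID 1 dvols (262144 = 256 KB, the usual stripe size for CLARiiON LUNs)
nas_volume -name stv_r1 -create -Stripe 262144 d7,d8

# build a metavolume on the stripe, then the file system on the metavolume
nas_volume -name mtv_r1 -create -Meta stv_r1
nas_fs -name fs_r1 -create mtv_r1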
ringman
5 Posts
0
November 30th, 2009 07:00
If I stripe the File System, can I still create a Storage Pool from it? Or will I be stuck with just volumes?
kaba1
1 Rookie
95 Posts
0
December 1st, 2009 11:00
I have an NS20FC and was able to build the RAID 1/0 with 4 disks, so it is supported.
Just make sure your disks are empty before you delete the RAID group and unbind your LUNs.
After you have finished creating the RAID group, you will need to rescan the CLARiiON in Celerra Manager, and your pool will appear.
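If you prefer the control station command line, the rescan can be done there too (assuming your first data mover has the usual name server_2):

# probe and save the newly presented LUNs, then check the pools
server_devconfig server_2 -create -scsi -all
nas_pool -list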
Hope it helps
gbarretoxx1
366 Posts
0
December 1st, 2009 15:00
Hi,
It worked, but that does not mean it is "supported".
All the documentation I consulted states that RAID 1/0 is supported only with two disks, and only on the CX4 and NX4.
This configuration might be "unsupported", even if it is technically possible.
See the supported-configuration table in the AVM manual I mentioned above.
Gustavo Barreto.
ringman
5 Posts
0
December 9th, 2009 14:00
Article emc138143 shows 2, 4, 6, or 8 disks as possible for RAID 10, so I'm not sure which document reflects the supported configuration. But can we agree that if I create 2 RAID 1s on the CLARiiON and stripe them in the Celerra, the performance will be the same as RAID 10? And correct me if I'm wrong, but RAID 10 should be a massive performance improvement over RAID 5, correct?
Thanks!
kaba1
1 Rookie
95 Posts
0
December 9th, 2009 15:00
Yes, RAID 10 will definitely improve performance. Typical applications would be databases such as Exchange or SQL Server; Microsoft best practices suggest RAID 10 for the database files and RAID 1 for the logs.
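The difference comes from the write penalty: a small random write costs 2 back-end I/Os on RAID 10 (two mirror writes) but 4 on RAID 5 (read data, read parity, write data, write parity). As a rough worked example, assuming ~180 IOPS per FC drive and a 70/30 read/write mix (both figures are assumptions):

4 drives x 180 IOPS = 720 back-end IOPS
RAID 10: 100 host IOPS cost 70 + (30 x 2) = 130 back-end IOPS, so roughly 550 host IOPS
RAID 5: 100 host IOPS cost 70 + (30 x 4) = 190 back-end IOPS, so roughly 380 host IOPS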
Rainer_EMC
4 Operator
8.6K Posts
0
December 9th, 2009 16:00
I think you have some misconceptions about Celerra storage pools.
System-defined pools are just a tool so that the system can recognize which dvols have similar performance characteristics, and so that AVM knows which dvols to use to build a file system. And yes, where it makes sense, AVM will also stripe.
If you are striping manually (MVM), then pools don't come into play; it is then your responsibility to decide which dvols (LUNs) are used where, i.e. to avoid mistakes like striping onto the same RAID group or only using LUNs from one storage processor.
User-defined pools are a bit different: there you do your own striping and put the stripes into the pool. They really only make sense if you have lots of disks and want an easier way of separating them, either for performance or for capacity, like one pool per department.
A pool isn't a "hard" object like a RAID group or an aggregate (in another vendor's speak). It makes no performance difference whether a certain config is built via AVM with a pool or completely manually, as long as the resulting config is the same.
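So to answer the earlier question directly: yes, you can put your manual stripe into a user-defined pool. A minimal sketch (the volume and pool names are just examples, reusing the stripe volume from the earlier post):

# put the manually created stripe volume into a user-defined pool
nas_pool -create -name r1_pool -description "2x RAID 1 striped" -volumes stv_r1
nas_pool -list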
hope that helps
Rainer