
January 13th, 2009 13:00

RAID5 vs RAID6 for SATA

We have an NS20 with a single tray of 15x 1TB SATA Disks. These are currently configured as such:

0-4 -> Vault
5 -> Hot Spare

Leaving 6-14 available (9 total). There are two configurations I'm torn between at the moment:

6-14 -> RAID5

and

6 -> Hot Spare
7-14 -> RAID6

Given the 1TB size of the disks, I'm concerned that the rebuild times for a RAID5 array are a little long. However, since the RAID6 can't use all 9 remaining disks (it only allows 8), the alternative loses another 2x 1TB of capacity.
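For reference, here's the back-of-the-envelope capacity arithmetic behind that (a minimal sketch in Python, assuming a raw 1TB per disk and ignoring formatted-capacity overhead):

# Rough usable-capacity comparison of the two candidate layouts.
# Assumes 1TB raw per disk and ignores formatting overhead - illustration only.
DISK_TB = 1

def raid5_usable(n_disks):
    # RAID5: one disk's worth of parity per group.
    return (n_disks - 1) * DISK_TB

def raid6_usable(n_disks):
    # RAID6: two disks' worth of parity per group.
    return (n_disks - 2) * DISK_TB

# Option 1: disks 6-14 as a single 8+1 RAID5, no extra hot spare.
print("RAID5 over 9 disks:", raid5_usable(9), "TB usable")   # 8 TB

# Option 2: disk 6 as a second hot spare, disks 7-14 as 6+2 RAID6.
print("RAID6 over 8 disks:", raid6_usable(8), "TB usable")   # 6 TB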

The data on these disks is to be replicated to another NS20 and backed up to tape, but the implications of losing an array to a dual disk failure are still significant.

What are people's thoughts on this? What about things like RAID5 vs RAID6 performance and rebuild priority?

Any suggestions or advice would be very much appreciated.

8.6K Posts

January 14th, 2009 03:00

Here's what's supported for NAS LUNs on the NS20:

Disk Group Type    Attach Type      Storage Profile/Storage Pool    Default Number of Disk Volumes

RAID1              Fibre Channel    clar_r1                         2
4+1 RAID5          Fibre Channel    clar_r5_performance             2
4+2 RAID6          Fibre Channel    clar_r6                         2
6+2 RAID6          Fibre Channel    clar_r6                         2
8+1 RAID5          Fibre Channel    clar_r5_economy                 2
12+2 RAID6         Fibre Channel    clar_r6                         4

4+1 RAID5          ATA              clarata_archive                 1
4+1 RAID3          ATA              clarata_r3                      1
4+2 RAID6          ATA              clarata_r6                      2
6+1 RAID5          ATA              clarata_archive                 1
6+2 RAID6          ATA              clarata_r6                      2
8+1 RAID5          ATA              clarata_archive                 2
8+1 RAID3          ATA              clarata_r3                      2
12+2 RAID6         ATA              clarata_r6                      4

Sorry for the formatting - there doesn't seem to be a way to do tables nicely in the forum's simple editor.

2 Intern • 20.4K Posts

January 13th, 2009 19:00

Carwyn,

I was looking at this page and did not even see RAID6 listed as supported for the NS20 - maybe it's old information:

http://corpusweb130.corp.emc.com/ns/common/post_install/Config_storage_integrated.htm

I have an NS80, so for my SATA enclosures I use the ATA_RAID5_HS_6+1_6+1 template. This is used for your typical file shares and so far so good - no complaints on performance.

January 14th, 2009 03:00

We have an NS20FC, so we have direct access to the Clariion CX-10 at the back. You end up having to configure elements of the storage manually, but the Celerra can still use it - basically it's a user-defined pool.

Do you have any experience with SATA RAID5 rebuilds? One of the disks in our vault had an issue a while back (1TB RAID5) and the rebuild took many, many hours, as you might imagine. We're very concerned about a second disk failure in the window between the failure and the hot spare being rebuilt into the array.
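To put a rough number on that worry, this is the crude model I've been using (a sketch only - it assumes independent failures and a constant failure rate, and the AFR and rebuild-time figures are placeholders rather than vendor data):

# Chance that a second disk in the group fails while the first rebuild runs.
# Independence and a constant failure rate are both optimistic assumptions;
# the 2% AFR and 24-hour rebuild below are made-up placeholder values.
ANNUAL_FAILURE_RATE = 0.02
HOURS_PER_YEAR = 24 * 365

def p_second_failure(surviving_disks, rebuild_hours):
    # Probability that at least one surviving member fails during the rebuild.
    p_one = ANNUAL_FAILURE_RATE * rebuild_hours / HOURS_PER_YEAR
    return 1 - (1 - p_one) ** surviving_disks

# 8+1 RAID5: eight surviving members exposed during a 24-hour rebuild.
print(f"{p_second_failure(8, 24):.3%}")   # roughly 0.04%

# A 6+2 RAID6 group survives that second failure, so the comparable risk
# there is a third failure during a double rebuild - far smaller again.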

Having looked at some stats, I'm leaning more towards accepting the capacity loss in favour of the extra resilience. This is perhaps a moot point though, given the vault is a 4+1 R5 also on 1TB disks and the data is also on tape.

Carwyn

8.6K Posts

January 14th, 2009 04:00

Difficult question.

One thing you should keep in mind is that you shouldn't build a file system across both RAID5 and RAID6 - so if you want to do that, it would be better to use 8+1 R5 for the other disks.

Note that you wouldn't have to use user-defined pools (unless you want to utilize all the disks in one file system).

Also, a rebuild doesn't necessarily mean that the disk is really bad and all the data is unrecoverable. The Clariion does a lot of things (block checksums, sniffer, pro-active sparing...) to predict drive problems, and it is possible to start a rebuild before you get into trouble. Quite a number of the disks we get in as "failed" for analysis are actually good.

2 Intern • 20.4K Posts

January 14th, 2009 08:00

Carwyn,

search PowerLink for the paper titled "The Influence of Priorities on EMC Clariion LUN Management Operations". There is a section on LUN rebuilds, priorities, etc. You might find it helpful.

2 Intern • 20.4K Posts

January 14th, 2009 08:00

Rainer, are these types only for the FC-enabled NS20 or for the integrated model as well?

January 15th, 2009 15:00

clar_r6
High availability at low cost using CLARiiON CLSTD disks from 4+2, 6+2, or 12+2 RAID 6 disk groups

clarata_archive
Archival performance at the lowest cost, using CLATA drives in a RAID 5 configuration


As long as only one disk fails, the rebuild priority will be the same for both RAID configurations. If you consider the R5 config, the more disks there are, the longer the rebuild time will be.

8.6K Posts

January 16th, 2009 06:00

So the following would be unsupported when used with the Celerra?

RAID1/0 using 4x FC
RAID5 using 9+1 FC
RAID5 using 7+1 FC


Yes (it also depends on the Celerra model - on the NX4 with its 12-disk shelves we support some more RAID configs).

Would you recommend against using these even in a user defined pool?


Actually you can't - newer DART codes won't accept them, i.e. the Celerra will not create a device for an unsupported RAID config.

The raid 10 is pretty standard issue for Databases and VM setups.


If you want that, then just do the RAID1 on the Clariion as 1+1 and then do the striping in the Celerra using MVM.

If you are concerned about the layout for a database, we have quite a number of reports with best practices and layout examples for SQL and Exchange. Some of them even have performance numbers comparing the different options (for those you might have to ask your EMC contact - they may not all be customer-facing).

January 16th, 2009 06:00

So the following would be unsupported when used with the Celerra?

RAID1/0 using 4x FC
RAID5 using 9+1 FC
RAID5 using 7+1 FC

Would you recommend against using these even in a user-defined pool? RAID 10 is pretty standard issue for database and VM setups. It seems odd not to have that option on the Celerra for iSCSI.

Our other Disk Tray is a 15x 400GB FC so I was thinking either:

(4x FC in RAID1/0) + (9+1 RAID5) + (HS)
or
(4x FC in RAID1/0) + (4+1 RAID5) + (4+1 RAID5) + (HS)
or
(7+1 RAID5) + (7+1 RAID5) + (HS)

Thanks again for these answers, it's proving very useful.

January 21st, 2009 04:00

Actually you can't - newer DART codes won't accept them, i.e. the Celerra will not create a device for an unsupported RAID config.


Interesting, it can see RAID10 LUNs as volumes and even adds them to a clar_r10 system pool automatically. This is on DART 5.6.42-5. RAID10 isn't in the supported list you mentioned though.

Does this mean RAID10 is supported?

Carwyn

8.6K Posts

January 21st, 2009 06:00

Does this mean RAID10 is supported?


Sorry, I think I mixed up the Clariion terms.

RAID1/0 with two disk members is supported and does get into the clar_r10 or clarata_r10 pool

259 Posts

January 21st, 2010 07:00

I apologize for asking a slightly different question here, but when does one decide between RAID 5 and RAID 6?

Does the rebuild time on 600GB FC drives justify a RAID 6 configuration, or is the rebuild time acceptable for RAID 5?

Thanks.

Jim

5 Practitioner • 274.2K Posts

January 21st, 2010 09:00

Hi Jim,

The answer to your first question: there is a good, handy summary of many common and not-so-common RAID types at http://bytepile.com/raid_class.php, including their advantages and disadvantages. The bottom line is that RAID 6 provides 2 independent distributed parity schemes, whereas RAID 5 provides only 1 distributed parity scheme. RAID 6 would provide additional protection in the case of multiple simultaneous drive failures. To me this is the main reason for choosing RAID 6 over RAID 5, not the rebuild time.
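If it helps to see why that second parity matters, here's a small illustrative sketch (Python, XOR parity only - not how the CLARiiON implements it internally, and RAID 6's second parity is typically Reed-Solomon coding rather than plain XOR):

# RAID 5 keeps one XOR parity block per stripe, so it can rebuild exactly one
# missing member. RAID 6 adds a second, independent parity, which is what lets
# it survive two simultaneous failures.
def xor_blocks(blocks):
    # XOR a list of equal-length byte blocks together.
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

# A toy stripe: three data blocks plus one parity block (think 3+1 RAID 5).
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Lose one data block: XOR of the survivors plus the parity rebuilds it.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])   # True

# Lose two blocks and single parity is not enough: the XOR of the survivors
# equals the XOR of *both* missing blocks, which cannot be separated. The
# second, independent parity in RAID 6 supplies the extra equation needed.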

Regards,

259 Posts

January 21st, 2010 10:00

Thanks - the link provides great detail. I will definitely save this info.