Unsolved

61 Posts

April 23rd, 2013 08:00

RAID 5 (3+1) or (7+1) thoughts?

We are looking at our RAID striping options on the VMAX 10k. I'm looking for thoughts on RAID 5 using 3+1 vs 7+1. Obviously you get more usable disk space out of 7+1, but has anyone been burned by taking the extra risk of using the 7+1 configuration?

How do you evaluate these kinds of risks, and how do you quantify them for management?

Thank you kindly,

Steven

61 Posts

April 23rd, 2013 08:00

We are an IaaS company, so we have static offerings. We don't "currently" provide FAST caching.

We are using

Bronze: SATA (RAID 5 in thin pools, for an effective RAID 50) - just slow bulk storage for things like backups

Silver: 15k SAS (RAID 5 in thin pools, for an effective RAID 50) - OS partitions, and general applications

Gold: This is where we are looking to offer FAST caching, with a 50/50 mix of mirrored SAS and SSD.

Our customers pay based on disk tier.

Thanks for the input,

Steven

9 Legend • 20.4K Posts

April 23rd, 2013 08:00

We are using 7+1 for EFDs, trying to squeeze out as much usable capacity as possible; 3+1 for FC and 6+2 for SATA. I believe the EMC best practice is to use mirroring for your middle tier in a box with EFDs and SATA.

4 Operator • 2.1K Posts

April 24th, 2013 01:00

I agree with dynamox that mirroring is the better choice.


Regarding RAID 5 7+1 vs. RAID 5 3+1, I think the only real difference is that the chance of a second drive failure after one disk fails is roughly double in 7+1 compared with 3+1, simply because more drives remain in the group. But VMAX does have spare drives that can handle it. It's essentially an availability vs. space-utilization trade-off.
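The "roughly double" intuition can be sanity-checked with a toy exponential failure model. The MTBF and rebuild-time figures below are placeholders, not VMAX specifications; the point is only that the exposure during a rebuild scales with the number of surviving members (7 vs. 3):

```python
import math

# Placeholder figures -- substitute your drives' actual specs.
MTBF_HOURS = 1_000_000   # assumed mean time between failures per drive
REBUILD_HOURS = 12       # assumed rebuild window

def p_second_failure(surviving_drives, rebuild_hours=REBUILD_HOURS, mtbf=MTBF_HOURS):
    """Probability that at least one surviving member of the RAID group
    fails before the rebuild completes (independent exponential failures)."""
    return 1 - math.exp(-surviving_drives * rebuild_hours / mtbf)

for label, survivors in (("RAID 5 3+1", 3), ("RAID 5 7+1", 7)):
    print(f"{label}: {p_second_failure(survivors):.5%} chance of a second failure during rebuild")
```

With these inputs the 7+1 risk comes out about 7/3 ≈ 2.3x the 3+1 risk, which matches the intuition above.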


I would select 7+1.

108 Posts

April 25th, 2013 01:00

RAID 5

RAID 5 configurations are parity-protected. In the event of a physical drive failure, the missing data is rebuilt by reading the remaining drives in the RAID group and performing XOR calculations.
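As a minimal illustration of that rebuild (plain XOR parity, not VMAX internals):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data members of a 3+1 group
parity = xor_blocks(data)            # the fourth (parity) member

# Simulate losing the second member and rebuilding it from the survivors.
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
print(rebuilt)  # b'BBBB'
```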

RAID 5 may offer excellent performance for many applications since data is striped across back-end disk directors, as well as disks. However, there can be a performance disadvantage for write-intensive, random workloads due to the extra disk operations and the parity generation. RAID 5 can be configured with either four members 3+1 RAID 5 or eight members 7+1 RAID 5 in each RAID group. In most cases, the performance of 3+1 RAID 5 and 7+1 RAID 5 will be similar.
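For reference, here is the usable-capacity fraction of each layout mentioned in this thread (my arithmetic, not from the post):

```python
# (data members, total members) for each protection scheme
layouts = {
    "RAID 1 (mirror)": (1, 2),
    "RAID 5 (3+1)":    (3, 4),
    "RAID 5 (7+1)":    (7, 8),
    "RAID 6 (6+2)":    (6, 8),
}

for name, (data, total) in layouts.items():
    print(f"{name:16s} {data / total:.1%} usable")
```

So 7+1 buys 12.5 points of usable capacity over 3+1, while 6+2 matches 3+1's efficiency but with double-failure protection.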

If a drive failure occurs in a larger RAID group on a bigger physical drive (with a longer rebuild time), there is an increased chance of a second drive failure during the rebuild. RAID 5 volumes are protected by permanent sparing.

The other option is:

RAID 6

Protection schemes such as RAID 1 and RAID 5 can shield a system from data loss in the case of a single physical drive failure within a mirrored pair or RAID group. With these schemes, an array containing 10 RAID groups can tolerate 10 drive failures if only one drive in each RAID group fails. But what if two drives of the 10 failures are within the same RAID group? RAID 6 takes parity protection a step further and supports the ability to rebuild data in the event that two drives fail within the same RAID group.
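The 10-group example can be made concrete by counting drive pairs: given exactly two failed drives in an array of ten 8-member groups (80 drives), the chance both failures land in the same group is (8-1)/(80-1), about 8.9%. A brute-force check:

```python
from itertools import combinations

def p_same_group(groups, drives_per_group):
    """Probability that two randomly chosen failed drives fall in the
    same RAID group, by exhaustive enumeration of drive pairs."""
    drives = [(g, d) for g in range(groups) for d in range(drives_per_group)]
    pairs = list(combinations(drives, 2))
    same = sum(1 for a, b in pairs if a[0] == b[0])
    return same / len(pairs)

print(f"{p_same_group(10, 8):.2%}")  # 8.86%
```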

EMC’s implementation of RAID 6 calculates two types of parity in order for data to be reconstructed following a double drive failure. Horizontal parity is identical to RAID 5 parity, which is calculated from the data across all the disks. Diagonal parity is calculated on a diagonal subset of data members.

RAID 6 provides high data availability, but as with any multiple-parity implementation, it is subject to parity generation impacting write performance. Therefore, RAID 6 is generally not recommended for write-intensive workloads.

Permanent sparing is used to further protect RAID 6 volumes.
