May 3rd, 2012 11:00

Recommend RAID Group size

Hi,

I have a shelf with 15 x 600 GB 15k disks on a CX4-120 array (iSCSI connectivity, in use). One disk will go out as a hot spare.

I need to configure these disks as LUNs presented to a Linux host as a backup device. The backup application will write deduplicated, compressed, encrypted data on these LUNs, so the files will be small as well.

What way should I configure this considering that these are for a backup application?

Should it be 4+1 RAID 5 RGs with one LUN per RAID Group?

Would a 13+1 RAID 5 RG with multiple LUNs compromise performance a lot?

Would a pair of 6+1 RAID 5 RGs with 2 LUNs each be good enough for performance?

I would love to get maximum capacity, but since this is a cloud service, I don't want reads to be slow when recoveries are needed.

Please suggest.

Regards,

Anuj

392 Posts

May 3rd, 2012 12:00

For highest availability, a 14-drive RAID 6 (12+2) with a single hot spare should also be considered.

1.4K Posts

May 3rd, 2012 12:00

IMO, RAID 5 (13+1) would be the best bet for you, considering you want capacity utilization along with performance.

392 Posts

May 4th, 2012 06:00

The problem with the 13+1 is that the stripe size is wacky. With an 832 KB stripe, the write cache isn't going to be able to work terribly efficiently.
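
Here's a quick back-of-envelope sketch (Python) of where that 832 KB comes from and how the group sizes compare, assuming the CLARiiON default 64 KB stripe element per data drive; the element size is an assumption, so check your own RAID group settings:

ELEMENT_KB = 64  # assumed CLARiiON default stripe element size

for data_drives in (4, 6, 8, 12, 13):
    stripe_kb = data_drives * ELEMENT_KB
    # Power-of-two stripes line up neatly with cache pages and host
    # I/O sizes, which makes full-stripe writes easy to assemble.
    aligned = (stripe_kb & (stripe_kb - 1)) == 0
    print(f"{data_drives:>2} data drives -> {stripe_kb:>4} KB stripe"
          f" ({'power of two' if aligned else 'odd size'})")

A 13+1 group lands on 832 KB, and the write cache will rarely accumulate a clean 832 KB run, so it falls back to read-modify-write instead of efficient full-stripe writes.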

The other problem is that rebuild times on very large RAID groups like that are very long. Hot sparing is not infallible, and drive failures tend to cluster. You'd have a very long window of vulnerability on your backups with a 13+1 group rebuilding.
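
A rough rebuild-window estimate, just to put a number on that risk; the effective rebuild rate is a pure assumption (ASAP-priority rebuilds on 15k drives vary a lot with host load), so plug in your own:

DRIVE_GB = 536.8      # ~formatted capacity of a 600 GB drive (assumed)
REBUILD_MB_S = 40.0   # assumed effective rebuild rate, MB/s

hours = DRIVE_GB * 1024 / REBUILD_MB_S / 3600
print(f"~{hours:.1f} h to rebuild one drive at {REBUILD_MB_S} MB/s")

And that's optimistic: the rebuild has to read every surviving drive in the group, so a busy 13+1 group can stretch the window well past that.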

Information on stripe size and its effect on back-end write performance, hot sparing, and rebuild times can be found in EMC Unified Storage System Fundamentals for Performance and Availability.  The Fundamentals document is available on Powerlink.

HTH

May 4th, 2012 10:00

I just wanted to add to jps00's great points. While a double-drive failure is certainly possible, the main reason EMC recommends RAID 6 at the capacities being proposed in a single RAID group (15 x 600 GB, or say 6 x 2 TB/3 TB SATA/NL-SAS drives) is the much higher probability, relative to a double drive fault, of hitting an Unexpected Error Rate (UER) event.

During the rebuild, not only will it take a while, as jps00 mentions, but as an operation that has to read in every block, this becomes a concern especially at large capacities. As drives get even larger, can we imagine a time when we need to look into triple parity? Again, the main reason would not be to protect against one more drive failure than the RAID type tolerates, but to protect from UER.
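
To put a hedged number on that, here's a sketch (Python) of the chance of hitting at least one unrecoverable read error during a RAID 5 rebuild. The UER figure is an assumption (1 error per 10^16 bits read is a common enterprise-drive spec; NL-SAS/SATA is often quoted at 1 per 10^15), as is the formatted capacity:

import math

UER = 1e-16        # assumed unrecoverable errors per bit read
DRIVE_GB = 536.8   # ~formatted 600 GB enterprise drive (assumed)

for survivors in (4, 8, 13):
    # A RAID 5 rebuild must read every block on every surviving drive.
    bits_read = survivors * DRIVE_GB * 1024**3 * 8
    # log1p/expm1 keep the arithmetic stable for tiny probabilities.
    p_hit = -math.expm1(bits_read * math.log1p(-UER))
    print(f"{survivors:>2}+1 group: ~{p_hit:.2%} chance of a URE during rebuild")

The absolute numbers hinge entirely on the assumed UER, but the relative point stands: the wider the group, the more bits a rebuild has to read cleanly.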

May 4th, 2012 10:00

Oops, sorry about that, jps00. Thanks for the correction.

Just to add to it, since I failed to mention it: a UER event is a condition where a single block is unreadable during the rebuild (versus a double drive failure).

392 Posts

May 4th, 2012 10:00

The 'Hard drive reliability classifications' section of EMC Unified Storage System Fundamentals for Performance and Availability contains a discussion of Unrecoverable Error Rate (UER). Coincidentally, the example given uses 600 GB Enterprise drives like the OP describes using.

The Fundamentals document is available on Powerlink.

2K Posts

May 4th, 2012 11:00

Thanks, friends, for your responses.

I am keen to know the performance difference between the various RAID Group sizes.

1.4K Posts

May 5th, 2012 21:00

Generally speaking, the larger the RG size, the better the performance will be. But it depends more on the I/O profile.
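
A small-random-I/O back-of-envelope (Python) shows both effects. The per-drive IOPS figure and the 4x RAID 5 small-write penalty (read data, read parity, write data, write parity) are the usual rules of thumb, and the read/write mix is an assumption:

DRIVE_IOPS = 180    # assumed 15k drive, small random I/O
WRITE_PENALTY = 4   # RAID 5 read-modify-write cost per host write

def host_iops(drives, read_fraction):
    backend = drives * DRIVE_IOPS
    # Each host read costs 1 back-end I/O; each host write costs 4.
    return backend / (read_fraction + (1 - read_fraction) * WRITE_PENALTY)

for drives in (5, 7, 14):  # 4+1, 6+1, 13+1
    print(f"{drives:>2} drives: {host_iops(drives, 1.0):.0f} IOPS read-only,"
          f" {host_iops(drives, 0.0):.0f} IOPS write-only")

More spindles means more back-end IOPS, but a write-heavy profile divides that by the penalty, and large sequential backup streams behave differently again, which is why the profile matters more than the group size alone.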

224 Posts

May 7th, 2012 09:00

Hello amediratta,

Performance for a given RAID group will depend entirely on what kind of I/O is coming in.

For example, for small blocks of data like SQL/database workloads, the best RAID type is RAID 10.

However, that RAID type requires more disks.

So whenever you choose a RAID type, the first question that needs to be addressed is: availability or performance?

18 Posts

May 8th, 2012 03:00

Hi amediratta

I am assuming the shelf of 15 disks contains a hot spare (HS).

I would recommend this:

Create a 4+1 and an 8+1, and use the remaining disk as the HS.

4+1 and 8+1 are the best-practice group sizes for RAID 5.

4+1 should be able to give about 2149 GB and 8+1 about 4298 GB.
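
For reference, here's where those figures come from (Python); the formatted capacity per drive is an assumption (a "600 GB" drive binds to roughly 537 GB usable on a CLARiiON):

FORMATTED_GB = 537.25  # assumed usable capacity per 600 GB drive

for data, parity in ((4, 1), (8, 1)):
    # RAID 5 keeps one drive's worth of parity per group.
    print(f"{data}+{parity} RAID 5 -> ~{data * FORMATTED_GB:.0f} GB usable")

That gives ~2149 GB for the 4+1 and ~4298 GB for the 8+1, matching the numbers above.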

In both the 4+1 and the 8+1, select maximum capacity and bind 2 LUNs per group; make sure the LUNs are balanced across the SPs.

With 13+1, yes, you will get maximum capacity, but it's not a nice RAID configuration because the stripe size is "dodgy".

Another issue with 13+1 is that the rebuild will take a lot longer as well, increasing the potential risk if another disk in that RAID group decides to take a hike while a rebuild is happening.
