

April 20th, 2010 09:00

Is a 4+1 better than a 5+1?

I have researched this topic extensively and cannot find any authoritative answer.

IOPS calculations would suggest that the more disks in a group, the more IOPS are available.
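
For what it's worth, the naive arithmetic behind that assumption looks something like this (a rough sketch; the 150 IOPS-per-drive figure is an illustrative worst-case number for a 15K FC drive, not a vendor spec):

```python
# Naive model: RAID group IOPS scale linearly with member count.
# 150 IOPS/drive is an illustrative worst-case figure for a 15K FC drive
# under small-block random IO, not an authoritative spec.
PER_DRIVE_IOPS = 150

for members in (5, 6, 7, 8, 9):  # 4+1, 5+1, 6+1, 7+1, 8+1
    print(f"{members - 1}+1 group: ~{members * PER_DRIVE_IOPS} IOPS")
```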

However, someone is putting forward the theory that only a 4+1 or an 8+1 should ever be created as a RAID 5 group.

I think this comes from the idea that the default 64K chunks of data written to the RAID members add up to a stripe size that is a multiple of 4 or 8.

But does anyone know (or can anyone cite a source/link, etc.) how this actually works out?

(a) Is 4+1 better than 5+1 or 6+1, and if so, why?

(b) Has anyone here ever observed IOPS at 100K?!

Thanks for any help from fellow SAN sufferers.

44 Posts

April 20th, 2010 09:00

Thanks for your reply; however, the document I can find is for Release 29, which I have downloaded. Can you please give me a paragraph to search for, because page 37 in the Release 26 document you refer to is obviously not the same as page 37 in Release 29.

OK, let's assume a 7+1 RAID 5 (15K FC disks) with a 1:1 RG-to-LUN relationship:
this would give us 7 x 150 = 1,050 IOPS, correct?
(a) How come Navisphere Analyzer reports IOPS on this LUN in excess of 1,250? Is this due to cache?
(b) Assuming a default chunk size of 64K, giving a stripe width of 64 x 7 = 448 KB, should I try to make the host IO block size 448 KB or a multiple of this? Is this possible?

Thanks.

139 Posts

April 20th, 2010 09:00

Search for this document on Powerlink and look at page 37: "EMC Clariion Best Practices for Fibre Channel Storage: Flare Release 26 Firmware Update Best Practice Planning".

It talks about why this is a myth.

Also, you can't keep adding disks to a RAID group and expect the performance graph to stay linear. At some point, the more disks you add to an RG, the less performance increase you get out of each one. I don't know what that number is.

And IOPS at 100K? Maybe out of an entire Clariion system, but I have never seen it, and you would need solid-state drives to achieve that. 100K IOPS would mean ~500 FC drives.
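
To show where that estimate comes from (a back-of-the-envelope sketch; the ~200 IOPS-per-drive figure is simply what 100K IOPS over ~500 drives implies, not a spec):

```python
# Rough drive count needed to sustain a target IOPS level.
# ~200 IOPS per 15K FC drive is an assumed figure, not a spec.
target_iops = 100_000
per_drive_iops = 200

print(f"~{target_iops / per_drive_iops:.0f} FC drives")  # ~500
```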

392 Posts

April 20th, 2010 09:00

I believe this question was addressed in: https://community.emc.com/message/464894#464894

139 Posts

April 20th, 2010 11:00

The document is here: https://community.emc.com/docs/DOC-6300 (it is for Release 26).

(a) The 150 IOPS per disk figure is for the worst-case scenario, that is, small-block random IO. Any other combination of IO, such as large-block sequential, results in much higher IOPS.

(b) I don't know the answer, but I wish I did.

4.5K Posts

April 20th, 2010 13:00

The Best Practices Release 26 document is now in the Documents section.

When you look at the LUN in Analyzer, the Total IOPS are what the LUN is receiving - this is coming from the host. If you look at the disks in the RAID group, the Total IOPS are what the disks are doing - writes from cache, read pre-fetches, reads, and any other IO that might be going to the disks. If this is the only LUN in the RAID group, do you have snaps or mirrors on the LUN?

For example, say the host is doing 100% writes and the writes are random. That means you will not do any full stripe writes - if you then look at the disk IO, you'll see 1/2 is writes and 1/2 is reads. The reads come from the RAID 5 parity calculation - you need to perform four disk operations for each host write: read the old data and old parity, calculate the new parity, then write the new data and new parity.
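
A minimal sketch of that back-end arithmetic, assuming 100% small-block random writes, no full stripe writes, and ignoring cache effects:

```python
# RAID 5 random-write penalty: each host write turns into four disk
# operations (read old data, read old parity, write new data, write
# new parity).
def raid5_disk_ops(host_reads: int, host_writes: int) -> tuple[int, int]:
    """Return (disk_reads, disk_writes) for a given host IO mix."""
    disk_reads = host_reads + 2 * host_writes   # old data + old parity
    disk_writes = 2 * host_writes               # new data + new parity
    return disk_reads, disk_writes

# 100% random host writes, as in the example above: the disks end up
# doing half reads and half writes even though the host issued no reads.
print(raid5_disk_ops(host_reads=0, host_writes=1000))  # (2000, 2000)
```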

glen

44 Posts

April 21st, 2010 00:00

Thanks, everyone, for the helpful comments. It looks like 4+1 and 8+1 are bandied about because they work out nicely to stripe sizes of 256 KB and 512 KB respectively. This makes the assumptions that (a) you're using the default 64K element size (which you don't have to) and (b) you have standard IO block sizes of 256/512 KB. So although I can see 'some' reasoning behind 4+1 and/or 8+1 versus any other RAID group sizes (e.g. 5+1, 6+1, 7+1), this only holds true if those assumptions are made.
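
The stripe arithmetic, spelled out (a sketch; 64 KB is the default element size mentioned earlier in the thread):

```python
# Stripe width = element size x number of data disks (parity excluded).
ELEMENT_KB = 64  # default element size, per the earlier posts

for data_disks in (4, 5, 6, 7, 8):  # 4+1 through 8+1
    stripe_kb = ELEMENT_KB * data_disks
    power_of_two = stripe_kb & (stripe_kb - 1) == 0
    note = "  <- lines up with 256/512 KB host IO" if power_of_two else ""
    print(f"{data_disks}+1: stripe = {stripe_kb} KB{note}")
```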

2 Intern • 5.7K Posts

April 23rd, 2010 03:00

> OK, let's assume a 7+1 RAID 5 (15K FC disks) with a 1:1 RG-to-LUN relationship: this would give us 7 x 150 = 1,050 IOPS, correct?

No, there are 8 disks in the RG, so it's 8 x 150 (or 180, which is the best-practice value for 15K drives). In the calculations, the write penalty is spread across all disks, including the parity, so you need to count the "parity disk" toward the total amount of available IOPS as well.
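
Putting the thread together, a sketch of host-visible IOPS for a RAID 5 group under small-block random IO (assumptions: every member, parity included, contributes to the raw IOPS budget, and each host write costs four disk operations):

```python
# Host IOPS capacity of a RAID 5 group under small-block random IO.
def raid5_host_iops(members: int, per_disk_iops: float,
                    write_fraction: float) -> float:
    """Each host read costs 1 disk op; each host write costs 4 (RAID 5)."""
    raw_iops = members * per_disk_iops          # parity disk counts too
    ops_per_host_io = (1 - write_fraction) + 4 * write_fraction
    return raw_iops / ops_per_host_io

# 7+1 group (8 members) of 15K FC drives at the 180 IOPS figure:
print(round(raid5_host_iops(8, 180, 0.0)))  # 1440 at 100% reads
print(round(raid5_host_iops(8, 180, 0.3)))  # ~758 at a 70/30 read/write mix
```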
