July 16th, 2014 00:00

I would suggest you follow this:

https://mydocuments.emc.com/

2 Intern • 1.2K Posts

July 16th, 2014 00:00

You could refer to the document attached:

1 Attachment

July 16th, 2014 05:00

Yes, we are from EMC. Please let us know if you need any more info.

2 Intern • 236 Posts

July 16th, 2014 05:00

Thank you for your input.

First of all, your best practices doc is outdated; I was referring to release 33 when starting the discussion. Second, it does not have any info that would shed light on the question I asked.

I could find some more or less relevant info in the BP guide for release 33 on page 15, where it says the following:

"Use RAID 5 with a preferred count of 4+1 for the best performance versus capacity balance.

     Using 8+1 improves capacity utilization at the expense of slightly lower availability" - what about 8+1 performance?

I think EMC could have given us more info than that.

And lastly, Suman and Jiawen, do you work for EMC?

July 16th, 2014 06:00

I would suggest you go for the 8+1 configuration. It will give wider striping and, in turn, better performance.

Thnx

Rakesh

2 Intern • 236 Posts

July 16th, 2014 16:00

Which document exactly are you referring to?

July 17th, 2014 04:00

Refer to this (covers R33): http://www.emc.com/collateral/software/white-papers/h10938-vnx-best-practices-wp.pdf It gives a self-explanatory overview.

Thnx

Rakesh

2 Intern • 236 Posts

July 17th, 2014 05:00

Thank you, but I have already read through it many times; it does not give any distinct difference between 4+1 and 8+1, except this:

"Use RAID 5 with a preferred count of 4+1 for the best performance versus capacity balance.

     Using 8+1 improves capacity utilization at the expense of slightly lower availability"

And not a word about 8+1 performance under different workloads. I assume it is the same as 4+1, or even better, judging by the IOPS.
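For what it's worth, judging purely by spindle count the raw numbers do favor 8+1. A minimal sketch of that arithmetic, assuming a rule-of-thumb 180 small-block IOPS per 15k drive (my assumption, not a figure from the guide):

```python
# Hypothetical sizing sketch: raw back-end IOPS grows with spindle count.
# 180 IOPS per drive is an assumed rule-of-thumb for a 15k drive, not a
# number taken from the best-practices guide.

PER_DRIVE_IOPS = 180  # assumed small-block random IOPS for one drive

def raw_group_iops(data_drives: int, parity_drives: int) -> int:
    """Total back-end IOPS all spindles of one RAID group can deliver."""
    return (data_drives + parity_drives) * PER_DRIVE_IOPS

print(raw_group_iops(4, 1))  # 4+1: 5 spindles -> 900
print(raw_group_iops(8, 1))  # 8+1: 9 spindles -> 1620
```

By this crude measure 8+1 looks faster, which is exactly why the comparison later in the thread insists on equal data-drive counts.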

July 17th, 2014 10:00

Striping on 5 drives and striping on 9 drives don't look similar to me; 9 should be better. Moreover, we have MCx, so we shouldn't be worrying about hot spares as long as there are 2 hot spare drives for a pool of 30 drives (EMC recommends a minimum of 1 with R33). (Of course, we have to consider other factors as well, such as I/O size and count.)

Thnx

Rakesh

4 Operator • 8.6K Posts

July 22nd, 2014 03:00

Rakesh,

this is an apples-to-oranges comparison.

Of course, given the same RAID level, a RAID group with more drives can provide more I/Os.

You need to compare two setups with the same number of data drives.

There, two 4+1 R5 groups will provide better performance than one 8+1 R5.

For the reasons, please see Michael's explanation below.

July 22nd, 2014 07:00

Rainer_EMC wrote:

Rakesh,

this is an apples-to-oranges comparison.

Of course, given the same RAID level, a RAID group with more drives can provide more I/Os.

You need to compare two setups with the same number of data drives.

There, two 4+1 R5 groups will provide better performance than one 8+1 R5.

For the reasons, please see Michael's explanation below.

I could not follow the above post. Are we saying that if the number of drives increases, then the RAID penalty will also increase by the same amount? Or does it again depend (although I hate the word "depends") on I/O size? If that is the case, the penalty should be higher than the showcased figure. I agree for smaller I/Os, but what if the I/Os are big enough (as I also pointed out with the IOPS) that they can't be written on a set of 4 drives? As far as I know, only these operations take place at the time of a write:

First, it reads the old data.

Second, it reads the old parity.

Third, it writes the new data.

Fourth, it writes the new parity.

This means that each write against a RAID 5 set causes four I/Os against the disks. The first two operations must complete before the last two can be performed, which adds latency.
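The four-I/O sequence above translates directly into an effective-IOPS estimate. A minimal sketch, assuming 180 IOPS per drive (my rule-of-thumb, not an EMC figure):

```python
# Sketch of the RAID 5 small-write penalty described above: each small
# random host write costs 4 back-end I/Os (read old data, read old
# parity, write new data, write new parity), regardless of group width.

RAID5_WRITE_PENALTY = 4   # back-end I/Os per small random host write
PER_DRIVE_IOPS = 180      # assumed rule-of-thumb per-drive figure

def effective_write_iops(total_drives: int) -> float:
    """Host-visible small random write IOPS for one RAID 5 group."""
    back_end = total_drives * PER_DRIVE_IOPS
    return back_end / RAID5_WRITE_PENALTY

print(effective_write_iops(5))  # 4+1 -> 225.0
print(effective_write_iops(9))  # 8+1 -> 405.0
```

Large sequential writes that fill a whole stripe can skip the two read steps (a full-stripe write), which is one reason I/O size matters to the penalty.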

How do we decide the penalty of a RAID type? Is it decided by I/O size, or by the number of disks in a RAID group or pool?

Please provide your inputs if I am making any mistakes here... thanks for your patience.

Thanks

Rakesh

4 Operator • 8.6K Posts

July 22nd, 2014 07:00

rakesh.pandey@zensar.in wrote:

I would suggest you go for the 8+1 configuration. It will give wider striping and, in turn, better performance.

Thnx

Rakesh

All I am saying is that this isn't a good comparison.

If you are comparing 4+1 and 8+1, you should compare them with the same number of disks.

And of course you should assume that the load is equally distributed across the two 4+1 RAID groups.

Then two 4+1 RGs will give you the same read performance as one 8+1 RG, but better write performance.
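That comparison can be sketched numerically. This is my own illustrative arithmetic (assuming 180 IOPS per drive and the standard RAID 5 small-write penalty of 4, neither figure from the thread): with 8 data drives either way, two 4+1 groups bring 10 spindles to the workload while one 8+1 group brings 9.

```python
# Illustrative arithmetic for the same-data-drive-count comparison
# (my numbers, not from the thread): small random writes, penalty 4.

RAID5_WRITE_PENALTY = 4
PER_DRIVE_IOPS = 180  # assumed rule-of-thumb per-drive figure

def write_iops(groups: int, drives_per_group: int) -> float:
    """Small random host write IOPS across identical RAID 5 groups,
    assuming the load is spread evenly across them."""
    spindles = groups * drives_per_group
    return spindles * PER_DRIVE_IOPS / RAID5_WRITE_PENALTY

print(write_iops(2, 5))  # two 4+1 groups: 10 spindles -> 450.0
print(write_iops(1, 9))  # one 8+1 group:   9 spindles -> 405.0
```

Both layouts spread data over 8 data drives, so reads come out roughly even, while the extra parity spindle in the two-group layout absorbs some of the write-penalty traffic.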

2 Intern • 236 Posts

July 22nd, 2014 08:00

Rainer_EMC / rewalmilo, thank you for your input.

However, I am still not clear on which RAID configuration is preferred over the other, and why.

July 22nd, 2014 08:00

ad_astra wrote:

So all the IOPS calculations are irrelevant in this case, and the statement that more spindles in the RAID group will give better performance is not true? And this is all due to the penalty difference: 4 vs 8, right?

Of course the statement is true. But the RAID penalty calculation depends on I/O size and type. The most important thing is to look at the I/O size, which really matters. Also look at the read/write percentage of the I/Os and the disk drives' I/O capacity (by rpm).
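The read/write-percentage point can be sketched as a small sizing calculation. A hedged example with assumed numbers (1000 host IOPS, 70% reads), not EMC's sizing method:

```python
# Sketch of the sizing arithmetic hinted at above: convert a host
# workload (IOPS plus read/write mix) into back-end disk I/Os using
# the RAID 5 write penalty. All numbers are assumed examples.

RAID5_WRITE_PENALTY = 4

def backend_iops(host_iops: int, read_pct: int) -> float:
    """Back-end I/Os per second: reads pass through 1:1, each small
    random write fans out into RAID5_WRITE_PENALTY disk I/Os."""
    reads = host_iops * read_pct / 100
    writes = host_iops * (100 - read_pct) / 100
    return reads + writes * RAID5_WRITE_PENALTY

# e.g. 1000 host IOPS at 70% reads:
print(backend_iops(1000, 70))  # 700 + 300*4 = 1900.0
```

Comparing that back-end total against the IOPS the available spindles can deliver is what decides whether a given group layout keeps up.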

Thanks

Rakesh

2 Intern • 236 Posts

July 22nd, 2014 08:00

So all the IOPS calculations are irrelevant in this case, and the statement that more spindles in the RAID group will give better performance is not true? And this is all due to the penalty difference: 4 vs 8, right?
