Intern

392 Posts

September 2nd, 2009 09:00

Aligning your I/O size with the stripe size to get full-stripe writes is the way to get the best performance. In addition, aligning the destination LUN to enable full-stripe writes is recommended.

The stripe size is the amount of user data in a RAID group stripe; it does not include drives used for parity or mirroring. Stripe size is measured in KB and is calculated by multiplying the number of data disks in the stripe by the CLARiiON's stripe element size (64 KB).

What is your application's I/O size? Is it 256 KB?

For example, an eight-disk RAID 1/0, with a stripe width of four and a stripe element size of 64 KB, has a stripe size of 256 KB (4 * 64 KB). A five-disk RAID 5 (4+1) with the CLARiiON's 64 KB stripe element size also has a stripe size of 256 KB. An eight-disk RAID 6 (6+2) has a stripe size of 384 KB (6 * 64 KB).
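The arithmetic in these examples can be sketched as a small helper (illustrative only; `stripe_size_kb` is my own name for it, and 64 KB is the CLARiiON stripe element size quoted above):

```python
ELEMENT_KB = 64  # CLARiiON stripe element size

def stripe_size_kb(data_disks, element_kb=ELEMENT_KB):
    # Stripe size = number of data disks * element size;
    # parity and mirror disks are excluded from the count.
    return data_disks * element_kb

# Eight-disk RAID 1/0: stripe width of four data disks
print(stripe_size_kb(4))  # 256
# Five-disk RAID 5 (4+1): four data disks
print(stripe_size_kb(4))  # 256
# Eight-disk RAID 6 (6+2): six data disks
print(stripe_size_kb(6))  # 384
```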

Stripe size calculation is discussed in detail in the EMC CLARiiON Storage System Fundamentals for Performance and Availability. LUN alignment is discussed in the EMC CLARiiON Best Practices for Performance and Availability, FLARE Revision 28.5. Both documents are available on PowerLink.

6 Posts

September 2nd, 2009 09:00

The best-practices planning guide for B2D says RAID 3 (4+1) and to leave the file system block and element sizes at their default settings.

50 Posts

September 2nd, 2009 12:00

JPS00,

I am glad you mentioned RAID 1/0. Which RAID stripe is more effective, RAID 1/0 or RAID 5? I was told that RAID 1/0, even when properly sized for full-stripe writes, is not as efficient as a RAID 5 full-stripe write.

Thoughts?

Mike

75 Posts

September 2nd, 2009 15:00

BTW, there is no such thing as a full-stripe write for RAID 1/0. The FSW counter in Analyzer counts how many times we can get a full stripe of a PARITY LUN in memory and compute parity on the fly. Stripe size is not very important in RAID 1/0, since there is no parity to compute, though it is most efficient when random I/O and stripe size are aligned.

75 Posts

September 2nd, 2009 15:00

RAID 1/0 is less efficient for backups. In a mirror you have to write data to both sides of the mirror, so, for example, if you write 256 KB from the host, the back end gets 512 KB. With a parity RAID (RAID 3 or RAID 5; they are so close in performance it doesn't make a big difference), the amount written to the back end is the stripe size + 64 KB.

So in your case, 256 KB + 64 KB = 320 KB on the back end. Far less load for the same incoming job.
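The back-end comparison above works out like this (a minimal sketch of the arithmetic, assuming a full-stripe write and a single 64 KB parity element; the function names are mine):

```python
def mirror_backend_kb(host_kb):
    # RAID 1/0: every byte is written to both sides of the mirror
    return 2 * host_kb

def parity_backend_kb(host_kb, element_kb=64):
    # Parity RAID full-stripe write: stripe data plus one parity element
    return host_kb + element_kb

print(mirror_backend_kb(256))  # 512 KB on the back end
print(parity_backend_kb(256))  # 320 KB on the back end
```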

4+1 would be quite efficient for 256 KB. If you do not plan to bypass write cache, you can be a bit looser with the stripe size, as the cache will bundle up multiple stripes (up to 8) and dispatch them all at once. The amount the cache can bundle is based on the cache page size: 16 KB gets you 2 MB max on the back end, 8 KB gets 1 MB, etc.
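The cache-bundling figures quoted above imply a fixed number of pages per dispatch (128, going by the 16 KB -> 2 MB and 8 KB -> 1 MB examples); a sketch, assuming that ratio holds in general:

```python
PAGES_PER_DISPATCH = 128  # inferred from the 16 KB -> 2 MB and 8 KB -> 1 MB figures above

def max_backend_dispatch_kb(cache_page_kb):
    # Max back-end dispatch scales linearly with the cache page size
    return PAGES_PER_DISPATCH * cache_page_kb

print(max_backend_dispatch_kb(16))  # 2048 KB (2 MB)
print(max_backend_dispatch_kb(8))   # 1024 KB (1 MB)
```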

So if it makes more sense for the client to use 7+1 or 6+1, as long as writes are sequential and hitting cache, you can use those odd-size groups and save some $/GB.

If I/O is random or NOT hitting cache, then, as John said, match your I/O size to the stripe size (or a multiple of the stripe).

50 Posts

September 29th, 2009 13:00

In Analyzer, if I have sized my RAID group correctly for my write size, Full Stripe Writes correlate with Write Bandwidth almost 1 to 1. Is that true? If so, is it also true that any LUN whose full-stripe writes don't correlate in this manner has not been sized properly?

Thanks,

Mike