2 Intern


392 Posts

May 21st, 2009 06:00

A gross performance estimate can be made by multiplying the per-drive IOPS or bandwidth by the number of hard drives in the RAID group.

For example, assume a single 15k rpm Fibre-Channel hard drive is capable of about 185 IOPS. For a four-disk RAID 5 (3+1), data is spread across all four drives. This RAID group has a potential throughput of 740 IOPS (185*4).

For 'estimating' RAID 1/0 you only use the IOPS of the primary stripe.
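The two rules of thumb above can be sketched in a few lines of Python (assuming the 185 IOPS per 15k FC drive figure used here; the function names are just for illustration):

```python
# Gross throughput estimate: per-drive IOPS * number of drives.
# 185 IOPS is the rule-of-thumb figure for a 15k rpm FC drive.
DRIVE_IOPS = 185

def raid5_estimate(data_drives, parity_drives=1):
    """RAID 5 spreads data across all drives, parity included."""
    return DRIVE_IOPS * (data_drives + parity_drives)

def raid10_estimate(primary_drives):
    """For a rough RAID 1/0 estimate, count only the primary stripe."""
    return DRIVE_IOPS * primary_drives

print(raid5_estimate(3, 1))   # 4-disk RAID 5 (3+1) -> 740
print(raid10_estimate(4))     # 4+4 RAID 1/0      -> 740
```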

This calculation is discussed in detail in the 'Throughput Calculation' section of the "EMC CLARiiON Storage System Fundamentals for Performance and Availability". A more detailed calculation is described in the 'Performance Planning' section of the "EMC CLARiiON Best Practices for Performance and Availability, FLARE Revision 28.5". Both of these documents are available on Powerlink.

261 Posts

May 21st, 2009 07:00

One extra note on the RAID 1/0 side: you use only the primary stripe when doing the calculation, but keep in mind that the data on the secondary drives is identical to the primary drives, and the array will send reads to the secondary drives as well.

This guide just came out and is worth a look: http://powerlink.emc.com/km/live1/en_US/Offering_Technical/White_Paper/H1049_emc_clariion_fibre_channel_storage_fundamentals_ldv.pdf

-Ryan

19 Posts

May 21st, 2009 12:00

OK Ryan,

I had already read the .PDF and understood it, but I still have a simple doubt.

Let me try to explain. I am considering a 50%/50% read/write ratio.

I have an application that needs two LUNs with the sizes and performance below.

LUN01 - Size 780GB - Needs 1250 IOPS
LUN02 - Size 150GB - Needs 950 IOPS

Following the "Calculate number of disks required" section of the CLARiiON Best Practices .PDF and applying the RAID penalty, we have (simulating RAID 5 and RAID 1/0):

LUN01

RAID5 (0.5*1250)+(4*0.5*1250) = 3125 IOPS
RAID10 (0.5*1250)+(2*0.5*1250) = 1875 IOPS

LUN02

RAID5 (0.5*950)+(4*0.5*950) = 2375 IOPS
RAID10 (0.5*950)+(2*0.5*950) = 1425 IOPS
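For anyone following along, the RAID write-penalty formula used above (reads count once; each host write costs 4 back-end I/Os on RAID 5 and 2 on RAID 1/0) can be sketched in Python; the function name is just for illustration:

```python
def back_end_iops(host_iops, write_penalty, read_ratio=0.5):
    """Back-end IOPS: reads pass through once, each write costs
    'write_penalty' disk I/Os (4 for RAID 5, 2 for RAID 1/0)."""
    write_ratio = 1.0 - read_ratio
    return (read_ratio * host_iops) + (write_penalty * write_ratio * host_iops)

# LUN01: 1250 host IOPS at 50/50 read/write
print(back_end_iops(1250, 4))  # RAID 5   -> 3125.0
print(back_end_iops(1250, 2))  # RAID 1/0 -> 1875.0
```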

To calculate the number of disks required (not considering vault disks or hot spares), we have:

LUN01

RAID5 (3125/180)+(3125/180) == ??? Maybe 36 drives? Is this correct?
RAID10 (1875/180)+(1875/180) == 22 drives. Is this correct?

I believe the correct formula is (TOTAL IOPS NEEDED / IOPS PER DISK).

The same doubt applies to LUN02.

Could someone clear up my doubts?

Thank you

261 Posts

May 21st, 2009 14:00

1/2 right.

When you calculate the number of drives needed, you take the total I/O calculation and divide it by the IOPS your drive type can handle. In your calculations you factored the IOPS in twice.

Instead of: RAID5 (3125/180)+(3125/180)
It's just: RAID5 (3125/180) = 17.36, so a 17+1 (but really only possible with MetaLUNs, and you would want RGs with the same number of disks...)

In the Best Practices Guide (I'm looking at revision 26, but I think they all have this) there is a part that says "Determine the number of drives required" in the "Sizing Example" section, where I think you are looking. In that example they are trying to figure out how many drives you would need to handle 38,000 IOPS using RAID 5.
In the example the calculation says:
38,000/180 + ((38,000/180)/30)+5 = 211+7+5=223

What they have done is figure out how many drives are needed to handle the RAID 5 load, plus how many hot spares the system needs (based on the 1-per-30-drives rule), plus the 5 vault drives.
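That sizing example can be sketched in Python (truncating the intermediate divisions the way the guide's arithmetic does; the function name is just for illustration):

```python
def total_drives(back_end, drive_iops=180, spare_ratio=30, vault=5):
    """Best Practices sizing: data drives + hot spares (1 per 30 drives)
    + the 5 vault drives. Divisions are truncated, matching the guide."""
    data = back_end // drive_iops      # drives to carry the load
    spares = data // spare_ratio       # 1 hot spare per 30 drives
    return data + spares + vault

print(total_drives(38_000))  # 211 + 7 + 5 = 223
```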

Seeing you are not going that far, you don't need to worry about 2/3s of that example.

Your RAID 1/0 would also be (1875/180), and that already factors in the primary and secondary drives. So (1875/180) = 10.41, and you should use a 5+5 or a 6+6.

Hope this helps.

-Ryan

19 Posts

May 22nd, 2009 10:00

OK Ryan,

Thank you for helping me. I read the .PDF but it isn't so clear to me.

In the my example:

RAID 1/0 (1875/180) = 10.41, so 11 drives. (6+6) sounds good.

Let me confirm if I understood correctly.

After I apply the RAID level penalty, I should divide (IOPS/180) = number of disks.

Correct?

1) In my example, a RAID 1/0 (6+6) is OK to get around 1875 IOPS.

2) In another post someone said that for RAID 1/0 you should use half the IOPS, i.e. consider only the primary stripe; in that case 6 drives ==> around 1080 IOPS.

Honestly, I am confused.

What do you think about rules 1) and 2) for calculating IOPS?

261 Posts

May 22nd, 2009 11:00

Correct. After you do the following calculation, you just divide by 180 (or whatever the max IOPS is for the drive you are using):

So: RAID10 (0.5*1250)+(2*0.5*1250) = 1875 IOPS
1875/180 = 10.41 (5+5 or 6+6, but go with 6+6)
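As a sketch of that last step, assuming 180 IOPS per drive (the function name is just for illustration):

```python
import math

def mirror_group(back_end_iops, drive_iops=180):
    """Divide back-end IOPS by per-drive IOPS, then round up to an
    even drive count so the RAID 1/0 group can be split N+N."""
    drives = math.ceil(back_end_iops / drive_iops)  # 1875/180 -> 11
    half = math.ceil(drives / 2)
    return half, half

print(mirror_group(1875))  # (6, 6)
```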

Keep in mind this 6+6 is based on the I/O mix you are planning to send to this setup in our example for LUN01 (1/2 reads and 1/2 writes).

In regards to what the other post said: if you want to calculate the worst-case, lowest IOPS you can expect from the group, you can just take the number of primary drives and multiply it by the IOPS for those drives. Worst case is all writes.

In your example, LUN01 was to do 1250 IOPS. To find out how many drives we need in RAID 1/0, we just divide 1250 by 180 to get the number of primary drives needed: 1250/180 = 6.944 (call it 7). This would be a 7+7 because we need to mirror the writes.

Say we use the formula: (0*1250)+(2*1.00*1250) = 0+2500 = 2500; 2500/180 = 13.888, which is a 7+7.

They are really just using a shortcut to find the minimum number of IOPS for the RAID 1/0, while knowing that reads can also come from the secondary drives.
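The worst-case (all-writes) shortcut can be sketched the same way, assuming 180 IOPS per drive (the function name is just for illustration):

```python
import math

def raid10_group(host_iops, drive_iops=180):
    """Worst case (all writes): every host write lands on a primary
    drive and is mirrored, so size the primary stripe alone and
    double it for the mirrors."""
    primaries = math.ceil(host_iops / drive_iops)  # 1250/180 -> 7
    return primaries, primaries                    # a 7+7 group

print(raid10_group(1250))  # (7, 7)
```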

Hope this explains it.

-Ryan

19 Posts

May 24th, 2009 19:00

Thanks Ryan, I think I understand better now.

I understood the calculation for how many drives I need to get 1250 IOPS.

But I have other doubts about R5 vs. R10.

May I put my doubts in this post, or do I need to start another one?

261 Posts

May 26th, 2009 07:00

You can put them here.

June 4th, 2009 14:00

Can I ask you all a question?

I understand the math to calculate spindles required to gather a certain IOPS number.

But I believe that the number of spindles calculated here is the one needed in case the cache fills and host requests are cut-through to spindles... am I right or not?

In "normal" conditions, cache will - sorry for the word game - cache, accumulate writes and try to sequence them at its best (for writes only, I know that) onto disks.

So I imagine that the actual IOPS on an AX/CX are usually much higher, and that the above calculation considers the worst case only.

Can I have your comments on this?

If I am right, is there a rule of thumb that allows you to tell "dear customer, based on your IOPS request you need 100 spindles, but caching will decrease that amount - if you can accept some performance degradation from time to time - to, let's say, one third or whatever"?

I hope I can make myself understood!

Rgds,
Andrea

261 Posts

June 4th, 2009 14:00

We need to plan for the worst-case scenario to keep the cache from filling. A full write cache means that no writes can happen to the array. Also, writes do not bypass cache while cache is full. If the array did not use cache, then we would lose that data if by chance the power failed while the write was in flight.

If you need a 5+5 to get the right performance, you will probably not get by on a 2+2 (unless your host can tolerate the long response times).

-Ryan

2 Intern


392 Posts

June 5th, 2009 10:00

If I am right, is there a rule of thumb that allows you to tell "dear customer, based on your IOPS request you need 100 spindles, but caching will decrease such amount [of disks] - if you can accept some performance degradation from time to time - to, let's say, one third or whatsoever"?


The Rule-of-Thumb number is what is called a 'back-end number'. To a degree, it has the performance of the Fibre Channel back-end bus and drives built into it. It also (to a degree) has the performance of the cache built into it.

The full spectrum of back-end bus, disk, and cache performance under every operational condition is difficult to capture in a single number. The ROT number contains the assumptions needed to ensure needed performance under a wide range of circumstances. Of necessity, the ROT number is conservative.

If you have a lot of experience and a detailed knowledge of your: workload, storage system's component hardware, and the storage system's provisioning you could 'do better' than the ROT calculation. It is not possible to determine how much better. However, this analysis typically requires more experience, time, and labor than many users have available.

Through your EMC representative, you can contact a CLARiiON SPEED (CSPEED) engineer. CSPEEDers have access to tools and training that can provide a more accurate number than the ROT. However, this more accurate number requires analysis of detailed information on workload, hardware, and provisioning (the same as mentioned above).

The CLARiiON's cache operation is complex. I suggest you read the 'Memory' section of the "EMC CLARiiON Storage System Fundamentals for Performance and Availability". This document is available on Powerlink. This section contains a description of the operation of the CLARiiON's cache. The document also contains information helpful in understanding the relationship of IOPS to drives in the 'Storage Objects' section.

In addition, the "EMC CLARiiON Best Practices for Performance and Availability, FLARE Revision 28.5" (also available on Powerlink) describes tuning techniques that, based on analysis, may allow you to decrease the number of provisioned disks without losing any performance.

jps00


32 Posts

March 31st, 2011 13:00

Hello,

I have some quick questions about RAID performance measurement and the theoretical IOPS for each RAID/disk type.

* How do I calculate the practical IOPS limits of:


RAID5 (4+1) SATA 1TB/2TB

RAID5 (4+1) FC 146GB/268GB/FC 402GB

RAID10 (6 disks) FC 146GB/268GB

* What is the IOPS limit that an SP can service on the CX4-960 and CX4-480?

* How should I interpret the latency number generated from an Analyzer file for a certain LUN/RAID group? Is a 9 ms latency on one LUN bad, very bad, or seriously bad? What are the common causes of latency?

* I was wondering about the best number of disks in a RAID 5 group, Fibre Channel or SATA, for VMware use. I have read somewhere that an 8+1 RAID 5 provides superior performance for VMware from an IOPS perspective.

Any feedback is appreciated.
