
2 Intern

 • 

138 Posts


February 28th, 2011 12:00

IOPS calculation?

When the Oracle DBAs ask for disks, they always specify that they want XXXX IOPS. For example, I got a request today for 3 TB (20 LUNs x 150 GB) of space which must support 4500 IOPS, RAID 5 (4+1).

So, how do I calculate this?

If it's RAID 5 (4+1), and each disk delivers 180 IOPS (15K RPM, 4 Gb/s), do I count ALL of the disks, including the parity disk, or do I just count the "data disks"? i.e. - if I have 1 RAID group R5 (4+1), is that 4 x 180 IOPS/disk or 5 x 180 IOPS/disk?

If the disks are RAID 1/0, do I count ALL of the disks, or just 1/2 of the disks?

   Stuart

474 Posts

February 28th, 2011 12:00

To calculate accurately, you also need to know what percentage of the IOPS is writes vs. reads. Writes increase the backend disk load; the amount of the increase depends on the RAID type. You do include the parity or mirror disks in the IOPS calculations, but obviously not in the capacity calculation.

So for example…

4500 IOPS at 100% Read = 4500 disk IOPS regardless of RAID Type

4500 IOPS at 100% Write = 9000 disk IOPS in RAID10, 18000 IOPS in RAID5, and 27000 IOPS in RAID6.

At 180 IOPS per spindle, you’d need 25 disks for a 100% Read environment, or 100 disks in a 100% Write environment using RAID5.

Let’s assume a more reasonable 2:1 read/write ratio (33% writes)

3015 Read IOPS and 1485 Write IOPS and 20TB of usable capacity.

Backend/Disk IOPS equate to:

5985 RAID10 – 5985 IOPS / 180 = 34 disks

8955 RAID5 – 8955 IOPS / 180 = 50 disks

11925 RAID6 – 11925 IOPS / 180 = 67 disks

To meet the capacity requirement you need 38 x 600GB data disks to get to 20TB usable, which is 76 disks in RAID10 or 48 disks in RAID5. In essence, with a 15K drive, you would meet the IOPS requirements just by meeting the capacity requirements (this is not always the case).

For CX3/CX4, the least cost option for this workload would be 65 x 450GB 10K in RAID5 which would provide the best balance of the capacity and IOPS requested. Larger disks would give more capacity that you don’t need and faster disks would provide performance above and beyond what was requested. This may be good depending on your confidence in the performance requirements.

For VNX, the least cost option is 50 x 600GB 15K SAS in RAID5
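The sizing arithmetic above can be sketched in a few lines of Python. This is only an illustration of the method described in this post: the 3015/1485 split, the 2/4/6 write penalties, and the 180 IOPS/spindle figure are all assumptions taken from the post, not measured values.

```python
import math

# Sketch of the sizing above: 4500 front-end IOPS at 33% writes.
reads, writes = 3015, 1485
penalty = {"RAID10": 2, "RAID5": 4, "RAID6": 6}  # back-end I/Os per front-end write

for raid, p in penalty.items():
    disk_iops = reads + p * writes   # reads pass through; each write costs p back-end I/Os
    print(raid, disk_iops, "back-end IOPS ->", math.ceil(disk_iops / 180), "x 15K disks")
```

Running the loop reproduces the disk counts derived above; a real design would then take the larger of the IOPS-driven and capacity-driven disk counts.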

2 Intern

 • 

727 Posts

February 28th, 2011 12:00

Yes, you do include the parity disks in the calculation. See https://community.emc.com/thread/91112?start=0&tstart=0 for a detailed discussion on this topic.

2 Intern

 • 

138 Posts

February 28th, 2011 12:00

That thread is only 1/2 helpful, as everybody contradicts everybody else.  I should have found it in a search, though.  My bad.

4 Operator

 • 

5.7K Posts

March 1st, 2011 05:00

Wow, I wish all DBAs were like that! Can we exchange DBAs, please?

4 Operator

 • 

5.7K Posts

March 1st, 2011 05:00

So, are you fine now? Do you understand how to do the math?

For random I/O:

RAID10: write penalty = 2, read = 1; available space = number of disks divided by 2

RAID5: write penalty = 4, read = 1; available space = number of disks minus 1 disk

RAID6: write penalty = 6, read = 1; available space = number of disks minus 2 disks

Always count all the drives involved, since the write penalty takes care of that.

An app does 1000 IOPS, where the read/write ratio is 3/1, so 3 times as many reads as writes. These 1000 IOPS are 750 reads and 250 writes.

Backend IOPS:

RAID10: 750 + 2 x 250 = 1250; you'll need 1250/180 (15k) = 7, so at least 8 drives (RAID10 needs an even number), or 1250/130 (10k) = at least 10 drives

RAID5: 750 + 4 x 250 = 1750; you'll need 1750/180 (15k) = at least 10 drives, or 1750/130 (10k) = at least 14 drives

RAID6: 750 + 6 x 250 = 2250; you'll need 2250/180 (15k) = at least 13 drives, or 2250/130 (10k) = at least 18 drives

Are you OK with this math?
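A small Python sketch of the same math (the function names are mine; the penalties and per-drive IOPS figures are the ones from the table above):

```python
import math

# Write penalty per RAID level for small random I/O (the table above)
WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def backend_iops(front_end_iops, write_fraction, raid):
    """Front-end IOPS -> back-end (disk) IOPS for a given RAID level."""
    reads = front_end_iops * (1 - write_fraction)
    writes = front_end_iops * write_fraction
    return reads + WRITE_PENALTY[raid] * writes

def drives_needed(front_end_iops, write_fraction, raid, iops_per_drive):
    # Round up; for RAID10 you would additionally round up to an even count.
    return math.ceil(backend_iops(front_end_iops, write_fraction, raid) / iops_per_drive)

# The 1000 IOPS, 3:1 read/write example above (750 reads, 250 writes)
print(backend_iops(1000, 0.25, "RAID5"))        # 1750.0 back-end IOPS
print(drives_needed(1000, 0.25, "RAID5", 180))  # 10 x 15k drives
```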

2 Posts

March 2nd, 2011 22:00

Well, I don't know about him, but man, I'm fine with the math. Thank you. I just glanced at this thread and it is awesome... I get it... finally!

4 Operator

 • 

5.7K Posts

March 3rd, 2011 01:00

You're welcome. Don't forget to mark this question as answered and any helpful answers as well. This helps future readers understand the issue too.

115 Posts

March 8th, 2011 09:00

When calculating the IOPS, is there a formula to then factor in the write cache on the system?

So, for example, if I have a 70/30 read/write mixture in a RAID 5 array, how would the cache affect this?

I know there are different size caches on the various CX4s, but I'm just looking for a rough indication of how the cache will improve performance, IOPS-wise.

I do understand the cache cannot speed up the actual IOPS of the disks, but it does speed up the I/Os, and I just want to factor this into my configuration.

Clear as mud, as you can see :-)

2 Intern

 • 

392 Posts

March 8th, 2011 12:00

The EMC CLARiiON Storage System Fundamentals for Performance and Availability whitepaper in its 'Write Cache' section has a good explanation of the write cache's function.

The effect of cache on I/O is dependent on a large number of factors.  The difference in response time between cached and uncached performance can be the difference between a few milliseconds and tens of milliseconds, perhaps as much as 100 milliseconds.

Pick your scenario. It can range from a small-block read I/O finding its result already waiting in read cache as a result of read-ahead, to an uncachable 1MB write I/O to a LUN on a 4+2, 7.2K RPM 1 TB SATA drive RAID group with the storage processor already at 95% utilization and furiously fast-flushing. The difference between these scenarios captures many of the considerations.

I guess, it depends.

115 Posts

March 8th, 2011 13:00

I guess it's a real "how long is a piece of string" question ;-)

I guess a big part depends on your write size. I'll have a read of that document again; there's a lot in it :-)

Ta jps00, your assistance is greatly appreciated :-)

115 Posts

March 8th, 2011 14:00

Hi jps00, I would mainly use VMware (VMFS) with the CLARiiON.

How does the VMware 1MB (or any size) block size work with, say, the 64KB stripe size? Is there a specific VMware/CLARiiON document?

How would you align a VMware volume with the stripe size to make the most of your cache?

Sorry for all the questions; the more you read that CLARiiON performance document, the more questions you have :-)

Ta

Beag

474 Posts

March 8th, 2011 14:00

First, 64KB is the stripe element size. Actual stripe size is determined by the number of data disks in the RAID Group. A RAID5 4+1 Group has a stripe size of 256KB. Cache memory is broken up into pages that can range from 2KB to 16KB. You should leave this at the default 8KB unless you know that your IO sizes across the entire array are either predominately over 16KB or under 4KB.

The VMFS block size does not affect the IO size that the array sees; the block size only affects file/block allocation within the VMFS volume and determines the maximum file size within the VMFS volume.

IO Size is determined by the Guest OS filesystem and application. An NTFS filesystem formatted with 4KB allocation would issue IOs as small as 4KB. But a SQL database might issue IOs around 64KB in size.

If you create a VMFS volume within vSphere, it will be automatically aligned to the 64KB stripe on the Clariion. You still need to align the Guest filesystem however to 64KB or a multiple of 64KB. 1MB and 1GB alignment offsets are becoming more common. Windows 2008 automatically aligns NTFS as well now so it’s only a concern for older versions of Windows and other OS’s (Linux, etc).

There are EMC White papers on PowerLink for specific applications that would help with stripe size/filesystem allocation for best performance. Generally a Windows fileserver is left at 4KB allocation, but SQL/Exchange best practices from EMC are to format the NTFS filesystem with 64KB allocation units since those applications use larger IOs.
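The stripe arithmetic above can be illustrated with a short sketch. The 64KB element size, the 4+1 layout, and the offsets are the figures mentioned in this thread; the alignment check itself is just modular arithmetic.

```python
ELEMENT_KB = 64                       # CLARiiON stripe element size
DATA_DISKS = 4                        # RAID5 4+1 -> 4 data disks
stripe_kb = ELEMENT_KB * DATA_DISKS   # full stripe = 256KB, as noted above
print(stripe_kb)

# A partition offset is aligned if it is a whole multiple of the element size
for offset_kb in (31.5, 64, 1024, 1024 * 1024):  # old-Windows 63-sector, 64KB, 1MB, 1GB
    print(offset_kb, offset_kb % ELEMENT_KB == 0)
```

The classic misaligned case is the old Windows 63-sector (31.5KB) offset, which fails the check; the 64KB, 1MB, and 1GB offsets all land on an element boundary.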

2 Intern

 • 

1.3K Posts

April 8th, 2011 03:00

1GB alignment!? Hearing that for the first time... Also, as per emc56324: 4KB for Exchange 2000 and 2003, 8KB for Exchange 2007, and 64KB for SQL.

474 Posts

April 8th, 2011 09:00

Windows 2008 has a default offset of 1MB. 1GB could be used for those that want to align the partition to the 1GB Virtual Provisioning chunks in CX4/VNX.

As far as the primus article, the 4KB, 8KB, and 64KB is the NTFS allocation unit size. I’ll have to go back and look at the whitepapers from EMC on Exchange 2000/2003/2007 but the current Exchange 2010 white paper says 64KB. (Deployment Guidelines for Microsoft Exchange 2010 with EMC Unified Storage – Best Practices Planning)

2 Intern

 • 

247 Posts

April 11th, 2011 04:00

Why would anyone want to align to the VP chunk size?

I can understand someone would reason that a hot 1GB file will now only cause a FAST promotion of one 1GB chunk to EFD instead of two chunks. But that theory would also crumble as soon as a tiny 4KB file sneaks in before the big file and pushes it over the chunk boundary, again using two chunks.

I'm not seeing the benefit... even in a theoretical world where storage is free and capacity unlimited.
