
March 18th, 2010 03:00

Analyzer, spindle showing 900 IOPS.

While doing some performance tuning on a CX3-80, I came across a couple of FC spindles doing around 900 read IOPS.

900_IOPS.jpg

Average seek distance is constantly around 10GB for these 133GB FC spindles.

Can this be true, or is this a measurement error by Analyzer?

Regards,

John

4.5K Posts

March 18th, 2010 11:00

Some values will lag others in the graph.

There is no on-board cache on any of the drives EMC uses.

If the read IO is very sequential you can get very good performance from the FC disks - a big if - but you need to ensure that the Read Cache Hit Ratio is very high, close to 1, for efficient pre-fetching to take place. Look at the LUN with the high IOPS, then look at its Read Cache Hit Ratio when you get that spike, or look at Total Read IOPS and Read Hits/s. High pre-fetch rates will increase the bandwidth on the disk.
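To make that check concrete, here is a minimal sketch of the ratio with hypothetical counter values (not taken from this array):

```python
# Rough sanity check of the Read Cache Hit Ratio from two Analyzer
# counters for the LUN: Total Read IOPS and Read Hits/s.
# The numbers below are hypothetical, not taken from this array.
total_read_iops = 950.0   # Total Read IOPS for the LUN in the sample
read_hits_per_s = 920.0   # Read Hits/s for the same sample

hit_ratio = read_hits_per_s / total_read_iops
print(f"Read cache hit ratio: {hit_ratio:.2f}")

# A ratio close to 1 means the reads are being satisfied from SP cache
# (pre-fetch), so the spindles are doing large sequential transfers
# rather than 900 true random seeks.
```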

You want to keep queue length (for both the disks and the LUN) under 10 queued I/Os. Disk service times should be under 4 ms if possible.

Look at your SP IOPS and queue length at that time - are you seeing a large increase in IO or bandwidth? Look at the IO sizes - what's happening there? Look at the host that is using that disk/LUN - why would it be sending that much IO?
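As a rough rule of thumb (my assumption, not an Analyzer counter), a new request has to wait behind everything already queued, so you can estimate the response time a host sees like this:

```python
# Rule-of-thumb estimate: response time ~= service time * (queue length + 1),
# since a new I/O waits for everything already queued ahead of it.
# Hypothetical values illustrating the limits mentioned above.
disk_service_time_ms = 4.0   # per-I/O disk service time
queue_length = 10            # I/Os already queued at the disk

est_response_time_ms = disk_service_time_ms * (queue_length + 1)
print(f"Estimated response time: ~{est_response_time_ms:.0f} ms")
# Already ~44 ms at those two limits, which is why you want to stay
# under both of them at the same time.
```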

Are your users seeing performance issues?

glen

2 Intern • 5.7K Posts

March 18th, 2010 03:00

I'd think it's a cache issue. A single spindle cannot perform like that.

2.1K Posts

March 18th, 2010 07:00

Unless I'm reading this wrong (and it doesn't show very clearly on my screen) I think it is only showing a momentary huge spike in activity to that drive. I can't remember the detailed numbers on what a single drive is actually capable of in a burst, but the "normal" numbers we refer to are only for sustained IO while still maintaining any semblance of performance as far as response time and such. So when we say a drive should be "capable" of 180 IOPS, the drive can actually handle significantly more. Just not for a sustained period and not without affecting overall performance.
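For a rough idea of where a number like 180 IOPS comes from, you can add up the mechanical delays of a 15K drive (typical published figures assumed here, not measurements from this array):

```python
# Back-of-the-envelope for the ~180 IOPS "sustained" figure on a 15K
# drive. Seek/transfer times are typical published values, assumed for
# illustration only.
avg_seek_ms = 3.5                              # average seek time
rotational_latency_ms = 0.5 * 60000.0 / 15000  # half a revolution at 15K rpm = 2 ms
small_io_transfer_ms = 0.1                     # moving a small random I/O off the platter

service_time_ms = avg_seek_ms + rotational_latency_ms + small_io_transfer_ms
sustained_iops = 1000.0 / service_time_ms
print(f"Sustained random IOPS per spindle: ~{sustained_iops:.0f}")
# Roughly 180 IOPS. Bursts, short seeks and cache/pre-fetch hits skip
# most of the seek and rotation, which is how a drive can briefly show
# far more.
```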

That being said, I have to admit that it isn't entirely unusual to have the occasional odd spike show up in Navi Analyzer graphs of various stats. For the most part I ignore them and deal with the consistent data, unless I'm investigating a specific event or incident and expecting something odd.

2 Intern • 5.7K Posts

March 18th, 2010 08:00

Ah, of course... the 180 mentioned is based upon an average I/O size. Very small I/Os result in much higher IOPS indeed.

4.5K Posts

March 18th, 2010 09:00

Yes it is possible - as mentioned, the limits on the drives are for a "sustained" load - 180 IOPS for 15K disks. You can send 900 IOPS to a disk, but don't expect performance to match what a load of 100 IOPS would give you. The smaller the IO, the more you can send; the larger the IO, the less you can send. If the IO size is over 128KB, you should be looking at bandwidth rather than IOPS - each 15K disk can handle about 180 IOPS or 12MB/s.
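If it helps, here's a quick sketch (illustrative figures only) of which per-disk limit you run into first for a given I/O size, using those rough 180 IOPS / 12MB/s numbers:

```python
# Which per-disk limit binds first for a given I/O size, using the
# rough figures above (~180 IOPS or ~12 MB/s per 15K spindle).
IOPS_LIMIT = 180
BANDWIDTH_LIMIT_MBPS = 12.0

def binding_limit(io_size_kb: float) -> str:
    mbps_at_iops_limit = IOPS_LIMIT * io_size_kb / 1024.0
    if mbps_at_iops_limit >= BANDWIDTH_LIMIT_MBPS:
        return f"{io_size_kb:>5.0f} KB I/O: bandwidth-bound (~{BANDWIDTH_LIMIT_MBPS:.0f} MB/s ceiling)"
    return f"{io_size_kb:>5.0f} KB I/O: IOPS-bound (~{mbps_at_iops_limit:.1f} MB/s at {IOPS_LIMIT} IOPS)"

for size_kb in (4, 16, 64, 128, 256):
    print(binding_limit(size_kb))
# Small I/Os exhaust the IOPS budget long before the MB/s ceiling;
# at 128 KB and above the bandwidth limit is hit first, which is why
# you switch to watching bandwidth for large I/O.
```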

Check queue length for the disks - look at all the disks in the raid group together - look at queue length, Total IOPS, Bandwidth, Average Busy Queue vs Queue Length, Seek Distance, etc. Most of the time all the disks in a single raid group will be similar. There are exceptions - metaLUN configuration, very small Writes - the first disk in a raid group holds the dirty page pool, so this disk normally gets more Writes as the write cache fills and empties.

glen

99 Posts

March 18th, 2010 10:00

This is one of the devices doing 900 IOPS; here you can see the sudden 32MB/s read bandwidth from one of the spindles at the time of the 900 IOPS.

32MB.jpg

What's the cache size within the drive? Might these IOPS come directly from the drive's cache?

It's not in the image above, but the service/response times are also not increasing; there is even a decrease during the 32MB/s, 900 IOPS read.
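Working backwards from those two graphs (assuming both counters cover the same sample interval), the implied average read size is:

```python
# Implied average read size from the two Analyzer readings, assuming
# both counters cover the same sample interval.
read_mb_per_s = 32.0
read_iops = 900.0

avg_read_kb = read_mb_per_s * 1024.0 / read_iops
print(f"Average read size: ~{avg_read_kb:.0f} KB per I/O")
# ~36 KB per read. Together with the flat (even improving) response
# times, that looks more like large sequential/pre-fetched transfers
# than 900 genuine random seeks.
```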

Regards,

John
