September 21st, 2012 09:00
IO Usage Reports - Servers vs Controllers
When I look at the IO usage tab in EM I seem to get different IO reports between the servers and the controllers. For example, if I add up the total IO for all my servers at one point in time I get around 2500 IO/sec. However, if I look at the IO on the controller at that time it shows around 8000 IO. I understand that the extra might be cache IO, but there is no good way to distinguish the two. And if that is the case, how do I know what is real disk IO versus cache IO? It's hard to set threshold alerts if I can't tell what the disk IO is by itself.
I also noticed that if I look at IO on a particular day for a server, and then look at the IO over a week, the peak IO usage doesn't match up. The daily view peaks at around 2300 a few times a day, but the weekly view doesn't show any peaks above 1200. Has anyone else seen this particular behavior?
J


hallidayr
September 24th, 2012 08:00
First one sounds like you're looking at total I/O on the controller. This is going to include all of the back-end traffic, such as shifting data between tiers, Fast Track, etc., plus any ongoing RAID scrubs or data migration. You might do well to look at the I/O for the individual back-end ports, as a mixed-tier system will have different thresholds per disk class. Setting a threshold of 8000 IOPS on the back end won't do much if all of the traffic is on a SATA shelf that blows up around 2000. I think CoPilot can provide some guidance with setting up decent alerting.
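To illustrate the per-tier point, here's a quick sketch (names and IOPS limits are hypothetical, not a Compellent API): one controller-wide threshold can sit silent while a single disk class is already saturated.

```python
# Hypothetical per-tier IOPS ceilings (illustrative numbers only).
TIER_LIMITS = {
    "ssd": 20000,
    "15k_sas": 5000,
    "sata": 2000,
}

def tier_alerts(observed):
    """Return the tiers whose observed IOPS exceed their own ceiling."""
    return [tier for tier, iops in observed.items()
            if iops > TIER_LIMITS.get(tier, float("inf"))]

# 8000 total IOPS, but most of it on the SATA shelf: a global 8000
# threshold never fires, while the per-tier check flags SATA.
print(tier_alerts({"ssd": 500, "15k_sas": 1500, "sata": 6000}))  # ['sata']
```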
Second sounds like a common graphing artifact. If you've ever used software like PRTG you'll remember that as you increase the sample interval you lose resolution. Quite frequently I've seen graphs drop off peaks and take a more averaged approach.
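A quick sketch of that averaging effect (made-up numbers, not your actual data): if the weekly view re-samples per-minute data into larger averaged windows, a short 2300 IOPS burst all but vanishes.

```python
def downsample_avg(samples, window):
    """Collapse consecutive samples into fixed-size windows by averaging."""
    return [sum(samples[i:i + window]) / window
            for i in range(0, len(samples), window)]

# One hour of per-minute IOPS: steady 1000 with a 3-minute burst at 2300.
minute_iops = [1000] * 60
minute_iops[20:23] = [2300] * 3

hourly = downsample_avg(minute_iops, 60)  # one averaged point for the hour

print(max(minute_iops))  # 2300 -- the peak a fine-grained daily view shows
print(max(hourly))       # 1065.0 -- the averaged, weekly-style view
```

Graphing tools that plot the max per window instead of the mean would preserve the peak, which is why the daily and weekly charts can disagree so badly.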