I understand the "Average" values, but what do the "Minimum" and "Maximum" values in the IOmeter results mean?
Does "Maximum" mean that the LUN or Pool is capable of handling up to that many IOPS?
What does the "Maximum" MB/s value mean with respect to that LUN or Pool?
So, looking at the results for DISK (mapped drive), what do the values below mean?
Read IOPs: 118112
Write IOPs: 50982
Average Response Time: 5.32
Maximum Response Time: 1500
Here is what I want to know:
1) What is the Maximum IOPs that my LUN/POOL can handle
2) What is the lowest Response Time that my LUN/POOL can achieve
3) What is the Maximum bandwidth my LUN/POOL can handle
IOPs: 187.82 - this means the disk was able to handle 187 IOs per second
Read IOPs: 118112 - this is the total number of Read IO's
Write IOPs: 50982 - this is the total number of Write IO's
MBpS: 98.47 - this is the bandwidth - the MB per second
Average Response Time: 5.32 - this is the average time an individual IO took (total response time across all IOs divided by the number of IOs), in milliseconds
Maximum Response Time: 1500 - this is the longest time that an individual IO took
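The reported numbers hang together arithmetically: bandwidth is just IOPS times the IO size. A quick sanity check (assuming IOmeter reports decimal megabytes, i.e. 1 MB = 1,000,000 bytes, and the test used a 512 KiB transfer request size):

```python
# Sanity-check the relationship between the reported IOmeter metrics.
# Assumptions: decimal MB (1,000,000 bytes) and a 512 KiB transfer size.

IO_SIZE_BYTES = 512 * 1024   # 512 KiB transfer request size
iops = 187.82                # total IOs per second from the run

mb_per_sec = iops * IO_SIZE_BYTES / 1_000_000
print(f"{mb_per_sec:.2f} MB/s")   # -> 98.47 MB/s, matching the reported MBpS
```

This also explains why bigger IOs mean fewer IOPS but more MB/s: the product of the two is roughly fixed by what the device can move per second.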
1) What is the Maximum IOPs that my LUN/POOL can handle - that depends on the IO size, the type of IO (reads or writes), the mix of reads vs. writes, and the randomness of the IO.
2) What is the lowest Response Time that my LUN/POOL can achieve - see answer for #1 - it all depends on a lot of factors
3) What is the Maximum bandwidth my LUN/POOL can handle - same as #2
In general we cannot provide specific performance figures for a Pool, as the performance of the pool depends on a lot of factors (see above) and on what the requirements are for response time. If you don't care what the response time is, then the maximum Throughput (IOPS) or Bandwidth (MB/s) can be high. If you have a response time requirement (for example, response times must be less than 10 ms), then the Throughput/Bandwidth will be lower.
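One way to see why a response-time requirement caps throughput is with a simple single-server queueing model. This is purely illustrative (a textbook M/M/1 queue, not a model of any real array), and the 5 ms service time is a hypothetical figure:

```python
# Illustrative only: an M/M/1 queue shows why requiring a response time
# below some limit caps the achievable throughput. The 5 ms service time
# is a hypothetical number, not a measurement of any real array.

def max_iops_for_response_time(service_time_ms: float,
                               required_response_ms: float) -> float:
    """Highest arrival rate (IOPS) whose mean M/M/1 response time
    R = 1 / (mu - lambda) stays within the requirement."""
    mu = 1000.0 / service_time_ms          # service rate in IOs per second
    return mu - 1000.0 / required_response_ms

# With a 5 ms service time the device tops out at 200 IOPS, but holding
# mean response time under 10 ms limits you to about 100 IOPS.
print(max_iops_for_response_time(5.0, 10.0))   # -> 100.0
```

As the load approaches the device's raw capacity, response times climb steeply, which is exactly the trade-off described above.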
I've attached two documents that will help explain this.
Actually, I framed my question differently; my fault.
I know there are a lot of factors this depends on. But from these IOmeter values, can I make rough estimates like the ones below?
Note: I have used a 512 KB IO size with a 70-30 read-write ratio
1) IOPs: 187.82 = 187.82 * (total disks in RAID 6 where my CIFS share exist) = 187.82 * 8 = ~ 1500 IOPs
So is my current RAID 6 capable of handling ~ 1500 IOPs ?
2) MBpS: 98.47
So is my current RAID 6 capable of handling ~ 98 MBpS of traffic?
3) Latency: 5.32 ms
So the best latency my current RAID 6 can achieve is ~5.32 ms, not lower than that?
When you run IOmeter and select a disk, that disk on the array is one device (at least if this is a Block configuration: the LUN, which is in a Pool), and the IOPS (187.82) is what the LUN can handle, not what the individual physical disks under the LUN that make up the Pool can handle. For File it's a bit different: File takes a LUN (or LUNs), creates a file system on the array, and then presents that to a host as a device (I think this is correct; I'm not a File person). That file system may behave differently than a LUN on the Block side.
If you have 8 physical disks in the Pool using RAID 6, then you have a 6+2 configuration. Each individual disk can handle about 80 IOPS, so the total Pool can handle about 8 (disks) * 80 IOPS = 640 IOPS. The 80 IOPS figure is based on a small-block (less than 32 KB IO size), random IO (combination of reads and writes, 70/30) workload. If you increase the IO size, then IOPS will be lower, but bandwidth increases. Check page 115 in the "EMC Unified Storage Best Practices for Performance and Availability - Common Platform and Block Storage 31.5 — Applied Best Practices.pdf" document that I attached in my last post.
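The rule-of-thumb estimate above can be sketched in a couple of lines. The 80 IOPS per-disk figure is the one stated in the post (small-block, random, 70/30 read/write workload); real disks and RAID write penalties will change this:

```python
# Rough rule-of-thumb pool estimate from the post above.
# 80 IOPS/disk assumes a small-block (<32 KB), random, 70/30
# read/write workload; it is not a universal constant.

DISKS_IN_POOL = 8          # RAID 6 as a 6+2 configuration
IOPS_PER_DISK = 80         # small-block random workload rule of thumb

pool_iops = DISKS_IN_POOL * IOPS_PER_DISK
print(pool_iops)           # -> 640
```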
I agree with your latter theoretical calculation. This is what I showed my manager at the very beginning, but my manager wanted realistic figures.
So I posted in this community for any such tool, for which IOmeter is ok.
Now, about the realistic numbers: those I also got from IOmeter,
but the difficulty is interpreting the results, which is where I am getting confused.
How do I interpret these results, which I got by testing a CIFS share (mapped as a network drive to one of the VMs)? I ran IOmeter from this VM on ONLY this mapped CIFS share.
Please help me understand my results, based on a 512 KB IO size and a 70-30 read/write ratio.
Sorry - been tied up.
The results are from a benchmark that does not really represent real-world performance, as it's not likely that your applications will be a consistent stream of data all of the same IO size and type (read/write). What a benchmark provides is, at best, an idea of what the array is capable of: roughly the fastest performance that the array can provide. From this benchmark you could say that under these specific conditions, the performance of the array will be this. It's a starting point from which you know that this is the best that can be achieved under these test conditions.
Your applications would not behave in the same manner, so when you start using the array for real applications the level of performance would be less than what you can get on a benchmark. What you do know from this benchmark is that the level of performance is X.
Once you start to use the array with real applications, you will need to understand how to configure the array and the File side for best performance. There are a number of White Papers on File performance (I've attached the ones for the Block side) that you'll need to review in order to get the best performance from the File configuration. I'm not a File expert, so I'm not able to provide advice on how to set up the File side for best performance.
I'd suggest that you look on the support.emc.com site for White Papers for configuring the File for best performance. How you setup the array will determine the level of performance that you can achieve.