
Replying to:
kelleg

Re: Test CIFS/Pool load capability

The FAQ for IOmeter explains how to run it. If you want to test throughput (IOPS), use a small IO size, say 4 KB or 8 KB. If you want to test bandwidth (MB/s), use a larger IO size, say 32 KB or 64 KB. Start with 100% Sequential Reads or 100% Sequential Writes, then try 100% Random Reads or 100% Random Writes. Generally 100% Sequential Writes should be the fastest, followed by 100% Random Writes, then 100% Sequential Reads, and last 100% Random Reads.
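To see why small IO sizes test throughput and large ones test bandwidth, here's a quick back-of-the-envelope sketch (the function name and the sample IOPS figures are just illustrations, not measured numbers):

```python
def mb_per_sec(iops, block_bytes):
    """Bandwidth (decimal MB/s) delivered at a given IOPS rate and IO size."""
    return iops * block_bytes / 1_000_000

# Small blocks stress the IOPS ceiling: 20,000 IOPS at 4 KB is ~82 MB/s.
print(mb_per_sec(20_000, 4_096))    # 81.92

# Large blocks stress bandwidth: even 2,000 IOPS at 64 KB is ~131 MB/s.
print(mb_per_sec(2_000, 65_536))    # 131.072
```

The same array can hit its IOPS limit long before its bandwidth limit at 4 KB, and the reverse at 64 KB, which is why you run both block sizes.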

You can also raise the number of queued I/Os to get better performance; I've found that 10 queued I/Os seems to give the best results. You can also add more users (each "User" is a thread), since multiple threads lead to better performance. Try a single user first, then add more and see what you get.

While a test is running, IOmeter displays the response times; you can set the update rate to 1 second for the fastest refresh. IOmeter just sends out IO as fast as it can. It doesn't throttle based on response times, only on how fast it can push data, so at the maximum rate the latency will be high.
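The queued-IO setting and the latency you'll see are tied together. A rough sketch, assuming Little's Law applies to the outstanding IOs (the numbers below are illustrative, not measurements):

```python
def avg_latency_ms(iops, outstanding_ios):
    """Average response time implied by Little's Law:
    outstanding = IOPS * latency  =>  latency = outstanding / IOPS."""
    return outstanding_ios / iops * 1_000

# One user with 10 queued IOs sustaining 20,000 IOPS implies ~0.5 ms average.
print(avg_latency_ms(20_000, 10))   # 0.5

# Piling on more outstanding IOs than the array can absorb just inflates
# latency: 100 outstanding at the same 20,000 IOPS means ~5 ms.
print(avg_latency_ms(20_000, 100))  # 5.0
```

This is why adding queue depth helps only up to a point: once IOPS stops climbing, every extra outstanding IO shows up purely as added response time.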

What this will show is the raw performance of the array and the RAID group you will be using. It will also show the effect of pre-fetching on the 100% Sequential Reads, while the writes will show the effect of the amount of write cache configured on the array.

Set the "Maximum Disk Size" (in sectors) to something larger than the write cache; otherwise the write tests can run entirely inside the write cache and never really exercise the underlying disks (the write cache only flushes to disk when it gets full). If your write cache is 8GB, the test file should be at least twice that size (16GB). Setting "Maximum Disk Size" to 64,000,000 sectors creates a roughly 32GB file on the LUN under test, comfortably above that threshold. I'm not sure how you would do this on the File side of the array, though; I've only run this on the Block side, with a host connected to Block and not to File.
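The sector arithmetic above is easy to get wrong, so here's a small sketch of it, assuming the usual 512-byte sector for the "Maximum Disk Size" field:

```python
SECTOR_BYTES = 512  # assumption: Maximum Disk Size counts 512-byte sectors

def file_gb_for(sectors):
    """Test-file size in decimal GB for a given Maximum Disk Size."""
    return sectors * SECTOR_BYTES / 1_000_000_000

def sectors_for(file_gb):
    """Sectors needed to reach a target test-file size in decimal GB."""
    return file_gb * 1_000_000_000 // SECTOR_BYTES

# 64,000,000 sectors works out to ~32.8 GB, well over 2x an 8 GB write cache.
print(file_gb_for(64_000_000))      # 32.768

# The bare-minimum 2x target for an 8 GB cache would be 31,250,000 sectors.
print(sectors_for(16))              # 31250000
```

Sizing the file well past 2x the cache is the safer choice, since it guarantees the test spills out of cache and onto the disks.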

glen
