August 18th, 2009 02:00
RAID 5 - 8+1 or 9+1
Hope somebody can help me here.
We have a workload we are about to move onto our CLARiiON CX4-120.
EMC ran a two-week I/O monitoring exercise through perfmon and came up with the maximum and average I/O profile.
Based on what EMC worked out, we should plan for a maximum of 1268 IOPS with a 90% read / 10% write split. Feeding this into the formula in the performance and availability guide for FLARE 28, we get the following:
RAID 5 disk IOPS = (0.9 x 1268) + (4 x (0.1 x 1268)) = 1141.2 + (4 x 126.8) = 1648.4 IOPS
Total disks = 1648.4 / 180 = 9.16 disks
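For anyone who wants to play with the numbers, here is the same sizing arithmetic as a small Python sketch. The write penalty of 4 and the 180 IOPS-per-disk planning figure come from the FLARE guide quoted above; everything else is just the formula restated.

```python
# RAID 5 sizing math from the FLARE performance guide:
# reads pass straight through to disk, while each host write costs
# 4 back-end I/Os (read data, read parity, write data, write parity).

def raid5_backend_iops(host_iops, read_fraction):
    """Back-end disk IOPS generated by a RAID 5 group."""
    reads = read_fraction * host_iops
    writes = (1 - read_fraction) * host_iops
    return reads + 4 * writes

host_iops = 1268        # peak host IOPS from the perfmon exercise
read_fraction = 0.90    # 90% read / 10% write split

backend = raid5_backend_iops(host_iops, read_fraction)
disks = backend / 180   # 180 IOPS per FC disk (planning figure)

print(f"Back-end IOPS: {backend:.1f}")
print(f"Disks needed:  {disks:.2f}")
```

Running this reproduces the 1648.4 back-end IOPS above and just over nine disks' worth of load.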
The third party responsible for the SAN is recommending 8+1, as that is RAID 5 best practice. Based on the formula above, I would have said 9+1 to ensure it can handle the I/O spikes witnessed.
I'm sure 8+1 will be fine in this instance, but I'm keen to understand why 9+1 would be such a problem.
Thanks in advance
VirtualProUK
jps00
August 18th, 2009 04:00
Important information you might have included is the I/O type, which I assume is large-block sequential. (Please correct me if I'm mistaken.)
A couple of things you need to know about I/O. First, 8+1 'aligns' better for large-block I/O, allowing full-stripe writes (9+1 is an 'odd' stripe size). Second, SATA drives have almost Fibre Channel-like performance when performing large-block, sequential I/O. The 180 IOPS multiplier is for a mixed read/write ratio, whereas your I/O is mostly reads (reads are 'easy' I/O). The high percentage of sequential reads gives you an extra margin of IOPS per disk above the 180.
You may want to review the 'EMC Storage System Fundamentals for Performance and Availability' whitepaper (available on PowerLink), which discusses this in the 'Describing the workload' section. It also contains information on alignment, the operation of the cache, and stripe size.
Don't forget to align your LUNs if you are not using MS Windows Server 2008, which aligns partitions automatically. You may also want to experiment with increasing the size of your read cache to improve performance, if the SATA I/O is a majority of your overall workload.
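To illustrate the full-stripe-write point above: this is a rough sketch assuming the CLARiiON default stripe element of 64 KB (the element size and the 1 MB example I/O are assumptions for illustration, not values from this thread). With 8 data disks the full stripe is 512 KB, a power of two, so large sequential writes line up on stripe boundaries; with 9 data disks the stripe is 576 KB, which typical host I/O sizes don't divide evenly.

```python
# Assumed default stripe element size (KB) for illustration.
ELEMENT_KB = 64

def stripe_kb(data_disks):
    """Full stripe width in KB for a RAID 5 group with this many data disks."""
    return data_disks * ELEMENT_KB

def is_full_stripe(io_kb, data_disks):
    """True if an I/O of io_kb starting on a stripe boundary maps to
    whole stripes, avoiding the partial-stripe parity update."""
    return io_kb % stripe_kb(data_disks) == 0

for data_disks in (8, 9):
    print(data_disks, stripe_kb(data_disks), is_full_stripe(1024, data_disks))
# 8+1: 512 KB stripe -> a 1 MB sequential write covers two full stripes
# 9+1: 576 KB stripe -> a 1 MB write leaves a partial stripe (parity penalty)
```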
virtualprouk
August 18th, 2009 05:00
It's basically a Windows file server, running Windows 2003 at the moment and moving to Windows 2008 when we move to the SAN. I believe it is formatted with the default NTFS block size, which is 4 KB. Any benefit in us moving to a larger block size?
We were going to go with 300 GB FC disks for this particular system, as we have spare capacity on those at the moment. No SATA space left, I'm afraid, and no budget to buy any for the remainder of this year. It sounds, however, as though based on our I/O profile SATA disk would be fine for this type of usage.
Thanks again for your help on this, much appreciated
tonydcdi-ymiT1
August 19th, 2009 14:00