Dev Mgr · 4 Operator · 9.3K Posts · November 12th, 2013 06:00
You mention that IOPS is key, which pretty much means RAID 10 is the way to go. On larger RAID 10 arrays with a 70/30 read/write ratio, you will typically see a major jump in IOPS capability compared to a same-sized RAID 5 or RAID 6.
Example:
An 8-drive RAID 5 with 15k rpm 2.5" drives at a 70/30 ratio yields about 833 IOPS.
An 8-drive RAID 10 with the same drives and ratio yields about 1,217 IOPS.
The larger the RAID set, the bigger the difference between the two becomes.
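The figures above follow from the standard write-penalty estimate. A minimal sketch, assuming the usual back-end penalties (RAID 10 does 2 back-end writes per host write; RAID 5 does 4: read old data and old parity, write new data and new parity) and a per-drive figure of ~198 IOPS for a 15k rpm disk, which is an assumption chosen to reproduce the numbers quoted, not a vendor spec:

```python
# Host-visible IOPS from the standard write-penalty formula:
#   functional IOPS = (drives * per_drive_iops) / (read_frac + write_frac * penalty)
# Penalties: RAID 10 = 2 back-end writes per host write, RAID 5 = 4.
# ~198 IOPS/drive for a 15k rpm disk is an assumption; real drives vary.

def functional_iops(drives, per_drive_iops, read_frac, write_penalty):
    raw = drives * per_drive_iops
    return raw / (read_frac + (1 - read_frac) * write_penalty)

raid5 = functional_iops(8, 198, 0.70, 4)   # ~834
raid10 = functional_iops(8, 198, 0.70, 2)  # ~1218
print(round(raid5), round(raid10))
```

As the drive count grows, the raw IOPS in the numerator scales while the penalty divisor stays fixed, which is why the absolute gap between RAID 5 and RAID 10 widens with larger sets.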
As for swap space: it may be cheaper to buy enough (more) memory and thereby limit the need for swap. That would let you pick up two 146GB drives in a RAID 1 for the OS (rpm matters less for the physical server's boot drive), leaving 10 bays for VM storage. SSD would be nice, but a decent SSD (enterprise class, and preferably Dell-supplied to avoid permanent error messages) is pretty expensive.
DELL-Chris H · Moderator · 9.7K Posts · November 12th, 2013 08:00
JBDive,
I have to agree with you and Dev Mgr. The fastest IOPS, with redundancy, comes from RAID 1 or RAID 10. For the OS, I would suggest just a two-drive RAID 1. I agree with isolating the OS on its own array; it also helps safeguard you from data loss in the case of an OS failure.
Let me know if this helps.
pcmeiners · 4 Operator · 1.8K Posts · November 12th, 2013 09:00
RAID 1 is fine for the OS, but it is not the choice for high IOPS; RAID 10 is.

RAID 1 reads come from a particular mirror member (chosen by the adapter), NOT both members, and writes must be committed to one member, then mirrored to the other after a delay; under any appreciable load this is a factor. If you do not have a separate RAID 1 for the OS, as stated, you're lowering your safety margin: lose the array, catch a serious virus, or hit a partition issue, and you will thank yourself for the separate array. Go with 15k drives all round, even for the RAID 1; the added speed helps alleviate the RAID 1 write-delay penalty and speeds up reads. Also, with two arrays you can vary the stripe sizes: the RAID 1 is best left at the default, but the RAID 10, with SQL IOPS in mind, might benefit from a non-default size.
Benchmarks of different raid configurations, partially in Dutch but understandable....
http://tweakers.net/benchdb/suite/13
JBDive · 1 Rookie · 2 Posts · November 12th, 2013 10:00
Chris, I've been speaking with a Dell rep and I don't know if he and I are on the same page with this build. Could you expand on "your" perfect build, with cost in mind, using a T620, dual Xeons, 98GB RAM, and 15k drives? What array layout would you use to support Server 2012 (full) with Hyper-V as the OS and two VMs: one a SQL business app with a large document storage area, the other a 6-user RDS server?
OS: mirrored 10k 300GB drives (I don't see the need for the cost of 15k drives even with a write delay; the OS should be 99% idle).
VMs: RAID 10, six 300GB 15k drives, both VMs on the same array.
Or break that VM array up? VM1 (SQL Express): RAID 10, 4x600GB 15k; VM2 (RDS): RAID 1, 2x300GB 15k.
I've even considered mirrored SSD for RDS but the Dell rep didn't go very far with that idea.
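The two VM-array options above can be sanity-checked on usable capacity and rough IOPS. A sketch using the same write-penalty formula and the same assumed ~198 IOPS per 15k drive as in Dev Mgr's example (both are assumptions, not vendor figures):

```python
# Compare the two proposed VM layouts: usable capacity (mirrored RAID levels
# halve raw capacity) and a rough 70/30 IOPS estimate per array.
# The 198 IOPS/drive figure and penalty of 2 (RAID 1/10) are assumptions.

def functional_iops(drives, per_drive_iops=198, read_frac=0.70, penalty=2):
    return drives * per_drive_iops / (read_frac + (1 - read_frac) * penalty)

# Option A: one 6-drive RAID 10 of 300GB disks shared by both VMs
a_capacity = 6 // 2 * 300              # 900 GB usable
a_iops = functional_iops(6)            # pooled across SQL and RDS

# Option B: 4x600GB RAID 10 for SQL plus 2x300GB RAID 1 for RDS
b_sql_capacity = 4 // 2 * 600          # 1200 GB usable
b_rds_capacity = 2 // 2 * 300          # 300 GB usable
b_sql_iops = functional_iops(4)        # dedicated to SQL
b_rds_iops = functional_iops(2)        # dedicated to RDS

print(a_capacity, round(a_iops))                                  # 900 ~914
print(b_sql_capacity, round(b_sql_iops), b_rds_capacity, round(b_rds_iops))  # 1200 ~609, 300 ~305
```

The trade-off this surfaces: Option A pools all six spindles, so SQL can burst higher when RDS is quiet, while Option B isolates RDS I/O from SQL and gives the SQL array more space, at the cost of a lower per-array ceiling.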