November 11th, 2013 20:00

RAID layout recommendations for a multi-VM server

I'm getting a pretty wide variety of suggestions and feedback, from research and from asking around, as to the best RAID layout for a specific server build. Cost is somewhat of a concern, but performance and long-term use are the primary considerations.

The server will host two VMs: a SQL Express business app with a large document storage folder, and a Terminal Server/RDS server. The host will be Server 2012 (full GUI) with Hyper-V to take advantage of the 1 + 2 licensing and downgrade rights, since the guests must be Server 2008 R2 (granted, one is RDS, so let's skip that licensing issue for now as I'm only concerned with hardware). The server itself will be a T620 with 12 drive bays, dual Xeons and 64-98GB of RAM. I am most interested in comments from Dell techs and Dell experts who have worked at length with various RAID builds and can give real answers as to why they would suggest one build over another. Since IOPS are key here (even if it is only SQL Express) and I want long-term use of the server, let's also assume 15K SAS drives.

My thought was to break the host OS out onto a set of mirrored 10k SAS drives; the host disk should be all but idle 99% of the time. If I use 10k drives, this also gives me the option of splitting off a VHD for the SQL VM's paging file so it sits on its own drive set. For the SQL business app I was thinking RAID 10 with 4 drives, or maybe 6 (300GB vs 600GB drives to end up with roughly the same usable space), then a mirrored set of 10k SAS drives for the RDS VM, all on an H710 with 1GB NVRAM cache.
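
Just to lay out the usable-space arithmetic I'm working from (a rough sketch only; real usable capacity will be a bit lower after controller metadata and formatting overhead):

# Rough usable capacity per array type (GB); ignores controller/format overhead.
def usable_gb(level, drives, size_gb):
    if level in ("RAID 1", "RAID 10"):
        return drives // 2 * size_gb   # half the drives hold mirror copies
    if level == "RAID 5":
        return (drives - 1) * size_gb  # one drive's worth of parity
    if level == "RAID 6":
        return (drives - 2) * size_gb  # two drives' worth of parity
    raise ValueError(level)

print(usable_gb("RAID 10", 4, 300))  # 600  - 4 x 300GB RAID 10
print(usable_gb("RAID 10", 6, 300))  # 900  - 6 x 300GB RAID 10
print(usable_gb("RAID 10", 4, 600))  # 1200 - 4 x 600GB RAID 10
print(usable_gb("RAID 1", 2, 300))   # 300  - 2-drive mirror for the host OS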

Some comments have said that breaking out too many arrays on the PERC will slow down performance; others have said to just go with one big array. Based on my currently deployed servers I'm pretty sold on the host OS being on its own set of mirrored drives, so given that, should I just go with a 6-drive RAID 10 for everything else? That would leave 4 bays open, so if I needed to add drives in a year or two I could build another RAID 10 array and move a VM from one to the other with little downtime. RAID 6 really isn't an option: the number of drives needed to see any benefit would increase the server price for only marginal gains. I'm also not sold on SSDs for long-term use, especially for SQL with all its writes, and I've not seen any real information on deploying SSDs in anything other than a mirror, or on whether higher RAID levels cause more flash wear while gaining nothing in IOPS.

 

4 Operator

 • 

9.3K Posts

November 12th, 2013 06:00

You mention that IOPS are key. That pretty much means RAID 10 is the way to go. On larger RAID 10 sets, at a read/write ratio of 70/30, you will typically see a major jump in IOPS capability compared to a same-sized RAID 5 or 6.

Example:

An 8-drive RAID 5 with 15k rpm 2.5" drives at a 70/30 ratio yields about 833 IOPS.

An 8-drive RAID 10 with 15k rpm 2.5" drives at a 70/30 ratio yields about 1,217 IOPS.

The larger the RAID set, the bigger the difference between the two becomes.
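
Those figures line up with the usual rule-of-thumb estimate, roughly like the sketch below. The per-drive figure (~198 host IOPS for a 15k 2.5" drive) and the write penalties (4 for RAID 5, 2 for RAID 10) are generic planning assumptions, not measured values, so treat the output as ballpark only:

# Rule-of-thumb front-end IOPS for a RAID set under a mixed read/write workload.
# Assumes ~198 random IOPS per 15k 2.5" drive and write penalties of 4 (RAID 5)
# and 2 (RAID 10) - generic planning figures, not benchmarks.
def frontend_iops(drives, per_drive_iops, read_fraction, write_penalty):
    raw = drives * per_drive_iops
    return raw / (read_fraction + (1 - read_fraction) * write_penalty)

print(round(frontend_iops(8, 198, 0.70, 4)))  # 8-drive RAID 5  -> ~834
print(round(frontend_iops(8, 198, 0.70, 2)))  # 8-drive RAID 10 -> ~1218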

As for swap space: it may be cheaper to buy enough (more) memory and thereby limit the need for swap. That would allow you to pick up two 146GB drives in a RAID 1 for the OS (RPM matters less for the physical server's boot drive), leaving 10 bays for VM storage. SSDs would be nice, but a decent SSD (enterprise class, and preferably Dell to prevent permanent error messages) is pretty expensive.

Moderator

 • 

8.8K Posts

November 12th, 2013 08:00

JBDive,

I have to agree with you and DevMgr. The fastest IOPS, with redundancy, would come from RAID 1 or RAID 10. For the OS, I would suggest just a 2-drive RAID 1. I agree with isolating the OS on its own array, as it also helps safeguard you from data loss in the case of an OS failure.

Let me know if this helps.

4 Operator

 • 

1.8K Posts

November 12th, 2013 09:00

RAID 1 is fine for the OS, but it is not the choice for decent IOPS; RAID 10 is.

RAID 1 reads come from one particular RAID member (chosen by the adapter), NOT both members, and writes must be committed to one member and then to the other, which adds a delay; under any appreciable load this is a factor. If you do not have a separate RAID 1 for the OS, then as stated you're lowering your safety factor: lose the array, get a decent virus, or have a partition issue, and you will thank yourself for the separate array. Go with 15k drives all round, even for the RAID 1; the added speed will help alleviate the RAID 1 write-delay penalty and speed up reads. Also, with two arrays you can vary the stripe sizes: the RAID 1 is best left at the default, but the RAID 10, with SQL IOPS in mind, might benefit from a non-default size.
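
To put some rough numbers on that, using the same rule-of-thumb estimate as earlier in the thread (assumed ~198 IOPS per 15k drive, write penalty of 2 for mirrored sets, 70/30 read/write; ballpark only):

# Ballpark comparison: a 2-drive mirror vs a 6-drive RAID 10 at 70/30 read/write.
def frontend_iops(drives, per_drive_iops, read_fraction, write_penalty):
    raw = drives * per_drive_iops
    return raw / (read_fraction + (1 - read_fraction) * write_penalty)

print(round(frontend_iops(2, 198, 0.70, 2)))  # 2-drive RAID 1  -> ~305
print(round(frontend_iops(6, 198, 0.70, 2)))  # 6-drive RAID 10 -> ~914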

Benchmarks of different RAID configurations, partially in Dutch but understandable:

http://tweakers.net/benchdb/suite/13

2 Posts

November 12th, 2013 10:00

Chris, I've been speaking with a Dell rep and I don't know if he and I are on the same page with this build. Could you expand on "your" ideal build, with cost in mind, using a T620, dual Xeons, 98GB of RAM and 15k drives? What array layout would you use to support Server 2012 (full) with Hyper-V as the host OS and two VMs: one a SQL business app with a large document storage area, the other a 6-user RDS server?

OS: mirrored 10k drives, 300GB (I don't see the need for the cost of 15k drives even with the write delay; the OS should be 99% idle)

VMs: RAID 10, 6 x 300GB 15k drives, both VMs on the same array

 

Or break that VM array up? VM1 (SQL Express): RAID 10, 4 x 600GB 15k. VM2 (RDS): RAID 1, 2 x 300GB 15k.
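
Putting very rough numbers on those two options with the same rule-of-thumb estimate used earlier in the thread (assumed ~198 IOPS per 15k drive, write penalty of 2 for RAID 1/10, 70/30 read/write; ballpark only):

# Option A: one 6 x 300GB RAID 10 shared by both VMs.
# Option B: 4 x 600GB RAID 10 for the SQL VM plus 2 x 300GB RAID 1 for the RDS VM.
def frontend_iops(drives, per_drive_iops=198, read_fraction=0.70, write_penalty=2):
    raw = drives * per_drive_iops
    return raw / (read_fraction + (1 - read_fraction) * write_penalty)

print(round(frontend_iops(6)))  # Option A: ~914 IOPS shared, ~900GB usable
print(round(frontend_iops(4)))  # Option B, SQL array: ~609 IOPS, ~1200GB usable
print(round(frontend_iops(2)))  # Option B, RDS mirror: ~305 IOPS, ~300GB usable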

I've even considered mirrored SSDs for RDS, but the Dell rep didn't go very far with that idea.
