I've recently taken delivery of a new MD3200i and two R710s and plan on deploying a VMware Essentials Plus kit onto them. I'm looking for advice on the best way to lay out the LUN(s) on the MD3200i. I was thinking of going with just one large LUN and running the 6 VMs I plan on deploying off it. We aren't a huge place and our actual data needs are modest compared to most, so 6× 500GB VMs is fine for us. I'll be deploying:
1 Exchange 2010 VM
1 Backup DC VM
1 Fileserver VM
1 SQL VM
1 Gateway VM
1 Sharepoint 2010 VM
So... is 1 big LUN fine? 6 separate LUNs? Or break it down further with separate LUNs for the OS drive, program files, and logs for the Exchange and SQL VMs? I think that might be overkill, but I am happy to listen to options 😉 Any advice is greatly appreciated.
If you're using vSphere 4.x, your LUN cannot exceed 2048GB minus 512 bytes, as that is the maximum disk size supported by vSphere 4. So, if you were thinking of a 6-drive RAID 5 (no hot spare), you'll have to carve it into at least 2 virtual disks.
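To make the "at least 2 virtual disks" point concrete, here's a quick back-of-the-envelope check in Python, using only the figures mentioned above (6× 500GB drives in RAID 5, and the vSphere 4 limit of 2048GB minus 512 bytes):

```python
# Sketch: does a 6-drive RAID 5 of 500GB drives fit in one vSphere 4 LUN?
GB = 1024**3
max_lun_bytes = 2048 * GB - 512        # vSphere 4 maximum LUN size

drives, drive_gb = 6, 500
usable_bytes = (drives - 1) * drive_gb * GB  # RAID 5 loses one drive's worth to parity

# 2500GB usable is well over the ~2048GB limit, hence at least 2 virtual disks
print(usable_bytes > max_lun_bytes)
```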
Your 500GB drives will be slow (7200rpm) drives. Your current servers may only have a RAID 1 each and you don't think twice about their performance, but when virtualizing, you're putting multiple systems on the same storage array. If you are currently using 10k or 15k SCSI or SAS drives in a RAID 1 for each server, you have a performance potential of about 100-150 IOPS per server. A 7200rpm drive can do about 80 IOPS, so putting 6 in a RAID 5 gives you roughly 400 IOPS (5 data drives times 80 IOPS). If your current servers are really using the disk performance they have, your array could become a performance bottleneck due to having too few and/or too slow drives.
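The arithmetic above can be sketched out like this (per-drive IOPS figures are the rough estimates from the paragraph, not measured numbers, and this ignores RAID 5 write penalties):

```python
# Rough spindle-count IOPS estimate for the proposed array
iops_7200rpm = 80            # approximate IOPS for one 7200rpm drive
raid5_drives = 6
raid5_data_drives = raid5_drives - 1   # one drive's worth of capacity goes to parity

array_iops = raid5_data_drives * iops_7200rpm
print(array_iops)  # -> 400

# Compare against what the existing physical servers can already do:
# six servers each on a 10k/15k RAID 1 at ~100-150 IOPS apiece could
# collectively demand 600-900 IOPS, more than the 400 the array offers.
current_demand_low, current_demand_high = 6 * 100, 6 * 150
print(current_demand_low, current_demand_high)  # -> 600 900
```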
One option to help separate the IO-intensive VMs is to create two 3-drive RAID 5 groups, rank your servers by disk IO intensity, and then alternate between the LUNs on the two RAID 5 groups so that your heavy hitters (IO-intensive VMs) don't all end up on a single RAID 5.
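The back-and-forth placement idea can be sketched as a simple alternating assignment. The VM names below come from the original post; the IO ranking itself is purely illustrative (you'd rank your own workloads):

```python
# Hypothetical ranking, heaviest disk IO first
vms_by_io = ["SQL", "Exchange", "Sharepoint", "Fileserver", "Gateway", "BackupDC"]

# Alternate between the two RAID 5 groups so the top hitters are split up
placement = {"raid5_A": [], "raid5_B": []}
for i, vm in enumerate(vms_by_io):
    group = "raid5_A" if i % 2 == 0 else "raid5_B"
    placement[group].append(vm)

print(placement)
# SQL and Exchange (the two heaviest) land on different RAID 5 groups
```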
Cheers for the response 🙂
I should have mentioned that the MD3200i has 12× 600GB 15K SAS drives, and I was thinking of setting it up as RAID 10. I don't mind the space hit; as I mentioned, we don't chew through a lot of data, and I have the option of adding more enclosures if needed.
Good idea or not?
One note about vSphere 5, though: read the licensing changes. vSphere 4 licensed by processor socket (you only needed a higher vSphere tier if you went over 6 cores per socket), while vSphere 5 licenses by virtual RAM. So, using a Dell example:
Say you purchased 3 PowerEdge R610s with dual 6-core processors and 96GB RAM each, plus a vSphere 4 Essentials Plus package; you could use all of that for ~US$3500 (the price from what I remember when VMware still had vSphere 4 pricing listed). Now the Essentials Plus package (price increased to ~US$4500) only gives you 24GB vRAM per processor. So, to get full use out of your 96GB per host, you will need to upgrade to Standard or a higher tier, and depending on which exact tier you buy, you may still need to buy extra processor licenses to get access to the extra memory.
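The entitlement gap in that Dell example works out like this (license prices and the 24GB-per-processor entitlement are the figures quoted above, not current VMware pricing):

```python
# vRAM entitlement math for the example above
hosts = 3
sockets_per_host = 2          # dual 6-core processors per R610
ram_per_host_gb = 96
vram_per_license_gb = 24      # vSphere 5 Essentials Plus entitlement per CPU license

entitled_vram_gb = hosts * sockets_per_host * vram_per_license_gb
installed_ram_gb = hosts * ram_per_host_gb

print(entitled_vram_gb, installed_ram_gb)
# 144GB of vRAM entitlement against 288GB of installed RAM --
# half the memory is unusable without buying up to a higher tier
```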
A comparison of the vSphere5 licensing: www.vmware.com/.../small_business_editions_comparison.html
A price list on VMware's site: www.vmware.com/.../pricing.html
Don't get me wrong, vSphere 5 has some nice new features, but if you are running (or planning to run) memory-intensive hosts, upgrading to vSphere 5 could be a lot more expensive than vSphere 4.
True that... We are still waiting to see how the new v5 licensing pans out. Right now, I swear existing VMware installations are close to showing up at VMware with pitchforks and torches. Rumor was that today they were supposed to announce an increase for Enterprise to 96GB per license. I think that would take care of us.
I hope they double the memory amounts for all tiers, even the free ESXi (only supports 8GB vRAM from what I have heard).
I know they don't make money off the free one, but I know several people who use the free version to play around at home and become familiar with virtualization (which later means they can use the paid versions). With 4.1 there was no cap on vRAM in the free version, so a decent desktop computer with an Intel or Broadcom NIC and 16GB of RAM could run a few VMs just fine for little money. Now the free version is severely handicapped.
I've got a similar setup, but with R610s and an MD3620 with 16× 300GB drives. I've carved it up with two RAID 10 arrays of 4 disks each for VM storage; allowing 40GB OS partitions, I can have 12 VMs per group. I've also set up a RAID 5 array over 6 disks and give the guest OSes direct access to that storage for their D: drives. I then balance the VMs over the 2 disk arrays and 2 hosts, so if I have a single array failure, I don't lose everything. Our storage requirements at the moment aren't great, and I have bought a cheaper SAS QNAP array for backup and slower storage.
I would suggest something similar: 2 RAID 1 arrays, each over a pair of disks, for the VM disks, and then maybe 2 RAID 5 arrays over 4 disks each for the storage, with 3 of your guest VMs on each.