
MD3200i LUN Advice

August 3rd, 2011 07:00

Hi Guys,

 

I've recently taken delivery of a new MD3200i and 2x R710s and plan on deploying a VMware Essentials Plus kit onto them. I was just looking to see what the best bet is for laying out the LUN/s on the MD3200i. I was thinking of going with just one large LUN and having the 6 VMs I plan on deploying run off it. We aren't a huge place and our actual data needs are modest compared to most, so 6x 500GB VMs is fine for us. I'll be deploying:

1 Exchange 2010 VM

1 Backup DC VM

1 Fileserver VM

1 SQL VM

1 Gateway VM

1 Sharepoint 2010 VM

 

So... 1 big LUN is fine? 6 separate LUNs? Or break it down further with separate LUNs for the OS drive, program files, and logs for the Exchange and SQL VMs? I think that might be overkill, but I'm happy to listen to options ;) Any advice is greatly appreciated.

9.3K Posts

August 3rd, 2011 07:00

If you're using vSphere 4.x, your LUN cannot exceed "2048GB minus 512 bytes", as that is the maximum disk size supported by vSphere 4. So, if you were thinking of a 6-drive RAID 5 (no hot spare), you'll have to carve it into at least 2 virtual disks.
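To put rough numbers on that limit (a minimal sketch; usable capacity is approximate and ignores hot spares and formatting overhead):

# Why a 6-drive RAID 5 of 500GB disks won't fit in a single vSphere 4 LUN (rough figures)
drive_gb = 500
drives = 6
raid5_usable_gb = (drives - 1) * drive_gb       # ~2500 GB usable, one drive's worth lost to parity
vsphere4_limit_gb = 2048 - 512 / 1024**3        # "2048GB minus 512 bytes"
print(raid5_usable_gb > vsphere4_limit_gb)      # True -> carve into at least 2 virtual disks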

Your 500GB drives will be slow (7200rpm) drives. Your current servers may only have a RAID 1 each and you probably don't give their performance a second thought, but when virtualizing, you're putting multiple systems on the same storage array. If you are currently using 10k or 15k SCSI or SAS drives in a RAID 1 for each server, you have a performance potential of about 100-150 IOPS per server. A 7200rpm drive can do about 80 IOPS. Putting 6 in a RAID 5 means you can get about 400 IOPS (5 data drives times 80 IOPS) out of the setup. If your current servers are really using the disk performance available to them, your array could become a performance bottleneck due to having too few and/or too slow drives.
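As a back-of-the-envelope comparison (the per-drive IOPS figures above are ballpark estimates, not measurements):

# Approximate IOPS: 6-drive RAID 5 of 7200rpm disks vs. six servers each on a 10k/15k RAID 1
iops_7200rpm = 80
raid5_array_iops = 5 * iops_7200rpm                 # 5 data drives -> ~400 IOPS
iops_per_current_server = 100                       # low end of the 100-150 IOPS RAID 1 estimate
current_total_iops = 6 * iops_per_current_server    # ~600 IOPS across the existing servers
print(raid5_array_iops, current_total_iops)         # 400 vs 600 -> the array can become the bottleneck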

An option to help separate the IO-intensive VMs is to create two 3-drive RAID 5s and then rank your servers by disk IO intensity. You then alternate between the LUNs on the two RAID 5s so that your heavy hitters (IO-intensive VMs) don't all end up on a single RAID 5.
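A minimal sketch of that placement idea (the VM ranking here is an assumed example, not a measurement of your workloads):

# Rank VMs from most to least disk-IO intensive, then alternate between the two RAID 5 groups
vms_ranked = ["SQL", "Exchange", "SharePoint", "Fileserver", "DC", "Gateway"]   # assumed order
groups = {"raid5_a": [], "raid5_b": []}
for i, vm in enumerate(vms_ranked):
    groups["raid5_a" if i % 2 == 0 else "raid5_b"].append(vm)
print(groups)   # heavy hitters split across the two RAID 5s instead of stacked on one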

August 3rd, 2011 08:00

Hmm, I didn't know about that max LUN size in 4.1... has this been increased with v5, due out shortly??

August 3rd, 2011 08:00

Cheers for the response :)

I should have mentioned, the MD3200i has 12x 600GB 15k SAS drives and I was thinking of setting it up in RAID 10. I don't mind the space hit; as I mentioned, we don't chew through a lot of data... and I have the option of adding more enclosures if needed.

Good idea or not??
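For reference, my rough math on that layout (the per-drive IOPS is just my guess at a 15k figure, and usable space ignores hot spares and formatting):

# 12x 600GB 15k SAS in RAID 10 on the MD3200i (approximate)
drives = 12
drive_gb = 600
raid10_usable_gb = drives // 2 * drive_gb   # ~3600 GB usable, half lost to mirroring
iops_15k = 175                              # assumed per-drive figure for 15k SAS
raid10_read_iops = drives * iops_15k        # reads can spread across all spindles
print(raid10_usable_gb, raid10_read_iops)   # note: 3600GB still exceeds the vSphere 4 LUN limit, so 2+ virtual disks either way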

104 Posts

August 3rd, 2011 08:00

ESXi 5 does support LUNs greater than 2TB.

9.3K Posts

August 3rd, 2011 09:00

One note about vSphere 5, though: read the licensing changes. vSphere 4 licensed just by processor socket (only if you went over 6 cores per socket did you need a higher vSphere tier), whereas vSphere 5 licenses by virtual RAM. So, using a Dell example:

If you purchased 3 PowerEdge R610s with dual 6-core processors and 96GB RAM each along with a vSphere 4 Essentials Plus package, you could use all of this for ~US$3500 (that was the price from what I remember when VMware still had pricing listed for vSphere 4). Now the Essentials Plus package (price increased to ~US$4500) only gives you 24GB of vRAM per processor. So, to get full use out of your 96GB per host, you will need to upgrade to Standard or a higher tier, and depending on which exact tier you buy, you may still need to buy extra processor licenses (to get access to the extra memory).
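To put rough numbers on it (using the entitlement figures above; check VMware's own licensing docs, since the exact terms may change):

# vRAM pool vs. installed RAM for the 3x R610 example
hosts = 3
sockets_per_host = 2
ram_per_host_gb = 96
vram_per_cpu_gb = 24                                          # vSphere 5 Essentials Plus entitlement cited above
vram_pool_gb = hosts * sockets_per_host * vram_per_cpu_gb     # 6 CPU licenses * 24GB = 144GB pool
installed_ram_gb = hosts * ram_per_host_gb                    # 288GB of physical RAM
print(vram_pool_gb, installed_ram_gb)                         # 144 < 288 -> higher tier or extra licenses needed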

A comparison of the vSphere5 licensing: www.vmware.com/.../small_business_editions_comparison.html

A price list on VMware's site: www.vmware.com/.../pricing.html

Don't get me wrong, vSphere 5 has some nice new features, but if you are running, or are planning to run, memory-intensive hosts, upgrading to vSphere 5 could be a lot more expensive than staying on vSphere 4.

847 Posts

August 3rd, 2011 10:00

True that... We are still waiting to see how the new v5 licensing pans out. Right now, I swear existing VMware customers are close to showing up at VMware with pitchforks and torches. Rumor was they were supposed to announce an increase for Enterprise to 96GB per license today. I think that would take care of us.

9.3K Posts

August 3rd, 2011 11:00

I hope they double the memory amounts for all tiers, even the free ESXi (which only supports 8GB of vRAM, from what I have heard).

I know they don't make money off of the free one, but I know several people who use the free version to 'play around' at home and become familiar with virtualization (which later means they can use the paid-for versions). With 4.1 there was no cap on vRAM in the free version, so a decent desktop computer with an Intel or Broadcom NIC and 16GB of RAM could run a few VMs just fine (for little money). Now the free version is severely handicapped.

August 3rd, 2011 16:00

I'm sweet; my R710s have 48GB of RAM in them and 2x 6-core processors. Whichever way I go, I am covered ;)

So about those LUNs ;)

1 Message

August 4th, 2011 03:00

I've got a similar setup, but with R610s and an MD3620 with 16x 300GB drives. I've carved it up with 2 RAID 10 arrays over 4 disks each for VM storage. Allowing 40GB OS partitions, I can have 12 VMs per group. I have then set up a RAID 5 array over 6 disks and allow the guest OSes direct access to that storage for their D drives. I then balance the VMs over the 2 disk arrays and 2 hosts, so if I have a single array failure, I don't lose everything. Our storage requirements at the moment aren't great, and I have bought a cheaper SAS QNAP array for backup and slower storage.

I would suggest something similar: 2x RAID 1 arrays over a pair of disks each for the VM disks, and then maybe 2x RAID 5 arrays over 4 disks each for the storage, with 3 of your guest VMs on each.
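Very roughly, on your 12x 600GB drives that carve-up would look something like this (usable figures are approximate and ignore hot spares and formatting overhead):

# Suggested split of 12x 600GB: 2x RAID 1 pairs for VM/OS disks + 2x 4-drive RAID 5s for data
drive_gb = 600
raid1_pairs = 2
raid1_usable_each_gb = drive_gb               # ~600GB per mirrored pair
raid5_groups = 2
raid5_usable_each_gb = (4 - 1) * drive_gb     # ~1800GB per 4-drive RAID 5
total_usable_gb = raid1_pairs * raid1_usable_each_gb + raid5_groups * raid5_usable_each_gb
print(total_usable_gb)                        # ~4800GB usable out of 7200GB raw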

847 Posts

August 4th, 2011 08:00

Are you planning on using SAN snapshot and/or disk copy for SAN redundancy? Can you test it before implementation?

You have some potentially very demanding VMs here. Consider unique LUNs for your most demanding datastores.

6 spindles is not a lot of spindles. Generally I find Exchange, SQL, and SharePoint don't run well on a single drive's worth of performance, so a RAID 1 mirror with only 2 drives is not a great choice in my opinion.

I tend to go against the trend of how most people carve their SANs up: I favor large spindle-count disk groups and carve those into more individual LUNs.

It's nearly impossible for anybody to know what would be best. Before our first iSCSI SAN install, we actually test-converted all our servers over and tried the carve-ups every which way to Sunday.

847 Posts

August 4th, 2011 16:00

Nice... You will be able to tell us what ended up being best for you!!

August 4th, 2011 16:00

Yes, I am in no rush to 'push it out the door', so to speak. Since this is my first VM deployment, I plan on ripping it all down several times and starting from scratch until I'm confident with how it all works, the performance is spot on, etc.

I wasn't planning to snapshot anything. I have a stack of ShadowProtect Server licenses and was planning on doing regular backups with that from within the OS, as we are doing now with the physical servers.
