
31 Posts


October 13th, 2011 16:00

PS6500E (SATA) recommended layout for VMware ESXi 4.1

Hi,

Not so much a RAID level question (it's a straight choice between RAID 50 and RAID 10, the latter at less capacity but about twice the IOPS), but rather:

Is it better to have a few max-sized volumes (10-15TB, depending on the RAID level)

or

a cluster of smaller volumes, say 5TB?

Obviously, the smaller volumes make for more headaches (though I think I saw that ESX can logically combine them - haven't got that far yet).

I just wondered if there was any voodoo in the PS6500E that made one or the other a better choice? I have 3 VMware hosts...
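For context, here is the rough trade-off arithmetic I'm working from (plain Python used purely as a calculator; the spindle count, per-drive IOPS, read/write mix, and write penalties are rule-of-thumb assumptions, not measured figures for the PS6500E):

```python
# Back-of-the-envelope RAID 10 vs RAID 50 comparison.
# Everything here is a rule-of-thumb assumption, not a measured figure:
# 48 x 1TB SATA spindles (ignoring hot spares), ~75 IOPS per 7.2K drive,
# classic write penalties (RAID 10 = 2 disk writes per write, RAID 50 = 4).

DRIVES = 48          # assumed spindle count
DRIVE_TB = 1.0       # assumed raw capacity per drive
SPINDLE_IOPS = 75    # rough 7.2K SATA figure
READ_RATIO = 0.7     # assumed 70/30 read/write mix

def effective_iops(write_penalty: int) -> float:
    """Front-end IOPS after the RAID write penalty is applied."""
    raw = DRIVES * SPINDLE_IOPS
    return raw / (READ_RATIO + (1 - READ_RATIO) * write_penalty)

# Capacity efficiency: RAID 10 mirrors everything (50%); RAID 50 loses
# one drive per parity set (8-drive sets assumed here, so 7/8).
for name, efficiency, penalty in [("RAID 10", 0.5, 2), ("RAID 50", 7 / 8, 4)]:
    print(f"{name}: ~{DRIVES * DRIVE_TB * efficiency:.0f} TB usable, "
          f"~{effective_iops(penalty):.0f} front-end IOPS at a 70/30 mix")
```

At a 70/30 mix this gives RAID 10 roughly 1.5x the IOPS of RAID 50; at a heavier write mix the gap approaches the 2x figure mentioned above.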

Many thanks in advance :)

Tim


9.3K Posts

October 13th, 2011 19:00

VMware vSphere 4 supports a maximum volume size of 2048GB minus 512 bytes (the VMFS-3 extent limit), so your question should be:

Is it better to use a few max-sized volumes (2TB), or multiple smaller ones (e.g. 500GB)?

If you were to upgrade to vSphere 5, the maximum datastore size has changed (VMFS-5 supports volumes up to 64TB), though you still cannot give a VM a single virtual disk larger than 2TB.
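To sanity-check the arithmetic, a minimal sketch (Python purely as a calculator; the byte figure is the documented VMFS-3 limit, while the 20TB of usable capacity is just an assumed example):

```python
# VMFS-3 (vSphere 4.x) maximum volume size: 2048GB minus 512 bytes.
MAX_VMFS3_BYTES = 2048 * 1024**3 - 512

# Assumed example: ~20TB of usable capacity to carve up.
usable_bytes = 20 * 1024**4

volumes, leftover = divmod(usable_bytes, MAX_VMFS3_BYTES)
print(f"Max VMFS-3 volume: ~{MAX_VMFS3_BYTES / 1024**4:.3f} TB")
print(f"20 TB usable -> {volumes} max-size volumes, "
      f"{leftover / 1024**3:.2f} GB left over")
```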

31 Posts

October 14th, 2011 02:00

Hi Dev,

Well, I did not know that... I assumed that limit had been lifted after 3.5. I think that does indeed answer the question then - it would be silly to go smaller than 2TB, so 2TB it is...

The thing is, whilst firefighting the current installation I inherited 4 months ago (new job), I've had to be systematic about working through documentation, so I have not read the full works on ESXi 4.1 yet, and I'm not a VMware specialist (I previously made do with Xen and KVM).

Thanks for the most helpful answer :)

However, just out of interest, how would my question have been answered if ESX 4.1 did not have those limits?

Cheers

Tim

31 Posts

October 16th, 2011 06:00

Thanks Don - that is very useful info :)

Cheers

Tim

31 Posts

October 20th, 2011 05:00

Hi Don (and anyone else),

OK - had a little time to digest your reply...

I'm going to have >100 VMs on 3 hosts. Most are very simple little webservers with small amounts of data.

A few will have large local data requirements (raw files or database backends).

Are you suggesting it is best to bring an iSCSI LUN directly through to the guest (for those with large data requirements), or does it still work OK to route that through ESXi as a VM disk (though that VM disk could be mapped directly to a LUN, i.e. a raw device mapping)?

I'm looking for best performance - but if creating a new VM becomes too complicated in terms of an excessive number of steps, it might be counterproductive...

Also, what would *you* folks do with 100-odd VMs (most of the VM disk being OS files)? I usually allocate 8GB to a bare-bones Debian VM (no swap[1], generous padding in /var) - that implies that if I had, say, 10 VMs sharing a LUN storage blob, I would want OS LUNs of about 100GB each and would put any serious data blobs on their own LUNs (rough arithmetic sketched below).

[1] I have an average of 3GB host RAM per VM for 100 VMs, so I do not see a problem there. Many VMs will run very happily with 1GB.
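To put rough numbers on that layout (plain Python as a calculator; the 8GB OS disks and 10 VMs per LUN are the figures above, while the 25% headroom factor is pure guesswork):

```python
# Rough LUN sizing for ~100 small Debian VMs.
# 8GB OS disks and 10 VMs per LUN are the figures above; the 25%
# headroom (VMFS metadata, per-VM swap files, snapshot room) is a guess.

TOTAL_VMS = 100
OS_DISK_GB = 8
VMS_PER_LUN = 10
HEADROOM = 1.25

os_lun_gb = OS_DISK_GB * VMS_PER_LUN * HEADROOM
os_luns = -(-TOTAL_VMS // VMS_PER_LUN)       # ceiling division

print(f"{os_luns} OS LUNs of ~{os_lun_gb:.0f} GB each")

# RAM sanity check: 3GB host RAM per VM across 3 hosts.
print(f"RAM implied: {3 * TOTAL_VMS} GB total, "
      f"~{3 * TOTAL_VMS // 3} GB per host")
```

One caveat worth noting: ESXi also keeps a per-VM swap file on the datastore (sized up to the VM's unreserved RAM), so with 1-3GB VMs the headroom factor may need to grow.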

Sorry - I do realise no one can answer this precisely - I'm just looking for some gut feelings to get me started :)

Ta muchly,

Tim
