s0ftimage
1 Copper

Best Practice - Volume Size - vmware esxi 4.1 free

Hello Everyone,

I am trying to figure out how big I should make my volumes. We have an EqualLogic PS4100X and are running VMware ESXi 4.1 (free version) with about 50 VMs of various sizes (20 GB to 300 GB).

I am trying to figure out whether I should create, say, two large volumes (Development and Production) and leave it at that, or whether I should create my volumes based on how many VMs the individual components can support (mostly the PS4100X).

We had a technician who suggested that we run 8-9 VMs per volume. However, I don't have any hard facts in writing to back up that statement.

Is there some formula I could use to figure this out? For example: 8 VMs needing 4,000 IOPS, with the PS4100X supporting 4,200 IOPS per volume. Or perhaps the issue is not IOPS but connections, or whatever the limitation may be?

I looked through all the questions and came across a similar one, but I need a statement of the limitation, or a formula to help me compute it. Could someone help me? This is my first time using iSCSI; I don't have any experience with it.

http://en.community.dell.com/support-forums/storage/f/3775/t/19396674.aspx

Thanks!!

Frank

1 Solution

Accepted Solutions

Re: Best Practice - Volume Size - vmware esxi 4.1 free

Creating more, smaller volumes is preferred over one or two mega volumes. As the article you linked notes, the exact number depends on the specific situation. One SCSI consideration is Command Tag Queue depth (CTQ).

Each volume/LUN can only process a certain number of outstanding IO requests; a typical value is 32. So if you have many VMs on one LUN, you are more likely to hit that value, and even though more IO is possible, it has been artificially limited. If you have two volumes, that's 64 possible outstanding IOs, and so on.

Going to the extreme of one VMFS volume per VM isn't efficient either. It gets worse if multiple ESX servers access common volumes, since at times one node will lock out all other nodes for certain IO operations. With only one or two volumes, that can become a bottleneck.
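As a rough sketch of the arithmetic above (assuming the typical per-LUN queue depth of 32 mentioned here; the real limit varies by initiator, HBA, and array):

```shell
# Aggregate outstanding-IO ceiling as volumes are added, assuming each
# volume/LUN allows 32 outstanding commands (the typical CTQ value above).
CTQ_PER_LUN=32
for volumes in 1 2 4 8; do
  echo "$volumes volume(s): $((volumes * CTQ_PER_LUN)) outstanding IOs possible"
done
```

So splitting 50 VMs across several volumes raises the aggregate ceiling without going to the one-volume-per-VM extreme.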

Regards,

Social Media and Community Professional
#IWork4Dell
Get Support on Twitter - @dellcarespro

8 Replies

s0ftimage
1 Copper

Re: Best Practice - Volume Size - vmware esxi 4.1 free

Excellent!  Thank you.  This will give me something to play with.

Re: Best Practice - Volume Size - vmware esxi 4.1 free

Jump to solution

You are very welcome. If you haven't done so already, install SANHQ in your environment. It will help you monitor performance per volume and for the array overall.

IOPS depend on the type and size of your IOs. Larger IO sizes yield fewer IOs/sec but greater MB/sec, and the reverse is also true. Lastly, how random vs. sequential your IO is will have an impact as well.
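To make the IOPS-vs-throughput trade-off concrete, here is a rough arithmetic sketch (the ~100 MB/s ceiling is an assumed round number for a single GbE path, not a measured figure for this array):

```shell
# Throughput (KB/s) = IOPS * IO size (KB). At a fixed ceiling, larger IOs
# mean fewer IOPS; smaller IOs mean more IOPS at the same MB/s.
LINK_KBPS=102400   # assumed ~100 MB/s ceiling for one GbE path
for io_kb in 4 64 256; do
  echo "${io_kb}KB IOs: ~$((LINK_KBPS / io_kb)) IOPS at ~100 MB/s"
done
```

This is why a raw "IOPS the array supports" number only means something alongside the IO size and access pattern of your workload.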

Keeping heavy IO VMs on different volumes is recommended as well.  

Depending on when you are going live, I would suggest upgrading to firmware 5.2.0, which is currently in Early Production Access but will be GA'd soon. It has a number of important fixes that make it a very worthwhile upgrade; these are detailed in the fix list on the EQL support website.

Lastly, if you are running applications like SQL or Exchange, I would recommend using the MS iSCSI initiator to connect to those data/log volumes. This will allow you to install the HIT kit, which integrates snapshots (and replication) seamlessly with those applications.

What kind of switches are you using? If they are not stacked, make sure you have at least 3x GbE interfaces trunked between them. Flow control needs to be enabled on the server and array ports, along with spanning tree portfast. If you are using Jumbo Frames, don't use VLAN 1 (the default VLAN) on your switches; create a separate VLAN instead.
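The per-port recommendations above would look roughly like the following on a PowerConnect-class switch. This is an illustrative sketch only; the exact commands, port names, and jumbo MTU value (9216 here) vary by switch model and firmware, so verify everything against your switch's CLI reference before applying it.

```
! Hypothetical config for one iSCSI-facing port (syntax varies by model)
interface ethernet 1/g1
  mtu 9216                    ! jumbo frames; must match end-to-end
  switchport access vlan 11   ! dedicated iSCSI VLAN, not VLAN 1
  spanning-tree portfast      ! edge port, skip listening/learning delay
  exit
flowcontrol                   ! 802.3x flow control (global on some models)
```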

Finally, make sure you have the latest build of ESX v4.1. There are updated drivers that customers on the VMware forums report reduce latency.

Good luck!

s0ftimage
1 Copper

Re: Best Practice - Volume Size - vmware esxi 4.1 free

Thank you Don, this is excellent once again.

We will install SAN Headquarters and monitor it as you mentioned.

We upgraded the array to firmware 5.2.0. We actually had a Dell technician configure our iSCSI setup, upgrade the firmware, and configure the two switches and both servers (this is the part number on the order -- Implementation of a Dell EqualLogic Array (961-3859)). The service was worth every penny. We would never have been able to configure it as they did, using best practices (Jumbo Frames, VLANs, flow control, spanning tree portfast, link aggregation, and connecting to the array from the servers).

We have two PowerConnect 6224 switches, expertly configured by the Dell technician. iSCSI traffic is on VLAN 11. Our LAG is only using 2x GbE links, but I will see if I can configure a third.

We also have the latest patches for the free VMware ESXi 4.1.

Thanks!!!!

Re: Best Practice - Volume Size - vmware esxi 4.1 free

Three is the minimum; 4x would be better. Best of all would be the stacking cable, so you can leverage the 10GbE stacking links. With a single array it's not as critical, but if you expand to more arrays and servers it's something to consider.

Sounds like you're off to a GREAT start!!

I think you're REALLY going to enjoy SANHQ. The latest version, 2.2, has a "Live information" feature so you can monitor a volume or member in real time. Plus there are some great reports, and even a RAID evaluation tool that will show you, based on past loads, what different RAID types would do in your environment.

Re: Best Practice - Volume Size - vmware esxi 4.1 free

FYI: since you're not using MEM, you can improve the performance of the VMware Round Robin policy.

You can run this script on the ESX v4.1 servers to change the IOPS value:

for i in `esxcli nmp device list | grep -i -B1 "ay Name: EQLOGIC" | grep -i "naa." | grep -i -v "ay Name"` ; do
    esxcli nmp roundrobin setconfig --device $i --iops 3 --type iops
done

This is only for ESX(i) v4.x.   ESXi v5.0 has a slightly different syntax.  

s0ftimage
1 Copper

Re: Best Practice - Volume Size - vmware esxi 4.1 free

Thanks Don.  I'll check it out.

Below is a link I came across showing speed differences with MEM and without.

http://www.spoonapedia.com/2010/07/dell-equallogic-multipathing-extension.html

Re: Best Practice - Volume Size - vmware esxi 4.1 free

You are welcome. Also, MEM, like the DSM for Windows, offers an even bigger increase when multi-member pools are used, so as you scale up the SAN, performance will scale as well.
