February 22nd, 2013 08:00

Best practice for VMware LUN size

I wrote the following blog post a few months ago and wanted to include it in this community in the hope of generating more discussion on the topic.  I'm interested in different opinions, and also in any descriptions of implementation successes or failures with different EMC products.

Best practice for VMware LUN size

http://livevirtually.wordpress.com/2012/09/21/best-practice-for-vmware-lun-size/

September 21, 2012


I was asked this question today.  It's one of my favorite questions to answer, but I've never written the answer down.  Today I did, so here it is.  Let me know if you agree or if you have other thoughts.

For a long time VMware's maximum LUN size was 2TB.  This restriction was not a big issue for most, but some wanted larger LUNs because of an application requirement; in those cases it was typically one or only a few VMs accessing the large datastore/LUN.  vSphere 5 took the LUN size limit from 2TB to 64TB.  Quite a dramatic improvement, and hopefully big enough to satisfy those applications.

For general-purpose VMs, prior to vSphere 4.1, the best practice was to keep LUN sizes smaller than 2TB (i.e. even though ESX supported 2TB LUNs, don't make them that big).  500GB was often recommended; 1TB was OK too.  But it really depended on a few factors.  In general, the larger the LUN, the more VMs it can support, and the reason for keeping LUN sizes small in the past was to limit the number of VMs per datastore/LUN, because putting too many VMs on a datastore/LUN hurts performance.

The first reason is that vSphere's native multipathing only drives one path at a time per datastore/LUN, so if you have multiple datastores/LUNs you can use multiple paths at the same time.  Alternatively, you could go with EMC's PowerPath/VE to better load balance the I/O workload.  The second reason is that with block storage on vSphere 4.0 and earlier there was a SCSI locking issue: if a VM was powered on, powered off, suspended, cloned, etc., the entire datastore/LUN was locked until the operation completed, freezing out the other VMs using that same datastore/LUN.  This was resolved in vSphere 4.1 with VAAI hardware-assisted locking (ATS), assuming the underlying storage array supported the APIs.  But before VAAI, keeping LUN sizes small helped administrators limit the number of VMs on a single datastore/LUN and so reduce the effects of the locking and pathing issues.
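
If you want to see how a given host and device line up with the above, the ESXi shell makes both checks easy.  A minimal sketch for ESXi 5.x; the naa device ID below is a placeholder:

    # Does the device support the VAAI primitives (ATS hardware-assisted locking, clone, zero, delete)?
    esxcli storage core device vaai status get -d naa.60060160xxxxxxxxxxxxxxxxxxxxxxxx

    # Which path selection policy (Fixed, MRU, Round Robin) is native multipathing using for it?
    esxcli storage nmp device list -d naa.60060160xxxxxxxxxxxxxxxxxxxxxxxx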

OK, that was the history; now for the future.  The general direction for VMware is to go with larger and larger pools of compute, network, and storage, which makes the whole cloud thing simpler.  Hence the increase in LUN support from 2TB to 64TB.  I wouldn't recommend going out and creating 64TB LUNs as a matter of course, though.  Because of VAAI the locking issue goes away.  The pathing issue is still there with native multipathing, but if you go with EMC's PowerPath/VE then that goes away too.  So then it comes down to how big the customer wants to make their failure domains: the smaller the LUN, the fewer VMs placed on it, and thus the less impact if a datastore/LUN were to go away.  Of course we go to great lengths to prevent that with five-nines arrays, redundant storage networks, etc.  So the guidance I've been seeing lately is that a 2TB datastore/LUN is a happy medium, not too big and not too small, for general-purpose VMs.  If the customer has specific requirements to go bigger, that's fine; it's supported.

So, in the end, it depends!!!

Oh, and the storage array's behavior does have an impact on the decision.  In the case of an EMC VNX with a FAST VP pool, the blocks will be distributed across the various tiers of drives, and if more drives are added to the pool the VNX will rebalance the blocks to take advantage of all of them.  So whether it's a 500GB LUN or a 50TB LUN, the VNX will balance the overall performance of the pool.  Lots of good info here about the latest Inyo release for VNX:

http://virtualgeek.typepad.com/virtual_geek/2012/05/vnx-inyo-is-going-to-blow-some-minds.html

61 Posts

February 22nd, 2013 09:00

This is a great summary.

One thing not mentioned that I think is crucially important is a discussion of queues.

Let's say everything you are doing fits into a single 2TB LUN.  Great for you.  However, I'd strongly suggest you DON'T build just a single LUN (ever!).  Why?  Queues.

For every LUN being accessed, vSphere has (effectively) a single queue with a fixed depth, so only a limited number of commands can be outstanding at a time.  Most back-end arrays behave similarly.  So with a single 2TB device, all your (maybe 30?  50?) VMs are waiting in the same line.

If you break that up into 3-4 LUNs, suddenly each line is roughly 4x shorter and everyone gets serviced faster.  In this case, 500GB per LUN would be a better choice.
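
As a quick sanity check (a sketch; the device ID is a placeholder and the field names are from ESXi 5.x), you can see each device's queue depth from the host:

    # Look for "Device Max Queue Depth" and "No of outstanding IOs with competing worlds" in the output
    esxcli storage core device list -d naa.60060160xxxxxxxxxxxxxxxxxxxxxxxx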

This is important on both mid-tier arrays and higher end Symmetrix style systems.

So while I think Peter's guidance of 2TB is overall good, I would say that you should consider a minimum of 4 LUNs, regardless of the amount of data being stored.  Second, talk to your local expert on your storage array.  Certain arrays have certain best practices.  For example, on a Symmetrix, we'd want to see the number of LUNs be 2-10x the number of FA ports servicing the workload.

2 Posts

March 19th, 2013 11:00

Recently, after attending a VMware webcast where we discussed this topic, I made the decision to increase our standard LUN sizes.  We went from 600GB to 1TB LUNs.  Looking back I regret that decision, as performance has noticeably decreased, and I mean decreased to the point that you can feel the difference when using one of the VMs.  When I looked closer, I found that our write latencies have increased, as have the command queues, after converting everything over.

Something hinted at above yet not discussed is the work required to balance out the workloads on these large datastores.  If you have, or can afford, Enterprise Plus licensing then Storage DRS is available; for those of us who don't have it, it means a lot of work to try to balance workload I/O on a datastore with 20+ VMs.

Another option that I've been using on our account is to create RDM disks so that a VM gets its own device queue.  This has proven to work quite well, and it also has a few other management benefits.  Namely, it allows for more consistent VM sizes in our main datastores, which makes moving VMs around when resizing or adding a new drive much easier, since we don't have to move a bunch of VMs off a datastore just to grow a single VM by 20GB.  The one downside becomes apparent if you have Avamar and try to implement VADP image-level backups, which will not work on a physical RDM due to the lack of snapshot capability.
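
For reference, a minimal sketch of creating the mapping files from the ESXi shell (the device ID and datastore paths are placeholders): the -z flag creates a physical/pass-through RDM, while -r creates a virtual RDM, which does support snapshots and therefore image-level VADP backups.

    # Physical (pass-through) RDM mapping file
    vmkfstools -z /vmfs/devices/disks/naa.60060160xxxxxxxxxxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/myvm/myvm_prdm.vmdk

    # Virtual RDM mapping file (snapshot-capable)
    vmkfstools -r /vmfs/devices/disks/naa.60060160xxxxxxxxxxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/myvm/myvm_vrdm.vmdk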

7 Posts

May 10th, 2013 18:00

Another thing not mentioned above (though I agree this is a great discussion starter and post) is that even though we can create 64TB datastores, the maximum VMDK size remains 2TB on VMFS.  This is rumored to change in vSphere 5.5, but for now it's something to keep in the back of your mind.  I've had many customers upgrade expecting to be able to get rid of a lot of RDMs, only to run into trouble because of this limitation.

As for lastpick's issue, I'd be curious to hear what the back-end storage is.  A VNX licensed with FAST VP, or with a little FAST Cache, would go a long way toward mitigating the latency and queueing.  If those features or that hardware are not available in the array, then those are things you need to consider up front before deciding to increase the VM density on your datastores.

This is a great and ever-evolving discussion so keep it going!

8 Posts

May 31st, 2013 12:00

Just adding to the discussion based on a customer question yesterday... they expressed concern about an issue with Heap Size for large VMFS volumes.  Here's the article with a good description of the problem and resolutions.

Monster VMs & ESX(i) Heap Size: Trouble In Storage Paradise

http://www.boche.net/blog/index.php/2012/09/12/monster-vms-esxi-heap-size-trouble-in-storage-paradise/

In summary, the VMFS heap size limits how much open VMDK capacity a host can address, but VMware released a patch to increase the maximum heap size and has baked the fix into vSphere 5.1 Update 1.  The only caveat is that hosts that have been upgraded (vs. fresh installs) will need to have the heap size reconfigured manually.
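
For anyone who needs to do that manual step, a hedged sketch from the ESXi 5.x shell (pick the value based on the KB/article above; 256MB was the configurable maximum on 5.0/5.1, and a host reboot is required for the change to take effect):

    # Check the current VMFS heap size (MB)
    esxcli system settings advanced list -o /VMFS3/MaxHeapSizeMB

    # Raise it, then reboot the host
    esxcli system settings advanced set -o /VMFS3/MaxHeapSizeMB -i 256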

8 Posts

July 8th, 2013 19:00

Just adding to the discussion... here's an article that essentially states that you should avoid extents unless you have a specific performance requirement.  If you need to grow/expand the LUN, you can use VMFS Volume Grow.  If you need the performance, you may be able to tune the queue depth rather than aggregating the queue depths of multiple extents.

http://blogs.vmware.com/vsphere/2012/02/vmfs-extents-are-they-bad-or-simply-misunderstood.html
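
On the queue-depth tuning point, a hedged example of what that can look like for a QLogic FC HBA; the module and parameter names vary by vendor and driver version (e.g. lpfc_lun_queue_depth for Emulex), so check your HBA documentation, and a reboot is required:

    # See the current LUN queue depth setting for the qla2xxx driver
    esxcli system module parameters list -m qla2xxx | grep ql2xmaxqdepth

    # Raise it to 64, then reboot
    esxcli system module parameters set -m qla2xxx -p "ql2xmaxqdepth=64"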

8 Posts

July 24th, 2013 21:00

Since we mentioned queue depth in the comments... I came across another vSpecialist email discussion asking about enabling the Adaptive Queue Depth (ADQ) setting.  The responses were:

Matt Cowger

ADQ is actually off by default, unless you enable it explicitly by setting values for the two variables (QFullThreshold, QFullSampleSize).

Our recommendation is to leave it off unless specifically directed by support (VMware's or EMC's).  Its functionality is fully subsumed by SIOC, and SIOC is always better.  The next version of the TechBooks will contain a comment to this effect.

So, we recommend leaving ADQ off and unset.

Tyler Britten

You may also be thinking of the iops=1 setting; that only applies to Round Robin using NMP.  The recommendation to set it to 1 from the default of 1000 is for Symmetrix only.
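
For completeness, a hedged sketch of what both of those look like from the ESXi shell (the device ID is a placeholder):

    # Adaptive Queue Depth is controlled by these two advanced settings;
    # a QFullSampleSize of 0 (the default) leaves the feature disabled
    esxcli system settings advanced list -o /Disk/QFullSampleSize
    esxcli system settings advanced list -o /Disk/QFullThreshold

    # The Round Robin iops=1 tuning mentioned above, applied per device
    esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=naa.60060160xxxxxxxxxxxxxxxxxxxxxxxx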

July 29th, 2013 05:00

I've been looking for this kind of information, but VMware isn't exactly forthcoming on the subject - which is understandable, since the answer vastly depends on your specific needs.

I ended up building 2TB datastores and grouping them into datastore clusters with Storage DRS enabled, and I'm very satisfied with the performance so far.  The back end might help, though: we're using a VNX5300 with FAST VP enabled behind a pair of VPLEXes.

I'll monitor the evolution of queue depth over the next few months.
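
If it helps anyone doing the same monitoring, esxtop's disk device view is the usual place to watch this (a sketch; the field names are from ESXi 5.x):

    # Run esxtop, then press "u" for the per-device view:
    #   DQLEN = device queue depth, ACTV/QUED = active vs. queued commands,
    #   DAVG/KAVG/GAVG = device, kernel, and guest-visible latency per command
    esxtop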

2 Posts

March 23rd, 2016 03:00

This blog is from 2012; does the recommendation still apply today with vSphere 5.5/6?

I've read somewhere that for block storage there is no difference between 10x500GB LUNs and 1x5TB LUN. Is this correct in vSphere 5.5/6?

474 Posts

March 23rd, 2016 08:00

There IS definitely a difference in block storage between 10x500GB and 1x5TB. There are more queues, so with the same workload you should have shorter queues, i.e. lower latency.

Richard J Anderson

Sr Systems Engineer, Software Defined Storage, West Globals

EMC Emerging Technologies Division

richardj.anderson@emc.com

206.229.9263

VMAX SPEED, Unified SPEED, ScaleIO SPEED, Proven EMCTA

August 23rd, 2016 15:00

How would it affect performance if I configure virtual SQL servers that have multiple volumes: OS - 100GB, SQL logs - 100GB, SQL data1 - 500GB, SQL data2 - 500GB, SQL temp db - 100GB, and SQL temp db log - 50GB in the following ways:

  1. Configure separate LUNs for each volume: OS - 100GB, SQL logs - 100GB, SQL data1 - 500GB, SQL data2 - 500GB, SQL temp db - 100GB, and SQL temp db log - 50GB, then present them to the ESXi cluster and install and configure the server.
  2. Consolidate all volumes of each SQL VM into one 1.5-2TB LUN: OS - 100GB, SQL logs - 100GB, SQL data1 - 500GB, SQL data2 - 500GB, SQL temp db - 100GB, and SQL temp db log - 50GB. In this case I will have one LUN per SQL VM.
  3. Consolidate multiple SQL VMs into one 5-6TB LUN: OS - 100GB, SQL logs - 100GB, SQL data1 - 500GB, SQL data2 - 500GB, SQL temp db - 100GB, and SQL temp db log - 50GB. In this case I will have multiple SQL VMs, maybe 3 or 4, running on one LUN.

The array is a VNX5200 with two mixed storage pools containing three types of drives (SSD, SAS, and NL-SAS), with FAST VP and FAST Cache.

Appreciate your input.
