Calculating optimal LUN size on VNX

November 18th, 2015 08:00

Hi Community,

I was asked what the optimal LUN size is for a LUN requested by a user.  I was trying to find out if there is a calculation for this, and if so, what it is.  Basically, it appears that there is an optimal LUN size for the VNX that will provide the best performance.  Again, if there is an equation or calculator available, please share.

Thank you,

195 Posts

November 18th, 2015 09:00

Honestly, for capacity the answer is 'big enough to meet the needs', and for performance 'fast enough to meet the needs'.

In large ESX clusters I use LUNs as large as 16 TB in order to fit the amount of storage the cluster currently needs under the 256-LUN limit.  And across more than a decade of using ESX that size has evolved from 500 GB, to 1 TB, to 2 TB, to 4 TB, to 8 TB... and not too long from now it will be 24 or 32 TB.
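As a rough illustration of that math, here is a minimal sketch (Python) of checking whether a candidate LUN size keeps a cluster under the limit; the capacity figure is a made-up assumption, not a recommendation:

```python
import math

# Sketch of the sizing check described above; the capacity is an illustrative assumption.
cluster_capacity_tb = 1800   # storage the ESX cluster currently needs (assumption)
lun_size_tb = 16             # candidate LUN/datastore size
lun_limit = 256              # per-cluster LUN limit mentioned above

luns_needed = math.ceil(cluster_capacity_tb / lun_size_tb)
status = "fits under" if luns_needed <= lun_limit else "exceeds"
print(f"{luns_needed} LUNs of {lun_size_tb} TB {status} the {lun_limit}-LUN limit")
```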

Whether using pools, LUNs, or MetaLUNs, I do consider the approximate number of random IOPS that a group of disks will provide, and attempt not to oversubscribe that number significantly.  In most cases, that means a RAID group or pool is composed of spindles that should be good for some number between, say, 1,000 and 5,000 random IOPS, and we will throw guests wherever they best fit until/unless we see the workload approaching that limit.
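For anyone who wants to put numbers on that, here is a back-of-the-envelope sketch (Python).  The per-drive IOPS figure, RAID write penalty, and 70/30 read/write mix are rule-of-thumb assumptions, not vendor specifications:

```python
def group_host_iops(drives, iops_per_drive, write_penalty, read_fraction):
    """Approximate host-visible random IOPS a RAID group or pool tier can sustain."""
    backend_iops = drives * iops_per_drive
    write_fraction = 1.0 - read_fraction
    # Each host write costs `write_penalty` back-end I/Os (e.g. ~4 for RAID 5, ~2 for RAID 1/0).
    return backend_iops / (read_fraction + write_fraction * write_penalty)

# Example: ten 15k SAS drives at ~180 IOPS each, RAID 5, 70% reads.
print(f"~{group_host_iops(10, 180, 4, 0.7):.0f} host IOPS")   # roughly 950 in this example
```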

If something falls outside our typical ranges for either size or IOPS requirements then we craft an appropriate solution, but we tend to use models for size and performance that fit 95+% of our guests, and keep the exceptions to a minimum.

Our most valuable, and difficult to expand, resource tends to be the human resource.  Maintaining model solutions rather than treating every workload as unique helps us get the most out of that resource.

195 Posts

November 19th, 2015 09:00

Generally yes.  The underlying physical structure of the RAID group or storage pool is going to be the most important part.

Beyond that I do, at times, pay attention to where the most active LUNs within a RAID group are located.  With RAID groups, each LUN occupies a specific extent in the group.  So, for example, I avoid putting the most active LUNs in a group at opposite ends of the disks, preferring to make sure that they are close together, and close to the center of the disks, for improved locality of reference.

For pools, particularly those using mixed disk types and/or thin/dedup/etc., the array software is responsible for positioning chunks of the LUNs, so the above is much less of a concern for the admin.

LUN size considerations come more into play when thinking about data growth and the relative ease of expanding a disk within the server OS.  Also, VSS or snapshot space may be required if those functions are used at the host layer.
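If it helps, here is an illustrative sketch of that kind of sizing arithmetic (Python); the growth rate, runway, and snapshot reserve are assumptions for the example, not guidance:

```python
# Size a LUN for projected growth plus a host-side VSS/snapshot reserve.
current_data_gb = 500        # data on the LUN today (assumption)
annual_growth = 0.20         # expected yearly growth (assumption)
years_of_runway = 3          # how long before the LUN should need expanding
snapshot_reserve = 0.20      # extra space for host-side VSS/snapshots (assumption)

projected_gb = current_data_gb * (1 + annual_growth) ** years_of_runway
lun_size_gb = projected_gb * (1 + snapshot_reserve)
print(f"Provision roughly {lun_size_gb:.0f} GB")   # ~1037 GB with these numbers
```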

2 Intern • 356 Posts

November 19th, 2015 09:00

So you're saying that there is no performance gain or degradation from creating LUNs of any arbitrary size on the VNX?  There is no calculation for creating a LUN for optimal performance?  The real optimization and performance come from creating pools with the disk type and RAID level you need, based on the read and write IOPS required by the application or users, and then creating the LUNs in those pools?  Let me know if I got this right.

Thank you,

2 Intern • 356 Posts

November 19th, 2015 10:00

We are no longer dealing with RAID groups.  Strictly pools, and I was not sure how to optimize something the system automatically controls as far as the disk tiering.  We have pools with mixed disks in them, and because of this I wasn't sure if there was a way to optimize any further.  But it appears that a pool with mixed disks is a trap of some sort, as you are at the mercy of the system to do the work...?  Is that what I should take from this?  I am guessing the only way to fix this would be to create new pools strictly based on the performance we expect from them?  How would I do that using a mixed pool?  Let me know.

Thank you,

November 20th, 2015 06:00

Pools with mixed drive types are efficient in terms of tiering: not all the blocks in the pool are hot or frequently accessed, so only the frequently accessed blocks are placed on the faster drives available in the pool.

When it comes to creating a LUN, you can decide which tier its blocks are initially placed on, which in turn gives you flexibility and ease of management, saving you from having to think about which disks your LUN should reside on.

There is no specific LUN size that will give you a performance benefit; it's all about the disk type, RAID level, and number of drives in the pool, which should be chosen to match the performance required by the application/hosts.  There could be some recommendations at the host level to create a specific size for optimal performance; for example, I think having more LUNs behind a single VMware datastore can cause performance issues (at least that's what I have heard from the VMware guys), so you may want to provision larger LUNs in that case.
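As a sketch of working backwards from an application's read/write IOPS to a drive count (Python; the per-drive IOPS and RAID 5 write penalty are rule-of-thumb assumptions):

```python
import math

def drives_required(read_iops, write_iops, write_penalty, iops_per_drive):
    """Rough drive count needed to satisfy a host read/write IOPS requirement."""
    backend_iops = read_iops + write_iops * write_penalty
    return math.ceil(backend_iops / iops_per_drive)

# Example: 2000 reads/s and 800 writes/s on RAID 5 (penalty ~4), 10k SAS at ~140 IOPS/drive.
print(drives_required(2000, 800, 4, 140))   # -> 38 drives in this example
```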

-Sameer

195 Posts

November 20th, 2015 07:00

With pools I think that there are a couple of good things to remember/allow for.

However big or busy the performance tier is, the relocation algorithm will do its best to ensure that, at the end of a relocation, it has 10% free space available.  Most people interpret that as needing to keep 10% of the pool free; I think what it really means is that you need to keep unallocated space in the pool equal to, or greater than, 20% of the performance tier capacity.
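A tiny worked example of the difference between those two readings (Python, with made-up pool and tier sizes; this reflects the interpretation above, not an official figure):

```python
pool_capacity_tb = 100
performance_tier_tb = 20     # capacity of the pool's performance tier (assumption)

common_reading_tb = 0.10 * pool_capacity_tb    # "keep 10% of the pool free"
this_reading_tb = 0.20 * performance_tier_tb   # unallocated space >= 20% of the perf tier

print(f"10% of pool: {common_reading_tb:.0f} TB vs. 20% of perf tier: {this_reading_tb:.0f} TB")
```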

At best, relocation promotes chunks that were busy today so that they are in the performance tier tomorrow.  That is OK, but if you have a pool with an extremely dynamic and/or high rate of change, or shifting hot spots, it can end up doing things just a little too late.  FAST Cache works well with pooling, as it provides a boost that is more immediate and works with smaller chunks of data (64 KB for FAST Cache versus 1 GB for tiering).

I think mixed pools work well with a mixed workload.  In addition to placing important/latency-sensitive LUNs there, it is good to throw in some less heavily used data; that gives tiering some data that will likely be happy in the capacity tier, to offset the more active workloads.

I also tend to construct several medium-sized pools rather than fewer huge ones.  That gives me some insulation/isolation between things, so that if a LUN in a pool does something abnormal (has an exceptionally active I/O day...) it won't as strongly impact the much broader community.
