Firstly, welcome to the forums and thank you for being an EMC customer.

Let me begin by noting that the 10% space being mentioned is unallocated capacity and not necessarily free.


Now, in regard to creating a "placeholder" LUN to reserve the 10% free space: I would only have agreed that was true with VNX OE for Block 31 and a thick LUN.  Prior to Inyo, when you created a thick LUN the space was reserved but was allocated on demand.  There were certain use cases where we suggested something such as a full format of the LUN to fully allocate it, and even went as far as suggesting simultaneous full formats so that the slices would get "interleaved".  However, with Inyo (VNX OE for Block 32), thick LUNs both reserve the space and are fully allocated at creation.  On the other hand, a thin LUN would not make sense as a "placeholder" LUN (in either Elias or Inyo): since oversubscription is possible, a thin LUN makes no effort to set aside 10% of unallocated space.  I hope this makes sense.

I'm certainly not looking to contradict an EMC level 3 support engineer, so I will just mention that I wouldn't rely on a placeholder LUN.  Instead, make an effort to ensure your "Allocated Capacity" doesn't exceed 90% of the "Total Capacity" of the pool.  Doing this also ensures that 10% is available per tier.
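If you want to keep an eye on this, it's easy to script a quick check against the allocated and total figures Unisphere reports for the pool.  This is purely an illustrative sketch of the 90% guideline above — the function name and the sample numbers are my own, not anything from an EMC tool:

```python
def pool_headroom_ok(allocated_gb, total_gb, threshold=0.90):
    """Return True if the pool's Allocated Capacity is within the
    recommended fraction of its Total Capacity (90% by default)."""
    return allocated_gb <= total_gb * threshold

# Example: a 10 TB pool with 8.5 TB allocated is within the 90% guideline;
# 9.5 TB allocated is not.
print(pool_headroom_ok(8500, 10000))  # True
print(pool_headroom_ok(9500, 10000))  # False
```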


Basically, what this best practice (not necessarily mandatory) suggests is the following:

1) The system will make an effort to maintain 10% free per tier for new LUN assignments 

This, as you pointed out, leaves free space for any new LUNs that you want to assign to "Highest Available Tier", for instance.  If it didn't, then, knowing that FAST VP favors the top tiers (except for "Lowest Available Tier", and limited by available space of course), the initial slice placement for that new LUN would likely land on one of the lower tiers if that is all that is available.  On the subsequent tiering windows the system will make an effort to maintain the 10% (assuming you only allocated up to 90% of the total capacity of the pool), keeping the following in mind:

a) Slices with policy "Highest Available Tier" take precedence over slices that were eligible for that tier but whose policy was set to "Auto-Tier" (when contending for space)

b) "active" slices that are eligible for that tier take precedence over "inactive" slices (but "inactive" slices don't get demoted just because they are "inactive")

c) Except when assigned "Lowest Available" tier, FAST VP favors the top tiers. 

A consequence of this is the following:

  • There really isn't any way to force a slice to the middle tier (Performance) in a 3 tier pool
  • It is possible (assuming there is free capacity and the 10% free can be maintained) that with Auto-Tier, all of your slices end up in the highest tier.  In other words, if there is free space in the top tiers there isn't any reason not to use it, even if those slices are inactive relative to the other slices or, for instance, haven't been touched in a year.
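One way to picture the contention rules in (a) and (b) is as a sort order over slices when a tier fills up.  The sketch below is purely illustrative — the slice records and the scoring are my invention, not how FAST VP is actually implemented:

```python
# Hypothetical slice records: (name, policy, activity); higher activity = "hotter".
slices = [
    ("s1", "auto-tier", 80),
    ("s2", "highest",   10),
    ("s3", "auto-tier", 95),
    ("s4", "highest",   50),
]

# When contending for space in the top tier:
#  (a) "Highest Available Tier" beats "Auto-Tier", regardless of activity;
#  (b) among slices with the same policy, more active slices win.
ranked = sorted(slices, key=lambda s: (s[1] != "highest", -s[2]))
print([name for name, _, _ in ranked])  # ['s4', 's2', 's3', 's1']
```

Note that even the inactive "highest" slice (s2) outranks the hottest "auto-tier" slice (s3), which matches rule (a).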

2) Having 10% free per tier allows for optimal relocation (promote/demote) of slices

This one should be intuitive: leaving free space within each tier gives slices somewhere to land when they are promoted or demoted during a relocation window.


In summary, if you take the total capacity of the pool and only allocate up to 90% of it, you effectively leave 10% per tier, which the system will try to enforce.  As you create your pool LUNs, you also have to figure in the metadata overhead, which is:

LUN Size (in GB) * 0.02 + 3GB

So for instance if you create a 100GB thick LUN, you'll see approximately 105GB allocated.

If on the other hand you create a 100GB thin LUN:

  • Initial allocated space for that LUN will be approximately 3GB
  • Allocation then grows on demand, up to a maximum allocated capacity of 105GB
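That overhead arithmetic is easy to wrap in a small helper.  A sketch — the function names are mine, but the formula is the one quoted above:

```python
def pool_lun_overhead_gb(lun_size_gb):
    """Metadata overhead for a pool LUN: LUN size * 0.02 + 3 GB."""
    return lun_size_gb * 0.02 + 3

def thick_lun_allocated_gb(lun_size_gb):
    """A thick LUN (on Inyo) is fully allocated up front, plus metadata."""
    return lun_size_gb + pool_lun_overhead_gb(lun_size_gb)

print(pool_lun_overhead_gb(100))    # 5.0
print(thick_lun_allocated_gb(100))  # 105.0 -- matches the 100GB thick LUN example
```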


Finally, I also want to mention that this also applies when allocating pool LUNs for file storage on which you also enable FAST VP.  However, instead of 10% we have recommended as low as 5%.  Therefore, instead of creating your thick pool LUNs at "MAX" capacity (remember we don't recommend thin block LUNs), size them to leave that free space, while maintaining the best practice for the number of LUNs:

Per the "EMC VNX Unified Best Practices for Performance Applied Best Practices Guide":

  • Create approximately 1 LUN for every 4 drives in the storage pool
  • Create LUNs in even multiples of 10
  • Number of LUNs = (number of drives in pool divided by 4), rounded up to nearest multiple of 10
  • Make all LUNs the same size
  • Balance LUN ownership across SPA and SPB
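Putting those sizing rules together, the arithmetic looks roughly like this.  A hedged sketch — the helper names and the 90% default are mine, taken from the guidelines in this thread:

```python
import math

def file_pool_lun_count(drive_count):
    """~1 LUN per 4 drives, rounded up to the nearest multiple of 10."""
    return math.ceil(drive_count / 4 / 10) * 10

def file_pool_lun_size_gb(pool_total_gb, drive_count, allocation_pct=0.90):
    """Equal-size LUNs that together consume allocation_pct of the pool,
    after accounting for per-LUN metadata overhead (size * 0.02 + 3 GB).
    Solves n * (size + size * 0.02 + 3) = pool_total_gb * allocation_pct."""
    n = file_pool_lun_count(drive_count)
    budget = pool_total_gb * allocation_pct
    return (budget / n - 3) / 1.02

# Example: a 45-drive pool -> ceil(45/4) = 12 -> round up to 20 LUNs,
# which you would then split 10/10 across SPA and SPB.
print(file_pool_lun_count(45))  # 20
```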

Also consider the following:

1) Do not allocate more than 90 - 95% of the total capacity of the pool

2) When performing your calculations, remember to keep in mind the overhead associated with a pool LUN: LUN Size (in GB) * 0.02 + 3GB

By allocating only 90 - 95% of the total pool capacity, you ensure that each tier will have 5 - 10% of free space available.  In this case, however, the strategy isn't about new LUNs but about optimizing tiering (if FAST VP is enabled and an active tiering policy is set).
