February 14th, 2013 07:00

FAST VP

Hi,

How do we set aside 10% in each tier while configuring FAST VP? I believe this is a requirement for FAST VP to reallocate slices effectively. I read this in a FAST VP whitepaper: "Ten percent space is maintained in each of the tiers to absorb new allocations that are defined as 'Highest Availability Tier' between relocation cycles."

The previous storage admin at my office created an unassigned LUN of 10% of the total storage pool space to achieve this. I don't think it's the right way to do it, but according to him this was recommended by EMC level 3 support.

Any insight guys?

Thanks

Sushil

February 15th, 2013 23:00

Sushil,

Firstly, welcome to the forums and thank you for being an EMC customer.

Let me begin by noting that the 10% space being mentioned is unallocated capacity and not necessarily free.

PLACEHOLDER LUN?

Now... in regards to creating a "placeholder" LUN to reserve the 10% free space, I would only have agreed that was a valid approach with VNX OE for Block 31 and a thick LUN. Prior to Inyo, when you created a thick LUN the space was reserved but allocated on demand. There were certain use cases where we suggested something such as a full format of the LUN to fully allocate it, and we even went as far as suggesting simultaneous full formats so that the slices would get "interleaved". However, with Inyo (VNX OE for Block 32), thick LUNs now both reserve the space and are fully allocated. On the other hand, a thin LUN would not make sense as a "placeholder" LUN (in either Elias or Inyo) because oversubscription is possible, so it makes no effort to set aside 10% of unallocated space. I hope this makes sense.

I'm certainly not looking to contradict an EMC level 3 support engineer, so I will just mention that I wouldn't rely on a placeholder LUN but would instead make an effort to ensure your "Allocated Capacity" doesn't exceed 90% of the "Total Capacity" of the pool. Doing this also makes sure that 10% is available per tier.

INTERPRETING BEST PRACTICE

Basically, what this best practice (bp, and not necessarily mandatory) suggests is the following:

1) The system will make an effort to maintain 10% free per tier for new LUN assignments 

This, as you pointed out, leaves free space for any new LUNs that you want to assign, for instance, to "Highest Available Tier". If it didn't, then, knowing that FAST VP favors the top tiers (limited by space, of course) for everything except "Lowest Available Tier", the initial slice placement for that new LUN would likely be on one of the lower tiers if that is all that is available. Then, on the subsequent tiering windows, it will make an effort to maintain the 10% (assuming you only allocated up to 90% of the total capacity of the pool), keeping the following in mind (a rough conceptual sketch of how these rules interact follows the list of consequences below):

a) Slices with the policy "Highest Available Tier" take precedence over those that were eligible for that tier but whose policy was set to "Auto-Tier" (when contending for space)

b) "active" slices that are eligible for that tier take precedence over "inactive" slices (but "inactive" slices don't get demoted just because they are "inactive")

c) Except when assigned "Lowest Available" tier, FAST VP favors the top tiers. 

A consequence of this is the following:

  • There really isn't any way to force a slice to the middle tier (Performance) in a 3-tier pool
  • It is possible (assuming there is free capacity and 10% free can be maintained) that with Auto-Tier, all of your slices will end up in the highest tier. In other words, if there is free space in the top tiers, there isn't any reason not to use it, even if the slices are inactive relative to the other slices or, for instance, haven't been touched in a year.
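
Purely as a conceptual illustration (this is not the actual FAST VP code, just a rough sketch of rules a through c above, with made-up names), the relocation ordering can be thought of as a sort by policy first, then by activity:

from dataclasses import dataclass

@dataclass
class Slice:
    policy: str       # "highest", "auto", or "lowest" (hypothetical labels)
    activity: float   # relative I/O activity ("temperature") of the slice

# Rule a): "Highest Available Tier" beats "Auto-Tier" when contending for space.
# Rule b): more active slices beat less active ones within the same policy.
# Rule c): everything except "Lowest Available Tier" is a candidate for the top tiers.
POLICY_RANK = {"highest": 2, "auto": 1, "lowest": 0}

def promotion_order(slices):
    """Order candidate slices for promotion into the upper tiers."""
    candidates = [s for s in slices if s.policy != "lowest"]
    return sorted(candidates,
                  key=lambda s: (POLICY_RANK[s.policy], s.activity),
                  reverse=True)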

2) Having 10% free per tier allows for optimal relocation (promote/demote) of slices

This one should be intuitive: leaving free space in each tier gives the relocation process room to promote and demote slices.

CALCULATIONS

In summary, if you were to take the total capacity of the pool and only allocate up to 90% of that, you would effectively allow for 10% per tier, which the system will try to maintain. As you create your pool LUNs, you also have to factor in the metadata overhead (a small sketch of this arithmetic follows below), which is:

LUN Size (in GB) * .02 + 3GB

So for instance if you create a 100GB thick LUN, you'll see approximately 105GB allocated.

If on the other hand you create a 100GB thin LUN:

  • Initial allocated space for that LUN will be approximately 3GB
  • Up to a maximum allocated capacity of 105GB
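
To make the arithmetic concrete, here is a minimal sketch (Python; the helper names are mine, not from any EMC tool) of the overhead estimate and the allocation check described above. The 2% figure is the rule of thumb from this thread, not an exact value:

def pool_lun_overhead_gb(lun_size_gb):
    """Rule-of-thumb metadata overhead for a pool LUN: size * 2% + 3GB."""
    return lun_size_gb * 0.02 + 3.0

def allocated_capacity_gb(lun_size_gb, thick=True):
    """Approximate allocated capacity right after creation: a thick LUN is
    fully allocated (size + overhead), while a thin LUN starts at roughly
    the 3GB of metadata and can grow up to size + overhead."""
    return lun_size_gb + pool_lun_overhead_gb(lun_size_gb) if thick else 3.0

def within_headroom(total_allocated_gb, pool_capacity_gb, free_fraction=0.10):
    """Check that allocated capacity leaves the recommended free space
    (10% for block, 5% for filestorage)."""
    return total_allocated_gb <= pool_capacity_gb * (1.0 - free_fraction)

# The 100GB example from above: a thick LUN allocates roughly 105GB,
# while a thin LUN starts at roughly 3GB.
print(allocated_capacity_gb(100))               # 105.0
print(allocated_capacity_gb(100, thick=False))  # 3.0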

FILESTORAGE

Finally, I also want to mention that this also applies when allocating pool LUNs for filestorage on which you also enable FAST VP. However, instead of 10% we have recommended as low as 5%. Therefore, instead of creating your thick pool LUNs with "MAX" capacity (remember, we don't recommend thin block LUNs), size them while maintaining the best practice for the number of LUNs (a small sketch of these rules follows at the end of this section):

{Per the "EMC VNX Unified Best Practices for Performance Applied Best Practices Guide" on support.emc.com}

  • Create approximately 1 LUN for every 4 drives in the storage pool
  • Create LUNs in even multiples of 10
  • Number of LUNs = (number of drives in pool divided by 4), rounded up to nearest multiple of 10
  • Make all LUNs the same size
  • Balance LUN ownership across SPA and SPB

Also consider the following:

1) Do not allocate more than 90 - 95% of the total capacity of the pool

2) When performing your calculations remember to keep in mind the overhead associated with a pool LUN: LUN Size (in GB) * .02 + 3GB

By making sure you only allocate 90 - 95% of the total pool capacity, you also make sure that each tier will have 5 - 10% free space available. However, in this case the strategy isn't for new LUNs, but to optimize tiering (if FAST VP is enabled and an active consistent tiering policy is set).
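
As a rough illustration of the sizing rules above, here is a small sketch (Python; the function names are mine and the drive count is a made-up example) that derives the LUN count from the drive count and works out the 90 - 95% allocation target:

import math

def file_pool_lun_count(drive_count):
    """One LUN per four drives, rounded up to the nearest multiple of 10."""
    return math.ceil(drive_count / 4 / 10) * 10

def max_allocatable_gb(pool_capacity_gb, free_fraction=0.05):
    """Upper bound on allocated capacity: leave 5 - 10% of the pool free."""
    return pool_capacity_gb * (1.0 - free_fraction)

# Hypothetical example: a 100-drive pool rounds up from 25 to 30 LUNs, all the
# same size, with ownership balanced across SPA and SPB.
print(file_pool_lun_count(100))          # 30
print(max_allocatable_gb(10000, 0.05))   # 9500.0 GB may be allocated (file, 5% free)
print(max_allocatable_gb(10000, 0.10))   # 9000.0 GB for the 10% (block) case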


February 18th, 2013 03:00

Many thanks Christopher.

This was very thorough and informative.

Thanks,

Sushil

February 19th, 2013 18:00

Awesome, glad it was informative.

So I did get some internal feedback and need to make a small update above.  It comes down to keeping the math simpler than what I had noted, specifically about factoring in the pool LUN overhead.  It is, per an internal PPT, already accounted for in the 5% or 10% free.

UPDATE

When I mentioned above about FAST VP configurations:

1) allocating (up to) 90% of the total pool capacity for block

2) dedicating 95% of the total pool capacity for filestorage

the remaining 10% or 5% free (respectively) already accounts for pool overhead, from what I recently read. I, on the other hand, had suggested that you calculate it up front. That's not technically wrong, but it basically comes down to keeping the math simple.

Therefore, to use an example:

1) For instance, say you have a pool that you want to, per bp, dedicate entirely to filestorage with a total capacity of 10000GB (3 tiers w/ FAST VP).

2) Then using the rules above, you determine that you need 25 LUNs

3) 10000GB / 25 * .95 = 380GB

Therefore, you would simply create 20x (thick) LUNs of 380GB each instead of factoring in the pool LUN overhead ahead of time. Again, keep it simple.
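
For what it's worth, the arithmetic in the example checks out as follows (nothing more than the step 3 calculation above, written out):

pool_capacity_gb = 10000   # total pool capacity, 3 tiers with FAST VP
lun_count = 25             # from the number-of-LUNs rule

# Keep 5% of the pool free (which, per the internal PPT, already covers the
# pool LUN metadata overhead) and split the remaining 95% evenly:
per_lun_size_gb = pool_capacity_gb * 0.95 / lun_count
print(per_lun_size_gb)     # 380.0 GB per thick LUN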


February 20th, 2013 02:00

Thanks for the update Christopher.

In the above example, you meant "25x (thick) LUNs of 380GB" right?



February 20th, 2013 03:00

As per your last update, the overheads are accounted for in the 10% free space. Does that mean that if we allocate all of our pool space, 10% will still be maintained by the system?

We are currently on FLARE 31, so if I understand this correctly, in our case the placeholder LUN actually holds the space for the LUN overheads, and that space is in turn used for relocating slices? And since the overhead space is taken from the pool while creating LUNs, does that mean we are wasting space by keeping the placeholder LUN?

Also, I used the above formula for calculating metadata for thick LUNs for block, "LUN Size (in GB) * .02 + 3GB", but the answers don't seem to match.

I have created a 5400GB thick LUN on a pool. As per the formula, the overhead should be (5400 * .02) + 3 = 111GB and the total consumed capacity should be 5511GB. However, the total overhead taken by the pool is 147.814GB and the total consumed capacity is 5547.814GB.

Am I calculating it wrong?

I am really sorry for asking so many questions, but I am really confused now. I thought I understood the concept completely, but your last update leaves me very confused.

Many thanks.

Sushil

February 20th, 2013 22:00

jezza wrote:

In the above example, you meant "25x (thick) LUNs of 380GB" right?


Ooops... you are correct I meant 25x (not 20x)

February 22nd, 2013 22:00

Sorry, based on the order of the automatic emails received, I missed this one.

So as for the pool LUN overhead, that is the generally accepted formula; however, it is an estimate and starts deviating further from the 2% as the LUNs get larger. You are doing the math correctly. Based on what you are actually seeing (147GB vs 111GB), this works out closer to 2.7%. So as you demonstrated, the overhead seems to range from 2 - 3% depending on how large the LUN is. I'm assuming that for a 16TB LUN, which is currently the largest possible single LUN on a VNX, we would find it to be closer to 3% (maybe slightly larger)? You have piqued my curiosity now and I may do some testing.
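
To show where the ~2.7% comes from, here is the quick back-of-the-envelope check on the numbers you reported (just arithmetic, no array commands involved):

lun_size_gb = 5400.0
observed_overhead_gb = 147.814                   # reported by the pool
formula_overhead_gb = lun_size_gb * 0.02 + 3.0   # rule of thumb -> 111.0

effective_pct = observed_overhead_gb / lun_size_gb * 100
print(formula_overhead_gb, round(effective_pct, 2))  # 111.0 2.74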

As for leaving unallocated space in a FAST VP configuration, it really comes down to keeping the math simple when you are sizing all the available capacity (up to the recommended free space), as in the case of filestorage where the pool is dedicated up front. As you've already demonstrated, it would be difficult to size, and almost trial and error, if your goal was to align perfectly to 5 or 10% (which isn't absolutely necessary). Therefore, when you are going to allocate all of the space up front, such as in the case of filestorage, once you calculate the number of LUNs per the bp mentioned above, simply divide 95% of the total capacity by that number. This is per an internal PowerPoint that was brought to my attention recently and, again, the goal is to keep the math simple. Keep in mind, it isn't wrong if you also calculate the overhead first as I proposed originally.

On the other hand, with block storage it is a bit different. Unlike filestorage (block LUNs presented to the Data Movers), where you are allocating it entirely up front, with block storage you are typically building up and adding LUNs over time. Therefore, you will have visibility into the allocated capacity (which already includes the overhead) and can stop at 90% of the total capacity of the pool. Does this make sense?

As for the comment regarding the placeholder LUN, as I noted, with VNX OE for Block 31, where a thick LUN only reserved the space but didn't actually allocate it, I can see this as a strategy to leave 10% of unallocated space as room for optimal tiering. You qualified it as coming from a third-level EMC engineer, so I also don't want to contradict him/her. However, since most have moved to Inyo (and in consideration of those who read this later), I personally wouldn't rely on this strategy. As a reminder, with Inyo, thick LUNs both reserve the space and fully allocate the LUN, so those slices aren't available for tiering.


February 25th, 2013 03:00

"Therefore, you will have visibility to the allocated capacity (which already includes the overhead) and can stop at 90% of the total capacity of the pool.  Does this make sense?"

This is where the confusion is. When you say to allocate storage at 90% of the total capacity of the pool, and the 90% already includes overheads, that means the remaining 10% is still not accounting for overheads, as you stated earlier.

So we do indeed have to consider the overheads up front, don't we? Although it may vary, as you explained.

However, if the 10% is still accounting for overheads and we are keeping the allocated capacity to 90% (which already includes the overhead), aren't we wasting space?

It is simpler to just keep the allocated pool capacity to 90%, and I think it would be very hard to keep tabs on overheads and calculate capacity based on that.

Thanks

Sushil


February 25th, 2013 04:00

"With Inyo, VNX OE for Block 32, thick LUNs now both reserve the space and are fully allocated."

I went through the release notes for "EMC® Virtual Provisioning™ for VNX™ OE for Block Version 05.32.000.5.011", and these notes contradict the statement made by you above. Please refer to the excerpt below.

Thick LUN consumption per tier

When you create a thick LUN, the pool storage required for that thick LUN is not actually allocated; instead, it is reserved. Because these reservations are based on the pool rather than the tier, this reserved storage is not reflected in the tier breakdown at the thick LUN level, until the thick LUN is written to and the storage is actually allocated.

Additionally, when you set a tiering preference for a thick LUN, the storage is only reserved for the LUN even if the thick LUN appears to be fully provisioned. Because these reservations are not made on a per tier level, by the time the data is actually allocated to the thick LUN as the result of a write, the storage tier requested may no longer be available. If you enable FAST, this problem will be resolved during subsequent relocations.

Please advise.


February 25th, 2013 09:00

I have submitted a question to the group internally as I continue to find contradictions. For instance, I also know that we recommend using the SOAP tool to preallocate slices (for Exchange 2010) only for pre-Inyo, as stated in the:

"Microsoft Exchange 2010 Storage Best Practices and Design Guidelines for EMC Storage

[...]

We recommend that you use this tool when deploying Exchange in storage pools only on CLARiiON CX4 and VNX systems with FLARE release prior to FLARE 32

[...]

I'll keep everyone posted on what I hear back.

February 25th, 2013 09:00

Hmmm... yes, I agree there is a contradiction. I have an internal PDF (and have verbally been told more than once) that states the following:

[...]

Another enhancement to the VNX OE for File v7.1 and Block R5.32 code which needs to be mentioned is the pre-allocation of slices for Thick LUNs.

So first we start with a pool where no space has been allocated. As soon as we create a new Thick LUN, 1GB slices in the pool will be pre-allocated equal to the size of the LUN created. When data is written to the LUN it will be distributed all over the LUN as needed.

[...]

Let's see if someone else chimes in (I will also do some research).


March 1st, 2013 02:00

I will look forward to your findings.

Thanks

Sushil


March 1st, 2013 13:00

That statement in the Release Notes is not correct. Thick LUNs, when created in FLARE 32, will pre-allocate all the slices during the initialization stage. Thin LUNs will remain the same. I've opened a case with engineering to fix this wording in the next release.

glen
