March 13th, 2015 22:00

FAST VP relocation and pool size

Is it better, in terms of FAST VP relocation speed, to have multiple pools of the same size with the same 3-tier architecture rather than one massive pool? I've read elsewhere that people see better relocation performance using multiple pools rather than one big pool.

I know it will depend on how much data there is to move, but in general, is it better to split the three tiers over multiple pools anyway?

We're just looking at general overall performance, e.g. the pools will hold a mix of potentially high-IO and low-demand VMs.

March 14th, 2015 20:00

Typically, more pools will yield better relocation times because you have more relocations running at the same time. If you only have one pool, you only get one relocation thread.
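As a rough back-of-the-envelope sketch (both the amount of data to move and the per-pool relocation rate below are made-up placeholders, not numbers from any particular array), this is how the parallelism plays out if each pool runs its own concurrent relocation session:

```python
# Rough estimate of the relocation window when each pool relocates concurrently.
# Both figures below are placeholders; substitute numbers from your own array.
DATA_TO_MOVE_GB = 3000            # total slices queued for relocation (assumed)
RATE_GB_PER_HOUR_PER_POOL = 250   # per-pool relocation throughput (assumed)

def relocation_hours(total_gb, pools, rate_per_pool):
    """Hours to drain the relocation queue if it is spread evenly across pools."""
    per_pool_gb = total_gb / pools
    return per_pool_gb / rate_per_pool

for pools in (1, 2, 3):
    hours = relocation_hours(DATA_TO_MOVE_GB, pools, RATE_GB_PER_HOUR_PER_POOL)
    print(f"{pools} pool(s): ~{hours:.1f} hours")
```

The gain only shows up if the data queued for relocation really is spread across the pools and the backend drives aren't already the bottleneck.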

Mike

March 15th, 2015 14:00

You may get faster relocation speeds with multiple pools, but personally I wouldn't be using that as a major factor in a design.

March 19th, 2015 05:00

So what would be the recommended pool layout for what is basically a general performance skew (some DBs, some applications, some unknown), with the intention of having FAST VP relocations complete within 8 hours?

If I just have one massive pool, surely this means it's going to be moving data around all the time and relocation will never 'complete' within that 12-hour cycle?

I guess what I'm asking is - what is the downside to just one big pool?

March 19th, 2015 06:00

Thanks for the reply.

As you can tell, I'm going for optimised simplicity

I was actually thinking of going with 5/20/75, but if we take all the disks available (for example), even if we split them evenly across, say, 3 pools, it ends up only being something like 1/40/59. In other words, we don't have enough flash to do 5/20/75 in every pool.
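For illustration only (the drive counts and capacities below are made up, not our real configuration), this is roughly the arithmetic behind a skinny ratio like that: the flash percentage is fixed by how much raw flash the array has, so splitting the same drives evenly across three pools just reproduces the same ratio in each, smaller, pool:

```python
# Hypothetical drive inventory; replace with your own counts and capacities.
tiers_gb = {
    "EFD":     5 * 200,     # 5 x 200 GB flash
    "SAS":    40 * 600,     # 40 x 600 GB 10k SAS
    "NL-SAS": 16 * 2000,    # 16 x 2 TB NL-SAS
}

total = sum(tiers_gb.values())
mix = {tier: round(100 * gb / total) for tier, gb in tiers_gb.items()}
print("Whole array:", mix)   # roughly a 2/42/56-style split with these numbers

# Splitting the same drives evenly across 3 pools keeps the ratio identical;
# it just gives each pool a third of the capacity in every tier.
per_pool = {tier: round(gb / 3) for tier, gb in tiers_gb.items()}
print("Per pool (of 3):", per_pool)
```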

We do have FAST Cache as well.

March 19th, 2015 06:00

A couple of things I've noticed, first as a customer running FAST on CX4/VNX and then working for EMC:

1.  Keep random small-block IO separated from sequential IO. If you are going to put DBs in this pool, make sure the logs, at a minimum, are in another pool. You could also use traditional RAID groups with striped metaLUNs for logs, but pools will yield good performance. Just don't fill the pool to the point where IO that is sequential in nature becomes random because of how many of those workloads you put in it.

2.  Without knowing your specific workload, the standard 5/20/75 split for a pool is a good starting point. I always like to err on the side of caution, so I typically look at 7-8% flash, up to 30% FC/SAS, and the rest NL/SATA. Your pools should follow the best practice of 4+1 R5 for EFD, 4+1 R5 for FC/SAS, and 6+2 R6 for NL/SATA.
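As a minimal sketch of the usable-capacity side of that (the drive sizes here are assumptions; the parity overhead follows directly from the 4+1 R5 and 6+2 R6 layouts above):

```python
# Usable capacity per private RAID group for the layouts mentioned above.
# Drive capacities are assumptions; the parity fractions follow from the RAID layout.
raid_groups = {
    "EFD 4+1 R5":    {"drives": 5, "data_drives": 4, "drive_gb": 200},
    "SAS 4+1 R5":    {"drives": 5, "data_drives": 4, "drive_gb": 600},
    "NL-SAS 6+2 R6": {"drives": 8, "data_drives": 6, "drive_gb": 2000},
}

for name, rg in raid_groups.items():
    usable = rg["data_drives"] * rg["drive_gb"]
    overhead = 1 - rg["data_drives"] / rg["drives"]
    print(f"{name}: ~{usable} GB usable per group ({overhead:.0%} lost to parity)")
```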

As far as relocation goes, I'd run it continuously. If your array is not taxed, this is the best option in my opinion. Every hour a heat map is taken and the slices are stack-ranked: hotter slices are promoted, colder ones are demoted. If I do that only once a day, I'm creating a situation where not only am I being reactive, but my data is aging, so it becomes less accurate over time. I encourage folks, if they can, to relocate continuously. Not only do you get the benefit across tiers, but also within the same tier.
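To make the hourly stack-ranking concrete, here is a toy model of a single relocation pass. The slice temperatures and the flash-tier capacity are invented, and the real algorithm on the array is more involved, but the promote/demote decision works along these lines:

```python
import random

FLASH_SLICES = 4   # how many slices the flash tier can hold (assumed)

# One "heat map" sample: temperature per slice (higher = busier). Invented values.
random.seed(1)
temps = {f"slice-{i}": random.randint(0, 100) for i in range(10)}

# Stack-rank the slices by temperature.
ranked = sorted(temps, key=temps.get, reverse=True)

promote = ranked[:FLASH_SLICES]   # hottest slices belong in flash
demote = ranked[FLASH_SLICES:]    # everything else stays in or drops to lower tiers

print("promote to flash:", promote)
print("keep/demote lower:", demote)
```

Run continuously, each pass only has to move the handful of slices whose rank changed since the previous hour; run once a day, the same pass has a whole day of drift to catch up on.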

You should ask your EMC SE to help you size your pools using some internal tools. If not, let me know, and if I have some spare cycles, I'll give you a hand.

Cypherstrength

March 19th, 2015 07:00

JohnWick,

FAST Cache is a definite must before putting EFD in a pool. In VNX2 these are different drive types and can't be mixed; on VNX1 you can mix them. FAST Cache should be in place before you use flash in a pool, as it makes the overall array more efficient.

What VNX is this? The 1/40/59 isn't bad, but what I would say is: create either one pool, or if you want three pools, create one high-performance pool with EFD and the other two without it, i.e. standard 2-tier pools with FAST Cache enabled.
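To illustrate why concentrating the EFD helps (the capacities here are hypothetical), compare the flash share when the same flash is spread across three pools versus dedicated to one of them:

```python
# Hypothetical capacities: 1000 GB of flash against 57 TB of total pool capacity.
FLASH_GB = 1000
TOTAL_GB = 57000

# Spread evenly: every pool gets under 2% flash -- too thin to hold much of a working set.
spread = FLASH_GB / TOTAL_GB
print(f"flash share per pool when spread across 3 pools: {spread:.1%}")

# Concentrated: one pool with a third of the capacity gets all the flash (~5%);
# the other two pools run as 2-tier pools and lean on FAST Cache instead.
concentrated = FLASH_GB / (TOTAL_GB / 3)
print(f"flash share in the single EFD pool: {concentrated:.1%}")
```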

March 19th, 2015 15:00

johnwick wrote:

So what would be the recommended pool layout for what is basically a general performance skew (some DBs, some applications, some unknown), with the intention of having FAST VP relocations complete within 8 hours?

If I just have one massive pool, surely this means it's going to be moving data around all the time and relocation will never 'complete' within that 12-hour cycle?

I guess what I'm asking is - what is the downside to just one big pool?

After initial bursts, I don't think you'll see massive amounts of data relocating continuously.

I find it tends to stabilize and the relocation amounts are fairly modest. The extreme performance tier is always full, though!

March 19th, 2015 16:00

We make sure that the flash tier is being used

March 20th, 2015 05:00

Brett, I agree with you, but when the SSD tier is not sized appropriately, especially if he wants to break it up, constant relocations will happen because there's not enough room to hold the working set. That's why I would run it continuously. Slices may fluctuate, causing slice 1 to be promoted over slice 2 in hour 1 and then the inverse in hour 2 because it's slightly hotter. We would really need to understand his skew, but with that little bit of flash, I see constant relocation in his future.
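A quick sketch of the working-set point (all of the numbers are assumptions): if the hot data is larger than the flash tier, some of it has to churn in and out on every pass, which is exactly the constant relocation described above:

```python
# Assumed numbers: how much of the working set fits in flash?
WORKING_SET_GB = 2500   # hot data the hosts actually touch (assumed)
FLASH_TIER_GB = 800     # usable flash capacity in the pool (assumed)

overflow = max(0, WORKING_SET_GB - FLASH_TIER_GB)
if overflow:
    print(f"{overflow} GB of hot data cannot stay in flash; "
          "expect it to keep cycling between tiers on every relocation pass.")
else:
    print("Working set fits in flash; relocations should settle down "
          "after the initial burst.")
```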
