
October 3rd, 2011 12:00

FAST VP Capacity Management

Hello,

Looking for input from others who have worked out a system for capacity management when using FAST VP. Specifically, what general rules are you using for oversubscription when extents get moved up/down to other tiers?

For example, say I have 3 tiers, EFD/FC/SATA. Say I allocate all tdevs out of my FC pool and let FAST VP work its magic and move extents up/down to EFD/SATA. If I keep the default 100% maximum subscription, I will eventually hit that level of subscription even though I have space available in the FC pool (extents that moved up/down). I could have a significant amount of capacity moved to SATA, which means the FC pool has significant unused capacity (I'll ignore the EFD capacity since that is only used for performance).

What is the best practice at this point?  Do I start cranking up oversubscription (maximum subscription > 100%)?  By how much?  I know the answer is a variation of "it depends" but in general what is best practice?  How much do I keep available in the FC pool for performance in case extents get busy and start moving up from SATA?

Another way to ask the same question: how do I use the SATA FAST VP tier for capacity, and how do I combine it with FC for performance? It seems to require oversubscription, since capacity moved down to SATA frees FC capacity, but I need to oversubscribe to use it. I don't want to oversubscribe the array, but I do want to use the unused FC capacity once extents are moved to SATA.
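To make the numbers concrete, here's a rough sketch of what I mean (made-up pool sizes, Python just for the arithmetic): subscription stays charged against the binding pool no matter where FAST VP physically places the extents, while allocation follows the data.

# Sketch of the subscription vs. allocation arithmetic (made-up sizes).
# Subscription counts against the pool a TDEV is *bound* to, regardless of
# where FAST VP has physically moved the extents; allocation follows the data.

fc_usable_gb = 100_000          # usable capacity of the FC pool
tdev_subscribed_gb = 100_000    # TDEV capacity bound to FC (100% subscribed)

# Say FAST VP has demoted 40% of the (fully preallocated) extents to SATA
# and promoted 5% to EFD, leaving 55% in FC:
fc_allocated_gb = tdev_subscribed_gb * 0.55

print(f"FC subscription: {100 * tdev_subscribed_gb / fc_usable_gb:.0f}%")  # 100%
print(f"FC allocation:   {100 * fc_allocated_gb / fc_usable_gb:.0f}%")     # 55%

So at the default 100% maximum subscription I'm blocked from binding anything new to FC, even though 45% of the pool is physically empty.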

I'm on 5875, all tdevs are fully preallocated, and I haven't been able to find anything published giving this kind of guidance.

Thanks,

Keith


October 25th, 2011 15:00

Hi Keith

I have exactly the same sort of questions, having just started my first FAST VP implementation! Has anyone replied to you offline?

Cheers

Jenny 


May 1st, 2012 13:00

Sorry to interrupt but I have the same question.

Oversimplified Scenario:

I have a TDEV bound to the FC pool with a 10 EFD / 100 FC / 100 SATA FAST VP policy.

If the TDEV's extents mostly live in the SATA pool, say over 50%, could you rebind and migrate it to SATA and then apply a different FAST policy that promotes the hot difference back to FC and EFD? We don't want to oversubscribe our VMAX either.
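As a sketch of the rule I have in mind (hypothetical extent counts and pool name, just to make the threshold concrete):

# Sketch of the rebind rule described above (hypothetical numbers).
# If most of a TDEV's allocated extents already live in SATA, rebind the
# device to the SATA pool and let a different FAST VP policy promote the
# hot remainder back up.

def rebind_target(extents_by_tier: dict[str, int], threshold: float = 0.5) -> str | None:
    """Return the SATA pool name if its share of allocated extents exceeds
    the threshold, else None (leave the device bound where it is)."""
    total = sum(extents_by_tier.values())
    if total and extents_by_tier.get("SATA", 0) / total > threshold:
        return "r6_sata"  # hypothetical pool name
    return None

# Example: 60% of this TDEV's extents have been demoted to SATA.
print(rebind_target({"EFD": 500, "FC": 3_500, "SATA": 6_000}))  # -> r6_sata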


May 9th, 2012 15:00

So, YMMV, but we've divided our platforms (SQL, VMware, Oracle, AIX, etc.) into separate pools. Each platform has EFD/FC/SATA, but in different amounts depending on usage. This lets us get a lot more granular with FAST policies, but more importantly (to me) it provides better capacity management and monitoring.

Generally, we bind all production to FC and all non-production to SATA. We tune the PRC appropriately: non-prod/SATA gets a small PRC, because I'm a little less concerned about test environments running out of extents before FAST can move them around, while prod/FC gets a higher PRC (for the opposite and obvious reason). Then we let FAST move things around as needed. We manage to the subscription level of all three pools combined, though, using the best capacity tool around (Excel). We pull the subscription and allocation levels monthly and chart them out.
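To put rough numbers on the PRC tuning (illustrative values, and relying on the behavior that FAST won't move extents into a pool once free space would fall below the reserve):

# Sketch of the PRC headroom math (illustrative values, not our real pools).
# FAST VP stops moving extents *into* a pool once free space would drop
# below the Pool Reserved Capacity, so PRC caps how full FAST itself can
# drive a pool while leaving room for new host-write allocations.

def fast_fill_ceiling_pct(prc_pct: float) -> float:
    """Max Full% that FAST VP movements can reach for a given PRC."""
    return 100.0 - prc_pct

for pool, prc in [("prod FC", 10.0), ("non-prod SATA", 1.0)]:
    print(f"{pool}: FAST fills to {fast_fill_ceiling_pct(prc):.0f}%, "
          f"{prc:.0f}% held back for host allocations")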

Once we had this reporting in place, we waited a couple of months, monitoring the data to decide where oversubscription was possible. In the case of our MSSQL environment, we've actually changed our provisioning process to give out individual LUNs for every database (and sub-LUNs for temp and log). Previously this would have been too hard, as we have hundreds of databases on the cluster and couldn't manage all the different LUN sizes, so we use standard sizes and just leverage oversubscription... currently our MSSQL pools are 230% subscribed. We also have StorReclaim fully automated on MSSQL, which makes this a little more comfortable, as I know free space is going back into the pool.
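The monthly roll-up is really just this (hypothetical figures here, not our real numbers):

# Sketch of the Excel roll-up (hypothetical figures).
# We track subscription and allocation per pool *and* across all three
# tiers of a platform combined, since FAST VP spreads allocations across them.

pools = {
    # pool        (usable_gb, subscribed_gb, allocated_gb)
    "mssql_efd":  (10_000,        0,          8_000),
    "mssql_fc":   (50_000,  138_000,         40_000),  # 276% subscribed on its own
    "mssql_sata": (60_000,        0,         28_000),
}

usable     = sum(u for u, _, _ in pools.values())
subscribed = sum(s for _, s, _ in pools.values())
allocated  = sum(a for _, _, a in pools.values())

print(f"combined subscription: {100 * subscribed / usable:.0f}%")  # 115%
print(f"combined allocation:   {100 * allocated / usable:.0f}%")   # 63%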

Here is the SQL pool; note the subscription line (dotted red) over the allocation line (dotted black):

Oracle, on the other hand, we don't take over 100% subscription, but we use the reporting to track the subscription level of all three pools combined. So FC might be oversubscribed while SATA is under. Again, PRC is there to help us out. We're working on getting ASRU fully automated on Oracle; when we do, we'll start oversubscribing.

Here is the Oracle pool


May 11th, 2012 02:00

Hi Keith,

As MD said, you have PRC to stop FAST moving extents into a critical pool once the reserve is hit (it's enabled by default), and you can increase the max subscription % for less critical pools.

And MOST important of all, keep an eye on pool capacity utilization.

regards,

saurabh

September 1st, 2015 05:00


I'm using Keith's OP to reiterate his original question... are there guidelines or calcs to work out the max oversubscription into FC? For example, we only subscribe into FC, which is now over 400% oversubscribed and 85% full (10% PRC).

Mostly 100

We have significant capacity in our SATA pool, but I'm still unclear how FAST VP behaves as the FC pool becomes full... does it simply adjust to demote extents from FC into SATA?

In other words, is the free capacity available in the lower SATA pool sufficient to permit a high allocation rate in FC?

Here are the figures at present... note the CKD (mainframe) pool is not under FAST VP and is fully allocated in FC.

                               S Y M M E T R I X   T H I N   P O O L S                              
------------------------------------------------------------------------------------------------------
Pool         Flags  Dev               Total     Usable       Free       Used Full Subs Comp     Shared
Name         PTECSL Config              GBs        GBs        GBs        GBs  (%)  (%)  (%)        GBs
------------ ------ ------------ ---------- ---------- ---------- ---------- ---- ---- ---- ----------
r5_fc_CKD    TF9DEI RAID-5(3+1)     42389.4    42389.4     6434.1    35955.4   84   85    0        0.0
r5_efd       TEFDEI RAID-5(3+1)     17609.5    17609.5      179.2    17430.3   98    0    0        0.0
r6_sata      TSFDEI RAID-6(6+2)    547050.5   547050.5   332095.0   214959.1   39    0    0        0.0
r1_fc        TFFDEI 2-Way Mir      134452.2   134452.2    19230.2   115221.7   85  441    0        0.0

Total                            ---------- ---------- ---------- ---------- ---- ---- ---- ----------
GBs                                741501.6   741501.6   357938.4   383566.5   52   85    0        0.0
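Sanity-checking the combined totals from those rows, plus the FC headroom left above our 10% PRC (Python just for the arithmetic):

# Recompute the combined picture from the pool listing above.
# (usable_gb, used_gb, subs_pct) copied straight from the table.
pools = {
    "r5_fc_CKD": (42389.4,   35955.4,  85),
    "r5_efd":    (17609.5,   17430.3,   0),
    "r6_sata":   (547050.5, 214959.1,   0),
    "r1_fc":     (134452.2, 115221.7, 441),
}

usable     = sum(u for u, _, _ in pools.values())
used       = sum(x for _, x, _ in pools.values())
subscribed = sum(u * s / 100 for u, _, s in pools.values())

print(f"combined full: {100 * used / usable:.0f}%")        # 52%, matches the Total row
print(f"combined subs: {100 * subscribed / usable:.0f}%")  # 85%, matches the Total row

# Free space left in r1_fc before the 10% PRC halts FAST promotions into it:
fc_usable, fc_used = 134452.2, 115221.7
print(f"FC free above PRC: {fc_usable - fc_used - 0.10 * fc_usable:,.0f} GB")  # ~5,785 GB

So the array as a whole is only 85% subscribed and 52% full; the 441% on r1_fc is measured against that one pool's usable capacity, while SATA carries the demoted extents.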

