July 11th, 2011 17:00

Reconfiguring FAST Cache

I just received some more EFDs and I'd like to reconfigure the FAST Cache. Today it's configured with 4 x 200 GB on a CX4-480. I'll be adding another four to capitalize on the maximum of 800 GB of FAST Cache. My questions are:

1. Are there any prerequisites when deleting the existing FAST Cache configuration? If I'm not mistaken, the system cache will be temporarily disabled?

2. How much of the system cache will be used up with the maximum of 800 GB of FAST Cache on a CX4-480?

3. Are there any configuration changes that need to be made to the system cache, such as changing the read/write ratio? I think we employ an 80/20 (write/read) split.

Thanks!

103 Posts

July 25th, 2011 09:00

Thanks everyone for your great feedback!

After looking at Analyzer, I think I may need to rethink the whole FAST Cache expansion. As previously mentioned, we're currently at ~400 GB of FAST Cache. We had purchased several EFDs to expand the FAST Cache all the way to ~800 GB; the remaining EFDs were to be used in a storage pool. But after looking at several days of data, the dirty pages in FAST Cache don't go above 60% and seem to hold steady in the 40-60% range. Based on that data, it seems I do not need to expand my cache. What do you all think? Are there any metrics I should look at to confirm that what I have in FAST Cache is enough?

1.3K Posts

August 27th, 2011 18:00

Why does the system cache have to be stopped for a FAST Cache reconfiguration?

159 Posts

August 28th, 2011 01:00

The simple answer: a small chunk of system memory is used to store the metadata FAST Cache needs to operate. The system cache is brought offline so it can be reconfigured just a wee bit smaller than it was before. Don't worry, the rest of the system cache comes back.

1.3K Posts

August 28th, 2011 07:00

Thanks Tkjoffs,

JSP, can you clarify the connection between the memory overhead for FAST Cache and the number 1709 (system cache?) you came up with?

159 Posts

August 28th, 2011 18:00

I am not sure how JSP came up with that number, but in general I figure an average of around 1 GB of DRAM is allocated to the FAST Cache memory map per 1 TB of FAST Cache used. I am sure that is off by a chunk or two, but it has been fairly accurate on my arrays.
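That rule of thumb can be put into a quick back-of-envelope calculation. A minimal sketch, assuming the ~1 GB of DRAM per 1 TB of FAST Cache ratio above (my own estimate from this thread, not an official EMC figure):

```python
# Rough estimate of DRAM consumed by the FAST Cache memory map.
# ASSUMPTION: ~1 GB of DRAM per 1 TB of FAST Cache (rule of thumb
# from this thread, not an official figure).

DRAM_GB_PER_TB_FAST_CACHE = 1.0  # assumed ratio

def fast_cache_dram_overhead_gb(fast_cache_gb: float) -> float:
    """Estimated DRAM (in GB) used by the FAST Cache memory map."""
    return (fast_cache_gb / 1024.0) * DRAM_GB_PER_TB_FAST_CACHE

# Example: the 800 GB FAST Cache discussed for the CX4-480
print(round(fast_cache_dram_overhead_gb(800), 2))  # 0.78
```

So even at the 800 GB maximum, the memory-map overhead under this assumption is well under 1 GB, which is why only a small slice of system cache has to be given up.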

Ted

159 Posts

August 29th, 2011 02:00

RRR,

Ahem. FAST Cache only supports 2 TB, and that's on the CX4/NX-960. Are you confusing FAST VP with FAST Cache?

https://powerlink.emc.com/nsepn/webapps/btg548664833igtcuup4826/km/live1//en_US/Offering_Technical/White_Paper/h8046-clariion-celerra-unified-fast-cache-wp.pdf

Hope that helps!

5.7K Posts

August 29th, 2011 02:00

1 GB of RAM per 1 TB of FAST Cache? So if I have 200 TB of FAST Cache, the array should have 200 GB of RAM? I'm pretty sure I'd need a V-Max or DMX for that. CLARiiONs or VNXs don't have that amount of RAM!

Message was edited by: RRR. Of course I meant 200 GB of FAST Cache, not 200 TB, which would be great... My mistake ;)

159 Posts

August 29th, 2011 02:00

Um, as far as I am aware, and per the FAST Cache documentation, FAST Cache can only go to 2 TB (and that's on the CX4-960). Are you thinking of FAST VP? If so, it does not need the system cache as far as I am aware.

5.7K Posts

August 29th, 2011 03:00

Good morning to you too!

Seems as if I've only just woken up.

5.7K Posts

August 29th, 2011 03:00

Oops, I meant GB, not TB.

1.3K Posts

September 4th, 2011 08:00

So what was the scenario that first made you think FAST Cache was the way to go? And what are you planning to do with those EFDs now?

103 Posts

September 6th, 2011 11:00

When we first got our set of EFD drives, we were initially going to use them as a dedicated RAID group for a high-I/O app. But after doing some research, reading forums, etc., we decided to use them as FAST Cache, and we definitely saw the advantages of that configuration. We purchased more EFD drives (9 x 200 GB) to be able to capitalize on storage tiering, but we didn't want to use them all for FAST. So I thought, why not use the extra four for FAST Cache and take it to its max? But then I realized that we may not need to expand FAST Cache and that 200 GB may be enough. After looking at some data, it seems the % dirty pages on FAST Cache never goes above 70%. So with that information, it seems I may not need to add more EFDs, since that would be a waste.

As of right now, those 4 EFDs are just sitting idle. We may use them for another storage pool.

I still have the question that hasn't been answered: "What performance metrics can I look at to see if I need more FAST Cache?" Dirty pages?

1.3K Posts

September 6th, 2011 12:00

FAST Cache acts like an I/O shock absorber, allowing the VNX to handle sudden bursts of activity without a problem.

Look at both the queue length and the average busy queue length for the LUNs. If the two numbers are NOT close to each other, that means the data is coming in bursts; bursty I/O is really hard on the read cache. Data in the read cache stays there only for a couple of seconds, so if the gaps between reads are longer than the idle delay, the cache is empty when the next burst of reads comes in.

Combine these two and you may get some hints.
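The comparison described above can be sketched in a few lines. This is only an illustration of the idea: the function name and the 20% gap threshold are my own choices, not Analyzer terminology or official guidance.

```python
def looks_bursty(queue_length: float, avg_busy_queue_length: float,
                 tolerance: float = 0.20) -> bool:
    """Flag a LUN as bursty when the overall average queue length sits far
    below the queue length averaged only over busy intervals. A large gap
    means I/O arrives in bursts separated by idle periods.
    (Illustrative heuristic; the 20% threshold is an assumption.)"""
    if avg_busy_queue_length == 0:
        return False  # no busy samples, nothing to compare
    gap = abs(avg_busy_queue_length - queue_length) / avg_busy_queue_length
    return gap > tolerance

# Steady LUN: the two numbers track each other closely
print(looks_bursty(queue_length=4.0, avg_busy_queue_length=4.5))   # False
# Bursty LUN: busy-time queue depth far above the overall average
print(looks_bursty(queue_length=1.0, avg_busy_queue_length=12.0))  # True
```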

474 Posts

September 6th, 2011 14:00

Actually, the dirty pages number for FAST Cache does not indicate the same performance characteristics as dirty pages for the write cache.

FAST Cache dirty pages indicate how much of the data in FAST Cache has not been committed to disk (the same as for write cache), but FAST Cache doesn't necessarily ever commit data to disk if it doesn't need to. FAST Cache uses an LRU algorithm to destage data to disk in order to make room for new promotions. Basically, if FAST Cache dirty pages is extremely high (like 95%+), then that's more likely an indicator of a workload that is not friendly to FAST Cache. If you have workloads that are benefiting from FAST Cache, it's extremely likely that they will benefit even more from having MORE FAST Cache.
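Reading the dirty-pages percentage per the paragraph above could be sketched like this. The thresholds (60% from the earlier posts in this thread, 95% from this one) are illustrative interpretations from the discussion, not official EMC guidance:

```python
def interpret_fast_cache_dirty_pages(dirty_pct: float) -> str:
    """Very rough interpretation of the FAST Cache dirty-pages percentage.
    ASSUMPTION: thresholds taken from this thread's discussion, not from
    official documentation."""
    if dirty_pct >= 95:
        return "workload may not be FAST Cache friendly"
    if dirty_pct <= 60:
        return "current FAST Cache size looks sufficient"
    return "monitor; consider expansion if hit ratios justify it"

print(interpret_fast_cache_dirty_pages(50))  # current FAST Cache size looks sufficient
print(interpret_fast_cache_dirty_pages(97))  # workload may not be FAST Cache friendly
```

Note that, unlike write-cache dirty pages, a middling value here is not a problem by itself; it should be read alongside hit ratios and the burstiness of the workload.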

One thing to consider is looking at pools and/or LUNs that are seeing very low FAST Cache hit ratios, or that contain low-priority datasets, and disabling FAST Cache for those datasets. That could help dedicate the FAST Cache you do have to the workloads that both benefit from it and are important to you. Since FAST Cache is disabled at the pool level for pool LUNs, consider having a couple of pools, one for high-priority workloads and another for low-priority workloads, and enabling FAST Cache only on the high-priority pool.

As a side note... the latest guidance does agree that if you have a limited budget, focusing your SSD dollars on FAST Cache first provides better bang for your buck than FAST VP. Once you've maxed out FAST Cache, then add SSD to datasets that can benefit from tiering.

Richard J Anderson


20.4K Posts

September 6th, 2011 14:00

Richard Anderson wrote:

Basically, if FAST Cache dirty pages is extremely high (like 95%+), then that's more likely an indicator of a workload that is not friendly to FAST Cache.

Richard,

Can you please elaborate? That number could be high because I have a large dataset that gets promoted to FAST Cache? That's not a bad thing, is it?

Thanks
