4 Operator • 2K Posts

October 30th, 2012 04:00

No, there isn't any way to configure FAST Cache as "read-only". In fact, the CLI command cache -fast only provides:

-mode rw

I believe, though, that the ability to do so existed at one point, when the feature was first introduced with the CX4 (not the VNX), but I could be wrong. Anyway, if it was available, it would have been in a very early release and made only a brief appearance.

Also, you cannot control how FAST VP measures tiering activity (e.g., measuring based on read access only) in the manner you requested. In fact, no control that granular is available to the user (beyond the usual tiering policies, redistribution rate, and scheduling).

4 Operator • 4.5K Posts

November 13th, 2012 15:00

With FAST Cache you cannot control the allocation of cache to either reads or writes; it will use what it needs. Just because write caching is enabled does not mean that some of the cache is being reserved only for writes.

Remember to mark your question answered when you have the correct answer and to award points to the person providing the correct answer.

glen

115 Posts

November 16th, 2012 13:00

I ask because I didn't get better performance on EFD for write operations.

Most of the time this happens because write operations that were previously supported by x number of physical spindles end up on very few EFDs. I would suggest opening a case with EMC to understand why it is not performing as you expected.

Write-intensive operations are better off on a traditional RAID group kind of layout (at least that's what I think).

4 Operator • 4.5K Posts

November 16th, 2012 14:00

Please see the FAST Cache white paper on PowerLink. EFDs, by their design, do not do as well on writes as they do on reads. There are certain types of write operations where traditional (spinning) disks can outperform EFDs. Large-block, sequential writes are an example of an I/O type better suited to spinning disks.

See page 18

https://support.emc.com/docu32136_White-Paper:-VNX-FAST-Cache-—-A-Detailed-Review.pdf

glen

4 Operator • 5.7K Posts

November 19th, 2012 00:00

So how large is your working set? (the amount of hot data)

And how many GBs does your FAST Cache offer? It could easily be that you don't have enough FAST Cache, but you'd better check with EMC as Sridhar246 suggested.
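That working-set-versus-capacity comparison can be sketched numerically. The figures below (a 400 GB hot working set, 100 GB of usable FAST Cache) are purely illustrative assumptions, not measurements from any array:

```python
# Illustrative sizing check (all numbers are assumptions): does the
# FAST Cache capacity cover the hot working set?

def fast_cache_coverage(working_set_gb, cache_gb):
    """Fraction of the hot working set that fits in FAST Cache."""
    return min(cache_gb / working_set_gb, 1.0)

# Example: a 400 GB hot working set against 100 GB of usable FAST Cache
# (e.g., 2 x 100 GB EFDs in the RAID 1 pairing FAST Cache uses).
coverage = fast_cache_coverage(working_set_gb=400, cache_gb=100)
print(f"FAST Cache covers {coverage:.0%} of the working set")   # 25%
```

If only a quarter of the hot data fits, the rest of the I/O still lands on the backing spindles, which would explain a disappointing result.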

474 Posts

November 19th, 2012 10:00

Ariesh,

Do you have an idea what the write workload looks like?  Is it SQL Logs or some other similar small block sequential write?  Is it large block sequential write like backups?  Or is it expected to be small block random write?

FASTCache will accelerate small-block random reads and writes, assuming that there is some locality of reference, i.e., certain blocks are being accessed repeatedly. After several accesses to the same 64KB block on disk, FASTCache will cache that block on the SSDs, and future reads/writes to that block will be serviced by the SSD. There are a couple of things to know about FASTCache with respect to writes and in general.

1.) Writes still go to the SP cache first, so if you are hitting the performance limits of the SP or its cache, SSDs aren't going to help. When the SP cache flushes a block to disk, it will flush to SSD if that block was promoted to FASTCache; otherwise the block flushes to its permanent home in the RG/pool LUN. Flushing to FASTCache allows the SP cache to flush faster, helping to improve overall performance.

2.) Large-block I/Os (i.e., >128KB) are ignored by FASTCache, so you will see NO benefit for them.

3.) Sequential I/O typically does not experience repeated accesses and thus usually will not benefit from FASTCache. Further, true sequential I/O can be serviced by disk quite efficiently, so acceleration through cache is not as imperative.

4.) Small-block sequential I/O (i.e., sequential I/O with an I/O size < 21KB) will actually negatively impact FASTCache operation. Small-block sequential I/O causes every block of the sequential operation to be promoted to FASTCache, but those blocks will not be needed later, so it essentially squeezes other data out of FASTCache, hurting overall performance.

It is recommended to disable FASTCache on LUNs that are predominantly small-block sequential (e.g., SQL logs). If the LUNs are in a pool, then it's recommended to have a separate pool for the log-type LUNs and disable FASTCache for that pool.
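The promotion and pollution behavior described in points 1-4 can be sketched as a toy model. The 3-access promotion threshold, the 4-extent capacity, and the extent names below are assumptions for illustration only, not the array's actual internals:

```python
# Toy model of FAST Cache promotion (illustrative assumptions only: the
# real array tracks 64 KB extents; the 3-access threshold and 4-extent
# capacity here are made up for the sketch).
from collections import OrderedDict

PROMOTE_AFTER = 3        # accesses before an extent is promoted
CACHE_CAPACITY = 4       # extents the toy cache can hold

hits = {}                # access counts per unpromoted extent
cache = OrderedDict()    # promoted extents, least recently used first

def access(extent):
    """Count an access; promote after PROMOTE_AFTER hits, evicting LRU."""
    if extent in cache:
        cache.move_to_end(extent)
        return "hit"
    hits[extent] = hits.get(extent, 0) + 1
    if hits[extent] >= PROMOTE_AFTER:
        if len(cache) >= CACHE_CAPACITY:
            cache.popitem(last=False)    # evict least recently used
        cache[extent] = True
        return "promoted"
    return "miss"

# A hot random extent is promoted after repeated access...
for _ in range(3):
    access("hot-block")
print(access("hot-block"))               # "hit"

# ...but a small-block sequential pass promotes every extent it touches
# (16 x 4 KB I/Os per 64 KB extent) and squeezes the hot data out.
for extent in ("seq-0", "seq-1", "seq-2", "seq-3"):
    for _ in range(16):
        access(extent)
print("hot-block" in cache)              # False: evicted by one-shot data
```

A 4 KB sequential stream touches each 64 KB extent sixteen times, so every extent clears any reasonable promotion threshold even though it will never be re-read, which is exactly why point 4 pollutes the cache.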

Another possibility is that prior to enabling FASTCache, the write workload was being serviced by many disks (>100), while FASTCache may be configured with only 2 or 4 SSDs. Assuming a best-practice figure of 2500-3500 IOPS per SSD, you'd need at least 6-10 SSDs to match that performance, assuming no other workload on the disks. This may vary depending on the RAID type, etc., but I just wanted to mention it as a possibility.
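To make that arithmetic concrete, here is a rough sizing sketch. The ~180 IOPS per 15k rpm spindle figure is my own planning assumption; the 2500-3500 IOPS per SSD range comes from the paragraph above, and RAID write penalty is ignored:

```python
import math

# Rough rule-of-thumb sizing (assumptions: ~180 IOPS per 15k rpm spindle
# is a common planning figure; 2500-3500 IOPS per SSD is from the post
# above; RAID write penalty and cache effects are ignored).
SPINDLE_IOPS = 180
SSD_IOPS_LOW, SSD_IOPS_HIGH = 2500, 3500

def ssds_needed(spindles, spindle_iops=SPINDLE_IOPS):
    """(min, max) SSD count needed to match a spindle group's raw IOPS."""
    total = spindles * spindle_iops
    return (math.ceil(total / SSD_IOPS_HIGH), math.ceil(total / SSD_IOPS_LOW))

low, high = ssds_needed(100)     # workload previously spread over 100 disks
print(f"{low}-{high} SSDs to match ~{100 * SPINDLE_IOPS} IOPS")
```

With these assumptions, 100 spindles works out to roughly 6-8 SSDs, in the same ballpark as the 6-10 figure quoted above; a 2- or 4-SSD FAST Cache clearly falls short.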

Most EMC sales teams have tools they can use to help you understand what your array is actually doing.  You may want to contact your SE and ask him/her to do an analysis of your array.

4 Operator • 2K Posts

November 21st, 2012 14:00

Richard Anderson wrote:

Ariesh,

4.) Small-block sequential I/O (i.e., sequential I/O with an I/O size < 21KB) will actually negatively impact FASTCache operation. Small-block sequential I/O causes every block of the sequential operation to be promoted to FASTCache, but those blocks will not be needed later, so it essentially squeezes other data out of FASTCache, hurting overall performance.

It is recommended to disable FASTCache on LUNs that are predominantly small-block sequential (e.g., SQL logs). If the LUNs are in a pool, then it's recommended to have a separate pool for the log-type LUNs and disable FASTCache for that pool.

Richard,

Awesome summary!  I know that this is the CLARiiON forum, but I wanted to mention the VNX OE for Block v32 (Inyo) filter, which looks for these I/O profiles that aren't good candidates and will not promote such data to FAST Cache, even though FAST Cache is enabled at the pool level/FLARE LUN level.

However, it is still best to manually disable it when you know a LUN isn't a good candidate, as you suggest. Also, having separate pools is still best practice. It is worth noting, though, that when separate pools aren't possible, this filter will be working in the background.

[...]

Small block sequential and high frequency access filter

In VNX OE Release 32, there have been improvements in dealing with small block sequential and short-lived bursts of activity with high spatial locality workloads.  Previously, these workloads, with a low potential for re-hit, would trigger the promotion of pages into FAST Cache, resulting in very little benefit.  With this enhancement, the FAST Cache will more intelligently identify longer-term access patterns of data that will be most beneficial for the system to use in FAST Cache, and avoid those with little to no benefit.

[...]
