
how to verify VNX FAST Cache efficiency and "forecast" FAST VP usability on a VNX Gen1 array?


A customer of ours has been using a VNX Gen1 (VNX5700) for some time. They have the FAST Suite license and 5x 200GB flash drives, which are used as FAST Cache (4x 200GB drives mirrored, one as spare). There is no significant performance issue, and the customer does not actively do performance monitoring and management.

Before engaging in a major re-configuration, we would like to get some insight into what is going on in the storage system: how efficiently FAST Cache supports the current workload, whether upgrading FAST Cache with additional flash drives might boost performance, etc. A first look at the system does not really provide a good overview: there is a "FAST Cache Dirty Pages %" performance counter (in Unisphere Analyzer and under "system properties") for each SP (currently around 15-25%), plus another SP-level counter, "FAST Cache MBs Flushed MB/s". There are also a few LUN-level cache hit/miss counters for the user LUNs.

Question: is there some other view that gives a quick, high-level understanding of how FAST Cache is doing (utilization, IOPS, etc.)? To our understanding, "cache dirty" (and flushing, too) only tell us about write caching, and we have no idea about read caching...

And another aspect: FAST VP is not yet in use on this system, but there is now a willingness to start using this technology as well. Adding a new tier to an existing storage pool (currently homogeneous, e.g. NL-SAS only, or 15k SAS only) is more or less irreversible: there is no way to remove those drives from the pool without destroying the whole pool. So we would like to proceed with extra care. We cannot simply drop a couple of flash drives into a pool, test, and then fall back if the benefits are not as great as expected.

We would also like some idea of whether FAST VP (e.g. adding a couple of flash drives to a SAS/15k/RAID5 pool) would help the whole pool or only specific LUNs. We understand that the benefit of FAST VP depends on the data and the access patterns, i.e. whether there is significant locality (spatial, temporal). We also understand that EMC's flash strategy/recommendation is to start with FAST Cache (a more universal approach, where a benefit is more likely and larger), and then consider FAST VP as a further option (whose benefit strongly depends on the data and access patterns).

Question: is there a tool (a view in Unisphere, or anything else) that tells us, for an existing storage environment (VNX1, existing RAID Group LUNs or currently homogeneous storage pool LUNs), whether certain pools/LUNs are good candidates for FAST VP automated storage tiering? Since FAST VP operates at the sub-LUN level (1GB slices), a "heat map" (or similar) at that sub-LUN level would help us gauge locality. But we do not have such direct info from Unisphere. Even with a Mitrend analysis, we "only" get a heat map at the disk-drive level, plus LUN-level info.
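Absent a sub-LUN heat map, a rough proxy for locality is workload skew at the LUN level: if a small share of the allocated capacity serves most of the pool's IOPS, FAST VP is a more promising fit. The calculation below is a sketch of that idea (Lorenz-curve style); the per-LUN figures would come from Analyzer or Mitrend output, and the names and numbers here are purely illustrative:

```python
def capacity_share_for_io_share(luns, io_share=0.8):
    """Return the fraction of total capacity needed to serve `io_share`
    of total IOPS, taking the busiest LUNs (highest IOPS/GB) first.

    `luns` is a list of (capacity_gb, iops) pairs. A low result
    (e.g. well under 0.2) suggests skew that FAST VP could exploit;
    a result near `io_share` means the workload is spread evenly.
    """
    total_cap = sum(cap for cap, _ in luns)
    total_io = sum(iops for _, iops in luns)
    target = io_share * total_io
    cap_used = io_seen = 0.0
    # Walk LUNs in decreasing IOPS density until the IO target is met.
    for cap, iops in sorted(luns, key=lambda x: x[1] / x[0], reverse=True):
        cap_used += cap
        io_seen += iops
        if io_seen >= target:
            break
    return cap_used / total_cap

# Illustrative pool: one hot 500 GB LUN plus lots of cold capacity.
pool = [(500, 4000), (2000, 600), (3000, 300), (4500, 100)]
print(round(capacity_share_for_io_share(pool), 2))  # 0.05
```

This only captures skew *between* LUNs, not within them, so it understates nothing and overstates nothing about intra-LUN locality; but a pool that already shows strong inter-LUN skew is usually a safer FAST VP candidate than one that does not.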




Re: how to verify VNX FAST Cache efficiency and "forecast" FAST VP usability on a VNX Gen1 array?

Hi Geza,

Yes, there are tools that can do that.

EMC and EMC partners have access to them. If you are an EMC partner, look in the new partner portal under Services and Support Delivery Options, and find Delivery Tools.

Another option is to submit data to Mitrend and get a report that way; again, EMC or a partner should be able to help with that.


