
Unsolved



May 5th, 2011 10:00

MirrorView/S and FAST Cache

Am I correct in understanding that implementing MirrorView/S for a primary LUN that is FAST Cache enabled will negate all of the write benefits coming from FAST Cache, since the synchronous latency of MirrorView/S would be in place?

727 Posts

May 5th, 2011 11:00

Not necessarily, it depends on what is happening on the DR storage system. If the write cache on the DR storage system is able to keep up with the incoming writes, it should not matter. Remember, FAST Cache works in conjunction with the storage system DRAM cache. Refer to the FAST Cache whitepaper on Powerlink to understand how FAST Cache works along with the DRAM cache.

10 Posts

May 5th, 2011 13:00

Thanks for the response, but as I understand it, here's how MV/S works:

1) The write hits the primary array and sits in DRAM cache.

2) The write is sent to the secondary array and lands in its DRAM cache. A write confirmation is sent back to the primary.

3) The primary responds to the host with the write confirmation.

So if the DR side can't keep up, it's true that it would slow performance. I'm really wondering whether the above scenario is accurate, such that a FAST Cache enabled LUN on the primary side would receive no FAST Cache benefit for writes.
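To make the concern concrete, here's a rough sketch of those three steps (plain Python, purely illustrative, with made-up latency numbers, nothing to do with actual FLARE/MirrorView code). The point is that the host acknowledgement always waits for the synchronous hop to the secondary, no matter how fast the primary-side cache is:

```python
import time

# Toy model of the MV/S write path described above -- hypothetical numbers,
# only meant to show where the host-visible latency comes from.

def land_in_dram(array, data):
    """Pretend a write lands in an array's DRAM write cache (fast)."""
    time.sleep(0.0005)            # ~0.5 ms, made-up local cache latency

def sync_copy_to_secondary(data, link_rtt=0.002):
    """Step 2: ship the write over the MV/S link, land it in the secondary's
    DRAM cache, and wait for the secondary's write confirmation."""
    land_in_dram("secondary", data)
    time.sleep(link_rtt)          # made-up link round-trip time

def host_write(data):
    """Steps 1-3: the host ack is sent only after the secondary confirms."""
    start = time.time()
    land_in_dram("primary", data)      # step 1
    sync_copy_to_secondary(data)       # step 2 (host is still waiting here)
    return time.time() - start         # step 3: ack back to the host

print(f"host-visible write latency: {host_write(b'x' * 8192) * 1000:.1f} ms")
```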

727 Posts

May 5th, 2011 15:00

Yes, MV/S works in the manner that you have described.

FAST Cache works with DRAM cache and, in a lot of scenarios, helps maintain DRAM cache behaviour. For chunks of data that have already been promoted into FAST Cache, the cache page can quickly destage from DRAM cache to the flash drives (part of FAST Cache) when needed, which helps avoid a forced-flush scenario. Since you now have a higher probability that incoming IO will find free DRAM cache pages, the incoming IO is faster (because of FAST Cache).

On the DR side, if not much is happening on the array, the competition for free DRAM cache pages will not be as high as on the primary side, so the MV/S copy operations from the primary array will be able to find free DRAM cache pages as they arrive.

In this case, FAST Cache speeds up IO operations on the primary side by keeping free DRAM cache pages available. Typically, the storage system at the DR site will not be as busy as the primary site and will therefore be able to acknowledge the incoming MV/S copy operations using free DRAM cache pages.
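If it helps, here's a very rough sketch (illustrative Python only, not how FLARE actually manages its cache pages, and the timing constants are invented) of the point above: a dirty DRAM page whose data has already been promoted into FAST Cache can be destaged to flash quickly, so free DRAM pages stay available and an incoming write avoids waiting on a forced flush to the rotating drives:

```python
from collections import deque

FLASH_DESTAGE_MS = 1    # made-up destage time to FAST Cache flash drives
HDD_FLUSH_MS = 10       # made-up forced-flush time to rotating drives

class ToyDramWriteCache:
    """Toy DRAM write cache: fixed number of pages; the oldest dirty page is
    destaged when an incoming write needs a page and none are free."""

    def __init__(self, total_pages):
        self.free_pages = total_pages
        self.dirty = deque()    # (chunk, already_promoted_to_fast_cache)

    def incoming_write(self, chunk, promoted):
        waited_ms = 0
        if self.free_pages == 0:
            _, old_promoted = self.dirty.popleft()
            # Promoted data destages to flash quickly; otherwise the new
            # write waits on a forced flush to the back-end disks.
            waited_ms = FLASH_DESTAGE_MS if old_promoted else HDD_FLUSH_MS
            self.free_pages += 1
        self.free_pages -= 1
        self.dirty.append((chunk, promoted))
        return waited_ms

cache = ToyDramWriteCache(total_pages=2)
for i in range(4):
    waited = cache.incoming_write(f"chunk{i}", promoted=True)
    print(f"write {i}: waited {waited} ms for a free DRAM cache page")
```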

91 Posts

May 9th, 2011 00:00

Hello

Take a look at emc266584.

There is a known issue when you use MV/S and FAST Cache together.

Alex

10 Posts

May 9th, 2011 05:00

Thanks for posting the article. I had seen that already; the issue it describes deals more with FAST Cache promoting the secondary image.

15 Posts

October 16th, 2013 02:00

Hi there, has anyone resolved this issue? I'm still seeing performance problems. I have an active/active MirrorView/S configuration with two VNX 5100s replicating to each other. Both VNXs have storage pools containing primary images and secondary images with FAST Cache enabled (it is not possible to have separate storage pools). emc266584 states that upgrading to 05.31.000.5.502 will correct this issue, but I'm on 05.32 and the problem still persists.

Any update on this?

Thanks

Sergio
