
May 21st, 2013 23:00

MirrorView/S performance impact

Hi!


I was wondering if anyone wanted to take a shot at guesstimating the performance impact of a MirrorView/S setup.

The setup consists of two identical VNX5300 systems, with similar disks and storage pool layouts, running R32 software. They are separated by a dark fibre (FC) connection of approximately 16 km (10 miles). I know about long-distance buffer-credit tuning on the FC switch ports, and this has helped quite a bit, but we are still seeing a 50-60% performance hit (for writes) when the mirrors are enabled. When running the same I/O tests on the LUNs with the mirrors fractured, the results are back to normal.
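As a side note on the buffer-credit tuning mentioned above, the number of buffer-to-buffer (BB) credits needed to keep a long FC link full can be roughly estimated. The sketch below uses a common rule of thumb (one full-size ~2 KB frame spans about 2 km of fibre at 1 Gbps); the figures are illustrative, not vendor-specified values for any particular switch:

```python
import math

def bb_credits_needed(distance_km: float, speed_gbps: float) -> int:
    """Rough BB-credit estimate for a long-distance FC link.

    Rule of thumb: a full-size frame occupies ~2 km of fibre at
    1 Gbps, so credits scale with both distance and link speed:
        credits ~= speed_gbps / 2 * distance_km
    """
    return math.ceil(speed_gbps / 2 * distance_km)

# 16 km dark fibre at 8 Gbps, as in the setup described above
print(bb_credits_needed(16, 8))  # -> 64 credits
```

If the switch ports are granted fewer credits than this, the link stalls waiting for credit returns long before bandwidth runs out, which is exactly the symptom long-distance tuning addresses.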


So the question is: is this normal / something others are seeing? If not, what would be a good starting point for further investigation and tuning?

Thanks in advance!


May 22nd, 2013 02:00

Even if your buffer tuning has been set up for maximum performance, each write I/O still needs to travel from your source array to the target array, and the acknowledgement still needs to travel back to the source array. You might see that the link between the two sites looks fine (nowhere near 100% utilization), yet response times measured on the host still go up.

Take a look at the "Ask the Expert" session on https://community.emc.com/message/691991#691991.

The SCSI write protocol requires traversing the distance four times per I/O (you can speed that up by eliminating one of the four with the "fast write" feature, which might cost you another switch licence). The 200 km example Jon Klaus gave is more extreme, but the point is the same, I hope. My experience is that every extra SAN component adds latency and therefore response time. Add a second storage array to the equation and its response time is added as well.

Suppose a write to an array's cache costs 1 ms. On top of that you add the latency of the link, which in your case won't be much since it's only 16 km. Your first array delivers its 1 ms response time, but you still have to wait for the second array to send its ACK as well, adding another 1 ms. Caching will speed things up, but I'd say a 50 to 60% performance hit was to be expected. If DR requires synchronous mirroring, you have to accept this. If your DR plan is not that tight, you might want to consider MirrorView asynchronous instead: MV/A will send an ACK to your server without waiting for the remote array, but your remote site won't be 100% in sync at all times.

Does this answer your question?


May 22nd, 2013 02:00

Yes it does, thanks for your input. The link itself is far from saturated, maybe 2 Gb/s out of the 8 Gb/s available. I'm not sure fast write is available on the lower-end switches; I thought it was only for directors? MV/A will be considered as well. Thanks!


May 22nd, 2013 05:00

Glad to help.
