Penalty for MV/S

January 9th, 2009 13:00
2 Posts

January 12th, 2009 21:00

Dee,
The penalty will vary depending mainly on the geographic location of the secondary image, network design, and contention, among many other factors. If it is on an array 100 miles away, then yes, there will be a significant impact. Remember, every write must first be committed at the MV/S target (secondary image) before it is acknowledged to the host. The delay can be calculated as the round-trip time from array A to array B, including the time to commit the write on array B.

Karim
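
As a rough illustration of the round-trip arithmetic Karim describes, here is a minimal sketch. The ~5 µs/km fiber propagation figure and the fixed remote-commit time are assumptions for illustration, not numbers from this thread:

```python
# Rough estimate of the extra write latency a synchronous mirror adds.
# Assumes light in fiber covers ~200 km per millisecond (~5 us/km) and a
# fixed service time to commit the write on the remote array (assumption).

def sync_mirror_penalty_ms(distance_km: float, remote_commit_ms: float = 0.5) -> float:
    """Added latency per host write: round trip to array B plus B's commit time."""
    one_way_ms = distance_km / 200.0
    return 2 * one_way_ms + remote_commit_ms

# 100 miles is roughly 160 km: ~1.6 ms of propagation alone per write,
# before the remote commit time is even counted.
print(f"{sync_mirror_penalty_ms(160.0):.2f} ms extra per write")
```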

47 Posts

January 14th, 2009 11:00

But the two CLARiiONs are sitting side by side and connected to the same enterprise-class director. Could it be a FLARE code bug?

4.5K Posts

January 15th, 2009 08:00

When you saw the 57% performance hit, was that possibly during the time that the Initial Sync was taking place?

Are both the source LUN and the mirror LUN the same RAID type, using the same number of disks with the same disk type and speed?

Are the write cache settings for both arrays the same?

Are you still seeing the same performance hit now?

glen

47 Posts

January 27th, 2009 20:00

No...the initial sync was complete and the state was "synchronized" or "consistent".

Yes...same RAID type, number of disks, disk type, and speed.

Yes...write cache is enabled and the same for both arrays.

Yes...we are still seeing the hit. We've gotten the same result from multiple tests.

Any other ideas?

4.5K Posts

January 28th, 2009 10:00

Just noticed that this is a CX500 - what kind of disks are you using, FC or ATA?

glen

4.5K Posts

January 28th, 2009 10:00

Some possible causes:

1. Not knowing a lot about director-class switches: is there an ISL somewhere between the two arrays? That can be an area that causes a slowdown.

2. You have an R5 using 4+1 disks; is it possible that the writes are exceeding the recommended Best Practices limits? 10K FC disks in a 4+1 can handle about 120 IOPS per disk, or 5 * 120 = 600 IOPS for the RAID group (see the sketch after this list). Are you exceeding this? That may also cause a slowdown.

3. Do you have sufficient write cache allocated? Make sure that read cache is about 150MB and the rest is assigned to write cache.
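
A quick back-of-envelope check of item 2. The per-disk IOPS figure comes from the post above; the RAID 5 small-write penalty of 4 back-end I/Os per host write is a standard rule of thumb, not something stated in this thread:

```python
# Sanity check: can a 4+1 RAID 5 group of 10K FC disks absorb the write load?
DISK_IOPS = 120      # ~120 IOPS per 10K FC spindle (from the post above)
DISKS = 5            # 4+1 RAID 5
WRITE_PENALTY = 4    # RAID 5 small write: read data + read parity + 2 writes

backend_capacity = DISK_IOPS * DISKS              # 5 * 120 = 600 back-end IOPS
max_host_writes = backend_capacity / WRITE_PENALTY

print(f"Back-end capacity: {backend_capacity} IOPS")
print(f"Sustainable host writes (all small random writes): {max_host_writes:.0f} IOPS")
```

The point of the write penalty is that a group rated at 600 back-end IOPS can sustain far fewer host writes, so compare the host-side write rate in Analyzer against the lower figure.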

If these do not help, you should probably open a case with EMC.

glen

47 Posts

January 31st, 2009 13:00

There's no ISL between the arrays.

Yes...I'm using R5 4+1. I've checked Navi Analyzer and the disks are not exceeding 120 IOPS. Not even close.

Yes...read and write cache are enabled.

These are 133 GB FC disks.

4.5K Posts

February 2nd, 2009 10:00

What I would suggest then is that you open a case with EMC - you will need the spcollects from both arrays, as well as NAR files from both arrays covering the times when you experience the slowdowns.

Check the times on both arrays to make sure that the time is the same, or as close as you can get it. Right-click on SPA and select Properties - the time is on the General tab at the bottom.

If you do change the times on the arrays to match, you should probably stop Analyzer (uncheck the Statistic Logging box on the SP Properties page), clear the archive, and set the Archive Interval to 60 seconds (you need to be in Engineering mode to change the Archive Interval and to see the "Clear Archive" box - it's next to the Archive Interval), then start Analyzer again (Statistic Logging enabled). Once the Archive Interval is 60 seconds, re-run your tests - they should run for longer than a couple of minutes if possible.
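
If you would rather script the collection than click through Navisphere, a hedged sketch follows. The naviseccli command and flag names here (getsptime, analyzer -set -narinterval, analyzer -start, spcollect) are assumptions from memory of FLARE-era CLI documentation - verify each with "naviseccli <command> -help" on your release, and note that clearing the archive still has to be done in the GUI's Engineering mode:

```python
# Hypothetical helper that drives naviseccli against each SP.
# SP addresses below are placeholders; command/flag names are assumptions.
import subprocess

SPS = ["arrayA_spa", "arrayA_spb", "arrayB_spa", "arrayB_spb"]

def navi(sp: str, *args: str) -> str:
    """Run one naviseccli command against an SP and return its output."""
    result = subprocess.run(["naviseccli", "-h", sp, *args],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

# 1. Compare SP clocks so the NAR files from the two arrays line up.
for sp in SPS:
    print(sp, navi(sp, "getsptime"))

# 2. Set the Analyzer archive interval to 60s, restart logging, and kick
#    off an spcollect on each SP for the support case.
for sp in SPS:
    navi(sp, "analyzer", "-set", "-narinterval", "60")
    navi(sp, "analyzer", "-start")
    navi(sp, "spcollect")
```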

What FLARE version is running on the arrays? It should be the same on both.

glen