karimh
2 Posts
1
January 12th, 2009 21:00
The penalty will vary depending mainly on the geographic location of the secondary image, network design, and contention, among many other factors. If it is on an array 100 miles away, then yes, there will be a significant impact. Remember, every write must first be committed at the MV/S target (secondary image) before the write is acknowledged at the primary image. The delay can be estimated as the round-trip time from array A to array B, including the time to commit the write to array B.
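That round-trip estimate can be sketched in a few lines. The numbers here are illustrative assumptions, not measurements from this thread: light in fiber covers roughly 1 km per 5 microseconds one way, and `remote_commit_ms` stands in for the time array B needs to commit the write to its cache.

```python
# Back-of-the-envelope sketch of the extra per-write latency a synchronous
# mirror adds. The 5 us/km fiber figure and the 0.5 ms remote commit time
# are assumptions for illustration only.

def sync_mirror_penalty_ms(distance_km, remote_commit_ms=0.5):
    """Added latency per write: round trip to array B plus B's commit time."""
    one_way_ms = distance_km * 0.005   # ~5 microseconds per km in fiber
    return 2 * one_way_ms + remote_commit_ms

# 100 miles is roughly 161 km
print(round(sync_mirror_penalty_ms(161), 2))  # on the order of 2 ms per write
```

Even a couple of milliseconds added to every write is very noticeable against a local cached-write latency of well under a millisecond, which is why distance alone can account for a significant hit.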
Karim
Dee12_978b23
47 Posts
0
January 14th, 2009 11:00
kelleg
4.5K Posts
1
January 15th, 2009 08:00
Are both the source LUN and the mirror LUN the same RAID type, using the same number of disks with the same disk type and speed?
Are the write cache settings for both arrays the same?
Are you still seeing the same performance hit now?
glen
Dee12_978b23
47 Posts
0
January 27th, 2009 20:00
Yes...same RAID type, number of disks, disk type, and speed
Yes...write cache is enabled and the same for both arrays.
Yes...we are still seeing the hit. We've gotten the same result from multiple tests.
Any other ideas?
kelleg
4.5K Posts
0
January 28th, 2009 10:00
1. Not knowing a lot about director-class switches: is there an ISL somewhere between the two arrays? This might be an area that causes a slowdown.
2. You have an R5 4+1 RAID group - is it possible that the writes are exceeding the recommended best-practice limits? 10K FC disks in a 4+1 can handle about 120 IOPS per disk, or 5 * 120 = 600 IOPS for the RAID group. Are you exceeding this? This may also cause a slowdown.
3. Do you have sufficient write cache allocated? Make sure that read cache is about 150 MB and the rest is assigned to write cache.
If these do not help, you should probably open a case with EMC.
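The 600 IOPS figure in point 2 can be sketched as a quick calculation. The 120 IOPS/disk number comes from the post; the RAID 5 write penalty of 4 back-end I/Os per random host write (read data, read parity, write data, write parity) is the standard assumption and is not stated in the thread.

```python
# Sketch of the 4+1 RAID 5 capacity estimate, with the usual RAID 5
# write penalty factored in. Figures are rules of thumb, not measurements.

DISK_IOPS = 120          # per 10K FC disk (figure from the post)
DISKS = 5                # 4+1 RAID 5 group
RAID5_WRITE_PENALTY = 4  # back-end I/Os per random host write

def max_host_iops(write_fraction):
    """Host IOPS the group can sustain for a given random-write fraction."""
    backend_per_host_io = (1 - write_fraction) + write_fraction * RAID5_WRITE_PENALTY
    return DISKS * DISK_IOPS / backend_per_host_io

print(round(max_host_iops(0.0)))  # 100% reads: 600 host IOPS
print(round(max_host_iops(1.0)))  # 100% random writes: only 150 host IOPS
```

The point of the sketch: a write-heavy workload saturates the group well below the headline 600 IOPS, so it is worth checking the read/write mix in Analyzer, not just the raw per-disk IOPS.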
glen
Dee12_978b23
47 Posts
0
January 31st, 2009 13:00
Yes...I'm using R5 4+1. I've checked Navi Analyzer and the disks are not exceeding 120 IOPS. Not even close.
Yes...read and write cache are enabled.
These are 133 GB FC disks.
kelleg
4.5K Posts
0
February 2nd, 2009 10:00
What I would suggest then is that you open a case with EMC - you will need the spcollects from both arrays, as well as NAR files from both arrays that cover the times when you are experiencing the slowdowns.
Check the times on both arrays to make sure that the time is the same, or as close as you can get it. Right-click on SPA and select Properties - the time is on the General tab, at the bottom.
If you do change the times on the arrays to match, you should probably stop Analyzer (uncheck the Statistics Logging box on the SP Properties page), clear the Archive, and set the Archive Interval to 60 seconds (you need to be in Engineering mode to change the Archive Interval and to see the "Clear Archive" box - it's next to the Archive Interval), then start Analyzer again (Statistics Logging enabled). Once the Archive Interval is 60 seconds, re-run your tests - they should run for longer than a couple of minutes if possible.
What FLARE version is running on the arrays? They should be the same.
glen