Unsolved

February 3rd, 2014 14:00

TimeFinder Clone and QoS settings

Recently we had an issue where the cache write pending (WP) went to 100% due to TimeFinder clone copies driven by the Replication Manager for Exchange tool. On recommendation we decided to change the QoS and "cache writing priority" settings.

Can anyone tell us whether we need to change the QoS on both the source and the target clone devices, or only on the target devices?

Also, do we need to raise both the QoS value and the Symmetrix Priority Control and cache partition values?

21 Posts

February 3rd, 2014 14:00

Thanks for your reply.

We are running the latest version, 5876.229.145, which we recently upgraded to. Interestingly, WP hit 100% after the upgrade, the first time the TF clone jobs ran.

The TF clone target devices are thick devices from the SATA disk group, and while WP was at 100% the SATA disk group was also 100% busy. We don't have any DCP created yet.

When I look at the storage group for the TF clone target devices, it shows all reads instead of writes, even though no server is reading those target devices; only the Replication Manager tool copies from the source and runs a consistency check on them (Exchange storage groups).

I'm thinking of starting with a QoS clone copy pacing value of 8 on the target devices. If we set the same value on the source device storage groups as well, will that cause any issue for production, and will it reduce the WP load?
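
For reference, this is roughly what I have in mind, sketched against placeholder names (the SID, storage group names, and exact symqos keywords below are assumptions from memory, not verified syntax; I'll confirm them against symqos -h and the Solutions Enabler guide for 5876 before running anything):

    # Pace the clone copy on the target storage group first (placeholder SG name;
    # as I understand it, pace 0 means no pacing and larger values slow the copy more)
    symqos -sid 1234 -sg RM_Exch_Clone_TGT_SG set CLONE pace 8

    # Only if the target-side change alone does not relieve WP, apply the same
    # pace value to the source storage group as well (placeholder SG name)
    symqos -sid 1234 -sg Exch_Prod_SRC_SG set CLONE pace 8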

465 Posts

February 3rd, 2014 14:00

What version of microcode are you running? There was a clone change, I think in 5875 code, that changed the clone operation from a push from the source to a pull from the target. This change helped reduce the amount of write pending. If you are running a recent version of code, the QoS setting needs to go on the target.

There is a maximum amount of cache that clone can use, so if you are running into WP limits, the clone workload may be driving write pending for other workloads. QoS would be a good place to start.

Keep your changes as simple as possible: one change at a time until you get the desired effect. Start with the QoS.
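
To keep it to one variable at a time, a before/after check could look something like the sketch below; the SID, group names, and list action are placeholders and assumptions on my part, so confirm them against symqos -h and symcfg -h for your Solutions Enabler release:

    # Record the current QoS/pace settings on the clone target group (placeholder names)
    symqos -sid 1234 -sg RM_Exch_Clone_TGT_SG list

    # Note the system write pending limit and current usage for later comparison
    symcfg -sid 1234 list -v | grep -i "write pending"

    # Make the single change on the target side, then re-run the two commands above
    symqos -sid 1234 -sg RM_Exch_Clone_TGT_SG set CLONE pace 8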

1.3K Posts

February 3rd, 2014 14:00

Not sure what code you are running, but a clone copy should not bring the system to WP limits; it is supposed to self-throttle.

Also, I have no idea what "cache writing priority" is. Priority Controls only work for reads, and only on thick devices, so they won't help throttle the copies.

I'm pretty sure that with the newer code you set the QoS on the target, but it won't hurt to set it on both.

Do you already have DCP set up? If so, do you have the targets in a separate partition? Is this thick or thin?
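
If it helps to pull that information together, something along these lines should show it; the SID is a placeholder and the cache-partition option is an assumption from memory, so double-check both against symqos -h on your Solutions Enabler build:

    # Are the clone targets thin (TDEVs) or thick? List thin devices and compare
    # against the devices in the Replication Manager target storage group.
    symcfg -sid 1234 list -tdev

    # List any existing cache partitions (option name assumed; verify with symqos -h)
    symqos -sid 1234 -cp list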

21 Posts

February 3rd, 2014 15:00

Thanks, Sean. Yes, we already changed the FAST VP relocation rate as part of the upgrade (from 4 to 5). However, these are thick devices that are not part of FAST VP, so that should not affect these clone copy operations.

226 Posts

February 3rd, 2014 15:00

PRS,

Since you're using thick clones, it should be using request-based copy, so QoS should work on the clone targets; it won't hurt production performance to set clone QoS on the sources either. Since you just recently upgraded code to 5876.229, you may also want to consider increasing your FAST relocation rate (assuming you're using FAST) to a value higher than the default of 5. The FAST algorithm is more aggressive in 229.
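
If you want to look at the FAST side before touching anything, something like the line below should list the current controller parameters, including the relocation rate; the SID is a placeholder and the option name is an assumption from memory, so verify it with symfast -h on your build:

    # View FAST controller parameters such as the relocation rate and analysis periods
    # (option name assumed from memory; confirm with symfast -h)
    symfast -sid 1234 list -control_parms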

