

January 26th, 2018 09:00

"Performance testing" guidelines for DDBEA backups of Oracle?

Not sure if that's the best title, but here's what I'm looking for:

Customer wants to run a series of separate Oracle backup jobs using DDBEA over a period of time, and their key "ask" is that they get "true" results for how long each backup takes to run - with no "previous knowledge" of any other backups or backup data that may have run before.

So - what has to be done at the RMAN, Data Domain and possibly DDBEA levels so the customer can run multiple Oracle backup jobs (different DB sizes, different numbers of channels, and so on) but NOT see any "skew" from backup jobs that ran before?
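(My working assumption - and I'd welcome corrections - is that since deduplication on a Data Domain is global across the whole box rather than per mtree, the only way to get a truly "cold" run is to delete the previous test's data and reclaim the space before the next run, along the lines of:

  ddboost storage-unit delete ora_test_run1
  filesys clean start

and then create a fresh storage unit for the next test. The storage-unit name here is just an example.)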

One other "performance" question - is there any "formula" for determining how many channels to allocate in order to "optimize" an Oracle backup job? In my case, the customer has seen a backup run using 4 channels which "split" the overall database info into "9 pieces", and he is wondering whether that means it would run better if he used 9 channels instead?

One other thing - if either of these is an "Oracle question" rather than a DDBEA question in any way, please comment accordingly so I know that is the case.

All comments/feedback appreciated - thanks.

Cal C.

85 Posts

January 26th, 2018 09:00

Oracle open floor questions for a northwest DDBEA customer:

Make sure the mtree is set to type "oracle1" (the app-optimized-compression setting). The process for setting this up for Oracle is in the DDOS CLI manual.
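If I'm remembering the CLI correctly, it's set along these lines (the mtree path below is just an example):

  mtree option set app-optimized-compression oracle1 mtree /data/col1/oracle_su

I believe you can verify it afterwards with "mtree option show" - but check the DDOS CLI manual for the exact syntax on your DDOS release.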


Number of Files Per Set and Channels?

  • 4 files per set is the best practice, but the right value depends on the size of the DB and the hardware, and it assumes an Oracle block size of 8K or 16K. Files per set can be lower or even higher depending on the impact to backup/DB speed. If the DB uses a larger or smaller block size, changing files per set will still work, but deduplication rates may drop.


  • 1 channel per CPU. Best practice is 1 channel per CPU, but it can go as high as 2 per CPU. Again - start with 1 per CPU, and you can test whether 2 makes backups faster (see the RMAN sketch below).
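For what it's worth, on the Oracle side the above translates into something like the following RMAN settings for a 4-CPU host. Treat this as a sketch: the SBT_LIBRARY path and CONFIG_FILE location are placeholders, and the real values depend on your DDBEA version and install location.

  # One channel per CPU on a 4-CPU host; library/config paths are illustrative
  CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
  CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS
    'SBT_LIBRARY=/opt/ddbda/lib64/libddboostora.so,
     ENV=(CONFIG_FILE=/home/oracle/oracle_ddbda.cfg)';
  # Best-practice starting point: 4 files per backup set
  BACKUP DATABASE FILESPERSET 4;

If you want to test the 2-channels-per-CPU case, re-run the same backup with PARALLELISM 8 and compare the elapsed times.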



85 Posts

January 26th, 2018 10:00

That's all configurable from inside Oracle. The DDBEA RMAN agent just writes out what Oracle tells it, as if it were basically a NAS target. When you set up the channels and FILESPERSET, RMAN will reuse the channels as it works through the backup sets.

132 Posts

January 26th, 2018 10:00

Thanks for that reply - good info there.

Is there any way to "predict" how many "pieces" Oracle will break the backup into? My customer is also concerned that the number of channels won't match up optimally with the number of pieces Oracle processes during the backup, leaving one or more channels sitting idle while the others process the last pieces.
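(My rough understanding - and I'd welcome a correction here - is that RMAN groups the datafiles into backup sets of at most FILESPERSET files each, so the piece count comes out to roughly ceil(datafiles / FILESPERSET). That would explain what we saw: something like 33-36 datafiles with FILESPERSET 4 gives ceil(35/4) = 9 pieces, and with 4 channels those 9 sets get handed out so that one channel processes 3 sets while the other three process 2 each.)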

132 Posts

January 26th, 2018 11:00

I know that "it's not our job" to deal with the Oracle side of things, but if you have any insight into how this is configured inside Oracle, that would be appreciated - all I'm looking for is info I can pass on to the customer, with the understanding that they need to make their own decisions relative to Oracle best practices.

One other thing - can that mtree option you mentioned be set "dynamically"? I certainly wouldn't try to set it while backups were running to the Oracle SU, but are there any "side effects" of changing the value?

132 Posts

February 1st, 2018 06:00

Relative to making sure 'the mtree is set to type "oracle1"' (I'm guessing you mean the app-optimized-compression setting), should I be concerned about this message I see when trying to change that parameter?


Changing this setting will cause dedup issues for the next backup.

Please ensure that this system has enough space before applying this setting.


Or is that just a warning relative to using the new compression "algorithm" and how well data will dedupe in the near term?


85 Posts

February 1st, 2018 07:00

Yes, you need to be careful. The oracle1 change alters the stored format and gets better compression going forward, but it doesn't change any existing backups - so it could hurt the overall dedup for a short time, until all the previous RMAN backups get cycled off.
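If you want to watch the dedup recover as old backups cycle off, I believe you can track the compression factors over time with something like (path is an example):

  filesys show compression /data/col1/oracle_su last 7 days

and compare the total-comp factor from week to week as the pre-oracle1 backups expire.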

132 Posts

February 1st, 2018 08:00

Understood - but I thought I read something that seemed to imply that existing data would get "re-compressed" during the weekly cleaning?

Or was that just something that wasn't written as clearly as it could have been, and it was actually trying to say that as the old data gets cleaned off, things will improve?

(I'll try to find the reference and list it here)

