cphillips101
1 Nickel

Data Domain DD670 and Arcserve Backup R17.5 - best practice


Hi all,

I am using Arcserve Backup r17.5 to back up some Microsoft SQL servers.  I am backing them up to a DD670 using CIFS shares on the DD.

Arcserve gives me an option of using deduplication when setting up the jobs.

My question is this - is it safe to allow Arcserve to carry out deduplication when storing the backups on the DD?  I am concerned that something peculiar may happen if I store deduplicated backups on a hardware dedup device!

I've tested it, and I get better results using the deduplication option in Arcserve than when storing the backups straight to the DD.

Thoughts please!

Regards
Colin

3 Replies
rugby01
2 Bronze

Re: Data Domain DD670 and Arcserve Backup R17.5 - best practice (Accepted Solution)


Don't use compression or deduplication from Arcserve. It makes every byte stored look unique to the Data Domain and removes all of its compression and deduplication.   EMC does have a replacement for CIFS coming out this year (currently in beta).  When that product is available it will solve all your SQL dump issues.
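
To make the "every byte stored looks unique" point concrete, here is a minimal, hypothetical sketch (plain Python; simple 4 KiB fixed-size chunks stand in for Data Domain's variable-length segmentation, and zlib stands in for Arcserve's client-side compression). Two nearly identical nightly dumps share almost all of their chunks in raw form, but almost none once each copy is compressed on its own:

```python
import hashlib
import zlib

CHUNK = 4096  # toy fixed-size chunks; a real DD uses variable-length segments


def chunk_hashes(blob: bytes) -> set:
    """Fingerprint every fixed-size chunk, roughly what a dedup target does."""
    return {hashlib.sha1(blob[i:i + CHUNK]).digest()
            for i in range(0, len(blob), CHUNK)}


def dedup_ratio(previous: bytes, current: bytes) -> float:
    """Fraction of the current backup's chunks already stored by the previous one."""
    seen = chunk_hashes(previous)
    cur = chunk_hashes(current)
    return len(cur & seen) / len(cur)


# Two synthetic "nightly SQL dumps": identical except for a few changed rows.
rows = [f"row {i:07d},{'x' * 64}\n" for i in range(20_000)]
dump_monday = "".join(rows).encode()

for changed in (50, 9_000, 18_000):              # a handful of rows change overnight
    rows[changed] = f"row {changed:07d},{'y' * 64}\n"
dump_tuesday = "".join(rows).encode()

print(f"raw dumps:        {dedup_ratio(dump_monday, dump_tuesday):.0%} of chunks dedup")
print(f"compressed dumps: "
      f"{dedup_ratio(zlib.compress(dump_monday), zlib.compress(dump_tuesday)):.0%} of chunks dedup")
```

On a typical run the raw dumps dedup at well over 90%, while the independently compressed copies dedup at essentially 0%, because even a small change early in the input changes the compressed byte stream from that point onward.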

I'm guessing the reason you're seeing better results in the Arcserve report is that you already have compression turned on. Turn it off!

You can see the effects of compression in the "show performance" section of the autosupport.  The gcomp and lcomp columns show global compression (dedup) and local compression (lz).   If either column is trending toward 1.0, then compression or deduplication is happening on the client. We see this most often with customers doing dump-and-sweep backups where the SQL admin has turned on compressed backups.

cphillips101
1 Nickel

Re: Data Domain DD670 and Arcserve Backup R17.5 - best practice


rugby01

Thanks for your reply.

I have compression and deduplication turned off within Arcserve.  It was the option to turn them on when specifying the datastore that I was questioning.

Here's a screenshot:

[Screenshot: dd3.JPG]

I'll continue saving the savesets to the DD as a normal device.

The Data Domain is definitely compressing the backups:

[Screenshot: dd1.JPG]

That's for the last 24 hours.  Impressive!

I couldn't locate the "show performance" section in the autosupport.  I am running DD OS 5.4.5.0-477080, so maybe that's the reason?

All I have is:

[Screenshot: dd2.JPG]

Is it worth updating the firmware/OS on the device?  It's 5 years old, but I want to keep using it as it's a fantastic device.

Regards
Colin

rugby01
2 Bronze

Re: Data Domain DD670 and Arcserve Backup R17.5 - best practice


Yes - update the code to the 5.7 level, as hundreds of bugs have been fixed.

Autosupport - you should be getting a daily autosupport email, or you can open the autosupport files shown at the bottom of your image. At the very bottom of the autosupport is "System Show Performance". Look at gcomp and lcomp: the total data reduction for each 10-minute sample is gcomp x lcomp. If you see a section where you're getting 1.0 (or less) for gcomp and for lcomp, that's compression on the client!
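
As a quick worked example of that product (the factor values below are made up for illustration, not taken from any real autosupport):

```python
# Made-up gcomp/lcomp values, purely to illustrate how to read the two factors.
def total_reduction(gcomp: float, lcomp: float) -> float:
    """Overall data reduction for a sample is the product of the two factors."""
    return gcomp * lcomp

healthy = total_reduction(gcomp=12.0, lcomp=1.8)   # uncompressed dumps, dedup working
suspect = total_reduction(gcomp=1.0, lcomp=1.05)   # client already compressed the data

print(f"healthy sample: {healthy:.1f}x total reduction")
print(f"suspect sample: {suspect:.1f}x total reduction - both factors near 1.0")
```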

[Screenshot: Capture.PNG]
