PatrickBetts
3 Argentium

Re: Data Domain compression / de-dupe expectations

Hrvoje,

I'm trying to.  Initially there was an issue with sending me attachments but we got it working.  I'm going over the data now.  If chappel02 allows, I'll post my findings (or chappel02 can).

Best Regards,

Patrick

dynamox
6 Thallium

Re: Data Domain compression / de-dupe expectations

Are compression and encryption disabled in your backup application?

Fullerstein
1 Copper

Re: Data Domain compression / de-dupe expectations

gzfast compression seems a very odd setting. I would highly recommend setting this to lz (auto).

How often are you running full backups? Higher dedupe rates will be achieved on subsequent backups. A 3-10x ratio on the initial full should be expected, however.

The other recommendations are fair: ensure no multiplexing, encryption, or compression is being applied to the data before it's received by the Data Domain system.
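The reason client-side encryption and compression kill dedupe can be shown with a toy fixed-chunk dedupe index. This is a hedged sketch, not how Data Domain's variable-length chunking actually works: it simulates "encryption" with a simple per-backup XOR transform just to show that any transform that differs between backups makes otherwise-identical chunks look unique to the target.

```python
import hashlib
import os

CHUNK = 4096  # fixed chunk size for this toy index

def unique_chunks(*streams):
    """Count distinct fixed-size chunks across all streams (naive dedupe index)."""
    seen = set()
    for s in streams:
        for i in range(0, len(s), CHUNK):
            seen.add(hashlib.sha256(s[i:i + CHUNK]).digest())
    return len(seen)

def xor(stream, key):
    """Stand-in for client-side encryption: any per-backup transform would do."""
    return bytes(b ^ key for b in stream)

# Two nearly identical "full backups": same payload, one changed chunk.
base = os.urandom(256 * CHUNK)
full1 = base
full2 = base[:CHUNK] + os.urandom(CHUNK) + base[2 * CHUNK:]

total = (len(full1) + len(full2)) // CHUNK            # 512 chunks sent
print("plaintext:", unique_chunks(full1, full2), "of", total)   # ~257 stored
print("encrypted:", unique_chunks(xor(full1, 0x5A), xor(full2, 0xA5)), "of", total)
```

Sent in the clear, the second full contributes only the changed chunk, so the target stores about half the data. Encrypted with different keys per backup, every chunk hashes unique and dedupe drops to nothing, which is why these settings belong off in the backup application when a dedupe appliance is the target.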

ble1
6 Indium

Re: Data Domain compression / de-dupe expectations

I'm running gzfast on all my DD boxes, and there is nothing odd about it.

chappel02
1 Copper

Re: Data Domain compression / de-dupe expectations

Sorry for dropping this. For the sake of closure, the final answer was that Storage Craft is not a supported backup client, so although the DD160 shows up as a valid target and can be written to, it DOES NOT compress OR de-dupe the file output of the Storage Craft clients. We switched to vDP-A for our virtualized clients and got more or less the expected levels of compression and de-dupe, but were forced to retain our old pre-DD160 backup targets and Storage Craft for our physical systems.

Thanks for all your suggestions and tips.

ch
