edhoward
3 Argentium

Data Domain directory-level deduplication report


Hi there,

Is there any way I can get a report on deduplication at the directory level?

Thanks,

Ed

1 Solution

Accepted Solutions
umichklewis
4 Tellurium

Re: Data Domain directory-level deduplication report


Sure -

Use "filesys show compression <path> recursive", where path is the directory you wish to get statistics on.

On my DD860:

sysadmin@dd-test# filesys show compression /data/col1/cifs_sql/BMHFAS recursive last 1 day

/data/col1/cifs_sql/BMHFAS/BMHFAS_msdb_Full_201502200400.safe: mtime: 1424422807887335000, bytes: 49,812,480, g_comp: 5,303,381, l_comp: 5,271,771, meta-data: 17,456, bytes/storage_used: 9.4

/data/col1/cifs_sql/BMHFAS/BMHFAS_master_Full_201502200400.safe: mtime: 1424422805264819000, bytes: 7,793,664, g_comp: 625,954, l_comp: 518,488, meta-data: 2,252, bytes/storage_used: 15.0

/data/col1/cifs_sql/BMHFAS/BMHFAS_model_Full_201502200400.safe: mtime: 1424422805509809000, bytes: 5,830,656, g_comp: 461,242, l_comp: 348,727, meta-data: 1,580, bytes/storage_used: 16.6

/data/col1/cifs_sql/BMHFAS/BMHFAS_BESTSYS_Full_201502200400.safe: mtime: 1424422806320493000, bytes: 5,697,536, g_comp: 609,477, l_comp: 499,890, meta-data: 2,084, bytes/storage_used: 11.4

/data/col1/cifs_sql/BMHFAS/BMHFAS_ReportServer_Full_201502200400.safe: mtime: 1424422808070944000, bytes: 9,891,840, g_comp: 445,371, l_comp: 328,452, meta-data: 1,608, bytes/storage_used: 30.0

/data/col1/cifs_sql/BMHFAS/BMHFAS_ReportServerTempDB_Full_201502200400.safe: mtime: 1424422808714263000, bytes: 5,696,512, g_comp: 445,017, l_comp: 308,430, meta-data: 1,552, bytes/storage_used: 18.4

/data/col1/cifs_sql/BMHFAS/BMHFAS_Sage_FAS_Full_201502200400.safe: mtime: 1424422831615641000, bytes: 966,536,192, g_comp: 9,515,081, l_comp: 9,511,266, meta-data: 27,648, bytes/storage_used: 101.3

The above output shows the stats for the backup files written in the last day.
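If you want directory-level totals rather than per-file lines, the output is easy to post-process off-box. Here is a minimal Python sketch; the field names and formats are taken from the sample output above, and the script itself is not part of the DD CLI. In the sample, bytes/storage_used appears to equal bytes divided by (l_comp + meta-data), and the aggregate below uses that same assumption:

```python
import re

# Matches one per-file line from "filesys show compression <path> recursive".
# Field names are copied verbatim from the sample output above.
LINE_RE = re.compile(
    r"^(?P<path>[^:]+): mtime: (?P<mtime>\d+), "
    r"bytes: (?P<bytes>[\d,]+), "
    r"g_comp: (?P<g_comp>[\d,]+), "
    r"l_comp: (?P<l_comp>[\d,]+), "
    r"meta-data: (?P<meta>[\d,]+), "
    r"bytes/storage_used: (?P<ratio>[\d.]+)$"
)

def parse_line(line):
    """Parse one output line into a dict (comma-grouped numbers -> ints)."""
    m = LINE_RE.match(line.strip())
    if not m:
        return None
    d = m.groupdict()
    for key in ("bytes", "g_comp", "l_comp", "meta"):
        d[key] = int(d[key].replace(",", ""))
    d["mtime"] = int(d["mtime"])
    d["ratio"] = float(d["ratio"])
    return d

def summarize(lines):
    """Total logical bytes vs. stored bytes (l_comp + meta-data) for a directory."""
    rows = [r for r in (parse_line(l) for l in lines) if r]
    total_bytes = sum(r["bytes"] for r in rows)
    total_stored = sum(r["l_comp"] + r["meta"] for r in rows)
    return total_bytes, total_stored, total_bytes / total_stored
```

Feeding it the per-file lines for one directory gives that directory's overall bytes/storage_used ratio.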

Let us know if that helps!

Karl

10 Replies
mikmowtx01
1 Copper

Re: Data Domain directory-level deduplication report


Just as an FYI: if you have DPA, the upcoming version 6.2 will add the ability to report dedupe by DD client. It does require some additional resources and collection, but it can all be automated and scheduled once set up.

edhoward
3 Argentium

Re: Data Domain directory-level deduplication report


Great, thanks for this. Out of interest, do you know what g_comp and l_comp are? They look like compression figures.

Nayaks1
2 Iron

Re: Data Domain directory-level deduplication report

edhoward
3 Argentium

Re: Data Domain directory-level deduplication report


Thanks again but I can't access that link.

Nayaks1
2 Iron

Re: Data Domain directory-level deduplication report

edhoward
3 Argentium

Re: Data Domain directory-level deduplication report


Excellent, thanks. Do you think I could write a Perl script to get a report on all the directories in one go?

Nayaks1
2 Iron

Re: Data Domain directory-level deduplication report


To be honest I haven't tried that. It never crossed my mind either, and right now I don't have a box to test on. Let's see what others have to say.
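For what it's worth, since the DD CLI is reachable over SSH, any scripting language could loop over the directories and collect the reports in one go. A minimal sketch in Python (the host name and directory list are placeholders, and this is untested against a real box):

```python
import subprocess

# Hypothetical host and directory list -- substitute your own values.
DD_HOST = "sysadmin@dd-test"
DIRS = ["/data/col1/cifs_sql/BMHFAS"]  # add each directory you want a report on

def build_cmd(host, path, last="1 day"):
    """argv for one report run, executed on the DD over ssh."""
    return ["ssh", host,
            "filesys show compression {} recursive last {}".format(path, last)]

def run_reports(host, dirs):
    """Run the report for each directory; return {directory: output text}."""
    out = {}
    for d in dirs:
        res = subprocess.run(build_cmd(host, d), capture_output=True, text=True)
        out[d] = res.stdout
    return out
```

Key-based SSH authentication would let this run unattended from cron or a scheduler.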

edhoward
3 Argentium

Re: Data Domain directory-level deduplication report


This only comes up once in a blue moon, so I'm just going to run it manually; it would take as long as writing the script! Thanks for your help.
