Data Domain directory-level deduplication report
edhoward
214 Posts
Hi there,
Is there any way I can get a report on the deduplication level at a directory level?
Thanks,
Ed
umichklewis
1.2K Posts
February 20th, 2015 12:00
Sure -
Use "filesys show compression
recursive
", where path is the directory you wish to get statics on.
On my DD860:
sysadmin@dd-test# filesys show compression /data/col1/cifs_sql/BMHFAS recursive last 1 day
/data/col1/cifs_sql/BMHFAS/BMHFAS_msdb_Full_201502200400.safe: mtime: 1424422807887335000, bytes: 49,812,480, g_comp: 5,303,381, l_comp: 5,271,771, meta-data: 17,456, bytes/storage_used: 9.4
/data/col1/cifs_sql/BMHFAS/BMHFAS_master_Full_201502200400.safe: mtime: 1424422805264819000, bytes: 7,793,664, g_comp: 625,954, l_comp: 518,488, meta-data: 2,252, bytes/storage_used: 15.0
/data/col1/cifs_sql/BMHFAS/BMHFAS_model_Full_201502200400.safe: mtime: 1424422805509809000, bytes: 5,830,656, g_comp: 461,242, l_comp: 348,727, meta-data: 1,580, bytes/storage_used: 16.6
/data/col1/cifs_sql/BMHFAS/BMHFAS_BESTSYS_Full_201502200400.safe: mtime: 1424422806320493000, bytes: 5,697,536, g_comp: 609,477, l_comp: 499,890, meta-data: 2,084, bytes/storage_used: 11.4
/data/col1/cifs_sql/BMHFAS/BMHFAS_ReportServer_Full_201502200400.safe: mtime: 1424422808070944000, bytes: 9,891,840, g_comp: 445,371, l_comp: 328,452, meta-data: 1,608, bytes/storage_used: 30.0
/data/col1/cifs_sql/BMHFAS/BMHFAS_ReportServerTempDB_Full_201502200400.safe: mtime: 1424422808714263000, bytes: 5,696,512, g_comp: 445,017, l_comp: 308,430, meta-data: 1,552, bytes/storage_used: 18.4
/data/col1/cifs_sql/BMHFAS/BMHFAS_Sage_FAS_Full_201502200400.safe: mtime: 1424422831615641000, bytes: 966,536,192, g_comp: 9,515,081, l_comp: 9,511,266, meta-data: 27,648, bytes/storage_used: 101.3
The above output shows the stats for the backup files written in the last day.
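As far as I can tell, the bytes/storage_used column is the logical size divided by the physical footprint, i.e. bytes / (l_comp + meta-data); for example, for the msdb file above, 49,812,480 / (5,271,771 + 17,456) ≈ 9.4.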
Let us know if that helps!
Karl
Anonymous
5 Practitioner
274.2K Posts
February 22nd, 2015 07:00
Just as an FYI... if you have DPA, version 6.2 will add the ability to report dedupe by DD client. It does require some additional resources and collection, but it can all be automated and scheduled once set up.
Nayaks1
77 Posts
February 22nd, 2015 23:00
This might help:
https://emc--c.na5.visual.force.com/apex/KB_HowTo?id=kA0700000004S1s
edhoward
214 Posts
February 22nd, 2015 23:00
Thanks again, but I can't access that link.
Nayaks1
77 Posts
February 22nd, 2015 23:00
Try this:
https://support.emc.com/kb/181054
edhoward
214 Posts
February 22nd, 2015 23:00
Great, thanks for this. Out of interest, do you know what g_comp and l_comp are? They look like compression stats.
edhoward
214 Posts
February 23rd, 2015 00:00
Excellent, thanks. Do you think I could write a Perl script to get a report on all the directories in one go?
Nayaks1
77 Posts
February 23rd, 2015 02:00
To be honest, I haven't tried that. It never crossed my mind either, and right now I don't have a box to test. Let's see what others have to say.
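That said, a rough, untested sketch might look something like this — I can't verify it without a box, and the host name and directory list are placeholders you'd swap for your own. It assumes key-based (password-less) SSH to the DD as sysadmin:

#!/usr/bin/perl
# Untested sketch: run the compression report for a list of
# directories over SSH and print everything in one go.
# Placeholder host and paths below - substitute your own.
use strict;
use warnings;

my $dd   = 'sysadmin@dd-test';     # placeholder host
my @dirs = qw(
    /data/col1/cifs_sql/BMHFAS
    /data/col1/cifs_sql/ANOTHER_DIR
);                                 # placeholder paths

for my $dir (@dirs) {
    print "=== $dir ===\n";
    # Same command Karl showed above, once per directory
    system('ssh', $dd, "filesys show compression $dir recursive last 1 day");
}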
edhoward
214 Posts
February 23rd, 2015 02:00
This only comes up once in a blue moon, so I'm just going to run it manually; it would take as long as writing the script! Thanks for your help.
umichklewis
1.2K Posts
February 25th, 2015 09:00
g_comp is Global Compression and l_comp is Local Compression, akin to the following:
sysadmin@ddtest# mtree show compression /data/col1/cifs_sql
From: 2015-02-18 17:00 To: 2015-02-25 17:00
              Pre-Comp   Post-Comp   Global-Comp   Local-Comp     Total-Comp
                 (GiB)       (GiB)        Factor       Factor         Factor
                                                               (Reduction %)
-------------  --------   ---------   -----------   ----------  -------------
Written:*
  Last 7 days   17675.7      2028.1          7.9x         1.1x    8.7x (88.5)
  Last 24 hrs    2305.5       207.3         10.7x         1.0x   11.1x (91.0)
-------------  --------   ---------   -----------   ----------  -------------
* Does not include the effects of pre-comp file deletes/truncates
since the last cleaning on 2015/02/21 17:52:58.
Key:
Pre-Comp = Data written before compression
Post-Comp = Storage used after compression
Global-Comp Factor = Pre-Comp / (Size after de-dupe)
Local-Comp Factor = (Size after de-dupe) / Post-Comp
Total-Comp Factor = Pre-Comp / Post-Comp
Reduction % = ((Pre-Comp - Post-Comp) / Pre-Comp) * 100
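Plugging in the "Last 24 hrs" row above as a quick sanity check: Total-Comp = 2305.5 / 207.3 ≈ 11.1x, and Reduction % = ((2305.5 - 207.3) / 2305.5) * 100 ≈ 91.0. Note that by these definitions Total-Comp = Global-Comp x Local-Comp, since the "size after de-dupe" term cancels out.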