Unsolved

4 Posts

July 12th, 2021 08:00

EVT-SPACE-00004: Space usage in Data Collection has exceeded 90% threshold

Hello everyone,

I ran the command "filesys show space" to check the space utilization of a DD9800, and this is the result:

[screenshot: output of "filesys show space"]

The Used GiB value is 441248.0, but when I run "filesys show compression" for the main mtree used for backup, this is the result:

[screenshot: output of "filesys show compression" for the main backup mtree]

Locally Compressed is 174231.9 GB.

From what I understand, Locally Compressed is the final amount of space the mtree is using. Is that right? If so, I don't understand what else is occupying space on the Data Domain.
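
Doing the arithmetic on the two figures above, there is a large gap to explain:

    441248.0 GiB   post-comp used ("filesys show space")
  - 174231.9 GiB   Locally Compressed, main backup mtree
    ------------
    267016.1 GiB   unaccounted for (my guesses: other mtrees, snapshots,
                   filesystem metadata, space not yet reclaimed by cleaning)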

Thank you

bd

4 Posts

July 22nd, 2021 05:00

Please, can anyone let me know if Locally Compressed is how much space the mtree is taking on the filesystem?

If it is, I need to understand why the sum of the Locally Compressed values of all the mtrees is only about half of the /data: post-comp used value.

Thank you

bd

18 Posts

August 12th, 2021 19:00

Does the "Original Bytes" of "filesys show compression" match your backup data size?
If not, then you would most likely see more than 1 mtree by running "mtree list".

"filesys show space" shows DD utilization is at 91%.
Check your backup size and retention settings ASAP before it reaches 100%.
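
A minimal sketch of those checks (the mtree path below is a placeholder; substitute what "mtree list" returns on your system, and note that "mtree show compression" and "filesys clean" assume a reasonably current DD OS release):

    mtree list                                 # enumerate every mtree, not just the main backup one
    mtree show compression /data/col1/<mtree>  # per-mtree pre-comp/post-comp figures
    filesys show space                         # note the Cleanable GiB column
    filesys clean status                       # check whether cleaning has run recently
    filesys clean start                        # reclaim space from expired/deleted segments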

August 23rd, 2021 08:00

You state that this is the main mtree used for backup, but that by itself does not say anything about how it compares to the rest or to the actual total.

So what does the overview of all mtrees show? And which backup tool(s) are in use? NetWorker?

Due to the way NetWorker data (since NW 9.x, if memory serves me well) is processed by a DD once ingested, when it moves/copies data from its temporary location to the actual location on the DD, I believe the "filesys show compression" output still cannot show the actual amount. There have been multiple KB articles about that, but I can't recall whether it is even addressed nowadays. We do not actually use it to determine anything with regard to NetWorker. I will have to check again with some NW 19.3 and 19.4 environments to see if that is still the case.

Also, getting to know which mtree, or ideally even which client, occupies how much storage, so that customers can be billed on a mix of the actual storage occupied on the DD and the NetWorker front-end protected capacity (billing mainly based on the actual costs for NetWorker and the actual costs on the DD, rather than the old-fashioned retained amount of data), turns out to be rather cumbersome when using Dell DPA to collect the data from both NW and DD.

Having a clear overview of which client consumes the most storage would make it easier to focus on where the most is to be gained by getting rid of data. With deduplication you don't really know in advance: only after data is deleted within NW, data expiry has run within NW, and DD cleaning has run do you find out how much disk space was actually freed.

Up until now we mostly check whether data is actually being expired and look at the largest clients, hoping that the most is to be gained from those (especially if they are DB systems).

NetWorker, for example, has a default where, if a volume is being read from (either for a restore or for cloning), it will not expire any data on that volume, to protect against possible data loss in case a save set expires while it is still being read. However, that all happens silently; there is no notification that expiration is not occurring. It is more a matter of actively looking for volumes on which expiration is not occurring.

However, we now set an nsr debug option to expire data from such volumes regardless, to prevent the DD from filling up.

A giveaway that this is occurring is that the amount of not-yet-expired data (an mminfo query with "!ssrecycle") is (much) less than the total amount of data (a query for all data, so including expired data). At times this turned out to be multiple TBs in our environments, until we set the /nsr/debug option (I forgot the actual debug file to create for that; I will have to look that up again).

We also occasionally had clients unintentionally making incremental-forever backups, because their full backup was not actively scheduled but had been run once while incremental backups were enabled. This caused all incremental backups to be kept, even when they were already long expired. In one case this went back a whole year before it got noticed, by which point the nsrim run was approaching 24 hours to complete because that client had 1 million save sets.
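
A rough sketch of that comparison (the report attributes are my assumption; check the mminfo man page for the exact names on your NW release):

    # save sets that are not yet recyclable, i.e. still within retention
    mminfo -a -q "!ssrecycle" -r "client,name,savetime,sumsize"
    # everything still held in the media database, recyclable save sets included
    mminfo -a -r "client,name,savetime,sumsize"

If the summed sizes of the second query are far larger than those of the first, a lot of already-expired data is still occupying space on the DD until the volumes are recycled and cleaning runs.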

Maybe PCM (physical capacity measurement) would also give you insight into how much disk space every mtree actually occupies. But even then the total across all mtrees would be higher than what is actually occupied, as PCM/PCR treats each data collection as if that mtree were the only mtree on the DD. If an mtree also dedupes against data in other mtrees, that is not taken into account (so for billing it can be used if there is only one customer, but you would not be able to drill down to a single client).

There is still a lot to be desired when it comes to overviews of single clients and the DD storage they occupy.
