Unsolved
7 Posts
Manually deleting Networker backups...Help.
So here's what happened. Our backups are set to run every night, but our retention settings seem to be misconfigured: we ran out of space, and all our backup jobs hung waiting for available space. This is how I ended up deleting the backups manually.
After some research I used this command to gather the SSID (save set ID) of every backup between 08/01/2015 and 09/26/2015:
mminfo -avot -q "volume=volume_name,savetime>=08/01/2015,savetime<=09/26/2015" -r ssid >ssid.txt 2>&1
This gathered the matching save sets and wrote their SSIDs to a text file. From there I did this:
For /F %%a in (ssid.txt) do nsrmm -y -d -S %%a
This should remove all backups with the SSIDs contained in the ssid.txt file I created.
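Before running a destructive loop like this, a dry-run variant (assuming cmd.exe batch syntax; at an interactive prompt %%a becomes %a) can show exactly which commands would be executed:

```shell
REM Dry run: print each nsrmm command instead of executing it
For /F %%a in (ssid.txt) do echo nsrmm -y -d -S %%a
```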
My question: is there any negative outcome from deleting backups this way?
I probably should have inquired about this before doing it....
Any help is appreciated.
bingo.1
2.4K Posts
October 5th, 2015 20:00
In general, this is OK.
However, keep in mind that this will delete all instances of each save set (the backup and any clones). If you do not want that, you must use the ssid along with the cloneid.
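For example (a sketch; the volume name and dates are placeholders), you can report both IDs and then delete a single instance:

```shell
REM List ssid/cloneid pairs for the save sets in question
mminfo -avot -q "volume=volume_name,savetime>=08/01/2015,savetime<=09/26/2015" -r ssid,cloneid

REM Delete only the instance identified by this ssid/cloneid pair
nsrmm -y -d -S ssid/cloneid
```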
On the other hand, you can also use nsrmm to adjust the browse (-w) and retention (-e) time of each save set:
For /F %%a in (ssid.txt) do nsrmm -y -w <browse_time> -e <retention_time> -S %%a
There is no need to delete the backups.
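A concrete sketch of that approach (the dates here are placeholders; nsrmm accepts nsr_getdate-style times):

```shell
REM Shorten browse and retention so the save sets expire on their own
For /F %%a in (ssid.txt) do nsrmm -y -S %%a -w "10/15/2015" -e "10/31/2015"
```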
Davidtgnome
66 Posts
October 6th, 2015 06:00
Yes, there can be negative aspects.
If you were taking a full backup every Sunday and incrementals on the other days of the week, the list from the command you ran would include the full taken on 9/21/15, which would be needed to restore a file backed up on 9/26.
Incrementals depend on the most recent successful full. Depending on how you schedule things, and on whether those backups were successful, you run the risk of losing data and being unable to restore.
Also, what are you backing up to? If it's AFTDs, then the space might not free up until you run nsrim -X. If it's a Data Domain, then the space won't be reclaimed until the cleaning process runs, usually on Tuesday mornings.
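A sketch of the cleanup step on an AFTD setup (run on the NetWorker server):

```shell
REM Cross-check the media database against the client file indexes
REM and release the space used by the deleted save sets
nsrim -X

REM Check how full the volumes report afterwards
mminfo -m
```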
mackkey52
7 Posts
October 6th, 2015 12:00
I am not incredibly knowledgeable when it comes to NetWorker, so I try to do as much as I can through the management console. That being said, how can I tell what the backups are being written to? Under Devices in the NetWorker Management Console I see the NetWorker server under Storage Nodes and the drive that backups are being saved to. There is nothing under Data Domain Systems, so I would guess we are using AFTDs?
Also, in regards to losing data: we run our fulls on Sundays and incrementals every day. Wouldn't savetime<=09/26/2015 mean any save time less than or equal to 9/26/15? So that would have kept our last week of backups, starting on the 27th?
bingo.1
2.4K Posts
October 6th, 2015 13:00
You need to learn the CLI because it is much more powerful than the GUI.
Also keep in mind that not everything is implemented in the GUI.
To verify the device:
- check the volume first
- then verify the device where this volume is mounted
- last, check the device properties for the device type.
Your assumption about the savetime is correct.
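The same check can be sketched from the CLI (the nsradmin query below assumes the default resource names):

```shell
REM List volumes and the devices/pools they belong to
mminfo -m

REM Then inspect the device resources; the "media type" attribute
REM reads adv_file for an AFTD
nsradmin
nsradmin> print type: NSR device
```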
mackkey52
7 Posts
October 6th, 2015 16:00
My volume is an AFTD; the media type is adv_file.
Also, thank you for the help.
mackkey52
7 Posts
October 7th, 2015 14:00
I had problems setting the retention and browse times for some reason. I'll have to read up on doing that a little more. Is there a way to skip save sets when troubleshooting a group where only certain save sets are failing and you don't want to back up the whole group again?
bingo.1
2.4K Posts
October 7th, 2015 21:00
Skipping individual save sets is a bit unusual; restricting the run to certain clients is much more practical.
However, you can only do this from the command line:
savegrp -c client [-c client ...] -R group_name
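For example (the client and group names here are hypothetical):

```shell
REM Restart the group, retrying only the two clients that failed
savegrp -v -c filesrv01 -c filesrv02 -R Daily_FS
```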
Davidtgnome
66 Posts
October 9th, 2015 04:00
We set up special groups to rerun failed clients. We remove clients that always fail from their groups until we figure out why they are failing.
bingo.1
2.4K Posts
October 9th, 2015 06:00
Good idea.
Try to figure out what their errors have in common.