September 28th, 2016 11:00
Most efficient Networker-based migration to new Datadomain
I am currently replacing two old Data Domains with two new DDs in a tapeless environment with 7-year retention on the DD save sets. DD Boost over IP is used for all backups (RMAN DB, SQL DB, filesystems, VMware VBA).
Collection replication cannot be used because the target Data Domain is smaller than the source, and some fresh devices and volumes are also not a bad idea in this config.
Migration-plan:
1. Add new DD Boost devices to the existing backup pools and new DD Boost devices to the existing clone pools.
2. Set the old DD device in all pools to read-only, which switches all new backups to the new DDs.
3. Use a staging policy in the GUI to move all data from the old device to the new device in each pool.
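For reference, step 2 can also be done from the CLI. A rough sketch (pool and volume names below are made-up examples, not from my environment):

```shell
# List the volumes in the backup pool to identify the old DD Boost volume
# ("BackupPool1" is an example pool name)
mminfo -m -q "pool=BackupPool1"

# Mark the old DD Boost volume read-only so new backups land on the new
# device ("oldDD.001" is an example volume name; -y skips confirmation)
nsrmm -o readonly -y oldDD.001
```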
Steps 1 and 2 executed successfully. Step 3 also works for the primary backup pool; however, NetWorker fails to stage the clone-pool data to the new device in the pool. The log displays the error "No space was recovered from device since there was no saveset eligible for deletion.", but gives no reason why it did not execute the staging (which is in fact cloning plus deletion of the old save set).
Any clue why this does not work as I expect?
Thanks, JP
The setup:
Networker 8.2.3.5
Backups over IP with DD Boost to a single storage unit (SU) on each DD.
Clone Controlled Replication (CCR) over DD Boost.
Multiple backup pools, and for every backup pool a clone pool.
Each pool consists of one DD Boost device on the old DD and one DD Boost device on the new DD.
VBA appliance for VMware data (Avamar 7.1 under the hood).
Preference is to keep the old pools during the migration, to avoid changing all clients to new pools and new pool names.


ble1
September 28th, 2016 11:00
I'm not sure what the GUI does, but from the CLI, when you move (stage) an ssid, unless you specify ssid/cloneid it will move the primary copy and remove the old one (including clones). Again, I have never used the GUI for this, so I don't know off the top of my head how the GUI handles it. I would use the CLI to check whether the clones are there, and then move them (actually, I would not: I believe it would be faster to create the clones on the new DDs directly).
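Checking which clone instances exist and staging one specific instance could look roughly like this (the ssid/cloneid and all names here are illustrative, not real values):

```shell
# Show all save-set instances on the old clone volume, including the
# cloneid of each copy ("oldCloneDD.001" is an example volume name)
mminfo -q "volume=oldCloneDD.001" -r "ssid,cloneid,name,savetime,volume"

# Stage one specific clone instance into the pool holding the new device:
# -m migrates (clone then delete the source copy), -S selects the
# instance by ssid/cloneid (the numbers below are placeholders)
nsrstage -b ClonePool1 -m -S 1234567890/1234567891
```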
jpveen
September 28th, 2016 23:00
On the CLI you don't have the option to stage directly at the volume level, only at the save-set level, so that's why I prefer the GUI.
The main question/issue here is probably whether staging within the same pool (to a different device) is supported and works as expected.
Any experiences with this?
bingo.1
September 29th, 2016 02:00
Working at the volume level may lead to the loss of an incomplete save set if it spans volumes (on tape).
- In older versions (pre 7.5?) all save sets that were (even partially) stored on a medium got involved.
- Currently only save sets which were started on the volume get involved.
Fortunately you are working with disk media.
Migrating within the same pool works in general.
However, if you want to migrate disk save sets to a certain volume, the only option is to dismount all the others and/or set them to read-only.
So I would always resolve the save set, not the volume. And always use the cloneid along with the ssid.
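A rough sketch of that approach: resolve every instance on the old volume to explicit ssid/cloneid pairs, then stage each one (all names are examples, and the mminfo output parsing assumes the default one-header-line report format):

```shell
# Resolve each save-set instance on the old volume to an ssid/cloneid
# pair, then stage it explicitly into the pool with the new device.
# "oldCloneDD.001" and "ClonePool1" are placeholder names.
for sc in $(mminfo -q "volume=oldCloneDD.001" -r "ssid,cloneid" |
            awk 'NR>1 {print $1 "/" $2}'); do
    nsrstage -b ClonePool1 -m -S "$sc"
done
```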
ble1
September 29th, 2016 12:00
Volume-based is also ssid-based; it just resolves the ssids in the background, so there is no real difference there. A quick check of the nsrclone manual shows you can do volume-based migration too.
One thing to remember (I believe it applied to cloning, but the same should hold for staging, I guess): since some version, when deciding what to handle in a forest of ssids, NW will take into account only those ssids which:
a) are complete on the volume
b) start on that volume
So if you use DD Boost or AFTD devices there is nothing to worry about, but if for some reason you also have classic file devices, it is something to keep in mind (some people use classic file devices because they can apply quotas through NW disk devices that way).
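A volume-based nsrclone sketch, for illustration (names are placeholders; verify the exact options, especially any migrate flag, against the nsrclone man page for your release):

```shell
# Clone all save sets from the old clone volume into the pool that now
# contains the new DD Boost device ("ClonePool1"/"oldCloneDD.001" are
# example names). Note: this only clones; deleting the source copies
# afterwards is a separate step (or a migrate option, if your release
# supports one).
nsrclone -b ClonePool1 oldCloneDD.001
```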
bbeckers1
October 5th, 2016 05:00
One would indeed think of putting the original devices in read-only mode to prevent any new data from being sent to them. The drawback is that things like the daily automatic nsrim run will then not be able to clean up anything on the original DD Boost device, because the device is read-only. I can't recall whether the same applies when trying to stage data away from a read-only device, as NW would not be allowed to delete anything from it, but I guess it does.

By design, a pool on another DD would have another name, so we prevent the old pool on the old DD from being used by having backups already go to the new pools.

Staging data can still be cumbersome due to limitations in the GUI, but even on the CLI you can run into issues, for instance with nsrstage not being able to specify both the NW storage node to read from and the one to write to. In certain situations nsrstage might simply keep using the wrong NW storage node to read from. Luckily, nsrclone has both options AND an option to perform the stage (was it -m for migrate, or something similar?). So if you want full control over staging by being able to specify both the read-from and write-to NW storage node, use nsrclone, as it fits the job better than nsrstage.