Unsolved
61 Posts
0
1881
Clone from Tape to Data domain
Hello all,
I am having a strange issue with cloning a job from tape to a Data Domain device. The clone will run and the save set gets transferred, but then I get a report back that it failed. The clone.log does not appear to be helpful; it just says "there were errors". I can clone back and forth between my Data Domains with no issues, and from the Data Domain to tape. It's only going from tape to the Data Domain that fails.
I'm running it all through clone jobs in NetWorker.
I'm currently looking into nsrclone commands with verbose output. Anything else I should try in the meantime?
bingo.1
2.4K Posts
1
October 11th, 2014 06:00
Run the save set clone in debug mode (nsrclone -D5 ...). This should give you a clue.
NetworkSupport-
61 Posts
0
October 11th, 2014 07:00
Here is the log.
I must stress that the save set DOES appear in the Data Domain device I have cloned it to, so I'm unsure as to WHY it's listed as failed.
C:\Users\admin.smithson>nsrclone -v -D5 -b "620clone" -J SUSBCK04 -d SUSBCK04 -S 946271956/1399425182
10/11/14 10:14:37.805068 'nsrclone' is invoked by 'none', co_pjobid=none
10/11/14 10:14:37.805068 nsr_getserverhost(): returning clus_get_hostname() = susbck05.llsa.local
10/11/14 10:14:37.867567 RPC Authentication: error in LookupAccountSid: No mapping between account names and security IDs was done. (Win32 error 0x534)
10/11/14 10:14:37.898816 lgto_auth: redirected to susbck05.llsa.local prog 390103 vers 2
10/11/14 10:14:37.914441 lgto_auth for `nsrd' succeeded
Job 'nsrclone of group ' has jobid 1249209.
Obtaining media database information on server susbck05.llsa.local
10/11/14 10:14:37.992564 lgto_auth: redirected to susbck05.llsa.local prog 390103 vers 2
10/11/14 10:14:38.008189 lgto_auth for `nsrmmdbd' succeeded
10/11/14 10:14:38.023814 Calling process_clone_....
10/11/14 10:14:38.070688 lgto_auth: redirected to susbck05.llsa.local prog 390103 vers 2
10/11/14 10:14:38.086313 lgto_auth for `nsrd' succeeded
10/11/14 10:14:38.148811 jukebox_device_host(), client:'', server:'', jukebox:'rd=SUSBCK04:ADIC@2.1.1', device_type: ''
10/11/14 10:14:38.148811 device_host(), devfull: rd=SUSBCK04:\\.\Tape1
10/11/14 10:14:38.148811 device_host: devhost returned = SUSBCK04
10/11/14 10:14:38.148811 jukebox_device_host(), device node: 'SUSBCK04'
10/11/14 10:14:38.148811 Setting destination node to 'null' for clone '1399425182'
10/11/14 10:14:38.148811 Entering add_to_cl_series
10/11/14 10:14:38.148811 skipping cloneid=1399425182 for ssid=946271956: already in the list
10/11/14 10:14:38.148811 Exiting add_to_cl_series
10/11/14 10:14:38.148811 Entering add_to_cl_series
10/11/14 10:14:38.148811 skipping cloneid=1399425182 for ssid=946271956: already in the list
10/11/14 10:14:38.148811 Exiting add_to_cl_series
10/11/14 10:14:38.148811 JOBATTR_SAVESET_ID has '1' save-set IDs to be cloned:
saveset id: 946271956/1399425182;
10/11/14 10:14:38.148811 Snode_units has:
10/11/14 10:14:38.148811 Type - Regular
10/11/14 10:14:38.164436 src snode - SUSBCK04
10/11/14 10:14:38.164436 dst snode - null
10/11/14 10:14:38.164436 ss list -
10/11/14 10:14:38.164436 ssid/cloneid = 946271956/1399425182
10/11/14 10:14:38.164436 vol list -
10/11/14 10:14:38.164436 volid = 2305290481
10/11/14 10:14:38.164436 vname = 000109
10/11/14 10:14:38.164436 ssinfo -
10/11/14 10:14:38.164436 client - VUSWEB10
10/11/14 10:14:38.164436 ssid = 946271956
10/11/14 10:14:38.164436 cloneid = 1399425182
10/11/14 10:14:38.164436 client = VUSWEB10
10/11/14 10:14:38.164436 ss clone instances -
10/11/14 10:14:38.164436 cloneid = 1399425182
10/11/14 10:14:38.164436 ss vid series -
10/11/14 10:14:38.164436 ssid/volid = 946271956/2305290481
80470:nsrclone: Following volumes are needed for cloning
80471:nsrclone: 000109 (Regular)
10/11/14 10:14:38.164436 Entering do_clone_operation
10/11/14 10:14:38.164436 skipping attempt to use thread for cloning
10/11/14 10:14:38.164436 Entering process_clones
10/11/14 10:14:38.164436 Entering clone_snode_unit for Regular clone
10/11/14 10:14:38.164436 Entering process_this_snode for Regular clone
5874:nsrclone: Automatically copying save sets(s) to other volume(s)
79634:nsrclone:
Starting Regular cloning operation...
6217:nsrclone: ...from storage node: SUSBCK04
10/11/14 10:14:38.164436 get_ss_list successful after <0> retries
10/11/14 10:14:38.180061 get_saveset_list succeeded for Regular clone
10/11/14 10:14:38.211310 EXIT add_client_rdz, rdz=?
10/11/14 10:14:38.211310 start_regular_clone: before nsr_start, cl_input_al has
forced volume location: SUSBCK04;
job id: 1249209;
manual: Yes;
NSR operation: cloning;
save sets: \
bff6f77a-00000006-3866f6d4-5366f6d4-00c91800-b73b72fe/1399425182;
save storage node: SUSBCK04;
ss restricted data zones: ;
volume location: SUSBCK04;
volume pool: 620clone;
10/11/14 10:14:38.211310 calling clntnsr_start_pools_2_2
10/11/14 10:25:56.776406 calling clntnsr_end_2_2
10/11/14 10:25:56.776406 gen_clone_result_cur_sn: ENTER
10/11/14 10:25:56.776406 snl_src_snode=SUSBCK04, snl_dst_snode=?
79625:nsrclone: Failed to clone any Regular save sets
10/11/14 10:25:56.776406 gen_clone_result: EXIT
10/11/14 10:25:56.776406 Exiting process_this_snode for Regular clone
10/11/14 10:25:56.776406 Exiting clone_snode_unit for Regular clone
10/11/14 10:25:56.776406 Exiting process_clones
10/11/14 10:25:56.776406 Exiting do_clone_operation
10/11/14 10:25:56.776406 alldone(): ENTER
10/11/14 10:25:56.776406 report_job_completion has the following attrlist:
completion severity: 50;
completion status: failed;
failed savesets: 946271956;
10/11/14 10:25:56.792031 free_snode_units ENTER
10/11/14 10:25:56.792031 free_snode_units EXIT
10/11/14 10:25:56.792031 nsrclone, alldone(): EXIT
NetworkSupport-
61 Posts
0
October 11th, 2014 13:00
Hm.
I cloned data from just a few days ago... no failure.
Would this be marked as a failure if the cloned data was marked as recyclable? E.g. it's past our browse time?
m_kilpatrick
10 Posts
1
October 13th, 2014 06:00
It could be related to the data being marked as recyclable. I know that if you scan in an old tape with the intention of rebuilding the client indexes, and the client data has expired, then you have to change the browse and retention on the client (prior to starting the scan job). I once watched a scan job run through a whole tape with no change in the client index - I changed the browse time from 3 months to a year and ran the scan job again successfully.
Perhaps you should change the browse time on the client SSID and then run the clone job.
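If you want to try that, something like the following should work. This is a sketch using the SSID from the debug output above; check the exact nsrmm option syntax and date formats against your NetWorker release before running it:

```shell
# Check the save set's current browse/retention times and status flags
mminfo -avot -q "ssid=946271956" -r "ssid,cloneid,ssbrowse,ssretent,sumflags"

# Extend the browse (-w) and retention (-e) times on that SSID before
# recloning (dates are examples; adjust to your policy)
nsrmm -S 946271956 -w 12/31/2015 -e 12/31/2015
```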
bingo.1
2.4K Posts
1
October 13th, 2014 14:00
The save set status 'recyclable' could really be the key issue.
Starting with the AFTD device type, recyclable save sets are deleted as soon as possible.
This would also explain the gap between 10:14 and 10:25.
My assumption is that the save set is cloned but then deleted right away.
And because everything works as designed, you do not receive a specific error.
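One way to test this theory (a sketch; the exact sumflags letters can vary between releases): query the media database right after the clone finishes and see whether a new clone instance ever appeared on the Data Domain volume, and what its flags are.

```shell
# List all clone instances of the save set with their volumes and flags;
# an expired/recyclable flag in sumflags on the Data Domain volume
# would support the "cloned, then immediately deleted" theory
mminfo -avot -q "ssid=946271956" -r "volume,ssid,cloneid,sumflags,ssretent"
```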