Unsolved

July 3rd, 2014 08:00

Clone storage node

Hi all!

Case for you guys:

1. Set up 1 server (server1) and 1 storage node (sn1).

2. Create an AFTD for server1 and one for sn1.

3. Attach 1 jukebox to server1 and one to sn1.

4. Back up client1 to both server1's AFTD and sn1's AFTD.

5. Create a clone job to clone the backups of client1 to a pool (say, Default Clone).

6. Make sure that both jukeboxes allow "Default Clone" tapes.

7. Start the job and watch NetWorker clone all save sets to the tapes of server1.

In my opinion, the expected result would be that the matching save sets on server1 are cloned to a tape on server1, and the save sets on sn1 are cloned to tapes on sn1.

This is NetWorker version 8.1.

Is this fixed in later revisions, or is it just "expected behaviour"?

736 Posts

July 3rd, 2014 23:00

Hi,

This may be expected behaviour, depending on how you have set up the 'Clone Storage Nodes' field in the properties of sn1 (Configuration tab).

From the NetWorker Cloning Integration Guide, page 35:

"Use the following criteria to determine the storage node to which the clone data will be written (write source):

- The Clone Storage Node attribute of the read source storage node is used as the write source."

If there is nothing in there, then the NetWorker server's equivalent field will be consulted.  If that has no value, the NetWorker server's Storage Node field (this time in the client properties, not the Storage Node properties) will be used.
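The fallback order Bobby describes can be sketched as a small lookup. This is purely an illustrative model of the documented selection rules, not a NetWorker API; the dictionary keys are simplified stand-ins for the resource attributes:

```python
# Illustrative model of the documented write-source selection order.
# Not a NetWorker API; attribute names are simplified for this sketch.

def pick_write_source(read_node, server):
    """Return the storage node the clone data should be written to:
    1. the 'Clone storage nodes' attribute of the read source storage node,
    2. else the NetWorker server's own 'Clone storage nodes' attribute,
    3. else the 'Storage nodes' field of the server's client resource."""
    if read_node.get("clone storage nodes"):
        return read_node["clone storage nodes"][0]
    if server.get("clone storage nodes"):
        return server["clone storage nodes"][0]
    return server["storage nodes"][0]

# sn1 has its own clone storage node set; the server resource does not.
sn1 = {"clone storage nodes": ["sn1"]}
srv = {"clone storage nodes": [], "storage nodes": ["nsrserverhost"]}

print(pick_write_source(sn1, srv))  # sn1's own attribute wins
print(pick_write_source({}, srv))   # empty attribute falls through to the server's field
```

If this model matched the product's behaviour, save sets read from sn1 would go to sn1 whenever its attribute is populated; the thread below is about cases where that did not happen.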

-Bobby

20 Posts

July 4th, 2014 00:00

Yes, I did read that in the manual. But even when I set sn1 as the clone storage node for sn1 (which should be the default, in my opinion), the clone process still used the storage node of the server if the clone job contained save sets from both sn1 and server1. THAT is against what the manual says...

736 Posts

July 4th, 2014 00:00

If it's not obeying the 'clone storage node' field of the read source storage node, then that sounds like a bug.  Could be this one:

126779 : Clone Controlled Replication (CCR) does not honor write storage node            
https://support.emc.com/kb/126779

-Bobby

14.3K Posts

July 4th, 2014 02:00

Dag wrote:

Yes, I did read that in the manual. But even when I set sn1 as the clone storage node for sn1 (which should be the default, in my opinion), the clone process still used the storage node of the server if the clone job contained save sets from both sn1 and server1. THAT is against what the manual says...

How (and most importantly, where) did you set the cloning relationship?  I have something similar and it works just fine (though on NW 8.0.3.x).

20 Posts

July 4th, 2014 07:00

I set the clone storage node in the "Storage node" settings, sn1 for sn1 and server1 for server1.

14.3K Posts

July 4th, 2014 17:00

Looking back at your step 4: you said you did backups on both the backup server and sn1.  Is it possible that save sets from the backup server were cloned there, and those from sn1 went to sn1?  Normally I would expect one SN to be used regardless of the number of SNs defined (unless one is really not available), but only you know how you ran the tests, and you haven't provided any logs yet...

20 Posts

July 5th, 2014 03:00

The whole story:

I am teaching NetWorker, and this case was actually initiated by one of my students asking why the clone went to the "wrong" storage node (only to the SN). As such, I don't know exactly what happened beforehand. I did check that there were save sets on both AFTDs (server and SN) covered by the clone setup.

The initial reason turned out to be that he had set the "Clone storage node" in the storage node settings to be the target. We removed that setting, upped the requested number of copies to 2 to get a new clone done, and reran the clone operation. Now both the storage node and the server cloned to the server. We expected them to be split to the corresponding "local" targets.

Then we set the "Clone storage node" setting on each storage node to be the local one (which I actually thought was the default...), upped the number of copies to 3, and once again reran the clone operation. Once again, all the save sets were cloned to the server only.

I should probably set up my own version of this, so that I am in control of the whole chain myself, test the result, and then get back with answers?

14.3K Posts

July 5th, 2014 10:00

The easiest way is to test it yourself.  Even easier if your server is Linux.

Based on what you said, you use NSR cloning that is not based on a group, but rather one you set up yourself.  In there, there is also an option for the read and write storage node, so perhaps this was not set right.  You can test it yourself and check the nsrclone process to see which arguments are being used (an alternative is to check nsrtask.raw, which should also contain the CLI version of the command built by the server and may reveal the settings used).

20 Posts

July 6th, 2014 07:00

I then tested with explicit settings in "Clone storage node", and then it seems to work... So it does behave as specified. The only strange thing is that the default "Clone storage node" is NOT the storage node itself. But not a bug.

20 Posts

July 6th, 2014 07:00

OK, I set up a test myself, and NW indeed did clone both save sets (from the SN and the server) to a tape at the server. And you are right that this is a "NSR clone" setup.

The command generated is:

'nsrclone -S -D 0 -s nwwindows -b "Default Clone" -C 1 -c winclient -t "Sat Jul 05 09:50:37 2014" -e "Sun Jul 06 09:50:37 2014"'

I checked that the "Clone storage node" is empty in the storage node settings.

Seems like a bug to me...

The exact version of NW in the lab is: 8.1.0.1 build 199
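One way to see the point being made here is to tokenize the generated command above: none of its flags names a read or write storage node, so the write source has to come from the resource attributes. A small sketch (the set of flags treated as taking no value is an assumption made for this particular command line, not a full nsrclone grammar):

```python
import shlex

# The nsrclone command NetWorker generated in the lab, quoted from the post above.
cmd = ('nsrclone -S -D 0 -s nwwindows -b "Default Clone" -C 1 -c winclient '
       '-t "Sat Jul 05 09:50:37 2014" -e "Sun Jul 06 09:50:37 2014"')

tokens = shlex.split(cmd)

# Assumption for this command only: -S takes no argument here (save-set mode,
# with the ssids presumably supplied separately); every other flag takes one value.
NO_VALUE_FLAGS = {"-S"}

args = {}
i = 1  # skip the program name
while i < len(tokens):
    flag = tokens[i]
    if flag in NO_VALUE_FLAGS:
        args[flag] = None
        i += 1
    else:
        args[flag] = tokens[i + 1]
        i += 2

print(args)
# No flag in this command names a read or write storage node, so the write
# source must be decided by the 'Clone storage node(s)' attributes instead.
```

Only the server (-s), pool (-b), client (-c), and time window (-t/-e) are pinned on the command line, which is consistent with the attribute fallback deciding where the clones land.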

14.3K Posts

July 6th, 2014 07:00

If you are testing 8.1, then test it with the latest patch level, which is 8.1.1.6, but with respect to this issue I doubt anything would change.

Because you run nsrclone outside a group (and with tapes), it will use a slightly different set of rules, which includes read hostnames and the like.  I would suggest always specifying the destination storage node (when using the CLI, I did that all the time).
