I presume that you are triggering nsrclone from your storage node. You would need to combine the -J switch on the nsrclone command with setting up the clone storage node in your client's configuration.
The -J switch in nsrclone specifies the storage node from which the data is to be read, and the clone storage node field hard-codes the destination storage node where the clones are to be written.
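As a rough sketch, an invocation using -J could look like this (the server, storage node, pool, and save set ID are placeholders, not values from the thread; the command is built as a string and only printed, so the example is side-effect free):

```shell
# Hypothetical names: nw_server (NetWorker server), sn1 (read-side storage
# node), ClonePool (destination pool), 1234567890 (save set ID).
# -J names the storage node that reads the source data; the write side is
# taken from the client's clone storage node attribute, not from a flag.
clone_cmd='nsrclone -s nw_server -J sn1 -b ClonePool -S 1234567890'
echo "$clone_cmd"
```

On a real storage node you would run the nsrclone command directly instead of echoing it.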
I forgot to mention that I'm using scripts for my staging and cloning operations, so I'm using the nsrstage and nsrclone commands. Is it still possible to do it that way? And can you be more specific about how to do it, please?
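For context, a minimal script of the kind described might look like the sketch below (the server name, pool names, storage node, and save set ID are all placeholders; the commands are echoed rather than executed so the sketch stays side-effect free):

```shell
#!/bin/sh
# Placeholder values -- substitute your own server, pools, and ssid.
SERVER=nw_server
SSID=1234567890

# Stage the save set to the StagePool devices; -m removes the source
# copy once staging succeeds.
stage_cmd="nsrstage -s $SERVER -b StagePool -m -S $SSID"

# Clone the save set; -J (NetWorker 7.4+) picks the reading storage node.
clone_cmd="nsrclone -s $SERVER -b ClonePool -J sn1 -S $SSID"

echo "$stage_cmd"
echo "$clone_cmd"
```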
It still doesn't work... Here is an example of the configuration I set up. I have a NetWorker server NW, a storage node SN, and a client C1. In the 'Storage Nodes' and 'Clone Storage Nodes' fields of both SN and C1, I've put SN. The staging operation works fine when I launch the nsrstage command from SN. But when I try a cloning operation, it wants to mount a volume from my cloning pool on SN (which SN doesn't have, since I want to do the cloning on NW). I'm surely missing something somewhere, but I don't know what it is...
I understand that you want to clone to the NetWorker server, NW, as the destination, with the read done from your storage node SN. You would want to set your clone storage node to NW and use the -J SN switch in nsrclone. If I am right, -J is available from 7.4.x onwards, not in earlier versions. In any case, setting the clone storage node to NW for the client C1 should fix your problem.
I've tried it, but it still doesn't work: it still wants to mount a tape from my cloning pool on SN. The NetWorker release is 7.4.3, so that shouldn't be a problem. I read somewhere that the 'Clone Storage Nodes' field is only used in a storage node client's resource. Is that right?
I'm not sure pools would even help... The usual practice is to do this over the SAN. If your BCK library is visible only on the BCK server and the SN library only on SN, then you might have a problem making this setup work. You could, of course, add an nsradmin change of the storage node field to your script; that would be the easiest workaround.
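The nsradmin workaround could be scripted along these lines (the client name C1 and storage node NW are taken from the example earlier in the thread; the input file is only written and printed here, and the commented line shows how it would be applied):

```shell
# Sketch: an nsradmin input file that retargets client C1's clone storage
# node to NW before the clone runs. Written and shown, not applied.
cat > /tmp/set_csn.nsradmin <<'EOF'
. type: NSR client; name: C1
update clone storage nodes: NW
EOF
cat /tmp/set_csn.nsradmin
# To apply for real: nsradmin -s NW -i /tmp/set_csn.nsradmin
```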
First of all, the clone storage node attribute is never checked for regular clients - this resource is checked only for storage node clients.
In this specific case, the source of the data for staging is controlled by the location of the AFTD (on SN), and the target is controlled by a pool that, I assume, is available only in the second (SN) library.
For cloning, the source is controlled by the read hostname of the library - probably SN, as I believe this library is not shared with the server - and the target is controlled by the pool. The storage node used in this specific case is the one specified by the clone storage node attribute in the SN client resource.
I assume the problem here arises when cloning and staging share the target host where the appropriate target pool must be loaded. -J doesn't help here, as it is used only for the recovery (reading) part. The solution would be to have the devices of both libraries visible on both SN and BCK (the BCK library) via the SAN, and to control this via pools.
Nothing special. The assumption is that the hosts are on the SAN, so you can make those drives visible to them.
If I understand correctly, you have a setup like this:
host: bck_server, lib: bck_lib
host: sn1, lib: sn1_lib
All drives of bck_lib are shown on bck_server and all drives of sn1_lib on sn1. Additionally, you can do two things:
- show the same physical drives from bck_server to sn1 (that would be DDS)
- show some physical drives from bck_lib to bck_server and some to sn1
What you go after really depends on your needs and available resources.
ble1 (October 26th, 2008), replying to "...node and clone to another one, considering that the NetWorker Server is also a SN?": Yes, you don't even need the CLI for such a setup.
JFA1 (October 27th, 2008): What do you mean by that? Is there a relation between this and the Shared Devices properties?