
NetWorker's behaviour when cloning save sets and index with a storage node


My infrastructure:

Two data centers in two different locations.

One is the production data center (PDC); the other is our backup data center (BDC) with test and migration servers.

The NetWorker server is installed in the PDC with two tape devices, and a storage node with two tape devices is installed in the BDC.

For years our workflows have processed client backups as follows:

First an "original" (primary) backup from the PDC to BDC tapes (pool "org"), and after that a second backup from the PDC to PDC tapes (pool "dup", named for "duplication").

Now we need to back up a server installed in the PDC via NetWorker workflows, and this time we want to use the cloning feature.

We created new pools "org" and "orgclone". The first hint was that the index and bootstrap should be written to the local tape devices in the PDC, so we also created two new pools: "index" and "indexclone".

Now the save sets go to the pool "org" in the BDC; after they are saved, NetWorker writes the index and bootstrap to the pool "index" in the PDC.

Once this is finished, cloning begins from "org" to "orgclone". After cloning the save sets, NetWorker then tries to clone the index and bootstrap from pool "index" (PDC) to "orgclone" in the BDC...

This is not what we planned; NetWorker waits until a timeout. In the detailed group view you can see that the status of the clones is "Succeeded" but only 50% completed.

After 10 minutes or so, I found the index and bootstrap SSIDs on the "orgclone" tape...
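One way to confirm where those save sets ended up is to query the media database. The commands below are a sketch using the pool names from this post; `networker_server` is a placeholder for your NetWorker server's hostname:

```shell
# List save sets that landed on volumes in the "orgclone" pool,
# showing save-set ID, name, and the volume they are on
mminfo -s networker_server -q "pool=orgclone" -r "ssid,name,volume"

# List the bootstrap save sets recorded in the media database
mminfo -s networker_server -B
```

If the index and bootstrap SSIDs appear in the first listing, the clone to "orgclone" did complete, just against the unintended pool.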

I want to understand what NetWorker is doing and why it is waiting. Can I reduce this timeout?

1 Reply

Re: NetWorker's behaviour when cloning save sets and index with a storage node

To check this, may I suggest that you manually clone these save sets from the command line, explicitly setting the storage-node options.
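A minimal sketch of such a manual clone, assuming the pool names from the original post; `networker_server` and `<ssid>` are placeholders you would substitute:

```shell
# Find the save-set IDs of the index/bootstrap save sets in the "index" pool
mminfo -s networker_server -q "pool=index" -r ssid

# Clone one of those save sets into the "indexclone" pool
nsrclone -s networker_server -b indexclone -S <ssid>
```

Which storage node reads and writes during the clone is governed by the storage-node affinity attributes on the relevant client resources (e.g. the "Clone storage nodes" attribute), so checking those may explain why the automatic clone crossed from the PDC to the BDC devices.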
