November 28th, 2014 09:00

Networker Recovery from DDBoost Clone Copy

Hi,

We have two NetWorker data zones (8.1.1.5) in two different locations, with two Data Domains (DD4500, DD OS 5.4), one in each zone. All our backups go to the DDs using DDBoost with Client Direct, and all backups are replicated to the other DD using NetWorker group clone jobs.

I received a request to restore data from client ABC, which is backed up under my primary server S1, to client XYZ, which is located and backed up under my secondary server S2.

I have managed to get communication working between XYZ and S1 (and vice versa) and created a test client XYZ on the S1 backup server. My requirement is to restore the data directly from the secondary DD (D2) to XYZ, without using any storage node. Is this possible, and if so, how?

I logged in to the XYZ server and ran the command below:

recover -c abc -s S1 -R XYZ -b Clone_pool -iN

But the data is still not moving directly from my secondary DD (D2) to XYZ; instead it flows from my primary server's storage node SN1 to XYZ. Because of this, my restore is running very slowly, and I need to restore ~5 TB of data.

Please suggest a better way to perform this restore faster.

Thanks!!

1 Rookie • 14.3K Posts

November 28th, 2014 12:00

Flag original ssid as suspect.
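In case it helps, a hedged command-line sketch of flagging the original copy; the ssid/cloneid values are placeholders you would take from the mminfo output:

```shell
# List the ssid/cloneid pairs for client ABC (names are from this thread).
mminfo -s S1 -q "client=abc" -r "ssid,cloneid,volume,pool"

# Mark the ORIGINAL copy (the one on D1) as suspect so NetWorker
# prefers the clone on D2 during the recover.
nsrmm -s S1 -o suspect -S <ssid>/<cloneid>

# Revert after the restore completes.
nsrmm -s S1 -o notsuspect -S <ssid>/<cloneid>
```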

12 Posts

November 29th, 2014 02:00

Hi Hrvoje,

Thanks for your quick reply. Even if I mark my original save set as suspect, will the data flow directly from the secondary DD (D2) to XYZ (just like Client Direct)? What do I need to do in order to push the data directly from the DD to the client?

What I am seeing is that my data is being pushed from the primary data zone's SN1 to XYZ, from either D1 or D2, instead of being sent directly from D2.

Also, is there any option to increase the number of parallel recover streams for a particular restore?

12 Posts

November 29th, 2014 05:00

Hi Hrvoje,

We are not using the clone devices on D2 from our secondary backup server S2 or from SN2; they are mounted on SN1 only. Do I need to mount these D2 clone devices on my secondary data zone's SN2 in order to get a direct restore from D2 to XYZ?

1 Rookie • 14.3K Posts

November 29th, 2014 05:00

Start the recover from XYZ after the ssid/cloneid on D1 is marked suspect, and you should see the flow go directly from the replica on D2 to XYZ. The only reason the data flow would go through SN1 is if DDBoost communication was not possible and a fail-back to the traditional method happened (here I assume that both D1 and D2 have devices attached to SN1, though it would be more logical to have D1 on SN1 and D2 on SN2).

1 Rookie • 14.3K Posts

November 29th, 2014 05:00

OK, I re-read your setup. Data from ABC is on both D1 and D2 (the replica), right? I don't see you saying anywhere that you cloned this, but I assume you did, as otherwise how would data come from D2? Now, when you cloned it, how exactly were the devices set up?

12 Posts

November 29th, 2014 06:00

OK, let me be very clear on this:

S1 --> SN1 --> 4 DD D1 devices & 4 DD D2 devices

S2 --> SN2 --> 4 DD D2 devices & 4 DD D1 devices

Note: no single device is mounted on two SNs/NetWorker servers (all 8 devices are identical).

We are not using DD replication. We are cloning the data using a NetWorker clone on group completion, from the D1 devices on SN1 to the D2 devices on SN1. The setup is the same at the other location as well.
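For reference, the same copy that the group makes on completion can also be driven by hand with nsrclone; this is just a hedged sketch, with the ssid as a placeholder:

```shell
# Find the save sets written for client ABC (client name from this thread).
mminfo -s S1 -q "client=abc" -r "ssid,cloneid,volume,pool"

# Manually clone one save set from the D1 devices into the pool backed
# by the D2 devices on SN1 (Clone_pool is the pool name from this thread).
nsrclone -s S1 -b Clone_pool -S <ssid>
```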

1 Rookie • 14.3K Posts

November 29th, 2014 06:00

OK, we can forget S2, as it is irrelevant. ABC wrote to S1. You defined XYZ on S1 as well, since otherwise you couldn't restore to it. Cloning is done via NW. So, ABC wrote to one or more of the 4 D1 devices/volumes, and then you made a clone to one or more devices/volumes on D2 attached to SN1. From my tests, there are two ways to get this working:

a) if you disable the device or storage node holding the original volume, NW will continue with the replica

b) mark the original ssid/cloneid pair as suspect

In my case I used 2 SNs, each with only its own site's DD attached, but that should not matter as far as the workflow goes (except that I used a clustered server, so I had a single server and a much easier task). So, if a) does not work for you, or simply is not possible, just do b). If you see the data flow going via SN1 and not directly from D2 to XYZ, it means DDBoost communication was not possible and the restore fell back to the traditional path over the storage node. In that case, you should focus on why communication between D2 and XYZ didn't work.
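A hedged sketch of both options from the command line; the device name is hypothetical, and the exact nsradmin attribute names may differ in your release:

```shell
# Option a) sketch: temporarily disable the D1 device holding the original
# volume so NW falls over to the replica on D2. Device name is made up.
nsradmin -s S1 <<'EOF'
. type: NSR device; name: rd=SN1:D1_dev01
update enabled: No
EOF

# Option b): mark the original ssid/cloneid pair as suspect instead.
nsrmm -s S1 -o suspect -S <ssid>/<cloneid>
```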

12 Posts

November 29th, 2014 07:00

There should not be any communication problem between XYZ and D2, because XYZ sends its backup data directly to D2 every day. What I have observed is that there is no communication between D1 and XYZ. Maybe I need to look more closely at marking the original ssid/cloneid pair as suspect, or perhaps I need to establish communication between D1 and XYZ, at least to make the restore run faster than it does now (if I am not wrong).
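A few basic checks that can be run from XYZ to confirm the path to each DD; the hostnames are placeholders:

```shell
# DDBoost Client Direct needs working name resolution and a clear
# network path from the client to each Data Domain.
nslookup D1.example.com        # forward lookup from XYZ
nslookup D2.example.com
ping -c 3 D1.example.com       # basic reachability
ping -c 3 D2.example.com
# If resolution or routing to a DD fails, the restore from that copy
# cannot use Client Direct and falls back to the storage node path.
```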

Also, Hrvoje, is there any option to increase the number of parallel streams for a recover?

12 Posts

November 29th, 2014 07:00

Thank you very much for your help and support on this, Hrvoje.

1 Rookie • 14.3K Posts

November 29th, 2014 07:00

If this is a file system backup, the maximum number of parallel recover streams equals the number of save sets (ssids).
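One way to use that, sketched under the assumption that each ssid can be recovered in its own session; names are examples from this thread, and destination paths are omitted:

```shell
# List the ssids for client ABC held in the clone pool.
mminfo -s S1 -q "client=abc,pool=Clone_pool" -r "ssid" | sort -u > /tmp/ssids.txt

# Launch one save set recover per ssid in the background (one stream each).
while read ssid; do
    recover -s S1 -iN -S "$ssid" &
done < /tmp/ssids.txt
wait   # block until all parallel recover sessions finish
```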
