
23 Posts

August 28th, 2007 06:00

architecture question

We have a remote site that is slow to back up (one client machine takes 18+ hours). I'm wondering if placing a "storage node" instance up there with enough disk space could be a valid solution, rather than installing yet another Legato server. I'd associate the remote storage node with a server we have locally down here.

Is this a valid idea, or is that not how a storage node works?
I know this is not a new problem; I'm sure many have had this issue in the past.


14.3K Posts

August 28th, 2007 06:00

If I were you, I would go for a storage node, provided the link between that site and the server is reliable, because you will still have metadata going over that link plus active server-to-storage-node communication. For that, I assume a healthy 10 Mbit link should be enough. If there are any doubts, simply buy the cheapest server edition (make sure it supports disk, or use a small library) and there you go.
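If you do go that way, pointing the client at the new storage node is just the client's "storage nodes" attribute; a rough sketch with nsradmin (hostnames made up here):

    nsradmin -s nwserver.example.com
    nsradmin> . type: NSR client; name: remoteclient.example.com
    nsradmin> update storage nodes: remotesn.example.com, nsrserverhost

The leading "." selects the client resource; listing nsrserverhost after the remote storage node keeps the server itself as a fallback.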

23 Posts

August 28th, 2007 06:00

Actually, I'm sort of answering my own post...
Even if it would work, this does not really seem like a viable solution: the backup would be in the same location as the data, thus providing no off-site protection :-)

We are going to look into consolidation or synthetic full backups for this client. That seems more promising.


14.3K Posts

August 28th, 2007 06:00

Do you have SAN infrastructure on that site?

194 Posts

August 28th, 2007 07:00

Have you considered using compressasm to reduce the amount of data sent over the network? When we enabled it on our servers, we cut our backup times by up to 3 hours, and that's over the LAN.

Vic

23 Posts

August 28th, 2007 07:00

We do use compression and compressasm for this and other clients. The backup still takes 18+ hours; the remote link is a T1, so it's very slow.
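To put numbers on it: a T1 is 1.544 Mbit/s, call it roughly 190 KB/s, so even at line rate an 18-hour window only moves about 190 KB/s x 64,800 s ≈ 12 GB. The link itself is the ceiling here.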

I still think consolidation is probably the way to go for our issue. This data goes to disk, so adding another pool for the consolidation will not be that big of an issue.
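Rough sketch of the pool side with nsradmin (name made up, and a real pool also needs selection criteria, e.g. a group, to route the consolidated save sets into it):

    nsradmin -s nwserver.example.com
    nsradmin> create type: NSR pool; name: ConsolidatePool; pool type: Backup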


14.3K Posts

August 28th, 2007 08:00

> We do use compression and compressasm for this and other clients.

Using both is not recommended. Ever tried running two different compression algorithms on top of each other? Bad combination... the second pass finds almost no redundancy left and just burns CPU time.
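If you keep just compressasm, the client-side .nsr directive is the usual place for it; a rough sketch (path made up here, double-check the exact syntax for your release):

    << "D:\homedirs" >>
    +compressasm: *

The leading + applies the ASM to subdirectories as well.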

How to take this further really depends on the amount of data... how much do you have on the remote site?

23 Posts

August 28th, 2007 08:00

Anyone have any pointers to GOOD docs on consolidation with NetWorker?
The nwadmin guide is pretty lacking in technical details... as usual.

Our thoughts on an initial scheme for this: run a rolling consolidate at the back of our expiry period (say, 4 weeks ago), run incrementals each day thereafter, and then always have a consolidate from today - 1. A sketch of how that might look as a schedule follows the questions below.

Many questions arise:
- does consolidation throw away an incremental once a consolidate has been run?
- when a consolidate is run, does it always go back to the last full, or to a previous consolidated full?
- is it better to keep the consolidate save set as up to date as possible (i.e. always have one for yesterday)?
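From what I can tell so far, consolidation in NetWorker 7.x is driven by the schedule level "c"; something like this weekly schedule (one consolidate, six incrementals) is what we have in mind, with the name made up and the level letter still to be verified against the docs:

    nsradmin -s nwserver.example.com
    nsradmin> create type: NSR schedule; name: RemoteConsolidate; period: Week; action: c incr incr incr incr incr incr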

23 Posts

August 28th, 2007 08:00

full is:

    name          totalsize     level
    d:\homedirs   12284637956   full
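That's from mminfo, roughly this query (server and client names substituted here):

    mminfo -s nwserver -q "client=remoteclient,level=full" -r "name,totalsize,level"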


14.3K Posts

August 28th, 2007 11:00

OK, so that would be about 12 GB. I would say a small tape library with a single drive should be enough for that. If you have someone to manage the remote site, you could alternatively use a standalone device. Bear in mind that these are home directories, so data structure is important: the bad speed might also come from many small files.