CloudBoost to Amazon Web Services

I am trying to wrap my head around using CloudBoost with NetWorker to get my backups safely offsite.  I have configured everything and even ran the clone operation successfully.  My question is, when does the data move from the CloudBoost appliance to AWS, and how can I confirm the data is on AWS?  I was able to run a recovery operation and see that the save set was using the Default Clone Pool.  I am not convinced that this is pulling anything from the cloud; more likely it is coming from the CloudBoost VM.

Second question: should I be using Clone or Archive for this?  My brain tells me a clone will be an exact replica of what is on my DD2500.  If that is the case, wouldn't the retention policy of the DD2500 be pushed to the cloud storage too?  I only want limited-time storage on my DD2500, but long-term retention in the cloud.

Thanks,

Jeff

7 Replies

Re: CloudBoost to Amazon Web Services

Data is never put on the CloudBoost VM. It is a direct pass-through to the cloud.

If you want to verify that the clone has been written to the object store, you can use an S3 browsing app to browse the buckets manually. There will be thousands of randomly named chunks there, averaging 256 KB each.
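
If you would rather script the check than click through an S3 browser, a rough boto3 sketch like the one below can list the bucket and report an average object size. The bucket name and AWS credential setup are assumptions on my part; this is not a CloudBoost or NetWorker tool:

# Rough sketch only: list what has landed in the bucket and report an average
# object size. Assumes boto3 is installed, AWS credentials are configured, and
# "my-cloudboost-bucket" (a made-up name) is replaced with your real bucket.
import boto3

BUCKET = "my-cloudboost-bucket"  # hypothetical - substitute your bucket name

s3 = boto3.client("s3")
count = 0
total_bytes = 0

# Page through every object CloudBoost has written to the bucket.
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        count += 1
        total_bytes += obj["Size"]

if count:
    print(f"{count} objects, average size {total_bytes / count / 1024:.1f} KB")
else:
    print("No objects found - the clone has not been written yet")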

You want to use clone. NetWorker supports different retention times for the clone versus the original.

Re: CloudBoost to Amazon Web Services

Thanks Gordie,

I thought that was how it worked, but I was not sure.  As for the clone, I don't remember seeing retention times.  Looks like I need to go back through and find them.

Jeff

Re: CloudBoost to Amazon Web Services

In NetWorker 8.2, in the properties of the clone job under Index Management, you can set the retention time. It might be somewhere slightly different in other versions.
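
If you want to double-check what the cloned save sets actually ended up with, you can also query the media database from a script. A rough Python sketch, assuming the standard mminfo/nsrmm command-line syntax and the "Default Clone" pool name from your setup; please verify the flags against your NetWorker version before changing any retention:

# Rough sketch only: report browse/retention times for save sets in the clone
# pool, and (commented out) an example of pushing retention out with nsrmm.
# Assumes mminfo/nsrmm are on the PATH and the pool is named "Default Clone";
# check the syntax against your NetWorker version first.
import subprocess

report = subprocess.run(
    ["mminfo", "-a",
     "-q", "pool=Default Clone",
     "-r", "ssid,name,ssbrowse,ssretent"],
    capture_output=True, text=True, check=True)
print(report.stdout)

# Example only - the save set id and dates below are made up:
# subprocess.run(["nsrmm", "-S", "1234567890",
#                 "-w", "06/30/2016",   # new browse time
#                 "-e", "12/31/2020"],  # new retention time
#                check=True)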

Re: CloudBoost to Amazon Web Services

I found it, and the settings are blank.  I assume blank means indefinite?  I need to change the browse policy to something a little more reasonable.

Re: CloudBoost to Amazon Web Services

Geordie,

I have been able to get CloudBoost to connect to S3.  Now, I am trying to migrate the data to Glacier, but there is a warning about file size.  The average file size in my S3 storage is 122.0KB.  Is there any reason the average is not 256KB?  Is there a setting for this?

Thanks,

Jeff

Re: CloudBoost to Amazon Web Services

Great question, ladiver.  At the moment CloudBoost does not support AWS Glacier; the next plan is to support S3-IA (Infrequent Access).

There is no setting in CloudBoost to increase the object size. Objects come out at varying sizes, depending on file type, after they are chunked at a 256 KB target, compressed, and encrypted.

Re: CloudBoost to Amazon Web Services

Just to clarify, Glacier has a multi-hour retrieval time. That in and of itself is a complete non-starter. Glacier is also not supported with CloudBoost because, with dedupe, chunks can be part of both the oldest and the newest backup, making it very difficult to migrate the chunks in any kind of logical way.

The average chunk size will be lower than the target chunk size for a couple of reasons. For file backups, if the file being backed up is smaller than the target chunk size, the most efficient dedupe chunk is the file itself with nothing extra added. Also, the target chunk size is measured before compression and encryption.
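
As a toy illustration of those two effects (this is not CloudBoost's actual chunker, dedupe, or encryption, just a sketch of the arithmetic), split some pretend files into 256 KB target chunks and compress each one; the stored average comes out well below the target:

# Toy model only - not CloudBoost's actual chunking pipeline.
# Small files become single small chunks, and every chunk shrinks again when
# it is compressed before upload, so the stored average is under 256 KB.
import os
import zlib

TARGET = 256 * 1024  # 256 KB target chunk size, measured before compression

def stored_chunk_sizes(data: bytes):
    """Split one file into target-sized chunks and return their compressed sizes."""
    chunks = [data[i:i + TARGET] for i in range(0, len(data), TARGET)]
    return [len(zlib.compress(c)) for c in chunks]

# Pretend backup: three 100 KB files plus 1 MB of highly compressible data.
files = [os.urandom(100 * 1024) for _ in range(3)]
files.append(b"A" * (1024 * 1024))

sizes = [s for f in files for s in stored_chunk_sizes(f)]
print(f"{len(sizes)} stored chunks, average {sum(sizes) / len(sizes) / 1024:.1f} KB")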