
October 26th, 2015 22:00

Backup is too slow with a large save set of more than 10 million files

Dear All,

I have a backup client with a large save set consisting of more than 10 million files.

Operating system: Windows 2008 R2

The backup is so slow that it runs at only KB/sec, for both SAN and LAN backups.

There is hardly any server utilization while the backup is running. Is there any way to increase the backup speed when backing up millions of files?

October 27th, 2015 06:00

Hi,

you can try the bigasm directive to check whether the file system really is the bottleneck:

https://emc--c.na5.visual.force.com/apex/KB_HowTo?id=kA0700000004KjG
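
If you cannot reach the article, a crude stand-in is to time a plain walk and read of the file tree outside of NetWorker - if even this crawls, the dense file system itself is the bottleneck. A minimal sketch (this is not the bigasm tool, just a rough equivalent; the root path is hypothetical):

```python
# Rough sanity check of raw file system walk/read speed, independent of
# NetWorker. Slow here = the dense file system itself is the bottleneck;
# fast here = look at the backup path instead.
import os
import sys
import time

root = sys.argv[1] if len(sys.argv) > 1 else r"D:\data"  # hypothetical path

start = time.time()
files = 0
total = 0
for dirpath, dirnames, filenames in os.walk(root):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(1 << 20)  # read in 1 MiB chunks
                    if not chunk:
                        break
                    total += len(chunk)
            files += 1
        except OSError:
            continue  # skip locked/unreadable files

elapsed = time.time() - start
print(f"{files} files, {total / 2**20:.1f} MiB in {elapsed:.1f} s "
      f"({total / 2**20 / max(elapsed, 1e-9):.1f} MiB/s)")
```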

Regards

Michael


October 27th, 2015 11:00

First of all, it is important that you use B2D (backup to disk) - tape drives are no good here, as they will constantly shoe-shine at these data rates.

Beyond that, you do not have many choices:

  - either you compress (zip) the files at the client (see the sketch below)

  - or you run an image-style backup (block based backup or RAW partition backup)

This is pure NetWorker.
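
For the first option, a rough sketch of what pre-zipping at the client could look like - run it as a pre-backup task and point the save set at the staging directory only (all paths here are hypothetical):

```python
# Sketch of the "compress at the client" idea: roll each top-level
# subdirectory into one zip archive before the backup runs, so NetWorker
# saves a few large files instead of millions of tiny ones.
import os
import zipfile

SOURCE = r"D:\data"      # hypothetical: the dense file tree
STAGING = r"E:\staging"  # hypothetical: what NetWorker actually backs up

os.makedirs(STAGING, exist_ok=True)
for entry in os.scandir(SOURCE):
    if not entry.is_dir():
        continue
    archive = os.path.join(STAGING, entry.name + ".zip")
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for dirpath, _, filenames in os.walk(entry.path):
            for name in filenames:
                full = os.path.join(dirpath, name)
                zf.write(full, os.path.relpath(full, SOURCE))
    print("wrote", archive)
```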

It will speed up if you consider a combination of NetWorker and Avamar, but this of course requires extra investment.


October 28th, 2015 01:00

I would suggest you make use of block based backup (BBB), which is designed to solve exactly the kind of problem you are facing.

However, this solution requires the target device to be a disk type (AFTD or Data Domain). You could propose adding disk storage to your backup infrastructure to create an AFTD, then run a POC of block based backup so your organization can approve the additional cost.


October 28th, 2015 03:00

I also agree with leveraging BBB. There could be many other causes, though.

bigasm can help identify the bottleneck.

You can also split the data across multiple client definitions and run backups for individual save sets, rather than all at once.
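
If you go that route, a balanced split helps - e.g. group the top-level directories by size before defining the save sets. A sketch (root path and group count are placeholders):

```python
# Sketch: split one huge save set into N roughly equal groups of
# top-level directories (greedy bin packing by size), so each group can
# be configured as its own save set and run in parallel.
import heapq
import os

ROOT = r"D:\data"  # hypothetical root of the big save set
GROUPS = 4         # number of save sets you want

def dir_size(path):
    total = 0
    for dirpath, _, filenames in os.walk(path):
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass
    return total

dirs = [(dir_size(e.path), e.path) for e in os.scandir(ROOT) if e.is_dir()]

# Greedy: always drop the next-largest directory into the lightest group.
heap = [(0, i, []) for i in range(GROUPS)]
heapq.heapify(heap)
for size, path in sorted(dirs, reverse=True):
    total, i, members = heapq.heappop(heap)
    members.append(path)
    heapq.heappush(heap, (total + size, i, members))

for total, i, members in sorted(heap, key=lambda g: g[1]):
    print(f"save set group {i + 1} (~{total / 2**30:.1f} GiB):")
    for path in members:
        print("   ", path)
```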


October 30th, 2015 08:00

Hi Bingo,

please keep in mind that "compress (zip) the files at the client" only makes sense if neither DD Boost nor deduplication is in play (e.g. when the backup device is a Data Domain): pre-compressed data deduplicates very poorly.
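
A quick way to convince yourself: count how many identical chunks two near-identical data sets share, plain versus compressed. This is a simplified sketch with fixed-size chunks (Data Domain actually segments variable-length), but it shows the effect - one changed byte reshuffles the whole compressed stream:

```python
# Tiny demonstration of why client compression and deduplication do not
# mix: two nearly identical data sets share almost all of their chunks
# in plain form, but almost none once each set is compressed.
import hashlib
import zlib

def chunk_hashes(data, size=4096):
    return {hashlib.sha256(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)}

day1 = bytes(range(256)) * 4096  # ~1 MiB of sample data
day2 = bytearray(day1)
day2[500] ^= 0xFF                # one changed byte near the start
day2 = bytes(day2)

plain_common = chunk_hashes(day1) & chunk_hashes(day2)
zipped_common = chunk_hashes(zlib.compress(day1)) & chunk_hashes(zlib.compress(day2))

print("common plain chunks: ", len(plain_common))   # almost all of them
print("common zipped chunks:", len(zipped_common))  # close to zero
```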

Regards

Michael


October 30th, 2015 14:00

@vakil

Actually, bigasm tests network and backup device speed more than anything else - so I think its results would not be valid in this case.

@kleinenbroich

I am not sure about DD Boost - we use it on a similar file system with no client compression and the throughput is rather slow as well. But keep in mind that it all depends on a lot of factors, so you must run real tests with your real data and your backup scenario anyway to find the best method.
