I have a server with about 2.5 million small files on it that is having trouble backing up. These files are static in nature and do not change once they are written, but new ones are added to the drive daily.
I remember from Avamar training there is some optimization around the fcache & pcache that can be done to allocate more memory for improved performance at the expense of taking more resources on the host.
Would those who have had these large-file-count problems recommend this course of action?
Is there anything else anyone could recommend to try to speed up this process?
Thank you kindly,
What is the total physical RAM size on your client?
If we have a huge number of files, we should keep the file cache higher and the hash cache lower. The sum of the two should not be more than 25% of total physical RAM.
The file cache must be at least N x 44 MB, where N is the number of millions of files in the backup.
In your case N is 2.5, so you need at least 2.5 × 44 = 110 MB of file cache.
The file cache doubles in size each time it needs to grow, so the available file cache sizes are 5.5 MB, 11 MB, 22 MB, 44 MB, 88 MB, 176 MB, 352 MB, 704 MB, and 1,408 MB.
So you have to set the file cache to at least 176 MB, the smallest available size that is not below 110 MB.
By default, though, the file cache is capped at 1/8 of total physical RAM, which means your file cache limit is most likely already well above the minimum required for 2.5 million files.
I don't think you need to change the file cache, but do make sure it is still at its default value.
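The file cache sizing rule above can be sketched as a quick calculation. The 44 MB-per-million rule and the doubling tiers (starting at 5.5 MB) are taken from this thread; the function name itself is just for illustration:

```python
def min_file_cache_mb(millions_of_files: float, base_mb: float = 5.5) -> float:
    """Smallest file cache tier (MB) that covers N million files at 44 MB per million."""
    required = millions_of_files * 44  # rule of thumb: N x 44 MB
    tier = base_mb
    while tier < required:
        tier *= 2  # the file cache doubles in size each time it grows
    return tier

# 2.5 million files -> 110 MB required -> next tier up is 176 MB
print(min_file_cache_mb(2.5))
```

Running this for 2.5 million files lands on the 176 MB tier, matching the recommendation above.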
May I know whether the backup is failing outright, or just unable to complete within your specified backup time limit?
If it is failing, what is the error?
If it is unable to complete in time, did you try increasing the backup time limit?
Yes, we can do that.
Create an avtar.cmd file and add the following parameters.
Here the file cache is set to 1/6 of total physical RAM and the hash cache to 1/12 of total physical RAM (just as an example).
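The exact parameters may not have come through in the post above, so as a hedged sketch: using the avtar `--filecachemax` and `--hashcachemax` options, where a negative value is interpreted as a fraction of physical RAM, the 1/6 and 1/12 example would look something like this (please verify the flag syntax against your Avamar client version's documentation):

```
--filecachemax=-6
--hashcachemax=-12
```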
The job was running for an extremely long time, sometimes around 24 hours. I will implement these changes and let you know how it affects the job.
Just for additional information (and for users who run database backups):
We should keep the hash cache higher and the file cache lower for database backups. Typically, the hash cache must be a minimum of N MB, where N is the size of the database being backed up in gigabytes. The available hash cache sizes are 24 MB, 48 MB, 96 MB, 192 MB, 384 MB, 768 MB, and so forth. If we have 4 GB of physical RAM and back up a 600 GB database, we need at least 600 MB of hash cache; since 600 MB is not one of the available sizes, we have to round up to 768 MB.
So first assign the hash cache its required value, then assign the file cache out of the remaining RAM, keeping the sum of the two no higher than 1/4 of total physical RAM.
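The same round-up logic can be sketched for the hash cache, together with the 1/4-of-RAM budget described above (tier sizes are the ones quoted in this thread; the helper name is hypothetical):

```python
def min_hash_cache_mb(db_size_gb: float, base_mb: int = 24) -> int:
    """Smallest hash cache tier (MB) for a database of db_size_gb gigabytes."""
    tier = base_mb
    while tier < db_size_gb:  # rule of thumb: an N GB database needs N MB of hash cache
        tier *= 2
    return tier

ram_mb = 4 * 1024                    # 4 GB physical RAM, as in the example above
hash_mb = min_hash_cache_mb(600)     # 600 GB database rounds up to the 768 MB tier
budget_mb = ram_mb // 4              # combined caches capped at 1/4 of physical RAM
print(hash_mb, budget_mb - hash_mb)  # hash cache, and what is left for the file cache
```

For the 4 GB / 600 GB example this yields a 768 MB hash cache and leaves 256 MB of the cache budget for the file cache.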
I would recommend taking a look at the logs. If the caches are undersized, the logs will have messages about the cache being unable to grow and hashes being booted from the cache. If you do not see these messages, adjusting the cache sizing will not make any difference.
If the client is running version 6.0 or older and there are directories with large numbers of files in them, I would recommend upgrading the client to version 6.1 or newer since there is improved handling for large directories in newer versions of the client. This can make a significant performance difference.