I have an Avamar client with a few dense filesystems. The backup for this client is intruding on the blackout/maintenance window and is automatically cancelled by Avamar. I want to be able to split the dataset out into smaller chunks.
I have read up on this and am led to believe that "cacheprefix" will help to achieve this.
I have read the Operational Best Practices Guide, but its instructions don't make sense to me:
* Break the client file system into multiple smaller datasets.
* For each dataset, assign a unique cacheprefix attribute and set maximum sizes for the file and hash caches.
How can I break a client filesystem into smaller datasets? For the life of me, I cannot find out how to do this! I can only see how to assign a single dataset to a client and not more than one.
Does anyone have an example of how to implement this?
You can have separate datasets for each of the local drives. Select the drive under the Source Data section of the dataset, then, under Options -> More, enter the attribute cacheprefix=cache_name (for example, driveC) and set the filecachemax and hashcachemax attributes as well.
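As a rough sketch, the per-dataset attribute entries might look like the following. The dataset names, prefixes, and cache sizes here are illustrative assumptions, not values from the guide; size the caches for your own file counts.

```shell
# Hypothetical attribute entries, typed under Options -> More for each dataset.
#
# Dataset 1 (Source Data: C:\):
#   cacheprefix=driveC       # unique prefix so this dataset gets its own cache files
#   filecachemax=1024        # cap on file cache size (assumed value, in MB)
#   hashcachemax=1024        # cap on hash cache size (assumed value, in MB)
#
# Dataset 2 (Source Data: D:\):
#   cacheprefix=driveD       # different prefix: D:\ backups use separate caches
#   filecachemax=1024
#   hashcachemax=1024
```

Because each dataset carries its own cacheprefix, the client keeps independent file/hash caches per dataset instead of one oversized pair.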
Break the filesystem into smaller datasets, for example:
* Create the same number of groups as datasets.
* Associate each dataset with its own group.
* Assign a unique cacheprefix to each group's dataset.

When the first group's backup starts, only its hash cache (1) is active. When that group finishes, its cache is released from memory; when the second group starts, hash cache (2) becomes active, and so on. This way you avoid overflowing the client's memory.
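To sketch what this looks like on the client: each prefix produces its own pair of cache files in the client's var directory, and only the pair for the group currently running is loaded into memory. The directory path and the exact file-naming pattern below are assumptions for illustration.

```shell
# Hypothetical listing of the Avamar client var directory after both
# groups have run at least once (path and file names are assumptions):
ls /usr/local/avamar/var
# driveC_f_cache.dat   driveC_p_cache.dat
# driveD_f_cache.dat   driveD_p_cache.dat
```

Each pair stays well under the configured filecachemax/hashcachemax caps, whereas a single dataset covering all drives would need one pair of caches large enough for every file on the client at once.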