I realize this is an old topic, but after searching the web for a while, it's the closest answer I've gotten so far (having yet to open an SR on the issue).
The long and the short of it is, we have an IDPA (integrated Avamar and DataDomain), and our DataDomain filesystem (DDFS) crashed and restarted in the middle of a batch of backups. The next time the client backups ran for some of the large filesystem jobs, they were glacially slow. It appears their f_cache2.dat files were reset or deleted and are being rebuilt from scratch. One server with millions of files and a ~50GB cache file is now ~25% of the way through its job and has only a ~1GB cache file so far.
It should not be necessary to delete the cache file after an upgrade unless there are exceptional circumstances.
Ian, we saw the problem too many times not to conclude that it had something to do with the upgrade. It might have been the scale of the changes between the 7.1 and 7.3 agents (in which case it won't happen again if we do smaller upgrades) that caused the failures, but in any case, deleting the cache file as a precaution isn't impactful enough in our environment for us to stop doing it.
I know we did have some issues with the paging cache in older releases but these should be resolved now. I just don't want people to see your post and think they need to delete the cache files every time. Since 90+% of the performance gain from the Avamar client comes from the file cache, that could have a devastating impact on backup performance in a large environment.
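To illustrate why losing the file cache hurts so much: a client-side file cache lets the agent skip re-reading and re-hashing any file whose metadata hasn't changed since the last run. The sketch below is purely illustrative, assuming a simple path → (mtime, size, digest) mapping; Avamar's actual f_cache2.dat format and matching logic are proprietary and not shown here.

```python
import hashlib
import os

# Illustrative sketch only, NOT Avamar's real implementation.
# cache maps: path -> ((mtime_ns, size), digest)
# On the next pass, a file whose mtime and size are unchanged is
# reported from the cache without re-reading its data from disk.

def backup_pass(paths, cache):
    """Return (digests, files_actually_read); updates cache in place."""
    digests = {}
    files_read = 0
    for path in paths:
        st = os.stat(path)
        key = (st.st_mtime_ns, st.st_size)
        cached = cache.get(path)
        if cached and cached[0] == key:
            digests[path] = cached[1]      # cache hit: no disk read
            continue
        with open(path, "rb") as f:        # cache miss: read and hash
            digest = hashlib.sha256(f.read()).hexdigest()
        cache[path] = (key, digest)
        digests[path] = digest
        files_read += 1
    return digests, files_read
```

With a warm cache, a pass over millions of unchanged files touches only metadata; with an empty (reset) cache, every file must be read and hashed again, which is why jobs crawl until the cache is rebuilt.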