Avamar: How to gather the information to troubleshoot capacity issues
Summary: This article describes what information is needed when troubleshooting Avamar capacity issues, and how to collect it.
Instructions
Addressing Capacity Issues in Avamar:
When dealing with capacity issues on an Avamar grid, it is crucial to understand the root cause. This requires a series of steps, starting with collecting the data needed for a thorough investigation.
Avamar grids have several types of capacity limits. A comprehensive understanding of these limits, along with their historical context, can clarify both current and past capacity issues experienced by the system.
- 80%: Capacity Warning
- 95%: Health Check Limit is reached
- 100%: Server Read-Only Limit is reached, causing the grid to switch to admin mode
When these limits are reached, symptoms can include the following:
- Garbage collection (GC) fails, resulting in MSG_ERR_DISKFULL or MSG_ERR_STRIPECREATE errors.
- Checkpoints fail due to a MSG_ERR_DISKFULL error.
- Backups cannot run, or they fail, because capacity is full.
- Backups fail with MSG_ERR_STRIPECREATE errors or with messages indicating that the target server is full.
- The access state switches to admin mode (unless maintenance is running).
- The backup scheduler is disabled and cannot be resumed due to metadata capacity limits.
Understanding these aspects can help in managing and resolving capacity issues on an Avamar grid.
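As a quick first check against these limits, the current fill level of each data partition can be listed directly from the Utility Node. The following is only a minimal sketch: it reuses the avmaint command from step 8 of the procedure below, and the filter assumes the output contains attributes of the form fs-percent-full="NN.N", which may vary between Avamar releases.

# List each node and data partition with its percentage used
avmaint nodelist | egrep 'nodetag|fs-percent-full'
# Hypothetical filter: flag any partition at or above the 80% Capacity Warning threshold
avmaint nodelist | grep -o 'fs-percent-full="[0-9.]*"' | awk -F'"' '$2 >= 80 {print "WARNING: partition at " $2 "%"}'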
Gathering information:
Log in to the Avamar Utility Node and run the following commands:
(These commands only collect information and do not make any changes. A sketch that combines them into a single collection script is shown after step 13.)
1. If not already known, obtain the Avamar server's full name, or Fully Qualified Domain Name (FQDN):
hostname -f
2. Verify that all services are enabled, including the maintenance scheduler:
dpnctl status
3. The overall state:
status.dpn
4. Run the capacity.sh script to collect 60 days' worth of data and the top 10 contributing clients:
capacity.sh --days=60 --top=10
5. Logs showing basic garbage collection behavior over the last 30 days:
dumpmaintlogs --types=gc --days=30 | grep "4202"
6. The amount of data that garbage collection removed, how many passes it completed, and how long it ran:
dumpmaintlogs --types=gc --days=30 | grep passes | cut -d ' ' -f1,10,14,15,17
7. Check how long hfscheck runs:
dumpmaintlogs --types=hfscheck --days=30 | grep -i elapsed | cut -d ' ' -f1,12 | grep -v check
8. Details of capacity usage per node and per partition:
avmaint nodelist | egrep 'nodetag|fs-percent-full'
9. A list of checkpoints:
cplist
10. Maintenance job scheduled start/stop times:
avmaint sched status --ava | egrep -A 2 "maintenance-window|backup-window" | tail -16
11. Collect all disk settings:
avmaint config --ava | egrep -i 'disk|crunching|balance'
Never change these values unless advised to do so by an Avamar Subject Matter Expert (SME). Nondefault values might be in place for a good reason, so understand the situation thoroughly before making any changes.
12. Collect counts of different types of stripes per node per data partition:
avmaint nodelist --xmlperline=99 | grep 'comp='
13. Check the amount of memory (and swap) in use on each node:
mapall free -m
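To keep everything together for a support case, the read-only commands above can be wrapped in a small shell script run on the Utility Node. The following is only a minimal sketch, not a supported Avamar tool: the output file path is an arbitrary example, and the script simply reruns the commands from steps 1 through 13 and appends their output to a single log file.

#!/bin/bash
# Hypothetical collection script: reruns the read-only commands from this article
# and appends all output to a single file that can be shared with support.
OUT=/tmp/avamar_capacity_info_$(hostname -s)_$(date +%Y%m%d).log   # example path only

{
  echo "### hostname";         hostname -f
  echo "### dpnctl status";    dpnctl status
  echo "### status.dpn";       status.dpn
  echo "### capacity.sh";      capacity.sh --days=60 --top=10
  echo "### gc summary";       dumpmaintlogs --types=gc --days=30 | grep "4202"
  echo "### gc passes";        dumpmaintlogs --types=gc --days=30 | grep passes | cut -d ' ' -f1,10,14,15,17
  echo "### hfscheck elapsed"; dumpmaintlogs --types=hfscheck --days=30 | grep -i elapsed | cut -d ' ' -f1,12 | grep -v check
  echo "### per-node usage";   avmaint nodelist | egrep 'nodetag|fs-percent-full'
  echo "### checkpoints";      cplist
  echo "### schedule windows"; avmaint sched status --ava | egrep -A 2 "maintenance-window|backup-window" | tail -16
  echo "### disk settings";    avmaint config --ava | egrep -i 'disk|crunching|balance'
  echo "### stripe counts";    avmaint nodelist --xmlperline=99 | grep 'comp='
  echo "### memory per node";  mapall free -m
} >> "$OUT" 2>&1

echo "Output written to $OUT"

The resulting file can then be reviewed or attached to the service request.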