Avamar: How to gather the information to troubleshoot capacity issues

Summary: This article describes what information is needed when troubleshooting Avamar capacity issues and how to collect it.

This article is not specific to a particular product. Not all product versions are identified in this article.

Instructions

Addressing Capacity Issues in Avamar:

When dealing with capacity issues on an Avamar grid, it is crucial to understand the root cause. This requires a series of steps, starting with collecting the data needed for a thorough investigation.

Avamar grids have several types of capacity limits. A comprehensive understanding of these limits, along with their historical context, can clarify both current and past capacity issues experienced by the system.

 
The grid generates specific events, warnings, or errors in the User Interface (UI) when certain capacity thresholds are crossed:
  • 80%: Capacity Warning
  • 95%: Health Check Limit is reached
  • 100%: Server Read-Only Limit is reached, causing the grid to switch to admin mode
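A quick way to compare current utilization against these thresholds from the Utility Node is to extract the per-partition usage figures that step 8 below also collects. A minimal sketch, reusing that avmaint nodelist command and assuming the usage values appear as standard XML attributes in its output:

avmaint nodelist | grep -o 'fs-percent-full="[^"]*"'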
 
When an Avamar grid is full, it may exhibit the following symptoms or errors: 
  • Garbage collection (GC) fails, resulting in MSG_ERR_DISKFULL or MSG_ERR_STRIPECREATE errors.
  • Checkpoints fail due to MSG_ERR_DISKFULL error.
  • Backups cannot run or fail due to full capacity.
  • Backups fail with MSG_ERR_STRIPECREATE errors or messages indicating that the target server is full.
  • The access state switches to admin mode (unless maintenance is running).
  • The backup scheduler is disabled and cannot be resumed due to metadata capacity limits.

Understanding these aspects can help in managing and resolving capacity issues on an Avamar grid.
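To quickly confirm whether recent garbage collection runs have already hit the MSG_ERR_DISKFULL or MSG_ERR_STRIPECREATE errors listed above, the maintenance logs can be searched for those message codes. A minimal sketch, reusing the dumpmaintlogs command described under "Gathering information" below; the 30-day window is only an example:

dumpmaintlogs --types=gc --days=30 | egrep 'MSG_ERR_DISKFULL|MSG_ERR_STRIPECREATE'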

 
 

Gathering information:

Log in to the Avamar Utility Node and run the following commands:

(These commands only collect information and do not make any changes.)

1. If not already known, obtain the Avamar server's full name, or Fully Qualified Domain Name (FQDN):

hostname -f
 

2. Verify that all services are running and that the maintenance scheduler is enabled:

dpnctl status
 

3. Check the overall state of the server:

status.dpn
 

4. Run the capacity.sh script to collect 60 days' worth of data and the top 10 contributing clients:

capacity.sh --days=60 --top=10
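The capacity.sh output can be long; if it is to be reviewed later or attached to a service request, standard shell redirection can capture it to a file (the path below is only an example):

capacity.sh --days=60 --top=10 > /tmp/capacity_report.txt 2>&1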
 

5. Logs showing basic garbage collection behavior over the last 30 days:

dumpmaintlogs --types=gc --days=30 | grep "4202"
 

6. The amount of data that garbage collection removed, how many passes it completed, and how long it ran:

dumpmaintlogs --types=gc --days=30 | grep passes | cut -d ' ' -f1,10,14,15,17
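The field positions used by cut above assume a particular layout of the garbage collection log lines; if they do not line up on a given Avamar release, the full "passes" lines can be reviewed without the cut:

dumpmaintlogs --types=gc --days=30 | grep passes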
 

7. Check how long hfscheck runs:

dumpmaintlogs --types=hfscheck --days=30 | grep -i elapsed | cut -d ' ' -f1,12 | grep -v check
 

8. Details of capacity usage per node and per partition:

avmaint nodelist | egrep 'nodetag|fs-percent-full'
 

9. A list of checkpoints:

cplist
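Because retained checkpoints themselves consume space, the number of checkpoints being kept is relevant to capacity. Assuming cplist prints one checkpoint per line (its usual layout), a quick count:

cplist | wc -l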
 

10. The scheduled start and stop times of the maintenance and backup windows:

avmaint sched status --ava | egrep -A 2 "maintenance-window|backup-window" | tail -16
 

11. Collect all disk settings:

avmaint config --ava | egrep -i 'disk|crunching|balance'
 

Never change these values unless advised by an Avamar Subject Matter Expert (SME). Non-default values might be in place for a good reason, so understand the situation thoroughly first.

12. Collect counts of different types of stripes per node per data partition:

avmaint nodelist --xmlperline=99 | grep 'comp='
 

13. Check the amount of memory (and swap) in use on each node:

mapall free -m
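If the outputs need to be attached to a Dell Support service request, the read-only commands above can be captured into a single file in one pass. A minimal bash sketch for the Utility Node; the output path and the selection of commands are illustrative and can be adjusted:

# Capture the information-gathering commands above into one timestamped file.
OUT=/tmp/avamar_capacity_info_$(date +%Y%m%d_%H%M%S).txt
{
  echo "### hostname -f";      hostname -f
  echo "### dpnctl status";    dpnctl status
  echo "### status.dpn";       status.dpn
  echo "### capacity.sh";      capacity.sh --days=60 --top=10
  echo "### gc results";       dumpmaintlogs --types=gc --days=30 | grep "4202"
  echo "### cplist";           cplist
  echo "### node usage";       avmaint nodelist | egrep 'nodetag|fs-percent-full'
  echo "### disk settings";    avmaint config --ava | egrep -i 'disk|crunching|balance'
  echo "### memory per node";  mapall free -m
} > "$OUT" 2>&1
echo "Output saved to $OUT"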

Additional Information

Some of the steps above have related articles that explain their output. If one of those articles is unreachable, log in to the Dell Support site to access it.

Affected Products

Avamar

Products

Avamar, Avamar Server
Article Properties
Article Number: 000040862
Article Type: How To
Last Modified: 09 Jul 2025
Version: 8