Avamar: The grid enters admin mode due to a difference in sparse files between the data partitions

Summary: This article addresses an issue where the number of sparse files differs between the data partitions, causing the grid to enter admin mode.

This article is not tied to any specific product. Not all product versions are identified in this article.

Symptoms

All backups and replication destination activities are failing.

The Avamar Server is in admin mode.

The capacity is below the diskreadonly value, so the admin mode is not caused by high disk usage.
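
If needed, the configured diskreadonly threshold can be checked against the current utilization. A minimal sketch, assuming that avmaint config --ava is available on this release and that its output includes the diskreadonly attribute referenced above:

# Show the read-only capacity threshold from the server configuration
# (assumption: the attribute is named diskreadonly, as referenced above)
avmaint config --ava | grep -i diskreadonly

# Compare it against the current utilization of each data partition
df -h /data??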

The number of stripes on all data partitions is similar:

status.dpn
Thu Jul 16 09:41:28 CEST 2015  [avamar.emc.com] Thu Jul 16 07:41:27 2020 UTC (Initialized Thu Jan 16 16:42:42 2014 UTC)
Node   IP Address     Version   State   Runlevel  Srvr+Root+User Dis Suspend Load UsedMB Errlen  %Full   Percent Full and Stripe Status by Disk
0.0     10.10.10.10  7.1.1-145  ONLINE fullaccess mhpu+0hpu+0000   1 false   0.03 35759 80935381   1.6%   1%(onl:1414)  1%(onl:1418)  1%(onl:1408)  1%(onl:1414)  1%(onl:1414)  1%(onl:1412)
Srvr+Root+User Modes = migrate + hfswriteable + persistwriteable + useraccntwriteable

System ID: 1434729928@00:50:56:8A:24:53

All reported states=(ONLINE), runlevels=(fullaccess), modes=(mhpu+0hpu+0000)
System-Status: ok
Access-Status: admin

No checkpoint yet
No GC yet
No hfscheck yet

Maintenance windows scheduler capacity profile is active.
  The backup window is currently running.
  Next backup window start time: Thu Jul 16 20:00:00 2020 CEST
  Next maintenance window start time: Thu Jul 16 12:00:00 2020 CEST
 

The output of the following command shows that diskbeat, the activity that changed the server access mode, is the reason the grid is in admin mode:

avmaint nodelist --xmlperline=99 | grep activityaccess 
<activityaccessmodes adminuser="mhpu+0hpu+0hpu" checkpoint="mhpu+0hpu+0hpu" conversion="mhpu+0hpu+0hpu" diskbeat="mhpu+0hpu+0000" garbagecollect="mhpu+0hpu+0hpu" 
heartbeat="mhpu+0hpu+0hpu" hfscheckserver="mhpu+0hpu+0hpu" hfscheckexecute="mhpu+0hpu+0hpu" nodebeat="mhpu+0hpu+0hpu" runlevel="mhpu+0hpu+0hpu" 
testintegrity="mhpu+0hpu+0hpu" removehashes="mhpu+0hpu+0hpu" rebuildstripe="mhpu+0hpu+0hpu" diskfull="mhpu+0hpu+0hpu" hashrefcheck="mhpu+0hpu+0hpu"/> 
 

The df command shows a large discrepancy between the data partitions:

Example:

df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             7.9G  6.2G  1.4G  82% /
udev                   18G  288K   18G   1% /dev
tmpfs                  18G     0   18G   0% /dev/shm
/dev/sda1             130M   62M   61M  51% /boot
/dev/sda7             1.5G  227M  1.2G  17% /var
/dev/sda9              77G   50G   23G  70% /space
/dev/sdb1             1.0T  190G  835G  19% /data01
/dev/sdc1             1.0T  183G  842G  18% /data02
/dev/sdd1             1.0T  185G  839G  19% /data03
/dev/sde1             1.0T  416G  608G  41% /data04
/dev/sdf1             1.0T  190G  835G  19% /data05
/dev/sdg1             1.0T  187G  838G  19% /data06 

In this output, /data04 is 41% used, whereas the other data partitions are 18%-19% used.

This difference in used capacity between the data partitions exceeds the freespaceunbalance value (the maximum allowed capacity difference between data partitions).
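
The utilization spread between the data partitions can be calculated directly from the df output. A minimal sketch using standard GNU df and awk (the freespaceunbalance value itself comes from the server configuration and is not computed here):

# Compute the minimum, maximum, and spread of the Use% values across /data??
df -h /data?? | awk 'NR > 1 { sub("%", "", $5); v = $5 + 0;
                              if (NR == 2) { min = v; max = v }
                              if (v < min) min = v; if (v > max) max = v }
                     END    { printf "min %d%%  max %d%%  spread %d%%\n", min, max, max - min }'

With the df output shown above, this reports a spread of approximately 23% (41% versus 18%).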

 

Further investigation shows:

The checkpoint (cp) overhead is small on all data partitions, but the cur usage differs significantly between the partitions; in this example it is much higher on /data04:

cps -blk 
Checkpoint usage by partition:
  188.020 /data01/cur
  181.944 /data02/cur
  186.020 /data03/cur
  435.234 /data04/cur
  190.617 /data05/cur
  187.797 /data06/cur
    0.540 /data01/cp.20200716082941
    0.542 /data02/cp.20200716082941
    0.548 /data03/cp.20200716082941
    0.038 /data04/cp.20200716082941
    0.523 /data05/cp.20200716082941
    0.493 /data06/cp.20200716082941
    0.759 /data01/cp.20200716080454
    0.777 /data02/cp.20200716080454
    0.781 /data03/cp.20200716080454
    0.336 /data04/cp.20200716080454
    0.751 /data05/cp.20200716080454
    0.721 /data06/cp.20200716080454

GB used     %use Total checkpoint usage by node:
 6593.815        Total blocks on node           Thu Jul 16 10:41:56 2020
 5198.045  78.83 Total blocks available
 1369.633  20.77 cur                            Thu Jul 16 10:33:14 2020
    2.683   0.04 cp.20200716082941              Thu Jul 16 10:32:42 2020
    4.125   0.06 cp.20200716080454              Thu Jul 16 10:14:15 2020
 1376.440  20.87 Total blocks used by dpn 
 

There is a large difference between the apparent size of the cur directories (the size seen by applications) and their actual disk usage (the space that the files consume on disk):

du -sh --apparent-size /data??/cur
458G    /data01/cur
456G    /data02/cur
456G    /data03/cur
455G    /data04/cur
457G    /data05/cur
456G    /data06/cur 
du -sh /data??/cur
176G    /data01/cur
170G    /data02/cur
174G    /data03/cur
406G    /data04/cur
178G    /data05/cur
175G    /data06/cur
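
The same comparison can be scripted per partition to show how much of each cur directory is backed by real disk blocks. A minimal sketch using standard GNU coreutils (not an Avamar-specific tool):

# For each cur directory, print apparent size, on-disk usage, and their ratio;
# the partition with the ratio closest to 1 holds the fewest sparse blocks.
for d in /data??/cur; do
    app=$(du -sB1 --apparent-size "$d" | awk '{print $1}')
    real=$(du -sB1 "$d" | awk '{print $1}')
    printf "%s  apparent=%dG  on-disk=%dG  ratio=%s\n" \
        "$d" "$((app / 1024**3))" "$((real / 1024**3))" \
        "$(awk -v a="$app" -v r="$real" 'BEGIN { printf "%.2f", a / r }')"
done

In the example above, /data04/cur would show a ratio close to 1, while the other cur directories would show a ratio of roughly 2.6.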

Cause

One or more data partitions contain a significantly different number of sparse files than the other data partitions, so their real disk usage differs.

A sparse file is a type of file that uses file system space more efficiently when many of the blocks allocated to the file are empty (for example, blocks that contain only zeros).

The empty blocks are not actually stored on disk; instead, the file system writes brief metadata representing them.
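
The effect is easy to reproduce on any Linux host (this demonstration is not Avamar-specific):

# Create a 1 GiB sparse file: the apparent size is 1 GiB, but almost no blocks
# are allocated on disk.
truncate -s 1G /tmp/sparse_demo
du -h --apparent-size /tmp/sparse_demo    # reports ~1.0G (size seen by applications)
du -h /tmp/sparse_demo                    # reports ~0    (space actually used on disk)
rm /tmp/sparse_demo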

 

This issue is usually associated with a high change rate, which causes a dramatic checkpoint overhead increase.

Resolution

1. Run the following commands (an example script for collecting all of the output into a single file is shown after step 2):

status.dpn 
avmaint nodelist --xmlperline=99 | grep activityaccess 
df -h 
cps -blk
du -sh --apparent-size /data??/cur
du -sh /data??/cur 
 

2. Create a Service Request with the Dell Technologies Avamar Support team, referencing this article and providing the output gathered above.
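
Optionally, the output from step 1 can be collected into a single file to attach to the Service Request. A minimal sketch; the output path is only an example:

# Capture the output of all commands from step 1 into one time-stamped log file
OUT=/home/admin/sparse-file-check_$(date +%Y%m%d-%H%M%S).log
{
    status.dpn
    avmaint nodelist --xmlperline=99 | grep activityaccess
    df -h
    cps -blk
    du -sh --apparent-size /data??/cur
    du -sh /data??/cur
} > "$OUT" 2>&1
echo "Output written to $OUT"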

Affected Products

Avamar, Avamar Server
Article Properties
Article Number: 000036449
Article Type: Solution
Last Modified: 04 Jun 2025
Version:  5