Avamar: ADMe UI is blank and shows no contents due to root '/' space utilization

Summary: The Avamar Data Migration Enabler (ADMe) UI is blank and does not populate because the root '/' file system is 100% full.

This article is not tied to any specific product. Not all product versions are identified in this article.

Symptoms

In this case, the oversized log file was backed up and removed, root '/' utilization dropped, and the UI could be populated again afterward:

root@<avamar-host>:/atoadmin/tmp/atocfg2/#: cp autotapeout2.stat /data01/.
root@<avamar-host>:/atoadmin/tmp/atocfg2/#: rm -rf autotapeout2.stat
root@<avamar-host>:/atoadmin/log/#: df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda5             7.9G  6.2G  1.4G  83% /
udev                   16G  256K   16G   1% /dev
tmpfs                  16G     0   16G   0% /dev/shm
/dev/sda1             114M   54M   55M  50% /boot
/dev/sda3             1.8T  105G  1.7T   6% /data01
/dev/sda7             1.5G  171M  1.3G  12% /var
/dev/sdb1             1.9T   61G  1.8T   4% /data02
/dev/sdc1             1.9T   60G  1.8T   4% /data03
root@<avamar-host>:/atoadmin/log/#:
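Note: Deleting a file that a running process still holds open does not free the space until the process closes it or is restarted. If utilization does not drop after the rm, truncating the file in place releases its blocks immediately. A minimal sketch, using the file from this example:

# Truncate the oversized file in place; this frees its blocks even if a
# running process still holds the file open.
truncate -s 0 /atoadmin/tmp/atocfg2/autotapeout2.stat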

Restart the ADMe UI afterward:

root@<avamar-host>:/data01/#: adme -gui stop

ADMe WEB-UI has been stopped.

root@<avamar-hostname>:/data01/#: adme -gui start

ADMe WEB-UI service started successfully PID=[122153]
 To launch, point your browser to the URL. 

https://<avamar-hostname>:8888

root@<avamar-host>:/data01/#:
root@<avamar-host>:/data01/#: adme -gui status

ADMe WEB-UI service is started PID=[122153]
 To launch Web-UI, point your browser to the URL -  https://<avamar-hostname>:8888

root@<avamar-host>:/data01/#:

Upon opening and logging in to the ADMe UI (https://<avamar-hostname>:8888), the UI shows a blank page with no resources or contents.

Symptoms:
- ADMe shows a blank white page.
- Browsing to other tabs or views also returns a blank white page with no contents.
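To confirm whether the web service itself is responding, independent of the browser, a quick HTTPS header check can be used. This is a generic verification step, not taken from the original article; the hostname and port are the ones used throughout this article:

# Fetch only the response headers from the ADMe UI; -k skips certificate
# validation, which is common with self-signed appliance certificates
curl -kI https://<avamar-hostname>:8888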

Cause

Investigation showed that the root '/' file system was 100% full.

To confirm that the ADMe UI cannot write its logs due to lack of space, check the following:

  1. Check the UI status using the following command:
root@<avamar-hostname>:/atoadmin/#: adme -gui status
/usr/local/avamar/bin/adme[18520]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18528]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18535]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18548]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18554]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18564]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18567]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18568]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18569]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18570]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18571]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18572]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18573]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18574]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18575]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18611]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18612]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18613]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18614]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18615]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18616]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18617]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18618]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18619]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18620]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18621]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18622]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18623]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[19083]: echo: write to 1 failed [No space left on device]

ADMe WEB-UI service is started PID=[20228]
 To launch Web-UI, point your browser to URL -  https://<avamar-hostname>:8888

root@<avamar-hostname>:/atoadmin/#:
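"No space left on device" can also be raised when a file system still has free blocks but has run out of inodes. Checking inode utilization rules that out; this is a generic check, not taken from the original article:

# Show inode utilization (IUse%) rather than block utilization for root '/'
df -i /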

  2. Check file system utilization with df -h to identify which mount point is full:

root@<avamar-hostname>:/atoadmin/tmp/atocfg2/#: df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda5             7.9G  7.9G     0 100% /
udev                   16G  256K   16G   1% /dev
tmpfs                  16G     0   16G   0% /dev/shm
/dev/sda1             114M   54M   55M  50% /boot
/dev/sda3             1.8T  104G  1.7T   6% /data01
/dev/sda7             1.5G  171M  1.3G  12% /var
/dev/sdb1             1.9T   61G  1.8T   4% /data02
/dev/sdc1             1.9T   60G  1.8T   4% /data03
root@<avamar-hostname>:/atoadmin/tmp/atocfg2/#:


Change directory to the '/' mount point and run the following Perl one-liner, which lists directories and files sorted by size:
root@<Avamar-hostname>:/atoadmin/#: cd /
root@<Avamar-hostname>://#: perl -e'%h=map{/.\s/;99**(ord$&&7)-$`,$_}`du -hx`;die@h{sort%h}'
0       ./dev
7.7G    .
4.7G    ./usr
2.9G    ./usr/local
2.3G    ./usr/local/avamar
2.1G    ./atoadmin
1.7G    ./atoadmin/tmp
1.6G    ./atoadmin/tmp/atocfg2
1.2G    ./usr/local/avamar/bin
692M    ./usr/local/avamar/lib
508M    ./usr/local/avamar-tomcat-7.0.59
497M    ./usr/local/avamar-tomcat-7.0.59/webapps
437M    ./usr/share
399M    ./usr/local/avamar/tmp
360M    ./usr/java
341M    ./usr/lib64
335M    ./usr/lib
301M    ./opt
294M    ./root
291M    ./root/.avamardata
276M    ./atoadmin/log
201M    ./usr/sbin
188M    ./opt/emc-third-party
182M    ./usr/java/jre1.8.0_131
181M    ./usr/java/jre1.8.0_131/lib
176M    ./usr/java/jre1.8.0_112
175M    ./usr/java/jre1.8.0_112/lib
173M    ./lib
162M    ./lib/modules
160M    ./usr/bin
153M    ./opt/emc-third-party/platform/suse-11-x64/ruby-1.9.1
141M    ./usr/lib64/erlang
136M    ./usr/local/avamar/lib/jetty
127M    ./usr/local/avamar-tomcat-7.0.59/webapps/aam
121M    ./usr/local/avamar-tomcat-7.0.59/webapps/dtlt
115M    ./usr/lib64/erlang/lib
111M    ./usr/share/locale
109M    ./usr/lib/locale
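If the Perl one-liner is hard to remember, piping du into sort gives a similar size-ranked listing. This is an equivalent alternative, not from the original article; as with the one-liner, -x keeps du on the root file system only:

# List directories on '/' sorted by size, largest first, top 40 entries
du -xh / | sort -rh | head -40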


In the listing above, the ADMe directories /atoadmin/tmp and /atoadmin/log rank near the top of the space consumers.
Checking under /atoadmin/log, the webui.log.2 file has grown to 223 MB, which is unusually large compared to the other webui logs, which typically rotate at 11 MB:

root@<avamar-hostname>:/atoadmin/log/#: ls -lSrh
total 366M
-rwxrwxrwx 1 root root    0 Mar 15 17:06 webui.log.0.lck
-rwxrwxrwx 1 root root    0 May  8 17:27 atoevent.log
-rwxrwxrwx 1 root root    0 Oct 26  2016 1
-rwxrwxrwx 1 root root  13K Apr 14  2017 admbatch-ADM.log
-rwxrwxrwx 1 root root  25K Nov  9 14:47 admbatch-LINUXFS.log
-rwxrwxrwx 1 root root  28K Apr 19  2017 atoevent.log20
-rwxrwxrwx 1 root root  31K Apr 17 03:15 recapture_history.log
-rwxrwxrwx 1 root root  80K Mar 15 17:06 nohup.out
-rwxrwxrwx 1 root root 196K Apr 10 07:46 admbatch-HYPERV2.log
-rwxrwxrwx 1 root root 241K Apr 18  2017 admbatch-HYPERV.log
-rwxrwxrwx 1 root root 346K Apr 27 08:59 atoevent.log3
-rwxrwxrwx 1 root root 637K Apr 13  2016 admbatch-ONDEMAND.log
-rwxrwxrwx 1 root root 671K May  7 15:32 atoevent.log2
-rwxrwxrwx 1 root root 1.2M Apr 27 08:59 admbatch-WINFS.log
-rwxrwxrwx 1 root root 1.3M May  1 16:29 admbatch-SQL1.log
-rwxrwxrwx 1 root root 1.6M May  3 20:08 admbatch-SQL.log
-rwxrwxrwx 1 root root 2.5M Apr  7 02:37 admbatch-HYPERV1.log
-rwxrwxrwx 1 root root 2.7M Apr 20 08:51 admbatch-NDMP.log
-rwxrwxrwx 1 root root 8.9M May  9 11:06 webui.log.0
-rwxrwxrwx 1 root root  11M Jun 13  2017 webui.log.4
-rwxrwxrwx 1 root root  11M Jan 17 19:10 webui.log.1
-rwxrwxrwx 1 root root  11M Aug  8  2017 webui.log.3
-rwxrwxrwx 1 root root  93M May  9 11:15 admbatch-NDMP1.log
-rwxrwxrwx 1 root root 223M Sep 20  2017 webui.log.2
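An alternative to listing directories one at a time is to search the root file system directly for oversized files. This is a generic sketch, not from the original article; the 100 MB threshold is only an example:

# Find files larger than 100 MB on the root file system; -xdev prevents find
# from descending into /data01, /var, and the other separate mounts
find / -xdev -type f -size +100M -exec ls -lh {} \;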

Upon further investigation, it was found that ADMe had been logging the following message to that file for three consecutive days:

root@<avamar-hostname>:/atoadmin/log/#: tail -20 webui.log.2
 >>> Waiting for -migrate phase to begin before initiating the cancel operation 
......................................................................................
......................................................................................
......................................................................................
......................................................................................
......................................................................................
......................................................................................
......................................................................................
......................................................................................
......................................................................................
......................................................................................
......................................................................................
......................................................................................
................................................

In this scenario, ADMe failed to connect to the Avamar dispatcher, could not run its maintenance status commands (avmaint cpstatus, gcstatus), and continued failing for three days, as the following log excerpt shows:

Sep 18, 2017 5:05:34 PM com.avamar.pss.server.ATOServiceImpl runADMECommand
INFO: Output : 0  ERROR!  Exit code 15: Cannot connect to Avamar dispatcher
ERROR: avmaint: cpstatus: cannot connect to server <avamar-hostname> at <avamar-ip>:27000
ERROR: avmaint: gcstatus: cannot connect to server <avamar-hostname> at <avamar-ip>:27000

 Gathering client/group properties & job stats information...

Sep 18, 2017 5:05:34 PM com.avamar.pss.server.ATOServiceImpl getJobActivities
INFO: Loading Activities for : all from /atoadmin/jobstats/jobactivity.csv
Sep 18, 2017 5:08:37 PM com.avamar.pss.server.ATOServiceImpl runADMECommand
INFO: Output :
 >>> Waiting for -migrate phase to begin before initiating the cancel operation .
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ..
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ...
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ....
 >>> Waiting for -migrate phase to begin before initiating the cancel operation .....
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ......
 >>> Waiting for -migrate phase to begin before initiating the cancel operation .......
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ........
 >>> Waiting for -migrate phase to begin before initiating the cancel operation .........
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ..........
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ...........
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ............
 >>> Waiting for -migrate phase to begin before initiating the cancel operation .............
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ..............
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ...............
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ................
 >>> Waiting for -migrate phase to begin before initiating the cancel operation .................
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ..................
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ...................
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ....................
 >>> Waiting for -migrate phase to begin before initiating the cancel operation .....................
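The dispatcher errors above reference TCP port 27000 on the Avamar server. A basic reachability test of that port helps distinguish a network problem from a server-side service problem; this is a generic sketch, not from the original article:

# Test whether the Avamar dispatcher port accepts TCP connections
nc -zv <avamar-hostname> 27000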

Resolution

  1. To resolve the issue, reduce usage on the root '/' mount point to below 100% so that the ADMe UI can populate correctly.
  2. Consult Dell Support to identify large files and to confirm that the identified files can safely be removed.
  3. Once space has been freed on root '/' and utilization is below 100%, restart the ADMe UI, as shown in the sketch after this list.
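Putting the steps together, the following is a minimal command sketch based on the examples earlier in this article. The file to remove (here the oversized webui.log.2 identified above) must first be confirmed with Dell Support:

# Back up the oversized log file to a file system with free space, then remove it
cp /atoadmin/log/webui.log.2 /data01/.
rm -f /atoadmin/log/webui.log.2

# Confirm that root '/' utilization has dropped below 100%
df -h /

# Restart the ADMe UI and verify its status
adme -gui stop
adme -gui start
adme -gui status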

Additional Information

Refer to the following Knowledge Base (KB) article for resolving space issues:

Avamar - how to identify large files or directories consuming a lot of disk space on an Avamar node

Affected Products

Avamar Data Migration Enabler

Products

Avamar, Avamar Data Migration Enabler
Article Properties
Article Number: 000036706
Article Type: Solution
Last Modified: 28 Oct 2025
Version:  4