Avamar: The ADMe user interface is blank and shows empty content due to space utilization on '/'

Summary: The Avamar Data Migration Enabler (ADME) user interface is blank and does not populate because the root '/' file system is 100% full.

This article is not tied to any specific product. Not all product versions are identified in this article.

Symptoms

The log file was removed and the capacity usage on '/' was reduced; as a result, the user interface could then be populated.
  

root@<avamar-host>:/atoadmin/tmp/atocfg2/#: cp autotapeout2.stat /data01/.
root@<avamar-host>:/atoadmin/tmp/atocfg2/#: rm -rf autotapeout2.stat
root@<avamar-host>:/atoadmin/log/#: df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda5             7.9G  6.2G  1.4G  83% /
udev                   16G  256K   16G   1% /dev
tmpfs                  16G     0   16G   0% /dev/shm
/dev/sda1             114M   54M   55M  50% /boot
/dev/sda3             1.8T  105G  1.7T   6% /data01
/dev/sda7             1.5G  171M  1.3G  12% /var
/dev/sdb1             1.9T   61G  1.8T   4% /data02
/dev/sdc1             1.9T   60G  1.8T   4% /data03
root@<avamar-host>:/atoadmin/log/#:

It is recommended to restart the ADMe user interface afterward:

root@<avamar-host>:/data01/#: adme -gui stop

ADMe WEB-UI service is stopped.

root@<avamar-hostname>:/data01/#: adme -gui start

ADMe WEB-UI service is started PID=[122153]
 To launch Web-UI, point your browser to URL -

https://<avamar-hostname>:8888

root@<avamar-host>:/data01/#:
root@<avamar-host>:/data01/#: adme -gui status

ADMe WEB-UI service is started PID=[122153]
 To launch Web-UI, point your browser to URL - https://<avamar-hostname>:8888

root@<avamar-host>:/data01/#:

When the ADMe user interface (https://Avamar-URL:8888) is opened and logged in to, it displays a blank page with no features or content.

Symptoms:
- ADMe shows a blank page.
- When other tabs or views are opened, a blank white page also appears and no content is displayed.

Cause

Upon investigation, the root '/' file system was found to be 100% full.

The ADMe user interface can be checked to confirm that it is unable to write its logs because of the lack of space:

  1. User interface status
     This can be verified using the command below, which checks the status of the user interface.
root@<avamar-hostname>:/atoadmin/#: adme -gui status
/usr/local/avamar/bin/adme[18520]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18528]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18535]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18548]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18554]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18564]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18567]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18568]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18569]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18570]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18571]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18572]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18573]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18574]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18575]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18611]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18612]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18613]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18614]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18615]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18616]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18617]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18618]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18619]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18620]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18621]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18622]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[18623]: echo: write to 1 failed [No space left on device]
/usr/local/avamar/bin/adme[19083]: echo: write to 1 failed [No space left on device]

ADMe WEB-UI service is started PID=[20228]
 To launch Web-UI, point your browser to URL -  https://<avamar-hostname>:8888

root@<avamar-hostname>:/atoadmin/#:

  2. df -h
     To identify the files consuming capacity, follow the steps below:

root@<avamar-hostname>:/atoadmin/tmp/atocfg2/#: df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda5             7.9G  7.9G     0 100% /
udev                   16G  256K   16G   1% /dev
tmpfs                  16G     0   16G   0% /dev/shm
/dev/sda1             114M   54M   55M  50% /boot
/dev/sda3             1.8T  104G  1.7T   6% /data01
/dev/sda7             1.5G  171M  1.3G  12% /var
/dev/sdb1             1.9T   61G  1.8T   4% /data02
/dev/sdc1             1.9T   60G  1.8T   4% /data03
root@<avamar-hostname>:/atoadmin/tmp/atocfg2/#:


Change directory to the '/' mount point and run the following Perl command.
This Perl one-liner lists a tree of directories and files sorted by size.
root@<Avamar-hostname>:/atoadmin/#: cd /
root@<Avamar-hostname>://#: perl -e'%h=map{/.\s/;99**(ord$&&7)-$`,$_}`du -hx`;die@h{sort%h}'
0       ./dev
7.7G    .
4.7G    ./usr
2.9G    ./usr/local
2.3G    ./usr/local/avamar
2.1G    ./atoadmin
1.7G    ./atoadmin/tmp
1.6G    ./atoadmin/tmp/atocfg2
1.2G    ./usr/local/avamar/bin
692M    ./usr/local/avamar/lib
508M    ./usr/local/avamar-tomcat-7.0.59
497M    ./usr/local/avamar-tomcat-7.0.59/webapps
437M    ./usr/share
399M    ./usr/local/avamar/tmp
360M    ./usr/java
341M    ./usr/lib64
335M    ./usr/lib
301M    ./opt
294M    ./root
291M    ./root/.avamardata
276M    ./atoadmin/log
201M    ./usr/sbin
188M    ./opt/emc-third-party
182M    ./usr/java/jre1.8.0_131
181M    ./usr/java/jre1.8.0_131/lib
176M    ./usr/java/jre1.8.0_112
175M    ./usr/java/jre1.8.0_112/lib
173M    ./lib
162M    ./lib/modules
160M    ./usr/bin
153M    ./opt/emc-third-party/platform/suse-11-x64/ruby-1.9.1
141M    ./usr/lib64/erlang
136M    ./usr/local/avamar/lib/jetty
127M    ./usr/local/avamar-tomcat-7.0.59/webapps/aam
121M    ./usr/local/avamar-tomcat-7.0.59/webapps/dtlt
115M    ./usr/lib64/erlang/lib
111M    ./usr/share/locale
109M    ./usr/lib/locale
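
If the Perl one-liner is not convenient, a similar sorted view can usually be produced with du and sort alone. This is a minimal sketch, assuming GNU coreutils are available on the node; adjust the depth as needed:

# Hedged alternative: directory sizes on the root file system only (-x),
# limited to three levels and sorted largest first.
cd /
du -xh --max-depth=3 2>/dev/null | sort -rh | head -25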


It is clear from the output above that the ADMe directories (/atoadmin and its subdirectories) rank near the top of the space consumers.
Checking /atoadmin/log, webui.log.2 appears to have grown to 223 MB, which is exceptionally large compared to the other logs, which normally rotate at around 11 MB.

root@<avamar-hostname>:/atoadmin/log/#: ls -lSrh
total 366M
-rwxrwxrwx 1 root root    0 Mar 15 17:06 webui.log.0.lck
-rwxrwxrwx 1 root root    0 May  8 17:27 atoevent.log
-rwxrwxrwx 1 root root    0 Oct 26  2016 1
-rwxrwxrwx 1 root root  13K Apr 14  2017 admbatch-ADM.log
-rwxrwxrwx 1 root root  25K Nov  9 14:47 admbatch-LINUXFS.log
-rwxrwxrwx 1 root root  28K Apr 19  2017 atoevent.log20
-rwxrwxrwx 1 root root  31K Apr 17 03:15 recapture_history.log
-rwxrwxrwx 1 root root  80K Mar 15 17:06 nohup.out
-rwxrwxrwx 1 root root 196K Apr 10 07:46 admbatch-HYPERV2.log
-rwxrwxrwx 1 root root 241K Apr 18  2017 admbatch-HYPERV.log
-rwxrwxrwx 1 root root 346K Apr 27 08:59 atoevent.log3
-rwxrwxrwx 1 root root 637K Apr 13  2016 admbatch-ONDEMAND.log
-rwxrwxrwx 1 root root 671K May  7 15:32 atoevent.log2
-rwxrwxrwx 1 root root 1.2M Apr 27 08:59 admbatch-WINFS.log
-rwxrwxrwx 1 root root 1.3M May  1 16:29 admbatch-SQL1.log
-rwxrwxrwx 1 root root 1.6M May  3 20:08 admbatch-SQL.log
-rwxrwxrwx 1 root root 2.5M Apr  7 02:37 admbatch-HYPERV1.log
-rwxrwxrwx 1 root root 2.7M Apr 20 08:51 admbatch-NDMP.log
-rwxrwxrwx 1 root root 8.9M May  9 11:06 webui.log.0
-rwxrwxrwx 1 root root  11M Jun 13  2017 webui.log.4
-rwxrwxrwx 1 root root  11M Jan 17 19:10 webui.log.1
-rwxrwxrwx 1 root root  11M Aug  8  2017 webui.log.3
-rwxrwxrwx 1 root root  93M May  9 11:15 admbatch-NDMP1.log
-rwxrwxrwx 1 root root 223M Sep 20  2017 webui.log.2
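
To confirm which individual files on the root file system are largest (rather than per-directory totals), a find command restricted to '/' can be used. A minimal sketch, assuming a 100 MB threshold is reasonable for this node:

# Hedged sketch: list files over 100 MB on the root file system only (-xdev),
# with sizes, so oversized logs such as webui.log.2 stand out.
find / -xdev -type f -size +100M -exec ls -lh {} + 2>/dev/null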


 

Upon further investigation of the issue, it was found that ADMe had been writing to this file for three consecutive days, repeatedly logging the following message.

 

root@<avamar-hostname>:/atoadmin/log/#: tail -20 webui.log.2
 >>> Waiting for -migrate phase to begin before initiating the cancel operation 
......................................................................................
......................................................................................
......................................................................................
......................................................................................
......................................................................................
......................................................................................
......................................................................................
......................................................................................
......................................................................................
......................................................................................
......................................................................................
......................................................................................
................................................

In this scenario, ADMe was failing to connect to the Avamar dispatchers and sessions, could not issue maintenance commands, and kept failing for the following three days.

 

Sep 18, 2017 5:05:34 PM com.avamar.pss.server.ATOServiceImpl runADMECommand
INFO: Output : 0  ERROR!  Exit code 15: Cannot connect to Avamar dispatcher
ERROR: avmaint: cpstatus: cannot connect to server <avamar-hostname> at <avamar-ip>:27000
ERROR: avmaint: gcstatus: cannot        connect to server <avamar-hostname> at <avamar-ip>:27000

 Gathering client/group properties & job stats information...

Sep 18, 2017 5:05:34 PM com.avamar.pss.server.ATOServiceImpl getJobActivities
INFO: Loading Activities for : all from /atoadmin/jobstats/jobactivity.csv
Sep 18, 2017 5:08:37 PM com.avamar.pss.server.ATOServiceImpl runADMECommand
INFO: Output :
 >>> Waiting for -migrate phase to begin before initiating the cancel operation .
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ..
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ...
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ....
 >>> Waiting for -migrate phase to begin before initiating the cancel operation .....
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ......
 >>> Waiting for -migrate phase to begin before initiating the cancel operation .......
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ........
 >>> Waiting for -migrate phase to begin before initiating the cancel operation .........
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ..........
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ...........
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ............
 >>> Waiting for -migrate phase to begin before initiating the cancel operation .............
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ..............
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ...............
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ................
 >>> Waiting for -migrate phase to begin before initiating the cancel operation .................
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ..................
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ...................
 >>> Waiting for -migrate phase to begin before initiating the cancel operation ....................
 >>> Waiting for -migrate phase to begin before initiating the cancel operation .....................
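
To estimate how long this condition persisted, the repeated wait message can be counted and the first and last timestamped entries compared. A minimal sketch, assuming the log is still readable on the node:

# Hedged sketch: count the repeated wait message, then show the earliest and
# latest timestamped ATOServiceImpl entries to bracket the time span.
grep -c "Waiting for -migrate phase" /atoadmin/log/webui.log.2
grep "com.avamar.pss.server.ATOServiceImpl" /atoadmin/log/webui.log.2 | head -1
grep "com.avamar.pss.server.ATOServiceImpl" /atoadmin/log/webui.log.2 | tail -1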

Resolution

  1. To resolve the issue, reduce the usage on the root '/' mount point so that it falls below 100%, allowing the ADMe user interface to populate correctly.
  2. Consult Dell Support to identify files and confirm whether the identified files can be removed.
  3. Once space has been freed on root '/' and the usage is below 100%, restart the ADMe Web UI (see the example after this list).
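
For illustration only, the sequence below mirrors the cleanup shown in the Symptoms section: the oversized log is copied to a data partition, removed from '/', and the Web UI is restarted. This is a minimal sketch; the file name is an example, and files should be removed only after Dell Support has confirmed they are safe to delete.

# Example cleanup (file name is illustrative; confirm with Dell Support first).
cp /atoadmin/log/webui.log.2 /data01/    # preserve a copy off the root file system
rm /atoadmin/log/webui.log.2             # free the space on '/'
df -h /                                  # confirm utilization is below 100%
adme -gui stop                           # restart the ADMe Web UI
adme -gui start
adme -gui status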

Additional Information

Refer to the following related Knowledge Base (KB) article for resolving space issues:

Avamar: How to identify large files or directories consuming excessive disk space on an Avamar node



 

Affected Products

Avamar Data Migration Enabler

Products

Avamar, Avamar Data Migration Enabler
Article Properties
Article Number: 000036706
Article Type: Solution
Last Modified: 28 Oct 2025
Version:  4