Avamar: Data Domain: Unable to create a checkpoint backup due to incorrect deployment of the AVE
Summary: Unable to create a checkpoint backup because of an incorrectly deployed AVE (Avamar Virtual Edition).
Symptoms
Incorrect AVE deployment
- Unable to create a checkpoint backup
- All maintenance jobs complete successfully
- Connectivity between the Avamar server and the Data Domain is confirmed
mccli event show | grep -i "failed to create"
71546 2019-02-21 09:09:31 CET ERROR 31034 SYSTEM PROCESS / Failed to create a checkpoint backup.
71229 2019-02-20 09:09:22 CET ERROR 31034 SYSTEM PROCESS / Failed to create a checkpoint backup.
70926 2019-02-19 09:26:29 CET ERROR 31034 SYSTEM PROCESS / Failed to create a checkpoint backup.
70340 2019-02-18 09:10:41 CET ERROR 31034 SYSTEM PROCESS / Failed to create a checkpoint backup.
- Errors returned in the cpbackup log for a given checkpoint on a given data partition: data07
admin@*****:/usr/local/avamar/var/client/>: less cpbackup-cp.20190221080716-28111.log
.
.
[Thu Feb 21 09:09:17 2019] Backup data07 finished in 00:00:01.
[Thu Feb 21 09:09:17 2019] Cleanup backup for data07
[Thu Feb 21 09:09:17 2019] Backup data07 returned with exit code 158
[Thu Feb 21 09:09:17 2019] Execute: ps -o pid,ppid,cmd --no-headers --ppid 28226 || true
[Thu Feb 21 09:09:17 2019] Killing all child processes: 28226
[Thu Feb 21 09:09:17 2019] Killing PIDs (28226) with signal 15.
[Thu Feb 21 09:09:17 2019] Sleeping for 10 seconds after killing processes...
[Thu Feb 21 09:09:27 2019] Execute: ps -o pid,ppid,cmd --no-headers --pid 28226 || true
[Thu Feb 21 09:09:27 2019] Execute output: 28226 28111 [sh] <defunct>
.
.
[Thu Feb 21 09:09:28 2019] Backup data06 finished in 00:00:12.
[Thu Feb 21 09:09:28 2019] Cleanup backup for data06
[Thu Feb 21 09:09:28 2019] Backup data06 returned with exit code 158
[Thu Feb 21 09:09:28 2019] Finished backing up files in 00:00:12.
[Thu Feb 21 09:09:28 2019] Execute: /usr/local/avamar/bin/mccli event publish --code=31034 --attribute="checkpoint" --value="cp.20190221080716" --attribute="logfile" --value="/space/avamar/var/client/cpbackup-cp.20190221080716-28111.log" --attribute="cache" --value="OK" --attribute="data06 elapsed" --value="00:00:12" --attribute="data06 fail" --value="Exit code: 158. Signal: 0." --attribute="data07 elapsed" --value="00:00:01" --attribute="data07 fail" --value="Exit code: 158. Signal: 0." --attribute="max ddr streams" --value="6" --attribute="max parallel avtars" --value="2" --attribute="parallel running avtars" --value="2" --attribute="pass thru flags" --value="--id=root --ap=******** --hfsaddr=***** --hfsport=27000" --attribute="total elapsed time" --value="00:00:12" --attribute="volumes" --value="data01 data02 data03 data04 data05 data06 data07"

Note: the following 2018 entry is a separate example, in which the checkpoint backup aborted because --max-ddr-streams was zero:

[Thu Mar 22 09:10:03 2018] Execute: /usr/local/avamar/bin/mccli event publish --code=31034 --attribute="checkpoint" --value="cp.20180322150523" --attribute="logfile" --value="/data01/avamar/var/cpbackup-cp.20180322150523-34518.log" --attribute="abort reason" --value="Flag --max-ddr-streams must be greater than zero" --attribute="cache" --value="OK" --attribute="max ddr streams" --value="0" --attribute="max parallel avtars" --value="2" --attribute="pass thru flags" --value="--id=root --ap=******** --hfsaddr=avamar" --attribute="volumes" --value="data01 data02 data03"
abort reason Flag --max-ddr-streams must be greater than zero

[Thu Feb 21 09:09:31 2019] Execute output: 0,23000,CLI command completed successfully.
Attribute               Value
----------------------- -------------------------------------------------------------
checkpoint              cp.20190221080716
cache                   OK
parallel running avtars 2
logfile                 /space/avamar/var/client/cpbackup-cp.20190221080716-28111.log
max ddr streams         6
pass thru flags         --id=root --ap=******** --hfsaddr=***** --hfsport=27000
volumes                 data01 data02 data03 data04 data05 data06 data07
data06 fail             Exit code: 158. Signal: 0.
total elapsed time      00:00:12
data07 elapsed          00:00:01
max parallel avtars     2
data06 elapsed          00:00:12
data07 fail             Exit code: 158. Signal: 0.
- Looking at the event reported for the cpbackup failure
mccli event show --id=71546
0,23000,CLI command completed successfully.
Attribute   Value
----------- -------------------------------------------------------------
ID          71546
Date        2019-02-21 09:09:31 CET
Type        ERROR
Code        31034
Category    SYSTEM
Severity    PROCESS
Domain      /
Summary     Failed to create a checkpoint backup.
SW Source   MCS:BS
For Whom    All Users
HW Source   *****
Description Failed to create a checkpoint backup.
Remedy      No action required.
Notes       N/A
Data        <data><entry key="checkpoint" type="text" value="cp.20190221080716" version="1"/><entry key="cache" type="text" value="OK" version="1"/><entry key="parallel running avtars" type="text" value="2" version="1"/><entry key="logfile" type="text" value="/space/avamar/var/client/cpbackup-cp.20190221080716-28111.log" version="1"/><entry key="max ddr streams" type="text" value="6" version="1"/><entry key="pass thru flags" type="text" value="--id=root --ap=******** --hfsaddr=***** --hfsport=27000" version="1"/><entry key="volumes" type="text" value="data01 data02 data03 data04 data05 data06 data07" version="1"/><entry key="requestor" type="xml" value="<requestor domain="/" product="NONE" role="Administrator" user="MCUser"/>" version=""/><entry key="data06 fail" type="text" value="Exit code: 158. Signal: 0." version="1"/><entry key="total elapsed time" type="text" value="00:00:12" version="1"/><entry key="data07 elapsed" type="text" value="00:00:01" version="1"/><entry key="max parallel avtars" type="text" value="2" version="1"/><entry key="data06 elapsed" type="text" value="00:00:12" version="1"/><entry key="data07 fail" type="text" value="Exit code: 158. Signal: 0." version="1"/></data>
- When checking the cpbackup log file for one of the affected data partitions, a message was found in the cpbackup logs for data06 and data07 indicating that the server is read-only due to diskfull
admin@*****:/usr/local/avamar/var/client/>: less cpbackup-cp.20190221080716-data06.log
.
.
2019-02-21 09:09:16 avtar Info <5554>: Connecting to one node in each datacenter
2019-02-21 09:09:16 avtar Info <5993>: - Connect: Connected to 172.27.7.3:29000, Priv=0, SSL Cipher=AES256-SHA
2019-02-21 09:09:16 avtar Info <5993>: - Datacenter 0 has 1 nodes: Connected to 172.27.7.3:29000, Priv=0, SSL Cipher=AES256-SHA
2019-02-21 09:09:16 avtar Info <42862>: - Server is in read-only mode due to diskfull
2019-02-21 09:09:16 avtar Info <17972>: - Server is in Read-only mode.
2019-02-21 09:09:16 avtar FATAL <8604>: Fatal server connection problem, aborting initialization. Verify correct server address and login credentials.
2019-02-21 09:09:16 avtar FATAL <8941>: Fatal server connection problem, aborting initialization. Verify correct server address and login credentials.
2019-02-21 09:09:16 avtar Info <6149>: Error summary: 2 errors: 8604, 8941
2019-02-21 09:09:16 avtar Info <6645>: Not sending wrapup anywhere.
2019-02-21 09:09:16 avtar Info <5314>: Command failed (2 errors, exit code 10008: cannot establish connection with server (possible network or DNS failure))
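To confirm quickly whether any per-partition cpbackup log contains this read-only/diskfull message, a simple grep can be used. The sketch below is illustrative only: it embeds one sample log line (taken from the output above) so it is self-contained; on a live system you would grep the log files under /usr/local/avamar/var/client/ instead, as noted in the comments.

```shell
# Illustrative check for read-only / diskfull messages in cpbackup logs.
# On a live system you would run something like:
#   grep -lE "read-only|diskfull" /usr/local/avamar/var/client/cpbackup-*.log
# Here a sample line from the log above is embedded so the example is self-contained.
sample='2019-02-21 09:09:16 avtar Info <42862>: - Server is in read-only mode due to diskfull'
printf '%s\n' "$sample" | grep -cE 'read-only|diskfull'   # prints 1 (one matching line)
```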
- Checking fs-percent-full shows a large difference between data01 and the other partitions:
avmaint nodelist | grep -i "fs-perc"
fs-percent-full="37.0"
fs-percent-full="8.8"
fs-percent-full="8.7"
fs-percent-full="8.7"
fs-percent-full="8.8"
fs-percent-full="8.7"
fs-percent-full="8.7"
- Running df -h shows that data01 is smaller than the other partitions, yet it is 37% full while the others are at 9%
df -h
Filesystem Size  Used  Avail Use% Mounted on
/dev/sda2  16G   5.0G  10G   34%  /
udev       18G   184K  18G   1%   /dev
tmpfs      18G   0     18G   0%   /dev/shm
/dev/sda1  1011M 91M   869M  10%  /boot
/dev/sda6  7.6G  287M  6.9G  4%   /var
/dev/sda8  62G   14G   45G   25%  /space
/dev/sdb1  250G  93G   158G  37%  /data01
/dev/sdc1  1.0T  90G   934G  9%   /data02
/dev/sdd1  1.0T  89G   935G  9%   /data03
/dev/sde1  1.0T  90G   935G  9%   /data04
/dev/sdf1  1.0T  91G   934G  9%   /data05
/dev/sdg1  1.0T  90G   935G  9%   /data06
/dev/sdh1  1.0T  90G   935G  9%   /data07
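The size mismatch can also be spotted mechanically. The following sketch is illustrative only: rather than calling df itself, it embeds the device, size, and mount point columns from the df output above as sample data, finds the most common size among the data partitions, and reports any partition that deviates from it.

```shell
# Sample data mirroring the `df -h` output above (device, size, mount point).
# On a live system, something like `df -h /data0*` would supply the real values.
df_sample='/dev/sdb1 250G /data01
/dev/sdc1 1.0T /data02
/dev/sdd1 1.0T /data03
/dev/sde1 1.0T /data04
/dev/sdf1 1.0T /data05
/dev/sdg1 1.0T /data06
/dev/sdh1 1.0T /data07'

# Most common size among the data partitions
common=$(printf '%s\n' "$df_sample" | awk '{print $2}' | sort | uniq -c | sort -rn | awk 'NR==1 {print $2}')

# Report any partition whose size deviates from the majority
printf '%s\n' "$df_sample" | awk -v c="$common" '$2 != c {print $3 " is " $2 " but the others are " c}'
# prints: /data01 is 250G but the others are 1.0T
```

On a correctly deployed AVE this reports nothing, because all data partitions have the same size.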
Cause
Partition /data01 is 250 GB while the others are 1 TB, which is why its utilization is higher than the others. Having different data partition sizes within the same AVE is not supported.
Since this is a 4 TB AVE, there should be six data partitions (1 TB each); source: https://www.delltechnologies.com/asset/en-us/products/data-protection/technical-support/docu91853.pdf ("AVE virtual disk requirements", page 17)
The same page contains this note:
"Because the AVE .ova installation creates three 250 GB storage partitions along with the OS disk, approximately 900 GB of free disk space is required at installation. The AVE .ovf installation, however, does not create storage partitions during installation. Therefore, only enough disk space for the OS disk is required at installation, and subsequent storage partitions can be created on other datastores."
Since this AVE has seven data partitions (one of 250 GB and the rest of 1 TB) instead of six 1 TB partitions, the AVE was deployed incorrectly.
Resolution
In this case, the customer needs to deploy a new 4 TB AVE (with 6 x 1 TB data partitions and no 250 GB partitions) and then replicate all of their data to the new system.
Additional Information
The new deployment would be the responsibility of the local team and Professional Services.