IDPA: DP4400 Disk Failures Cause Data Domain File System Instability

Summary: Disk drives in the DP4400 that log excessive errors can cause Data Domain File System (FS) restarts and instability.


Symptoms

The following symptoms may be observed:

  • The Data Domain file system may report as unavailable or restart repeatedly
  • Data Domain logs and alerts may report that "vol1 is unavailable"
  • Avamar maintenance services fail due to MSG_ERR_DDR_ERROR
  • Unexpectedly high capacity usage due to repeated Avamar maintenance or Data Domain cleaning failures
  • iDRAC may show all disks as healthy while the controller logs show otherwise


Examples:
Data Domain may log alerts such as:

ALERT Filesystem EVT-FILESYS-00002: Problem is preventing filesystem from running.
EVT-STORAGE-00020: The Active tier is unavailable.
EVT-FILESYS-00011: DDFS process died; restarting


In the log file /ddr/var/log/debug/ddfs.info, errors such as the following may be seen:

Jun 30 11:48:28 idpa-dd ddfs[8504]: ERROR: MSG-SL-00004: Volume vol1 is unavailable. err:Missing storage device.
Jun 30 11:58:20 idpa-dd ddfs[15962]: ERROR: MSG-SL-00004: Volume vol1 is unavailable. err:Missing storage device.



The log file /ddr/var/log/debug/kern.info may report disk group errors such as:

Jun 30  18:51:08 idpa-dd kernel: [10002271.298276] (E4)DD_RAID: Array [dg2/ppart14] encountered READ I/O errors [57.57 dm-10p5 6000c290ea0836a3178bab0785368300] [dev idx: 0] [stripe: 516562] [gs:ffff880ce56ed210, request:ffff880ce9ebeb40] faults:1
Jun 30  18:51:08 idpa-dd kernel: [10002271.298302] (E4)ERROR: dd_dgrp.c:5731 dd_dgrp_array_internal_notification:: Too many disks failed [1, 14, 0]
Jun 30  18:51:08 idpa-dd kernel: [10002271.298305] (E4)DD_RAID: DiskGroup [dg2] has total failure!



Or additional errors such as:

idpa-dd kernel: [56127713.299919] (E4)sd 2:0:1:0: [sds] tag#0 Sense Key : Medium Error [current]
idpa-dd kernel: [56127713.299921] (E4)sd 2:0:1:0: [sds] tag#0 Add. Sense: No additional sense information
idpa-dd kernel: [56127713.299924] (E4)sd 2:0:1:0: [sds] tag#0 CDB: Read(16) 88 00 00 00 00 01 ed 7c 57 42 00 00 02 01 00 00
idpa-dd kernel: [56127713.299926] (E4)dd_blk_update_request: I/O error, dev sds, sector 8279316290
idpa-dd kernel: [56127713.299949] (E4)DEBUG: dd_array_error.c:512 dd_array_handle_fault:: nr_faults:1 array->level_info.nr_disks:1
idpa-dd kernel: [56127713.299956] (E4)DD_RAID: Array [dg2/ppart8] encountered READ I/O errors  [57.57 dm-18p5 6000c2963d6777f9dc56d52993b4f044] [dev idx: 0] [stripe: 806949] [gs:ffff880c10e92220, request:ffff880ce4ec4ca8] faults:1
idpa-dd kernel: [56128442.963940] (E4)DD_RAID: DiskGroup [dg2] has total failure!
idpa-dd kernel: [56128442.963964] (E4)DD_RAID: Array [dg2/ext3]: Suspended
idpa-dd kernel: [56128442.963988] (E4)DD_RAID: Array [dg2/ext3_1]: Suspended
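The signatures above can be located with a quick search of the two debug logs. This is a minimal sketch: the `scan_logs` helper name and the readability guard are illustrative, while the log paths and message strings are the ones shown in this article.

```shell
#!/bin/sh
# Search the DDVE debug logs for the failure signatures shown above.
# scan_logs is an illustrative helper; the log paths come from this article.
scan_logs() {
  # vol1-unavailable errors from the file system log
  grep -h 'MSG-SL-00004' "$1" 2>/dev/null
  # disk group failures from the kernel log
  grep -hE 'Too many disks failed|has total failure' "$2" 2>/dev/null
}

# Only run where the DDVE debug logs are actually present
if [ -r /ddr/var/log/debug/ddfs.info ]; then
  scan_logs /ddr/var/log/debug/ddfs.info /ddr/var/log/debug/kern.info
fi
```

If either grep returns matches, continue with the PERC log collection in the Resolution section to identify the physical drive behind the failing disk group.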

Cause

In the IDPA DP4400, the Data Domain virtual machine uses datastores built from the appliance's devices and disk drives. If disk drives in VD02 or VD03 log errors at a high rate, datastore performance can degrade enough for DDOS to mark the disk device as unavailable and attempt to restart the file system.

The physical disk to volume mapping for the DP4400 is as follows:

Virtual disk   RAID level   Physical disks                                  Datastore name           Description
VD01           RAID 1       Disks 00:01:00 and 00:01:01 (disks 0 and 1)     DP-appliance-datastore   Datastore location for the VMs
VD02           RAID 6       Disks 00:01:02 to 00:01:09 (disks 2-9)          DP-appliance-ddve1       DDVE1 datastore location for the DDVE file system (present in DP4400S and DP4400 models)
VD03           RAID 6       Disks 00:01:10 to 00:01:17 (disks 10-17)        DP-appliance-ddve2       DDVE2 datastore location for the DDVE file system (present only in the DP4400 model)

 

Resolution

  1. Collect the RAID controller (PERC) logs using one of the following options:

      • Show the status of each disk:
        idpa-acm# showfru disk
      • Collect the PERC logs from the ACM:
        idpa-acm# dpacli -host 192.168.100.101 -logs Perc -output perc_logs.tgz
      • Access the ESXi host using the CLI and run the following:
        idpa-esx# perccli /c0 show termlog > /tmp/ttylog.txt
        idpa-esx# perccli /c0 show events > /tmp/events.txt
  2. From these logs, review for events such as those shown in the following examples:
06/17/23 5:02:22: C0:EVT#97309-06/17/23 5:02:22: 113=Unexpected sense: PD 03(e0x20/s3) Path 50000399c882671a, CDB: 88 00 00 00 00 00 7e b4 72 29 00 00 01 d7 00 00, Sense: 3/11/01
06/17/23 5:02:22: C0:Raw Sense for PD 3: 72 03 11 01 00 00 00 34 00 0a 80 00 00 00 00 00 7e b4 72 29 02 06 00 00 80 00 3f 00 80 1e 00 88 81 07 02 0f 01 13 00 00 7f cd 01 38 00 02 00 22 1a 40 00 14 c0 c0 0f 00 7f d2 ff ff
06/17/23 5:02:22: C0:DM_PerformSenseDataRecovery:Medium Error DevId[3] devHandle d RDM=40d47600 retries=0 callback=c0358e30
06/17/23 5:02:22: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=427, ld=1, src=7, cmd=2, lba=2f83aac00, cnt=400, rmwOp=0

06/21/23 5:30:01: C0:EVT#97500-06/21/23 5:30:01: 110=Corrected medium error during recovery on PD 03(e0x20/s3) at d05a2e0a
06/21/23 5:30:01: C0:Issuing write verify pd=03 physArm=1 span=0 startBlk=d05a2e13 numBlks=1
06/21/23 5:30:01: C0:EVT#97501-06/21/23 5:30:01: 110=Corrected medium error during recovery on PD 03(e0x20/s3) at d05a2e13
06/21/23 5:30:01: C0:Issuing write verify pd=03 physArm=1 span=0 startBlk=d05a2e14 numBlks=1


seqNum: 0x00002999
Time: Mon Mar 20 17:53:50 2023

Code: 0x0000005d
Class: 0
Locale: 0x02
Event Description: Patrol Read corrected medium error on PD 0a(e0x20/s10) at 8912fa1c
Event Data:
===========
Device ID: 10
Enclosure Index: 32
Slot Number: 10
LBA: 2299722268


seqNum: 0x0000299a
Time: Mon Mar 20 17:53:50 2023

Code: 0x00000071
Class: 0
Locale: 0x02
Event Description: Unexpected sense: PD 0a(e0x20/s10) Path 50000399e8429da2, CDB: 8f 00 00 00 00 00 89 12 fa 1d 00 00 10 00 00 00, Sense: 3/11/01
Event Data:
===========
Device ID: 10
Enclosure Index: 32
Slot Number: 10
CDB Length: 16
CDB Data:
008f 0000 0000 0000 0000 0000 0089 0012 00fa 001d 0000 0000 0010 0000 0000 0000 Sense Length: 60
Sense Data:
0072 0003 0011 0001 0000 0000 0000 0034 0000 000a 0080 0000 0000 0000 0000 0000 0089 0012 00fa 001d 0002 0006 0000 0000 0080 0000 0000 0000 0080 001e 0000 008f 0081 0007 0002 000a 0000 00d6 0000 0000 008d 003e 0000 00ef 0000 0002 0000 0022 001f 0040 0000 0000 00fd 00fd 000a 0000 008d 003e 00ff 00ff 0000 0000 0000 0000

 

Check for patterns and repeated errors. A large number of events logged from a single drive indicates which device is causing problems:

$ grep -i "medium error" ttylog.txt
05/08/23 17:30:18: C0:DM_PerformSenseDataRecovery:Medium Error DevId[b] devHandle 15 RDM=40da6800 retries=0 callback=c0358e2c
05/08/23 17:30:18: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=ae, ld=2, src=1, cmd=1, lba=26ca06f8b, cnt=200, rmwOp=0
05/08/23 17:30:21: C0:DM_PerformSenseDataRecovery:Medium Error DevId[b] devHandle 15 RDM=40da6800 retries=0 callback=c0358e2c
05/08/23 17:30:21: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=ae, ld=2, src=1, cmd=1, lba=26ca06f8b, cnt=200, rmwOp=0
05/08/23 17:30:24: C0:DM_PerformSenseDataRecovery:Medium Error DevId[b] devHandle 15 RDM=40da6800 retries=0 callback=c0358e2c
05/08/23 17:30:24: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=ae, ld=2, src=1, cmd=1, lba=26ca06f8b, cnt=200, rmwOp=0
05/08/23 17:30:26: C0:DM_PerformSenseDataRecovery:Medium Error DevId[b] devHandle 15 RDM=40da6800 retries=0 callback=c0358e2c
05/08/23 17:30:26: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=ae, ld=2, src=1, cmd=1, lba=26ca06f8b, cnt=200, rmwOp=0
05/08/23 17:30:28: C0:DM_PerformSenseDataRecovery:Medium Error DevId[b] devHandle 15 RDM=40da6800 retries=0 callback=c0358e2c
05/08/23 17:30:28: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=ae, ld=2, src=1, cmd=1, lba=26ca06f8b, cnt=200, rmwOp=0
05/08/23 17:30:31: C0:DM_PerformSenseDataRecovery:Medium Error DevId[b] devHandle 15 RDM=40da6800 retries=0 callback=c0358e2c
05/08/23 17:30:31: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=ae, ld=2, src=1, cmd=1, lba=26ca06f8b, cnt=200, rmwOp=0
.
.
$ grep -i "medium error" ttylog.txt | wc -l
2168


$ grep -i "command timeout" ttylog.txt
05/16/23  5:36:54: C0:EVT#06386-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 03 7b 82 d6 49 00 00 00 68 00 00
05/16/23  5:36:54: C0:EVT#06387-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 02 e9 7e 90 f2 00 00 00 3f 00 00
05/16/23  5:36:54: C0:EVT#06388-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 02 e9 7e 8e 7e 00 00 00 6d 00 00
05/16/23  5:36:54: C0:EVT#06389-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 03 7b 82 d9 5e 00 00 00 61 00 00
05/16/23  5:36:54: C0:EVT#06390-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 03 7b 82 d9 33 00 00 00 2b 00 00
05/16/23  5:36:54: C0:EVT#06391-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 03 7b 82 e6 c3 00 00 00 70 00 00
05/16/23  5:36:54: C0:EVT#06392-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 03 7b 82 e5 55 00 00 00 60 00 00
05/16/23  5:36:54: C0:EVT#06393-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 02 e9 7e 8e f0 00 00 00 7f 00 00
05/16/23  5:36:54: C0:EVT#06394-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 03 81 91 08 00 00 00 00 4e 00 00
.
.
$ grep -i "command timeout" ttylog.txt |wc -l
58
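A quick way to triage the collected termlog is to count each key error signature in one pass. This is a sketch: the `triage` helper name is illustrative, and ttylog.txt is the file exported with perccli in step 1.

```shell
#!/bin/sh
# Count the key PERC error signatures in a collected termlog.
# "triage" is an illustrative helper name; ttylog.txt comes from
# "perccli /c0 show termlog > /tmp/ttylog.txt" in step 1.
triage() {
  log="$1"
  for pat in "medium error" "command timeout" "unexpected sense" "was reset"; do
    # grep -ic: case-insensitive count of matching lines
    printf '%-18s %s\n' "$pat:" "$(grep -ic "$pat" "$log")"
  done
}

# Only run if the exported termlog is present in the working directory
if [ -r ttylog.txt ]; then
  triage ttylog.txt
fi
```

A disproportionately high count for one signature, combined with the per-DevID output of the earlier grep commands, points to the drive to investigate.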


In the examples above, the disk in slot 11 (DevID b) is logging media and timeout errors at a high rate.

NOTE: In the PERC logs, the DevID is shown in hexadecimal. DevID "0b" is "11" in decimal, so it refers to Slot 11.
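Since shell arithmetic accepts C-style hexadecimal constants, this conversion can be done directly at the prompt:

```shell
# Convert a hexadecimal PERC DevID (for example "0b") to a decimal slot number
devid=0b
echo "$((0x$devid))"   # prints 11
```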


The following examples show disk drive problems, such as disk resets, being logged by the controller.

This example shows a problem caused by a drive that constantly resets, affecting the associated virtual disk:

2022-01-21 01:58:39 PDR87 Disk 12 in Backplane 1 of Integrated RAID Controller 1 was reset. 
2022-01-21 01:58:39 LOG007 The previous log entry was repeated 27 times. 
2022-01-21 01:56:05 PDR87 Disk 12 in Backplane 1 of Integrated RAID Controller 1 was reset. 
2022-01-21 01:56:05 LOG007 The previous log entry was repeated 988 times.
.
.
2022-01-21 04:00:36 545196 PDR87 Disk 12 in Backplane 1 of Integrated RAID Controller 1 was reset.
2022-01-21 03:58:39 545193 PDR87 Disk 12 in Backplane 1 of Integrated RAID Controller 1 was reset.
2022-01-21 03:56:05 545190 PDR87 Disk 12 in Backplane 1 of Integrated RAID Controller 1 was reset.
.
.
2022-01-25 19:21:49 545547 PDR3 Disk 12 in Backplane 1 of Integrated RAID Controller 1 is not functioning correctly.
2022-01-25 19:21:49 545548 VDR56 Redundancy of Virtual Disk 1 on Integrated RAID Controller 1 has been degraded.
2022-01-25 19:21:49 545549 PDR87 Disk 12 in Backplane 1 of Integrated RAID Controller 1 was reset.
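To see whether the resets concentrate on a single drive, the reset events can be tallied per disk from the exported log. This is a sketch: the `tally_resets` helper name and the file name lclog.txt are illustrative, not Dell tooling; the message format is the one shown above.

```shell
#!/bin/sh
# Tally "was reset" events per disk in an exported controller/Lifecycle log.
# tally_resets and lclog.txt are illustrative names, not Dell tooling.
tally_resets() {
  grep 'was reset' "$1" \
    | grep -o 'Disk [0-9]* in Backplane [0-9]*' \
    | sort | uniq -c | sort -rn   # highest reset count first
}

# Only run if an exported log is present in the working directory
if [ -r lclog.txt ]; then
  tally_resets lclog.txt
fi
```

Remember that LOG007 "repeated N times" entries mean the true reset count can be far higher than the number of distinct log lines.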

 

A drive flagged as predictive failure can also cause problems:

2022-09-05 23:01:56 11008 PDR87 Disk 1 in Backplane 1 of RAID Controller in Slot 8 was reset.
2022-09-05 22:55:28 11003 PDR87 Disk 1 in Backplane 1 of RAID Controller in Slot 8 was reset
2022-09-05 23:02:23 11010 PDR87 Disk 1 in Backplane 1 of RAID Controller in Slot 8 was reset.
2022-09-05 23:01:56 11009 PDR16 Predictive failure reported for Disk 1 in Backplane 1 of RAID Controller in Slot 8.
2022-09-05 23:03:28 11012 PDR54 A disk media error on Disk 1 in Backplane 1 of RAID Controller in Slot 8 was corrected during recovery.
2022-09-05 23:02:28 11011 PDR16 Predictive failure reported for Disk 1 in Backplane 1 of RAID Controller in Slot 8.
2022-09-06 10:22:26 11034 PDR54 A disk media error on Disk 1 in Backplane 1 of RAID Controller in Slot 8 was corrected during recovery.
2022-09-06 00:11:27 11029 PDR54 A disk media error on Disk 1 in Backplane 1 of RAID Controller in Slot 8 was corrected during recovery.
2022-09-05 23:18:32 11015 PDR54 A disk media error on Disk 1 in Backplane 1 of RAID Controller in Slot 8 was corrected during recovery.
2022-09-05 23:06:26 11014 PDR16 Predictive failure reported for Disk 1 in Backplane 1 of RAID Controller in Slot 8.

 

  3. View and identify the appliance disk information using one of the following methods:
      • Use the iDRAC or TSR data to view the drive information
      • From the ACM OS, use the following command to view disk information: showfru disk
  4. Contact Dell Support to open a service request, and reference this article to confirm the disk replacement.

     

    NOTE: To reduce the risk of further issues, it is recommended to disable the Data Domain file system until the disk has been replaced.

      This is done from the Data Domain CLI by running the command:

    filesys disable

     

    CAUTION: If multiple disk drives appear faulty or log excessive errors, do not proactively replace any disks before Dell Support has been engaged. Excessive disk failures can lead to data loss.

    Affected Products

    PowerProtect Data Protection Appliance, PowerProtect DP4400, Integrated Data Protection Appliance Family, PowerProtect Data Protection Hardware, Integrated Data Protection Appliance Software
    Article Properties
    Article Number: 000216674
    Article Type: Solution
    Last Modified: 07 May 2026
    Version: 3