IDPA: DP4400 Disk Errors Cause Data Domain Filesystem Instability

Summary: Disk drives within the DP4400 that log excessive errors can cause Data Domain File System (FS) restarts and instability.


Symptoms

The following symptoms may be seen:

  • Data Domain Filesystem may report as unavailable or restart repeatedly
  • Logs and alerts within Data Domain may report "vol1 is unavailable"
  • Avamar maintenance services fail with MSG_ERR_DDR_ERROR
  • Unexpectedly high capacity usage due to repeated failure of Avamar maintenance or Data Domain cleaning
  • The iDRAC may show all disks as healthy, while the controller logs show otherwise
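
If these symptoms are suspected, the filesystem state and any active alerts can be confirmed from the Data Domain CLI (a quick check, assuming SSH or console access to the DDVE):

filesys status
alerts show current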


Examples:
The Data Domain may log alerts such as: 

ALERT Filesystem EVT-FILESYS-00002: Problem is preventing filesystem from running.
EVT-STORAGE-00020: The Active tier is unavailable.
EVT-FILESYS-00011: DDFS process died; restarting


In the log file /ddr/var/log/debug/ddfs.info, you may see errors such as:

Jun 30 11:48:28 idpa-dd ddfs[8504]: ERROR: MSG-SL-00004: Volume vol1 is unavailable. err:Missing storage device.
Jun 30 11:58:20 idpa-dd ddfs[15962]: ERROR: MSG-SL-00004: Volume vol1 is unavailable. err:Missing storage device.
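
To gauge how often the volume is dropping, these events can be counted directly in the log; a convenience check using standard grep against the path above:

$ grep -c "MSG-SL-00004" /ddr/var/log/debug/ddfs.info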



The log file /ddr/var/log/debug/kern.info may report disk group errors such as:

Jun 30  18:51:08 idpa-dd kernel: [10002271.298276] (E4)DD_RAID: Array [dg2/ppart14] encountered READ I/O errors [57.57 dm-10p5 6000c290ea0836a3178bab0785368300] [dev idx: 0] [stripe: 516562] [gs:ffff880ce56ed210, request:ffff880ce9ebeb40] faults:1
Jun 30  18:51:08 idpa-dd kernel: [10002271.298302] (E4)ERROR: dd_dgrp.c:5731 dd_dgrp_array_internal_notification:: Too many disks failed [1, 14, 0]
Jun 30  18:51:08 idpa-dd kernel: [10002271.298305] (E4)DD_RAID: DiskGroup [dg2] has total failure!
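
All RAID-layer errors can be pulled from the kernel log in one pass with a simple filter (an illustrative grep, not a Dell tool):

$ grep -E "DD_RAID|dd_dgrp" /ddr/var/log/debug/kern.info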



Or further errors such as:

idpa-dd kernel: [56127713.299919] (E4)sd 2:0:1:0: [sds] tag#0 Sense Key : Medium Error [current]
idpa-dd kernel: [56127713.299921] (E4)sd 2:0:1:0: [sds] tag#0 Add. Sense: No additional sense information
idpa-dd kernel: [56127713.299924] (E4)sd 2:0:1:0: [sds] tag#0 CDB: Read(16) 88 00 00 00 00 01 ed 7c 57 42 00 00 02 01 00 00
idpa-dd kernel: [56127713.299926] (E4)dd_blk_update_request: I/O error, dev sds, sector 8279316290
idpa-dd kernel: [56127713.299949] (E4)DEBUG: dd_array_error.c:512 dd_array_handle_fault:: nr_faults:1 array->level_info.nr_disks:1
idpa-dd kernel: [56127713.299956] (E4)DD_RAID: Array [dg2/ppart8] encountered READ I/O errors  [57.57 dm-18p5 6000c2963d6777f9dc56d52993b4f044] [dev idx: 0] [stripe: 806949] [gs:ffff880c10e92220, request:ffff880ce4ec4ca8] faults:1
idpa-dd kernel: [56128442.963940] (E4)DD_RAID: DiskGroup [dg2] has total failure!
idpa-dd kernel: [56128442.963964] (E4)DD_RAID: Array [dg2/ext3]: Suspended
idpa-dd kernel: [56128442.963988] (E4)DD_RAID: Array [dg2/ext3_1]: Suspended
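
When sd-level errors such as these appear, tallying them per device quickly shows whether a single drive dominates; a minimal sketch against the same kernel log:

$ grep "I/O error" /ddr/var/log/debug/kern.info | grep -oE "dev sd[a-z]+" | sort | uniq -c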

Cause

In the IDPA DP4400, the Data Domain virtual machine uses datastores that are made up of volumes and disk drives within the appliance. If any disk drives in VD02 or VD03 are logging errors at a high rate, datastore performance can degrade enough that DDOS marks the volume as unavailable and attempts to restart the filesystem.

The physical disk to volume mapping for the DP4400 is as follows:

Virtual Disk   RAID Level   Physical Disks                                   Datastore Name           Description
VD01           RAID 1       Disks 00:01:00 and 00:01:01 (disks 0 and 1)      DP-appliance-datastore   Datastore for the appliance VMs
VD02           RAID 6       Disks 00:01:02 through 00:01:09 (disks 2 - 9)    DP-appliance-ddve1       DDVE1 datastore for the DDVE filesystem (DP4400S and DP4400 models)
VD03           RAID 6       Disks 00:01:10 through 00:01:17 (disks 10 - 17)  DP-appliance-ddve2       DDVE2 datastore for the DDVE filesystem (DP4400 model only)
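
For reference when reading controller logs, the slot-to-virtual-disk mapping above can be expressed as a trivial shell helper (purely illustrative, not part of the appliance tooling):

# Purely illustrative: map a DP4400 physical disk slot (0-17) to its
# virtual disk and datastore, following the table above.
slot_to_vd() {
  case "$1" in
    0|1)    echo "VD01 (DP-appliance-datastore)" ;;
    [2-9])  echo "VD02 (DP-appliance-ddve1)" ;;
    1[0-7]) echo "VD03 (DP-appliance-ddve2)" ;;
    *)      echo "unknown slot: $1" ;;
  esac
}

For example, slot_to_vd 11 prints "VD03 (DP-appliance-ddve2)".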

 

Resolution

  1. Collect the logs from the RAID controller (PERC) using one of the following options:

      • Show the status for each disk from the ACM:
        Idpa-acm# showfru disk
      • Collect the PERC logs from the ACM:
        Idpa-acm# dpacli -host 192.168.100.101 -logs Perc -output perc_logs.tgz
      • Access the ESXi host using the CLI and run the following:
        Idpa-esx# perccli /c0 show termlog > /tmp/ttylog.txt
        Idpa-esx# perccli /c0 show events > /tmp/events.txt
  2. From those logs, review for events such as those shown in the following examples:
06/17/23 5:02:22: C0:EVT#97309-06/17/23 5:02:22: 113=Unexpected sense: PD 03(e0x20/s3) Path 50000399c882671a, CDB: 88 00 00 00 00 00 7e b4 72 29 00 00 01 d7 00 00, Sense: 3/11/01
06/17/23 5:02:22: C0:Raw Sense for PD 3: 72 03 11 01 00 00 00 34 00 0a 80 00 00 00 00 00 7e b4 72 29 02 06 00 00 80 00 3f 00 80 1e 00 88 81 07 02 0f 01 13 00 00 7f cd 01 38 00 02 00 22 1a 40 00 14 c0 c0 0f 00 7f d2 ff ff
06/17/23 5:02:22: C0:DM_PerformSenseDataRecovery:Medium Error DevId[3] devHandle d RDM=40d47600 retries=0 callback=c0358e30
06/17/23 5:02:22: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=427, ld=1, src=7, cmd=2, lba=2f83aac00, cnt=400, rmwOp=0

06/21/23 5:30:01: C0:EVT#97500-06/21/23 5:30:01: 110=Corrected medium error during recovery on PD 03(e0x20/s3) at d05a2e0a
06/21/23 5:30:01: C0:Issuing write verify pd=03 physArm=1 span=0 startBlk=d05a2e13 numBlks=1
06/21/23 5:30:01: C0:EVT#97501-06/21/23 5:30:01: 110=Corrected medium error during recovery on PD 03(e0x20/s3) at d05a2e13
06/21/23 5:30:01: C0:Issuing write verify pd=03 physArm=1 span=0 startBlk=d05a2e14 numBlks=1


seqNum: 0x00002999
Time: Mon Mar 20 17:53:50 2023

Code: 0x0000005d
Class: 0
Locale: 0x02
Event Description: Patrol Read corrected medium error on PD 0a(e0x20/s10) at 8912fa1c
Event Data:
===========
Device ID: 10
Enclosure Index: 32
Slot Number: 10
LBA: 2299722268


seqNum: 0x0000299a
Time: Mon Mar 20 17:53:50 2023

Code: 0x00000071
Class: 0
Locale: 0x02
Event Description: Unexpected sense: PD 0a(e0x20/s10) Path 50000399e8429da2, CDB: 8f 00 00 00 00 00 89 12 fa 1d 00 00 10 00 00 00, Sense: 3/11/01
Event Data:
===========
Device ID: 10
Enclosure Index: 32
Slot Number: 10
CDB Length: 16
CDB Data:
008f 0000 0000 0000 0000 0000 0089 0012 00fa 001d 0000 0000 0010 0000 0000 0000
Sense Length: 60
Sense Data:
0072 0003 0011 0001 0000 0000 0000 0034 0000 000a 0080 0000 0000 0000 0000 0000 0089 0012 00fa 001d 0002 0006 0000 0000 0080 0000 0000 0000 0080 001e 0000 008f 0081 0007 0002 000a 0000 00d6 0000 0000 008d 003e 0000 00ef 0000 0002 0000 0022 001f 0040 0000 0000 00fd 00fd 000a 0000 008d 003e 00ff 00ff 0000 0000 0000 0000

 

Check for patterns and repetitive errors. Many events logged from a single drive indicate which device is causing the problems:

$ grep -i "medium error" ttylog.txt
05/08/23 17:30:18: C0:DM_PerformSenseDataRecovery:Medium Error DevId[b] devHandle 15 RDM=40da6800 retries=0 callback=c0358e2c
05/08/23 17:30:18: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=ae, ld=2, src=1, cmd=1, lba=26ca06f8b, cnt=200, rmwOp=0
05/08/23 17:30:21: C0:DM_PerformSenseDataRecovery:Medium Error DevId[b] devHandle 15 RDM=40da6800 retries=0 callback=c0358e2c
05/08/23 17:30:21: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=ae, ld=2, src=1, cmd=1, lba=26ca06f8b, cnt=200, rmwOp=0
05/08/23 17:30:24: C0:DM_PerformSenseDataRecovery:Medium Error DevId[b] devHandle 15 RDM=40da6800 retries=0 callback=c0358e2c
05/08/23 17:30:24: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=ae, ld=2, src=1, cmd=1, lba=26ca06f8b, cnt=200, rmwOp=0
05/08/23 17:30:26: C0:DM_PerformSenseDataRecovery:Medium Error DevId[b] devHandle 15 RDM=40da6800 retries=0 callback=c0358e2c
05/08/23 17:30:26: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=ae, ld=2, src=1, cmd=1, lba=26ca06f8b, cnt=200, rmwOp=0
05/08/23 17:30:28: C0:DM_PerformSenseDataRecovery:Medium Error DevId[b] devHandle 15 RDM=40da6800 retries=0 callback=c0358e2c
05/08/23 17:30:28: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=ae, ld=2, src=1, cmd=1, lba=26ca06f8b, cnt=200, rmwOp=0
05/08/23 17:30:31: C0:DM_PerformSenseDataRecovery:Medium Error DevId[b] devHandle 15 RDM=40da6800 retries=0 callback=c0358e2c
05/08/23 17:30:31: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=ae, ld=2, src=1, cmd=1, lba=26ca06f8b, cnt=200, rmwOp=0
.
.
$ grep -i "medium error" ttylog.txt | wc -l
2168
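
If more than one drive is suspect, tally the DevId field across the whole termlog to see where the errors cluster (a convenience one-liner, assuming the DevId[...] format shown above):

$ grep -oE "DevId\[[0-9a-f]+\]" ttylog.txt | sort | uniq -c | sort -rn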


$ grep -i "command timeout" ttylog.txt
05/16/23  5:36:54: C0:EVT#06386-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 03 7b 82 d6 49 00 00 00 68 00 00
05/16/23  5:36:54: C0:EVT#06387-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 02 e9 7e 90 f2 00 00 00 3f 00 00
05/16/23  5:36:54: C0:EVT#06388-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 02 e9 7e 8e 7e 00 00 00 6d 00 00
05/16/23  5:36:54: C0:EVT#06389-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 03 7b 82 d9 5e 00 00 00 61 00 00
05/16/23  5:36:54: C0:EVT#06390-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 03 7b 82 d9 33 00 00 00 2b 00 00
05/16/23  5:36:54: C0:EVT#06391-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 03 7b 82 e6 c3 00 00 00 70 00 00
05/16/23  5:36:54: C0:EVT#06392-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 03 7b 82 e5 55 00 00 00 60 00 00
05/16/23  5:36:54: C0:EVT#06393-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 02 e9 7e 8e f0 00 00 00 7f 00 00
05/16/23  5:36:54: C0:EVT#06394-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 03 81 91 08 00 00 00 00 4e 00 00
.
.
$ grep -i "command timeout" ttylog.txt |wc -l
58


In the above examples, the disk in slot 11 (DevId b) is logging medium errors and command timeouts at a high rate.

NOTE: Within the PERC logs, the DevID is shown in hexadecimal. DevID "0b" is 11 in decimal, so it refers to slot 11.
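
A quick way to convert the hexadecimal DevID to a decimal slot number from any shell:

$ printf '%d\n' 0x0b
11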


The following examples show other drive problems, such as disk resets logged by the controller.

This example shows a drive that constantly resets, causing problems in the affected virtual disk:

2022-01-21 01:58:39 PDR87 Disk 12 in Backplane 1 of Integrated RAID Controller 1 was reset. 
2022-01-21 01:58:39 LOG007 The previous log entry was repeated 27 times. 
2022-01-21 01:56:05 PDR87 Disk 12 in Backplane 1 of Integrated RAID Controller 1 was reset. 
2022-01-21 01:56:05 LOG007 The previous log entry was repeated 988 times.
.
.
2022-01-21 04:00:36 545196 PDR87 Disk 12 in Backplane 1 of Integrated RAID Controller 1 was reset.
2022-01-21 03:58:39 545193 PDR87 Disk 12 in Backplane 1 of Integrated RAID Controller 1 was reset.
2022-01-21 03:56:05 545190 PDR87 Disk 12 in Backplane 1 of Integrated RAID Controller 1 was reset.
.
.
2022-01-25 19:21:49 545547 PDR3 Disk 12 in Backplane 1 of Integrated RAID Controller 1 is not functioning correctly.
2022-01-25 19:21:49 545548 VDR56 Redundancy of Virtual Disk 1 on Integrated RAID Controller 1 has been degraded.
2022-01-25 19:21:49 545549 PDR87 Disk 12 in Backplane 1 of Integrated RAID Controller 1 was reset.

 

A drive that has been flagged for predictive failure can also cause problems:

2022-09-05 23:01:56 11008 PDR87 Disk 1 in Backplane 1 of RAID Controller in Slot 8 was reset.
2022-09-05 22:55:28 11003 PDR87 Disk 1 in Backplane 1 of RAID Controller in Slot 8 was reset
2022-09-05 23:02:23 11010 PDR87 Disk 1 in Backplane 1 of RAID Controller in Slot 8 was reset.
2022-09-05 23:01:56 11009 PDR16 Predictive failure reported for Disk 1 in Backplane 1 of RAID Controller in Slot 8.
2022-09-05 23:03:28 11012 PDR54 A disk media error on Disk 1 in Backplane 1 of RAID Controller in Slot 8 was corrected during recovery.
2022-09-05 23:02:28 11011 PDR16 Predictive failure reported for Disk 1 in Backplane 1 of RAID Controller in Slot 8.
2022-09-06 10:22:26 11034 PDR54 A disk media error on Disk 1 in Backplane 1 of RAID Controller in Slot 8 was corrected during recovery.
2022-09-06 00:11:27 11029 PDR54 A disk media error on Disk 1 in Backplane 1 of RAID Controller in Slot 8 was corrected during recovery.
2022-09-05 23:18:32 11015 PDR54 A disk media error on Disk 1 in Backplane 1 of RAID Controller in Slot 8 was corrected during recovery.
2022-09-05 23:06:26 11014 PDR16 Predictive failure reported for Disk 1 in Backplane 1 of RAID Controller in Slot 8.
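
If the iDRAC Lifecycle log has been exported to a text file (named lclog.txt here purely as a placeholder), the reset (PDR87) and predictive failure (PDR16) message IDs can be counted the same way as the termlog errors above:

$ grep -cE "PDR87|PDR16" lclog.txt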

 

  3. View and identify the appliance disk details using one of the following methods:
      • Use the iDRAC or TSR data to view the drive details.
      • From the ACM OS, run the following command to show disk details: showfru disk
  4. Engage Dell Support to create a Service Request and reference this article for confirmation of disk replacement.

     

    NOTE: To reduce the risk of further problems, it is advised to disable the Data Domain filesystem until the disk is replaced.

      This is done from the Data Domain CLI by running the command:

    filesys disable
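
      Once the disk has been replaced and the rebuild has completed, re-enable the filesystem with the corresponding command:

    filesys enable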

     

    CAUTION: If multiple disk drives are showing as failing or having excessive errors, do not proactively replace any disks until Dell Support has been engaged. Excessive disk failures can cause data loss.

    Affected Products

    PowerProtect Data Protection Appliance, PowerProtect DP4400, Integrated Data Protection Appliance Family, PowerProtect Data Protection Hardware, Integrated Data Protection Appliance Software
    Article Properties
    Article Number: 000216674
    Article Type: Solution
    Last Modified: 07 May 2026
    Version: 3