IDPA: DP4400 Disk Errors Causing Data Domain File System Instability

Summary: A disk drive inside the DP4400 that logs excessive errors can cause the Data Domain file system (FS) to restart and become unstable.


Symptoms

The following symptoms may be observed:

  • The Data Domain file system may be reported as unavailable or may restart repeatedly
  • Logs and alerts within Data Domain may report "vol1 is unavailable"
  • Avamar maintenance services fail with MSG_ERR_DDR_ERROR
  • Unexpectedly high capacity usage caused by repeated failures of Avamar maintenance or Data Domain cleaning
  • iDRAC may show all disks as healthy, but the controller logs may indicate otherwise


Example:
Data Domain may log alerts such as the following:

ALERT Filesystem EVT-FILESYS-00002: Problem is preventing filesystem from running.
EVT-STORAGE-00020: The Active tier is unavailable.
EVT-FILESYS-00011: DDFS process died; restarting
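
The current file system and alert state can also be checked from the Data Domain CLI with the following standard DD OS commands (a quick check; exact output varies by DD OS release):

alerts show current
filesys status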


In the log file /ddr/var/log/debug/ddfs.info, you may see errors such as the following:

Jun 30 11:48:28 idpa-dd ddfs[8504]: ERROR: MSG-SL-00004: Volume vol1 is unavailable. err:Missing storage device.
Jun 30 11:58:20 idpa-dd ddfs[15962]: ERROR: MSG-SL-00004: Volume vol1 is unavailable. err:Missing storage device.
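
To gauge how often the volume has been reported unavailable, the same log can be searched; a minimal sketch using the path above:

$ grep -c "MSG-SL-00004" /ddr/var/log/debug/ddfs.info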



The log file /ddr/var/log/debug/kern.info may report disk group errors such as:

Jun 30  18:51:08 idpa-dd kernel: [10002271.298276] (E4)DD_RAID: Array [dg2/ppart14] encountered READ I/O errors [57.57 dm-10p5 6000c290ea0836a3178bab0785368300] [dev idx: 0] [stripe: 516562] [gs:ffff880ce56ed210, request:ffff880ce9ebeb40] faults:1
Jun 30  18:51:08 idpa-dd kernel: [10002271.298302] (E4)ERROR: dd_dgrp.c:5731 dd_dgrp_array_internal_notification:: Too many disks failed [1, 14, 0]
Jun 30  18:51:08 idpa-dd kernel: [10002271.298305] (E4)DD_RAID: DiskGroup [dg2] has total failure!



Or further errors such as:

idpa-dd kernel: [56127713.299919] (E4)sd 2:0:1:0: [sds] tag#0 Sense Key : Medium Error [current]
idpa-dd kernel: [56127713.299921] (E4)sd 2:0:1:0: [sds] tag#0 Add. Sense: No additional sense information
idpa-dd kernel: [56127713.299924] (E4)sd 2:0:1:0: [sds] tag#0 CDB: Read(16) 88 00 00 00 00 01 ed 7c 57 42 00 00 02 01 00 00
idpa-dd kernel: [56127713.299926] (E4)dd_blk_update_request: I/O error, dev sds, sector 8279316290
idpa-dd kernel: [56127713.299949] (E4)DEBUG: dd_array_error.c:512 dd_array_handle_fault:: nr_faults:1 array->level_info.nr_disks:1
idpa-dd kernel: [56127713.299956] (E4)DD_RAID: Array [dg2/ppart8] encountered READ I/O errors  [57.57 dm-18p5 6000c2963d6777f9dc56d52993b4f044] [dev idx: 0] [stripe: 806949] [gs:ffff880c10e92220, request:ffff880ce4ec4ca8] faults:1
idpa-dd kernel: [56128442.963940] (E4)DD_RAID: DiskGroup [dg2] has total failure!
idpa-dd kernel: [56128442.963964] (E4)DD_RAID: Array [dg2/ext3]: Suspended
idpa-dd kernel: [56128442.963988] (E4)DD_RAID: Array [dg2/ext3_1]: Suspended
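
Repeated RAID read errors and disk group failures of this kind can be counted in the same way (a sketch, assuming the kern.info path shown above):

$ grep -c "DD_RAID" /ddr/var/log/debug/kern.info
$ grep -c "I/O error" /ddr/var/log/debug/kern.info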

Cause

In the IDPA DP4400, the Data Domain virtual machine uses datastores that are built from volumes and disk drives within the appliance. If any drive in VD02 or VD03 logs errors at a high rate, datastore performance can degrade far enough that DD OS marks the volume as unavailable and attempts to restart the file system.

The physical disk-to-volume mapping for the DP4400 is as follows:

Virtual Disk | RAID Level | Physical Disks                              | Datastore Name         | Description
VD01         | RAID 1     | Disks 00:01:00 and 00:01:01 (disks 0 and 1) | DP-appliance-datastore | Datastore location for the appliance VMs
VD02         | RAID 6     | Disks 00:01:02 to 00:01:09 (disks 2 - 9)    | DP-appliance-ddve1     | DDVE1 datastore location for the DDVE file system (present on DP4400S and DP4400 models)
VD03         | RAID 6     | Disks 00:01:10 to 00:01:17 (disks 10 - 17)  | DP-appliance-ddve2     | DDVE2 datastore location for the DDVE file system (DP4400 models only)
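
To confirm which datastore maps to which backing device on the ESXi host, the VMFS extents can be listed with a standard esxcli command (a sketch; column layout differs between ESXi versions):

Idpa-esx# esxcli storage vmfs extent list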

 

Resolution

  1. Collect logs from the RAID controller (PERC) using one of the following options:

     Show the status of each disk:
       • Idpa-acm# showfru disk

     Collect the PERC logs from the ACM:
       • Idpa-acm# dpacli -host 192.168.100.101 -logs Perc -output perc_logs.tgz

     Access the ESXi host through the CLI and run the following:
       • Idpa-esx# perccli /c0 show termlog > /tmp/ttylog.txt
       • Idpa-esx# perccli /c0 show events > /tmp/events.txt

  2. In these logs, review the events; for example:
06/17/23 5:02:22: C0:EVT#97309-06/17/23 5:02:22: 113=Unexpected sense: PD 03(e0x20/s3) Path 50000399c882671a, CDB: 88 00 00 00 00 00 7e b4 72 29 00 00 01 d7 00 00, Sense: 3/11/01
06/17/23 5:02:22: C0:Raw Sense for PD 3: 72 03 11 01 00 00 00 34 00 0a 80 00 00 00 00 00 7e b4 72 29 02 06 00 00 80 00 3f 00 80 1e 00 88 81 07 02 0f 01 13 00 00 7f cd 01 38 00 02 00 22 1a 40 00 14 c0 c0 0f 00 7f d2 ff ff
06/17/23 5:02:22: C0:DM_PerformSenseDataRecovery:Medium Error DevId[3] devHandle d RDM=40d47600 retries=0 callback=c0358e30
06/17/23 5:02:22: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=427, ld=1, src=7, cmd=2, lba=2f83aac00, cnt=400, rmwOp=0

06/21/23 5:30:01: C0:EVT#97500-06/21/23 5:30:01: 110=Corrected medium error during recovery on PD 03(e0x20/s3) at d05a2e0a
06/21/23 5:30:01: C0:Issuing write verify pd=03 physArm=1 span=0 startBlk=d05a2e13 numBlks=1
06/21/23 5:30:01: C0:EVT#97501-06/21/23 5:30:01: 110=Corrected medium error during recovery on PD 03(e0x20/s3) at d05a2e13
06/21/23 5:30:01: C0:Issuing write verify pd=03 physArm=1 span=0 startBlk=d05a2e14 numBlks=1


seqNum: 0x00002999
Time: Mon Mar 20 17:53:50 2023

Code: 0x0000005d
Class: 0
Locale: 0x02
Event Description: Patrol Read corrected medium error on PD 0a(e0x20/s10) at 8912fa1c
Event Data:
===========
Device ID: 10
Enclosure Index: 32
Slot Number: 10
LBA: 2299722268


seqNum: 0x0000299a
Time: Mon Mar 20 17:53:50 2023

Code: 0x00000071
Class: 0
Locale: 0x02
Event Description: Unexpected sense: PD 0a(e0x20/s10) Path 50000399e8429da2, CDB: 8f 00 00 00 00 00 89 12 fa 1d 00 00 10 00 00 00, Sense: 3/11/01
Event Data:
===========
Device ID: 10
Enclosure Index: 32
Slot Number: 10
CDB Length: 16
CDB Data:
008f 0000 0000 0000 0000 0000 0089 0012 00fa 001d 0000 0000 0010 0000 0000 0000
Sense Length: 60
Sense Data:
0072 0003 0011 0001 0000 0000 0000 0034 0000 000a 0080 0000 0000 0000 0000 0000 0089 0012 00fa 001d 0002 0006 0000 0000 0080 0000 0000 0000 0080 001e 0000 008f 0081 0007 0002 000a 0000 00d6 0000 0000 008d 003e 0000 00ef 0000 0002 0000 0022 001f 0040 0000 0000 00fd 00fd 000a 0000 008d 003e 00ff 00ff 0000 0000 0000 0000

 

Check for patterns and repeated errors. A single drive logging many events can point to the device causing the problem:

$ grep -i "medium error" ttylog.txt
05/08/23 17:30:18: C0:DM_PerformSenseDataRecovery:Medium Error DevId[b] devHandle 15 RDM=40da6800 retries=0 callback=c0358e2c
05/08/23 17:30:18: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=ae, ld=2, src=1, cmd=1, lba=26ca06f8b, cnt=200, rmwOp=0
05/08/23 17:30:21: C0:DM_PerformSenseDataRecovery:Medium Error DevId[b] devHandle 15 RDM=40da6800 retries=0 callback=c0358e2c
05/08/23 17:30:21: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=ae, ld=2, src=1, cmd=1, lba=26ca06f8b, cnt=200, rmwOp=0
05/08/23 17:30:24: C0:DM_PerformSenseDataRecovery:Medium Error DevId[b] devHandle 15 RDM=40da6800 retries=0 callback=c0358e2c
05/08/23 17:30:24: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=ae, ld=2, src=1, cmd=1, lba=26ca06f8b, cnt=200, rmwOp=0
05/08/23 17:30:26: C0:DM_PerformSenseDataRecovery:Medium Error DevId[b] devHandle 15 RDM=40da6800 retries=0 callback=c0358e2c
05/08/23 17:30:26: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=ae, ld=2, src=1, cmd=1, lba=26ca06f8b, cnt=200, rmwOp=0
05/08/23 17:30:28: C0:DM_PerformSenseDataRecovery:Medium Error DevId[b] devHandle 15 RDM=40da6800 retries=0 callback=c0358e2c
05/08/23 17:30:28: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=ae, ld=2, src=1, cmd=1, lba=26ca06f8b, cnt=200, rmwOp=0
05/08/23 17:30:31: C0:DM_PerformSenseDataRecovery:Medium Error DevId[b] devHandle 15 RDM=40da6800 retries=0 callback=c0358e2c
05/08/23 17:30:31: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=ae, ld=2, src=1, cmd=1, lba=26ca06f8b, cnt=200, rmwOp=0
.
.
$ grep -i "medium error" ttylog.txt | wc -l
2168


$ grep -i "command timeout" ttylog.txt
05/16/23  5:36:54: C0:EVT#06386-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 03 7b 82 d6 49 00 00 00 68 00 00
05/16/23  5:36:54: C0:EVT#06387-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 02 e9 7e 90 f2 00 00 00 3f 00 00
05/16/23  5:36:54: C0:EVT#06388-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 02 e9 7e 8e 7e 00 00 00 6d 00 00
05/16/23  5:36:54: C0:EVT#06389-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 03 7b 82 d9 5e 00 00 00 61 00 00
05/16/23  5:36:54: C0:EVT#06390-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 03 7b 82 d9 33 00 00 00 2b 00 00
05/16/23  5:36:54: C0:EVT#06391-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 03 7b 82 e6 c3 00 00 00 70 00 00
05/16/23  5:36:54: C0:EVT#06392-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 03 7b 82 e5 55 00 00 00 60 00 00
05/16/23  5:36:54: C0:EVT#06393-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 02 e9 7e 8e f0 00 00 00 7f 00 00
05/16/23  5:36:54: C0:EVT#06394-05/16/23  5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 03 81 91 08 00 00 00 00 4e 00 00
.
.
$ grep -i "command timeout" ttylog.txt |wc -l
58


In the examples above, the disk in slot 11 (DevID b) is logging medium errors and command timeouts at a high rate.

Note: In the PERC logs, the DevID is displayed in hexadecimal. A DevID of "0b" is "11" in decimal, so this refers to slot 11.
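
If needed, the hexadecimal-to-decimal conversion can be done in any shell, for example:

$ printf '%d\n' 0x0b
11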


The controller logs can also show other drive problems, such as repeated disk resets.

The following example shows a problem caused by a drive that keeps resetting, which in turn causes issues on the affected virtual disk:

2022-01-21 01:58:39 PDR87 Disk 12 in Backplane 1 of Integrated RAID Controller 1 was reset. 
2022-01-21 01:58:39 LOG007 The previous log entry was repeated 27 times. 
2022-01-21 01:56:05 PDR87 Disk 12 in Backplane 1 of Integrated RAID Controller 1 was reset. 
2022-01-21 01:56:05 LOG007 The previous log entry was repeated 988 times.
.
.
2022-01-21 04:00:36 545196 PDR87 Disk 12 in Backplane 1 of Integrated RAID Controller 1 was reset.
2022-01-21 03:58:39 545193 PDR87 Disk 12 in Backplane 1 of Integrated RAID Controller 1 was reset.
2022-01-21 03:56:05 545190 PDR87 Disk 12 in Backplane 1 of Integrated RAID Controller 1 was reset.
.
.
2022-01-25 19:21:49 545547 PDR3 Disk 12 in Backplane 1 of Integrated RAID Controller 1 is not functioning correctly.
2022-01-25 19:21:49 545548 VDR56 Redundancy of Virtual Disk 1 on Integrated RAID Controller 1 has been degraded.
2022-01-25 19:21:49 545549 PDR87 Disk 12 in Backplane 1 of Integrated RAID Controller 1 was reset.

 

Drives flagged with a predictive failure can also cause problems:

2022-09-05 23:01:56 11008 PDR87 Disk 1 in Backplane 1 of RAID Controller in Slot 8 was reset.
2022-09-05 22:55:28 11003 PDR87 Disk 1 in Backplane 1 of RAID Controller in Slot 8 was reset
2022-09-05 23:02:23 11010 PDR87 Disk 1 in Backplane 1 of RAID Controller in Slot 8 was reset.
2022-09-05 23:01:56 11009 PDR16 Predictive failure reported for Disk 1 in Backplane 1 of RAID Controller in Slot 8.
2022-09-05 23:03:28 11012 PDR54 A disk media error on Disk 1 in Backplane 1 of RAID Controller in Slot 8 was corrected during recovery.
2022-09-05 23:02:28 11011 PDR16 Predictive failure reported for Disk 1 in Backplane 1 of RAID Controller in Slot 8.
2022-09-06 10:22:26 11034 PDR54 A disk media error on Disk 1 in Backplane 1 of RAID Controller in Slot 8 was corrected during recovery.
2022-09-06 00:11:27 11029 PDR54 A disk media error on Disk 1 in Backplane 1 of RAID Controller in Slot 8 was corrected during recovery.
2022-09-05 23:18:32 11015 PDR54 A disk media error on Disk 1 in Backplane 1 of RAID Controller in Slot 8 was corrected during recovery.
2022-09-05 23:06:26 11014 PDR16 Predictive failure reported for Disk 1 in Backplane 1 of RAID Controller in Slot 8.
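
If the iDRAC Lifecycle log has been exported to a text file, repeated reset and predictive failure entries can be counted in the same way as the PERC logs (the file name lclog.txt is only an example):

$ grep -c "was reset" lclog.txt
$ grep -c "Predictive failure" lclog.txt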

 

  3. View and identify the appliance disk details using one of the following methods:
  • View the drive details using iDRAC or the TSR data
  • From the ACM operating system, display the disk details with the following command: showfru disk
  4. Contact Dell Support to create a service request, referencing this article, to confirm the disk replacement.

     

    Note: To reduce the risk of further issues, it is recommended to disable the Data Domain file system until the disk has been replaced.

      This is done from the Data Domain CLI by running the following command:

    filesys disable
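
    After the disk has been replaced, the file system can be re-enabled and its state verified with the following standard DD OS commands (output varies by DD OS release):

    filesys enable
    filesys status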

     

    Warning: If multiple drives show failures or excessive errors, do not proactively replace any disks until Dell Support is engaged. Failing too many disks can result in data loss.

    Affected Products

    PowerProtect Data Protection Appliance, PowerProtect DP4400, Integrated Data Protection Appliance Family, PowerProtect Data Protection Hardware, Integrated Data Protection Appliance Software
    Article Properties
    Article Number: 000216674
    Article Type: Solution
    Last Modified: 07 May 2026
    Version: 3