IDPA: DP4400 disk errors cause Data Domain file system instability
Summary: Disk drives in the DP4400 that are logging excessive errors can cause restarts and instability in the Data Domain File System (FS).
Symptoms
The following symptoms may be observed:
- The Data Domain file system may be reported as unavailable or may restart repeatedly
- Logs and alerts on the Data Domain may report "vol1 is unavailable"
- Avamar maintenance services fail with MSG_ERR_DDR_ERROR
- Unexpectedly high used capacity due to repeated failures of Avamar maintenance or Data Domain cleaning
- The iDRAC may show all disks as healthy while the controller logs show otherwise
Examples:
Data Domain may log alerts such as:
ALERT Filesystem EVT-FILESYS-00002: Problem is preventing filesystem from running.
EVT-STORAGE-00020: The Active tier is unavailable.
EVT-FILESYS-00011: DDFS process died; restarting
In the /ddr/var/log/debug/ddfs.info log file, you may see errors such as:
Jun 30 11:48:28 idpa-dd ddfs[8504]: ERROR: MSG-SL-00004: Volume vol1 is unavailable. err:Missing storage device.
Jun 30 11:58:20 idpa-dd ddfs[15962]: ERROR: MSG-SL-00004: Volume vol1 is unavailable. err:Missing storage device.
The /ddr/var/log/debug/kern.info log file may report disk group errors such as:
Jun 30 18:51:08 idpa-dd kernel: [10002271.298276] (E4)DD_RAID: Array [dg2/ppart14] encountered READ I/O errors [57.57 dm-10p5 6000c290ea0836a3178bab0785368300] [dev idx: 0] [stripe: 516562] [gs:ffff880ce56ed210, request:ffff880ce9ebeb40] faults:1
Jun 30 18:51:08 idpa-dd kernel: [10002271.298302] (E4)ERROR: dd_dgrp.c:5731 dd_dgrp_array_internal_notification:: Too many disks failed [1, 14, 0]
Jun 30 18:51:08 idpa-dd kernel: [10002271.298305] (E4)DD_RAID: DiskGroup [dg2] has total failure!
Or additional errors such as:
idpa-dd kernel: [56127713.299919] (E4)sd 2:0:1:0: [sds] tag#0 Sense Key : Medium Error [current]
idpa-dd kernel: [56127713.299921] (E4)sd 2:0:1:0: [sds] tag#0 Add. Sense: No additional sense information
idpa-dd kernel: [56127713.299924] (E4)sd 2:0:1:0: [sds] tag#0 CDB: Read(16) 88 00 00 00 00 01 ed 7c 57 42 00 00 02 01 00 00
idpa-dd kernel: [56127713.299926] (E4)dd_blk_update_request: I/O error, dev sds, sector 8279316290
idpa-dd kernel: [56127713.299949] (E4)DEBUG: dd_array_error.c:512 dd_array_handle_fault:: nr_faults:1 array->level_info.nr_disks:1
idpa-dd kernel: [56127713.299956] (E4)DD_RAID: Array [dg2/ppart8] encountered READ I/O errors [57.57 dm-18p5 6000c2963d6777f9dc56d52993b4f044] [dev idx: 0] [stripe: 806949] [gs:ffff880c10e92220, request:ffff880ce4ec4ca8] faults:1
idpa-dd kernel: [56128442.963940] (E4)DD_RAID: DiskGroup [dg2] has total failure!
idpa-dd kernel: [56128442.963964] (E4)DD_RAID: Array [dg2/ext3]: Suspended
idpa-dd kernel: [56128442.963988] (E4)DD_RAID: Array [dg2/ext3_1]: Suspended
Cause
On the IDPA DP4400, the Data Domain virtual machine uses datastores built from volumes and disk drives in the appliance. If any disk drive in VD02 or VD03 is logging errors at a high rate, datastore performance can degrade enough that DDOS marks the volume as unavailable and attempts to restart the file system.
The physical disk-to-volume mapping for the DP4400 is as follows:
| Virtual disk | RAID level | Physical disks | Datastore name | Description |
|---|---|---|---|---|
| VD01 | RAID 1 | Disks 00:01:00 and 00:01:01 (disks 0 and 1) | DP-appliance-datastore | Datastore location for VMs |
| VD02 | RAID 6 | Disks 00:01:02 to 00:01:09 (disks 2-9) | DP-appliance-ddve1 | DDVE1 datastore location for the DDVE file system (found on DP4400S and DP4400 models) |
| VD03 | RAID 6 | Disks 00:01:10 to 00:01:17 (disks 10-17) | DP-appliance-ddve2 | DDVE2 datastore location for the DDVE file system (found on the DP4400 model only) |
Resolution
- Collect the RAID controller (PERC) logs using one of the following options:
  - Access the DP4400 iDRAC and view the health of the storage subsystem:
    - View the component status of the volumes and of each physical disk.
    - View the event logs and Lifecycle Controller logs for signs of repeated disk messages.
    - Perform a TSR collection, making sure to select the storage logs. See: Data Domain: How to collect TSR logs on PowerProtect DD3300, DD6900, DD9400, DD9900, and DP4400.
  - Access the ACM using SSH and run the following commands.
    Show the status of each disk:
    idpa-acm# showfru disk
    Collect the PERC logs from the ACM as follows:
    idpa-acm# dpacli -host 192.168.100.101 -logs Perc -output perc_logs.tgz
  - Access the ESXi host using the CLI and run the following:
    idpa-esx# perccli /c0 show termlog > /tmp/ttylog.txt
    idpa-esx# perccli /c0 show events > /tmp/events.txt
- From these logs, you can look for events like those shown in the following examples:
06/17/23 5:02:22: C0:EVT#97309-06/17/23 5:02:22: 113=Unexpected sense: PD 03(e0x20/s3) Path 50000399c882671a, CDB: 88 00 00 00 00 00 7e b4 72 29 00 00 01 d7 00 00, Sense: 3/11/01
06/17/23 5:02:22: C0:Raw Sense for PD 3: 72 03 11 01 00 00 00 34 00 0a 80 00 00 00 00 00 7e b4 72 29 02 06 00 00 80 00 3f 00 80 1e 00 88 81 07 02 0f 01 13 00 00 7f cd 01 38 00 02 00 22 1a 40 00 14 c0 c0 0f 00 7f d2 ff ff
06/17/23 5:02:22: C0:DM_PerformSenseDataRecovery:Medium Error DevId[3] devHandle d RDM=40d47600 retries=0 callback=c0358e30
06/17/23 5:02:22: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=427, ld=1, src=7, cmd=2, lba=2f83aac00, cnt=400, rmwOp=0
06/21/23 5:30:01: C0:EVT#97500-06/21/23 5:30:01: 110=Corrected medium error during recovery on PD 03(e0x20/s3) at d05a2e0a
06/21/23 5:30:01: C0:Issuing write verify pd=03 physArm=1 span=0 startBlk=d05a2e13 numBlks=1
06/21/23 5:30:01: C0:EVT#97501-06/21/23 5:30:01: 110=Corrected medium error during recovery on PD 03(e0x20/s3) at d05a2e13
06/21/23 5:30:01: C0:Issuing write verify pd=03 physArm=1 span=0 startBlk=d05a2e14 numBlks=1

seqNum: 0x00002999
Time: Mon Mar 20 17:53:50 2023
Code: 0x0000005d
Class: 0
Locale: 0x02
Event Description: Patrol Read corrected medium error on PD 0a(e0x20/s10) at 8912fa1c
Event Data:
===========
Device ID: 10
Enclosure Index: 32
Slot Number: 10
LBA: 2299722268

seqNum: 0x0000299a
Time: Mon Mar 20 17:53:50 2023
Code: 0x00000071
Class: 0
Locale: 0x02
Event Description: Unexpected sense: PD 0a(e0x20/s10) Path 50000399e8429da2, CDB: 8f 00 00 00 00 00 89 12 fa 1d 00 00 10 00 00 00, Sense: 3/11/01
Event Data:
===========
Device ID: 10
Enclosure Index: 32
Slot Number: 10
CDB Length: 16
CDB Data: 008f 0000 0000 0000 0000 0000 0089 0012 00fa 001d 0000 0000 0010 0000 0000 0000
Sense Length: 60
Sense Data: 0072 0003 0011 0001 0000 0000 0000 0034 0000 000a 0080 0000 0000 0000 0000 0000 0089 0012 00fa 001d 0002 0006 0000 0000 0080 0000 0000 0000 0080 001e 0000 008f 0081 0007 0002 000a 0000 00d6 0000 0000 008d 003e 0000 00ef 0000 0002 0000 0022 001f 0040 0000 0000 00fd 00fd 000a 0000 008d 003e 00ff 00ff 0000 0000 0000 0000
Check for repeating patterns and errors. If many events are logged against a single drive, that points to the device that is causing problems:
$ grep -i "medium error" ttylog.txt
05/08/23 17:30:18: C0:DM_PerformSenseDataRecovery:Medium Error DevId[b] devHandle 15 RDM=40da6800 retries=0 callback=c0358e2c
05/08/23 17:30:18: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=ae, ld=2, src=1, cmd=1, lba=26ca06f8b, cnt=200, rmwOp=0
05/08/23 17:30:21: C0:DM_PerformSenseDataRecovery:Medium Error DevId[b] devHandle 15 RDM=40da6800 retries=0 callback=c0358e2c
05/08/23 17:30:21: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=ae, ld=2, src=1, cmd=1, lba=26ca06f8b, cnt=200, rmwOp=0
05/08/23 17:30:24: C0:DM_PerformSenseDataRecovery:Medium Error DevId[b] devHandle 15 RDM=40da6800 retries=0 callback=c0358e2c
05/08/23 17:30:24: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=ae, ld=2, src=1, cmd=1, lba=26ca06f8b, cnt=200, rmwOp=0
05/08/23 17:30:26: C0:DM_PerformSenseDataRecovery:Medium Error DevId[b] devHandle 15 RDM=40da6800 retries=0 callback=c0358e2c
05/08/23 17:30:26: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=ae, ld=2, src=1, cmd=1, lba=26ca06f8b, cnt=200, rmwOp=0
05/08/23 17:30:28: C0:DM_PerformSenseDataRecovery:Medium Error DevId[b] devHandle 15 RDM=40da6800 retries=0 callback=c0358e2c
05/08/23 17:30:28: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=ae, ld=2, src=1, cmd=1, lba=26ca06f8b, cnt=200, rmwOp=0
05/08/23 17:30:31: C0:DM_PerformSenseDataRecovery:Medium Error DevId[b] devHandle 15 RDM=40da6800 retries=0 callback=c0358e2c
05/08/23 17:30:31: C0:DM_PerformSenseDataRecovery: Medium Error is for: cmdId=ae, ld=2, src=1, cmd=1, lba=26ca06f8b, cnt=200, rmwOp=0
.
.
$ grep -i "medium error" ttylog.txt | wc -l
2168

$ grep -i "command timeout" ttylog.txt
05/16/23 5:36:54: C0:EVT#06386-05/16/23 5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 03 7b 82 d6 49 00 00 00 68 00 00
05/16/23 5:36:54: C0:EVT#06387-05/16/23 5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 02 e9 7e 90 f2 00 00 00 3f 00 00
05/16/23 5:36:54: C0:EVT#06388-05/16/23 5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 02 e9 7e 8e 7e 00 00 00 6d 00 00
05/16/23 5:36:54: C0:EVT#06389-05/16/23 5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 03 7b 82 d9 5e 00 00 00 61 00 00
05/16/23 5:36:54: C0:EVT#06390-05/16/23 5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 03 7b 82 d9 33 00 00 00 2b 00 00
05/16/23 5:36:54: C0:EVT#06391-05/16/23 5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 03 7b 82 e6 c3 00 00 00 70 00 00
05/16/23 5:36:54: C0:EVT#06392-05/16/23 5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 03 7b 82 e5 55 00 00 00 60 00 00
05/16/23 5:36:54: C0:EVT#06393-05/16/23 5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 02 e9 7e 8e f0 00 00 00 7f 00 00
05/16/23 5:36:54: C0:EVT#06394-05/16/23 5:36:54: 267=Command timeout on PD 0b(e0x20/s11) Path 5000039aa853e82e, CDB: 88 00 00 00 00 03 81 91 08 00 00 00 00 4e 00 00
.
.
$ grep -i "command timeout" ttylog.txt | wc -l
58
In the examples above, you can see that the disk in slot 11 (DevId b) is logging command timeouts and medium errors at a high rate.
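To triage a termlog quickly, the per-drive counts shown above can be automated with a short shell helper. This is a minimal sketch, assuming the log line formats shown in the examples above; field layouts can vary across PERC firmware versions, so treat the patterns as a starting point:

```shell
# Tally PERC termlog error events per drive so the noisiest disk stands out.
# Assumes lines like those above, e.g.:
#   "... DM_PerformSenseDataRecovery:Medium Error DevId[b] ..."
#   "... 267=Command timeout on PD 0b(e0x20/s11) ..."
tally_perc_errors() {
    log="$1"
    echo "Medium-error recovery events by DevId:"
    grep -o 'DevId\[[0-9a-f]*\]' "$log" | sort | uniq -c | sort -rn
    echo "Command timeouts by PD:"
    grep -io 'command timeout on PD [0-9a-f]*' "$log" | sort | uniq -c | sort -rn
}
```

For example, `tally_perc_errors /tmp/ttylog.txt` run against the log excerpts above would place DevId[b] (the slot 11 drive) at the top of both lists.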
The following examples show problems with a disk drive, such as disk resets logged by the controller.
This example shows a problem caused by a drive that is constantly reset, which degrades the affected virtual disk:
2022-01-21 01:58:39 PDR87 Disk 12 in Backplane 1 of Integrated RAID Controller 1 was reset.
2022-01-21 01:58:39 LOG007 The previous log entry was repeated 27 times.
2022-01-21 01:56:05 PDR87 Disk 12 in Backplane 1 of Integrated RAID Controller 1 was reset.
2022-01-21 01:56:05 LOG007 The previous log entry was repeated 988 times.
.
.
2022-01-21 04:00:36 545196 PDR87 Disk 12 in Backplane 1 of Integrated RAID Controller 1 was reset.
2022-01-21 03:58:39 545193 PDR87 Disk 12 in Backplane 1 of Integrated RAID Controller 1 was reset.
2022-01-21 03:56:05 545190 PDR87 Disk 12 in Backplane 1 of Integrated RAID Controller 1 was reset.
.
.
2022-01-25 19:21:49 545547 PDR3 Disk 12 in Backplane 1 of Integrated RAID Controller 1 is not functioning correctly.
2022-01-25 19:21:49 545548 VDR56 Redundancy of Virtual Disk 1 on Integrated RAID Controller 1 has been degraded.
2022-01-25 19:21:49 545549 PDR87 Disk 12 in Backplane 1 of Integrated RAID Controller 1 was reset.
A drive that has been flagged for predictive failure can also cause problems:
2022-09-05 23:01:56 11008 PDR87 Disk 1 in Backplane 1 of RAID Controller in Slot 8 was reset.
2022-09-05 22:55:28 11003 PDR87 Disk 1 in Backplane 1 of RAID Controller in Slot 8 was reset
2022-09-05 23:02:23 11010 PDR87 Disk 1 in Backplane 1 of RAID Controller in Slot 8 was reset.
2022-09-05 23:01:56 11009 PDR16 Predictive failure reported for Disk 1 in Backplane 1 of RAID Controller in Slot 8.
2022-09-05 23:03:28 11012 PDR54 A disk media error on Disk 1 in Backplane 1 of RAID Controller in Slot 8 was corrected during recovery.
2022-09-05 23:02:28 11011 PDR16 Predictive failure reported for Disk 1 in Backplane 1 of RAID Controller in Slot 8.
2022-09-06 10:22:26 11034 PDR54 A disk media error on Disk 1 in Backplane 1 of RAID Controller in Slot 8 was corrected during recovery.
2022-09-06 00:11:27 11029 PDR54 A disk media error on Disk 1 in Backplane 1 of RAID Controller in Slot 8 was corrected during recovery.
2022-09-05 23:18:32 11015 PDR54 A disk media error on Disk 1 in Backplane 1 of RAID Controller in Slot 8 was corrected during recovery.
2022-09-05 23:06:26 11014 PDR16 Predictive failure reported for Disk 1 in Backplane 1 of RAID Controller in Slot 8.
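The same per-drive tallying works for an exported Lifecycle Controller log. A minimal sketch, assuming the log was exported to a text file with message IDs like those above; note that LOG007 "repeated N times" entries compact many resets into one line, so these raw counts understate the true totals:

```shell
# Count disk reset (PDR87) and predictive failure (PDR16) events per disk
# from an exported Lifecycle Controller log.
tally_lc_events() {
    log="$1"
    echo "Resets (PDR87) per disk:"
    grep 'PDR87' "$log" | grep -o 'Disk [0-9]* in Backplane [0-9]*' | sort | uniq -c | sort -rn
    echo "Predictive failures (PDR16) per disk:"
    grep 'PDR16' "$log" | grep -o 'Disk [0-9]* in Backplane [0-9]*' | sort | uniq -c | sort -rn
}
```

Run against the examples above, this would surface Disk 12 (first example) and Disk 1 (second example) as the drives accumulating events.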
- View and identify the appliance disk details using one of the following methods:
  - Use the iDRAC or TSR data to view the drive details
  - From the ACM operating system, use the following command to show the disk details: showfru disk
- Contact Dell Support to open a service request, and reference this article to confirm the disk replacement.
- Before the disk is taken offline and replaced, disable the Data Domain file system. This is done from the Data Domain CLI by running the command:
filesys disable
Additional information
Steps to take the disk offline:
- Log in to the ESXi host as the root user
- Run the command: perccli /c0 show
- In that output, locate the affected drive and note its enclosure ID and slot ID
- Run this command to take the drive offline, using the values from the output above: perccli /c0[/ex]/sx set offline
  For example, to take the disk in slot 2 of enclosure e32 offline: perccli /c0/e32/s2 set offline
- Once the disk is replaced, the drive is automatically marked online again.
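The offline step can be wrapped in a small helper that builds the perccli command from the enclosure and slot IDs noted earlier. This is a sketch that only prints the command for review rather than executing it; run the printed command manually, as root on the ESXi host, after double-checking the target drive:

```shell
# Build the perccli offline command from an enclosure ID and slot ID
# (values come from the "perccli /c0 show" output). Printing instead of
# executing leaves a chance to verify the target drive first.
offline_disk_cmd() {
    enc="$1"
    slot="$2"
    echo "perccli /c0/e${enc}/s${slot} set offline"
}
```

For the example above, `offline_disk_cmd 32 2` prints `perccli /c0/e32/s2 set offline`.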