PowerFlex 3.6 SDSs Crash With Panic Unexpected Status: NO_RESOURCES
Summary: PowerFlex SDSs crash with Panic Expression FEIO_RECOVERY__IN_PROGRESS followed by Panic Expression ALWAYS_ASSERT Unexpected status: NO_RESOURCES, or with Panic Expression ALWAYS_ASSERT Unexpected status: NO_RESOURCES alone.
Symptoms
Scenario
During normal operation of a PowerFlex cluster, one or more SDSs crash with the same stack trace.
- This can happen during normal I/O operations to Fine Granularity (FG) pools.
- It is more likely to happen during a vtree migration while new volume data is being written to an FG pool.
One of the following scenarios appears:
Two stack traces appear in close succession on an SDS that crashed:
2022/04/05 12:01:30.816177 Panic in file /data/build/workspace/ScaleIO-Common-Job/src/tgt/spef/frontend/fe_io.c, line 3214, function feIo_L2PGatewayUpdate, PID 3682104.
Panic Expression FEIO_RECOVERY__IN_PROGRESS == pFeIoDev->recoveryState || 2004 == rc || 20 == rc PANIC_ID_tgt_feio_11.
/opt/emc/scaleio/sds/bin/sds-3.6.200.105(mosDbg_PanicPrepare+0x131) [0x5cba01]
/opt/emc/scaleio/sds/bin/sds-3.6.200.105(feIo_L2PGatewayUpdate+0xb5b) [0x85612b]
/opt/emc/scaleio/sds/bin/sds-3.6.200.105(spef_WriteDo+0x1ec) [0x85631c]
/opt/emc/scaleio/sds/bin/sds-3.6.200.105(spefStorageRegion_CompressedWrite+0xb4) [0x95ad14]
/opt/emc/scaleio/sds/bin/sds-3.6.200.105() [0x929d0d]
/opt/emc/scaleio/sds/bin/sds-3.6.200.105() [0x92ab53]
/opt/emc/scaleio/sds/bin/sds-3.6.200.105(raidComb_Write+0xd0) [0x92be30]
/opt/emc/scaleio/sds/bin/sds-3.6.200.105(iohComb_WriteSecondary+0x251) [0x9a7d11]
/opt/emc/scaleio/sds/bin/sds-3.6.200.105(ioh_Write+0x3e0) [0x9a9dd0]
/opt/emc/scaleio/sds/bin/sds-3.6.200.105(ioh_NewRequest+0x2bba) [0x9af92a]
...
2022/04/05 12:01:36.642447 [CHOKE_POINT] Panic in file /data/build/workspace/ScaleIO-Common-Job/src/tgt/storage/spef_impl/spef_storage.c, line 415, function spefStorage_AttachDeviceCK, PID 2102550.
Panic Expression ALWAYS_ASSERT Unexpected status: NO_RESOURCES.
/opt/emc/scaleio/sds/bin/sds-3.6.200.105(mosDbg_PanicPrepare+0x131) [0x5cba01]
/opt/emc/scaleio/sds/bin/sds-3.6.200.105(spefStorage_AttachDeviceCK+0x29a) [0x96bb5a]
/opt/emc/scaleio/sds/bin/sds-3.6.200.105(spef_AttachDeviceUmtMainFunc+0x369) [0x87e769]
/opt/emc/scaleio/sds/bin/sds-3.6.200.105(mosUmt_StartFunc+0x9c) [0x5b8c5c]
/lib64/libc.so.6(+0x4d3d0) [0x7fa54857f3d0]
A single stack trace appears on an SDS that crashed:
2022/04/05 12:01:36.642447 [CHOKE_POINT] Panic in file /data/build/workspace/ScaleIO-Common-Job/src/tgt/storage/spef_impl/spef_storage.c, line 415, function spefStorage_AttachDeviceCK, PID 2102550.
Panic Expression ALWAYS_ASSERT Unexpected status: NO_RESOURCES.
/opt/emc/scaleio/sds/bin/sds-3.6.200.105(mosDbg_PanicPrepare+0x131) [0x5cba01]
/opt/emc/scaleio/sds/bin/sds-3.6.200.105(spefStorage_AttachDeviceCK+0x29a) [0x96bb5a]
/opt/emc/scaleio/sds/bin/sds-3.6.200.105(spef_AttachDeviceUmtMainFunc+0x369) [0x87e769]
/opt/emc/scaleio/sds/bin/sds-3.6.200.105(mosUmt_StartFunc+0x9c) [0x5b8c5c]
/lib64/libc.so.6(+0x4d3d0) [0x7fa54857f3d0]
Impact
The SDS decouples from the cluster at the time of the crash, triggering a rebuild. If enough SDSs crash and decouple in parallel, a Data Unavailable condition results.
Cause
This can occur both during normal I/O operations and during a vtree migration to an FG storage pool. FG pools store data in 4 KB pages, so some I/Os must be split into smaller pieces to fit. When a new write I/O arrives while split I/Os are still being written into buffer space, the buffer space can overflow, eventually causing the SDS to crash.
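The buffer-exhaustion arithmetic is easy to see with a toy model. The C sketch below is purely illustrative and assumes nothing about actual PowerFlex internals; the pool size and the split_write and alloc_buffer names are hypothetical. It shows how a single write straddling 4 KB page boundaries fans out into several per-page sub-I/Os, each holding a buffer from a fixed pool, so a concurrent burst of writes can exhaust the pool and surface a NO_RESOURCES failure:

```c
/*
 * Illustrative sketch only -- not PowerFlex source code. It models how
 * a write spanning multiple 4 KB Fine Granularity pages is split into
 * per-page sub-I/Os, each consuming a buffer from a fixed pool. A burst
 * of new writes arriving while split sub-I/Os still hold buffers can
 * exhaust the pool (the NO_RESOURCES condition in the panic above).
 */
#include <stdio.h>
#include <stdlib.h>

#define FG_PAGE_SIZE 4096u  /* FG pools store data in 4 KB pages      */
#define POOL_BUFFERS 8u     /* deliberately tiny fixed buffer pool    */

static unsigned buffers_in_use;

/* Take one buffer from the pool; fail (NO_RESOURCES) when empty. */
static int alloc_buffer(void)
{
    if (buffers_in_use >= POOL_BUFFERS)
        return -1;          /* NO_RESOURCES */
    buffers_in_use++;
    return 0;
}

/* Split a write into per-page sub-I/Os, one buffer per 4 KB page. */
static int split_write(unsigned offset, unsigned length)
{
    unsigned first_page = offset / FG_PAGE_SIZE;
    unsigned last_page  = (offset + length - 1) / FG_PAGE_SIZE;
    unsigned pages      = last_page - first_page + 1;

    for (unsigned i = 0; i < pages; i++) {
        if (alloc_buffer() != 0) {
            fprintf(stderr, "NO_RESOURCES after %u of %u sub-I/Os\n",
                    i, pages);
            return -1;
        }
    }
    printf("write off=%u len=%u -> %u sub-I/Os\n", offset, length, pages);
    return 0;
}

int main(void)
{
    /* A migration write that straddles pages, then a new host write. */
    if (split_write(1024, 20480) != 0)  /* unaligned + large: 6 pages */
        return EXIT_FAILURE;
    if (split_write(0, 16384) != 0)     /* 4 more pages: 6 + 4 > 8    */
        return EXIT_FAILURE;            /* fails, pool is exhausted   */
    return EXIT_SUCCESS;
}
```

In this toy run, the second write fails partway through because the first write's six sub-I/Os still hold buffers. In the real SDS the analogous exhaustion is fatal because the allocation status is checked by an ALWAYS_ASSERT, which panics the process.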
Resolution
This is a rare condition. When running a vtree migration, schedule it for a time of lower I/O pressure from other sources. This reduces the chances of the crash occurring but does not eliminate them. To resolve the issue permanently, upgrade to the fixed version listed below.
Impacted Versions
PowerFlex v3.6.x
Fixed In Version
PowerFlex v3.6.0.5