OpenShift: Cluster is not operational after a power cycle of one control plane node
Summary: A control plane node is non-gracefully shut down; after it is powered back on, it goes to the NotReady state and leaves the cluster non-operational.
This article is not tied to any specific product.
Not all product versions are identified in this article.
Symptoms
After directly power cycling one control plane node, the cluster does not recover once the node boots back up.
UI pages may display errors or empty information.
Cause
Directly power cycling a control plane node is not supported. This is a disaster recovery scenario.
When a control plane node is non-gracefully shut down, the Container Storage Interface (CSI) drivers do not automatically detach volumes, which causes pods to remain in the "ContainerCreating" state. When the node boots up after the non-graceful shutdown, it loses its local container image cache and tries to retrieve images from the depo manager. Because the depo manager pod is not in the Running state after the power cycle, the node goes to the NotReady state and the cluster becomes non-operational.
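As a quick check for the stuck state described above, the affected pods can be listed with a short shell helper. This is a sketch, not part of the product: `stuck_pods` is a hypothetical name, and it assumes you are logged in to the cluster via `oc`.

```shell
# List pods stuck in ContainerCreating across all namespaces.
# Output format: <namespace>/<pod-name>, one per line.
stuck_pods() {
  # `oc get pods -A --no-headers` columns: NAMESPACE NAME READY STATUS RESTARTS AGE
  oc get pods -A --no-headers | awk '$4 == "ContainerCreating" {print $1 "/" $2}'
}
```

Running `stuck_pods` after the power cycle should list the affected pods; an empty result suggests the volumes have been detached and the pods have been rescheduled.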
Resolution
Follow the instructions below to detach CSI volumes after a non-graceful node shutdown.
1. After a node is detected as unhealthy, shut it down completely.
2. Ensure that the node is shut down by running the following command and checking that its status is NotReady:
oc get node <node name>
Important: If the node is not completely shut down, do not proceed with tainting the node. If the node is still up and the taint is applied, filesystem corruption can occur.
3. Taint the corresponding node object by running the following command:
oc adm taint node <node name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute
4. Restart the node.
5. Remove the taint by running the following command:
oc adm taint node <node name> node.kubernetes.io/out-of-service-
Note: In the above commands, <node name> is the name of the non-gracefully shut down node.
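The verification and tainting steps above can be sketched as a small shell helper. This is a hypothetical convenience wrapper, not part of the product: `recover_node` is an assumed name, and it assumes `oc` is on the PATH with cluster-admin access. The restart (step 4) and taint removal (step 5) remain manual, since the node must actually be power cycled in between.

```shell
# Apply the out-of-service taint to a non-gracefully shut down node,
# but only after confirming the node is down (status NotReady).
recover_node() {
  local node="$1"
  local status
  # Step 2: check the node's status; refuse to taint a node that is still up,
  # because tainting a live node can cause filesystem corruption.
  status=$(oc get node "$node" --no-headers | awk '{print $2}')
  if [ "$status" != "NotReady" ]; then
    echo "Node $node is $status; shut it down completely before tainting." >&2
    return 1
  fi
  # Step 3: the out-of-service taint tells CSI drivers to force-detach volumes.
  oc adm taint node "$node" node.kubernetes.io/out-of-service=nodeshutdown:NoExecute
  echo "Taint applied. Restart $node (step 4), wait for it to rejoin, then run:"
  echo "  oc adm taint node $node node.kubernetes.io/out-of-service-"
}
```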
Affected Products
APEX Cloud Platform for Red Hat OpenShift
Article Properties
Article Number: 000217678
Article Type: Solution
Last Modified: 20 February 2026
Version: 3