


166 Posts


September 23rd, 2019 03:00

FAQ / CSI Driver for PowerMax

To which CSI driver PowerMax version does this FAQ apply?

This FAQ applies to version 1.4 of the CSI Driver for PowerMax.

What's new in 1.4?

* Support for beta snapshots
* Support for online Volume expansion
* Support for raw block volumes
* New CSI PowerMax Reverse Proxy service
* New installation scripts
* Kubernetes 1.17, 1.18, 1.19 support
* Openshift 4.4 support
* RHEL 7.8 support

Which CSI specification version does the driver conform to?

The driver works with Kubernetes 1.17, 1.18, and 1.19 and implements version 1.1 of the CSI specification.

What are the supported features?

The following table lists the actions supported by the CSI driver to manage the lifecycle of a PowerMax volume.

Action                          Supported
Static Provisioning             yes
Dynamic Provisioning            yes
Binding                         yes
Retain Reclaiming               yes
Delete Reclaiming               yes
Recycle Reclaiming              no
Expand Persistent Volume        yes
Shrink Persistent Volume        no
Create Snapshot Volume          yes
Create Volume from Snapshot     yes
Delete Snapshot                 yes
CSI Volume Cloning              yes
CSI Raw Block Volume            yes
CSI Ephemeral Volumes           no
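As an illustration of the raw block volume support, a PersistentVolumeClaim can request volumeMode: Block instead of a filesystem. This is a minimal sketch; the storage class name "powermax" is an assumption to adapt to your installation.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block            # request a raw block device instead of a filesystem
  storageClassName: powermax   # hypothetical storage class name
  resources:
    requests:
      storage: 8Gi
```

The raw device then appears in the consuming pod under spec.containers[].volumeDevices rather than volumeMounts.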

How do I access different arrays or storage pools?

It is possible to create multiple Kubernetes StorageClasses to access different arrays, storage pools, or service levels.

If different arrays are managed by different Unisphere instances, you can install a new instance of the driver in a different namespace to access each Unisphere.
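As a sketch, one StorageClass can be created per array/service-level combination. The provisioner name and the parameter keys below (SYMID, SRP, ServiceLevel) are assumptions to verify against the product guide for your driver version.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powermax-gold
provisioner: csi-powermax.dellemc.com   # assumed provisioner name
reclaimPolicy: Delete
parameters:
  SYMID: "000197900046"    # hypothetical array serial number
  SRP: "SRP_1"             # hypothetical storage resource pool
  ServiceLevel: "Gold"     # hypothetical service level
```

A PersistentVolumeClaim then selects the array and service level simply by referencing the StorageClass name.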

What are the supported protocols?

The supported protocols to access PowerMax storage are iSCSI and Fibre Channel.

What are the known limitations?

The driver does not support topology (VOLUME_ACCESSIBILITY_CONSTRAINTS); this means a volume can be accessed by a single host at a time.
The driver does not support using both iSCSI and FC connectivity on a single node.

How do I set up the PowerMax CSI driver?

The CSI driver can be installed with the provided installation scripts (using Helm) or with the dell-csi-operator.

The operator is available directly from the OpenShift OperatorHub UI. For other distributions, you can download it from

For ease of installation, we recommend using the dell-csi-operator. If you want more control over the individual components, Helm gives you more flexibility.

The details of each installation method are documented in the product guide.

A video showing Operator usage in the context of CSI Isilon is viewable here:

How do I install the beta snapshot support?

With support for beta volume snapshots, the installation process has changed. There are now two extra steps:

  1. Install the Volume Snapshot Controller with kubectl create -f && kubectl create -f
  2. Install the beta CRDs, either independently with kubectl create -f dell-csi-helm-installer/beta-snapshot-crd or with the --snapshot-crd option of the installation script

Step 1 is required because the CSI external-snapshotter sidecar has been split into two components: a common
snapshot controller and a CSI external-snapshotter sidecar.

Step 2 installs the beta VolumeSnapshot CRDs that the driver relies on.
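The two extra installation steps can be sketched as shell commands. The manifest locations in step 1 are placeholders for the RBAC and controller manifests shipped with the external-snapshotter project; the exact URLs are elided in the post above.

```shell
# Step 1: install the common snapshot controller
# (angle-bracket paths are placeholders, not real locations).
kubectl create -f <snapshot-controller-rbac-manifest>
kubectl create -f <snapshot-controller-manifest>

# Step 2: install the beta VolumeSnapshot CRDs shipped with the driver,
# either directly...
kubectl create -f dell-csi-helm-installer/beta-snapshot-crd
# ...or via the installer's --snapshot-crd option.
```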

How do I uninstall the CSI driver?

As with the installation, you can uninstall the driver with Helm:
helm delete POWERMAX_RELEASE [--purge]

Or execute the script:
sh ./ --namespace powermax --release powermax 

For Operator deployments, you can run:

kubectl delete powermax/ -n

How do I upgrade the CSI driver?

  • If you come from an installation with Helm v2, you have to uninstall and then reinstall the driver with Helm v3:

    • Uninstall the driver using the latest uninstall.powermax script
    • Make sure the helm binary points to Helm v3
    • Install the driver using the latest install.powermax script and the new myvalues.yaml file
  • If you come from an installation with Helm v3:

    • Run the ./ --upgrade script
  • If you deployed with the dell-csi-operator:

    • Edit the CSIPowerMax custom resource (with the OpenShift UI or kubectl edit) and set the driver image to image: "dellemc/csi-powermax:v1.4.0.000R"
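For the operator path, the edit can be done with kubectl; the custom resource name and namespace below are assumptions matching a default deployment.

```shell
# Open the CSIPowerMax custom resource for editing
# (hypothetical resource name and namespace).
kubectl edit csipowermax powermax -n powermax
# ...then set the driver image field to:
#   image: "dellemc/csi-powermax:v1.4.0.000R"
```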

What are the prerequisites for CSI driver installation?

To check that your system complies with the prerequisites, you can execute the verification script: sh verify.kubernetes

The exhaustive list of prerequisites is given in the product guide.

Which k8s distributions are supported?

The supported versions are upstream Kubernetes 1.17, 1.18, and 1.19, and OpenShift 4.3 and 4.4.

What operating systems are supported?

The supported OSes are Red Hat Enterprise Linux 7.6, 7.7, and 7.8.

What PowerMax and Unisphere versions are supported?
Driver 1.2 supports 5978.221.221 (ELM SR), 5978.444.444 (Foxtail), and 5978.479.479 (Foxtail SR), as well as the future Hickory release, with Unisphere 9.0 and 9.1.

Note: the 1.0 driver is not compliant with Unisphere 9.1. Before upgrading Unisphere from 9.0 to 9.1, make sure you upgrade the CSI driver from 1.0 to 1.2.

How do I troubleshoot the driver?

The driver can be troubleshot with the usual kubectl commands, like any other Kubernetes pod or resource.
The most frequently used commands are:

  • kubectl get pods -n powermax : gives the status of the controller and node driver pods
  • kubectl describe pods powermax-controller-0 -n powermax : provides details on the controller deployment
  • kubectl logs powermax-controller-0 -n powermax -c driver : shows the API calls between the driver and Unisphere

Which driver version is installed?
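A minimal sketch of one way to check: read the image tag of the running driver containers. The namespace "powermax" assumes a default installation.

```shell
kubectl get pods -n powermax \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'
# The tag of the dellemc/csi-powermax image (e.g. v1.4.0.000R)
# is the installed driver version.
```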


What versions and models of PowerMax are supported?

The CSI driver interacts with the Unisphere API for all features (volume creation, reclamation, etc.).

The driver supports Unisphere version 9 (9.0 and 9.1).

Where do I submit an issue against the driver?

Dell EMC officially supports the PowerMax driver. Therefore, you can open a ticket directly on the support website or start a discussion in the forum.

Can I run this driver in a production environment?

Yes, the driver is production-grade. Please make sure your environment follows the prerequisites and Kubernetes best practices.

What support does Dell EMC provide for the driver built from GitHub?

Dell EMC is fully committed to supporting the driver image on Docker Hub built from the sources hosted on GitHub.

Do I need programming skills to use the CSI driver?

No. To use the driver, you need basic knowledge of Kubernetes administration (how to create a PersistentVolume, how to use a volume in a pod, etc.).
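To make the workflow concrete, here is a minimal sketch of claiming a volume and consuming it from a pod; the storage class name "powermax" is an assumption to adapt to your installation.

```yaml
# Claim a volume from the driver...
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: powermax   # hypothetical storage class name
  resources:
    requests:
      storage: 8Gi
---
# ...and mount it in a pod.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: demo-pvc
```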

How can I connect Dell EMC storage to Kubernetes running on VMware?

If you choose to host your cluster in a virtualized environment, the preferred protocol is iSCSI.

What do I do if I discover an issue with the code on GitHub?

Please open a ticket or a discussion in:

1 Message

October 31st, 2019 08:00

Does this CSI Driver for PowerMax support SRDF/Metro?

166 Posts

November 12th, 2019 12:00


There is no support for SRDF/Metro from the driver yet.

It is indeed an excellent use case for a geo-distributed Kubernetes cluster.



1 Message

April 9th, 2020 21:00

Hi Flo_csl, Do we know when we are looking to provide support for SRDF/Metro in the driver?

From what I can see, it would be possible for us to use statically provisioned volumes with SRDF/Metro with Geo-distributed clusters. Is that right?

166 Posts

April 10th, 2020 00:00

Hi @AlexS123,
There is no built-in support for SRDF/Metro in the CSI driver today.
You are right: static provisioning is the solution to expose a LUN to a distant node of the cluster.
I would be very interested in discussing your use case in more detail; feel free to contact me directly by PM.
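For reference, static provisioning of a pre-existing LUN can be sketched as a manually created PersistentVolume; the driver name and especially the volumeHandle format are assumptions and must match what the driver expects (see the product guide for static provisioning).

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: metro-static-pv
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: csi-powermax.dellemc.com              # assumed driver name
    volumeHandle: "<driver-specific-volume-id>"   # placeholder; format is driver-specific
```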

April 12th, 2020 20:00

Is it compatible with VMAX 250 storage?

If not, do you have any plans to support a VMAX CSI driver?

72 Posts

April 13th, 2020 05:00

Hi Mason,

The CSI driver for PowerMax/VMAX is based on the operating environment running on the array and the version of Unisphere. Per the ESSM, the VMAX 250F model is able to run the required PowerMax OS 5978.221.221 and 5978.444.444 and is therefore supported.


166 Posts

June 18th, 2020 05:00

Bump v1.3

166 Posts

September 17th, 2020 03:00

Update for 1.4 release
