Dell Technologies

FAQ / CSI Driver for Unity

To which CSI driver Unity version does this FAQ apply?
This FAQ applies to version 1.3 of the CSI Driver for Unity available here:

What's new with 1.3?

  • Supports Kubernetes version 1.17, 1.18, 1.19
  • Supports OpenShift 4.3 and 4.4 with both RHEL and RHCOS worker nodes
  • Supports Docker EE 3.1
  • Supports volume expansion online and offline
  • Supports volume cloning
  • Supports CentOS 7.8 and Red Hat Enterprise Linux 7.8
  • New installer scripts
  • Operator has multi-array support

Which CSI version does the driver conform to?
The driver conforms to CSI specification 1.1 and works with Kubernetes 1.17, 1.18, and 1.19. The CSI interface must be enabled on the cluster.

What are the supported features?
The following table lists the supported actions for managing the lifecycle of a Unity volume with the CSI driver.

Action | Supported (version)
Static Provisioning | yes
Dynamic Provisioning | yes
Retain Reclaiming | yes
Delete Reclaiming | yes
Recycle Reclaiming | no
Expand Persistent Volume | yes
Shrink Persistent Volume | no
Create Snapshot Volume | yes
Create Volume from Snapshot | yes
Delete Snapshot | yes
CSI Volume Cloning | yes (1.3)
CSI Raw Block Volume | no
CSI Ephemeral Volumes | no
Access Modes | RWO; RWX (NFS only)
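As an illustration of dynamic provisioning, a PersistentVolumeClaim against the driver is a standard Kubernetes object; the StorageClass name below is a placeholder, not a class shipped with the driver:

```yaml
# Hypothetical PVC requesting a Unity-backed volume.
# "unity-iscsi" is a placeholder StorageClass name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: unity-iscsi
```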


How do I access a different storage pool?
You can create multiple StorageClasses to access separate storage pools or tiering policies, or to use the iSCSI, FC, or NFS protocols.

To access different arrays, you will have to update the secret that contains the list of arrays with all the details (API endpoint, credentials, array ID, etc.).
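For instance, a StorageClass targeting a specific pool and protocol might look like the sketch below. The parameter keys (storagePool, protocol, arrayId) and the provisioner string are assumptions; check the exact names in the product guide for your driver version.

```yaml
# Hypothetical StorageClass selecting a pool and a protocol.
# Parameter keys and the provisioner string are assumptions; verify
# them against the CSI Unity product guide.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: unity-iscsi-performance
provisioner: csi-unity.dellemc.com
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
  storagePool: pool_1      # pool name as defined on the array (example value)
  protocol: iSCSI          # or FC / NFS
  arrayId: APM00000000001  # example array ID
```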

What are the supported storage protocols?
The supported protocols to access Unity storage are Fibre Channel, iSCSI, and NFS.

How do I set up the Unity CSI driver?

The CSI driver can be installed with the provided installation scripts under the directory dell-csi-helm-installer or with the dell-csi-operator.

The operator is available directly from the OpenShift OperatorHub UI. For other distributions, you can download it from

For ease of installation, we recommend using the dell-csi-operator. If you want more control over the individual components, helm offers more flexibility.

The details of each installation method are documented in the product guide.

A video showing Operator usage in the context of CSI Isilon is viewable here:

How to install the beta snapshot capability?

With the promotion of Volume Snapshot to beta, one significant change is that the CSI external-snapshotter sidecar has been split into two components: a common snapshot controller and a CSI external-snapshotter sidecar.

The provided install script will:

  • By default, install the external-snapshotter sidecar for CSI Unity.
  • Optionally, install the beta snapshot CRDs when the --snapshot-crd option is set during the initial installation.

It is up to your Kubernetes distribution, or to you, to deploy the common snapshot controller.
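Once the beta CRDs and the snapshot controller are in place, snapshots are driven by a VolumeSnapshotClass. A minimal sketch is shown below; the driver string csi-unity.dellemc.com is an assumption, so verify it against your installation.

```yaml
# Hypothetical beta VolumeSnapshotClass for the CSI Unity driver.
# The driver string is an assumption; check it with
# "kubectl get csidrivers" on your cluster.
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: unity-snapclass
driver: csi-unity.dellemc.com
deletionPolicy: Delete
```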

How to uninstall the CSI driver?
With helm deployments, you can uninstall the driver with helm (the --purge flag applies to helm v2 only):
helm delete UNITY_RELEASE [--purge]

Or execute the script:

How to upgrade the CSI driver?

Prior to version 1.3.0, the driver supported the Kubernetes alpha Volume Snapshot feature. With the move to beta snapshots, the Product Guide recommends removing volume snapshot, volume snapshot content, and volume snapshot class objects before anything else. Nevertheless, the Kubernetes blog announcing the beta release explains how to manually import snapshots. That procedure hasn't been tested by Dell EMC, so feel free to share your experience in the forum.

If you come from a helm v2 installation, you have to uninstall and then reinstall the driver with helm v3:
* Uninstall the driver using the latest script
* Install the driver using the latest script and the new myvalues.yaml file

Because of the multi-array support, the myvalues.yaml file and the secret with the credentials are very different from previous versions; make sure secret.json is valid.
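As a sanity check before creating the Kubernetes secret, you can validate that the file parses as JSON. The field names in the sketch below are assumptions based on the multi-array layout described above; check them against the product guide.

```shell
# Write a hypothetical multi-array secret.json.
# Field names (storageArrayList, arrayId, restGateway, isDefaultArray)
# are assumptions; verify them against the product guide.
cat > secret.json <<'EOF'
{
  "storageArrayList": [
    {
      "arrayId": "APM00000000001",
      "username": "admin",
      "password": "changeme",
      "restGateway": "https://unisphere.example.com:443",
      "isDefaultArray": true
    }
  ]
}
EOF

# Validate the file as JSON before feeding it to kubectl
python3 -m json.tool secret.json > /dev/null && echo "secret.json is valid JSON"
```

A malformed file fails here with a parse error instead of surfacing later as a driver startup failure.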

Version 1.3 is the first to bring operator support for multiple arrays. Make sure to use the latest version of dell-csi-operator before proceeding with the installation.

Please read the product guide before upgrading.

What are the pre-requisites for CSI driver installation?
To check that your system complies with the pre-requisites, you can execute the script:

The exhaustive list of pre-requisites is given in the product guide.

Which K8s distributions are supported?
The supported versions are upstream Kubernetes 1.17, 1.18, 1.19; OpenShift 4.3 and 4.4 with RHEL or RHCOS nodes; and Docker EE v3.1.

What operating systems are supported for Kubernetes nodes?
The driver was qualified with Red Hat Enterprise Linux and CentOS 7.6, 7.7, 7.8.

How to troubleshoot the driver?
The driver can be troubleshot using the usual kubectl commands, like any other Kubernetes pod/resource.
The most often used commands are:

  • kubectl get pods -n unity : gives the status of the controller and drivers on every node
  • kubectl describe pods unity-controller-0 -n unity : provides details on the deployment of the controller
  • kubectl logs unity-controller-0 -n unity -c driver : logs the API calls between the driver and Unisphere

Which driver version is installed?
helm list -c UNITY_RELEASE

What Unity and Unisphere version are supported?
CSI Driver interacts with Unisphere API for all the features (volume creation, reclamation, etc.).

The driver supports Unity OE 5.0.0, 5.0.1, 5.0.2, and 5.0.3.

Where do I submit an issue against the driver?
Dell EMC officially supports the Unity driver. Therefore, you can open a ticket directly on the support website: or open a discussion in the forum:

Can I run this driver in a production environment?
Yes, the driver is production-grade. Please make sure your environment follows the pre-requisites and Kubernetes' best practices.

What support does Dell EMC provide for the driver built via the GitHub sources?
Dell EMC is fully committed to supporting the driver image on Dockerhub built from the sources hosted on GitHub.

Do I need to have programming skills to use CSI driver?

To use the driver, you need basic knowledge of Kubernetes administration (how to create a PV, how to use a volume, etc.).
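For example, consuming a claim from a Pod only requires standard Kubernetes objects; no driver-specific knowledge is involved. All names below are placeholders:

```yaml
# Hypothetical Pod mounting a PVC provisioned by the CSI driver.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data       # the Unity volume appears here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-volume   # an existing PVC name (placeholder)
```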

How can I connect Dell EMC Unity storage to Kubernetes running on VMWare?
If you choose to host your cluster in a virtualized environment, the preferred protocols are iSCSI and NFS.

If you want to use the Fibre Channel protocol, you must use VMDirectPath I/O to give the VM exclusive access to the HBA. That configuration, while possible, hasn't been extensively tested.

What do I do if I discover an issue with the code on GitHub?
Please open a discussion in :

Replies (10)
2 Bronze

Trying to get Unity to support k8s from the link here:

When I click on this "git repository" link, I get a 404 error.

Requesting help.

"You must have the downloaded files, including the Helm chart from the source git repository, "


The product guide shows the following. Can you give this a try to download the driver?

Before you begin
  • You must have the downloaded files, including the Helm chart from the source git repository, obtained using the following command:
/home/test# git clone


Dell Technologies

Update for 1.1 release

Dell Technologies

Update release 1.2.0

Dell Technologies

Update for release v1.3

2 Bronze

I want to ask for help with a technical question regarding the CSI driver. I have not received an answer from our technical support.



The customer is about to deploy VMware PKS and connect it to Dell Unity storage. They would like to use the original VMware templates/stemcells based on Ubuntu Xenial 16.x + docker-ce.

Can you please confirm that we support the scheme below with the CSI driver, or what scheme changes need to be made?

There are discrepancies in the Docker version. We are requesting Docker EE.


Customer template (excerpt of the PKS compatibility table; some row labels were lost):

  • Release date: November 4, 2020
  • Linux: v19.03.5; Windows: v19.03.11
  • Metrics Server
  • Percona XtraDB Cluster (PXC)
  • Ops Manager: v2.9.12 or later, or v2.10.2 or later (Windows worker support on vSphere with NSX-T requires Ops Manager v2.10.2 or later)
  • Xenial stemcells: see VMware Tanzu Network
  • Windows stemcells
  • (unlabeled): v7.0, v6.7, v6.5
  • VMware Cloud Foundation (VCF): v4.1, v4.0
  • CNS for vSphere: v1.0.2, v2.0
  • (unlabeled): v3.0.2, v3.0.1.1, v2.5.2, v2.5.1*, v2.5.0*
  • (unlabeled): v2.1.0, v2.0.3, v1.10.3
  • (unlabeled): v1.4.2 and later



GitHub Release notes


CSI Driver For Dell EMC Unity Capabilities

  • Persistent volumes: supports creation, deletion, mounting, unmounting, expansion
  • Export/Mount: supports mounting a volume as a file system; raw volumes and topology are not supported
  • Data protection: supports creation of snapshots, creating a volume from a snapshot, and volume cloning
  • Types of volumes: supports static and dynamic provisioning
  • Access modes: RWO; RWX (NFS only)
  • Kubernetes: supports v1.17, v1.18, v1.19; v1.16 or previous versions are not supported
  • Docker EE: supports v3.1; other versions are not supported
  • Installer: Helm v3.x, Operator
  • OpenShift: supports v4.3 (except snapshot) and v4.4; other versions are not supported
  • Node OS: supports RHEL 7.6, 7.7, 7.8 and CentOS 7.6, 7.7, 7.8; Ubuntu and other Linux variants are not supported
  • Unity: supports OE 5.0.0, 5.0.1, 5.0.2, 5.0.3; previous and later versions are not supported


Dell Technologies

Hi @vasekj,

The Docker version mentioned below is Docker Enterprise Edition (which is the Docker / Mirantis Kubernetes distro).

We do not qualify VMware Tanzu as part of our support matrix and therefore do not officially support it.

That being said, the underlying Kubernetes distro is supported and should work if it complies with the required prerequisites (iscsi and nfs utils).


2 Bronze


Kubernetes 1.17 released, as beta, a feature to migrate in-tree volumes to CSI implementations:

Does the Unity CSI driver support this feature? Does it need to be supported?

I see the CSI driver for vSphere is going to support this in version 2.10, but that makes sense as vSphere had a prior in-tree implementation. Given that we had no in-tree implementation (only ScaleIO/PowerFlex had one), is that why there is no reference to this feature in the Unity CSI implementation?

Thanks in advance!


Hi @nachoarrieta ,

As you mentioned, the migration is meant to move an in-tree provider to a CSI provider, so there is no real use case for CSI Unity here.

What are you trying to migrate here? What is the source?

