FAQ / CSI Driver for Unity
To which CSI driver Unity version does this FAQ apply?
This FAQ applies to version 1.3 of the CSI Driver for Unity available here: https://github.com/dell/csi-unity/tree/v1.3.0
What's new with 1.3?
- Supports Kubernetes versions 1.17, 1.18 and 1.19
- Supports OpenShift 4.3 and 4.4 with both RHEL and RHCOS worker nodes
- Supports Docker EE 3.1
- Supports volume expansion online and offline
- Supports volume cloning
- Supports CentOS 7.8 and Red Hat Enterprise Linux 7.8
- New installer scripts
- Operator has multi-array support
Which CSI version does the driver conform to?
The driver works with Kubernetes 1.17, 1.18 and 1.19 and conforms to CSI specification 1.1. The CSI interface must be enabled on the cluster.
What are the supported features?
The following table lists the supported actions to manage the lifecycle of a Unity volume handled with the CSI driver.

|Capability|Supported|
|---|---|
|Expand Persistent Volume|yes|
|Shrink Persistent Volume|no|
|Create Snapshot Volume|yes|
|Create Volume from Snapshot|yes|
|CSI Volume Cloning|yes|
|CSI Raw Block Volume|no|
|CSI ephemeral volumes|no|
|Access Modes|RWO, RWX (NFS only)|
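The volume expansion capability listed above is driven entirely from the PVC: raising the requested size triggers the expansion. A minimal sketch follows; the claim name and StorageClass name are hypothetical, and the StorageClass must have `allowVolumeExpansion: true`.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: unity-pvc            # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: unity    # hypothetical StorageClass name
  resources:
    requests:
      storage: 10Gi          # raise this value to expand the volume, online or offline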
How do I access a different storage pool?
You can create multiple StorageClasses to access separate storage pools or tiering policies, or to use the iSCSI, FC or NFS protocols.
To access different arrays, you will have to update the secret that contains the list of arrays with all the details (API endpoint, credentials, array ID, etc.).
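A StorageClass targeting a specific pool and protocol might look like the sketch below; the metadata name, pool, protocol and array ID values are illustrative, and the exact parameter keys should be checked against the product guide.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: unity-iscsi-pool1          # hypothetical name
provisioner: csi-unity.dellemc.com
parameters:
  # illustrative parameter keys; see the product guide for the exact names
  storagePool: pool_1
  protocol: iSCSI
  arrayId: APM00000000001
allowVolumeExpansion: true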
What are the supported storage protocols?
The supported protocols to access Unity storage are Fibre Channel, iSCSI and NFS.
How do I set up the Unity CSI driver?
The operator is available directly from the OpenShift OperatorHub UI. For other distributions, you can download it from operatorhub.io.
The details of each installation method are documented in the product guide.
A video showing Operator usage in the context of CSI Isilon is viewable here: https://www.youtube.com/watch?v=l4z2tRqHnSg&list=PLbssOJyyvHuVXyKi0c9Z7NLqBiDiwF1eA&index=6
How to install the beta snapshot capability?
With the promotion of Volume Snapshot to beta, one significant change is the CSI external-snapshotter sidecar has been split into two controllers, a common snapshot controller and a CSI external-snapshotter sidecar.
The provided install script will:
- By default, install the external-snapshotter for CSI Unity.
- Optionally, install the beta snapshot CRDs when the `--snapshot-crd` option is set during the initial installation.

Deploying the common snapshot controller is up to you or your Kubernetes distribution.
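Putting the steps above together, a first install with the beta snapshot CRDs could be invoked as sketched below; the `--namespace` and `--values` flags are assumptions based on the usual install script options, so check `--help` on your copy of the script.

```shell
# First install: deploy the driver plus the beta snapshot CRDs
# (--namespace/--values flags are illustrative; --snapshot-crd is documented above)
./csi-install.sh --namespace unity --values myvalues.yaml --snapshot-crd

# Deploying the common snapshot-controller remains your (or your distribution's)
# responsibility, e.g. from the kubernetes-csi/external-snapshotter manifests.
```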
How to uninstall the CSI driver?
With helm deployments, you can uninstall the driver with helm (the `--purge` flag applies to helm v2 only):

`helm delete UNITY_RELEASE [--purge]`

Or execute the csi-uninstall.sh script.
How to upgrade the CSI driver?
Prior to version 1.3.0, the driver supported the Kubernetes alpha Volume Snapshot. With support for beta snapshots, the Product Guide recommends removing volume snapshot, volume snapshot content, and volume snapshot class objects before anything else. Nevertheless, the Kubernetes blog announcing the beta release explains how to manually import the snapshots. That procedure hasn't been tested by Dell EMC, so feel free to share your experience in the forum.
If you come from an installation with helm v2, you have to uninstall and then reinstall the driver with helm v3:
* Uninstall the driver using the latest csi-uninstall.sh
* Install the driver using the latest csi-install.sh script and the new myvalues.yaml file
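The helm v2 to v3 migration above could be run as sketched below; the release name, namespace and flags are illustrative, so verify them against the scripts' `--help` and the product guide.

```shell
# Using helm v3 from here on.
# 1. Uninstall the old release (release name is illustrative)
./csi-uninstall.sh --release unity

# 2. Reinstall with the new multi-array myvalues.yaml
./csi-install.sh --namespace unity --values myvalues.yaml
```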
Because of the multi-array support, the secret with the credentials is very different from previous versions; make sure secret.json is valid.
Version 1.3 is the first to bring multi-array support to the operator. Make sure to use the latest version of dell-csi-operator before proceeding with the installation.
Please refer to the product guide before upgrading.
What are the pre-requisites for CSI driver installation?
To check that your system complies with the pre-requisites, you can execute the script: `sh verify.sh`
The exhaustive list of pre-requisites is given in the product guide.
Which K8s distributions are supported?
The supported versions are upstream Kubernetes 1.17, 1.18 and 1.19; OpenShift 4.3 and 4.4 with RHEL or RHCOS nodes; and Docker EE 3.1.
What operating systems are supported for Kubernetes nodes?
The driver was qualified with Red Hat Enterprise Linux and CentOS 7.6, 7.7, 7.8.
How to troubleshoot the driver?
The driver can be troubleshot with the usual kubectl commands, like any other Kubernetes pod or resource.
The most frequently used commands are:
- `kubectl get pods -n unity`: gives the status of the controller and drivers on every node
- `kubectl describe pods unity-controller-0 -n unity`: provides details on the deployment for the controller
- `kubectl logs unity-controller-0 -n unity -c driver`: logs the API calls between the driver and Unisphere
Which driver version is installed?
`helm list -c UNITY_RELEASE`
What Unity and Unisphere version are supported?
The CSI driver interacts with the Unisphere API for all features (volume creation, reclamation, etc.).
The driver supports Unity OE 5.0.0, 5.0.1, 5.0.2 and 5.0.3.
Where do I submit an issue against the driver?
Dell EMC officially supports the Unity driver. Therefore you can open a ticket directly on the support website: https://www.dell.com/support/ or open a discussion in the forum: https://www.dell.com/community/Containers/bd-p/Containers
Can I run this driver in a production environment?
Yes, the driver is production-grade. Please make sure your environment follows the pre-requisites and Kubernetes' best practices.
What support does Dell EMC provide for the driver built via the GitHub sources?
Dell EMC is fully committed to supporting the driver image on Dockerhub built from the sources hosted on GitHub.
Do I need to have programming skills to use CSI driver?
To use the driver, you need basic knowledge of Kubernetes administration (how to create a PV, how to use a Volume, etc.).
How can I connect Dell EMC Unity storage to Kubernetes running on VMWare?
If you choose to host your cluster in a virtualized environment, the preferred protocols are iSCSI and NFS.
If you want to use the Fibre Channel protocol, you must use VMDirectPath I/O to give the VM exclusive access to the HBA. That configuration, while possible, hasn't been extensively tested.
What do I do if I discover an issue with the code on the GitHub?
Please open a discussion at: https://www.dell.com/community/Containers/bd-p/Containers