To which CSI driver Unity version does this FAQ apply?
This FAQ applies to version 1.3 of the CSI Driver for Unity available here: https://github.com/dell/csi-unity/tree/v1.3.0
What's new with 1.3?
Which CSI version does the driver conform to?
The driver works with Kubernetes 1.17, 1.18, and 1.19 and implements the CSI specification 1.1. The CSI interface must be enabled on the cluster.
What are the supported features?
The following table lists the supported actions for managing the lifecycle of a Unity volume with the CSI driver.
| Action | Supported (version) |
| --- | --- |
| Static Provisioning | yes |
| Dynamic Provisioning | yes |
| Binding | yes |
| Retain Reclaiming | yes |
| Delete Reclaiming | yes |
| Recycle Reclaiming | no |
| Expand Persistent Volume | yes |
| Shrink Persistent Volume | no |
| Create Snapshot Volume | yes |
| Create Volume from Snapshot | yes |
| Delete Snapshot | yes |
| CSI Volume Cloning | no |
| CSI Raw Block Volume | no |
| CSI ephemeral volumes | no |
| Access Modes | RWO, RWX (NFS only) |
How do I access a different storage pool?
It is possible to create multiple StorageClasses to access separate storage pools or tiering policies, or to use the iSCSI, FC, or NFS protocols.
To access different arrays, you have to update the secret that contains the list of arrays with all their details (API endpoint, credentials, array ID, etc.).
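As an illustration, a StorageClass targeting a specific pool and protocol could look like the sketch below. The provisioner name and the parameter keys (arrayId, storagePool, protocol, nasServer) follow the sample StorageClasses shipped with the driver, but treat them as assumptions and verify them against the samples bundled with your driver version.

```yaml
# Hypothetical StorageClass pointing at a specific Unity pool over NFS.
# Parameter names follow the csi-unity sample manifests and may differ
# between driver releases -- check the samples for your version.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: unity-nfs-gold
provisioner: csi-unity.dellemc.com
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
  arrayId: "APM00123456789"   # which array from the secret to use (example ID)
  storagePool: "pool_1"       # Unity storage pool to provision from
  protocol: "NFS"             # FC, iSCSI or NFS
  nasServer: "nas_1"          # NAS server, required for NFS volumes
```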
What are the supported storage protocols?
The supported protocols to access Unity storage are Fibre Channel, iSCSI, and NFS.
How do I set up the Unity CSI driver?
The CSI driver can be installed with the provided installation scripts under the directory dell-csi-helm-installer or with the dell-csi-operator.
The operator is available directly from the OpenShift OperatorHub UI. For other distributions, you can download it from operatorhub.io.
For ease of installation, we recommend using the dell-csi-operator. If you want more control over the individual components, Helm gives you more flexibility.
The details of each installation method are documented in the product guide.
A video showing Operator usage in the context of CSI Isilon is viewable here: https://www.youtube.com/watch?v=l4z2tRqHnSg&list=PLbssOJyyvHuVXyKi0c9Z7NLqBiDiwF1eA&index=6
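As a rough sketch of the Helm-based flow (the repository layout, script flags, and values file location shown here are illustrative; the authoritative steps are in the product guide):

```bash
# Clone the driver repository at the desired release
git clone -b v1.3.0 https://github.com/dell/csi-unity
cd csi-unity/dell-csi-helm-installer

# Verify the pre-requisites, then install; adjust the namespace and the
# values file to your environment as described in the product guide.
./verify.sh
./csi-install.sh --namespace unity --values ../helm/myvalues.yaml
```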
How to install the beta snapshot capability?
With the promotion of Volume Snapshot to beta, one significant change is that the CSI external-snapshotter sidecar has been split into two components: a common snapshot controller and a CSI external-snapshotter sidecar.
The provided install script takes care of the driver side, including the CSI external-snapshotter sidecar. It is up to you or your Kubernetes distribution to deploy the common snapshot controller.
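The common snapshot controller and the beta VolumeSnapshot CRDs come from the kubernetes-csi external-snapshotter project. A minimal sketch of deploying them is shown below; the directory layout and release branch vary between external-snapshotter versions, so follow that project's README or your distribution's documentation.

```bash
# Clone the external-snapshotter project (pick the release matching your
# Kubernetes version; paths below follow the v2.x layout and may differ)
git clone https://github.com/kubernetes-csi/external-snapshotter
cd external-snapshotter

# Install the beta VolumeSnapshot CRDs
kubectl create -f config/crd

# Deploy the common snapshot-controller (one per cluster)
kubectl create -f deploy/kubernetes/snapshot-controller
```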
How to uninstall the CSI driver?
With Helm deployments, you can uninstall the driver with Helm (the --purge flag applies to Helm v2 only; Helm v3 removes the release history by default):
helm delete UNITY_RELEASE [--purge]
Or execute the script:
sh csi-uninstall.sh
How to upgrade the CSI driver?
Prior to version 1.3.0, the driver supported Kubernetes alpha Volume Snapshots. With the move to beta snapshots, the Product Guide recommends removing volume snapshot, volume snapshot content, and volume snapshot class objects before anything else. Nevertheless, the Kubernetes blog announcing the beta release explains how to manually import the snapshots. That procedure hasn't been tested by Dell EMC, so feel free to share your experience in the forum.
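A minimal sketch of cleaning up the alpha snapshot objects with standard kubectl commands (resource and namespace names are placeholders):

```bash
# List and delete namespaced VolumeSnapshot objects
kubectl get volumesnapshot --all-namespaces
kubectl delete volumesnapshot <snapshot-name> -n <namespace>

# VolumeSnapshotContent and VolumeSnapshotClass are cluster-scoped
kubectl get volumesnapshotcontent
kubectl delete volumesnapshotcontent <content-name>
kubectl delete volumesnapshotclass <class-name>
```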
If you come from an installation done with Helm v2, you have to uninstall and then reinstall the driver with Helm v3:
* Uninstall the driver using the latest csi-uninstall.sh
* Install the driver using the latest csi-install.sh script and the new myvalues.yaml file
Because of the multi-array support, the secret containing the credentials is very different from previous versions; make sure secret.json is valid.
Version 1.3 is the first release to bring operator support for multi-array configurations. Make sure to use the latest version of dell-csi-operator before proceeding with the installation.
Please refer to the product guide before upgrading.
What are the pre-requisites for CSI driver installation?
To check that your system complies with the pre-requisites, you can execute the script: sh verify.sh
The exhaustive list of pre-requisites is given in the product guide.
Which K8s distributions are supported?
The supported distributions are upstream Kubernetes 1.17, 1.18, and 1.19; OpenShift 4.3 and 4.4 with RHEL or RHCOS nodes; and Docker EE v3.1.
What operating systems are supported for Kubernetes nodes?
The driver was qualified with Red Hat Enterprise Linux and CentOS 7.6, 7.7, 7.8.
How to troubleshoot the driver?
The driver can be troubleshot with the usual kubectl commands, like any other Kubernetes pod or resource.
The most frequently used commands are:
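For illustration, assuming the driver was deployed in a namespace called unity (adjust the namespace, pod, and container names to your installation):

```bash
# Check that the controller and node pods are running
kubectl get pods -n unity

# Inspect events and container state of a misbehaving pod
kubectl describe pod <pod-name> -n unity

# Read the driver container logs (the container name may differ per release)
kubectl logs <pod-name> -n unity -c driver

# Check the status of claims and volumes
kubectl get pvc,pv
kubectl describe pvc <pvc-name>
```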
To check which driver version is installed:
helm list -c UNITY_RELEASE
What Unity and Unisphere version are supported?
The CSI driver interacts with the Unisphere API for all features (volume creation, reclamation, etc.).
The driver supports Unity OE 5.0.0, 5.0.1, 5.0.2, and 5.0.3.
Where do I submit an issue against the driver?
Dell EMC officially supports the Unity driver. You can therefore open a ticket directly on the support website (https://www.dell.com/support/) or open a discussion in the forum: https://www.dell.com/community/Containers/bd-p/Containers
Can I run this driver in a production environment?
Yes, the driver is production-grade. Please make sure your environment follows the pre-requisites and Kubernetes' best practices.
What support does Dell EMC provide for the driver built via the GitHub sources?
Dell EMC is fully committed to supporting the driver image on Docker Hub built from the sources hosted on GitHub.
Do I need to have programming skills to use CSI driver?
No.
To use the driver, you need basic knowledge of Kubernetes administration (how to create a PV, how to use a volume, etc.).
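For example, consuming a volume only requires standard Kubernetes objects; no programming is involved. The sketch below reuses the hypothetical unity-nfs-gold StorageClass from the earlier example, and all names are placeholders:

```yaml
# Hypothetical PVC requesting a volume from a csi-unity StorageClass,
# plus a pod mounting it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: unity-nfs-gold
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: demo-pvc
```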
How can I connect Dell EMC Unity storage to Kubernetes running on VMware?
If you choose to host your cluster in a virtualized environment, the preferred protocols are iSCSI and NFS.
If you want to use the Fibre Channel protocol, you must use VMDirectPath I/O to give the VM exclusive access to the HBA. That configuration, while possible, hasn't been extensively tested.
What do I do if I discover an issue with the code on the GitHub?
Please open a discussion in the forum: https://www.dell.com/community/Containers/bd-p/Containers
Trying to get Unity to support K8s from the link here: https://github.com/dell/csi-unity#install-csi-driver-for-unity
When I click on the "git repository" link, I get a 404 error.
Requesting help.
The instruction in question reads: "You must have the downloaded files, including the Helm chart from the source git repository,"
Hi,
The product guide shows the following. Can you give this a try to download the driver?
Before you begin
* You must have the downloaded files, including the Helm chart from github.com/dell/csi-unity, using the following command:
/home/test# git clone https://github.com/dell/csi-unity
Thanks,
Frank
I want to ask for help with a technical question regarding the CSI driver. I have not received an answer from our technical support.
The customer is about to deploy VMware PKS and connect it to Dell Unity storage. They would like to use the original VMware templates/stemcells based on Ubuntu Xenial 16.x + docker-ce.
Can you please confirm that the CSI driver supports the scheme below, or tell us what changes need to be made?
There is a discrepancy in the Docker version: we require Docker EE.
Customer template:
| Release | Details |
| --- | --- |
| Version | v1.9.1 |
| Release date | November 4, 2020 |

| Component | Version |
| --- | --- |
| Kubernetes | v1.18.8 |
| CoreDNS | v1.6.7+vmware.3 |
| Docker | Linux: v19.03.5 |
| etcd | v3.4.3 |
| Metrics Server | v0.3.6 |
| NCP | v3.0.2.2 |
| Percona XtraDB Cluster (PXC) | v0.30.0 |
| UAA | v74.5.20 |

| Compatibilities | Versions |
| --- | --- |
| Ops Manager | Ops Manager v2.9.12 or later, or v2.10.2 or later |
| Xenial stemcells | |
| Windows stemcells | v2019.24+ |
| vSphere | v7.0, v6.7, v6.5 |
| VMware Cloud Foundation (VCF) | v4.1, v4.0 |
| CNS for vSphere | v1.0.2, v2.0 |
| NSX-T | v3.0.2, v3.0.1.1, v2.5.2, v2.5.1*, v2.5.0* |
| Harbor | v2.1.0, v2.0.3, v1.10.3 |
| Velero | v1.4.2 and later |
GitHub Release notes
CSI Driver For Dell EMC Unity Capabilities
| Capability | Supported | Not supported |
| --- | --- | --- |
| Provisioning | Persistent volumes creation, deletion, mounting, unmounting, expansion | |
| Export, Mount | Mount volume as file system | Raw volumes, Topology |
| Data protection | Creation of snapshots, Create volume from snapshots, Volume Cloning | |
| Types of volumes | Static, Dynamic | |
| Access mode | RWO (FC/iSCSI), RWO/RWX/ROX (NFS) | RWX/ROX (FC/iSCSI) |
| Kubernetes | v1.17, v1.18, v1.19 | v1.16 or previous versions |
| Docker EE | v3.1 | Other versions |
| Installer | Helm v3.x, Operator | |
| OpenShift | v4.3 (except snapshot), v4.4 | Other versions |
| OS | RHEL 7.6, RHEL 7.7, RHEL 7.8, CentOS 7.6, CentOS 7.7, CentOS 7.8 | Ubuntu, other Linux variants |
| Unity | OE 5.0.0, 5.0.1, 5.0.2, 5.0.3 | Previous and later versions |
| Protocol | FC, iSCSI, NFS | |
Hi @vasekj,
The Docker version mentioned below is Docker Enterprise Edition (which is the Docker / Mirantis Kubernetes distribution).
We do not qualify VMware Tanzu as part of our support matrix and therefore do not officially support it.
That being said, the underlying Kubernetes distribution is supported and should work if it complies with the required prerequisites (iSCSI and NFS utilities).
Rgds.
Hi,
Kubernetes 1.17 released, as a beta feature, the migration of in-tree volumes to CSI implementations:
https://kubernetes.io/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/
Does the Unity CSI driver support this feature? Does it need to be supported?
I see the CSI driver for vSphere is going to support this in version 2.10, but that makes sense since vSphere had a prior in-tree implementation. Given that there was no in-tree implementation for Unity (only ScaleIO/PowerFlex had one), is that why there is no reference to this feature in the Unity CSI implementation?
Thanks in advance!
Nacho
Hi @nachoarrieta ,
As you mentioned, the migration is about moving from an in-tree provider to a CSI provider, so there is no real use case for CSI Unity here.
What are you trying to migrate here? What is the source?
Thx