FAQ / CSI Driver for PowerMax
To which CSI driver PowerMax version does this FAQ apply?
This FAQ applies to version 1.4 of the CSI Driver for PowerMax.
What's new in 1.4?
* Support for beta snapshots
* Support for online volume expansion
* Support for raw block volumes
* New CSI PowerMax Reverse Proxy service
* New installation scripts
* Kubernetes 1.17, 1.18, and 1.19 support
* OpenShift 4.4 support
* RHEL 7.8 support
Which CSI version does the driver conform to?
The driver works with Kubernetes 1.17, 1.18, and 1.19 and implements version 1.1 of the CSI specification.
What are the supported features?
The following table lists the supported actions for managing the lifecycle of a PowerMax volume with the CSI driver.
| Capability | Supported |
|---|---|
| Expand Persistent Volume | yes |
| Shrink Persistent Volume | no |
| Create Snapshot Volume | yes |
| Create Volume from Snapshot | yes |
| CSI Volume Cloning | yes |
| CSI Raw Block Volume | yes |
| CSI ephemeral volumes | no |
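For example, a raw block volume is requested with a standard PVC that sets `volumeMode: Block` (a minimal sketch; the StorageClass name `powermax` is an assumption — use one defined in your cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: powermax-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block            # request a raw block device instead of a filesystem
  storageClassName: powermax   # hypothetical StorageClass name
  resources:
    requests:
      storage: 8Gi
```

A pod then consumes the device through `volumeDevices` (with a `devicePath`) instead of `volumeMounts`.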
How do I access different arrays or storage pools?
It is possible to create multiple Kubernetes StorageClasses to access different arrays, storage pools, or service levels.
If different arrays are managed by different Unisphere instances, you can install a new instance of the driver in a different namespace to access each Unisphere.
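As an illustration, a StorageClass targeting a specific array and service level might look like the sketch below. The provisioner name and the parameter keys (`SYMID`, `SRP`, `ServiceLevel`) are assumptions here — refer to the sample StorageClasses shipped with the driver and the product guide for the exact spelling, and substitute your own array ID:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powermax-diamond            # hypothetical name
provisioner: csi-powermax.dellemc.com   # assumed driver name
parameters:
  SYMID: "000000000001"             # illustrative array serial number
  SRP: "SRP_1"                      # illustrative storage resource pool
  ServiceLevel: "Diamond"           # illustrative service level
reclaimPolicy: Delete
```

Creating one such class per array, pool, or service level lets each PVC select its target simply by `storageClassName`.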
What are the supported protocols?
The supported protocols to access PowerMax storage are iSCSI and Fibre Channel.
What are the known limitations?
The driver does not support topology (VOLUME_ACCESSIBILITY_CONSTRAINTS); this means a volume is seen by a single host at a time.
The driver does not support mixing iSCSI and FC connectivity on a single node.
How do I set up the PowerMax CSI driver?
The CSI driver can be installed with the provided installation scripts (using Helm) or with the dell-csi-operator.
The operator is available directly from the OpenShift OperatorHub UI. For other distributions, you can download it from operatorhub.io.
The details of each installation method are documented in the product guide.
A video showing Operator usage in the context of CSI Isilon is viewable here: https://www.youtube.com/watch?v=l4z2tRqHnSg&list=PLbssOJyyvHuVXyKi0c9Z7NLqBiDiwF1eA&index=6
How to install the Snapshot beta?
With support for beta volume snapshots, the installation process has changed. There are now two extra steps:
- Install the volume snapshot controller:
  `kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-2.1/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml`
  `kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-2.1/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml`
- Install the beta CRDs, either independently with `kubectl create -f dell-csi-helm-installer/beta-snapshot-crd` or as part of the installation with `csi-install.sh --snapshot-crd`
Step 1 is required because the CSI external-snapshotter sidecar has been split into two components: a common snapshot controller and a CSI external-snapshotter sidecar.
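Once the controller and CRDs are in place, taking a snapshot follows the standard beta snapshot API. A minimal sketch (the class name is hypothetical, the driver name is an assumption, and `pvc1` stands for an existing PVC in your cluster):

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: powermax-snapclass          # hypothetical name
driver: csi-powermax.dellemc.com    # assumed driver name
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: pvc1-snapshot
spec:
  volumeSnapshotClassName: powermax-snapclass
  source:
    persistentVolumeClaimName: pvc1  # existing PVC to snapshot
```

A new PVC can later be restored from the snapshot by referencing it in the PVC's `dataSource` field.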
How to uninstall the CSI driver?
As with the installation, you can uninstall the driver with Helm:
`helm delete POWERMAX_RELEASE [--purge]` (the `--purge` flag applies to Helm v2 only)
Or execute the script:
`sh ./csi-uninstall.sh --namespace powermax --release powermax`
For Operator deployments, you can run:
`kubectl delete csipowermax <name> -n <namespace>`
How to upgrade the CSI driver?
If you come from an installation with Helm v2, you have to uninstall and then reinstall the driver with Helm v3:
- Uninstall the driver using the latest uninstall.powermax script
- Make sure the helm binary points to Helm v3
- Install the driver using the latest install.powermax script and the new myvalues.yaml file
If you come from an installation with Helm v3:
- Run the `./csi-install.sh --upgrade` script
If you deployed with the dell-csi-operator:
- Edit the CSIPowerMax custom resource (through the OpenShift UI or `kubectl edit`) and set the image to `dellemc/csi-powermax:v1.4.0.000R`
What are the prerequisites for CSI driver installation?
To check that your system complies with the prerequisites, you can execute the script `sh verify.kubernetes`.
The exhaustive list of prerequisites is given in the product guide.
Which k8s distributions are supported?
The supported versions are upstream Kubernetes 1.17, 1.18, and 1.19 and OpenShift 4.4.
What operating systems are supported?
The supported operating systems are Red Hat Enterprise Linux 7.6, 7.7, and 7.8.
What PowerMax and Unisphere versions are supported?
The driver supports 5978.221.221 (ELM SR), 5978.444.444 (Foxtail), and 5978.479.479 (Foxtail SR), plus the future Hickory release, with Unisphere 9.0 and 9.1.
Note: the 1.0 driver is not compliant with Unisphere 9.1. Make sure that you upgrade the CSI driver from 1.0 to 1.2 before upgrading your Unisphere from 9.0 to 9.1.
How to troubleshoot the driver?
The driver can be troubleshot with the usual kubectl commands, like any other Kubernetes pod or resource.
The most frequently used commands are:
- `kubectl get pods -n powermax`: gives the status of the controller and of the driver on every node
- `kubectl describe pods powermax-controller-0 -n powermax`: provides details on the controller deployment
- `kubectl logs powermax-controller-0 -n powermax -c driver`: shows the API calls between the driver and Unisphere
Which driver version is installed?
`helm list -n <namespace> --filter POWERMAX_RELEASE`
What versions and models of PowerMax are supported?
The CSI driver interacts with the Unisphere API for all features (volume creation, reclamation, etc.).
The driver supports Unisphere version 9.
Where do I submit an issue against the driver?
Dell EMC officially supports the PowerMax driver. You can therefore open a ticket directly on the support website, https://www.dell.com/support/, or open a discussion in the forum: https://www.dell.com/community/Containers/bd-p/Containers
Can I run this driver in a production environment?
Yes, the driver is production-grade. Please make sure your environment follows the prerequisites and Kubernetes best practices.
What support does Dell EMC provide for the driver built on GitHub?
Dell EMC is fully committed to supporting the driver image on Docker Hub built from the sources hosted on GitHub.
Do I need programming skills to use the CSI driver?
To use the driver, you need basic knowledge of Kubernetes administration (how to create a PV, how to use a volume, etc.).
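For instance, creating and consuming a volume takes only two short manifests (a sketch; the StorageClass name `powermax` is an assumption — use one defined in your cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: powermax   # hypothetical StorageClass name
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data     # the PowerMax volume appears here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: demo-pvc
```

Applying both with `kubectl apply -f` is enough: the driver provisions the volume and attaches it to the node running the pod.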
How can I connect Dell EMC storage to Kubernetes running on VMware?
If you choose to host your cluster in a virtualized environment, the preferred protocol is iSCSI.
What do I do if I discover an issue with the code on GitHub?
Please open a discussion in the forum: https://www.dell.com/community/Containers/bd-p/Containers