Paiajay
8 Posts
0
January 7th, 2020 10:00
Trying to get Unity to support k8s from the link here: https://github.com/dell/csi-unity#install-csi-driver-for-unity
When I click on the "git repository" link, I get a 404 error.
Requesting help.
"You must have the downloaded files, including the Helm chart from the source git repository, "
gashof
1 Rookie
•
46 Posts
0
January 7th, 2020 11:00
Hi,
The product guide shows the following. Can you give this a try to download the driver?
Before you begin
- You must have the downloaded files, including the Helm chart from github.com/dell/csi-unity, using the following command:
/home/test# git clone https://github.com/dell/csi-unity
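Once the clone succeeds, the chart inside the repository can be installed with Helm v3. A minimal sketch, assuming the chart sits under a `helm/` directory in the repo and that `my-unity-values.yaml` is your prepared values file (both the chart path and the values file name are assumptions; check the repository layout):

```shell
# Clone the driver repository and install the Helm chart from it.
git clone https://github.com/dell/csi-unity
cd csi-unity
# Chart path and values file below are assumptions; adjust to the repo layout.
helm install csi-unity ./helm/csi-unity --namespace unity -f my-unity-values.yaml
```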
Thanks,
Frank
Flo_csI
2 Intern
•
167 Posts
0
April 10th, 2020 00:00
Update for 1.1 release
Flo_csI
2 Intern
•
167 Posts
0
June 25th, 2020 01:00
Update release 1.2.0
Flo_csI
2 Intern
•
167 Posts
0
September 24th, 2020 02:00
Update for release v1.3
vasekj
1 Message
0
December 8th, 2020 03:00
I want to ask for help with a technical question regarding the CSI driver. I have not received an answer from our technical support.
The customer is about to deploy VMware PKS and connect it to Dell Unity storage. They would like to use the original VMware templates/stemcells based on Ubuntu Xenial 16.x + docker-ce.
Can you please confirm that we support the scheme below with the CSI driver, or tell us what scheme changes need to be made?
There is a discrepancy in the Docker version: we are requesting Docker EE.
Customer template:
Release: v1.9.1 (release date: November 4, 2020)

Component versions:
- Kubernetes: v1.18.8
- CoreDNS: v1.6.7+vmware.3
- Docker: Linux v19.03.5, Windows v19.03.11
- etcd: v3.4.3
- Metrics Server: v0.3.6
- NCP: v3.0.2.2
- Percona XtraDB Cluster (PXC): v0.30.0
- UAA: v74.5.20

Compatibilities:
- Ops Manager: v2.9.12 or later, or v2.10.2 or later (Windows worker support on vSphere with NSX-T requires Ops Manager v2.10.2 or later)
- Xenial stemcells: see VMware Tanzu Network
- Windows stemcells: v2019.24+
- vSphere: v7.0, v6.7, v6.5
- VMware Cloud Foundation (VCF): v4.1, v4.0
- CNS for vSphere: v1.0.2, v2.0
- NSX-T: v3.0.2, v3.0.1.1, v2.5.2, v2.5.1*, v2.5.0*
- Harbor: v2.1.0, v2.0.3, v1.10.3
- Velero: v1.4.2 and later
GitHub Release notes
CSI Driver For Dell EMC Unity Capabilities
Provisioning
- Supported: persistent volumes creation, deletion, mounting, unmounting, expansion

Export, Mount
- Supported: mount volume as file system
- Not supported: raw volumes, topology

Data protection
- Supported: creation of snapshots, create volume from snapshots, volume cloning

Types of volumes
- Supported: static, dynamic

Access mode
- Supported: RWO (FC/iSCSI), RWO/RWX/ROX (NFS)
- Not supported: RWX/ROX (FC/iSCSI)

Kubernetes
- Supported: v1.17, v1.18, v1.19
- Not supported: v1.16 or previous versions

Docker EE
- Supported: v3.1
- Not supported: other versions

Installer
- Supported: Helm v3.x, Operator

OpenShift
- Supported: v4.3 (except snapshot), v4.4
- Not supported: other versions

OS
- Supported: RHEL 7.6, RHEL 7.7, RHEL 7.8, CentOS 7.6, CentOS 7.7, CentOS 7.8
- Not supported: Ubuntu, other Linux variants

Unity
- Supported: OE 5.0.0, 5.0.1, 5.0.2, 5.0.3
- Not supported: previous and later versions

Protocol
- Supported: FC, iSCSI, NFS
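To make the access-mode row concrete: a volume provisioned over FC/iSCSI can only be requested as ReadWriteOnce, while an NFS-backed volume may also use ReadWriteMany or ReadOnlyMany. A sketch of such a PVC, assuming a storage class named `unity-nfs` has already been created (the name is an assumption):

```shell
# Config fragment: request an NFS-backed volume with RWX access.
# The storage class name "unity-nfs" is hypothetical; use your own.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany        # allowed for NFS; FC/iSCSI would need ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: unity-nfs
EOF
```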
Flo_csI
2 Intern
•
167 Posts
0
December 8th, 2020 06:00
Hi @vasekj,
The Docker version listed is Docker Enterprise Edition (which is the Docker/Mirantis Kubernetes distribution).
We do not qualify VMware Tanzu as part of our support matrix, and therefore do not officially support it.
That being said, the underlying Kubernetes distribution is supported and should work if it complies with the required prerequisites (iSCSI and NFS utilities).
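As a quick sketch of those prerequisites: the worker nodes need the iSCSI initiator and NFS client utilities installed. The snippet below only prints the likely install command for the detected package manager; the package names are the common ones for each family, so verify them against your distribution's documentation:

```shell
# Sketch: print the package-install command for the CSI Unity node
# prerequisites (iSCSI initiator and NFS client utilities).
# Package names below are the usual ones; verify for your distro.
if command -v apt-get >/dev/null 2>&1; then
  PKG_CMD="apt-get install -y open-iscsi nfs-common"
elif command -v yum >/dev/null 2>&1; then
  PKG_CMD="yum install -y iscsi-initiator-utils nfs-utils"
else
  PKG_CMD="install the iSCSI initiator and NFS client packages manually"
fi
echo "$PKG_CMD"
```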
Rgds.
nachoarrieta
4 Posts
0
December 21st, 2020 04:00
Hi,
Kubernetes 1.17 released as beta a feature to migrate in-tree volumes to CSI implementations:
https://kubernetes.io/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/
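(For context: the migration described in that post is switched on per in-tree driver via feature gates on the kubelet and kube-apiserver. A sketch of what this looks like for the vSphere plugin, purely to illustrate the mechanism; the flag names are from the Kubernetes 1.17 to 1.19 era and do not apply to Unity, which never had an in-tree plugin:)

```shell
# Config fragment (not runnable standalone): enable the CSI migration shim
# and route the in-tree vsphereVolume plugin to the vSphere CSI driver.
kubelet --feature-gates=CSIMigration=true,CSIMigrationvSphere=true  # plus the usual kubelet flags
```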
Does the Unity CSI support this feature? Does it need to be supported?
I see the CSI for vSphere is going to support this in version 2.10, which makes sense as vSphere had a prior in-tree implementation. Given that we had no in-tree implementation (only ScaleIO/PowerFlex had one), is that why there is no reference to this feature in the Unity CSI implementation?
Thanks in advance!
Nacho
Flo_csI
2 Intern
•
167 Posts
0
December 28th, 2020 01:00
Hi @nachoarrieta ,
As you mentioned, the migration is meant to move from an in-tree provider to a CSI provider, so there is no real use case for CSI Unity here.
What are you trying to migrate, and what is the source?
Thx
nachoarrieta
4 Posts
0
December 30th, 2020 04:00
Hi @Flo_csI ,
Thanks for your prompt answer
I was just reading about the deprecation of in-tree providers and got curious. I did not find any reference to a ScaleIO/PowerFlex migration though.
Best Regards and again thank you!
Nacho