February 20th, 2021 02:00

csi-unity nfs claim pending

Hello all

I'm trying to configure csi-unity with NFS-only provisioning. I've successfully installed the CSI plugin with these parameters:

#cat myvalues.yaml
csiDebug: "true"
volumeNamePrefix: csivol
snapNamePrefix: csi-snap
imagePullPolicy: Always
certSecretCount: 1
syncNodeInfoInterval: 5
controllerCount: 2
createStorageClassesWithTopology: true
storageClassProtocols:
  - protocol: "NFS"
storageArrayList:
  - name: "CKM00XXXXXX50"
    isDefaultArray: "true"
    tieringPolicy: "2"
    storageClass:
      storagePool: Pool0
      reclaimPolicy: Delete
      allowVolumeExpansion: true
      protocol: "NFS"
      hostIoSize: "8192"
      nasServer: "nas_14"
    snapshotClass:
      retentionDuration: "2:2:23:45"

When I try to claim a PV, it goes into a Pending state with the following error:

 failed to provision volume with StorageClass "unity-nfs": error generating accessibility requirements: no available topology found 

#k get sc
NAME        PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
unity-nfs   csi-unity.dellemc.com   Delete          Immediate           true                   14m

#k get pvc
NAME      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myclaim   Pending                                      unity-nfs      13m

#cat claim-unity.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: unity-nfs
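
For reference, the "no available topology found" message comes from the external provisioner trying to build accessibility requirements out of the CSI topology labels on the nodes. A generic way to check whether the nodes expose any topology for the driver (plain kubectl, nothing csi-unity specific) would be something like:

kubectl get csinode -o yaml | grep -A3 topologyKeys
kubectl get nodes --show-labels | grep -i csi-unity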

Any advice?

thanks

Matteo


February 21st, 2021 02:00

I found some errors:

- the pool must be indicated by its CLI ID, not by its label

- before using NFS, an iSCSI session must be set up and the initiators must be logged in

After that, the CSI driver is able to create the filesystem and the NFS share, but now the worker node doesn't mount it:

Operation for "{volumeName:kubernetes.io/csi/csi-unity.dellemc.com^csivol-cffcb5b02b-NFS-virt21087h85ot-fs_1 podName: nodeName:}" failed. No retries permitted until 2021-02-21 04:58:52.024755582 -0500 EST m=+607308.118069457 (durationBeforeRetry 2m2s). Error: "MountVolume.NewMounter initialization failed for volume \"csivol-cffcb5b02b\" (UniqueName: \"kubernetes.io/csi/csi-unity.dellemc.com^csivol-cffcb5b02b-NFS-virt21087h85ot-fs_1\") pod \"three-crown-postgresql-0\" (UID: \"b5ffc562-debe-49ed-a104-827b66b0fd00\") : kubernetes.io/csi: expected valid fsGroupPolicy, received nil value or empty string"

OR

MountVolume.NewMounter initialization failed for volume "csivol-cffcb5b02b" : kubernetes.io/csi: expected valid fsGroupPolicy, received nil value or empty string
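
If it helps with the diagnosis, the object this error refers to is the CSIDriver registration, which can be dumped with plain kubectl to see what fsGroupPolicy it currently carries:

kubectl get csidriver csi-unity.dellemc.com -o yaml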

 

Thanks

Matteo


February 22nd, 2021 01:00

Hi @errevi_mancio,

You are right on both points: the driver uses the CLI ID identifier, not the label, and there is a known bug when using NFS only: https://dell.github.io/storage-plugin-docs/docs/release/unity/#known-issues ; this will be fixed in the next release.

 

On the fsGroupPolicy error ("received nil value"): can you share more details about your configuration (Kubernetes version and flavor, any feature gates enabled, etc.)?

The fsGroupPolicy field was introduced recently to specify whether a CSI driver should honor the permission change defined in the fsGroup directive of your Pod specification.

In the past, we faced issues with chown on NFS (cf. https://github.com/kubernetes/kubernetes/issues/90123#issuecomment-613133490, which states "fsGroup was not designed to handle NFS").
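
For context, fsGroup lives in the Pod's securityContext; a minimal sketch (hypothetical names, only meant to show which field the policy acts on):

apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo                # hypothetical name, for illustration only
spec:
  securityContext:
    fsGroup: 2000                   # group ownership applied to the volume when the driver's policy allows it
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: myclaim          # the claim from the first post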

Your best bet is to modify https://github.com/dell/csi-unity/blob/master/helm/csi-unity/templates/csidriver.yaml to add fsGroupPolicy: ReadWriteOnceWithFSType in the spec section.
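
Once the driver is reinstalled, you can confirm the field was actually picked up, for example:

kubectl get csidriver csi-unity.dellemc.com -o jsonpath='{.spec.fsGroupPolicy}{"\n"}'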

 

Let us know if that works or send me a private message if you need more support.


February 23rd, 2021 11:00

Hello @Flo_csI 

Now I'm trying to attach an iSCSI LUN, but I receive the same error:

MountVolume.NewMounter initialization failed for volume "csivol-f702818e8b" : kubernetes.io/csi: expected valid fsGroupPolicy, received nil value or empty string

$ kubectl get node
NAME       STATUS   ROLES    AGE   VERSION
srvmkb01   Ready    master   9d    v1.20.2
srvmkb02   Ready    master   9d    v1.20.2
srvmkb03   Ready    master   9d    v1.20.2
srvwkb01   Ready    <none>   9d    v1.20.2
srvwkb02   Ready    <none>   9d    v1.20.2
srvwkb03   Ready    <none>   9d    v1.20.2

$ cat myvalues-vsa-iscsi.yml
csiDebug: "true"
volumeNamePrefix: csivol
snapNamePrefix: csi-snap
imagePullPolicy: Always
certSecretCount: 1
syncNodeInfoInterval: 5
controllerCount: 2
createStorageClassesWithTopology: false
storageClassProtocols:
  - protocol: "NFS"
  - protocol: "iSCSI"
storageArrayList:
  - name: "VIRT21087H85OT"
    isDefaultArray: "true"
    storageClass:
      storagePool: pool_1
      tieringPolicy: "0"
      reclaimPolicy: Delete
      FsType: ext4
      isDataReductionEnabled: false
      allowVolumeExpansion: true
      nasServer: "nas_1"
      hostIoSize: "8192"
    snapshotClass:
      retentionDuration: "2:2:23:45"

 

 

I've modified csidriver.yaml

$ cat ./helm/csi-unity/templates/csidriver.yaml
{{- if or (eq .Capabilities.KubeVersion.Minor "17") (eq .Capabilities.KubeVersion.Minor "17+")}}
apiVersion: storage.k8s.io/v1beta1
{{- else }}
apiVersion: storage.k8s.io/v1
{{- end }}
kind: CSIDriver
metadata:
  name: csi-unity.dellemc.com
spec:
  attachRequired: true
  fsGroupPolicy: ReadWriteOnceWithFSType
  podInfoOnMount: true
  volumeLifecycleModes:
    - Persistent
    - Ephemeral

Before this modification I uninstalled the CSI driver, modified the YAML, and then reinstalled the driver.

Behaviour: the LUN has been created and attached to the node, but no partition or filesystem is present:

root@srvwkb01:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 16G 0 disk
└─sda1 8:1 0 16G 0 part
└─template--vg-root 253:0 0 16G 0 lvm /
sdb 8:16 0 8G 0 disk
sr0 11:0 1 1024M 0 rom
root@srvwkb01:~# fdisk -l /dev/sdb
Disk /dev/sdb: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 8192 bytes / 4194304 bytes
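
Just to note the likely reason for the missing partition, assuming the standard CSI flow: when an fsType is set, the filesystem is only created by the node plugin during the staging/mount step, so as long as MountVolume.NewMounter fails with the fsGroupPolicy error the device is expected to stay raw. The mount attempts can be followed with, for instance (<pod-using-the-claim> being a placeholder):

kubectl describe pod <pod-using-the-claim>
kubectl get events --sort-by='.lastTimestamp' | grep -i -E 'mount|volume'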

Thanks

Matteo

 

 
