August 4th, 2020 02:00

MountVolume.MountDevice failed for volume

Hi Experts, 

I'm deploying a pod on Unity storage using the iSCSI protocol. I have created the PVC, and the volume looks good on the Unity side. 

[root@master helm]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-iscsi-vol-claim Bound jij8-csivol-369833ea70 10Gi RWO unity-iscsi 24m  

But my pod is not able to start correctly; below is the output from kubectl describe pod:

Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled default-scheduler Successfully assigned default/task-pv-pod to worker
Normal SuccessfulAttachVolume 7m23s attachdetach-controller AttachVolume.Attach succeeded for volume "jij8-csivol-369833ea70"
Warning FailedMount 6m59s kubelet, worker MountVolume.MountDevice failed for volume "jij8-csivol-369833ea70" : rpc error: code = Internal desc = runid=87 Unable to find device after multiple discovery attempts: [registered device not found]
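When MountVolume.MountDevice fails like this, the CSI node plugin's own logs usually carry more detail than the pod events. A hedged sketch (the "unity" namespace and "unity-node-xxx"/"driver" pod and container names are assumptions; adjust to your install):

```shell
# Find the csi-unity node pod running on the affected worker
# (namespace "unity" is an assumption; adjust to your deployment)
kubectl get pods -n unity -o wide

# Tail the driver container for the runid shown in the event message
# (pod and container names are assumptions; check with kubectl describe)
kubectl logs -n unity unity-node-xxx -c driver --tail=200 | grep "runid=87"
```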

My pod YAML file is a simple one that mounts the Unity volume into the pod:

apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: test-iscsi-vol-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage

 

On my worker node, the journal log displays the messages below:

 

Aug 04 03:40:04 worker kubelet[5354]: E0804 03:40:04.086921 5354 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/csi/csi-unity.dellemc.com^jij8-csivol-efcff9ba60-iSCSI-apm00194717505-sv_128\"" failed. No retries permitted until 2020-08-04 03:40:04.586904855 -0400 EDT m=+812.451239824 (durationBeforeRetry 500ms). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"jij8-csivol-efcff9ba60\" (UniqueName: \"kubernetes.io/csi/csi-unity.dellemc.com^jij8-csivol-efcff9ba60-iSCSI-apm00194717505-sv_128\") pod \"task-pv-pod\" (UID: \"a78741d4-f040-4fbd-a96b-06cdb601c197\") "


Aug 04 03:40:04 worker kubelet[5354]: E0804 03:40:04.590183 5354 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/csi/csi-unity.dellemc.com^jij8-csivol-efcff9ba60-iSCSI-apm00194717505-sv_128\"" failed. No retries permitted until 2020-08-04 03:40:05.590141672 -0400 EDT m=+813.454476645 (durationBeforeRetry 1s). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"jij8-csivol-efcff9ba60\" (UniqueName: \"kubernetes.io/csi/csi-unity.dellemc.com^jij8-csivol-efcff9ba60-iSCSI-apm00194717505-sv_128\") pod \"task-pv-pod\" (UID: \"a78741d4-f040-4fbd-a96b-06cdb601c197\") "


Aug 04 03:40:05 worker kubelet[5354]: E0804 03:40:05.597634 5354 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/csi/csi-unity.dellemc.com^jij8-csivol-efcff9ba60-iSCSI-apm00194717505-sv_128\"" failed. No retries permitted until 2020-08-04 03:40:07.597609413 -0400 EDT m=+815.461944410 (durationBeforeRetry 2s). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"jij8-csivol-efcff9ba60\" (UniqueName: \"kubernetes.io/csi/csi-unity.dellemc.com^jij8-csivol-efcff9ba60-iSCSI-apm00194717505-sv_128\") pod \"task-pv-pod\" (UID: \"a78741d4-f040-4fbd-a96b-06cdb601c197\") "


Aug 04 03:40:07 worker kubelet[5354]: E0804 03:40:07.611613 5354 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/csi/csi-unity.dellemc.com^jij8-csivol-efcff9ba60-iSCSI-apm00194717505-sv_128\"" failed. No retries permitted until 2020-08-04 03:40:11.611593657 -0400 EDT m=+819.475928639 (durationBeforeRetry 4s). Error: "Volume not attached according to node status for volume \"jij8-csivol-efcff9ba60\" (UniqueName: \"kubernetes.io/csi/csi-unity.dellemc.com^jij8-csivol-efcff9ba60-iSCSI-apm00194717505-sv_128\") pod \"task-pv-pod\" (UID: \"a78741d4-f040-4fbd-a96b-06cdb601c197\") "
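The "Volume has not been added to the list of VolumesInUse" message refers to the node object's status as published by the kubelet. As a sketch, you can compare what the node reports against the attach-detach controller's VolumeAttachment objects (node name "worker" is taken from the output above):

```shell
# Volumes the kubelet has published in the node status
kubectl get node worker -o jsonpath='{.status.volumesInUse}'; echo
kubectl get node worker -o jsonpath='{.status.volumesAttached}'; echo

# The attach-detach controller's view of what is attached where
kubectl get volumeattachment
```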

 

Could you please help me understand why mounting the volume failed?

 

2 Intern

 • 

166 Posts

August 6th, 2020 02:00

If you do an lsblk on the node that is supposed to mount the volume, do you see it?

Also, can you run kubectl get pv,pvc,volumeattachment -o yaml and attach the output (if possible, trim it so we see only the related volume's details and not all the objects)?

23 Posts

August 6th, 2020 07:00

Bingo!

The host's iSCSI initiator rescan was not happening, and lsblk was not displaying anything. I had actually rebooted the worker, thinking it would do a rescan and save time, but that backfired. After I ran the rescan manually, I can mount the volume, and the pod is running OK.

 

echo "- - -"> /sys/class/scsi_host/host4/scan

echo "- - -"> /sys/class/scsi_host/host3/scan
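Rather than rescanning each HBA individually, all SCSI hosts can be rescanned in one loop (same effect as the two echo commands above, without having to know the host numbers):

```shell
# Trigger a SCSI bus rescan on every host adapter (requires root)
for scan in /sys/class/scsi_host/host*/scan; do
    echo "- - -" > "$scan"
done
```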

 

[root@worker containerd]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 19G 0 part
  ├─centos-root 253:0 0 17G 0 lvm /
  └─centos-swap 253:1 0 2G 0 lvm
sdb 8:16 0 10G 0 disk
sdc 8:32 0 10G 0 disk /var/lib/kubelet/pods/0fe82c78-da1d-4893-b8e5-220968d0a8a5/volumes/kubernetes.io~csi/jij8-csivol-d

 

[root@master helm]# kubectl exec task-pv-pod -- df -h
Filesystem Size Used Avail Use% Mounted on
overlay 17G 2.6G 15G 15% /
tmpfs 64M 0 64M 0% /dev
tmpfs 910M 0 910M 0% /sys/fs/cgroup
/dev/mapper/centos-root 17G 2.6G 15G 15% /etc/hosts
shm 64M 0 64M 0% /dev/shm
/dev/sdc 9.8G 37M 9.2G 1% /usr/share/nginx/html  >>>>>

2 Intern

 • 

166 Posts

August 4th, 2020 05:00

Hi,

Can you confirm on the array side that the volume is present?

Can you check the status of iSCSI on the node (systemctl status iscsid ; iscsiadm -m session)?

Also, if there is nothing useful, can you enable csiDebug on the node via kubectl edit -n unity deployment unity-node-xxx?
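For reference, those node-side checks can be run together on the worker; a minimal sketch:

```shell
# iSCSI daemon health and active sessions on the worker node
systemctl status iscsid --no-pager
iscsiadm -m session

# Block devices currently visible to the node
lsblk
```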

 

Rgds

 

23 Posts

August 5th, 2020 05:00

Hi,

 

Yes, I can confirm that on the array side, once I created the PVC, the new storage LUN was created OK and presented to the worker node.

The worker node's iSCSI sessions also look fine. Paths show as active in Unisphere as well.

 

[root@worker ~]# iscsiadm -m session
tcp: [1] 10.241.185.142:3260,1 iqn.1992-04.com.emc:cx.apm00194717505.a2 (non-flash)
tcp: [2] 10.241.185.143:3260,2 iqn.1992-04.com.emc:cx.apm00194717505.b2 (non-flash)
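Worth noting: when the sessions are up but a newly mapped LUN is not visible, iscsiadm can rescan the logged-in sessions directly, which is equivalent to echoing into /sys/class/scsi_host:

```shell
# Rescan all logged-in iSCSI sessions for newly mapped LUNs
iscsiadm -m session --rescan

# Check whether the new block device has appeared
lsblk
```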

 

And about the CSI debug, I think I have already enabled it by specifying it in my myvalues.yaml file:

 

[root@master helm]# cat myvalues.yaml
certSecretCount: 1
syncNodeInfoInterval: 15
volumeNamePrefix: jij8-csivol
snapNamePrefix: jij8-csi-snap
csiDebug: "true"

And kubectl edit -n unity po unity-node-8g78s confirms the flag is set on the node pod:

          value: unix:///var/lib/kubelet/plugins/unity.emc.dell.com/csi_sock
        - name: X_CSI_MODE
          value: node
        - name: X_CSI_DEBUG
          value: "true"

 

