acd13

csi-isilon: isilon-controller-0 pods in crashloopbackoff


After running install.isilon,

I see that the isilon-controller-0 pod keeps restarting and eventually goes into CrashLoopBackOff:

[root@localhost helm]# kubectl get pods -A
NAMESPACE     NAME                                            READY   STATUS             RESTARTS   AGE
isilon        isilon-controller-0                             2/4     CrashLoopBackOff   12         16m
isilon        isilon-node-4bxgl                               2/2     Running            0          16m
kube-system   coredns-584795fc57-78wgr                        1/1     Running            8          2d22h
kube-system   coredns-584795fc57-7cfkt                        1/1     Running            9          2d22h
kube-system   etcd-localhost.localdomain                      1/1     Running            5          2d22h
kube-system   kube-apiserver-localhost.localdomain            1/1     Running            5          2d21h
kube-system   kube-controller-manager-localhost.localdomain   1/1     Running            6          2d21h
kube-system   kube-flannel-ds-amd64-jnvgh                     1/1     Running            6          2d22h
kube-system   kube-proxy-7zpkf                                1/1     Running            5          2d22h
kube-system   kube-scheduler-localhost.localdomain            1/1     Running            6          2d21h
kube-system   tiller-deploy-586965d498-q6wln                  1/1     Running            3          2d21h

Looking at the logs, I see the following errors:
[root@localhost helm]# kubectl logs -f isilon-controller-0 -n isilon provisioner
I1216 17:40:32.391524 1 feature_gate.go:226] feature gates: &{map[]}
I1216 17:40:32.391584 1 csi-provisioner.go:98] Version: v1.2.1-0-g971feacb
I1216 17:40:32.391594 1 csi-provisioner.go:112] Building kube configs for running in cluster...
I1216 17:40:32.398031 1 connection.go:151] Connecting to unix:///var/run/csi/csi.sock
W1216 17:40:42.398876 1 connection.go:170] Still connecting to unix:///var/run/csi/csi.sock
W1216 17:40:52.399053 1 connection.go:170] Still connecting to unix:///var/run/csi/csi.sock
E1216 17:40:53.400371 1 connection.go:129] Lost connection to unix:///var/run/csi/csi.sock.
F1216 17:40:53.400470 1 connection.go:91] Lost connection to CSI driver, exiting

[root@localhost helm]# kubectl logs -f isilon-controller-0 -n isilon attacher
I1216 17:38:44.424424 1 main.go:91] Version: v1.1.1-0-g0d83c92
I1216 17:38:44.425497 1 connection.go:151] Connecting to unix:///var/run/csi/csi.sock
W1216 17:38:54.425740 1 connection.go:170] Still connecting to unix:///var/run/csi/csi.sock
W1216 17:39:04.425755 1 connection.go:170] Still connecting to unix:///var/run/csi/csi.sock
E1216 17:39:06.234790 1 connection.go:129] Lost connection to unix:///var/run/csi/csi.sock.
W1216 17:39:14.426026 1 connection.go:170] Still connecting to unix:///var/run/csi/csi.sock
W1216 17:39:24.426267 1 connection.go:170] Still connecting to unix:///var/run/csi/csi.sock
E1216 17:39:27.273596 1 connection.go:129] Lost connection to unix:///var/run/csi/csi.sock.
W1216 17:39:34.426565 1 connection.go:170] Still connecting to unix:///var/run/csi/csi.sock

I tried deleting csi.sock and reinstalling the driver (helm purge isilon), but I keep coming back to this issue. I see that the container images download successfully, but I can't figure out this portion:

Events:
  Type     Reason     Age                    From                            Message
  ----     ------     ----                   ----                            -------
  Normal   Scheduled  3m55s                  default-scheduler               Successfully assigned isilon/isilon-controller-0 to localhost.localdomain
  Normal   Pulled     3m54s                  kubelet, localhost.localdomain  Container image "quay.io/k8scsi/csi-attacher:v1.1.1" already present on machine
  Normal   Created    3m54s                  kubelet, localhost.localdomain  Created container attacher
  Normal   Started    3m54s                  kubelet, localhost.localdomain  Started container attacher
  Normal   Pulling    3m54s                  kubelet, localhost.localdomain  Pulling image "dellemc/csi-isilon:v1.0.0"
  Normal   Started    3m54s                  kubelet, localhost.localdomain  Started container snapshotter
  Normal   Created    3m54s                  kubelet, localhost.localdomain  Created container snapshotter
  Normal   Pulling    3m54s                  kubelet, localhost.localdomain  Pulling image "quay.io/k8scsi/csi-snapshotter:v1.2.0"
  Normal   Pulled     3m54s                  kubelet, localhost.localdomain  Successfully pulled image "quay.io/k8scsi/csi-snapshotter:v1.2.0"
  Normal   Pulled     3m53s                  kubelet, localhost.localdomain  Successfully pulled image "dellemc/csi-isilon:v1.0.0"
  Normal   Created    3m53s                  kubelet, localhost.localdomain  Created container driver
  Normal   Started    3m53s                  kubelet, localhost.localdomain  Started container driver
  Normal   Started    2m55s (x3 over 3m54s)  kubelet, localhost.localdomain  Started container provisioner
  Warning  BackOff    2m20s (x3 over 3m11s)  kubelet, localhost.localdomain  Back-off restarting failed container
  Normal   Created    2m6s (x4 over 3m54s)   kubelet, localhost.localdomain  Created container provisioner
  Normal   Pulled     2m6s (x4 over 3m54s)   kubelet, localhost.localdomain  Container image "quay.io/k8scsi/csi-provisioner:v1.2.1" already present on machine
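For what it's worth, the sidecar logs only show the connecting side of the socket. A few commands that can narrow down which container is actually at fault, as a sketch (container names as in the pod above):

```shell
# The sidecars only *connect* to /var/run/csi/csi.sock; the "driver"
# container is the one that *serves* it, so check its logs first:
kubectl logs isilon-controller-0 -n isilon driver

# Per-container state, exit codes, and restart counts:
kubectl describe pod isilon-controller-0 -n isilon

# Logs from the previous (crashed) instance of a sidecar:
kubectl logs isilon-controller-0 -n isilon provisioner --previous
```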

 

Any ideas?
4 Replies
SJ-CSI

Re: csi-isilon: isilon-controller-0 pods in crashloopbackoff

Hi,

Are the Isilon cluster IP and credentials (secret) you provided correct? Can you connect to that IP with those credentials from your Kubernetes cluster?
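A sketch of how to double-check both. The secret name isilon-creds and the isilon namespace are assumptions here (check your values file), and the IP and user are placeholders:

```shell
# Decode the credentials actually stored in the driver's secret
# ("isilon-creds" is an assumed name; adjust to your install):
kubectl get secret isilon-creds -n isilon -o jsonpath='{.data.username}' | base64 -d; echo
kubectl get secret isilon-creds -n isilon -o jsonpath='{.data.password}' | base64 -d; echo

# Verify the OneFS API answers with those credentials from a cluster node
# (8080 is the default platform API port; the exact path varies by OneFS version):
curl -k -u '<user>:<password>' 'https://<isilon-ip>:8080/platform/3/cluster/config'
```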

acd13

Re: csi-isilon: isilon-controller-0 pods in crashloopbackoff

Yes, those are fine.

I was able to resolve it by using the Weave Net pod network on Kubernetes instead of Flannel. For this I had to reinstall the environment (thanks to Florian C).
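For anyone hitting the same thing, the swap looked roughly like this on a kubeadm cluster. I ended up reinstalling from scratch, so treat this in-place version as a sketch (the Weave manifest URL is the one from Weaveworks' install docs):

```shell
# Remove the Flannel DaemonSet (name as it appears in kube-system):
kubectl delete daemonset kube-flannel-ds-amd64 -n kube-system

# Install Weave Net, as documented by Weaveworks:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

# Recreate the driver pods so they start on the new pod network:
kubectl delete pod isilon-controller-0 -n isilon
```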

This could be added to the documentation: that a specific pod network is required.

Also, I didn't see a step for 'kubectl create namespace test', which is required to run the tests, since they run in the test namespace.
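I.e., before running the driver's test scripts:

```shell
# The tests assume this namespace already exists:
kubectl create namespace test
kubectl get namespace test   # confirm it was created
```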

 

Anjan
SJ-CSI

Re: csi-isilon: isilon-controller-0 pods in crashloopbackoff

Thanks for the updates and suggestions. 

We have verified the driver with Flannel, so this is probably an issue with the Flannel network setup in your environment.
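If someone would rather debug Flannel than replace it, these are the usual things to check (pod name taken from the listing in the original post; a sketch, not an exhaustive list):

```shell
# Flannel's own logs often show subnet-lease or backend errors:
kubectl logs -n kube-system kube-flannel-ds-amd64-jnvgh

# Flannel requires every node to have a podCIDR allocated:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
```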

There is a step in the Product Guide, under "Test the CSI Driver", that covers creating the "test" namespace.

Thanks!

 

cocampbe

Re: csi-isilon: isilon-controller-0 pods in crashloopbackoff

@acd13 I am using Canal and have not had this issue. I am running RKE, so YMMV.
