

K8s: Creating a Pod using a PowerStore LUN fails with a mount error

Dear Support,

We created PVCs/PVs in a customer's PowerStore (iSCSI) + Kubernetes environment, but the Pod fails to mount the volumes.

Any suggestions would be much appreciated. Thanks!

[root@ran-test-hw12-1 iscsi]# kubectl -n csi-powerstore get pvc

NAME               STATUS   VOLUME                  CAPACITY   ACCESS MODES   STORAGECLASS     AGE

ps-block-vol       Bound    csi-pstore-bba0825c14   8Gi        RWO            powerstore-raw   24h

ps-block-vol-32g   Bound    csi-pstore-5286e2fcd2   32Gi       RWO            powerstore-raw   20h

ps-xfs-vol         Bound    csi-pstore-ce9c065534   16Gi       RWO            powerstore-xfs   24h

 

[root@ran-test-hw12-1 iscsi]# kubectl -n csi-powerstore get pv

NAME                    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                             STORAGECLASS     REASON   AGE

csi-pstore-5286e2fcd2   32Gi       RWO            Delete           Bound    csi-powerstore/ps-block-vol-32g   powerstore-raw            20h

csi-pstore-bba0825c14   8Gi        RWO            Delete           Bound    csi-powerstore/ps-block-vol       powerstore-raw            24h

csi-pstore-ce9c065534   16Gi       RWO            Delete           Bound    csi-powerstore/ps-xfs-vol         powerstore-xfs            24h

 

[root@ran-test-hw12-1 iscsi]# cat create_pod.yaml

apiVersion: v1

kind: Pod

metadata:

    name: ps-pod

    namespace: csi-powerstore

spec:

    containers:

      - name: ps-pod

        image: centos:latest

        command: [ "/bin/sleep", "3600" ]

        volumeDevices:

          - devicePath: "/dev/ps-raw"

            name: ps-raw-vol-1

        volumeMounts:

          - mountPath: "/ps-xfs"

            name: ps-xfs-vol-1

    nodeName: rancher-dg-tn8

    volumes:

      - name: ps-raw-vol-1

        persistentVolumeClaim:

          claimName: ps-block-vol

      - name: ps-xfs-vol-1

        persistentVolumeClaim:

          claimName: ps-xfs-vol

 

 

[root@ran-test-hw12-1 powerstore_csi_install]# kubectl -n csi-powerstore describe pods ps-pod

Name:         ps-pod

Namespace:    csi-powerstore

Priority:     0

Node:         ran-test-hw12-1/10.2.56.231

Start Time:   Tue, 28 Sep 2021 12:13:12 +0800

Labels:       <none>

Annotations:  kubectl.kubernetes.io/last-applied-configuration:

                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"ps-pod","namespace":"csi-powerstore"},"spec":{"containers":[{"command...

              kubernetes.io/psp: default-psp

Status:       Pending

IP:          

IPs:          <none>

Containers:

  ps-pod:

    Container ID: 

    Image:         centos:latest

    Image ID:     

    Port:          <none>

    Host Port:     <none>

    Command:

      /bin/bash

    State:          Waiting

      Reason:       ContainerCreating

    Ready:          False

    Restart Count:  0

    Environment:    <none>

    Mounts:

      /ps-xfs from ps-xfs-vol-1 (rw)

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-ls6cc (ro)

    Devices:

      /dev/ps-raw from ps-raw-vol-1

Conditions:

  Type              Status

  Initialized       True

  Ready             False

  ContainersReady   False

  PodScheduled      True

Volumes:

  ps-raw-vol-1:

    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)

    ClaimName:  ps-block-vol

    ReadOnly:   false

  ps-xfs-vol-1:

    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)

    ClaimName:  ps-xfs-vol

    ReadOnly:   false

  default-token-ls6cc:

    Type:        Secret (a volume populated by a Secret)

    SecretName:  default-token-ls6cc

    Optional:    false

QoS Class:       BestEffort

Node-Selectors:  <none>

Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s

                 node.kubernetes.io/unreachable:NoExecute for 300s

Events:

  Type     Reason                  Age    From                      Message

  ----     ------                  ----   ----                      -------

  Normal   SuccessfulAttachVolume  3m53s  attachdetach-controller   AttachVolume.Attach succeeded for volume "csi-pstore-ce9c065534"

  Normal   SuccessfulAttachVolume  3m53s  attachdetach-controller   AttachVolume.Attach succeeded for volume "csi-pstore-bba0825c14"

  Warning  FailedMount             112s   kubelet, ran-test-hw12-1  MountVolume.MountDevice failed for volume "csi-pstore-ce9c065534" : rpc error: code = DeadlineExceeded desc = context deadline exceeded

  Warning  FailedMapVolume         112s   kubelet, ran-test-hw12-1  MapVolume.SetUpDevice failed for volume "csi-pstore-bba0825c14" : rpc error: code = DeadlineExceeded desc = context deadline exceeded

  Warning  FailedMount             111s   kubelet, ran-test-hw12-1  Unable to attach or mount volumes: unmounted volumes=[ps-xfs-vol-1 ps-raw-vol-1], unattached volumes=[ps-xfs-vol-1 default-token-ls6cc ps-raw-vol-1]: timed out waiting for the condition
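Given that the mount/map calls time out inside the CSI node plugin, one quick node-side check (a sketch, assuming the CentOS 7 layout) is whether the scsi_id utility that the driver's WWN lookup depends on is present at all:

```shell
# Look for scsi_id in the usual locations (on CentOS 7 it normally lives at
# /usr/lib/udev/scsi_id and is not on PATH).
SCSI_ID=$(command -v scsi_id || ls /usr/lib/udev/scsi_id /lib/udev/scsi_id 2>/dev/null | head -n1)
if [ -n "$SCSI_ID" ]; then
  echo "scsi_id found at: $SCSI_ID"
else
  echo "scsi_id not found - WWN lookup via scsi_id will fail"
fi

# For an already-attached LUN the WWN can also be read from sysfs when the
# wwid attribute is populated (replace sdX with the real device):
# cat /sys/block/sdX/device/wwid
```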

 

Replies (4)

Some background information:

1. Operating System and its version: CentOS 7.6

2. Orchestrator type and its version (e.g., Kubernetes v1.19.4 or OpenShift 4.7): Kubernetes v1.18.17

3. Driver version (e.g., CSI-PowerScale 1.7): CSI-PowerStore 1.3

4. Installer type (Helm or Operator); in case of Operator, its version: Operator

5. Storage array version: PowerStore 2.0.0.0

 

 

Dell Technologies

Hi @Jae.Liang,

Can you share the logs of the driver (controller, plus the node on which the Pod is running)?

 

Thanks 

Hi @Flo_csl,

I suspect the problem comes from a failure in the getDeviceWWNWithSCSIID() function in the gobrick module: scsi_id cannot be found in the PowerStore CSI node image, and no wwid file is created for the scanned device on the Kubernetes node.

From the CSI node driver log, multipath uses chroot correctly, but scsi_id does not. Comparing the two files in the gobrick module:

https://github.com/dell/gobrick/blob/master/pkg/scsi/scsi.go

https://github.com/dell/gobrick/blob/master/pkg/multipath/multipath.go

there is a minor difference: a line looks to be missing in scsi.go, which leaves s.chroot nil, so "chroot scsi_id" never runs:

[Screenshots: the scsi.go and multipath.go constructors, side by side]
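The suspected difference can be illustrated with a minimal, self-contained Go sketch (hypothetical names, not the actual gobrick code): if a constructor accepts a chroot path but never stores it on the struct, every later command runs without the chroot prefix:

```go
package main

import "fmt"

// Simplified stand-ins for gobrick's multipath and scsi helpers (hypothetical
// types; the real code lives in pkg/multipath/multipath.go and pkg/scsi/scsi.go).
type multipath struct{ chroot string }
type scsi struct{ chroot string }

// newMultipath mirrors the working constructor: the chroot path is stored.
func newMultipath(chroot string) *multipath {
	return &multipath{chroot: chroot}
}

// newSCSI mirrors the suspected bug: the chroot argument is accepted but
// never assigned, so s.chroot keeps its zero value.
func newSCSI(chroot string) *scsi {
	s := &scsi{}
	_ = chroot // missing: s.chroot = chroot
	return s
}

// command prepends "chroot <path>" only when a chroot path was stored.
func command(chroot, bin string) string {
	if chroot != "" {
		return fmt.Sprintf("chroot %s %s", chroot, bin)
	}
	return bin
}

func main() {
	mp := newMultipath("/noderoot")
	sc := newSCSI("/noderoot")
	fmt.Println(command(mp.chroot, "multipath")) // chroot /noderoot multipath
	fmt.Println(command(sc.chroot, "scsi_id"))   // scsi_id  (chroot prefix lost)
}
```

Under this assumption, adding the missing assignment in the scsi constructor would make scsi_id run under chroot just as multipath already does.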

 


Hi @Flo_csI,

The field team engaged support for this issue, and I created CSITRIDENT-1009. All logs were uploaded to that ticket.

I checked the logs and found that the worker node can't read the device when scanning devices via iSCSI.

I'm not sure about the root cause, though, so I raised the ticket.
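For reference, the node-side iSCSI scan can also be repeated manually to see whether the LUN's block device appears and is readable (a sketch; session and device names depend on the environment):

```shell
# Re-run the node-side iSCSI scan manually (sketch; needs open-iscsi installed).
if command -v iscsiadm >/dev/null 2>&1; then
  iscsiadm -m session --rescan || true  # rescan existing sessions for new LUNs
  lsblk --scsi || true                  # scanned SCSI/iSCSI devices appear here
  STATUS="rescan attempted"
else
  STATUS="iscsiadm not installed on this host"
fi
echo "$STATUS"

# To verify a device is actually readable (replace sdX with the real device):
# dd if=/dev/sdX of=/dev/null bs=4k count=1
```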
