6 Posts

July 9th, 2020 20:00

[CSI][PowerMax] How to create a customized PV

    I tried the test case in the CSI test directory csi-powermax/test/helm/xfspre, but the pod failed to create because the PV was not created successfully. I checked the configuration file csi-powermax/test/helm/xfspre/templates/pv.yaml. Can you please explain the highlighted lines below and how to prepare for this test case?

##############################

apiVersion: v1
kind: PersistentVolume
metadata:
  name: vol4
  namespace: {{ .Values.namespace }}
spec:
  capacity:
    storage: 16Gi
  csi:
    driver: powermax.emc.dell.com
    fsType: xfs
    volumeHandle: 72cebf0c00000001
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  storageClassName: powermax

#############################

 

Thank you !

15 Posts

August 25th, 2020 18:00

Hello,

We are still checking the test cases.
Any comments would be appreciated.

All the other tests ran fine except xfspre.

No modifications were made to the files under csi-powermax/test/helm/xfspre.

In csi-powermax/test/helm/xfspre/templates/pv.yaml, the volumeHandle parameter was left as is.
Running get.volume.ids outputs 72cebf0c00000001 (the same value as in the pv.yaml above).

 

Now, running sudo kubectl describe pod powermaxtest-0 -n test outputs:

Warning FailedAttachVolume 117s (x11 over 10m) attachdetach-controller AttachVolume.Attach failed for volume "vol4" : attachment timeout for volume 72cebf0c00000001
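
For anyone debugging a similar timeout, two generic checks show whether the attach request ever reached the CSI driver (standard Kubernetes commands, not specific to PowerMax):

# Confirm the PV exists and see what it is bound to
kubectl get pv vol4 -o wide

# List attach requests created by the attachdetach-controller;
# the ATTACHED column shows whether the CSI driver completed the attach
kubectl get volumeattachment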

 

Also, running sudo kubectl logs powermax-controller-0 -n powermax driver -f outputs nothing.

The CSI driver appears to take no action, so we wonder if there are prerequisite settings (volume, SG, MV).
Is there any setup or configuration we should check before running the xfspre test?

Thank you,

Satoshi

46 Posts

August 27th, 2020 09:00

Hi Satoshi,

Sorry, it looks like we missed your first post. That test uses an existing volume. Let me see if I can gather some details around this and get back to you.

Thanks,
Frank

15 Posts

August 31st, 2020 18:00

Hi frank_g,

Appreciate your help.

Under csi-powermax/test/helm/xfspre/templates/, we see that pv.yaml is used, whereas other tests like 2vols use pvc.yaml, so we wonder whether it could be something to do with the PVC?

 

Thank you,

Satoshi

46 Posts

September 2nd, 2020 08:00

Hi Satoshi,

Thank you for your patience.  We can use a PersistentVolume (pv.yaml) with the volumeHandle to reference a volume created by Kubernetes or created manually on the array. This allows the PersistentVolumeClaim (pvc.yaml) to use that storage. I attached the Kubernetes explanation below, along with a link.

Unfortunately, the example pv.yaml under xfspre is not correct. I am working with engineering to correct it and to put together steps on how to accomplish this. I will update this thread with those steps when finished.

Thanks,
Frank

https://kubernetes.io/docs/concepts/storage/persistent-volumes/

PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.

PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see AccessModes).

While PersistentVolumeClaims allow a user to consume abstract storage resources, it is common that users need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than just size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the StorageClass resource.

46 Posts

September 9th, 2020 17:00

Hi Satoshi,

The following are the steps that I would use to test using an existing volume with the xfspre test directory.  Let me know if this helps.

Create volume
Create a volume on the PowerMax array. Give it a name under Advanced Options, and put it in the storage group you are working with. After creation, note the array ID, Volume Identifier, and Symmetrix Vol ID.

Prepare the required files

Copy the /csi-powermax/test/helm/xfspre directory to xfspre-test for files to work with.

# cp -R xfspre xfspre-test

pv.yaml
Edit the pv.yaml file

  • Replace the volumeHandle with a value in the correct format (a sketch of composing this value follows the example pv.yaml below)

Format:

volumeHandle: <Volume Identifier>-<Array ID>-<Symmetrix Vol ID>

Example:

volumeHandle: satoshis-xfs-000######704-00159

Note: I was not able to do this with a volume that had a blank Volume Identifier. This is a prereq.

  • Remove the namespace reference from pv.yaml; it is not needed:
    namespace: {{ .Values.namespace }}
  • Change the driver name
    from driver: powermax.emc.dell.com to driver: csi-powermax.dellemc.com
  • Since the name of the test is xfs and the volume type is xfs, I listed my storage classes to find my powermax-xfs storage class:
    # kubectl get sc
    NAME                     PROVISIONER                AGE
    csi-isilon-b (default)   csi-isilon.dellemc.com     7d19h
    powermax (default)       csi-powermax.dellemc.com   22h
    powermax-xfs             csi-powermax.dellemc.com   22h

And put in powermax-xfs as the storageClassName:
storageClassName: powermax-xfs

  • Remove the fsType: xfs reference from pv.yaml; it seems redundant there.
  • Add this to pv.yaml (the reclaim policy in the storage class could be changed to Retain too):
    persistentVolumeReclaimPolicy: Retain
  • Check that the volume size matches.

The pv.yaml should look like this:

# more pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: vol4
spec:
  capacity:
    storage: 16Gi
  csi:
    driver: csi-powermax.dellemc.com
    volumeHandle: satoshis-xfs-000######704-00159
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  storageClassName: powermax-xfs
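
For clarity, here is a minimal shell sketch of how the volumeHandle string is composed from the three values noted when the volume was created. The values below are hypothetical placeholders, not real IDs:

# Hypothetical values -- substitute the ones you noted at volume creation
VOLUME_IDENTIFIER="myvol-xfs"   # Volume Identifier set under Advanced Options
ARRAY_ID="000000000001"         # 12-digit array serial number (hypothetical)
SYM_VOL_ID="00123"              # Symmetrix Vol ID (hypothetical)

# The handle is simply the three values joined with hyphens
echo "volumeHandle: ${VOLUME_IDENTIFIER}-${ARRAY_ID}-${SYM_VOL_ID}"
# -> volumeHandle: myvol-xfs-000000000001-00123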

 

pvc4.yaml

Change the storageClassName in pvc4.yaml to powermax-xfs too.
Confirm the volume size matches.

The pvc4.yaml should look like this:
# more pvc4.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvol4
  namespace: {{ .Values.namespace }}
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  volumeName: vol4
  resources:
    requests:
      storage: 16Gi
  storageClassName: powermax-xfs
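
Once the test has created the PV and PVC, a quick generic check confirms that the claim bound to the volume (standard kubectl; the names come from the files above):

kubectl get pv vol4
kubectl get pvc pvol4 -n test

Both should report a STATUS of Bound.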

Run test

sh starttest.sh -t xfspre-test -n test

To find which node powermaxtest-0 is running on and the container ID of the CentOS container used in these tests:

kubectl describe pods powermaxtest-0 -n test

Note these lines:

Node: <node name>/<node IP>
and
Container ID: docker://7a48bc6a259c5c8eb0dc4c69ba23645d159131b95313db9e3469e12ff55eff47
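
As a shortcut, the same two values can be pulled directly with kubectl's jsonpath output (standard kubectl, using the pod and namespace names from this test):

kubectl get pod powermaxtest-0 -n test \
  -o jsonpath='{.spec.nodeName}{"\n"}{.status.containerStatuses[0].containerID}{"\n"}'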


Run docker exec on that node to access the CentOS container and populate data on the volume:

docker exec -it 7a48bc6a259c5c8eb0dc4c69ba23645d159131b95313db9e3469e12ff55eff47 /bin/bash

This message likely indicates that the docker command was run on the wrong node:

# docker exec -it 7a48bc6a259c5c8eb0dc4c69ba23645d159131b95313db9e3469e12ff55eff47 /bin/bash
Error: No such container: 7a48bc6a259c5c8eb0dc4c69ba23645d159131b95313db9e3469e12ff55eff47
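
If hopping between nodes is inconvenient, kubectl exec reaches the container from wherever your kubeconfig works, without needing the node name or the Docker container ID (a standard kubectl alternative to the docker exec approach above):

kubectl exec -it powermaxtest-0 -n test -- /bin/bash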

Populate some test data on the volume:

[root@powermaxtest-0 /]# ls
bin dev home lib64 media opt root sbin sys usr
data4 etc lib lost+found mnt proc run srv tmp var
[root@powermaxtest-0 /]# cd data4
[root@powermaxtest-0 data4]# ls
lost+found
[root@powermaxtest-0 data4]# mkdir satoshisdirectory
[root@powermaxtest-0 data4]# cd satoshisdirectory/
[root@powermaxtest-0 satoshisdirectory]# vi satoshi.txt
[root@powermaxtest-0 satoshisdirectory]# more satoshi.txt
hello world

Stop the test

sh stoptest.sh -t xfspre-test -n test

The volume is still on the array (the PV reclaim policy is Retain).

Confirm data
Run the test again to check that the data is still there

# sh starttest.sh -t xfspre-test -n test

Describe the pod:

# kubectl describe pods powermaxtest-0 -n test

The test pod is now running on a different node, so docker exec should be run on that node.

Node: <node name>/<node IP>

Container ID: docker://d5ca42d7d2b866f85a66e1b8c922b74c3fcf000064ffa853bcdf184ad96f652c

Use docker exec and verify the data is still there.

# docker exec -it d5ca42d7d2b866f85a66e1b8c922b74c3fcf000064ffa853bcdf184ad96f652c /bin/bash

[root@powermaxtest-0 /]# ls
bin data4 dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys tmp usr var
[root@powermaxtest-0 /]# cd data4
[root@powermaxtest-0 data4]# ls
lost+found satoshisdirectory
[root@powermaxtest-0 data4]# cd satoshisdirectory/
[root@powermaxtest-0 satoshisdirectory]# ls
satoshi.txt
[root@powermaxtest-0 satoshisdirectory]# more satoshi.txt
hello world

 

 

 

15 Posts

September 14th, 2020 04:00

Hi frank_g,

 

Thank you for putting together a nicely outlined procedure.

I greatly appreciate it, and so does my customer.

He has gone through the steps, and it looks like the pod is created successfully.

All the tests under the test folder are finally complete.

Thank you again!
