I spent days trying to get this driver working, and finally did. Most of the problems I ran into came from me being a novice at storage management. I want to share what I learned, along with some recommendations for the CSI developers. I assume much of the following also applies to the other EMC systems (PowerFlex, PowerMax, etc.), but I'm not sure.
Prerequisites before installing
- First of all, the documentation only says the following about iSCSI requirements: "To use iSCSI protocol, iSCSI initiator utils packages needs to be installed". There was a lot more I needed to do to make sure the nodes' iSCSI initiators were prepped. I would recommend Dell update their documentation to highlight that the nodes NEED to be logged in to the iSCSI targets ahead of time, not just have the packages installed. The following was configured on my virtual nodes running Ubuntu 18.04. Feel free to add this to your docs if you want!
- Make sure the iSCSI tools are installed on all nodes:
> sudo apt install open-iscsi multipath-tools -y
- Check that the iSCSI initiators have unique IDs on all nodes:
> cat /etc/iscsi/initiatorname.iscsi
- If any nodes' initiator IDs are identical (a symptom of cloning VMs from VMware templates), they must be changed. Run the following to back up the old file and generate a new initiator ID:
> cp /etc/iscsi/initiatorname.iscsi /root/initiatorname.iscsi.backup
> echo "InitiatorName=`/sbin/iscsi-iname`" > /etc/iscsi/initiatorname.iscsi
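If you have more than a couple of nodes, checking uniqueness by eye gets tedious. Here's a minimal sketch of an automated duplicate check; it assumes passwordless SSH to each node, and the node names and the `find_dupes` helper are my own placeholders, not anything from the driver:

```shell
#!/bin/sh
# find_dupes reads initiatorname.iscsi contents on stdin and prints any
# InitiatorName line that appears more than once.
find_dupes() {
    grep '^InitiatorName=' | sort | uniq -d
}

# Hypothetical node names -- replace with your own inventory.
for n in node1 node2 node3; do
    ssh "$n" cat /etc/iscsi/initiatorname.iscsi
done | find_dupes
# Any output here means two nodes share an initiator ID and need regenerating.
```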
- All nodes must discover the EMC iSCSI targets. I have two targets on my one array; if you have multiple arrays with multiple targets, you need to do this for every target. The examples below use my two targets.
- If you have CHAP enabled, you need to configure the login settings. Add the following (note: "username"/"password" are the initiator's outbound CHAP credentials; the "username_in"/"password_in" variants are only needed for mutual CHAP, where the initiator also authenticates the target):
> vi /etc/iscsi/iscsid.conf
node.session.auth.authmethod = CHAP
node.session.auth.username = user
node.session.auth.password = password
- Discover targets:
> iscsiadm -m discovery -t st -p 10.172.192.11
> iscsiadm -m discovery -t st -p 10.172.192.12
- Log in to the targets:
> iscsiadm --mode node --portal 10.172.192.11:3260 --targetname iqn.1992-04.your.emc:iqn.number123456789.b0 --login
> iscsiadm --mode node --portal 10.172.192.12:3260 --targetname iqn.1992-04.your.emc:iqn.number123456789.a0 --login
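With multiple arrays, the discovery and login commands multiply quickly. Here's a minimal sketch that generates the commands for every portal so you can review them before piping to `sh`. The `emit_iscsi_cmds` helper is my own, the portal IPs are the two from my setup, and I'm assuming the stock open-iscsi iscsiadm, where `-m node -L all` logs in to every discovered node record:

```shell
#!/bin/sh
# emit_iscsi_cmds prints one sendtargets-discovery command per portal,
# followed by a single command that logs in to everything discovered.
emit_iscsi_cmds() {
    for portal in "$@"; do
        # -t st = sendtargets discovery against the portal
        printf 'iscsiadm -m discovery -t st -p %s\n' "$portal"
    done
    # -L all logs in to all discovered node records in one go
    printf 'iscsiadm -m node -L all\n'
}

emit_iscsi_cmds 10.172.192.11 10.172.192.12
```

Review the printed commands, then re-run piped into `sudo sh` to execute them.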
- Restart the service:
> systemctl restart open-iscsi
- Configure the initiators to log in automatically after reboots (do this for every target you will use; there should be a separate folder per target):
> vi /etc/iscsi/nodes/<target iqn>/<target ip>/default
- Change "node.startup=manual" to "node.startup=automatic"
- Confirm the nodes are prepared (run the following as root):
- Reboot each node first (this verifies that the initiators log in properly after a reboot)
- Make sure the user account you are using can read the initiator file:
> cat /etc/iscsi/initiatorname.iscsi
- Check that open-iscsi is running (you may see one error; that is OK as long as the service as a whole is running):
> systemctl status open-iscsi
- Check that iscsid is running (pgrep must show PIDs):
> pgrep iscsid
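To avoid running the checks above by hand on every node, here's a minimal sketch of a pre-flight script. The `check` helper is my own; the checks themselves just wrap the same commands the steps above use (run as root):

```shell
#!/bin/sh
# check runs a command silently and prints OK/FAIL with a description.
check() {
    desc=$1; shift
    if "$@" >/dev/null 2>&1; then
        echo "OK $desc"
    else
        echo "FAIL $desc"
    fi
}

check "initiator name present" grep -q '^InitiatorName=' /etc/iscsi/initiatorname.iscsi
check "iscsid has PIDs"        pgrep iscsid
check "open-iscsi active"      systemctl is-active --quiet open-iscsi
```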
Once this is all complete, you should be good on the nodes. There is no need to create LUNs or block devices, or to mount anything on the nodes; the charts/pods do all of that for you.
- Another problem I ran into was this line from the docs: "Collect information from the Unity Systems like Unique ArrayId, IP address, username and password."
- I did not know what an "ArrayId" was. It turns out this value can be found in the Unity web interface: go to System > System View. It's the "AMP***********" name of the system. I always thought this was a serial number, but it's actually the array ID.
- Another area of confusion for me was the deployment of the "snapshot" pods. I don't have snapshots set up on our EMC, so at first I wondered why the driver requires this. Don't worry about the name; I believe these pods are there in case the feature is ever used. Deploy them as the guide says; I don't believe snapshots will be created unless your template asks for them.
- Re: the values.yaml file: there is a lot in it, and it was fairly intimidating when I first saw it. @ankur.patel has a really good video that goes over what you actually need to change in this file, which you can find here. I would recommend copying the file found in helm/csi-unity/values.yaml and changing only the following:
- Uncomment the "storageArrayList" section. Configure "name" (your ArrayID), "isDefaultArray", and "storagePool".
- Comment out the storageClassProtocols you don't use. We don't use FC, so I commented that out.
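A minimal sketch of what those two edits look like in the file. The key names are taken from the bullets above; the values are placeholders, and the exact layout may differ by driver version, so verify against your own copy of helm/csi-unity/values.yaml:

```yaml
# Uncommented storageArrayList section -- placeholders only, verify against
# your driver version's helm/csi-unity/values.yaml.
storageArrayList:
  - name: "<your ArrayID from System > System View>"
    isDefaultArray: true
    storagePool: "<your pool name>"
```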
- Before running the install, run the dell-csi-helm-installer/verify.sh script by itself. The installer does this by default, but it's a good test.
verify.sh (this section is a message to the devs)
- This script seems to REQUIRE a user with actual root privileges, not just sudo. Our org does not usually set a root password on our Ubuntu hosts, for security reasons. I had to edit the "verify-csi-unity.sh" script, line 49, to "sudo cat /etc/iscsi/initiatorname.iscsi", and then adjust the sudoers file on each host. Is there a way you could adjust this script to perform its actions via sudo?
- Why does the "--namespace" parameter exist? Your documentation specifically requires the namespace "unity", and your scripts grep for the secret names "unity" and "unity-certs-0". The verify will fail every time a different namespace is used. This parameter causes confusion; I would recommend removing it and hard-coding the namespace to "unity".
- Trick: I found it annoying to have to type the same password over and over for each node when verifying them. This does not scale well; what if I had 100 nodes? The following tip is not secure, but as long as you remember to delete the file after running the script it's fine. Do the following so you don't have to retype the password:
- Create a file in the same folder as verify.sh called "protocheckfile" and put your password in it (yes, in plain text).
- Run this command in the same directory as verify.sh:
> sed -i '43,46 s/^/#/' verify-csi-unity.sh;sed -i '61 s/^/#/' verify-csi-unity.sh
- Now the verify script will not prompt you for a password. When done, DELETE the "protocheckfile" file.
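A safer alternative to the plaintext-password file, assuming the verify script reaches the nodes over standard SSH (which I believe is why it prompts per node) and honors key authentication: push an SSH key to each node once. The `ensure_key` helper and the node names below are my own placeholders:

```shell
#!/bin/sh
# ensure_key generates an ed25519 key at the given path only if none exists,
# so re-running the script never clobbers an existing key.
ensure_key() {
    [ -f "$1" ] || ssh-keygen -t ed25519 -N '' -f "$1"
}

mkdir -p "$HOME/.ssh"
ensure_key "$HOME/.ssh/id_ed25519"

# Hypothetical node names -- replace with your inventory. ssh-copy-id asks for
# each node's password one last time; after that, key auth takes over.
for n in node1 node2 node3; do
    ssh-copy-id "$n"
done
```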
Keep an eye on your Unity web interface. The unity pods do a lot of cool stuff: they automatically create the initiators on your EMC and configure the hosts. However, if you uninstall or start over, they don't remove what was created. So make sure to clean up the hosts/initiators in the EMC if you need to start over for whatever reason.
Last recommendation for the devs: please open up the Issues section of your GitHub. I can understand that you want people to come here to share problems, but I think having the Issues section open will definitely help build a community more easily than using this portal. Also, depending on your CI/CD, it might make it easier to build branches against problems.
Hope this helps others!