I'm new to EMC and, as a result, a bit lost. I'm primarily responsible for vSphere, but we recently inherited some VNX arrays that we need to integrate with our existing vSphere 5.5 environment, so I'm trying to find out how to do that. I read through some documents hoping to find clear steps: Using EMC VNX with vSphere and the EMC Storage Integration with VMware vSphere Best Practices guide. However, these documents seemed very basic and only gave an introduction to the various technologies offered.

Our storage team has set up the EMC array, but they are only trained on the array, not vSphere; I'm trained on vSphere, but not EMC. We have been using various HP EVA models up until now and have had little trouble with them. Following the basic steps we would use for the EVAs to set up the VNX array gets us a working environment, but issues have arisen in the little testing we've run so far: while Storage vMotioning a large VM, the host would become disconnected; when adding a disk to a VM, the task would time out and disconnect with an error. Things that just shouldn't happen.

EMC support looked at the logs, said they looked good, and quickly closed the ticket, leaving us to rely on VMware support. I've had a ticket open with VMware for going on a month now with little to no progress. EMC support actually suggested disabling primitive hardware accelerated mode, which I took to mean turning off VAAI. That doesn't make sense to me at all, but he swore that his performance team recommended it and that it was even recommended by VMware. I asked to see the KB article, and he said he would send it.
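Before anyone disables "primitive hardware accelerated mode," it may be worth confirming what VAAI state the hosts are actually in. A minimal sketch of the checks on an ESXi 5.5 host (run from the ESXi shell or over SSH; the advanced-setting names are the standard VAAI toggles, and no device IDs are assumed):

```shell
# Check the global VAAI (hardware acceleration) advanced settings.
# An "Int Value" of 1 means the primitive is enabled.
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking

# Show, per device, which VAAI primitives the array actually negotiated
# (ATS, Clone, Zero, Delete) - useful evidence for the support ticket.
esxcli storage core device vaai status get
```

If support later insists on disabling a primitive, the same advanced settings can be set back to 0 and re-enabled without a reboot, so the change is at least reversible.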
Everything is running the latest software/firmware as far as I'm aware; hosts are 5.5.0 build 1892794, and the array is 05.32.000.5.215 OE. VMW_SATP_ALUA_CX is set as round robin on the hosts.
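Since the claiming rule and path policy matter here, the SATP/PSP configuration can be verified per host. A sketch, assuming the ESXi shell is available (the `satp set` line is only relevant if your vendor's best practice calls for Round Robin as the default for newly claimed VNX devices):

```shell
# List the installed SATPs and the default PSP each one uses
esxcli storage nmp satp list

# Show which SATP and PSP each storage device was actually claimed with;
# VNX ALUA devices should show VMW_SATP_ALUA_CX
esxcli storage nmp device list

# Optionally make Round Robin the default PSP for devices claimed by
# VMW_SATP_ALUA_CX (affects newly claimed devices, not existing ones)
esxcli storage nmp satp set --satp VMW_SATP_ALUA_CX --default-psp VMW_PSP_RR
```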
I'm basically looking for some direction as to what should be done and how. From what I've read I believe I should (not that I need to, but should) install the Virtual Storage Integrator for web client version 6.2. In reading about that I saw mention of needing the Solutions Integration Service vApp. In reading about that I saw mention of the Solutions Enabler vApp. It's a veritable Russian nesting doll of solutions to the novice! Would anybody be able to help straighten this out for me so that I can get our vSphere environment where it needs to be to properly talk to the EMC arrays the way they were designed to?
We would need a bit more information on your environment to assist you here.
Because you mentioned using NMP RR I assume you're using FC connectivity.
For example, if you're using older Brocade SAN switches in 8 Gbit mode, you could run into an issue with portcfgfillword being improperly set.
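For reference, the fill-word check is done on the Brocade switch itself, not on ESXi. A sketch of the relevant Fabric OS commands (port 0 is a placeholder; mode 3 is the setting commonly recommended for 8 Gbit HBAs, but verify against your switch vendor's documentation before changing anything):

```shell
# Brocade FOS CLI - inspect the current fill word for a given port
portcfgshow 0

# Set fill word mode 3 (ARB/ARB then IDLE/ARB) on port 0
portcfgfillword 0 3

# Watch the error counters; er_bad_os climbing rapidly on a port is a
# classic symptom of a wrong fill word setting
porterrshow
```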
In BladeCenter environments, you also have to double-check that the chassis components are running firmware that works properly with vSphere 5.5.
You should also ask your server vendor whether they have configuration best practices for 4/8 Gbit FC environments.
If you're using HP servers, you'll find much of this information here.
Do you see any SCSI/FC-related error messages in the vmkernel/vmkwarning logs of the ESXi server hosting the VM where you want to add another VMDK?
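A quick way to pull those messages out on the host, assuming shell access (the grep patterns are just a starting point; `H:0x`/`D:0x` match the host/device status codes ESXi logs with SCSI errors):

```shell
# Recent SCSI/FC-related entries in the main kernel log on ESXi 5.5
grep -iE 'scsi|fc|naa\.|H:0x|D:0x' /var/log/vmkernel.log | tail -50

# Warnings collected separately by the vmkwarning log
grep -i 'warning' /var/log/vmkwarning.log | tail -50
```

Entries timestamped around a failed Storage vMotion or add-disk task are the ones worth attaching to the support ticket.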
When the host can no longer be managed via vCenter, can you still manage it by connecting directly to the ESXi host with the VMware Infrastructure Client?
If your ESXi servers have internal disks or other external storage not located on the VNX, do the problems disappear when you run your tests against those storage devices?
As I said, without further information we won't be able to assist you.
Thank you for your response. To be honest, the majority of your questions are over my head since I don't really deal with the storage fabric, and as I mentioned, I'm new to EMC in general.
We are using FC connections for the storage. The majority of the hosts are using Emulex LPe12000 8 Gbit HBAs, but I'm not aware of which switches they connect to outside of the enclosure. They are all HP BL685c G7 blades, though.
We are not yet up-to-date with the September 2014 recipe guide, but working towards it.
To answer your last question about whether the problems go away when using other storage: yes; with the HP EVA storage we have never had any disconnects or errors while Storage vMotioning.
A bit of an update: we installed the Solutions Integration Service vApp and connected it to the vCenter server. We also installed the Virtual Storage Integrator for web client, version 6.2. We can now deploy datastores from the web client, which creates the LUNs on the array. We seem to be good there; it's working as it should. The disconnect issues are not actively being investigated because now the VASA implementation is a problem.

We have two arrays, one unified and one block. We had no problem adding the unified storage provider to vSphere: storage capabilities appeared, we created storage profiles using the capabilities of the storage in use, and the matching datastores appeared when searching for matches. The block array, however, proved to be more difficult to integrate. After a lot of back and forth with EMC support we were no closer to getting it integrated; it would continuously fail, complaining of authentication issues. Finally we found a blog post covering the steps to convert the vSphere certificate into PEM format and manually add it to ECOM on the Solutions Enabler product. This allowed us to add the block array storage provider!

We thought we finally had everything in order, but now no EMC storage matches any storage capabilities. The previously compliant unified datastores no longer appear when searching for matching disks in the previously created storage profiles, and newly created profiles yield no results. The original storage profile even shows the selected capability grayed out with "Currently not existing" next to it. It's almost like the newly added block array capabilities are cancelling out the unified ones. I'm not sure. I thought we had it figured out, and now I feel we're back to square one.
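For anyone else hitting the same authentication failure, the certificate conversion step can be sketched with openssl. The hostname and file names below are placeholders, and the ECOM import itself still follows whatever procedure the blog post described for your Solutions Enabler version:

```shell
# Grab the certificate the vCenter server presents on port 443 and save it
# in PEM format (vcenter.example.com is a placeholder hostname)
openssl s_client -connect vcenter.example.com:443 </dev/null 2>/dev/null \
  | openssl x509 -outform PEM > vcenter.pem

# If you instead exported a DER-encoded .cer file from vCenter,
# convert that to PEM
openssl x509 -inform DER -in vcenter.cer -out vcenter.pem
```

The resulting vcenter.pem is what gets manually added to the ECOM trust store on the Solutions Enabler host.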