
January 4th, 2018 15:00

EqualLogic MEM v1.5.0 on ESXi 6.5 causes Error 99 on Scan for Updates

Hi, after updating the EqualLogic SANs to firmware 9.1.4 I've had issues installing MEM 1.5.0 on some HP DL360 host servers (sorry!).  Our existing R720s appear to be fine and 1.5.0 installed okay via VMware Update Manager.

The initial installation appears to go okay, but a subsequent scan throws an Error 99 associated with the VIBs just installed.  The error is shown below.

esxupdate: ERROR: ValueError: Cannot merge VIBs Dell_bootbank_dell-eql-routed-psp_1.5.0-437336, Dell_bootbank_dell-eql-routed-psp_1.5.0-437336 with unequal payloads attributes: ([dell-eql-routed: 48.595 KB], [dell-eql-routed: 42.558 KB])

I am able to uninstall the VIBs and reboot the host; scans then proceed okay without the Error 99 and correctly identify the missing MEM 1.5.0.
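(For anyone else cleaning up the same way: this is roughly how the uninstall can be done from the host shell. The only VIB name shown is the one from the error above; list first to catch any other MEM VIBs. A sketch, not Dell's official procedure.)

```shell
# Find the EqualLogic MEM VIBs currently installed on the host
esxcli software vib list | grep -i eql

# Remove the routed-PSP VIB named in the error, then reboot the host
# (repeat the remove for any other MEM VIBs the list shows)
esxcli software vib remove -n dell-eql-routed-psp
reboot
```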

Suspecting the VCSA (6.5) Update Manager, I may just dump the repo and database and start again, but I thought I'd ask if anyone else was having issues with MEM 1.5.0 before doing so.

Cheers...Andy


January 4th, 2018 15:00

Hello Andy, 

Are the HPs running an HP OEM version of ESXi?

If the same VUM repo installs fine onto a Dell R720, it should also work on an HP server with the same OS.

This is not an error I'm familiar with.  How many arrays do you have in your group?  If more than one, how many are in a pool?  If you have a single-member group or pool, then MEM doesn't have much benefit over setting the MPIO policy to Round Robin with IOs per path set to 3 (default is 1000).  So if the issue is with HP ESXi, this would be a workaround.
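That workaround looks roughly like this from the host shell (the naa.* device ID below is a placeholder; list your own EqualLogic devices first):

```shell
# Find EqualLogic devices and their current path selection policy
esxcli storage nmp device list | grep -A2 -i EQLOGIC

# Switch one device to Round Robin, then drop IOs per path from 1000 to 3
esxcli storage nmp device set -d naa.6090a0XXXXXXXXXX --psp VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set -d naa.6090a0XXXXXXXXXX --type=iops --iops=3
```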

Regards,

Don


January 4th, 2018 16:00

Don,

Thanks...I'll open a support case next week as I'm away from the office Friday.

Once again many thanks for your help...Andy


January 4th, 2018 16:00

Don,

Sorry, I should have added more meat to this query.  The R720s are ESXi 5.5 and the HP DL360s are ESXi 6.5, so two different versions of the MEM VIBs are installed.

The R720s are fine with the 5.* VIBs, and I have no issues scanning for updates after the installation.  The MEM 1.5.0 6.* package is the one causing the Error 99.

Yes we are using the HPE OEM version of ESXi.

Cheers...Andy

PS. I should also add that we have two arrays in one Group of 125TB.


January 4th, 2018 16:00

Hello, 

They both can run MEM v1.5.0.  There are some packaging differences, but the code is basically the same for both.

I've seen issues with some OEM versions caused by drivers; e.g. the Emulex driver was the cause a while ago.

 You might need to open a support case so they can dig into the logs for MEM and ESXi. 

 Don 


January 4th, 2018 17:00

Hi Andy, 

You are welcome.  If you could, run a vm-support from those HP servers ASAP; the logs roll over quickly.  Grab array diags as well, so those logs will line up better.  Then when you open the case you can send the logs right away, which saves you from having to re-run the tests after you open the call.
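From the host shell, that is roughly (the datastore path is an example; point it anywhere with enough free space):

```shell
# Generate a vm-support log bundle and write it to a datastore
vm-support -w /vmfs/volumes/datastore1
```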

Have a great weekend. 

 Regards, 

Don 


September 19th, 2018 10:00

Hello,

 Have you opened a support case with Dell?

 Regards,

Don

 

September 19th, 2018 10:00

Any fix for this? I have the same issue, on a default VMware build:

VCSA 6.5u2 build 9451637

Cluster of three Dell R805s on 5.5 build 9919047 has the issue stated above.

Cluster of three Dell R430s on 6.5 build 9298722 has no issue.

As stated above, I uploaded both VIBs to VUM, and as soon as I scan I get errors. Prior to scanning, VUM was used to patch the hosts. Then I uploaded the VIBs, created a new extension baseline, and attached it to the correct cluster. When I scan I get the error on the 5.5 cluster, but the 6.5 cluster shows fine.

The only fix was solution #1 from https://blog.ganser.com/after-upgrade-to-vcsa-6-5-scan-for-updates-fails-with-unknown-error/, which is basically to delete the VUM DB.

Once it was deleted I scanned everything again and VUM showed good.

I re-uploaded the VIBs and the issue returned.
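For anyone else, the reset follows the shape of solution #1 on the VCSA 6.5 appliance shell. This wipes the VUM database and baselines, so snapshot or back up the appliance first; the commands below are my sketch of the procedure, verify against VMware's documentation before running:

```shell
# Stop Update Manager, reset its database, clear downloaded patch data, restart
service-control --stop vmware-updatemgr
/usr/lib/vmware-updatemgr/bin/updatemgr-util reset-db
rm -rf /storage/updatemgr/patch-store/*
service-control --start vmware-updatemgr
```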

 

 

 

September 19th, 2018 10:00

Hello Don,

 

Thanks for the fast response!

Unfortunately we do not have support on the EqualLogic; it is in a test lab, so no ticket has been opened with Dell.

As for the members, it looks like we have two members and they are in the same pool. I am not the admin for this, but I can look around.


September 19th, 2018 10:00

Hello, 

Also, how many members are in your EQL group?  If more than one, are they in the same pool?

 

Don

 


September 19th, 2018 11:00

Hello,

Re: benefit.  If you were to run benchmarks and push the members to their max, then MEM would show an increase; the reality of daily normal IO operations would likely not show any difference.

When you are not using MEM or HIT software and you have multiple members in the same pool, a connection is made from one member to every other member holding data for that volume.  So if you are connected to Member A and the requested data is on Member B, that inter-member link is used to get the data to Member A and then back to the server.

When you use MEM or HIT, each port on the server makes its own connection to each member holding data for the volume.  So it is more efficient, and it becomes more so as you add members to a pool.  The arrays are extremely efficient at moving data between themselves, even without MEM or HIT.

 Regards,

Don

 

 

September 19th, 2018 11:00

Don,

 

Thanks for explaining it. This is just a test setup to mimic our 100+ production clusters, which use similar hardware. So it sounds like not having it installed will not be an issue, and our tests shouldn't show a difference whether it is on or off.

 

Thanks again!  

September 19th, 2018 11:00

Don,

 

Do you have a link to the guide you are talking about in your previous post?

So, to be clear, we will not see any benefit from this MEM VIB because we have a two-member setup, but once we get to three or more members it would help? I was just told we should use this if we were going to use the EQL storage for our testing.


September 19th, 2018 11:00

Hello,

You are very welcome.  Just to be clear, the license to use the EQL bundled software comes with the support contract, not the hardware, so you should not be using MEM.  You also don't have to use MEM in order to work with EQL storage.  As you scale to three members and beyond, the performance benefit increases.

This guide will show you how to set best practices for both ESXi v5.5 and 6.x.  Both can use the VMware Round Robin PSP, and setting the IOs per path to 3 from the default of 1000 will improve MPIO performance.
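To check what a device is currently set to (the device ID is a placeholder):

```shell
# Show the PSP and Round Robin IO operation limit for one EQL device
esxcli storage nmp device list -d naa.6090a0XXXXXXXXXX
esxcli storage nmp psp roundrobin deviceconfig get -d naa.6090a0XXXXXXXXXX
```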

 Regards,

Don


September 19th, 2018 11:00

Hello,

 You are very welcome.

  Regards,

Don
