RED016 Mount of remote share failed

February 8th, 2018 21:00

When attempting to use Dell OME to update firmware on an iDRAC, I am receiving errors such as the following:

Message: Mount of remote share failed.
Message ID: RED016

 

The following is allowed through the firewall between the Dell OME appliance and the iDRAC (bidirectional); a quick reachability check for the TCP ports is sketched after the list:

TCP/22
TCP/80
TCP/111
TCP and UDP/137-138
TCP/139
UDP/161-162
TCP/443
TCP/445
UDP/623
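
For reference, a minimal sketch of the kind of TCP connect test that can confirm these rules from the OME side (the iDRAC hostname is a placeholder, and the UDP ports 161-162 and 623 would need a different probe, e.g. an SNMP or IPMI query):

```python
#!/usr/bin/env python3
"""Quick TCP reachability probe for the firewall rules listed above.

A minimal sketch: IDRAC_HOST is a placeholder, and only the TCP ports
from the list are checked (the UDP ports need a different probe).
"""
import socket

IDRAC_HOST = "idrac.example.com"  # placeholder; substitute your iDRAC
TCP_PORTS = [22, 80, 111, 137, 138, 139, 443, 445]

for port in TCP_PORTS:
    try:
        # create_connection() completes a full TCP handshake, so an
        # "open" result means the firewall passed the traffic through.
        with socket.create_connection((IDRAC_HOST, port), timeout=3):
            print(f"TCP {port}: open")
    except OSError as exc:
        print(f"TCP {port}: blocked or closed ({exc})")
```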

 

I had one of our network engineers run a packet capture while I attempted the firmware update, and he said the devices were trying to communicate on various ports in the 600-900 range. He temporarily allowed that range, and I was able to push a firmware update successfully, but network security policy requires him to configure the firewall with the exact port range. Can anyone advise on the specific additional ports that are required?

2 Intern • 2.8K Posts

February 9th, 2018 08:00

Thanks for the question.

I think the doc says 111 is for NFS. 2049 might be in the mix too. I'm trying to get confirmation on that, but you may want to check. The 600-900 range surprises me. If you can double-check 111 and 2049 and still confirm 600-900, I'll go back to the team for another check.
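
One thought on where the 600-900 range could come from: the NFSv3 sideband services (mountd, statd, nlockmgr) take dynamic ports from rpcbind rather than fixed ones, and those can land in the privileged range. A minimal sketch of how one might list the current assignments, assuming a Linux host with rpcinfo installed and the portmapper (port 111) reachable; the appliance hostname is a placeholder:

```python
#!/usr/bin/env python3
"""List the ports rpcbind has assigned to NFS sideband services.

A minimal sketch, assuming a Linux host with `rpcinfo` installed;
the appliance hostname below is a placeholder.
"""
import subprocess

APPLIANCE_HOST = "ome-appliance.example.com"  # placeholder

# `rpcinfo -p <host>` dumps the portmapper table: program number,
# version, protocol, port, and service name for each registration.
try:
    result = subprocess.run(
        ["rpcinfo", "-p", APPLIANCE_HOST],
        capture_output=True, text=True, check=True, timeout=10,
    )
except (subprocess.CalledProcessError, subprocess.TimeoutExpired) as exc:
    raise SystemExit(f"portmapper query failed: {exc}")

# NFS sideband services often sit on dynamic privileged ports, which
# could account for traffic in the 600-900 range.
for line in result.stdout.splitlines():
    if any(svc in line for svc in ("nfs", "mountd", "status", "nlockmgr")):
        print(line)
```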

Hope that helps,

Rob

http://topics-cdn.dell.com/pdf/dell-openmanage-enterprise-tech-release_users-guide_en-us.pdf

February 9th, 2018 14:00

OK, I'll request they allow 2049 and see what happens. Due to internal change control policy, it'll probably be a week or two before the additional port is allowed. Will report back with findings.

February 17th, 2018 12:00

After getting the firewall rule for 2049, firmware deployments still fail with the same error. I will coordinate with our network personnel to perform another packet capture to see what is happening.

2 Intern • 2.8K Posts

February 19th, 2018 06:00

Ok, thanks.  Keep me posted.

Question: Are you using just the internal Dell catalog download or a network share catalog?

Thx

Rob

February 22nd, 2018 19:00

Will do. Performed a packet capture, but did not see the 600-900 range this time. Currently waiting on our network personnel to temporarily allow all traffic between OME and the iDRACs; will then perform another packet capture.

We are using the Dell catalog.

March 24th, 2018 14:00

Turns out the failures to mount the file share were not related to network connectivity. We increased the RAM from 2GB to 4GB, and are now able to deploy firmware updates successfully and consistently.

25 Posts

June 14th, 2018 07:00

Where exactly did you increase the RAM? On the OME appliance or on the destination server?

Our appliance is already using 8GB.

2 Intern • 2.8K Posts

June 18th, 2018 08:00

I'm guessing he increased the RAM on the OMEnt appliance itself. If you are at 8GB on the appliance and still experiencing trouble, I think we might need a ticket to dig into it a bit more closely.

Thanks!

Rob

 

--

 

I'll put the phone number here, but I need to enter it in a goofy format since this forum software strips out a lot of numbers.

8-0-0-9-4-5-3-3-5-5

7 Posts

June 26th, 2018 07:00

Reboot of the appliance fixed the issue in our case.
The VM is using 8 GB of RAM.

53 Posts

June 28th, 2018 16:00

I'm also seeing this behaviour. There are no firewall rules between OMEnt and the iDRAC. The firmware catalog being used is the "Latest" one from the Dell site, i.e. no local repository is being used.

Have tried the following troubleshooting steps:

  • Restart the iDRAC
  • Restart the OMEnt appliance
  • Run a selective firmware update rather than just running them all
  • Delete the server, then rediscover and re-inventory it

I'm happy enough to raise a support call with Pro Support, but since this is a tech release, are they going to be able to assist? :)

 

2 Intern • 2.8K Posts

June 29th, 2018 06:00

Hi and thanks for the post.

Yes, it is a supported product even as a Tech Release.

Hmmm, is this setup all internal LAN, or is it WAN? The only other thing to note is that it needs port 111 for the internal NFS share (read-only) on the OMEnt appliance.
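
If you want to sanity-check that the appliance side is actually serving that share, here is a minimal sketch using showmount, assuming a Linux host with the NFS client utilities installed and that the appliance answers MOUNT protocol queries (the hostname is a placeholder):

```python
#!/usr/bin/env python3
"""Check that the OMEnt appliance is exporting its internal NFS share.

A minimal sketch, assuming a Linux host with nfs-utils installed; the
appliance hostname is a placeholder, and the exact export path served
by the appliance is not documented here.
"""
import subprocess

APPLIANCE_HOST = "ome-appliance.example.com"  # placeholder

# `showmount -e <host>` asks the MOUNT service (located via the
# portmapper on port 111) for the export list; a failure here mirrors
# what the iDRAC sees when RED016 is raised.
try:
    out = subprocess.run(
        ["showmount", "-e", APPLIANCE_HOST],
        capture_output=True, text=True, check=True, timeout=10,
    )
    print(out.stdout)
except (subprocess.CalledProcessError, subprocess.TimeoutExpired) as exc:
    print(f"Export list query failed: {exc}")
```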

Thanks,

Rob

--

I'll put the phone number here, but I need to enter it in a goofy format since this forum software strips out a lot of numbers.

8-0-0-9-4-5-3-3-5-5

53 Posts

July 1st, 2018 20:00

Thanks for the response, Rob.

I work in a rather large government agency, so there is what you would term WAN connectivity between sites, with LAN IP address space. In this situation there are gigabit fibre links between sites, with latency of < 5ms.

I'll raise a job with Pro Support and update the forum if a solution is found.

37 Posts

July 23rd, 2018 03:00

bcshort - I had the same issue, and it got worse; systems I had partially updated were no longer taking any updates. I re-deployed the appliance (thankfully, that's a quick job with no major setup) and it came back to life again. Absolutely no idea why. We had no firewalls in between either.

2 Intern • 2.8K Posts

July 23rd, 2018 06:00

Thanks for this additional detail.  Sounds like there may be something for the support team to look at so we can try to nail it down.

An alternative to a re-deploy _may_ be the "Restart Services" option from the Text User Interface (TUI, on the console window).

Thanks,

Rob

37 Posts

October 1st, 2018 08:00

Hi

So, I upgraded to Version 3.0.0 (Build 990) and am still getting lots of:

Job status for JID_384280720777 is Failed

Message: Mount of remote share failed.
Message ID: RED016
 
My previous posts here indicate there is no firewall between the devices, and yet this is happening constantly. I even went so far as to manually upgrade the iDRAC to 2.60.60 first, in the hope that something in the old version was causing the problem.
 
Will log a call with support, as this is getting a bit ridiculous. I've already wasted a full day on this in the expectation that something simple would work. Previously, I only got it to work sporadically (without changing anything on the proxy/network, etc.) by re-deploying the HV 1.0.0 appliance. I don't want to have to do that every ten minutes just because the appliance gives up and we have no way to work out what's gone wrong.