
June 18th, 2015 11:00

Cannot mount NFS export on the ESX

Hi All,

I have an issue mounting an NFS datastore on ESXi; I get the following error:

Call "HostDatastoreSystem.CreateNasDatastore" for object "ha-datastoresystem" on ESXi "172.21.11.126" failed.

NFS mount 172.21.13.200:/NFS_1 failed: The mount request was denied by the NFS server. Check that the export exists and that the client is permitted to mount it.

An unknown error has occurred.

The export exists, and it is completely open for all clients to R/W and access it.

Appreciate your input.
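For reference, the same mount can be attempted from the ESXi shell with esxcli, which sometimes gives a cleaner error than the vSphere Client. This is just a sketch using the addresses from the post above; the datastore name NFS_1 is an illustrative label:

```shell
# Attempt the NFS mount directly on the ESXi host (names are illustrative).
esxcli storage nfs add -H 172.21.13.200 -s /NFS_1 -v NFS_1

# List current NFS mounts to confirm whether it attached.
esxcli storage nfs list
```

If the server is denying the mount, this command fails with a similar "denied by the NFS server" style message, which at least confirms the problem is on the export side rather than in the client UI.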

2 Intern • 20.4K Posts

June 18th, 2015 11:00

I know you are saying it is open for "all" clients, but why? Configure the export and specify the IP of the ESXi interface that will be making the connection. If you have multiple interfaces, you may have to set static routes on ESXi to force it to use the correct route. Also try adding the IP address of the VMkernel port to the export.

June 18th, 2015 17:00

What I want is to not restrict the NFS export to any client IP (this is actually the customer requirement), so when creating an export I do not specify any value for the 'root' and 'access' options. Per EMC support, that is supposed to leave it open to connections from any host that can reach the VDM interface on which the export resides.

So when I do that, I can mount the export on the ESX, but I cannot write to it: cannot create a new folder or a VM.

This is the command I use to configure the export:

'server_export VDM_NFS_01 -P nfs /NFS_3'

Then it mounts on the server with no issue, but no write access.

Is there a way to use a wildcard, or a specific command or switch option, to make it read/write for all hosts?

2 Intern • 20.4K Posts

June 18th, 2015 17:00

Did you try adding * to the read/write area of the export?

June 18th, 2015 18:00

It does not let me add * in the GUI or the CLI.

In fact, you cannot even manage them from the GUI, because there is a bug: after some time the exports disappear from Unisphere. This freaked me out a bit when I first saw it today. Here is an EMC KB for that:

https://emc--c.na5.visual.force.com/apex/KB_BreakFix_1?id=kA17000000019FV

2 Intern • 20.4K Posts

June 18th, 2015 20:00

Works for me:

[nasadmin@emcn1vsacs1 ~]$ server_export server_2 -P nfs -list

server_2 :

export "/fs02" rw=*

export "/" anon=0 access=128.221.252.100:128.221.253.100:128.221.252.101:128.221.253.101

8.6K Posts

June 19th, 2015 01:00

You are using a VDM NFS export – they aren't supported in the GUI, only in the CLI.

8.6K Posts

June 19th, 2015 03:00

Yes – if you don't specify any other export clause, then it is read/write for all hosts.

BUT if you don't specify a value for root= or use anon=0, then root users get mapped to anon.

Unless you use a umask that allows write for others, that usually doesn't work well with clients where users/apps run as root.

For details on how each NFS export clause works and how they interact, see the VNX NFS PDF manual – especially Appendix A.
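The interaction described above can be sketched concretely. This is an illustrative example only (the file system path /fs02 is taken from the listing posted earlier in the thread, the clause combination is an assumption):

```shell
# Export read/write to all clients AND keep root as root on any client
# (root=* prevents root from being squashed to the anon user).
server_export server_2 -P nfs -option rw=*,root=* /fs02

# Alternative: let root be squashed, but map the anonymous user to uid 0,
# which also gives root-owned processes effective write access.
server_export server_2 -P nfs -option anon=0 /fs02
```

Either approach avoids the situation where the mount succeeds but writes from root (which is what ESXi uses) are denied.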

8.6K Posts

June 19th, 2015 03:00

Anyway – do a "showmount -e" from the client that wants to mount, to see if the export shows up there.

June 20th, 2015 07:00

I am trying to figure out what I am missing in this command syntax:

[root@PRODVNX5400-CS0 nasadmin]# server_export VDM_NFS_01 -P nfs -o rw=* /NFS_3

VDM_NFS_01 :

Error 22: VDM_NFS_01 : Invalid argument

Export Error: syntax error in rw list.

What is it that it does not like in the 'rw' list?

The only way that I can make this NFS export open for all clients is this:

[root@PRODVNX5400-CS0 nasadmin]# server_export VDM_NFS_01 -P nfs -o anon=0 /NFS_3


I tried root=* and access=*, but then I cannot mount it.


What other options do I have?
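One thing worth ruling out here (an assumption on my part, not confirmed in the thread): when the command is typed interactively on the Control Station, an unquoted * can be expanded by the bash shell into file names before server_export ever sees it, which would produce exactly this kind of "syntax error in rw list". Quoting the option string passes the wildcard through literally:

```shell
# Quote the option string so the shell does not glob-expand the *.
server_export VDM_NFS_01 -P nfs -option 'rw=*,root=*' /NFS_3

# Verify what the export line actually looks like afterwards.
server_export VDM_NFS_01 -P nfs -list
```

If the unquoted form happened to run in an empty directory it would work, which would explain why it "works for me" on one system and errors on another.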

2 Intern • 20.4K Posts

June 21st, 2015 16:00

server_export server_2 -option root=* /root_vdm_1/fs01

2 Intern • 301 Posts

June 21st, 2015 23:00

Verify from the client you propose to mount from via:

showmount -e <IP or name of the NFS server being accessed>

to see what exports the client believes exist.

2 Intern • 178 Posts

June 29th, 2015 13:00

Can you check whether the NFS adapter address on the ESXi host responds to ping requests from the Data Mover?

Since the ESXi host uses the gateway of the MGMT address, during write operations the ESXi host may use the MGMT interface instead of the NFS adapter for layer-3 connections.

We had a similar issue while mounting an Isilon share on ESXi hosts: the mount operations worked, but write I/Os were denied.

The issue was resolved after using layer-2 NFS adapter addresses across all ESXi hosts with the NAS server.
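To test that path in both directions, a quick sketch (the vmk interface name and IPs below are illustrative, taken from the addresses earlier in the thread):

```shell
# From the ESXi shell: ping the Data Mover, sourcing the ping from the
# specific VMkernel port used for NFS traffic (not the management vmk).
vmkping -I vmk1 172.21.13.200

# From the VNX Control Station: ping the ESXi NFS VMkernel address
# via the Data Mover itself (server_ping runs on the Data Mover).
server_ping server_2 172.21.11.126
```

If only the management-network ping works, that points to the asymmetric-routing problem described above.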
