
August 7th, 2012 08:00

NFS Mount on ESX

I have created Celerra NFS exports with all the required parameters (rw/access/root hosts) on a file system in a VDM, and also on server_2.

I gave the VMkernel IP, as shown in the Primus article.

I am able to mount the NFS share from the ESX host via VirtualCenter and also via the CLI.

But when I try to write a file, it fails with the error "Call "FileManager.MakeDirectory" for object "FileManager" on vCenter Server".

I cannot write any file.

Thanks in advance.

96 Posts

August 8th, 2012 03:00

ls -l output

[nasadmin@EMCNAS]$ ls -l

drwxr-xr-x 6 root root 1024 Aug  7 16:20 fs1

674 Posts

August 8th, 2012 06:00

OK.

With permissions

drwxr-xr-x 6 root root 1024 Aug  7 16:20 fs1

no user other than root will ever be able to write into fs1.

If root is unable to write into fs1, then the server_export is missing a "root=IP" option for the IP you are coming from, so root is being mapped to nobody.

root will only be able to write if it is not mapped to nobody, so you need this root= option.

This is standard NFS behaviour.

Please post your server_export command for fs1.
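For reference, an export that grants root access might look roughly like this; the Data Mover name, path, and IP below are placeholders, not values taken from this thread:

```shell
# Hypothetical sketch of a Celerra export granting root access to an ESX
# VMkernel IP (10.0.0.50 is a placeholder; substitute your own address).
# Without the root= option, writes from root on the client are squashed
# to nobody by standard NFS behaviour.
server_export server_2 -Protocol nfs \
  -option rw=10.0.0.50,root=10.0.0.50,access=10.0.0.50 /fs1
```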

Thanks

96 Posts

August 8th, 2012 06:00

As I stated above in my notes, I can mount as root, but I cannot write as that same root user.

**I tested on the Control Station too!

**chmod 777 fs1 works, but I do not want to do that; it could cause issues later if permissions are applied on subdirectories. I want to test without using chown/chmod.

**Without the IP restriction, and by giving the anon=0 parameter, it works well for all users including root.

My objectives:

-root should be able to write, not just mount

-IPs should be given rw, access, and root entries in the NFS export, as a best practice

-no anon=0 parameter

96 Posts

August 8th, 2012 13:00

Thanks for all the replies. I just added the VM console IPs to the rw list of the NFS exports, without deleting the VMkernel IPs (our Primus article says to use VMkernel IPs). It worked!
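To illustrate the fix: the export now lists both sets of addresses. The IPs below are placeholders (10.0.0.50 standing in for the VMkernel IP, 10.0.0.60 for the service console IP), not the actual values from this environment:

```shell
# Hypothetical sketch of the working export: both the VMkernel IP and the
# service console IP appear in the rw/root/access lists. Celerra separates
# multiple hosts in an export option with a colon.
server_export server_2 -Protocol nfs \
  -option rw=10.0.0.50:10.0.0.60,root=10.0.0.50:10.0.0.60,access=10.0.0.50:10.0.0.60 /fs1
```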

Thanks!


20.4K Posts

August 8th, 2012 14:00

What were you using before?

August 11th, 2012 21:00

For next time, and also for those who might stumble on this post, I would highly recommend installing the EMC VSI (Virtual Storage Integrator) for vSphere plug-in, specifically the Unified Storage Management feature.

This allows for the following (after discovering the array and configuring DHSM), all within the vSphere Client:

1) Creates and mounts (on the Celerra/VNX Data Mover) an NFS filesystem per best practices, with the important performance consideration being "Direct Writes (uncached)":

- uxfs,perm,rw,noscan,uncached

- can also consider: noprefetch
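The Data Mover mount with those options can be sketched as follows; the Data Mover name, filesystem name, and mount point are placeholders:

```shell
# Hedged sketch of mounting a filesystem on the Data Mover with the options
# listed above, including the "Direct Writes" (uncached) setting.
server_mount server_2 -option uxfs,perm,rw,noscan,uncached fs1 /fs1
```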

2) Creates exports using VMkernel ports, then mounts on the ESX(i) server(s)

- properly assigns root= and access= on the Data Mover export

- {however, see comments below about separate subnets}

3) (checked by default) Also updates advanced software settings on the ESX/ESXi servers to further improve performance and availability

- NFS.*

- Net.*
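For a sense of what those NFS.* and Net.* settings look like when set by hand on classic ESX, here is a hedged sketch; the specific values below are illustrative assumptions, not the plug-in's actual choices, so check the EMC/VMware guidance for your release:

```shell
# Illustrative examples of NFS.* / Net.* advanced settings on ESX
# (values are placeholders, not recommendations)
esxcfg-advcfg -s 64 /NFS/MaxVolumes        # allow more NFS datastores
esxcfg-advcfg -s 32 /Net/TcpipHeapSize     # TCP/IP heap allocated at boot
esxcfg-advcfg -s 128 /Net/TcpipHeapMax     # maximum TCP/IP heap size
esxcfg-advcfg -s 12 /NFS/HeartbeatFrequency
```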

Finally, a quick comment: the fact that vmk0 (which is itself a VMkernel port, and of course also eligible for NFS and iSCSI connectivity) was even able to talk to the interfaces on the Data Movers is against best practices. Your NFS subnets should be separate from your management LAN; the discussion above suggests that in your environment either:

1) vmk0 (part of the default "Management Network" created along with the initial vSwitch0) is on the same subnet as the Data Mover interfaces

2) or, the subnet you assigned to the Data Mover interfaces is routable (and not a segregated subnet)

- Unless you absolutely require these to be routed, they should be isolated subnets
