June 16th, 2016 13:00

Solaris Super Cluster server cannot mount NFS export on VNX even though...

Hello,

I created an NFS mount point on the VNX and added the hosts with read/write access and root privileges. A new Oracle Super Cluster has been installed, and I added the Super Cluster hosts to the export.

We opened network ports for NFS between the hosts and the VNX Control Station and primary Data Mover, yet they still cannot mount the slices. Other hosts (not on the Super Cluster) on a different network can mount the NFS slices.

They are also using Data Pump.

Any input you have is greatly appreciated.
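For context, the mount we are attempting on the Super Cluster nodes looks roughly like this (the Data Mover interface name and export path below are placeholders, not our real ones):

    # Hypothetical example only -- interface name and export path are placeholders.
    mkdir -p /mnt/oracle_export
    mount -F nfs -o rw,vers=3 vnx_dm2:/oracle_export /mnt/oracle_export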

Admin

1 Rookie • 20.4K Posts

June 16th, 2016 14:00

Super Cluster, sounds important.

Can they run "showmount -e datamover_interface" from one of these hosts? This command will display all NFS exports available on that Data Mover.
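The output should look something like this (the export path here is just an example, not from your VNX):

    # showmount -e server_2
    export list for server_2:
    /oracle_export    (everyone)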

614 Posts

June 16th, 2016 14:00

Thank you Dynamox! I will ask him to do that now.

614 Posts

June 16th, 2016 15:00

I got the group to give me a login and executed the command; it just hangs. The version is Solaris 11.

643 Posts

June 16th, 2016 19:00

If other hosts can mount the NFS export from the same VNX, the issue is more likely on the Solaris or network side. Here are steps for troubleshooting (a small script that strings these checks together follows after the steps):

  1. From Solaris, ping the IP of the Data Mover (the same IP you use to mount the NFS export from the VNX). If ping does not work, it is a network issue.

  2. If ping works fine, use "/usr/sbin/rpcinfo -p <data_mover>" on Solaris to check whether ports 2049/1234/111 on the Data Mover can be reached. A normal output looks like this:

   

    # /usr/sbin/rpcinfo -p server_2 |egrep "portmap|nfs|mount"

    100003 2   udp   2049  nfs

    100003 3   udp   2049  nfs

    100003 2   tcp   2049  nfs

    100003 3   tcp   2049  nfs

    100005 3   tcp   1234  mountd

    100005 2   tcp   1234  mountd

    100005 1   tcp   1234  mountd

    100005 3   udp   1234  mountd

    100005 2   udp   1234  mountd

    100005 1   udp   1234  mountd

    100000 2   udp    111  portmapper

    100000 2   tcp    111  portmapper

If there is no output or it times out, the issue is likely that a network device between Solaris and the VNX is filtering traffic on these ports.

  3. If steps 1 and 2 pass, use "/usr/sbin/showmount -e <data_mover>" on Solaris to check the list of NFS exports. If you can see the NFS export you want to mount, go ahead and mount it and see whether there is any error; in that case the issue would be related to NFS permissions.

Based on the problem description, I suspect the issue will be found in step 1 or step 2.
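Here is a rough shell sketch that strings the three checks together; the address is a placeholder you would replace with your Data Mover interface, and on Solaris 11 the rpcinfo/showmount binaries may live under /usr/bin instead of /usr/sbin:

    #!/bin/sh
    # Quick triage of the three steps above.  DM is a placeholder for the
    # Data Mover interface IP or hostname used in the mount command.
    DM=10.0.0.1

    echo "== Step 1: ping the Data Mover =="
    ping "$DM" || exit 1

    echo "== Step 2: check nfs/mountd/portmapper registrations =="
    rpcinfo -p "$DM" | egrep "portmap|rpcbind|nfs|mount" || exit 1

    echo "== Step 3: list the NFS exports =="
    showmount -e "$DM"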

614 Posts

June 16th, 2016 19:00

Ports 2049 and 4045 are for Data Pump.

614 Posts

June 16th, 2016 19:00

baulisano@efsdzdbclient010101:/etc$ cat vfstab

#device         device          mount           FS      fsck    mount   mount

#to mount       to fsck         point           type    pass    at boot options

#

/devices        -               /devices        devfs   -       no      -

/proc           -               /proc           proc    -       no      -

ctfs            -               /system/contract ctfs   -       no      -

objfs           -               /system/object  objfs   -       no      -

sharefs         -               /etc/dfs/sharetab       sharefs -       no      -

fd              -               /dev/fd         fd      -       no      -

swap            -               /tmp            tmpfs   -     
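For reference, the NFS line we would eventually add to vfstab for the VNX export would look something like this (the export path and mount point are placeholders):

    # device to mount              device to fsck  mount point     FS type  fsck pass  mount at boot  options
    10.129.226.16:/oracle_export   -               /oracle_export  nfs      -          yes            rw,vers=3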

1 Rookie • 20.4K Posts

June 16th, 2016 19:00

What firewall ports did you open? Something is still blocking it.

614 Posts

June 16th, 2016 19:00

Thank you Dynamox, we opened ports 111, 2049, 4045, and 1110 (UDP and TCP, both ways) on the primary Data Mover and Control Station. We did not open them for the failover Data Mover. Could that be causing it?

614 Posts

June 16th, 2016 19:00

Thank you! Here is my output:

baulisano@efsdzdbclient010101:~$ ping 10.129.226.16

10.129.226.16 is alive

baulisano@efsdzdbclient010101:~$ /usr/sbin/rpcinfo -p 10.129.226.16

-bash: /usr/sbin/rpcinfo: No such file or directory

baulisano@efsdzdbclient010101:~$ rpcinfo -p 10.129.226.16

   program vers proto   port  service

824395111    1   tcp  50682

824395111    1   udp  49628

    100011    1   tcp  55233  rquotad

    100011    1   udp  62062  rquotad

536870914    1   udp   4658

536870914    1   tcp   4658

    100021    3   udp  63498  nlockmgr

    100021    2   udp  63498  nlockmgr

    100021    1   udp  63498  nlockmgr

    100021    4   udp  63498  nlockmgr

    100021    3   tcp  64240  nlockmgr

    100021    2   tcp  64240  nlockmgr

    100021    1   tcp  64240  nlockmgr

    100021    4   tcp  64240  nlockmgr

    100024    1   udp  49162  status

    100024    1   tcp  49162  status

    100003    2   udp   2049  nfs

    100003    3   udp   2049  nfs

    100003    2   tcp   2049  nfs

    100003    3   tcp   2049  nfs

    140391    1   udp  31491

    100005    3   tcp   1234  mountd

    100005    2   tcp   1234  mountd

    100005    1   tcp   1234  mountd

    100005    3   udp   1234  mountd

    100005    2   udp   1234  mountd

    100005    1   udp   1234  mountd

536870919    3   tcp  12345

536870919    1   tcp  12345

536870919    3   udp  12345

536870919    1   udp  12345

    102660    1   udp  50170

    102660    1   tcp  56752

    100000    2   udp    111  rpcbind

    100000    2   tcp    111  rpcbind

baulisano@efsdzdbclient010101:~$

It looks good; however, showmount -e just hangs. It's so odd. Again, the NFS mount point works fine from the other network.

Thanks so much!
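One thing I plan to try next is calling mountd and nfs directly, using the program numbers from the rpcinfo output above, to see whether ports 1234 and 2049 are actually reachable over TCP:

    # Probe mountd (program 100005, version 3) over TCP; a hang or timeout
    # here would point at 1234/tcp being blocked somewhere on the path.
    rpcinfo -T tcp 10.129.226.16 100005 3

    # Same idea for the NFS service itself on 2049/tcp.
    rpcinfo -T tcp 10.129.226.16 100003 3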

643 Posts

June 16th, 2016 21:00

That sounds interesting. Is this a production system? Would it be possible to get SSH access to log in and have a look?

1 Rookie • 20.4K Posts

June 17th, 2016 10:00

admingirl wrote:

Thank you Dynamox, we opened ports 111, 2049, 4045, and 1110 (UDP and TCP, both ways) on the primary Data Mover and Control Station. We did not open them for the failover Data Mover. Could that be causing it?

You don't need the Control Station for NFS/CIFS connectivity; it is for management only. The standby Data Mover will assume the primary Data Mover's IP address, so nothing special is needed there either.

Could your firewall guys temporarily create a rule to allow all ports between that host and the interface on the VNX? If the firewall is wide open for that host and it still fails, it could be host related. Is there a local firewall on the host itself?
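On Solaris 11 that check would look something like this, assuming the stock IP Filter service is what is in play:

    # See whether the IP Filter service is enabled on the host.
    svcs -l network/ipfilter

    # If it is online, dump the active inbound/outbound rules (needs root)
    # and look for anything dropping traffic to the VNX interface.
    ipfstat -io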

614 Posts

June 17th, 2016 17:00

So I can log in, but I found that the VNX File NFS default port is 2049, while the Oracle Super Cluster is using 4045. My legacy Linux servers use 2049.

I don't think I can change the default port, and if I did, I suspect other things would stop working.
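One way I could confirm which ports the Super Cluster client is really trying would be to snoop the traffic to the Data Mover while re-running the mount (the interface name below is a placeholder):

    # Watch traffic between this client and the Data Mover while the mount
    # is retried in another window; net0 is a placeholder interface name.
    snoop -d net0 host 10.129.226.16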

Admingirl
