Sun and HP clients in particular have lately tended not to explicitly unmount filesystems when they reboot or shut down. If they're rebooting it's normally not an issue, since they'll just re-mount the FS when they come back up; but if the system goes offline, or if it doesn't remount the FSes on restart (for example, if the FSes were automounted, or if FS configuration files were changed by hand and only take effect on the next startup), then the DM (or any NFS server) may never learn that the client no longer has the FS mounted.
Kind of the opposite question... any thoughts on why the showmount command would not return hosts that have it mounted (and are doing active I/O)? I tried finding an rmtab file in the root file system but didn't see anything. Does it have anything to do with how they mount (i.e. a /net mount)?
If the clients are ACTIVELY doing I/O, then that'd be strange.
Rainer_EMC (December 10th, 2007 11:00)
Given the way NFSv2/3 works, this is the best you can do, and it should give you a superset of the mounted clients.
Even if you looked at the packets on the wire (as Rainfinity does), you could only see clients that are active - either sending NFS requests or holding an open NFS/TCP connection.
NFS (using the default hard mount option) was built like this on purpose, so that a rebooting or crashing server, or a network outage, doesn't cause I/O errors on the NFS clients.
This was best implemented by a mount not keeping any real server-side state - the server merely checks access and hands the client the root filehandle.
At that time server and network outages were frequent, and customers would rather have long-running compute jobs pause for a couple of minutes and then resume, as opposed to exiting with an error and having to be restarted from the beginning.
You could NFS-mount a file system on a client, disconnect its network cable for a year, and when you reconnected it later the NFS mount would still work and running jobs would continue - unless you had changed something major on the NFS server that made the file handle go stale.
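As a sketch of the client-side mount options involved (server_X:/export and the local paths are placeholders, not taken from this thread):

```shell
# Hard mount (the historical default described above): on a server or network
# outage the client retries forever, so applications pause and later resume.
mount -o hard,intr,proto=tcp server_X:/export /mnt/data

# Soft mount, for contrast: once timeo/retrans are exhausted the client gives
# up and the application sees an I/O error instead of hanging.
mount -o soft,timeo=30,retrans=3,proto=tcp server_X:/export /mnt/data
```

With a hard mount, the trade-off is exactly the one described above: jobs stall through an outage rather than fail.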
Rainer_EMC (December 10th, 2007 09:00)
There are two options:
- if your NFS clients all use NFS over TCP, you can identify them with "server_netstat server_X | grep nfs"
- or you can run showmount -a (or "showmount -a server_X" from the Celerra control station)
Just note that (as with all NFS servers) showmount can list clients that are no longer mounted.
The reason is that NFS is stateless, so the NFS server itself doesn't track client mounts.
The mount daemon does, but if a client is simply turned off or crashes without unmounting, mountd still counts it as having an NFS share mounted.
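For example, to boil showmount -a output down to a unique client list - a sketch using made-up sample output, since the exact header and host names here are assumptions:

```shell
# Hypothetical "showmount -a server_X" output; the real header text may differ.
sample='All mount points on server_X:
client-a:/export/home
client-b:/export/home
client-a:/export/data'

# Unique client names: skip the header line, keep the part before the colon.
clients=$(printf '%s\n' "$sample" | awk -F: 'NR>1 {print $1}' | sort -u)
printf '%s\n' "$clients"
```

Remember that this list is a superset: any of these clients may have crashed or been switched off without unmounting.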
IanSchorr (December 10th, 2007 09:00)
Using TCP connections to figure out which clients have ANY FS mounted isn't foolproof, either. Many clients (particularly ones using automounter) will disconnect their TCP connections if an FS has been idle for a certain amount of time. However, as far as the client is concerned, the NFS filesystem is still mounted. And, of course, you completely miss any client that's using NFS over UDP.
With the two pieces of information you can usually approximate who's using NFS and which FSes at any given time, but it's not guaranteed to be accurate. It's one of the difficulties of working with NFSv3.
With NFSv4, as with CIFS, this is a lot easier. There is explicit state about which clients have which FSes mounted at any given time, and the DM and clients should stay in sync on this (within reason - if a client loses power, the DM may not learn that it's offline for a while, until heartbeat messages start timing out). You can use the server_nfs command to get a list of which NFSv4 clients are connected at any given time.
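Combining the two views makes the ambiguity concrete. A sketch on hypothetical host lists (the names are made up; in practice the first list would come from showmount -a and the second from server_netstat):

```shell
# Mountd's view ("showmount -a"): a superset, possibly including stale clients.
printf '%s\n' client-a client-b client-c | sort > /tmp/mountd_clients.txt
# Live-TCP view ("server_netstat ... | grep nfs"): active now, misses UDP/idle.
printf '%s\n' client-a client-d          | sort > /tmp/tcp_clients.txt

# In mountd's list but with no open TCP connection: either genuinely stale,
# or merely idle / mounted over UDP - this is where the ambiguity lives.
suspects=$(comm -23 /tmp/mountd_clients.txt /tmp/tcp_clients.txt)

# Active on TCP but unknown to mountd (e.g. mounted before a mountd restart):
unknown=$(comm -13 /tmp/mountd_clients.txt /tmp/tcp_clients.txt)
```

Neither bucket can be resolved further from the server side with NFSv3, which is exactly the limitation described above.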
Jens-t6YFy (December 11th, 2007 01:00)
Message was edited by: Jens
IanSchorr (January 17th, 2008 13:00)
If the clients are using automounter (such as with /net), then the OS will attempt to unmount the FS when it's not currently in use by an application. Some OSes (or configurations) are very aggressive about this, and you may see clients constantly unmounting and remounting an FS. Even if an application is only idle for a second or two, the FS may be unmounted.
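On Linux autofs, for instance, the idle timeout that drives this unmount/remount cycle is configurable in the master map; the entries below are illustrative, not from this thread:

```shell
# /etc/auto.master - mount hosts under /net, unmount after 60 idle seconds:
#   /net  -hosts  --timeout=60
#
# A longer timeout makes the constant unmount/remount churn far less likely:
#   /net  -hosts  --timeout=600
```

An aggressive (short) timeout is what produces the behavior described above, where an FS disappears after the application has been idle for only a moment.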
DanLah (January 17th, 2008 13:00)
IanSchorr (January 17th, 2008 13:00)
We currently have a 32KB limit on the size of showmount output. So if you have a substantial number of entries in the list, it may be truncated.
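A rough way to check whether a captured list is bumping against that cap - a sketch in which a tiny sample stands in for real output (in practice you would capture out=$(showmount -a server_X)):

```shell
LIMIT=32768   # the 32KB cap on showmount output described above

# Small sample standing in for real "showmount -a" output:
out='client-a:/export/home
client-b:/export/home'

bytes=$(printf '%s\n' "$out" | wc -c)
if [ "$bytes" -ge "$LIMIT" ]; then
  echo "list is at the cap - output may be truncated"
else
  echo "list is $bytes bytes - under the cap"
fi
```

If the byte count sits right at the limit, assume the client list is incomplete and that clients are missing from the end.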