Try this one: isi statistics client list
There are plenty of options: --nodes=all, --protocols=smb2, etc.
You could also try: isi statistics protocol list
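To make that concrete, here is a sketch combining the two statistics commands with the options mentioned above. I can't verify this against every OneFS release, so treat the exact option values as examples rather than a definitive recipe:

```shell
# Per-client statistics across all nodes, limited to SMB2 traffic
isi statistics client list --nodes=all --protocols=smb2

# Per-protocol summary, for a quick look at which protocols are active
isi statistics protocol list --nodes=all
```

Keep in mind these are statistics commands: they only report clients that are actually generating I/O in the sample window, not every mounted client.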
An EMC rep suggested we can use the following options to help with troubleshooting:
I tried the command "isi statistics protocol list" via the CLI.
If there is no NFS I/O at the moment, it returns no results.
Maybe only InsightIQ keeps track of mounted clients.
For SMB you can use "isi smb sessions list", which has several options too.
But that command only covers the local node, so you have to use isi_for_array to see the data for all nodes in the cluster.
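A minimal sketch of that, assuming you run it as root on any node (isi_for_array takes the command to run on each node as a quoted string):

```shell
# Local node only:
isi smb sessions list

# Same command fanned out to every node in the cluster:
isi_for_array 'isi smb sessions list'
```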
For NFS I don't know a similar command. It's difficult because NFS is a stateless protocol. You can try "isi nfs nlm sessions list", but that only shows NFS locks that are currently active, which is not really useful. Or you can try the old command "isi_classic nfs clients list", but I would not really rely on that data: the equivalent isi_classic command for SMB shows no data on my clusters, although there are a lot of SMB connections.
As Mike said, you can use isi statistics to get some statistics-type data, e.g. "isi statistics client list --nodes=all --protocols=nfs3 --sort=remote_addr". Just play around with the parameters to narrow your search list down.
And you can try playing around with "showmounts", of course combined with isi_for_array too.
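If what's meant here is the standard showmount utility (an assumption on my part; I haven't verified which binary ships on the nodes), fanning it out across the cluster would look something like this:

```shell
# Ask each node's NFS server for its list of client mounts.
# showmount -a output is advisory only: it reflects mount/umount
# requests the server has seen, not which clients are truly mounted.
isi_for_array 'showmount -a'
```

Given the NFS caveats discussed below, treat the resulting list as a hint, not an authoritative inventory.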
But to cut a long story short, it seems the best way to get a list of all SMB and NFS connections is to take it from the webUI view or from InsightIQ. Both use isi statistics data, and I suppose some additional internal data too.
The tricky part about NFS is that it is stateless, so an NFS server can only really track the active TCP sessions from clients at a given point in time. It's quite possible for a TCP session to time out, disconnect and go away for days, even weeks or months, while the client still holds a valid handle and sees the filesystem as mounted from its point of view, yet this is invisible to the NFS server. You might say we should wait for an explicit umount and then delete the mount from the list, but the problem is that clients aren't required to send one: some clients simply get halted and may or may not ever come back, and that is perfectly legal in NFS. Therefore, if you rely on a list generated this way, you could end up with a non-trivial number of false positives.
SMB relies on the TCP session to know that you are connected, so it's much easier to track mounted clients. NFS is trickier, as I hope you can see here.