I'm currently monitoring the percentage of used inodes on an NFS client, but I've found an inconsistency between the used-space figures (df -h) and the inode-usage figures (df -i).
On one client the output seems coherent, but on another client it's completely different.
For example:
Filesystem                                Size  Used Avail Use% Mounted on
devtmpfs                                  1.9G     0  1.9G   0% /dev
tmpfs                                     1.9G     0  1.9G   0% /dev/shm
tmpfs                                     1.9G   60M  1.8G   4% /run
tmpfs                                     1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/rhel-root                      17G  2.2G   15G  13% /
/dev/sda1                                1014M  149M  866M  15% /boot
isilon.domain.local:/ifs/data/enterprise  500G  132G  369G  27% /mount1
tmpfs                                     378M     0  378M   0% /run/user/0
Filesystem                                   Inodes      IUsed     IFree IUse% Mounted on
devtmpfs                                     479688        354    479334    1% /dev
tmpfs                                        482670          1    482669    1% /dev/shm
tmpfs                                        482670       1250    481420    1% /run
tmpfs                                        482670         16    482654    1% /sys/fs/cgroup
/dev/mapper/rhel-root                       8910848      36390   8874458    1% /
/dev/sda1                                    524288        325    523963    1% /boot
isilon.domain.local:/ifs/data/enterprise 3997696000 3225664720 772031280   81% /mount1
tmpfs                                        482670          1    482669    1% /run/user/0
On client #1, df -i appears to display the total file count from the cluster (I had previously run the LinCount job, which counts all files).
So I understand that this usage figure reflects cluster-wide information.
On the other client:
df -h /Isilon/Repository
Filesystem                                 Size  Used Avail Use% Mounted on
isilon.domain.local:/ifs/data/enterprise2 250G 206G 45G 83% /mount2
df -i /Isilon/Repository
Filesystem                                   Inodes      IUsed    IFree IUse% Mounted on
isilon.domain.local:/ifs/data/enterprise2 1998848000 1905119440 93728560 96% /mount2
Why do I get a different total inode count on client #1 and client #2 when both talk to the same Isilon cluster?
Other useful info:
Requested Protection +2d:1n
Number of nodes = 5
Drives by node = 36
Many thanks in advance.
I'm not sure why two clients are giving radically different answers if you're talking to the same cluster at the same time. I'm a bit concerned about the ".local" name. Are you sure the DNS is pointing at the same cluster here? The second client is reporting a different filesystem size, use and free space. That suggests it's pointing to a completely different cluster. If the export were different, the size could be explained by a container quota but not if it's the same path.
Aside from that, it is important to understand that the inode free stats are meaningless for OneFS because inodes are dynamically allocated and so you won't run out of inodes unless you run out of free space. Because of that, you should simply monitor free blocks/space and ignore the inode statistics because they are not helpful. But in this case, your total, used and free space numbers are completely different which strongly suggests they are two different clusters.
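Following that advice, monitoring free space rather than inodes is straightforward from the client side with `statvfs` (the same system call `df` uses). A minimal sketch, assuming a Linux/Unix client; the mount path is just an example:

```python
import os

def space_report(path):
    """Summarize block-based usage for a mount point, as `df -h` does.

    On OneFS-backed NFS mounts the inode fields of statvfs are derived
    from free space, so only the block fields are worth alerting on.
    """
    st = os.statvfs(path)
    total = st.f_blocks * st.f_frsize
    avail = st.f_bavail * st.f_frsize          # space available to non-root users
    used = total - st.f_bfree * st.f_frsize
    pct = 100.0 * used / total if total else 0.0
    return {"total_bytes": total, "avail_bytes": avail, "used_pct": round(pct, 1)}

# Example: point this at the NFS mount instead of "/"
print(space_report("/"))
```

An alert threshold on `used_pct` from this report is a more reliable signal than anything derived from `df -i` on OneFS.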
Many thanks for the answer. Regarding the ".local" name: it's only there to hide the real name of our environment. It really is the same SmartConnect name configured on the Isilon, a dynamic DNS pool that points at the same cluster.
Both NFS exports have quotas with "container=yes" set.
Could that be what's affecting this? Maybe that's why they show different inode information.
directory DEFAULT /ifs/data/enterprise No 500.00G - - 131.909G
directory DEFAULT /ifs/data/enterprise2 No 250.00G - - 205.307G
My question is: if a quota reaches 100% of its capacity, do we also lose the free inode space for that export?
In summary, is this information meaningless when quotas are set on different resources (always referring to the same cluster)?
Many thanks and regards.
In OneFS 8.2 the inode reporting has been improved for directory "container" quotas:
The inode count in df is exactly the inode count from the quota domain (i.e. directory).
This can be useful for tracking file creation activity right from the Linux terminal or via simple shell scripts (no API needed), for example.
As @isi_tim said, there is no reserved space for inodes in OneFS, so the total and available inode counts shown by df are simply computed by OneFS as the number of 512-byte inodes that would fit into the total and the available space, respectively.
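That arithmetic can be sketched as follows. This is an illustration of the space-divided-by-512 derivation described above, not a reproduction of the thread's exact figures (which also reflect protection overhead and the actual quota accounting):

```python
INODE_SIZE = 512  # OneFS reports df -i totals as space divided into 512-byte inodes

def onefs_inode_counts(total_bytes, available_bytes):
    """Number of 512-byte inodes that would fit in the total and available space."""
    return total_bytes // INODE_SIZE, available_bytes // INODE_SIZE

# Illustrative: a 250 GiB container quota with 45 GiB still available
print(onefs_inode_counts(250 * 2**30, 45 * 2**30))  # → (524288000, 94371840)
```

Because both counts are derived from space, the "free inodes" shrink in lockstep with free space, which is exactly why monitoring blocks alone is sufficient.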