PowerScale OneFS 9.11 and later: Vnodes Not Reclaimed Efficiently Causing Nodes to Run OOM

Summary: A high number of vnodes accumulating in memory can result in one or more nodes entering an Out Of Memory (OOM) condition. Any cluster using Snapshots and SmartPools that is running OneFS 9.11 or later can encounter this issue.


Symptoms

An OOM condition can lead to node panics and unresponsiveness, resulting in performance degradation and data unavailability events. The number of vnodes in memory is higher than the configured maximum (kern.maxvnodes). Enabling or disabling vfs.vnlru_reuse_freevnodes (vnode recycling) has no effect on the issue.

The issue can be identified from OOM messages, vmlogs, and a minidump. Example output from the messages log on an F200 node with 48 GB of memory:

2025-06-11T05:31:56.025986+02:00 <0.4> - /boot/kernel.amd64/kernel: OOM: v_wire_count: 11262575, v_active_count: 7 events_since_last_log 674
2025-06-11T05:31:56.025992+02:00 <0.4> - /boot/kernel.amd64/kernel: Malloc Pigs:
2025-06-11T05:31:56.025997+02:00 <0.4> - /boot/kernel.amd64/kernel: Type                   InUse   MemUse   Requests
2025-06-11T05:31:56.026004+02:00 <0.4> - /boot/kernel.amd64/kernel: iaddr_set            15701654  981354K 4078195298
2025-06-11T05:31:56.026010+02:00 <0.4> - /boot/kernel.amd64/kernel: devbuf                171845  649730K   35775475
2025-06-11T05:31:56.026016+02:00 <0.4> - /boot/kernel.amd64/kernel: isi_hash              137112  505157K  760464664
2025-06-11T05:31:56.026022+02:00 <0.4> - /boot/kernel.amd64/kernel: newblk                     4  131072K    1483813
2025-06-11T05:31:56.026027+02:00 <0.4> - /boot/kernel.amd64/kernel: inodedep                   4   65536K     531073
2025-06-11T05:31:56.026035+02:00 <0.4> - /boot/kernel.amd64/kernel: vfscache                   4   32817K          4
2025-06-11T05:31:56.026041+02:00 <0.4> - /boot/kernel.amd64/kernel: bar_owner_vec264         259   32256K     191809
2025-06-11T05:31:56.026047+02:00 <0.4> - /boot/kernel.amd64/kernel: linux                 169103   28170K  142448723
2025-06-11T05:31:56.026052+02:00 <0.4> - /boot/kernel.amd64/kernel: statistics data        13062   20453K    1189902
2025-06-11T05:31:56.026058+02:00 <0.4> - /boot/kernel.amd64/kernel: vfs_hash                   1   16384K          1
2025-06-11T05:31:56.026064+02:00 <0.4> - /boot/kernel.amd64/kernel: pagedep                    4   16384K     151925
2025-06-11T05:31:56.026069+02:00 <0.4> - /boot/kernel.amd64/kernel: sysctloid             269900   14747K     274194
2025-06-11T05:31:56.026075+02:00 <0.4> - /boot/kernel.amd64/kernel: acpica                194000   12817K    3055662
2025-06-11T05:31:56.026081+02:00 <0.4> - /boot/kernel.amd64/kernel: 8kB dinodes             2315   11909K 10694470462
2025-06-11T05:31:56.026086+02:00 <0.4> - /boot/kernel.amd64/kernel: pcb                      136    9230K     470386
2025-06-11T05:31:56.026092+02:00 <0.4> - /boot/kernel.amd64/kernel: Unshown bins account for 100723K
2025-06-11T05:31:56.026103+02:00 <0.4> - /boot/kernel.amd64/kernel: Total: 2628733K
2025-06-11T05:31:56.026109+02:00 <0.4> - /boot/kernel.amd64/kernel: UMA Zalloc Pigs:
2025-06-11T05:31:56.026114+02:00 <0.4> - /boot/kernel.amd64/kernel: NAME              SIZE      LIMIT      COUNT   MEM USED
2025-06-11T05:31:56.026120+02:00 <0.4> - /boot/kernel.amd64/kernel: IFSINODE          616,         0,  17222149,  11481500K
2025-06-11T05:31:56.026126+02:00 <0.4> - /boot/kernel.amd64/kernel: VNODE             584,         0,  17224440,   9842604K
2025-06-11T05:31:56.026132+02:00 <0.4> - /boot/kernel.amd64/kernel: mbuf_jumbo_p     4096,         0,    272268,   1089072K
2025-06-11T05:31:56.026137+02:00 <0.4> - /boot/kernel.amd64/kernel: VM OBJECT         272,         0,     49671,    839200K
2025-06-11T05:31:56.026143+02:00 <0.4> - /boot/kernel.amd64/kernel: UMA Slabs 0        80,         0,   3078132,    246336K
2025-06-11T05:31:56.026149+02:00 <0.4> - /boot/kernel.amd64/kernel: BUF TRIE          144,         0,    122678,    217204K
2025-06-11T05:31:56.026154+02:00 <0.4> - /boot/kernel.amd64/kernel: RADIX NODE        144,         0,    333231,    141808K
2025-06-11T05:31:56.026160+02:00 <0.4> - /boot/kernel.amd64/kernel: vmem btag          56,         0,    143433,    114884K
2025-06-11T05:31:56.026166+02:00 <0.4> - /boot/kernel.amd64/kernel: mbuf              256,         *,         *,     75888K
2025-06-11T05:31:56.026172+02:00 <0.4> - /boot/kernel.amd64/kernel:  mbuf             256,  19265886,    284503,          *
2025-06-11T05:31:56.026177+02:00 <0.4> - /boot/kernel.amd64/kernel:  mbuf_packet      256,         0,        64,          *
2025-06-11T05:31:56.026183+02:00 <0.4> - /boot/kernel.amd64/kernel: lki_mds_ent       160,         0,     62000,     69428K
2025-06-11T05:31:56.026189+02:00 <0.4> - /boot/kernel.amd64/kernel: md3               512,         0,    131072,     65540K
2025-06-11T05:31:56.026194+02:00 <0.4> - /boot/kernel.amd64/kernel: md0               512,         0,    131072,     65540K
2025-06-11T05:31:56.026200+02:00 <0.4> - /boot/kernel.amd64/kernel: pbuf             1024,         *,         *,     38760K
2025-06-11T05:31:56.026206+02:00 <0.4> - /boot/kernel.amd64/kernel:  pbuf            1024,       256,         0,          *
2025-06-11T05:31:56.026211+02:00 <0.4> - /boot/kernel.amd64/kernel:  vnpbuf          1024,       512,         0,          *
2025-06-11T05:31:56.026217+02:00 <0.4> - /boot/kernel.amd64/kernel:  clpbuf          1024,     15872,         0,          *
2025-06-11T05:31:56.026223+02:00 <0.4> - /boot/kernel.amd64/kernel:  mdpbuf          1024,      1638,         0,          *
2025-06-11T05:31:56.026228+02:00 <0.4> - /boot/kernel.amd64/kernel:  nfspbuf         1024,      8192,         0,          *
2025-06-11T05:31:56.026234+02:00 <0.4> - /boot/kernel.amd64/kernel:  swwbuf          1024,      4096,         0,          *
2025-06-11T05:31:56.026240+02:00 <0.4> - /boot/kernel.amd64/kernel:  swrbuf          1024,      8192,         0,          *
2025-06-11T05:31:56.026245+02:00 <0.4> - /boot/kernel.amd64/kernel: lkc_gen_ent        64,         0,     21333,     18100K
2025-06-11T05:31:56.026253+02:00 <0.4> - /boot/kernel.amd64/kernel: MAP ENTRY          96,         0,    138594,     15488K
2025-06-11T05:31:56.026259+02:00 <0.4> - /boot/kernel.amd64/kernel: Top zones:        24321352K
2025-06-11T05:31:56.026265+02:00 <0.4> - /boot/kernel.amd64/kernel: Malloc zones:      2344608K
2025-06-11T05:31:56.026270+02:00 <0.4> - /boot/kernel.amd64/kernel: Other zones:        117512K
2025-06-11T05:31:56.026276+02:00 <0.4> - /boot/kernel.amd64/kernel: UMA total:        26783472K

In this example, the IFSINODE and VNODE entry counts are over 17 million:

NAME              SIZE      LIMIT      COUNT   MEM USED
IFSINODE          616,         0,  17222149,  11481500K
VNODE             584,         0,  17224440,   9842604K

By contrast, the configured maximum number of vnodes is 2,900,000, which can be found in the vmlogs under kern.maxvnodes.

The current number of vnodes and the configured maximum can be verified on the cluster with the following sysctls:

# isi_for_array -s "sysctl vfs.numvnodes"
# isi_for_array -s "sysctl kern.maxvnodes"
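To judge how far a node has drifted past its cap, the two sysctl values can be compared directly. The following is a minimal sketch; `vnode_ratio` is a hypothetical helper, not an OneFS command, and the input numbers are taken from the example log above.

```shell
# Hypothetical helper: express the current vnode count as a percentage
# of the configured maximum. Anything well over 100% indicates vnodes
# are not being reclaimed.
vnode_ratio() {
  awk -v cur="$1" -v max="$2" 'BEGIN { printf "%d\n", (cur * 100) / max }'
}

# Example with the values from the log above:
# ~17.2 million vnodes against a 2,900,000 cap.
vnode_ratio 17224440 2900000
```

A healthy node should report a value at or below 100; the example node above is at roughly 594% of its configured maximum.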

Cause

The root cause is under investigation.

Resolution

A permanent resolution is in OneFS 9.11.0.2 and 9.12.0.0 and later OneFS releases.

Workaround:
The current workaround is to flush the memory cache on all nodes every hour. The following example loop can be run in a screen session on any node:

# while true; do date; isi_for_array -s 'sysctl vfs.numvnodes; isi_flush'; sleep 3600; done
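The loop above prints one `vfs.numvnodes` line per node on every iteration. To spot the worst node at a glance, that output can be piped through a small filter. This is a sketch; `max_numvnodes` is a hypothetical helper, and the sample input imitates typical `isi_for_array` output with illustrative node names and counts.

```shell
# Hypothetical helper: given 'sysctl vfs.numvnodes' lines collected from
# several nodes, print the highest vnode count seen.
max_numvnodes() {
  awk '/vfs.numvnodes/ { if ($NF + 0 > m) m = $NF + 0 } END { print m + 0 }'
}

# Example with sample output from three nodes (illustrative values):
printf 'node-1: vfs.numvnodes: 2100345\nnode-2: vfs.numvnodes: 17224440\nnode-3: vfs.numvnodes: 950122\n' | max_numvnodes
```

Comparing the reported maximum against `kern.maxvnodes` over successive iterations shows whether the hourly flush is keeping the count under control.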

 

Note: The loop can be canceled at any time during the 3600-second (1 hour) sleep, except while the isi_flush command is in progress. Do not kill the isi_flush process, as that may cause a temporary service disruption.

 

To further automate cache flushing and make it more reliable, a cron job can be scheduled. The example below flushes the cache at 7 minutes past every hour:

Edit /etc/mcp/override/crontab with any text editor and enter the following:

7 * * * * root pgrep isi_flush || /usr/bin/isi_flush

More information about how to edit crontab is available in the article "Isilon: How to edit crontab".
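A malformed crontab entry silently never fires, so it can be worth sanity-checking the line before saving. The sketch below is a hypothetical format check (not an OneFS tool): it verifies that a line has five schedule fields, a user column (as required in /etc/mcp/override/crontab), and a command.

```shell
# Hypothetical validator for the override-crontab entry format used above:
# five schedule fields, a "who" column, then the command to run.
valid_cron_line() {
  echo "$1" | grep -Eq '^[0-9*/,-]+( [0-9*/,-]+){4} [a-z]+ .+'
}

valid_cron_line '7 * * * * root pgrep isi_flush || /usr/bin/isi_flush' && echo ok
```

The check is intentionally loose; it catches a missing field or user column, not every invalid schedule expression.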

SmartLock compliance mode
For SmartLock compliance mode clusters, prefix both isi_for_array and isi_flush with sudo:

# while true; do date; sudo isi_for_array -s 'sysctl vfs.numvnodes; sudo isi_flush'; sleep 3600; done

If using a cron job, note that compadmin cannot edit /etc/mcp/override/crontab. Instead, use the crontab CLI tool with the -e option on each node to add the line below. The line omits the "who" column (root in /etc/mcp/override/crontab), because the job runs under the compadmin account only.

Edit cron job using:

# crontab -e

In the editor:
Press i to switch to insert mode, then paste the following line:

7 * * * *  pgrep isi_flush || sudo /usr/bin/isi_flush

Press Esc, then type :wq! and press Enter to save the changes.

Affected Products

Isilon, PowerScale, Isilon Gen6.5, Isilon Gen6, PowerScale OneFS
Article Properties
Article Number: 000342005
Article Type: Solution
Last Modified: 19 Nov 2025
Version:  6