PowerEdge: Linux unexpectedly reboots while running PF_RING
Summary: While running PF_RING for network packet capture, the server reboots unexpectedly. This occurs on both Red Hat Enterprise Linux and Ubuntu.
Symptoms
While running PF_RING for network packet capture, the server reboots unexpectedly. This occurs on both Red Hat Enterprise Linux and Ubuntu.
Cause
Kdump was enabled and the following vmcore was captured:
[113309.901854] Modules linked in: i40e(OE) vxlan ip6_udp_tunnel udp_tunnel pf_ring(OE) tcp_lp binfmt_misc xt_CHECKSUM ipt_MASQUERADE nf_nat_masquerade_ipv4 tun devlink ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_conntrack ebtable_nat ebtable_broute bridge stp llc ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat iptable_mangle iptable_security iptable_raw nf_conntrack ip_set nfnetlink ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter sunrpc dm_mirror dm_region_hash dm_log dm_mod dell_smbios dcdbas dell_wmi_descriptor joydev vfat fat amd64_edac_mod edac_mce_amd kvm_amd kvm irqbypass crc32_pclmul ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper
[113309.901871] ablk_helper cryptd pcspkr ipmi_ssif sg k10temp i2c_piix4 wmi ipmi_si ipmi_devintf ipmi_msghandler acpi_power_meter ip_tables xfs libcrc32c sd_mod crc_t10dif crct10dif_generic mgag200 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm ahci libahci crct10dif_pclmul drm crct10dif_common crc32c_intel libata tg3 megaraid_sas ptp drm_panel_orientation_quirks pps_core fuse [last unloaded: i40e]
[113309.901882] CPU: 16 PID: 0 Comm: swapper/16 Kdump: loaded Tainted: G B OE ------------ 3.10.0-1160.el7.x86_64 #1
[113309.901883] Hardware name: /0PYVT1, BIOS 2.6.6 01/13/2022
[113309.901885] Call Trace:
[113309.901886] <IRQ> [<ffffffffa2b81340>] dump_stack+0x19/0x1b
[113309.901890] [<ffffffffa2b7befd>] bad_page.part.75+0xdc/0xf9
[113309.901892] [<ffffffffa25c8625>] get_page_from_freelist+0x7a5/0xac0
[113309.901895] [<ffffffffa2a3c6f7>] ? kfree_skbmem+0x37/0x90
[113309.901898] [<ffffffffa25c8aa6>] __alloc_pages_nodemask+0x166/0x450
[113309.901902] [<ffffffffc0778158>] i40e_alloc_rx_buffers+0x168/0x320 [i40e]
[113309.901907] [<ffffffffc07788dc>] i40e_clean_rx_irq+0x5cc/0xbc0 [i40e]
[113309.901912] [<ffffffffc077939c>] i40e_napi_poll+0x3ac/0x830 [i40e]
[113309.901915] [<ffffffffa2a555bf>] net_rx_action+0x26f/0x390
[113309.901917] [<ffffffffa24a4b95>] __do_softirq+0xf5/0x280
[113309.901920] [<ffffffffa2b974ec>] call_softirq+0x1c/0x30
[113309.901922] [<ffffffffa242f715>] do_softirq+0x65/0xa0
[113309.901924] [<ffffffffa24a4f15>] irq_exit+0x105/0x110
[113309.901927] [<ffffffffa2b98936>] do_IRQ+0x56/0xf0
[113309.901930] [<ffffffffa2b8a36a>] common_interrupt+0x16a/0x16a
[113309.901932] <EOI> [<ffffffffa2b89000>] ? __cpuidle_text_start+0x8/0x8
[113309.901936] [<ffffffffa2b8924b>] ? native_safe_halt+0xb/0x20
[113309.901939] [<ffffffffa2b8901e>] default_idle+0x1e/0xc0
[113309.901942] [<ffffffffa2437ca0>] arch_cpu_idle+0x20/0xc0
[113309.901945] [<ffffffffa25011ea>] cpu_startup_entry+0x14a/0x1e0
[113309.901948] [<ffffffffa245a7f7>] start_secondary+0x1f7/0x270
[113309.901951] [<ffffffffa24000d5>] start_cpu+0x5/0x14
[113309.901953] BUG: Bad page state in process swapper/16 pfn:7ef0797
[113309.901955] page:ffffdeca3bc1e5c0 count:65533 mapcount:0 mapping: (null) index:0x0
[113309.901957] page flags: 0x6fffff00000000()
[113309.901960] page dumped because: nonzero _count
From the vmcore analysis, the reboot was triggered by the i40e network card driver, and the loaded i40e driver was tainted.
Note:
The OE flags beside the i40e driver in the trace above indicate an out-of-tree (O), externally built and unsigned (E) module, not the inbox driver shipped and signed by Red Hat.
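To confirm which i40e module a host is actually running and whether the kernel is tainted by it, commands like the following can be used (a diagnostic sketch; file paths and signer names vary by distribution):

```shell
# Show where the loaded i40e module comes from and who signed it.
# An inbox RHEL driver lives under kernel/drivers/net/ethernet/intel/i40e/
# and lists a Red Hat key as the signer; a PF_RING build is typically
# installed under updates/ or extra/ and has no signer.
modinfo i40e 2>/dev/null | grep -E '^(filename|version|signer)' \
    || echo "i40e module not installed on this host"

# Decode the kernel taint value: bit 12 (O, 4096) = out-of-tree module,
# bit 13 (E, 8192) = unsigned module, matching the "Tainted: G B OE" line.
taint=$(cat /proc/sys/kernel/tainted)
echo "taint value: $taint"
[ $(( taint & 4096 )) -ne 0 ] && echo "O: out-of-tree module loaded"
[ $(( taint & 8192 )) -ne 0 ] && echo "E: unsigned module loaded"
true
```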
Resolution
The tainted i40e driver is bundled with PF_RING. Update PF_RING to the latest version, which ships a newer i40e driver (i40e-2.14.13 or later).
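After updating, the installed driver version can be checked against the fixed release with version-aware sorting (a sketch; 2.14.13 is the minimum version named above):

```shell
# Compare the installed i40e driver version against the minimum fixed
# release (2.14.13) using sort -V, which orders dotted versions correctly.
required="2.14.13"
current=$(modinfo -F version i40e 2>/dev/null || echo "0")
lowest=$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)
if [ "$current" != "0" ] && [ "$lowest" = "$required" ]; then
    echo "i40e $current meets the $required minimum"
else
    echo "i40e $current is older than $required (or missing); update PF_RING"
fi
```

Rebuilding PF_RING from the ntop sources (the `kernel/` tree and the bundled Intel drivers under `drivers/intel/`) installs the updated module; see the PF_RING documentation for the exact build steps on your distribution.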
Affected Products
OEMR R240, OEMR R250, OEMR R260, OEMR R340, OEMR R350, OEMR R360, OEMR R440, OEMR R450, OEMR R540, OEMR R550, OEMR R640, OEMR R650, OEMR R650xs, OEMR R6515, OEMR R6525, OEMR R660, OEMR R660xs, OEMR R6615, OEMR R6625, OEMR R740, OEMR R740xd, OEMR R740xd2, OEMR R750, OEMR R750xa, OEMR R750xs, OEMR R7515, OEMR R7525, OEMR R760, OEMR R760xa, OEMR R760XD2, OEMR R760xs, OEMR R7615, OEMR R7625, OEMR R840, OEMR R860, OEMR R940, OEMR R940xa, OEMR R960, OEMR XR4510c, OEMR XR4520c, OEMR XR5610, OEMR XR8610t, OEMR XR8620t, PowerEdge R240, PowerEdge R250 Rack Server, PowerEdge R260 Rack Server, PowerEdge R340, PowerEdge R350 Rack Server, PowerEdge R360 Rack Server, PowerEdge R440, PowerEdge R450 Rack Server, PowerEdge R540, PowerEdge R550 Rack Server, PowerEdge R640, PowerEdge R650 Rack Server, PowerEdge R650xs Rack Server, PowerEdge R6515 Rack Server, PowerEdge R6525 Rack Server, PowerEdge R660 Rack Server, PowerEdge R660xs Rack Server, PowerEdge R6615 Rack Server, PowerEdge R6625 Rack Server, PowerEdge R740, PowerEdge R740XD, PowerEdge R740XD2, PowerEdge R750 Rack Server, PowerEdge R750xa Rack Server, PowerEdge R750xs Rack Server, PowerEdge R7515 Rack Server, PowerEdge R7525 Rack Server, PowerEdge R760 Rack Server, PowerEdge R760XA Rack Server, PowerEdge R760xd2 Rack Server, PowerEdge R760xs Rack Server, PowerEdge R7615 Rack Server, PowerEdge R7625 Rack Server, PowerEdge R840 Rack Server, PowerEdge R860 Rack Server, PowerEdge R940 Rack Server, PowerEdge R940xa, PowerEdge R960 Rack Server, PowerEdge XR4510c, PowerEdge XR4520c, PowerEdge XR5610, PowerEdge XR8610t, PowerEdge XR8620t, Red Hat Enterprise Linux Version 7, Red Hat Enterprise Linux Version 8, Ubuntu Server LTS
...
Article Properties
Article Number: 000201628
Article Type: Solution
Last Modified: 02 Sept 2025
Version: 2