February 27th, 2018 10:00
R810 performance problem with 10GBit Interfaces
I have a throughput problem with 10GBit Ethernet interface cards in a Dell R810.
When running "iperf3" I get 5Gbit/s instead of the expected 9.4Gbit/s.
The same cards give 9.4GBit/s in a Dell R710.
System:
- Dell R810
- 4x Xeon E7-4820 CPUs
- 251GB RAM
- 2x dual-port 10GBit Intel 82599ES SFP+ in slots 3+4 (moving them to slots 5+6 does not help)
- CentOS 7.4
- two 10GBit interfaces configured as bond0 (LACP)
- the servers are connected through a Mellanox switch
Commands:
- on R810 (server):
iperf3 -s
- on R710 (client):
iperf3 -c SERVER_IP
If I reverse the direction, I get the expected throughput of 9.4GBit/s.
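When only one direction is slow, a single saturated core on the sending or receiving side is a common culprit. A quick way to check (a sketch; the NUMA node number and interface locality are assumptions you should verify for your box) is to run several parallel streams and pin iperf3 to the NUMA node that owns the NIC's PCIe slot:

```shell
# Multiple parallel streams: if -P 4 reaches line rate while a single
# stream does not, one CPU core is the bottleneck, not the link.
iperf3 -c SERVER_IP -P 4

# Find which NUMA node the NIC sits on (-1 means unknown/no NUMA):
cat /sys/class/net/p6p1/device/numa_node

# Run the server pinned to that node's CPUs and memory
# (node 0 here is an assumption; use the number printed above):
numactl --cpunodebind=0 --membind=0 iperf3 -s
```

On a four-socket box like the R810, cross-node memory traffic is much more expensive than on the two-socket R710, which could explain why the same cards behave differently.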
What I tried:
- increase MTU to 9000
- move the 10GBit cards to slots 5+6
- pin iperf to a CPU
- pin interrupts to a CPU through "/proc/irq/N/smp_affinity"; this increases the throughput from 4.8GBit/s to 5.2GBit/s
- increase ring parameters from 512 to 4096:
"ethtool -G p6p1 rx 4096 tx 4096"
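Rather than pinning all interrupts to one CPU, it may help to spread the NIC's queue interrupts across several cores on the node local to the card. A minimal sketch (the interface name p6p1 and the core list are assumptions; list the card's IRQs with `grep p6p1 /proc/interrupts` and pick cores local to the NIC):

```shell
#!/bin/sh
# mask_for_cpu: hex bitmask with only the given CPU's bit set,
# which is the format /proc/irq/N/smp_affinity expects.
mask_for_cpu() { printf '%x' $((1 << $1)); }

# Cores local to the NIC's NUMA node (assumption; check numactl -H).
cpus="0 1 2 3"
# IRQ numbers belonging to the interface (empty if the name differs).
irqs=$(grep p6p1 /proc/interrupts | awk -F: '{print $1}' | tr -d ' ')

i=0
for irq in $irqs; do
    # Round-robin the IRQs over the chosen cores.
    set -- $cpus
    shift $(( i % $# ))
    echo "$(mask_for_cpu $1)" > "/proc/irq/$irq/smp_affinity"
    i=$((i + 1))
done
```

Note that the irqbalance daemon may overwrite these masks; stop it while testing.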
The R810 is in a lab, and I can take it down at any time.
Any ideas?
Regards
DELL-Josh Cr
Moderator
February 27th, 2018 12:00
Hi,
Can you private message me the service tag so we can get some additional device information?
What happens if you do a file copy instead of using iperf? Do you have any other PCIe cards in the system?
whaskes
December 9th, 2022 08:00
Hi!
I also have this problem with a Dell PowerEdge R720-8BAY-LFF-CTO server. My SFP+ card is an Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection. I am currently running IPFire Linux. I took the picture during a data transfer: it only loads one CPU core...
Is there a solution for this?
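If only one core is loaded, the NIC may be using a single RX/TX queue, so all packet processing lands on one CPU. The 82599 (ixgbe driver) supports multiple RSS queues; a first check is something like the following (the interface name eth0 and queue count are assumptions, adapt them to your system):

```shell
# Show how many combined RX/TX queues the driver currently uses
# and the maximum the hardware supports:
ethtool -l eth0

# Ask for more combined queues so RSS can spread flows across cores:
ethtool -L eth0 combined 8

# Verify the queue interrupts are now spread over several cores:
grep eth0 /proc/interrupts
```

Note that a single TCP flow still hashes to one queue, so multiple queues mainly help with many parallel connections.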