PowerScale: Using iperf3 with OneFS
Summary: Using iperf3 to test bandwidth from a client to a OneFS cluster.
Instructions
The iperf3 program tests the raw network throughput from the client to the server without the protocol layer. This enables you to establish a rough baseline of what raw traffic over the network looks like.
Note: iperf3 is going to show you the available bandwidth. If there is other traffic running, this other traffic is not taken into consideration.
Achieving line rate on a 40G or 100G test host often requires parallel streams. However, with iperf3 this is not as simple as adding the -P flag, because each iperf3 process is single-threaded: all of the streams in one parallel test run in the same process, and therefore on the same CPU core. If you are core-limited (often the case for a 40G host, and usually the case for a 100G host), adding parallel streams will not help unless you add additional iperf3 processes, which can use additional cores.
To run multiple iperf3 processes and use additional CPU cores when testing a high-speed host, do the following:
Start multiple servers by running:
```
iperf3 -s -p 5101 & iperf3 -s -p 5102 & iperf3 -s -p 5103 &
```
Then run multiple clients, using the -T flag to label the output:
```
iperf3 -c hostname -T s1 -p 5101 & iperf3 -c hostname -T s2 -p 5102 & iperf3 -c hostname -T s3 -p 5103 &
```
With some tuning of client NIC parameters, Windows clients can consistently reach around 38 Gb/s from the client to the cluster using 8 parallel streams. The following is an example of running iperf3 with 8 streams, which is what you want to use for 40G network testing.
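The per-port pattern above can be generalized with a short helper. This is a sketch that only builds the command strings (it does not launch anything); the host name and the number of streams are placeholders you would adjust for your environment.

```python
def iperf3_commands(host, streams=3, base_port=5101):
    """Build backgrounded server and client command lines for N parallel
    iperf3 processes, one port per process so each can use its own core."""
    servers = [f"iperf3 -s -p {base_port + i} &" for i in range(streams)]
    clients = [f"iperf3 -c {host} -T s{i + 1} -p {base_port + i} &"
               for i in range(streams)]
    return servers, clients

servers, clients = iperf3_commands("hostname", streams=3)
print("\n".join(servers))   # run these on the server side first
print("\n".join(clients))   # then run these on the client
```

Each client/server pair is a separate iperf3 process, so the kernel can schedule them on different CPU cores.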
If you want to get server results in the client output, use the --get-server-output option like this:
```
$ iperf3 -c 192.168.188.11 -P 8 -t 600 --get-server-output
```
This example connects to host 192.168.188.11 on port 5201 with 8 streams and runs for 10 minutes.
```
PS C:\tmp> iperf3 -c 192.168.188.11 -P 8 -t 600
[  4] local 192.168.188.57 port 60221 connected to 192.168.188.11 port 5201
[  6] local 192.168.188.57 port 60227 connected to 192.168.188.11 port 5201
[  8] local 192.168.188.57 port 60228 connected to 192.168.188.11 port 5201
[ 10] local 192.168.188.57 port 60229 connected to 192.168.188.11 port 5201
[ 12] local 192.168.188.57 port 60230 connected to 192.168.188.11 port 5201
[ 14] local 192.168.188.57 port 60231 connected to 192.168.188.11 port 5201
[ 16] local 192.168.188.57 port 60232 connected to 192.168.188.11 port 5201
[ 18] local 192.168.188.57 port 60233 connected to 192.168.188.11 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00  sec   444 MBytes  3.73 Gbits/sec
[  6]   0.00-1.00  sec   896 MBytes  7.51 Gbits/sec
[  8]   0.00-1.00  sec   440 MBytes  3.69 Gbits/sec
[ 10]   0.00-1.00  sec   572 MBytes  4.79 Gbits/sec
[ 12]   0.00-1.00  sec   432 MBytes  3.62 Gbits/sec
[ 14]   0.00-1.00  sec   559 MBytes  4.69 Gbits/sec
[ 16]   0.00-1.00  sec   543 MBytes  4.55 Gbits/sec
[ 18]   0.00-1.00  sec   422 MBytes  3.54 Gbits/sec
[SUM]   0.00-1.00  sec  4.21 GBytes  36.1 Gbits/sec
```
Compare the average of the values from your iperf3 tests with the values in the "Average interface values" table below. The table indicates the average throughput you can expect from various interface types.
Note: These values are not absolute; they are meant to be used as a guide.
- If your throughput results are substantially slower than the throughput listed in the table, the problem might be related to your physical network.
- If your throughput results are approximately the same as the throughput listed in the table, then the problem is probably not with your physical network.
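Rather than reading the per-interval lines by eye, you can have iperf3 emit machine-readable results with its -J (JSON) flag and extract the end-of-run summary. The sketch below parses a heavily trimmed sample; the `end.sum_received.bits_per_second` layout matches typical iperf3 JSON output for TCP tests, but verify it against your iperf3 version.

```python
import json

# Heavily trimmed sample of `iperf3 -J` output; real output carries far
# more detail. The end.sum_received layout is typical for TCP tests.
sample = """
{"end": {"sum_received": {"seconds": 600.0,
                          "bytes": 2700000000000,
                          "bits_per_second": 36000000000.0}}}
"""

result = json.loads(sample)
gbps = result["end"]["sum_received"]["bits_per_second"] / 1e9
print(f"Average throughput: {gbps:.1f} Gbit/s")
```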
| Network interface type | Average throughput |
|---|---|
| 1 GbE | 800 Mb/sec |
| 10 GbE | 3 Gb/sec (MTU 1500); 6 Gb/sec (MTU 9000) |
| 1 GbE aggregate | (0.95 Gb/sec) x (number of interfaces) |
| 10 GbE aggregate | 6 Gb/sec |
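The guidance above can be encoded in a small check. The baselines come from the table; the 70% cutoff used to interpret "substantially slower" is an illustrative assumption, not a figure from this article.

```python
# Baselines from the table above, in Gbit/s.
BASELINES = {
    "1gbe": 0.8,
    "10gbe_mtu1500": 3.0,
    "10gbe_mtu9000": 6.0,
}

def physical_network_suspected(measured_gbps, interface, cutoff=0.7):
    """True when the measured rate is well below the table baseline,
    suggesting the physical network is worth investigating.
    The 0.7 cutoff is an assumption, not from the article."""
    return measured_gbps < BASELINES[interface] * cutoff

# 1.2 Gbit/s on a 10 GbE link (MTU 1500) is well under the 3 Gb/s baseline.
print(physical_network_suspected(1.2, "10gbe_mtu1500"))  # True
print(physical_network_suspected(2.8, "10gbe_mtu1500"))  # False
```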
Affected Products
PowerScale OneFS
Article Properties
Article Number: 000188735
Article Type: How To
Last Modified: 23 Oct 2025
Version: 8