Article Number: 000187640


Dell EMC VxRail: VSAN Hosts Network Performance Test does not exceed 10,000Mb/s bandwidth on 25Gb or higher network infrastructure.

Summary: When running the proactive "VSAN Hosts Network Performance Test" available from the VSAN plugin UI, the "Received Bandwidth" result does not exceed 10,000Mb/s, even though expectations may be higher on a 25Gb, 40Gb, or faster physical network infrastructure.

Article Content


Symptoms

Example from a 25Gb network infrastructure running the proactive "VSAN Hosts Network Performance Test":

[Screenshot: vSphere Client GUI on a 25Gb network infrastructure running the proactive "VSAN Hosts Network Performance Test"]
  • As highlighted in green, the result is less than 10,000Mb/s even though the test is running on a 25Gb network.

Cause

This is expected behavior: the test was designed to validate and simulate network performance at 10Gb bandwidth.

We can see from the logs that the test uses iperf and is preconfigured with the "-b 10G" parameter, meaning the test's target bandwidth is 10Gb. The test therefore does not exceed this value regardless of the underlying physical network capabilities, for example 25Gb or higher:
vsanmgmt.log:2021-05-30T10:31:12.280Z warning vsand[2100968] [opID=f67999c8-5bfc VsanHealthSystemImpl::RunIperf] Cmd: ['/usr/lib/vmware/vsan/bin/iperf3.copy', '++group=host/vim/tmp', '-J', '-s', '-B', 'x.x.x.12']
vsanmgmt.log:2021-05-30T10:31:42.278Z warning vsand[2100902] [opID=f67999c8-5c4e VsanHealthSystemImpl::RunIperf] Cmd: ['/usr/lib/vmware/vsan/bin/iperf3.copy', '++group=host/vim/tmp', '-J', '-c', 'x.x.x.11', '-b', '10G', '-w', '2048K', '-t', '15', '-O', '10']
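
To confirm these parameters on a given cluster, the iperf invocation can be located in the vSAN health service log on each ESXi host. A minimal sketch, assuming the default ESXi log location /var/run/log/vsanmgmt.log and that the test ran recently enough to still be in the log:

# Run on an ESXi host; lists the iperf command lines launched by the health test.
grep RunIperf /var/run/log/vsanmgmt.log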

[root@esx02:/vmfs/volumes/602242cd-4532e87e-d3bd-0c42a10cf71c/log]  /usr/lib/vmware/vsan/bin/iperf3.copy -h
Usage: iperf [-s|-c host] [options]
       iperf [-h|--help] [-v|--version]

Server or Client:
  -p, --port      #         server port to listen on/connect to
  -f, --format    [kmgKMG]  format to report: Kbits, Mbits, KBytes, MBytes
  -i, --interval  #         seconds between periodic bandwidth reports
  -F, --file name           xmit/recv the specified file
  -A, --affinity n/n,m      set CPU affinity
  -B, --bind      <host>    bind to a specific interface
  -V, --verbose             more detailed output
  -J, --json                output in JSON format
  --logfile f               send output to a log file
  --forceflush              force flushing output at every interval
  -d, --debug               emit debugging output
  -v, --version             show version information and quit
  -h, --help                show this message and quit
Server specific:
  -s, --server              run in server mode
  -D, --daemon              run the server as a daemon
  -I, --pidfile file        write PID file
  -1, --one-off             handle one client connection then exit
Client specific:
  -c, --client    <host>    run in client mode, connecting to <host>
  -u, --udp                 use UDP rather than TCP
  -b, --bandwidth #[KMG][/#] target bandwidth in bits/sec (0 for unlimited)
                            (default 1 Mbit/sec for UDP, unlimited for TCP)
                            (optional slash and packet count for burst mode)
  -t, --time      #         time in seconds to transmit for (default 10 secs)
  -n, --bytes     #[KMG]    number of bytes to transmit (instead of -t)
  -k, --blockcount #[KMG]   number of blocks (packets) to transmit (instead of -t or -n)
  -l, --len       #[KMG]    length of buffer to read or write
                            (default 128 KB for TCP, dynamic or 1 for UDP)
  --cport         <port>    bind to a specific client port (TCP and UDP, default: ephemeral port)
  -P, --parallel  #         number of parallel client streams to run
  -R, --reverse             run in reverse mode (server sends, client receives)
  -w, --window    #[KMG]    set window size / socket buffer size
  -C, --congestion <algo>   set TCP congestion control algorithm (Linux and FreeBSD only)
  -M, --set-mss   #         set TCP/SCTP maximum segment size (MTU - 40 bytes)
  -N, --no-delay            set TCP/SCTP no delay, disabling Nagle's Algorithm
  -4, --version4            only use IPv4
  -6, --version6            only use IPv6
  -S, --tos N               set the IP 'type of service'
  -L, --flowlabel N         set the IPv6 flow label (only supported on Linux)
  -Z, --zerocopy            use a 'zero copy' method of sending data
  -O, --omit N              omit the first n seconds
  -T, --title str           prefix every output line with this string
  --get-server-output       get results from server
  --udp-counters-64bit      use 64-bit counters in UDP test packets

[KMG] indicates options that support a K/M/G suffix for kilo-, mega-, or giga-

This can be further confirmed by running the test over SSH between two hosts instead of from the UI; see the Additional Information section for how to set up the test.
  • These tests consume bandwidth by design, so running them on live production systems is not recommended: VSAN performance will be impacted if congestion points are reached.
  • Test results may vary, and this article is not a benchmarking reference; different NIC, cable, and switch combinations can produce different results.

Resolution

No resolution is required as this is working by design; this may change in the future if VMware updates the test.

As of vSphere 8, the network test is capable of utilizing and testing a 25Gb network.

[Screenshot: VxRail GUI showing that the network test is capable of utilizing and testing a 25Gb network]

Additional Information

For additional information and guidance on this type of testing, see these popular blogs (these hyperlinks take you to websites outside of Dell Technologies):
  • https://williamlam.com/2016/03/quick-tip-iperf-now-available-on-esxi.html
  • https://fasterdata.es.net/performance-testing/network-troubleshooting-tools/iperf/multi-stream-iperf3/
Setting up host 1 as a single "Server":
  • Single threaded to simulate the UI test.
  • Using the VSAN vmkernel IP to simulate the UI test.
# Disable the firewall
[root@esx01:~] esxcli network firewall set --enabled false
# Start the server side
[root@esx01:~] /usr/lib/vmware/vsan/bin/iperf3.copy -s -B x.x.x.11
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Running the test from host 2:
[root@esx02:/vmfs/volumes/602242cd-4532e87e-d3bd-0c42a10cf71c/log]  /usr/lib/vmware/vsan/bin/iperf3.copy -c x.x.x.11 -b 10G -w 2048k -t 15 -O 10
Connecting to host x.x.x.11, port 5201
[  4] local x.x.x.12 port 31273 connected to x.x.x.11 port 5201
iperf3: getsockopt - Function not implemented
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  1.14 GBytes  9.76 Gbits/sec  8634728   0.00 Bytes       (omitted)
iperf3: getsockopt - Function not implemented
[  4]   1.00-2.00   sec  1.17 GBytes  10.0 Gbits/sec    0   0.00 Bytes       (omitted)
iperf3: getsockopt - Function not implemented
...
[  4]   1.00-2.00   sec  1.16 GBytes  9.92 Gbits/sec    0   0.00 Bytes
iperf3: getsockopt - Function not implemented
[  4]   2.00-3.00   sec  1.18 GBytes  10.1 Gbits/sec    0   0.00 Bytes
iperf3: getsockopt - Function not implemented
[  4]   3.00-3.58   sec   661 MBytes  9.56 Gbits/sec  4286332568   0.00 Bytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-3.58   sec  4.18 GBytes  10.0 Gbits/sec  3412112096             sender
Removing the 10Gb "Bandwidth Target" results in higher single-threaded bandwidth:
[root@esx02:/vmfs/volumes/602242cd-4532e87e-d3bd-0c42a10cf71c/log]  /usr/lib/vmware/vsan/bin/iperf3.copy -c x.x.x.11 -w 2048k -t 15 -O 10
Connecting to host x.x.x.11, port 5201
[  4] local x.x.x.12 port 12960 connected to x.x.x.11 port 5201
iperf3: getsockopt - Function not implemented
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  1.69 GBytes  14.6 Gbits/sec  8634728   0.00 Bytes       (omitted)
iperf3: getsockopt - Function not implemented
[  4]   1.00-2.00   sec  1.81 GBytes  15.5 Gbits/sec    0   0.00 Bytes       (omitted)
iperf3: getsockopt - Function not implemented
[  4]   2.00-3.00   sec  1.82 GBytes  15.7 Gbits/sec    0   0.00 Bytes       (omitted)
iperf3: getsockopt - Function not implemented
[  4]   3.00-4.00   sec  1.74 GBytes  14.9 Gbits/sec    0   0.00 Bytes       (omitted)
...
[  4]   4.00-5.00   sec  1.79 GBytes  15.3 Gbits/sec    0   0.00 Bytes       (omitted)
iperf3: getsockopt - Function not implemented
[  4]   5.00-6.00   sec  1.65 GBytes  14.2 Gbits/sec    0   0.00 Bytes       (omitted)
iperf3: getsockopt - Function not implemented
[  4]   6.00-6.65   sec  1.22 GBytes  16.0 Gbits/sec  4286332568   0.00 Bytes       (omitted)
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-6.65   sec  11.7 GBytes  15.1 Gbits/sec    0             sender
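
As an alternative to starting multiple server/client pairs as shown next, iperf3 also supports parallel streams within a single session through the "-P" flag listed in the help output above. A minimal sketch under that assumption; the stream count of 4 is illustrative:

# Single client session with 4 parallel TCP streams (server side unchanged).
/usr/lib/vmware/vsan/bin/iperf3.copy -c x.x.x.11 -P 4 -w 2048k -t 15 -O 10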
To consume even more bandwidth during testing, multiple sessions can be run.
On host 1 start multiple servers on different ports:
[root@esx01:~] /usr/lib/vmware/vsan/bin/iperf3.copy -s -B x.x.x.11 -p 5201 &
[root@esx01:~] -----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
[root@esx01:~] /usr/lib/vmware/vsan/bin/iperf3.copy -s -B x.x.x.11 -p 5202 &
[root@esx01:~] -----------------------------------------------------------
Server listening on 5202
-----------------------------------------------------------
Start two client sessions on host 2, as in the example below:
[root@esx02:/vmfs/volumes/602242cd-4532e87e-d3bd-0c42a10cf71c/log]  /usr/lib/vmware/vsan/bin/iperf3.copy -c x.x.x.11 -p 5201 -w 2048k -t 15 -O 10 &
[root@esx02:/vmfs/volumes/602242cd-4532e87e-d3bd-0c42a10cf71c/log] Connecting to host x.x.x.11, port 5201
[  4] local x.x.x.12 port 58594 connected to x.x.x.11 port 5201
/usr/lib/vmware/vsan/bin/iperf3.copy -c x.x.x.11 -p 5202 -w 2048k -t 15 -O 10
iperf3: getsockopt - Function not implemented
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  1.19 GBytes  10.2 Gbits/sec  8634728   0.00 Bytes       (omitted)

Connecting to host x.x.x.11, port 5202
[  4] local x.x.x.12 port 58598 connected to x.x.x.11 port 5202
iperf3: getsockopt - Function not implemented
[  4]   1.00-2.00   sec  1.30 GBytes  11.1 Gbits/sec    0   0.00 Bytes       (omitted)
iperf3: getsockopt - Function not implemented
[  4]   2.00-3.00   sec   849 MBytes  7.12 Gbits/sec    0   0.00 Bytes       (omitted)
iperf3: getsockopt - Function not implemented
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   934 MBytes  7.83 Gbits/sec  8634728   0.00 Bytes       (omitted)
iperf3: getsockopt - Function not implemented
[  4]   3.00-4.00   sec  1.43 GBytes  12.3 Gbits/sec    0   0.00 Bytes       (omitted)
iperf3: getsockopt - Function not implemented
[  4]   1.00-2.00   sec   977 MBytes  8.19 Gbits/sec    0   0.00 Bytes       (omitted)
iperf3: getsockopt - Function not implemented
[  4]   4.00-5.00   sec  1.32 GBytes  11.3 Gbits/sec    0   0.00 Bytes       (omitted)
iperf3: getsockopt - Function not implemented
...
[  4]  10.00-11.00  sec  1.19 GBytes  10.2 Gbits/sec    0   0.00 Bytes
iperf3: getsockopt - Function not implemented
[  4]  13.00-14.00  sec  1.17 GBytes  10.0 Gbits/sec    0   0.00 Bytes
iperf3: getsockopt - Function not implemented
[  4]  11.00-12.00  sec  1.17 GBytes  10.1 Gbits/sec    0   0.00 Bytes
[  4]  12.00-13.00  sec  1.18 GBytes  10.2 Gbits/sec    0   0.00 Bytes
[  4]  14.00-15.00  sec  1.17 GBytes  10.0 Gbits/sec  4286332568   0.00 Bytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-15.00  sec  17.6 GBytes  10.1 Gbits/sec  102662309             sender
[  4]   0.00-15.00  sec  17.7 GBytes  10.1 Gbits/sec                  receiver

iperf Done.
[  4]  13.00-14.00  sec  1.79 GBytes  15.4 Gbits/sec    0   0.00 Bytes
[  4]  14.00-15.00  sec  1.78 GBytes  15.3 Gbits/sec  4286332568   0.00 Bytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-15.00  sec  18.6 GBytes  10.6 Gbits/sec  919557101             sender
[  4]   0.00-15.00  sec  18.7 GBytes  10.7 Gbits/sec                  receiver

iperf Done.
  • We can see two active sessions averaging approximately 10.1 and 10.7 Gbit/s respectively, for a combined throughput of roughly 20.8 Gbit/s.
  • More parallel iperf sessions consume more bandwidth, depending on multiple variables: CPU cores, network settings, ESXi port-group policies, NIC vendor/firmware, cabling, and switch vendor/configuration. A scripted version of this approach is sketched below.
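
To scale the same approach without typing each command by hand, the server/client pairs can be scripted. A minimal sketch, assuming the same iperf3.copy binary, VSAN vmkernel IPs, and port range used above; the pair count of 4 is illustrative:

# On host 1: start 4 iperf3 servers on sequential ports, in the background.
for i in 1 2 3 4; do
  /usr/lib/vmware/vsan/bin/iperf3.copy -s -B x.x.x.11 -p $((5200 + i)) &
done

# On host 2: start one client per server port and wait for all of them to finish.
for i in 1 2 3 4; do
  /usr/lib/vmware/vsan/bin/iperf3.copy -c x.x.x.11 -p $((5200 + i)) -w 2048k -t 15 -O 10 &
done
wait

When testing is complete, kill any remaining background servers and re-enable the firewall on the hosts where it was disabled (esxcli network firewall set --enabled true).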

Article Properties


Product

VxRail, VxRail Appliance Family, VxRail Appliance Series, VxRail Software

Last Published Date

01 Feb 2023

Version

2

Article Type

Solution