
How to Configure Guest RDMA on Windows Server 2019

Summary: This article explains how to configure Guest RDMA on Windows Server 2019.


Article Content


Instructions

Table of Contents

  1. Introduction to Remote Direct Memory Access (RDMA)

  2. Lab Environment

  3. Hardware Configuration

  4. Configuring Guest RDMA

  5. PowerShell Cmdlets

  6. Download Links


1. Introduction to Remote Direct Memory Access (RDMA)

Remote Direct Memory Access (RDMA) is a technology that enables computers to transfer data across the network without involving the CPU or operating system of the hosts involved (compute/storage), improving throughput and performance while reducing latency and CPU overhead.

There are two popular RDMA implementations today:

RoCE
- Transport: UDP/IP (RoCE v2)
- Relies on DCB (Data Center Bridging)

iWARP
- Underlying Network: TCP/IP
- TCP provides flow control and congestion management


RoCE relies heavily on DCB configuration such as ETS (Enhanced Transmission Selection) and PFC (Priority Flow Control), which can become a problem if the network switches are not configured properly. iWARP does not require any switch configuration.
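Before going further, it is worth confirming that the physical adapters actually expose RDMA to the operating system. The following is a minimal sketch using the inbox NetAdapter and SMB cmdlets only; the adapter names shown in the output will be whatever your NICs are called.

#List the adapters and whether RDMA is enabled on each
Get-NetAdapterRdma | Format-Table Name, InterfaceDescription, Enabled

#SMB's view of the same interfaces (RdmaCapable should be True on the RDMA NICs)
Get-SmbClientNetworkInterface | Format-Table FriendlyName, RdmaCapable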

Microsoft started supporting RDMA in Windows Server 2012 and has added new features in later Windows Server releases. One feature available on Microsoft's newest OS, Windows Server 2019, is the ability to present RDMA to the Guest OS (VM). This gives the guest the same low-latency access to network storage as the native host, reducing CPU overhead and improving throughput and performance directly in the VM.


Dell EMC offers several options for 25Gbps RDMA, such as the Cavium QLogic FastLinQ 41262 Dual Port 25 GbE SFP28 (iWARP/RoCE) and the Mellanox ConnectX-4 Lx 25Gbps (RoCE). This example uses the Mellanox ConnectX-4 Lx (RoCE v2 mode) to demonstrate the Guest RDMA feature.

2. Lab Environment

Servers: 2 x Dell EMC R7425 (AMD EPYC 7551 32-Core Processor), 256GB memory, Mellanox ConnectX-4 Lx, fully updated (BIOS, firmware, drivers, and OS)
Roles/Features Installed: Hyper-V, DCB, Failover Clustering, S2D
Switch: Dell EMC S5048F-ON – MGMT VLAN 2, SMB VLAN 15
 
Dell EMC recommends updating the BIOS, firmware, drivers, and operating system as part of your scheduled update cycle. These updates are intended to improve the reliability, stability, and security of your system.

3. Hardware Configuration

1. Reboot the servers and go to the System Setup (press F2 during POST).

2. Select Device Settings.


Figure 1 - BIOS Device Settings
 
3. Select the NIC in Slot 1 Port 1 - Mellanox.

Figure 2 - Mellanox Slot 1 Port 1 Device Settings
 
4. Go to Device Level Configuration.

Figure 3 - Device Level Configuration
 
5. Select SR-IOV in Virtualization Mode.

Figure 4 - SR-IOV Setting 
 
6. Repeat the steps above on the NIC in Slot 1 Port 2 - Mellanox.

Figure 5 - Mellanox Slot 1 Port 2 Device Settings
 
7. Go back to the System Setup Main Menu, then select System BIOS.

Figure 6 - System BIOS
 
8. Select Integrated Devices.

Figure 7 - BIOS Integrated Devices
 
9. Enable the SR-IOV Global Enable option.

Figure 8 - SR-IOV Global
 
10. Save your configuration and reboot the server.
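After the reboot, the SR-IOV change can be confirmed from the operating system before any Hyper-V configuration is done. This is a minimal sketch; the adapter names in the output will be your Mellanox ports, and the Get-VMHost check only works once the Hyper-V role from the next section is installed.

#Confirm the adapters report SR-IOV support after the BIOS change
Get-NetAdapterSriov | Format-Table Name, InterfaceDescription, SriovSupport, NumVFs

#Once Hyper-V is installed, confirm the host reports no I/O virtualization blockers
(Get-VMHost).IovSupportReasons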
 

4. Configuring Guest RDMA


1. Install Windows Server 2019.
2. Install the Hyper-V role and the Data Center Bridging (DCB) feature.
3. Configure QoS (Quality of Service), DCB, PFC, and ETS. Make sure that the server NIC and QoS configuration matches the switch configuration.
4. Configure a Hyper-V SET (Switch Embedded Teaming) virtual switch. A sketch covering steps 2-4 follows Figure 9.

Figure 9 - vSwitch Configuration
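The following sketch shows one way to script steps 2-4. The adapter names, the switch name, the 802.1p priority value (3), and the 50% ETS bandwidth reservation are example values only and must match your switch configuration.

#Install the required role and feature (step 2)
Install-WindowsFeature -Name Hyper-V, Data-Center-Bridging -IncludeManagementTools -Restart

#Tag SMB Direct traffic with priority 3 (example value)
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

#Enable PFC only for the SMB priority and reserve ETS bandwidth for it (50% is an example)
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

#Apply DCB/QoS on the physical RDMA ports (adapter names are examples)
Enable-NetAdapterQos -Name "SLOT 1 PORT 1","SLOT 1 PORT 2"

#Create the SET switch with SR-IOV enabled (step 4) - same cmdlet as listed in section 5
New-VMSwitch -Name "S2DSwitch" -NetAdapterName "SLOT 1 PORT 1","SLOT 1 PORT 2" -AllowManagementOS $true -EnableEmbeddedTeaming $true -EnableIov $true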
 
5. Test RDMA communication between the physical servers prior to configuring the VMs. Download Microsoft Diskspd and the Microsoft Test-RDMA PowerShell script. Continue with the steps below only if communication is working properly; otherwise, check the switch configuration and/or the DCB settings on the hosts. An example invocation follows Figure 10.

Figure 10 - Test-Rdma Physical Hosts
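An example host-to-host run of the Test-RDMA script, mirroring the command listed in section 5. The interface index, remote SMB address, and Diskspd path below are placeholders for illustration only.

#Find the interface index (ifIndex) of the host vNIC carrying SMB traffic
Get-NetAdapter | Format-Table Name, InterfaceDescription, ifIndex, Status

#Run the Microsoft Test-RDMA script against the other host's SMB IP address (values are examples)
.\Test-Rdma.ps1 -IfIndex 12 -IsRoCE $true -RemoteIpAddress 192.168.15.2 -PathToDiskspd C:\Tools\Diskspd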
 
6. Verify that SR-IOV is enabled on the RDMA adapters on both servers.

Figure 11 - SR-IOV Enabled
 
7. Create two Gen 2 VMs (Guest OS), one on each server, then install Windows Server 2019. In this scenario, each Guest OS is created with two vNICs: one for MGMT traffic (VLAN 2) and one for SMB traffic (VLAN 15). A provisioning sketch follows Figure 13.

Figure 12 - Guest OS Network Configuration Host R7425-01

Figure 13 - Virtual Machine Network Configuration Host R7425-02
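A minimal sketch of the VM provisioning on one host. The VM name, memory size, VHDX path, and switch name are placeholders; the VLAN IDs (2 for MGMT, 15 for SMB) are the ones used in this lab.

#Create a Gen 2 VM attached to the SET switch (names and sizes are examples)
New-VM -Name "VM01" -Generation 2 -MemoryStartupBytes 8GB -NewVHDPath "C:\VMs\VM01.vhdx" -NewVHDSizeBytes 60GB -SwitchName "S2DSwitch"

#Rename the default adapter for MGMT traffic and add a second vNIC for SMB traffic
Rename-VMNetworkAdapter -VMName "VM01" -NewName "MGMT"
Add-VMNetworkAdapter -VMName "VM01" -Name "SMB" -SwitchName "S2DSwitch"

#Tag the vNICs with the MGMT and SMB VLANs
Set-VMNetworkAdapterVlan -VMName "VM01" -VMNetworkAdapterName "MGMT" -Access -VlanId 2
Set-VMNetworkAdapterVlan -VMName "VM01" -VMNetworkAdapterName "SMB" -Access -VlanId 15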
 
8. Shut down the VMs.
9. Enable SR-IOV and RDMA on the Guest OS (a sketch follows Figure 14).

Figure 14 - Enable SR-IOV/RDMA on Guest OSes
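The same cmdlets listed in section 5, shown here with example names for one VM and its SMB vNIC (both placeholders):

#Enable SR-IOV and RDMA on the SMB vNIC of the VM (run on the host while the VM is off)
Get-VM -Name "VM01" | Set-VMNetworkAdapter -Name "SMB" -IovWeight 100 -IovQueuePairsRequested 2
Get-VM -Name "VM01" | Set-VMNetworkAdapterRdma -Name "SMB" -RdmaWeight 100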
 
10. Start the VMs, then open Device Manager. The Mellanox Virtual Function (VF) should be listed under Network Adapters. The VF is not presented as a regular network adapter in Network Connections, as seen in Figure 15.

Figure 15 - Guest OS Device Manager and Network Connections
 
NOTE: A NIC driver may need to be installed to enable RDMA in the Guest Operating System.
11. Enable RDMA on the SMB vNIC (a sketch follows Figure 16). RDMA functionality is already enabled on the Mellanox VF (Ethernet4 - Figure 16).

Figure 16 - Enable RDMA on SMB vNIC
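This is the same Enable-NetAdapterRdma cmdlet listed in section 5, run inside the guest. The adapter name below is a placeholder for whatever name the SMB vNIC received in the guest (for example "Ethernet 2").

#Run inside the Guest OS - enable RDMA on the SMB vNIC (name is an example)
Enable-NetAdapterRdma -Name "Ethernet 2"

#Confirm that both the SMB vNIC and the Mellanox VF now report RDMA as enabled
Get-NetAdapterRdma | Format-Table Name, InterfaceDescription, Enabled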
 
12. Test Guest RDMA (an example follows Figure 17).
Note: It is important to specify the IfIndex (vNIC interface index) and the VfIndex (Mellanox VF interface index).

Figure 17 - Test-RDMA Guest OS
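An example run inside the guest, mirroring the command in section 5. The two interface indexes, the remote SMB address, and the Diskspd path are placeholders; look the indexes up first with Get-NetAdapter.

#Inside the Guest OS: find the ifIndex of the SMB vNIC and of the Mellanox VF
Get-NetAdapter | Format-Table Name, InterfaceDescription, ifIndex

#Run Test-RDMA with both indexes (values are examples)
.\Test-Rdma.ps1 -IfIndex 6 -IsRoCE $true -RemoteIpAddress 192.168.15.12 -PathToDiskspd C:\Tools\Diskspd -VfIndex 9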
 

5. PowerShell Cmdlets


#Create a new virtual switch with the SR-IOV option enabled
New-VMSwitch -Name xxxx -NetAdapterName xxxx,xxxx -AllowManagementOS $true -EnableEmbeddedTeaming $true -EnableIov $true

#Verify that SR-IOV is enabled on the physical adapter
Get-NetAdapterSriov -Name xxxx

#Get VM network configuration
Get-VM -Name xxxx | Get-VMNetworkAdapter

#Get VM network VLAN configuration
Get-VM -Name xxxx | Get-VMNetworkAdapterVlan

#Set SR-IOV and RDMA on the Virtual Machine (Guest OS) vNIC
Get-VM -Name xxxx | Set-VMNetworkAdapter -Name xxx -IovWeight 100 -IovQueuePairsRequested 2
Get-VM -Name xxxx | Set-VMNetworkAdapterRdma -Name xxx -RdmaWeight 100

#Enable RDMA on NetAdapter
Enable-NetAdapterRdma -Name xxxx

#Test-Rdma Physical Host
.\Test-Rdma.ps1 -IfIndex xx -IsRoCE $true -RemoteIpAddress xxx.xxx.xxx.xxx -PathToDiskspd xxxxx

#Test-Rdma Virtual Machine (Guest OS)
.\Test-Rdma.ps1 -IfIndex xx -IsRoCE $true -RemoteIpAddress xxx.xxx.xxx.xxx -PathToDiskspd xxxxx -VfIndex xx
 

6. Download Links


• Microsoft Diskspd
• Microsoft Test-RDMA Script

 
Have any comments, questions, or suggestions? Please contact us at WinServerBlogs@dell.com.
 

Article Properties


Affected Product

PowerEdge, Microsoft Windows Server 2019

Last Published Date

15 Sept 2021

Version

7

Article Type

How To