Hyper-V networking slow
Dell PowerEdge T420, Windows Server 2008 R2 with Hyper-V installed, and a single 2008 R2 virtual machine set up within the Hyper-V environment. The two NICs on the T420 are connected to the same gigabit switch: one for the physical machine's use and the other for the virtual network.
The only thing being done in the VM is Remote Desktop Services, with end users running a medical records program that accesses a database on a completely separate server. The physical server just runs Hyper-V, a domain controller, and DNS.
With no activity, pinging the guest from the physical machine (or vice versa), or pinging a separate database server from the guest, we get under 1 ms response times. However, if there is any activity in the guest OS, ping times vary drastically -- between 1 ms and 200 ms.
I have the various offloads disabled in the virtual network adapter's properties, but that does not seem to make any difference. Looking at Resource Monitor on the virtual machine, there is generally under 5% of the CPU and 10% of the memory in use, and network traffic when the application is running is under 100 Kbps (usually under 50 Kbps).
On the physical server, CPU and network traffic are basically the same as on the virtual machine, with 80% of the memory in use (since it is all allocated to Hyper-V).
Any thoughts on how to solve the slow networking?
Thanks in advance!
A p u
January 6th, 2013 14:00
The solution to the problem turns out to be...
The Broadcom NICs in this Dell server have a setting, "Virtual Machine Queues," that defaults to Enabled. Once it is disabled, the guest OS gets the same network speed as the physical machine.
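For anyone finding this later: on hosts running Server 2012 or newer, the same setting can be checked and changed from PowerShell (a sketch; the NetAdapter module is not available on 2008 R2, where you would instead use the Advanced tab of the NIC driver's properties, and the adapter name below is a placeholder):

```powershell
# List physical adapters and whether VMQ is enabled on each
Get-NetAdapterVmq | Format-Table Name, Enabled

# Disable VMQ on the adapter bound to the virtual switch
# ("NIC2" is a placeholder -- substitute your adapter's name)
Disable-NetAdapterVmq -Name "NIC2"
```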
danielslaughter
June 14th, 2013 12:00
That worked for me. Thanks!
Francisco M.
April 14th, 2014 14:00
PPRINetadmin
August 20th, 2019 09:00
OK, I know this is a very old thread.
But I am having this same problem on our PowerEdge R820 servers running Hyper-V in a four-server cluster.
These servers do not have a Broadcom card; they use Intel I350-T 4P NICs. But the same setting (Virtual Machine Queues) is still there and is still Enabled by default.
Before I go and start changing settings, I need to ask: does this NIC have the same issues as the Broadcom?
RachelGomez
July 10th, 2022 21:00
This performance problem has nothing to do with OpenEdge; the underlying architecture is transparent to our communication layers. The following information is recorded as known "first step" advice, after which vendor engagement is required for further assistance.
The first troubleshooting step is to ensure that the adapter driver and firmware are up to date on the Hyper-V host.
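On Server 2012 and later hosts, the currently installed driver can be checked from PowerShell (a sketch using the standard NetAdapter module):

```powershell
# Show the driver version and date for each physical adapter
Get-NetAdapter -Physical | Format-Table Name, InterfaceDescription, DriverVersion, DriverDate
```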
If the adapter driver and firmware are up to date and the problem persists:
1. Disable VMQ on the network adapters:
To disable VMQ on a virtual switch's management-OS adapter, use the Set-VMNetworkAdapter PowerShell cmdlet (substitute your adapter name for the placeholder):
Set-VMNetworkAdapter -ManagementOS -Name <AdapterName> -VmqWeight 0
To disable VMQ on a physical network adapter:
Uncheck the appropriate box in the Advanced tab of the network adapter's properties page
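On Server 2012 and later, the same Advanced-tab setting can also be changed from PowerShell (a sketch; the adapter name is a placeholder, and `*VMQ` is the standardized registry keyword behind the GUI checkbox):

```powershell
# Set the advanced "Virtual Machine Queues" property to Disabled (0)
# "NIC1" is a placeholder for your physical adapter's name
Set-NetAdapterAdvancedProperty -Name "NIC1" -RegistryKeyword "*VMQ" -RegistryValue 0
```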
2. Change the MAC addresses of the virtual switch(es)
Either modify them in Hyper-V Manager or use one of the following Set-VMNetworkAdapter PowerShell cmdlets:
Use a static MAC address:
Set-VMNetworkAdapter -ManagementOS -Name <AdapterName> -StaticMacAddress <MacAddress>
Use a dynamic MAC address:
Set-VMNetworkAdapter -ManagementOS -Name <AdapterName> -DynamicMacAddress
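After making either change, the result can be verified with the matching Get cmdlet (a sketch using the same Hyper-V module):

```powershell
# Confirm VMQ weight and MAC settings on the management-OS adapters
Get-VMNetworkAdapter -ManagementOS | Format-Table Name, MacAddress, VmqWeight, DynamicMacAddressEnabled
```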
Regards,
Rachel Gomez