edurojma
1 Copper

Very low performance on Windows Server 2012 R2 Hyper-V VMs in cluster connected to PowerVault MD3200i via iSCSI

Hi, we are currently experiencing serious performance issues on the LUNs of a couple of VMs connected to a Dell iSCSI SAN. Here is my scenario:

  • 4 Windows Server 2012 R2 Hyper-V hosts in a Hyper-V cluster, connected to a Dell PowerVault MD3200i using the iSCSI initiator and with the Dell MD Storage software installed.
  • The cluster runs around 40 VMs without any performance issues.
  • Each Hyper-V host has 4 NICs connected to the SAN; 2 of these are also used on the Hyper-V network.
  • 2 VMs are clustered and connected to the Dell PowerVault using the iSCSI initiator.
  • Each VM also has the Dell MD Storage software installed.

The problem is that the LUNs configured on these VMs are extremely slow. They take forever to format, forever to copy files, forever to do anything.

I have already set Jumbo Frames to 9000 on the VM NIC connected to the SAN, as the SAN vendor instructed us to. They say the SAN is correctly configured and that the problem is related to the virtualization environment.

I even tested a VM outside the Hyper-V cluster, on a standalone Hyper-V host, and that VM works perfectly fine connected to the Dell PowerVault! No performance issues at all!

Any help on this matter is deeply appreciated.

Regards!

3 Replies
Dev Mgr
6 Gallium

RE: Very low performance on Windows Server 2012 R2 Hyper-V VMs in cluster connected to PowerVault MD3200i via iSCSI

What you're doing is referred to as Guest-attached-iSCSI (there are other names for it as well like direct-iSCSI and some others).

A few notes for this:

- Try to avoid sharing the same physical NIC ports between host and guest iSCSI. So with 4 NICs available, use 2 for the host iSCSI (give them each an IP in 2 of the 4 subnets); then make each of the other 2 NICs a virtual switch, but uncheck "allow management traffic", so that IPv4 and IPv6 are unbound from the physical NIC and those 2 NICs are dedicated to just VM traffic.

- Enable jumbo frames and flow control on the physical NICs if not already done

- Enable jumbo frames on the virtual NIC if possible (don't remember if the 2012 R2 virtual NIC allows this)

- Ensure the switches are proper iSCSI switches (if from Dell they should be a 6200-series or higher model)

- Ensure these switches are properly configured for iSCSI; this means more than just turning on "iSCSI optimization".

- Your best bet is to dedicate the switches to iSCSI traffic and nothing else (especially no LAN traffic and/or routing)
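The NIC-dedication steps above can be sketched in PowerShell. This is a minimal sketch; the adapter names, switch names, and subnets are examples and must be adjusted to your environment:

```powershell
# Hypothetical adapter names - check Get-NetAdapter for the real ones.
# Host iSCSI: give 2 NICs static IPs in 2 of the 4 iSCSI subnets.
New-NetIPAddress -InterfaceAlias "iSCSI-Host1" -IPAddress 192.168.130.101 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "iSCSI-Host2" -IPAddress 192.168.131.101 -PrefixLength 24

# Guest iSCSI: one external vSwitch per remaining NIC, with
# -AllowManagementOS $false so the host keeps no IP bindings on them.
New-VMSwitch -Name "iSCSI-Guest1" -NetAdapterName "iSCSI-VM1" -AllowManagementOS $false
New-VMSwitch -Name "iSCSI-Guest2" -NetAdapterName "iSCSI-VM2" -AllowManagementOS $false

# Jumbo frames on the physical NICs (the property's display name and
# values vary by driver; list them with Get-NetAdapterAdvancedProperty first).
Set-NetAdapterAdvancedProperty -Name "iSCSI-VM1" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"
```

Running the IP assignment on the host-iSCSI NICs and creating the switches on the guest-iSCSI NICs keeps the two traffic paths on separate physical ports, which is the point of the first bullet.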

Another note: the current MD support matrix does not list Windows Server 2012 R2 as a supported operating system. The latest firmware, released a few weeks ago, did list support for 2012 R2, but the latest available resource DVD doesn't list this OS (yet). This may play a role as well.


edurojma
1 Copper

RE: Very low performance on Windows Server 2012 R2 Hyper-V VMs in cluster connected to PowerVault MD3200i via iSCSI

OK, so I dedicated one SAN NIC to the VMs, removed the TCP/IP bindings on the Hyper-V host to ensure it is not used by the physical server, and configured the connection inside the VM, but the performance is still terrible. I'm trying to copy an 800 MB file on a LUN assigned to the VM, and it says it's going to take 4 hours! And the file resides in the same LUN!

Jumbo Frames are enabled on both the physical NIC and the VM NIC.
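One quick way to confirm jumbo frames actually work end to end (rather than just being enabled on each NIC) is a non-fragmenting ping from inside the VM to the SAN's iSCSI portal. The payload size 8972 is the 9000-byte MTU minus 28 bytes of IP and ICMP headers; the SAN IP below is an example:

```powershell
# If any hop drops jumbo frames, this reports
# "Packet needs to be fragmented but DF set." instead of a reply.
ping -f -l 8972 192.168.130.10
```

If this ping fails while a normal ping succeeds, something in the path (vSwitch, physical NIC, or switch port) is not passing jumbo frames.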

Flow Control is enabled (Rx and Tx) on the physical NIC; there is no option for that on the VM NIC.

The switches are dedicated to the SAN; they came along with it, and we are not using them for anything else.

I'm guessing it's probably a driver issue or something, because the SAN works perfectly fine on the physical servers that host all the VMs.

Is there anything else that can be tested?

Thanks


RE: Very low performance on Windows Server 2012 R2 Hyper-V VMs in cluster connected to PowerVault MD3200i via iSCSI

Updating the network drivers on the hosting Hyper-V server helped resolve the same problem.
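For anyone hitting this later, the installed NIC driver versions on each host can be checked before and after the update with a one-liner (these are standard properties of the Get-NetAdapter output):

```powershell
# List each adapter with its driver version and date.
Get-NetAdapter | Sort-Object Name |
  Format-Table Name, InterfaceDescription, DriverVersion, DriverDate
```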
