
December 12th, 2013 04:00

Network speed issues

Hey

I hope this is posted in the right place :)

We’re having some issues with network speed on our setup.

We have 2x EqualLogic PS4100 (model 70-0476) SANs in a group, 2x Windows 2012 R2 file servers in a failover cluster, 3x Windows 2012 R2 Hyper-V servers in a failover cluster, and 2x Force10 S25N switches stacked in a ring topology with redundant 12G links.
All servers are Dell PowerEdge R620s with Broadcom NICs.

The 3 Hyper-V servers are connected with 4x 1 Gbit connections, teamed with LACP.
The 2 file servers are connected with 2x 1 Gbit connections, teamed with LACP, to the Hyper-V servers, and with 2x 1 Gbit connections, using MPIO, to the storage network.
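
For reference, the teams were created roughly like this (the team and adapter names here are just placeholders, not our actual ones):

    # Create a 4-NIC LACP team on a Hyper-V host (Server 2012 R2)
    New-NetLbfoTeam -Name "VM-Team" `
        -TeamMembers "NIC1","NIC2","NIC3","NIC4" `
        -TeamingMode Lacp `
        -LoadBalancingAlgorithm Dynamic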

The networks are split by VLANs, so storage has its own VLAN.
The ports on the storage network are set up with “MTU 9000” and “flowcontrol rx on tx on”; the rest are without flow control and at MTU 1500. The file servers are also configured with MTU 9000 and flow control enabled (a quick end-to-end jumbo-frame check is shown below).
STP is not configured on the switch.
LLDP is not configured.
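
This is how we sanity-check the jumbo-frame path end to end (the adapter name pattern and SAN group IP are placeholders):

    # Check the configured jumbo packet size on the iSCSI NICs
    # (the "*JumboPacket" keyword is common, but can differ per driver)
    Get-NetAdapterAdvancedProperty -Name "iSCSI*" -RegistryKeyword "*JumboPacket"

    # Send a full 9000-byte frame with "don't fragment" set:
    # 8972 bytes of ICMP payload + 28 bytes of headers = 9000
    ping -f -l 8972 10.10.10.10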

When we copy large files (200 MB to 1.5 GB each, 15 GB total) from the SAN (a share on the file-server cluster) to the file server not currently in use, it copies at 60-150 MB/s.
When we copy from the SAN (a share on the file-server cluster) to a virtual machine on the same SAN, it copies at 60-80 MB/s but often drops to 0 for 1-2 seconds before regaining speed.

I would have expected the copy to the virtual machine to be as fast as the file servers, or even faster, considering it should be using ODX (unless I’ve misunderstood how ODX works when using Windows file servers).


Has anyone seen this issue, and do you have any ideas as to where the problem might lie?


December 12th, 2013 21:00

iSCSI/EqualLogic basics: don't team the NICs you are using for iSCSI. Let the HIT Kit's MPIO sort it out (unless you run an OS for which there is no EqualLogic HIT Kit, in which case you would usually use native MPIO, and still not team the iSCSI NICs).
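
If you do fall back to native MPIO, the basic setup is roughly this (Least Queue Depth is a common choice for iSCSI, not an EqualLogic-specific recommendation):

    # Install the MPIO feature and have it claim iSCSI disks (Server 2012 R2)
    Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO
    Enable-MSDSMAutomaticClaim -BusType iSCSI

    # Set the default load-balance policy (LQD = Least Queue Depth)
    Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD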

iSCSI traffic should be isolated from LAN traffic using at least dedicated (non-routed) VLANs, and optionally dedicated switches. This obviously also means iSCSI needs its own (whole) subnet. The only reason to route iSCSI traffic would be replication to another site's EqualLogic SAN.

Switch flow control should be enabled on the iSCSI ports to the servers and SAN, as well as on the iSCSI NICs on the servers.
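
On the Windows side that can be checked and set like this (the display name and accepted values vary per driver, so treat these as examples):

    # Inspect and enable flow control on the iSCSI NICs
    Get-NetAdapterAdvancedProperty -Name "iSCSI*" -DisplayName "Flow Control"
    Set-NetAdapterAdvancedProperty -Name "iSCSI*" -DisplayName "Flow Control" -DisplayValue "Rx & Tx Enabled"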

For more reliable failover, set all host and SAN iSCSI ports as RSTP edge-ports on the Force10 (on PowerConnect the equivalent would be spanning-tree portfast).
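
On FTOS that looks something like this (the port range is an example, use the ports your hosts and SAN members actually connect to; check the syntax for your firmware, as SFTOS differs):

    ! Enable RSTP globally, then mark host/SAN-facing ports as edge-ports
    protocol spanning-tree rstp
     no disable
    !
    interface range gigabitethernet 0/1 - 8
     spanning-tree rstp edge-port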


December 13th, 2013 01:00

Sorry if I wasn't clear about this in the post.

The NICs for iSCSI use MPIO and are not teamed. The other NICs are teamed for redundancy in normal usage, and for the performance boost of LACP + SMB 3.0 multichannel.
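
A quick way to confirm multichannel is actually being used is to run these on the client side while a copy is going:

    # Show which NICs SMB considers usable, and the live multichannel connections
    Get-SmbClientNetworkInterface
    Get-SmbMultichannelConnection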

The NICs and switch ports on the iSCSI side, for both the SAN and the file servers, are configured with flow control and MTU 9000.

I will try setting the mentioned ports to RSTP edge-port and report back if it helps :)

December 13th, 2013 09:00

Hello,

ODX will do nothing for that, since it only accelerates operations performed directly by Hyper-V hosts on VHD(X) files (such as clone/copy/...).
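
If you want to at least verify ODX is enabled on the hosts, there is a registry value for it:

    # FilterSupportedFeaturesMode: 0 = ODX enabled, 1 = ODX disabled
    Get-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem -Name "FilterSupportedFeaturesMode"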

In the past I had huge problems with Dell's Broadcom drivers and VM performance; I had to disable VMQ on the interfaces to get decent performance.
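
You can check and disable it per adapter like this (the adapter names are examples):

    # See which adapters currently have VMQ enabled
    Get-NetAdapterVmq

    # Disable VMQ on the Broadcom NICs bound to the VM switch
    Disable-NetAdapterVmq -Name "NIC1","NIC2"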

www.dell.com/.../ArticleView

Some people solved the problem by installing the original driver from Broadcom, since the Dell version didn't fix it. I'm not sure if that's supported, or whether Dell has fixed their drivers since.

Bye

Claudio
