September 19th, 2010 10:00

Etherchannel ESX 4.1 and Equallogic

Some people in our department feel that creating EtherChannel-bonded port groups on the connections running to our EqualLogic group gains us nothing, because of the load balancing algorithms of the EqualLogic arrays. Others are confident that, because of the load balancing and multiple paths, there is a performance advantage to be gained by running EtherChannel.

I am interested in what Dell's take is on this. We are strictly talking about ESX hosts and VM guests running iSCSI initiators.

9.3K Posts

September 20th, 2010 07:00

This is a user-to-user forum. If you want an official answer from Dell, I suggest you call EqualLogic support, open a case with them, and ask the question there.

As a non-Dell answer (my wording may not be 100% exact), the general idea is that iSCSI uses MPIO. From a host, each NIC/VMkernel creates its own iSCSI session to the group, and the group forwards that session to a physical port on the member. So with 2 NICs/VMkernels on 2 separate IP addresses, you get 2 iSCSI sessions, and each session generally ends up forwarded to a different physical port on the member. The MPIO layer then splits the SCSI commands (inside the TCP traffic) across those sessions, which is how you get a 2Gbit/s 'connection'. If you start teaming NICs instead, the iSCSI connection becomes a single session, which gets forwarded to a single physical port on the member (a 1Gbit/s port, assuming you don't have a 6010 or 6510), and therefore you have just reduced your bandwidth to 1Gbit/s.
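
To make that concrete, here is a rough sketch of the per-VMkernel binding that produces those separate sessions on classic ESX 4.x. The vSwitch, port group, vmnic, vmk and vmhba names below are examples, not what your host will necessarily use:

# Create a vSwitch with two iSCSI uplinks
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2

# One port group and one VMkernel interface per uplink
esxcfg-vswitch -A iSCSI1 vSwitch2
esxcfg-vswitch -A iSCSI2 vSwitch2
esxcfg-vmknic -a -i 10.10.10.11 -n 255.255.255.0 iSCSI1
esxcfg-vmknic -a -i 10.10.10.12 -n 255.255.255.0 iSCSI2

# Leave each port group with a single active uplink by removing the other NIC
esxcfg-vswitch -p iSCSI1 -N vmnic3 vSwitch2
esxcfg-vswitch -p iSCSI2 -N vmnic2 vSwitch2

# Bind each VMkernel port to the software iSCSI adapter
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

Each bound VMkernel port then logs in on its own, which is what gives MPIO two sessions to spread the SCSI commands across.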

5 Practitioner • 274.2K Posts

September 20th, 2010 08:00

Good morning,

EqualLogic arrays do not support EtherChannel / bonding / trunking, etc., so setting them up provides little advantage. A 2-port channel group will only create one iSCSI session to the array. Break that into two MPIO connections and you double your bandwidth while retaining the same level of redundancy.

With ESX v4.x you can enable MPIO, which works hand-in-hand with the array's load balancing algorithm. One problem with trunking/bonding/etc. is the spanning tree overhead as you try to scale connections. In a 16-member group with 4x GbE per array, that would be 64 ports that would need to be trunked, plus the trunking from the servers. Very few switches, if any, would easily handle that load. The connection load balancing routine we use (CLB) scales without having to change the host or switch. Add another array and the additional connections are used automatically, with no changes needed at the host layer.
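
As a quick sanity check that MPIO (and not a trunk) is doing the work, ESX 4.x can show the path selection policy and the number of working paths per volume. This is only a sketch, and the output naturally depends on your setup:

# List devices with their SATP, path selection policy and working paths
esxcli nmp device list

Each EqualLogic volume should typically report one working path per bound VMkernel port/NIC.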

Regards,

-don

31 Posts

September 20th, 2010 09:00

For VMFS datastores we have created a vSwitch with four pNICs/VMkernels to utilize round robin (set as the default by running 'esxcli nmp satp setdefaultpsp --psp VMW_PSP_RR --satp VMW_SATP_EQL').
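
One thing worth noting: setting the default PSP for a SATP only takes effect for devices claimed after the change, so volumes that were already present may keep their previous policy until they are reclaimed or switched by hand. A hedged sketch of switching a single existing volume (the naa identifier below is a placeholder):

# Switch an already-claimed volume to round robin
esxcli nmp device setpolicy --device naa.6090a0xxxxxxxxxxxxxxxx --psp VMW_PSP_RR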

For VM guest OS iSCSI connections we have set up either 2 or 4 vNICs, using the Dell EQL HIT Kit with the Dell MPIO plug-in for the MS iSCSI initiator to utilize "least queue depth".

This is Dell's/EQL's recommended multipathing setup.

102 Posts

September 20th, 2010 12:00

Thanks guys. What you say makes sense. We were using EtherChannel because we originally had some NFS volumes mounted for our VMs. We are following the multi-pathing best practices and have two pNICs assigned for iSCSI, with one VMkernel port bound to each NIC. I just wanted to find out whether there were any advantages. As has been pointed out, it seems more likely that keeping this configuration around was actually hurting us from a performance perspective.
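
In case it helps anyone checking the same thing, the one-VMkernel-per-NIC binding can be confirmed from the console. A minimal sketch, assuming the software iSCSI adapter is vmhba33 (yours may differ):

# List the VMkernel ports bound to the software iSCSI adapter
esxcli swiscsi nic list -d vmhba33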
