
October 14th, 2008 19:00

NS40 and Network HA setup

Hi guys,

I have an NS40 (5.6.39) that I want to configure for maximum performance/availability. I looked back at Ian's, Sandip's and Rainer's posts from a while back and am thinking about this configuration:

switch1 -- cge0,cge1 = LACP trunk1
switch2 -- cge2,cge3 = LACP trunk2

Now I combine trunk1 and trunk2 into FSN0. This FSN will only be used on one VLAN, so I am not going to assign VLAN IDs.

So let's say I have trunk1 as the active device in my FSN0. Even though I have 2 x 1Gig connections to switch1, server_A will only use one of the underlying cge devices (cge0 or cge1). Even if my server had a 2G network card (hypothetically speaking), it could still only get 1G worth of throughput. Now when server_B tries to connect, its data will most likely be sent through the other device (cge0 or cge1). So I really get more overall throughput by spreading my traffic over multiple lanes of the highway, with less congestion per lane.
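
For reference, this is roughly what I have in mind on the Control Station - the IP addressing is only a placeholder and the exact server_sysconfig / server_ifconfig option syntax is from memory, so please check the man pages for your DART release:

  # one LACP trunk per switch
  server_sysconfig server_2 -virtual -name trk1 -create trk -option "device=cge0,cge1 protocol=lacp"
  server_sysconfig server_2 -virtual -name trk2 -create trk -option "device=cge2,cge3 protocol=lacp"

  # fail-safe network device with trk1 as the active side
  server_sysconfig server_2 -virtual -name fsn0 -create fsn -option "primary=trk1 device=trk1,trk2"

  # single untagged interface on the FSN
  server_ifconfig server_2 -create -Device fsn0 -name fsn0_int -protocol IP 192.168.10.10 255.255.255.0 192.168.10.255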


Scenario 2:

switch1/switch2 -- cge0,cge1 = FSN0
switch1/switch2 -- cge2,cge3 = FSN1

Now I create a trunk consisting of two FSN devices. Is this even possible? I thought LACP connections have to reside on the same switch. I saw Ian's comment in this thread that got me thinking about this config and how it would work. I see the advantage of load balancing between two switches, anything else?

http://forums.emc.com/forums/thread.jspa?messageID=490289

Thank you

1.5K Posts

October 14th, 2008 22:00

Hi dynamox.

Scenario 1 is really a good solution and very widely used. It provides load balancing and port-level redundancy in the form of trunking, plus switch-level redundancy in the form of FSN. VLAN tagging also allows you to configure multiple interfaces in different VLANs.

However, your analogy with the 2 hosts is not exactly right - the port (wire) speed limit of 1 Gig applies per session, not per host. One session on a particular host will use either cge0 or cge1, but another session on the same host may use the other port. That means on a host level you can still get 2 Gig of aggregate bandwidth - but a single session cannot use more than 1 Gig.

Scenario 2 -

You can create the two FSN devices but cannot create a LACP trunk out of the FSNs, as typically the ports in an FSN will be connected to different switches. However, one may still consider using two FSN devices - especially if iSCSI is used and one wants to dedicate a 1 Gig pipe to iSCSI traffic. One FSN can then be used for iSCSI only, physically separating the traffic and guaranteeing the full 1 Gig pipe.

Scenario 3 -

You have not mentioned this - but this may be another option which I personally like -

Switch 1 -- cge0,1,2 = LACP Trunk 0
Switch 2 -- cge3

FSN0 - primary LACP trunk 0, standby - cge3


Then use VLAN tagging on FSN0.
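
Roughly, from the Control Station, this would look something like the following (syntax from memory - please verify against the Celerra network high availability documentation for your DART release; the VLAN id, IP addressing and device names are only examples):

  # 3-port LACP trunk on switch 1
  server_sysconfig server_2 -virtual -name trk0 -create trk -option "device=cge0,cge1,cge2 protocol=lacp"

  # FSN with the trunk as primary and cge3 (switch 2) as standby
  server_sysconfig server_2 -virtual -name fsn0 -create fsn -option "primary=trk0 device=trk0,cge3"

  # tagged interface on the FSN (VLAN 100 as an example)
  server_ifconfig server_2 -create -Device fsn0 -name fsn0_v100 -protocol IP 192.168.100.10 255.255.255.0 192.168.100.255
  server_ifconfig server_2 fsn0_v100 vlan=100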

In this configuration you get an effective 3 Gig pipe on the primary connection, and 3 ports give you more port-level redundancy. However, if the primary switch fails, or all 3 ports/paths go down, the standby connection drops you to only 1 Gig. This may be acceptable in many cases, considering it is a fault situation and should not persist for long. Running on the 1 Gig standby connection will be rare, but you still get switch-level redundancy. There will be a performance hit while the standby device is in use; if you have an application that cannot sustain that, then this may not be a good solution.

Also, this may not be possible if you are using EtherChannel, as it requires 2 or 4 connections. However, LACP trunks can be created with 3 ports.

Whatever the configuration, it will need to be the same for your standby Data Mover as well.

Hope this provides some useful detail - I am sure others will add more valuable thoughts and input.

Thanks,
Sandip

674 Posts

October 15th, 2008 05:00

Some customers are using the Nortel SMLT (split multi-link trunking) feature.

Then you are able to use all physical network ports of a Data Mover actively; no standby ports are needed any more.

2 Intern • 20.4K Posts

October 15th, 2008 07:00

Thanks guys. Can you elaborate a little more on the statistical load balancing? Why would one select ip versus tcp versus mac values for the LoadBalance parameter? I see the default method is load balancing by IP, but what are the use cases where the other methods are more beneficial?

301 Posts

May 6th, 2010 05:00

Hi,

I am curious about this.

We have a scenario where there is a single client talking to the Celerra, so IP or MAC balancing is not a runner. TCP ports may be a runner, but I need more info about this. The reason (I think) TCP is not the default is that, according to our networking gurus, it is a more intensive operation for the switch to manage; I don't know why IP is the default over MAC. Did you get any further info regarding the balancing options?

2 Intern • 20.4K Posts

May 6th, 2010 06:00

Take a look at Ian's post at the very bottom of this thread:

https://community.emc.com/message/465684#465684

301 Posts

May 6th, 2010 06:00

Tks,

I have asked him for some clarification, as my networking team gave me a different story.

8.6K Posts

May 6th, 2010 07:00

castleknock wrote:

Hi,

I am curious about this.

We have a scenario where there is a single client talking to the Celerra, so IP or MAC balancing is not a runner. TCP ports may be a runner, but I need more info about this. The reason (I think) TCP is not the default is that, according to our networking gurus, it is a more intensive operation for the switch to manage; I don't know why IP is the default over MAC. Did you get any further info regarding the balancing options?

Getting more than one link utilized by a single client and application is going to be difficult.

Link aggregation wasn't designed for that - it's designed to aggregate multiple clients and conversations.

As to why - let's look at it in both directions:

Client to Celerra:

As long as you are using one protocol (NFS, CIFS, iSCSI), your destination IP, MAC and TCP port will all be the same.

All the protocols use a single well-known TCP port - e.g. 2049 for NFS, 139/445 for CIFS.

You could create multiple interfaces to get multiple IP addresses used, which gets us to the other protocol- and client-dependent hurdles:

- one NFS mount, CIFS server connection or iSCSI LUN only uses *one* IP address as its destination on the Celerra

- even if you use several of these, they are most likely using IP addresses on the same subnet, and most TCP/IP stacks only use the first interface for a route

(sometimes you can *fix* this by creating static host routes on the clients)
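
(on a Linux client, for example, that could be something like "ip route add 192.168.10.21/32 dev eth1" - the address and device name here are just placeholders for one of the Celerra interfaces and the second client NIC)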

Celerra to client:

That's the easiest one - by default the Celerra has a reflect networking feature turned on that sends the reply back through the same interface that the request came in on.
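
(if I remember the parameter name correctly it is the "reflect" parameter of the ip facility, so something like "server_param server_2 -facility ip -info reflect" should show it - treat that as an assumption and check the parameters guide for your release before touching it)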

So in summary - the only feasible environment in which a single client can use multiple links is with iSCSI, if you are able to use multiple LUNs.

Rainer

190 Posts

May 7th, 2010 08:00

On the switch side, things depend upon the switching algorithm.  For Cisco devices there is a summary of options here: http://www.cisco.com/en/US/tech/tk389/tk213/technologies_tech_note09186a0080094714.shtml

Unless all of your clients are in the same layer 2 domain, distributing by MAC address is pretty useless, as the algorithm is going to use the MAC of the default gateway, which is always the same. If you have a network team, they can tell you what they are using.

My switches will only do either IP or MAC, so we go with IP. Doing it via src-dst-port might be better, as most network connections use a dynamic source port, so you might get a better spread - but I would defer to your network team, as I don't know what impact that would have on the processor of the switch itself. Regardless of the algorithm used (on Cisco equipment anyway), once traffic starts flowing, a given conversation is pinned to one physical connection.

For iSCSI on other platforms (someone help me out here with support on the Celerra as I don't do iSCSI with ours), the traffic balancing is achieved at a higher level by accessing multiple ports using PowerPath, MPIO, or whatever the supported multipath software would happen to be.

10Gbit Ethernet more or less changes the discussion, as load balancing becomes less of an issue (on most Cisco platforms, I believe LACP/EtherChannel is limited to 8 ports - 8 x 1Gbit = 8Gbit with a crude load balancing algorithm).

Dan

117 Posts

May 9th, 2010 01:00

castleknock wrote:

Hi,

I am curious about this.

We have a scenario where there is a single client talking to the Celerra, so IP or MAC balancing is not a runner. TCP ports may be a runner, but I need more info about this. The reason (I think) TCP is not the default is that, according to our networking gurus, it is a more intensive operation for the switch to manage; I don't know why IP is the default over MAC. Did you get any further info regarding the balancing options?

What kind of switch do you have?

For most switches, it's no more expensive to do TCP-based load balancing with LACP than IP or MAC.

The most common mechanism over the years, and the simplest, is for the switch to do some simple math to figure out which port to use. For example, older Cisco switches would do a simple XOR operation against the source and destination MAC addresses. Then it'd take the last one, two, three, or four bits of the result (depending on how many ports are in your EtherChannel) and that would be the "number" of the port in the EtherChannel used to transmit data.
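
To make that concrete with made-up numbers: say the source MAC ends in 0x0A (binary 1010) and the destination MAC ends in 0x07 (binary 0111). XORing them gives 1101. With a 4-port EtherChannel the switch looks at the last two bits, 01, so the frame goes out port number 1 of the bundle - and every frame between that pair of MACs hashes to the same port, which is why one conversation never spreads across links.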

When IP and TCP-based load balancing were introduced, they used the same technique, though they'd XOR the IP addresses instead, or a combination of the IP addresses and TCP port numbers.

With newer hardware that supports any number of ports in an EtherChannel (and tries to redistribute load evenly if a port goes down), the math a switch does to calculate the port used for transmission may be different - maybe even a little more complex. But in general the math is pretty low overhead.

Many modern switches also use a technique of caching the port used to transmit packets on an Etherchannel, so the calculation doesn't have to be done each time the switch sends a packet.  This is usually done in connection with other switch path caching technologies the switch supports. 

...But basically, on all switches I've ever heard of, there should be no (or an insignificant amount of) extra overhead for doing TCP instead of IP or MAC load balancing.

16 Posts

June 8th, 2010 14:00

So should you match the switch/port setting to the Celerra?

I have a Cisco 3750 that has the load-balance setting set to src-dst-mac. Should I change the EtherChannel trunk on the Celerra to mac (the default is ip)?

When do you use these parameters?  Is it similar to the setup below?

Cisco - src-dst-ip  =>  Celerra - ip

Cisco - src-dst-mac =>  Celerra - mac

Cisco - ?????       =>  Celerra - tcp

Thanks,

Rip

117 Posts

June 8th, 2010 15:00

There is no need to match the settings on the switch and Celerra.

For performance reasons, the highest level of load balancing (tcp) should be configured on both sides, though.

MAC load balancing is extremely inefficient. It load balances conversations based on the Ethernet addresses used. This means that all communications going through a router (which will typically use a single Ethernet address) will be sent over a single NIC. With TCP load balancing, individual TCP conversations are used to load balance across the NICs, so you have a much better chance of a nice, even distribution of load across the links.
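
On the Celerra side, if I recall the option string correctly, the load balancing method is chosen when the trunk is created - something along the lines of the command below (treat the lb= keyword as an assumption and confirm against your DART documentation):

  server_sysconfig server_2 -virtual -name trk0 -create trk -option "device=cge0,cge1 protocol=lacp lb=tcp"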

190 Posts

June 9th, 2010 09:00

On a 3750, you only have these options:

port-channel load-balance ?
  dst-ip       Dst IP Addr
  dst-mac      Dst Mac Addr
  src-dst-ip   Src XOR Dst IP Addr
  src-dst-mac  Src XOR Dst Mac Addr
  src-ip       Src IP Addr
  src-mac      Src Mac Addr

Your best bet is src-dst-ip
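
For what it's worth, it is a single global setting on the switch, so you can check the current value and change it like this:

  show etherchannel load-balance

  configure terminal
  port-channel load-balance src-dst-ip
  end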

The 3750 appears to default to src-mac - I can't say whether changing this is disruptive or not, perhaps someone can comment on this.

Dan
