Data Domain: Recommendation for Link Aggregation configuration instead of Failover with Directly Connected Interfaces between Two DDs

Summary: Directly attaching failover interfaces back-to-back between two DDRs may fail to transfer data.

This article is not tied to any specific product. Not all product versions are identified in this article.

Symptoms

Interfaces eth3b, eth4a, and eth4b of DD1 and DD2 are directly connected to each other back-to-back. On both DDs, eth3b, eth4a, and eth4b are configured to participate in a failover bond. The link status of the failover interface shows 'running'; however, traffic is not able to flow between the two DDs through this failover interface.

This is caused by a mismatched active link. Without a primary interface configured, the active interface on each side is chosen arbitrarily, and the two sides may not match. Traffic can then arrive at the receiving end on an interface that is in standby and be dropped by that standby interface.

DD1:

Net Failover Show
-----------------
Ifname   Hardware Address    Configured Interfaces                               Up Delay (ms)   Down Delay (ms)
------   -----------------   -------------------------------------------------   -------------   ---------------
veth1    00:60:16:68:ed:41   eth3b, eth4a, eth4b, active: eth4b, primary: None   29700           29700
------   -----------------   -------------------------------------------------   -------------   ---------------

DD2:

Net Failover Show
-----------------
Ifname   Hardware Address    Configured Interfaces                               Up Delay (ms)   Down Delay (ms)
------   -----------------   -------------------------------------------------   -------------   ---------------
veth1    00:60:16:68:e9:21   eth3b, eth4a, eth4b, active: eth3b, primary: None   29700           29700
------   -----------------   -------------------------------------------------   -------------   ---------------

Ping is failing:

SE@DD1## net ping interface veth1 192.168.170.252
PING 192.168.170.252 (192.168.170.252) from 192.168.170.250 veth1: 56(84) bytes of data
From 192.168.170.250 icmp_seq=11 Destination Host Unreachable
From 192.168.170.250 icmp_seq=12 Destination Host Unreachable
From 192.168.170.250 icmp_seq=13 Destination Host Unreachable
From 192.168.170.250 icmp_seq=15 Destination Host Unreachable
From 192.168.170.250 icmp_seq=16 Destination Host Unreachable
From 192.168.170.250 icmp_seq=17 Destination Host Unreachable
From 192.168.170.250 icmp_seq=19 Destination Host Unreachable

Cause

When failover is used between directly connected interfaces, a matching primary interface should be configured for the failover bond on both ends.

You can specify the primary interface when creating the failover bond:

net failover add <virtual interface> interfaces <slave interfaces> [primary <interface name>]
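
For example, using the interface names from this scenario, the bond could be created on both DDs with eth3b as the primary (a sketch only; veth1 and the interface list are taken from the output above, and the exact list format may vary by DD OS release):

net failover add veth1 interfaces eth3b eth4a eth4b primary eth3b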

Or modify an existing failover virtual interface to add a primary:

net failover modify <virtual interface> primary <interface name>
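
For the existing bond in this example, running the same modify command on both DD1 and DD2 makes eth3b the matching primary (a sketch based on the syntax above; substitute your own bond and interface names):

net failover modify veth1 primary eth3b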

Once a matching primary interface is set:

DD1:

Net Failover Show
-----------------
Ifname   Hardware Address    Configured Interfaces                               Up Delay (ms)   Down Delay (ms)
------   -----------------   -------------------------------------------------   -------------   ---------------
veth1    00:60:16:68:ed:41   eth3b, eth4a, eth4b, active: eth3b, primary: eth3b  29700           29700
------   -----------------   -------------------------------------------------   -------------   ---------------

DD2:

Net Failover Show
-----------------
Ifname   Hardware Address    Configured Interfaces                               Up Delay (ms)   Down Delay (ms)
------   -----------------   -------------------------------------------------   -------------   ---------------
veth1    00:60:16:68:e9:21   eth3b, eth4a, eth4b, active: eth3b, primary: eth3b  29700           29700
------   -----------------   -------------------------------------------------   -------------   ---------------


Ping is working now:

SE@DD2## net ping interface veth1 192.168.170.250
PING 192.168.170.250 (192.168.170.250) from 192.168.170.252 veth1: 56(84) bytes of data
64 bytes from 192.168.170.250: icmp_seq=1 ttl=64 time=1.09 ms
64 bytes from 192.168.170.250: icmp_seq=2 ttl=64 time=1.12 ms
64 bytes from 192.168.170.250: icmp_seq=3 ttl=64 time=1.14 ms

NOTE: The user may still face the same issue if the primary interface on one side fails. Both ends will then arbitrarily choose one of the remaining links to be active, and the active interfaces on the two sides may again not match.

Resolution

The recommendation is to use LACP (link aggregation) instead of failover for directly connected back-to-back interfaces. Note that LACP can fully replace failover only if the required total throughput is less than the throughput of a single interface; otherwise, the total throughput is degraded when a link fails and traffic fails over to the remaining links.

In summary, when interfaces are directly connected:

  • If failover is used, a matching primary interface should be specified so that the active interface is the same on both sides.
  • Preferably, use LACP instead of failover; LACP also provides failover capability (see the sketch below).
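
As an illustration, link aggregation on DD OS is configured with the net aggregate command family. The following is only a sketch for the interfaces in this example, assuming the existing failover bond has first been removed; the available modes and hash options differ between DD OS releases, so verify the exact syntax in the DD OS Command Reference for your version:

net aggregate add veth1 interfaces eth3b eth4a eth4b mode lacp hash xor-L3L4

For the aggregate to come up, both DDs must be configured for LACP on the directly connected interfaces.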

Additional Information

For troubleshooting network interface connectivity, refer to the article How to troubleshoot network interface connectivity issues.

Affected Products

Data Domain, Data Domain Boost, Data Domain Boost - Open Storage

Products

Data Domain Boost – File System, Data Domain Deduplication Storage Systems, Data Domain Encryption, Data Domain Extended Retention, Data Domain GDA, Data Domain NDMP Tape Server, Data Domain Replicator, Data Domain Retention Lock, Data Domain Storage Migration, Data Domain Virtual Tape Library, Data Domain Virtual Tape Library for IBM I/OS, Data Domain Virtual Edition, PowerProtect Data Domain Management Center, Storage Direct for Data Domain ...

Article Properties
Article Number: 000191681
Article Type: Solution
Last Modified: 26 Jul 2023
Version:  6