Dell Networking FCoE session issues due to LLDP packets sent by VMs.

Summary: FCoE session issues due to Link Layer Discovery Protocol (LLDP) packets sent by virtual machines. The issue can cause interface flapping, or no problems are seen until a manual action causes LLDP to re-enumerate its neighbors, triggering the issue. ...

This article is not tied to any specific product. Not all product versions are identified in this article.

Symptoms

Affected Platforms: Dell Networking MX and MXL switches running Data Center Bridging (DCB) functions, but not limited to them / VMware ESXi
 

Affected Firmware: All firmware versions to date

Impact:
  • Lost host communication with the FCoE environment.
  • Storage communication stops responding or flaps continuously.
  • In a stable environment, a COMPLETE OUTAGE of all FCoE VLANs may occur when a manual action causes LLDP to re-enumerate its neighbors; for example, adding a VLAN or disabling LLDP on the vDS.

Cause

This article provides additional protection for FCoE environments where a VM accidentally sends LLDP packets to the (FIP/FSB) switch, breaking the FCoE connection to the switch. During the problem state, the following message appears in the switch's syslog.

Error message: "LLDP_MULTIPLE_PEER_DETECTED: DCBX operationally disabled due to more than one PEER being present on interface" (logged when LLDP is enabled from the OS)
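
For monitoring, here is a minimal Python sketch (not from this article) that scans a collected syslog file for this message. The exact syslog prefix, and whether the interface name follows the word "interface" in the logged text, depend on the logging setup, so treat both as assumptions and adjust the pattern to your environment:

    import re
    import sys

    # Matches the DCBX warning quoted above. ASSUMPTION: the interface name
    # follows the word "interface" in the message; adapt to your switch's
    # actual syslog format.
    PATTERN = re.compile(r"LLDP_MULTIPLE_PEER_DETECTED.*interface\s*(\S*)")

    def find_affected_interfaces(syslog_path):
        """Return interface tokens from LLDP_MULTIPLE_PEER_DETECTED lines."""
        affected = set()
        with open(syslog_path, encoding="utf-8", errors="replace") as log:
            for line in log:
                match = PATTERN.search(line)
                if match:
                    affected.add(match.group(1) or "<interface not logged>")
        return affected

    if __name__ == "__main__":
        for intf in sorted(find_affected_interfaces(sys.argv[1])):
            print(f"DCBX disabled by multiple LLDP peers on: {intf}")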

Two possible issues you can see:

  1. The VMware vDS is sending and receiving, or only sending, LLDP packets.
    a. This usually happens when the LLDP configuration on the VMware vDS is set to "Both (listen and advertise)" or to "Advertise."
  2. VMs are sending LLDP packets through the VMware vDS.
    a. If the VM's operating system sends LLDP packets to advertise its presence, this breaks FCoE connectivity due to the nature of FCoE operation.

Another indication that the switch is in the problem state is "show lldp neighbors" listing multiple LLDP neighbors on the interface:

show lldp neighbors output screen

The network may appear perfectly stable even when "show lldp neighbors" indicates that the switch is in the problem state. However, a manual action such as adding a VLAN or disabling LLDP on the vDS may then trigger a complete outage of all FCoE sessions.

Why Would This Happen?
FCoE relies on DCBX, and DCBX is a protocol that runs on top of LLDP.

"DCBX uses Link Layer Discovery Protocol (LLDP) to exchange parameters between two link peers. LLDP is a unidirectional protocol. It advertises connectivity and management information about the local station to adjacent stations on the same IEEE 802 LAN." - https://www.ieee802.org/1/files/public/docs2008/az-wadekar-dcbx-capability-exchange-discovery-protocol-1108-v1.01.pdf
"DCBX is expected to operate over a point-to-point link. If multiple LLDP neighbors are detected, then DCBX behaves as if the peer’s DCBX TLVs are not present until the multiple LLDP neighbor condition is no longer present. An LLDP neighbor is identified by its logical MAC Service Access Identifier (MSAP). The logical MSAP is a concatenation of the chassis ID and port ID values transmitted in the LLDPDU." - https://www.ieee802.org/1/files/public/docs2008/az-wadekar-dcbx-capability-exchange-discovery-protocol-1108-v1.01.pdf
Any change that causes LLDP to re-enumerate its neighbors may immediately break the DCBX TLV exchange that FCoE depends on.
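
To make the point-to-point requirement concrete, here is a small Python sketch (not from this article) of the MSAP rule quoted above; the chassis and port ID values are invented for illustration:

    # An LLDP neighbor is identified by its MSAP: the concatenation of the
    # chassis ID and port ID carried in the LLDPDU. DCBX behaves as if the
    # peer's DCBX TLVs are absent once more than one MSAP is on the link.

    def msap(chassis_id: str, port_id: str) -> str:
        """Logical MSAP: chassis ID and port ID concatenated."""
        return chassis_id + "/" + port_id

    def dcbx_operational(lldpdus_seen_on_port) -> bool:
        """True only while the link looks point-to-point to LLDP."""
        msaps = {msap(c, p) for c, p in lldpdus_seen_on_port}
        return len(msaps) <= 1

    # Only the host's own LLDPDU: point-to-point, DCBX can run.
    print(dcbx_operational([("aa:bb:cc:00:00:01", "vmnic2")]))       # True

    # A VM also sends LLDP through the vDS: two MSAPs, DCBX is
    # operationally disabled and the FCoE session drops.
    print(dcbx_operational([("aa:bb:cc:00:00:01", "vmnic2"),
                            ("de:ad:be:ef:00:09", "eth0")]))         # False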

Resolution

Option 1) Change the LLDP mode on the vDS per the Broadcom KB.
Option 2) Add a VMware vDS traffic filtering policy to block LLDP packets from VMs; this configuration alone blocks any LLDP packet from any VM under that vDS port group. *This issue can also be seen with the VMware standard vSwitch, but no workaround or filtering option is available there. The customer must make sure their VMs are LLDP free.
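
To confirm a guest is LLDP free, one option is to watch for frames with the LLDP EtherType (0x88CC) leaving it. Below is a minimal Linux-only Python sketch (not from this article; it requires root, and the interface name is only an example) that can be run inside the guest, or wherever the guest's traffic is visible:

    import socket

    ETH_P_LLDP = 0x88CC  # LLDP EtherType, the same value the vDS filter matches

    def watch_for_lldp(interface: str, count: int = 5):
        """Print the source MAC of LLDP frames seen on `interface`.
        Uses AF_PACKET, so this runs on Linux only and needs root."""
        sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                             socket.htons(ETH_P_LLDP))
        sock.bind((interface, 0))
        try:
            for _ in range(count):
                frame = sock.recv(65535)
                src_mac = frame[6:12].hex(":")
                print(f"LLDP frame from {src_mac} on {interface}")
        finally:
            sock.close()

    if __name__ == "__main__":
        watch_for_lldp("eth0")  # example interface name

Any output at all means something behind that interface is emitting LLDP and needs the filtering policy below.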
 

 

 Procedure

 Reference: VMware traffic filtering guide

 

  1. Locate a distributed port group or an uplink port group in the vSphere Client.

 
vSphere Client 
 

  2. Select a distributed switch and click the Networks tab.
  3. Click Distributed Port Groups to see the list of distributed port groups, or click Uplink Port Groups to see the list of uplink port groups.

 
Distributed Port Groups screen 
 
 

  4. Click a distributed port group or an uplink port group and select the Configure tab.
  5. Under Settings, select Traffic Filtering And Marking.
  6. Click the Enable and reorder button.

 
Traffic Filtering And Marking screen 
 

  7. Click Enable all traffic rules.
  8. Click OK.

 
Enable all traffic rules screen 
 

 

 

  9. Click the ADD button.

 
Add button screen 
 

VMware MAC Traffic Qualifier reference guide.

 

  10. In the Rule window, set the parameters below (a sketch of the resulting match logic follows the screenshot below).
    a. "Name" the rule to describe the action.
    b. In the "Action" field, select "Drop."
    c. In "Traffic direction," select "Ingress."
    d. Click the "MAC" tab.
      i. Select the "Enable qualifier" checkbox.
      ii. In the "EtherType" field:
        - Select "IS."
        - Select "Custom."
        - Enter "88CC," the LLDP EtherType.
    e. Click OK.

 
New Traffic Rule screen 
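
For clarity, here is a small Python sketch (not from this article) of what the rule above does: ingress frames whose EtherType is 88CC are dropped, and everything else passes. LLDP is normally sent untagged; the single 802.1Q-tag case is included only for illustration, since whether the vDS qualifier also matches tagged frames is not covered by this article:

    LLDP_ETHERTYPE = b"\x88\xcc"   # the custom EtherType 88CC set in the rule
    VLAN_TPID = b"\x81\x00"        # 802.1Q tag protocol identifier

    def rule_drops(frame: bytes, direction: str = "ingress") -> bool:
        """Mirror of the configured rule: drop ingress LLDP frames."""
        if direction != "ingress":
            return False
        ethertype = frame[12:14]
        if ethertype == VLAN_TPID:          # skip a single 802.1Q tag
            ethertype = frame[16:18]
        return ethertype == LLDP_ETHERTYPE

    # Made-up minimal LLDP frame: multicast dst 01:80:c2:00:00:0e,
    # an example src MAC, EtherType 88CC, zero-padded payload.
    lldp_frame = bytes.fromhex("0180c200000e" "deadbeef0009" "88cc") + b"\x00" * 46
    print(rule_drops(lldp_frame))   # True: the port group drops it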

  11. At the end, you should have the configuration below.

 
Configuration page 
 
 

  12. Check that LLDP packets from the selected VM are being blocked.
    a. Click the Ports tab.
    b. Select the vDS Port ID where the VM is connected.
    c. Click the Statistics tab.
    d. Check "Dropped - Ingress Packets."

   Ports tab screen 

That concludes the configuration.

Affected Products

Force10 MXL Blade, Dell EMC Networking MX5108n, Dell EMC Networking MX9116n, VMware ESXi 6.x, VMware ESXi 7.x
Article Properties
Article Number: 000203012
Article Type: Solution
Last Modified: 03 Oct 2025
Version:  2