
Dell PowerEdge FN I/O Module Configuration Guide 9.10(0.0)

Ethernet Enhancements in Data Center Bridging

The following section describes DCB.

The device supports the following DCB features:
    • Data center bridging exchange protocol (DCBx)
    • Priority-based flow control (PFC)
    • Enhanced transmission selection (ETS)

DCB refers to a set of IEEE Ethernet enhancements that provide data centers with a single, robust, converged network to support multiple traffic types, including local area network (LAN), server, and storage traffic. Through network consolidation, DCB results in reduced operational cost, simplified management, and easy scalability by avoiding the need to deploy separate application-specific networks.

For example, instead of deploying an Ethernet network for LAN traffic, additional storage area networks (SANs) to ensure lossless fibre-channel traffic, and a separate InfiniBand network for high-performance inter-processor computing within server clusters, only one DCB-enabled network is required in a data center. The Dell Networking switches that support a unified fabric and consolidate multiple network infrastructures use a single input/output (I/O) device called a converged network adapter (CNA).

A CNA is a computer input/output device that combines the functionality of a host bus adapter (HBA) with a network interface controller (NIC). Multiple adapters on different devices for several traffic types are no longer required.

Data center bridging satisfies the needs of the following types of data center traffic in a unified fabric:

  • LAN traffic consists of a large number of flows that are generally insensitive to latency requirements, while certain applications, such as streaming video, are more sensitive to latency. Ethernet functions as a best-effort network that may drop packets in case of network congestion. IP networks rely on transport protocols (for example, TCP) for reliable data transmission with the associated cost of greater processing overhead and performance impact.
  • Storage traffic based on Fibre Channel media uses the SCSI protocol for data transfer. This traffic typically consists of large data packets with a payload of 2K bytes that cannot recover from frame loss. To successfully transport storage traffic, data center Ethernet must provide no-drop service with lossless links.
  • Servers use InterProcess Communication (IPC) traffic within high-performance computing clusters to share information. Server traffic is extremely sensitive to latency requirements.

To ensure lossless delivery and latency-sensitive scheduling of storage and server traffic, and I/O convergence of LAN, storage, and server traffic over a unified fabric, IEEE data center bridging adds the following extensions to a classical Ethernet network:
  • 802.1Qbb - Priority-based Flow Control (PFC)
  • 802.1Qaz - Enhanced Transmission Selection (ETS)
  • 802.1Qau - Congestion Notification
  • Data Center Bridging Exchange (DCBx) protocol

NOTE: In Dell Networking OS version 9.4.0.x, only the PFC, ETS, and DCBx features are supported in data center bridging.
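
The following sketch shows one way these features typically fit together on a Dell Networking OS 9.x switch: ETS bandwidth shares and PFC settings are grouped in a dcb-map, the map is applied to a converged port, and DCBx (which runs over LLDP) advertises the resulting configuration to the attached CNA. This is a minimal illustration only; the map name, bandwidth split, priority-to-group mapping, and interface number are placeholders, and the exact command syntax for this release should be confirmed in the PFC and ETS chapters of this guide.

  Dell(conf)# dcb-map CONVERGED_MAP
  ! Priority group 0 carries lossless storage traffic (PFC on) with 60% of the link bandwidth;
  ! priority group 1 carries best-effort LAN traffic (PFC off) with the remaining 40%.
  Dell(conf-dcbmap-CONVERGED_MAP)# priority-group 0 bandwidth 60 pfc on
  Dell(conf-dcbmap-CONVERGED_MAP)# priority-group 1 bandwidth 40 pfc off
  ! Map dot1p priority 3 (commonly used for FCoE/iSCSI) to group 0 and all other priorities to group 1.
  Dell(conf-dcbmap-CONVERGED_MAP)# priority-pgid 1 1 1 0 1 1 1 1
  Dell(conf-dcbmap-CONVERGED_MAP)# exit
  ! Apply the map to the converged, server-facing interface.
  Dell(conf)# interface tengigabitethernet 0/1
  Dell(conf-if-te-0/1)# dcb-map CONVERGED_MAP

DCBx port roles (for example, auto-upstream toward the fabric and auto-downstream toward servers) are configured separately under LLDP on each interface so that the settings defined in the dcb-map can be exchanged with peer devices.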
