Mellanox ConnectX-3 Network Adapters with PCIe 3.0 for PowerEdge Servers

Mellanox Network Cards
For the most demanding data centers

Maximize your Dell PowerEdge server performance with Mellanox networking cards. Choose a 10GbE or 40GbE network interface card (NIC) to get the bandwidth and speed you need for performance-driven server and storage applications, including enterprise data centers, Web 2.0, high-performance computing and embedded environments. Clustered databases, web infrastructure and high-frequency trading are just a few of the applications that achieve significant throughput and latency improvements, resulting in faster access, real-time response and more users per server.
Product features

• Provides dual port 10GbE or 40GbE connectivity
• Full line-rate performance per port (10GbE only)
• PCI Express 3.0 (up to 8GT/s)
• TCP/IP stateless offload in hardware
• Compatible with standard TCP/UDP/IP and iSCSI software stacks
• Standard block and file access protocols that can leverage RDMA for high-performance storage access
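On Linux, the stateless offloads listed above can be inspected or toggled with `ethtool`; a quick sketch (the interface name `eth0` is a placeholder for your adapter's port):

```shell
# Show which hardware offloads the driver currently enables
# (checksumming, TSO/LRO, etc.); eth0 is a placeholder name.
ethtool -k eth0 | grep -E 'checksum|segmentation|offload'

# Example: turn TCP segmentation offload on explicitly (needs root)
sudo ethtool -K eth0 tso on
```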

Advanced features

• Support for VMware NetQueue — Helps increase performance in virtual server environments.
• High-performance offload — Enhances TCP/IP and RDMA performance.
• SR-IOV — Allows a PCIe device to appear to be multiple separate physical PCIe devices.
• Storage acceleration — Standard block and file access protocols that can leverage RDMA for high-performance storage access.
• VLAN support (IEEE 802.1q VLAN tagging) — Improves security, network flexibility and management efficiency.
• RDMA over Converged Ethernet (RoCE) — Delivers low-latency and high performance to bandwidth- and latency-sensitive applications.
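As a sketch of how the SR-IOV feature above is typically used on Linux: the driver exposes a `sriov_numvfs` knob in sysfs, and each virtual function then appears as its own PCIe device that can be assigned to a VM. The interface name and VF count here are placeholders:

```shell
# Query how many virtual functions the adapter supports
cat /sys/class/net/eth0/device/sriov_totalvfs

# Create 4 virtual functions (requires root, plus SR-IOV enabled in
# BIOS/firmware); each VF then shows up in lspci as a new device
echo 4 | sudo tee /sys/class/net/eth0/device/sriov_numvfs
lspci | grep -i mellanox
```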
Maximize and protect your IT investment
Better application performance.
Significant throughput and latency improvements support demanding enterprise applications.
Optimize RDMA.
Enhance performance for bandwidth- and latency-sensitive applications utilizing IBTA RoCE technology.
Consolidate I/O.
Support TCP/IP, storage and RDMA over Ethernet transport protocols on a single adapter for optimized network performance.
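One way to confirm on Linux that a port can carry both regular TCP/IP and RoCE traffic is to check that the kernel exposes an RDMA device alongside the network interface; a sketch using the iproute2 and rdma-core tools (device names will differ per system):

```shell
# List RDMA-capable links and the netdevs they map to
# (requires the iproute2 `rdma` utility and a loaded mlx driver)
rdma link show

# Show RDMA device identity and link layer (rdma-core package)
ibv_devinfo | grep -E 'hca_id|link_layer'
```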
ConnectX®-3 VPI FDR Daughter Card, 56Gb/s IB or 10/40GbE
Speed: 56Gb/s IB or 10/40GbE
Ports: Single and dual port available
Technology:
– Virtual Protocol Interconnect
– 1us MPI ping latency
– Up to 56Gb/s InfiniBand or 40/56 Gigabit Ethernet per port
– Single- and dual-port options available
– PCI Express 3.0 (up to 8GT/s)
– CPU offload of transport operations
– Application offload
– GPU communication acceleration
– Precision Clock Synchronization
– End-to-end QoS and congestion control
– Hardware-based I/O virtualization
– Ethernet encapsulation (EoIB)
– RoHS-R6

ConnectX®-4 Lx rNDC
Speed: 10/25GbE
Ports: Dual port
Technology:
– 10/25Gb/s speeds
– Erasure Coding offload
– Virtualization
– Low-latency RDMA over Converged Ethernet (RoCE)
– CPU offloading of transport operations
– Application offloading
– Mellanox PeerDirect™ communication acceleration
– Hardware offloads for NVGRE and VXLAN encapsulated traffic
– End-to-end QoS and congestion control
– Hardware-based I/O virtualization
– TCP/IP stateless offload in hardware
– Traffic steering across multiple cores
– Intelligent interrupt coalescence
– Advanced Quality of Service
– RDMA & RoCE
– RoHS-R6
ConnectX®-3 Dual-port FDR10 40Gb Mezz
Speed: 40Gb/s
Ports, Media: Dual port, backplane
Technology:
– 0.7us application-to-application latency
– 40Gb/s InfiniBand ports
– PCI Express 3.0 (up to 8GT/s)
– CPU offload of transport operations
– Hardware-based I/O virtualization

ConnectX®-3 Dual-port FDR 56Gb Mezz
Speed: 56Gb/s
Ports, Media: Dual port, backplane
Technology:
– 0.7us application-to-application latency
– 40 or 56Gb/s InfiniBand ports
– PCI Express 3.0 (up to 8GT/s)
– CPU offload of transport operations
– Hardware-based I/O virtualization

ConnectX®-3 VPI FDR Daughter Card, 56Gb/s IB or 10/40GbE
Speed: 56Gb/s IB or 10/40GbE
Ports, Media: Single and dual port available
Technology:
– Virtual Protocol Interconnect
– 1us MPI ping latency
– Up to 56Gb/s InfiniBand or 40/56 Gigabit Ethernet per port
– Single- and dual-port options available
– PCI Express 3.0 (up to 8GT/s)
– CPU offload of transport operations
– Application offload
– GPU communication acceleration
– Precision Clock Synchronization
– End-to-end QoS and congestion control
– Hardware-based I/O virtualization
– Ethernet encapsulation (EoIB)
– RoHS-R6

ConnectX®-4 VPI EDR IB Dual Port (100Gb/s) and 100GbE PCIe Adapter
Speed: 100Gb/s IB or 100GbE
Ports: Dual port, LP, FH
Media: QSFP28
Technology:
– EDR 100Gb/s InfiniBand or 100Gb/s Ethernet per port
– Single- and dual-port options available
– 10/25/40/50/56/100Gb/s speeds
– 150M messages/second
– Multi-Host technology
– Connectivity to up to 4 independent hosts
– Hardware offloads for NVGRE and VXLAN encapsulated traffic
– CPU offloading of transport operations
– Application offloading
– Mellanox PeerDirect™ communication acceleration
– End-to-end QoS and congestion control
– Hardware-based I/O virtualization
– Erasure Coding offload
– T10-DIF Signature Handover
– Ethernet encapsulation (EoIB)

ConnectX®-4 EN Dual Port 100GbE PCIe Adapter
Speed: 100GbE
Ports: Dual port, LP, FH
Media: QSFP28
Technology:
– 100Gb/s Ethernet per port
– 10/25/40/50/56/100Gb/s speeds
– Single- and dual-port options available
– Erasure Coding offload
– T10-DIF Signature Handover
– Power8 CAPI support
– CPU offloading of transport operations
– Application offloading
– Mellanox PeerDirect™ communication acceleration
– Hardware offloads for NVGRE and VXLAN encapsulated traffic
– End-to-end QoS and congestion control
– Hardware-based I/O virtualization

ConnectX®-3 EN Dual Port 10GbE Adapter with PCI Express 3.0
Speed: 10GbE
Ports: Dual port, LP, FH
Media: SFP+
Technology:
– PCI Express 3.0 (up to 8GT/s)
– Low-latency RDMA over Ethernet
– Data Center Bridging support
– TCP/IP stateless offload in hardware
– Traffic steering across multiple cores
– Hardware-based I/O virtualization
– Intelligent interrupt coalescence
– Advanced Quality of Service
– RDMA

ConnectX®-3 EN Dual Port 40GbE Adapter with PCI Express 3.0
Speed: 40GbE
Ports: Dual port, LP & FH
Media: QSFP
Technology:
– Virtual Protocol Interconnect
– 1us MPI ping latency
– Up to 56Gb/s InfiniBand or 40/56 Gigabit Ethernet per port
– Single- and dual-port options available
– PCI Express 3.0 (up to 8GT/s)
– CPU offload of transport operations
– Application offload
– GPU communication acceleration
– Precision Clock Synchronization
– End-to-end QoS and congestion control
– Hardware-based I/O virtualization
– Ethernet encapsulation (EoIB)
– RDMA


ConnectX®-4 Lx 10/25Gb/s Dual Port PCIe SFP28 Adapter
Speed: 10/25GbE
Ports: Dual port, LP, FH
Media: SFP28
Technology:
– 10/25Gb/s speeds
– Erasure Coding offload
– Virtualization
– Low-latency RDMA over Converged Ethernet (RoCE)
– CPU offloading of transport operations
– Application offloading
– Mellanox PeerDirect™ communication acceleration
– Hardware offloads for NVGRE and VXLAN encapsulated traffic
– End-to-end QoS and congestion control
– Hardware-based I/O virtualization
– RoHS-R6

ConnectX®-3 Pro Dual Port 10GbE SFP+ PCIe Adapter
Speed: 10GbE
Ports: Dual port, LP, FH
Media: SFP+
Technology:
– PCI Express 3.0 (up to 8GT/s)
– Low-latency RDMA over Ethernet
– Data Center Bridging support
– TCP/IP stateless offload in hardware
– Traffic steering across multiple cores
– Hardware-based I/O virtualization
– End-to-end QoS and congestion control
– Virtualization
– Intelligent interrupt coalescence
– Advanced Quality of Service
– RDMA & RoCE

ConnectX®-3 Pro Dual Port 40GbE QSFP+ PCIe Adapter
Speed: 40GbE
Ports: Dual port, LP, FH
Media: QSFP
Technology:
– PCI Express 3.0 (up to 8GT/s)
– Low-latency RDMA over Ethernet
– Data Center Bridging support
– TCP/IP stateless offload in hardware
– Traffic steering across multiple cores
– Hardware-based I/O virtualization
– End-to-end QoS and congestion control
– Virtualization
– Intelligent interrupt coalescence
– Advanced Quality of Service
– RDMA & RoCE

ConnectX®-3 VPI FDR Single or Dual Port, 56Gb/s IB or 10/40GbE
Speed: 56Gb/s IB or 10/40GbE
Ports: Single and dual port available
Media: QSFP+
Technology:
– Virtual Protocol Interconnect
– 1us MPI ping latency
– Up to 56Gb/s InfiniBand or 40/56 Gigabit Ethernet per port
– Single- and dual-port options available
– PCI Express 3.0 (up to 8GT/s)
– CPU offload of transport operations
– Application offload
– GPU communication acceleration
– Precision Clock Synchronization
– End-to-end QoS and congestion control
– Hardware-based I/O virtualization
– Ethernet encapsulation (EoIB)
– RoHS-R6
Dell Business Credit

Affordable financing made easy.^

Great financing solutions for better cash flow.^
