

Deploy AI Faster with Integrated Compute and Networking from Dell and NVIDIA
Key takeaways:

- Integrated AI Infrastructure: Dell PowerEdge XE-Series servers with NVIDIA Rubin GPUs and Vera CPUs deliver GPU-dense, rack-scale systems for large-scale AI training and inference.
- High-Performance Networking: Dell PowerSwitch SN6000 Series and NVIDIA Quantum InfiniBand switches provide ultra-high bandwidth, low latency and liquid cooling options, enabling massive GPU-cluster scale-out and optimized AI workloads.
- End-to-End Security and Scalability: Dell’s Confidential AI Certification ensures secure data handling, while integrated compute, storage and networking simplify deployment and accelerate time to value.
- AI for Every Scale: From enterprise edge systems to hybrid quantum-classical computing, Dell’s solutions offer the flexibility, scalability and innovation to meet diverse AI needs.
End-to-end infrastructure for AI factories

AI performance at scale is no longer defined by a single component — it is
defined by how well the entire system works together. As GPUs become
exponentially more powerful, infrastructure must evolve just as rapidly to keep data flowing, power stable and thermals under control. When compute, networking, memory and cooling are not engineered as one, even the fastest accelerators cannot deliver their full potential.
Dell’s latest announcement responds directly to this shift with a generational leap in AI infrastructure: integrated rack-scale systems built on NVIDIA Rubin GPUs, next-generation PowerEdge servers powered by NVIDIA Vera CPUs, and high-bandwidth Ethernet and InfiniBand networking — all co-architected to operate as a unified performance fabric.
Co-engineered compute and networking
Dell AI Factory with NVIDIA designs compute and networking as one architecture — from integrated rack-scale systems to the interconnects that bind them. This ensures that as GPU performance advances, the network scales in lockstep — reducing bottlenecks and preserving full cluster utilization. The result is sustained GPU efficiency, higher throughput and faster time to insight.
- Dell PowerEdge XE9812 is the NVIDIA Vera Rubin update to the PowerEdge XE9712. It is a 72-way GPU-accelerated rack-scale server for training the largest trillions-of-parameter AI models, Mixture of Experts (MoE) AI models, and achieving lower cost per token for AI reasoning at massive scale.
- The Intel CPU-based PowerEdge XE9880L, the AMD CPU-based PowerEdge XE9885L and the NVIDIA Vera CPU-based PowerEdge XE9882L, all featuring 8-way NVIDIA HGX Rubin NVL8 GPU acceleration. Dell combines this 100% direct liquid cooling (DLC) rack-scale compute architecture, NVIDIA NVLink–interconnected GPUs, and networking in the latest Dell IR9000 factory-integrated rack system, so customers can train models faster, run more inferencing tokens per rack, and maximize data center efficiency without compromising time to deployment or performance.
- Dell Technologies continues to expand NVIDIA Spectrum platforms with a choice of NOS for the PowerSwitch SN5610 and SN2201, including Cumulus and Dell SONiC, aligning with Dell AI Factory and NVIDIA reference architectures to deliver high-performance, lossless networking for AI workloads.
- Dell PowerSwitch SN6000 Series delivers 1.6 TbE switching with options for liquid cooling and co-packaged optics (CPO), reducing power consumption and signal loss at ultra-high speeds, delivering up to 409.6 Tb/s of switching capacity and up to 2,048 breakout connections for massive GPU-cluster scale-out.
- NVIDIA Quantum-X800 InfiniBand Q3300-LD is a liquid-cooled switch family with co-packaged optics options delivering high-bandwidth networking for AI and cloud-native workloads.
- Dell Integrated Rack Scalable Systems are expanding with Dell PowerSwitch and NVIDIA liquid-cooled switching to provide unified, rack-level power and cooling management for AI infrastructure.
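The SN6000 Series figures quoted above are internally consistent, as a quick sketch shows. Note that the 256-port count, 8-way breakout and 200 Gb/s lane speed below are derived from the aggregate numbers for illustration, not specifications from this announcement:

```python
# Sanity-checking the quoted PowerSwitch SN6000 aggregates.
CAPACITY_TBPS = 409.6   # total switching capacity (Tb/s)
PORT_SPEED_TBPS = 1.6   # per-port speed (1.6 TbE)
BREAKOUTS = 2048        # maximum breakout connections

ports = CAPACITY_TBPS / PORT_SPEED_TBPS        # implies 256 full-speed ports
lanes_per_port = BREAKOUTS / ports             # implies 8-way breakout per port
lane_speed_gbps = PORT_SPEED_TBPS * 1000 / lanes_per_port  # implies 200 Gb/s links

print(f"{ports:.0f} ports x {lanes_per_port:.0f} breakouts "
      f"= {BREAKOUTS} links at {lane_speed_gbps:.0f} Gb/s")
```

In other words, the 2,048 breakout connections follow from splitting each implied 1.6 TbE port eight ways at 200 Gb/s per link.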
Within each node, CPU, GPU, memory and I/O operate as a tightly integrated system, engineered and validated by Dell to eliminate typical multivendor bottlenecks. High-core-count CPUs continuously feed dense NVIDIA Rubin GPUs, while the NVLink switching fabric minimizes intranode latency and maximizes GPU-to-GPU bandwidth.
Dell PowerEdge servers are among the first to earn NVIDIA’s certification for high-performance, secure AI solutions. With this certification, Dell servers help ensure that customer data and models remain encrypted in host memory and that transfers between the CPU and GPU occur over a secure, encrypted channel, delivering robust security without compromising system performance. Confidential AI solutions complement Secured Component Verification (SCV) and the protection of data on Dell storage systems to further enhance the security of customer AI environments. For organizations building AI factories, these systems are designed to deliver:
- Higher effective GPU performance per rack by keeping accelerators fed and minimizing idle time
- Consistently high utilization across distributed training workloads through an integrated, bottleneck-free fabric
- Stronger economics and risk reduction, combining better GPU utilization with built-in, certified CPU–GPU data security
Unlike piecemeal, DIY cluster builds, Dell extends this performance beyond the server with pretested, high-bandwidth northbound network interfaces and Dell PowerSwitch-based AI fabrics delivering predictable, nonblocking throughput from node to top of rack. This end-to-end integration enables seamless, assured scale across large AI clusters, reducing risk and time to value compared to traditional multivendor approaches.
Paired with Dell PowerSwitch SN6000 Series Ethernet switches, customers can build fully integrated GPU-dense Integrated Rack Solutions with consistent high-bandwidth connectivity and liquid cooling. Powered by the NVIDIA Spectrum-6 Ethernet ASIC, with air- and liquid-cooled designs and co-packaged optics (CPO) options, these switches are engineered to keep pace with the thermal and bandwidth demands of NVIDIA Rubin–based systems.
Building on the Dell SONiC foundation, Dell is expanding network OS choice on Spectrum-based switches to the PowerSwitch SN5610 and SN2201, now available as part of the Dell AI Factory. This empowers organizations to simplify operations, scale AI infrastructure and drive efficiency with a trusted, end-to-end partnership. Michael Lassen, Head of Technology Infrastructure at team.blue, recently shared, “With Dell’s extension of Dell SONiC to NVIDIA’s Spectrum platforms, we now benefit from a unified NOS across diverse silicon architectures.”
With liquid cooling and DC busbar designs, the NVIDIA Quantum-X800 InfiniBand Q3300-LD switches streamline thermal management and simplify cable organization.

The co-packaged optics options in the Dell PowerSwitch SN6000 Series and the NVIDIA Quantum-X800 InfiniBand Q3300-LD switch family integrate silicon photonics directly into the switch ASIC, eliminating pluggable optical transceivers to reduce electrical loss, enhance signal integrity and improve power and thermal efficiency.
These integrated solutions enable customers to rely on a single vendor for all their AI infrastructure needs, encompassing compute, storage and networking. They are further enhanced by NVIDIA Reference Architectures, Dell Validated Designs and comprehensive end-to-end support and deployment services. This seamless approach helps customers accelerate time to value and quickly resolve issues, ensuring a streamlined and efficient experience.
AI for enterprise systems
Not every AI workload requires a rack-scale deployment. Many enterprises need flexible systems for edge locations, departmental clusters, or workloads such as retrieval-augmented generation (RAG), AI assistants, and real-time analytics. Dell PowerEdge R-Series servers—including PowerEdge R770, R7715, and R7725 with NVIDIA RTX PRO 4500 Blackwell Server Edition GPUs—extend AI acceleration into mainstream data center platforms. These systems feature balanced CPU-to-GPU ratios, optimized power envelopes, and efficient air or liquid cooling.
They enable organizations to:
- Introduce GPU acceleration into existing environments
- Fine-tune domain-specific models without large-scale cluster investment
- Support inference, visualization and AI-enhanced VDI workloads
Unified management tools provide telemetry across GPU utilization, memory use and network performance — giving IT teams operational visibility as AI expands across environments.
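As an illustration of the kind of per-GPU telemetry such tools surface, the sketch below parses utilization and memory figures in the CSV format that the standard `nvidia-smi` CLI can emit. The sample data and the `parse_gpu_telemetry` helper are hypothetical, not part of any Dell or NVIDIA tooling:

```python
# Minimal sketch: parsing per-GPU telemetry. On a live system the CSV
# would come from a command such as:
#   nvidia-smi --query-gpu=index,utilization.gpu,memory.used,memory.total \
#              --format=csv,noheader,nounits
# The SAMPLE string below is made-up data to keep the sketch self-contained.
import csv
import io

SAMPLE = """\
0, 97, 71234, 81920
1, 42, 30517, 81920
"""

def parse_gpu_telemetry(text):
    """Return a list of dicts: utilization (%) and memory use (MiB) per GPU."""
    rows = []
    for rec in csv.reader(io.StringIO(text), skipinitialspace=True):
        idx, util, used, total = (int(x) for x in rec)
        rows.append({"gpu": idx, "util_pct": util,
                     "mem_used_mib": used, "mem_total_mib": total})
    return rows

for gpu in parse_gpu_telemetry(SAMPLE):
    print(f"GPU {gpu['gpu']}: {gpu['util_pct']}% busy, "
          f"{gpu['mem_used_mib']}/{gpu['mem_total_mib']} MiB")
```

A loop like this, run on a schedule, is one simple way teams feed GPU utilization data into existing dashboards.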


With the PowerEdge M9822 and R9822, Dell brings NVIDIA Vera across both rack-scale and traditional data center environments — giving customers true architectural choice. Whether deploying alongside hyperscale GPU clusters or integrating into existing enterprise infrastructure, organizations can place AI control and service layers where they make the most sense. By running orchestration, model gateways, and data services outside the accelerator tier, these systems preserve premium GPU capacity for training and inference while aligning AI control functions to the environment customers already operate.
Preparing for Hybrid Quantum-Classical Computing
Dell is operationalizing hybrid quantum-classical computing by integrating enterprise-grade PowerEdge infrastructure with NVIDIA NVQLink–enabled quantum workflows. Rather than treating quantum systems as isolated research devices, Dell is engineering high-bandwidth, validated Ethernet bridges that tightly couple GPU-accelerated servers with a diverse QPU ecosystem.
This positions Dell as the first enterprise infrastructure provider to productize hybrid quantum‑classical integration—extending its proven strategy of tightly integrated compute, networking, and accelerators into the quantum domain. By controlling the classical performance fabric that underpins quantum workflows, Dell enables customers to accelerate experimentation, reduce integration complexity, and move hybrid quantum from the lab into scalable, production‑ready environments.
Quantum Machines (QM), a Dell partner, has tested its own quantum control plane hardware with the PowerEdge R7615 and achieved single-digit microsecond results in quantum control and calibration.
System-level AI performance
AI performance is no longer about a single chip—it’s about the entire system. GPU gains can’t overcome bottlenecks in networking, memory, cooling, or management. With Dell AI Factory with NVIDIA, system-first designs across PowerEdge R-Series, PowerEdge XE-Series and rack-scale systems, and emerging Vera-based architectures keep GPUs busy, shorten training cycles, and unlock scalable AI—but the organizations that act now will be the ones that lead.
