The Dell AI Factory with NVIDIA

Your way to AI

Welcome to the Dell AI Factory with NVIDIA, where two trusted technology leaders unite to deliver a comprehensive and secure AI solution customizable for any business. With a portfolio of products, solutions, and services tailored for AI workloads from desktop to data center, it paves the way for AI to work seamlessly for you.
The Dell AI Factory with NVIDIA uses your data to drive growth and boost productivity while maintaining security within a controlled environment. Consisting of AI infrastructure, AI software and models, and tailored, expert support, the Dell AI Factory with NVIDIA helps you identify, develop, deploy, and scale AI use cases.

Simplified

Make AI deployments and operations easier and more scalable with automated workflows.

Tailored

Turn your data into powerful insights with an AI approach designed for your needs.

Trusted

Safeguard your data by ensuring privacy, governance, and full control, while confidently maximizing its value.

Accelerate your business outcomes

AI use cases unlock data insights, improve productivity, redefine the customer experience, and accelerate innovation.

Content Creation

Easily create original and engaging content, whether text or images, with just a few simple prompts.

AI Solutions Explorer tool: Custom AI configuration recommendations

Choose your use case and receive configuration recommendations from Dell AI Solutions Explorer.

Enable AI with technology and services

Drive your AI strategy forward

Professional Services

Leverage our consultants at every stage, from strategy to data preparation, implementation, and beyond.


Reduce costs with a tailored experience, then pay as you go for only what you use, all on your terms.

Dive deeper into Dell + NVIDIA


Dell AI Factory with NVIDIA FAQs

The Dell AI Factory with NVIDIA is an end-to-end enterprise AI solution that unifies Dell’s AI-optimized infrastructure with NVIDIA’s accelerated computing and enterprise AI software to simplify and scale AI across your organization. It integrates compute, storage, networking, PCs/workstations, and services with NVIDIA AI Enterprise, NVIDIA NIM microservices, and the NVIDIA Spectrum-X high-speed Ethernet fabric to deliver a full-stack, production-ready platform. It’s designed to help teams quickly identify, develop, deploy, and operate AI use cases with consistent security, governance, and manageability from desktop to data center to edge and cloud. Dell and NVIDIA co-engineered the platform to be the industry’s first end-to-end enterprise AI solution focused on making AI deployments easier and faster.

The Dell AI Factory with NVIDIA is powered by multiple Dell PowerEdge server models tailored to different AI workloads: the PowerEdge XE9680, a flagship 8‑GPU system ideal for training and fine‑tuning with NVIDIA H100 Tensor Core GPUs; the PowerEdge R760xa, a 2U GPU‑accelerated workhorse commonly used for L40S‑based inferencing and fine‑tuning as well as general GenAI tasks; the PowerEdge R660, often used in validated stacks for supporting/management compute and lighter AI services; and the PowerEdge XE7745, featured in Spectrum‑X solution IDs for building large‑scale GPU clusters. Together with NVIDIA accelerators, BlueField DPUs, and high‑performance Spectrum‑X networking, these systems deliver a full‑stack platform for enterprise GenAI from PoC to production.

The Blackwell-based Dell PowerEdge XE9780 improves LLM training by combining dual Intel Xeon 6 CPUs (up to 86 cores) with eight NVIDIA HGX Blackwell (B200/B300) GPUs in a 10U air-cooled chassis purpose-built for GenAI training and fine-tuning. In practice, this delivers up to 4x faster training versus prior platforms, accelerating time-to-value and reducing iteration cycles for model development. Its air-cooled, standard-rack design simplifies integration into existing data centers, enabling you to scale performance without costly site upgrades. Backed by Dell’s enterprise management and support, the XE9780 provides a reliable, production-ready foundation for LLM training and fine-tuning initiatives.

The default stack includes NVIDIA AI Enterprise (enterprise AI platform and support), NVIDIA NIM microservices (optimized model endpoints), and optional NVIDIA Omniverse for simulation workflows—validated and sold through Dell as part of the AI Factory solution.
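NIM microservices serve models behind OpenAI-compatible HTTP endpoints. As an illustrative sketch only, with the endpoint URL and model name as assumptions rather than values from this page, a chat completion request to a locally deployed NIM might be built like this:

```python
import json

# Assumptions (not from this page): a NIM container serving on its default
# local port, and an example model identifier -- substitute your deployment's.
NIM_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "meta/llama-3.1-8b-instruct"

def build_chat_request(prompt: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-compatible chat completion payload for a NIM endpoint."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Summarize our Q3 support tickets in two sentences.")
print(json.dumps(payload, indent=2))

# To actually send it (requires a running NIM instance and the `requests` package):
# import requests
# resp = requests.post(NIM_URL, json=payload, timeout=60)
# print(resp.json()["choices"][0]["message"]["content"])
```

Because the endpoint follows the OpenAI API shape, existing client libraries and tooling that speak that API can typically point at a NIM deployment with only a base-URL change.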

Yes, the Dell AI Factory fully supports advanced cooling technologies, including direct-to-chip liquid cooling. This is essential for managing the thermal output of high-density GPU configurations, such as those using the NVIDIA Blackwell platform. Implementing liquid cooling allows you to deploy more compute power per rack, improve your data center's power usage effectiveness (PUE), and reduce overall operational expenses, ensuring your infrastructure is both powerful and efficient.

Dell Technologies provides a comprehensive suite of services to ensure your Dell AI Factory operates at peak performance. This includes ProDeploy services for seamless implementation and ProSupport services, which offer a single point of contact for expert assistance across the entire hardware and software stack. With 24x7 access to specialized engineers, you can minimize downtime, resolve issues quickly, and confidently scale your AI operations.

The Dell AI Factory supports a maximum of 72 NVIDIA Blackwell GPUs in a single, liquid-cooled rack. This remarkable density is achieved through the NVIDIA NVL72 rack-scale system, which integrates the GPUs with high-speed NVLink interconnects.

The Dell AI Factory with NVIDIA is designed to accelerate your time-to-value by dramatically reducing the deployment time and complexity associated with do-it-yourself (DIY) AI infrastructure. Instead of spending months integrating disparate hardware and software, you can deploy a fully validated and optimized solution.

You can virtualize H100/H200 GPUs with NVIDIA vGPU (via NVIDIA AI Enterprise) to share them across VMs, but MLPerf results are typically published on bare-metal configurations; for the highest benchmark targets, use GPU pass-through or bare metal rather than vGPU.

Direct-to-chip liquid cooling, a core feature of the Dell AI Factory's high-density configurations, significantly lowers your data center's Power Usage Effectiveness (PUE) and operational expenditures (OPEX). By transferring heat directly from the GPUs with liquid, it removes thermal constraints far more efficiently than traditional air cooling.