tl;dr
Through continuous innovation and close collaboration with NVIDIA, Dell PowerScale stands out as a proven solution for workloads ranging from traditional file consolidation to cutting-edge AI applications. This partnership provides validated, GPU-ready infrastructure that removes data bottlenecks and accelerates AI training. As a foundational component of the Dell AI Data Platform, PowerScale is ready to unleash AI at scale.
AI is reshaping how data moves, grows, and delivers value. Volumes are surging, access patterns are changing, and performance demands are accelerating. GPU-ready storage is key in the rapidly evolving world of AI. To stay ahead, infrastructure must keep evolving—seamlessly and intelligently.
PowerScale has led this evolution with the horsepower you need for AI – delivering the performance of a parallel file system today, combined with the simplicity of scale-out NAS. Through continuous modernization—architectural, operational and deep collaboration with NVIDIA—PowerScale has evolved from battle-tested storage into an AI-optimized platform, ready for the demands of modern AI factories and data-driven enterprises, serving as a foundational component of the Dell AI Data Platform.
A partnership rooted in innovation
Dell and NVIDIA co-engineer across the stack to meet each new wave of AI demand. This is more than validation; it’s shared innovation that removes data bottlenecks from ingest through training, fine-tuning, and inference with efficiency, scalability and security top of mind:
- Direct data paths for accelerated training (2021–2023) – PowerScale introduced NVIDIA Magnum IO™ GPUDirect® Storage and NFS over RDMA, creating a fast lane between GPU memory and PowerScale to cut CPU bottlenecks and reduce I/O latency.
- Breaking the Ethernet barrier (2024) – PowerScale F710 became the first Ethernet-based storage certified for NVIDIA DGX SuperPOD™, delivering predictable, high-performance AI over open fabrics—without proprietary lock-in.
- NVIDIA Cloud Partner (NCP) program validation (2025) – PowerScale F710 is validated in the NVIDIA Cloud Partner storage program for HGX reference architectures—part of the NVIDIA Partner Network (NPN)—which supports cloud and managed-service providers building GPU cloud offerings.
- NVIDIA-Certified Enterprise Storage (2025) – PowerScale F710 is certified at the Enterprise level under the NVIDIA-Certified Storage program, which standardizes performance and operability for AI infrastructure.
- Jointly validated next-gen AI reference architectures (2024–2025)—each tuned for specific AI system classes:
  - NVIDIA DGX H100 – Optimized for Hopper’s multimodal/LLM throughput.
  - NVIDIA DGX B200 – Built for higher compute density and memory bandwidth.
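The direct data path above is exposed to clients as standard NFS carried over an RDMA transport. As a minimal sketch of what that looks like on a Linux GPU node, assuming a generic kernel NFS client (the hostname, export path, and mount point below are placeholders, not Dell-documented values):

```shell
# Load the RDMA transport module for the kernel NFS client.
modprobe rpcrdma

# Mount the export over RDMA instead of TCP; 20049 is the conventional
# NFS-over-RDMA port. "powerscale.example.com" and "/ifs/data" are
# placeholder values for this sketch.
sudo mount -t nfs -o nfsvers=3,proto=rdma,port=20049 \
    powerscale.example.com:/ifs/data /mnt/ai-data
```

Once mounted, `mount | grep rdma` should show `proto=rdma` on the entry, confirming that reads and writes bypass the TCP stack on the client side.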
Why PowerScale is built for AI workloads
AI data is unstructured, continuously growing, and demands peak performance. PowerScale is purpose-built to meet these challenges head-on, making it a trusted solution for more than 1,500 customers running GPU workloads. Here’s why:
- Effortless scalability: Seamlessly expands from 8 TB to exabyte scale without interruptions.
- Unmatched efficiency: Offers industry-leading 2:1 data reduction guarantee.
- Proven leadership: 9 consecutive years as a Leader in the Gartner Magic Quadrant.
- Certified for security: DoD APL Certified for mission-critical environments.
- Protocol flexibility: Unified support for NFS, SMB, and S3 eliminates data silos.
- Smart data management: Cloud and archive tiering ensure hot data stays close to GPUs while optimizing costs.
- Peak GPU performance: GPUDirect + RDMA ensures GPUs are always fed, eliminating data bottlenecks.
Optimized for Retrieval-Augmented Generation (RAG)
PowerScale’s advanced capabilities shine in RAG workloads, where real-time data retrieval is critical.
- MetadataIQ: Extracts metadata from billions of files in near real time, enabling RAG indexers to refresh vector databases without the need for full scans.
- RAG connectors: Integrate directly with NVIDIA NeMo Retriever and NIM inference services over RDMA-enabled NFS or S3, bypassing slow ETL for near-instant retrieval.
With PowerScale, enterprises can:
- Move from ingestion to live knowledge in minutes.
- Run RAG search and AI training on the same dataset without migrations.
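The ingestion-to-live-knowledge flow above comes down to incremental refresh: instead of rescanning the full namespace, an indexer consumes only files changed since its last checkpoint and upserts their embeddings. This is a self-contained sketch of that pattern, where the toy hash-based embedding and in-memory index stand in for a real embedding model and vector database; it is not MetadataIQ or NeMo Retriever code:

```python
import hashlib

def toy_embed(text: str, dim: int = 8) -> list[float]:
    """Stand-in for a real embedding model: hash bytes into a fixed vector."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255.0 for b in digest[:dim]]

class VectorIndex:
    """Minimal in-memory stand-in for a vector database with upserts."""
    def __init__(self) -> None:
        self.vectors: dict[str, list[float]] = {}

    def upsert(self, doc_id: str, vector: list[float]) -> None:
        self.vectors[doc_id] = vector

def incremental_refresh(files: dict[str, tuple[float, str]],
                        index: VectorIndex,
                        last_checkpoint: float) -> int:
    """Re-embed only files modified after last_checkpoint.

    `files` maps path -> (mtime, content), mimicking a metadata change feed.
    Returns the number of documents refreshed.
    """
    refreshed = 0
    for path, (mtime, content) in files.items():
        if mtime > last_checkpoint:
            index.upsert(path, toy_embed(content))
            refreshed += 1
    return refreshed

# Example: only the file touched after the checkpoint is re-indexed.
files = {
    "/ifs/docs/a.txt": (100.0, "old report"),
    "/ifs/docs/b.txt": (250.0, "fresh findings"),
}
index = VectorIndex()
print(incremental_refresh(files, index, last_checkpoint=200.0))  # 1
```

The design choice is the same one the bullet describes: a change feed turns index maintenance from O(all files) per refresh into O(changed files), which is what makes near-real-time RAG over a large namespace tractable.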
Continuous performance evolution
PowerScale’s commitment to innovation has consistently delivered groundbreaking performance improvements, keeping it at the forefront of modern AI workloads. Testing of the latest all-flash AI-ready nodes demonstrates:
- Up to 220% faster data ingestion¹ for seamless scaling.
- Up to 99% faster data retrieval² to speed AI pipelines.
- Up to 3x write throughput per RU³ versus competitors for exceptional efficiency.
These improvements highlight PowerScale’s deliberate evolution, culminating in multiple TB/s of read throughput per cluster in a single namespace—delivering uncompromising performance, unbound intelligence, and unmatched safeguards for the most demanding AI environments.
The bottom line
PowerScale is more than storage. It’s an AI-native data platform—validated, tuned, and engineered for the data era, powering everything from foundation model training to real-time RAG inference. For enterprises, it’s the same reliability they’ve trusted for years, with the speed, scale, and intelligence of next-generation workloads.
Explore more about how two industry leaders – Dell PowerScale and NVIDIA – are shaping the future of AI infrastructure.
¹ Based on preliminary internal analysis of streaming write figures comparing the F600p running OneFS 9.5 versus the F710 running the latest version of OneFS, using 200GbE. December 2024.
² Based on preliminary internal analysis comparing the F600p running OneFS 9.5 versus the F710 running the latest version of OneFS, using 200GbE. December 2024.
³ Based on Dell internal testing of write throughput per node versus the closest flash-only competitor, May 2024. Performance rates based on FIO over a remote file system. Actual performance may vary.


