Scale Your AI Ambitions with Dell Storage and NVIDIA

Dell Lightning File System and Dell Exascale Storage fuel AI performance for the largest, most demanding environments.

Key takeaways: Extreme-scale AI rises or falls on data. Dell Lightning File System delivers ultrafast parallel file system storage to keep the largest AI factories fully utilized, while the Dell Exascale Storage architecture brings together Dell’s best-of-breed file, object and parallel file system software on the latest PowerEdge servers. Both innovations remove bottlenecks and simplify scaling AI on NVIDIA-accelerated platforms.


Advancing AI data access and orchestration at NVIDIA GTC

AI innovation is moving faster than ever, and nowhere is that more evident than at NVIDIA GTC 2026. As AI infrastructure becomes more powerful and models grow more complex, the question every AI factory must answer is simple: Can storage keep up with compute?

At this year’s GTC, Dell is addressing that challenge with two complementary innovations for extreme‑scale AI:

    • Dell Lightning File System, now available globally
    • A preview of Dell Exascale Storage, a software‑first storage architecture for AI

Together, they underpin the Dell AI Data Platform and the Dell AI Factory with NVIDIA, helping customers move from pilot projects to massive, production AI services—scaling AI without limits by transforming how data is accessed, moved and orchestrated across the AI factory.

Dell Lightning File System: Extreme performance for extreme‑scale AI

Available globally starting today, Dell Lightning File System (Lightning FS) is the world’s fastest parallel file system.¹ Engineered for extreme-scale AI training and inference, Lightning FS also delivers unsurpassed performance density, reducing hardware space and power requirements. Lightning FS complements Dell PowerScale and Dell ObjectScale as a storage accelerator for the most demanding AI workloads.

Where PowerScale and ObjectScale power the broader AI data lifecycle—including ingest, curation, feature stores, archives and a wide range of training and inference workloads—Lightning FS focuses on a single imperative: Keep accelerated compute fully utilized at massive scale by delivering predictable, high-throughput data access.

Lightning FS is designed for organizations at the outer edge of AI scale—typically Tier 2 cloud service providers and GPU‑as‑a‑Service providers running tens of thousands of GPUs or sustaining more than 4TB/sec. of aggregate throughput. In these environments, even small inefficiencies can lead to days of lost training time and millions of dollars in stranded GPU investment. Lightning FS is built to saturate large accelerated clusters and minimize training windows with up to 6TB/sec. per rack read performance across both random and sequential workloads,² so providers can deliver more AI capacity on the same footprint.
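As a rough illustration of the sizing math behind those figures, the sketch below estimates how many storage racks a GPU fleet would need. The per-GPU feed rate here is an assumed planning figure for illustration only, not a Dell or NVIDIA specification; real requirements depend on model, framework and checkpoint behavior.

```python
import math

# Illustrative assumptions (not vendor specifications):
# - each GPU needs ~2 GB/s of sustained read bandwidth during training
# - one storage rack delivers up to 6 TB/s of read throughput
def racks_needed(gpu_count: int,
                 gb_per_gpu: float = 2.0,
                 tb_per_rack: float = 6.0) -> int:
    """Minimum number of storage racks to keep the fleet fed."""
    aggregate_tb = gpu_count * gb_per_gpu / 1000  # GB/s -> TB/s
    return math.ceil(aggregate_tb / tb_per_rack)

# Example: a 20,000-GPU cluster at 2 GB/s per GPU needs 40 TB/s
# of aggregate read throughput, or 7 racks at 6 TB/s per rack.
print(racks_needed(20_000))  # -> 7
```

Plugging in your own per-GPU bandwidth assumption makes it easy to see why, at this scale, throughput per rack directly translates into footprint, power and stranded-GPU cost.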

Fabric‑bound architecture with direct NVMe access

Unlike legacy parallel file systems that depend heavily on CPU‑bound controller nodes and complex write caching, Lightning FS uses a fabric‑bound architecture with direct NVMe access and redirect‑on‑write semantics. This design avoids common bottlenecks such as:

    • Overloaded metadata or controller nodes
    • Cache thrashing under mixed I/O profiles
    • Performance cliffs as clusters grow

The result is near-line-speed efficiency for both sequential and random reads, even under highly concurrent AI workloads. Lightning FS is engineered to deliver maximum throughput for mixed I/O patterns and to scale linearly as you add nodes, helping teams plan bandwidth and GPU feeding rates with confidence. Based on Dell internal testing, Lightning FS can deliver up to 2x greater throughput per rack unit than competing parallel file systems³ and up to 20x greater performance than traditional flash-only scale-out file competitors.⁴

Because Lightning FS provides direct data access to NVMe devices rather than relying on large, fragile caches, it maintains consistent performance across mixed workloads. This helps minimize data stalls, reduce tail latencies and keep GPU utilization high—whether you are training frontier models, fine‑tuning customer‑specific variants or powering large‑scale inference.

Integrated with NVIDIA‑based AI architectures

Lightning FS is designed to integrate cleanly into NVIDIA‑based AI infrastructures, including deployments aligned with the Dell AI Factory with NVIDIA. Building on earlier work under Project Lightning, Dell and NVIDIA have collaborated on high‑performance inferencing solutions that combine Dell storage with NVIDIA KV cache and NVIDIA NIXL libraries to accelerate large‑scale LLM inference.

As part of that blueprint, Lightning FS helps customers saturate massive server farms and GPU clusters, enabling breakthrough performance for both training and inference while lowering integration risk. And because Lightning FS is software‑defined on qualified Dell PowerEdge servers, customers can take advantage of new server and fabric generations over time without re‑architecting their data layer.

Dell Exascale Storage: software‑first storage for the largest AI environments

Dell Exascale Storage, the only 3-in-1 storage built for extreme-scale AI and HPC,⁵ gives IT teams the flexibility to deploy Dell’s best-of-breed file, object and parallel file system storage software on the latest Dell PowerEdge servers. Customers can allocate PowerScale, ObjectScale and/or Lightning File System storage resources on a common hardware platform to support the most demanding AI and HPC environments, including high-frequency trading and neoclouds. Instead of standing up separate storage appliances for each service or protocol, organizations can use this architecture to run multiple storage personalities on the same high-powered PowerEdge designs optimized for AI training and inference.

A flexible way to run file, object and parallel file

Under the covers, Dell Exascale Storage provides:

    • A common, globally scalable architectural foundation.
    • A set of software-defined AI storage capabilities—including our proven PowerScale, ObjectScale and Lightning File System—delivered under one operating and control pattern.
    • A single model that can span capacity and archive, primary data services and ultra-low-latency AI training and inference workload tiers.
    • Up to 150GB/second per rack unit read performance,⁶ delivering high throughput to keep GPUs fed and reduce I/O bottlenecks in demanding AI workloads.
    • Planned network connectivity of up to 800GbE, enabled by support for NVIDIA ConnectX-8 and ConnectX-9 SuperNICs, delivering high‑bandwidth, low‑latency data paths.
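
To put the throughput and network figures side by side, here is a quick back-of-the-envelope conversion. It assumes the usual 8 bits per byte and ignores protocol overhead, so treat it as a planning sketch rather than a sizing guarantee.

```python
import math

def links_needed(gb_per_sec: float, link_gbit: float = 800.0) -> int:
    """How many Ethernet links of a given speed carry a GB/s target."""
    link_gb_per_sec = link_gbit / 8  # 800 Gb/s ~= 100 GB/s per link
    return math.ceil(gb_per_sec / link_gb_per_sec)

# 150 GB/s per rack unit over 800GbE: two links, before protocol overhead.
print(links_needed(150))  # -> 2
```

In practice you would add headroom for protocol overhead and redundancy, but the arithmetic shows why 800GbE-class connectivity matters at this performance density.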

For large providers, Dell Exascale Storage is designed to improve hardware utilization and TCO by allowing capacity and performance to be repurposed across tenants, regions and services, rather than locked into isolated silos.

Organizations can turn storage into a programmable resource that can be orchestrated alongside compute and networking. As AI stacks become more composable—mixing foundation models, vector databases, KV cache layers and streaming pipelines—teams can focus on building services rather than rebuilding infrastructure. The architecture provides the flexibility multimodal AI workloads require as models increasingly combine text, images, video and sensor data.

Advancing data access and orchestration

With Dell Lightning FS and Dell Exascale Storage, we continue expanding the AI storage innovation at the heart of the Dell AI Data Platform and Dell AI Factory with NVIDIA. In our storage portfolio:

    • Lightning FS serves as extreme‑performance scratch and training/inference storage for environments at the very top end of AI scale.
    • Exascale provides the software-first architecture that lets the largest providers run and shift among Lightning FS, PowerScale and ObjectScale storage personalities on the same high-performance PowerEdge designs.

Join us at NVIDIA GTC

For organizations building the next generation of AI services—from foundation model platforms to industry‑specific AI clouds—Dell Lightning File System and Dell Exascale Storage help you truly scale AI without limits. We invite you to join us at NVIDIA GTC 2026, in person or online, to learn more.

Availability:

  • Dell Lightning File System: available globally today
  • Dell Exascale Storage: targeted for availability in early H2 CY2026

¹Based on Dell preliminary testing comparing random and sequential throughput per rack unit, May 2025. Actual performance may vary.
²Based on internal analysis of sequential and random read I/O, Feb. 2026. Actual results may vary.
³Based on Dell preliminary testing comparing random and sequential throughput per rack unit, May 2025. Actual performance may vary.
⁴Based on Dell internal testing comparing IOPs performance per node, Mar. 2026. IOPs rates based on FIO over a remote file system. Actual performance may vary.
⁵Based on publicly available documentation from leading enterprise storage vendors as of March 2026. Comparison refers to distinct file, object, and parallel‑file engines on one reusable hardware platform, excluding single‑engine multi‑protocol designs.
⁶Based on internal analysis of sequential and random read I/O, Feb. 2026. Actual results may vary.

About the Author: David Noy

David Noy is a 25-year veteran of the storage and data management industry with deep, hands-on expertise in data center infrastructure, enterprise and cloud data storage, and solutions for artificial intelligence. After more than a decade directing engineering organizations—and subsequent leadership of high-impact product management and technical marketing teams—he has shaped flagship portfolios at Dell Technologies, NetApp, Veritas, Cohesity, and VAST Data. He has been the global executive leader for enterprise product lines recognized by Gartner as #1 in their category.
As Vice President of Product Management for Unstructured Data Solutions at Dell Technologies, David oversees the end-to-end strategy for enterprise, high-performance computing, and artificial intelligence workloads. This includes responsibility for the Dell AI Data Platform, which includes both data engines and storage engines. His remit spans product conception, roadmap execution, and go-to-market alignment—delivering infrastructure that not only scales but also integrates advanced data management, cyber resilience, and hybrid cloud capabilities into a single, coherent platform.
Industry context
• Explosive growth of unstructured data: AI, edge telemetry, and rich media are driving compound annual growth of more than 25%, demanding file/object architectures that scale linearly and economically.
• Hybrid and multi-cloud deployments: Enterprises now treat cloud as an operating model, not a destination; seamless data mobility and consistent policy enforcement are table stakes.
• AI and GPU acceleration: Modern AI pipelines require parallel file and object stores that can saturate the latest high-speed networks while guaranteeing metadata efficiency.
• Cyber resilience and compliance: Immutable snapshots, object lock, and zero-trust architectures have become mandatory in the face of ransomware and evolving data sovereignty laws.
David’s track record of shipping innovative, enterprise-grade solutions at global scale directly aligns with these trends, positioning him to lead the next wave of file and object innovation that accelerates customers’ digital transformation and AI ambitions—on premises, at the edge, and in the cloud.