Dell AI Data Platform Innovations Announced at SuperComputing 2025

Unlock massive AI scalability with Dell PowerScale and pNFS. Discover how parallel data access delivers faster throughput and client-level scalability to feed GPUs for model training and ML/DL workloads.

TL;DR: Dell Technologies introduces major updates to its AI Data Platform, featuring advanced storage engine innovations. These enhancements boost performance, scalability, and efficiency, empowering organizations to transform data into actionable insights and accelerate AI initiatives. Your guide to the Dell AI Factory starts here.


Driving progress with AI-ready data innovations

This week at SuperComputing (SC) 2025, Dell Technologies is introducing a new wave of innovations designed to power human progress. We are announcing powerful updates to help you turn your data into strategic assets and accelerate your AI initiatives. These announcements enhance how organizations manage, secure, and scale their data, providing the right infrastructure to separate leaders from laggards.

At the heart of our strategy is the Dell AI Data Platform (DAIDP), a comprehensive solution that integrates with the NVIDIA AI Data Platform reference design and is built to support the entire data lifecycle for analytics and AI applications. The DAIDP, a key component of the Dell AI Factory, unifies complex data workflows at scale, helping your teams transform raw data into valuable outcomes. It is built on four key pillars: powerful data engines, high-performance storage engines, integrated cyber resilience, and expert professional services.

This year, we’re raising the bar for data infrastructure with groundbreaking advancements to our storage engines, Dell PowerScale and ObjectScale. These innovations are designed to help you harness the full potential of your data by delivering new levels of performance, flexibility, and efficiency for every analytics and AI workload.

Dell storage engines: Fueling performance and flexibility for AI

Dell PowerScale and ObjectScale, driving forces within the AI Data Platform, empower organizations to extract real value from data-intensive operations. With this latest round of innovations, our storage engines portfolio is elevating how businesses handle expansive and demanding AI workloads.

1. PowerScale and ObjectScale: Accelerating AI Inferencing with Scalable KV Cache Offloading

As AI models grow in scale, traditional inference methods hit memory capacity limits, slowing performance and driving up costs. Dell’s scalable KV Cache offloading, powered by PowerScale and ObjectScale, overcomes these challenges by shifting cache workloads to high-performance storage, thereby freeing up GPU resources and accelerating AI inferencing.
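The arithmetic behind that memory pressure is easy to sketch. The back-of-the-envelope Python below (our own illustration, not part of Dell's benchmark) uses Llama-3.3-70B's public architecture to show that a single full-context sequence consumes roughly 40 GiB of KV cache:

```python
# Back-of-the-envelope KV cache sizing for Llama-3.3-70B from its public
# architecture (80 layers, grouped-query attention with 8 KV heads, head
# dimension 128) at fp16 precision. Keys and values are cached per layer,
# per KV head, for every token in the context.
LAYERS, KV_HEADS, HEAD_DIM, FP16_BYTES = 80, 8, 128, 2

def kv_cache_bytes(tokens: int) -> int:
    return 2 * LAYERS * KV_HEADS * HEAD_DIM * FP16_BYTES * tokens  # 2 = K and V

print(f"{kv_cache_bytes(1) / 1024:.0f} KiB per token")           # ~320 KiB
print(f"{kv_cache_bytes(131_072) / 2**30:.0f} GiB per sequence")  # ~40 GiB
```

With tensor parallelism the cache is sharded across GPUs, but even a handful of concurrent long-context requests exhausts HBM, which is exactly the bottleneck that offloading to external storage targets.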

We benchmarked our vLLM + LMCache + NVIDIA NIXL stack to measure Time to First Token (TTFT) performance with a 100% KV Cache hit rate. This scenario isolates the efficiency of retrieving a fully prepopulated KV Cache from external storage. The tests were run on 4x NVIDIA H100 GPUs across Dell storage backends. We compared our storage offloading solution to a baseline where the KV Cache is recomputed from scratch on the GPU, using the LLaMA-3.3-70B Instruct model, with Tensor Parallelism=4.
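For readers who want a sense of how such a stack fits together, here is a minimal sketch of handing vLLM's KV cache to LMCache through vLLM's KV-connector interface. It assumes recent open-source releases of both projects; the connector name and LMCache settings are version-dependent assumptions, and the NIXL and storage-backend configuration used in Dell's benchmark is not shown:

```python
# Minimal sketch: route vLLM's KV cache through LMCache via the KV-connector
# interface. Connector and field names follow recent open-source releases and
# may differ by version; the NIXL/storage-backend plumbing is configured
# separately in LMCache.
import os
from vllm import LLM
from vllm.config import KVTransferConfig

os.environ["LMCACHE_CHUNK_SIZE"] = "256"   # tokens per cached chunk (assumed)
os.environ["LMCACHE_LOCAL_CPU"] = "False"  # bypass the CPU tier (assumed)

llm = LLM(
    model="meta-llama/Llama-3.3-70B-Instruct",
    tensor_parallel_size=4,                 # matches the TP=4 setup above
    max_model_len=131_072,                  # the full 131K context window
    kv_transfer_config=KVTransferConfig(
        kv_connector="LMCacheConnectorV1",  # LMCache's connector for vLLM v1
        kv_role="kv_both",                  # both save and load KV blocks
    ),
)
print(llm.generate("Summarize the run logs.")[0].outputs[0].text)
```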

Dell storage engines, PowerScale and ObjectScale, delivered a 1-second TTFT at a full context window of 131K tokens.1 This represents a 19x improvement over the standard vLLM configuration,2 which took over 17 seconds at the same context size.

We also ran follow-up testing focused on multi-turn inferencing, where responses become part of future prompts and context accumulation becomes a dominant scalability challenge. As LLMs increasingly rely on multi-turn interactions, evaluating these workloads is essential for understanding real-world performance and system scalability. Testing showed 2.87x higher tokens-per-second inference throughput compared to the baseline vLLM configuration.3
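A toy calculation, with assumed per-turn token counts, makes the accumulation effect concrete: without cache reuse, every turn re-prefills the entire conversation so far, so recomputation grows quadratically while the context itself grows only linearly:

```python
# Toy model of multi-turn context growth, with assumed per-turn token counts.
# Baseline vLLM must re-prefill the whole accumulated history each turn; a KV
# cache served from storage lets the GPU skip that recomputation.
PROMPT_TOKENS, ANSWER_TOKENS = 200, 300   # assumed sizes, for illustration

history = 0        # tokens of accumulated conversation context
reprefilled = 0    # tokens recomputed across all turns without cache reuse
for turn in range(1, 9):
    reprefilled += history + PROMPT_TOKENS    # prefill = history + new prompt
    history += PROMPT_TOKENS + ANSWER_TOKENS  # the answer joins the context
print(f"context after 8 turns: {history} tokens")          # 4,000
print(f"prefill tokens recomputed without reuse: {reprefilled}")  # 15,600
```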

This validated approach removes memory capacity bottlenecks and enables support for large-scale LLMs with greater efficiency. By integrating Dell's high-performance storage engines with NVIDIA NIXL, part of NVIDIA Dynamo, the Dell AI Data Platform empowers organizations to operationalize AI at scale.

2. PowerScale: Massive Scalability with Parallel NFS Support

PowerScale will soon support Parallel NFS (pNFS) with Flexible File Layout. This feature allows clients to connect directly to multiple data servers across a PowerScale cluster. The result is dramatically enhanced throughput, as parallel data streams enable much faster data ingestion and access. For organizations running high-throughput AI workloads, pNFS delivers true linear scalability, enabling your data infrastructure to handle growing workloads with ease. To dive deeper into how this works, explore our detailed blog.

Notably, pNFS Flex-Files are natively supported in most modern Linux distributions, so no additional client software is required: deployment is fast and hassle-free, and the cluster remains simple to manage, resilient, and ready to scale as your data and AI ambitions expand.
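As a quick client-side sanity check, the sketch below (with illustrative export and mount paths) mounts an export over NFSv4.2 and reads the kernel's per-mount statistics, where a nonzero LAYOUTGET counter confirms the client is fetching pNFS layouts:

```python
# Client-side check for a pNFS flex-files mount (illustrative paths; mounting
# requires root). The stock Linux NFS client negotiates pNFS on its own when
# an NFSv4.2 server offers layouts; LAYOUTGET counters in the kernel's
# per-mount statistics show that layouts are actually being fetched.
import subprocess

EXPORT = "powerscale.example.com:/ifs/data"   # assumed export path
MOUNTPOINT = "/mnt/ai-data"                   # assumed mount point

subprocess.run(
    ["mount", "-t", "nfs", "-o", "vers=4.2", EXPORT, MOUNTPOINT],
    check=True,
)

with open("/proc/self/mountstats") as stats:
    for line in stats:
        if line.strip().startswith("LAYOUTGET:"):
            ops = int(line.split(":")[1].split()[0])
            print(f"LAYOUTGET operations so far: {ops}")  # >0 once pNFS is in use
```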

3. PowerScale: Instant Performance Gains with the Latest in Hardware Technology

Following our software-defined ObjectScale announcement, PowerScale will also soon be available as an independent software license on qualified PowerEdge servers. By decoupling software from hardware, organizations not only gain flexibility to adopt the latest hardware technology and network advancements as soon as they’re available but also unlock substantial gains in performance and efficiency. Based on hardware and software upgrades, we expect PowerScale to deliver performance gains of up to 50% compared to current levels.4

The Dell PowerEdge R7725xd, the first PowerScale-qualified server, features dual AMD processors, PCIe Gen 5 architecture, and 400 GbE high-speed connectivity, and is engineered to deliver these gains. This approach empowers you to process data faster and scale your infrastructure seamlessly as needed, putting real, measurable results at the heart of your AI strategy.

4. ObjectScale: Introducing AI-Optimized Search with S3 Tables and S3 Vector

Our newly announced S3 Tables and S3 Vector APIs will let you access, analyze, and act on complex data at remarkable speed, without unnecessary data movement.

S3 Tables provides a breakthrough for managing structured data within object storage. You can store and query massive datasets using familiar SQL-based analytics tools like Spark and Trino right where your data lives. This makes it easy for teams to run ad-hoc analysis or build AI training sets at scale, eliminating bottlenecks and minimizing infrastructure costs. Testing on ObjectScale shows up to 2x faster ingestion and up to 4.5x faster queries, depending on workload and dataset.5
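As an illustration, here is a minimal PySpark sketch of that kind of in-place query. It assumes the S3 Tables catalog is reachable over the Iceberg REST protocol; the endpoint, catalog, and table names are placeholders rather than the product's actual interface:

```python
# Minimal PySpark sketch of in-place SQL over an S3 Tables dataset. Endpoint,
# namespace, and table names are placeholders; it is an assumption that the
# catalog speaks the Iceberg REST protocol -- consult the ObjectScale
# documentation for the actual connection details.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("s3-tables-adhoc-analytics")
    # Pull in the Iceberg runtime for Spark 3.5 / Scala 2.12.
    .config("spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.0")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "rest")
    .config("spark.sql.catalog.lake.uri", "https://objectscale.example.com/catalog")
    .getOrCreate()
)

# Ad-hoc analysis right where the data lives -- no copy into a warehouse.
spark.sql("""
    SELECT label, COUNT(*) AS examples
    FROM lake.training.samples
    GROUP BY label
    ORDER BY examples DESC
""").show()
```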

S3 Vector brings advanced, AI-native search to ObjectScale. As vector search transforms how organizations find meaning in unstructured data, S3 Vector makes it simpler and more efficient. By allowing you to store vector indices alongside your objects, it eliminates the need for separate vector database software and licenses. Applications can now perform lightning-fast similarity searches directly on petabyte-scale data, with seamless connections to frameworks like LangChain. Preliminary results show sub-second query performance when searching billions of vectors.6
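For a flavor of what an application-side query might look like, here is a hedged sketch assuming an S3 Vectors-style API compatible with the boto3 "s3vectors" client; the endpoint, bucket and index names, and response fields are all assumptions, and ObjectScale's actual interface may differ:

```python
# Hedged sketch of a similarity query against an assumed S3 Vectors-style API
# (modeled on the boto3 "s3vectors" client). Endpoint, names, and response
# fields are assumptions; a real query vector would come from the same
# embedding model used to index the objects.
import boto3

s3v = boto3.client("s3vectors", endpoint_url="https://objectscale.example.com")

resp = s3v.query_vectors(
    vectorBucketName="ai-embeddings",              # hypothetical vector bucket
    indexName="docs-index",                        # hypothetical vector index
    queryVector={"float32": [0.12, -0.08, 0.33]},  # toy 3-dim embedding
    topK=5,
    returnDistance=True,
)
for match in resp.get("vectors", []):
    print(match["key"], match.get("distance"))
```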

Together, these ObjectScale enhancements simplify the management, storage, and discovery of ever-expanding datasets. They break down silos and empower smarter, faster decision-making, so your teams can spend less time wrangling data and more time making an impact.

Meet the experts at SC 2025

Heading to SC 2025? Be sure to stop by the Dell Technologies booth to connect directly with our experts to learn more about these announcements. This is your chance to ask questions, see hands-on demos of our latest advancements in the Dell AI Data Platform, and explore how these innovations can support your AI projects. Discover, learn, and engage with the people driving progress—let’s shape what’s possible, together.

For more information about how Dell is helping customers on their AI journey, visit our website.


1Based on internal Dell Technologies testing using the LLaMA-3.3-70B Instruct model with Tensor Parallelism=4. Tests measured Time to First Token (TTFT) performance with a 100% KV Cache hit rate. Actual results may vary. November 2025.
2Based on internal Dell Technologies testing using the LLaMA-3.3-70B Instruct model with Tensor Parallelism=4. Tests measured Time to First Token (TTFT) performance with a 100% KV Cache hit rate, comparing Dell's vLLM + LMCache + NVIDIA NIXL stack on PowerScale and ObjectScale storage to a baseline standard vLLM configuration. Actual results may vary. November 2025.
3Based on internal Dell Technologies testing using the LLaMA-3.3-70B Instruct model with Tensor Parallelism=4. Tests measured tokens-per-second (TPS) throughput using the LMBenchmark multi-turn inference suite, comparing Dell's vLLM + LMCache + NVIDIA NIXL stack on PowerScale and ObjectScale storage to a baseline configuration using standard vLLM with GPU memory-only caching. Actual results may vary. November 2025.
4Based on expected hardware and software optimizations. November 2025.
5Based on Dell internal ObjectScale S3 Tables testing, September 2025. Actual results may vary.
6Based on Dell internal ObjectScale S3 Vector testing using Dell's proprietary storage-optimized index, September 2025. Actual results may vary.

About the Author: David Noy

David Noy is a 25-year veteran of the storage and data management industry with deep, hands-on expertise in data center infrastructure, enterprise and cloud data storage, and solutions for artificial intelligence. After more than a decade directing engineering organizations, followed by leadership of high-impact product management and technical marketing teams, he has shaped flagship portfolios at Dell Technologies, NetApp, Veritas, Cohesity, and VAST Data. He has been the global executive leader for enterprise product lines recognized by Gartner as #1 in their category.

As Vice President of Product Management for Unstructured Data Solutions at Dell Technologies, David oversees the end-to-end strategy for enterprise, high-performance computing, and artificial intelligence workloads, including responsibility for the Dell AI Data Platform and its data engines and storage engines. His remit spans product conception, roadmap execution, and go-to-market alignment, delivering infrastructure that not only scales but also integrates advanced data management, cyber resilience, and hybrid cloud capabilities into a single, coherent platform.

Industry context:

• Explosive growth of unstructured data: AI, edge telemetry, and rich media are driving compound annual growth of more than 25%, demanding file/object architectures that scale linearly and economically.
• Hybrid and multi-cloud deployments: Enterprises now treat cloud as an operating model, not a destination; seamless data mobility and consistent policy enforcement are table stakes.
• AI and GPU acceleration: Modern AI pipelines require parallel file and object stores that can saturate the latest high-speed networks while guaranteeing metadata efficiency.
• Cyber resilience and compliance: Immutable snapshots, object lock, and zero-trust architectures have become mandatory in the face of ransomware and evolving data sovereignty laws.

David's track record of shipping innovative, enterprise-grade solutions at global scale directly aligns with these trends, positioning him to lead the next wave of file and object innovation that accelerates customers' digital transformation and AI ambitions: on premises, at the edge, and in the cloud.