Break Through AI with Data

Inside the Dell AI Data Platform Event

Your AI ambition is clear: build smarter products, uncover new efficiencies, and deliver better outcomes. But progress often stalls. Starved GPUs waste budgets and extend training cycles. Fragmented data leads to unreliable results. And vendor lock-in slows your ability to adopt the next wave of innovation. The fastest path to results is eliminating these data bottlenecks.

At the Dell AI Data Platform Event, we unveiled key innovations designed to help you move from AI pilot to production with greater speed and confidence while working with the best data at petabyte scale. By pairing high-throughput storage with open data engines, you can cut your time-to-model and reduce the total cost of ownership for your AI infrastructure.

An open, modular, and secure data foundation

Figure 1: The Dell AI Data Platform architecture

The Dell AI Data Platform is a comprehensive solution built to turn distributed data into a strategic asset. It rests on four core building blocks that provide a resilient, high-performance foundation for your most demanding AI workloads.

    • Storage Engines: Dell PowerScale and Dell ObjectScale deliver the high-throughput, scalable data access essential for training, fine-tuning, inference, and retrieval-augmented generation (RAG) workflows. They ensure your data is always available when and where your AI workloads need it.
    • Data Engines: The Dell Data Analytics Engine, Dell Data Processing Engine, and Dell Data Search Engine are specialized tools that help you organize and enrich metadata, query data in place, and activate your organization’s organic information together with the IoT, machine, and unstructured data created outside your business. They transform raw data into discoverable, query-ready data products that fuel smarter AI.
    • Cyber Resiliency: With real-time threat detection, robust governance, and intrinsic security, we help protect the integrity of your AI data pipelines. You can only trust your AI outcomes if you can trust your data.
    • Professional Services: We provide a strong foundation to make data AI-ready and keep it that way throughout its lifecycle, from ingest to archive.

PowerScale: Fueling AI workloads with unmatched efficiency

To accelerate AI from pilot to production, your storage must keep pace with the world’s most powerful GPUs. Dell PowerScale continues to raise the bar for AI-ready storage, now with NVIDIA Cloud Partner Program (NCP) certification for NVIDIA GB200 and GB300 NVL72 platforms. This validation provides a proven, optimized storage foundation for enterprises and cloud service providers building everything from small-scale GPU clusters to massive AI factories.

The PowerScale F710 system delivers exceptional performance with remarkable efficiency. In a reference environment supporting over 16,000 GPUs, PowerScale delivers the required storage in just 168 rack units.1 This allows you to scale AI within tight power and space constraints, using up to 88% fewer backend switches and consuming up to 72% less power than alternative solutions.1 The result is simplified infrastructure, lower operational overhead, and a smarter way to maximize every dollar spent on GPU resources.

With PowerScale, you can unlock the full potential of your GPUs, accelerate AI innovation, and achieve better outcomes—all while reducing costs and complexity.

ObjectScale: Extreme performance for modern AI workloads

As unstructured datasets grow, you need an object storage platform built for the speed and scale of modern AI. The upcoming Dell ObjectScale software release sets a new benchmark for AI-ready object storage. Its breakthrough S3-over-RDMA technology (initially available in Tech Preview) accelerates AI and high-volume workloads with up to 230% higher throughput, 80% lower latency, and 98% lower CPU usage compared to traditional S3.2 This helps accelerate RAG indexing and cuts compute costs significantly.

Built on a next-generation architecture, ObjectScale sustains up to 40 GiB/second ingest per node,3 future-proofing your infrastructure for massive datasets, from AI training data to rich media archives. Performance for smaller object workloads also improves, ensuring consistently fast access at scale. With planned support for next-generation drives scaling to 122 TB, ObjectScale enables multi-petabyte capacity in a smaller footprint, further reducing operational costs.

Data Engines: From raw data to real-time intelligence

The Dell Data Engines are engineered to transform raw, distributed data into real-time AI outcomes. Through deep engineering collaboration, we are simplifying how organizations prepare, search, and activate data for retrieval-augmented generation (RAG), analytics, and generative AI.

To make discovery intuitive, the new Dell Data Search Engine—developed with Elastic—speeds decision-making by letting teams interact with data as naturally as asking a question. Built for RAG and semantic search, it integrates with MetadataIQ to search billions of files on PowerScale and ObjectScale. Developers can build smarter RAG apps in frameworks like LangChain, saving compute time by ingesting only updated files to keep vector databases current.
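
To make the incremental-ingest pattern concrete, here is a minimal sketch using LangChain with an Elasticsearch vector store. It is illustrative only: the endpoint, index name, mount path, and embedding model are placeholder assumptions, not the Data Search Engine’s own interfaces.

```python
# Illustrative sketch: incremental RAG ingest with LangChain + an Elasticsearch
# vector store. The endpoint, index, paths, and model below are hypothetical.
import time
from pathlib import Path

from langchain_community.document_loaders import TextLoader
from langchain_elasticsearch import ElasticsearchStore
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

ES_URL = "https://search.example.internal:9200"  # hypothetical search endpoint
INDEX = "powerscale-docs"                        # hypothetical index name
DATA_DIR = Path("/mnt/powerscale/exports/docs")  # hypothetical NFS mount
LAST_RUN = time.time() - 24 * 3600               # only re-ingest files changed in the last day

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
store = ElasticsearchStore(es_url=ES_URL, index_name=INDEX, embedding=embeddings)
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)

# Re-embed only files modified since the last run, keeping the vector index
# current without reprocessing the whole corpus.
for path in DATA_DIR.rglob("*.txt"):
    if path.stat().st_mtime <= LAST_RUN:
        continue
    chunks = splitter.split_documents(TextLoader(str(path)).load())
    store.add_documents(chunks)

# Retrieval side: expose the index to a RAG chain as a retriever.
retriever = store.as_retriever(search_kwargs={"k": 5})
print(retriever.invoke("How do I keep my vector index current?"))
```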

Building on that, our integration with NVIDIA cuVS on the Dell AI Data Platform delivers the next leap in vector search performance. This brings GPU-accelerated hybrid search to the Data Search Engine, enabling faster insights with the security of full on-premises control. IT teams get a fully integrated, ready-to-deploy solution to scale GPU-powered search right out of the box.
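
The “hybrid” in hybrid search typically means fusing a keyword ranking with a vector-similarity ranking. Purely to illustrate that idea, here is one common fusion method, reciprocal rank fusion, sketched in plain Python; it does not call cuVS, GPUs, or any Dell API, and the document IDs are made up.

```python
# Illustrative only: reciprocal rank fusion (RRF), a common way to merge the
# lexical and vector rankings behind a hybrid search result list.
from collections import defaultdict


def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge several ranked lists of document IDs into one fused ranking.

    Each document earns 1 / (k + rank) from every list it appears in; a larger
    combined score means a higher fused rank. k dampens the effect of low ranks.
    """
    scores: defaultdict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


# Example: a BM25 keyword ranking and an ANN vector ranking over the same corpus.
bm25_hits = ["doc-42", "doc-7", "doc-13", "doc-99"]
vector_hits = ["doc-7", "doc-99", "doc-42", "doc-55"]
print(reciprocal_rank_fusion([bm25_hits, vector_hits]))
# doc-7 and doc-42 rise to the top because both rankers agree on them.
```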

Complementing search, the Dell Data Analytics Engine—powered by Starburst—enables federated analytics across databases, warehouses, and lakehouses without moving data. It unifies data in open formats like Apache Iceberg and Delta Lake, ensuring consistent governance from preparation to inference. To operationalize this end to end, the MCP Server enables AI agents and workflows to securely access high-quality data products from distributed sources, accelerating the path from ingestion to actionable insight.
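
To illustrate what federated, in-place querying can look like, the sketch below issues a single SQL join across an Iceberg catalog and a PostgreSQL catalog through the open-source Trino client, the engine Starburst builds on. The coordinator host, catalogs, schemas, and table names are hypothetical placeholders, not a documented Dell configuration.

```python
# Illustrative sketch: a federated join executed where the data lives, via the
# Trino Python client (Starburst is built on Trino). All names are placeholders.
import trino

conn = trino.dbapi.connect(
    host="analytics.example.internal",  # hypothetical Starburst/Trino coordinator
    port=443,
    user="analyst",
    http_scheme="https",
    catalog="iceberg",  # Iceberg tables kept on object storage
    schema="sales",
)

# Join an Iceberg fact table with a reference table that lives in PostgreSQL,
# without copying either dataset out of its source system.
sql = """
SELECT r.region_name,
       SUM(o.order_total) AS revenue
FROM   iceberg.sales.orders AS o
JOIN   postgresql.crm.regions AS r
  ON   o.region_id = r.region_id
WHERE  o.order_date >= DATE '2025-01-01'
GROUP  BY r.region_name
ORDER  BY revenue DESC
"""

cur = conn.cursor()
cur.execute(sql)
for region_name, revenue in cur.fetchall():
    print(region_name, revenue)
```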

Why Dell’s approach is different

    • Open and Modular: Our unbundled architecture lets you scale storage and data engines independently, adopt new technologies faster, and avoid vendor lock-in.
    • GPU-Efficient Throughput: We focus on delivering line-rate performance from storage to GPUs, eliminating I/O bottlenecks that waste expensive compute cycles.
    • Federated Access Without Data Copies: Query and analyze data where it lives, reducing complexity, cost, and governance risks associated with data duplication.
    • Enterprise-Grade Resilience: With built-in security and data protection, you can confidently deploy mission-critical AI applications on a platform designed for integrity and availability.

Some platforms bundle AI services directly into their storage hardware. This can seem simple at first, but it often leads to CPU contention, firmware-gated roadmaps, and delays while waiting for monolithic upgrades. We believe a separated, open, and optimized approach is the key to scaling AI with velocity.

Break through AI with data

If you are serious about scaling AI, your data layer cannot be an afterthought. The Dell AI Data Platform delivers the throughput, search, and query capabilities your pipelines demand, paired with the freedom to choose the best tools as your needs evolve. Feed your GPUs faster, ground your models in real-time knowledge, and scale with confidence.

Ready to see innovation in action? Catch the Dell AI Data Platform Event on demand and explore what’s possible: https://www.thecube.net/events/dell/dell-ai-data-platform-event.


Disclaimers

1Based on Dell internal analysis of NVIDIA-validated reference designs for 64 SU configurations that adhere to the NVIDIA Cloud Platform Reference Architecture specification for high-performance storage, August 2025.
2Based on Dell internal ObjectScale S3 over RDMA testing, May 2025. Actual results may vary.
3Based on Dell analysis of ObjectScale 4.2 on PowerEdge R7725xd for object ingest speed, August 2025. Actual results may vary.

About the Author: Prasanna Vijayakumar

Prasanna Vijayakumar is a Senior Product Marketing Manager for Dell’s Unstructured Data Solutions portfolio. In this role, he addresses a variety of topics, including the Dell AI-Ready Data Platform, cost optimization, productivity enhancement, and business growth acceleration. He draws on extensive experience in product and solutions marketing to tackle IT challenges and streamline data center management, with expertise spanning SaaS, IaaS, XaaS, Edge, Cloud, Analytics, Public and Private Cloud, MEC, IP Network Transformation, and IMS-related technologies. His mission is to help businesses enhance their market leadership and customer satisfaction by delivering compelling and differentiated value propositions and stories.