At Dell Technologies, we believe in empowering organizations to unlock the full potential of artificial intelligence. Demonstrating our commitment to innovation, we’ve updated the Dell AI Platform with AMD four times in less than a year, seamlessly combining advanced technology with open-source flexibility. With performance, scalability and efficiency at the forefront, this update is designed to meet the evolving needs of your organization while supporting innovation on your terms.
Enhanced by Collaboration
Since its launch in July last year, the Dell AI Platform with AMD has seen ongoing enhancements, delivering ready-to-scale AI infrastructure that accelerates deployment, simplifies setup and drives impactful outcomes. The enhanced collaboration between Dell and AMD means we can speed up time to value, supporting the latest models like Llama 4 on Day 0 with performance-optimized containers available on the Dell Enterprise Hub on Hugging Face. Here is a look at some of the platform’s key advancements:
Upgraded AMD ROCm Software Stack
Dell’s commitment to open-source technology remains a driving force behind our innovation, as shown by the upgrade to the AMD ROCm 6.3.1 stack. Since rollout, the ROCm stack has introduced significant performance improvements, including better GPU utilization and faster processing speeds, enabling organizations to tackle complex AI workloads with ease. Designed to provide developers with flexibility and compatibility, this platform empowers organizations to tailor their AI solutions to their unique needs. Additionally, with ROCm 6.3.1, enterprises can now support multiple large language models while optimizing tool sets and data management for greater efficiency.
200GbE Storage Networking for a Performance Boost
As the demands on AI systems continue to grow, the need for faster and more efficient data transfer has become paramount. New 200GbE storage networking directly addresses these challenges by delivering enhanced data throughput and reduced bottlenecks. This upgrade ensures organizations can handle the increasing complexity of training and inferencing processes with greater speed and efficiency. To further enhance performance, the architecture now features 200GbE for the compute front end and 400GbE on the back end, delivering greater throughput across the entire solution.
Broadcom Network Adapters for Scalability
AI deployments require robust networking solutions capable of handling high volumes of data transfer and demanding throughput requirements. To support this, the platform now integrates new Broadcom dual-port network adapters: the 57508 in the control plane nodes for 100GbE link speeds, and the 57608 in the GPU nodes for high-speed connectivity.
These adapters enhance scalability by enabling seamless connectivity, faster data movement and reduced latency. For organizations implementing future-proof AI, this improvement ensures infrastructure can grow in sync with their demands. From testing small models to deploying enterprise-wide AI capabilities, the platform adapts to a wide range of requirements.
Tailored for Your AI Journey
Continuously improving on our overall Dell AI Factory efforts, the focus of this AI Platform is flexibility. With an open architecture and support for multicloud environments, organizations have the freedom to choose how to implement and scale AI, as well as control its costs. Whether you are working with advanced research models, operationalizing AI in production, or exploring new use cases, Dell and AMD empower you to stay agile and innovative.
The future of AI is one of endless potential, and the tools you adopt today will shape your success tomorrow. The Dell AI Platform with AMD offers the flexibility and capabilities needed to tackle complex AI challenges and adapt to changing needs. Explore the architecture and see how this platform can support your AI goals and drive results. Your next step toward AI innovation starts here.


