“Any sufficiently advanced technology is indistinguishable from magic” — Arthur C. Clarke
At this moment in history, we can watch movies on our phones and use our televisions to call our loved ones. Advanced computing is leading to more accurate medical diagnoses, breakthrough medical treatments and a better shopping experience. Through artificial intelligence, businesses can recognize who their customers are and accurately predict what they are most likely to buy and when they will buy it.
Computing acceleration has primarily come from advances in silicon processing per Moore’s law: the number of transistors incorporated in a chip approximately doubles every 24 months, which roughly translates to a doubling of the chip’s performance. This has unleashed a level of technology proliferation unprecedented in human history. Digitization has also resulted in the proliferation of data of all kinds — personal, professional and machine data. However, the performance gains from Moore’s law are beginning to plateau. Engineers and scientists are finding that the performance offered by contemporary CPUs is quickly becoming either uneconomical or simply insufficient.
Enter the accelerators.
The most common accelerator is the high-speed NVIDIA graphics processing unit (GPU). This specialized hardware is designed to perform one particular task more efficiently than a general-purpose CPU. GPUs have been supporting video games and image rendering for years. The primary computational requirement for video applications is matrix multiplication or, more broadly, vector processing. The need for fast rendering of high-resolution images quickly overwhelms a general-purpose CPU. Video game hardware engineers solved this problem with GPUs. They’re so good at it that the leading-edge GPU from NVIDIA can easily crank out 125 TFLOPS.
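To make that point concrete, here is a minimal sketch of the CPU-versus-GPU gap on a single large matrix multiplication. It assumes the PyTorch library and a CUDA-capable GPU, neither of which is tied to any specific product mentioned in this post, and the matrix size and timings are purely illustrative:

```python
# Minimal sketch: time one large matrix multiplication on the CPU and on a GPU.
# Assumes PyTorch and a CUDA-capable GPU; absolute numbers will vary by hardware.
import time
import torch

N = 8192  # matrix dimension, large enough for the GPU's parallelism to pay off

a = torch.rand(N, N)
b = torch.rand(N, N)

# CPU timing
t0 = time.time()
c_cpu = a @ b
cpu_s = time.time() - t0

# GPU timing: copy the operands over, then multiply on the device
a_gpu, b_gpu = a.cuda(), b.cuda()
torch.cuda.synchronize()          # make sure the copies have finished
t0 = time.time()
c_gpu = a_gpu @ b_gpu
torch.cuda.synchronize()          # wait for the kernel to complete before stopping the clock
gpu_s = time.time() - t0

print(f"CPU: {cpu_s:.3f} s   GPU: {gpu_s:.3f} s   speedup: {cpu_s / gpu_s:.1f}x")
```

The explicit synchronization calls matter because GPU kernels launch asynchronously; without them the timer would stop before the multiplication actually finished.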
Before long, there was a dramatic acceleration in the performance of critical applications across diverse verticals with similar computational requirements. Industries from financial services to manufacturing logistics, retail, scientific research, and oil and gas exploration now use accelerators to solve computational problems they could not tackle before.
Artificial intelligence, including machine learning and deep learning, has become more mainstream. Accelerated computing is becoming essential to providing the performance these business-critical applications require. In fact, Google recently stated that it could not run its various services without accelerated computing.
There are many reasons for an enterprise to use accelerated computing, but here are the top three:
- It is BETTER: It enables enterprises to cover workloads more comprehensively, applying machine learning and deep learning to analyze vast stores of unstructured data.
- It is FASTER: It gets you to critical business insights sooner. Depending on the application and supporting hardware, accelerated computing can boost performance from 10x to 100x.
- It is COST-EFFECTIVE: A denser yet simpler infrastructure delivers better overall computational performance, helping lower CapEx and OpEx while maintaining reliable services. Off-the-shelf accelerators and libraries can be leveraged to support increasingly complex cognitive workloads.
Adopting accelerated computing is an easy win for enterprises striving for competitive advantage. Dell has expanded its leadership from HPC into AI. We offer a portfolio of accelerated computing platforms to support our customers’ diverse AI computational needs. Our customers are at various stages in the AI adoption journey, and we realize that not all applications need the same level of performance.
Businesses with heterogeneous HPC workloads tend to use our PowerEdge R740XD, a three-accelerator workhorse platform designed for greater fault tolerance in critical HPC server environments. Moving up the scale of computational complexity, we have the PowerEdge C6420, a server platform with innovative cooling options that provides maximum performance density for applications such as high-frequency trading. Our other platforms include the C6320p, the T640 and a few more under investigation.
As machine learning and deep learning applications gain greater adoption, we are very excited to announce the PowerEdge C4140, an ultra-dense, accelerator-optimized server platform designed to handle these intensive AI workloads. With its innovative interleaved GPU design, the C4140 can support four GPUs and provide the kind of unthrottled, no-compromise performance that customers have come to expect from Dell PowerEdge servers. The C4140 can deliver up to 500 TFLOPS for deep learning applications, and on a life sciences application it is 19x faster than an equivalent CPU-only system; put another way, it would take 19 CPU-only servers to accomplish the same task as a single C4140. With NVIDIA’s state-of-the-art GPUs and PowerEdge servers, Dell is helping businesses adopt machine learning and deep learning applications through our various HPC Ready Bundles.
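As a rough illustration of how a deep learning job spreads across a multi-GPU server, the sketch below uses PyTorch’s DataParallel wrapper to split each training batch across every available GPU. The model, data and library choice here are assumptions for illustration only, not the configuration behind the benchmark figures above:

```python
# Illustrative sketch only: run one training step of a toy model across all available GPUs.
# The network and random data are placeholders, not the benchmark workload cited above.
import torch
import torch.nn as nn

device = torch.device("cuda")
n_gpus = torch.cuda.device_count()   # e.g. 4 on a fully populated four-GPU server

model = nn.Sequential(
    nn.Linear(1024, 4096), nn.ReLU(),
    nn.Linear(4096, 10),
).to(device)

if n_gpus > 1:
    # DataParallel replicates the model and splits each batch across the GPUs,
    # then gathers the outputs back on the primary device.
    model = nn.DataParallel(model)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One training step on a random batch, just to show the data flow.
inputs = torch.rand(256, 1024, device=device)
labels = torch.randint(0, 10, (256,), device=device)

optimizer.zero_grad()
loss = criterion(model(inputs), labels)
loss.backward()
optimizer.step()
print(f"GPUs used: {n_gpus}, loss: {loss.item():.4f}")
```

The point of the sketch is simply that the same training loop scales from one GPU to four without structural changes, which is what makes dense multi-GPU platforms attractive for these workloads.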
Come and see us at SC17 (Nov 13–16) to learn more about the Dell PowerEdge portfolio and some of the hush-hush products we are showcasing in our Whisper Suites. If you cannot make it, be sure to follow us on Twitter and check out Direct2Dell for the latest news and updates.
It is also worth noting that a high-speed interconnect is just as important as high-speed computation. The next big thing on the scene is Gen-Z, a new data-access technology developed by a broad-based industry consortium that includes Dell. This technology provides the high-speed interconnect needed to allow system disaggregation and the ability to scale acceleration, compute and memory independently.
The myriad ways that enterprises are harnessing the superior performance afforded by accelerated computing to drive new, wondrous applications are nothing short of magic.