Revving Up AI Workloads with HPC under the Hood

Artificial intelligence will soon be virtually everywhere in the enterprise. To fully capitalize on the opportunity, organizations need to think strategically and deploy infrastructure that is ready for AI’s demands.

In enterprise environments, the need for speed has never been greater. Business competitiveness increasingly hinges on products and services driven by artificial intelligence (AI) systems that generate instantaneous insights and take autonomous actions in real time.

To deliver this blazing speed, AI needs a lot of computational horsepower under the hood, along with an architecture designed for lightning-fast processing, bottleneck-free I/O and the capacity to handle huge datasets. To get there, forward-looking enterprises are rolling out high performance computing (HPC) infrastructure built and optimized for the challenges of AI workloads, including machine learning, deep learning and high-performance data analytics.

This is very much the wave of the future, with AI needs driving infrastructure decisions — a point underscored in a recent Gartner report.

“The use of AI across enterprises is ramping up quickly,” Gartner says. “In fact, through 2023, AI will be one of the top workloads that drive infrastructure decisions. Accelerating AI adoption requires specific infrastructure resources that can grow and evolve alongside technology.”1

This shift to AI-inspired infrastructure is prompting IT leaders to look at new server, networking and storage technologies that are optimized for the demands of AI.

“The days of the omnipresent homogeneous, general-purpose server are over,” IDC declares in the introduction to a recent study. “The speed with which training and inferencing for ML and DL can be executed is of critical importance for organizations that are developing and deploying AI applications.”2

IDC notes that the growing adoption of AI has led to an eruption of different infrastructure technologies aimed at increasing performance and reducing latency in AI data flows. “Increasingly, parallelization is the preferred approach, with AI infrastructure starting to resemble HPC infrastructure,” the firm says.

Engines for AI workloads

In many cases, the search for an AI computing foundation leads to HPC infrastructure designed for the convergence of HPC, AI and data analytics. These leading-edge products and solutions are built for a world in which AI is everywhere in the enterprise. For example:

  • Dell EMC Ready Solutions for AI bring together the key system components needed to accelerate AI initiatives. These pre-designed and pre-validated solutions are ideal for machine and deep learning applications that deliver faster, deeper insights.
  • Intel Xeon Scalable processors are the first generation of the Intel platform built specifically to run high-performance AI workloads — alongside the cloud and HPC workloads they already run.
  • Dell EMC Networking H-Series switches, based on the Intel Omni-Path Architecture (Intel OPA), deliver the scalability and integration HPC deployments need, helping IT teams increase computing density, improve reliability and reduce power consumption.

A case in point

At Dell EMC, we work with many organizations that are deploying HPC infrastructure to support AI and data analytics applications. At the University of Cambridge, the latest Cumulus supercomputer is designed to serve as a single HPC cluster that supports data analytics, machine learning and large-scale data processing. The Cumulus cluster is based on Dell EMC PowerEdge™ servers and Intel Xeon Scalable processors, all connected via Intel OPA. To avoid I/O bottlenecks, the system incorporates a unique Data Accelerator (DAC) that is designed into the network topology.

In this architecture, DAC nodes work with the Distributed Name Space (DNE) feature in the Lustre file system and Intel OPA-based switches to greatly accelerate system I/O. How fast is it? With DAC under the hood, Cumulus provides more than 500 GB/s of I/O read performance, making it the UK’s fastest HPC I/O platform when it was launched, according to the university’s Research Computing Service, which operates the Cumulus cluster.3

Results like this show what’s possible when infrastructure is designed for the demands of AI workloads.

Key takeaways

AI will soon be virtually everywhere in the enterprise. To fully capitalize on the opportunity, and to stay competitive, organizations need to think strategically and deploy infrastructure that is ready for the demands of machine learning, deep learning, high-performance data analytics and similar workloads. Today, this isn’t hard to do, thanks to a growing range of HPC products and solutions designed to improve AI system performance.

1 Gartner, “Gartner Predicts the Future of AI Technologies,” February 13, 2019.

2 IDC, “AI Infrastructure: Horsepower Changes Everything” (abstract), March 2019.

3 Dell EMC case study, “UK Science Cloud,” November 2018.

Copyright © 2019 IDG Communications, Inc.