The convergence of artificial intelligence, high-performance computing and data analytics is being driven by a proliferation of advanced computing workflows that combine different techniques to solve complex problems.
Here’s one example: AI and data analytics can augment traditional HPC workloads to speed scientific discovery and innovation. At the same time, data scientists and researchers are developing new processes for solving problems at massive scale, processes that require HPC systems and often rely on AI techniques such as machine learning and deep learning.
While this convergence is accelerating discovery and innovation, it’s also putting pressure on IT shops to support increasingly complex environments. IT teams must manually configure and reconfigure servers, storage and networking as they move nodes between clusters to meet shifting workload demands.
And this brings us to the new Omnia software stack. This open source community project helps IT shops speed and simplify the process of deploying and managing environments for mixed workloads. It abstracts away the manual steps that can slow provisioning and lead to configuration errors, automating the deployment of Slurm® and/or Kubernetes® workload management software, along with libraries, frameworks, operators, services, platforms and applications.
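Under the hood, Omnia drives this automation with Ansible playbooks run against an inventory describing the cluster’s node roles. As a rough illustration only (the group names below are hypothetical, not Omnia’s actual schema; consult the project documentation on GitHub for the real format), such an inventory might look like this:

```ini
; Hypothetical Ansible-style inventory sketch. Group names and hostnames
; are illustrative placeholders, not Omnia's documented schema.
[manager]
node001.cluster.local    ; head node for the Slurm controller or Kubernetes control plane

[compute]
node[002:017].cluster.local    ; worker nodes provisioned into the resource pool
```

The point of the abstraction is that IT staff describe *what* the cluster should look like once, rather than hand-configuring each node as workloads shift.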
For advanced computing applications such as simulation, high‑throughput computing, machine learning, deep learning and data analytics, Omnia gives IT the flexibility to run these workloads in the same environment, with a single interface for cluster provisioning and deployment, and all with easy‑to‑use point‑and‑click templates.
This is truly a game-changer. With Omnia, your HPC shop can compose a unified architecture with multi‑purpose, balanced nodes to support multiple workloads, and quickly re-compose resources to meet demands both now and in the future.
And this isn’t a one-and-done software platform. Developers in the open source community are continually extending Omnia to speed deployment of new infrastructure into resource pools that can be easily allocated and re-allocated to different workloads. Omnia can make it faster and easier for IT to provide the right tools for the right job on the right infrastructure at the right time.
Here’s the bottom line. HPC systems deployment can be difficult, and the addition of AI and data analytics makes a hard problem even harder and more complicated. Omnia incorporates collective expertise from Dell Technologies HPC & AI Solutions engineers, our HPC & AI Centers of Excellence and the broader HPC Community to make it easier to run diverse workloads on the same converged solution.
In short, Omnia really brings it all together, which is fitting: omnia is Latin for “all” or “everything.”
To Learn More
Omnia is available today on GitHub at dellhpc/omnia. The Dell Technologies HPC & AI Innovation Lab invites you to join the HPC Community now and help guide the design and development of the next generation of open-source consolidated cluster deployment tools.
And for a look at Omnia on the job at the University of Pisa, see the Dell Technologies white paper “Implementing Virtualized HPC Clusters on Dell EMC Infrastructure.” This collaborative effort shows how Omnia can be deployed on virtual infrastructure using either administrator‑curated inventory files or the VMware® dynamic inventory plugin for the Red Hat® Ansible® Automation Platform. To learn more about the HPC Ready Solution for AI and Data Analytics that leverages Omnia, see “Rise of a New Architecture for AI and Data Analytics.”
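For the dynamic-inventory path mentioned above, Ansible’s community.vmware collection provides a vmware_vm_inventory plugin that discovers virtual machines from vCenter at run time instead of relying on a hand-maintained inventory file. A minimal configuration sketch follows; the hostname and credential values are placeholders, and the exact options should be checked against the plugin’s documentation:

```yaml
# inventory.vmware.yml -- sketch of a dynamic inventory source for vCenter.
# All values below are placeholders, not real endpoints or credentials.
plugin: community.vmware.vmware_vm_inventory
hostname: vcenter.example.com
username: administrator@vsphere.local
password: "{{ vault_vcenter_password }}"   # ideally pulled from Ansible Vault
validate_certs: false                      # placeholder; enable cert checks in production
```

With a source like this, the cluster inventory tracks the virtual infrastructure automatically as VMs are added or removed.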
Explore more HPC Solutions from Dell Technologies and Intel.