The holy grail of cross-platform, heterogeneous computing has been within sight since Intel announced its pathfinding oneAPI initiative a couple of years ago. With the increasing pace of advancements in high performance computing, compute acceleration, and the rapid adoption of artificial intelligence, this cross-platform, hardware-agnostic vision is key to driving technology toward the betterment of humankind, a shared aspiration of both Intel and Dell Technologies.
The oneAPI toolkit and industry initiative represent a bold vision for a unified, simplified cross-architecture programming model that delivers uncompromised performance without proprietary lock-in while enabling the integration of legacy code. Whether the target is a CPU, GPU, or FPGA, an XPU approach to compute acceleration architectures is certain to advance the technology even further. With oneAPI, developers can choose the best architecture for the specific problem they are trying to solve without needing to rewrite software, as the “X” stands for any compute architecture that best fits the needs of the application.
IT Takes a Village
Intel® oneAPI is a cross-industry, open, C++ standards-based unified programming model that delivers a common developer experience across accelerator architectures—for faster application performance, more productivity, and greater innovation. The oneAPI initiative encourages collaboration on the oneAPI specification and compatible oneAPI implementations across the ecosystem. According to Bhavesh Patel, Dell Technologies Distinguished Engineer, it will take time for broad-scale adoption of oneAPI, and enabling the ecosystem is one of his key goals.
As a lead engineer within the Dell Technologies Infrastructure Solutions Group CTIO, he is tasked with research into compute accelerators and how best to enable the ecosystem. “At Dell Technologies we believe that enabling the entire ecosystem, from researchers to universities to enterprises, with the capabilities of heterogeneous computing, will drive further adoption. And along with Intel, we see the true benefits of digital transformation being dispersed, accelerating adoption of AI and allowing us to drive humanity to a better place.”
Intel is also addressing the development of the XPU ecosystem. The company launched The Great Cross-Architecture Challenge to drive development and adoption. According to Jeff McVeigh, Intel VP, Datacenter XPU Products and Solutions:
“This challenge showcases the ease of use and freedom of choice that oneAPI’s open, cross-architecture programming model delivers. The participants were able to either quickly port, or develop from scratch, applications with real-world impact across a range of disciplines. We are highly impressed with the innovative and creative submissions received from around the world, and the positive feedback and growing adoption of oneAPI.”
It’s clear that the accelerator ecosystem is getting a boost from this initiative. In fact, Intel sees the opportunity to enable upwards of 20 million developers. This robust programming ecosystem is key to success. As Intel Chief XPU Architect Raja Koduri stated during his Hot Chips 2020 keynote, with oneAPI we have “no transistor left behind.”
No Vendor Lock-In
When asked about the importance of Intel’s oneAPI and XPU visions, Bhavesh Patel states that de facto software development standards for accelerator architectures are not enough. “Programming to one specific GPU limits extensibility of any software code that needs acceleration. Intel oneAPI allows a more agnostic approach to software development, offering complete abstraction from the hardware underlying the accelerators.”
And the ecosystem is primed and ready to grow. “Although it will take a few years to complete the heterogeneous compute vision, the fact that the oneAPI toolkit is based on the C++ programming language is sure to advance it, as there is already a significant pool of developers in the ecosystem accustomed to these common programming interfaces.”
The goal of programming in this space is an end-to-end, unified software stack that works across the various accelerators. Bhavesh also sees a significant push to further incorporate Intel® oneAPI, associated distributed-compute XPU technologies, and advanced storage memory architectures into the development of future Dell Technologies products.
Impacts on AI
Over time, oneAPI will broaden the field of accelerator vendors as heterogeneous computing becomes more entrenched for AI workloads. And as the ecosystem matures, oneAPI will be upstreamed into common AI and deep learning frameworks such as PyTorch and TensorFlow, enabling even faster adoption.
AI workloads such as inferencing at the edge will require various discrete devices, such as cameras or sensors with embedded compute and acceleration. And within core datacenters or clouds, high performance computing will benefit too, as typical HPC clusters invariably include any number and type of server accelerators in the form of CPUs, GPUs, FPGAs and so on. Therefore, oneAPI is pivotal to driving AI from the edge to the core to the cloud.
And as HPC researchers adopt oneAPI and port their HPC code, we will see broader adoption and virtuous, cyclical growth of the ecosystem. That is the goal of our oneAPI efforts at Dell Technologies: to drive adoption and to broaden the ecosystem. It also extends to our future datacenter products, for example when Intel’s newest XPU, Ponte Vecchio (Xe-HPC), is integrated into PowerEdge servers.
The Future is Bright
In the Intel Unleashed broadcast, newly seated CEO Pat Gelsinger spoke of the company’s mission and vision for heterogeneous computing. “Technology has never been more important for humanity. The entire world is becoming digital, driven by four superpowers: the cloud, connectivity, artificial intelligence and the intelligent edge.” The oneAPI initiative ties squarely to the last two, allowing developers to write code for the growing proliferation of accelerators. And it will make programming the new generation of exascale XPUs, such as Intel® Ponte Vecchio, more accessible.
With oneAPI allowing developers to abstract away the various compute elements, success will come down to how data is moved in and out of AI workflows. With the proliferation of edge computing, inferencing will happen there, but data still needs to move from the edge to core datacenters or clouds. One side goal of the initiative is to streamline intelligent data movement in and out of compute elements. This is key as data keeps amassing from devices and from its use in AI, which in turn generates more data.
It’s All About Data
Data locality will be critical, as the latency of moving massive data stores impacts performance. Intel® oneAPI, along with advanced storage memory architectures such as Intel Optane™ and Intel DAOS (Distributed Asynchronous Object Storage), will allow progress toward more seamless, intelligent movement of data across devices, no matter the compute or accelerator elements.
When data resides closer to the compute elements, performance improves, allowing greater expansion of AI. Lower latency and higher bandwidth are the focus of Intel’s vision for data movement. And it dovetails nicely with what we advocate at Dell Technologies when we talk about placing compute close to data. With Intel oneAPI we see great alignment in these visions and significant progress toward heterogeneous computing, allowing compute to be deployed closer to data, whether at the edge, in the core, or in the cloud.
To Learn More
Visit Dell Technologies during ISC High Performance 2021 Digital.
Explore more AI solutions from Dell Technologies and Intel.