Edge Requires Interoperability

BrandPost By Pete Bartolik
May 19, 2021

The edge computing stack must support multiple elements that all work together.


Don’t count on a single vendor to build an entire edge stack. Edge deployments will vary greatly in the devices and connectivity they use, as well as in the environments where they are deployed.

Types of devices and use cases vary considerably. In the pre-COVID-19 days, McKinsey analysts mapped out “107 concrete use cases” for edge computing, spanning a wide range of industries, from media and entertainment to chemicals and agriculture. Since then, no doubt, more organizations have developed specific edge strategies that add to that number.

Use cases can also have vastly different requirements. Harvesting data from sensors on a factory floor calls for something quite different than monitoring patients in hospital beds.

But there are also central commonalities: “The fundamentals of edge use cases continue to remain similar where the key ask is low-latency and reduction in network traffic transit,” Yugal Joshi, vice president at management consultancy and research firm Everest Group, tells The Enterprisers Project in a recent article on edge computing examples.

Multi-vendor environments

To manage disparate edge strategies, organizations will likely employ a multi-vendor hardware and software environment. Edge computing enables them to use and distribute a common pool of resources across a large number of locations. Most are likely to rely on hybrid cloud architectures.

The cloud is ideal for offloading compute-intensive tasks, such as running big data analytics and training machine learning models that can then be downloaded to edge devices.
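
To make that pattern concrete, the sketch below shows the cloud side: train a simple model on pooled historical data and serialize it as an artifact that edge devices can download. It is a minimal illustration assuming scikit-learn and joblib are available; the data, model choice, and file name are hypothetical rather than part of any particular vendor's stack.

```python
# Cloud side: train an anomaly-detection model on pooled historical data,
# then export it so edge devices can download and run it locally.
# Minimal sketch; assumes scikit-learn and joblib are installed.
import numpy as np
import joblib
from sklearn.ensemble import IsolationForest

# Hypothetical historical sensor readings aggregated in the cloud
# (rows = samples, columns = features such as temperature and vibration).
training_data = np.random.default_rng(42).normal(size=(10_000, 2))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(training_data)

# Serialize the trained model; edge sites fetch this artifact, for example
# over HTTPS or baked into a container image.
joblib.dump(model, "anomaly_model.joblib")
```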

But not all edge applications will need to upload vast streams of data from thousands of devices: that would be costly, and for some applications the latency of a roundtrip data transmission is unacceptable. Keeping data local also reduces the security risks of transferring and storing it in the cloud. Many organizations will want a hybrid strategy that lets them leverage the best of edge and cloud processing.
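
A common way to keep data local is to aggregate raw readings at the edge and forward only compact summaries upstream. The sketch below uses only the Python standard library; the summary fields and the ingest endpoint are assumptions for illustration.

```python
# Edge side: reduce a window of raw sensor readings to a summary and send
# only that summary to the cloud, instead of streaming every sample.
# Minimal sketch; the upload endpoint is hypothetical.
import json
import statistics
import urllib.request

def summarize(readings: list[float]) -> dict:
    """Collapse a window of raw readings into a compact summary."""
    return {
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "stdev": statistics.pstdev(readings),
        "max": max(readings),
    }

def upload(summary: dict, url: str = "https://cloud.example.com/ingest") -> None:
    """POST the summary upstream; the raw readings never leave the site."""
    request = urllib.request.Request(
        url,
        data=json.dumps(summary).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        response.read()

if __name__ == "__main__":
    window = [20.1, 20.3, 19.8, 35.2, 20.0]  # hypothetical temperature samples
    upload(summarize(window))
```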

“Edge must be by its very nature highly adaptable,” writes the OpenStack Foundation’s OSF Edge Computing Group. “Adaptability is crucial to evolve existing software components to fit into new environments or give them elevated functionality. Edge computing is a technology evolution that is not restricted to any particular industry.”

Open and hybrid deployments

The computing stack, therefore, must support multiple elements that all work together across different infrastructures and platforms, including virtual machines, containers, bare metal servers, and more. Open hybrid cloud provides the interoperability and management tools to operate these edge deployments cost-effectively and at scale.

“With containerization and Kubernetes, a rapidly increasing number of cloud native software applications are based on platform-independent, service-based architecture and Continuous Integration/Continuous Delivery (CI/CD) practices for software enhancements,” writes Vikram Siwach, a member of the governing board of Linux Foundation’s LF Edge organization. “The same benefits of cloud native development in the data center apply at the edge, enabling applications to be composed on the fly from best-in-class components — scaling up and out in a distributed fashion and evolving over time as developers continue to innovate.”
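
In that model, an edge workload is often a small, stateless service that loads a centrally built artifact (for instance, the model exported in the earlier sketch) and serves it on the local network. The sketch below uses Python's standard-library HTTP server; in practice such a service would be packaged as a container image and rolled out through the same CI/CD pipeline as any other cloud-native component. The file name and port are assumptions.

```python
# Edge side: a small service that loads a centrally trained model and serves
# low-latency predictions to devices on the local network.
# Minimal sketch; the model artifact and port are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

import joblib

model = joblib.load("anomaly_model.joblib")  # artifact built in the cloud

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        features = json.loads(self.rfile.read(length))  # e.g. [[20.1, 0.3]]
        scores = model.predict(features).tolist()       # -1 = anomaly, 1 = normal
        body = json.dumps({"predictions": scores}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Bind to the local network segment; requests never have to cross the WAN.
    HTTPServer(("0.0.0.0", 8080), PredictHandler).serve_forever()
```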

While edge is still evolving and in some respects defies easy definition, that simply reflects the potential and diversity of the broad concept. Organizations that have already invested in some of the open interoperable technologies driving much of the digital transformation effort are well positioned to adapt edge to their own unique use cases.

Red Hat sees edge differently. See how: https://www.redhat.com/en/topics/edge-computing/approach?sc_cid=7013a000002w1CwAAI