Increasingly, edge environments are part of hybrid cloud architectures that meld localized assets with the vast scale and analytics capabilities of public and private cloud services. Few edge deployments will operate in a manner fully disconnected from centralized resources.
Some edge applications will always depend on centralized cloud services, but in many cases localized edge clouds will provide a degree of autonomy, relying less often on interaction with a centralized hub. An open hybrid cloud strategy gives organizations maximum flexibility in building out their edge-to-cloud capabilities.
Defining edge computing often depends on the perspective of who is offering the definition. As Network World points out, “edge computing is a broad architectural concept rather than a specific set of solutions.”
Most agree that the edge is a fast-evolving form of distributed computing that involves putting more compute and intelligence closer to where data is being generated or consumed, whether that is at the level of an Internet of Things (IoT) sensor or an on-premises mini data center. “Edge computing can apply to anything that involves placing service provisioning, data, and intelligence closer to users and devices,” Red Hat technology evangelist Gordon Haff tells The Enterprisers Project.
It’s tempting to view edge computing as a counterbalance to the centralization of cloud computing and the dominance of hyperscalers, but that’s not how the edge is taking shape.
For one thing, the largest public cloud providers (Amazon Web Services, Google Cloud Platform, and Microsoft Azure) won’t sit back and simply watch processing and storage migrate from their platforms to edge platforms. The big three, as InfoWorld reports, “are all starting to provide edge computing capabilities.”
Furthermore, according to that report, “Cloud-based edge computing offerings are a clear sign that the boundaries between public cloud, private cloud, and edge computing are blurring. The unifying goal is to provide businesses and architects with a range of choices based on the type of workload and its performance, reliability, regulatory, and safety requirements.”
Most sizeable organizations have relied, and will continue to rely, on a core data center model. However, as edge computing solutions mature, organizations are looking for a common, horizontal, unified platform—from the core to the edge—with a consistent development and operations experience. That doesn’t mean they are looking to one dominant vendor to meet those needs, however—that would be just another form of vendor lock-in, and would likely lead to a combination of premium pricing and the feature bloat of one-size-fits-all solutions.
The edge is really just taking shape and is likely to represent one of the hottest areas for investment and innovation in the foreseeable future. Nobody wants to be locked out of the ability to take advantage of new developments there.
In addition, the small physical footprints, remote locations, and limited connectivity of many—if not most—edge devices pose a challenge for traditional, full-featured operating systems. That’s where open source Linux comes in. As Red Hat’s Seth Kenlon writes, “as much as Linux thrives in data centers, it’s even more welcome out on the edge, where servers and devices run locally relevant software on every variety of architecture.”
Red Hat Enterprise Linux (RHEL), along with the OpenShift hybrid cloud platform, provides organizations with an edge in managing the edge, Cruz says. RHEL is built with edge capabilities to address enterprise edge deployments on small footprints. It also forms the foundation for open hybrid cloud environments and a full stack of compute, storage, networking, and automation technology to get the most out of cloud and edge moving forward.
Red Hat sees edge differently. See how: https://www.redhat.com/en/topics/edge-computing/approach?sc_cid=7013a000002w1CwAAI