New challenges at the edge: latency, interoperability, coordination

We’re seeing the growth of edge computing, but multiple challenges await.


At some point, it becomes difficult to comprehend the magnitude of numbers and what they represent. Imagining the trillion bytes in a terabyte is manageable, but progressing into petabytes and exabytes - thousands and then millions of terabytes - is trickier to visualise.

Estimates put the printed material of the Library of Congress at about ten terabytes of data; an exabyte could hold one hundred thousand times that. By 2025, it's predicted that global mobile data traffic will reach 160 exabytes each month - and almost half of that will be carried by the 5G networks being built today.

While LTE networks brought a degree of enterprise application improvement and consumer benefits like better remote video streaming, 5G networks should eventually achieve speeds close to 100Gbps. With this powerful uptick in bandwidth, and with the ability of businesses to create their own local, non-public 5G networks, the next-gen technology will be central to orchestrating infrastructure and supporting critical applications - literal matters of life and death, such as ensuring autonomous cars operate safely.

This burst in mobile connectivity introduces challenges in tracking, managing, storing, and maintaining the soaring volumes of data it is sure to bring.

As road vehicles move towards connected, autonomous driving, the systems that support the one-tonne machines hurtling along our motorways will have to be incredibly efficient, managing high-stakes operations with components that communicate in close to real time - with latency approaching zero. Half a second barely registers to a person, but on the road, split seconds make all the difference in preventing accidents.
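To make those split seconds concrete, here is a back-of-the-envelope sketch (the speeds and delays are illustrative assumptions, not figures from the article) of how far a vehicle travels while a message is still in flight:

```python
def distance_during_latency(speed_kmh: float, latency_ms: float) -> float:
    """Metres a vehicle travels while a message spends `latency_ms` in flight."""
    speed_m_s = speed_kmh * 1000 / 3600   # convert km/h to m/s
    return speed_m_s * (latency_ms / 1000)

# A car at motorway speed (100 km/h is roughly 27.8 m/s):
print(round(distance_during_latency(100, 500), 1))  # half a second: ~13.9 m
print(round(distance_during_latency(100, 10), 2))   # 10 ms: ~0.28 m
```

At half a second of delay the car has moved almost 14 metres - several car lengths - before a warning can even arrive, which is why near-zero latency matters.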

That is where edge computing will help - in short, a method of computation where processing happens closer to the 'edge' of the network, i.e., where the device is. In time-critical situations where information needs to be processed fast, it can be handled near where it's required, rather than bounced to and from a server that could be located hundreds of miles away.
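The distance itself imposes a hard floor on latency. A minimal sketch (the 480 km data-centre distance and 1 km edge distance are assumptions for illustration) of the best-case round-trip propagation delay over fibre, ignoring routing and processing time:

```python
# Light travels through fibre at roughly two-thirds the speed of light
# in a vacuum, i.e. about 200,000 km/s.
FIBRE_SPEED_KM_S = 200_000

def min_round_trip_ms(distance_km: float) -> float:
    """Lower bound on round-trip time (ms) from propagation delay alone."""
    return 2 * distance_km / FIBRE_SPEED_KM_S * 1000

print(round(min_round_trip_ms(480), 1))  # distant data centre: ~4.8 ms
print(round(min_round_trip_ms(1), 2))    # nearby edge node: ~0.01 ms
```

Real-world round trips are far worse than this floor once routing, queuing, and processing are added - and moving computation to the edge shrinks those contributions along with the raw distance.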

By 2025, this business need will be far more apparent: Grand View Research predicts the edge computing market will be worth $16.5bn, up from just over $1.7bn in 2017.

Even with this spike in network traffic, there will remain a need for on-premises solutions as well as public and private clouds. Edge computing, then, will introduce another element into the IT estate, and be just one part of the hybrid cloud make-up of companies that have the business need for it.

Making sense of this growth in information processed by edge devices will require a more strategic approach to data management and analytics: one that can retrieve, store, and manage all this data, securely, so that operators can logically sort data and pull what they need to glean actionable insights from it.

HPE GreenLake is the as-a-service cloud that provides a single, unified operating experience for all of your workloads. By integrating edge deployments with other workloads in a customisable pane-of-glass view, customers can see the big picture operationally, and enjoy the user-friendly simplicity more typically associated with the public cloud.

To discover more about the benefits of HPE GreenLake, and how it can help optimise your hybrid cloud environment, click here to visit the HPE website.


Copyright © 2020 IDG Communications, Inc.
