Edge computing is not new, but its business applications have evolved rapidly over the past decade. I first implemented an edge solution for ACI Specialty Benefits back in 2010 because we were collecting petabytes of wearable and biometric data. Delivering, processing, and storing that data centrally had become too expensive and too slow to translate into usable real-time analytics. To reduce network latency, bandwidth requirements, and cost, it made sense to process this data closer to, or at, the edge of our wide area network (WAN).
While ACI’s initial deployment was designed well before edge really took off, today there are numerous use cases and a wide range of providers, from telecommunications companies to data centers, investing in and utilizing edge. Although edge technology creates a unique opportunity for business leaders, it also has the potential to cause a lot of confusion in the boardroom.
Ask five different people to define edge computing and you’ll get five totally different definitions. As with most things in tech, everyone jumps on the buzzword bandwagon before fully understanding or agreeing on what exactly these terms mean. So before diving into how edge computing will shape the future of business, I would like to propose a definition everyone can agree on:
Edge computing is the delivery of computing infrastructure that sits as close as possible to the sources of data (the logical extremes of a network) in order to improve the performance, operating cost, and reliability of applications and services. Edge computing reduces network hops, latency, and bandwidth constraints by distributing new resources and software stacks along the path between centralized data centers and the increasingly large number of devices in the field, typically (but not exclusively) in close proximity to the last-mile network, on both the infrastructure and device sides. By shortening the distance between devices and the cloud resources that serve them, edge computing ultimately turns massive amounts of machine-based data into actionable intelligence.
The word “edge” refers specifically to geographic distribution. While edge computing is a form of cloud computing, it works differently: data processing is pushed out to devices at the literal “edge” of the network rather than relying on a centralized data center to do all the work. This complementary computing model relieves bandwidth pressure, since data no longer has to be constantly shuttled back and forth to the data center. Reducing these delays, even by milliseconds, is a big win for latency: network speeds can be optimized, and handling high-volume traffic becomes seamless.
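To make the bandwidth argument concrete, here is a minimal sketch of the trade-off described above: shipping every raw sensor sample to the data center versus aggregating locally at the edge and sending only a summary. The device name, sample rate, and field names are illustrative assumptions, not details from any specific deployment.

```python
# Sketch: why processing at the edge cuts bandwidth.
# Assumes a hypothetical wearable ("hr-001") emitting one heart-rate sample per second.
import json
import statistics

# One hour of simulated 1 Hz sensor samples.
raw_samples = [{"device": "hr-001", "bpm": 60 + (i % 40)} for i in range(3600)]

# Centralized approach: serialize and ship every sample upstream.
raw_bytes = sum(len(json.dumps(s)) for s in raw_samples)

# Edge approach: aggregate locally, send one small summary per hour.
summary = {
    "device": "hr-001",
    "count": len(raw_samples),
    "mean_bpm": statistics.mean(s["bpm"] for s in raw_samples),
    "max_bpm": max(s["bpm"] for s in raw_samples),
}
summary_bytes = len(json.dumps(summary))

print(f"raw payload:  {raw_bytes} bytes")
print(f"edge summary: {summary_bytes} bytes")
```

The summary is several orders of magnitude smaller than the raw stream, which is exactly the bandwidth and latency win the paragraph above describes.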
With a clear picture of how edge computing works, it seems as though the top business use cases for edge involve improvements surrounding latency, privacy, bandwidth requirements, user experience, and integrated device technologies. I partnered with Mark Thiele, Director of Engineering Edge Computing at Ericsson, to create an open-forum discussion on LinkedIn to gauge how exactly today’s top technology leaders see edge computing shaping the future of business.
As part of this discussion, Alison Conigliaro-Hubbard of Riverbed Technology suggested that each organization may have different types of edge requirements, and recommends taking a 5-point approach to edge decision-making and forecasting:
What are the organization’s specific and measurable business objectives (and concerns) over the next 3 years when it comes to remote operations?
What types of apps are currently required, or will be necessary to achieve objectives?
What type of data risk/vulnerability can your business withstand, considering new data is generated at the Edge?
What operational costs are acceptable for supporting Edge IT, including estimating project time, hiring needs or expertise required?
What are the trade-offs to the business when it comes to implementing X versus Y solution, and how do they align with or affect business objectives? (For example, mini-data centers at the edge for applications that require HA also create risk and cost; and what is the recovery cost in time and data loss of an unplanned remote site outage; or how fast can a new site be rolled out for competition?)
In addition to Alison’s top questions, I would ask the following: How should tech leaders frame the conversation for edge and fog computing as an investment strategy to drive business revenue? It’s critical to establish buy-in at all levels by tying whatever investments are required in new technology to a strong business roadmap rooted in revenue growth.
Anatoly Chikanov, Enterprise Information Security Architect, believes that placing certain intelligence at the edge makes systems better suited to survive disconnection, since the cloud may not always be available. I could not agree more with this rationale. Any enterprise dealing with a large number of distributed endpoints knows that the ability to filter data closer to the sensors is hugely beneficial. It helps ensure that only relevant data is sent to the cloud, with the security benefit of keeping the rest onsite.
This also means that the data that does flow through the ecosystem is more valuable from the very beginning. By consolidating the data that passes through the layers of hardware, software, and applications, and finally to the cloud, enterprises gain more control over use-case-based network slicing.
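The edge-side filtering described above can be sketched in a few lines. The thresholds, field names, and device identifier here are hypothetical, chosen only to illustrate the pattern of dropping routine readings locally and forwarding only the anomalies that matter:

```python
# Sketch: edge-side filtering so only relevant data reaches the cloud.
# Thresholds and field names are illustrative, not from any specific product.

def filter_at_edge(readings, low=50, high=120):
    """Forward only readings outside the normal range; keep the rest onsite."""
    return [r for r in readings if not (low <= r["bpm"] <= high)]

readings = [
    {"device": "hr-001", "bpm": 72},   # normal  -> stays onsite
    {"device": "hr-001", "bpm": 140},  # anomaly -> forwarded to the cloud
    {"device": "hr-001", "bpm": 45},   # anomaly -> forwarded to the cloud
]

to_cloud = filter_at_edge(readings)
print(f"forwarded {len(to_cloud)} of {len(readings)} readings")
```

Because routine readings never leave the site, the data that does reach the cloud is higher-signal from the start, which is the point made above.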
Finally, I am a firm believer that whichever companies “do” edge computing best will control even more of the user experience. Google, Amazon, Microsoft, and Apple have already begun to fully manage their devices, allowing many consumers to focus on the experience beyond the fundamentals of security, updates, functionality, and, to a large extent, capabilities.
All of this is my attempt to say: edge computing enables businesses to do what they are already doing “okay,” but better, cheaper, and faster.