Securing the Edge – Not Without Challenges

BrandPost By Pete Bartolik
May 19, 2021
Technology Industry

Edge deployments represent a large number of moving parts to track and secure.


Ultra-fast networks running at the edge will move application functions and data closer to the user, reducing latency and improving performance. In the process, edge computing can bring new security capabilities to the field – but it also introduces new security challenges to overcome.

As organizations roll out more edge devices and edge applications, they are logically expanding the potential attack surface. But, as with every computing paradigm that preceded it, the onus is on systems designers and architects to build in appropriate security protection.

“Poorly configured and poorly secured edge computing devices give attackers more opportunities to disrupt operations or to gain access to the broader enterprise network,” CSO points out.

5G edge acceleration

It’s widely expected that edge deployments will accelerate as organizations begin to take advantage of high-speed, low-latency, ultra-reliable 5G networking. Some will connect through public carriers, while others will build private infrastructure operated in-house or by managed service providers.

“For many of the world’s largest businesses, private 5G will likely become the preferred choice, especially for industrial environments such as manufacturing plants, logistics centers, and ports,” Deloitte analysts predicted.

But there are potential downsides as well – including the added cybersecurity challenges that come with each new edge node.

Regardless of whether they’re using 5G or more traditional networking, edge deployments represent a lot of moving parts to track and keep secure. As deployments scale up, traditional methods of monitoring and management for security and compliance can’t keep up. Applications need to be deployed and managed—quickly and consistently—across a wide range of systems, environments, and vendors or service providers.

Getting an edge with containers

Many organizations are deploying, or planning to deploy, containers at the edge to take advantage of a common, horizontal, unified platform—from the core to the edge—with a consistent development and operations experience.

Using the same tools and processes as the centralized infrastructure – while still being able to operate independently in a disconnected mode – is a big advantage for edge computing sites that have limited or no IT staffing. Additionally, if a security concern arises, containers can be quickly taken down, rebuilt from scratch, and redeployed.

“With many endpoints, security has to ‘shift left’ to make security a part of the infrastructure and product lifecycle as early as possible,” says Ajmal Kohgadai, Red Hat product manager for Kubernetes-native security leader StackRox. Red Hat’s OpenShift Kubernetes platform provides the ability to automate and orchestrate security across the software supply chain, the infrastructure, and the actual workloads or applications running on edge endpoints, he adds.

Organizations deploying container applications and Kubernetes clusters across edge environments need the ability to scan for vulnerabilities across many nodes.

“From our standpoint, you want to be able to provide the right network segmentation and also make sure you don’t have configuration drift,” says Kohgadai. “Within OpenShift, and Kubernetes in general, you can work under a zero-trust framework so that every API call needs authentication and you can build that into your cluster so third parties cannot just communicate with your application.”
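The segmentation Kohgadai describes can be expressed declaratively in Kubernetes (and therefore OpenShift) with a standard NetworkPolicy resource. The sketch below is illustrative only – the namespace, labels, and port are hypothetical – but it shows the pattern: deny ingress to an application’s pods by default, then allow traffic solely from an approved peer, so arbitrary third parties inside the cluster cannot communicate with the application.

```yaml
# Hypothetical example: only pods labeled app=api-gateway in the same
# namespace may reach the payments pods; all other ingress is denied,
# because a pod selected by any NetworkPolicy rejects non-matching traffic.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payments-allow-gateway-only
  namespace: edge-site-01
spec:
  podSelector:
    matchLabels:
      app: payments        # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api-gateway
      ports:
        - protocol: TCP
          port: 8443
```

Applied per edge cluster from version control, a policy like this also guards against the configuration drift Kohgadai warns about, since the desired network state is declared rather than hand-configured at each site.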

Edge computing lets organizations place applications where they make the most sense for the business – even in the most remote locations. But when operating and managing hundreds or thousands of edge sites, uniformity is essential to deploying edge clusters consistently, reliably, and more securely.

Red Hat sees edge differently. See how: