Lots of things are being called “smart” these days — everything from light bulbs to cars. Increasingly, the smarts come from some form of artificial intelligence or machine learning.
AI is no longer limited to big central data centers. By moving it to the edge, enterprises can cut latency and bandwidth requirements, improve performance, and keep devices operating even when there's no network connectivity.
One of the main drivers for the use of AI at the edge is that the sheer amount of data produced in the field would cripple the internet if it all had to be processed by centralized cloud computing solutions and traditional data centers.
“The need to send all of that data to a centralized cloud for processing has pushed the limits of network bandwidth and latency,” says Ki Lee, vice president at Booz Allen Hamilton.
Enter the era of AI-enabled edge computing.
Few companies are experiencing this problem to the degree that Akamai does. Akamai runs the world's largest content distribution network, with, at last count, about 325,000 servers in over 135 countries, delivering more than 100 terabits of web traffic every second.
Edge computing is key to improving performance and security, says Ari Weil, Akamai’s global vice president for product and industry marketing.
Take bots, for example.
“Bots are a huge problem on the internet,” says Weil. They attack Akamai’s customers with automated credential stuffing and denial of service attacks. Plus, they clog up the pipes with useless traffic, costing Akamai money.
Cybercriminals are also using bots to try to penetrate the defenses of companies, research firms, and healthcare organizations. Sometimes, their evil knows no bounds. For example, hackers have recently begun using bots as the COVID-19 equivalent of ticket scalping — snatching up vaccine appointment slots.
Akamai sees 485 million bot requests per hour, and 280 million bot login attempts per day. In the battle against them, Akamai began deploying AI at the edge in 2018 to determine whether a particular user is a real human being or a bot.
In 2019, Akamai also began using centralized deep learning to identify bot behaviors and develop better machine learning models. Those models are then distributed to the edge to actually do the work.
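The source doesn't describe Akamai's actual models, but the division of labor — train centrally, score at the edge — can be sketched with a toy example. The feature names, weights, and threshold below are all hypothetical; in practice the centrally trained model's parameters would be shipped to edge nodes, which then score each session locally:

```python
import math

# Hypothetical weights produced by central training and distributed to edge nodes.
MODEL_WEIGHTS = {"req_per_min": 0.08, "distinct_paths": -0.25, "ua_entropy": -0.9}
MODEL_BIAS = -1.5

def bot_score(features: dict) -> float:
    """Logistic score for a client session; values near 1.0 suggest automation."""
    z = MODEL_BIAS + sum(MODEL_WEIGHTS[k] * features.get(k, 0.0) for k in MODEL_WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def classify(features: dict, threshold: float = 0.8) -> str:
    """Decision made entirely at the edge node, with no round trip to a data center."""
    return "bot" if bot_score(features) >= threshold else "human"
```

A session hammering the same path at 600 requests per minute scores as a bot; a slow, varied browsing session scores as human. The point of the pattern is that only the small weight table crosses the network, not the traffic itself.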
AI is also used to analyze threat intelligence at Akamai. “It’s a big data problem,” says Weil. “We take a huge amount of data, in a massive data lake, and try different models against the data to find malicious signatures. Once we identify the patterns, we can use this across the platform.”
Sometimes the messages are innocuous but come from a malicious source — command and control traffic, for example.
“We train the edge model to recognize traffic that’s coming out of this particular region, or this particular IP address, and apply the mitigation techniques right at the edge,” says Weil.
The end result is that Akamai saves money because it doesn't have to carry the traffic from either the bots or the malware. Customers save money because they don't have to pay for wasted bandwidth. And customers are more secure because they have less bot and malware traffic to deal with.
In the fourth quarter of 2020, Akamai was able to stop 1.86 billion application-level attacks, says Weil, and thwart more than 70 billion credential abuse attacks.
Managing edge IoT
AI at the edge can also decrease the data and network load of internet of things strategies. IoT devices can generate a massive amount of information, but often that information is routine and repetitive.
“There’s a lot of ‘I’m OK, I’m OK’ messages being generated [by IoT devices],” says Weil. “So you sift through all that and look for the signal that says that the system might be failing. That needs to get back to the manufacturer.”
To do this, machine learning technology is deployed at the edge to learn what the critical signals are and to preprocess the data before it is sent on to the customer.
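Weil's "I'm OK" example amounts to an edge-side filter: suppress routine heartbeats and forward only the readings that might indicate trouble. A minimal sketch (the message format and status values are assumptions, not any particular IoT platform's API):

```python
def preprocess(readings: list[dict], ok_status: str = "OK") -> list[dict]:
    """Run on the edge device: drop routine 'I'm OK' heartbeats so only
    potentially significant readings cross the network to the manufacturer."""
    return [r for r in readings if r["status"] != ok_status]

# Example: a day's worth of mostly-routine telemetry.
telemetry = [
    {"status": "OK", "temp_c": 40},
    {"status": "OK", "temp_c": 41},
    {"status": "WARN", "temp_c": 88},  # the signal worth forwarding
    {"status": "OK", "temp_c": 42},
]
to_send = preprocess(telemetry)
```

Here three of four messages never leave the device, which is where the bandwidth and storage savings described below come from.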
Take, for example, a connected car. It moves from one cell zone and tower to another, to different states, even to different elevations and climates. A reading that is appropriate for one location might not be appropriate for another, or the problem could be signaled by a rapid change in the data. Here, machine learning is becoming essential.
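The connected-car point — that a problem may show up as a rapid change rather than a fixed threshold crossing — can be illustrated with a rolling comparison against recent readings. The window size and jump limit are invented for illustration:

```python
from collections import deque

class ChangeDetector:
    """Flag a reading that jumps sharply relative to the recent window,
    instead of comparing against one global threshold that may not suit
    every location, elevation, or climate."""

    def __init__(self, window: int = 5, max_jump: float = 10.0):
        self.recent = deque(maxlen=window)
        self.max_jump = max_jump

    def update(self, value: float) -> bool:
        """Return True if this reading changed too fast; then record it."""
        alert = bool(self.recent) and abs(value - self.recent[-1]) > self.max_jump
        self.recent.append(value)
        return alert

detector = ChangeDetector(max_jump=10.0)
```

A sensor drifting from 50 to 52 raises nothing, while a sudden jump to 75 does, regardless of whether 75 would be normal somewhere else.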
“Bringing the intelligence to the devices is one of the biggest growth areas of IoT right now,” says Carmen Fontana, IEEE member and cloud and emerging tech practice lead at Centric Consulting.
The issue comes up in many industries, not just automotive, though moving vehicles do have some of the strictest latency requirements. "You don't want to go back to the main data center to get a decision and bring the decision back," she says. "There's no time for that."
But even slow-moving or stationary devices benefit from more processing at the edge.
“A common example is solar panels in the middle of nowhere,” she says. “They don’t have great cell service or WiFi. Being able to process data and make decisions locally is really important.”
Distributed intelligence also enables companies to reduce the volume of message traffic back from the devices, which reduces networking costs — and energy use.
“Data storage is expensive and not energy efficient,” she says. “If you can eliminate a lot of the data you would have otherwise transferred and stored, then it’s a great energy conservation piece.”
AI is also being increasingly used at the edge to provide devices with differentiating functionality.
“On my wrist, I have a smartwatch and a recovery device,” Fontana says. “The recovery device senses my metrics — my heart rate, breathing pattern. It makes calculations on how rested my body is and how hard I should push myself on my next workout.”
The advantages of decentralized AI
AI functionality at the edge can help create an intelligent distributed computing environment across network devices — a unique benefit for organizations that know how to leverage this.
The utility industry is especially keen on distributed intelligence, says Tim Driscoll, director of information management outcomes at energy and water resource management technology company Itron.
“Meters at the very edge of the utility distribution grid have an app platform similar to the common smartphone model,” he says. These meters use machine learning to respond to varying voltage and load conditions. “This allows the meters to provide proactive, real-time recommendations for grid control.”
But more intriguing is that the meters can work together, learning from their own communication network behavior, performance, and reliability — then use that to elect leaders among themselves that speak to the network on their behalf.
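Itron doesn't publish the election mechanism, but the idea Driscoll describes — meters picking a spokesperson from their own observed link reliability — can be sketched as a deterministic election. Each meter shares its observed uplink success rate; because every meter applies the same rule (most reliable wins, ties broken by ID), they all converge on the same leader without any central coordinator. The field names are hypothetical:

```python
def elect_leader(meters: list[dict]) -> str:
    """Pick the meter with the best observed uplink reliability.
    Ties break on the lexicographically smallest ID, so every meter
    running this independently reaches the same answer."""
    best = sorted(meters, key=lambda m: (-m["success_rate"], m["id"]))[0]
    return best["id"]

neighborhood = [
    {"id": "meter-a", "success_rate": 0.92},
    {"id": "meter-b", "success_rate": 0.99},
    {"id": "meter-c", "success_rate": 0.99},
]
leader = elect_leader(neighborhood)
```

With two meters tied at 0.99, the ID tiebreak makes the result stable across the group, which is what lets the leader speak to the network on the others' behalf.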
“This simplifies network management by removing the need for centralized analysis,” he says.
And as power systems evolve to include more distributed power generation in the distribution grid, edge computing becomes even more important. Traditionally, only local load was a variable for power networks — generation and power flow were all controlled centrally. Today, all three are variables.
“This is the main driver for autonomous, local, real-time response powered by edge processing and machine learning,” Driscoll says.
Beyond better latency and lower costs, bringing AI and machine learning to the edge can also help make AI faster, according to Booz Allen Hamilton’s Lee. That’s because decentralized, edge AI maximizes the frequency at which models are calibrated, “which not only reduces model development costs and schedules, but also increases model performance,” he says.
Risks and challenges
But AI at the edge also poses risks and challenges, Lee says. Among them is the current lack of standards.
“We see a wide variety of edge hardware devices, processor chipsets, sensors, data formats and protocols that are usually incompatible,” he says, adding that there needs to be more focus on developing common open architectures.
In addition, many players in this space are focusing on one-off solutions that aren’t scalable or interoperable — or are based on traditional software delivery models.
“We’re still seeing monolithic applications that are purpose-built for specific devices,” he says. “From a design perspective, we’ve also seen typical hub-and-spoke architectures,” which can fail when connectivity is limited.
Another challenge of distributed AI is cybersecurity. “With the number of deployed edge devices the attack surface significantly increases,” he says.
We’ve already seen attackers take advantage of insecure IoT devices, such as the Mirai botnet that infected hundreds of thousands of devices in 2016. As IoT devices proliferate — and get smarter — the risks they pose will also increase.
One approach is to apply machine learning to the problem, using it to detect threats. But edge hardware is typically smaller and more resource constrained, limiting how much data can be processed, Lee says.
Where AI-powered edge computing can make a big difference in cybersecurity is in micro-data centers, says Shamik Mishra, CTO for connectivity in the engineering and R&D business at Capgemini.
“Threat detection, vulnerability management, perimeter security, and application security can be addressed at the edge,” he says, and AI algorithms can be decentralized to detect threats through anomaly detection.
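Anomaly detection of the kind Mishra describes doesn't have to be heavyweight; even a resource-constrained edge node can compare a metric against its own recent history. A minimal statistical sketch (the metric, window, and z-score threshold are assumptions, not Capgemini's method):

```python
import statistics

def is_anomalous(history: list[float], value: float, z_threshold: float = 3.0) -> bool:
    """Flag a metric (e.g. requests per second from one source) that sits
    far outside its own recent history, using only stdlib arithmetic."""
    if len(history) < 2:
        return False  # not enough context to judge
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > z_threshold

baseline = [100.0, 102.0, 98.0, 101.0, 99.0]
```

A sudden spike to 500 requests per second against a baseline near 100 gets flagged locally, before any data leaves the micro-data center.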
New technologies, such as secure access service edge (SASE), are also emerging, Mishra says. These combine wide-area networking with security functionality.
“The more we distribute a functionality, the more the system becomes vulnerable as the surface area for attacks increases,” he says. “So, edge compute applications must keep security as a design priority.”