How TinyML is powering big ideas across critical industries


From cars and TVs to lightbulbs and doorbells, many of the objects in everyday life have ‘smart’ functionality because manufacturers have built chips into them.

But what if you could also run machine learning models in something as small as a golf ball dimple? That’s the reality being enabled by TinyML, a broad movement to run machine learning models on embedded devices and other hardware with extremely low power requirements.

Heavy hitters such as Google, Qualcomm, and ARM recognise TinyML’s potential to transform the way we think about machine learning. It subverts the premise that ML is inherently power hungry and resource intensive, requiring swathes of cloud-hosted processing power to run anything remotely useful.

TinyML algorithms can be run on off-the-shelf microcontrollers – tiny, low-spec chips typically embedded in devices – at the edge of the network. They can run for long periods on small amounts of battery power and kilobytes of memory. Considering more than 30 billion microcontrollers were shipped in 2019 alone, accessibility is a distinct advantage of TinyML.
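To see why kilobytes can be enough, consider a hypothetical keyword-spotting network. The sketch below (illustrative only; the layer sizes are assumptions, not a real deployed model) estimates the memory footprint of a tiny fully connected network once its weights are stored as 8-bit integers, a common practice for microcontroller deployment:

```python
# Illustrative sketch: estimate the memory footprint of a small
# dense network at float32 precision versus 8-bit quantisation.

def param_count(layer_sizes):
    """Weights plus biases for a fully connected network."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Hypothetical audio model: 40 input features -> 32 -> 16 -> 4 classes.
layers = [40, 32, 16, 4]
params = param_count(layers)

bytes_float32 = params * 4   # 4 bytes per 32-bit float weight
bytes_int8 = params * 1      # 1 byte per weight after 8-bit quantisation

print(f"{params} parameters")
print(f"float32: {bytes_float32 / 1024:.1f} KB")
print(f"int8:    {bytes_int8 / 1024:.1f} KB")
```

Even at full precision this hypothetical model occupies under 8 KB, and quantisation shrinks it to around 2 KB – comfortably inside the SRAM budget of a typical microcontroller.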

As the code runs locally on battery-powered devices, it doesn’t need sustained internet connectivity to function and isn’t affected by power outages. This makes it incredibly useful, especially for work being done in remote areas or where low latency is a priority.

How is TinyML being used?

For industries such as manufacturing, retail, agriculture, and even healthcare, TinyML can be a gamechanger.

Take the work being done by a doctor in the Middle East, who is using TinyML models running on embedded devices or smartphones to detect benign and premalignant oral tongue lesions by automating the initial screening process. This makes the treatment cheaper, easier, and faster – ultimately making these screenings accessible to more patients.

In sustainability, an organisation called Rainforest Connection is using TinyML-enabled listening devices to protect rainforests against illegal logging, distinguishing the sounds of chainsaws and logging equipment from ambient noise.

Manufacturing companies are also using TinyML to carry out real-time predictive maintenance, using embedded devices to detect anomalies before machines fail, saving millions of dollars in maintenance costs.

Using TinyML sound analysis, for example, embedded devices can identify whether a machine is about to break down. This has been useful at remote mining fields for monitoring industrial pumps, as it detects anomalous noises and flags issues before they become serious. To put the value of predictive maintenance in context, SAP estimates that a two per cent saving in maintenance costs across the world’s top 40 miners would yield an AUD $18 billion (USD $13.4 billion) saving to the mining sector.
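At its simplest, acoustic anomaly detection of this kind amounts to comparing the energy of short windows of audio against a threshold calibrated on healthy machine noise. The sketch below is a minimal illustration of that idea using root-mean-square energy; real systems use richer features (spectrograms, learned embeddings), and all names and values here are assumptions for demonstration:

```python
import math

def rms(window):
    """Root-mean-square energy of a window of audio samples."""
    return math.sqrt(sum(s * s for s in window) / len(window))

def detect_anomalies(samples, window_size, threshold):
    """Flag non-overlapping windows whose RMS energy exceeds a threshold.

    Returns the index of the first sample of each flagged window.
    """
    flags = []
    for start in range(0, len(samples) - window_size + 1, window_size):
        if rms(samples[start:start + window_size]) > threshold:
            flags.append(start)
    return flags

# Synthetic signal: quiet hum, then a loud burst (the "failing pump").
quiet = [0.05, -0.05] * 50   # 100 low-amplitude samples
burst = [0.9, -0.9] * 50     # 100 high-amplitude samples
signal = quiet + burst + quiet

print(detect_anomalies(signal, window_size=100, threshold=0.5))
# Only the burst window, starting at sample 100, is flagged.
```

Because the per-window arithmetic is so light, this style of check can run continuously on a microcontroller without ever streaming raw audio off the device.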

These examples only scratch the surface of what TinyML can do. Other use cases include retail inventory management, irrigation system monitoring, and tracking the trajectory of cricket balls to improve a bowler’s action.

What does the future hold?

With thousands of potential applications and billions of places to run the algorithms, TinyML has incredible potential. However, this doesn’t mean that it will surpass or overtake cloud-based or centralised processing of AI and machine learning.

The inference or ‘production’ aspect of ML can be run on these devices, but training these algorithms still requires more compute and storage than you will find on microcontrollers. Some industry watchers predict we will see ‘training on the edge’ in five or so years, but there will continue to be a place for larger ML algorithms running within the four walls of the data centre.
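One reason inference fits on-device while training does not is post-training quantisation: after a model is trained in the data centre at floating-point precision, its weights can be mapped to 8-bit integers for deployment. The sketch below shows the affine (scale plus zero-point) scheme commonly used for this; it is a simplified illustration, not any vendor’s specific implementation:

```python
def quantise_int8(weights):
    """Affine quantisation of float weights to int8.

    Maps the observed [min, max] range of the weights onto [-128, 127],
    the standard scheme for post-training integer quantisation.
    """
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0   # avoid division by zero for constants
    zero_point = round(-128 - lo / scale)
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantise(q, scale, zero_point):
    """Recover approximate float weights from the int8 representation."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-0.51, 0.0, 0.27, 0.48, -0.12]
q, scale, zp = quantise_int8(weights)
restored = dequantise(q, scale, zp)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(q)
print(f"max reconstruction error: {max_err:.4f}")
```

The reconstruction error stays below half the quantisation step, which is why accuracy typically degrades only slightly while the model shrinks fourfold and runs on integer-only hardware.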

TinyML should be thought of less as a replacement and more as a supplementary technology, supporting a distributed intelligent enterprise. It also supports movements towards federated machine learning, where model updates are computed at the edge and merged into a central model for further processing.
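The federated pattern can be sketched in a few lines: each device refines the shared model on its own data, and the central server combines the results, weighting each device by how much data it holds (the core idea behind federated averaging). The device counts and gradients below are made up for illustration:

```python
def local_update(weights, gradient, lr=0.1):
    """One local training step on a device (plain gradient descent)."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def federated_average(device_weights, device_sizes):
    """Weighted average of per-device models: devices holding more
    local data contribute proportionally more to the merged model."""
    total = sum(device_sizes)
    n_params = len(device_weights[0])
    return [
        sum(w[i] * n for w, n in zip(device_weights, device_sizes)) / total
        for i in range(n_params)
    ]

# Hypothetical round: three edge devices refine a shared 2-parameter model.
global_model = [1.0, -2.0]
local_grads = [[0.2, -0.1], [0.4, 0.1], [-0.2, 0.3]]
sizes = [100, 50, 50]   # training samples held on each device

local_models = [local_update(global_model, g) for g in local_grads]
global_model = federated_average(local_models, sizes)
print(global_model)
```

Only the small weight vectors travel over the network; the raw data never leaves the devices, which is what makes the approach attractive for privacy-sensitive and bandwidth-constrained deployments.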

Organisations should pay close attention to how this technology evolves and start investigating use cases now to get ahead of the curve. ABI Research forecasts the TinyML market to grow from 15.2 million shipments in 2020 to 2.5 billion in 2030, noting its potential to bolster IoT and even save lives.

While the technology is in its early stages, firms should leverage existing research and the vibrant ecosystem of use cases to start shaping development strategies, or consult their technology partners to investigate how TinyML fits with their existing ecosystem. A range of vendors already sell affordable TinyML-enabled devices and software stacks, making the technology easy to fold into research and development activities.

It’s essential that TinyML remains open source, as this collaboration has underpinned much of the adoption we’ve seen so far. At SAP, we’ve consistently made our TinyML work available to the open source community, as we believe this is the best way to increase accessibility and innovation.

Developer collaboration and ongoing innovation suggest TinyML will have a transformative impact, not only on how organisations invest in intelligent technology, but in the daily lives of everyone who engages with it.

Copyright © 2021 IDG Communications, Inc.