Machine intelligence (MI) continues to be a fixture in the headlines – which is a good thing, since it’s a field that will likely impact every industry on the planet in coming years. But the more one watches the headlines, the more one senses that we’re too focused on visible manifestations, without full regard for the numerous ways intelligent machines may be able to improve society without most people even noticing.
Put another way, there’s a lot of discussion around the ways people might interact with intelligent machines. Many of these discussions focus on MI’s capacity to improve human efficiency and decision-making, whether it’s a physician using machine learning to diagnose and treat disease, image recognition algorithms helping users choose new fashions, researchers homing in on difficult-to-track social topics, or a food delivery service using machine intelligence to provide users more accurate ETAs. From Microsoft’s mixed reality work with HoloLens to Apple’s Face ID authentication, new advances arrive all the time. Most industries have at least one marquee example of the ways intelligent machines could change people’s daily lives.
With all this focus on mass scale human-computer interaction, what sometimes gets lost – and what enterprises should also be paying attention to – are the subtler, almost invisible, ways MI is likely to make an impact.
In our recent columns, we’ve discussed how MI capabilities are becoming democratized thanks to advances such as the growing number of machine learning APIs available. In this column, we will explore MI’s applications behind the scenes, where few humans will ever interact with it.
What does invisible MI look like?
As technologies improve, some MI applications will have both visible and invisible implementations, while others will gravitate more strictly toward one or the other.
For example, researchers from DeepMind recently combined multiple neural nets to create software that associates images and sounds – and whose potential uses could eventually include using sound to search for objects in the dark. (Disclosure: Alphabet, DeepMind’s parent company, is also parent company of our employer, Google.)
Many future applications of this technology may involve human interaction. It’s easy and intriguing to imagine one day pointing your phone into pitch blackness, letting it “listen,” and having the screen light up with images of what lies in the dark. But it’s also easy to imagine this technology eventually applied to autonomous robots, pushing the MI in directions less beholden to human interaction.
This theoretical scenario is just one small push toward invisibility, however. Bigger shifts include machine learning algorithms that improve other machine learning algorithms. This sort of MI will still have human engineers and curators, of course. But as MI stacks become more complicated – and as open source libraries grow and individual components become more interoperable and accessible through consistent APIs – algorithms will take over aspects of the process, inserting a layer of hidden intelligence beneath the ones that interact more directly with people.
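To make the idea concrete, here is a deliberately tiny sketch of an algorithm improving another algorithm: an outer random search tunes the step size of an inner gradient-descent learner. All the names and numbers are hypothetical toys, not any production AutoML system, but the structure is the same one those systems use.

```python
import random

def train(lr, steps=50):
    """Inner learner: gradient descent minimizing f(w) = (w - 3)^2."""
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3)
        w -= lr * grad
    return (w - 3) ** 2  # final loss; lower is better

def tune(trials=30, seed=0):
    """Outer algorithm: random search over the inner learner's step size.

    No human picks the learning rate; another algorithm does."""
    rng = random.Random(seed)
    best_lr, best_loss = None, float("inf")
    for _ in range(trials):
        lr = 10 ** rng.uniform(-3, 0)  # sample lr log-uniformly in [0.001, 1]
        loss = train(lr)
        if loss < best_loss:
            best_lr, best_loss = lr, loss
    return best_lr, best_loss

best_lr, best_loss = tune()
print(f"best lr: {best_lr:.3f}, final loss: {best_loss:.2e}")
```

A user of the resulting model never sees the outer loop at all; it is exactly the kind of hidden layer of intelligence described above.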
Another example: Researchers from Google have used neural networks to improve data compression, and released the model for public use via TensorFlow. While still an evolving use case, it points to a not-too-distant world in which MI may work invisibly around us, whether it’s freeing up space on our hard drives or optimizing traffic signals based on real-time traffic, a goal that’s recently gained attention.
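Why should a learned model help with compression at all? The underlying information-theoretic reason is simple: an ideal entropy coder spends roughly -log2 p(symbol) bits per symbol, so the better a model predicts the data, the shorter the codes. The sketch below illustrates that principle with symbol frequencies on made-up text; it is not the neural model from the research above, just the classic idea it builds on.

```python
import math
from collections import Counter

def bits_per_symbol(data, model):
    """Average code length when an ideal entropy coder uses `model`'s
    probabilities: it spends -log2 p(symbol) bits on each symbol."""
    return sum(-math.log2(model[s]) for s in data) / len(data)

text = "abracadabra" * 20  # toy data

# Baseline: no model; every one of 256 byte values assumed equally likely.
uniform = {s: 1 / 256 for s in set(text)}

# "Learned" model: probabilities fitted to the data's actual frequencies.
counts = Counter(text)
fitted = {s: c / len(text) for s, c in counts.items()}

print(bits_per_symbol(text, uniform))  # 8.0 bits per symbol
print(bits_per_symbol(text, fitted))   # ≈ 2.04 bits per symbol
```

A neural compressor takes this one step further: instead of static frequencies, a network predicts each symbol from context, driving the expected code length lower still.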
For enterprises, one of the most vital areas of invisible MI may be IT security, with intelligence analyzing user behavior in real time to detect and thwart anomalies. Ideally, this intelligence is something most users never think about – because if they have a reason to do so, it means the security has probably failed. Cutting-edge MI-based security applications are still in their infancy – the approach is sometimes hampered by false positives, for example – but this shift from security that must be programmed to security that actually learns is likely to be profound.
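A minimal sketch of what "security that learns" means in practice: fit a baseline from a user's past behavior, then flag activity that deviates sharply from it. The data and threshold here are hypothetical, and real systems model far richer features, but the shape of the problem – including the false-positive risk a tight threshold creates – is the same.

```python
import statistics

def fit_baseline(history):
    """Learn a user's normal behavior from past activity counts."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag activity more than `threshold` standard deviations from
    the learned baseline (a simple z-score test). Lowering the
    threshold catches more attacks but raises false positives."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

# Hypothetical data: one user's daily file-download counts.
history = [12, 9, 14, 11, 10, 13, 12, 9, 11, 10]
baseline = fit_baseline(history)

print(is_anomalous(11, baseline))   # an ordinary day -> False
print(is_anomalous(450, baseline))  # possible bulk exfiltration -> True
```

Crucially, nothing here was programmed to recognize a specific attack; the definition of "abnormal" falls out of the data, which is what distinguishes learned security from rule-based security.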
Machine intelligence could cast a very wide net
Just a few years ago, machine learning capabilities were limited to the handful of companies with the expertise and resources to build them. But technology evolves quickly. Today, thanks to as-a-service infrastructure, machine learning API products, open source models and libraries, and other new resources, the barrier to MI entry is lower than ever. Even enterprises with little expertise in data science and advanced mathematics can begin building more sophisticated models and more intelligent apps.
While enterprises should be building strategies around the most visible manifestations of MI – the intelligent digital assistants, the universal translators, and other forms that will interact directly with people – it’s also important to keep the full scope in mind. Intelligent machines will revolutionize industries in ways that go largely unnoticed by most of us, and the rewards for businesses with the vision to see these invisible applications could be tremendous.