APIs, ecosystems, and the democratization of machine intelligence

Business leaders should stop thinking of machine intelligence in theoretical terms and start thinking of it as a force ready to be embedded in countless aspects of daily life.

artificial intelligence / machine learning / virtual brain

It’s funny the ways pop culture conditions our sense of how humans and machine intelligence (MI) might interact.

Take Star Wars. When Threepio warns Han Solo that it's almost impossible to successfully navigate an asteroid field, how does Solo respond? By barreling the Millennium Falcon straight into it, of course. Artificial intelligence abounds in that galaxy far, far away—but not with much obvious effect on human decision-making.

Iron Man offers a different take on MI. Tony Stark builds intelligence into almost every aspect of his life—managing smart devices in his home, helping him engineer new inventions, even offering real-time analysis to help him counter opponents in combat. Whereas to Han Solo, MI is generally a sidekick, to Stark, it’s often a way to amplify his own intelligence and accomplish things he couldn’t accomplish himself.

We bring all this up because machine intelligence is becoming democratized. Organizations are increasingly pursuing ecosystem strategies in which they build digital products by combining their software, typically via APIs, with software from other companies. If a developer wants to build an app with mapping and navigational capabilities, for example, she doesn’t need to build the functionality herself—she can use APIs from companies such as Waze or Google, our employer. Just as any developer today can easily build rich navigational functionality using APIs, developers will soon be able to do similar things with MI.

This means business leaders need to start thinking about machine intelligence not just in the context of sci-fi movies, theoretical futures, and niche applications—but as a force ready to be embedded in countless aspects of daily life.  

An evolution: from AI hype to IA opportunities

To get a sense of where MI is likely to go, it’s useful to look at where it’s been. Like movies, real-world AI research has encouraged us to think about interactions between humans and intelligent machines in specific terms—some of them arguably myopic.

When Deep Blue defeated chess grandmaster Garry Kasparov twenty years ago, for example, many heralded it as the dawn of intelligent machines. But that wasn’t really machine intelligence defeating human intelligence; it was machine computation defeating human intelligence. The chess match also reinforced the broadly antagonistic narrative that has defined MI since the Turing Test—that smart machines should exist to compete with, and in some cases perhaps even replace, people.

With the ability to learn about and understand natural language, including subtle signals such as humor, IBM’s Watson was a more meaningful step into legitimate MI. Its “Jeopardy” victory was, despite a few humorous gaffes, a landmark achievement. Even so, these early Watson accomplishments were, like those of its predecessor Deep Blue, more about defeating humans in abstract games—a niche application that did little to help us understand what an age of pervasive MI might look like.  

Happily, after years cycling through different variations of the Turing Test, we’ve begun to treat machine intelligence less as a competitor and more as a way of augmenting our own decision-making. Watson, for example, has moved from “Jeopardy” to trying to match cancer patients with the best treatments. Similarly, Google invests not only in MI projects that compete in games against humans, such as AlphaGo, but also in a wide range of practical research. Working with Google machine learning technology, for example, Stanford researchers recently developed an image recognition technology that can reliably identify skin cancer.

Still, these healthcare use cases—profound though they are—are niche compared to the impending ubiquity of intelligence that we're talking about. People are shocked, for example, that Anant uses machine intelligence to help him compose at least 10 percent of his emails—and even those who know have trouble discerning which emails are which. This latter type of machine intelligence—more Intelligence Amplification (IA) than Artificial Intelligence (AI)—is where some of the biggest explosions in machine intelligence are primed to occur.

Yes, we’ll likely cede decision-making to machines in some ways. Autonomous cars may evolve along the lines of Tesla’s Autopilot, enhancing human driving but leaving the person in control, or they might end up like the cars Waymo has tested, which lack even a steering wheel. Those sorts of dramatic shifts will unfold over years and in some cases decades to come. But in the short term, the opportunity and momentum won’t involve turning our lives over to AI overlords—they will involve using machine intelligence to amplify human decision-making.

APIs and the cloud democratize machine learning

Machine intelligence is about to become pervasive because, for the first time in history, virtually any enterprise can tap this power—and because any of them can, those that don’t risk being left behind.

The open-source TensorFlow library for machine learning, for example, gives any company in the world access to technology and expertise that few companies have the resources to develop in-house.
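To give a sense of how low that barrier now is, here is a minimal sketch—assuming TensorFlow 2.x is installed—that uses the open-source library to fit a one-parameter model by gradient descent. The data and learning rate are illustrative placeholders, not a recommended recipe:

```python
import tensorflow as tf

# Toy data following y = 2x; the goal is to recover w ≈ 2.0.
xs = tf.constant([1.0, 2.0, 3.0, 4.0])
ys = tf.constant([2.0, 4.0, 6.0, 8.0])

w = tf.Variable(0.0)  # the single trainable parameter
opt = tf.keras.optimizers.SGD(learning_rate=0.05)

for _ in range(200):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean((w * xs - ys) ** 2)  # mean squared error
    # Compute the gradient of the loss and take one descent step.
    opt.apply_gradients([(tape.gradient(loss, w), w)])
```

A decade ago, this kind of automatic differentiation and optimization machinery was the province of specialized research labs; today it is a `pip install` away.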

Pre-trained machine learning models take this democratization a step further. The aforementioned Stanford researchers used an algorithm already trained on 1.28 million images spanning 1,000 categories. The algorithm still had to be further trained to distinguish malignant skin conditions from benign ones—but by starting with a mature base, the researchers removed significant labor and specialized technical expertise from the equation, allowing the doctors to focus on answering their question instead of wrangling technology.
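In code, that transfer-learning pattern is only a few lines. The sketch below is not the Stanford team's actual pipeline: it assumes TensorFlow/Keras, uses MobileNetV2 as a stand-in for an ImageNet-trained base network, and attaches a new head for a hypothetical benign-vs-malignant task:

```python
import tensorflow as tf

# Start from a base network pre-trained on ImageNet (~1.28M images,
# 1,000 categories), dropping its original classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the mature, pre-trained features frozen

# Attach a small new head for the narrow task: benign vs. malignant.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(...) would then train only the new head on labeled images.
```

Because only the small new head is trained, the team with the labeled images needs far less data, compute, and ML expertise than training a full network from scratch would demand.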

Increasingly, companies are releasing these pre-trained models as APIs, as alluded to above. This predictable interface lets developers treat MI the way they treat other, more familiar digital assets, and significantly decreases the data science knowledge developers need to get started. Google, for example, now offers APIs for natural language analysis, text translation, image recognition, and other services. These APIs can help companies effectively bypass the years of internal R&D and organizational overhauls that building such services from scratch would likely entail—and skip straight to building this intelligence into their apps.
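To make that concrete, here is a standard-library-only sketch of the kind of JSON request body a developer might POST to an image-recognition service such as Google's Cloud Vision `images:annotate` endpoint. The image bytes are a placeholder and no network call or credentials are involved:

```python
import base64
import json

# Placeholder standing in for the raw bytes of a real image file.
fake_image_bytes = b"\x89PNG..."

# The request asks the service to label what it sees in the image.
request_body = {
    "requests": [{
        "image": {"content": base64.b64encode(fake_image_bytes).decode("ascii")},
        "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
    }]
}

payload = json.dumps(request_body)
# `payload` would be sent via HTTPS POST with an API key; the service
# returns label annotations—no in-house model training required.
```

The entire "machine learning integration" is an HTTP request and a JSON response, which is exactly why a predictable API interface lowers the bar so dramatically.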

Preparing for machine intelligence ecosystems: 3 tips

  1. Don’t wait to think about machine intelligence. The tools are becoming so accessible, the tide may turn quickly. Getting caught without an ML plan tomorrow could be like getting caught without a mobile plan several years ago.
  2. Think beyond Turing Test variations. The extent to which legitimate AI might replace human decision-making and productivity is fascinating and worth discussing. But for most companies, the opportunity isn’t in sentient machines—it’s in machine learning that amplifies human ability and improves minute-to-minute user experiences.
  3. You don’t have to build it all yourself. For many companies, the possible advantages involved in developing a fully proprietary machine learning platform won’t justify the enormous cost and risks of failure. Luckily, companies can circumvent these challenges via a variety of cloud resources, including APIs that let developers get up and running with MI without having to build all the pieces themselves.

This article is published as part of the IDG Contributor Network.
