Moving toward outcome-based intelligence


Amidst all of today’s talk about machine learning, an important point is often forgotten: because true machine learning should be invisible to the end user, the actions taken based on machine learning should be the key measure of the technology’s effectiveness.

It is not just that machine intelligence can predict failures; it is how many failures are prevented because of it. It is not just that machine intelligence can detect API structures from call traffic; it is how many more APIs get built because of it. It is not just that machine intelligence can recognize objects; it is how many accidents are prevented because of it. You get the idea. It can be argued that the true measure of machine intelligence is not accuracy and loss (favorite measures of deep learning), but its effect on outcomes.

We have two problems in going from learning models to affected outcomes. First, the developers building applications need to measure the impact of the model on their outcomes. Second, they need to rapidly build applications that affect outcomes by easily incorporating the learning models.

Let’s take the measurement problem first.  

Suppose you are rolling out a new vision model for an autonomous vehicle. The only thing that matters is whether that better model helps lower accidents—not, per se, that the model recognizes objects better or faster.  

There are challenges, though. Accidents are rare events (and that is a good thing), and measuring the outcome effectiveness of rare events is not easy (and that is a bad thing). There are other difficulties. Avoiding accidents is a multi-variate problem. Some variables, such as your model, are under your control. Others, such as what other cars are doing, rain, or construction, are not. Bayesian techniques, the multi-armed bandit approach, A/B testing, and so on are good first steps toward measuring and understanding outcomes, but they are not sufficient.
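To make the bandit idea concrete, here is a minimal sketch of Thompson sampling, one common multi-armed bandit method, used to split traffic between two model variants based purely on observed outcomes. The success rates, variant count, and round count are hypothetical illustrations, not real deployment numbers.

```python
import random

def thompson_sampling(success_rates, rounds=10_000, seed=42):
    """Route traffic between model variants by outcome, not by accuracy.

    success_rates: the true (unknown to the algorithm) probability that
    each variant produces the desired outcome. Returns how many times
    each variant was chosen.
    """
    rng = random.Random(seed)
    # One Beta(successes+1, failures+1) posterior per variant.
    alpha = [1] * len(success_rates)
    beta = [1] * len(success_rates)
    pulls = [0] * len(success_rates)
    for _ in range(rounds):
        # Sample a plausible outcome rate for each variant; pick the best.
        samples = [rng.betavariate(alpha[i], beta[i])
                   for i in range(len(success_rates))]
        arm = samples.index(max(samples))
        pulls[arm] += 1
        # Observe the outcome and update that variant's posterior.
        if rng.random() < success_rates[arm]:
            alpha[arm] += 1
        else:
            beta[arm] += 1
    return pulls

# Traffic shifts toward the variant with the better outcome rate.
pulls = thompson_sampling([0.90, 0.95])
```

The appeal for outcome measurement is that the exploration/exploitation tradeoff is handled automatically: the worse variant keeps receiving just enough traffic to stay measurable, while most users get the better one.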

Now let’s look at the second problem—building applications that learn and adapt—and how the problem of measurement spills into it.  

Building learning applications is not the same as building machine learning models. Because the domain of application development and the domain of machine learning are different, rare is the breed of engineer who crosses the two.  

Machine learning folks tend to build models on whatever inputs they think will affect those models, and they always want a clean input vector: all inputs must be of the same size, standardized, normalized, and so on. So if a propensity-to-buy model is built on past customer behavior and product similarity, but another variable, such as weather, also influences the propensity to buy, then tough luck: the model does not take weather into account, and the application programmer, even with access to the weather information (which is typically just an API away), cannot incorporate it. The needs of the machine learning engineer and the needs of the application developer are not well aligned.
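A minimal sketch makes the mismatch visible. The feature names, scaling constants, and weights below are all hypothetical; the point is only that the model's input contract is fixed at training time, so an extra signal like weather has nowhere to go.

```python
import math

# A propensity-to-buy model trained on a fixed, standardized input vector.
# All names and constants here are illustrative, not a real model.
FEATURES = ["past_purchases", "days_since_visit", "product_similarity"]
MEANS = [4.0, 12.0, 0.5]    # per-feature training means
STDS = [2.0, 9.0, 0.2]      # per-feature training standard deviations
WEIGHTS = [0.8, -0.3, 1.1]
BIAS = -0.2

def predict_propensity(raw):
    """Score a customer; `raw` must supply exactly the trained features."""
    if set(raw) != set(FEATURES):
        # Extra signals such as "weather" are rejected outright: the model
        # was never trained on them, so the developer cannot just pass them in.
        raise ValueError(f"model expects exactly {FEATURES}")
    # Standardize each input the same way the training pipeline did.
    z = [(raw[f] - m) / s for f, m, s in zip(FEATURES, MEANS, STDS)]
    score = BIAS + sum(w * x for w, x in zip(WEIGHTS, z))
    return 1.0 / (1.0 + math.exp(-score))  # logistic output in (0, 1)

customer = {"past_purchases": 6, "days_since_visit": 3,
            "product_similarity": 0.7}
p = predict_propensity(customer)
```

Passing `{**customer, "weather": 1.0}` raises an error rather than improving the prediction, which is exactly the application developer's frustration.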

Indeed, going from the rough-and-tumble real world of data to producing clean inputs for the model is often too steep a curve. Yes, the pandas library is available in Python, but if the application is being written in Java, how does the application developer do the right thing?

So how do we overcome these challenges? When two worlds are far apart, the only solution is for each to move closer to the other.  

Machine learning is becoming simpler, and that needs to continue. There are some fantastic advancements in machine learning in the area of “learning to learn” models: algorithms that explore different spaces and types of models. AlphaGo used some of these techniques to defeat top human players at Go. These sorts of simplifications will not only encourage more people to build machine learning models (helping to solve a critical skills problem), but also result in models that are much easier to use.

But application building paradigms, and application programmers, need to become much more data-driven and outcome-focused. Instead of programming through rules, they need to program through predictions. They need to be more API-driven to access the systems outside their control (whether it is to get weather information from a third party or predictions from a model built by a team elsewhere within their enterprise). And they need some core education in machine learning as a prerequisite for an application programming job.
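Programming through predictions can be sketched in a few lines. The endpoints, JSON fields, and decision threshold below are all hypothetical; the point is that the application consumes both the in-house model and the third-party weather service as APIs, and the outcome-affecting decision rule lives in the application itself.

```python
import json
from urllib.request import urlopen

def get_json(url):
    """Fetch and decode a JSON response (hypothetical endpoints below)."""
    with urlopen(url) as resp:
        return json.load(resp)

def should_send_offer(customer_id, fetch=get_json):
    # Prediction from a model built by another team in the enterprise.
    propensity = fetch(f"https://models.internal/propensity/{customer_id}")["score"]
    # Context the model does not know about, one API call away.
    weather = fetch("https://api.weather.example/today")["condition"]
    # The decision rule lives in the application, not the model: in this
    # assumed domain, rainy days depress store visits, so demand a higher
    # propensity before spending an offer.
    threshold = 0.7 if weather == "rain" else 0.5
    return propensity >= threshold
```

Injecting the `fetch` function keeps the prediction and weather services swappable and the decision logic testable without any live calls.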

In the next five years, machine learning will likely offer more and more leverage. But the most important measuring stick will be machine learning’s impact on improving outcomes.

For that to happen, applications have to incorporate machine learning. And for that to happen, the gap between machine learning and applications has to reduce. Newer machine learning techniques coupled with better programming techniques and more data-driven skills should make a difference, and so should better tools to directly connect outcomes to the core models.

As these pieces align, the future of smart outcomes appears very bright.

This article is published as part of the IDG Contributor Network.
