Machine Learning, Machine Intelligence and Cognitive Computing: What Does All of This Have to Do with Big Data?

Living in the information age and finding the silver lining: preparing for a future where machines make our lives better by going beyond what they were initially programmed to do. Artificial intelligence, statistical learning techniques, machine learning, cognitive computing: hear how data is helping to change lives.

Another day, another story of how machines are going to take over the world. First, computers started beating humans at chess, and now even the more intellectually challenging game of Go has fallen prey to them. So what’s next? Is it the self-driving cars we see being tested? Are we really slated to be slaves to our robot overlords, or is there a silver lining here, a future where machines actually make our lives better by going beyond what we initially programmed them to do?

Lots of everyday people are asking this question, and aside from the pessimistic prognostications of a few as to how SkyNet will ultimately prevail, I’m here to tell you that I think the future looks quite rosy for humanity, especially when it comes to using machines to understand our environment. But before we get to that, let’s start off with a bit of a primer on all this terminology, which should give you a better foundation to appreciate why we are seeing this resurgence of automated learning techniques.

Artificial intelligence, or AI, is a term coined in the 1950s when computers were in their infancy and researchers, enamored with the possibilities, dreamt of a not-too-distant future where machines could learn, just like their human counterparts, and adapt to general-purpose tasks without much human intervention. Well, it turned out that replicating the human brain was quite a difficult task to achieve in practice, and most researchers failed in their objectives. That is not to say all was lost, however, because AI has been making steady progress ever since.

The first AI systems were based entirely on rules and were known as expert systems. Think of a rule as an “If … then …” statement; although that is elegant in its simplicity, you can imagine that encompassing every scenario via such rules quickly becomes overwhelming and impractical. Moving forward a few decades, researchers realized that rather than explicitly specifying the rules for everything, it is much more convenient to have a few starting rules combined with rules for how to “learn.” This was a more flexible and superior approach, similar in many ways to how animals and humans acquire intelligence.
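To make that concrete, here is a toy sketch in Python of what a hand-coded rule base looks like. The rules and the purchase-checking scenario are my own invention, purely for illustration, but they show why covering every possible situation this way gets unwieldy fast:

# A toy "expert system": every situation must be anticipated by a hand-written rule.
# The rules and thresholds below are hypothetical, purely for illustration.

def classify_transaction(amount, country, hour):
    # Rule 1: large overseas purchases in the middle of the night look suspicious
    if amount > 5000 and country != "US" and hour < 6:
        return "flag for review"
    # Rule 2: tiny repeated "test" charges are a known fraud pattern
    if amount < 1:
        return "flag for review"
    # ... every additional scenario needs yet another explicit rule ...
    return "approve"

print(classify_transaction(8000, "FR", 3))   # flag for review
print(classify_transaction(42, "US", 14))    # approve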

Hence was born the area of statistical learning techniques, or neural nets, as the proposed approach was largely modeled on how the biological neurons in our brains function. In this approach, the machine starts off as a blank slate, and then you train it (or teach it, if you will) with suitably biased data that specifically highlights the topic of interest to be “taught.” This, in a nutshell, is what machine learning (ML) is all about, and there’s a reason why it’s all the rage today.
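To see what “training” means in practice, here is a minimal sketch of a single artificial neuron learning from a handful of labeled examples. The dataset and numbers are invented for illustration, and real neural nets stack many such neurons in layers, but the blank-slate-plus-examples idea is the same:

# A minimal sketch of the "blank slate" idea: one artificial neuron that starts
# with zero knowledge (all weights zero) and learns from labeled examples.
# The tiny dataset below is invented purely for illustration.

# Each example: (hours of rain forecast, temperature in C) -> 1 if umbrellas sold out, else 0
data = [((8.0, 12.0), 1), ((6.0, 10.0), 1), ((0.0, 25.0), 0), ((1.0, 22.0), 0)]

weights = [0.0, 0.0]   # the "blank slate"
bias = 0.0
learning_rate = 0.1

def predict(x):
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

# Repeatedly show the machine its mistakes and nudge the weights toward the truth.
for _ in range(20):
    for x, label in data:
        error = label - predict(x)
        weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
        bias += learning_rate * error

print(predict((7.0, 11.0)))   # 1: resembles the rainy-day examples
print(predict((0.5, 24.0)))   # 0: resembles the dry-day examples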

We’re living in the information age, and you are no doubt aware of data, or more specifically what is termed Big Data: data that is coming in much faster than can be humanly handled. And therein lies the crux of the problem. Today, in spite of our many advances in technology, it’s still the astute human data scientist, that much sought-after, rare creature, who is ultimately responsible for finding the information in the terabytes of data.

As an aside, information is often defined as something you didn’t already know. Finding it is not an easy task, and although there is a bevy of statistical tools and techniques that data scientists have at their disposal to analyze data, we still need to get better and more efficient at how we do this. So you can think of machine learning as yet another tool at the data scientist’s disposal for uncovering the information lurking in the data. Machine learning algorithms comb through data looking for consistent patterns, in time or space or both: patterns that defy any known logic, that are irregular, and that only become obvious when you look at lots of data in concert rather than just a sample.

Predictive analytics today essentially applies the predictors discovered by machine learning algorithms to a future situation, based on the simple premise that if something has happened consistently in the past, then it will most likely occur again in the future when the right set of surrounding circumstances presents itself.

A very simple example of this would be the correlation between wet weather and a surge in raincoat sales. And as the smart reader will have already noted, correlation does not mean causation. In other words, the computer is not smart enough to realize that the increased likelihood of rain caused the upsurge in people buying raincoats, and not vice versa. Now this is where machine intelligence (MI) comes in. Not only can the algorithm tell you what the predicted outcomes of a future situation will be, on some probability scale, but it can also tell you why it arrived at those conclusions and what in the data made it prefer one set of outcomes over another. Now that’s exciting. I don’t for a moment think that such systems will supplant the data scientist; they will just make the data scientist far more productive.
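To ground the raincoat example, here is a tiny sketch, with invented weekly sales figures, that computes the correlation between rainfall and raincoat sales. Notice that the statistic itself is symmetric, so it cannot, on its own, tell you which way the causation runs:

# Hypothetical weekly data: millimeters of rain and raincoats sold (numbers invented).
rain  = [2, 15, 30, 5, 40, 0, 25]
coats = [12, 40, 75, 20, 90, 8, 60]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(round(pearson(rain, coats), 3))   # close to 1.0: strong correlation
# The coefficient is symmetric: pearson(coats, rain) returns the same value,
# so the statistic alone cannot say that rain drives sales rather than vice versa.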

So now that we’ve got all of this basic terminology out of the way and you can see what machine learning and intelligence are really all about, we can start thinking of the possibilities from a practical perspective. It should come as no surprise that ML is already in widespread use. One popular use case is fraud detection in financial transactions, and the industry is only getting started with the possibilities. Crooks can get quite creative when it comes to gaming the system, and this is why we need intelligent systems that continually monitor people’s buying behavior. The easy detections are the ones where there is an obvious outlier in the data; a trained data scientist can quickly spot those if the data is profiled and presented correctly. However, when nefarious behavior is hidden in seemingly innocuous transactions that don’t deviate too much from the norm, things get harder (and more interesting).

In the past, the most cost-effective way to observe a population was to build a model based on sampling the data, since going through all the data was prohibitively expensive from a resource perspective. Today, machine learning applications augment what a data scientist already knows about general purchasing patterns by building individualized profiles, by examining all the data, and by keeping track of changes in purchasing behavior at a very granular level—something that was just not feasible on a large scale before. Ultimately, they are helping make these fraud models more accurate by not only increasing the likelihood of detecting fraud, but by also reducing the number of false positives. And this helps both the financial institutions and the customers they serve by keeping fraud-related costs low.
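As a deliberately simplified illustration of what such an individualized profile might look like, the hypothetical snippet below flags a purchase only when it falls far outside that particular customer’s own history, rather than some population-wide average; the customers, amounts, and threshold are all made up:

# A simplified, hypothetical sketch of per-customer profiling:
# a purchase is flagged only if it sits far outside that customer's own history.

from statistics import mean, stdev

purchase_history = {
    # customer id -> past purchase amounts (invented numbers)
    "alice": [12.50, 8.99, 15.00, 11.25, 9.75, 14.10],
    "bob":   [450.00, 600.00, 520.00, 480.00, 700.00, 550.00],
}

def looks_anomalous(customer, amount, threshold=3.0):
    history = purchase_history[customer]
    mu, sigma = mean(history), stdev(history)
    # z-score style test: how far is this purchase from the customer's own norm?
    return abs(amount - mu) > threshold * sigma

print(looks_anomalous("alice", 500.00))   # True: wildly out of character for alice
print(looks_anomalous("bob", 500.00))     # False: routine for bob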

Other applications where ML is gaining widespread use include web searching where context can be inferred, ad placements based on a better understanding of user profiles, network intrusion detection and cybersecurity, and predictive maintenance of equipment, where sensor data is proactively used to detect emerging problems. In my opinion, these are all still the low-hanging fruit when it comes to ML, and in the coming years, we will see more of our daily lives being improved by such smart applications.

However, the real benefit is yet to come, when both ML and MI get better at the mundane tasks we take for granted, such as driving a car. Self-driving cars are pushing the ML boundaries in many ways because not every scenario can be preprogrammed, so the underlying algorithms have to be able to learn as they perform their daily tasks, much like we humans do today. And that brings us to what cognitive computing truly promises … but that’s a topic for another day.

Adnan Khaleel is a global strategist for Dell.

©2016 Dell Inc. All rights reserved. Dell, the DELL logo, the DELL badge and PowerEdge are trademarks of Dell Inc. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell disclaims proprietary interest in the marks and names of others.

