Deciphering the Mysteries of Neural Networks

BrandPost By Adnan Khaleel
Sep 20, 2019

As more and more businesses tap AI’s capabilities for automated decision-making, neural net transparency is crucial.


We all love a good mystery ― the whodunnit, where all the players are in plain sight but their motivations aren’t. All is revealed in the end, and everything falls into place ― the nefarious suspects were rather innocuous after all, and those seemingly without an alibi are found guilty despite their best efforts to mask their actions. We’re all familiar with the popular quote “Elementary, my dear Watson,” uttered when the famed Sherlock Holmes is asked by his never-quite-convinced sidekick, Dr. Watson, to explain his deductions. Unfortunately, not everything in life can be so clear-cut.

Interestingly, despite the numerous advances made in improving the accuracy of neural networks, much of how they actually work remains a mystery – the proverbial black box! Loosely modeled after the human brain, these artificial intelligence (AI) networks are used for detecting patterns in unstructured data ― such as sensor readings, images or transcripts ― with enterprise applications like audio-to-text conversion, optical inspection automation and inbound call classification. The present-day uncertainty around how these networks reach specific decisions has many implications, both technical and legal.


Figure 1: This chart shows that, as models become more accurate, it becomes increasingly difficult to explain how they work. Published in “Transparent Reasoning: How MIT Builds Neural Networks that can Explain Themselves” by Jesus Rodriguez on Towards Data Science, September 12, 2018.

Figure 1 shows that the more accurate a model is, the harder it becomes for us to explain how it works. This inverse correlation between interpretability and accuracy isn’t exactly a law, but rather an observation based on the cognitive limits of human brains. Although not quite as complex as our brains, neural networks are turning out to be incredibly complicated to understand.

Fundamentally, an artificial neural network (ANN) takes a set of inputs and, based on a weighted sum and an activation threshold, produces an output – think of it as a slightly more complicated digital gate that performs AND, OR or NOR operations. This is about as simple as it gets. However, as we string millions of these artificial neurons together, we reach a point where we cannot keep track of how the network is able to resolve a grid of pixels into a cat, a dog or what not. If it weren’t for the fact that ANNs work so well, we might be better off using our time to solve other fundamental problems. Unfortunately for us, we don’t have that luxury.
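
To make that building block concrete, here is a minimal sketch of a single artificial neuron in Python. The weights and bias are hand-picked illustrative values, not the result of any training run; real networks learn millions of such parameters automatically.

    # A single artificial neuron: a weighted sum of inputs pushed through a
    # simple threshold. Weights and bias are chosen by hand for illustration.
    def neuron(inputs, weights, bias):
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 if total > 0 else 0  # fires only if the sum crosses the threshold

    # With weights of 1 and a bias of -1.5, the neuron acts like an AND gate...
    print(neuron([1, 1], [1, 1], bias=-1.5))  # 1
    print(neuron([1, 0], [1, 1], bias=-1.5))  # 0
    # ...and with a bias of -0.5 it acts like an OR gate.
    print(neuron([1, 0], [1, 1], bias=-0.5))  # 1

One such unit is easy to follow; tracing the interplay of millions of them, each with its own learned weights, is where the black box begins.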

Almost nothing in our quiver of automated decision-making tools comes close to what we’ve achieved with neural networks. And, while it would be easy to get caught up in the excitement of AI’s capabilities and to start using it in situations we don’t yet fully understand, the implications of handing over decision-making capability to a system that doesn’t respond to inquiry are disturbing.

Understanding the problem

Let’s take a moment to understand how a neural network arrives at its outcome so we can better appreciate the limits of its capability. AI is a technically complex topic, and it has exciting potential. And therein lies the problem. It could prove to be the proverbial double-edged sword if we’re not careful in its applications.

There is an iconic scene in several Hollywood movies where a rescue robot must save one person and uses a “success” probability to choose which individual in a group is most likely to survive a rescue. This decision could be based on trauma already inflicted, time to extricate or some other criterion. To be clear, a human rescuer would have to make the same call in this situation. But it’s not the machine-computed probability with which we have an issue; it is how those probabilities are derived. This matters especially in a legal scenario, where the steps behind a deduction play an even bigger role in determining who bears the liability for a fatal decision.

Before you dismiss this as a fanciful futuristic scenario, consider that this future is already here. Here are a few frequently referenced examples:

  • A self-driving car must swerve to avoid a dangerous situation, and the choices are to either drive into a light pole and endanger the well-being of its occupant, or to drive into the other lane where, unfortunately, it collides with another vehicle resulting in fatalities. Who is liable?
    • The manufacturer of the self-driving car?
    • The programmer who wrote the self-driving code that resulted in this unusual action?
    • Or, is it the data used to train the neural model that caused the car to choose the fatal swerve?

As tragic as this scenario is, liability and responsibility must be established to compensate the victim.

  • A new neural net is extremely accurate at detecting tumors in X-rays. (Unrelated, but see the recent FDA approval for GE’s AI-powered X-rays.) However, despite their best efforts, doctors find nothing anomalous that could explain a positive detection by the model. Should they
    • Tell the patient and subject them to several weeks of mental torture and physical tests, only to potentially discover that the neural model was incorrect (unlikely, but possible)?
    • Not tell the patient, only to learn later that the neural model was indeed correct, and face a lawsuit from the patient for withholding testing and earlier treatment?
  • Or, how about something less dire but still consequential? An individual is denied a mortgage because something caused the otherwise “perfect” neural net to classify this person as high-risk, and we don’t know exactly why.

These aren’t easy situations and, today, when trained professionals making the decisions are at fault, we have laws to determine and assign or absolve blame. Interestingly, we’re far more forgiving of humans making mistakes than we are of machines. We know people are very capable of misjudgments and errors, and our reparative systems in place today incorporate this aspect heavily into their design. Machines, on the other hand, are expected to be perfect.

It seems that we have a natural bias in favor of humans, and this shouldn’t be so surprising, since we are a social species. But what happens when machines start getting better, and we humans begin to rely on their results unquestioningly? If you doubt this could ever happen, then let me ask you this: Since you started using a smartphone, how many new phone numbers have you memorized? I rest my case.

Worryingly, as machines do more of the heavy lifting, the experts who rely on them will have fewer opportunities to practice their expertise. It’s not that these professionals will suddenly lose their skills, but that we humans need to continually revisit a problem in order for us to get better. (For example, you can still memorize a phone number if you must.) And, as fewer and fewer humans exercise a given skill, we risk losing that specific body of documented knowledge and experience.

Case in point: It’s often cited that pilots who are accustomed to autopilot aren’t as adept at dealing with emergency situations as their counterparts from a generation ago. Continued training and emergency simulations are warranted in high-stakes areas like commercial flight, but what about other areas where there isn’t an incentive to keep skills current?

Although my prognostications paint a somber picture, I don’t believe the future has to unfold that way. I merely want to get you thinking about why neural network transparency is crucial to automated decision-making.

Additional benefits

In addition to the example situations described above, there are plenty of other reasons why we would like to understand the neural network black box ― like better, more-efficient neural nets. From a purely scientific perspective, might we be able to train a neural net better or faster if we understood what went on inside?

Another example I find particularly exciting is the way neural nets can solve problems that have taken us humans centuries to understand. For example, a newly developed neural net can pretty much work out the physics of the universe simply by observing the motion of objects. What makes this even more intriguing is that it can account for dark matter, something today’s researchers still don’t completely understand.

As if all of this weren’t good enough already, these AI models appear to be a few orders of magnitude faster than their mathematically complex computational counterparts, since they require far fewer computational steps.

At this point, I’ve hopefully sold you on why we must understand how large, accurate neural nets work. But is this even possible?

Progress in interpretability

Fortunately, lots of advances are being made in interpretability. One approach is to assign colors to various operations as the neural network is operating, much like the way we can look at a functional MRI and see roughly how the brain is taking up oxygen and glucose. This allows us to have a visual representation of how an ANN is functioning, with a dynamic heat map of activity. It’s a simple approach, but it leaves much to be desired in its granularity and resolution.
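
One concrete flavor of this heat-map idea is a gradient-based saliency map, which highlights the input pixels that most influence a classifier’s decision. The short PyTorch sketch below is illustrative only: the ResNet-18 model is left untrained and the “image” is random noise, purely to keep the example self-contained; in practice you would use a trained model and a real photograph.

    # Rough sketch of a gradient-based saliency map.
    import torch
    import torchvision.models as models

    model = models.resnet18()                                # stand-in for a trained classifier
    model.eval()

    image = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in for a real image
    scores = model(image)
    top_class = scores.argmax(dim=1)
    scores[0, top_class].backward()                          # gradient of the winning score w.r.t. the pixels

    # The gradient magnitude at each pixel is a crude measure of its influence;
    # collapsing the color channels yields a 224x224 heat map.
    saliency = image.grad.abs().max(dim=1).values.squeeze()
    print(saliency.shape)                                    # torch.Size([224, 224])

Overlaid on the original image, such a map gives a quick visual answer to “what was the network looking at?”, though, as noted above, the picture it paints is coarse.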

A better option than retrofitting existing models with interpretability would be to design a neural network that offered it as a fundamental feature. And that is exactly what the folks at MIT have done, calling it a Transparency by Design Network (or TbD-Net). The MIT-based TbD-Net model breaks a complex chain of reasoning into a series of smaller subproblems, each of which can be probed individually, and the results are successively grouped into larger chunks. This helps to break the process down in much the same way as scientists decompose a larger problem into its smaller components.
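
To illustrate the spirit of that design ― and emphatically not the actual TbD-Net architecture ― here is a toy Python sketch in which a question about an image is answered by a chain of tiny modules, each of whose intermediate outputs can be inspected on its own. The modules and thresholds are invented for this example.

    # Toy illustration of transparency by design: each reasoning step is a small,
    # separately inspectable module. Unrelated to the real TbD-Net internals.
    import numpy as np

    def find_red(image):                       # step 1: attend to reddish pixels
        return (image[..., 0] > 0.7) & (image[..., 1] < 0.3)

    def keep_if_large(mask, min_pixels=20):    # step 2: keep only sizable regions
        return mask if mask.sum() >= min_pixels else np.zeros_like(mask)

    def any_present(mask):                     # step 3: reduce the mask to an answer
        return bool(mask.sum() > 0)

    image = np.random.rand(64, 64, 3)          # stand-in for a real photo
    steps = {"find_red": find_red(image)}
    steps["keep_if_large"] = keep_if_large(steps["find_red"])
    answer = any_present(steps["keep_if_large"])

    # Every intermediate mask is available for inspection, so we can see *why*
    # the final answer came out the way it did.
    print(answer, {name: int(mask.sum()) for name, mask in steps.items()})

The point is not these particular modules but the workflow: because each step produces an artifact a human can examine, the chain of reasoning stops being a single opaque leap.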

Neural network transparency is an active area of research and will keep improving, especially as demand for it grows. Despite many decades of brain research, we still don’t know how the human brain does what it does, but wouldn’t it be grand if a newly created neural network could explain itself?

To learn more

For more perspectives on tapping the value of data with artificial intelligence systems, explore Dell Technologies AI Solutions and Dell EMC Ready Solutions for AI.