Explainable AI: Bringing trust to business AI adoption

For many organizations, AI remains a mystery not to be trusted in production, thanks to its lack of transparency. But demand, advances, and emerging standards may soon change all that.


When it comes to making use of AI and machine learning, trust in results is key. Many organizations, particularly those in regulated industries, hesitate to put AI systems into production because of what is known as AI's "black box" problem: the algorithms reach their decisions opaquely, offering no explanation of the reasoning behind them.

This is an obvious problem. How can we trust AI with life-or-death decisions in areas such as medical diagnostics or self-driving cars if we don't know how these systems reach their conclusions?

At the center of this problem is a technical question shrouded by myth. A widely held belief is that AI technology has become so complex that the systems cannot explain why they make the decisions they do, and that even if they could, the explanations would be too complicated for human brains to understand.

The reality is that many of the most common algorithms used in machine learning and AI systems today can have what is known as "explainability" built in. We're just not using it, or not getting access to it. For other algorithms, explainability and traceability functions are still being developed, but they aren't far off.
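As a concrete illustration (the article names no specific tools, so this is a minimal sketch assuming Python with scikit-learn and its bundled breast-cancer dataset), a common tree-ensemble model already reports how much each input feature drove its decisions, a simple, global form of built-in explainability:

# Minimal sketch of "built-in" explainability, assuming scikit-learn
# is installed. Tree-based models expose per-feature importance scores
# directly, with no extra tooling required.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# A small public tabular dataset, standing in for real business data.
data = load_breast_cancer()
X, y = data.data, data.target

# Fit a widely used tree-ensemble model.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# The trained model reports how strongly each feature influenced its
# decisions; print the top five drivers.
importances = sorted(
    zip(data.feature_names, model.feature_importances_),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in importances[:5]:
    print(f"{name}: {score:.3f}")

Running the script prints the five features the model leaned on most, which is the kind of explanation a reviewer or regulator can sanity-check against domain knowledge.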

