A Matter of Trust: Controlling AI to Deploy at Scale

BrandPost By Sharon Goldman
Aug 08, 2019
IT Leadership

Artificial intelligence has gone far beyond a simple buzzword. Experimentation in a wide variety of use cases is everywhere. However, most companies are not yet adopting AI across the enterprise at scale, says Martin Sokalski, Principal, Advisory, Emerging Technology Risk Services at KPMG. “A lot of our clients ask how they can get to scale with AI,” he explains. “They don’t know how to bridge the gap.”

A big issue, he explains, is a lack of trust and transparency. Companies struggle to give confident assurances about how their data is used, or to explain why their algorithms behave the way they do.

According to a new KPMG report, Controlling AI: The imperative for transparency and explainability, four foundational pillars are necessary to secure trust in AI and succeed when deploying at scale, says Sokalski: 

  1. Integrity. Where is your data coming from? What is its lineage? “Organizations must be able to maintain the integrity of the model as it is deployed, monitored and continues to evolve,” he says.
  2. Explainability. As more decisions are powered by AI, understanding why a model reached a given decision is essential. The boardroom and the C-suite are demanding to know the “why” behind AI. This issue is also increasingly important to regulators and compliance functions.
  3. Fairness. To trust AI, its decisions must be seen as fair. That’s a challenge, since the data feeding AI may carry inherent bias around personal attributes such as gender or race. Organizations must manage that bias with careful oversight and governance, says Sokalski: “A company’s reputation may be on the line.” (A minimal bias check is sketched after this list.)
  4. Resilience. What happens when an AI model goes out into the wild? It evolves. Organizations need to provide for resilience against errors, adversarial attacks and other risks. “Maintaining an end-to-end AI lifecycle is difficult,” says Sokalski.
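
To make the fairness pillar concrete, the sketch below shows one simple check an oversight team might run: comparing the rate of positive model decisions across groups defined by a sensitive attribute. This is an illustrative example, not a method from the KPMG report; the function name and the demographic-parity metric are assumptions about what such a check could look like.

```python
# Minimal sketch of a fairness check, assuming binary model predictions and a
# single sensitive attribute. Hypothetical example, not from the KPMG report.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Example: approval predictions for two groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)           # {'A': 0.75, 'B': 0.25}
print(f"gap = {gap}")  # gap = 0.5 -> a large disparity worth investigating
```

A large gap does not prove discrimination on its own, but it flags exactly the kind of pattern that the oversight and governance Sokalski describes should catch before a model reaches production.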

To develop these four “anchors of trust,” Sokalski encourages organizations to adopt a framework of leading practices that facilitates the responsible adoption and scaling of AI.

“Some of the risks that AI introduces are unique and different, so the approach must be different as well,” he says. This is no easy task, however: According to the 2019 Enterprise AI Adoption Study, while gaining trust is a top priority for 45% of surveyed executives, around 70% don’t know how to govern algorithms. A framework powered by methods and tools can help address the inherent risks and ethical issues in AI.

Sokalski emphasizes that he is optimistic about the future of AI adoption. “I’m personally very bullish about how quickly AI will be adopted at the enterprise,” he says. “We’ll get better with explainability, but there are some fundamental things, such as ethics and bias and transparency, that need to be addressed before organizations, leadership, clients and society overall are ready to take a full deep dive.”

Certain AI technologies, such as deep neural networks, will continue to be a challenge when it comes to providing confidence at a level leaders find acceptable, he cautions. But the analytical capability exists to let organizations confidently stand by the results of their AI. “We’re giving enterprises methods and tools to monitor and build confidence behind their decisions,” he says. “We’re getting there.”
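
One common form such monitoring takes is drift detection: checking whether the data a deployed model sees in production still resembles the data it was trained on. The sketch below computes the population stability index (PSI), a widely used drift measure. It is a generic illustration under its own assumptions, not a tool from KPMG’s framework, and the 0.25 threshold is a conventional rule of thumb rather than a prescription.

```python
# Illustrative sketch of one common monitoring technique: the population
# stability index (PSI), which flags when live input data drifts away from
# the baseline a model was trained on. Thresholds are rules of thumb.
import math
import random

def psi(expected, actual, bins=10):
    """Population stability index between a baseline and a live sample."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def bucket_fractions(values):
        counts = [0] * bins
        for v in values:
            i = sum(v > e for e in edges)  # index of the bin v falls into
            counts[i] += 1
        # Floor each fraction slightly above zero so the log is defined.
        return [max(c / len(values), 1e-6) for c in counts]

    e_frac, a_frac = bucket_fractions(expected), bucket_fractions(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(e_frac, a_frac))

random.seed(0)
baseline = [random.gauss(0.0, 1.0) for _ in range(5000)]  # training-time data
live     = [random.gauss(0.6, 1.0) for _ in range(5000)]  # shifted live data
print(f"PSI = {psi(baseline, live):.3f}")  # > 0.25 commonly triggers review
```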

To learn more about controlling AI and closing the “AI trust gap,” read KPMG’s Insights report, Controlling AI: The imperative for transparency and explainability.