Businesses in highly regulated industries often stumble over the so-called “black box” problem when they deploy artificial intelligence: the technology automates tasks and generates outputs that cannot be readily explained.
Banks, in particular, are struggling to vet the transparency and interpretability of machine learning (ML) models and AI software that inform everything from fraud detection to asset traceability and provenance. HSBC is no different, but the British bank is using software that helps it instill trust in its algorithms, which are subject to biases, data drift and other issues that pose risks to businesses.
The bank’s efforts are critical in increasing trust at a time when more consumers are adopting digital services during the coronavirus pandemic, says Gavin Munroe, HSBC’s global CIO of retail and wealth banking.
“There is a balance between showing them what’s in the model and what’s in the AI, to explain what we’re using, that it has the right quality, lineage and most accurate data we need,” Munroe tells CIO.com. “We don’t think we can work in an environment without trust.”
Why banks balk at AI
Banks understand that AI can help them automate and augment operations and safeguard clients’ assets, but they are reticent to adopt such technology if they can’t interpret and explain how their algorithms work for regulators, audit committees and consumers. Mistrust in AI abounds, spurred by algorithms that helped misinformation campaigns go viral and racial biases in facial recognition.
Finservs under the watchful eye of regulators are leery about the technology they use to buttress their businesses, as bias can prevent qualified people from getting loans. As a result, only 4 percent of chief accounting officers reported using AI in 2019, according to Gartner research, which also found that 79 percent of business executives cited “fear of the unknown” as the reason for finance’s reticence to adopt AI. And when finservs do adopt AI, it’s typically as a hedge against fraudulent transactions rather than as an accelerator for digital products and services.
The key for adopting AI in finance is trust, says Munroe, who compares the importance of establishing trust for AI to the addition of seatbelts, speedometers and other safety features in motor vehicles. “Left unchecked, you’re taking on a lot of inherent risk in the organization” adopting AI, Munroe adds. “We can’t allow data models to have inherent biases.”
Establishing trust is particularly salient as consumers look to consume more digital services during the pandemic. HSBC has seen upticks in transactions via social media services such as WhatsApp and WeChat, as well as tap-to-pay and other contactless technologies. Usage of such services must be monitored closely, Munroe says, because “as more money moves through the digital footprint of the bank, there is more fraud exposure.” Examples include anything from classic credit card fraud to opportunistic scams involving fake transactions for COVID-19 tests, Munroe adds.
Deploying guardrails for AI
To help HSBC validate its AI models and serve as a backstop for the digital services it facilitates, Munroe is using software from CognitiveScale, whose Cortex Certifai software focuses on mitigating business risk. Cortex Certifai helps companies interpret and explain machine-generated predictions and uncover bias in underlying data types, data sets, ML models, and AI development processes, according to Manoj Saxena, CognitiveScale’s executive chairman, who describes the solution as an AI measurement tool that serves as the HTTP of AI trust.
Saxena, who led IBM Watson Solutions from 2007 to 2014, says the software also helps identify “data drift,” in which the signals that inform the data incorporated in an ML model change over time. Since the COVID-19 outbreak, data drift has become a significant issue: customer buying patterns have shifted online, and purchases of everything from toilet paper to personal protective equipment have spiked wildly, forcing retailers to confront supply chain logjams. These changing dynamics generate new data, which must be incorporated into the models.
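CognitiveScale has not published how Cortex Certifai detects drift, but a standard statistic banks already use for the job is the population stability index (PSI), which compares how a feature was distributed when a model was trained against how it is distributed now. A minimal sketch of the idea, with no claim to reflect the product’s actual implementation (the function name, bucketing scheme and thresholds here are illustrative):

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a recent
    sample of one numeric feature. A common rule of thumb: < 0.1 means
    little drift, 0.1-0.25 moderate drift, > 0.25 significant drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def shares(values):
        counts = [0] * bins
        for v in values:
            i = int((v - lo) / width)
            counts[min(max(i, 0), bins - 1)] += 1  # clamp out-of-range values
        # Smooth empty buckets so the log ratio below stays defined.
        return [(c or 0.5) / len(values) for c in counts]

    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In practice a team might feed pre-pandemic transaction amounts as `expected` and the latest month’s as `actual`, treating a score above 0.25 as a cue to revalidate or retrain the model.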
Banks are at a disadvantage here, as their current AI models scramble to account for the data generated by new consumer patterns. For example, a person who never shopped online prior to the pandemic lockdown but is suddenly purchasing goods on Amazon.com would send up a flag in the bank’s fraud detection systems, Munroe says.
“Some models don’t reflect the reality of where we are with COVID,” Munroe says. “And what will be the norm coming out as digital adoption carries on and accelerates?”
Cortex Certifai, which uses a “trust index” to quantify AI models with a numeric score, helps HSBC’s data models account for these new behaviors, which will be critical in satisfying customers. “It’s about building customer loyalty and trust when core decisions are being delegated to machines,” Saxena says.
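The composition of Certifai’s trust index is proprietary; the article says only that it reduces a model to a numeric score. As a purely hypothetical illustration of the concept, per-dimension scores (fairness, robustness, explainability and so on, each graded 0 to 1) could be rolled into a single 0–100 number; a weighted geometric mean is one plausible choice because a model that scores near zero on any single dimension drags the whole index down rather than being averaged away:

```python
import math

def trust_index(scores, weights=None):
    """Roll per-dimension scores in [0, 1] into one 0-100 number using a
    weighted geometric mean, so no single weak dimension can hide."""
    weights = weights or {name: 1.0 for name in scores}
    total = sum(weights.values())
    log_sum = sum(w * math.log(max(scores[name], 1e-9))
                  for name, w in weights.items())
    return 100.0 * math.exp(log_sum / total)
```

With this scheme a model scoring 1.0 on robustness but 0.25 on fairness lands at 50, not the arithmetic average of 62.5.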
CognitiveScale is applying its software across a broad range of business cases. In another proof of concept, a company is using Cortex Certifai to route customers to the right support contact when they call into an interactive voice response system, Saxena says.
The great bias battle
But the solution appears tailormade for banks, which are rolling the dice on such emerging solutions even as they remain leery of placing trust in AI models.
Whether CognitiveScale, which has banked $50 million in funding from investors such as Norwest Venture Partners, Intel Capital, Microsoft Ventures, The Westly Group and USAA, has hit the sweet spot of adding value remains an open question in a software sector whose hype may be exceeded only by the skepticism shrouding it.
Biases have always existed in predictive models that use decision trees and regression algorithms, particularly those that balloon to incorporate thousands of if/then/else statements, says Gartner analyst Saniye Alaybeyi, who researches AI explainability. “None of these problems are new and none are specific to neural networks,” says Alaybeyi.
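One concrete bias check that applies equally to a decision tree, a regression model or a neural network is demographic parity: do approval rates differ materially across groups of applicants? A small illustrative sketch, not tied to any vendor’s tooling:

```python
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Gap between the highest and lowest approval rates across groups;
    0.0 means every group is approved at exactly the same rate."""
    approved = defaultdict(int)
    seen = defaultdict(int)
    for decision, group in zip(decisions, groups):
        seen[group] += 1
        approved[group] += 1 if decision else 0
    rates = [approved[g] / seen[g] for g in seen]
    return max(rates) - min(rates)
```

A lender could run this over a batch of loan decisions grouped by a protected attribute; a large gap would warrant investigating which input features are acting as proxies for group membership.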
She says the key to instilling trust in AI is ensuring that software developers deliver a high-quality AI model through thorough testing and validation before handing it over to the business. Programmatic guardrails, Alaybeyi maintains, are key to satisfying the needs of all stakeholders.
On that, HSBC agrees. “Ingraining risk and compliance into our culture is not an afterthought,” Munroe says. “The inherent design and solution needs controls and transparency built into it.”