by David Binning

Australia’s big banks mull ethics in AI

May 18, 2020
IT Leadership

CIO Australia talks to data chiefs at CBA and NAB about how the two banking behemoths are progressing with the federal government’s AI Ethics trial, which kicked off last year.

Ethics is underscored heavily in a document
Credit: RapidEye / Getty Images

With Australia’s banking sector still dressing the burns it sustained during last year’s Banking Royal Commission, the issue of ethics in the deployment of artificial intelligence (AI) technologies isn’t a ball any of them want to be seen to drop.

Amid the grilling, the Federal Department of Industry and Science released a set of eight ‘voluntary’ AI Ethics Principles and a framework in November to help Australian organisations develop fairer AI practices, and therefore better navigate the emerging legal and reputational risks of this fast-evolving technology.

Industry was also invited to take part in a trial, which will run until the middle of this year, with NAB, CBA, Telstra, Microsoft Australia and Flamingo AI signing up.

With their huge customer bases and eye-popping volumes of data, banks naturally have a great interest in AI. And to date, Australia’s big four have made big investments in the technology, hiring the best talent to design, build and run it.

Dan Jermyn, chief decision scientist with CBA, tells CIO Australia the bank’s Customer Engagement Engine, which helps manage customer experience across the bank’s myriad channels, operates 300 machine learning models with some 157 billion data points, and growing.

As a data scientist, he is struck by what the bank – and industry more broadly – has already been able to achieve with AI.

Yet he acknowledges that with great power comes great responsibility.

“The fundamental ethical challenge facing organisations today is to resolve the questions ‘can we do it?’ with ‘should we do it?’,” he says. “Are you comfortable explaining it to friends and family?”

One of the more obvious safeguards for AI, yet one often omitted from the more anxious conversations about its ethical risks, is human oversight.

The idea of machines learning more than we do – and about us, faster than we can – and then making unchecked decisions that affect lives, sometimes even ending them, is exciting Hollywood stuff. But the reality is that, with a few exceptions, this isn’t how AI actually operates in practice.

Of course, with the technology at such an early stage of development and being deployed so rapidly, that might not always be the case.

Banks are naturally alert to the vital role humans play, especially in helping to develop better customer experiences. And that is something Jermyn says he’s determined not to lose sight of at CBA as it moves to a greater reliance on AI, while noting the AI ethics framework has helped the bank “not to devolve humans”.

The word ‘values’ is one that can be cringeworthy when used by big corporate brands, especially banks. Yet Jermyn firmly believes it’s a key factor in guiding the responsible development of AI.

“It’s incredibly important for us as a bank to remain true to what our purpose is; whether using AI or otherwise.”

Stephen Bolinger, GM data privacy and ethics with NAB, agrees, saying company values should be the main guardrails for any organisation deploying AI solutions that might present ethical challenges.

“Company values are a key component of where you draw the line, so understanding what the core purpose of your business is, and how your decisions can impact customers, helps set the right approach for assessing AI as well.

“Company values are key as that’s what ends up influencing decisions when you’re on the edge,” he says.

For NAB, the ethics trial has seen the bank’s data scientists and lawyers work more closely together, and has drawn more people into the conversation generally. It has also coincided with NAB’s creation of a ‘global privacy office’ in February this year, which Bolinger oversees.

In addition to his role at NAB, Bolinger is currently the Australian country leader for the International Association of Privacy Professionals (IAPP). The organisation has 55,000 members globally, conducting research and making recommendations on privacy and data security.

While stressing that organisations should have robust ethical frameworks for building AI systems, he acknowledges the level of concern is disproportionate to the actual number of incidents where things have gone seriously wrong.

“There’s a broad set of negative use cases [but] they’re relatively isolated,” Bolinger says, highlighting the Uber driverless car incident in 2018, which resulted in the death of a pedestrian in Arizona.

“There’s a tendency to think of some things as representative of the technology more generally,” he says.

But he’s not downplaying the need to ensure AI systems don’t reach beyond what they’re intended for, and to maintain respect and trust in the community.

“AI has the ability to operate and do things at scale humans haven’t been able to do historically. These can pose risks to safety and autonomy, demanding special consideration and care. If we’re going to get people beyond scary scenarios we have to establish that trust that their interests are being served by that technology,” he says.

Bolinger urges Australian CIOs and data scientists working for multinationals to familiarise themselves with the ‘10 principles’ for AI set down by the US in January this year, which cover ethics, as well as the European Commission’s ‘Ethics Guidelines for Trustworthy AI’ and the GDPR laws that helped inform them. And of course, they should read up on the Privacy Act, any state government privacy laws, and the mandatory notifiable data breach laws.

A spokesperson for the Federal Department of Industry and Science told CIO Australia: “The Government is currently developing additional guidance material to support all organisations wishing to implement the ethical principles.”

The five participants (CBA, NAB, Microsoft, Telstra and Flamingo AI) will provide the government with case studies detailing their experiences applying the ethics principles when the trial concludes later this year. 

“The government is also actively engaged with other companies, experts and government agencies in turning these principles into something all organisations can practically implement,” the spokesperson said.

Microsoft declined to comment for this article, while Telstra and ANZ did not respond to requests for comment at the time of publication.

A spokesperson for Westpac said in a statement:

“Westpac is committed to the responsible use of technology to support our customers, people and communities. We believe that artificial intelligence can create value for our customers and have a set of guiding principles to help manage associated risks. We will continue to work with both industry and government to evolve those principles and their application to our technologies.”