A CIO’s guide to AI: How can Australian organisations make the technology work for them?

In part 2 of the CIO’s guide to AI, we discuss the practical steps and ideal mindset organisations need to adopt for successful deployments. Think big, but act small while being both pragmatic and experimental.


Hits and misses

Minter Ellison’s chief digital officer, Gary Adler, cited three quite distinct AI projects the law firm is currently managing for clients, highlighting both the potential benefits and some shortcomings that reflect the current realities of the technology.

Two were with large financial services companies Minter Ellison declined to name, one of them in the ASX50; the third was with a global construction investment company, also in the ASX50.

The first concerned a large remediation matter involving eight years of files for around 5,000 clients. Each file generated some 500 documents, creating a stack of over two million in total.

“Our approach was to invest in a relatively mature AI tool focussed on contract review,” Adler explained.

“It was then trained to identify the most important documents to triage, looking for things like ‘fee statements’; we were less interested, for instance, in ‘change of address’.”


Because the machine learning algorithm kept learning as it went, the process became increasingly efficient, eventually delivering a 50 per cent reduction in overall review time.

The second financial services client, a leading superannuation player, wanted to update its product disclosure statement (PDS) clauses in line with new ACCC regulations.

In this case, a fairly cheap piece of off-the-shelf contract review software was chosen, with machine learning used to teach the system the key PDS clauses and surface any potential issues. The result has been faster flagging of potential legal problems, with the system improving steadily over time, Adler noted, while the client’s own lawyers are now free to focus on more meaningful matters.

But it’s not all beer and skittles, Adler said. Another recent project had the goal of developing a machine learning contract review system to help a global construction investment company streamline reviews of subcontractor legal agreements.

The solution needed to be able to automatically review several thousand contracts, taking particular note of things like insurance, liability and site restoration to ensure compliance with the client’s requirements, and to flag when conditions were not being met. A relatively new but legally focussed AI start-up was selected to help build it.

But it hasn’t worked entirely as expected, because many contracts contain images – both hand-drawn sketches and CAD drawings, Adler said.

“This has thrown the AI into a bit of a tail-spin. It gets confused when it hits drawings rather than text – it doesn’t quite know what to do with them,” he said.

The upshot is that while the system recorded accuracy levels of around 80 per cent for written data, it managed only 17 per cent for non-standard information.

But Adler was adamant that Minter Ellison, the start-up and the client would persevere and keep improving the AI until they got it right, and he urged other organisations to adopt the same attitude with their own AI efforts.

“Genuinely adopt a mindset of experimentation and curiosity,” he said. “There’s lots of new technologies being pushed through so be open to failure.”

That’s not to say organisations should simply throw everything at the wall in the hope that something sticks. Adler advised that organisations seek use cases that are low in complexity and high in impact.

The most important thing, of course, is that organisations have a clear idea of what they’re hoping to achieve with AI in the first place. It might seem like a truism, but the endgame is something many fail to properly reflect on.

“Organisations need to define the problem they are trying to solve very clearly,” said UTS’s Blumenstein. “Do I need AI, yes or no? And why?”

Getting the staff

The CSIRO’s digital agency, Data61, estimates Australia currently has around 7,000 AI specialist workers, up from just 650 in 2014. Its AI roadmap, published last year, estimates that by 2030 Australia will need “a workforce of between 32,000 to 161,000 employees in computer vision, robotics, human language technologies, data science and other areas of AI expertise.”

That’s going to be a tall order, even with federal government programs to grow the pool of people with PhDs and other qualifications in the data sciences, as well as new high-school curricula teaching such disciplines. State government efforts to improve AI education will help, but can’t be taken for granted.

However, the brain drain continues apace.

“Certainly in Australia what we’re seeing is some of the really good resources are being attracted straight to Silicon Valley, sometimes to Israel, sometimes into the EU,” said Adler.

He recommended companies think differently about how they recruit AI specialists, who are now among the most expensive professionals in tech.

“My tip there would be to bring those resources in as and when you need them, rather than hiring them permanently. And focus on training your team, working with digital partners to bring things to life,” Adler said.

Gartner’s Andrews was similarly sanguine about the skills shortage.

“Organisations worry they don’t have the skills they need, but most find the skills issue is less of a problem once they actually start,” he said.

The final issue to consider is that of ethics and privacy. The two overlap, of course, though the first surfaces issues such as brand identity, reputation and trust, while the second relates more to regulation and the law.

Ethics are never simple, and AI is already presenting vexing problems. In part one of this series we noted an AI system developed by Amazon to sort through candidate resumes, which was found to be actively rejecting women due to learned biases. Language translation algorithms have been found to generate unfairly discriminatory results too.

The federal government’s Department of Industry, Science, Energy and Resources partnered with Data61 to create a set of AI Ethics Principles to help organisations be more alert to, and mitigate, issues like these.

However, as Minter Ellison’s Adler noted, Australia’s Sex Discrimination Act 1984 only governs actions taken by people, while there are currently more than 20 pieces of Commonwealth legislation that allow for decisions to be made by computers.

Telstra, CBA, NAB, Microsoft Australia and Flamingo AI are currently piloting the AI Ethics Principles, and are expected to report on their experiences later in the year.

On the subject of privacy, governments and lawmakers are taking things more seriously, as evidenced by the apparent success of measures like the Notifiable Data Breaches scheme, which turned two in February this year. Closer scrutiny of digital giants like Google and Facebook by the ACCC and others is another positive sign.

But there’s still a long way to go, and the next ‘Cambridge Analytica’ scandal probably isn’t too far away.

"Think ethically about the purpose for undertaking the project, and the data collected,” Adler advised.

“Don’t just ask 'can we?, but ask , ‘should we?’”


Copyright © 2020 IDG Communications, Inc.
