by David Binning

A CIO’s guide to AI: How can Australian organisations make the technology work for them?

Feature
Apr 06, 2020
IT Leadership

In part 2 of the CIO’s guide to AI, we discuss the practical steps and ideal mindset organisations need to adopt for successful deployments. Think big, but act small while being both pragmatic and experimental.

Credit: dny59 / Getty Images

One of the reasons for the general AI impasse in Australia – and to a lesser extent amongst our developed peers such as the US, UK, Germany and China – is that projects still tend to be cloistered within IT departments.

This creates a number of problems, including a lack of executive buy-in, isolation from the core business and its objectives, too narrow a set of perspectives across the organisation’s departments and disciplines, and a heightened fear of failure that comes from a lack of shared ownership.

All of this contributes to what experts believe is the greatest obstacle to AI deployment: misconceptions about what the technology can and can’t do.

Generally, organisations have unrealistic expectations that the technology will have some sort of magical impact on operations. This can result in development of projects that are too large and ambitious in scope, amplifying the risk of failure and disappointment.

Other organisations may baulk at AI, seeing it as too expensive and/or complicated to consider. Then there are the perceived ethical and legal risks, which tend to receive intense media scrutiny and coverage when they’re exposed; more than is warranted, according to some.

“Don’t talk to someone who thinks everything looks like a nail,” warned Professor Michael Blumenstein, associate dean with the University of Technology Sydney’s faculty of engineering and IT. “AI to some people just looks like this giant hammer: it’s not like that.”

According to Gartner VP and distinguished analyst Whit Andrews, AI should be viewed as “augmented intelligence” with humans “in-the-loop”. And key to deploying it successfully, he said, is a pragmatic yet experimental approach built on three core pillars.

The first is data collection, or system training.

One of the first major revelations for corporates is that for all their perceived cleverness, AI systems are generally only as good and effective as the data that are fed into them for training purposes. Many a project has been known to fail for neglecting this simple principle.

Recently, however, the pendulum appears to have swung the other way.

“Companies are getting bogged down in data prep,” Andrews said, as they try to manage and input too much information.

Whit Andrews, Gartner

It can’t be viewed as an exact science, however, he added, with some of the best-run projects and top teams still reporting data issues.

“Organisations successfully using AI are still likely to say they have data problems, which tells us everybody does. I’ve never spoken to any organisation in 25 years that was completely satisfied with its search project: there’s always a middle way,” he said.

Secondly, Andrews advised organisations to synthesise data using simulations.

“Create something to perform a simulation within itself to apply outside of itself. Start with a small number of data sets. Do the first project. Do the second project with the same data sets.

“And the third with data sets that are similar. Don’t stray too far from your initial investments and take advantage of every step you build. Get started using methods [like] programmatic or logical [rules], shifting to probabilistic models,” he said.

He cited the example of a university that started a project with 200 ‘answers’. Using machine learning, it applied so-called semantic vectoring to match incoming questions to those answers, eventually scaling the project up to 2,000 answers.
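To make the idea concrete, here is a minimal sketch of how that sort of semantic matching might be put together: embed both the curated answers and an incoming question into the same vector space, then return the closest answer. The library, model name and example data below are illustrative assumptions, not a description of the university’s actual system.

```python
# Hypothetical sketch only: match incoming questions to a bank of curated
# answers by embedding both and taking the highest cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

answers = [
    "Semester 1 enrolments close on 14 February.",
    "You can reset your student password via the IT self-service portal.",
    # ... the university in the example started with roughly 200 such answers
]
answer_vecs = model.encode(answers, convert_to_tensor=True)

def best_answer(question: str) -> str:
    q_vec = model.encode(question, convert_to_tensor=True)
    scores = util.cos_sim(q_vec, answer_vecs)[0]  # similarity to every answer
    return answers[int(scores.argmax())]

print(best_answer("When do enrolments for semester one close?"))
```

Because the answer bank is just a list, the same pipeline keeps working as it grows from 200 to 2,000 entries; only the embedding step is repeated for new answers.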

Data vs code

Getting AI right requires a shift in mindset; effectively an acceptance that data is different to code.

“Most technologists are familiar with the work in which humans convert their unstructured data to the logic of code,” Andrews added. “But in AI what we do is train computers to convert reality into immutable substitutes for code.”

Andrews recommended organisations develop a portfolio of four “experimental projects”. One might be high visibility, another low visibility, a third high risk and a fourth low risk. No more than one should be a chatbot.

“Pick carefully and invest in them strongly,” he said.

Next, measure outcomes, and do so in different parts of the company, applying the same lessons from the data wherever possible. This, he said, is the set of habits most likely to result in success.

The Melbourne Cricket Club (MCC) is running an Azure-based AI solution designed to better predict and manage fan attendance, in the hopes of reducing queuing and streamlining ticketing and sales of food and drink. It’s been working with Microsoft partner Revenite to build a “digital twin” based on Azure Data Warehouse, Stream Analytics and Data Factory.

Business intelligence officer James Aiken told CIO recently that while a big effort was made to collect not only the MCC’s own data, but also AFL league stats and weather information, the system has developed an appetite for more and better-quality data, and demands that it be better organised.

“We were finding we were trying models and getting 80-90 per cent accuracy,” he said, noting while this was nothing to sneeze at, there was serious room for improvement.
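By way of illustration only, an attendance model of the kind the MCC describes might look something like the sketch below, which predicts crowd numbers from fixture and weather features. The file, column names and model choice are assumptions made for the example, not details of the MCC or Revenite build.

```python
# Illustrative sketch: predict match-day attendance from historical fixtures
# enriched with weather and league data. All names here are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

fixtures = pd.read_csv("fixtures_history.csv")  # hypothetical warehouse extract

# One-hot encode categorical columns, keep numeric ones as-is.
features = pd.get_dummies(
    fixtures[["opponent", "day_of_week", "forecast_temp_c", "rain_mm", "ladder_gap"]]
)
target = fixtures["attendance"]

X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=0.2)

model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"Held-out R^2: {model.score(X_test, y_test):.2f}")
```

The 80–90 per cent accuracy Aiken mentions is effectively a score like the one printed above: good, but the sort of figure that improves mainly by feeding the model more and cleaner data rather than by changing the algorithm.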

Deloitte Australia’s national analytics lead, Alan Marshall, agreed that getting the data right is a critical first step to building something meaningful.

“It’s not just ‘we’re going to implement some AI’: we’re creating an ecosystem,” Marshall said.

He recommended a three-layered approach to doing this.

The first is the digitisation process, which creates the initial data ‘footprint’.

It needn’t be complicated or expensive, Marshall insisted. For example, Deloitte recently developed a solution for a Sydney hospital, using Amazon’s Alexa to effectively replace the nurse call button.

“We were able to create a data footprint where one didn’t exist; converting speech to text. It was an optimisation opportunity, giving us an understanding of whether we have enough nurses and orderlies,” he said.
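As a rough sketch of the ‘data footprint’ idea, each transcribed request can be captured as a structured event that the analytics layer later aggregates, for instance requests per ward per hour. The field names and crude keyword routing below are purely illustrative; Deloitte’s implementation isn’t described in that detail.

```python
# Hypothetical sketch: turn a speech-to-text transcript into a structured,
# loggable event. Field names and categories are invented for illustration.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class NurseCallEvent:
    ward: str
    bed: str
    transcript: str   # text produced by the speech-to-text service
    category: str     # e.g. "pain", "bathroom", "water", "other"
    logged_at: str

def log_request(ward: str, bed: str, transcript: str) -> NurseCallEvent:
    lowered = transcript.lower()
    # Simple keyword routing stands in for whatever classifier a real system uses.
    category = next((c for c in ("pain", "bathroom", "water") if c in lowered), "other")
    event = NurseCallEvent(ward, bed, transcript, category,
                           datetime.now(timezone.utc).isoformat())
    print(json.dumps(asdict(event)))  # in practice: write to the data platform
    return event

log_request("Ward 3B", "Bed 12", "I need some water please")
```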

The second layer is the data analysis platform, such as AWS or Azure, used to capture, analyse and curate the data lake.

The third layer is the algorithm itself. This is where the real work of inputting data and training the system comes in.

Another key consideration is whether to build or buy.

“Most organisations will have a buy-versus-build decision point,” Marshall noted. The first option will likely be faster and cheaper, but the trade-off is that you’re paying for someone else’s intellectual property.

“DIY takes longer but you get something that’s truly your IP,” he said.


Hits and misses

Minter Ellison’s chief digital officer, Gary Adler, cited three quite distinct AI projects the law firm is currently managing for clients, highlighting both the potential benefits and certain shortcomings that reflect the current realities of the technology.

Two were with large financial services companies Minter Ellison declined to name, one of them in the ASX50; the third was with a global construction investment company, also in the ASX50.

The first concerned a large remediation matter involving eight years of files for around 5,000 clients. Each file generated some 500 documents, creating a stack of more than two million in total.

“Our approach was to invest in a relatively mature AI tool focussed on contract review,” Adler explained.

“It was then trained to identify the most important documents to triage, looking for things like ‘fee statements’; we were less interested, for instance, in ‘change of address’.”

Gary Adler, Minter Ellison

Because the machine learning algorithm was able to keep learning, the process became progressively more efficient, eventually delivering a 50 per cent reduction in overall review time.
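A minimal sketch of document triage along these lines is shown below: a simple text classifier that ranks likely ‘fee statement’ documents ahead of low-priority categories such as address changes. The training snippets and labels are invented for illustration; the article does not name the tool Minter Ellison actually used.

```python
# Illustrative sketch only: triage documents by predicted category and
# confidence so reviewers see likely "fee statement" files first.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_docs = [
    "Annual fee statement for advisory services rendered ...",
    "Please note our change of address effective 1 July ...",
    "Fee disclosure statement covering the 2014 financial year ...",
    "Request to update the postal address we hold on file ...",
]
train_labels = ["fee_statement", "change_of_address",
                "fee_statement", "change_of_address"]

triage = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
triage.fit(train_docs, train_labels)

new_doc = "Enclosed is the fee statement for account 4471 ..."
print(triage.predict([new_doc])[0])            # predicted category
print(triage.predict_proba([new_doc]).max())   # confidence, used to rank review order
```

In a live matter the model keeps being retrained on reviewers’ corrections, which is the “keep learning” loop that drove the efficiency gains Adler describes.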

The second financial services client, a leading superannuation player, wanted to update its product disclosure statement (PDS) clauses in line with new ACCC regulations.

In this case, a fairly cheap piece of off-the-shelf contract review software was chosen, with machine learning used to teach the system the key PDS clauses and surface any potential issues. The result has been more efficient flagging of potential legal problems, with the system continuing to improve, Adler noted, while the client’s own lawyers are now free to focus on more meaningful matters.

But it’s not all beer and skittles, Adler said. Another recent project had the goal of developing a machine learning contract review system to help a global construction investment company streamline reviews of subcontractor legal agreements.

The solution needed to be able to automatically review several thousand contracts, taking particular note of things like insurance, liability and site restoration to ensure compliance with the client’s requirements and to flag when conditions were not being met. A relatively new but legally focussed AI start-up was selected to help build it.

But it hasn’t worked entirely as expected, as many contracts contain images – both hand-drawn and CAD drawings, Adler said.

“This has thrown the AI into a bit of a tail-spin. It gets confused when it hits drawings rather than text – it doesn’t quite know what to do with them,” he said.

The upshot is that while the system recorded accuracy levels around 80 per cent for written data, it managed only 17 per cent for non-standard information.
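One common way to contain that kind of failure, sketched below purely as an illustration, is to check how much machine-readable text each page actually yields before trusting the automated review, routing low-text pages (likely drawings) to a human. The threshold, file name and PDF library here are assumptions, not details of the project described.

```python
# Hedged sketch: route pages with little extractable text (probably drawings)
# to human review instead of the automated contract-review model.
import fitz  # PyMuPDF

MIN_CHARS_FOR_AUTO_REVIEW = 200   # illustrative cutoff

def route_pages(pdf_path: str):
    doc = fitz.open(pdf_path)
    for page_number, page in enumerate(doc, start=1):
        text = page.get_text().strip()
        if len(text) >= MIN_CHARS_FOR_AUTO_REVIEW:
            yield page_number, "auto_review", text
        else:
            # Likely a hand-drawn or CAD image: flag for a person to check.
            yield page_number, "human_review", text

for page_no, route, _ in route_pages("subcontractor_agreement.pdf"):
    print(page_no, route)
```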

But Adler was adamant Minter Ellison, the start-up and the client would persevere and learn how to improve the AI until they got it right, and he urged other organisations to adopt the same attitude with their AI efforts.

“Genuinely adopt a mindset of experimentation and curiosity,” he said. “There’s lots of new technologies being pushed through so be open to failure.”

Which is not to say organisations should simply try throwing anything at the wall in the hopes of something sticking. Adler advised that organisations seek use cases that are low in complexity and high in impact.

The most important thing of course is that organisations have a clear idea of what it is they’re hoping to achieve with AI in the first place. While it might seem like a truism, the endgame is something that many fail to properly reflect on.

“Organisations need to define the problem they are trying to solve very clearly,” said UTS’s Blumenstein. “Do I need AI, yes or no? And why?”

Getting the staff

The CSIRO’s digital agency, Data61, estimates Australia currently has around 7,000 AI specialist workers, up from just 650 in 2014. Its AI roadmap, published last year, estimates that by 2030 Australia will need “a workforce of between 32,000 to 161,000 employees in computer vision, robotics, human language technologies, data science and other areas of AI expertise”.

That’s going to be a tall order, even with federal government programs for growing the pool of people with PhDs and other qualifications in the data sciences, as well as new high-school curricula teaching those disciplines. Nor can state government efforts to improve AI education be taken for granted.

However, the brain drain continues apace.

“Certainly in Australia what we’re seeing is some of the really good resources are being attracted straight to Silicon Valley, sometimes to Israel, sometimes into the EU,” said Adler.

He recommended companies think differently about how they recruit AI specialists, who are now among the most expensive professionals in tech.

“My tip there would be to bring those resources in as and when you need them rather than hire them permanently. And focus on training your team, working with digital partners to bring things to life,” Adler said.

Gartner’s Andrews was similarly sanguine about the skills shortage.

“Organisations worry they don’t have the skills they need, but most find the skills issue is less a problem once they actually start,” he said.

The final issue to be considered here is that of ethics and privacy. The two are not mutually exclusive of course, though the first surfaces issues such as brand identity, reputation and trust, while the second relates more to regulations and the law.

Ethics are never simple, and AI is already presenting vexing problems. In part one of this series on AI we noted an AI system developed by Amazon to sort through candidate resumes, which had been found to be actively penalising female candidates due to learned biases. Language translation algorithms have been found to produce unfairly discriminatory results too.

The Federal Government’s Department of Industry, Science, Energy and Resources partnered with Data61 to create a set of AI Ethics Principles to help organisations be more alert to, and mitigate, issues like this.

However, as Minter Ellison’s Adler noted, Australia’s Sex Discrimination Act (1984) only governs actions taken by people. On the other hand, there are currently more than 20 pieces of Commonwealth legislation that allow for decisions to be made by computers.

Telstra, CBA, NAB, Microsoft Australia and Flamingo AI are currently piloting the AI Ethics Principles, and are expected to report on their experiences later in the year.

On the subject of privacy, governments and lawmakers are definitely taking things more seriously, as evidenced by the seeming success of measures like the Notifiable Data Breaches scheme, which turned two in February this year. Closer scrutiny of digital giants like Google and Facebook by the ACCC and others is another positive sign.

But there’s still a long way to go, while the next ‘Cambridge Analytica’ scandal probably isn’t too far away.

“Think ethically about the purpose for undertaking the project, and the data collected,” Adler advised.

“Don’t just ask ‘can we?’, but ask ‘should we?’”