by Laurie Clarke

Amnesty International CIO John Gillespie challenges CIOs to lead on ethical AI

Interview
Mar 12, 2019
IT Leadership | Nonprofits

Outgoing Amnesty International CIO John Gillespie challenged his fellow CIOs to take a leading role when it comes to AI ethics.

“The geeks are not going to inherit the earth – but our algorithms might,” he said. “Frankly, the only people who understand it are going to be CIOs in this room, who built out these algorithms, who built up the systems to take advantage of this technology, and it’s up to us to take that leadership and that responsibility.”

The 2018 CIO 100 member was speaking with Professor Lokke Moerel, one of Europe’s leading AI ethics lawyers, at CIO UK and Computerworld UK’s AI Summit at the end of February, where he said that CIOs shouldn’t try to dodge responsibility for the development of ethical AI.


“The CEO is worrying about shareholder value, products, cash flow, the sales teams are all about hitting targets,” he explained, “but when it comes to personal responsibility it should be CIOs taking it on.”

Gillespie, who handed over the baton as Amnesty International CIO to Kevin Antao on the day of CIO UK’s AI Summit, said that when it comes to AI algorithms, inherent biases can emerge from the data sets they’re built upon. “In terms of ethical issues, prediction models are all based on predictions we made in the past, so you’ve got this really historical way of thinking, and that’s most obvious in recruitment and HR,” he said, referring to algorithms used in the hiring process that have been found to discriminate against women and candidates from ethnic minority backgrounds.

But how should CIOs get their organisation’s board to take these ethical AI-related issues seriously? Gillespie advises taking a tactical approach. “Like all the things we do, you have to bring it back to real stories that your board really cares about,” he said. “They don’t really care about AI, what they do really care about is your brand.

“Bring it back to things that are really important to them, try and connect it to anecdotes or real world examples, and bring in someone from outside who is speaking about it earnestly – you’ll get more leverage.”

But is Gillespie bullish on the future of ethics in AI? “I’m an optimist, so I do genuinely believe in AI. I think you can actually create models that you can test and test and test, and use them to explore other biases and drive forward from that,” he said.

And in some cases, he doesn’t foresee the need for futuristic technologies in regulation. “In relation to the GDPR, we have actually got a very well understood traditional process for determining what’s going on in people’s minds: it’s called the court of law,” he said. “Over time, we have to come up with similar ways of auditing and understanding some of these algorithms so that we can examine them. We haven’t even started on that journey yet.”