South African technology, business leaders push for diversity in AI

The tech industry has a long way to go to improve diversity and inclusion in the development of artificial intelligence applications. Here’s why bias is bad for business.

Naadiya Moosajee

With many organisations investing in AI to streamline business processes and meet ever-changing customer needs, there is a need for certainty and trust — and to a large degree, that depends on assembling a diverse tech team.

That's according to Mark Nasila, chief analytics officer at FNB's Chief Risk Office, who has been developing AI-based applications to optimise risk-assessment processes at the bank. A global concern about trustworthy AI is how to prevent biases introduced by humans during AI development or coding processes. To avoid this, companies need to determine what constitutes fairness, actively identify biases within their algorithms or data, and implement controls to avoid unexpected outcomes, he says.
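One common way to make that kind of fairness check concrete is to compare a model's decision rates across demographic groups, a measure often called demographic parity. The sketch below is illustrative only, not FNB's method; the group labels, sample data and the 0.2 alert threshold are all assumptions.

```python
# Minimal sketch: flag a potential demographic-parity gap in model decisions.
# Group names, data and the 0.2 threshold are illustrative assumptions.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> per-group approval rate."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in approval rate between any two groups."""
    vals = list(rates.values())
    return max(vals) - min(vals)

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = approval_rates(decisions)
if parity_gap(rates) > 0.2:  # illustrative control threshold
    print("Review model: approval rates differ materially across groups")
```

A check like this is only a starting point: it surfaces a disparity but says nothing about its cause, which is why the experts quoted here stress accountability alongside measurement.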

Debates about bias in AI often revolve around the issue of diversity. A recent article in the Harvard Business Review, for example, ran with the headline: "To build less-biased AI, hire a more-diverse team". That sounds simple enough in theory, but in practice the issue is far more complex.

AI needs to be accountable for results

It's not just about hiring people of different ethnicities and genders. The days of window dressing are over; what is needed is accountability and consequences, according to business owners, AI experts and those who are, or will be, subject to AI assessments.
