The tech industry has a long way to go to improve diversity and inclusion in the development of artificial intelligence applications. Here's why bias is bad for business.

With many organisations investing in AI to streamline business processes and meet ever-changing customer needs, there is a need for certainty and trust, and to a large degree that depends on assembling a diverse tech team. That's according to Mark Nasila, chief analytics officer at First National Bank's Chief Risk Office, who has been developing AI-based applications to optimise risk-assessment processes at the bank.

A global concern about trustworthy AI is how to prevent biases introduced by humans during AI development and coding. To avoid this, companies need to determine what constitutes fairness and actively identify biases within their algorithms or data, as well as implement controls to avoid unexpected outcomes, he says.

Debates about bias in AI often revolve around the issue of diversity. A recent article from the Harvard Business Review, for example, ran with the headline: "To build less-biased AI, hire a more-diverse team". That sounds simple enough in theory, but in practice the issue is far more complex.

AI needs to be accountable for results

It's not just about hiring people of different ethnicities and genders. The days of window dressing are over; what is needed is accountability and consequences, according to business owners, AI experts and those who are, or will be, subject to AI assessments.

Just ask Naadiya Moosajee. When she and her co-founder at WomHub and WomEng, businesses catering to women in STEM (science, technology, engineering and mathematics), were denied a loan a few years ago, she immediately thought about how their failed attempt to secure gap financing would ultimately train AI to decline other people with similar profiles.

Moosajee and her partner, both minority women, approached South Africa's main financial institutions, but even with collateral and their company's healthy financial record, they were deemed too risky and their applications were turned down. In addition to the positive record of their business, Moosajee holds multiple degrees in engineering, has founded several non-profit and for-profit businesses, and has received international honours and accolades for her social action-oriented entrepreneurship.

Though she doesn't point to any specific AI programme as being responsible for the rejection of her loan applications, she's concerned that her profile will be linked to the denials and encoded for future use. "When we are denied a loan, we become a data set. This data is being fed into AI and will be used to inform how they make lending decisions relating to others with a similar background based on historic evidence," Moosajee says. "It's not that we weren't given a loan, it's that the next person, in a similar category, will also be denied."

Banks in South Africa have all introduced digital processes into their loan approvals as they merge and reduce the number of people involved in these processes, she notes. There has already been some fallout from the move.
Data may point to discrimination

In 2018, a group of complainants made the news when they took First National Bank (FNB) to court, claiming discrimination in loan practices originating at Saambou Bank, which had been acquired by FNB. Those who applied for loans in more affluent areas (mostly white people) were given more favourable interest rates than applicants from less wealthy suburbs (mostly people of colour). The complainants lost the case, but it made a point about how using specific types of data in decision-making can negatively affect a wide swath of a population.

"When we use data points like a person's locale, we're training AI to understand that people who live in areas like Mitchell's Plain [a lower-income Cape Town suburb] are higher risk," Moosajee says.

Diversity puts checks and balances into processes and makes sure that a wide variety of points of view and experiences are considered, which helps produce an end result that is balanced and fair, notes Lenore Kerrigan, country sales director at UiPath, a global software company that develops a platform for robotic process automation. Poor representation on AI teams can lead to solutions with hidden bias, not because of a clear intention on anyone's part, but because of a lack of awareness.

In the wake of the FNB lawsuit, Nasila says organisations must ensure that the teams behind their AI are diverse. It's also key that diversity not only encompasses varied ethnicity but accounts for differences in background, class, lived experience and other factors.

Diversity fosters trust in AI

Greater representation internally fosters trust externally, Nasila adds. In a data-driven economy, customers who trust a business are more likely to provide it with personal data, because they believe that data is being used to create products that will benefit them. "As data is the fuel that powers AI, this in turn leads to better AI systems."

If the majority of developers are white males, we need to understand that they will design virtual worlds from a position of privilege and with little understanding of the challenges that someone of a different race or gender might experience, continues Moosajee. If, for example, you were asked to design the perfect car and you have only ever lived in areas with pristine roads, chances are you will build a car with low suspension best suited to ideal driving conditions. If, on the other hand, you grew up in a rural area with no roads, the car you design would more likely be geared to handle rugged driving conditions. "It's all based on your experience," Moosajee says.

Data science is part science, part art and part understanding of real-world context, she continues. "AI doesn't understand context, it doesn't understand where the data comes from or how it was used before. AI will literally just take the data you give it and adapt and learn with that data set going forward."
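That dynamic, a model inheriting whatever patterns sit in the data it is given, can be illustrated with a small, hypothetical sketch. The Python below uses scikit-learn and entirely synthetic numbers; the incomes, suburbs and decisions are invented for illustration and have nothing to do with any bank's actual models or records. A classifier trained to mimic past lending decisions ends up scoring new applicants differently based purely on suburb, much as Moosajee describes.

```python
# Hypothetical sketch with synthetic data, not any real bank's records.
# Past decisions were harsher on applicants from one suburb, even though
# financial profiles are identical across suburbs.
import pandas as pd
from sklearn.linear_model import LogisticRegression

history = pd.DataFrame({
    "income":   [40, 55, 60, 75, 40, 55, 60, 75],          # same incomes in both suburbs
    "suburb":   ["mitchells_plain"] * 4 + ["constantia"] * 4,
    "approved": [0, 0, 1, 1, 1, 1, 1, 1],                   # skewed historical decisions
})

# One-hot encode the suburb and train a model to reproduce past decisions.
X = pd.get_dummies(history[["income", "suburb"]], columns=["suburb"])
model = LogisticRegression(max_iter=1000).fit(X, history["approved"])

# Two new applicants with identical income now receive different approval
# probabilities purely because of where they live.
new = pd.DataFrame({"income": [55, 55],
                    "suburb": ["mitchells_plain", "constantia"]})
X_new = pd.get_dummies(new, columns=["suburb"]).reindex(columns=X.columns, fill_value=0)
print(model.predict_proba(X_new)[:, 1])   # approval probability per applicant
```

Nothing in the code singles anyone out deliberately; the skew comes entirely from the historical labels, which is why the provenance of training data matters so much.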
According to UiPath's Kerrigan, companies have a social responsibility to regulate their AI practices and make sure these are ethical and fair. She believes it won't be long before companies have compliance teams in place to guarantee that the business maximises the benefits of AI without creating intractable, harmful risks. More broadly, international organisations and bodies like the EU and UNESCO have created alliances that actively discuss issues around AI development, impact and implementation.

IT pros urged to consider ethics in AI

"We also have a responsibility as people in tech to really understand what good and ethical data looks like and how we can train algorithms to meet the needs of the general population going forward," adds Moosajee.

It's not enough to put policies in place to attract more women and people of different ethnicities into the industry; what really needs to happen is figuring out how to retain them, she stresses. "I see a lot of young women getting excited about the tech industry, but there is still an exclusionary 'bro culture' that makes women feel unwelcome. And they leave."

According to Forrester Research, data science teams that unintentionally create biased models risk causing reputational damage to their business's brand; they stand to lose customers and could even face regulatory fines and legal action.

When considering diversity in AI, two things need to be looked at: the data and the developers, says Kimara Naicker, a PhD student at the School of Chemistry and Physics at the University of KwaZulu-Natal. The development of AI technology depends largely on the input data used to train models, and the algorithms behind the technology are created by humans who have their own biases and preferences. Ultimately, when bad data meets bias, that's when problems occur.

This is an imperfect science, Moosajee says. Ethical data sets are essentially data that is "blind": the idea is not to focus on things like gender or race, because these might be used to make unfair assumptions or to favour one group over another. Conversely, biased data brings in the viewpoints and prejudices of whoever designed the system, and the historical norms they used to create rules. For example, women typically earn less than men. If data like this is fed into a system, it could mean that a woman and a man with exactly the same credentials who apply for a loan will be treated differently purely because one is a man and the other a woman.

This sort of historical-data bias occurred in the widely reported case of an Amazon recruitment tool that taught itself that male candidates were preferable. Historical hiring decisions, which had traditionally favoured men over women, were fed into the system, and the AI learned to do the same. The company abandoned the tool when executives understood that its results were biased along gender lines.

Bias arises in AI during initial goal-setting

Bias, though, can creep in long before data is even collected, Kerrigan notes. It may start with how the problem is framed. If, for example, a credit company wants to design a model to assess creditworthiness, it first needs to decide what it wants to achieve. Perhaps it wants to increase profit, or perhaps it wants to reduce risk by decreasing the number of people who fail to repay their loans. These goals, by themselves, have nothing to do with equality or fairness.

Bias can also be introduced during data preparation, when developers decide which "attributes" the model uses to make decisions. In the FNB loan case outlined above, the use of suburbs as an attribute to determine risk is what ultimately created the alleged bias.
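One common response to the "blind data" idea is to train only on neutral features while holding the protected attribute aside for auditing. The sketch below is hypothetical, with invented column names and synthetic numbers, and is not an approach attributed to anyone quoted here. It shows why attribute choices still matter: an area-based default rate acts as a proxy for the protected group, so the disparity in predicted approvals survives even though gender is never shown to the model.

```python
# Hypothetical audit sketch with synthetic data and invented column names.
# The model trains only on "blind" features; the protected attribute is kept
# aside so predicted outcomes can still be compared across groups afterwards.
import pandas as pd
from sklearn.linear_model import LogisticRegression

applicants = pd.DataFrame({
    "income":            [35, 50, 65, 80, 35, 50, 65, 80],
    "area_default_rate": [30, 28, 25, 22, 5, 5, 4, 3],   # % defaults in the area; a proxy for group
    "gender":            ["f", "f", "f", "f", "m", "m", "m", "m"],  # never shown to the model
    "approved":          [0, 0, 1, 1, 1, 1, 1, 1],                  # historical labels
})

blind = applicants[["income", "area_default_rate"]]   # gender deliberately excluded
model = LogisticRegression(max_iter=1000).fit(blind, applicants["approved"])
applicants["predicted"] = model.predict(blind)

# Demographic-parity style check: compare predicted approval rates per group.
rates = applicants.groupby("gender")["predicted"].mean()
print(rates)
print("disparity ratio:", round(rates.min() / rates.max(), 2))
```

A ratio well below 1.0 signals that one group is being approved far less often, which is the kind of attribute-driven skew described above, and a prompt to revisit both the chosen features and the historical labels.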
"It comes down to accountability. We can't just blame things on the computer. Artificial intelligence is only as intelligent as the data it's fed," Moosajee says. "And currently, it's dumb. And biased. And sexist. And racist. Unless we really understand what is going into AI, we're going to get the same inequality in the virtual world as we experience in the physical world."