Bias in AI systems can lead to unintended consequences. Making sure you understand the ways AI systems are interpreting the data they receive is one way to address the problem.

Alexa. Siri. Cortana. These virtual digital assistants are becoming part of our daily lives, and besides their functional similarities, they have one other important feature in common: they’re all coded to present as women. If you’ve never considered why all these AI helpers are female, you’re not alone. LivePerson, a cloud-based customer service messaging company, commissioned a survey of 1,000 U.S. adults to determine their perceptions of AI and the gender gap in technology. In response to the question, “The Alexa, Siri and Google assistants are all female by default. Did you ever think about that fact?” 53.2 percent said they hadn’t.

Unlike other technologies, “with AI, you are creating something that you’re interacting with as though it’s another person, or you’re using it to supplement human activities, make decisions and recommendations,” says Rob LoCascio, CEO of LivePerson. “And so we have to ask, why? Why are we gendering these ‘helper’ technologies as women? And what does that say about our expectations of women in the world and in the workplace? That women are inherently ‘helpers;’ that they are ‘nags;’ that they perform administrative roles; that they’re good at taking orders?” LoCascio says.
Bias, amplified

Of course, it’s not just digital assistants that have a bias problem. As Bloomberg reports, researchers have started to notice the tendency for AI systems to repeat and amplify the biases of their creators. “Companies, government agencies and hospitals are increasingly turning to machine learning, image recognition and other AI tools to help predict everything from the creditworthiness of a loan applicant to the preferred treatment for a person suffering from cancer. The tools have big blind spots that particularly affect women and minorities,” the article says.

In recruiting and hiring, for example, AI is often used to identify ideal candidates for open positions. Left unchecked, however, AI can make diversity problems worse in fields like IT, where there’s already a significant lack of women and underrepresented minorities.

“In tech, if you’re looking at candidates for technical roles, the majority are going to be white men; so if you just throw all those inputs into the system and go with whatever comes out, then you’ll see your system making the correlation between, say, a developer and a white man,” says Ankit Somani, co-founder of AI recruiter AllyO. “Based on the data it received, it’s not wrong, but if you aren’t understanding how the current lack of diversity is impacting these systems, then you can exacerbate the problem,” he says.

Bias in AI also has a business impact. Take retail, for example. “More than 70 percent of online shopping is done by women. Women hold purchasing power and make most budgeting decisions, so if businesses want to attract and retain those customers, they have to think about these things,” LoCascio says. “It’s not just an issue for the businesses themselves, but for the world in general – if most of your user interfaces and user experiences are designed by men, for men, then you’re going to lose out on a competitive edge,” he says.

The human element

Making sure you understand the ways AI systems are interpreting the data they receive, and adjusting the algorithms accordingly, is one way to address the bias problem, Somani says. “AI technology is good for crunching numbers and for processing large amounts of data and finding patterns. The human part is in making sense of the patterns, figuring out where there need to be changes based on the outcomes, and, of course, in making actual emotional connections,” he says. “Together, you can work toward balance and making sure AI systems are performing fairly and equitably, but you have to have those checks. If you hire an employee, would you never have a performance review? Would you never check in to see how they’re doing and if their performance was up to par? Of course not. Why wouldn’t you also do this with the technologies you’re incorporating?” he says.
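Somani’s performance-review analogy maps onto a concrete practice: periodically auditing a screening model’s decisions for disparities across groups. Below is a minimal sketch of one such check in Python, comparing selection rates and applying the “four-fifths rule” used in U.S. hiring guidance; the column names and sample data are hypothetical, not drawn from any vendor mentioned in this article.

```python
# Minimal sketch of a periodic fairness audit for a screening model.
# Assumes a log of the model's decisions with two hypothetical columns:
#   group    - applicant's self-reported demographic group
#   advanced - 1 if the model advanced the candidate, else 0
import pandas as pd

def selection_rates(decisions: pd.DataFrame) -> pd.Series:
    """Share of candidates advanced, per group."""
    return decisions.groupby("group")["advanced"].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Lowest group rate divided by highest. Values below ~0.8
    (the 'four-fifths rule') are a signal for human review."""
    return rates.min() / rates.max()

# Hypothetical decision log, for illustration only
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "advanced": [1,   1,   0,   1,   0,   0,   0],
})

rates = selection_rates(decisions)
print(rates)                          # A: 0.67, B: 0.25
print(disparate_impact_ratio(rates))  # 0.375 here: flag for human review
```

Running a check like this on a schedule, the way you would schedule an employee’s review, is one way to build the kind of recurring oversight Somani describes.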
Continuously examining the results you’re given and looking closely for biases is key, adds Melanie Pasch, head of content and community manager at AI career platform Gloat. For example, Gloat’s anonymous recruiting software not only anonymizes résumés by removing names and any other factor that could associate candidates with gender, sex, race or ethnicity, but also removes factors that could point to socioeconomic class.

“We realized through looking at the results that certain hobbies – like horseback riding, for instance – were an unconscious indicator of socioeconomic class. We also realized, on a positive note, that for some roles, degrees weren’t necessarily correlating with success on the job, so we coded against those factors to remove them from consideration and focused on other data points,” she says.

There should also be a process by which humans review AI decisions to make sure they’re working as intended and aren’t exacerbating biases, Pasch says. “No one should be hiring solely based on what an algorithm says, because we know these issues exist. But as we make progress and start to see better representation, then the technology will learn and evolve with us,” Pasch says.
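Pasch is describing two interventions that are straightforward to express in code: stripping directly identifying fields from a résumé, and removing features an audit has flagged as proxies for protected attributes or class. Here is a simplified sketch, with entirely hypothetical field names rather than Gloat’s actual schema:

```python
# Simplified sketch of resume anonymization along the lines Pasch describes.
# All field names are hypothetical, not Gloat's actual data model.

# Fields that directly identify a candidate
IDENTIFYING_FIELDS = {"name", "email", "photo_url", "date_of_birth", "address"}

# Fields an audit flagged as proxies: e.g. hobbies correlated with
# socioeconomic class, or degrees that didn't predict on-the-job success
PROXY_FIELDS = {"hobbies", "degree"}

def anonymize_resume(resume: dict) -> dict:
    """Return a copy of the resume with identifying and proxy fields removed."""
    blocked = IDENTIFYING_FIELDS | PROXY_FIELDS
    return {field: value for field, value in resume.items() if field not in blocked}

candidate = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "skills": ["python", "sql"],
    "hobbies": ["horseback riding"],
    "degree": "BA",
    "years_experience": 6,
}
print(anonymize_resume(candidate))
# {'skills': ['python', 'sql'], 'years_experience': 6}
```

Keeping the blocked lists explicit makes them reviewable artifacts: when a human audit surfaces a new proxy, as Gloat’s did with hobbies, removing it from consideration is a one-line change.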