The use of artificial intelligence in the hiring process has increased in recent years, with companies turning to automated assessments, digital interviews, and data analytics to parse resumes and screen candidates. But as IT strives for better diversity, equity, and inclusion (DEI), it turns out AI can do more harm than help if companies aren't strategic and thoughtful about how they implement the technology.

"The bias usually comes from the data. If you don't have a representative data set, or any number of characteristics that you decide on, then of course you're not going to be properly finding and evaluating applicants," says Jelena Kovačević, IEEE Fellow, William R. Berkley Professor, and Dean of the NYU Tandon School of Engineering.

The chief issue with AI's use in hiring is that, in an industry that has been predominantly male and white for decades, the historical data on which AI hiring systems are built will carry an inherent bias. Without diverse historical data sets to train AI algorithms, AI hiring tools are very likely to perpetuate the same biases that have existed in tech hiring since the 1980s. Still, used effectively, AI can help create a more efficient and fair hiring process, experts say.

The dangers of bias in AI

Because AI algorithms are typically trained on past data, bias is always a concern. In data science, bias is defined as an error that arises from faulty assumptions in the learning algorithm. Train your algorithms on data that doesn't reflect the current landscape and you will derive erroneous results.
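The mechanism is easy to show in a few lines of code. The data and scoring rule below are invented for illustration (no real system is this simple): a naive screener that rates candidates by similarity to past hires reproduces whatever skew the historical data contains.

```python
# Hypothetical historical hires: everyone came from the same school tier,
# because that is who was hired in the past -- not because tier predicts skill.
past_hires = [
    {"skill": 8, "school_tier": "ivy"},
    {"skill": 7, "school_tier": "ivy"},
    {"skill": 9, "school_tier": "ivy"},
]

def similarity_score(candidate, history):
    """Naive scorer: average similarity of a candidate to past hires."""
    total = 0.0
    for hire in history:
        # Reward matching the historical profile, including irrelevant traits.
        total += 1.0 if candidate["school_tier"] == hire["school_tier"] else 0.0
        total += 1.0 - abs(candidate["skill"] - hire["skill"]) / 10.0
    return total / len(history)

ivy = {"skill": 9, "school_tier": "ivy"}
state = {"skill": 9, "school_tier": "state"}  # equally skilled, different background

# The equally skilled candidate outside the historical pattern scores lower:
# the algorithm has learned the skew in the data, not job performance.
print(similarity_score(ivy, past_hires) > similarity_score(state, past_hires))  # True
```

The bias never has to be programmed in; it falls out of training on who was hired before.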
In hiring, then, especially in an industry like IT that has had long-standing issues with diversity, training an algorithm on historical hiring data can be a big mistake.

"It's really hard to ensure a piece of AI software isn't inherently biased or has biased effects," says Ben Winters, an AI and human rights fellow at the Electronic Privacy Information Center. While steps can be taken to avoid this, he adds, "many systems have been shown to have biased effects based on race and disability."

If you don't have appreciable diversity in your data set, it's impossible for an algorithm to know how individuals from underrepresented groups would have performed in the past. Instead, the algorithm will be biased toward what the data set represents and will compare all future candidates to that archetype, says Kovačević.

"For example, if Black people were systematically excluded from the past, and if you had no women in the pipeline in the past, and you create an algorithm based on that, there is no way the future will be properly predicted. If you hire only from 'Ivy League schools,' then you really don't know how an applicant from a lesser-known school will perform, so there are several layers of bias," she says.

Wendy Rentschler, head of corporate social responsibility, diversity, equity, and inclusion at BMC Software, is keenly aware of the potential negatives AI can bring to the hiring process.
She points to Amazon's infamous attempt at developing an AI recruiting tool as a prime example: The company had to shut the project down because the algorithm discriminated against women.

"If the largest and greatest software company can't do it, I give great pause to all the HR tech and their claims of being able to do it," says Rentschler.

Some AI hiring software companies make big claims, but whether their software can help determine the right candidate remains to be seen. The technology can help companies streamline the hiring process and find new ways of identifying qualified candidates, but it's important not to let lofty claims cloud judgment.

If you're trying to improve DEI in your organization, AI can seem like a quick fix or magic bullet, but if you aren't strategic about its use in the hiring process, it can backfire. The key is to ensure your hiring process and the tools you use aren't excluding traditionally underrepresented groups.

Discrimination with AI

It's up to companies to ensure they're using AI in the hiring process as ethically as possible and not falling victim to overblown claims about what the tools can do. Matthew Scherer, senior policy counsel for worker privacy at the Center for Democracy & Technology, points out that, because the HR department doesn't generate revenue and is usually labeled as an expense, leaders are sometimes eager to bring in automation technology that can cut costs. That eagerness, however, can lead companies to overlook potential negatives of the software they're using.
Scherer also notes that many claims made by AI hiring software companies are overblown, if not completely false.

"Particularly tools that claim to do things like analyze people's facial expressions, their tone of voice, anything that measures aspects of personality — that's snake oil," he says.

At best, tools that claim to measure tone of voice, expressions, and other aspects of a candidate's personality in, say, a video interview are "measuring how culturally 'normal' a person is," which can ultimately exclude candidates with disabilities or any candidate who doesn't fit what the algorithm determines a typical candidate looks like. These tools can also put disabled candidates in the uncomfortable position of deciding whether to disclose a disability before the interview process. Candidates may worry that without disclosing they won't get the accommodations needed for an automated assessment, yet they may not be comfortable disclosing a disability that early in the hiring process, or at all.

And as Rentschler points out, BIPOC candidates, women, and candidates with disabilities are often accustomed to "code switching" in interviews: adjusting the way they speak, appear, or behave in order to make others more comfortable. AI systems might pick up on that and incorrectly flag their behavior as inauthentic or dishonest, turning away potentially strong candidates.

Scherer notes that discrimination law recognizes two categories: disparate impact, which is unintentional discrimination, and disparate treatment, which is intentional discrimination.
It's difficult to design a tool that can avoid disparate impact "without explicitly favoring candidates from particular groups, which would constitute disparate treatment under federal law."

Regulations in AI hiring

Because AI is a relatively new technology, oversight is scant when it comes to legislation, policies, and laws around privacy and trade practices. Winters points to a 2019 FTC complaint filed by EPIC alleging that HireVue used deceptive business practices related to facial recognition in its hiring software.

HireVue claimed to offer software that "tracks and analyzes the speech and facial movements of candidates to be able to analyze fit, emotional intelligence, communication skills, cognitive ability, problem solving ability, and more." The company ultimately pulled back on its facial recognition claims and the use of the technology in its software.

But similar technology remains on the market, using games to "purportedly measure subjective behavioral attributes and match with organizational fit" or using AI to "crawl the internet for publicly available information about statements by a candidate then analyze it for potential red flags or fit," according to Winters.

There are also concerns about the amount of data AI can collect on a candidate while analyzing video interviews, assessments, resumes, LinkedIn profiles, or other public social media profiles. Often, candidates don't even know they're being analyzed by AI tools in the interview process, and there are few regulations on how that data is managed.

"Overall, there is currently very little oversight for AI hiring tools. Several state or local bills have been introduced. However, many of these bills have significant loopholes — namely not applying to government agencies and offering significant workarounds.
The future of regulation in AI-supported hiring should require significant transparency, controls on the application of these tools, strict data collection, use, and retention limits, and independent third-party testing that is published freely," says Winters.

Responsible use of AI in hiring

Rentschler and her team at BMC have focused on finding ways to use AI to help the company's "human capital be more strategic." They've implemented tools that quickly screen candidates using skills-based assessments for the role they're applying to. BMC has also used AI to identify problematic language in its job descriptions, ensuring they're gender-neutral and inclusive, and has employed the software to connect new hires with their benefits and internal organizational information during onboarding. Rentschler's objective is to find ways to implement AI and automation that help the humans on her team do their jobs more effectively, rather than replace them.

While AI algorithms can carry inherent bias based on historical hiring data, one way to avoid this is to focus on skills-based hiring. Rentschler's team uses AI tools only to identify candidates who have the specific skill sets the company is looking to add to its workforce, ignoring identifiers such as education, gender, names, and other potentially identifying information that might historically have excluded a candidate from the process. This approach has surfaced candidates from unexpected backgrounds, Rentschler says, including a Syrian refugee who was originally a dentist but also had some coding experience. Because the system was looking only for candidates with coding skills, the former dentist made it past the filter and was hired by the company.

Other ethical strategies include having checks and balances in place.
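Before turning to those, the skills-only filtering described above can be sketched in a few lines. This is a hypothetical illustration, not BMC's actual tooling; the field names, redaction list, and skills bar are all assumptions.

```python
# Fields that could proxy for protected characteristics or pedigree are
# dropped before screening, so only skills can drive the decision.
# (Hypothetical field names -- not any vendor's real schema.)
IGNORED_FIELDS = {"name", "gender", "age", "photo_url", "education"}

def screen_on_skills(candidate, required_skills, min_matches=1):
    """Return True if the candidate clears the skills bar, judged on skills alone."""
    anonymized = {k: v for k, v in candidate.items() if k not in IGNORED_FIELDS}
    matches = set(anonymized.get("skills", [])) & set(required_skills)
    return len(matches) >= min_matches

# A career changer -- say, a dentist with coding experience -- passes,
# because the filter never sees the education field.
candidate = {
    "name": "redacted",
    "education": "doctor of dental surgery",
    "skills": ["python", "sql"],
}
print(screen_on_skills(candidate, {"python", "javascript"}))  # True
```

The design choice is simply that the scoring function never sees the redacted fields, so they cannot influence the outcome, intentionally or otherwise.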
Scherer consulted with a company that designed a tool to send potential candidates to a recruiter, who would then review their resumes and decide whether they were a good fit for the job. Even if that recruiter rejected a resume, it would still be run through the algorithm again, and if it was flagged as a good potential match, it would be sent to another recruiter who wouldn't know it had already been reviewed by someone else on the team. This ensures that resumes are double-checked by humans and that the company isn't relying solely on AI to determine qualified candidates. It also ensures that recruiters aren't overlooking qualified candidates.

"It's important that the human retains the judgment and doesn't just rely on what the machine says. And that's the thing that is hard to train for, because the easiest thing for a human recruiter to do will always be to just say, 'I'm going to just go with whatever the machine tells me if the company is expecting me to use that tool,'" says Scherer.
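The routing logic of that double-review workflow might be sketched as follows; the function and label names are hypothetical, not drawn from Scherer's client.

```python
def route_resume(resume, first_reviewer_accepts, model_flags_strong):
    """Decide a resume's next step under the double-review workflow."""
    if first_reviewer_accepts(resume):
        return "advance"
    # The first recruiter passed on it, but if the algorithm still rates it
    # highly, send it to a second reviewer who is blind to the first decision,
    # rather than discarding it outright.
    if model_flags_strong(resume):
        return "second_blind_review"
    return "reject"

# Example: the model flags the resume as strong even though the first
# recruiter rejected it, so a second, blinded recruiter takes a look.
print(route_resume("resume-42", lambda r: False, lambda r: True))  # second_blind_review
```

Neither the human nor the machine gets the last word alone: a rejection requires both a recruiter and the model to pass on the candidate.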