We have seen it too many times before. A major security or privacy breach creates a crisis for an enterprise. Headlines, lawsuits and, sometimes, the CEO testifying before Congress. The CIO works around the clock only to be rewarded with a pink slip and an uncertain career.

Researchers from MIT and Stanford University tested three commercially released facial-analysis programs from major technology companies and will present findings that the software contains clear skin-type and gender biases. The facial-recognition programs are good at recognizing white males but fail embarrassingly with females, and the error rates climb as skin tones get darker. The news broke last week but will be presented in full at the upcoming Conference on Fairness, Accountability, and Transparency.

Bias will damage the relationship between the enterprise and the public. It can make a firm a target for critics who will view incidents like this as evidence the firm doesn't share the values of its customers. And as AI makes more and more decisions related to investments, healthcare, lending, employment and so on, the risk of harming people grows, and with it the financial and even criminal liability.

When we began storing and transmitting valuable, often personal and financial data, we created the risk of data breach. In the age of Artificial Intelligence and automation technologies, bias is the new breach.

Artificial Intelligence and automation technologies are critical to your company strategy. But with them come new sets of risks and issues that CIOs and other leaders must address. It is critical to create the systems and processes that will prevent bias from creeping into a company's AI software, detect it when it appears, and mitigate the damage when it does. That will be the biggest challenge in the next few years, not loss of jobs or threats to personal safety from AI.

Public examples

It may seem odd that a software program could have built-in bias, but the reasons for it are very simple. The experts who are developing AI technology are the ones feeding data into their programs. If they're using data that already includes standard human biases, then their AI software will also reflect this bias. It's not something that's done consciously, but unfortunately, it hasn't been a major consideration when initial programming begins on systems such as Alexa, Siri or Google Home.

Some critics would like to see AI interaction be both gender- and ethnically neutral. We might want to adopt more generic, robot-sounding voices instead of the standard female voice we've been exposed to. This might be taking it a bit far, but the point is valid. We need to be constantly vigilant against the possibility of bias as we integrate AI into business organizations.

Avoiding skewed data sets

One of machine learning's strengths is that it can create highly compelling predictive models from relatively small sets of data compared to traditional analytics approaches. Often this leads to exciting and highly valuable insights. But there is big risk alongside those big benefits. If we want AI to be completely unbiased, we must give it the best possible starting point. Current data sets may already be skewed toward automatic assumptions based on gender or ethnicity. We have to recognize this as we build AI systems from the ground up. The data needs to be completely transparent and free from our own personal biases. Only then will an AI system be able to provide us with the best support in an unbiased manner.
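One way to make that starting point concrete is to audit how a candidate training set breaks down by group before any model is trained. Below is a minimal sketch of such an audit; the column names ("gender", "skin_type") and the 10 percent threshold are illustrative assumptions, not figures from the research mentioned above.

```python
# Minimal sketch of a pre-training data audit.
# Assumptions: the data is tabular, and "gender"/"skin_type" columns and the
# 10% threshold are hypothetical choices for illustration.
import pandas as pd


def audit_representation(df: pd.DataFrame, columns: list, min_share: float = 0.10) -> dict:
    """Return, per column, any groups that fall below a minimum share of the data."""
    findings = {}
    for col in columns:
        shares = df[col].value_counts(normalize=True)  # proportion of rows per group
        under = shares[shares < min_share]
        if not under.empty:
            findings[col] = under.to_dict()
    return findings


if __name__ == "__main__":
    # Toy example: a heavily skewed face data set.
    data = pd.DataFrame({
        "gender": ["male"] * 80 + ["female"] * 20,
        "skin_type": ["lighter"] * 85 + ["darker"] * 15,
    })
    for column, groups in audit_representation(data, ["gender", "skin_type"]).items():
        for group, share in groups.items():
            print(f"Under-represented in '{column}': {group} ({share:.0%} of samples)")
```

Even a crude report like this forces the question of whether the data reflects the people the system will actually serve, before any bias is baked into a model.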
Constant training and evaluation

Once a system is created and integrated into a business network, the work doesn't end. Bias can still be introduced over time – especially as new data is fed into the system. Employees who are instrumental in the implementation of new systems must be properly trained. They have to know how to look for what we'll call creeping bias. As the system evolves, it needs to remain free from human shortcomings.

Ethnic and gender diversity

If a company is introducing facial-recognition software, for example, the system should be trained to recognize the diversity of the company's employees and clients. It has to be able to recognize the correct gender, regardless of a user's ethnic background. A good place to start is to ensure the technicians and contributors to the new AI programs are diverse themselves.

Creative diversity

While ethnic and gender diversity may seem easy enough to correct for, they aren't the only type of bias that could eventually find its way into an AI system. So far, AI technology has been created by a relatively small group of people, most of them with PhDs. They're not reflective of the average member of society. For this reason, CIOs have to be conscious of the need to build background diversity into their AI programs. As these systems mature and evolve, it's important to include more people from a wide range of backgrounds in the development process. This should include creative types from all areas. The idea is to provide the AI software with as much valid information from as many sources as possible. Over time, this will give it the best chance of successful integration into a business system.

Rigorous and ongoing testing

No matter how hard an implementation team works on the integration of a new AI system, there is still the danger that some bias will find its way into the process over time. To avoid this, CIOs must introduce a continuous process of testing and evaluating the software. A practical form of that testing is to track the system's error rates separately for each demographic group it serves; a minimal sketch of such a check appears at the end of this piece. End users should also be given tools to detect and correct bias in the programs they use when they find it. AI can be a game-changing technology for business, but only if we're always on the lookout for bias.

Crisis management and response – be ready

Finally, assume you won't always get it right. Work proactively with legal, corporate risk management, human resources, corporate communications and others to have practical, practiced and proven plans for dealing with disaster. Be up front about your concerns.

Automation is the future, but we need to enter the future with open eyes and a clear head.
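As promised under "Rigorous and ongoing testing," here is a minimal sketch of a disaggregated accuracy check. The record format, group labels and five-point gap threshold are assumptions for illustration; the idea is simply to break evaluation results down by demographic group and flag any group that trails the best-performing one, which is how skin-type and gender gaps like those described above become visible.

```python
# Minimal sketch of a disaggregated (per-group) accuracy check.
# Assumptions: each evaluation record is a dict with "group", "label" and
# "prediction" keys, and the 0.05 gap threshold is a hypothetical policy choice.
from collections import defaultdict


def accuracy_by_group(records):
    """Compute accuracy separately for each demographic group."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        correct[r["group"]] += int(r["prediction"] == r["label"])
    return {g: correct[g] / total[g] for g in total}


def flag_gaps(scores, max_gap=0.05):
    """Flag groups whose accuracy trails the best-performing group by more than max_gap."""
    best = max(scores.values())
    return [g for g, acc in scores.items() if best - acc > max_gap]


if __name__ == "__main__":
    # Toy evaluation log for a gender classifier.
    results = (
        [{"group": "lighter-skinned male", "label": "male", "prediction": "male"}] * 99
        + [{"group": "lighter-skinned male", "label": "male", "prediction": "female"}] * 1
        + [{"group": "darker-skinned female", "label": "female", "prediction": "female"}] * 65
        + [{"group": "darker-skinned female", "label": "female", "prediction": "male"}] * 35
    )
    scores = accuracy_by_group(results)
    for group in flag_gaps(scores):
        print(f"Accuracy gap detected for {group}: {scores[group]:.0%}")
```

Run on every new batch of evaluation data, a check like this turns constant training and evaluation from a slogan into a repeatable test.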