Today, artificial intelligence (AI) is helping us uncover new insights from data and enhance human decision-making. For instance, we use facial recognition to sign into our cell phones, and voice comprehension and intent analytics to get assistance. E-commerce retailers work with AI to predict and recommend new products to consumers. Banks use conversational AI to reduce fraud and better manage client experiences.
Most of the AI in use today is narrow AI. General AI, which is more akin to human intelligence and can span a very broad range of decisions, emotions, and judgment, will not be here anytime soon. Narrow AI, which is here today, is very good at specific tasks, but "narrowness," by definition, can introduce limitations, making it prone to bias.
Bias may come from incomplete data samples or incorrect datasets. There is also interaction bias – skewed learning that happens through interactions over time. Sometimes bias may result from a sudden change in the business, such as a new law or business rule. Finally, ineffective training algorithms can cause bias. Recognizing where biases come from helps with mitigation and ensures that the AI application yields its intended business results.
What leads to AI bias?
While unintended bias can come from many causes, two of the largest drivers are bias in data and bias in training.
The most obvious cause of bias in data is a lack of diversity in the data samples used to train the AI system. For example, we routinely run sensor data from aircraft engines through AI algorithms to predict part replacements and optimize asset performance. But if the AI is primarily trained on flights from the United States to Europe – flying over the cold Northern Hemisphere – and then used for flights in sub-Saharan Africa, it is easy to see that the incoming data will fall outside of the trained model's parameters and generate the wrong results.
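One simple guard against this kind of coverage gap is to check whether incoming data falls inside the ranges the model actually saw during training. The sketch below is illustrative only – the feature names and trained ranges are hypothetical, not from any real engine-monitoring system:

```python
# Hypothetical out-of-range check: flag inputs that fall outside the
# feature ranges observed in the training data.
TRAINED_RANGES = {
    "ambient_temp_c": (-60.0, 25.0),  # cold Northern Hemisphere routes
    "humidity_pct": (5.0, 70.0),
}

def out_of_range_features(reading: dict) -> list[str]:
    """Return the features whose values fall outside the trained ranges."""
    flagged = []
    for feature, (low, high) in TRAINED_RANGES.items():
        value = reading.get(feature)
        if value is not None and not (low <= value <= high):
            flagged.append(feature)
    return flagged

# A hot, humid flight profile the model never saw in training
# should be flagged rather than scored silently.
print(out_of_range_features({"ambient_temp_c": 38.0, "humidity_pct": 82.0}))
```

In practice this kind of check sits in front of the model, routing out-of-range inputs to human review instead of returning a prediction the model was never trained to make.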
Put another way, the algorithm is only as smart as the data put into it.
The reality is that it can be hard to get comprehensive data to train AI systems, so many systems use only easy, readily available data. Sometimes, the data might not even exist to train the AI algorithm for all its potential use cases. For instance, AI software for recruiting struggles to recommend diverse candidates if it is trained only on a historical pool of non-diverse workers.
The other large driver of bias – bias in training – can come from rushed and incomplete training algorithms. For example, an AI chatbot designed to learn from conversations and become more intelligent can pick up politically incorrect language it is exposed to and start using it, if it was not trained not to do so – as Microsoft learned with Tay. Similarly, the potential use of AI in the criminal justice system is concerning because we do not yet know whether the training of those algorithms is done correctly.
Agile programming has trained us in short, iterative development of products. This approach, coupled with the excitement around AI's promise, can drive early applications that quickly broaden beyond the intended use case. And because narrow AI does not come with common sense, or a sense of fairness and equity, eliminating training bias requires a great deal of planning and design work. This is where the human in the loop becomes so important: domain experts help think through the edge cases and train the models accordingly.
Diversity in both data and talent can mitigate bias
The best way to prevent data bias is to use a comprehensive, broad dataset that reflects all possible edge use cases. If internal data is underrepresented or disproportionate, external sources may fill in the gaps and give the machine a richer, more complete picture.
In a nutshell, the more comprehensive the dataset, the more accurate the AI predictions will be.
Diversity in the teams working with AI also helps address training bias. When only a small group works on a system's design and algorithms, the system becomes susceptible to the thinking of what could be like-minded individuals. Bringing in team members with different skills, thinking, approaches, and backgrounds drives more holistic design. One of our biggest learnings is that AI is best trained by diverse teams that help identify the right questions for AI algorithms to solve.
For example, several teams used multiple terabytes of operational data in wealth management to train algorithms to drive higher trading income. The obvious approach was to focus on day traders, who are mostly single, 30- to 35-year-old white males. One of the teams – with a set of diverse members beyond the usual data engineers and neural net experts – addressed that objective and also identified an even larger opportunity: single 50- to 55-year-old women, a high-investible-assets segment that had previously gone untapped. Diverse teams think of questions others may not even know to ask.
AI also helps minimize bias
For all that has been said so far about the perils of bias in AI, the reality is that with proper design and thoughtful usage, we can reduce bias in AI. In fact, in many situations, AI can minimize bias otherwise present in human decision-making. For example, in human resources recruiting, job descriptions can be run through AI programs that flag and remove words carrying gender bias – such as replacing "war room" with "nerve center" – to eliminate unconscious discrimination.
In summary, proper design and a few key principles can mitigate unintended bias in AI applications. Proper governance practices are a must. Data coverage needs to be comprehensive. And diverse teams deliver better results.
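The job-description pass described above amounts to a flag-and-replace step over the text. Here is a minimal sketch of that idea, using a small illustrative substitution list rather than a vetted bias lexicon:

```python
# Sketch of a flag-and-replace pass over a job description.
# The substitution list here is illustrative only, not a production lexicon.
import re

SUBSTITUTIONS = {
    "war room": "nerve center",
    "manpower": "staffing",
}

def debias(text: str) -> tuple[str, list[str]]:
    """Replace flagged phrases; return the revised text and what was flagged."""
    flagged = []
    for phrase, replacement in SUBSTITUTIONS.items():
        pattern = re.compile(re.escape(phrase), re.IGNORECASE)
        if pattern.search(text):
            flagged.append(phrase)
            text = pattern.sub(replacement, text)
    return text, flagged

revised, flags = debias("Meet the team in the War Room on day one.")
print(revised)  # the flagged phrase is replaced with "nerve center"
print(flags)
```

A real system would go well beyond keyword substitution – scoring tone and context as well – but the basic contract is the same: surface what was flagged so a human can review the change.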