Machines are fed mounds and mounds of data to extrapolate, interpret, and learn from. Unlike humans, algorithms are ill-equipped to consciously counteract learned biases, because although we would like to believe AI/ML correlates to human thinking, it really doesn't. AI/ML has created what many call the newest industrial revolution by giving computers the ability to interpret human language, and without intention, it has learned human biases as well.
So, where does the data used by AI/ML systems come from? Most of this historical data comes from the same type of people who created the algorithms and the programs that use them, which until recently has meant people who are socio-economically above average and male. So, without thought or intent, gender and racial biases have dominated the AI/ML learning process. An AI/ML system is not capable of "thinking on its feet" or reversing this bias once it makes a decision. The point is that AI/ML systems are biased because humans are innately biased, and AI/ML systems are not capable of moral decisions; only humans are, at least not yet anyway.
Research has shown recruiting (HR) software is biased
Much research shows that as machines acquire human-like language capabilities, they also absorb deeply ingrained human biases concealed within language patterns. Within recruiting (HR) selection software, this means a resume may fail to make the "first cut" based on the language and patterns of the resume rather than on the skills it describes. Over time, writing resumes has become both an art and a science; doing it well calls for the skills of a data scientist coupled with those of a professional writer, someone highly educated in language with an analytical mind. How many professional writers are capable of being data scientists?
Our educational system needs to address this, because I believe everyone will need to be a highly skilled data scientist or have quick and easy access to one.
Recent research has shown that implicit word-association tests, which mathematically categorize pleasant versus unpleasant word associations, can expose human psychological biases in AI/ML systems. Words associated with "flowers" are judged psychologically more pleasant than words associated with "insects." Gender bias shows up in professional contexts: the words "female" and "woman" are associated with humanities professions and with the home, while "male" and "man" are associated with math, science, and engineering professions. European American names perceived as more Anglo-Saxon were heavily associated with words such as "gift" or "happy," while African American names were associated with unpleasant words.
Statistically, research shows that even with an identical coefficient of variation (CV) of 50%, a European American candidate is still more likely to be interviewed than an African American candidate.

"The coefficient of variation (CV) represents the ratio of the standard deviation to the mean, and it is a useful statistic for comparing the degree of variation from one data series to another, even if the means are drastically different from one another."

Because algorithms can potentially reveal when they are biased, this suggests that algorithms explicitly inherit the same social prejudices as the humans who programmed them. It is believed that, although it is a complicated task, AI/ML systems can be programmed to address this mathematical bias. Correction is already taking place within companies like Google and Amazon, in their web search engines.
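To make the quoted definition concrete, here is a minimal sketch in Python. The two data series are invented for illustration; the point is that CV lets you compare relative variation between series whose means are drastically different:

```python
import statistics

def coefficient_of_variation(series):
    """CV = standard deviation divided by the mean."""
    return statistics.pstdev(series) / statistics.mean(series)

# Two hypothetical data series with very different means
# but the same relative spread.
series_a = [10, 12, 8, 11, 9]            # mean ~10
series_b = [1000, 1200, 800, 1100, 900]  # mean ~1000

print(f"CV of series A: {coefficient_of_variation(series_a):.3f}")
print(f"CV of series B: {coefficient_of_variation(series_b):.3f}")
```

Although series B's standard deviation is 100 times larger than series A's, both series have the same CV (about 0.141, or 14.1%), because the spread scales with the mean.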
Machine translation and web search systems construct mathematical representations of language in which the meaning of a word is distilled into a series of numbers (a word vector) based on which other words most frequently appear alongside it. This mathematical approach seems to capture deep cultural and societal language context more accurately than any dictionary definition could.
Can bias in AI/ML be eliminated?
Eliminating inappropriate bias by modifying interpretation is not as easy as one might imagine. Language inference and interpretation is a subtle human trait, typically shaped by influences such as socio-economic background, gender, education, and race, all of which contribute to human biases.
Programming algorithms designed to "understand" language without weakening their interpretive powers is extremely challenging. Selecting "only one" most appropriate interpretation, adding it to the decision tree, then selecting the next "only one" most appropriate interpretation, and so on down the tree, is what causes algorithms to mimic thinking.
What if the first interpretation by an AI/ML system goes down what we humans believe is the wrong path, based on human intellect and cultural and moral norms? Immediate course-correction input would be necessary as data accumulates along the decision tree and very minute behavioral steps are executed. How do we program morally and culturally acceptable laws into AI/ML systems? Who decides what those moral and cultural laws are?
Amazon, Google, IBM, Microsoft, and many others have been evaluating bias within their AI/ML platforms, trying to understand both the problem and the solution. Amazon has even stopped using AI/ML software as a recruiting and employment tool. After many years of research, what has been determined is that since AI/ML systems replicate the patterns of the mostly male engineers who build them, the patterns they simulate are of those engineers' making.
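The word-vector idea described above can be sketched with a toy example. The vectors and vocabulary below are invented for illustration (real systems learn vectors with hundreds of dimensions from billions of words); the point is only that "association strength" between words reduces to simple arithmetic, here the cosine similarity between their vectors, which is exactly where learned biases can be measured:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: closer to 1.0 = more similar."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(a * a for a in v))
    return dot / (norm_u * norm_v)

# Toy 3-dimensional "word vectors" (invented for illustration only).
vectors = {
    "man":       [0.9, 0.1, 0.3],
    "woman":     [0.1, 0.9, 0.3],
    "engineer":  [0.8, 0.2, 0.5],
    "homemaker": [0.2, 0.8, 0.5],
}

# An association test in miniature: which profession word sits
# "closer" to each gendered word in the vector space?
for gender in ("man", "woman"):
    for profession in ("engineer", "homemaker"):
        sim = cosine_similarity(vectors[gender], vectors[profession])
        print(f"{gender:6s} ~ {profession:10s}: {sim:.3f}")
```

In these made-up vectors, "man" scores closer to "engineer" and "woman" closer to "homemaker"; in real embeddings trained on human text, researchers have measured exactly this kind of asymmetry, which is how the biases discussed above are exposed.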
Most major companies are beginning to look at the biases their AI/ML systems have created and are trying to "find a cure."
One suggestion is to build extreme diversity into the AI/ML development team, with constant diversity oversight from within. Another is to create an AI/ML supervisory and compliance body to police these systems and apply diverse course corrections. Such a body of humans would be extremely powerful once empowered, and would ultimately become our AI/ML moral authority.
Are we entering a science fiction scenario similar to Orwell's "1984"? Do you not see evidence of a global race for a one-world economic, and possibly moral, authority through AI/ML domination? Whoever dominates the creation and deployment of AI/ML platforms could affect not only small global decisions but major ones as well.