Humanity has always dreamed of some omniscient, omnipotent genie that can shoulder its workloads. Now, thanks to the hard work of computer scientists in the labs, we have our answer in artificial intelligence, which, if you buy into the hype, can do just about anything your company needs done. At least some of it, some of the time.

Yes, AI innovations are amazing. Virtual helpers like Siri, Alexa, or Google Assistant would seem magical to a time traveler from as recently as 10 to 15 years ago. Your word is their command, and unlike the voice recognition tools of the 1990s, they often come up with the right answer, provided you avoid curveball questions like asking how many angels can dance on the head of a pin.

But for all of their magic, AIs still rely on computer programming, and that means they suffer from all the limitations that hold back more pedestrian code such as spreadsheets or word processors. They do a better job juggling the statistical vagaries of the world, but ultimately they're still just computers that make decisions by computing a function and determining whether some number is bigger or smaller than a threshold. Underneath all of the clever mystery and sophisticated algorithms is a set of transistors implementing an IF-THEN decision.

Can we live with this? Do we have any choice? With the drumbeat for AI across all industries only getting louder, we must begin to learn to live with the following dark secrets of artificial intelligence.

Much of what you find with AI is obvious

The toughest job for an AI scientist is telling the boss that the AI has discovered what everyone already knew. Perhaps it examined 10 billion photographs and discovered the sky is blue.
But if you forgot to put nighttime photos in the training set, it won't realize that it gets dark at night.

But how can an AI avoid the obvious conclusions? The strongest signals in the data will be obvious to anyone working in the trenches, and they'll be just as obvious to the algorithms digging through the numbers. They'll be the first answer that the retriever brings back and drops at your feet. At least the algorithms won't expect a treat.

Exploiting nuanced AI insights may not be worth it

Of course, good AIs also lock on to small differences when the data is precise. But using these small insights can require deep strategic shifts in the company's workflow. Some distinctions will be too subtle to be worth chasing, yet the computers will obsess over them anyway. The problem is that big signals are obvious and small signals may yield small or even nonexistent gains.

Mysterious computers are more threatening

While early researchers hoped that the mathematical approach of a computer algorithm would lend an air of respectability to the final decision, many people aren't willing to surrender to the god of logic. If anything, the complexity and mystery of AI make it easier for anyone unhappy with the answer to attack the process. Was the algorithm biased? The more mystery and complexity under the hood, the more reasons for the world to be suspicious and angry.

AI is mainly curve fitting

Scientists have been plotting noisy data and drawing lines through the points for hundreds of years. Many of the algorithms at the core of machine learning do just that: they take some data and draw a line through it. Much of the advancement has come from finding ways to break the problem into thousands, millions, or even billions of little problems and then drawing lines through all of them.
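That line-drawing can be made concrete. Here is a minimal sketch of ordinary least squares, the fit underneath much of machine learning, in plain Python; the data points are invented for illustration:

```python
# Invented data: points scattered near the line y = 2x + 1.
xs = [0, 1, 2, 3, 4, 5]
ys = [1.1, 2.9, 5.2, 7.0, 8.8, 11.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least squares in closed form:
# slope = covariance(x, y) / variance(x), then solve for the intercept.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

print(f"y = {slope:.2f}x + {intercept:.2f}")  # prints "y = 1.99x + 1.05"
```

Deep models do something far more elaborate, but the core move, choosing a slope and an intercept that minimize the error, is the same.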
It's not magic; it's just an assembly-line version of the way we've been doing science for centuries. People who don't like AI, and find it easy to poke holes in its decisions, focus on the fact that there's often no deep theory or philosophical scaffolding to lend credibility to the answer. It's just a guesstimate for the slope of some line.

Gathering data is the real job

Everyone who starts studying data science soon realizes there's not much time for science, because finding the data is the real job. AI is a close cousin of data science, and it faces the same challenges. It's 0.01% inspiration and 99.99% perspiring over file formats, missing data fields, and character codes.

You need massive data to reach deeper conclusions

Some answers are easy to find, but deeper, more complex answers often require more and more data. Sometimes the amount of data required rises exponentially. AI can leave you with an insatiable appetite for more and more bits.

You're stuck with the biases of your data

Just like the inhabitants of Plato's cave, we're all limited by what we can see and perceive. AIs are no different. They're explicitly limited by their training set. If there are biases in the data, and there will be some, the AI will inherit them. If there are holes in the data, there will be holes in the AI's understanding of the world.

AI is a black hole for electricity

Most good games have a final level or an ultimate goal. AIs, though, can keep getting more and more complex. As long as you're willing to pay the electricity bill, they'll keep churning out more complex models with more nodes, more levels, and more internal state. Maybe this extra complexity will be enough to make the model truly useful. Maybe some emergent sentient behavior will come out of the next run.
But maybe we'll need an even larger collection of GPUs running through the night to really capture the effect.

Explainable AI is just another turtle

AI researchers have been devoting more time of late to explaining just what an AI is doing. We can dig into the data and discover that the trained model relies heavily on parameters that come from a particular corner of the data set. Often, though, the explanations are like those offered by magicians who explain one trick by performing another. Answering the question "why" is surprisingly hard. You can look at the simplest linear models and stare at the parameters, but often you'll be left scratching your head. If the model says to multiply the number of miles driven each year by a factor of 0.043255, you might wonder why not 0.043256 or 0.7, or maybe something outrageously different like 411 or 10 billion. Once you're working on a continuum, any of the numbers along the axis might be right.

It's like the old model in which the Earth sat on the back of a giant turtle. And what did that turtle stand on? The back of another turtle. And the next one? It's turtles all the way down.

Trying to be fair is a challenge

Suppose you want an algorithm to pick your company basketball squad without fixating on height. You could leave height out of the training set, but the odds are pretty good that your AI program will find some other proxy to flag the taller people and choose them anyway. Maybe it will be shoe size. Or perhaps reach. People have dreamed that asking a neutral AI to make an unbiased decision would make the world a fairer place, but sometimes the issues are deeply embedded in reality, and the algorithms can't do any better.

Sometimes the fixes are even worse

Is forcing an AI to be fair any real solution? Some try to insist that AIs generate results with certain preordained percentages. They put their thumb on the scale and rewrite the algorithms to change the output.
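In practice that thumb often takes the form of group-specific cutoffs: instead of one score threshold for everyone, each group gets its own, chosen so every group is selected at the same preordained rate. A minimal sketch of the idea, with invented scores and group names:

```python
# Hypothetical model scores for two groups (all values invented).
scores = {
    "group_a": [0.91, 0.85, 0.77, 0.64, 0.52, 0.40],
    "group_b": [0.70, 0.61, 0.55, 0.48, 0.33, 0.21],
}

TARGET_RATE = 0.5  # preordained: select the top 50% of each group


def select(scores_by_group, rate):
    """Thumb on the scale: a separate cutoff per group so every group
    is selected at the same rate, regardless of how the raw scores
    compare across groups."""
    selected = {}
    for group, vals in scores_by_group.items():
        k = max(1, round(len(vals) * rate))  # how many to keep per group
        cutoff = sorted(vals, reverse=True)[k - 1]
        selected[group] = [v for v in vals if v >= cutoff]
    return selected


picks = select(scores, TARGET_RATE)
for group, vals in picks.items():
    print(group, vals)
# group_a keeps scores down to 0.77; group_b keeps scores down to 0.55,
# so a 0.64 in group_a is rejected while a 0.55 in group_b is accepted.
```

The selection rates come out equal by construction, because the cutoffs, not the scores, were made to guarantee it.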
But then people start to wonder why we bother with any training or data analysis at all if we've already decided on the answer we want.

Humans are the real problem

We're generally happy with AIs when the stakes are low. If you've got 10 million pictures to sort, you're going to be happy if some AI generates reasonably accurate results most of the time. Sure, there may be issues and mistakes. Some of the glitches might even reflect deep problems with the AI's biases, issues worthy of a 200-page hairsplitting thesis.

But the AIs aren't the problem. They will do what they're told. If they get fussy and start generating error messages, we can hide those messages. If the training set doesn't produce perfect results, we can set aside the whining and ask for more data. If the accuracy isn't as high as we'd like, we can just file that result away. The AIs will go back to work and do the best they can.

Humans, though, are a completely different animal. The AIs are their tools, and the humans are the ones who want to use them to find an advantage and profit from it. Some of these plans will be relatively innocent, but some will be driven by secret malice aforethought. Many times, when we run into a bad AI, it's a puppet on a string for some human who's profiting from the bad behavior.