The stock prices are soaring. Everyone is still amazed by the way generative AI algorithms can whip up amazing artwork in any style, then turn on a dime and write long, grammatical essays. Every CIO and CEO has a slide or three in the deck ready to explain how generative AI is going to transform the business.

The technology is still in its infancy, but the capabilities are already undeniable. The next wave of computing will involve generative AI, probably at several points along the workflow. The ride looks unstoppable.

What could possibly go wrong? Plenty. The doomsayers imagine the total destruction of the economy and the enslavement of humanity, along with a good fraction of the animal world, too.

They're probably hyperventilating. But even if the worst cases never arrive, that doesn't mean everything will be perfect. Generative AI algorithms are still very new and evolving rapidly, yet it's already possible to see cracks in the foundation. Look deeply into the algorithms and you'll find places where they fail to deliver on the hype.

Here are seven dark secrets of generative AI algorithms to keep in mind when planning how to incorporate the technology into your enterprise workflow.

They conjure mistakes out of thin air

There's something almost magical about the way large language models (LLMs) write 1,000-word essays on obscure topics like the mating rituals of sand cranes or the importance of crenellations in 17th-century Eastern European architecture. But the same magic power lets them conjure mistakes out of nothing. One moment they're cruising along, conjugating verbs and deploying grammar like a college-educated English major, with most of the facts entirely correct. Then, voilà, they make something up like a fourth grader trying to fake it.

The structure of LLMs makes this inevitable.
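The mechanism is easy to see in miniature. Here is a toy sketch of next-token sampling in Python; the vocabulary and the probabilities are invented for illustration and not taken from any real model:

```python
import random

# Toy next-token distribution for the prompt "The capital of France is".
# These words and numbers are made up for illustration; a real model
# ranks tens of thousands of tokens this way at every step.
next_token_probs = {
    "Paris": 0.90,   # the right answer gets most of the probability mass
    "Lyon": 0.06,    # plausible but wrong
    "Berlin": 0.04,  # wrong, yet never impossible
}

random.seed(0)  # reproducible demo
draws = [
    random.choices(list(next_token_probs),
                   weights=next_token_probs.values())[0]
    for _ in range(1000)
]

# Most draws are correct, but the wrong answers still surface --
# nothing in the mechanism checks facts, only probabilities.
print(draws.count("Paris"), "right,",
      draws.count("Lyon") + draws.count("Berlin"), "wrong")
```

The wrong continuations show up a meaningful fraction of the time, and no amount of grammar polish prevents it.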
They use probabilities to learn how words go together, and occasionally the numbers choose the wrong words. There's no real knowledge, or even an ontology, to guide them. It's just the odds, and sometimes the dice come up craps. We may think we're mind-melding with a superior new being, but we're really no different from a gambler on a bender in Las Vegas looking for a signal in a sequence of dice rolls.

They are data sieves

Humans have built elaborate hierarchies of knowledge in which some details are known only to insiders and some are shared with everyone. This wishful hierarchy is most apparent in the military's classification system, but many businesses have them as well, and maintaining them is often a real hassle for the IT departments and CIOs who manage them.

LLMs don't handle these classifications well. While computers are the ultimate rule followers and can keep catalogs of almost infinite complexity, the structure of LLMs doesn't really allow some details to be secret and others shareable. It's all just one huge collection of probabilities and random walks down Markov chains.

There are even creepy moments when an LLM glues two facts together with its probabilities and infers something that's nominally secret. Humans might do the same thing given the same details.

There may come a time when LLMs can maintain strong layers of secrecy, but for now the systems are best trained on information that's very public and won't cause a stir if it leaks. There are already several high-profile examples of company data leaking and LLM guardrails being circumvented. Some companies are trying to turn AI into a tool for stopping data leaks, but it will take some time before we understand the best way to do that.
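One common stopgap is scrubbing obviously sensitive values out of text before it ever reaches a third-party model. A minimal sketch in Python; the patterns and the placeholder format are illustrative assumptions, nowhere near a complete data-loss-prevention system:

```python
import re

# Illustrative patterns only -- a real deployment needs far broader coverage.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),  # hypothetical key shape
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern with a labeled placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize this ticket from jane.doe@example.com, SSN 123-45-6789."
print(redact(prompt))
# The redacted prompt, not the original, is what gets sent to the model.
```

The catch is that regexes only catch what you anticipated; a secret that is merely implied by two innocuous facts sails straight through.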
Until then, CIOs might do better to keep a tight leash on the data fed to these models.

They proliferate laziness

Humans are very good at trusting machines, especially when the machines save work. Once LLMs prove correct most of the time, humans start trusting them all the time.

Even asking humans to double-check the AIs doesn't work too well. After people get used to the AIs being right, they drift off and assume the machines will simply stay right.

This laziness spreads through the organization. Humans stop thinking for themselves, and eventually the enterprise sinks into a low-energy stasis where no one wants to think outside the box. That can feel relaxing and stress-free for a while, until the competition shows up.

Their true cost is unknown

No one knows the correct cost of using an LLM. Yes, many APIs come with a price tag that spells out the cost per token, but there are indications that those prices are heavily subsidized by venture capital. We saw the same thing happen with services like Uber: prices stayed low until the investors' money ran out, and then they soared.

There are indications, in other words, that the current prices aren't the real prices that will eventually dominate the marketplace. Renting a good GPU and keeping it running can be much more expensive.
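A back-of-envelope comparison shows how wide the gap can be. Every figure below is a made-up assumption plugged in for illustration, not a quoted price; substitute your own numbers:

```python
# All figures are hypothetical assumptions -- replace with real quotes.
api_price_per_1k_tokens = 0.002   # dollars per 1,000 tokens
tokens_per_month = 500_000_000    # a busy internal workload
gpu_hourly_rate = 2.50            # dollars per hour for one rented GPU
gpus_needed = 4                   # to serve that load around the clock
hours_per_month = 730

api_cost = api_price_per_1k_tokens * tokens_per_month / 1_000
gpu_cost = gpu_hourly_rate * gpus_needed * hours_per_month

print(f"API bill:   ${api_cost:,.0f}/month")   # $1,000
print(f"GPU rental: ${gpu_cost:,.0f}/month")   # $7,300
```

Whether the API bill or the GPU rack wins depends entirely on utilization, and on whether today's per-token prices survive the end of the subsidies.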
It is possible to save a bit of money by running your LLMs locally, filling a rack with video cards, but then you lose the advantages of turnkey services, like paying for the machines only when you need them.

They are a copyright nightmare

There are some nice LLMs on the market already that can handle general chores like doing high school homework assignments or writing college admissions essays that emphasize a student's independence, drive, writing ability, and moral character, not to mention their ability to think for themselves.

But most enterprises don't have these kinds of general chores for AI to undertake. They need the results customized for their specific business. The basic LLMs can provide a foundation, but a great deal of training and fine-tuning is still needed.

Few have figured out the best way to assemble this training data. Some enterprises are lucky enough to have big datasets they control. Most, however, are discovering that they don't have all the legal issues settled regarding copyrights (here, here, and here). Some authors are suing because they weren't consulted about using their writing to train an AI. Some artists feel plagiarized. Issues of privacy are still being sorted out (here and here). Can you train your AI on your customers' data? Are the copyright issues settled? Do you have the right legal forms in place? Is the data available in the right format? Many questions stand in the way of creating a great, customized AI ready to work in your enterprise.

They may invite vendor lock-in

In theory, AI algorithms are generalized tools that have abstracted away all the complexity of user interfaces. They're supposed to be standalone and independent, able to handle whatever life, or the idiot humans they serve, throws their way. In other words, they're not supposed to be as rigid and inflexible as an API.
In theory, this means it should be easy to switch vendors quickly, because the AIs will just adapt. There won't be any need for a team of programmers to rewrite the glue code and do all the things that cause trouble when it becomes time to switch vendors.

In reality, though, there are still differences. The APIs may be simple, but they differ in details like the JSON structures used for invocations. The real differences, however, are buried deeper. Writing prompts for generative AIs is a real art form, and the AIs don't make it easy to get the best performance out of them. There's already a job description for smart people who understand the idiosyncrasies and can write better prompts that deliver better answers. Even if the API differences are minor, the weird differences in prompt structure make it hard to switch AIs quickly.

Their intelligence remains shallow

The gap between a casual familiarity with the material and a deep, intelligent understanding has long been a theme in universities. Alexander Pope wrote, "A little learning is a dangerous thing; / Drink deep, or taste not the Pierian spring." That was in 1709.

Other smart people have noted similar limits of human intelligence. Socrates concluded that for all his knowledge, he really knew nothing. Shakespeare thought the wise man knows himself to be a fool. The list is long, and most of these insights into epistemology apply in one form or another to the magic of generative AI, often to a much greater extent. CIOs and tech leadership teams have a difficult challenge ahead of them: to leverage the best that generative AIs can, well, generate, while avoiding the intellectual shoals that have long been a problem for intelligences anywhere, human, alien, or computational.