The advent of gen AI changed everything, and the pace of that change is like nothing we've seen before. The potential impacts are reminiscent of the dawn of the Internet, and are likely to be just as transformative for businesses. According to McKinsey, gen AI is poised to add up to $4.4 trillion annually to the global economy.

AI is now a board-level priority

Last year, AI consisted of point solutions and niche applications that used ML to predict behaviors, find patterns, and spot anomalies in carefully curated data sets. Today's foundation models are jacks-of-all-trades. They can code, write poetry, draw in any art style, create PowerPoint slides and website mockups, write marketing copy and emails, and find new vulnerabilities in software and plot holes in unpublished novels.

"Generative AI touches every aspect of the enterprise, and every aspect of society," says Bret Greenstein, partner and leader of the gen AI go-to-market strategy at PricewaterhouseCoopers. "With a pre-trained model, you can bring it into HR, finance, IT, customer service—all of us are touched by it."

All of PwC's clients are having this discussion, he says. "I've never seen this level of interest and excitement." And nobody has to be sold on it, he adds. Everyone wants it. "Once you have it in the workplace, you get 1,000 use cases because everyone wants to do it."

The potential for increasing productivity, or for enabling types of business not possible before, is huge. But there's also a downside: the possibility that gen AI will take companies down. The phrase "existential risk" is now everywhere—not in the sense that AI would destroy humanity, but that it would make business functions, or even entire companies, obsolete.

Regulating AI

At a recent gathering of AI leaders in Washington to brief regulators, most came out in favor of some kind of regulation of the industry. In Europe, the AI Act is on its way.
Although written before gen AI, it has provisions that apply, but it will most likely go through modifications before going into effect.

In July, New York City started enforcing new rules about the use of AI in hiring decisions. And at the end of March, Italy banned ChatGPT entirely, before unbanning it again about a month later. "Italy's knee-jerk reaction was a shot across the bow," says Greenstein. But it's a sign of what's to come. "If you take something slightly risky and make it a thousand times bigger, the risks are amplified," he says. Gen AI is that amplification, and the world's reaction to it is like enterprises and society reacting to the introduction of a foreign body. "We can't reject it, but we also can't let it take control and let it happen in an unregulated way," he says.

AI and change management

Change management has long been instrumental to the success of AI projects. It doesn't matter how accurate an AI model is, or how much benefit it'll bring to a company, if the intended users refuse to have anything to do with it. But until this year, this was a relatively manageable problem, since AI projects had limited scope. With gen AI, however, the impact on the workforce is going to be dramatically bigger.

"This is the largest change management project in history," says Greenstein. "We've never had a technology touch everyone so rapidly."

Embedded AI

Embedding AI into enterprise systems that employees were already using was a trend before gen AI came along. It made predictions and analytics broadly accessible and put the power of data in the hands of people who needed it, exactly when they needed it, and in the form that was most useful to them.

It took years for traditional AI—ML and neural networks—to get to the point where they could be embedded. Gen AI took a few months.
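Part of why embedding went so fast: for many use cases, the integration work amounts to little more than wrapping a user's request and some company context into a prompt string before sending it off. A minimal, vendor-neutral sketch of that pattern (the function and field names here are illustrative, not any particular product's API):

```python
def build_prompt(context_snippets, question, max_chars=4000):
    """Assemble a prompt that injects enterprise context ahead of the
    user's question -- the simplest way to ground a general-purpose
    model in company data without fine-tuning it."""
    context = ""
    for snippet in context_snippets:
        if len(context) + len(snippet) > max_chars:
            break  # stay within the model's context-window budget
        context += snippet.strip() + "\n---\n"
    return (
        "Answer using only the company context below. "
        "If the answer is not in the context, say so.\n\n"
        f"Context:\n{context}\n"
        f"Question: {question}\nAnswer:"
    )

# The assembled prompt is what gets sent to whichever hosted model the
# enterprise has chosen; the call itself is a single HTTPS request.
prompt = build_prompt(
    ["Refund policy: customers may return items within 30 days."],
    "How long do customers have to return an item?",
)
```

The same scaffold works whether the snippets come from a vector database lookup or are pasted in by hand, which is why so many vendors could bolt gen AI onto existing products in months rather than years.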
Today, most major AI platforms, including OpenAI's, have APIs that allow enterprises, and enterprise software vendors, to quickly add gen AI functionality to their systems.

"Making existing applications better with embedded AI is awesome," says Greenstein.

Even better is when the AI can be adapted to the unique needs of each business. Traditional ML requires a lot of data, experienced data scientists, and extensive training and tuning. Today's gen AI platforms, however, require much less data, since companies can start with general-purpose foundation models and either fine-tune them on their own data, add a vector database, or inject information and examples directly into the prompt.

And as it gets cheaper and easier to customize AIs, more companies will begin doing it for smaller use cases, says Greenstein, making it truly pervasive in the enterprise.

"Given how we think the market will develop, generative AI will get embedded into every application we use," says Nick Patience, research director specializing in AI at S&P Global Market Intelligence.

In a recent report, he estimated that gen AI software revenues will grow from $3.7 billion this year to $36 billion by 2028.

"And we may have even underestimated it," Patience says.

He's currently tracking 262 gen AI vendors, 117 of which specialize in text generation, and plans to produce a new version of the report in the next six months. Normally, he says, these kinds of reports are refreshed every two years, but this market is moving too quickly for that.

One reason gen AI is getting added to enterprise software faster than previous generations of AI is that it can potentially change the relationship between humans and software, he says.

"It has the ability to let people converse in natural language and get things done," he says. "Previously, they'd need to code or understand Excel or a query language.
Through the use of natural language, you can as a human run complex queries on data sets and other things that are too complex to do on your own."

Patience has been following this space for more than two decades and says he's never seen anything like it.

"It's incredible," he says. "Our clients, and clients I've never spoken to before, all want to know what's going on. For those who have skills, it's going to be a force multiplier. For others, it'll be a bit more of a threat. But it'll enable people to do higher-value work than they're currently able to do."

Business process automation

AI has long played a role in RPA, albeit a small one. ML was used for sentiment analysis, and to scan documents, classify images, transcribe recordings, and perform other specific functions. Then gen AI came out.

"The world has flipped since 2022," says David McCurdy, chief enterprise architect and CTO at Insight. "We've done a number of things with our customers that weren't in the toolbox 12 months ago. You now have the ability to jump over processes that have existed for years, sometimes decades, because of generative technology."

One of the best immediate use cases is summarizing documents and extracting information from material, he says.

"This wasn't possible before," he says. "Now you can go in and extract a concept, not just a word. It's transforming some of our workflows."

Enterprises still aren't extracting enough value from unstructured data hidden away in documents, though, says Nick Kramer, VP for applied solutions at management consultancy SSA & Company.

"Existing technology just doesn't surface the most relevant content consistently and easily enough," he says. "This is where large language models get me really excited.
The ability to ingest the corpus of company knowledge offers limitless possibilities."

AI vendor management

Only the biggest companies are going to build or manage their own AI models, and even those will rely on vendors to provide most of the AI they use. Microsoft, Google, Salesforce—all the major players are all in on AI, so it only makes sense to leverage that. But as gen AI touches more of a company's data, people, and processes, the vendor selection and management process becomes increasingly important.

"I wouldn't acquire operational technology that didn't have ML and AI capabilities," says Insight's McCurdy. "If the companies aren't leveraging AI, and don't have a roadmap, we won't buy their software."

This is one of the ways enterprises can avoid technical debt, he says: by investing in partners and companies that are themselves investing in AI. But even when a vendor has AI on the roadmap, or is already building it, there are still risks. "Just like in the early days of the Internet, a lot of companies will come and go," says Rob Lee, chief curriculum director and faculty lead at the SANS Institute. He's already seeing this in the cybersecurity space. "At Black Hat, there were at least a hundred companies I saw," he says. "But do they have something truly sellable?"

One thing buyers have to be careful about is the security measures vendors put in place. With new technology deployments, security often comes as an afterthought. With AI, that would be a big mistake.

"What happens if you upload your data to these AIs?" Lee asks. "You want to experiment, but if someone uploads the wrong spreadsheet to the wrong AI, you have a data breach."

Trustworthy AI

Last year, as classic AI became increasingly deployed into production, companies began to take the issue of trustworthiness more seriously. They wanted models that were reliable, free of bias, and built on ethical principles.
Plus, the AI should be transparent and understandable, since people want to know why AIs make the decisions and recommendations they do. Today, trustworthiness is a top priority for everyone from college students trying to get help with their homework to global leaders looking to avoid an AI apocalypse. Researchers, vendors, consultants, and regulators are working to come up with guardrails and ethical principles to govern how AI is trained and deployed.

"We're still in the early phases of this," says Donncha Carroll, partner in the revenue growth practice and head of the data science team at Lotis Blue Consulting. "You don't want to trust a system where you can't see or audit how it's operating, especially if it can make decisions that have consequences. The oversight piece hasn't been figured out yet."

Open-source AI

Open source has long been a driver of innovation in the AI space. Many data science tools and base models are open source, or are based heavily on open-source projects. For a few months this year, there were concerns that the new space of gen AI would be dominated by tech giants—the companies with the millions of dollars needed to train large language models (LLMs), and the data to train them on.

OpenAI's ChatGPT, Google's Bard, IBM's Watson, Anthropic's Claude, and other major foundation models are proprietary. But in February, Meta released Llama, an open-source LLM licensed for non-commercial use, which quickly became the base for many projects. Then, in July, Meta's Llama 2 came out, and this time it was licensed for commercial use: anyone could use or modify it, for free, as long as they had fewer than 700 million active daily users. Microsoft quickly pledged to support it on its Azure platform. So did Amazon on AWS. And VMware made it one of the cornerstones of its gen AI stack.

In August, Meta continued releasing models. This time, it was Code Llama, an LLM trained for writing code.
Then in September, the UAE's Technology Innovation Institute released Falcon 180B, the largest open-source model yet. It quickly rose to the top of the Hugging Face open LLM leaderboard, previously dominated by Llama 2 and its variants.

Falcon 180B was also released under a variant of the Apache 2.0 license, available for commercial use, and it works for both natural language generation and code.

Open-source models make it possible for enterprises to deploy customized AI in their own infrastructure without having to send their data to a cloud provider, and they offer greater flexibility and lower costs. Some of these models are even small enough to run on desktop computers or mobile devices.

"You're going to see more of this incredible computation power being distributed on the edge," says Lotis Blue's Carroll.

Secure, reliable data infrastructure

Both ML and gen AI depend on data. Over the past 10 years, data has grown to become a company's most valuable asset—the electricity that powers innovation and value creation. To make all this possible, data had to be collected, processed, and fed into the systems that needed it in a reliable, efficient, scalable, and secure way. Data warehouses evolved into data lakes, and then into data fabrics and other enterprise-wide data architectures. All of that is going to prove valuable, both as companies continue to expand their traditional AI projects and as new gen AI functionality comes online. For many companies, public-facing chatbots like ChatGPT aren't an option because of the lack of enterprise-grade data protection.

"There's a need to protect the data going into them," says McCurdy. "That creates an obvious barrier for some use cases until you establish that security perimeter."

For some, that means running OpenAI's model or others in private clouds, or even running open-source models on prem, depending on the company's risk profile.
Meanwhile, even after years of effort, a lot of companies still don't have their data ready for AI. According to S&P Global's 2023 Global Trends in AI survey of 1,500 AI practitioners and decision-makers, released in August, the biggest technological challenge to deploying AI is data management. And even though 69% of companies have at least one AI project in production, only 28% have reached enterprise scale.

Accelerating pace of change

Gen AI wouldn't be possible without the global connectivity afforded by the Internet and the massive amount of information so easily available in digital form, ready to be used as training data. Then there's cloud computing, SaaS, and APIs, which allow new technologies to be deployed quickly and easily, without large upfront integration costs for enterprises. So it's no surprise that gen AI's adoption rate is faster than that of any technology seen before. But beyond that, gen AI is also a technology that helps accelerate its own development.

In April, venture capitalist Yohei Nakajima wondered if it was possible to have an "AI founder" that could run a company autonomously, and asked ChatGPT to build it. It took about three hours total, with ChatGPT writing the code, a research paper, and a Twitter thread. Nakajima called it "BabyAGI," and it went viral on GitHub. It was an all-purpose agent that could be set to work on any objective, not just starting a company.

"I jokingly asked the autonomous agent to create as many paperclips as possible," Nakajima wrote in a blog post describing the project. "It found out about the AI paperclip apocalypse and started by generating a safety protocol."

BabyAGI uses OpenAI's GPT-4 API, the Pinecone vector search service, and the LangChain AI framework to figure out what tasks need to be done to achieve an objective, how to prioritize those tasks, and then to do them.
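Stripped of the LLM, vector-store, and framework calls, that agent pattern reduces to a small loop: propose tasks, work through a priority queue, and feed results back into the planner. A sketch of the control flow, with a hand-written stub standing in for the GPT-4 planning call (all names hypothetical, not BabyAGI's actual code):

```python
from collections import deque

def plan(objective, completed):
    """Stand-in for the LLM call that proposes next tasks.
    A real agent would prompt a model like GPT-4 here."""
    if not completed:
        return ["research objective", "draft plan", "execute plan"]
    return []  # this stub proposes no further tasks

def run_agent(objective, max_steps=10):
    tasks = deque(plan(objective, []))  # seed the task queue
    completed = []
    for _ in range(max_steps):          # cap steps so the loop terminates
        if not tasks:
            break
        task = tasks.popleft()          # take the highest-priority task
        completed.append(task)          # "execute" it (another LLM call in practice)
        for t in plan(objective, completed):
            if t not in completed and t not in tasks:
                tasks.append(t)         # enqueue newly proposed tasks
    return completed

steps = run_agent("start a company")
```

The `max_steps` cap matters: because each executed task can spawn new ones, a real agent with an unbounded queue can loop, and burn API budget, indefinitely.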
Similar projects include AutoGPT and AgentGPT.

Another example of AI pulling itself up by its own bootstraps this year was Alpaca, for which Stanford University researchers used one of the early Llama models from Meta—a raw model that hadn't undergone reinforcement learning from human feedback, an expensive and time-consuming process. Alpaca took a shortcut, using OpenAI's text-davinci-003, a close relative of ChatGPT, to generate 52,000 Q&A pairs, then using them to train the new chatbot. The whole process cost less than $600, the researchers said: $500 on the OpenAI API and $100 on computing. And when the team tested it, its performance was comparable to text-davinci-003's. In other words, gen AI models can write new code to improve their own performance, and they can generate data to train the next generation of models.

"The tools and toolkits are changing so quickly," says Priya Iragavarapu, VP of digital technology services at AArete, a management consulting firm. "Even before leaders and the community are able to read and understand the manual." This creates challenges for companies trying to plan ahead, she says, since it's difficult to tell what's already possible and what's still in development. "It's getting hard for leaders to delineate between the two," she says.

As a result of the fast pace of change, many firms are looking to build flexible frameworks—ones that let them drop in different models as they develop. PricewaterhouseCoopers, for example, is not tying itself to any particular LLM.

"We have a plugin architecture," says PwC's Greenstein. "We've been helping people build to whatever standards exist, but still have flexibility."

The company also has people keeping a close eye on the leading edge of development. "With AI, it's coming so quickly," he says. "We're focusing on 'no regret' moves like building an LLM-agnostic infrastructure.
That's critical now because the models are leapfrogging each other."