ServiceNow is making generative AI accessible from more areas of its low-code development platform, putting it front and center in the chatbots enterprises are starting to use to interact with their ServiceNow applications.

But as software vendors like ServiceNow, Salesforce, or SAP offer new ways to take advantage of generative AI capabilities, such as summarizing text or generating new text or images from a simple prompt, there are risks CIOs need to consider before giving the technology free rein with their data.

Only last month ServiceNow rolled out its first generative AI tools: the ServiceNow Generative AI Controller, for connecting large language models (LLMs) to its software automation platform, and Now Assist for Search, which uses those LLMs and an enterprise's own data to generate natural language responses to queries made in a virtual agent.

Now Assist for Virtual Agent

The latest addition, Now Assist for Virtual Agent, builds on that foundation to make it easier for enterprises to employ generative AI more broadly in designing and running business processes.

Like Salesforce with its Einstein GPT product, ServiceNow has chosen to adopt generative AI in a modular way, allowing CIOs to choose which LLM provider they integrate with.

In ServiceNow's case, the choice is initially limited to either OpenAI, the creator of GPT and other publicly available models, or Microsoft Azure, which also uses OpenAI technology. However, the company recently partnered with Nvidia to help enterprises develop custom LLMs trained on their own data, and has also worked with Hugging Face on an open-access LLM that enterprises will be able to use to build private models to match their own needs.

That public-private distinction is crucial, said Neil Ward-Dutton, a VP at analyst firm IDC covering AI and intelligent process automation.

"We see a lot of confusion between the public foundation models promoted by the likes of OpenAI, GPT-4 and so on, and the generative AI models that we see ultimately delivering value to corporates, which will not necessarily be public," he said.

Many uses of generative AI will only become attractive to enterprises when they can access specialized models, protected from public access, that are trained and tuned for their industry or even for their organization alone. Other applications, with no need for company-specific data or high levels of accuracy, can be built on public models.

"Vendors like Salesforce, ServiceNow, and others don't always do a good job of clearly distinguishing between these two approaches," he said. "They're all hedging their bets, partnering with the likes of OpenAI, Google, or Anthropic for access to public models, but also partnering with Nvidia, Hugging Face, and Cohere to help them implement specialized models for customers."

ServiceNow runs shared services internally on its Now Platform, and recently began piloting the use of generative AI in virtual agent conversations, according to the company's CIO, Chris Bedi. It's being used by go-to-market teams to access knowledge bases about policies and processes to facilitate contract renewals, he said.

The idea is that, rather than the virtual agent producing links to a stack of knowledge base articles that workers have to read for themselves, "We're saying 'here's the bite-sized pieces of content that can help you' at each different point in this conversation, which should boost productivity and speed," he said.

Garbage in, garbage out

Bedi, too, sees a risk in giving generative AI tools access to the wrong data, but it's not a new one.

"It's the garbage in, garbage out problem, which IT people have faced forever," he said, but with a twist. With traditional search tools, "If you had bad data, it would get surfaced," he said. "But generative AI is shining a much brighter light on it because it makes information so findable and digestible for humans. You've really got to make sure what you're indexing in the large language models is up to date."

The risk of bad data showing up, which is not unique to ServiceNow's implementation, is why IDC's Ward-Dutton recommends CIOs question their software suppliers about the origin of any generative AI elements they include, and the data they're trained on.

Enterprises will want to know whether the underlying model is public or will be private to their organization, what data it was pretrained on, and how they can protect against bias in the training data, he said.

Some software vendors are starting to add layers to their generative AI platforms to make the information they surface more trustworthy.

In time, said Bedi, even that could be handled by large language models. "You can have models looking at the models," he said. "That technology is being built up very rapidly."

How it's done makes a difference, said Ward-Dutton, who advised CIOs to ask how a vendor's trust layer, if there is one, actually ensures data quality. "Is it through managing how models are trained in the first place," he said, "or in attempting to correct or minimize problems with content that the models create after the fact?"

He advised CIOs to set up lab environments where they can test generative AI technologies safely, exploring use cases and examining vendors' claims.

That's what Bedi at ServiceNow did with his own technology, to see how it performed. CIOs looking to do the same could get better results by following his diversity play: running pilots with a mix of tenured employees and new starters.

"We thought that was important because people who joined recently will ask questions tenured people will not, because they just know it through tribal knowledge," he said. "And tenured people will spot things that feel or look funny a lot better than people who have just joined."

That enabled him to get started without worrying about whether he had a perfect content repository for the virtual agent to draw on.

A limited set of ServiceNow customers already has access to Now Assist for Search and, now, Now Assist for Virtual Agent. For everyone else, the company plans to make the features generally available as part of its Vancouver platform release in September 2023.