Amid all the conversations about how AI is revolutionizing work, making everyday tasks more efficient and repeatable and multiplying the efforts of individuals, it's easy to get a bit carried away: what can't AI do?

Despite its name, generative AI (AI capable of creating images, code, text, music, whatever) can't make something from nothing. AI models are trained on the information they're given. In the case of large language models (LLMs), this usually means a large body of text. If the AI is trained on accurate, up-to-date, and well-organized information, it will tend to respond with answers that are accurate, up-to-date, and relevant. Research from MIT has shown that integrating a knowledge base into an LLM tends to improve the output and reduce hallucinations. Far from superseding the need for knowledge management, then, advances in AI and ML actually make it more essential.

Quality in, quality out

LLMs trained on stale, incomplete information are prone to "hallucinations": incorrect results ranging from slightly off-base to totally incoherent, including wrong answers to questions and false information about people and events.

The classic computing rule of "garbage in, garbage out" applies to generative AI, too. Your AI model depends on the training data you provide; if that data is outdated, poorly structured, or full of holes, the AI will start inventing answers that mislead users and create headaches, even chaos, for your organization.

Avoiding hallucinations requires a body of knowledge that is:

- Accurate
- Up to date
- Well organized

A knowledge management (KM) approach that enables discussion and collaboration improves the quality of your knowledge base, since it allows you to work with colleagues to vet the AI's responses and refine prompt structure to improve answer quality.
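The knowledge-base integration described above is commonly implemented as retrieval-augmented generation (RAG): retrieve the documents most relevant to a query from a curated knowledge base and include them in the model's context, so answers are grounded in vetted material rather than invented. A minimal sketch under stated assumptions (the sample documents, the word-overlap scoring, and the prompt wording are all illustrative, not any product's API; real systems typically use embeddings and a vector store):

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# The knowledge base, scoring function, and prompt format are
# illustrative assumptions, not a specific vendor's implementation.

KNOWLEDGE_BASE = [
    "Stack Overflow for Teams integrates with Slack and Microsoft Teams.",
    "Answered questions are searchable and can be curated like any knowledge source.",
    "Upvoted answers are pinned as the top response to a question.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str, docs: list[str]) -> str:
    """Ask the model to answer only from retrieved context, curbing hallucination."""
    context = "\n".join(f"- {d}" for d in docs)
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

query = "How does Teams integrate with Slack?"
prompt = build_grounded_prompt(query, retrieve(query, KNOWLEDGE_BASE))
```

The assembled `prompt` would then be sent to whatever LLM you use; the key design choice is that the model's context is drawn from a maintained knowledge base, which is exactly why the quality of that knowledge base matters.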
This interaction acts as a form of reinforcement learning in AI: humans applying their judgment to the quality and accuracy of the AI-generated output and helping the AI (and the humans) improve.

Ask the right questions

With LLMs, how you structure your queries affects the quality of your results. That's why prompt engineering (understanding how to structure queries to get the best results from an AI) is emerging as both a crucial skill and an area where generative AI can help with both sides of the conversation: the prompt and the response.

According to the Gartner® report Solution Path for Knowledge Management (June 2023), "Prompt engineering, the act of formulating an instruction or question for an AI, is rapidly becoming a critical skill in and of itself. Interacting with intelligent assistants in an iterative, conversational way will improve the knowledge workers' ability to guide the AI through KM tasks and share the knowledge gained with human colleagues."

Use AI to centralize knowledge-sharing

Capturing and sharing knowledge is essential to a thriving KM practice. AI-powered knowledge capture, content enrichment, and AI assistants can help you introduce learning and knowledge-sharing practices to the entire organization and embed them in everyday workflows.

Per Gartner's Solution Path for Knowledge Management, "Products like Stack Overflow for Teams can be integrated with Microsoft Teams or Slack to provide a Q&A forum with a persistent knowledge store. Users can post a direct question to the community. Answers are upvoted or downvoted and the best answer becomes pinned as the top response. All answered questions are searchable and can be curated like any other knowledge source.
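One practical way to capture and share prompt-engineering know-how is to treat prompt structure as a reusable artifact: a template that consistently supplies role, context, task, and output format instead of an ad hoc bare question. A small sketch (the template fields and example values are illustrative assumptions, not a prescribed standard):

```python
# Sketch of a reusable prompt template. Supplying role, context,
# constraints, and output format consistently tends to produce more
# reliable answers than an unstructured question. Field names and
# example values are illustrative assumptions.

PROMPT_TEMPLATE = """You are {role}.

Context:
{context}

Task: {task}

Constraints:
- Base every claim on the context above.
- If the context does not cover the task, say "I don't know."

Respond as {output_format}."""

def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Fill the shared template so prompt structure is versioned, not retyped."""
    return PROMPT_TEMPLATE.format(
        role=role, context=context, task=task, output_format=output_format
    )

prompt = build_prompt(
    role="a senior Python reviewer",
    context="The team's style guide requires type hints on public functions.",
    task="Review the attached diff for style-guide violations.",
    output_format="a bulleted list",
)
```

Templates like this can live in the same knowledge base as any other team practice, so improvements discovered through iteration are shared rather than lost.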
This approach has the additional advantage of keeping knowledge sharing central to the flow of work."

Another Gartner report, Assessing How Generative AI Can Improve Developer Experience (June 2023), recommends that organizations "collect and disseminate proven practices (such as tips for prompt engineering and approaches to code validation) for using generative AI tools by forming a community of practice for generative-AI-augmented development." The report further recommends that organizations "ensure you have the skills and knowledge necessary to be successful using generative AI by learning and applying your organization's approved tools, use cases and processes."

Mind the complexity cliff

Generative AI tools are great for new developers and for more seasoned ones looking to learn new skills or expand existing ones. But there's a complexity cliff: past a certain point, an AI's ability to handle the nuances, interdependencies, and full context of a problem and its solution drops off.

"LLMs are very good at enhancing developers, allowing them to do more and move faster," Marcos Grappeggia, product manager for Google Cloud's Duet, said on a recent episode of the Stack Overflow podcast. That includes testing and experimenting with languages and technologies beyond their comfort zone. But Grappeggia cautions that LLMs "are not a great replacement for day-to-day developers...if you don't understand your code, that's still a recipe for failure."

That complexity cliff is where you need humans, with their capacity for original thought and their ability to exercise experience-informed judgment. Your goal is a KM strategy that harnesses the enormous power of AI by refining and validating it against human-made knowledge.

Stack Overflow for Teams is purpose-built to capture, collaborate on, and share knowledge, covering everything from new technologies like GenAI to transformations like the move to cloud.
Find out how organizations are using Stack Overflow for Teams to build secure, collective knowledge bases and scale learning across teams at stackoverflow.co/teams.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.