While there's an open letter calling for all AI labs to immediately pause training of AI systems more powerful than GPT-4 for six months, the reality is that the genie is already out of the bottle. Here are ways to get a better grasp of what these systems are capable of, and to use them to construct an effective corporate use policy for your organization.

Generative AI is the headline-grabbing form of AI that uses unsupervised and semi-supervised algorithms to create new content from existing materials, such as text, audio, video, images, and code. Use cases for this branch of AI are exploding, and organizations are using it to better serve customers, take greater advantage of existing enterprise data, and improve operational efficiency, among many other uses.

But just like other emerging technologies, it doesn't come without significant risks and challenges. According to a recent Salesforce survey of senior IT leaders, 79% of respondents believe the technology has the potential to be a security risk, 73% are concerned it could be biased, and 59% believe its outputs are inaccurate.
In addition, legal concerns need to be considered, especially around whether externally used generative AI-created content is factual and accurate, whether it infringes copyright, or whether it comes from a competitor.

As an example, and a reality check, ChatGPT itself tells us that, "my responses are generated based on patterns and associations learned from a large dataset of text, and I do not have the ability to verify the accuracy or credibility of every source referenced in the dataset."

The legal risks alone are extensive. According to the non-profit Tech Policy Press, they include risks revolving around contracts, cybersecurity, data privacy, deceptive trade practices, discrimination, disinformation, ethics, IP, and validation.

In fact, it's likely your organization already has a large number of employees experimenting with generative AI, and as this activity moves from experimentation to real-life deployment, it's important to be proactive before unintended consequences occur.

"When AI-generated code works, it's sublime," says Cassie Kozyrkov, chief decision scientist at Google. "But it doesn't always work, so don't forget to test ChatGPT's output before pasting it somewhere that matters."

A corporate use policy and associated training can help educate employees on the risks and pitfalls of the technology, and provide rules and recommendations for getting the most out of it, and therefore the most business value, without putting the organization at risk.

With this in mind, here are six best practices for developing a corporate use policy for generative AI.

Determine your policy scope – The first step in crafting your corporate use policy is to consider its scope. For example, will it cover all forms of AI or just generative AI?
Focusing on generative AI may be a useful approach, since it addresses large language models (LLMs), including ChatGPT, without having to boil the ocean across the entire AI universe. How you establish AI governance for the broader topic is another matter, and there are hundreds of resources available online.

Involve all relevant stakeholders across your organization – This may include HR, legal, sales, marketing, business development, operations, and IT. Each group may see different use cases, and different ramifications of how the content may be used or misused. Involving IT and innovation groups can help show that the policy isn't just a clamp-down from a risk management perspective, but a balanced set of recommendations that seeks to maximize productive use and business benefit while managing business risk.

Consider how generative AI is used now and may be used in the future – Working with all stakeholders, itemize all the internal and external use cases being applied today, and those envisioned for the future. Each of these can help inform policy development and ensure you're covering the waterfront. For example, if you already see proposal teams, including contractors, experimenting with content drafting, or product teams experimenting with creative marketing copy, then you know there could be subsequent IP risk due to outputs potentially infringing on others' IP rights.

Be in a state of constant development – When developing the corporate use policy, it's important to think holistically and cover the information that goes into the system, how the generative AI system is used, and how the information that comes out of the system is subsequently utilized. Focus on both internal and external use cases, and everything in between.
Requiring all AI-generated content to be labelled as such, even for internal use, ensures transparency and avoids confusion with human-generated content. It can also help prevent that content from being accidentally repurposed for external use, or acted upon as though it were factual and accurate without verification.

Share broadly across the organization – Since policies often get quickly forgotten, or not even read, it's important to accompany the policy with suitable training and education. This may include developing training videos and hosting live sessions. For example, a live Q&A with representatives from your IT, innovation, legal, marketing, and proposal teams, or other suitable groups, can help educate employees on the opportunities and challenges ahead. Be sure to give plenty of examples to make it real for the audience, such as when major legal cases crop up and can be cited.

Make it a living document – As with all policy documents, you'll want to update this one at a suitable cadence as your emerging use cases, external market conditions, and industry developments dictate. Having all your stakeholders "sign" the policy, or incorporating it into an existing policy manual signed by your CEO, will show it has their approval and is important to the organization. Your policy should be just one of many parts of your broader governance approach, whether that's for generative AI specifically, or for AI or technology governance in general.

This is not intended to be legal advice, and your legal and HR departments should play a lead role in approving and disseminating the policy, but hopefully it provides some pointers for consideration. Much like the corporate social media policies of a decade or more ago, spending time on this now will help mitigate surprises and evolving risks in the years ahead.