In my previous column in May, when I wrote about generative AI uses and the cybersecurity risks they could pose, CISOs noted that their organizations hadn't deployed many (if any) generative AI-based solutions at scale.

What a difference a few months makes. Now, generative AI use has infiltrated the enterprise with tools and platforms like OpenAI's ChatGPT / DALL-E, Anthropic's Claude.ai, Stable Diffusion, and others, in ways both expected and unexpected.

In a recent post, McKinsey noted that generative AI is expected to have a "significant impact across all industry sectors." As an example, the consultancy estimates that generative AI could add $200 billion to $400 billion in annual value to the banking industry if full implementation moves ahead on various use cases. The potential value across a broader spectrum of institutions could be enormous. But while organizations look to incorporate elements of generative AI to strengthen efficiency and productivity, many are worried about controlling its usage within the enterprise.

Generative AI on the loose in enterprises

I contacted a range of world-leading CIOs, CISOs, and cybersecurity experts across industries to gather their take on the recent surge in unmanaged usage of generative AI in company operations. Here's what I learned.

Organizations are seeing a dramatic rise in informal adoption of gen AI – tools and platforms used without official sanction. Employees are using it to develop software, write code, create content, and prepare sales and marketing plans. A few months ago, monitoring these unsanctioned uses was not on a CISO's to-do list. Today it is, because they create an opaque new risk and attack surface to defend against.

One cybersecurity expert told me, "Companies are unprepared for the influx of AI-based products today – from a people, process, or technology perspective.
Furthermore, heightening the issue is that much of the adoption of AI is not visible at the product level but at the contractual level. There are no regulations around disclosure of 'AI Inside.'"

Another CISO told me that his primary concerns included the potential for IP infringement and data poisoning. He also identified the need for technologies to secure the AI engines and workflows used by the company (or its third-party partners) to support creative content development.

A high-level CISO in capital management feared "plagiarism, biased information impacting decisions or recommendations, data loss to numerous organizations, and reliance on, and economic waste from, products that don't prove short- or medium-term value."

One CIO told me that his most significant concern right now is having the company's proprietary data or content incorporated into the training set (or information-retrieval repository) of a third-party product and then presented as a work product of that company.

Privacy leaks?

Among the respondents, the clear message was that companies fear unintended data leakage. A CISO at a major marketing software firm worried about this explicitly, stating, "The real risk is that you have unintentional data leakage of confidential information. People send things into ChatGPT that they shouldn't, and it's now stored on ChatGPT servers. Maybe it gets used in modeling. Maybe it then winds up getting exposed. So I think the real risk here is the exposure of sensitive information. We have to ask ourselves, 'Is that data being adequately protected or not?'"

Another respondent provided a recent example of an engineer trying to send a source code snippet to ChatGPT that included an API key. While they were able to detect the issue, in general this could be very dangerous.
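The API-key incident above illustrates a control some security teams place in front of outbound traffic to generative AI services: scanning prompt text for secret-shaped strings before it leaves the network. Here is a minimal sketch in Python; the pattern names and the `scan_prompt` helper are illustrative assumptions, and real secret-scanning and DLP tools ship far broader, continuously updated rule sets.

```python
import re

# Illustrative patterns only -- a real deployment would use a maintained rule set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(api[_-]?key|secret|token)\s*[:=]\s*['\"]?[A-Za-z0-9_\-]{16,}"
    ),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of the secret patterns found in an outbound prompt."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]

prompt = 'def fetch():\n    api_key = "sk_live_51HxTestTestTestTest"'
findings = scan_prompt(prompt)
if findings:
    print(f"Blocked: prompt contains {', '.join(findings)}")
    # prints: Blocked: prompt contains generic_api_key
```

In practice, a check like this would sit in a proxy, gateway, or browser extension in front of the generative AI endpoint, blocking or redacting the request when a pattern matches rather than relying on employees to spot secrets themselves.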
Not all companies have security systems that can detect, block, or remediate this type of behavior.

Another information security executive cited Samsung's temporary ban of ChatGPT on its systems. The electronics company learned the hard way that content entered into ChatGPT's prompt can be exposed publicly. In this case, the input contained the source code of software responsible for the company's semiconductor equipment. What followed was a knee-jerk reaction to ban ChatGPT.

Controlling the Gen AI outbreak

What can CISOs and corporate security experts do to put limits on this AI outbreak? One executive said it's essential to toughen up basic security measures with "a combination of access control, CASB/proxy/application firewalls/SASE, data protection, and data loss protection."

Another CIO pointed to reading and implementing some of the concrete steps offered in the National Institute of Standards and Technology's Artificial Intelligence Risk Management Framework. Senior leaders must recognize that risk is inherent in generative AI usage in the enterprise, and proper risk mitigation procedures are likely to evolve.

Still another respondent mentioned that in their company, generative AI usage policies have been incorporated into employee training modules, and the policy is straightforward to access and read. The person added, "In every vendor/client relationship we secure with GenAI providers, we ensure that the terms of service have explicit language stating that the data and content we use as input will not be folded into the training foundation of the third-party service."

Corporate governance and regulatory requirements

And what about corporate governance and regulatory requirements? What can organizations do in this area?
One of the CISOs surveyed suggested that executive boards should determine what governance practices to establish to weigh the benefits of generative AI against the potential risks and legal/regulatory requirements.

In a nutshell, the same executive provided the following checklist:

In summary, enterprise employees are working with AI tools, with or without corporate blessing. To help rein in what could become a widespread information leak or other significantly damaging incident, CIOs, CISOs, and their corporations need to control generative AI use in the organization.

They will need to determine whether this takes the form of greater adherence to existing corporate security measures, augmenting those measures, and/or finding new forms of internal controls on employee use of third-party vendors.

In my next article, I'll share some processes to manage and remediate the use of generative AI in enterprise organizations. Stay tuned!