The rise of generative AI

Generative AI has quickly changed what the world thought was possible with artificial intelligence, and its mainstream adoption may seem shocking to many who don't work in tech. It inspires awe and unease, often both at once. So, what are its implications for the enterprise and cybersecurity?

A technology inflection point

Generative AI operates on neural networks trained through deep learning, systems loosely modeled on how the human brain learns. Unlike human learning, however, generative AI can combine crowd-sourced data with the right information to produce answers at extraordinary speed: what might take an individual 30 years to process could take an eyeblink. How much of that benefit is realized depends on both the quality and the sheer volume of the data fed into the system.

It is a scientific and engineering game-changer for the enterprise: a technology that can greatly improve the efficiency of organizations, allowing them to be significantly more productive with the same number of people. But the speed with which generative AI applications such as ChatGPT, Bard, and GitHub Copilot emerged, seemingly overnight, has understandably taken enterprise IT leaders by surprise. So fast, in fact, that in just six months the popularization of generative AI tools has already reached a technology inflection point.

The cybersecurity challenges

Generative AI, including ChatGPT, is primarily delivered by third parties through a software-as-a-service (SaaS) model. One challenge this poses is that interacting with generative AI requires handing data to that third party. The large language models (LLMs) behind these tools store that data so they can respond intelligently to subsequent prompts.

This raises significant issues around sensitive data loss and compliance. Providing sensitive information to generative AI programs, whether personally identifiable information (PII), protected health information (PHI), or intellectual property (IP), needs to be viewed through the same lens as any other data processor and data controller relationship. As such, proper controls must be in place.

Information fed into an AI tool like ChatGPT becomes part of its pool of knowledge, and any subscriber has access to that common dataset. Any data uploaded or asked about can therefore be replayed, within certain app guardrails, to other parties who ask similar questions; as with other SaaS applications, user-provided data can end up in a training set and shape the responses to future queries. As it stands today, most generative AI tools have no concrete data security policies for user-provided data.

The insider threat also becomes significant with AI. Insiders with intimate knowledge of their enterprise can use ChatGPT to create highly realistic emails, duplicating a colleague's style right down to the typos. Attackers can likewise duplicate websites almost exactly.
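To make that data-loss exposure concrete, below is a minimal, hypothetical sketch in Python of the kind of pre-submission screen a DLP control applies before a prompt ever leaves the enterprise. This is not Symantec's implementation: the patterns and the screen_prompt helper are illustrative assumptions, and real DLP engines use far richer detection (data identifiers, document fingerprinting, machine learning classifiers).

import re

# Illustrative patterns only; a production DLP engine detects far more
# than these three simple signatures.
PII_PATTERNS = {
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarize this record: patient SSN 123-45-6789, contact jane@example.com"
findings = screen_prompt(prompt)
if findings:
    # Block (or redact) the prompt before it reaches the third-party AI service.
    print("Blocked upload; detected: " + ", ".join(findings))
else:
    print("Prompt is clear to send.")

The same kind of check can run in a browser extension, an endpoint agent, or an API gateway; the essential design point is that it happens on the enterprise side, before the data reaches the SaaS provider.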
What enterprises need for security

Fortunately, there are Generative AI Protection solutions, such as Symantec DLP Cloud, Adaptive Protection in Symantec Endpoint Security Complete (SESC), and real-time link protection in email security, that address these emerging challenges and block attacks in different, targeted ways.

Symantec DLP Cloud extends Generative AI Protection to enterprises, with the capabilities they need to discover, and subsequently monitor and control, interactions with generative AI tools across their organizations. Among other benefits, DLP can use AI to speed incident prioritization, helping senior analysts triage the most significant incidents and recognize those that are not a critical threat to the enterprise. The benefits include:

- Providing enterprises with the capability to understand the risks they are subject to with generative AI, on a per-tool basis.
- Allowing the safe and secure use of popular AI tools by supplying the safeguards needed to block sensitive data from being uploaded or posted, whether intentionally or inadvertently.
- Identifying, classifying, and documenting compliance for PHI, PII, and other critical data.

The bottom line: Symantec Generative AI Protection allows enterprises to "say yes" to generative AI's productivity-enhancing innovations without compromising data security and compliance.

Learn more about the implications of generative AI for the enterprise here.

About Alex Au Yeung

Alex Au Yeung is the Chief Product Officer of the Symantec Enterprise Division at Broadcom. A 25+ year software veteran, Alex is responsible for product strategy, product management, and marketing for all of Symantec.