Okay, real talk: who always drives the speed limit? I'll admit to being hasty at times and failing in this regard. There's a saying that rules were meant to be broken, but I don't really think that's true. They're meant to be followed but often aren't. It's just that when we need to get somewhere fast and shortcuts present themselves, sometimes we take them.

It looks like I'm not alone. Despite numerous stories about companies restricting generative AI use in the workplace, employees don't seem to be using it any less. Recent research conducted by Dell indicates that 91% of respondents have dabbled with generative AI in their lives in some capacity, and 71% report they've specifically used it at work[1]. Much like Shadow IT before it, Shadow AI appears to already be here, all while many organizations are still figuring out how to use GenAI and what their posture will be. I'm here to tell you that this must be at the top of everyone's list, because the costs of getting this wrong are astronomical, and there's no sitting out this new GenAI age.

A brief history lesson

With the advent of public cloud came the "swipe a credit card and go" era of Shadow IT. Business units eager to move quickly took discretionary departmental budgets and plowed them into as-a-Service offerings so they no longer needed to wait on IT. The result was a wave of innovation and the ability to bootstrap quickly around an idea, prove it out, and then scale it. Much has been written in this space about the ad hoc nature of such deployments and the subsequent shift to centralized cloud strategies and right-sizing. But the area I want to focus on is an unintended consequence of public cloud adoption that created wave after wave of data loss and exposure: cloud storage that could easily be left open to the internet, and developers unfamiliar with the architecture exposing S3 buckets to the world.
We are not short on case studies in this area (a simple Google search will yield plenty); after all, when it comes to security, it's only a matter of time before any organization is targeted. These very public failures caused brand trust erosion, regulatory oversight and penalties, customer privacy violations, and a host of other financial and societal consequences. We've come a long way since then. Fortunately, security tools, growing familiarity with cloud models, and adjustments to the cloud offerings themselves, such as alerting on misconfigurations, have reduced the occurrence of these issues over time.

If we don't learn from history, we're doomed to repeat it

Shadow AI has the potential to eclipse Shadow IT. How and why, you ask? With Shadow IT, your developers were really the only points of failure in the equation; with generative AI, every user has the potential to be one. This means you must count on everyone, from admins to executives, to make the right decision each and every time they use GenAI. That requires a high degree of trust in user behavior, but it also forces your users to self-govern in a way that can hamstring their own speed and agility if they're constantly second-guessing their actions. There is an easier way, but we'll get to that later.

Adding to this complexity, the way generative AI works introduces a double layer of exposure risk, further stacking the odds against the user. Unlike IaaS, where organizations hold the encryption keys, AIaaS by default is learning from your data. Today, vendors are beginning to offer privately hosted versions with policies that reduce this learning risk, but there is still the concern of access.
To enforce security and ensure the AI isn't compromised, these vendors retain the right to review prompts, code, and outputs; and if they retain that right, your data is stored somewhere that can potentially be accessed at government request. The second part of the risk is unique to GenAI. Unlike open S3 buckets on the internet, where you still had a chance of hiding in plain sight because the scope of what a bad actor could look for was limited by the tools at their disposal, GenAI gives bad actors the ability to analyze far greater amounts of data, meaning there's a better chance of discovery.

What's at stake?

Everything from the privacy of customer data to intellectual property is at risk. One of the best examples I can think of is the closely guarded trade secret of Coca-Cola. About 15 years ago, Pepsi was offered confidential and highly proprietary data on the recipe for Coke and did the right thing: it turned the thieves in to the FBI. What would happen if that recipe were put through a public GenAI program, or placed into a privately hosted instance where authorized personnel abused trusted access? Would Coca-Cola be able to retain its trade secret? It's hard to say. The reality is that every business is full of trade secrets and confidential private data. Most of us have signed an NDA or non-compete agreement at some point. Lawyers draft agreements to try to protect IP using the myriad protection classes granted by the locales they operate in. I'm not saying these vendors can't be trusted; far from it. But the legal responsibility is on your organization to define the right policies to protect your data.

That said, let's take a look at three prescriptive ways you can reduce your risk and protect your most valuable asset: your data.

Too important for ad hoc

First, you need to establish a centralized strategy and corresponding governance.
Generative AI is effectively a bet-the-company initiative and therefore should be an executive-level imperative. Executive leadership, ideally the C-suite, should be actively involved in defining the use cases, working with IT to create secure access, and establishing data protections that are clear to the organization. The good news is that when you centralize the process and access to these technologies, governance becomes much easier to enforce and even easier to scale across the business; the trade-off is that it takes time and effort to build. It's therefore best to establish some easy wins that can propel things forward and ensure success. Often these early use cases will center on safer expressions of AI, ones that do not pose a risk to critical data.

Data and use case classification is key

If you know what has to be protected, you can eliminate much of the threat. Data is the great differentiator, and if your data house is in order, you are in much better shape. Understand, and share practical guidance on, the categories of data that should never go into a public or hosted private cloud AI offering. Examples include trade secrets, sensitive internal processes, material still under NDA such as upcoming launches, meeting minutes, personally identifiable customer data, and more. For those types of assets, reserve AI usage for solutions where you retain complete control (i.e., on-premises deployments) or enterprise-ready AI solutions where no conversation logs are kept, the host doesn't have access to your queries, and the models are not trained on your usage.

Bringing AI to your data

AI might be the biggest tailwind we've ever seen in the discussion of cloud right-sizing and bringing workloads back on-premises. The reality is that if you control the service and bring AI to the data, there is a wealth of advantages over the hosted AI varieties.
What's more, this is the easy button for governance, employee productivity, and secure data access across your organization. As with most security conversations, it pays to think about the end-user experience and ensure it beats the alternative; that's how you get compliance. Users will love not having to second-guess what they're putting into a company-owned AI, and they'll move faster and more freely, across more use cases, and with more creativity as a result. And you? You'll no longer have to worry about attack surfaces and exposure risks you can't see.

How to get started

Many organizations will struggle to know where to start, and that's where having a strong technology partner can help. Dell works closely with customers to define AI strategies, uncover and prioritize high-value use cases, implement solutions, understand the data science, and even speed adoption with education on how to embrace prompt engineering within roles. This truly is the future of work: every employee has an opportunity to tap into these transformative technologies. Learn more about what's possible at dell.com/ai.