By Matt Kraning, CTO, Cortex

Artificial intelligence (AI) and machine learning (ML) are terms heard everywhere across the IT security landscape today, as organizations and attackers alike seek to leverage these advancements in service of their goals. For bad actors, it's about breaking down defenses and finding vulnerabilities faster. But what value can AI and ML offer when you're working to secure an organization?

It would be great to say that these technologies are an end in themselves for your cybersecurity and that merely adopting them means your organization is fully protected. But it's not that simple. Not all uses of AI and ML are created equal. And, spoiler alert, it's not all about using the latest algorithms.

However, to meet the challenges and speed of today's threat landscape, AI and ML are vital parts of a holistic security solution, one focused on the ultimate outcome of preventing every type of attack you can and responding as fast as possible to the ones you can't.

AI alone is not an answer

Artificial intelligence itself is not a differentiator for security. There are many AI frameworks and models in common use today, and generally speaking they come from academia as open-source, public implementations available to everyone. So it's not the AI framework that makes the difference; what differentiates is how the AI is used and what data is available for it to learn from.

What makes AI better and smarter for cybersecurity?

Regardless of the purpose, AI that learns how to act via machine learning needs high-quality data, and as much of it as possible, to be effective. It's through that abundance of good data that AI comes to understand possible scenarios.
The more real-world data it acquires, the smarter it becomes and the more experience it can leverage.

So, think about this through the lens of cybersecurity. Learning from just one deployment or threat vector isn't enough. What's needed is a solution that learns from all deployments, a tool that leverages information from all of its users rather than a single organization. The bigger the pool of environments and users, the smarter the AI. To that end, you also need a system that can handle both large volumes and many different kinds of data.

AI is about more than simply doing math with a computer. While data is a critical component for AI to be effective, AI and ML themselves also need to be baked into operational processes. They should not be thought of as stand-alone technologies but as enabling technologies that bring value to security processes and operations.

The most successful AI techniques combine large-scale statistical pattern matching, learned via ML, with other techniques, such as encoded domain knowledge, to form a hybrid system. Statistical techniques derived solely from ML are generally unable to adapt to newly developed, previously unseen threats, which by definition have little to no baseline statistics associated with them. Domain expertise, by contrast, can be leveraged to create logic (often partly derived from large-scale data analysis) that effectively prevents and detects specific attacker tactics and techniques.

However, aggregating these insights using expert systems alone results in unbalanced and skewed error rates across deployments.
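As a toy illustration of this hybrid idea, the sketch below pairs a learned statistical score with expert-written domain logic. Everything here is hypothetical: the event fields, the thresholds, and the fixed baseline numbers standing in for a trained model are assumptions for illustration, not any product's actual detection logic.

```python
from dataclasses import dataclass

# Hypothetical event record; the field names are illustrative assumptions.
@dataclass
class Event:
    process_name: str
    bytes_out: int
    dest_port: int

def ml_score(event, baseline_mean=5_000.0, baseline_std=2_000.0):
    """Statistical component: a z-score of outbound bytes against a learned
    baseline (fixed numbers here stand in for a trained model)."""
    return (event.bytes_out - baseline_mean) / baseline_std

def domain_rules(event):
    """Domain-knowledge component: expert-written logic that flags known
    attacker tooling regardless of how statistically common it is."""
    score = 0.0
    if event.process_name in {"mimikatz.exe", "psexec.exe"}:
        score += 3.0  # known credential-theft / lateral-movement tooling
    if event.dest_port in {4444, 5555}:
        score += 2.0  # ports often associated with reverse shells
    return score

def hybrid_verdict(event, threshold=3.0):
    """Combine both signals so statistical outliers (caught by ML) and novel
    but recognizable tactics (caught by rules) both raise alerts."""
    total = ml_score(event) + domain_rules(event)
    return ("alert" if total >= threshold else "benign", total)
```

For example, `hybrid_verdict(Event("mimikatz.exe", 5000, 443))` alerts on domain knowledge alone even though the traffic volume is statistically unremarkable, which is exactly the gap a purely statistical system would miss.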
What's needed is an AI system that uses statistical insights from ML together with domain-driven insights from other parts of the system, one that can generalize to novel attacks while maintaining consistently low error rates for all.

The value AI and ML truly provide for cybersecurity

At a fundamental level, using AI and ML well in your organization's security enables security operations center (SOC) teams to do far more, and do it more effectively, with fewer people. It's a force multiplier that strengthens an organization's capacity and puts analysts' skills toward the work that actually requires their experience.

A common use case for AI and ML in security is establishing a baseline of normal operations and then alerting a team to potential anomalies. AI and ML can also improve operational effectiveness by identifying the mundane tasks that people perform over and over; the technology can then create or suggest automation playbooks that save time and resources.

AI and ML also help inform and power automation, which is the key to scalability in environments where staff and resources are always constrained. Every SOC today needs to address more threats, and more sophisticated threats, with fewer people. At the end of the day, the goal of AI and ML is to deliver a good security outcome in a way that makes rapid use of very scarce resources.

How AI and ML can improve security outcomes

With security operations, there is never just one problem to solve, but rather a series of problems that are often coupled. With AI and ML helping to improve automation and remove manual processes across security operations, it becomes possible to prevent more risks from becoming security incidents.
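The baselining use case described earlier can be sketched minimally: learn what "normal" looks like from history, then alert on large deviations. This is a deliberately simple illustration under assumed numbers (an invented series of hourly login counts and a three-sigma threshold); real systems baseline many signals per user, host, and application.

```python
import statistics

def build_baseline(history):
    """Learn normal behavior: the mean and standard deviation of a series."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value, mean, std, z_threshold=3.0):
    """Alert when an observation sits more than z_threshold standard
    deviations from the learned baseline."""
    if std == 0:
        return value != mean
    return abs(value - mean) / std > z_threshold

# A week of "normal" hourly login counts (made-up numbers for illustration).
hourly_logins = [12, 9, 11, 10, 13, 8, 12, 11]
mean, std = build_baseline(hourly_logins)

print(is_anomalous(11, mean, std))   # typical hour -> False
print(is_anomalous(90, mean, std))   # burst worth alerting on -> True
```

The same shape of logic, run continuously across a large pool of deployments, is what lets an alert fire on behavior that no single organization has seen before.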
Preventing more risks also means the organization can respond more effectively, because it is responding to fewer actual security incidents.

AI and ML give you the benefit of focus and the power to scale with the threat landscape by leveraging the same tools as the attackers, strengthening your organization's overall security posture.

To learn more, visit us here.

About Matt Kraning:

Matt is the CTO of Cortex at Palo Alto Networks. He is an expert in large-scale optimization, distributed sensing, and machine learning algorithms run on massively parallel systems. Prior to co-founding Expanse, Matt worked for DARPA, including a deployment to Afghanistan. Matt holds PhD and Master's degrees in Electrical Engineering and a Bachelor's degree in Physics, all from Stanford University.