Artificial intelligence and automation adoption rates are rising, and investment plans are high on enterprise radars. AI is in pilots or production use at 41% of companies, with another 42% actively researching it, according to the 2019 IDG Digital Business Study.

Cybersecurity has emerged as an ideal use case for these technologies. Digital business has opened up a host of new risks and vulnerabilities that, combined with a security skills gap, are weighing down security teams. As a result, more organizations are looking to AI and machine learning to relieve some of that burden by sifting through high volumes of security data and automating routine tasks.

“We have a lot of repetitive tasks – we can build the right framework so those controls happen automatically to a point where we need a human looking at it,” Ken Foster, head of global cyber risk governance at Fiserv, said on the new CSO Executive Sessions podcast. “So, I can repurpose my smart people who I want making the decisions that I’m not comfortable AI making. If I can get that designed well enough to pull some workload off of them, we’ll start moving the needle faster.”

We asked security leaders and practitioners to describe how AI and automation technologies will come into play this year.
Here’s what they had to say.

An ounce of detection…

“2020 needs to be the year where AI in cybersecurity moves beyond the hype and becomes common practice,” says Tim Wulgaert (@timwulgaert), owner and lead consultant, FJAM Consulting.

IT and security leaders suggest that detection and identification of potential threats make ideal initial use cases for AI/automation.

“The volume of data being generated is perhaps the largest challenge in cybersecurity,” says David Mytton (@davidmytton), CTO and expert in residence, Seedcamp. “As more and more systems become instrumented — who has logged in and when, what was downloaded and when, what was accessed and when — the problem shifts from knowing that ‘something’ has happened to highlighting that ‘something unusual’ has happened.”

That “something unusual” might be irregular user or system behaviors, or simply false alarms.

“The hope is that these systems will minimize false alarms and insubstantial issues (e.g., port scanning or pings), leaving a much smaller set of ‘real’ threats to review and address,” says Michael Overly, partner, Foley & Lardner LLP.

The ultimate goal is to find those unusual incidents fast.

“The effectiveness of AI solutions this year can be measured via the time-to-discovery metric, which measures how long it takes an organization to detect a breach,” says Kayne McGladrey (@kaynemcgladrey), CISO, Pensar Development.
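The “something unusual” detection Mytton describes can be sketched, at its simplest, as a statistical baseline over instrumented event counts. This is an illustrative toy, not any vendor’s method; the function name, data, and three-sigma threshold are all assumptions for the sketch:

```python
from statistics import mean, stdev

def unusual_events(daily_login_counts, threshold=3.0):
    """Flag days whose login volume deviates more than `threshold`
    standard deviations from the historical mean (a toy z-score baseline)."""
    mu = mean(daily_login_counts)
    sigma = stdev(daily_login_counts)
    if sigma == 0:
        return []  # no variation in the history, nothing stands out
    return [i for i, count in enumerate(daily_login_counts)
            if abs(count - mu) / sigma > threshold]

# Twelve ordinary days, then one day with a burst of logins
history = [102, 98, 105, 99, 101, 97, 100, 104, 96, 103, 100, 98, 540]
print(unusual_events(history))  # prints [12]
```

Real products layer far more context (user, device, time of day) on top of this idea, but the principle is the same: model “usual,” then surface deviations instead of raw volume.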
“Reducing time to discovery can be achieved through AI’s tenacity, which doesn’t need holidays, coffee breaks, or sleep, unlike Tier 1 security operations center analysts, who also get bored reading endless log files and alerts.”

That said, differentiating the usual from the unusual will require correlating technologies around identity and user access.

“Automation will have a huge impact on user access in the coming year,” says Jason Wankovsky (@gomindsight), CTO and VP of consulting services, Mindsight. “Multifactor authentication will certainly be a growth sector. In addition, artificial intelligence will assist system and network administrators in monitoring technology environments.”

Wulgaert agrees: “Behavioral analysis of access patterns and user logs will help to identify potential security events, but it can also play a big role in supporting and optimizing multifactor authentication by adding behavior into the mix of factors. AI will become a core functionality of IAM tooling.”

Behavioral analysis will also help defend against common attacks such as malware.

“Malware attacks are only going to get worse this year,” says technology writer Will Kelly (@willkelly).
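Wulgaert’s idea of adding behavior to the MFA factor mix can be sketched as a risk score that triggers step-up authentication. The factors, weights, and threshold below are illustrative assumptions, not any IAM product’s actual logic:

```python
def login_risk(attempt, profile):
    """Toy behavioral risk score: each deviation from the user's normal
    pattern adds weight. Weights are arbitrary for illustration."""
    score = 0.0
    if attempt["country"] != profile["usual_country"]:
        score += 0.4  # unfamiliar location
    if attempt["device_id"] not in profile["known_devices"]:
        score += 0.3  # unrecognized device
    start, end = profile["active_hours"]
    if not (start <= attempt["hour"] <= end):
        score += 0.2  # outside normal working hours
    if attempt["failed_attempts"] >= 3:
        score += 0.3  # repeated password failures
    return score

def requires_step_up(attempt, profile, threshold=0.5):
    """Above the threshold, the login would be challenged with an extra factor."""
    return login_risk(attempt, profile) >= threshold

profile = {"usual_country": "US", "known_devices": {"laptop-1"},
           "active_hours": (8, 18)}
normal = {"country": "US", "device_id": "laptop-1", "hour": 10, "failed_attempts": 0}
odd = {"country": "RO", "device_id": "phone-9", "hour": 3, "failed_attempts": 4}
print(requires_step_up(normal, profile), requires_step_up(odd, profile))  # prints False True
```

In practice the weights would be learned from access logs rather than hand-set, which is where the machine learning comes in; the sketch only shows the decision shape.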
“Because AI-based anti-virus solutions focus on actions, not signatures, they can home in on the unusual behaviors that are the calling cards of malware and zero-day exploits to help mitigate such attacks.”

Human, machine, or both?

While AI and automation will play a critical role in relieving overburdened IT security teams, organizations will still require highly skilled individuals to perform high-level analysis and remediation activities – not to mention the training required for machine learning to be effective.

“We need AI/automation, but we also need humans to teach it and leverage it,” advises Omo Osagiede (@Borderless_i), director, Borderless-I Consulting Ltd.

Furthermore, the tools must be augmented by human intelligence to make correlations and decisions based on the systems’ output.

“Although automation and machine learning will improve efficiency, human expertise, logical thinking, and creativity will be further valued to deploy and effectively use new technology, as well as deter emerging threats,” says Caroline Wong (@CarolineWMWong), chief strategy officer, Cobalt.io.

What’s lurking and what’s ahead

There’s a legitimate worry about AI and automation: that “threat actors will also seize automated technology to conduct more widespread, pervasive attacks,” says Wong.

Hackers are already experimenting with these technologies to break through organizational defenses.

“Artificial intelligence is a cybersecurity double-edged sword,” says Robert Siciliano (@RobertSiciliano), chief security architect, ProtectNow. “AI can learn, adapt and hack faster and smarter than current conventional penetration tools or vulnerability scans.”

“Always count on thieves to use any means at their disposal to bypass security controls within an organization.
This includes AI, which can aid criminals in analyzing cyber defense mechanisms and behavioral patterns of employees in order to circumvent security,” says Scott Schober (@ScottBVS), CEO, author, and cyber expert. “Adversarial machine learning will be used by criminals to fool defensive AI by flooding training models with malicious data.”

“AI requires training, including massive amounts of data and simulated attacks. It cannot defend against real threats until it can identify them with some degree of precision,” Schober adds. “Only after AI can successfully identify real threats can it begin to effectively defend networks from both human and AI attacks. This approach does not feel very proactive, but it is necessary to win the long game of cyber defense.”

As AI and automation gain traction, expect to see advances that play to that long game of defense.

“Ongoing developments in machine learning and natural language processing will improve our ability to analyse threat actor behaviour within the context of intent, opportunity and capability,” says Osagiede. “However, to really leverage AI-driven improvements in the quality of threat intelligence, automation must also improve the orchestration (or acceleration) of aspects of incident response, freeing up human security analysts to focus on more strategic defence measures.”

Wulgaert adds: “We can expect some major steps forward in data protection. AI can help data protection solutions to support, correct, or even prevent end-user behavior and as such prevent data leakage, unauthorized access, etc. Last but not least, AI will continue to make its mark in threat analysis and, I guess, become a minimum requirement for any good cybersecurity threat detection solution.”

For more on this topic and other issues that are keeping CISOs up at night, listen to the new CSO Executive Sessions podcast, hosted by CSO VP/Publisher Bob Bragdon.