Data breaches over the past few years have exposed a staggering amount of personal and financial consumer information and damaged the reputations of major brands. The economic losses are significant. The average cost of a corporate breach was $11.7 million in 2017, up 23% from the previous year, according to a recent Accenture study.

Here's the really bad news: 95% of all cybercrime results from human error, according to a 2014 IBM study. Despite the advanced security technologies available today, including nascent AI applications that can take matters out of human hands, most major hacks target vulnerabilities rooted in human behavior, not just those in systems and networks.

Here are some typical human behaviors that play into the hands of cybercriminals, along with technology solutions that organizations can deploy to strengthen their defenses.

Habituation

Research has shown that waves of security warnings and the constancy of threats actually make employees less likely to respond to them. In psychology, this pattern is known as habituation. For decades, therapists have used habituation to treat phobias, according to Alex Blau, vice president at behavioral design firm ideas42.

Misplaced fear

In the wake of every high-profile global attack, security pros generally rush to prevent the same thing from happening within their organizations, while often neglecting known problems such as critical patch upgrades. This is the result of availability bias: people tend to overestimate the likelihood of something happening again based on how easily they can recall it.

Default bias

Most people never change the default security settings on their computers and don't opt into extra security features such as simple encryption, even when they know doing so would protect their data from being stolen. This pattern has given IT departments headaches for decades.

Peer enforcement

Employees tend to model peer behavior. This phenomenon, called social proof, can significantly influence behavior, especially when trying to get users to embrace security hygiene practices that feel more abstract than real.

Data security training programs may increase employee knowledge, but they rarely change behavior. The chances of success rise sharply, however, when training becomes a constant feedback system for users.

The promise of AI

A cybersecurity skills shortage is one reason why many are pinning their hopes on AI to help manage risk in concert with human intelligence. For example, MIT's Computer Science and Artificial Intelligence Lab has developed an "adaptive cybersecurity platform" called AI2 that adapts and improves performance over time by combining machine learning tools with human security analysts.

AI2 sifts through tens of millions of log lines each day, flagging anything deemed suspicious. Analysts confirm or adjust the results and tag legitimate threats. Over time, AI2's algorithms fine-tune their monitoring, learn from mistakes, and get better at detecting breaches and reducing false positives.
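The details of AI2's models are not covered here, but the human-in-the-loop feedback cycle it relies on can be illustrated with a short sketch. The Python snippet below is a simplified, hypothetical illustration, not AI2's actual implementation: the featurized log events, the IsolationForest detector, the RandomForestClassifier, and the analyst_review stand-in are all assumptions made for the example. An unsupervised pass surfaces the most suspicious events, analysts label the flagged items, and the accumulated labels retrain a supervised model so that alerts improve over time.

```python
# Hypothetical sketch of a human-in-the-loop detection cycle (not AI2's actual code).
# An unsupervised model surfaces the most suspicious events; analysts label them;
# the labels accumulate and train a supervised model that refines future alerts.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

def flag_suspicious(events: np.ndarray, top_k: int = 20) -> np.ndarray:
    """Return indices of the top_k most anomalous events (unsupervised pass)."""
    detector = IsolationForest(random_state=0).fit(events)
    scores = detector.decision_function(events)        # lower score = more anomalous
    return np.argsort(scores)[:top_k]

def analyst_review(events: np.ndarray, indices: np.ndarray) -> np.ndarray:
    """Stand-in for human review: 1 (real threat) or 0 (benign) per flagged event.
    In practice this is an analyst working through a queue, not a function call."""
    return np.random.randint(0, 2, size=len(indices))

# Feedback loop: each day's verdicts are added to the training set, and the
# supervised model is retrained so it gradually learns the analysts' judgment.
labeled_events, labels = [], []
classifier = RandomForestClassifier(random_state=0)

for day in range(30):                                  # simulate a month of log data
    events = np.random.rand(10_000, 12)                # stand-in for featurized log lines
    flagged = flag_suspicious(events)
    verdicts = analyst_review(events, flagged)

    labeled_events.append(events[flagged])
    labels.append(verdicts)

    X = np.vstack(labeled_events)
    y = np.concatenate(labels)
    if len(set(y)) > 1:                                # need both classes before training
        classifier.fit(X, y)                           # refined model for the next day's triage
```

In a real deployment the random data and coin-flip labels would be replaced by featurized logs and an analyst queue, but the shape of the loop, detect, review, retrain, is the point of the example.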
In early trials at MIT, AI2 correctly predicted 85% of cyberattacks.

Truly effective solutions will come from platforms like AI2 that blend human and machine intelligence. "You can only automate what you're certain about, and there is still an enormous amount of uncertainty in cybersecurity," says longtime security expert and author Bruce Schneier. "Automation has its place, but the focus needs to be on making people effective, not on replacing them."

To learn more, visit ServiceNow's website dedicated to educating CIOs about the benefits of machine learning. You can also read the global study.