Week after week, we’ve gotten used to news media reports about ever more jaw-dropping data breaches. The breach at the credit reporting firm Equifax is just the latest, and so far the highest-profile, reminder that more than 5 million personal records are lost or stolen every day. The average breach costs companies $3.6 million. CEOs have lost their jobs and reputations, and CSOs wake up each morning dreading the news that personal customer data is in the hands of hackers.
It wasn’t always like this. Twenty years ago, cyber-related threats barely cracked the top 10 security threats facing U.S. companies, let alone data-specific threats. And historically, a company’s primary concern about its data related to governance and compliance, not security.
When I recently asked the VP of IT security for a Fortune 1000 company what his approach to data security was, his response was simply “I wish I knew; it’s not my job. It’s critically important for us to be engaged, but I only get informed after the fact.”
Such responses are depressingly common in an industry that is only just grasping the full impact of data security on its business.
This is the first of a series of posts in which I explore “data friction” that results when security constraints inhibit the ability to satisfy the data needs of the business.
The Growing Value of Data
In today’s software economy, data has become one of a company’s most important assets. Consumers expect personalized experiences that businesses can only deliver by gathering, analyzing, and managing data at scale. That data can be used to drive new insights, decisions, and strategies throughout the business.
The imperative to collect and store more information about customers creates a feedback loop that’s not always virtuous. Data is stored in more places than ever before, and it contains more personal information than ever before. Both sides of the business equation derive potential benefits: companies offer a better experience and sell more products or services; customers become more loyal and engaged, and buy more of those products and services.
And while, overall, this creates greater value for businesses and customers alike, it also creates a more target-rich environment for attackers. Protecting that data is more complex than ever before.
The old standard practice of “securing the edge” by using corporate firewalls and authentication systems is no longer adequate. Increasingly, enterprises must contend with mobile devices in the hands of employees and customers, an ever-growing list of connected IoT devices, as well as public, private, and hybrid cloud infrastructure. And while we still need to do the basic blocking and tackling of verifying identities, securing the transport layer, and encrypting transactions, they’re just starting points.
Companies have increasingly focused on mitigating risks and boosting their capacity to recover once the edge has been breached. Security Information and Event Management (SIEM) systems — examples include ArcSight and Splunk — are becoming more sophisticated, using machine learning and artificial intelligence to better identify threats. But even so, while the damage can be done in minutes or hours, the average time it takes to detect and respond to a data breach, according to a global study of security breaches by the Ponemon Institute, is more than six months.
But what complicates the discussion around securing the data is the data itself. When you combine the inexorable growth in the amount of data that companies gather with the newly intricate and sometimes convoluted ways in which it’s used, you end up with a quagmire. Companies struggle to understand, let alone quantify, their risk. And while techniques like data masking help eliminate personal information from data troves, they’re useless if you can’t deliver the data to the people in your business who work with it day after day.
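To make the masking idea concrete, here is a minimal sketch of field-level masking in Python. The record fields, the fixed salt, and the salted-hash approach are illustrative assumptions, not any vendor’s actual implementation; the point is that direct identifiers are replaced with consistent tokens so the masked data can still be delivered to analysts and joined across records.

```python
import hashlib

def mask_record(record, pii_fields, salt="demo-salt"):
    """Return a copy of the record with PII fields pseudonymized.

    Each identifier is replaced with a truncated salted hash: the raw
    value is removed, but the same input always maps to the same token,
    so joins and aggregate analytics on the masked data still work.
    """
    masked = dict(record)
    for field in pii_fields:
        if field in masked and masked[field] is not None:
            digest = hashlib.sha256((salt + str(masked[field])).encode()).hexdigest()
            masked[field] = digest[:12]  # short, non-reversible token
    return masked

# Hypothetical customer record: identifiers are masked, metrics pass through.
customer = {"id": 42, "name": "Jane Doe", "ssn": "078-05-1120", "spend": 199.0}
masked = mask_record(customer, pii_fields=["name", "ssn"])
```

Production masking tools typically use policy-driven, format-preserving transformations rather than a static salted hash (which is weak for low-entropy values like SSNs); the hash here is only to illustrate consistent, irreversible substitution.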
Even if you are able to identify, secure, and deliver data, it’s extremely difficult to fully understand how it’s being used at scale, and even harder to take action against new threats. As user workflows fragment across disparate systems, semantic information must be retained and points of control re-implemented for each and every system.
These are all forms of data friction that occur when data’s inherent constraints keep it from satisfying the demands of the business. Tackling it requires a new approach that brings together data operators — those who manage data and its related systems — with data consumers including developers, data scientists, and anyone else who needs data to do their jobs. DataOps is the emerging movement that seeks to eliminate data friction through people, process, and technology.
I’ll have more to say on this in additional posts, including how data friction inhibits a successful data security strategy, and how DataOps techniques can help open new possibilities for the business.
Read more about Delphix.