In November 2005, Jason Spaltro, executive director of information security at Sony Pictures Entertainment, sat down in a conference room with an auditor who had just completed a review of his security practices.
The auditor told Spaltro that Sony had several security weaknesses, including insufficiently strong access controls, which is a key Sarbanes-Oxley requirement.
Furthermore, the auditor told Spaltro, the passwords Sony employees were using did not meet best practice standards that called for combinations of random letters, numbers and symbols. Sony employees were using proper nouns. (Sox does not dictate how secure passwords need to be, but it does insist that public companies protect and monitor access to networks, which many auditors and consultants interpret as requiring complex password-naming conventions.)
Summing up, the auditor told Spaltro, “If you were a bank, you’d be out of business.”
Frustrated, Spaltro responded, “If a bank was a Hollywood studio, it would be out of business.”
Spaltro argued that if his people had to remember those nonintuitive passwords, they’d most likely write them down on sticky notes and post them on their monitors. And how secure would that be?
After some debate, the auditor agreed not to note “weak passwords” as a Sox failure.
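For illustration, the kind of complexity rule the auditor had in mind can be sketched as a short check. This is a hypothetical example of such a policy (the length threshold and character classes are assumptions), not Sony's or any auditor's actual standard:

```python
import re

def meets_complexity_policy(password: str, min_length: int = 8) -> bool:
    """Hypothetical check for the policy the auditor described:
    a mix of letters, numbers and symbols, rather than a proper noun."""
    if len(password) < min_length:
        return False
    has_letter = bool(re.search(r"[A-Za-z]", password))
    has_digit = bool(re.search(r"\d", password))
    has_symbol = bool(re.search(r"[^A-Za-z0-9]", password))
    return has_letter and has_digit and has_symbol
```

A proper noun like "Hollywood" fails this check; Spaltro's point was that passwords which pass it tend to end up on sticky notes.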
Doing the Right Thing
Spaltro’s experience illuminates a transaction that’s rarely discussed outside corporate walls. Compliance with federal, state, and international privacy and security laws and regulations often is more an interpretive art than an empirical science—and it is frequently a matter for negotiation. How to (or, for some CIOs, even whether to) follow regulations is neither a simple question with a simple answer nor a straightforward issue of following instructions. This makes it more an exercise in risk management than governance. Often, doing the right thing means doing what’s right for the bottom line, not necessarily what’s right in terms of the regulation or even what’s right for the customer.
“There are decisions that have to be made,” Spaltro explains. “We’re trying to remain profitable for our shareholders, and we literally could go broke trying to cover for everything. So, you make risk-based decisions: What’re the most important things that are absolutely required by law?” Spaltro does those, noting that “Sony is over-compliant in many areas,” and he says that Sony takes “the protection of personal information very seriously and invests heavily in controls to protect it.”
He adds that “Legislative requirements are mandatory, but going the extra step is a business decision” based on what makes business sense.
So you adjust, you decide, you weigh the issues. It’s not black and white, yes or no.
When it comes to compliance, you can, in fact, be a little bit pregnant.
When business metrics are applied to compliance, many companies decide to deploy as little technology or process as possible—or to ignore the governing laws and regulations completely.
According to “The Global State of Information Security 2006” survey conducted by CIO and PricewaterhouseCoopers, about a quarter of U.S. executives who say their companies must comply with Sox regulations admit to being noncompliant with the 2002 law. (See The Global State of Information Security 2006.) Two-thirds of U.S. companies are not compliant with the two-year-old Payment Card Industry (PCI) Data Security Standard, with guidelines (and penalties) developed by the major credit card companies to protect their customers’ credit card numbers. And 42 percent of U.S. healthcare companies admit to not complying with the almost 10-year-old Health Insurance Portability and Accountability Act (HIPAA), which requires health institutions to secure private health information.
“The dirty little secret here is that everybody tries to figure out how much risk they can assume without being embarrassed or caught,” says David Taylor, a former Gartner security analyst and now vice president for data security strategies for Protegrity, a security and privacy consultancy. “The people I regularly talk to are trying to figure out if [their security] fails, what’s the smallest amount they need to do to stay out of trouble and how they can blame someone else.”
The percentage of CIOs who admit to being noncompliant may at first be a bit unnerving, leaving the impression that a significant portion of IT executives are scofflaws. But the problem is more complicated than bad or irresponsible behavior.
What most security experts believe is that CIOs and CSOs are so overwhelmed by the demands of their jobs—running projects, innovating, keeping the lights on and putting out those ever-smoldering IT fires—that they simply don’t have the time to decipher the laws that affect them, much less the time to invest in reconfiguring systems and processes to meet regulatory requirements. (This problem is exacerbated in smaller companies. See The ROI of Noncompliance in the Mid-Market.) And make no mistake: It takes a lot of time. According to a 2006 Gartner report, IT organizations spend between 5,000 and 20,000 man-hours a year trying to stay compliant with Sarbanes-Oxley’s requirements.
Complicating the CIO’s task, even auditors frequently are unclear as to what the laws mean. Alex Bakman, founder and CTO for Ecora Software, which sells audit compliance applications, asserts that the checklist for Sox compliance that some auditors use can differ within the same company. “For Sox, how IT needs to be managed for compliance is really all over the place,” Bakman says.
But there is, thankfully, an emerging consensus on how to comply in ways that make business sense.
Sarbanes-Oxley, which became law in 2002, remains something of a mystery. “Sox is tough; it’s hard,” says Rich Mogull, a Sox analyst at Gartner. “When we look at what companies are doing, there are shifting standards, and auditors enforce the standards differently.”
What it means to be Sox compliant can be a moving target. Many confused CIOs have turned to standard to-do checklists supplied by Sox auditors or consultants. But that strategy, Spaltro argues, frequently leads to no compliance at all. “When they begin to implement [the checklist], they quickly find out how much more it costs than they thought,” he says. “Soon, they can’t keep up with the demands of completing all the items and they give up.”
Spaltro recommends CIOs and security executives sit down with outside auditors, their corporate legal staff and executives from human resources to figure out what Sox compliance means—not what it means in the abstract but what it means to their company specifically. For example, a bank’s risk in not following a strict interpretation of Sox may be higher than that of, say, an entertainment company like Sony. A bank must build a higher level of trust with its customers because it manages their money. Risks at other companies may be lower, which means compliance may require lower (and less expensive) levels of controls. “I sincerely believe that if we left it all up to the auditors to tell us what works, we wouldn’t have a business at the end of the day,” Spaltro says.
Most compliance experts agree that CIOs should at least be able to show that they have made a good faith effort to comply with the law. For example, the Sox requirement to control access to sensitive information includes deploying identity management and monitoring log files. But how do you know if you have met the standard for monitoring, and what is the standard?
For Sergio Pedro, a managing director in PricewaterhouseCoopers’ advisory services practice, monitoring doesn’t mean spending thousands on technology. Rather, he says the process is very similar to the one your CPA may ask you to follow when tracking the days you spend in an office in one state and the days you are in an office in another state: Just write it down. The person designated to monitor the log files should mark on a printed copy of the files any concerns, initial the file and file it. “Sox deficiencies basically involve not being able to produce evidence that you’ve done your job,” Pedro says. “You just have to prove that you checked the process.”
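Pedro's "just write it down" process can be sketched in code. The snippet below is a hypothetical helper (the ledger format and field names are this sketch's assumptions, not anything Pedro or PwC prescribes) that records who reviewed which log file and when, producing exactly the kind of evidence he describes:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def record_log_review(log_path, reviewer, concerns, ledger_path="review_ledger.jsonl"):
    """Append a dated, attributable record that a log file was reviewed.

    The SHA-256 digest ties the sign-off to the exact file contents,
    so the entry serves as evidence of what was checked and when.
    """
    digest = hashlib.sha256(Path(log_path).read_bytes()).hexdigest()
    entry = {
        "log_file": str(log_path),
        "sha256": digest,
        "reviewed_by": reviewer,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
        "concerns": concerns,  # empty list means reviewed, nothing flagged
    }
    with open(ledger_path, "a") as ledger:
        ledger.write(json.dumps(entry) + "\n")
    return entry
```

The point, per Pedro, isn't the tooling; the paper-and-initials version satisfies the same requirement. What matters is producible evidence that the monitoring happened.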
Two federal laws require organizations to protect the private information of customers: the Gramm-Leach-Bliley Act (GLBA) of 1999 and HIPAA. Generally, GLBA requires any organization that stores personal financial information on customers to have a comprehensive security program that identifies threats and risks, and the steps it has taken to address them. HIPAA is similar to GLBA, but instead of protecting financial information, the act requires healthcare organizations to secure health records and guarantee patient privacy.
Both laws, by design, are not highly prescriptive. The Federal Trade Commission, which enforces the so-called Safeguard Rule as part of the GLBA, states that “the plan must be appropriate to the company’s size and complexity, the nature and scope of its activities, and the sensitivity of the customer information it handles.” Which leaves a lot of room for interpretation.
And interpretation usually requires negotiation. Glen Damiani, director of IT at the money management firm United Capital Markets (UCM), says much of his job involves mediating between employees who insist they need access to certain information and the compliance officer who insists on strict access controls. Mostly, a middle ground can be worked out, he says.
For example, when Damiani was head of technology at a healthcare organization (prior to joining UCM), a clinician was transferred from the oncology department to the obstetrics unit. The clinician asked Damiani for permission to access the records of her former oncology patients. She wanted to continue to track their progress and provide them moral support. But the company’s privacy officer ruled that the clinician’s new responsibilities in obstetrics did not give her the right to access the oncology database.
Damiani negotiated a compromise in which the clinician could have access to those patients she had had direct contact with while working in the oncology unit. Strictly speaking, HIPAA does not allow such access, but Damiani argued that it would facilitate continuity of care, an important medical principle. “Compliance has got to be a give and take,” Damiani says. “The compliance officer may be dead set on not giving access, but another senior person is trying to do their job.”
Auditors are increasingly looking for guidance in these matters, Pedro says. But be advised: CIOs should document the discussions and highlight the main points of why the decision was made to allow access to certain data. That way, you can explain your reasoning to state and federal regulators when they come knocking on your door if data is lost or leaked. A clearly worded argument backing up a decision may help convince regulators that you thought about the risks, you thought about how to mitigate them and you took compliance with the law into consideration.
This is particularly pertinent when it comes to deciding whether to encrypt data. No law currently requires companies to encrypt data, but it is an effective way to protect data, and the FTC has fined companies for privacy breaches. So what to do? Pedro says a best practice has developed in the past year that recommends encryption for data that may leave the organization (data on laptops, PDAs and in e-mail) while leaving data that remains in-house (on, say, a mainframe) unencrypted.
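The practice Pedro describes amounts to a simple policy rule: encrypt data on anything portable or outbound, leave in-house systems alone. A hypothetical sketch of such a rule follows; the channel categories are illustrative assumptions, not a published standard:

```python
# Channels where data leaves the organization's direct physical control.
OUTBOUND_CHANNELS = {"laptop", "pda", "email", "usb", "backup_tape"}
# Systems that stay in-house and, per this practice, may go unencrypted.
IN_HOUSE_SYSTEMS = {"mainframe", "datacenter_db", "internal_file_server"}

def requires_encryption(destination: str) -> bool:
    """Return True if data headed to `destination` should be encrypted
    before it leaves the building."""
    dest = destination.lower()
    if dest in OUTBOUND_CHANNELS:
        return True
    if dest in IN_HOUSE_SYSTEMS:
        return False
    # Unknown destinations: fail safe and encrypt.
    return True
```

Defaulting unknown destinations to "encrypt" is one defensible reading of the practice; an organization could just as reasonably require an explicit classification instead.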
Again, you will want to document why your organization believes that this is an appropriate practice. “If you can show that you have read the pertinent regulations, and show that this is your interpretation of what the regulation says, and you can show intent to protect the data, you are more protected than those who haven’t done that,” Pedro says.
Notification: The Risk Analysis
At a 2006 security conference, Protegrity VP Taylor ate lunch next to the CISO of a large metropolitan university. The CISO told Taylor that she had received an e-mail from one of her programmers informing her that the school may have experienced a breach that may have exposed students’ personal information. The programmer was unsure if the law required the school to report the incident and asked the CISO for guidance.
Taylor asked her what she did. She said she wrote back to the programmer telling him not to do anything.
Taylor told the CISO that the university should have reported the breach. The CISO disagreed, saying, essentially, that because very few people review system log files and because only one or two people at the university understood the systems and the data in them, it was probable that the breach would go unremarked and undiscovered. “I was thinking, Wow,” Taylor recalls. “That’s a risky chance to take.”
According to Behnam Dayanim, a privacy attorney with Paul, Hastings, Janofsky & Walker, state security breach notification laws are among the most frequently ignored types of security regulation. About 35 states have passed security breach notification laws, which lay out, to varying degrees, when an enterprise needs to notify customers and clients if their private information may have been exposed to an unauthorized user. According to CIO and PricewaterhouseCoopers’ “The Global State of Information Security 2006” survey, 32 percent of U.S. organizations admit to not being compliant with state privacy regulations.
There are two possible explanations for why the noncompliance rate is so high. First, the risk of being caught is low because it has been extremely difficult to tie a specific instance of fraud to a specific breach at a specific company, says Jim Lewis, a security expert at the Center for Strategic and International Studies in Washington, D.C. (Recent lawsuits filed against TJX—owner of discount retailers TJ Maxx, Marshalls and other stores—which claim credit card numbers stolen during a security breach in 2005 and 2006 led to specific instances of fraud, may indicate that this fact of security life is changing. For more on the TJX breach, see Financial Penalties for Security Breaches Will Promote Change.)
Second, these laws tend not to be terribly specific regarding situations and requirements. For example, California’s security breach notification law, the first in the nation, does not require notification of a security breach if the private data was encrypted. However, it also does not require encryption.
For that reason, how companies protect private data has become a risk-based business decision, says Sony’s Spaltro. Sony processes about 5 million credit card transactions a month, mostly associated with its PlayStation consoles and the massively multiplayer online games it sells. Although Spaltro declines to talk about Sony’s security practices, he says that while Sony Online Entertainment is fully compliant, every company weighs the cost of protecting personal data against the cost of what it would take to notify customers if a breach occurred. Spaltro offers a hypothetical example of a company that relies on legacy systems to store and manage credit card transactions for its customers. The cost to harden the legacy database against a possible intrusion could come to $10 million, he says. The cost to notify customers in case of a breach might be $1 million. With those figures, says Spaltro, “it’s a valid business decision to accept the risk” of a security breach. “I will not invest $10 million to avoid a possible $1 million loss,” he suggests.
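Spaltro's reasoning is a textbook expected-loss comparison. A minimal sketch, using his hypothetical figures (the breach-probability parameter is this sketch's assumption, since Spaltro's framing treats even a certain breach as cheaper than hardening):

```python
def accept_risk(cost_to_harden: float, cost_if_breached: float,
                breach_probability: float) -> bool:
    """Return True if accepting the breach risk is the cheaper option.

    Compares the up-front hardening cost against the expected loss
    (probability-weighted cost of a breach).
    """
    expected_loss = breach_probability * cost_if_breached
    return expected_loss < cost_to_harden

# Spaltro's hypothetical: $10M to harden vs. $1M to notify after a breach.
# Even if a breach were certain (probability 1.0), the expected loss is
# $1M, so on these figures alone the risk is accepted.
```

The weakness in the model, as critics of this reasoning point out, is the `cost_if_breached` term: if it captures only notification costs and omits brand damage and regulatory fines, the comparison is badly skewed.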
That reasoning is “shortsighted,” argues Ari Schwartz, a privacy expert at the Center for Democracy and Technology. The cost of notification is only a small part of the potential cost to a company. Damage to the corporate brand can be significant. And if the FTC rules that the company was in any way negligent, it could face multimillion-dollar fines. In 2006, the FTC fined information aggregator ChoicePoint $15 million after the company admitted to inadvertently selling more than 163,000 personal financial records to thieves. The FTC ruled ChoicePoint had not taken proper precautions to check the background of customers asking for the information.
Crime and Punishment
How can a CIO know that the security measures he’s taken will be judged customary and reasonable by federal or state regulators? The 15 security breach cases the FTC has ruled on since 2002 offer a picture of what regulators deem reasonable (the FTC plans this spring to release a document setting out more specific guidelines), and the 14 cases the FTC has settled against companies that have experienced security breaches give a sense of what’s deemed customary. (For a look at some of the more recent cases, see When Companies Violate the Rules.)
In 2005 the FTC handed down a judgment against BJ’s Wholesale Club. BJ’s was found to have “engaged in a number of practices which, taken together, did not provide reasonable security for sensitive customer information.” The FTC ruled that BJ’s had failed to encrypt personal data transmitted over the Internet; had stored personal data after it no longer needed the information; used commonly known default passwords for access to files containing personal information; and did not use commercially available technology to secure wireless connections, detect intrusions or conduct security audits.
“We know security can’t be perfect, so we don’t expect perfection,” says Jessica Rich, assistant director of the Division for Privacy and Identity Protection at the FTC. “But companies need to try, and if you do that, you will be much better off.”
But for some, “trying” requires a lot more than receiving an “A” for effort. You’d better know what you are doing, says Joe Fantuzzi, CEO of the information security company Workshare. He says the FTC’s ruling against BJ’s was intended to send the message that if you claim to protect your customers’ personal data in your privacy statement (as BJ’s did), your security had better be up to the task.
“The point is that companies make risk management versus compliance management trade-offs all the time,” Fantuzzi says. “Just make sure you do your homework so you know you made the right trade-off.”
Allan Holmes is a Washington, D.C.-based freelancer and security expert.