When John Michael Sullivan moved to Charlotte, N.C., to help develop a mobile computer program for Lance Inc., he hung up an old plaque. Inscribed “Dr. Crime’s Terminal of Doom,” the memento celebrated Sullivan’s youthful love of the movie Indiana Jones and the Temple of Doom, and his reputation as a computer hacker who went by the handle Dr. Crime.
“I was a hacker long before being a hacker was cool,” Sullivan wrote on a webpage the FBI later found on his hard drive, describing his affection for the plaque. “More than once I was accused (falsely?) of perpetrating acts of computer crime against various systems and agencies. But regardless if I did or didn’t, I never got caught…. And although I have ‘settled in’ to a real job, Dr. Crime still lives…quietly, anonymously and discreet.”
Or not. After Sullivan was demoted at snack-food maker Lance in May 1998, he planted a logic bomb. This malicious code, set to execute on Sept. 23, 1998, the anniversary of his hire date, would destroy part of the program being written for the handheld computers for Lance’s sales force. When the bomb went off, months after Sullivan had resigned, more than 700 salespeople who rove the Southeastern United States with truckloads of Captain’s Wafers, Cape Cod Potato Chips and Toastchee crackers couldn’t communicate electronically with headquarters for days, and Lance feared the attack might cost $1 million.
The evidence Dr. Crime left is unique, but the scenario? Hardly. Whether it’s sabotage or the theft of trade secrets, a growing number of companies are learning the hard way that their biggest security risks are on the inside. Employees, contractors, temps and other insiders are trusted users. They know how a company works, and they understand its weaknesses, and that gives the occasional bad apple a chance to really make things rotten.
Rather than handling the situation internally as something to cover up, as do many companies faced with insider crime, Lance decided to act. “We wanted to send the message that these types of actions were not accepted by senior management,” said Rudy Gragnani, vice president of IS at the $583 million company, in an interview that his edgy legal department allowed him to conduct only via e-mail. “The livelihood of our sales representatives was being impacted, and we took this situation very seriously.”
In April 2001, the then-40-year-old Sullivan (who also wrote on that webpage that he’d relocated from New York to North Carolina to give his family a better quality of life) was sentenced to two years in prison without parole and ordered to pay almost $200,000 in restitution. He lost an appeal in February 2002.
Damage by insiders such as Sullivan “is an incredibly fast-growing problem,” says Patrick Gray, who worked for the FBI for 20 years until he retired in late 2001 to join Internet Security Systems, a managed security company based in Atlanta. “It’s a tough threat that CIOs are going to have to address. Whether you’re a Fortune 100 company or a three- or four-person company, you still have to deal with that biosphere that sits between the keyboard and the chair.”
Supposedly the wake-up call came in 1996, in computer sabotage’s most famous chapter, when a former systems administrator at Omega Engineering in Bridgeport, N.J., unleashed malicious code that cost the company more than $10 million. In February 2002, Tim Lloyd, 39, was sentenced to 41 months in federal prison and ordered to pay Omega more than $2 million in restitution.
But the bells are still ringing.
This past January, Cumming, Ga.-based software vendor NetSupport worked with the FBI to arrest a sales manager who allegedly offered to sell the company’s customer list to at least two competitors for $20,000.
And in March, the FBI arrested a former employee of Global Crossing on charges of identity theft and posting threatening communications on the Internet, after he allegedly posted on his website menacing messages and personal information (including Social Security numbers and birthdays) about hundreds of current and former employees at the communications company.
Those cases attract wide publicity, yet observers say they are surprised at how little companies do to minimize the risk posed by employees. “I’ll talk to my peers in other organizations, where it’s sort of, ‘We think we’re protected; there’s a guy downstairs who takes care of it,’” says Tim Talbot, senior vice president and CIO at PHH Arval, a fleet-management company based in Hunt Valley, Md., that’s a subsidiary of the Avis Group. “OK, so the guy downstairs has never made a mistake, knowingly or unknowingly?”
Many companies don’t do enough to protect against insider threats because they are leery of breaking the trust they have built with their employees. Treat someone like a criminal, the thinking goes, and he might start to act like one. The good news is that there are some easy ways to improve internal security without making honest people feel like crooks: steps that will help protect against external threats as well. Here are five things you can do.
Emphasize Security from Day One
Good security starts with whom you hire, and that’s why it’s crucial to have a preemployment screening, including reference checks, says one executive who’s been there. “You really have to know the people that you’re hiring and make sure that their interests ally with yours,” says Craig Goldberg, CEO of New York City-based Internet Trading Technologies, which successfully prosecuted two employees who, unhappy with the company, attempted extortion and then attacked the company’s systems. (Goldberg told his story at a recent CIO security forum webcast. Find it online at www.cio.com/printlinks.)
CIOs can also limit the damage any one employee can do by setting up access controls that map a person’s job function to the resources he needs to do that job. Do that from day one, and your company can avoid giving the impression that access levels have to do with him as a person; they’re simply part of a given job function. (See “Software Sentries,” Page 80, for details on the technology that can help you do this.)
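The idea of mapping a job function to resources can be sketched in a few lines. This is a minimal illustration of role-based access control, not any particular product; the role and resource names are invented:

```python
# A minimal sketch of role-based access control: permissions attach to the
# job function, never to the individual. Role and resource names are
# illustrative, not from any real system.
ROLE_PERMISSIONS = {
    "sales_rep": {"order_entry", "inventory_lookup"},
    "payroll_clerk": {"payroll_records"},
    "network_admin": {"network_config"},
}

def can_access(role: str, resource: str) -> bool:
    """An employee's access is derived entirely from the assigned role."""
    return resource in ROLE_PERMISSIONS.get(role, set())

print(can_access("sales_rep", "order_entry"))      # True
print(can_access("sales_rep", "payroll_records"))  # False
```

Because the check never mentions the person, changing someone's access is just a matter of changing the role, which reinforces the message that access levels are about the job, not about trust in the individual.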
Also, there should be checks and balances in place that minimize the damage that one IT employee could do. One person might be in charge of changing files, another in charge of changing the network fabric and a third in charge of modifying payroll records. “Most big computer systems have a log-in that might be in a generic way described as the superuser,” says Daniel Geer, CTO of managed security company @Stake in Cambridge, Mass. “If I gain the superuser power and I should not have it, the question is, How far does it extend? I’d rather not have the power to change the company invested in one person, not because I don’t trust that person, but because if their credentials are stolen, that is an uncontainable risk.”
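One common way to enforce such checks and balances is to require that no single account can both request and approve a sensitive change. The following is a hedged sketch of that separation-of-duties rule under invented names, not a description of any system mentioned in the article:

```python
# A sketch of separation of duties: the same account may never both request
# and approve a sensitive change, so one stolen credential is not enough to
# act alone. Function and argument names are hypothetical.
def apply_change(change: str, requested_by: str, approved_by: str) -> str:
    if requested_by == approved_by:
        raise PermissionError("requester and approver must be different people")
    return f"{change}: applied (requested by {requested_by}, approved by {approved_by})"

print(apply_change("modify payroll record", "alice", "bob"))
```

This is exactly Geer's point about the superuser: the goal is not distrust of any one person but containing the blast radius if that person's credentials are stolen.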
Build Security from the Inside Out
These access controls are only the first step toward reducing the emphasis on what’s known as perimeter protection, security’s equivalent of the moat around a castle. Surprisingly, more than half of companies that responded to one CIO survey last year don’t have critical information restricted to a confined area, separate from other information that requires less security. In other words, once an intruder gets over the moat, he won’t even need to pick a lock to get the crown jewels. “Some corporations run hard on the outside and soft on the inside: Once you get in, you have free access,” says Larry Bickner, vice president and information security officer at Nasdaq in New York City.
To protect its trading floor, Nasdaq takes the opposite approach, and one that experts recommend: progressive hardening from the inside out. “We break our world into various trust zones, and we control who’s within that zone or space,” Bickner says. “I don’t have access to human resources servers or systems. It’s not part of my job. We have a completely different trust space for the market system, and where those overlap, we control those connections very strictly…. Even if one layer isn’t set correctly, the other layers compensate. That layering gives you hardening. Our architecture is hardened to the point that when you’re on the inside, it’s not much easier to get at things, frankly, from being on the outside.”
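The trust-zone model Bickner describes can be pictured as a default-deny table of which zones may talk to which. The sketch below is an illustration of that layering principle only; the zone names are invented and do not describe Nasdaq's actual architecture:

```python
# A sketch of "trust zones" with default deny: a connection between two
# zones is refused unless that overlap was explicitly declared up front.
# Zone names are hypothetical, not Nasdaq's real segmentation.
ALLOWED_CONNECTIONS = {
    ("dmz", "web_tier"),
    ("web_tier", "app_tier"),
    ("app_tier", "market_data"),
}

def connection_allowed(src_zone: str, dst_zone: str) -> bool:
    # Anything not explicitly listed is denied, so a compromised outer
    # zone still cannot reach inner zones directly.
    return (src_zone, dst_zone) in ALLOWED_CONNECTIONS

print(connection_allowed("dmz", "web_tier"))     # True
print(connection_allowed("dmz", "market_data"))  # False: no direct path
```

Even if one zone is breached, the attacker faces another controlled boundary at every hop, which is the "layering gives you hardening" effect Bickner describes.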
Make Security Part of the Culture
Another key element is establishing a culture that values security. That helps keep the honest people honest and makes it easier to deal with people who cross the line. At George Washington University in Washington, D.C., the CIO and his information security officer, Krizi Trivisani, have made computer security part of the university’s code of conduct that students, faculty and staff have to read and sign once a year. “Policy is a great vehicle,” says CIO Dave Swartz. “Of course, you have to be ready to enforce the policy, and that’s the problem. What’s the hammer?” Swartz’s department forwards people who break security policies (including students who try to test hacker techniques they’ve learned in class) to the appropriate disciplinary organization, but it prefers to focus on prevention. The IT department hosts regular security forums and invites members of the legal department, compliance office, and audit, policy and student groups. “Education and awareness is a very powerful tool,” Swartz says.
CIOs who decide to implement stricter policies for employees should be doubly sensitive to educating users about reasons for the changes. “This is a classic situation where what your culture is and what you’ve done in the past lays a foundation for future efforts,” says Mitchell Marks, an organizational psychologist in San Francisco. “If you don’t explain why you are [increasing security], then people will talk about it at the coffee machine, fill in the information voids with perceptions that are probably more negative than reality [and conclude]: Leadership doesn’t trust us.”
Watch for Unusual Activity
Despite those precautions, companies also need to protect against the possibility that those levels of security will be broken. At Sony Pictures Entertainment, right before a big movie release like Spider-Man, the hacks start coming from insiders and outsiders who want to get a prerelease copy of the movie or see the stars’ salaries. That’s where the company’s intrusion detection system (IDS) steps in, by watching for unauthorized activity. Employees who poke around for inappropriate information on Sony’s network might generate an alert that lands on the desk of Jeff Uslan, director of information protection and security at the Culver City, Calif.-based company. “The system would tell me your machine address and IP address,” he says. “You might get a call from myself, saying, ‘Is there something I can help you with, because you’re trying to get into these files that you shouldn’t.’” The IDS would also help Uslan find out if a hacker had infiltrated Sony’s system and was using an employee’s credentials or computer to launch an attack.
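The core of the alerting Uslan describes is simple: compare what a user touches against what that user is authorized to touch, and record the source address when there is a mismatch. This toy version is a sketch in the spirit of that setup, with entirely invented users, paths and addresses:

```python
# A toy intrusion-detection check: an access attempt outside a user's
# authorized paths produces an alert carrying the user and source IP,
# so security staff can follow up. All names here are invented examples.
AUTHORIZED = {"jdoe": {"/projects/marketing"}}

def check_access(user, path, ip):
    """Return None if the access is authorized, else an alert record."""
    for prefix in AUTHORIZED.get(user, set()):
        if path.startswith(prefix):
            return None  # within the user's authorized area, no alert
    return {"user": user, "path": path, "ip": ip,
            "reason": "unauthorized access attempt"}

print(check_access("jdoe", "/projects/marketing/plan.doc", "10.0.0.5"))
print(check_access("jdoe", "/finance/salaries.xls", "10.0.0.5"))
```

A real IDS watches network traffic rather than file paths, but the principle is the same: the alert carries enough context (who, what, from where) for a human to make the follow-up call.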
In addition to an IDS, Oakland, Calif.-based shipping company APL uses a product called Silent Runner, from a company by the same name, to get a visual look at what’s happening on its network: a high number of FTP downloads, for example, or unusual activity in a department that is going through a painful reorganization, or even e-mails that match keyword searches. “I have a bird’s-eye view of what’s happening,” says Van Nguyen, director of information security. “I don’t necessarily look at every single one of the 11,000 employees, but when I need to I can.”
That isn’t enough for everyone, of course. Some companies, especially ones that deal with financial transactions or other sensitive information, will have to take a more extreme route and use more sophisticated monitoring and controls. (For a checklist of the internal controls at one company that deals with wads of cash, see “How Harrah’s Protects the House’s Money,” Page 78.)
Know How to Let Go
A little sensitivity when someone leaves the company can go a long way in avoiding retaliation or sabotage. (See “How to Fire People,” at www.cio.com/printlinks.) But there are technical details to take care of as well. It can take months for IT departments to painstakingly close the accounts of a former employee, usually because of poor communication with HR or because so many different accounts are controlled by different systems administrators. That’s a major problem: not only might former employees attempt to access system resources, but hackers can take advantage of inactive accounts. “We see a lot of companies that don’t have policies to cancel passwords and log-in names when somebody is terminated,” says FBI supervisory special agent David Ford, who manages a regional computer crimes office in Atlanta. “You would think that would be the first thing that would happen, but a lot of companies don’t take the basic steps you would expect.”
Until recently, the New York City-based clothing designer Josephine Chaus was no exception. When Ed Eskew became vice president of IT about three years ago, there was no formal system in place for shutting down accounts of employees who resign or are let go. Now, human resources and IT work together closely, a process that, unfortunately, had to be used when the company recently had layoffs. “The moment a person is called from their desk into HR for termination, our IT people will go to their desk and remove the CPU” and change the password for their voice mail, Eskew says. People who leave the company voluntarily may get an interim password with limited access during their notice period.
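The HR/IT handoff described above amounts to a termination checklist: walk every known account system in one pass, disable each account, and flag anything that fails for manual follow-up instead of letting it slip. A minimal sketch, with hypothetical system names, might look like this:

```python
# A sketch of an offboarding checklist: disable every known account for a
# departing employee, and report any system that could not be reached so
# no inactive account is silently forgotten. System names are invented.
def offboard(employee_id, systems):
    """Disable all accounts; return the systems needing manual follow-up."""
    failures = []
    for name, disable in systems.items():
        try:
            disable(employee_id)
        except Exception:
            failures.append(name)  # don't stop: finish the sweep first
    return failures

# Example with stand-in disable functions for two hypothetical systems.
print(offboard("e123", {"email": lambda e: None, "vpn": lambda e: None}))  # []
```

The key design point is that the sweep continues past failures, because an account on an unreachable system is exactly the kind of forgotten log-in Ford warns about.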
Sound extreme? Perhaps, but Eskew says there’s no way to tell how someone will react to being fired. “You like to think that people will behave themselves professionally, but from a security perspective, how do you know? How do you explain that you didn’t protect against that?”
But that’s not always enough, as Lance learned when “Dr. Crime” ended up behind bars. Now, says IT chief Gragnani, “when someone leaves our IT department under suspect circumstances, we will go back and review the program changes that person has implemented recently.”
It’s another prudent move for IT executives faced with securing their company’s assets. But it’s not like they have to spend all day, every day treating their colleagues as suspects.
Nasdaq’s Bickner spends 80 percent of his time getting people to do the right thing and only 20 percent making sure no one does the wrong thing. “Most of the people will do the right thing most of the time,” he says. “We’re counting on people to make the right decisions and training them to do that. And the more you succeed on average, the less you begin to see any errant behavior.”