Arthur Riel says he was just doing his job.
When he was hired by Morgan Stanley in 2000 and put in charge of the $52 billion financial company’s e-mail archiving system, gaining access to its most sensitive corporate communications, the company was already embroiled in litigation over its e-mail retention policies. That suit would end in a landmark 2005 judgment against the bank, which awarded $1.57 billion in damages to financier Ronald Perelman. (In March 2007, Morgan Stanley won an appeal to Florida’s District Court of Appeal.)
It was part of Riel’s $500,000 a year job, he says, to make sure that would never happen again.
To do that, Riel had what he calls “carte blanche to go through e-mail.” What he says he discovered reading company e-mails throughout 2003 were what he construed as dubious business ethics, potential conflicts of interest and sexual banter within Morgan Stanley’s executive ranks that, he says, ran contrary to the bank’s code of conduct.
Based on his reading of executive e-mails, most notably CTO Guy Chiarello’s, Riel alleged that the e-mails showed the improper influence of Morgan Stanley’s Investment Banking division in how the IT department, with its multimillion-dollar budget, purchased technology products; the improper solicitation of tickets to New York Yankees–Boston Red Sox baseball games and other high-profile sporting events from vendors such as EMC; and the influencing, through one of Chiarello’s direct reports, of the outcome of Computerworld magazine’s Smithsonian Leadership Award process, of which Morgan Stanley was a sponsor. (Computerworld is a CIO sister publication.) “I reported what was basically a kickback scheme going on in IT,” Riel says.
E-mail exchanges that contained sexual banter and involved Riel’s boss, CIO Moira Kilcoyne, added to Riel’s conviction that something was wrong at the top. Believing, he says, that he was doing his duty, Riel claims to have sent hard copies of the offending e-mails to Stephen Crawford, Morgan Stanley’s then-CFO, on Jan. 15, 2004, anonymously via interoffice mail.
Riel’s superiors vigorously dispute his story.
First, according to a Morgan Stanley spokesperson, the company asserts that Riel was never authorized to monitor, read or disseminate other employees’ e-mails “as he saw fit.” Second, the spokesperson denies that a package of e-mails was either sent to or received by Crawford. And third, after conducting an internal investigation, the company maintains that it found no evidence warranting disciplinary action against anyone identified by Riel.
On Aug. 18, 2004, moments after Riel’s BlackBerry service was shut off, Kilcoyne, along with a vice president of HR, called Riel into her office. She told him that he was being placed on administrative leave with full pay. Morgan Stanley security searched his office and eventually found more than 350 e-mails on his PC, e-mails of which Riel was neither the writer nor the intended recipient.
On Sept. 27, 2005, 13 months after being placed on leave, Riel was “terminated for gross misconduct,” says the Morgan Stanley spokesperson.
Riel filed a $10 million Sarbanes-Oxley whistle-blower suit and a $10 million federal defamation suit against Morgan Stanley. In June 2006, the Department of Labor dismissed the whistle-blower suit and said it had found no cause to believe that Morgan Stanley had violated any part of the Sarbanes-Oxley Act. It also found that Morgan Stanley had “terminated other employees in the past for similar misconduct.”
In February 2007, a federal judge dismissed seven of the eight complaints Riel had filed in his suit. (A small issue concerning compensation was uncontested.) In a statement, Morgan Stanley said that the dismissal of the seven complaints and the whistle-blower suit “further confirms that Arthur Riel’s allegations are without any legal or factual merit.”
Today, in light of everything that transpired, Riel says he learned a lesson that all CIOs should heed: “It’s critical that IT departments determine a policy for who should have access to what.” During his time at Morgan Stanley, he claims, “there was no policy.”
With Power Comes Responsibility
As the need to broaden access to systems and applications increases due to business and regulatory demands, so does the potential for malfeasance, whether it’s your network admin testing the corporate firewall on his own time and inadvertently leaving it open, a salesperson accessing a customer’s credit card information or a rogue help desk staffer hell-bent on sabotaging your CEO by reading his e-mail.
Like good governments, IT departments need checks and balances, and they need to marry access with accountability. A December 2006 Computer Emergency Readiness Team (CERT) study on insider threats found that a lack of physical and electronic access controls facilitates insider IT sabotage. The situation is even more critical now because new, widely deployed applications for identifying and monitoring employee behavior have thrust IT into what was formerly the domain of HR and legal departments. Tom Sanzone, CIO of Credit Suisse, says he works “hand in glove” with HR, legal, compliance and corporate auditors, and has formalized an IT risk function to ensure that all access policies are consistent and repeatable on a global scale. “Those relationships are very important,” he says. (For more on building those relationships, see “CIOs Need Business Partners To Achieve Security Goals.” )
Many CIOs have discovered that their new policing role presents the same challenges faced by the men and women who wear blue uniforms: If people can’t trust the police—or if something happens that damages that trust—then whom can they trust? (For how to repair trust once it’s compromised, see “Maurice Schweitzer Addresses the Importance of Truth and Deception in Business.”)
“If IT does something that they shouldn’t, then the general employee thinks, I’m going to find a way to get around the monitoring because we can’t even trust the people in IT,” says David Zweig, an associate professor of organizational behavior at the University of Toronto at Scarborough. “It’s a cycle of increasing deviance, which, unfortunately, could create more monitoring.”
At Network Services Company (NSC), a distributor in the paper and janitorial supply industry, CIO Paul Roche asserted control over how and when his IT department can access employee systems and, working with HR and legal, he has developed a policy for dealing with suspected employee infractions. For example, the IT policy states that IT personnel can’t start snooping around employees’ PCs without prior HR approval. “Employees know we’re not going to look the other way,” says Roche.
Any CIO’s mettle—no matter how rock-solid his policy or relationships—will be tested when one of his own crosses the line and breaks the trust between users and the IT department. “The expectation has to be that if you’re going to give someone authority, at some point it will be misused,” says Khalid Kark, a senior security analyst at Forrester Research. “And who will guard the guards?”
Bad Guys and Do-Gooders
Despite Riel’s assertion that Morgan Stanley had no policy for which systems and e-mail accounts he could access, Morgan Stanley says Riel was never authorized to do what he did. (No one from Morgan Stanley’s IT department was made available for this article.)
Morgan Stanley isn’t alone in having to deal publicly with renegade IT employees. Wal-Mart disclosed last March that over a four-month period one of its systems technicians, Bruce Gabbard, had monitored and recorded telephone conversations between Wal-Mart public relations staffers and a New York Times reporter. “These recordings were not authorized by the company and were in direct violation of the established operational policy that forbids such activity without prior written approval from the legal department,” Wal-Mart said in a statement. In addition, Wal-Mart revealed that Gabbard had “intercepted text messages and pages, including communications that did not involve Wal-Mart associates,” which the company maintains “is not authorized by company policies under any circumstances.” Gabbard, who was fired, claimed in an April Wall Street Journal article that his “spying activities were sanctioned by superiors.” Wal-Mart says that it has removed the recording equipment and related hardware from the system. “Any future use of this equipment will be under the direct supervision of the legal department,” Wal-Mart stated.
In February, the Massachusetts Department of Industrial Accidents (DIA) disclosed that Francis Osborn, an IT contractor, had accessed and retrieved workers’ compensation claimants’ Social Security numbers from a DIA database. According to court documents, Osborn accessed 1,200 files and opened credit card accounts using three claimants’ information, charging thousands of dollars to those fraudulent accounts. In a statement, the DIA commissioner said the department was “conducting a thorough review of all security procedures.” Osborn was fired, arrested and charged with identity fraud.
Other incidents, however, are less egregiously criminal and therefore harder for CIOs to evaluate and handle. In February 2006, New Hampshire officials announced that they had discovered password-cracking software (a program called Cain & Abel) planted on a state server. Cain & Abel potentially could have given hackers visibility into the state’s cache of credit card numbers used to conduct transactions with the division of motor vehicles, state liquor stores and the veterans home. Douglas Oliver, an IT employee who in one news report referred to himself as the state’s “chief technical hacker,” admitted to media outlets that he had installed the program, saying he was using it to test system security. He said he did so with state CIO Richard Bailey’s knowledge. (Bailey did not respond to repeated requests for an interview.) Oliver was placed on paid leave during an investigation that involved the FBI and the U.S. Department of Justice.
On April 4, 2006, state officials announced that the Cain & Abel program had never been turned on and that it was “very unlikely” that any credit card information had been exposed. Oliver, who had never been named as the IT worker responsible for the incident, was invited to return to his job on April 25, 2006.
A more highly publicized incident occurred at Sandia National Laboratories in New Mexico. After a series of hacks on the lab’s network in 2004, Shawn Carpenter, a Sandia network security analyst, launched his own investigation. He eventually linked the attacks to a Chinese cyberespionage group and also discovered that U.S. government documents had been stolen. He shared his findings with the Army Counterintelligence Group and the FBI. In response, Sandia fired Carpenter in January 2005 for, as reported in Computerworld, “inappropriate use of confidential information.” But in February 2007, a New Mexico jury awarded Carpenter $4.3 million in his wrongful termination suit and in the process transformed him from a rogue IT worker into a national hero. (Sandia is appealing the verdict.)
The moral is that whether they’re dealing with a malcontent, a crook or a conscientious employee doing his job to the best of his abilities, CIOs need to be alert to risks and threats in their own backyard. (For signs that there could be trouble in your department, see “Six Signs IT Staffers Could Be Ready To Cause Trouble.”)
“It’s not the external hacker you need to worry about so much,” says John Halamka, CIO of CareGroup and Harvard Medical School. “It’s the internal employees who have legitimate access to the systems and can do most harm.”
The Sinful Six
Since the dawn of the Internet age, IT has been aware that the Web is a Pandora’s box filled with tools that anyone with a PC, a network connection and a devious mind can employ to make mischief. But now regulations such as Sarbanes-Oxley, the Health Insurance Portability and Accountability Act (HIPAA), Gramm-Leach-Bliley and Payment Card Industry (PCI) data security standards have focused the non-IT executive’s attention on what evils can lurk alongside the business benefits IT can provide. (For more on IT’s regulatory compliance responsibilities, see “Your Guide to Good Enough Compliance.”)
“Business management has become much more aware that IT risk is business risk,” says Richard Hunter, a vice president and expert on security and privacy with Gartner. Consequently, even companies in lightly regulated industries have begun to pay more attention to their liabilities and their user management policies. For employees everywhere, the message is (or should be) clear: “You don’t have privacy where corporate life is concerned,” Hunter says. And “corporate security” will always trump “user privacy.”
This, in turn, has created a more authoritative role for IT departments as they monitor and dictate what employees can and can’t do with the technology they provide. A list has emerged in IT circles, “The Sinful Six,” describing the types of Internet sites that can’t be viewed at work: those containing pornography, gambling, tasteless material, hate material, violence and illegal activities. Roche says visiting any of these sites, along with any kind of site that is a danger to PCs (exposing them to malware and spyware), is in direct violation of NSC’s HR policies.
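In practice, enforcing a list like the Sinful Six usually amounts to a category lookup against a site database maintained by a filtering vendor. A minimal sketch of that check follows; the category names, hostnames and the inline site database here are hypothetical stand-ins for a real vendor feed.

```python
# Minimal sketch of category-based URL blocking. SINFUL_SIX mirrors the six
# categories named in the article; SITE_CATEGORIES stands in for a vendor feed.
SINFUL_SIX = {"pornography", "gambling", "tasteless", "hate", "violence", "illegal"}

SITE_CATEGORIES = {
    "example-casino.test": "gambling",   # hypothetical hosts for illustration
    "example-news.test": "news",
}

def is_blocked(host: str) -> bool:
    """Block any site whose category falls within the Sinful Six."""
    return SITE_CATEGORIES.get(host, "uncategorized") in SINFUL_SIX

print(is_blocked("example-casino.test"))  # True
print(is_blocked("example-news.test"))    # False
```

Unknown hosts fall through as “uncategorized” and are allowed here; a stricter policy could just as easily block anything the vendor feed cannot classify.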
New technologies have also made it easier for IT to identify who and where the violators are. According to the American Management Association’s 2005 electronic monitoring survey, 76 percent of the 526 companies surveyed said they conduct some form of electronic monitoring. In a recent paper written by the University of Toronto’s Zweig, it’s estimated that more than 40 million U.S. employees are subject to some type of electronic performance monitoring, “such as counting keystrokes, listening in on phone calls, tracking e-mail and even video-based monitoring of availability.” (For a list of monitoring tools, see “Five Enterprise Content Monitoring Tools.” ) But even though a recent Harris Interactive study of U.S. office workers found that most employees don’t let the knowledge that they’re being monitored interfere with their nonwork use of the Internet (more than half of respondents said they send and receive personal messages on their work e-mail accounts), CIOs do not want to be thought of as IT cops. “You don’t want to be the bad guy who’s enforcing the policy,” says CareGroup CIO Halamka.
The ROI of Privacy
In conversations with CIOs, Forrester’s Kark says he’s discovered that most companies “don’t want to put in draconian measures to say that [their company] is going to monitor everything, even though they have the right to do so.” In companies that build their cultures around more user-friendly privacy measures, Kark says he has found a higher level of trust between users and management.
According to Zweig’s research, monitoring “continues to violate the basic psychological boundaries between the employer and employee—one that is predicated on some minimal level of privacy, autonomy and respect. Once this boundary has been violated, a host of negative implications are likely, ranging from dissatisfaction and stress to resistance and deviance.” Therefore, he says, it’s critical for a company that wants to engender a culture of collaboration and trust to make it perfectly clear to all employees, both inside and outside IT, just what IT will and, more important, will not do. “It should be communicated to everyone in the organization that the IT department does not have carte blanche,” Zweig says. “It isn’t open season on people.”
How to Monitor the Monitors
And that brings us back to the IT department—the people entrusted with the access, the know-how and a front-row seat for all the monitoring action. In organizations where there is “open season” on employees’ digital wakes, CIOs and analysts say there’s usually an unregulated “cowboy culture” within the IT department and, most likely, little trust and respect between management and users. In such organizations, Forrester’s Kark says he finds that more IT employees have access to a system than is actually appropriate. At one company, for example, he determined that 32 employees (including the CIO) had access to a very sensitive area of the company’s systems when, in fact, only three people actually needed the access to do their jobs; the other 29 were superfluous and therefore potential risks. Kark calls that situation “typical.”
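The gap Kark describes surfaces readily in a periodic access review: compare who has been granted access to a system with who actually needs it for their role, and flag the difference. A toy sketch of that comparison, with hypothetical account names, mirrors his 32-versus-3 example:

```python
# Toy access-recertification check: flag accounts holding access they don't need.
granted = {f"user{i:02d}" for i in range(32)}   # 32 accounts currently granted access
needed = {"user00", "user01", "user02"}          # only 3 roles actually require it

# Set difference yields the accounts to revoke and review.
superfluous = granted - needed
print(len(superfluous))  # 29
```

Real reviews pull the “granted” set from the access-control system and the “needed” set from role definitions signed off by the business, but the core check is exactly this set difference.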
Even though anyone with PC access can wreak havoc on your systems, research from a CERT Insider Threat study shows that technology sabotage almost always comes from within the IT ranks. In 49 incidents of IT-enabled sabotage examined, 86 percent of the perpetrators held technical positions, and 90 percent of them had been granted administrator or privileged system access when they were hired.
“I worry about the trusted person,” says Credit Suisse’s Sanzone. “To run an organization like this you have many trusted individuals that have access to sensitive things as part of their job. Probably, your risk is as high or if not higher [with the trusted person] than with any other.”
But taking some of that power and access away from IT employees can be a delicate procedure. In the CERT study, 92 percent of all of the insiders attacked their organizations following a negative work-related event, such as a dispute with a boss, a demotion or a transfer. “The people who have had privileged access have enjoyed the freedom to do whatever they wanted to do,” Kark says. “If you put in control where there was no control before, there’s going to be some resistance.”
Network Services Company’s Roche was “somewhat seriously concerned” about that kind of resistance when he instituted a new policy for how his IT staffers would monitor employees’ computers. Each IT staffer received a specific ID and password for tapping into systems for monitoring and “running a report” on an employee. Each monitoring event could be initiated only by HR, and every one would be logged. Roche credits the time he took to explain to everyone why he was instituting the policy and why it was important to the company for the fact that he didn’t get the pushback he anticipated. That, and the perception that “they wouldn’t want someone doing it to them.”
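The mechanics of a policy like Roche’s reduce to two controls: no monitoring event proceeds without an HR authorization, and every attempt, allowed or not, is written to an audit log under the staffer’s individual ID. The sketch below illustrates that gate; the class names, field names and ticket format are hypothetical, not NSC’s actual system.

```python
# Sketch of an HR-gated, audit-logged monitoring workflow (all names hypothetical).
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class MonitoringRequest:
    staffer_id: str          # the IT staffer's individual monitoring credential
    target_employee: str     # whose system the report is being run against
    hr_ticket: Optional[str] # HR authorization reference; None means no approval
    reason: str

@dataclass
class MonitoringGate:
    """Refuses any monitoring event lacking HR approval, and logs every attempt."""
    audit_log: list = field(default_factory=list)

    def run_report(self, req: MonitoringRequest) -> bool:
        approved = req.hr_ticket is not None
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "staffer": req.staffer_id,
            "target": req.target_employee,
            "hr_ticket": req.hr_ticket,
            "reason": req.reason,
            "allowed": approved,
        })
        return approved

gate = MonitoringGate()
ok = gate.run_report(MonitoringRequest("it-042", "jdoe", "HR-1187", "harassment complaint"))
blocked = gate.run_report(MonitoringRequest("it-042", "jdoe", None, "curiosity"))
print(ok, blocked, len(gate.audit_log))  # True False 2
```

Note that the denied attempt is logged too: recording who tried to look, not just who succeeded, is what lets management watch the watchers.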
Kark says there are three key things that CIOs need to make certain (and communicate to their staffs) when rolling out these types of policies. First, make it clear who in IT has ownership and responsibility for each part of the process when any type of event is triggered by the HR, legal, physical security or compliance departments. Second, there needs to be a decision tree for how IT employees will respond to each incident and investigation, with a detailed analysis of different types of scenarios and the resulting procedures. And third, CIOs and their staffs should run simulations and tests on how the processes will play out when an event happens.
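Kark’s second point, the decision tree, can be captured in something as simple as a lookup table that the response process consults: each (triggering department, incident type) pair maps to an owner and a procedure, with a defined escalation path when no branch matches. The departments, owners and procedures below are illustrative only, not drawn from any company’s actual runbook.

```python
# Illustrative incident decision tree: (triggering dept, incident type) -> (owner, procedure).
RUNBOOK = {
    ("hr", "policy_violation"): ("it_security_lead", "pull logs with HR sign-off"),
    ("legal", "litigation_hold"): ("email_archiving_owner", "preserve mailboxes, notify legal"),
    ("compliance", "data_leak"): ("it_security_lead", "isolate host, begin forensics"),
}

def route(dept: str, incident: str):
    """Return (owner, procedure); anything unmapped escalates rather than stalls."""
    return RUNBOOK.get((dept, incident), ("cio", "escalate: no defined procedure"))

print(route("hr", "policy_violation"))
print(route("hr", "unknown"))  # falls through to escalation
```

Kark’s third point, running simulations, then amounts to exercising each branch of this table before a real event forces the question.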
For all of Riel’s claims of whistle-blowing at Morgan Stanley, it was, ironically, one of his own subordinates who followed the proper chain of command and blew the whistle on Riel.
Despite Morgan Stanley’s insistence that its procedures functioned properly, a lot of things went wrong. Of course, a lot of things can go wrong anywhere, but accepting that inevitability, and planning for how to handle it, is the key to good security and a lot less anxiety for CIOs. “We do everything we can to stay on top of this,” says Credit Suisse’s Sanzone.
“But sure, I worry.”