Jeff Jones, senior director of Microsoft’s Trustworthy Computing security initiative, was quoted in late July in an Associated Press article making a statement that seems at first to be the understatement of the year. Referring to the online publication of code that could help hackers take advantage of a recently discovered flaw in most versions of Windows operating systems, Jones said, “We continue to believe that publication of exploit code in cases like this is not good for customers.”
Jones’s conviction that giving hackers the key to the castle was not good for those of us who live inside the castle seemed, to most readers, to be a statement of the obvious. The strange thing is, the obvious isn’t so obvious to security experts. A great many security experts think the best way to make sure that security holes are plugged as quickly as possible is to get as much information as possible to the greatest number of people as quickly as possible. In other words, if more people are familiar with the problem, then more people will work on finding a solution. In the meantime, of course, posting that information is the equivalent of serving up a bomb for any company that has not yet installed the appropriate patches.
In July, several websites, following the lead of a “security research group” based in China, published a program that could allow hackers to gain control of a Windows operating system by entering through a hole in the Distributed Component Object Model (DCOM) interface. Malicious hackers could put that knowledge to work creating mass-mailer worms that move from one computer to the next, leaving havoc in their wake. And while none of the code was published until nine days after Microsoft had announced the flaw and offered a patch, the publication ratcheted up the risk for systems that had not yet installed the patch. On the other hand, its publication, and subsequent articles (such as this one) about its publication, helped warn of what some experts have called the worst Windows security hole in history.
The debate about whether to publish exploit code is clearly illustrated by the public response to this most recent episode. In this case, as in many cases, the first group to publish the code was not the group that discovered the flaw. That honor goes to a group of Polish experts known as The Last Stage of Delirium Research Group, or LSD, which found the flaw more than a month before it dutifully sent a message to Microsoft, which scrambled to create a patch. As it happens, LSD was not always such a good scout. Last April, the group was slammed by security experts after it released exploit code for a devastating Sendmail flaw. In response, LSD issued a public statement asserting its conviction that failure to publish detailed information about security flaws could be more damaging than publishing. This time around, LSD showed the kind of restraint that its critics had called for. The result: The group was slammed by security experts who agree with LSD’s public statement.
What do you think? Does the publication of exploit code help or hinder Internet security?
Sound Off is a weekly column about current IT-related issues. Web Editorial Director Art Jahnke (firstname.lastname@example.org) always welcomes feedback.
I think this is the wrong question. A better question is: should there be exploits to publish online at all? Just take a look at the long list of vulnerabilities published over the past few days alone on sites such as Symantec’s. Nearly every major software vendor is listed for some problem. Recently, we have had major busts involving Microsoft and Cisco IOS that have kept my folks much too busy, and we no longer seem to have time to test one patch before the next is released. How about a new feature in all this feature-rich software: no vulnerabilities!
Security and Privacy Officer
Monette Information Systems
I feel that the question this article poses is moot.
Yes, we can logically argue that it is unethical to post “holey” code before giving the vendor a chance to fix it, but honestly, this is a reflection of the world we live in. Get used to it.
I also sympathize with Gerald McGowan (previous responder) in that we no longer have time to test before implementing fixes. The immediacy of patch implementation and the frequency with which they are delivered are overwhelming. I applaud Microsoft and others for building very complex, feature-rich OSs and apps, but I lament the fact that I need to “manage” security patch updates weekly across the enterprise as part of my scheduled tasks.
Posting “at risk” code prior to notifying the vendor is selfish, shortsighted and malicious. People who do so should be penalized. Hackers don’t think ahead to recognize the billions of people whose lives are damaged by each attack. Their limited vision provides them only with glimpses of short-term authority absent long-term responsibility and consequences.
We as IT must maintain vigilance on all fronts. We owe it to our organizations, families and coworkers.
The question is no longer should buggy code be posted for all to exploit, but how do we defend against the inevitable?
James A. Taylor
While I strongly agree that we have to alert the masses that a danger exists, I just as strongly believe that publishing the code gives even the most novice hacker the keys to the city. That places many more systems in danger than is necessary.
Microsoft isn’t the only bad guy in this situation. We need to take some of the blame as we continually want bigger, faster, more fully integrated systems that will work with anyone’s hardware and software and be completely secure.
We need to slow down on our requests for more and bigger, and instead demand more secure. If we stop buying more, and buy only what is more secure, it will get their attention.
Manager, Network Administration and Security
How this thing with Microsoft works:
1. They develop software.
2. They sell the software to me.
3. The software has lots of bugs that damage my business.
4. They then demand that I pay an annual maintenance fee to fix the bugs (MS-TechNet is not free).
5. Instead of fixing the bugs, they develop a new version.
6. Instead of giving me the new version, they sell it to me.
7. I notice that many of my old applications no longer work.
8. I have to buy new versions of the languages (databases, whatever) needed to make them work. And guess what? I have to buy all of this from Microsoft.
9. I then have to fund the reprogramming and testing of replacement software (talk about a lack of ROI here!).
10. Then the cycle starts over.
Should they be responsible for the security bugs? They should be responsible for all bugs. But, unfortunately, our lax government is not willing to enforce the antitrust acts against a monopolist.
Sufficient information should be published to allow users to determine whether they need a fix or patch, but not enough to let anyone exploit the flaw.
Systems Development Enterprises