In 1982, at the height of the Cold War, a vast explosion, visible from space, lit up Siberia. NORAD and others in the U.S. defense establishment worried: was this a nuclear test, or a missile launched from a region where nobody had suspected missiles were stored? But no: it turned out the explosion, one of the largest non-nuclear blasts ever created, came from a remote stretch of the new Trans-Siberian Pipeline. And according to Thomas C. Reed, a member of the U.S. National Security Council at the time, it was an audacious act of sabotage by U.S. intelligence.
The operation, according to Reed, went like this: a Soviet double agent told the Americans what technology the Soviets were trying to acquire from the west to build and operate their pipeline. The CIA made sure that the software they ended up with had built-in flaws, causing the pumps and valves on the pipeline to "go haywire," eventually causing the explosion. It was a deftly executed move, and it remained a secret until decades later. But to those in the know, it dramatically demonstrated something that had been in the realm of science fiction: that an attack implemented in software could have dramatic and damaging impact on the real, physical world.
Flash-forward to our current era, with revelations that the Stuxnet worm was a weapon used by the U.S. and perhaps the Israelis to sabotage Iran's nuclear program. Though the technology is different, the results were similar: industrial machinery -- in this case, the centrifuges Iran was using to enrich uranium -- was tricked into doing things it wasn't physically capable of doing, leading to damage and delay.
There's no doubt that Stuxnet was a sophisticated attack. But the ways that attacks like it can still succeed against industrial infrastructure may alarm and unsettle you. And the truth has less to do with the skills of the attackers and more to do with the rickety state of our industrial control systems.
Ad hoc networks
Dr. Alex Tarter is technical director of cybersecurity programs at Ultra Electronics, 3eTI, a supplier of products and systems that secure critical infrastructure on military and industrial networks, and the picture he paints of how industrial control systems work is an alarming one. Start with the programmable logic controllers, or PLCs: these small, embedded computers are what actually control industrial equipment.
But what controls the controllers? And how can different controllers from different manufacturers talk to each other? No power plant or oil refinery wants to be beholden to, say, Siemens, forever just because that was the first vendor it bought PLCs from. So the PLCs are controlled and coordinated using OLE for Process Control, or OPC, a protocol custom-designed for this purpose.
And OPC runs more or less exclusively on ... Windows XP machines, whose operating system is, of course, no longer supported by the manufacturer. According to Tarter, the industry's hopes lie in the new and improved OPC-UA standard, but in practice innumerable industrial facilities are still using OPC running on Windows XP. Making things worse, many of those XP machines are connected to the Internet; even those that aren't are just off-the-shelf machines with USB ports that a thumb drive can easily plug into.
A vulnerable computer attached to industrial control equipment that runs, say, a nuclear power plant is obviously troubling. The way those computers are connected to the PLCs compounds those problems, Tarter explains. At the dawn of electronic industrial control systems, physical wires connected the control panels to the PLCs. Any command that the PLC got could be guaranteed to come from that control panel. But just as those control panels have been replaced by the aforementioned commodity PCs, the connections between them and the PLCs have been replaced by standard Internet Protocol networks. These networks are subject to all the standard attacks that the open Internet is prone to -- but, as a legacy of those hard-wired days, PLCs aren't designed to anticipate those attacks. When they receive a command, they act on it. "In control systems," Tarter says, "access equals authority."
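Tarter's "access equals authority" point is easy to see in the wire formats many of these devices actually speak. Modbus/TCP, one of the oldest and most widely deployed PLC protocols, carries no authentication at all: a write command is just a dozen bytes, and any host that can reach the device's port can issue one. Here's a minimal Python sketch of such a frame (the register address and value are purely illustrative); note what's absent from it.

```python
import struct

def modbus_write_single_register(transaction_id: int, unit_id: int,
                                 register: int, value: int) -> bytes:
    """Build a Modbus/TCP 'Write Single Register' (function 0x06) frame.

    Layout: MBAP header (transaction id, protocol id 0, length, unit id)
    followed by the PDU (function code, register address, value).
    There is no credential, no signature, no session key -- nothing
    identifying or authenticating the sender.
    """
    function_code = 0x06
    pdu = struct.pack(">BHH", function_code, register, value)
    # The MBAP 'length' field counts the unit-id byte plus the PDU.
    mbap = struct.pack(">HHHB", transaction_id, 0, 1 + len(pdu), unit_id)
    return mbap + pdu

# A hypothetical command: set register 0x0001 on unit 1 to 0x1234.
frame = modbus_write_single_register(transaction_id=1, unit_id=1,
                                     register=0x0001, value=0x1234)
print(frame.hex())  # -> 000100000006010600011234
```

In practice those twelve bytes would simply be written to TCP port 502 on the PLC; because the protocol predates hostile networks, the device has no way to distinguish the legitimate control panel from anyone else on the segment.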
So what specific dangers lurk on these internal industrial networks? Well, there's plenty of basic, low-end malware, much of which isn't even there deliberately: the computers are often just infected by the same self-replicating cybernuisances mucking up PCs worldwide. These are typically created for some nefarious purpose but then breed mindlessly in the wild. That keylogger virus that hit the U.S. drone system a few years back? Tarter says it was meant to steal World of Warcraft passwords. The Slammer worm's infection of an Ohio nuclear power plant in 2003 was a similar incident: not targeted, annoying and expensive to clean up, but with no disastrous consequences. These infections are the computer equivalent of swine or avian flu: adapted to one niche but occasionally jumping to another, with panic-inducing but ultimately limited results.
If malware is deliberately aimed at industrial facilities, though, it can do some real damage. Once it infiltrates the network, it can set up a man in the middle attack and do all sorts of interesting and scary things, like convince a PLC to send signals to the control room indicating that everything is fine when it's not (or vice versa). Again, because of their hardwired legacy, those PLCs are mainly checking to make sure the signals bearing their orders aren't physically damaged; if the checksums are good, the PLCs do what they're told.
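The distinction here, between integrity checks and authentication, can be made concrete in a few lines of Python. A checksum only proves a frame wasn't corrupted in transit; a man in the middle who rewrites the command can simply recompute the checksum, and the receiver's validation still passes. (The command strings and the choice of CRC-32 below are illustrative, not any particular PLC protocol.)

```python
import struct
import zlib

def make_frame(command: bytes) -> bytes:
    """Append a CRC-32 checksum, the way many fieldbus frames do."""
    return command + struct.pack(">I", zlib.crc32(command))

def checksum_ok(frame: bytes) -> bool:
    """The only validation a legacy PLC performs: is the frame intact?"""
    command, crc = frame[:-4], struct.unpack(">I", frame[-4:])[0]
    return zlib.crc32(command) == crc

legit = make_frame(b"VALVE_27:CLOSE")
assert checksum_ok(legit)          # accidental corruption would be caught...
assert not checksum_ok(b"X" + legit[1:])

# ...but a man in the middle rewrites the command AND recomputes the CRC,
# and the receiver's check is satisfied. Checksums detect corruption,
# not forgery.
forged = make_frame(b"VALVE_27:OPEN")
assert checksum_ok(forged)
```

Fixing this requires cryptographic authentication (signatures or keyed MACs), which is exactly what the hardwired-era designs never anticipated needing.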
If there's one saving grace, it's that many malware authors, accustomed to dealing with PCs, give control systems too much credit for robustness. BlackEnergy was malware that infiltrated a number of energy production facilities. It was equipped with a scanner that was supposed to detect OPC servers; but, Tarter says, the very act of scanning crashed those servers, creating inconveniences but preventing any larger-scale havoc.
Why we're vulnerable
So what's preventing large-scale moves to fix these problems? Part of it, of course, is inertia: if things seem to be running fine, why rock the boat -- and spend money -- over threats that seem theoretical? As Tarter points out, many of the facilities we're talking about are operated by publicly owned or heavily regulated utilities, for whom raising rates to pay for improved security is a big political headache.
And then there's convenience -- you know, the exact same reason you leave a post-it note with your password on it right next to your keyboard. When it comes to huge industrial facilities, owners want reliability and availability. If you tell a component to do something, you want it to do it -- and that means the simplest possible connection between you and the PLC, even if that connection's not very secure. If you need to replace a component, you want to be able to hot-swap it. None of this encourages secure architecture.
Finally, there's a sort of willful ignorance at play. While PCs generally have good logging and diagnostic capabilities, many PLCs do not, says Tarter. It can be difficult to tell exactly when or how a component has been infected with malware. If it stops working, often rather than trying to figure out what went wrong, it's easier just to physically replace it with a new component and hope for the best.
And after all, usually nothing goes wrong. BlackEnergy's clunkiness proves that it's actually pretty difficult to build a subtle malware package. Stuxnet probably took years of expert work to put together. But it also serves as a proof of concept that shows that it can be done -- and that complacency is dangerous.
This story, "Inside the rickety, vulnerable systems that run just about every power plant" was originally published by ITworld.