There is no debate among security experts that the security of Internet-enabled medical devices is woefully inadequate.
But there is considerable disagreement about how risky that is for patients. Some say the benefits of connected devices far outweigh what they consider minute risks of a catastrophic attack, while others say even a relatively low likelihood of an attack is too much. Life and health are, after all, much more significant than a stolen credit card number.
And it is clear that medical devices are built with just one kind of security in mind – to function flawlessly, possibly for years at a time, so as not to jeopardize the life or health of the patients they serve.
The other kind – security from malicious online attacks – not so much.
Until recently, that was largely irrelevant. Medical devices weren’t Internet enabled. But that has all changed, with an explosive increase in medical devices connected to the Web, along with Electronic Health Records (EHR), driven by incentives in the Affordable Care Act (ACA – commonly known as Obamacare) to improve health care while controlling costs.
And that has led to reports from people like Scott Erven, head of information security at Essentia Health, which operates about 100 clinics, hospitals and pharmacies in Minnesota, North Dakota, Wisconsin and Idaho. Erven recently completed a two-year audit of the chain’s equipment, and said the security problems he found were even worse than he expected.
He told Wired magazine that many of the devices had “common security holes, including lack of authentication …; weak passwords or default and hardcoded vendor passwords like ‘admin’ or ‘1234’; and embedded web servers and administrative interfaces that make it easy to identify and manipulate devices once an attacker finds them on a network.”
He listed a number of examples of what could happen. Among them: Bluetooth-enabled defibrillators that an attacker could control to deliver random shocks to a patient’s heart or prevent a medically needed shock; or the possibility that an attacker could “take critical equipment down during emergencies or crash all of the testing equipment in a lab and reset the configuration to factory settings.”
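The default-credential problem Erven describes lends itself to a simple audit. The sketch below is purely illustrative – the device names, password list, and inventory format are hypothetical, not drawn from Essentia's audit – but it shows the basic idea of flagging devices still running vendor defaults.

```python
# Hypothetical sketch of a default-credential audit. The password list and
# device inventory below are illustrative assumptions, not real audit data.

KNOWN_DEFAULTS = {"admin", "1234", "password", "0000"}  # not exhaustive

def audit_credentials(devices):
    """Return names of devices still using a known default password.

    `devices` is assumed to be a list of (device_name, password) pairs
    gathered from a network inventory.
    """
    return [name for name, password in devices if password in KNOWN_DEFAULTS]

flagged = audit_credentials([
    ("infusion-pump-3", "1234"),     # vendor default, should be flagged
    ("lab-analyzer-1", "x9!TqL#v"),  # unique password, passes
])
# flagged == ["infusion-pump-3"]
```

In practice such a check would run against a real asset inventory; the point is only that hardcoded vendor passwords are trivially enumerable, which is exactly why attackers scan for them.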
There is also general agreement that those vulnerabilities have multiple causes, starting with both a skill gap and a culture gap. Developers of medical device software are very skilled at making it reliable, but not at securing it for use with networked applications.
As Carl Wright, general manager of North America for TrapX Security, puts it, “IT is not their core competency. There are a lot of vertical industries where that’s the case. It’s almost like operating in the previous decade.”
That gap is not likely to be closed anytime soon. There is an acute shortage in the medical device field of workers who can conduct cyber security assessments of devices.
Then there is the culture gap, which Jon Heimerl, senior security strategist at Solutionary, said shows up as resistance to more security. Since the goal has always been to make medical devices “as easy to set up and connect as possible,” adding security controls “often goes against the very nature (of medical professionals). Adding security that potentially interferes with device connectivity, and limits medical functionality seems counterintuitive,” he said.
Gary McGraw, CTO of Cigital, in a post for TechTarget co-authored with Chandu Ketkar, noted that “requiring doctors to log in to a medical device just before starting a medical procedure is a bad idea because they simply won't do it regularly.”
And many medical professionals don’t even see the need for major concern about security. Ken Hoyme, distinguished scientist at Adventium Labs, speaking at the NIST ISPAB discussion, said medical device developers and those who use them in hospitals don’t understand why hackers would want to harm patients. “The view of hospitals is, ‘Why would anybody want to do that?’” he said.
“Their view is that if somebody’s out there, they’re trying to get information to sell it, but … a targeted attack against a patient is outside their thought process. It leads to something I call faith-based mismanagement: ‘I don’t believe anybody would do that, therefore my likelihood is zero and I don’t need to mitigate it.’”
Jay Radcliffe, a medical device security expert and Type 1 diabetic, thinks the medical professionals are mostly correct. He declared during a round-table discussion at the recent Black Hat conference in Las Vegas that the benefits of connected devices are enormous, and the risks minuscule.
He agreed that malicious hacks are technically possible, and could have catastrophic results – hence the now-famous decision by former vice president Dick Cheney to have the wireless functionality of his implanted defibrillator disabled. But Radcliffe said that for the average person like himself, it would be much more likely for “an attacker to sneak up behind him and deliver a fatal blow to his head with a baseball bat” than to be harmed by a cyber attack.
So far, he’s correct. The Food and Drug Administration (FDA), which issued a "Safety Communication" in June 2013 titled “Cybersecurity for Medical Devices and Hospital Networks,” said in that memo that it “is not aware of any patient injuries or deaths associated with these (vulnerabilities) nor do we have any indication that any specific devices or systems in clinical use have been purposely targeted at this time.”
FDA press spokeswoman Jennifer Rodriguez said that remains true more than a year later.
But Greg Martin, CTO of ThreatStream, said just because something hasn’t happened yet doesn’t mean it won’t. “These days, hackers span a range of motivations from criminal, activist to potential terrorist,” he said. “The risk cannot be ignored on a hunch. If the vulnerability exists, it will be exploited in the wild.”
McGraw comes down somewhere in the middle of that debate – he agrees that the risks are likely very small for those other than possible targets of assassination – but he said that doesn’t mean security shouldn’t be a primary concern.
As connected medical devices proliferate, he said, the risks to average people may increase. The average person has a bank account, he noted, and if malicious hackers gained control of a person’s pacemaker, they could threaten to do him harm unless he sent them the money in that account.
Wright agrees that the trend is toward more danger even for the average patient. With medical devices becoming part of the Internet of Things (IoT), the risks are rising, he said, noting that cyber terrorists are more attracted to ways of harming people than to stealing information for profit.
Solving the security problem by hardening the devices will not be easy, however: they are expensive, they are made to last for years without being updated, and if manufacturers modify them, they have to seek recertification from the FDA.
Wright argued that while it may be expensive to do security modifications, manufacturers can afford it, without passing the increase along to providers or patients. “They have the best profit margin in the market,” he said. “So they can take 10% and put it back into security.”
Kevin McAleavey, cofounder and chief architect of the KNOS Project and a malware expert, said he believes the devices should only be connected to “a local, private network that doesn't connect outside of that network without at least something in between that can copy and paste any necessary information and then pass it across an air gap."
He also recommended that the devices have very low power, so their signal can't travel more than a few feet. “Most pacemakers are like this and use Bluetooth, so anyone who can access them pretty much has to be in the same room as the patient.”
Heimerl said authentication doesn’t need to be time consuming. He said he heard of a hospital that added passwords to all of its terminals and applications, and then gave each medical staff member an RFID badge with a built-in profile.
“As the staff approached the workstation, the application would log them on and preload the applications that the person put in their profile. They added great security, and the caregiver was required to enter only a three- or four-letter/digit PIN to finish the logon process – so it was simple,” he said.
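The two-factor flow Heimerl describes – badge proximity to identify the caregiver and preload their profile, plus a short PIN to finish the logon – can be sketched as follows. All names, badge IDs, and PINs here are hypothetical, and a real system would store hashed PINs rather than plaintext.

```python
# Minimal sketch of proximity-badge logon: the RFID badge (something you
# have) selects the profile; a short PIN (something you know) completes it.
# Profiles and PINs are invented for illustration; real systems would hash
# the PIN and talk to a directory service, not an in-memory dict.

PROFILES = {
    "badge-117": {"user": "dr_lee", "pin": "4821", "apps": ["EHR", "PACS"]},
}

def badge_logon(badge_id, pin):
    """Return the caregiver's preloaded app list on success, None on failure."""
    profile = PROFILES.get(badge_id)
    if profile is None or pin != profile["pin"]:
        return None
    return profile["apps"]

# badge_logon("badge-117", "4821") -> ["EHR", "PACS"]
# badge_logon("badge-117", "0000") -> None
```

The design point is that the badge does the heavy lifting of identification, so the knowledge factor can stay short enough (three or four characters) that busy clinicians will actually use it.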
Several experts said they think government needs to play a more explicit role. Heimerl wondered aloud about the FDA Safety Communication: “Is it a mandate? Is it a regulatory requirement?”
McGraw said there is progress being made, however. “The good news is that very good people are working on it,” he said, citing Kevin Fu, associate professor at the University of Michigan and director of the Archimedes Center for Medical Device Security, who moderated the recent NIST ISPAB discussion.
“Manufacturers want to do it,” he added. “We don’t have to convince them that there are issues.”
But he agreed that improving device security will be time-consuming and costly, and will never be perfect.
“A lot of people think security is a thing. It’s a property,” he said, and must be designed with expected threats in mind. Deciding on those threats is “a trade-off that every day the manufacturers and patients need to think about. Which one would you pick, and how much would you pay?
“There is no right answer,” he said.
This story, "Yes, Medical Device Security is Lousy — So What?" was originally published by CSO.