by Scott Nelson

Searching for a path to IoT security

Opinion
Mar 25, 2016
Internet of Things | IT Leadership | Privacy

My conversation with an informed skeptic.

We probably see 5-10 Internet of Things (IoT) security posts or news feeds every day. I read one last week that said that “real IoT security is over 10 years away.” The article was an account of a panel discussion among experienced security experts, so their analysis was a downer for someone like me who believes the IoT has so much potential in everyday human endeavors. Panelist Eugene Kaspersky summarized the problem from the point of view of critical infrastructure security: “I am waiting for any government to introduce a cyber-resilience strategy.” But he later added, “Lawyers are late and regulators are late. Government regulators typically take 10 years to recognize a problem.”

Hopefully we will not have to wait for the government to solve our problems when it comes to security and the IoT. I recommend Blake Ross’s recent security post as one review of the implications of that expectation. Ross’s article points out that security can be a paradox: solving one problem can create another. I’m okay with paradoxes that merely threaten the life of Schrödinger’s cat or provide humorous dialogue for Alan Arkin in Catch-22. But when it comes to security, I am not comfortable with waiting a decade, or with unanswerable questions.

So I spoke with Todd Carpenter, Chief Engineer and co-owner of Adventium Labs, an R&D firm at the leading edge of critical systems engineering and cybersecurity. Todd’s areas of expertise include engineering high-value, real-time, fault-tolerant, secure systems in the space, military and commercial avionics, medical, and petrochemical domains. He leads Adventium’s risk assessment and management services, which evaluate security risk for cyber-physical systems and products in the medical, avionics, and industrial domains, and teach others how to do the same. Adventium was recently awarded a $2.2M cybersecurity contract by the Department of Homeland Security to “produce a cyber-physical architecture that has safety and security monitoring functions in support of DHS S&T cybersecurity division’s larger Cyber Physical Systems Security program.” Todd received the H.W. Sweatt Award at Honeywell for his work on the master scheduler used in the Boeing 777 avionics. He understands critical systems design, assurance, and security at a level shared by very few. Not surprisingly, Todd is a skeptic when it comes to IoT security.

Accountable to an appropriate balance

SN: I believe that the IoT can create great value if it is balanced against reasonable risk, and I believe that we have the tools necessary to achieve this balance. Clearly, the more critical the application, the more robust the security required. As a case in point, if the FBI has to force Apple to show them the data on your iPhone, then the technology exists to create the right balance. Todd, you see it differently. What am I missing?

TC: I recently read the report on the Deepwater Horizon, the tragic 2010 offshore oil rig blowout. This paragraph from the summary pretty much sums it up:

“Analysis of the available evidence indicates that when given the opportunity to save time and money – and make money – tradeoffs were made for the certain thing – production – because there were perceived to be no downsides associated with the uncertain thing – failure caused by the lack of sufficient protection. Thus, as a result of a cascade of deeply flawed failure and signal analysis, decision-making, communication, and organizational – managerial processes, safety was compromised to the point that the blowout occurred with catastrophic effects.”

My interactions with the business community lead me to believe this attitude is unfortunately the norm.  I’ve heard, “Our financial responsibility is to the shareholders.  We don’t pay for it [lack of security], so it is not a consideration.”  In other words, the risks are external.  Brand value is one way that companies do pay for the externalities and some companies recognize its importance. 

But the way your question is phrased actually highlights the issue:

“I believe that there is great value balanced against reasonable security risk.”

What you are missing is that the value-to-risk ratio is not perceived the same way by all stakeholders. In particular, less responsible companies create value for themselves and manage their own risk rather than the value/risk ratios of their consumers and unfortunate bystanders. Responsible companies in an appropriate regulatory environment manage both. The avionics community and the international aviation commissions, for instance, manage risk to workers, passengers, and the innocent bystanders on the ground beneath the flight paths.

SN: So you are not saying the right balance of security design to criticality of the application doesn’t exist. Rather, it’s the companies developing these applications that are not as diligent as necessary to reach that balance. To make your point you referenced a life-critical application where the security and safety failed, and Deepwater Horizon is a very complex system with many participants. What about the design of less critical applications, like those for operational convenience?

TC: I recently read a couple of articles about companies who created products to maximize their own value, i.e., rushed to get product to market. In one example, a security system is flawed in ways that no competent systems engineer or architect should tolerate. This company produced, and sold, a system that can’t be updated. A basic, well-understood tenet of networked design is that you must design your systems with the ability to update them. Nobody is asking for a perfectly secure system. But the industry agrees that you have to be able to reach out and update your system if a flaw makes it through to the field or if the enemy adapts. Burglars can now get plans to build a device to remotely turn off this particular home security system at will. I fear that kits to break into this system will be available online soon.

Now in this particular case, I hope the company will fix the issue and ship, for free, new units to all individuals who have the flawed unit. But who is going to pay the cost of any real compromises that occur? Will this company reimburse the victims whose homes are broken into via this exploit? I doubt it. More likely the victims’ homeowner’s insurance, i.e., all of us, will pick up the tab to replace the valuables, and the homeowners will lose time from work so they can re-secure their homes and get their valuables back. The company that created this situation of unwarranted consumer trust is not accountable for the full impact of the risk it delivered to the market.

SN: Okay, so we have our first point of agreement: brand accountability to customers in the IoT, as it relates to security, is not yet an “efficient market.” Indeed, that was a key takeaway from the panelists in Steve Bush’s “10 years to security” article: companies today have very little motivation to pay for security because consumers are not holding them accountable. I see three forces that can make companies more accountable for security. The first is somewhat altruistic self-regulation, something in which I believe but upon which I would not depend. The second is government regulation, something in which I do not believe but upon which we often depend because of the tort system. The third is the customer and the efficient market, in which I believe and upon which I will depend, because it makes or breaks brands.

Good requirements definition and checks and balances

SN: I believe that there are companies who care enough about their brand to follow good security practices to protect their customers, and those customers will create the efficient market. I think this can happen faster than regulation, but it is not certain. I also agree that good systems designers and product managers should be accountable to market expectations for security, at the appropriate level of risk. Unfortunately, I think that enthusiasm for the technology is overriding fundamental practice in many cases today. But I have read, and even heard you say, that the problem with IoT security is not one of technology, but rather one of process and diligence. How should the CIO of an IoT-focused company implement checks and balances to mitigate poor security in design? What are the fundamentals?

TC: Go back to engineering process and requirements. Establish policy to refresh requirements as the threats evolve and recommendations change. Security is not static, not a do-it-once-and-forget-it exercise. Instead, you need to monitor the field for abnormal situations and undesirable trends, identify flaws, and issue patches. As a CIO, you should be held responsible for establishing basic risk management requirements. The CTO should be held responsible for identifying solutions. You or your CFO should provide the audit to make sure the solutions satisfy the requirements: checks and balances. Solutions and audit should definitely be in separate parts of your organization.

If you are a tiny company and need a place to start, start with the Center for Internet Security’s Critical Security Controls. The California Attorney General’s Office recently stated that not doing so would be indicative of an organization’s failure to provide reasonable security. Also check out the IEEE article “Avoiding the top 10 software security design flaws.” If you have a web interface as part of your solution (whether on your device or at your company), address the OWASP Top 10.

Keep in mind that these are “top” lists. Dealing with those top items doesn’t mean you are done with security; it merely means you’re starting in the right direction. Resist the temptation to use proprietary, closed security solutions, whether they are your own or ones you buy from someone else. If you come up with something clever, by all means patent it. But the effective security solutions tend to be the ones that have had many smart eyes on them. Whatever you do, don’t build a backdoor into your product. If you can get into it, chances are attackers can too.

Also provide a secure update mechanism so users can securely patch their systems. Those mechanisms exist; you don’t have to invent them yourself. You will, however, have to make an unpopular decision: your devices must be updateable, and that means you can’t go with a write-once system. That will make the hardware a few cents more expensive. You will also need to place a unique private key inside each device, and you need to keep that key private. That will add a step to your test processes and some extra IT infrastructure and protection. You will also need to plan the infrastructure to monitor for security and safety issues and release updates as necessary. For how long, you ask? Well, the market will ultimately decide that for you. But if you sell a product that claims longevity, e.g., smart lightbulbs good for 50,000 hours, you had better plan to provide security support for at least that length of time. “How could lightbulbs be a threat?” One manufacturer used a never-changing password that could be compromised. Attackers were able to exploit this to establish a beachhead on a wireless network, and then use that beachhead as a basis for launching attacks on other networked devices.
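
To make the signed-update idea concrete, here is a minimal sketch in Python using the `cryptography` package and Ed25519 signatures. The key handling and update format are illustrative assumptions, not any particular vendor’s scheme; in practice the private key would live in a hardware security module and the public key would be burned into each device at manufacture.

```python
# Minimal sketch of a signed firmware update check (illustrative only).
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
from cryptography.exceptions import InvalidSignature

# --- vendor side: done once per release, on a protected build machine ---
signing_key = Ed25519PrivateKey.generate()          # kept secret by the vendor
verify_key_bytes = signing_key.public_key().public_bytes(
    Encoding.Raw, PublicFormat.Raw                  # 32 raw bytes, baked into the device
)

firmware_image = b"...new firmware build..."        # placeholder payload
signature = signing_key.sign(firmware_image)

# --- device side: runs before flashing any update ---
def verify_update(image: bytes, sig: bytes, pubkey_bytes: bytes) -> bool:
    """Accept the image only if the vendor's signature checks out."""
    pubkey = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
    try:
        pubkey.verify(sig, image)
        return True
    except InvalidSignature:
        return False

assert verify_update(firmware_image, signature, verify_key_bytes)
assert not verify_update(firmware_image + b"tampered", signature, verify_key_bytes)
```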

SN: Another point of agreement: good systems engineering with diligent requirements definition is necessary to build AND maintain security in an IoT offering. I don’t see that we need new fundamentals or processes here, but I do see a need for increased leadership in checks and balances, as well as full product lifecycle management from a security point of view. Clearly this is not happening enough today, but as we stated above, there are multiple forces driving that change. I believe we will see customers forcing the change through a combination of buying behavior and legal action.

Security and privacy: Different but inseparable

SN: Let’s shift gears to privacy vs. security. We both apply “security triads.” Mine focuses on the functional needs for security in the IoT from the user’s perspective; one of those needs is privacy. This triad speaks to how security supports brand management: it establishes trust with customers by meeting their security needs.

[Figure: user needs triad. Credit: Scott Nelson]

Assurance: Knowing that the data provided or processes controlled by the data are timely, truthful, and accurate. 

Privacy: Knowing that access to data and information is controlled and limited to those who are allowed to have access.

Liability: Knowing that the value of the asset or process to which the data applies is protected.

Your triad is more technically functional and I think does a better job of showing the interdependencies of functions involving security in systems design. 

[Figure: system function triad. Credit: Scott Nelson]

Integrity: Knowing that the data has truthful value from an authenticated source. 

Confidentiality: Controlling access to the data and limiting it to the desired parties.

Availability: Providing the data to those parties for whom the data is intended.

I see value in both perspectives but feel your point of view helps more with an approach to security.

TC: The policies and regulations come from different sources, but the engineering and technologies to provide security and privacy are strongly intertwined. Some lawyers and scientists like to argue semantics. I tend to favor simple models that give me tools to get work done. The security model I use is simple: systems provide varying degrees of confidentiality (of which privacy is a factor), integrity, and availability. Some systems stress certain of those properties over others. Historically, for instance, avionics and industrial controls emphasized integrity (do the right thing) and availability (when needed) over confidentiality. Attackers exploited the lack of confidentiality controls to figure out which celebrities were flying, or what the recipes were for industrial chemicals. So we saw confidentiality controls start to improve. This relates to your triad; mine does not account for the strong legal aspect of liability or accountability that yours does. We saw this happen in aviation and the medical industry, where law and regulation were necessary to protect innocent people from unscrupulous players. I believe we can combine the two triads into a four-sided model to show how everything relates and is connected.

[Figure: security surface polygon. Credit: Scott Nelson]

These properties, while different, rely on each other.  For example, many modern integrity controls follow a chain of reasoning that ultimately leads to a private cryptographic key to determine whether or not the device is running what it should be.  The integrity relies on that private key remaining private.  The Stuxnet attack and the RSA attack a few years ago are examples of what can happen when those private keys are compromised.  Security and privacy also have safety and dependability implications for safety-critical applications.  My go-to reference for all these terms is the IEEE article, “Basic Concepts and Taxonomy of Dependable and Secure Computing.”
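
As a toy illustration of that dependency, the sketch below uses Python’s standard `hmac` module: the integrity check is only as strong as the secrecy of the key behind it. The key and message formats here are invented for the example.

```python
# Toy illustration: an integrity tag is only as trustworthy as the
# secrecy of the key behind it (standard library only).
import hashlib
import hmac

DEVICE_KEY = b"per-device secret, provisioned at manufacture"  # illustrative

def tag(message: bytes, key: bytes) -> bytes:
    """Integrity tag (HMAC-SHA256) over a message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, received: bytes, key: bytes) -> bool:
    """Constant-time comparison avoids timing side channels."""
    return hmac.compare_digest(tag(message, key), received)

reading = b'{"sensor": "valve-7", "psi": 118}'
assert verify(reading, tag(reading, DEVICE_KEY), DEVICE_KEY)   # intact data passes

# If the key leaks, as the signing keys did in the Stuxnet and RSA
# incidents, an attacker can forge "valid" readings at will:
forged = b'{"sensor": "valve-7", "psi": 30}'
assert verify(forged, tag(forged, DEVICE_KEY), DEVICE_KEY)
```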

Since the properties are so intertwined, I encourage companies to develop good requirements and address security and privacy together. 

SN: That’s our third takeaway then: companies must address security and privacy together while understanding that they are different but inseparable.

Practicing cybersecurity fundamentals

So our three findings for reaching that appropriate balance of risk vs. reward for IoT companies are:

  1. Be accountable to customers’ security needs even if they are not yet expectations.
  2. Practice system design fundamentals with good checks and balances.
  3. Develop in full awareness of user and functional needs for both security and privacy.

However, these are high-level principles, and we still suffer from the weakest link, so let’s conclude with specifics. What are the fundamentals you advocate for good cybersecurity, i.e., good IoT security?

TC: The Federal Trade Commission identifies five basic business principles you should follow:

  1. Take stock. Know what personal information you have in your files and on your computers.
  2. Scale down. Keep only what you need for your business.
  3. Lock it. Protect the information that you keep.
  4. Pitch it. Properly dispose of what you no longer need.
  5. Plan ahead. Create a plan to respond to security incidents.

This is a great start. Many companies start to screw up at #2, and instead collect and store as much information as they can. That creates a liability for you and can harm your customers.
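
As a rough sketch of what “scale down” can look like in practice, the snippet below keeps an explicit allowlist of fields and drops everything else before it ever hits storage. The field names are invented for illustration.

```python
# "Scale down": retain only the fields the business actually needs.
RETAINED_FIELDS = {"device_id", "timestamp", "temperature_c"}  # illustrative

def scale_down(record: dict) -> dict:
    """Drop every field that is not on the allowlist before storage."""
    return {k: v for k, v in record.items() if k in RETAINED_FIELDS}

raw = {
    "device_id": "thermostat-42",
    "timestamp": "2016-03-25T10:00:00Z",
    "temperature_c": 21.5,
    "owner_email": "resident@example.com",  # not needed, so pure liability
    "wifi_ssid": "HomeNet",                 # not needed, so pure liability
}

stored = scale_down(raw)
assert "owner_email" not in stored and "wifi_ssid" not in stored
```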

SN: I think many readers will find #2 counterintuitive in the IoT context, where data is the objective and big data is the assumed pot of gold at the end of the road. I think this is a tough one, because I do not believe that we always know what data will be valuable or what correlations to other data there might be. I know of one customer who kept all the raw data from an application where we had reduced the data to a single actionable number. Two years later they discovered that there was another signal in the data stream, and we were able to redesign the data filter to go back and recalculate all the scores, providing immediate validation of a completely new value proposition.

The other thing about good systems design that has particular importance in IoT development is that systems engineers have to be excellent “make vs. buy” decision makers. Development in the IoT is more dependent on the ecosystem than in any other product space today. I would argue that today data transfer, ingestion, storage, and access are all “buy” decisions. How do we make our systems more secure with those decisions?

TC: With modern security architectures, one can design a device to plug into a somewhat unclean network (power and computer) and make the device behave well (good privacy/confidentiality, integrity, and availability, the latter limited by the reliability of the network). As soon as you put a cloud provider in there, however, you have to work harder. If you’re just using cloud storage, then you can still have all your confidentiality, integrity, and availability (CIA), as long as you build a solid crypto architecture and never, ever release the keys into the cloud.
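
Here is a minimal sketch of that storage-only pattern, assuming AES-GCM via the Python `cryptography` package: data is encrypted and authenticated on your own machines, and only ciphertext ever reaches the provider. Real key management (rotation, escrow, hardware protection) is out of scope for the sketch.

```python
# Sketch: keep CIA with a storage-only cloud by encrypting client-side.
# AES-GCM gives confidentiality plus an integrity tag; the key stays on
# your own infrastructure and only ciphertext goes to the provider.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

data_key = AESGCM.generate_key(bit_length=256)  # never leaves your own shop
record = b'{"device_id": "meter-9", "kwh": 4.2}'

nonce = os.urandom(12)                          # unique per encryption
ciphertext = AESGCM(data_key).encrypt(nonce, record, None)
blob_for_cloud = nonce + ciphertext             # safe to hand to the provider

# Later, back on your own machines:
nonce, ct = blob_for_cloud[:12], blob_for_cloud[12:]
assert AESGCM(data_key).decrypt(nonce, ct, None) == record
```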

If you want the cloud provider to also host processing, which means they have access to your customers’ unencrypted data, it is far more difficult. If you can de-identify that data, which implies you have an expert statistician review your design, you might still be able to maintain some privacy in that data. Engineers and managers typically overestimate the amount of effort it takes an attacker to piece clues together; piecing clues together is, in fact, what big data is all about. De-identification will have to be done in your shop, on your machines, before the data goes to the cloud. It includes masking IP and MAC addresses.
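
A simplistic sketch of that masking step follows, using keyed pseudonyms (HMAC) so records remain linkable for analytics without exposing the raw identifiers. This is illustrative only, and it is not full de-identification; as Todd notes, that still needs expert statistical review.

```python
# Simplistic sketch of masking IP and MAC addresses before data leaves
# your shop. NOT full de-identification on its own.
import hashlib
import hmac

MASKING_KEY = b"kept on-premises and rotated on a schedule"  # illustrative

def pseudonym(identifier: str) -> str:
    """Stable, non-reversible stand-in for a raw identifier."""
    return hmac.new(MASKING_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

event = {"ip": "192.0.2.17", "mac": "00:1a:2b:3c:4d:5e", "event": "reboot"}
masked = {
    "ip": pseudonym(event["ip"]),    # same device tomorrow -> same pseudonym,
    "mac": pseudonym(event["mac"]),  # so trends stay visible without raw IDs
    "event": event["event"],
}
```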

SN: So even the “buy” decisions in IoT systems design require additional discipline, process, and policy to manage the security risk.

TC: Yes, just as it is for software code quality and safety. Companies make choices about risk tolerance and how much they want to invest in development and operations. Some are choosing quick profit over quality, sustainability, and long-term brand. The Deepwater Horizon report documents that sort of culture on a large scale, but the culture appears to be prevalent in this new flurry of IoT devices as well.

SN: Your last point may be true today, but it does not have to be the norm. Indeed, Adventium has a new contract to help make sure that security, safety, and privacy continue to be the norm in medical devices. I also know you are proud of the performance of the avionics industry as a whole in these regards.

TC: Yes, the aviation industry in particular has a phenomenal quality record.  In fact, 2015 would be hard to improve upon.  They’ve shown you can be regulated, profitable, and global while still achieving fantastic safety performance.

Regarding medical devices, DHS’s Cyber Security Division (CSD) recently launched the Cyber Physical Systems Security (CPSSEC) program that aims to “build security into” emerging Cyber Physical System (CPS) designs. Adventium is developing a high-confidence cyber-physical architecture for medical devices.  Part of our motivation is that in the Twin Cities alone, we have hundreds of small medical device manufacturers.  They are great at identifying therapies to improve quality of life.  The average size of these companies is 50 or so people.  Rather than having hundreds of these companies all reinvent the wheel and figure out how to develop a secure platform, we will provide an open-source exemplar.  Security is a system property, so each device company will still have to make an argument to the FDA about the safety, efficacy, and security of their complete solution.  But this will provide them with a starting point and guidelines, so they can focus on their core competencies.

Following the path

TC: Security threats are always evolving. The attackers are smart, so they continue to improve their skills. Technology must continue to advance to stay ahead of those attackers. One key research area we are exploring is providing safety (e.g., giving positive control authority to a control systems operator) while remaining secure (not giving that authority to an attacker). That’s a tough problem with no general solution in place yet. But the basic stuff on those top 10 and top 20 lists? Yeah, the technology and processes are there.

SN: Thanks for helping define a path to better security through better security practices, Todd. Hopefully these insights on designing for security in IoT applications will help designers create more balance in the marketplace, and the approaches described herein offer good fundamentals for meeting customer expectations. The continuous drumbeat of attacks, and the consequences thereof, is creating increasing expectations among both customers and shareholders. I believe we will see many more systems designers and business leaders held accountable for the security of their IoT offerings in the near future.

We can’t afford to wait 10 years.