Should the board press pause on public cloud after the Meltdown and Spectre CPU vulnerabilities?

5 thoughts to help separate the signal from the noise before you upend your strategy.


Over the past few days, there have been grumbles about the viability of the public cloud because of recently discovered flaws in the processors that Intel and other chipmakers supply across the technology industry. Some commentators, like the respected Russell Brandom of The Verge, argue that this “CPU catastrophe will hit hardest in the cloud.”

What might that mean for the board’s strategic investment decisions on the cloud? Here are five thoughts to help process what has happened.

1. Understand that this incident also affects on-premise and off-premise systems built over the past 20 years

This affliction will hit current on-premise/hybrid systems as hard as it will hit the cloud suppliers. Buckley and Metzger have issued a warning that "we also recommend in the strongest terms that no-one should assume that systems residing behind firewalls are safe." (Disclosure: Buckley and Metzger are colleagues of mine at Virtual Clarity.)

So the question is not whether an organization will be affected, but how much work it will be to repair, monitor and manage over the next few years. The smart money is on the hyper-scale cloud suppliers to handle this (and future traumatic occurrences) better and faster than most on-premise teams or hybrid system suppliers.

2. Speed and capability are of the essence

All the major hyper-scale cloud suppliers, AWS, Google, Azure, and Oracle, have already started, if not completed, their first round of patching. Close on their heels will be the scale providers of hybrid cloud such as DXC and Atos. Their speed of execution strengthens the argument that the skills required to manage an extensive infrastructure now exist primarily in the hands of these suppliers, not in on-premise teams. Google has announced that all of its systems are now secure; expect the others to follow suit if they have not already.

Now, business leaders should ask themselves: how much confidence do they have in the organization’s ability to manage these vulnerabilities in its own legacy systems? Many large enterprises do not accurately know where their equipment is located, how old it is or what applications are running on it.
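For teams that want a factual answer rather than a gut feeling, modern Linux kernels (4.15 and later) expose the mitigation status of each known CPU vulnerability through sysfs. The sketch below is a minimal illustration of auditing a single host; the `read_vuln_status` helper is my own naming for this article, not part of any standard tooling.

```python
import glob
import os

# On Linux 4.15+ the kernel reports per-vulnerability mitigation status
# under this sysfs directory (files named meltdown, spectre_v1, ...).
VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

def read_vuln_status(vuln_dir: str = VULN_DIR) -> dict:
    """Return {vulnerability_name: kernel_status} for this host, or {}
    if the kernel does not expose the interface (pre-4.15 or non-Linux)."""
    status = {}
    if os.path.isdir(vuln_dir):
        for path in sorted(glob.glob(os.path.join(vuln_dir, "*"))):
            with open(path) as fh:
                # Each file holds "Mitigation: ...", "Vulnerable",
                # or "Not affected".
                status[os.path.basename(path)] = fh.read().strip()
    return status

for name, state in read_vuln_status().items():
    flag = "OK " if state.startswith(("Mitigation", "Not affected")) else "!! "
    print(f"{flag}{name}: {state}")
```

Run across an estate, even a rough script like this quickly shows which hosts are patched and which are not, which is exactly the visibility question posed above.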

3. Mitigation is more than a fix. It requires long-term investment

While the first triage is underway, the collateral damage from incidents like Meltdown and Spectre will also affect budgets for years to come. Some systems are showing performance degradation of around 17 percent, and some as high as 30 percent, after the necessary patches are applied. The board must consider whether the organization can afford to compensate for such degradation on its on-premise/hybrid core systems. Is the budget available to make the necessary investments? We know that the hyper-scale public cloud providers have both the financial resources and the organization to replace hardware at incredible speed. While some of this remediation may lead to a temporary increase in costs and prices, their scale advantage will probably see them revert to competitive market pricing as soon as possible. To fix on-premise and hybrid systems, on the other hand, the board might be left with depreciating investments for years to come.
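Those degradation figures are heavily workload-dependent: the Meltdown patch (KPTI) mainly adds cost to kernel entry, so syscall-heavy systems suffer most while compute-bound ones barely notice. A rough way to put a number on your own hosts is to time a syscall-heavy loop before and after patching; `syscall_rate` below is an illustrative sketch, not a rigorous benchmark.

```python
import os
import time

def syscall_rate(n: int = 100_000) -> float:
    """Return getpid() calls per second; the cost of each call is
    dominated by kernel entry/exit, which is what KPTI makes dearer."""
    start = time.perf_counter()
    for _ in range(n):
        os.getpid()  # a cheap system call, repeated n times
    return n / (time.perf_counter() - start)

print(f"{syscall_rate():,.0f} getpid() calls/s")
```

Run on identical hardware pre- and post-patch, the ratio of the two rates gives a crude estimate of the syscall-path overhead your workloads will see.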

4. Vulnerabilities are a way of life in all sectors

With the sheer volume of actors involved in hacking and exploiting weaknesses, it is essential that the organization be on the front foot in discovering potential vulnerabilities or breaches. Let’s remember that it was Google’s Project Zero team that first discovered these vulnerabilities and then informed Intel. This finding illustrates the concentration of highly skilled technical expertise in the public cloud suppliers. Many of these capabilities also exist in the scale hybrid players. They, therefore, have the resources, the procurement clout, and the financial incentive to be first to resolve any identified vulnerabilities. How confident are business leaders that they have the resources and expertise to achieve this for their on-premise/hybrid systems?

5. Isn’t cloud infrastructure more vulnerable because it is shared?

Good question! The cloud suppliers are an enormous target for hackers and could, in theory, be more exposed when a vulnerability is exploited. In practice, paradoxically, they are less likely to bear the brunt of such weaknesses: first because of their hardware refresh programs, second because of their massive investment in security, and last because they will probably be the fastest to remediate should an exploit occur. Let’s remember that most of the notorious security breaches of the past year have involved proprietary on-premise or hybrid systems. Those that surfaced via the public cloud were mostly due to misconfiguration or lax security by the customers, not the cloud suppliers; these exposed companies would have been just as vulnerable using on-premise systems. Precisely because the public cloud suppliers understand their susceptibilities, they are better prepared and equipped to address them.

Conclusion

Separate the signal from the noise. Hyper-scale cloud is still the long-term future of technology. Big hybrid suppliers like Atos and DXC continue to offer solid intermediate routes to this destination. This horrible security flaw does not change the fundamental advantages that the hyper-scale cloud providers AWS, Azure, Google, and Oracle have over most enterprise technology teams: knowledge, deep financial pockets, global scale, procurement clout, and state-of-the-art processes and tools. This bouquet of capabilities means that accelerating a substantial commitment to the hyper-scale cloud makes even more sense than it did last week.

Advice: Don’t hit pause. If your cloud programs are well crafted, proceed at pace.

This article is published as part of the IDG Contributor Network.
