by Bernard Golden

How Cloud Computing Will Change IT: 6 New Predictions

Jul 29, 2011 · 10 mins
Virtualization and Cloud blogger Bernard Golden presents a clear picture of what the IT world will look like when cloud computing becomes the status quo. The future of IT has never looked more promising.

IT is in a time of disruptive transition, caused by the rise of cloud computing. CIOs are in the midst of a maelstrom, and—like Ulysses, the fabled hero from Homer’s Odyssey—are torn between the Scylla of established IT practices and the Charybdis of the future, both of which loom dangerously and portend trouble. Also like Ulysses, many CIOs must inure themselves to the din of tempting Sirens: the vendors who sing a sweet song of painless cloud transformation, made possible by the purchase of some software, or hardware, or a set of cloud services.

One can predict that CIOs, like Ulysses, will eventually pass into calm waters: a future in which new processes and products replace the legacy activities that make up today's IT world. The shorthand term for these new entities is cloud computing.

It’s hard to envision that new world, of course, caught up as we are in the turmoil of today. Nevertheless, in my opinion, one can make confident predictions about how the cloud revolution will materialize. The light emanating from the cloud is strong enough that the outlines of the post-cloud future may be discerned.

By post-cloud, I mean when cloud is no longer an option to be compared with today’s IT conventions, when cloud computing has become the accepted, standard way of doing things. Today, cloud computing is viewed as a perturbation of the established order, but one day—and not so far off, by my reckoning—it will represent the status quo.

What will that status quo look like? Here are a few trends we can expect:

Enormous scale is quotidian. Every system will be designed to process huge amounts of data. Every application will be elastic, able to respond to changing currents in the flood of bytes. When systems are designed, no one will ask about capacity, because everyone will assume that potential capacity is effectively infinite. Design efforts will therefore assume that, no matter how much data an application manages or how many virtual machines its topology contains, it must be able to expand to handle more. In essence, systems will be architected for a world of “the illusion of infinite capacity.”
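To make the idea concrete, here is a minimal sketch of what designing for "the illusion of infinite capacity" might look like: a scaling policy that only ever asks "how many workers does this load need?" and never "how many do we have room for?" Everything here is illustrative; the function names and numbers are invented, not drawn from any particular cloud API.

```python
# Illustrative autoscaling policy: capacity is treated as unbounded, so the
# calculation scales to the workload with no fixed ceiling baked in.

def desired_workers(queue_depth: int, per_worker_throughput: int) -> int:
    """Return the worker count needed to drain the queue; no upper bound."""
    if queue_depth == 0:
        return 1  # keep a minimum footprint when idle
    # Ceiling division: enough workers to absorb the entire backlog.
    return -(-queue_depth // per_worker_throughput)

# A burst of 10,500 queued items, each worker handling 1,000 per interval:
print(desired_workers(10_500, 1_000))  # 11 workers requested, uncapped
print(desired_workers(0, 1_000))       # 1: scale back down when quiet
```

The point of the sketch is what is absent: there is no maximum-capacity check anywhere, because the post-cloud assumption is that the infrastructure can always supply more.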

The Internet of Things Comes to Pass. Cisco’s CTO has predicted that, in the near future, one trillion devices will be attached to the Internet. Many people have predicted that we’re entering the post-PC world.

Guess what? We’re actually going to be entering the post-human-centered device world. There will, of course, still be general-purpose devices that humans interact with: PCs, smartphones, tablets. But we’ll be surrounded by far, far greater numbers of special-purpose devices that each execute one function and communicate with a centralized program running in the cloud, which in turn will do something that we (or someone acting as our proxy) find valuable.

For example, we won’t look at our watch to read our blood pressure. The watch will take the blood pressure and send it off to a monitoring system, which will raise an alert for a health care professional if warranted, based on typical medical experience and the specifics of our individual health situation. We will be surrounded by these kinds of devices and won’t even pay attention to them unless we need to.
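The device-to-cloud flow described above can be sketched in a few lines. This is a hypothetical illustration: the device ID, threshold, and JSON field names are all invented for the example, and a real system would personalize the alert logic to the patient.

```python
import json

# Hypothetical flow: a wearable packages a blood-pressure reading as JSON,
# and a cloud-side monitoring service decides whether to alert a clinician.
# The threshold and message format are invented for illustration.

ALERT_SYSTOLIC = 180  # example cutoff; real systems would individualize this

def build_reading(device_id: str, systolic: int, diastolic: int) -> str:
    """Package a reading as the JSON payload a device might send."""
    return json.dumps({"device": device_id,
                       "systolic": systolic,
                       "diastolic": diastolic})

def needs_alert(payload: str) -> bool:
    """Cloud-side check: flag readings at or above the example threshold."""
    reading = json.loads(payload)
    return reading["systolic"] >= ALERT_SYSTOLIC

msg = build_reading("watch-42", 185, 110)
print(needs_alert(msg))  # True: this reading would page a clinician
```

The human never touches this exchange; the watch, the cloud service, and the clinician's pager do all the interacting, which is exactly the post-human-centered pattern.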

It’s not easy to understand how this will really play out. Even people in the industry, who should understand this dynamic, consistently underestimate what will happen. A decade ago, I was discussing with the CEO of an analog chip company a prototype smart refrigerator then being touted by an appliance manufacturer. He predicted that in the future the refrigerator would have an interface on which you could enter your shopping list, based on your observation of the levels of different groceries in the fridge. I responded, no: the milk carton will determine that milk is running low and contact the fridge to add milk to the list. No, he replied, it would be too expensive to put that functionality in the milk carton. He was carrying his traditional assumptions about cost and functionality into the discussion instead of extrapolating the trend.

In retrospect, it’s clear he was underestimating how things would play out. Actually, so was I. Today it’s clear that the milk carton would contact your in-cloud shopping list app, and that app would contact your selected grocery store to arrange for your weekly order to contain milk. That’s why the Internet of Things will result in “applications” well beyond what we can imagine. Just as I was writing this post, I came across a video from Toyota showing how car windows might become interface devices. We are not far from the day when computing devices that humans never interact with vastly outnumber the devices they do.

The cost of IT components declines precipitously. I’m not referring to chips or disk drives. I mean every part of the IT supply chain. Operating systems, middleware, application software—today’s holdouts against commoditization—will become much cheaper. If not, they’ll be replaced by free open source software components.

Why can I make such a prediction? It’s obvious: If we’re going to get to the scale predicted earlier, the individual components have to get cheaper. It can’t work any other way. Today, I hear many individuals opining that software vendors “won’t allow” the shift to cloud computing to erode their pricing or profitability. I’ve got news for those who hold this opinion: The vendors won’t have a choice. If the incumbents resist this trend, new entrants with market-friendly pricing will replace them.

Paradoxically, total spend on IT will increase, by a lot. Among people involved in cloud computing, there is much discussion of the Jevons Paradox, which holds that cost reductions in a good or service, rather than reducing total spend, actually increase it. This increase will be driven by the fact that IT functionality infuses today’s business offerings. Every new business offering contains IT, so growth initiatives will, ipso facto, increase IT investment. The difference between this situation and today’s circumstances is that IT won’t be a back-office support function but a customer-facing prerequisite. IT will achieve its long-voiced goal of being a partner of the business units rather than a denigrated afterthought.

IT Restructures IT. The flip side of being part of the business is running like a business. Part of that will require transparency of costs. The rise of public cloud providers has provided a benchmark that internal IT will be compared against. Not being able to offer comparable transparency will be the kiss of death.

With transparency of cost will come a deployment-decision approach in which cost is one of several factors (including privacy, compliance, application bandwidth and latency requirements, and so on) that determine whether an application is deployed internally or externally. The assumption that applications deploy internally by default, with occasional exceptions made externally, is a fantasy born of outdated assumptions. Smart CIOs will come to recognize that their role is managing infrastructure, not owning assets. Less informed CIOs will find themselves bypassed by user organizations.
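A deployment decision of this kind can be expressed as a simple rule set: hard constraints like privacy and latency are checked first, and only then does transparent cost comparison decide. The factor names and rules below are invented to mirror the text, not taken from any real tool.

```python
# Illustrative deployment decision: cost is one factor among several.
# Field names, thresholds, and rules are hypothetical.

def choose_deployment(app: dict) -> str:
    """Return 'internal' or 'external' for an application profile."""
    # Hard constraints first: some requirements force internal deployment.
    if app["data_privacy"] == "restricted":
        return "internal"
    if app["compliance"] == "in-house-only":
        return "internal"
    if app["latency_ms_required"] < 5:  # e.g., tightly coupled to on-prem systems
        return "internal"
    # No constraint triggered: let the transparent cost comparison decide.
    return "external" if app["external_cost"] < app["internal_cost"] else "internal"

app = {"data_privacy": "public", "compliance": "standard",
       "latency_ms_required": 50, "external_cost": 800, "internal_cost": 1200}
print(choose_deployment(app))  # external: no constraints, and it's cheaper
```

The design choice worth noting is the ordering: constraints veto before cost ever gets a vote, which is exactly why "internal by default" and "cheapest always wins" are both the wrong mental model.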

Along those lines, the biggest challenge IT organizations will face on the road to the post-cloud world is legacy systems. These systems represent an enormous drag on IT’s ability to align with business users who want a partner in developing new IT-infused offerings. In the post-cloud world, it won’t be enough to manage legacy applications with as little additional spend as possible. Even with low additional investment, these applications carry a cost structure for maintenance and operations far higher than today’s offerings. For IT to be relevant, it must reduce total legacy spend. Certainly, organizations are looking at this today, for example by moving email to an external provider. But IT organizations have to be much more aggressive; otherwise, not nearly enough budget will be available for the necessary new work. Every CIO needs to evaluate existing systems and come up with a plan to reduce their cost, whether by replacing them with a SaaS equivalent or outsourcing their operation to a cheaper provider.

PaaS is where it’s at. Too many people think of cloud computing as virtual machines on demand. The industry is rapidly moving beyond that. Application developers waste their time when they have to architect apps to implement scalability and elasticity themselves. The infrastructure should handle that, freeing developers to focus on business functionality, not plumbing. The path to that is platform-as-a-service (PaaS). The post-cloud IT organization will rely heavily on PaaS, using an internal or external organization to manage the underlying functionality and infrastructure. Little value and no differentiation are available at those levels, so find a way to manage them in the most efficient and cost-effective way possible, and provide an environment that boosts the productivity of application developers.

Application developer shortage. The Jevons Paradox means an explosion of IT demand, in particular demand for application creators. People who know how to build business offerings, integrate multiple applications into a new one, and adroitly implement calls to external APIs and services will be in high, high demand. Even the rise of PaaS will, paradoxically, increase demand for app developers: higher productivity means lower unit costs, which leads to increased demand.

But these aren’t just any app developers. This isn’t about knowing a language or a framework. This kind of application development is akin to being a general contractor: assembling a set of internal and external components and services to deliver functionality. Think mashups on steroids. With the shift of IT investment toward apps, there will be a shortage of people who can create post-cloud apps, so as a cloud CIO, start thinking now about your strategy for obtaining this talent.
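The "general contractor" style of development can be sketched as follows: the application itself is mostly glue that composes other services into a new offering. The service names and functions below are hypothetical stand-ins for real internal and external APIs.

```python
# Sketch of "general contractor" development: the app is thin glue over
# other people's services. Both fetchers are invented stand-ins.

def fetch_customer(customer_id: int) -> dict:
    """Stand-in for a call to an internal CRM service."""
    return {"id": customer_id, "city": "Oakland"}

def fetch_weather(city: str) -> dict:
    """Stand-in for a call to an external weather API."""
    return {"city": city, "forecast": "sunny"}

def build_offer(customer_id: int) -> dict:
    """Compose two services into a new, customer-facing offering."""
    customer = fetch_customer(customer_id)
    weather = fetch_weather(customer["city"])
    product = "sunscreen" if weather["forecast"] == "sunny" else "umbrella"
    return {"customer": customer["id"], "suggested_product": product}

print(build_offer(7))  # {'customer': 7, 'suggested_product': 'sunscreen'}
```

Notice that almost no "business logic" lives in the app: the value comes from knowing which services to assemble and how to stitch their results together, which is precisely the skill the article predicts will be scarce.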

I hope you’ve enjoyed this tour through post-cloud IT. I expect I’ll get a ton of comments. Some of them will be along the lines of “cloud is great, but it’s just one option for us in IT to select among.” These are people unable to break free of their past and unable to recognize the rapidly changing world around them. Another set of comments will be along the lines of “we’ve always had cloud, it was just [timeshare, mainframe, etc.].” These are people who fail to understand how similar concepts, when executed in different circumstances, are completely different beasts. A final set of comments will be “cloud is great, we’re building one by making a service catalog of predefined virtual machines available.” These people fail to comprehend how rapidly the world is moving beyond the virtual machine as the unit of application deployment. If you have a comment, feel free to weigh in.

Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of “Virtualization for Dummies,” the best-selling book on virtualization to date.

Follow Bernard Golden on Twitter @bernardgolden. Follow everything from CIO on Twitter @CIOonline.