My recent post, The Internet of Things and the Cloud CIO of the Future, garnered a lot of attention and comments. One tweet by @abbielundberg said “agree w priorities but there’s more to CIO role.” Abbie, by the way, is former Editor-in-Chief of CIO Magazine, so she definitely knows whereof she speaks.
A good friend who spent years selling to CIOs once commented about CIO priorities: “They can only focus on three big things, and two of them are budgets and people, so don’t expect that it’s easy making your widget a top priority within the organization.”
There’s a lot of wisdom in Abbie’s and my friend’s perspectives, and it’s instructive to think about what the people side of cloud computing is going to look like—or, to put it another way, how will cloud computing change the various roles within an IT organization and how will it change their importance, relative to one another?
We believe it’s impossible to understand these questions without understanding the environment in which IT personnel will be working in the cloud computing future. Our prediction is scale: big data, more (virtual) servers, more applications, much larger applications, and many more highly elastic applications. In the past, growth in computing capacity was mirrored by a linear growth in headcount. It’s clear that this phenomenon, if it ever made sense, is unsustainable at the scale IT will have to operate in the future. Companies can’t—and won’t—support the scale growth with headcount as in the past.
The solution is quite clear: the substitution of software automation for what was manual interaction. This is the only possible way IT will be able to cope with the one, two, or three orders of magnitude of scale growth the future holds. And the personnel of the IT organization will need to implement and support that automation. Essentially, what was heretofore implemented manually must be standardized, captured in rules, and executed without human interaction. So what will it mean for IT people when automation is infused within the systems and processes? Here are five of the likely implications:
Enterprise architects become more important
The prerequisite for automation is standardization, and the bane is customization. IT organizations will be forced to implement standardized infrastructures, application architectures, and system automation. Developing, implementing, and enforcing standardized architectures requires skilled technical architects, and every IT organization will need this role desperately. Applications will become functionality added onto a collection of standardized components assembled in common configurations. Of course, many organizations have enterprise architects today, but their influence is often muted by “the needs of the business,” which causes non-standard applications to be implemented.
For company IT organizations to operate at cloud scale, those kinds of one-offs need to stop. On the other hand, the availability of public cloud providers tempts business units to pursue “shadow IT” initiatives, so it’s hard to predict how this will play out in specific companies. One thing is for sure, though: scale demands automation, which demands standardization. Which leads to the next changed role in a cloud IT organization.
Hands-off operations personnel come to the fore
There’s a lot of talk about devops, a term meaning that operations personnel and operations requirements are involved earlier in the application development cycle to ensure that the resulting application is scalable and supportable. That’s fine, but it implies that operations has designed infrastructure systems that can be operated in an automated fashion and that operations insists on operating in a hands-off manner. Devops is a shorthand term that subsumes many assumptions, including the ability of operations personnel to be involved early in order to avoid manual interaction later.
Any process that requires a human touch carries a bottleneck that will hamper operating at scale. As a side note, many of the private cloud plans I’ve seen envision whiz-bang infrastructure being operated in the same old way — so a resource request portal is offered for application developers, but all that happens is that the portal spits out an email for an operations person to provision some resources in the same old manual way. Any plan that envisions implementing a cloud infrastructure to dissuade application groups from using public providers without including an operations process re-engineering effort as well is doomed.
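To make the contrast concrete, here is a minimal sketch of what hands-off provisioning looks like: the self-service request goes straight through automated fulfillment against a standard catalog, with no email to a person and no manual step. All names here (ResourceRequest, provision, the catalog sizes) are illustrative, not any vendor’s actual API.

```python
# Hypothetical sketch of hands-off provisioning: a self-service request
# is fulfilled automatically from a standard catalog, with no human in
# the loop. Names and sizes are illustrative only.

from dataclasses import dataclass

@dataclass
class ResourceRequest:
    team: str
    cpu_cores: int
    memory_gb: int

# A fixed catalog of standard sizes -- standardization is what makes
# the automation possible in the first place.
CATALOG = {
    "small":  (2, 4),
    "medium": (4, 16),
    "large":  (8, 32),
}

def provision(request: ResourceRequest) -> str:
    """Map the request onto the smallest standard size that fits.

    In a real system this would call the virtualization layer's API;
    here it just returns an identifier to show the hands-off flow.
    """
    for name, (cores, mem) in CATALOG.items():
        if request.cpu_cores <= cores and request.memory_gb <= mem:
            return f"vm-{request.team}-{name}"
    raise ValueError("Request exceeds the standard catalog; no manual one-offs")

print(provision(ResourceRequest("analytics", 4, 8)))  # vm-analytics-medium
```

The key design point is the ValueError at the end: anything outside the catalog is refused rather than routed to a person for custom handling, which is exactly the discipline the portal-plus-email anti-pattern lacks.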
Security engineers design measures for a porous data center environment
The Jericho Forum refers to the new model of computing as “deperimeterized,” meaning that security can no longer focus on implementing measures at the boundary of the data center. This trend is exacerbated by the elastic, transitory nature of computing resources in cloud computing environments. Security must be applied at every computing endpoint, and must be implemented automatically as part of virtual machine initiation. This requires a rethink of the security products used, the methods by which they’re installed and configured, and how security is monitored. Security personnel need to develop a new strategy and, similar to the devops concept, get involved early so that the appropriate security measures and processes are automatically injected into every instantiated computing endpoint.
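One way to picture “injected at instantiation” is a non-negotiable security baseline that every instance launch starts from. The sketch below is purely illustrative (the setting names and the launch_instance function are invented for this example, not a real product): requesters can add settings, but the baseline is re-asserted after any overrides.

```python
# Hypothetical sketch: security applied as part of instance creation
# rather than after the fact. Setting names are illustrative
# placeholders, not any specific security product.

BASELINE_SECURITY = {
    "host_firewall": "default-deny",
    "monitoring_agent": "installed",
    "disk_encryption": "enabled",
}

def launch_instance(image, overrides=None):
    """Every instance starts from the security baseline.

    Teams may add or tighten settings, but the non-negotiable
    baseline items are re-asserted after any overrides, so no
    endpoint can launch without them.
    """
    config = dict(BASELINE_SECURITY)
    config.update(overrides or {})
    # Re-assert the mandatory settings -- overrides cannot weaken them.
    config["host_firewall"] = "default-deny"
    config["monitoring_agent"] = "installed"
    return {"image": image, "security": config}

instance = launch_instance("web-frontend", {"host_firewall": "open"})
print(instance["security"]["host_firewall"])  # default-deny
```

The point of the sketch is the ordering: the security decision happens inside the provisioning path, so a deperimeterized endpoint never exists, even briefly, without its measures in place.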
IT financial analysts provide real-time, sophisticated data
IT and business units need to make resource allocation decisions quickly and cost-effectively. My last post addressed the challenges of capacity planning in a cloud environment; I believe that in the near future, IT organizations will need the equivalent of what airlines refer to as yield management—financial analysts capable of developing pricing structures and offerings to allocate scarce internal resources and guide appropriate applications toward external providers. Amazon implements this today with its reserved instances and spot pricing—but its efforts are designed to increase use in order to raise utilization rates. For internal IT groups, the challenge is likely to be in the other direction: with so much demand and a limited resource pool, measures must be devised to reduce demand. A complementary requirement to this financial capability is an operational and system management capability to support hybrid cloud environments, making real-time application deployment decisions possible.
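A simple way to see how internal yield management differs from Amazon’s direction is a chargeback rate that rises as the internal pool fills up, nudging less critical workloads toward external providers. The curve and numbers below are illustrative assumptions, not a recommended pricing policy.

```python
# Hypothetical sketch of internal "yield management": the chargeback
# rate for a shared internal pool rises as utilization climbs,
# damping demand on scarce capacity. Thresholds and multipliers
# are illustrative only.

def internal_rate(base_rate, utilization):
    """Hourly chargeback rate as a function of pool utilization (0..1).

    Below 70% utilization there is slack, so the base rate applies;
    above that, the rate climbs steeply to reduce demand -- the
    opposite of spot pricing, which discounts to raise utilization.
    """
    if not 0.0 <= utilization <= 1.0:
        raise ValueError("utilization must be between 0 and 1")
    if utilization <= 0.7:
        return base_rate
    # Each point above 70% adds a surcharge proportional to scarcity.
    return base_rate * (1 + 4 * (utilization - 0.7))

print(internal_rate(0.10, 0.5))            # slack capacity: base rate
print(round(internal_rate(0.10, 0.95), 2)) # nearly full: rate doubles
```

The interesting organizational question isn’t the formula—it’s that someone in IT has to own it, monitor it, and defend it to the business units paying the surcharge.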
Legal and regulatory compliance personnel become part of cloud infrastructure teams
The vision of automated resource availability clashes with after-the-fact compliance review. For cloud computing to achieve its vision, the legal and regulatory compliance requirements for applications must be part of the provisioning process. The insights of compliance personnel must be integrated into the service catalog that is provided to resource consumers, which means that these skills need to be part of the infrastructure and operations group. Part of the decision tree for that real-time deployment decision has to be the compliance implications of deployment location, which requires integration of these requirements into the automated provisioning process.
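What “compliance in the decision tree” might look like in practice is a placement check that runs at provisioning time rather than in an after-the-fact review. The classifications, regions, and function below are invented for illustration; the real rules would come from the compliance personnel embedded in the infrastructure team.

```python
# Hypothetical sketch: compliance rules encoded into the automated
# placement decision. Classifications and region names are
# illustrative, not a real regulatory mapping.

ALLOWED_REGIONS = {
    "public":       {"us-east", "eu-west", "ap-south"},
    "confidential": {"us-east", "eu-west"},
    "regulated":    {"eu-west"},   # e.g. data that must stay in one jurisdiction
}

def choose_region(classification, preferred):
    """Return the first preferred region the data classification permits.

    The compliance check is part of provisioning itself -- a
    non-compliant placement fails immediately instead of being
    caught in a later audit.
    """
    allowed = ALLOWED_REGIONS[classification]
    for region in preferred:
        if region in allowed:
            return region
    raise PermissionError(f"No compliant region for {classification!r} data")

print(choose_region("regulated", ["us-east", "eu-west"]))  # eu-west
```

Encoding the rules this way is exactly why compliance skills need to sit inside the infrastructure and operations group: someone has to translate legal requirements into the tables the provisioning system consults.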
You’ll notice that only one of these five changes relates to infrastructure personnel. It’s an unfortunate reality that many people view cloud computing as purely an infrastructure modernization project without recognizing the further ramifications of running a cloud computing environment. That’s not surprising, but it presages a lengthy disillusionment as those broader ramifications begin to sink in. What one can say with confidence is that when infrastructure changes, so too must the superstructure in order to align with the underlying foundation.
This is familiar territory for those acquainted with Clayton Christensen. He extensively addresses the challenges institutions face when attempting to apply an innovation without modifying the general practices of the organization. His prescription for this is for the existing institution to set up a separate, segregated organization chartered with implementing the innovation and achieving the necessary operational and financial results. Applied to IT organizations, this would advocate setting up a cloud “subsidiary,” chartered with creating a new mode of operating. The challenges of doing such a thing for most IT organizations are obvious, but it’s a thought.
Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of “Virtualization for Dummies,” the best-selling book on virtualization to date.
Follow Bernard Golden on Twitter @bernardgolden. Follow everything from CIO.com on Twitter @CIOonline