by Bernard Golden

How Cloud Computing Puts Adverse Selection in Its Place

Opinion
Oct 30, 2013 | 8 mins
Cloud Computing | Developer | System Management

For years, operations departments have used adverse selection principles to allocate resources, often deeming small projects unworthy of enterprise computing power. Today, though, the cloud makes computing so cheap that there's no reason to deny any project, no matter how small. Doing so will simply push users to the public cloud -- and beyond IT's control.

My last blog post used the phrase "policies vs. permissions" to address one of the thorniest aspects of cloud computing. Automation, especially as represented by resource user self-service, is one of the key underpinnings of cloud computing.

As I noted in a recent piece on agility, it's cloud computing's automation that makes agility economic. Absent automation, agility would be an unaffordable pipe dream.

I find fascinating the feedback I often get from readers and audiences. On this topic, it can essentially be boiled down to this: Automation is fine for simple applications, but when you get to “real” production applications, skilled tuning is necessary to ensure sufficient performance and response times. This can be understood as a statement that operations talent is still required, even in the world of cloud computing.

This feedback is often communicated in rather condescending terms, as though I’m unable to comprehend the weighty complexities that operations groups must deal with each day in their heroic efforts to keep the IT factory humming.

Actually, I’m pretty familiar with the skills required to tune the complicated admixture of infrastructure, hardware, middleware, application software and configurations that affect the performance and uptime of a complex application. During one part of my career, I worked at a database software company; we devoted enormous effort to helping customers wring sufficient performance from their applications. My group was responsible for the networking portion of the architecture, and we dealt with mind-numbing details of packet configurations and software stack performance analysis.

I’m never really sure how to respond to those who raise the issue of managing complex applications. They’re undoubtedly correct. There’s no question that some applications require skilled talent to diagnose and treat performance bottlenecks to assure optimum application health.

Operations Can’t Put All Its Eggs in Big Baskets

The question, really, is this: Of a company’s total application portfolio, what proportion is represented by such complex applications, and what proportion is represented by less complex applications that can be fully satisfied by self-service, less capable computing environments?

This is a critical issue for those who assert that simple self-service environments are insufficient to address application needs. Many who criticize my advocacy of self-service seem to imply that things need to remain as they have traditionally been — that application groups request resources, and operations groups manually provision those resources, making them available once the provisioning process is complete. This undoubtedly enables skilled operations personnel to perform custom configuration and tuning, thereby assuring applications can achieve optimum performance.

The problem with that approach is that, in the phrase of Clayton Christensen, it overserves those who don’t require such complex capability. Someone who needs a virtual machine to perform a quick prototype of a website doesn’t need someone to analyze database throughput and total network traffic.

Someone who’s going to run some analytics on a standard Hadoop setup doesn’t need anyone to hand configure a cluster. For people such as these, automated self-service is more than capable of addressing their needs. If manual procedures designed to address more complex requirements are imposed on them, they’ll seek out a more convenient alternative to solve their problem.
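To make concrete what "automated self-service" means here, consider this minimal, purely hypothetical sketch (the function, policy set and size names are all invented; a real implementation would call a cloud provider's provisioning API): a request that fits a pre-agreed policy is fulfilled instantly, with no ticket queue, while anything outside policy is routed to the high-touch path.

```python
# Hypothetical sketch of policy-based self-service provisioning.
# The policy is encoded once, up front, so the common case never
# waits on a human; only out-of-policy requests escalate.

ALLOWED_SIZES = {"small", "medium"}  # invented policy limits

def self_service_provision(user, size):
    """Grant a VM immediately if the request fits policy; no human in the loop."""
    if size not in ALLOWED_SIZES:
        return {"granted": False, "reason": f"{size} requires an operations review"}
    # In a real cloud this would be an API call; here we just return a record.
    return {"granted": True, "owner": user, "size": size}

print(self_service_provision("dev1", "small"))   # granted instantly
print(self_service_provision("dev1", "xlarge"))  # escalates to the high-touch path
```

The design point is that policy replaces permission: the limits are negotiated once, and the quick-prototype user never has to ask.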

Here’s an analogy. If I have an extremely high fever, I definitely want to see a doctor — a skilled professional with enormous knowledge and experience. On the other hand, if I have a headache, I’ll just go to a drugstore and buy some ibuprofen. If someone tried to impose a rule that I need to see a doctor in order to get a prescription for ibuprofen, I’d just ignore it and make my own way to the drugstore, thereby solving my problem in a way that’s convenient to me.

As I see it, the biggest threat to existing IT groups is that they misunderstand the demand profile for computing resources. Because it’s been so hard for application groups to obtain resources, only the most important — that is, the most complex production applications — have traditionally been able to get their needs addressed.

Simple, quick-and-dirty applications can’t justify the cost and overhead of the traditional provisioning process and, inevitably, are ignored. They are, in a sense, not important enough to warrant the high-touch attention associated with traditional operations practices.

Adverse Selection Will Turn Off Would-be Corporate Cloud Users

Cloud computing now offers those overserved users with “unimportant” applications a way to get their needs addressed. It’s the computing equivalent of the self-serve drugstore. Failing to satisfy these users with streamlined, self-service resource provisioning practically ensures they will flee to public cloud computing environments.

As I said, the critical question is how large a proportion of total demand these kinds of applications represent. It’s difficult to answer that question; historical practices mean that most of this type of demand is unexpressed due to the difficulty of justifying it. I believe, however, that it in fact represents a very large majority of total potential demand — and that failing to address it via automation poses a real problem for IT.

This is because of what might be termed the problem of IT adverse selection. This concept, as you’re probably aware, is one associated with insurance — and much in the air recently as a result of the launch of the Affordable Care Act. No matter what perspective one holds regarding the Act, all parties agree that one of the critical issues that health care must address is the makeup of the insured population — how to ensure that a large enough number of subscribers are in the insured population and that they represent a mix of healthiness.

The key challenge in health insurance is motivating a sufficient number of healthy people to participate in the program to enable coverage of less healthy, more costly subscribers. Unless enough subscribers in the population are in good health, the high cost of treating too large a proportion of sick people will ruin the economics of the plan and force extremely high payments on the smaller, sicker population that remains.

Without this mix of healthy and less-healthy subscribers, the insurance program inevitably fails, because the high cost applied to the remaining sickly subscribers causes some to drop out, which leaves an even smaller population of subscribers to absorb the cost of running the system, and so on. In other words, retaining healthy people is the key to being able to support the treatment of people who require a lot of medical attention.

If the IT analogy isn’t clear, let me spell it out. Failing to address the needs of low-touch users raises the likelihood that they will exit the IT system by using public cloud computing on their own, leaving IT with only the high-cost, high-touch users and applications. Without the subsidy of the low-touch users contributing to the overall budget of IT, an increasingly large cost will fall upon those users who require skilled support.

As costs go up, fewer applications will be able to justify themselves and will be removed from the portfolio (or, perhaps, flee to low-cost outsourcing arrangements such as managed service providers, which one might regard as the equivalent of medical tourism). Eventually, the pool of internal applications will shrink to such an extent that the cost of internal IT will be unsupportable.
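The spiral is simple enough to put in arithmetic. Here is a toy simulation (all numbers are invented for illustration): a fixed operations budget is divided evenly among the applications that remain, and each round any application whose assumed business value falls below its cost share exits to the public cloud, raising the share for everyone left.

```python
# Toy model of the IT adverse-selection spiral: a fixed budget,
# split evenly, with applications exiting whenever their value
# no longer covers their cost share. All figures are hypothetical.

FIXED_IT_COST = 1000.0
# 20 small apps worth 30 each, plus 5 complex apps of varying value
app_values = [30.0] * 20 + [150.0, 200.0, 250.0, 300.0, 400.0]

round_no = 0
while app_values:
    round_no += 1
    per_app_cost = FIXED_IT_COST / len(app_values)
    survivors = [v for v in app_values if v >= per_app_cost]
    print(f"round {round_no}: {len(app_values)} apps share the cost, "
          f"{per_app_cost:.0f} each; {len(app_values) - len(survivors)} leave")
    if len(survivors) == len(app_values):
        break  # pool is stable
    app_values = survivors
```

With these numbers, once the 20 small applications leave, the per-application share jumps from 40 to 200, pushing out the marginal complex applications one by one until nothing is left to carry the budget — the whole pool collapses in five rounds.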

It’s fair to say that few IT organizations recognize how critical it is to enable automation and self-service. It’s not an overstatement to say that satisfying application group needs for agility and scale is vital for IT to have a viable future in corporations. Falling back on easy truisms such as “Automation can’t solve the needs of applications that require tuning” as a way to avoid empowering users who are overserved by today’s manual processes is a dangerous path to follow and a foolish strategy to pursue.

Bernard Golden is senior director of Cloud Computing Enterprise Solutions group at Dell. Prior to that, he was vice president of Enterprise Solutions for Enstratius Networks, a cloud management software company, which Dell acquired in May 2013. He is the author of three books on virtualization and cloud computing, including Virtualization for Dummies. Follow Bernard Golden on Twitter @bernardgolden.

Follow everything from CIO.com on Twitter @CIOonline, Facebook, Google + and LinkedIn.