Gartner released its annual “Top 10 Strategic Technologies for 2009” last week, and pride of place goes to virtualization, right at the top of the list. More surprising, perhaps, is the fact that Gartner placed Cloud Computing directly below virtualization, in the second spot.
You’ve probably seen coverage of the list and feel you’ve gotten the gist of it from the commentary. That’s what I thought, too—until I read it, which I recommend you do as well. Gartner’s discussion of each of the trends is illuminating, both for what they say—and what they don’t.
In discussing virtualization, Gartner notes that while server consolidation has been a huge growth area for the technology, 2009 will see storage and client virtualization become strong trends as well. Gartner rolls data de-duplication in under storage virtualization, although I must say that I view de-duplication as separate from virtualization—de-duplication is an initiative that makes sense and should be undertaken, possibly in partnership with storage virtualization, but is not a prerequisite for it. It might be more accurate to say that storage virtualization makes de-duplication possible: the various copies of data that formerly resided on physically separate machines, with no way to identify the duplicates among them, can now be mapped and reduced to a single copy.
Turning to client virtualization, Gartner uses some curious language to discuss the phenomenon: “instead of the motherboard function being located in the data center as hardware [i.e., as individual blades], it is located there as a virtual machine bubble.” I’m not sure that using the term bubble really clarifies what client virtualization is: the move from putting an end user operating environment on a dedicated piece of hardware, whether a local PC or a data center-based blade, to putting an end user operating environment into a virtual machine which resides (and co-exists with other virtual client machines) on shared hardware. Or, to put it more in Gartner’s terms, multiple client virtual operating environments cooperatively existing on a single motherboard.
Gartner goes on to deemphasize the client virtualization trend, stating that “despite ambitious deployment plans from many organizations, deployments of hosted virtual desktop capabilities will be adopted by fewer than 40 percent of target users by 2010.”
Without wishing to criticize Gartner, I think it does this trend an enormous disservice to characterize it with language that appears to downplay its strength. Simply stated, the move to client virtualization is, from an organizational impact perspective, far greater than server consolidation. Server consolidation is a back-room technology primarily of importance to IT operations. In the terms of Clayton Christensen (author of The Innovator’s Dilemma), server virtualization is a sustaining innovation, in that it improves an existing product.
Client virtualization, by contrast, dramatically changes the entire value chain by which the client environment is delivered to end users. Many discussions of client virtualization focus on the fact that, when done right, the end user sees no difference in his or her screen whether it is delivered via a traditional “thick” client or through a virtualized environment. That is all to the good, and, frankly, if client virtualization imposed a significant difference from the traditional thick client experience, it would most likely be a non-starter.
However, the method by which that identical screen is delivered to the end user is significantly different in a client virtualization scenario. This means that the processes and operations of delivering the client environment must change—a lot—to achieve the benefits of client virtualization (more on that in a bit).
To begin with, new hardware must be placed in the data center to run the virtualized machines. So the cost of creating the operating environment is a necessary investment to put client virtualization in place.
Second, individual virtualized machines must be created and made available on the new data center-located machines. In other words, there must be a migration of the existing physical machines into new virtual machines.
Third, the organization’s network capability with regard to capacity and latency must be tested and, if necessary, upgraded to support the flow of data between the client devices and the data center. A network that was previously very capable of carrying the data traffic between thick clients and server-based applications may not be robust enough to carry the increased traffic characteristic of client virtualization.
Fourth, the established processes the organization uses to manage client machines will need to be modified. Simply put, a lot of the work formerly necessary to keep client machines up and running goes away with client virtualization. No more worrying about whether the antivirus software is up to date. No more having to make “truck rolls” (i.e., in-person visits) to figure out what’s wrong with the machine. The client environment is created on-the-fly back in the data center and served up fresh each time the user logs in. Some (but not, crucially, all) of this work is displaced back to the data center, which needs to have people manage administration of user environments, updating the images from which new virtual machines are created, and so on.
So, it’s easy to see that there is a lot of churn in moving to client virtualization—which is why Gartner’s statement should have been “as much as 40% of companies will undertake client virtualization by 2010.” To my mind, the fact that four out of ten companies will take on the work I outlined above in order to implement client virtualization indicates that it must offer significant—nay, remarkable—payoff to make that 40% ready to undergo that burden.
So what is that payoff? Why is client virtualization a big deal?
Number one, depending on how it’s implemented, client virtualization can operate on lower spec hardware at the end user location, which offers hardware savings for new machines as well as the opportunity to stretch out the useful lives of already-existing client machines. So right off the bat, there’s some capital expenditure avoidance possible with client virtualization. While the savings on each machine may not be huge, when applied over hundreds or thousands of end users, the money can add up fast. Naturally, some of those savings must be applied to the additional hardware necessary in the data center, but net-net client virtualization should offer savings in this arena.
Number two, remember what I said about some—but not all—of the cost savings from less client-side work being transferred to additional work in the data center? It’s true that some of the savings are spent, but the rest of the avoided IT operations costs aren’t spent. It’s hard to estimate what that percentage will be, but considering the amount of money spent on help desks, personal visits on-site to deal with software problems, and so on, it could come to a pretty penny, indeed.
Finally, and perhaps most important, there is the money saved by end users who are no longer stuck sitting idle when their PC gets hosed. Every time someone has to stop working because their machine breaks, productivity is lost. This lost labor cost far outweighs the cost of hardware and software devoted to employees, so using client virtualization to keep client machines up and running can provide enormous financial returns.
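To pull the three savings categories together, here is a back-of-envelope sketch of the arithmetic. Every figure in it is a hypothetical assumption of mine, chosen purely for illustration—none comes from Gartner or from any real deployment, and any organization would need to substitute its own numbers:

```python
# Hypothetical model of the three client virtualization savings categories
# discussed above. All per-user figures are illustrative assumptions.

def annual_savings(users,
                   hw_savings_per_user=150.0,    # cheaper / longer-lived client hardware
                   ops_savings_per_user=200.0,   # avoided help desk work and truck rolls
                   downtime_hours_saved=4.0,     # productive hours recovered per user per year
                   loaded_hourly_labor=50.0,     # fully loaded labor cost per hour
                   dc_cost_per_user=120.0):      # added data center hardware and administration
    """Net annual savings across hardware, IT operations, and productivity."""
    hardware = users * hw_savings_per_user
    operations = users * ops_savings_per_user
    productivity = users * downtime_hours_saved * loaded_hourly_labor
    data_center = users * dc_cost_per_user
    return hardware + operations + productivity - data_center

# For a hypothetical 1,000-user organization under these assumptions:
print(annual_savings(1000))  # 150000 + 200000 + 200000 - 120000 = 430000.0
```

Even with modest per-user assumptions, the total scales linearly with headcount, which is why the savings "add up fast" across hundreds or thousands of end users.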
Given the financial benefits client virtualization offers, why doesn’t everyone take advantage of it at once? As I mentioned earlier, server consolidation has taken off because it is a sustaining innovation: it can be applied with very little change in behavior or processes. By contrast, realizing the benefits of client virtualization requires significant change in those areas—and behavior and process change is always more difficult than technology change. Furthermore, the financial benefits of client virtualization don’t really kick in when only a portion of the infrastructure is migrated—because you continue carrying the costs of the help desk, on-site support capability, and so on—so in fact, a partial client virtualization implementation actually adds to your costs. It’s only when the majority of the client machines are migrated that the cost savings start to accrue.
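The partial-migration point can be sketched numerically as well. Again, all figures here are hypothetical assumptions used only to illustrate the shape of the cost curve, not real data:

```python
# Illustrative sketch: legacy client support costs (help desk, truck rolls,
# on-site capability) stay roughly fixed until nearly everyone is migrated,
# so a partial rollout adds data center cost without shedding the old ones.
# All numbers are hypothetical assumptions.

def total_support_cost(fraction_migrated,
                       legacy_support=500_000.0,  # annual cost of traditional client support
                       dc_cost_full=300_000.0):   # annual data center cost at 100% migration
    # Assume legacy support only winds down once the vast majority have
    # moved; model it as fixed until 80% migrated, then tapering to zero.
    if fraction_migrated < 0.8:
        legacy = legacy_support
    else:
        legacy = legacy_support * (1.0 - fraction_migrated) / 0.2
    return legacy + dc_cost_full * fraction_migrated

baseline = total_support_cost(0.0)  # 500,000: no virtualization
partial = total_support_cost(0.5)   # 650,000: costs go UP at 50% migrated
full = total_support_cost(1.0)      # 300,000: savings appear only at full migration
```

Under these assumptions the halfway point is the most expensive state of all, which is exactly why a partial implementation adds to costs rather than reducing them.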
So I’m actually impressed with Gartner’s prediction that 40 percent of organizations will make the move to client virtualization by 2010. For that percentage of organizations to do so demonstrates the magnitude of the financial rewards client virtualization provides, given the organizational challenge presented by the necessary behavior and process changes. Gartner actually may be optimistic in their forecast, but only in the timescale, not in the ultimate adoption.
Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of “Virtualization for Dummies,” the best-selling book on virtualization to date.