The early days of x86 server virtualisation were full of promise. Consolidation of physical servers often led to immediate hardware cost savings and a reduction in administrative overhead.
Around three years ago, however, our research surveys started telling us that much of the early adoption activity was hitting a wall.
Despite the initial benefits, we were hearing reports of issues arising from the uncontrolled proliferation of virtual machine images.
In effect, the ease with which images could be created and cloned meant the physical sprawl that consolidation initiatives were originally tackling was rearing its ugly head again in a different guise: virtual server sprawl.
As a result, some were reporting that challenges around activities such as configuration management, server side security and software asset management were actually being aggravated as virtualisation activity was scaled up.
Not only were there more servers to manage, albeit virtual rather than physical ones, but you also had the problem of tracking, patching and otherwise maintaining dormant images, and wading through the complexities and uncertainties of licensing software in a virtual environment.
The lesson often learned was that you can only take virtualisation so far without running into complexity issues, which then start working against you unless you directly address them.
At some point, this generally translates to revisiting your management environment, and beefing up both your tools and processes to deal with a combined physical and virtual server estate in a coherent manner.
But this is easy to say and hard to do if you are not starting from a sound management footing in the first place.
The truth is that even before virtualisation-related challenges are taken into account, IT professionals are already typically relying on a fragmented and disjointed set of facilities and procedures to keep things running and to implement changes.
Ironically, new management solutions introduced specifically to deal with virtual servers therefore often just add to the tooling tangle.
Apart from increasing management friction over time, this fragmentation also makes it more difficult to identify candidate applications for virtualisation as consolidation initiatives progress.
Once you have picked the low hanging fruit of small-footprint departmental and workgroup related applications, you might move on to some of the larger virtualisation-friendly packages and systems.
But the more you push into the bigger and more critical systems space, the more dubious the returns, and there’s no point in virtualising for the sake of it.
No wonder then that we see so many organisations reaching a certain point then essentially stalling with their virtualisation programmes.
Whether it’s at the 50, 60 or 70 per cent fully-virtualised level, management complexity, tools-related constraints and fewer obvious targets mean the law of diminishing returns ultimately kicks in and puts a stop to significant further activity.
So where do you go once things start to stall?
More to the point for most who are still in the relatively early days of virtualisation, how do you prevent things getting to that stage?
A lot of people are talking about private cloud as the natural next step that follows traditional x86 virtualisation initiatives.
The idea is to pool servers and storage to form a single logical resource that can be used to support everything from small-footprint apps that would historically run on a single server, to large scale workloads requiring the power of many servers.
With an ability to rapidly allocate resources to any given workload, or reclaim resources from it, private cloud architecture enables genuinely dynamic workload management, resource optimisation and resiliency.
It is beyond the scope of this article to go into the anatomy of a private cloud, but suffice it to say that a lot of it boils down to clever management and automation of provisioning, configuration and resource optimisation when complex dependencies exist between servers, storage, networking, platform software and applications.
And this is not dissimilar to the situation people stumble into shortly before their traditional virtualisation activity becomes complexity-bound and stalls.
Cut to the chase
The smart money is therefore now on appreciating that moving as swiftly as possible to dynamic workload management, with as much automation as possible (which is really what private cloud represents), is the real key to achieving sustainable results.
It really isn’t necessary, or even helpful, to finish your consolidation-centric virtualisation programme before getting into private cloud.
It’s arguable that you could save yourself a lot of interim hassle by just cutting to the chase.
The prerequisite is a willingness to adopt a new and more holistic approach to systems design and management, which often means creating a parallel environment that lives alongside traditionally architected systems.
And while we might think of running multiple virtual machines on a single physical server as new or modern, it isn’t really all that different in principle to the way things were done before.
Most existing virtual environments can therefore be put into the ‘traditionally architected’ bracket when viewed from either a management or execution perspective.
So, no matter where you are with virtualisation, even if you haven’t done that much of it so far, it’s worth taking a look at what private cloud has to offer, and whether it’s worth jumping to that stage.
Dale Vile is CEO of Freeform Dynamics
Pic: Keng Susumpow, CC 2.0