by Eric Knorr

A Data Center That Takes Care of Itself

News
Oct 01, 2002
Data Center

Let’s say you decide to go for a run. After a few minutes, your breath quickens, your heart rate increases, and you start to perspire. All this happens whether you think about it or not, because your autonomic nervous system has roused the right organs to respond to the increased load on your body.

Autonomic computing, a phrase coined by IBM, describes technology that self-regulates and even heals itself much as the human body would do. “When I say technology, I’m including all of the software, all of the applications, all of the storage, all the pieces of the infrastructure,” explains Irving Wladawsky-Berger, vice president of technology and strategy for IBM’s server group. “Now, I don’t mean any far-out AI project. What I mean is that…instead of the technology behaving in its usual pedantic way and requiring a human being to do everything for it, it starts taking care of its own needs.”

According to Wladawsky-Berger, the mind-numbing complexity of today’s data centers makes the need for self-managing systems acute. At current growth rates, IBM projects that demand for skilled IT personnel will more than double in the next six years, to the point where there simply won’t be enough available talent to maintain the infrastructure. To avert this Malthusian crisis, IBM launched its eLiza initiative last year, with the ongoing objective of spreading autonomic features across IBM’s hardware and software product lines. As these enhanced products roll out and begin working together, IBM promises that eLiza-enabled systems will display four distinct attributes: self-optimization, self-configuration, self-protection and self-healing.

But IBM is hardly the only company answering the need for self-managing systems. Its big hardware rivals have similar schemes: Hewlett-Packard touts its utility-based computing, while Sun Microsystems’ amorphous N1 initiative describes a computing environment free from the drudgery of manually allocating resources. And several startups, notably Terraspring and Think Dynamics, are already shipping solutions that monitor the data center and automatically provision servers and deploy applications.

“Autonomic computing is just the IBM term,” says Duncan Hill, CTO of Think Dynamics. “There are a lot of terms out there that ultimately mean the same thing: computing infrastructure that adapts to meet the demands of the applications that are running in it.” In other words, IT develops and deploys applications, and the infrastructure more or less takes care of itself, adjusting automatically as applications and workloads change. The ultimate effect is not only less work but also much more effective utilization of data center resources.

From Here to Autonomy

How far away are we from this spectacular simplicity? That depends on how you define autonomic computing. IBM’s eLiza project encompasses everything from server hardware to self-healing databases to security management. Some of those elements, such as IBM eServers that detect and isolate bad memory chips, have already arrived, but the key software, IBM’s Enterprise WorkLoad Management (eWLM) suite, will remain in beta until next year. In any event, analysts agree that pulling such a wide range of IBM products into an organic, self-managing whole will take years.

The narrower definition of autonomic computing, put forth by such startups as PlateSpin, Sychron, Terraspring and Think Dynamics, addresses neither security nor vendor-specific hardware features, instead focusing on server provisioning and workload management that function across platforms. All these solutions, including IBM’s eWLM, share three characteristics: virtualization of resources, network monitoring and automated response to change based on rules established by data center administrators.

Rather than just virtualizing storage, autonomic solutions virtualize the whole infrastructure so that data center resources can be divvied up dynamically to meet the needs of applications. Autonomic systems will pool processing, memory, storage, bandwidth and so on, and then dole them out on the fly. In a sense, self-management software plays the role of a data center operating system, allocating the resources of the data center as if it were one big machine.
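To make that idea concrete, here is a minimal sketch, in Python, of how such a “data center operating system” might pool resources and dole them out on the fly. Every name and interface here is invented for illustration; no vendor’s actual product works this way.

```python
# Hypothetical sketch of a shared resource pool carved up dynamically for
# applications. All names are invented; this reflects no vendor's real API.
from dataclasses import dataclass, field


@dataclass
class ResourcePool:
    """Tracks free capacity across the virtualized data center."""
    cpus: int
    memory_gb: int
    allocations: dict = field(default_factory=dict)

    def allocate(self, app: str, cpus: int, memory_gb: int) -> bool:
        """Carve resources out of the shared pool for one application."""
        if cpus > self.cpus or memory_gb > self.memory_gb:
            # Pool exhausted; a real autonomic system might reclaim idle
            # capacity from lower-priority applications at this point.
            return False
        self.cpus -= cpus
        self.memory_gb -= memory_gb
        self.allocations[app] = (cpus, memory_gb)
        return True

    def release(self, app: str) -> None:
        """Return an application's share to the pool when demand falls."""
        cpus, memory_gb = self.allocations.pop(app)
        self.cpus += cpus
        self.memory_gb += memory_gb


pool = ResourcePool(cpus=64, memory_gb=256)
pool.allocate("web-tier", cpus=16, memory_gb=64)  # a demand spike...
pool.release("web-tier")                          # ...and the capacity flows back
```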

While data center virtualization is a fairly new concept, network monitoring (the input side of the autonomic computing model) is as old as the hills. In fact, autonomic solutions typically rely on data collected by such established products as HP OpenView or Tivoli Enterprise. “There are lots of ways of reading an environment to see if it’s broken,” observes Al Wasserberger, CEO of Spirian, a software distribution company that also specializes in adding automatic deployment and self-healing capabilities to other companies’ software. “There are not very many ways of automatically fixing it so that it’s no longer broken.” Whether healing itself or throwing computing power at needy apps, the data center’s capability to take action on its own, based on network monitoring data and rules set by administrators, forms the indivisible core of all autonomic solutions.
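That core loop is simple enough to sketch. The hypothetical Python fragment below shows monitoring data flowing in, administrator-defined rules being checked, and automated actions firing; the rule format and both callbacks are assumptions for illustration, not any product’s real interface.

```python
# Minimal sketch of the monitor/rules/act loop at the core of autonomic
# solutions. The rule format and actions are invented for illustration.
from typing import Callable

# Administrator-defined rules: (metric name, threshold, action) triples.
Rule = tuple[str, float, Callable[[], None]]

def add_web_capacity() -> None:
    print("provisioning another Web server")

def restart_service() -> None:
    print("restarting unhealthy service")

rules: list[Rule] = [
    ("web_cpu_percent", 85.0, add_web_capacity),  # throw power at needy apps
    ("app_error_rate", 0.05, restart_service),    # self-healing response
]

def autonomic_tick(metrics: dict[str, float]) -> None:
    """One pass of the loop: read monitoring data, fire any matching rule."""
    for metric, threshold, action in rules:
        if metrics.get(metric, 0.0) > threshold:
            action()

# In practice the metrics would come from a monitoring product such as
# HP OpenView or Tivoli; here they are hard-coded for the example.
autonomic_tick({"web_cpu_percent": 92.0, "app_error_rate": 0.01})
```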

Autonomy Meets Reality

Every IT manager likes the idea of providing high service levels to the enterprise at a low cost and with far less maintenance hassle. That is especially true with data center utilization hovering at a remarkably low 20 percent, according to many analysts, mainly because companies must reserve significant capacity for disaster recovery or spikes in demand.

“The challenge in the data center is to make better use of all of those devices that we bought during the heyday, when acquisitions were easy, when money was flush,” says Vernon Turner, an analyst and group vice president for IDC (a sister company to CIO’s publisher). “Now we’re in a situation where we don’t have the money, but we have to provide the same amount of service. We don’t have the resources to do that, so by far the easiest route is to automate as much of that function as possible.”

On the other hand, no self-respecting CIO would hand over the whole data center to an unproven autonomic solution. “It’s like HAL, where the data center has a mind of its own, and maybe ends up setting up an Internet gambling site,” jokes Huw Morgan, CTO of Canadian service provider Bell Globemedia, which is currently evaluating Think Dynamics’ ThinkControl software. In fact, Globemedia and other early adopters, such as IT outsourcing company InFlow, plan to roll out autonomic solutions incrementally, outside the production infrastructure.

Terraspring CTO Ashar Aziz notes that in addition to the production infrastructure, there are many “shadow infrastructures” to consider. He cites the example of financial services companies, many of which have recently upped their IT budgets in only one area: disaster recovery. If a company has 50 servers in production and 50 in disaster recovery, IT should be able to commandeer the redundant infrastructure for such noncritical applications as quality assurance or staging. “Should a disaster occur,” says Aziz, “you simply load the disaster recovery template for that infrastructure, and you’re ready to go. It’s a great way to leverage wartime assets in peacetime, because wartime is a rare event.”

On a nuts-and-bolts level, those changes in data center personality amount to automated server provisioning. For example, an autonomic data center might respond to a surge in Internet traffic by grabbing servers from the application tier and reprovisioning them as frontline Web servers, automatically loading them with all the software necessary for their new role. “This is what autonomic computing is really about,” says IDC’s Turner. “It’s about the ability to provision resources.” Last year, IBM unveiled a technology demo dubbed Project Océano, in which a farm of Linux servers showed off automated provisioning capabilities.
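A hypothetical sketch of that reprovisioning scenario might look like the following; the role definitions, software stacks and tier layout are all invented for illustration.

```python
# Invented sketch of role-based reprovisioning: when Web traffic surges,
# pull a machine from the application tier and rebuild it as a Web server.
ROLE_IMAGES = {
    "web": ["base-os", "http-server", "site-content"],
    "app": ["base-os", "app-runtime", "business-logic"],
}

tiers = {"web": ["srv01", "srv02"], "app": ["srv03", "srv04", "srv05"]}

def reprovision(server: str, from_tier: str, to_tier: str) -> None:
    """Move a server between tiers, loading the stack for its new role."""
    tiers[from_tier].remove(server)
    for package in ROLE_IMAGES[to_tier]:
        # Stand-in for the imaging/deployment step a real product performs.
        print(f"{server}: installing {package}")
    tiers[to_tier].append(server)

# Surge in Internet traffic: grab an app-tier box and make it a Web server.
reprovision("srv05", from_tier="app", to_tier="web")
```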

Quick provisioning is a big selling point for Ed Denison, former director of global operations for outsourcing giant Computer Sciences Corp. (CSC). Denison likes the speed with which Terraspring’s software can respond to trouble and build a new server from scratch. In a demo Denison does for customers, “I tell the guy to pull any cable he wants. Before we can get back to the room where the screen is up, the pager on my belt goes off. And by the time we get back, the guy doing the demonstration has already queued up the replacement server and is probably five minutes into reinitializing it.”

Using Terraspring’s solution, Denison estimates that he could save as much as $300 per month in labor costs per server. For customers such as J.P. Morgan Chase, for which CSC maintains 16,000 servers, that’s hardly chump change. And Denison sees an opportunity to dramatically increase the number of servers his administrators can handle over the long haul. “Normally you get 18 servers per administrator,” says Denison. “Our goal is to get closer to 40 or 80 or 100.”
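Denison’s numbers are easy to check; a quick back-of-the-envelope calculation in Python, using only the figures quoted in this article, runs as follows.

```python
# Back-of-the-envelope math on Denison's estimates, using only the
# figures quoted in this article.
servers = 16_000             # CSC's J.P. Morgan Chase server count
savings_per_server = 300     # dollars per server per month (Denison's estimate)

monthly_savings = servers * savings_per_server
print(f"${monthly_savings:,} per month")  # $4,800,000 per month

admins_today = servers / 18   # roughly 889 administrators at 18 servers each
admins_goal = servers / 100   # 160 administrators at the 100-server goal
print(f"{admins_today:.0f} admins today vs. {admins_goal:.0f} at goal")
```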

Enabling the Future

Listening to the needs of IT execs such as Denison was the main impetus behind IBM’s autonomic computing initiative, according to Van Symons, global marketing executive for eLiza. But the company has loftier goals in mind than merely reducing IT head count.

In IBM’s view, the autonomic data center goes hand in glove with the company’s “grid computing” initiative, in which distributed networks of data centers can be virtualized to work as a single computer. Although largely confined to scientific applications so far, grid computing for commercial applications may one day enable the “utility” or “service” model of computing, where customers will tap into computing power much as they plug in a toaster and tap into the electric power grid today. The ultimate goal, according to Wladawsky-Berger, is to turn the Internet into one giant virtual computer.

Who knows, however, when such grand concepts will touch the ground. Meanwhile, IBM deserves credit for reminding us that the flood of fabulous new IP-based technology that has arrived in the past few years is worth very little if you can’t make it work without breaking the bank. Worse, if we spend too much time scrambling to shore up what’s already been deployed, we’ll stop forging ahead. As Symons says, the cost savings promised by autonomic computing aren’t the only attraction. “The real benefit is the speed with which I can now deliver more applications, and not spend all my time building and managing the infrastructure,” he says.