Virtualization Throws Wrench into Software Licensing Debate

By Rich Green, Sun Microsystems

The advent of multi-core processors spurred a vigorous debate in the IT industry about how to charge for software licenses—by processor core or by socket. When dual-core processors started to hit the big-time in 2005, enterprises were ecstatic to get more work from a single chip. But when faced with paying for software licenses on a per-CPU basis for both cores on that chip, CIOs found themselves in a two-steps-forward, one-step-back predicament—a problem that only escalates as newer four- and eight-core chips become available.

Recognizing this problem, some software vendors switched to per-socket pricing, and under mounting market pressure most of the industry has since adopted the model. Market share for dual-core servers shot from single digits in early 2005 to more than 25 percent of all server sales by the end of that year, and the growth continued unabated, with double-digit quarterly unit-shipment growth throughout 2006. The crisis was averted, for the moment.

That was Round 1 of the software licensing debate. Round 2 centers on the rapid growth and adoption of virtualization technologies, and it raises an even more profound issue for the technology industry and its enterprise customers.

Head-Scratching Arithmetic

Virtualization has many benefits—better resource utilization in the datacenter leads to consolidation onto fewer servers. That means lower capital costs, lower maintenance costs, lower power and cooling bills, and less real estate used. In short, virtualization can save you a heck of a lot of time and money.

But if you are paying for multiple licenses of software in a virtualized environment... not so much. Alternatively, economizing on software licenses can limit where and how much work gets done.

Which leads us to Round 2 of the software licensing debate. Because server virtualization allows you to safely and securely run multiple workloads or applications on a single system or processor by creating partitions, you may only be using a fraction of that processor for a particular piece of software, be it an operating system, middleware or an application. Here’s the rub: With most vendors you’re still paying for the full processor socket.

It works like this. In the typical “one application per server” datacenter model, suppose you have five single-socket servers, each running an individual application; each server may run at only 10 to 15 percent of capacity. To save money, you could consolidate onto one single-socket server with a more powerful processor, most likely one with multiple cores. Server virtualization technologies let you run five virtualized OS instances on this system, each hosting one of the original applications; now you’re running one server at 50 to 75 percent of capacity.

You’ve improved resource utilization, you’re saving on power and cooling costs, and you’re taking up less space in the datacenter. Unfortunately, you’re not paying one-fifth of the software licensing cost, even though each application uses only one-fifth of the processor. Most vendors view each instance of the OS as using the full processor socket, so you pay for each license separately. In essence, you’re paying a premium for the right to run a virtualized server.

And it can get worse. If you’ve outsourced the management of your datacenter, you may be paying extra to your service provider for the use of technologies that save them money.

The situation gets more complicated when you consider virtualization in the context of very large servers with 16, 32, 64 or more processors, or in a horizontally scaled environment with multiple servers. To handle fluctuating workloads for a particular application (for example, purchase orders in a retail business), you could set up the application to run across multiple processors (or servers), expanding and contracting in size to match demand.
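The consolidation arithmetic above can be sketched in a few lines of code. This is a toy calculation only; the per-socket fee and utilization figure are invented stand-ins, and the five-server scenario is the one described in the article:

```python
# Illustrative arithmetic for the five-server consolidation example.
# The license fee and utilization numbers are hypothetical.

SERVERS_BEFORE = 5            # five single-socket servers, one app each
UTIL_BEFORE = 0.12            # each runs at roughly 10-15% of capacity
LICENSE_PER_SOCKET = 10_000   # hypothetical per-socket license fee

# Before consolidation: five sockets, five per-socket licenses.
cost_before = SERVERS_BEFORE * LICENSE_PER_SOCKET

# After consolidation: one single-socket server hosting five virtualized
# OS instances. The hardware shrinks to one socket, but most vendors
# still count each OS instance as a full socket's worth of licensing.
instances = SERVERS_BEFORE
cost_after = instances * LICENSE_PER_SOCKET

print(f"Sockets before/after: {SERVERS_BEFORE} -> 1")
print(f"License cost before:  ${cost_before:,}")
print(f"License cost after:   ${cost_after:,}  (unchanged despite 5x fewer sockets)")
```

The point of the sketch is that the hardware term shrinks fivefold while the licensing term does not move at all.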

In this case, how do you pay for the software on a per-socket basis? Based on how many sockets it uses at peak times? Or on some average usage over a set period? Must you set strict boundaries for your virtualized application—determining in advance that it runs on a fixed number of processors to control costs—and then manually reconfigure (and re-license) as needed? Reconfiguration and renegotiation carry time and labor costs of their own, and depending on your contract, buying more licenses doesn’t necessarily get you any more support.
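The three metering questions above can be made concrete with a toy calculation. The hourly usage pattern and the license fee here are invented for illustration; they are not figures from any vendor:

```python
# Hypothetical hourly socket usage for an elastic workload over one day.
# All figures (usage pattern, license fee) are invented for illustration.
usage = [2, 2, 2, 3, 4, 6, 8, 8, 8, 6, 4, 3] * 2   # 24 hourly samples

LICENSE_PER_SOCKET = 10_000  # hypothetical annual per-socket fee

# Option 1: bill for the peak number of sockets ever used.
peak_cost = max(usage) * LICENSE_PER_SOCKET

# Option 2: bill for average usage over the period.
avg_cost = sum(usage) / len(usage) * LICENSE_PER_SOCKET

# Option 3: pre-license a fixed boundary of sockets, and count how often
# the workload would have wanted more than that.
cap = 4
capped_cost = cap * LICENSE_PER_SOCKET
hours_over_cap = sum(1 for u in usage if u > cap)

print(f"Peak-based cost:    ${peak_cost:,.0f}")
print(f"Average-based cost: ${avg_cost:,.0f}")
print(f"Capped at {cap} sockets: ${capped_cost:,.0f}, "
      f"but demand exceeds the cap for {hours_over_cap} of 24 hours")
```

Each scheme gives a different bill for the same workload, and the capped option trades money for throughput: the workload is starved whenever demand exceeds the pre-licensed boundary.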

There Must Be a Simpler Way

All of this flies in the face of one of the principal desires of CIOs when making purchasing decisions and of CTOs when making product decisions—simplicity. Simplicity in pricing is essential for a CIO to accurately estimate how much a particular workload will cost over, say, a three-year lifespan. One of the great benefits of virtualization is that you shouldn’t—and in fact don’t need to—know in advance how much capacity you’ll use.

So what are vendors doing to resolve this conundrum?

Some vendors are looking to overlay a utility pricing scheme on top of virtualization. IBM, for example, in May of last year introduced Tivoli software to track usage of processing power, and others have advocated this approach as well. While this is in principle a fair way to charge for software, it is also complex.

For a utility scheme to work, the industry would need to establish a standard unit of processing power that accounts for datacenters with heterogeneous server and software environments. In a world of many different multi-core architectures, however, that’s not easy.

Back in the mainframe era, companies charged for processing based on compute cycles at a particular clock speed. But clock speed is no longer the be-all and end-all metric it once was. Nowadays, you need to weigh the performance of x64 processors like AMD’s Opteron against RISC processors such as Sun’s UltraSPARC; you need to consider power consumption; and you need to assess single-core versus multi-core designs. There is also the risk that software vendors could reward themselves with higher licensing revenue for consuming more compute power, even when there is no corresponding benefit to the customer.

In this scenario, there is no analog to the basic unit of electricity, the watt, because there are so many factors involved in what is now considered processing “power.”

Out of the Quagmire

One way to escape this dilemma is to move away from pricing by processor usage, core or socket altogether.

Subscription pricing models have become increasingly popular in the past few years. Open source software companies typically charge for support on an annual basis rather than for one-time software licenses. But those contracts are still often based on the number of processors involved, leaving CIOs in the same bind.

For our part, we at Sun want to simplify pricing and planning even further. Our software subscriptions are, in effect, subscriptions for support, since Sun’s software is now free and open source. But pricing is based on the number of employees a company has. To keep that number simple to determine, we base it on what the company reports annually to the Securities and Exchange Commission (SEC).

What does subscription pricing mean for CIOs considering the implications of virtualization?

It means they can focus on building their datacenter for their workloads. They can consolidate servers for maximum efficiency and virtualize at will, without concern for the ripple effect of software costs, the fluctuating number of processors used or, worse, the fluctuating fractions of processors used. As usage of an application increases, its licensing cost does not change. Of course, if your company’s headcount grows you pay more per year (and if it shrinks you pay less), but the costs change at a reasonable—and, perhaps more importantly, predictable—rate.
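The contrast between the two models can be sketched numerically. The fees and headcount below are hypothetical placeholders, not Sun's actual pricing; the point is only that one bill tracks deployment size while the other stays flat:

```python
# Toy comparison of per-socket vs employee-based subscription pricing as a
# virtualized application's footprint grows. All figures are hypothetical.

SOCKET_FEE = 10_000    # hypothetical per-socket annual fee
EMPLOYEE_FEE = 100     # hypothetical per-employee annual fee
employees = 2_000      # headcount, e.g. as reported in an SEC filing

employee_cost = employees * EMPLOYEE_FEE  # flat, regardless of usage

for sockets in (2, 4, 8, 16):
    socket_cost = sockets * SOCKET_FEE    # scales with deployment size
    print(f"{sockets:2d} sockets -> per-socket ${socket_cost:,}; "
          f"employee-based ${employee_cost:,}")
```

Under the per-socket column the bill multiplies eightfold as the application scales out; under the employee-based column it moves only when headcount does.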

Employee-based pricing is probably not the only solution to the increasingly complex challenge of paying for software. But if the changes wrought by multi-core processors and virtualization are any indication of future trends, the old ways of doing business will become increasingly difficult and outmoded. Gone are the days of one application on one system with one OS and one CPU.

We hope more software vendors will see the benefit of simpler pricing models but we also recognize that not every supplier is willing to strike out in a different direction on its own. That’s why it’s up to you as a customer to make your needs known. Demand true simplicity in software pricing so you can take unmitigated advantage of virtualization right away—and at the same time be ready for the inevitable, unforeseen changes to come.

Rich Green is executive vice president, software, at Sun Microsystems.

Copyright © 2007 IDG Communications, Inc.
