By Rich Green, Sun Microsystems

The advent of multi-core processors spurred a vigorous debate in the IT industry about how to charge for software licenses—by processor core or by socket. When dual-core processors started to hit the big time in 2005, enterprises were ecstatic to get more work from a single chip. But when faced with paying for software licenses on a per-CPU basis for both cores on that chip, CIOs found themselves in a two-steps-forward, one-step-back predicament—a problem that only escalates as newer four- and eight-core chips become available.

Recognizing this problem, some software vendors switched to per-socket pricing. Market pressure increased, and now most software vendors have adopted this model. Market share for dual-core servers shot from single digits in early 2005 to more than 25 percent of all server sales by the end of that year, and the growth continued unabated, with double-digit quarterly unit shipment growth throughout 2006. The crisis was averted, for the moment.

That was Round 1 of the software licensing debate. Round 2 centers on the rapid growth and adoption of virtualization technologies, and it raises an even more profound issue for the technology industry and its enterprise customers.

Head-Scratching Arithmetic

Virtualization has many benefits—better resource utilization in the data center leads to consolidation onto fewer servers. That means lower capital costs, lower maintenance costs, lower power and cooling bills and less real estate used. In short, virtualization can save you a heck of a lot of time and money.

But if you are paying for multiple licenses of software in a virtualized environment... not so much. Alternatively, economizing on software licenses can limit where and how much work gets done.

Which leads us to Round 2 of the software licensing debate.
Because server virtualization allows you to safely and securely run multiple workloads or applications on a single system or processor by creating partitions, you may be using only a fraction of that processor for a particular piece of software, be it an operating system, middleware or an application. Here's the rub: with most vendors, you're still paying for the full processor socket.

It works like this. In the typical "one application per server" datacenter model, if you have five single-socket servers, each running an individual application, each server may run at only 10 to 15 percent of capacity. To save money, you could consolidate to one single-socket server with a more powerful processor, most likely with multiple cores. Server virtualization technologies allow you to run five virtualized OS instances on this system, each hosting one of the original applications; now you're running one server at 50 to 75 percent of capacity.

You've improved resource utilization, you're saving on power and cooling costs, and you're taking up less space in the datacenter. Unfortunately, you're not paying one-fifth of the software licensing costs, even though each application uses only one-fifth of the processor. Most vendors view each instance of the OS as using the full processor socket, and therefore you pay for each license separately. In essence, you're paying a premium for the right to run a virtualized server. And it can get worse. If you've outsourced the management of your datacenter, you may be paying extra to your service provider for the use of technologies that save them money. The situation gets more complicated when you think about virtualization in the context of very large servers with 16, 32, 64 or more processors, or in a horizontally scaled environment with multiple servers.
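The consolidation arithmetic above can be sketched in a few lines. The utilization figures and the per-socket license price below are hypothetical, chosen only to illustrate the point:

```python
# Illustrative sketch of the consolidation arithmetic described above.
# All numbers (utilization, license price) are hypothetical assumptions.

SOCKET_LICENSE_PRICE = 2_000  # assumed per-socket license fee

# Before: five single-socket servers, each lightly utilized.
servers_before = 5
util_before = 0.12            # assume 12% average utilization per server
licenses_before = servers_before  # one per-socket license per server

# After: one single-socket server hosting five virtualized OS instances.
servers_after = 1
util_after = servers_before * util_before  # ~60%, in the 50-75% range
# Most vendors count each OS instance as using the full socket:
licenses_after = 5

print(f"Utilization: {util_before:.0%} x {servers_before} servers "
      f"-> {util_after:.0%} x {servers_after} server")
print(f"License cost before: ${licenses_before * SOCKET_LICENSE_PRICE}")
print(f"License cost after:  ${licenses_after * SOCKET_LICENSE_PRICE}")
```

The hardware footprint shrinks fivefold, but under per-instance counting the software bill does not move at all.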
To handle fluctuating workloads for a particular application (for example, purchase orders in a retail business), you could set up your application to run across multiple processors (or servers), expanding and contracting to match demand.

In this case, how do you pay for the software on a per-socket basis? Based on how many sockets it uses at peak times? Or on some average usage over a set period? Must you set strict boundaries for your virtualized application—determining in advance that it runs on a fixed number of processors to control costs—and then manually reconfigure (and re-license) as needed? There are time and labor costs associated with reconfiguration, renegotiation and so on as well, and depending on your contract, buying more licenses doesn't necessarily get you any more support.

There Must Be a Simpler Way

All of this flies in the face of one of the principal desires of CIOs when making purchasing decisions and CTOs when making product decisions—simplicity. Simplicity in pricing is essential for a CIO to accurately estimate how much a particular workload is going to cost over, say, a three-year lifespan. One of the great benefits of virtualization is that you shouldn't—and in fact don't need to—know in advance how much capacity you'll use.

So what are vendors doing to resolve this conundrum?

Some vendors are looking to overlay a utility pricing scheme on top of virtualization. IBM, for example, introduced Tivoli software to track usage of processing power in May last year. Others have advocated this approach as well. While this is in principle a fair method of charging for software, it is also complex.

For a utility scheme to work, the industry would need to establish a standard unit of processing power that accounts for data centers with heterogeneous server and software environments. In a world of many different multi-core architectures, however, that's not easy.
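The billing questions above can be made concrete with a toy calculation. The monthly socket counts and the per-socket rate here are invented for illustration; they compare what a fluctuating workload would cost if licensed for its peak versus metered on actual usage:

```python
# Hypothetical monthly socket usage for a workload that expands and
# contracts with demand (e.g., a retail purchase-order system).
monthly_sockets_used = [2, 2, 3, 8, 4, 2, 2, 3, 6, 8, 10, 4]
RATE_PER_SOCKET = 500  # assumed monthly charge per socket

months = len(monthly_sockets_used)

# Licensed for the peak: pay for the maximum (10 sockets) every month.
peak_cost = max(monthly_sockets_used) * RATE_PER_SOCKET * months

# Metered "utility" billing: pay only for the sockets actually used.
metered_cost = sum(monthly_sockets_used) * RATE_PER_SOCKET

print(f"Licensed at peak: ${peak_cost}")
print(f"Metered usage:    ${metered_cost}")
```

In this toy scenario, peak-based licensing costs more than twice what metered usage would, which is exactly why utility schemes look attractive on paper despite their complexity.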
Back in the mainframe era, companies charged for processing based on compute cycles at a particular clock speed. But clock speed is no longer the be-all and end-all metric it once was. Nowadays, you need to weigh the performance of x64 processors like AMD's Opteron against RISC processors such as Sun's UltraSPARC; you need to consider power consumption; and you need to assess single-core versus multi-core designs. There is also the risk that software vendors could reward themselves with higher licensing revenue for consuming more compute power, even if there is no corresponding benefit to the customer.

In this scenario, there is no analog to the watt, the basic unit of electricity, because so many factors go into what is now considered processing "power."

Out of the Quagmire

One way to escape this dilemma is to move away from pricing by processor usage, core or socket altogether.

Subscription pricing models have become increasingly popular in the past few years. Open-source software companies typically charge for support on an annual basis rather than for one-time software licenses. But those contracts are still often based on the number of processors involved, leaving CIOs in the same bind. For our part, we at Sun want to simplify pricing and planning even further: subscriptions for software are really subscriptions for support, since Sun's software is now free and open source. But pricing is based on the number of employees a company has. To keep that number simple to determine, we base it on what the company reports annually to the Securities and Exchange Commission (SEC).

What does subscription pricing mean for CIOs considering the implications of virtualization?

It means they can focus on building their datacenter for their workloads.
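The contrast between per-socket licensing and the employee-count subscription model described above can be sketched as follows. The per-employee and per-socket rates are hypothetical placeholders, not Sun's actual price list:

```python
# Sketch of employee-based subscription pricing vs. per-socket licensing.
# Both rates below are invented for illustration only.

PER_EMPLOYEE_RATE = 100   # assumed annual support fee per employee
PER_SOCKET_RATE = 2_000   # assumed annual per-socket license fee

def employee_based_cost(employees: int, sockets: int) -> int:
    # Cost tracks head-count only; the socket count is irrelevant.
    return employees * PER_EMPLOYEE_RATE

def socket_based_cost(employees: int, sockets: int) -> int:
    # Cost tracks deployed sockets, so virtualizing or scaling changes it.
    return sockets * PER_SOCKET_RATE

# Doubling the server footprint leaves the subscription cost unchanged,
# while the per-socket bill doubles:
print(employee_based_cost(1_000, 50), employee_based_cost(1_000, 100))
print(socket_based_cost(1_000, 50), socket_based_cost(1_000, 100))
```

Under the subscription model, the only variable a CIO needs to forecast is head-count, which is already reported annually.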
They can consolidate servers for maximum efficiency and virtualize at will, without concern for the ripple effect of software costs, the fluctuating number of processors used or, worse, the fluctuating fractions of processors used. As usage of an application increases, its licensing cost does not change. Of course, if your company's head-count grows you pay more per year (and if it shrinks you pay less), but the costs change at a reasonable—and, perhaps more importantly, predictable—rate.

Employee-based pricing is probably not the only solution to the increasingly complex challenge of paying for software. But if the changes wrought by multi-core processors and virtualization are any indication of future trends, the old ways of doing business will become increasingly difficult and outmoded. Gone are the days of one application on one system with one OS and one CPU.

We hope more software vendors will see the benefit of simpler pricing models, but we also recognize that not every supplier is willing to strike out in a different direction on its own. That's why it's up to you as a customer to make your needs known. Demand true simplicity in software pricing so you can take full advantage of virtualization right away—and at the same time be ready for the inevitable, unforeseen changes to come.

Rich Green is executive vice president, software, at Sun Microsystems.