by Katherine Walsh

What the Chip War Means for Your Data Center

Jun 01, 2007 | 8 mins
Data Center | Green IT | Servers

Should you standardize on AMD or Intel? It depends, say our experts.

As the competition intensifies between AMD and Intel, five server experts weigh in on whether one company’s servers should dominate the data center. There’s no easy answer. Here’s what they say you should think about:

Shane Rau

Program manager, PC Semiconductors


A market research company (sister company to this magazine’s publisher)

The debate, to a point, is moot. There should be choices in the market; in any market where there is only one dominant supplier, there is a vacuum.

But the question is relevant when you consider that server processors have higher requirements than desktop PC and mobile PC processors. These requirements include performance, reliability and data integrity. After Opteron entered the market, AMD gained some market share, probably 10 to 15 percent, just by being there. But it’s clear that the market granted AMD additional share based on the virtues of their product. Today’s server market demands competitive performance, power consumption, heat dissipation, price and software compatibility.

AMD has had success in the high-performance computing segment, especially since HP’s adoption of Opteron for servers devoted to scientific and technical applications. AMD has also become credible in the view of server OEMs, because of reliability features, their long-term technology road map, support in Microsoft operating systems and their established infrastructure. AMD has built products according to customer requirements and has then made sure that OEMs agree about these requirements. CIOs are responding by putting AMD on their RFQs to OEMs and are looking into buying more than just Opteron servers. They are also looking into Athlon-based desktops and Turion-based mobile PCs for their corporate environments.

Intel has a broader product line, with Xeon, Itanium and 64-bit chips. The only segment for which AMD doesn’t have a product is the segment for processors used in ultraportable notebooks.

What you choose depends on the performance needs of your corporation. I think the validity of both product lines is testified to by the fact that most major OEMs have chips from both manufacturers in their offerings. Dell adopted Opteron for two- and four-processor servers and clients; Dell was prompted to support Opteron by pressure from its customers, competitive pressure from other system vendors like HP and certainly a conclusion that it would be profitable.

Todd Abrams

President and COO

Layered Technologies

Provider of on-demand utility computing services

The main difference between Intel and AMD is cost. Intel processors tend to be priced higher than AMD’s, yet in quality the two are about equal. Right now, the only advantage I still see for Intel is the maturity of its products over AMD’s. The race for performance between the two will always be neck and neck and should not be the only consideration when purchasing hardware. One quarter Intel is on top; the next it will be AMD.

Both companies’ processors offer dependability and computing power. But AMD has excelled at making processors that need less power to operate, which means savings for the customer not just in acquisition costs but in overall operating costs. Compared to AMD, some of the Pentium 4 and older Xeons we use run hot.

I believe both Intel and AMD will provide support, but AMD made a strong push earlier in the 64-bit market and has built a solid reputation around this in the past few years. They provide great support with 64-bit OS vendors, such as CentOS, Red Hat, Fedora Core, FreeBSD and now Windows. They are also great for anything database intensive or with high I/O, and they are active in virtualization communities. Intel has also done well with initiatives in this area. However, AMD has been very vocal and active about developing technology to improve overall performance with virtualization.

Peter Jarvis

VP and CIO

NCsoft

Publisher of entertainment software

Beyond the obvious arguments about whether AMD’s or Intel’s caching schemes and bus architectures are better, the primary concern at NCSoft is the total cost of ownership (TCO) of the entire server.

NCSoft is charged a premium for high-density power and cooling throughout our operating environments around the world, so keeping server TCO at a minimum is a priority for us. Research has shown that over the minimum three-year lifetime we try to get out of a processor, AMD’s Opterons have provided the lowest TCO to date.
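The kind of lifetime TCO comparison described above can be sketched as a back-of-the-envelope calculation. Note that the prices, wattages, PUE and electricity rate below are invented assumptions for illustration, not NCSoft’s actual figures:

```python
# Hypothetical three-year server TCO sketch. All figures are illustrative
# assumptions, not NCSoft's actual numbers.

def server_tco(purchase_price, watts, pue=2.0, dollars_per_kwh=0.10, years=3):
    """Acquisition cost plus power and cooling over the server's lifetime.

    PUE (power usage effectiveness) folds cooling overhead into the
    electricity figure; 2.0 was a typical data-center value of the era.
    """
    hours = years * 365 * 24
    energy_kwh = watts / 1000 * hours * pue
    return purchase_price + energy_kwh * dollars_per_kwh

# Two illustrative configurations: a cheaper box that draws more power
# versus a pricier one that draws less.
tco_a = server_tco(purchase_price=3000, watts=400)
tco_b = server_tco(purchase_price=3500, watts=300)
print(f"Server A: ${tco_a:,.0f}  Server B: ${tco_b:,.0f}")
```

Under these made-up assumptions the more expensive, lower-power box ends up cheaper over three years, which is the point the TCO argument turns on: sticker price alone can mislead.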

While the processor type for our live games was set a few years back, we continuously evaluate new processors by both Intel and AMD in order to ensure we’re still getting the best TCO for each product and project. While AMD Opteron processors have proven to be the best fit in most game server situations, we are not phasing out Intel processor-based servers; we have them installed in our data centers today at a lower quantity and will continue to consider them going forward for all types of applications.

NCSoft is currently having very good luck with Intel’s quad-core technology when used in SQL servers. The Intel quad-core technology looks promising and is definitely a step in the right direction as far as processing power goes. However, many of the TCO issues are not addressed with the new Intel server architecture. Even though these processors are much more energy efficient, the memory choices available in servers keep the overall power consumption higher than optimal and keep TCO up in general.

While I look forward to seeing what the server and motherboard vendors can do to bring down TCO, it appears that from an architecture point of view AMD continues to focus more on server TCO than Intel.

Overall, I don’t recommend standardizing on one processor over another. From a performance point of view, you should keep each application you plan to use in mind, and not assume that because a benchmark test shows one thing you’ll see the same results with your application in production. As an example, when testing our game City of Heroes, we found that the game server ran much more efficiently on AMD-based servers than on Intel servers, without optimization of any type, while several benchmark tests showed the processors to be close to equal. For City of Heroes, AMD Opteron ended up being the fastest and most energy-efficient setup, which is definitely a winning combination.

Bruce Taylor

Chief Analyst

The Uptime Institute

Provider of research on data center operations, design and engineering

With microprocessor developers leapfrogging each other in performance and energy-efficiency improvements at the chip level, it’s hard to keep up with what such developments may mean for overall server performance and, more importantly, server performance per watt of electricity, an increasingly critical metric for CIOs to watch.
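The performance-per-watt metric mentioned above is just delivered work divided by power drawn. A minimal sketch, with made-up throughput and power numbers:

```python
# Minimal sketch of the performance-per-watt metric; the throughput and
# power figures below are invented for illustration.

def perf_per_watt(throughput_ops_per_sec, power_watts):
    """Work delivered per watt drawn -- higher is better."""
    return throughput_ops_per_sec / power_watts

# A faster server is not automatically the more efficient one:
fast_hot = perf_per_watt(1_200_000, 450)   # ~2,667 ops/s per watt
slow_cool = perf_per_watt(1_000_000, 300)  # ~3,333 ops/s per watt
print(slow_cool > fast_hot)  # the slower, cooler box wins on this metric
```

This is why the metric matters to the CIO: ranking servers by raw throughput and ranking them by throughput per watt can produce different winners.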

No particular performance advantage is to be gained by standardizing on one chip developer, any more than by standardizing on one server manufacturer. Today’s server optimization and virtualization technologies help obviate the need for such standardization. Having a mix of technology in the modern computer room is probably advisable. For one thing, it helps IT analysts monitor true performance and efficiency. For another, it may help keep the box makers honest and give IT buyers more leverage to demand energy-efficiency improvements at the chip level.

The issues of power density in the data center are already acute and rapidly escalating. While an important consideration, the microprocessor architecture in the box is only a piece of a “whole-systems” problem relating to power supply and cooling capacity that’s gone way out of whack in high-density computing environments. The anticipated growth in demand for server computing coupled with the corporate imperative for “greening” the data center will heighten the attention of IT managers to the utility meter and the power bill.

Donald Becker


Penguin Computing

Co-inventor, Beowulf Linux cluster

I see pros and cons to each chipset. For AMD, the pros include proven performance on a range of workloads; more cost-effective four-processor symmetric multiprocessing (SMP) systems (multiple CPUs in a single system with equal access to I/O and memory); and strong memory subsystem design. The big weakness with AMD is that the company, unlike Intel, has no internal compiler group, meaning AMD must depend on assembly programmers and compiler vendors to extract the best performance from changes and improvements it makes to the CPU core.

Intel’s advantages include a new CPU microarchitecture (the organization and types of functional blocks inside the CPU) that shows good performance benchmarks, an aggressive road map of new processor introductions and server platform stability through this year. Counting against Intel is the new memory module used by its high-end server chipsets and systems, which consumes a lot of power.

Historically, the high-performance computing market on which we focus has been using AMD, but it’s not as simple as saying just choose X brand. With the new Core Microarchitecture from Intel, both AMD and Intel offer roughly equivalent performance for a wide variety of workloads. One exception is large SMP systems (where AMD 4-processor systems are generally cheaper and capable of more RAM), which are typically used for database servers.

As it stands today, the only reason to use only one manufacturer’s chips is if you want to have a standard configuration. Using a single processor family allows you to create an initially simplified environment: a single kernel, one set of compiler settings. But even new installations do not stay homogeneous for very long.