Just call the researchers at the Los Alamos National Laboratory’s Advanced Computing Laboratory “blade runners.” That’s because the New Mexico-based facility has replaced its conventional parallel cluster supercomputer with blade servers, compact card-based devices that promise to replace standalone servers.
For Los Alamos, blades offer a less costly and more reliable way of handling massive, parallel processing-based computations, such as simulations of galaxy formation and supernova explosions. “In general, it’s a more efficient solution with respect to space and energy consumption,” says Wu-chun Feng, leader of the laboratory’s Research and Development in Advanced Network Technology (Radiant) team.
As blades slice into the IT mainstream, the technology may mark an important new stage in server evolution, one that will save space, boost reliability, streamline management and, ultimately, cut costs. Vendors are also hoping that blades will breathe fresh air into a dismal server market. Proprietary technologies, a nagging storage integration problem and old-fashioned vendor hype, however, all threaten to dull blades’ appeal.
Blades and Razors
A blade resembles an ordinary PC add-in card. In this case, however, the card is the computer. A typical blade features one or two processors, memory and storage: everything you would find inside a typical standalone server, minus the power supply, fan, network cables and other standard support components.
Individual blades sit inside a rack-mountable enclosure (sometimes called a razor). Those enclosures provide the blades’ support apparatus, such as I/O and power. Enclosure height is measured in U, a unit of vertical rack space equal to approximately 1.75 inches. Vendors generally discuss blade density in terms of the number of blades that can fit inside a standard 19-inch-wide, 42U rack.
By sharing a common high-speed bus and enclosure-mounted support components, blades achieve a degree of efficiency in size and power that standalone servers can’t match. Shrinking a server to the size of an add-in card permits impressive densities. Woodlands, Texas-based blade-maker RLX Technologies, for example, can wedge 24 blades into a 3U enclosure, which adds up to 336 independent servers in a single 42U rack with as much as 40 terabytes of storage and 336GB of memory.
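The RLX density figure works out with simple arithmetic, using only the numbers quoted above:

```python
# Density arithmetic for the RLX configuration described above:
# 24 blades fit in each 3U enclosure, and enclosures stack into a 42U rack.
BLADES_PER_ENCLOSURE = 24
ENCLOSURE_HEIGHT_U = 3
RACK_HEIGHT_U = 42

enclosures_per_rack = RACK_HEIGHT_U // ENCLOSURE_HEIGHT_U      # 14 enclosures
blades_per_rack = enclosures_per_rack * BLADES_PER_ENCLOSURE   # 336 servers

print(blades_per_rack)  # 336
```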
Blades also let enterprises take advantage of the efficiency and reliability of clustered technology. Like most cluster-based systems, blade servers can be configured to include load-balancing and failover capabilities. Individual blades are usually hot-pluggable, which makes it easy to swap out a board with a new one in the event of system failure. Additionally, placing servers near each other and managing them under a single application can streamline administration.
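The load-balancing and failover behavior described above can be sketched in a few lines. This is an illustrative sketch only; the blade names and health flags are hypothetical, not any vendor’s actual management API:

```python
# Minimal failover/load-balancing sketch (illustrative only):
# work is distributed round-robin across blades that report as healthy,
# so a failed blade simply drops out of the rotation.
from itertools import cycle

blade_health = {"blade-01": True, "blade-02": True, "blade-03": False}

def healthy(pool):
    """Return the names of blades currently reporting as healthy."""
    return [name for name, ok in pool.items() if ok]

rotation = cycle(healthy(blade_health))
assignments = [next(rotation) for _ in range(4)]
print(assignments)  # ['blade-01', 'blade-02', 'blade-01', 'blade-02']
```

A hot-swapped replacement blade would simply be marked healthy again and rejoin the rotation on the next pass.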
Although blade technology has existed for several years, 2003 marks the first time that full-fledged blade products are available from the three major Intel server makers: Dell Computer, Hewlett-Packard and IBM. Those vendors, along with Sun Microsystems and its Intel- and UltraSparc-based blades, and lesser-known RLX Technologies, which produces blades based on both Intel and Transmeta processors, all hope that the technology will help jump-start a moribund server market.
Blades’ roots reach back to the halcyon days of the dotcom boom, when space-squeezed Internet data centers began looking for compact servers they could pack together as closely as possible. “Simply put, with blade servers, they could put more servers per square foot than they could using conventional rack-mounted servers,” says John Enck, an analyst who covers the blade market for Gartner.
Today, vendors are looking to expand the number of potential customers by marketing blades to just about any enterprise that needs to operate multiple servers. Most observers, however, believe that the technology is still best suited for organizations with server-intensive IT shops, such as Internet service providers, online merchants, financial institutions, research labs and media streamers.
For vendors, blades provide a unique opportunity to lock customers into a long-term relationship. Just as shaver-makers design razors to accept only one type of blade (their own), blade-server vendors aim to enmesh customers inside a highly proprietary environment. “There’s no ability to interchange anything between products,” notes Enck.
Getting enterprises to commit to a particular type of blade technology is crucial to vendors at this early stage, since the market is projected to skyrocket over the next few years. Imex Research predicts global sales will soar from $50 million in 2002 to $3.5 billion in 2005, according to Anil Vasudeva, president of the San Jose, Calif.-based IT research company.
Vendors realize that blade sales made today will continue to pay off for many years. They understand that in addition to hooking organizations on their technology, large enterprises will drag smaller organizations along in their wake. “Typically, large enterprises tend to lead the way, and what they do impacts the market further down the line,” says Charles King, infrastructure hardware research director for The Sageza Group, a computer industry research company in Mountain View, Calif.
A Hot Technology
Los Alamos ventured into blade technology during the summer of 2001, when it began replacing a 128-processor parallel cluster supercomputer that was failing on an almost weekly basis. Part of the problem was the laboratory’s less-than-ideal physical plant. “Our entire cluster environment sits in a dusty, hot (80 degrees Fahrenheit to 85 degrees Fahrenheit) and confined work area inside a warehouse,” says Feng. By consuming less power, blades generate less heat and are therefore less likely to fail, particularly when operating in such a subpar environment.
Blade technology lies at the heart of Los Alamos’s year-old Green Destiny cluster, which is designed to combine supercomputer power with size and energy efficiency. Using 240 blades supplied by RLX, the system is capable of 160 gigaflops (billions of floating-point operations per second). That’s not particularly fast for a supercomputer; the Q supercomputer, Green Destiny’s conventional server-based counterpart, is almost 200 times as fast. Green Destiny, however, costs about one-640th as much as Q: $335,000 compared with $215 million. Q also requires a costly temperature-controlled, dust-free environment.
Los Alamos’s old 128-processor standalone cluster consumed 48 square feet of space. Green Destiny, on the other hand, requires only 6 square feet. Feng says Green Destiny also provides “significantly better reliability” while cutting costs. “Our belief is that we will save roughly two to three times in terms of total cost of ownership,” he says. His team is so impressed with blades that it will use the technology in its next supercomputer: a 480-processor monster cluster, dubbed The Green Machine. Looking to the future, Feng says, “I would view our Green Destiny cluster as the Toyota Camry of blade-server clusters.”
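The cost and space claims can be checked with back-of-the-envelope arithmetic, using only the figures reported above:

```python
# Back-of-the-envelope comparison from the article's figures.
green_destiny_cost = 335_000        # dollars
q_cost = 215_000_000                # dollars
old_cluster_space = 48              # square feet
green_destiny_space = 6             # square feet

cost_ratio = q_cost / green_destiny_cost                 # Q costs ~640x more
space_ratio = old_cluster_space // green_destiny_space   # 8x less floor space

print(round(cost_ratio), space_ratio)  # 642 8
```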
All Blades Aren’t Alike
While many enterprises shun emerging technologies, Los Alamos had no choice but to be a trailblazer, says Feng. “With the type of research that we do, we have always been inventors, pre-adopters or early adopters of technology.” Feng admits, however, that blades can be a tough buy for CIOs who have just begun studying the market. “Although blade servers are a new technology, there are already many different flavors of blades,” he says. “Make sure you buy the blade that’s appropriate for your application and your environment.”
Yet finding the right type of blade is hardly a simple matter, since vendors are approaching the technology from various directions. The three major market players (Dell, HP and IBM) are focusing their efforts on general business customers by providing blades based on mainstream Intel processors, ranging from the antiquated Pentium III (on Dell and HP systems) to the high-octane, dual-processor Xeon (on IBM blades). Sun, meanwhile, is following a tactical approach that targets specific customer needs such as network edge applications, telecommunications and enterprise applications. RLX, for its part, is paying special attention to technical customers.
It’s still too early to tell which blade strategy, if any, will ultimately dominate the market. Unfortunately, the proprietary nature of current blade products means that managers will likely have a difficult time shifting to another vendor’s technology should their current vendor’s blade fall out of favor. “Everybody calls their product a blade, although there’s a lot of differentiation in terms of what processors they use, how physically big they are…and what capabilities [they have],” says Gartner’s Enck.
A Management Problem
Beyond the problem of sorting through various blade designs, early adopters face the headache of dealing with management software. “Every blade today has memory, processors and disks on it, so you have to manage all those resources on each blade,” notes Enck. A blade server usually comes complete with an operating system (typically Windows or Linux) and a dedicated, and usually proprietary, management application.
The problem with those programs is that they preclude the mixing of blades from various vendors. Fortunately, at least some degree of standardization is likely to arrive within a few years. “Standards are coming, but I think it will stop short of complete interoperability of all the blades between all the vendors,” says Enck.
IBM is hoping to smooth the path to standardization?and promote its blade strategy?by making its Director management software available for resale by other blade-server vendors. “The dream, of course, is that there will be software solutions that will allow the blending of server platforms in these squirrelly heterogeneous environments that a lot of companies have,” says Sageza’s King.
But there will be limits to standardization, adds Enck. “I don’t think we will ever see a situation where we will be able to take an IBM blade and put it into a Dell enclosure,” he says.
The Next Hurdle
Another major hurdle facing blade vendors is getting their products to work with advanced storage systems. “There are limits to integrating blades into SAN and other direct access storage structures,” says Enck, noting that some blade vendors have added storage area network (SAN) support, but only in a limited fashion thus far. Storage integration is gradually becoming a reality, however. IBM, for example, is offering optional Fibre Channel and Gigabit Ethernet capabilities, giving its blades the flexibility to support both SAN and network-attached storage environments. Other vendors are likely to follow IBM’s lead.
Many observers believe that as blades become more capable, it’s only a matter of time before they begin challenging standalone server sales. “At some point, businesses are going to have to start buying a technology to replace boxes that are simply getting out of date,” says King. Enck concurs: “Looking out five years, I think this will be a very strong technology.”