Efficiency is at the core of Facebook's ability to scale, according to Jason Taylor, the company's vice president of infrastructure. No more than five types of servers are in use at any of Facebook's data centers, located in five different regions, and no servers or infrastructure sit dormant waiting for services to launch, Taylor told analysts yesterday at the Credit Suisse Technology Conference.

Considering the volume of new data that Facebook processes every day (930 million photo uploads, 6 billion likes and 12 billion messages), cost savings and efficiency gains compound at a scale few companies can match.

Taylor says Facebook is focused on cutting costs in three areas: its data centers, its servers and its software. That effort begins with managing the heat generated by the servers that power the site's 1.35 billion monthly users.

[Related News: Open Compute Project Chief Frankovsky Quits Facebook]

A data center housed in a facility without a dedicated heat management design can easily incur up to 90 percent in additional energy costs for every watt delivered to a server, Taylor says. "Now at Facebook, because we've designed both our own servers and data centers, that heat tax is only seven percent… We are using cold air from the outside. We're not chilling air at all. We're passing it across the servers, mixing it in a hot aisle and then evacuating it out the other side of the building."

"In terms of raw thermal efficiency, our data centers are second to none," says Taylor.

Facebook's Adherence to Open Source

Facebook's data center design, open-sourced through its Open Compute Project, also delivers savings through a deliberately homogeneous approach to computing. The advantages include volume pricing, easier repurposing, simpler operations and repairs, and the ability to allocate servers in hours rather than months.
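The "heat tax" figures Taylor cites can be put in concrete terms: a 90 percent overhead corresponds roughly to a power usage effectiveness (PUE) of 1.90, while Facebook's 7 percent corresponds to about 1.07. The sketch below compares total facility draw and annual energy spend for the two cases; the 1 MW IT load and the $0.07/kWh electricity price are hypothetical numbers chosen for illustration, not figures from the talk.

```python
def facility_power_kw(it_load_kw: float, overhead_fraction: float) -> float:
    """Total facility draw: IT load plus cooling/distribution overhead.

    overhead_fraction of 0.90 means 90% extra energy per watt delivered
    to servers (PUE ~1.90); 0.07 matches Facebook's quoted 7% (PUE ~1.07).
    """
    return it_load_kw * (1.0 + overhead_fraction)

IT_LOAD_KW = 1_000.0      # hypothetical 1 MW of server load
HOURS_PER_YEAR = 8_760
PRICE_PER_KWH = 0.07      # hypothetical electricity price, $/kWh

for label, overhead in [("conventional (90% heat tax)", 0.90),
                        ("Facebook (7% heat tax)", 0.07)]:
    kw = facility_power_kw(IT_LOAD_KW, overhead)
    cost = kw * HOURS_PER_YEAR * PRICE_PER_KWH
    print(f"{label}: {kw:,.0f} kW total, ${cost:,.0f}/year")
```

At these assumed rates, the gap between the two overheads is 830 kW of continuous draw, which is where the compounding savings Taylor describes come from.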
Homogeneous infrastructure also makes it easier for technicians to optimize systems on the fly, according to Taylor.

Facebook's third major efficiency win comes from its software, which Taylor says is "absolutely critical" in delivering efficient infrastructure. "Software is far more flexible than hardware. You pay for hardware, and your hardware becomes inefficient when you have a lot of variation," he says, adding that most of Facebook's core software is also open source.

[Related Feature: How to Use Facebook's Open Sourced Data Design to Cut Costs]

"We really believe that the entire industry can benefit from efficiency work that we do, and that we can benefit from the industry feeding back and contributing new ideas and designs," Taylor says. "Fundamentally our company is going to win or lose based on our product, the cost of our infrastructure, and cost efficiency wins on infrastructure are something we'd like the entire industry to benefit from."

While the technician-to-server ratio at a typical data center is around 1 to 450, the ratio in Facebook's facilities falls somewhere between 1 to 15,000 and 1 to 20,000, according to Taylor.

Facebook's Exponential Rise in Network Bandwidth

Taylor says the amount of network bandwidth available at a reasonable price is increasing dramatically. The standard 1Gbps servers that Facebook used when he joined the company in 2009 were upgraded to 10Gbps in 2011, and they'll be swapped out again for 25Gbps boxes within the next two years.

[Related News: Facebook-Led Open Compute Project Tackles Network Switches]

"I'd say within three years we'll have 100Gbps servers," Taylor says. "So over around a six-year period, going 100 times up in the amount of bandwidth that's available. I think network and improvements in networking is going to be the largest driver toward changes in how large-scale Internet companies work."
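The per-server bandwidth timeline Taylor lays out can be summarized as a simple progression. In the sketch below, the 2016 and 2017 dates are inferred from his "next two years" and "within three years" projections, assuming the talk took place in late 2014 (consistent with the 1.35 billion monthly-user figure), so treat those years as assumptions rather than stated facts.

```python
# Per-server NIC bandwidth over time, in Gbps.
# 2009 and 2011 are stated in the article; 2016 and 2017 are
# inferred from Taylor's projections (assumptions, not quotes).
nic_gbps = {2009: 1, 2011: 10, 2016: 25, 2017: 100}

# The "100 times up" Taylor describes: 1 Gbps -> 100 Gbps.
growth = nic_gbps[2017] / nic_gbps[2009]
print(f"Projected per-server bandwidth growth: {growth:.0f}x")
```

Taylor rounds this span to "around a six-year period," counting roughly from the 10Gbps upgrade; either way, the order-of-magnitude jump is what he expects to reshape how large-scale Internet companies work.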