Growth is normally a boon for any business. Servers hum faster when an ecommerce site attracts more customers (and more credit card transactions). When storage requirements for a new business that handles documentation for large companies suddenly escalate, executives high-five each other.
Scaling can be so costly, though, that fast growth isn't always a positive. Fortunately, new technologies can help a company ramp up quickly and efficiently, removing some of the pain of expanding a data center. Instead of facing a major capital outlay that offsets new revenue, companies can use these innovations to make scaling a data center to meet demand less of a drain.
1. Modular Data Center Additions — But Not Modules
Most large companies know they can turn to Microsoft, HP and others to purchase entire data center modules. They provide a way to scale quickly, but usually at a high cost.
A new addition might consist of multiple racks with as few as 30 servers, assembled off-site so that the cooling systems, power, cabinets and servers are all ready to go by the time they are installed. (The company can also install racks with up to 10,000 servers for larger companies.)
One key component: a power busway that provides greater flexibility for adding modules. Major electrical manufacturers, including Siemens, Schneider Electric, PDI and Universal Electric, offer busway solutions, Cantrell says. Verne Global standardized on Universal Electric's Starline infrastructure for its modular data centers, he says, given the company's track record and its monitoring options, which let Verne Global provide "very granular feedback" to customers.
2. Power Enterprise Pools: 'Elastic Capacity'
One challenge in scaling a data center is knowing when to invest in servers and how many to add. There are often spikes in demand, but it's difficult to predict when they will occur — and what to do when you don't need the extra capacity.
One answer: Enterprise Pools, a scaling infrastructure from IBM that works with IBM Power servers. "Demand for data applications is driving clients to look at continuous availability," says Steve Sibley, director of IBM Power Systems. "When new apps roll out, they need to be able to scale rapidly or recover IT resources and scale down. There needs to be an elastic capability similar to the cloud and a way to not overpay for capabilities."
The idea of managing down to the processor level isn't new. What is new is that data centers can add, move and remove virtual processors and memory to handle usage spikes or maintenance. They don't have to pay for extra capacity; they pay only for the servers they need, and only a portion of the full cost of processors and memory up front. IBM estimates the cost of these pools at $0.67 per hour, based on per-day costs for processor and memory allocations. Data center operators can manually adjust the service levels for an application as often as they want, then use those service levels to drive automation.
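The pay-for-what-you-use math can be sketched in a few lines. This is a toy billing model in the spirit of elastic capacity pools, not IBM's actual interface: the $0.67-per-hour figure comes from the article, while the function names and the fixed-provisioning comparison rate are illustrative assumptions.

```python
# Toy model of elastic, metered capacity vs. fixed provisioning.
# HOURLY_RATE is the article's estimated pool cost; everything else
# here is a hypothetical sketch, not IBM's billing API.

HOURLY_RATE = 0.67  # estimated cost per activated core-hour

def metered_cost(usage_hours_per_core):
    """Pay only for the core-hours actually activated."""
    return sum(usage_hours_per_core) * HOURLY_RATE

def fixed_cost(n_cores, hours, hourly_equiv=HOURLY_RATE):
    """Provision every core for the whole period, used or not."""
    return n_cores * hours * hourly_equiv

# A 720-hour month where 4 extra cores are needed only during a
# 60-hour demand spike:
spike = [60, 60, 60, 60]          # 240 activated core-hours in total
print(f"elastic: ${metered_cost(spike):,.2f}")
print(f"fixed:   ${fixed_cost(4, 720):,.2f}")
```

Under these illustrative numbers, metering the spike costs a fraction of keeping the same four cores provisioned all month, which is the "elastic capability" Sibley describes.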
3. Object Storage: No More Playing With Blocks
When it comes to data center scale, traditional file storage systems can be limiting. Think of an upstart social network. When there are a few hundred users, the storage system can keep up with the number of images and videos posted online. Scaling to a few million users suddenly becomes a management chore, because data center managers must juggle multiple storage volumes.
"File systems are designed for people to collaborate on the same data without modifying it at the same time," says Tom Leyden, a spokesman for DataDirect Networks. "If two people access a Word document at the same time, they will lock the file. Those locking mechanisms make it complex to scale the file system. A file system is slow when it's locked."
The answer, says Leyden, is object storage. The idea is to use a simplified ID system for files. The ID crosses multiple storage volumes and refers to where that object is stored. Metadata is also attached to the file to make it more searchable across volumes. There's no hierarchy and no locking mechanism, says Leyden. This helps with scaling because object storage can create "clusters" of data that scale as a company grows. Object storage creates a single storage management system — one that's easier to manage.
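The flat-ID-plus-metadata model Leyden describes can be sketched with an in-memory toy. The class and method names below are illustrative assumptions; real object stores spread objects across many volumes and nodes, which is exactly what the hierarchy-free, lock-free design makes easy to scale.

```python
import uuid

# Minimal sketch of an object store: a flat ID namespace instead of a
# directory hierarchy, with searchable metadata attached to each object.
# An in-memory dict stands in for the distributed storage cluster.

class ObjectStore:
    def __init__(self):
        self._objects = {}  # object_id -> (data, metadata)

    def put(self, data, **metadata):
        """Store data under a flat, hierarchy-free ID and return the ID."""
        object_id = uuid.uuid4().hex
        self._objects[object_id] = (data, metadata)
        return object_id

    def get(self, object_id):
        """Fetch by ID; the caller never knows (or cares) which volume."""
        return self._objects[object_id][0]

    def search(self, **criteria):
        """Find object IDs whose metadata matches every criterion."""
        return [oid for oid, (_, meta) in self._objects.items()
                if all(meta.get(k) == v for k, v in criteria.items())]

store = ObjectStore()
photo_id = store.put(b"<jpeg bytes>", content_type="image/jpeg", user="alice")
store.put(b"<mp4 bytes>", content_type="video/mp4", user="alice")
print(store.search(user="alice"))  # both objects, no directory walk needed
```

Because lookup is by ID and search is by metadata, there is no path hierarchy to reorganize and no file lock to contend for as the object count grows.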
4. Auto-tiering: Scale Up, Scale Down
Data center managers need to automatically adjust storage as application needs change. The goal is to accommodate high-performance apps, but the challenge is knowing when to scale up for demand and then when to scale down.
Auto-tiering analyzes how frequently application data is actually used. In an infrastructure that uses Dell EqualLogic arrays, for example, 80 percent of data becomes inactive after a month. Auto-tiering moves this legacy data to the lowest-cost storage option rather than keeping it on faster drives longer than necessary.
One of the most recent changes is how auto-tiering uses storage tiers to take advantage of flash speed boosts. "Rather than using spinning disk drives, which are mechanical and cause heat and vibration, we use the latest class of solid-state drives," says Bob Fine, director of storage product management at Dell. "The new technology leverages solid state drives — data coming in that's performance oriented, we place on solid state."
The innovation: auto-tiering uses only a small amount of flash for the high-performance apps. Dell auto-tiering can also distinguish between higher-cost single-level cell (SLC) flash and slower, higher-capacity multi-level cell (MLC) flash. These smart adjustments occur automatically and reduce the cost of using flash.
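A tiering decision of this kind reduces to a small policy function. This sketch assumes three tiers and two signals, a 30-day inactivity cutoff (matching the article's "inactive after a month" figure) and a read-rate threshold; the tier names, threshold values and function signature are illustrative, not Dell's actual algorithm.

```python
from datetime import datetime, timedelta

# Hedged sketch of an auto-tiering policy: hot, performance-oriented
# data lands on SLC flash, warm data on cheaper MLC flash, and data
# untouched for ~30 days sinks to low-cost spinning disk.

TIERS = ("slc_flash", "mlc_flash", "spinning_disk")

def choose_tier(last_access, reads_per_day, now=None):
    now = now or datetime.now()
    if now - last_access > timedelta(days=30):
        return "spinning_disk"   # the ~80% of data that has gone cold
    if reads_per_day > 100:
        return "slc_flash"       # performance-oriented, frequently read
    return "mlc_flash"           # warm, but not performance-critical

now = datetime(2024, 1, 31)
print(choose_tier(datetime(2023, 11, 1), reads_per_day=5, now=now))
print(choose_tier(datetime(2024, 1, 30), reads_per_day=500, now=now))
```

Running the classifier periodically over all volumes is what lets the array keep only a small, busy working set on expensive SLC flash while everything else drains to cheaper media.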
John Brandon is a former IT manager at a Fortune 100 company who now writes about technology. He has written more than 2,500 articles in the past 10 years. You can follow him on Twitter @jmbrandonbb.