How to Implement Next-Generation Storage Infrastructure for Big Data

Managing the petabyte-scale data stores that come with Big Data is a different beast from managing traditional large-scale data infrastructures. Online photo site Shutterfly, which manages more than 30 petabytes of data, shares its strategy for taming the storage beast.

Mon, April 16, 2012

CIO — Everyone is talking about Big Data analytics and the associated business intelligence marvels these days, but before organizations can leverage that data, they have to figure out how to store it. Managing data stores at the petabyte scale and beyond is fundamentally different from managing traditional large-scale data sets. Just ask Shutterfly.

Shutterfly is an online photo site that differentiates itself by allowing users to store an unlimited number of images that are kept at the original resolution, never downscaled. It also says it never deletes a photo.

"Our image archive is north of 30 petabytes of data," says Neil Day, Shutterfly senior vice president and chief technology officer. He adds, "Our storage pool grows faster than our customer base. When we acquire a customer, the first thing they do is upload a bunch of photos to us. And then when they fall in love with us, the first thing they do is upload a bunch of additional photos."

To get an idea of the scale involved, one petabyte is equivalent to 1,000 terabytes, or 1 million gigabytes. The archive of the first 20 years of observations by NASA's Hubble Space Telescope amounts to a bit more than 45 terabytes of data, and one terabyte could hold about 17,000 hours of audio compressed at 128 kbit/s.
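
As a sanity check on those figures, here is a minimal back-of-the-envelope sketch in Python. The decimal unit conversions and the 128 kbit/s bitrate come from the paragraph above; the script itself is illustrative, not part of anyone's tooling.

    # Back-of-the-envelope check of the figures above, using decimal units
    # (1 PB = 1,000 TB = 1,000,000 GB).
    bitrate_bps = 128_000                    # 128 kbit/s compressed audio
    bytes_per_hour = bitrate_bps / 8 * 3600  # 57,600,000 bytes, ~57.6 MB/hour
    tb_in_bytes = 1_000_000_000_000
    hours_per_tb = tb_in_bytes / bytes_per_hour
    print(f"~{hours_per_tb:,.0f} hours of audio per terabyte")
    # -> ~17,361 hours, i.e., "about 17,000 hours"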

Petabyte-Scale Infrastructures Are Different

"Petabyte-scale infrastructures are just an entirely different ballgame," Day says. "They're very difficult to build and maintain. The administrative load on a petabyte or multi-petabyte infrastructure is just a night and day difference from the traditional large-scale data sets. It's like the difference between dealing with the data on your laptop and the data on a RAID array."

When Day joined Shutterfly in 2009, storage had already become one of the company's biggest buckets of expense, and it was growing at a rapid clip—not just in terms of raw capacity, but in terms of staffing.

"Every n petabytes of additional storage meant we needed another storage administrator to support that physical and logical infrastructure," Day says. With such massive data stores, he says, "things break much more frequently. Anyone who's managing a really large archive is dealing with hardware failures on an ongoing basis. The fundamental problem that everyone is trying to solve is, knowing that a fraction of your drives are going to fail in any given interval, how do you make sure your data remains available and the performance doesn't degrade?"
