Helping Data Centers Cope With Big Data Workloads

Big data workloads tend to consume enormous amounts of compute resources, which can create serious logjams in your data center if the workloads aren't scheduled optimally. Adaptive Computing's Big Workflow is designed to leverage HPC and cloud technologies to help data centers adapt to big data.

Tue, February 25, 2014

CIO — The demands of big data applications can put a lot of strain on a data center. Traditional IT seeks to operate in a steady state, with maximum uptime and continuous equilibrium. After all, most applications tend to have a fairly light compute load; they operate inside a virtual machine and use just some of its resources.

Big data applications, on the other hand, tend to consume massive amounts of compute capacity. They also tend to come in spikes of activity: they start and end at a particular point in time.

"Big data is really changing the way data centers are operating and some of the needs they have," says Rob Clyde, CEO of Adaptive Computing, a specialist in private/hybrid cloud and technical computing environments. "The traditional data center is very much about achieving equilibrium and uptime."

"On the big data side, scheduling becomes crucial," he adds. "Without it, you end up with a real logjam."

On Tuesday, Adaptive Computing launched its Big Workflow solution, which is designed to leverage high-performance computing (HPC) and cloud technology in an effort to help large enterprises address that problem.

Clyde says Big Workflow draws on Adaptive Computing's Moab HPC Suite and Moab Cloud Suite to allow data centers to use all available resources—including bare metal and virtual machines, technical computing environments (like HPC and Hadoop), cloud (public, private and hybrid) and even agnostic platforms that span multiple environments (like OpenStack)—as a single ecosystem that adapts as workloads demand.
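Adaptive Computing hasn't published the internals of that scheduling layer, but the basic idea of treating disparate resources as a single pool can be illustrated with a toy model: jobs declare their resource needs and priority, and a scheduler places each one on whichever pool currently has capacity. The pool names, job fields and first-fit policy in the sketch below are illustrative assumptions, not a description of how Moab actually works.

```python
# Toy sketch of scheduling jobs across heterogeneous resource pools.
# Pool names, capacities and the greedy first-fit policy are illustrative
# assumptions, not Adaptive Computing's actual algorithms.
from dataclasses import dataclass

@dataclass
class Pool:
    name: str
    free_cores: int

@dataclass
class Job:
    name: str
    cores: int
    priority: int  # higher priority runs first

def schedule(jobs, pools):
    """Place the highest-priority jobs first on any pool with enough free
    cores; jobs that don't fit anywhere wait in the queue."""
    placements, waiting = [], []
    for job in sorted(jobs, key=lambda j: j.priority, reverse=True):
        target = next((p for p in pools if p.free_cores >= job.cores), None)
        if target:
            target.free_cores -= job.cores
            placements.append((job.name, target.name))
        else:
            waiting.append(job.name)
    return placements, waiting

if __name__ == "__main__":
    pools = [Pool("bare-metal-hpc", 64),
             Pool("private-cloud-vms", 32),
             Pool("public-cloud-burst", 128)]
    jobs = [Job("hadoop-etl", 48, 5),
            Job("nightly-report", 8, 1),
            Job("genome-assembly", 96, 9)]
    placed, queued = schedule(jobs, pools)
    print("placed:", placed)
    print("queued:", queued)
```

In this simplified view, the "single ecosystem" is just the union of the pools, and avoiding a logjam comes down to matching spiky, resource-hungry jobs to whatever capacity is free at that moment rather than tying them to one silo.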

In turn, that allows the data center to optimize the analysis process, delivering an organized workflow that increases throughput and productivity while reducing cost, complexity and errors. It also lets data centers guarantee service levels, ensuring SLAs are met, uptime is maximized, and there is proof that services were delivered and resources were fairly allocated.

"The explosion of big data, coupled with the collisions of HPC and cloud, is driving the evolution of big data analytics," Clyde says. "A Big Workflow approach to big data not only delivers business intelligence more rapidly, accurately and cost effectively, but also provides a distinct competitive advantage. We are confident that Big Workflow will enable enterprises across all industries to leverage big data that inspires game-changing, data-driven decisions."

DigitalGlobe Meets SLAs During Disasters

One customer that has benefited from Big Workflow is high-resolution satellite imagery solutions provider DigitalGlobe.

DigitalGlobe's archived Earth imagery contains more than 4.5 billion square kilometers of global coverage. Each year it adds two petabytes of raw imagery to its archives, which turns into eight petabytes of new products.
