by Laurianne McLaughlin

CERN’s Search for God (Particles) Drives Massive Storage Needs

Feature
Jul 20, 2007 | 5 mins
Data Center, Virtualization

Think your storage headaches are big? Try being the guy in charge of storing the 1GB of data per second, around the clock for a month, that comes off CERN's Large Hadron Collider (LHC).

Maybe you last read about CERN (the European Organization for Nuclear Research) and its massive particle accelerators in Angels & Demons by Dan Brown of The Da Vinci Code fame. In that book, the lead character travels to the cavernous research institute on the border of France and Switzerland to help investigate a murder. In real life, one of CERN’s grisliest problems is finding storage for the massive amounts of data derived from the four high-profile physics experiments that use the institute’s Large Hadron Collider (LHC). Due for operation in May 2008, the LHC is a 27-kilometer-long device designed to accelerate subatomic particles to ridiculous speeds, smash them into each other and then record the results.

The LHC experiments will study everything from the tiniest forms of matter to the questions surrounding the Big Bang. The latter subject provided Pierre Vande Vyvre, a project leader for data acquisition for CERN, with a particularly thorny challenge: He had to design a storage system for one of the four experiments, ALICE (A Large Ion Collider Experiment). It’s one of the biggest physics experiments of our time, boasting a team of more than 1,000 scientists from around the world.

For one month per year, the LHC will be spitting out project data to the ALICE team at a rate of 1GB per second. That’s 1GB per second, for a full month, “day and night,” Vande Vyvre says. For that month, the data rate is an entire order of magnitude larger than that of each of the other three experiments being done with the LHC. In total, the four experiments will generate petabytes of data.
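To put that rate in perspective, a quick back-of-the-envelope calculation shows the scale of a single ALICE run. This sketch assumes a 30-day run and decimal (1,000-based) storage units, neither of which the article specifies:

```python
# Rough sizing of one ALICE run, based on the 1GB-per-second figure above.
# The 30-day run length and decimal units are assumptions for illustration.
GB_PER_SECOND = 1
SECONDS_PER_DAY = 24 * 60 * 60
DAYS_IN_RUN = 30

total_gb = GB_PER_SECOND * SECONDS_PER_DAY * DAYS_IN_RUN
total_pb = total_gb / 1_000_000          # 1 PB = 1,000,000 GB (decimal)

print(f"One run: {total_gb:,} GB, roughly {total_pb:.1f} PB")
# One run: 2,592,000 GB, roughly 2.6 PB
```

Under those assumptions, a single month of ALICE data alone lands in petabyte territory, which is consistent with the petabyte totals quoted for the four experiments combined.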

CERN believes that the LHC will let scientists re-create how the universe behaved immediately after the Big Bang. At that time, everything was a “sort of hot dense soup…composed of elementary particles,” the project’s webpage explains. The LHC can trigger “little bangs” that let ALICE scientists study how the particles act and come together, helping answer questions about the actual structure of atoms.

“The data is what the whole experiment is producing,” Vande Vyvre says. “This is the most precious thing we have.”

Vande Vyvre is charged with managing the PCs, storage equipment, and custom and homegrown software surrounding the ALICE project’s data before it hits the data center and gets archived. The ALICE group’s experiments will start running in May 2008, but the storage rollout began in September 2006.

The ALICE experiment grabs its data from 500 optical fiber links and feeds the collision data to 200 PCs, which start to piece the many snippets of data together into a more coherent picture. Next, the data travels to another 50 PCs that do more work assembling the picture, then record it to disk near the experiment site, about 10 miles from the data center. “During this one month, we need a huge disk buffer,” Vande Vyvre says.
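The shape of that pipeline is a two-stage fan-in: roughly 500 links feeding 200 first-tier PCs, which hand partially assembled data to 50 second-tier PCs that write the disk buffer. The sketch below only illustrates that fan-in pattern; it is not CERN's actual data-acquisition code, and the function names and round-robin mapping are assumptions.

```python
# Illustrative sketch of the two-stage fan-in described above; not ALICE's
# real DAQ software. The node counts come from the article; everything else
# is an assumption for illustration.
N_LINKS, TIER1_PCS, TIER2_PCS = 500, 200, 50

def tier1_pc_for(link_id: int) -> int:
    """Map each optical fiber link to one of the 200 first-tier PCs."""
    return link_id % TIER1_PCS

def tier2_pc_for(event_id: int) -> int:
    """Map each assembled event to one of the 50 PCs that write to disk."""
    return event_id % TIER2_PCS

def build_event(event_id: int, fragments: dict[int, bytes]) -> bytes:
    """Stitch per-link fragments into one record for the disk buffer."""
    return b"".join(fragments[link] for link in sorted(fragments))

# One simulated event: every link contributes a fragment of collision data.
fragments = {link: f"frag-{link};".encode() for link in range(N_LINKS)}
record = build_event(0, fragments)
print(f"event 0: {N_LINKS} fragments -> {len(record)} bytes, "
      f"written by tier-2 PC {tier2_pc_for(0)}")
```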

The solution he chose was a 4Gbps Fibre Channel SAN, using a clustered file system. Why a clustered file system? “We didn’t want storage strictly linked with a hardware vendor,” he says.

For the clustering, the team is using Quantum’s StorNext software as its file system. “Performance was our number-one concern,” he says. The second concern was flexibility. “This large buffer means a lot of hardware,” he says. “StorNext makes our SAN much more flexible. You can work with different hardware technologies, and it’s completely vendor independent.” Finally, CERN had to ensure scalability. “It’s often the case that physics experiments are upgraded and the system must be able to evolve,” he says.

File systems such as StorNext let users share files across multiple platforms; they don’t care whether a Windows server or a Solaris server needs to access a particular file. Long more common in research and university environments, they are now growing in popularity, says Noemi Greyzdorf, a research manager who follows storage software for IDC (a sister company of CIO.com’s publisher, CXO Media). Clustered file systems are still an evolving category, she says, but enterprise IT is warming up to them.

“There’s been a push toward clustered file systems in the enterprise,” she says. “With the growth of unstructured data, there’s an increasing need for a centralized way to manage it.”

Other options in this category include IBM’s General Parallel File System and Symantec’s Veritas Storage Foundation Cluster File System. StorNext is known in the category for its performance, its data sharing across operating systems and its ability to move data across tiers, Greyzdorf says.

The data acquisition team for ALICE particularly valued StorNext’s affinity feature, Vande Vyvre says, because it keeps separate data streams from being written to the same disks at the same time, which would hurt performance. “This is what we used to keep the data traffic separated in streams, to avoid any slowdowns,” he notes.
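The idea behind that separation can be shown with a small sketch: pin each recording stream to its own group of disks so concurrent writers never compete for the same spindles. The group names and round-robin policy below are assumptions for illustration, not StorNext configuration syntax.

```python
# Minimal sketch of the stream-to-disk-group separation described above.
# "Stripe group" names and the mapping policy are illustrative assumptions,
# not actual StorNext configuration.
STRIPE_GROUPS = ["sg_a", "sg_b", "sg_c", "sg_d"]   # hypothetical disk groups

def affinity_for(stream_id: int) -> str:
    """Pin each data stream to a fixed disk group (simple round-robin)."""
    return STRIPE_GROUPS[stream_id % len(STRIPE_GROUPS)]

# Recording PCs, each with its own stream, spread across the groups so that
# no two heavy writers hit the same disks at the same time.
for stream in range(8):
    print(f"stream {stream} -> {affinity_for(stream)}")
```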

As for Vande Vyvre’s advice to CIOs considering clustered file systems: don’t underestimate the value of hardware vendor flexibility for drives and arrays, he says. “We are quite happy to have stuck to our wish for a system that is vendor independent. The experiment will have a lifetime of 10 years. We know in principle we can keep the file system during that time,” he says.