In scientific and engineering research, the search for knowledge never ends. Every time researchers find answers to one set of questions, they open the door to even bigger questions. Increasingly, these questions can be answered only with the processing power of high-performance computing (HPC) systems.
That’s the way it is at the MIT Lincoln Laboratory Supercomputing Center (LLSC). At LLSC, thousands of scientists and engineers leverage HPC clusters to tackle ever-harder problems in fields like space observation, autonomous systems, robotic vehicles, machine learning, and cybersecurity. This list of examples could go on and on, because LLSC is in the business of helping people explore and solve a virtually unlimited range of national and global problems.
Scientists and engineers who leverage the lab’s resources got a big lift in the fall of 2016 with the arrival of a new petascale system — or a system that can perform an astounding one quadrillion floating point operations per second. The supercomputer was built with 648 Dell EMC compute nodes based on many-core Intel® Xeon Phi™ processors and tied together with the Intel® Omni-Path Architecture network fabric. It debuted as the most powerful supercomputer in New England and the third most powerful at a U.S. university.1
The value of the new LLSC supercomputer is rooted deeply in the needs of the research scientists and engineers who continually take on bigger and harder problems, according to LLSC Manager Albert Reuther.2
“The goal we had for this installation and this project was to reach a petaflop — a real High Performance Linpack petaflop — because many of our applications map reasonably well to HPC,” Reuther says. “That, in turn, challenges our users to think big in terms of what they might be able to do with that kind of capability at that scalability.”
For the LLSC team and the scientists and engineers it serves, challenges feed on challenges. “When our users are challenged, they start thinking big, and that challenges us back,” Reuther says. “And that is exactly the situation we want to be in, to do the really hard problems, the MIT hard problems, that will have an impact on our national security.”
This is the way it is in university and government research labs around the world. When scientists and engineers have access to next-generation supercomputing systems, they can think bigger — and take on problems that might otherwise be all but unapproachable.
A few examples:
- Researchers affiliated with the Texas Advanced Computing Center (TACC) at the University of Texas at Austin leverage the power of a supercomputer to identify brain tumors using machine learning technologies.3
- At NASA’s Goddard Space Flight Center in Greenbelt, Maryland, one of the world’s largest contingents of Earth scientists relies heavily on HPC systems to investigate weather and climate phenomena at time scales ranging from days to centuries.4
- Researchers affiliated with the University of Pisa leverage HPC resources and machine learning to better understand DNA sequencing data. This work requires encoding DNA sequence data as an image dataset and then using deep learning image classification and training solutions.5
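To make the last example concrete, the encoding step it describes can be sketched in a few lines. The snippet below is a minimal illustration, not the University of Pisa pipeline: it assumes a simple one-hot scheme in which each DNA base becomes one channel of an image-like matrix, which is one common way to hand sequence data to an image classification network.

```python
import numpy as np

# Hypothetical sketch: one-hot encode a DNA read as a 4-channel,
# image-like array suitable for a deep learning image classifier.
# The actual Pisa encoding is not detailed in this article; this
# only illustrates the general idea of "DNA sequence as image."

BASES = "ACGT"

def encode_read(read: str) -> np.ndarray:
    """Map a DNA read to a (4, len(read)) one-hot matrix."""
    onehot = np.zeros((len(BASES), len(read)), dtype=np.float32)
    for col, base in enumerate(read.upper()):
        row = BASES.find(base)
        if row >= 0:  # unknown bases (e.g. 'N') stay all-zero
            onehot[row, col] = 1.0
    return onehot

image = encode_read("ACGTN")
print(image.shape)  # (4, 5)
```

Once reads are encoded this way, an entire dataset of sequences becomes a stack of small "images" that off-the-shelf training tools for image classification can consume.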
Research explorations like these demand computational capabilities that go far beyond those of even the fastest desktop workstations. They require the massive parallel processing power of supercomputers — and that’s another reason why HPC matters.
To learn more about the use of HPC resources at the MIT Lincoln Laboratory Supercomputing Center, read the Dell EMC case study “Petaflop Performance.”
Making a difference with HPC
High-performance computing touches virtually every aspect of our lives. HPC is making weather forecasts more accurate, cancer therapies more precise, fraud protection more foolproof, and products more efficient. In this series of articles, we explore these and other use cases that capitalize on HPC and its convergence with data analytics to illustrate why HPC matters to all of us.
1 Lincoln Laboratory, “Lincoln Laboratory’s supercomputing system ranked most powerful in New England,” November 2016.
2 Dell EMC customer case study, “Petaflop Performance: MIT Lincoln Laboratory Supercomputing Center unveils Intel-based Dell EMC system to accelerate the Nation’s research needs in autonomous systems, device physics, and machine learning,” November 2016.
3 Dell EMC news release, “New Dell EMC Solutions Bring Machine and Deep Learning to Mainstream Enterprises,” November 13, 2017.
4 Dell EMC customer case study, “NASA Center for Climate Simulation propels research science with a homegrown cluster based on repurposed servers,” April 2017.
5 HPCwire, “Machine Learning Gets HPC Treatment at University of Pisa,” March 13, 2017.