In scientific and engineering research, the search for knowledge never ends. Every time researchers find answers to one set of questions, they open the door to even bigger questions. Increasingly, these questions can be answered only with the processing power of high-performance computing (HPC) systems.

That’s the way it is at the MIT Lincoln Laboratory Supercomputing Center (LLSC). At LLSC, thousands of scientists and engineers leverage HPC clusters to tackle ever-harder problems in fields like space observation, autonomous systems, robotic vehicles, machine learning and cybersecurity. This list of examples could go on and on, because LLSC is in the business of helping people explore and solve a virtually unlimited range of national and global problems.

Scientists and engineers who leverage the lab’s resources got a big lift in the fall of 2016 with the arrival of a new petascale system — a system that can perform an astounding one quadrillion floating-point operations per second. The supercomputer was built with 648 Dell EMC compute nodes based on many-core Intel® Xeon Phi™ processors and tied together with the Intel® Omni-Path Architecture network fabric. It debuted as the most powerful supercomputer in New England and the third most powerful at a U.S. university.1

The value of the new LLSC supercomputer is rooted deeply in the needs of the research scientists and engineers who continually take on bigger and harder problems, according to LLSC Manager Albert Reuther.2

“The goal we had for this installation and this project was to reach a petaflop — a real High Performance Linpack petaflop — because many of our applications map reasonably well to HPC,” Reuther says.
“That, in turn, challenges our users to think big in terms of what they might be able to do with that kind of capability at that scalability.”

For the LLSC team and the scientists and engineers it serves, challenges feed on challenges. “When our users are challenged, they start thinking big, and that challenges us back,” Reuther says. “And that is exactly the situation we want to be in, to do the really hard problems, the MIT hard problems, that will have an impact on our national security.”

This is the way it is in university and government research labs around the world. When scientists and engineers have access to next-generation supercomputing systems, they can think bigger — and take on problems that might otherwise be all but unapproachable.

A few examples:

Researchers affiliated with the Texas Advanced Computing Center (TACC) at the University of Texas at Austin leverage the power of a supercomputer to identify brain tumors using machine learning technologies.3

At NASA’s Goddard Space Flight Center in Greenbelt, Maryland, one of the world’s largest contingents of Earth scientists relies heavily on HPC systems to investigate weather and climate phenomena at time scales ranging from days to centuries.4

Researchers affiliated with the University of Pisa leverage HPC resources and machine learning to better understand DNA sequencing data. This work requires encoding DNA sequence data as an image dataset and then using deep learning image classification and training solutions.5

Research explorations like these demand computational capabilities that go far beyond those of even the fastest desktop workstations.
They require the massive parallel processing power of supercomputers — and that’s another reason why HPC matters.

To learn more about the use of HPC resources at the MIT Lincoln Laboratory Supercomputing Center, read the Dell EMC case study “Petaflop Performance.”

_______________________________________________

Making a difference with HPC

High-performance computing touches virtually every aspect of our lives. HPC is making weather forecasts more accurate, cancer therapies more precise, fraud protection more foolproof and products more efficient. In this series of articles, we explore these and other use cases that capitalize on HPC and its convergence with data analytics to illustrate why HPC matters to all of us.

_______________________________________________

1 Lincoln Laboratory, “Lincoln Laboratory’s supercomputing system ranked most powerful in New England,” November 2016.

2 Dell EMC customer case study, “Petaflop Performance: MIT Lincoln Laboratory Supercomputing Center unveils Intel-based Dell EMC system to accelerate the Nation’s research needs in autonomous systems, device physics, and machine learning,” November 2016.

3 Dell EMC news release, “New Dell EMC Solutions Bring Machine and Deep Learning to Mainstream Enterprises,” Nov. 13, 2017.

4 Dell EMC customer case study, “NASA Center for Climate Simulation propels research science with a homegrown cluster based on repurposed servers,” April 2017.

5 HPCwire, “Machine Learning Gets HPC Treatment at University of Pisa,” March 13, 2017.