The Southern California Earthquake Center relies on the power of supercomputers to simulate why and how earthquakes occur, evaluate their effects, and help us all prepare for the inevitable. Credit: iStock

For California residents, the threat of earthquakes is part of everyday life. While the state experiences small quakes on a daily basis, the threat of a catastrophic quake is always there. Hardest of all, scientists can’t predict when or where an earthquake will strike. They do, however, have the ability to help us prepare for quakes by developing early-warning systems, conducting earthquake preparedness drills, engineering buildings for earthquake resilience, and running scientific simulations to shed light on what will happen when the earth starts shaking.

This is where the Southern California Earthquake Center (SCEC) enters the picture. Based at the University of Southern California, SCEC coordinates fundamental research on earthquake processes, using Southern California as its principal natural laboratory. It supports core research and education in seismology, tectonic geodesy, earthquake geology and computational science.

“We study earthquakes — why they occur, how they occur and what kind of impact they have,” says Christine Goulet, Ph.D., Executive Director for Applied Science at SCEC. “We can’t predict earthquakes at this time, but we can prepare better if we know what to expect.”

For their scientific explorations, Dr. Goulet and her colleagues draw on the computational power of supercomputers to simulate what happens when geological faults rupture and the ground starts moving.

“We have a sense of how many large earthquakes will occur, and what their magnitudes will be, and so on, over the next several hundred years,” Dr. Goulet says. “But we don’t know exactly when they will occur and what they will look like and what they will do to our infrastructure.
And that’s where we can conduct simulations to really help with this. We run simulations to better understand what happens with earthquakes over time and also in a specific location.”

The power of HPC

For their earthquake simulations, the researchers at SCEC use some of our nation’s most powerful supercomputers. These high performance computing resources include multiple systems from Dell Technologies, including the Frontera supercomputer at the Texas Advanced Computing Center (TACC).

Fueled by a $60 million award from the National Science Foundation (NSF), Frontera debuted on the TOP500 list in June 2019 as the nation’s fastest academic supercomputer, with a peak-performance rating of 38.7 petaFLOPS.[1] Dell Technologies provided the primary computing system for Frontera, based on Dell EMC PowerEdge C6420 servers. In all, the system has more than 8,000 two-socket nodes, more than 16,000 Intel® Xeon® Scalable processors and 448,448 cores.

Dr. Goulet and her colleagues run two classes of simulations on Frontera: a big-picture, long-range view and a near-term view. The big-picture view models earthquakes as they happen over hundreds of thousands of years, while the near-term view zeroes in on what happens in a single event.

“The only way we can achieve these types of simulations is by using high-performance computers,” Dr. Goulet says. “And a lot of our computational research at SCEC is driven by lots of researchers working together, and we need a fast turnaround to be able to discuss the results and improve them. So it’s important for us to have access to these high-performance computers, such as Frontera.”

To Learn More

For a closer look at SCEC, read the case study.
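SCEC’s production ground-motion codes are far beyond the scope of this article, but the flavor of the arithmetic they perform can be suggested with a toy sketch. The snippet below is purely illustrative and not SCEC’s method: it steps a 1-D wave equation forward in time with finite differences, where the grid size, wave speed and source pulse are all invented for the example. Real simulations solve 3-D wave equations over vastly larger grids and many more time steps, which is precisely why supercomputers like Frontera are needed.

```python
# Illustrative sketch only: a toy 1-D wave-propagation model using
# finite differences. All parameters here (grid size n, wave speed c,
# grid spacing dx, time step dt) are arbitrary choices for demonstration.

def simulate_wave(n=200, steps=300, c=1.0, dx=1.0, dt=0.5):
    """Propagate a displacement pulse along a 1-D elastic medium."""
    assert c * dt / dx <= 1.0, "CFL stability condition must hold"
    r2 = (c * dt / dx) ** 2          # squared Courant number
    u_prev = [0.0] * n               # displacement at time t - dt
    u = [0.0] * n                    # displacement at time t
    u[n // 2] = 1.0                  # a "rupture": displacement spike mid-domain
    for _ in range(steps):
        u_next = [0.0] * n           # fixed (zero-displacement) boundaries
        for i in range(1, n - 1):
            # Standard second-order update: u_tt = c^2 * u_xx
            u_next[i] = (2.0 * u[i] - u_prev[i]
                         + r2 * (u[i + 1] - 2.0 * u[i] + u[i - 1]))
        u_prev, u = u, u_next
    return u

wave = simulate_wave()
print("peak displacement:", max(abs(x) for x in wave))
```

Even this toy version touches every grid point at every time step. Scaling the same kind of update to a 3-D grid many orders of magnitude larger, with realistic fault geometry and material properties, is the workload the researchers run on HPC systems.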
To explore the capabilities of the Frontera supercomputer at the Texas Advanced Computing Center, see the Dell Technologies case study “A New ‘Frontera’.” And to learn about other critical workloads running on the HPC systems at TACC, see the blog “Urgent Computing at the Texas Advanced Computing Center from Dell Technologies and Intel.” Explore more HPC Solutions from Dell Technologies and Intel.

[1] TOP500 List, June 2019.