The Australian National University (ANU) will begin performance testing of the country’s most powerful supercomputer on October 10, following a system build that began in mid-August.

Earth system and climate change researchers will be the first recipients of the supercomputer’s massive 1.2 petaflops of processing power – provided by 57,000 Intel-based cores – when the machine goes into production in early 2013.

The machine uses Fujitsu’s PRIMERGY x86 high-performance computing (HPC) cluster design and Intel Xeon E5 CPUs. It has 176 terabytes of memory and 12 petabytes of disk storage, and occupies eight rows in the data centre, each row 14 metres wide.

Professor Lindsay Botten, director of National Computational Infrastructure (NCI) – a joint initiative of the ANU and the Australian government – said around half of the machine’s capacity will be dedicated to modelling Earth systems such as weather and long-term climate change.

The Bureau of Meteorology will share the machine with the CSIRO and several universities for climate and Earth system science work, Professor Botten said. “It will assist in solving a lot deeper problems; [researchers] will have more elaborate calculations, which will enable them to consider more elaborate research questions.”

Professor Botten said researchers want to use the supercomputer to provide seasonal weather modelling over months rather than a few days. “You can see the economic impact of [weather changes]. If you can start to get [longer-term weather] models that work accurately, you can make some economic decisions like, ‘Do I put a crop in the ground or not?’” he said.

He added that a machine of this size and memory capacity will also enable weather forecasters to work at much higher resolution, giving more accurate predictions of the potential impact of severe thunderstorms.

Other government agencies, universities and a small number of private enterprises will access the supercomputer in the future, either from a Linux-based terminal or through a browser via several web services.

Technicians from Fujitsu in Australia and Japan have been assembling the supercomputer, which was delivered to a dedicated data centre at the ANU “in three semi-trailer loads per weekend for four weekends,” said Professor Botten. “By the end of this month, it will essentially be fully built and start undergoing performance testing from October 10 to the end of October,” he said.

In November, Fujitsu will run acceptance testing to demonstrate the robustness of the machine using various benchmarking tools. The ANU will then load the CentOS operating system – derived from Red Hat Enterprise Linux – onto the system.

The machine consumes 1.5 megawatts of power, the equivalent of up to 500 electric ovens switched on around the clock, according to Professor Botten. “[We] are looking at [spending] $3 million to $4 million per year for electricity,” he said.

According to Botten, the machine is eight times larger than its predecessor at NCI and a “factor of 10” behind the world’s fastest machine – dubbed Sequoia – at the US Department of Energy’s Lawrence Livermore National Laboratory.

This $100 million, four-year supercomputing project is a partnership between the ANU, other universities, the CSIRO, the Bureau of Meteorology, Geoscience Australia and the Australian government.
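Those power and cost figures hold up on a back-of-the-envelope check. The short Python sketch below reproduces the oven comparison, the per-core performance implied by the quoted specifications and the annual electricity bill; the 3 kW oven draw and the A$0.25/kWh tariff are illustrative assumptions, not figures supplied by NCI or Fujitsu.

```python
# Sanity checks on the figures quoted in the article. The oven draw and
# the electricity tariff are assumptions for illustration only.

PEAK_FLOPS = 1.2e15        # 1.2 petaflops, as quoted
CORES = 57_000             # Intel Xeon E5 cores, as quoted
POWER_WATTS = 1.5e6        # 1.5 megawatts, as quoted
OVEN_WATTS = 3_000         # assumed: a typical electric oven draws roughly 2-3 kW
TARIFF_AUD_PER_KWH = 0.25  # assumed commercial tariff, hypothetical

# Per-core peak: ~21 gigaflops, consistent with a Sandy Bridge-era
# Xeon E5 doing 8 double-precision flops per cycle at ~2.6 GHz.
print(f"Peak per core:   {PEAK_FLOPS / CORES / 1e9:.1f} GFLOPS")

# Oven equivalence: 1.5 MW / 3 kW = 500 ovens, matching the quote.
print(f"Oven equivalent: {POWER_WATTS / OVEN_WATTS:.0f} ovens")

# A constant 1.5 MW draw over a full year, priced at the assumed tariff.
kwh_per_year = POWER_WATTS / 1_000 * 24 * 365
print(f"Energy per year: {kwh_per_year / 1e6:.2f} GWh")
print(f"Cost per year:   A${kwh_per_year * TARIFF_AUD_PER_KWH / 1e6:.1f} million")
```

At the assumed tariff, a constant 1.5-megawatt draw works out to roughly 13 GWh and about A$3.3 million a year, inside the $3 million to $4 million range Botten quoted.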
“There was $50 million [allocated] for the infrastructure: about $26 million for the machine, $23 million for the building and a couple of million dollars to do some upgrades,” he said.

The supercomputer will be “50 times” more powerful than the clustered machine launched yesterday by eResearch in South Australia, according to Professor Botten.

Follow Byron Connolly on Twitter: @ByronConnolly