[Image: Peter Lindstrom HPC screenshot]

ZFP is a compressed representation of multidimensional floating-point arrays that are ubiquitous in high-performance computing. 

[Image: Computing facility construction]

To meet the needs of tomorrow's supercomputers, NNSA's Lawrence Livermore National Laboratory (LLNL) has broken ground on its Exascale Computing Facility Modernization (ECFM) project, which will substantially upgrade the mechanical and electrical capabilities of the Livermore Computing Center.

[Image: Magma supercomputer]

With 5.4 petaflops of peak performance packed into 760 compute nodes, Magma concentrates a great deal of computing capability in a small space and generates a considerable amount of heat.

[Image: Podcast screenshot]

Elaine Raybourn interviews Todd Gamblin about the Spack project's experience working remotely.

[Image: Todd Gamblin]

Spack is an open-source package manager that has become very well known in the high-performance computing (HPC) community because of the value it adds to the software deployment process.

Under a new agreement, AMD will supply upgraded graphics accelerators for Lawrence Livermore National Laboratory’s Corona supercomputing cluster, expected to nearly double the system’s peak compute power. 

[Image: Coronavirus model]

The White House announced the launch of the COVID-19 HPC Consortium, which provides COVID-19 researchers worldwide with access to the world's most powerful high-performance computing resources, significantly advancing the pace of scientific discovery in the fight to stop the virus.

[Image: Magma supercomputer]

This week, LLNL highlighted one of the latest additions to its computing arsenal: Magma, a Penguin Computing “Relion” system comprising 752 nodes with Intel Xeon Platinum 9242 (Cascade Lake-AP) processors.

Lawrence Livermore National Laboratory (LLNL), Hewlett Packard Enterprise (HPE) and Advanced Micro Devices Inc. (AMD) today announced the selection of AMD as the node supplier for El Capitan, projected to be the world’s most powerful supercomputer when it is fully deployed in 2023.

[Image: Cosmin Petra and Ignacio Aravena]

LLNL bested more than two dozen teams to place first overall in Challenge 1 of the DOE Grid Optimization Competition, aimed at developing a more reliable, resilient, and secure U.S. electrical grid.

[Image: Podcast title and audio image]

Led by LLNL's Tzanio Kolev, the Center for Efficient Exascale Discretizations (CEED) is a hub for high-order mathematical methods to increase application efficiency and performance.

[Image: TFinity Spectra tape library]

LLNL is now home to the world’s largest Spectra TFinity system, following a complete replacement of the tape library hardware that supports Livermore’s data archives.

[Image: Ulrike Meier Yang being interviewed]

This episode of Let’s Talk Exascale takes a brief look at the xSDK4ECP and hypre projects within the Software Technology research focus area of the US Department of Energy’s Exascale Computing Project (ECP).

[Image: Kathryn and Elsa at the R&D 100 Awards]

In this Let’s Talk Exascale podcast, researchers involved with the Scalable Checkpoint/Restart (SCR) Framework describe how it will enable users of high-performance computing systems to roll with the rapid pace of change.

[Image: Kathryn Mohror and fellow researcher in front of scientific poster]

A software product from the Exascale Computing Project (ECP) called UnifyFS can provide I/O performance portability for applications, enabling them to use distributed in-system storage and the parallel file system.

[Image: Spack session]

At SC19, the annual supercomputing conference held in Denver, Colorado, there were Spack events every day.

[Image: DOE booth]

The 2019 International Conference for High Performance Computing, Networking, Storage, and Analysis, better known simply as SC19, returned to Denver, and once again Lawrence Livermore National Laboratory (LLNL) made its presence known as a force in supercomputing.

[Image: Awards ceremony for Best Paper]

A panel of judges at the International Conference for High Performance Computing, Networking, Storage and Analysis (SC19) on Thursday presented the conference’s Best Paper award to a multi-institutional team led by Lawrence Livermore National Laboratory computer scientists.

[Image: Bronis accepts HPCwire award]

Lawrence Livermore National Laboratory (LLNL), along with the Oak Ridge and Argonne national laboratories and Cray Inc., garnered HPCwire Readers’ and Editors’ Choice Awards for Top Supercomputing Achievement for 2019.

[Image: Corona supercomputer]

Lawrence Livermore National Laboratory (LLNL) is collaborating with Penguin Computing Inc. and graphics card manufacturer AMD to upgrade its unclassified computing cluster Corona, roughly doubling the number of graphics processing units (GPUs) in the system.

[Image: TOP500 announcement by Erich Strohmaier]

The latest TOP500 List of the world’s most powerful computers was released today at the 2019 International Conference for High Performance Computing, Networking, Storage and Analysis (SC19) in Denver.

[Image: Penguin Computing logo]

Lawrence Livermore National Laboratory (LLNL) is welcoming the newest addition to its already powerful supercomputing lineup, a commodity cluster system built by Penguin Computing Inc. that will perform vital calculations for the National Nuclear Security Administration (NNSA).

Computational Scientist Ramesh Pankajakshan came to LLNL in 2016 directly from the University of Tennessee at Chattanooga. But unlike most recent hires from universities—students, grad students, or postdocs—Ramesh made a mid-career switch from research professor to professional researcher.

Spack is an open-source package manager for HPC. Its simple, templated Python DSL allows the same package to be built in many configurations, with different compilers, flags, dependencies, and dependency versions.
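As an illustrative sketch of that DSL (the package name, URLs, and checksum below are hypothetical, not a real Spack package), a recipe might look like:

```python
# Hypothetical Spack package recipe -- an illustrative sketch only.
# Spack evaluates files like this with its packaging directives in scope.
from spack.package import *


class Mylib(AutotoolsPackage):
    """Example library showing Spack's templated Python DSL."""

    homepage = "https://example.com/mylib"        # hypothetical URL
    url = "https://example.com/mylib-1.0.tar.gz"  # hypothetical URL

    version("1.0", sha256="0" * 64)               # placeholder checksum

    # A variant lets one recipe describe many build configurations.
    variant("shared", default=True, description="Build shared libraries")

    # Spack resolves this against whatever zlib version/compiler the user requests.
    depends_on("zlib")

    def configure_args(self):
        # Translate the variant into configure flags at build time.
        if self.spec.satisfies("+shared"):
            return ["--enable-shared"]
        return ["--disable-shared"]
```

A user could then build two configurations from the same recipe, e.g. `spack install mylib +shared %gcc` and `spack install mylib ~shared %clang`, and Spack would install each into its own prefix.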