Next Platform TV Interviews Ian Karlin and Brian Van Essen
Dr. Ian Karlin (Lawrence Livermore National Laboratory) discusses the integration of AI hardware into HPC systems and workflows, followed by a talk with Dr. Brian Van Essen (LLNL) on the software integration of AI accelerators in HPC.
Reducing the Memory Footprint and Data Movement on Exascale Systems
ZFP is a compressed representation of multidimensional floating-point arrays that are ubiquitous in high-performance computing.
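As a rough sketch of how an application might use zfp's C API to compress a 2D double-precision array in fixed-accuracy mode (the array dimensions and error tolerance below are illustrative, not taken from the article):

```c
#include <stdio.h>
#include <stdlib.h>
#include "zfp.h"

#define NX 512          /* hypothetical array dimensions */
#define NY 512
#define TOLERANCE 1e-3  /* hypothetical absolute error tolerance */

int main(void)
{
  double* array = malloc(NX * NY * sizeof(double));
  /* ... fill array with simulation data ... */

  /* Describe the uncompressed array to zfp. */
  zfp_field* field = zfp_field_2d(array, zfp_type_double, NX, NY);

  /* Open a compressed stream and select fixed-accuracy mode. */
  zfp_stream* zfp = zfp_stream_open(NULL);
  zfp_stream_set_accuracy(zfp, TOLERANCE);

  /* Allocate a worst-case buffer and attach it to the stream. */
  size_t bufsize = zfp_stream_maximum_size(zfp, field);
  void* buffer = malloc(bufsize);
  bitstream* stream = stream_open(buffer, bufsize);
  zfp_stream_set_bit_stream(zfp, stream);
  zfp_stream_rewind(zfp);

  /* Compress; the return value is the compressed size in bytes (0 on failure). */
  size_t zfpsize = zfp_compress(zfp, field);
  if (!zfpsize)
    fprintf(stderr, "compression failed\n");
  else
    printf("compressed %zu bytes to %zu bytes\n", NX * NY * sizeof(double), zfpsize);

  zfp_field_free(field);
  zfp_stream_close(zfp);
  stream_close(stream);
  free(buffer);
  free(array);
  return 0;
}
```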
Lab breaks ground for exascale facility upgrades
To meet the needs of tomorrow’s supercomputers, NNSA's LLNL has broken ground on its Exascale Computing Facility Modernization (ECFM) project, which will substantially upgrade the mechanical and electrical capabilities of the Livermore Computing Center.
Cooling Magma is a Challenge that LLNL Can Take On
With 5.4 petaflops of peak performance packed into 760 compute nodes, Magma concentrates a great deal of computing capability in a small space and generates a considerable amount of heat.
Flexible Package Manager Automates Supercomputer Software Deployment
Spack is an open-source package manager that has become widely known in the high-performance computing (HPC) community for the value it adds to the software deployment process.
Upgrades for LLNL supercomputer from AMD, Penguin Computing aid COVID-19 research
Under a new agreement, AMD will supply upgraded graphics accelerators for Lawrence Livermore National Laboratory’s Corona supercomputing cluster, expected to nearly double the system’s peak compute power.
New Partnership to Unleash U.S. Supercomputing Resources in the Fight Against COVID-19
The White House announced the launch of the COVID-19 HPC Consortium to provide COVID-19 researchers worldwide with access to the world’s most powerful high-performance computing resources, which can significantly advance the pace of scientific discovery in the fight to stop the virus.
LLNL Highlights Magma’s Role in NNSA’s Computing Arsenal
This week, LLNL highlighted one of the latest additions to its computing arsenal: Magma. Magma is a Penguin Computing “Relion” system composed of 752 nodes with Intel Xeon Platinum 9242 (Cascade Lake-AP) processors. The cluster has 293 terabytes of memory, liquid cooling provided by CoolIT Systems, and an Intel Omni-Path interconnect.
LLNL and HPE to partner with AMD on El Capitan, Projected as World’s Fastest Supercomputer
Lawrence Livermore National Laboratory (LLNL), Hewlett Packard Enterprise (HPE) and Advanced Micro Devices Inc. (AMD) today announced the selection of AMD as the node supplier for El Capitan, projected to be the world’s most powerful supercomputer when it is fully deployed in 2023.
Lab Team Sizzles at DOE Grid Optimization Competition
LLNL bested more than two dozen teams to place first overall in Challenge 1 of the DOE Grid Optimization Competition, aimed at developing a more reliable, resilient, and secure U.S. electrical grid.
Podcast: Future Architectures, First-Rate Discretization Libraries
Led by LLNL's Tzanio Kolev, the Center for Efficient Exascale Discretizations (CEED) is a hub for high-order mathematical methods to increase application efficiency and performance.
Upgraded Data Archives Position Lab for Exascale Era
LLNL is now home to the world’s largest Spectra TFinity system, following a complete replacement of the tape library hardware that supports Livermore’s data archives.
Optimizing Math Libraries to Prepare for Exascale Computing
This episode of Let’s Talk Exascale takes a brief look at the xSDK4ECP and hypre projects within the Software Technology research focus area of the US Department of Energy’s Exascale Computing Project (ECP).
Podcast: SCR Scalable Checkpoint/Restart Paves the Way for Exascale
In this Let’s Talk Exascale podcast, researchers involved with the Scalable Checkpoint/Restart (SCR) Framework describe how it will enable users of high-performance computing systems to roll with the rapid pace of change.
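A minimal sketch of how an MPI application might use SCR's classic checkpoint API (SCR_Init, SCR_Need_checkpoint, SCR_Start_checkpoint, SCR_Route_file, SCR_Complete_checkpoint); the file name, loop structure, and checkpoint contents are illustrative assumptions, not details from the podcast:

```c
#include <mpi.h>
#include <stdio.h>
#include "scr.h"

/* Hypothetical per-rank checkpoint writer. */
static void write_checkpoint(int rank, int timestep)
{
  /* Ask SCR where to write this rank's file (node-local storage,
   * burst buffer, or parallel file system, depending on configuration). */
  char name[256], path[SCR_MAX_FILENAME];
  snprintf(name, sizeof(name), "ckpt_%d_rank_%d.dat", timestep, rank);

  SCR_Start_checkpoint();
  SCR_Route_file(name, path);

  int valid = 0;
  FILE* fp = fopen(path, "w");
  if (fp) {
    fprintf(fp, "timestep %d\n", timestep);
    fclose(fp);
    valid = 1;
  }

  /* Tell SCR whether this rank's file was written successfully. */
  SCR_Complete_checkpoint(valid);
}

int main(int argc, char** argv)
{
  MPI_Init(&argc, &argv);
  SCR_Init();

  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  for (int timestep = 0; timestep < 100; timestep++) {
    /* ... advance the simulation ... */

    /* Let SCR decide when a defensive checkpoint is worthwhile. */
    int need_checkpoint = 0;
    SCR_Need_checkpoint(&need_checkpoint);
    if (need_checkpoint)
      write_checkpoint(rank, timestep);
  }

  SCR_Finalize();
  MPI_Finalize();
  return 0;
}
```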
Software Enables Use of Distributed In-System Storage and Parallel File System
UnifyFS, a software product from the Exascale Computing Project (ECP), can provide I/O performance portability for applications, enabling them to use both distributed in-system storage and the parallel file system.
LLNL’s presence in HPC shines bright at SC19
The 2019 International Conference for High Performance Computing, Networking, Storage, and Analysis, better known simply as SC19, returned to Denver, and once again Lawrence Livermore National Laboratory (LLNL) made its presence known as a force in supercomputing.
LLNL-led team awarded Best Paper at SC19 for modeling cancer-causing protein interactions
A panel of judges at the International Conference for High Performance Computing, Networking, Storage and Analysis (SC19) on Thursday presented the conference’s Best Paper award to a multi-institutional team led by Lawrence Livermore National Laboratory computer scientists.
LLNL wins 2019 HPCwire Readers’ and Editors’ Choice Awards for El Capitan
Lawrence Livermore National Laboratory (LLNL), along with the Oak Ridge and Argonne national laboratories and Cray Inc., garnered HPCwire Readers’ and Editors’ Choice Awards for Top Supercomputing Achievement for 2019.