Podcast: Spack, a Package Manager for Supercomputers
An interview with Todd Gamblin of Lawrence Livermore National Laboratory about his current research and his role in the Spack project.
Using models, 3D printing to study common heart defect
Lawrence Livermore researchers and collaborators have combined machine learning, 3D printing and high performance computing simulations to accurately model blood flow in the aorta.
Simulations, videos help researchers see crack formation in 3D-printed tungsten
By combining simulations with high-speed videos taken during the laser powder-bed fusion process, LLNL scientists were able to visualize the ductile-to-brittle transition in 3D-printed tungsten in real time.
Lassen Plus Cerebras chip to advance machine learning, AI research
LLNL and Cerebras Systems have installed the company’s CS-1 artificial intelligence (AI) computer into Lassen, making LLNL the first institution to integrate the cutting-edge AI platform with a large-scale supercomputer.
Laboratory team completes highest-ever resolution quake simulations using Sierra supercomputer
Researchers simulated the strength of shaking from a magnitude 7.0 Hayward Fault earthquake, modeling peak ground velocity and seismograms at selected locations.
Doctoral fellow improves codes used to simulate interactions between particles moving at high velocities
LLNL physicist Alison Saunders and doctoral fellow Marco Echeverria meet online to discuss their progress enhancing codes used during high-energy experiments.
Next Platform TV Interviews Ian Karlin and Brian Van Essen
Dr. Ian Karlin (LLNL) discusses integrating AI hardware into HPC systems and workflows, followed by Dr. Brian Van Essen (LLNL) on the software integration of AI accelerators in HPC.
Reducing the Memory Footprint and Data Movement on Exascale Systems
ZFP provides a compressed representation of the multidimensional floating-point arrays that are ubiquitous in high-performance computing.
Lab breaks ground for exascale facility upgrades
To meet the needs of tomorrow’s supercomputers, NNSA's LLNL has broken ground on its Exascale Computing Facility Modernization (ECFM) project, which will substantially upgrade the mechanical and electrical capabilities of the Livermore Computing Center.
Cooling Magma is a Challenge that LLNL Can Take On
Magma packs 5.4 petaflops of peak performance into 760 compute nodes, a lot of computing capability in a small space that generates a considerable amount of heat.
Flexible Package Manager Automates Supercomputer Software Deployment
Spack is an open-source package manager that has become well known in the high-performance computing (HPC) community for the value it adds to the software deployment process.
Upgrades for LLNL supercomputer from AMD, Penguin Computing aid COVID-19 research
Under a new agreement, AMD will supply upgraded graphics accelerators for Lawrence Livermore National Laboratory’s Corona supercomputing cluster, expected to nearly double the system’s peak compute power.
New Partnership to Unleash U.S. Supercomputing Resources in the Fight Against COVID-19
The White House announced the launch of the COVID-19 HPC Consortium, which provides COVID-19 researchers worldwide with access to the world’s most powerful high performance computing resources to significantly advance the pace of scientific discovery in the fight to stop the virus.
LLNL Highlights Magma’s Role in NNSA’s Computing Arsenal
This week, LLNL highlighted one of the latest additions to its computing arsenal: Magma. Magma is a Penguin Computing “Relion” system composed of 752 nodes with Intel Xeon Platinum 9242 (Cascade Lake-AP) processors. The cluster has 293 terabytes of memory, liquid cooling provided by CoolIT Systems, and an Intel Omni-Path interconnect.
LLNL and HPE to partner with AMD on El Capitan, Projected as World’s Fastest Supercomputer
Lawrence Livermore National Laboratory (LLNL), Hewlett Packard Enterprise (HPE) and Advanced Micro Devices Inc. (AMD) today announced the selection of AMD as the node supplier for El Capitan, projected to be the world’s most powerful supercomputer when it is fully deployed in 2023.
Lab Team Sizzles at DOE Grid Optimization Competition
LLNL bested more than two dozen teams to place first overall in Challenge 1 of the DOE Grid Optimization Competition, aimed at developing a more reliable, resilient, and secure U.S. electrical grid.
Podcast: Future Architectures, First-Rate Discretization Libraries
Led by LLNL's Tzanio Kolev, the Center for Efficient Exascale Discretizations (CEED) is a hub for high-order mathematical methods to increase application efficiency and performance.
Upgraded Data Archives Position Lab for Exascale Era
LLNL is now home to the world’s largest Spectra TFinity system, following a complete replacement of the tape library hardware that supports Livermore’s data archives.
Optimizing Math Libraries to Prepare for Exascale Computing
This episode of Let’s Talk Exascale takes a brief look at the xSDK4ECP and hypre projects within the Software Technology research focus area of the US Department of Energy’s Exascale Computing Project (ECP).