
The Tools Working Group delivers debugging, correctness, and performance analysis solutions at an unprecedented scale.

Backed by Spack’s robust functionality, the Packaging Working Group manages the relationships between user software and system software.

Compilers translate human-readable source code into machine-executable code. Building a compiler is especially challenging in the exascale era.

The high performance computing publication HPCwire has selected LLNL computer scientist Todd Gamblin as one of its “People to Watch” in HPC for 2024.

The system will enable researchers from the National Nuclear Security Administration's weapons design laboratories to create models and run simulations previously considered challenging, time-intensive, or impossible, in support of the maintenance and modernization of the United States' nuclear weapons stockpile.

MuyGPs helps complete and forecast the brightness data of objects viewed by Earth-based telescopes.

Can novel mathematical algorithms help scientific simulations leverage hardware designed for machine learning? A team from LLNL’s Center for Applied Scientific Computing aimed to find out.

An LLNL-led team has developed a method for optimizing application performance on large-scale GPU systems, providing a useful tool for developers running on GPU-based massively parallel and distributed machines.

New research reveals subtleties in the performance of neural image compression methods, offering insights toward improving these models for real-world applications.

Johannes Doerfert, a computer scientist in the Center for Applied Scientific Computing, was one of three researchers awarded the honor at SC23 in Denver.

Leading HPC publication HPCwire presented Spack developers with the Editor's Choice Award for Best HPC Programming Tool or Technology at SC23.

The MFEM virtual workshop highlighted the project’s development roadmap and users’ scientific applications. The event also included Q&A, student lightning talks, and a visualization contest.

The debut of the NNSA Commodity Technology Systems-2 computing clusters Dane and Bengal on the Top500 List of the world’s most powerful supercomputers brings the total of LLNL-sited systems on the list to 11, the most of any supercomputing center in the world.

Over several years, teams have prepared the infrastructure for El Capitan, designing and building the computing facility’s upgrades for power and cooling, installing storage and compute components, and connecting everything together.

LLNL is participating in the 35th annual Supercomputing Conference (SC23), which will be held both virtually and in Denver on November 12–17, 2023.

Alpine/ZFP addresses analysis, visualization, and data reduction needs for exascale science applications
The Data and Visualization efforts in the DOE’s Exascale Computing Project provide an ecosystem of capabilities for data management, analysis, lossy compression, and visualization.

Hosted at LLNL, the Center for Efficient Exascale Discretizations’ annual event featured breakout discussions, more than two dozen speakers, and an evening of bocce ball.

The Center for Efficient Exascale Discretizations has developed innovative mathematical algorithms for the DOE’s next generation of supercomputers.

With this year’s results, the Lab has now collected a total of 179 R&D 100 awards since 1978. The awards will be showcased at the 61st R&D 100 black-tie awards gala on Nov. 16 in San Diego.

A team from LLNL and seven other DOE labs is a finalist for the new ACM Gordon Bell Prize for Climate Modeling for running an unprecedented high-resolution global atmosphere model on the world’s first exascale supercomputer.

LLNL's Ian Lee joins a Dots and Bridges panel to discuss HPC as a critical resource for data assimilation and numerical weather prediction research.

LLNL's zfp and Variorum software projects are R&D 100 Award winners, and LLNL is a co-developing organization on the winning CANDLE project.

Livermore Computing is making significant progress toward siting the NNSA’s first exascale supercomputer.

Innovative hardware provides near-node local storage alongside large-capacity storage.

Siting a supercomputer requires close coordination of hardware, software, applications, and Livermore Computing facilities.