
LLNL's zfp and Variorum software projects are winners. LLNL is a co-developing organization on the winning CANDLE project.

People across the Lab are pulling in the same direction for what will be one of the best computing systems in the world. — Bronis de Supinski | LC chief technology officer

Integrating Rabbits into the early access systems, and ultimately into El Capitan, is a huge co-design effort. — Brian Behlendorf | I/O lead

The functionality that El Capitan is going to unlock for our users and for the programs is the most exciting aspect. — Adam Bertsch | Integration Project lead

We are providing more capable resource management through hierarchical, multi-level management and scheduling schemes. — Becky Springmeyer | LC division leader

It's becoming more and more obvious to everyone that we really can do this. — Jim Foraker | Systems Software and Security Group leader

A Laboratory-developed software package management tool, enhanced by contributions from more than 1,000 users, supports the high performance computing community.
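If the package manager referenced here is Spack, LLNL's open-source package management tool, community contributions typically arrive as small Python recipes. The sketch below shows the general shape of such a recipe; the package name, URL, checksum, and variant are placeholders rather than a real Spack package.

```python
# Sketch of a Spack package recipe (hypothetical package; name, URL, and checksum are placeholders).
from spack.package import *


class Mylib(CMakePackage):
    """Example numerical library packaged for Spack."""

    homepage = "https://example.com/mylib"
    url = "https://example.com/mylib/archive/v1.0.0.tar.gz"

    # Placeholder checksum; a real recipe records the tarball's actual sha256.
    version("1.0.0", sha256="0000000000000000000000000000000000000000000000000000000000000000")

    variant("mpi", default=True, description="Build with MPI support")

    depends_on("mpi", when="+mpi")

    def cmake_args(self):
        # Map the Spack variant onto the project's CMake option.
        return [self.define_from_variant("ENABLE_MPI", "mpi")]
```

Once a recipe like this lives in a package repository, `spack install mylib +mpi` builds the package together with its dependency tree.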

LLNL researchers ran HiOp, an open-source optimization solver, on 9,000 nodes of Oak Ridge National Laboratory’s Frontier exascale supercomputer in the largest simulation of its kind to date.

We have these digital tools to help us decide what to make, but we still have to figure out how to make it. — Anna Hiszpanski | Materials scientist

Adding new color maps for color-vision-deficient users provided us with the opportunity to address our color map usability. — Eric Brugger | Project leader

Learn how to use LLNL software in the cloud. Throughout August, join our tutorials on how to install and use several projects on AWS EC2 instances. No previous experience necessary.

When you think about software, it’s almost as important as oxygen to the functioning of the Lab. — John Grosh | Deputy associate director for mission development

A research team from Oak Ridge and Lawrence Livermore national labs won the first IPDPS Best Open-Source Contribution Award for the paper “UnifyFS: A User-level Shared File System for Unified Access to Distributed Local Storage.”

The report lays out a comprehensive vision for the DOE Office of Science and NNSA to expand their work in the scientific use of AI by building on existing strengths in world-leading high performance computing systems and data infrastructure.

LLNL CTO Bronis de Supinski talks about how the Lab deploys novel architecture AI machines and provides an update on El Capitan.

Suppose we provision each node with a much smaller amount of memory, and at times when they need more, they can use the memory of a memory server. — Maya Gokhale | Co-author

Lori Diachin will take over as director of the DOE’s Exascale Computing Project on June 1, guiding the successful, multi-institutional high performance computing effort through its final stages.

Livermore CTO Bronis de Supinski joins the Let's Talk Exascale podcast to discuss the details of LLNL's upcoming exascale supercomputer.

Unique among data compressors, zfp is designed to be a compact number format for storing data arrays in memory in compressed form while still supporting high-speed random access.
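As an illustration, here is a minimal sketch using zfp's Python bindings (zfpy) to compress a NumPy array under an absolute error tolerance. The random-access compressed-array classes described above belong to zfp's C++ interface; zfpy, shown here, covers whole-array compression and decompression, and the array shape and tolerance are arbitrary example values.

```python
# Sketch: lossy, accuracy-controlled compression with zfp's Python bindings (zfpy).
import numpy as np
import zfpy

data = np.random.rand(64, 64, 64)  # sample 3D array of doubles

# Compress with an absolute error tolerance (fixed-accuracy mode).
compressed = zfpy.compress_numpy(data, tolerance=1e-4)

# Decompress back into a NumPy array; values match the original to within the tolerance.
restored = zfpy.decompress_numpy(compressed)

print("compression ratio:", data.nbytes / len(compressed))
print("max error:", np.max(np.abs(restored - data)))
```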

Variorum provides robust, portable interfaces that allow us to measure and optimize computation at the physical level: temperature, cycles, energy, and power. With that foundation, we can get the best possible use of our world-class computing resources.

The Compiler-induced Inconsistency Expression Locator tool was recognized at ISC23.

The Lab was already using Elastic components to gather data from its HPC clusters and then investigated whether Elasticsearch and Kibana could be applied to scanning and logging activities across the board.
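For a sense of what that looks like in practice, the sketch below indexes and queries a cluster log record with the official Elasticsearch Python client; the endpoint, credentials, index name, and field names are hypothetical.

```python
# Sketch: indexing and querying HPC log records with the elasticsearch Python client.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

# Hypothetical cluster endpoint and credentials.
es = Elasticsearch("https://logs.example.gov:9200", api_key="REDACTED")

# Index one syslog-style record from a compute node (index and field names are illustrative).
doc = {
    "@timestamp": datetime.now(timezone.utc).isoformat(),
    "cluster": "hpc-cluster-01",
    "node": "node0123",
    "message": "slurmd: daemon started",
}
es.index(index="hpc-syslog", document=doc)

# Query recent records from that node for display in Kibana or ad hoc analysis.
resp = es.search(index="hpc-syslog", query={"term": {"node": "node0123"}}, size=10)
for hit in resp["hits"]["hits"]:
    print(hit["_source"]["@timestamp"], hit["_source"]["message"])
```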

The addition of the spatial data flow accelerator into LLNL’s Livermore Computing Center is part of an effort to upgrade the Lab’s cognitive simulation (CogSim) program.

Computer scientist Vanessa Sochat talks to BSSw about a recent effort to survey software developer needs at LLNL.