To keep employees abreast of the latest tools, two data science–focused projects are under way as part of Lawrence Livermore’s Institutional Scientific Capability Portfolio.
This issue highlights some of CASC’s contributions to the DOE's Exascale Computing Project.
Release the codes! With a dynamic developer community and a long history of encouraging open-source software, LLNL has reached quadruple-digit GitHub offerings.
Discover how the software architecture and storage systems that will drive El Capitan’s performance will help LLNL and the NNSA Tri-Labs push the boundaries of computational science.
Unveiled at the International Supercomputing Conference in Germany, the June 2024 Top500 list includes three identically configured systems, each registering 19.65 petaflops on the High Performance Linpack benchmark and ranking among the world’s 50 fastest.
In a groundbreaking development for addressing future viral pandemics, a multi-institutional team involving LLNL researchers has successfully combined an AI-backed platform with supercomputing to redesign and restore the effectiveness of antibodies whose ability to fight viruses has been compromised.
Throughout the workshop, speakers, panelists, and attendees focused on algorithm development, the potential dangers of superhuman AI systems, and the importance of understanding and mitigating the risks to humans, as well as urgent measures needed to address those risks both scientifically and politically.
LLNL participates in the ISC High Performance Conference (ISC24) on May 12–16.
The El Capitan Center of Excellence provides a conduit between national labs and commercial vendors, ensuring that the exascale system will meet everyone’s needs.
The advent of accelerated processing units presents new challenges and opportunities for teams responsible for network interconnects and math libraries.
The Tools Working Group delivers debugging, correctness, and performance analysis solutions at an unprecedented scale.
Backed by Spack’s robust functionality, the Packaging Working Group manages the relationships between user software and system software.
Compilers translate human-programmable source code into machine-readable code. Building a compiler is especially challenging in the exascale era.
The high performance computing publication HPCwire has selected LLNL computer scientist Todd Gamblin as one of its “People to Watch” in HPC for 2024.
The system will enable researchers from the National Nuclear Security Administration weapons design laboratories to create models and run simulations previously considered challenging, time-intensive, or impossible for the maintenance and modernization of the United States’ nuclear weapons stockpile.
MuyGPs helps complete and forecast the brightness data of objects viewed by Earth-based telescopes.
Can novel mathematical algorithms help scientific simulations leverage hardware designed for machine learning? A team from LLNL’s Center for Applied Scientific Computing aimed to find out.
An LLNL-led team has developed a method for optimizing application performance on large-scale GPU systems, providing a useful tool for developers running on GPU-based massively parallel and distributed machines.
New research reveals subtleties in the performance of neural image compression methods, offering insights toward improving these models for real-world applications.
Johannes Doerfert, a computer scientist in the Center for Applied Scientific Computing, was one of three researchers awarded the honor at SC23 in Denver.
Leading HPC publication HPCwire presented Spack developers with the Editor’s Choice Award for Best HPC Programming Tool or Technology at SC23.
The debut of the NNSA Commodity Technology Systems-2 computing clusters Dane and Bengal on the Top500 List of the world’s most powerful supercomputers brings the total of LLNL-sited systems on the list to 11, the most of any supercomputing center in the world.
The MFEM virtual workshop highlighted the project’s development roadmap and users’ scientific applications. The event also included Q&A, student lightning talks, and a visualization contest.
Over several years, teams have prepared the infrastructure for El Capitan, designing and building the computing facility’s upgrades for power and cooling, installing storage and compute components and connecting everything together.