A multi-institutional team involving LLNL researchers has successfully combined an AI-backed platform with supercomputing to redesign and restore the effectiveness of antibodies whose ability to fight viruses has been compromised by viral evolution.
Throughout the workshop, speakers, panelists, and attendees focused on algorithm development, the potential dangers of superhuman AI systems, and the importance of understanding and mitigating the risks to humans, as well as the urgent measures needed to address those risks both scientifically and politically.
LLNL participates in the ISC High Performance Conference (ISC24) on May 12–16, 2024.
Researchers at Lawrence Livermore National Laboratory (LLNL) have achieved a milestone in accelerating and adding features to complex multi-physics simulations run on Graphics Processing Units (GPUs).
The El Capitan Center of Excellence provides a conduit between national labs and commercial vendors, ensuring that the exascale system will meet everyone’s needs.
The advent of accelerated processing units presents new challenges and opportunities for teams responsible for network interconnects and math libraries.
The Tools Working Group delivers debugging, correctness, and performance analysis solutions at an unprecedented scale.
Backed by Spack’s robust functionality, the Packaging Working Group manages the relationships between user software and system software.
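For illustration, the sketch below shows the general shape of a Spack package recipe: a hypothetical package.py that declares how a piece of user software relates to the packages it builds against, while system-provided software can be registered separately (for example with `spack external find`) so Spack reuses it instead of rebuilding it. The package name, URL, checksum, and variant are placeholders, not a real recipe.

```python
# Minimal, hypothetical Spack package recipe (package.py). It declares the
# dependencies and build options Spack uses to relate this user library to
# the rest of the software stack.
from spack.package import *


class Mysolver(CMakePackage):
    """Hypothetical user library that builds against MPI and zlib."""

    homepage = "https://example.com/mysolver"
    url = "https://example.com/mysolver-1.0.0.tar.gz"

    version("1.0.0", sha256="<placeholder>")  # not a real release or checksum

    variant("openmp", default=True, description="Enable OpenMP threading")

    depends_on("mpi")   # satisfied by whichever MPI provider the site prefers
    depends_on("zlib")  # compression dependency

    def cmake_args(self):
        # Map the Spack variant onto the project's CMake option.
        return [self.define_from_variant("ENABLE_OPENMP", "openmp")]
```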
Compilers translate human-programmable source code into machine-readable code. Building a compiler is especially challenging in the exascale era.
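As a toy analogy (not the exascale toolchains the article discusses), Python's own bytecode compiler makes the source-to-machine-instructions translation visible; the kernel below is an illustrative example only.

```python
# Disassemble a small numerical kernel to show the lower-level instructions
# produced from human-readable source. Production HPC compilers perform the
# same kind of translation, with far more analysis and optimization.
import dis


def saxpy(a, x, y):
    """Elementwise a*x + y, a simple numerical kernel."""
    return [a * xi + yi for xi, yi in zip(x, y)]


# Print the bytecode instructions the Python compiler generated for saxpy.
dis.dis(saxpy)
```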
The high performance computing publication HPCwire has selected LLNL computer scientist Todd Gamblin as one of its “People to Watch” in HPC for 2024.
The system will enable researchers from the National Nuclear Security Administration weapons design laboratories to create models and run simulations previously considered challenging, time-intensive, or impossible for the maintenance and modernization of the United States’ nuclear weapons stockpile.
MuyGPs helps complete and forecast the brightness data of objects viewed by Earth-based telescopes.
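As a rough illustration of the underlying idea, not the MuyGPs API itself, a plain NumPy Gaussian-process sketch can fill gaps in a noisy light curve; the kernel, noise level, and data below are assumptions chosen for demonstration only.

```python
# Generic Gaussian-process regression sketch (plain NumPy): infer missing
# brightness measurements in a telescope light curve from observed samples.
import numpy as np


def rbf_kernel(t1, t2, length_scale=2.0, variance=1.0):
    """Squared-exponential covariance between two sets of observation times."""
    d = t1[:, None] - t2[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)


rng = np.random.default_rng(0)
t_obs = np.sort(rng.uniform(0.0, 20.0, size=30))               # observed epochs
y_obs = np.sin(0.7 * t_obs) + 0.05 * rng.standard_normal(30)   # noisy brightness

t_new = np.linspace(0.0, 20.0, 200)  # epochs to fill in or forecast

K = rbf_kernel(t_obs, t_obs) + 1e-4 * np.eye(len(t_obs))  # observed covariance + noise
K_star = rbf_kernel(t_new, t_obs)                         # cross-covariance

alpha = np.linalg.solve(K, y_obs)
mean = K_star @ alpha  # posterior mean light curve at the new epochs

# Posterior variance of the latent brightness (ignores observation noise).
var = rbf_kernel(t_new, t_new).diagonal() - np.einsum(
    "ij,ji->i", K_star, np.linalg.solve(K, K_star.T)
)

print("first few predictions:", mean[:5])
print("first few variances:  ", var[:5])
```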
Can novel mathematical algorithms help scientific simulations leverage hardware designed for machine learning? A team from LLNL’s Center for Applied Scientific Computing aimed to find out.
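One flavor of that idea is mixed-precision iterative refinement: do the heavy solve in the reduced precision that ML-oriented hardware favors, then recover full accuracy with cheap double-precision corrections. The sketch below (plain NumPy, with float32 standing in for tensor-core precisions and an illustrative test matrix) is a minimal example of the technique, not the team's actual method.

```python
# Mixed-precision iterative refinement: solve in low precision, correct the
# residual in double precision until the answer reaches double-precision
# accuracy. A real implementation would reuse a low-precision factorization
# rather than re-solving each iteration.
import numpy as np

rng = np.random.default_rng(1)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)

A32 = A.astype(np.float32)  # "fast, low-precision" copy of the operator
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)

for _ in range(10):
    r = b - A @ x                                 # residual in float64
    if np.linalg.norm(r) / np.linalg.norm(b) < 1e-12:
        break
    dx = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    x += dx                                       # correct the low-precision solve

print("refined relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```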
An LLNL-led team has developed a method for optimizing application performance on large-scale GPU systems, providing a useful tool for developers running on GPU-based massively parallel and distributed machines.
New research reveals subtleties in the performance of neural image compression methods, offering insights toward improving these models for real-world applications.
Johannes Doerfert, a computer scientist in the Center for Applied Scientific Computing, was one of three researchers awarded the honor at SC23 in Denver.
Leading HPC publication HPCwire presented Spack developers with the Editor's Choice Award for Best HPC Programming Tool or Technology at SC23.
The MFEM virtual workshop highlighted the project’s development roadmap and users’ scientific applications. The event also included Q&A, student lightning talks, and a visualization contest.
The debut of the NNSA Commodity Technology Systems-2 computing clusters Dane and Bengal on the Top500 List of the world’s most powerful supercomputers brings the total of LLNL-sited systems on the list to 11, the most of any supercomputing center in the world.
Over several years, teams have prepared the infrastructure for El Capitan, designing and building the computing facility’s upgrades for power and cooling, installing storage and compute components and connecting everything together.
LLNL is participating in the 35th annual Supercomputing Conference (SC23), which will be held both virtually and in Denver on November 12–17, 2023.
Alpine/ZFP addresses the analysis, visualization, and data reduction needs of exascale science applications.
The Data and Visualization efforts in the DOE’s Exascale Computing Project provide an ecosystem of capabilities for data management, analysis, lossy compression, and visualization.
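As a small example of the lossy-compression piece, ZFP's Python bindings (the zfpy module, assuming it is installed) can bound the absolute error while shrinking an array; the mock field and tolerance below are illustrative, and real workflows apply the same idea to large simulation outputs to cut I/O and storage costs.

```python
# Fixed-accuracy lossy compression of a mock 3D simulation field with zfpy.
import numpy as np
import zfpy

field = np.random.default_rng(2).standard_normal((64, 64, 64))

compressed = zfpy.compress_numpy(field, tolerance=1e-4)  # bound absolute error
restored = zfpy.decompress_numpy(compressed)

print("compression ratio: ", field.nbytes / len(compressed))
print("max absolute error:", np.max(np.abs(field - restored)))
```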
Hosted at LLNL, the Center for Efficient Exascale Discretizations’ annual event featured breakout discussions, more than two dozen speakers, and an evening of bocce ball.
The Center for Efficient Exascale Discretizations has developed innovative mathematical algorithms for the DOE’s next generation of supercomputers.
With this year’s results, the Lab has now collected a total of 179 R&D 100 awards since 1978. The awards will be showcased at the 61st R&D 100 black-tie awards gala on Nov. 16 in San Diego.
