LLNL has signed a memorandum of understanding with HPC facilities in Germany, the United Kingdom, and the U.S., jointly forming the International Association of Supercomputing Centers.
LLNL's Greg Becker spoke with HPC Tech Shorts to explain how Spack's binary cache works. The video “Get your HPC codes installed and running in minutes using Spack’s Binary Cache” runs 15:11.
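The workflow the video covers can be sketched in a few commands. This is a rough illustration only — the mirror name and the public build-cache URL for the v0.18 release are our assumptions, not details taken from the video:

```shell
# Sketch: install prebuilt binaries from Spack's public build cache
# (assumes Spack >= 0.18 is installed and on PATH; mirror URL is an assumption).

# 1. Point Spack at the public binary cache for the v0.18 release.
spack mirror add v0.18-cache https://binaries.spack.io/releases/v0.18

# 2. Install and trust the GPG keys the binaries were signed with.
spack buildcache keys --install --trust

# 3. Install from prebuilt binaries rather than compiling from source.
spack install --cache-only zlib
```

Because the binaries are prebuilt and signed, this skips compilation entirely, which is what makes "installed and running in minutes" possible.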
Learn how to use LLNL software in the cloud. In August, we will host tutorials in collaboration with AWS on how to install and use LLNL open-source projects on AWS EC2 instances. No previous experience is necessary.
Computer scientist Kathryn Mohror is among LLNL's recipients of the Department of Energy’s Early Career Research Program awards.
An LLNL team will be among the first researchers to perform work on the world’s first exascale supercomputer—Oak Ridge National Laboratory’s Frontier—when they use the system to model cancer-causing protein mutations.
Livermore’s machine learning experts aim to provide performance assurances and enable trust in machine learning technology through innovative validation and verification techniques.
In a presentation delivered to the 79th HPC User Forum at Oak Ridge National Laboratory, LLNL's Terri Quinn revealed that AMD’s forthcoming MI300 APU would be the computational bedrock of El Capitan, which is slated for installation at LLNL in late 2023.
This year marks the 30th anniversary of the High Performance Storage System (HPSS) collaboration, comprising five DOE HPC national laboratories (LLNL, Lawrence Berkeley, Los Alamos, Oak Ridge, and Sandia) along with industry partner IBM.
After 30 years, the High Performance Storage System (HPSS) collaboration continues to lead and adapt to the needs of the time while honoring its primary mission: long-term stewardship of the crown jewels of data for government, academic, and commercial organizations around the world.
An update on early and mid-career recognition award recipients, including Livermore Computing's own Todd Gamblin.
Three testbed machines for Lawrence Livermore National Laboratory’s future exascale El Capitan supercomputer — nicknamed rzVernal, Tioga, and Tenaya — all ranked among the top 200 on the latest TOP500 list of the world’s most powerful computers.
LLNL and Amazon Web Services (AWS) have signed a memorandum of understanding to define the role of leadership-class HPC in a future where cloud HPC is ubiquitous.
A research team won the best paper award at PacificVis 2022 for a resolution-precision-adaptive representation technique that reduces mesh sizes and, with them, the memory and storage footprints of large scientific datasets.
LLNL participates in the International Parallel and Distributed Processing Symposium (IPDPS), May 30–June 3.
Join LLNL at the ISC High Performance Conference, May 29–June 2. The event brings together the HPC community to share the latest technology of interest to HPC developers and users.
The U.S. Department of Energy’s (DOE) National Nuclear Security Administration (NNSA) today announced the award of an $18 million contract to Cornelis Networks for collaborative research and development in next-generation networking for supercomputing systems at the NNSA laboratories.
The Exascale Computing Project (ECP) 2022 Community Birds-of-a-Feather Days will take place May 10–12 via Zoom. The event provides an opportunity for the HPC community to engage with ECP teams and discuss the teams’ latest development efforts.
Analyzing one of the largest databases of patients with cancer and COVID-19 using machine learning models, researchers from LLNL and UC San Francisco found previously unreported links between a rare type of cancer and COVID-19 outcomes.
The Livermore Computing–developed Flux project addresses the challenges posed by complex supercomputing workflows in scientific research, and the team has played a major role in the ECP ExaWorks project.
An LLNL team has developed a comprehensive dynamic model of COVID-19 disease progression in hospitalized patients.
The Oppenheimer Science and Energy Leadership Program has selected materials scientist T. Yong Han and computer scientist Kathryn Mohror as 2022 fellows.
In the Multiscale Machine-Learned Modeling Infrastructure (MuMMI), the macroscale simulation runs a large system, with hundreds of proteins, at low resolution and machine learning decides which regions of the macro-model require investigation in a microscale simulation at much higher resolution.
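The selection loop MuMMI describes can be sketched roughly as follows. Everything here — the scoring function, the simulation stub, and the variance-based "novelty" heuristic — is an illustrative stand-in for the idea of ML-guided region selection, not the actual MuMMI code:

```python
import random

def macro_step(n_regions):
    """Stand-in for the low-resolution macroscale simulation:
    returns a coarse feature vector for each region of the system."""
    return [[random.random() for _ in range(4)] for _ in range(n_regions)]

def ml_novelty_score(features):
    """Stand-in for the ML model that ranks regions by how much a
    high-resolution look would add (here: simple feature variance)."""
    mean = sum(features) / len(features)
    return sum((f - mean) ** 2 for f in features) / len(features)

def select_for_microscale(regions, budget):
    """Pick the `budget` highest-scoring regions for microscale runs."""
    ranked = sorted(enumerate(regions),
                    key=lambda kv: ml_novelty_score(kv[1]),
                    reverse=True)
    return [idx for idx, _ in ranked[:budget]]

regions = macro_step(n_regions=100)          # coarse pass over the whole system
picks = select_for_microscale(regions, budget=5)
print(f"Launching {len(picks)} high-resolution simulations: {picks}")
```

The design point is the budget: microscale simulations are expensive, so the ML model acts as a filter that spends that budget only where the coarse model looks most interesting.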
Lawrence Livermore National Laboratory’s AI Innovation Incubator (AI3) will serve as the foundation for a cohesive view of AI for Applied Science, built upon LLNL’s “cognitive simulation” approach that combines state-of-the-art AI technologies with leading-edge high performance computing.
LLNL’s formidable presence at the annual Supercomputing Conference (SC21) included leadership of the Student Cluster Competition (SCC), which was held in a hybrid format. Computer scientist Kathleen Shoga served as this year’s SCC chair.