LLNL is participating in the 35th annual Supercomputing Conference (SC23), which will be held both virtually and in Denver on November 12–17, 2023.
Alpine/ZFP addresses analysis, visualization, and data reduction needs for exascale science applications
The Data and Visualization efforts in the DOE’s Exascale Computing Project provide an ecosystem of capabilities for data management, analysis, lossy compression, and visualization.
Hosted at LLNL, the Center for Efficient Exascale Discretizations’ annual event featured breakout discussions, more than two dozen speakers, and an evening of bocce ball.
The Center for Efficient Exascale Discretizations has developed innovative mathematical algorithms for the DOE’s next generation of supercomputers.
With this year’s results, the Lab has now collected a total of 179 R&D 100 awards since 1978. The awards will be showcased at the 61st R&D 100 black-tie awards gala on Nov. 16 in San Diego.
A team from LLNL and seven other DOE labs is a finalist for the new ACM Gordon Bell Prize for Climate Modeling for running an unprecedented high-resolution global atmosphere model on the world’s first exascale supercomputer.
LLNL's Ian Lee joins a Dots and Bridges panel to discuss HPC as a critical resource for data assimilation and numerical weather prediction research.
LLNL's zfp and Variorum software projects are R&D 100 Award winners. LLNL is a co-developing organization on the winning CANDLE project.
The Tri-Lab Operating System Stack (TOSS) ensures other national labs’ supercomputing needs are met.
Livermore Computing is making significant progress toward siting the NNSA’s first exascale supercomputer.
Innovative hardware provides near-node local storage alongside large-capacity storage.
Siting a supercomputer requires close coordination of hardware, software, applications, and Livermore Computing facilities.
Flux, next-generation resource and job management software, steps up to support emerging use cases.
A Laboratory-developed software package management tool, enhanced by contributions from more than 1,000 users, supports the high performance computing community.
LLNL researchers ran HiOp, an open-source optimization solver, on 9,000 nodes of Oak Ridge National Laboratory’s Frontier exascale supercomputer in the largest simulation of its kind to date.
Using explainable artificial intelligence techniques can help increase the reach of machine learning applications in materials science, making the process of designing new materials much more efficient.
The Lab’s workhorse visualization tool provides expanded color map features, including options for visually impaired users.
Learn how to use LLNL software in the cloud. Throughout August, join our tutorials on how to install and use several projects on AWS EC2 instances. No previous experience necessary.
2023’s Developer Day was a two-day event for the first time, balancing an all-virtual technical program with a fully in-person networking day.
A research team from Oak Ridge and Lawrence Livermore national labs won the first IPDPS Best Open-Source Contribution Award for the paper “UnifyFS: A User-level Shared File System for Unified Access to Distributed Local Storage.”
The report lays out a comprehensive vision for the DOE Office of Science and NNSA to expand their work in scientific use of AI by building on existing strengths in world-leading high performance computing systems and data infrastructure.
LLNL CTO Bronis de Supinski talks about how the Lab deploys novel-architecture AI machines and provides an update on El Capitan.
Splitting memory resources in high performance computing between local nodes and a larger shared remote pool can help better support diverse applications.
Lori Diachin will take over as director of the DOE’s Exascale Computing Project on June 1, guiding the successful, multi-institutional high performance computing effort through its final stages.
Unique among data compressors, zfp is designed to be a compact number format for storing data arrays in memory in compressed form while still supporting high-speed random access.
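To illustrate the compressed-array idea described above, here is a minimal C++ sketch using zfp's compressed array classes (zfp::array3d). The header path and exact method names vary by zfp version, so treat this as an assumption-labeled example rather than canonical usage.

```cpp
// Minimal sketch: a 3D double array stored in compressed form at a fixed
// rate (bits per value) while still allowing random read/write access.
// Assumes zfp 1.x header layout; older releases use "zfparray3.h".
#include <cstdio>
#include <cstddef>
#include "zfp/array3.hpp"

int main()
{
  const std::size_t nx = 64, ny = 64, nz = 64;
  const double rate = 8.0;  // storage budget: 8 compressed bits per value

  // Allocate a compressed 3D array of doubles.
  zfp::array3d field(nx, ny, nz, rate);

  // Write values through random access; zfp (de)compresses small blocks
  // transparently behind an internal cache.
  for (std::size_t k = 0; k < nz; k++)
    for (std::size_t j = 0; j < ny; j++)
      for (std::size_t i = 0; i < nx; i++)
        field(i, j, k) = double(i + nx * (j + ny * k));

  // Read back a single element without decompressing the whole array.
  std::printf("field(1,2,3) = %g\n", (double)field(1, 2, 3));
  std::printf("compressed size = %zu bytes\n", field.compressed_size());
  return 0;
}
```

Because zfp compresses small fixed-size blocks independently, individual elements can be accessed without decompressing the entire array, and the rate parameter caps the memory footprint per value.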