![HPSS 30th anniversary logo](/sites/default/files/styles/news_image_for_home_page/public/2022-06/HPSS-anniversary-hpc-news.png?itok=Qrx2YVgN)
This year marks the 30th anniversary of the High Performance Storage System (HPSS) collaboration, which comprises five DOE national laboratories — wait, no em dash: which comprises five DOE national laboratories (LLNL, Lawrence Berkeley, Los Alamos, Oak Ridge, and Sandia) and industry partner IBM.
![Some of the LLNL HPSS team in front of one of our tape storage systems: Herb Wartens, Debbie Morford, and Todd Heer](/sites/default/files/styles/news_image_for_home_page/public/2022-06/HPSS-anniversary-hpc-news-team.png?itok=HMNJhzRh)
After 30 years, the High Performance Storage System (HPSS) collaboration continues to lead and adapt to the needs of the time while honoring its primary mission: long-term stewardship of the crown-jewel data of government, academic, and commercial organizations around the world.
![Picture of early career award winners](/sites/default/files/styles/news_image_for_home_page/public/2022-06/4-land_open.jpg?itok=5PISWMEq)
An update on early and mid-career recognition award recipients, including Livermore Computing's own Todd Gamblin.
![Three El Capitan Early Access Systems: Tioga, Tenaya, and RZVernal](/sites/default/files/styles/news_image_for_home_page/public/2022-06/elCapitan_875x500px.jpg?itok=K4Qd26d9)
Three testbed machines for Lawrence Livermore National Laboratory’s future exascale El Capitan supercomputer — nicknamed rzVernal, Tioga and Tenaya — all ranked among the top 200 on the latest Top500 List of the world’s most powerful computers.
![LLNL and Amazon Web Services](/sites/default/files/styles/news_image_for_home_page/public/2022-05/mou_AWS_LLNL_875x50px_f.jpg?itok=ku9DVJ4B)
LLNL and Amazon Web Services (AWS) have signed a memorandum of understanding to define the role of leadership-class HPC in a future where cloud HPC is ubiquitous.
![square cutaway of rainbow-colored data reconstruction](/sites/default/files/styles/news_image_for_home_page/public/2022-05/PacificVis%20Comp%20leaderboard.png?itok=jq5HXhVv)
Winning the best paper award at PacificVis 2022, a research team has developed a resolution-precision-adaptive representation technique that reduces mesh sizes, thereby reducing the memory and storage footprints of large scientific datasets.
![IPDPS 2022 twitter card](/sites/default/files/styles/news_image_for_home_page/public/2022-05/IPDPS%202022%20twitter%20card.png?itok=KNKaHG0x)
LLNL participates in the International Parallel and Distributed Processing Symposium (IPDPS), held May 30 through June 3.
![Magma supercomputer in dramatic blue lighting, overlaid with LLNL logo and ISC22 logo](/sites/default/files/styles/news_image_for_home_page/public/2022-05/ISC-22-v1-hpc.png?itok=2YT7q-lI)
Join LLNL at the ISC High Performance Conference, May 29 through June 2. The event brings together the HPC community to share the latest technology of interest to HPC developers and users.
![hpc](/sites/default/files/styles/news_image_for_home_page/public/2022-05/hpc_875x500px.jpg?itok=f4nr5_4o)
Lawrence Livermore National Laboratory (LLNL) and the United Kingdom’s Hartree Centre are launching a new webinar series intended to spur collaboration with industry.
![Cornelis system mockup](/sites/default/files/styles/news_image_for_home_page/public/2022-05/cornelis_875x500px.jpg?itok=LYDBIzKF)
The U.S. Department of Energy’s (DOE) National Nuclear Security Administration (NNSA) today announced the award of an $18 million contract to Cornelis Networks for collaborative research and development in next-generation networking for supercomputing systems at the NNSA laboratories.
![Exascale Computing Project logo](/sites/default/files/styles/news_image_for_home_page/public/2022-05/ecp-logo.png?itok=ysAAlIt5)
The Exascale Computing Project (ECP) 2022 Community Birds-of-a-Feather Days will take place May 10–12 via Zoom. The event provides an opportunity for the HPC community to engage with ECP teams to discuss our latest development efforts.
![Coronavirus model](/sites/default/files/styles/news_image_for_home_page/public/2022-03/virus_031022_875x500px.jpg?itok=LoOsEWIE)
Analyzing one of the largest databases of patients with cancer and COVID-19 using machine learning models, researchers from LLNL and UC San Francisco found previously unreported links involving a rare type of cancer.
![collage of Flux team alongside the project logo](/sites/default/files/styles/news_image_for_home_page/public/2022-03/flux%20team%20hpc%20news.png?itok=LmqiItDk)
The Livermore Computing–developed Flux project addresses challenges posed by complex scientific research supercomputing workflows, and the team has played a major role in the ECP ExaWorks project.
![Being male is a known risk factor for adverse outcomes in hospitalized COVID-19 patients. However, new analysis reveals that when modeling the entire disease trajectory, the degree to which being male is a risk factor depends on the underlying disease severity of the patient. Foreground image credit: LLNL Principal Investigator Priyadip Ray; Background image credit: Adobe Stock images.](/sites/default/files/styles/news_image_for_home_page/public/2022-03/virus_malevsFemale_875X500PX.jpg?itok=lWi8BcCd)
An LLNL team has developed a comprehensive dynamic model of COVID-19 disease progression in hospitalized patients.
![Oppenheimer awards announcement with picture of Kathryn and Yong](/sites/default/files/styles/news_image_for_home_page/public/2022-01/oppenheimerAwards_3_875x500pxpsd%20copy.jpg?itok=McttpwQL)
The Oppenheimer Science and Energy Leadership Program has selected materials scientist T. Yong Han and computer scientist Kathryn Mohror as 2022 fellows.
![RAS protein in front of Sierra](/sites/default/files/styles/news_image_for_home_page/public/2022-01/MuMMI-RAS-Comp-leaderboard.png?itok=BkDSl5X9)
In the Multiscale Machine-Learned Modeling Infrastructure (MuMMI), the macroscale simulation runs a large system, with hundreds of proteins, at low resolution, and machine learning decides which regions of the macro-model require investigation in a microscale simulation at much higher resolution.
![AI3 logo](/sites/default/files/styles/news_image_for_home_page/public/2022-01/ai3-news-comp.png?itok=M_xy_Thg)
Lawrence Livermore National Laboratory’s AI Innovation Incubator (AI3) will serve as the foundation for a cohesive view of AI for Applied Science, built upon LLNL’s “cognitive simulation” approach that combines state-of-the-art AI technologies with leading-edge high performance computing.
![Screenshot of Zoom meeting of SC21 SCC](/sites/default/files/styles/news_image_for_home_page/public/2022-01/ZoomScreenshot-ext.png?itok=JLHK46WB)
LLNL’s formidable presence at the annual Supercomputing Conference (SC21) included leadership of the Student Cluster Competition (SCC), which was held in a hybrid format. Computer scientist Kathleen Shoga served as this year’s SCC chair.
![Inclusions in steel PPT slide](/sites/default/files/styles/news_image_for_home_page/public/2022-01/inclusion_875x500px.jpg?itok=f3iYKIk7)
Under a newly funded HPC for Manufacturing project, LLNL will partner with steel and mining company ArcelorMittal to couple computer vision and machine learning methods with HPC resources to reduce emissions and defects from inclusions in steel manufacturing.
![Bronis at a podium speaking to SC audience](/sites/default/files/styles/news_image_for_home_page/public/2021-11/sc21-hpc-news.png?itok=d1JkJGqN)
For the first time ever, the 2021 International Conference for High Performance Computing, Networking, Storage and Analysis (SC21) went hybrid, with dozens of both in-person and virtual workshops, technical paper presentations, panels, tutorials and “birds of a feather” sessions.
![Ignacio accepting the award in front of a large projection screen](/sites/default/files/styles/news_image_for_home_page/public/2021-11/sc21-award-hpc-news.png?itok=GtjjPth3)
A suite developed by an LLNL team to simplify evaluation of approximation techniques for scientific applications has won the first-ever Best Reproducibility Advancement Award at SC21.
![stylized image of supercomputer racks](/sites/default/files/styles/news_image_for_home_page/public/2021-11/hpc4mfg-industrial-heating.png?itok=4X9zhp2P)
In a project with U.S. Steel, LLNL computational physicists built models of the hot-rolling process to run on LLNL’s HPC platforms.
![RZ Nevada system](/sites/default/files/styles/news_image_for_home_page/public/2021-11/RZNevadaSystem_875x500px_0_0.jpg?itok=wlCVKvcL)
The DOE's Exascale Computing Project compiled a video playlist for Exascale Day, celebrated October 18 (10/18) as a nod to the 10^18 calculations per second of exascale computing.
![flux logo next to R&D 100 logo](/sites/default/files/styles/news_image_for_home_page/public/2021-11/flux-rnd100-winner_0.png?itok=WnTQBkDt)
The renowned worldwide competition announced the winners of the 2021 R&D 100 Awards, among them LLNL's Flux workload management software framework in the Software/Services category.
![HPV visual](/sites/default/files/styles/news_image_for_home_page/public/2021-11/hpv.png?itok=w48L9xwS)
LLNL will lend its expertise in vaccine research—most recently from designing new antibodies and antiviral drugs for COVID-19—and computing resources to the Human Vaccines Project consortium to aid development of a universal coronavirus vaccine and improve understanding of immune response.