![Mammoth](/sites/default/files/styles/news_image_for_home_page/public/mammoth-LLNL.png?itok=RgNdx7om)
Funded by the Coronavirus Aid, Relief and Economic Security (CARES) Act, Lawrence Livermore's new "big memory" high performance computing cluster, Mammoth, will be used to perform the genomics analysis, nontraditional simulations and graph analytics required by scientists working on COVID-19.
![SJEYH conference screen shot with John Gyllenhaal in the foreground](/sites/default/files/styles/news_image_for_home_page/public/SJEYH-LLNL.png?itok=E5lAShGm)
The San Joaquin Expanding Your Horizons Conference went virtual for the first time in its 28-year history. Pictured, LC's John Gyllenhaal presents The Magic of STEM workshop.
![Samba Nova on the expanded Corona system](/sites/default/files/styles/news_image_for_home_page/public/sambanova-LLNL.png?itok=3zVZcZR0)
LLNL has installed a new artificial intelligence accelerator from SambaNova Systems into the Corona supercomputing cluster, allowing Lab researchers to run scientific simulations for inertial confinement fusion, COVID-19 and other basic science, while offloading AI calculations from those simulations.
![Corona Supercomputer](/sites/default/files/styles/news_image_for_home_page/public/corona-hpc-LLL.png?itok=QHHtlKiY)
Funded by the CARES Act, LLNL and industry partners have more than doubled the speed of the Corona supercomputing cluster to more than 11 petaflops of peak performance.
![Visualization, described in article summary](/sites/default/files/styles/news_image_for_home_page/public/Bulatove-HPC-llnl.png?itok=Q_-59LmE)
To understand exactly how metals respond to high-rate compression in molecular dynamics simulations, LLNL scientists use novel methods of in silico microscopy to reveal defects in the crystal lattice (green and red line objects and gray surface objects at the top) while removing all the atoms.
![PSAAP logo](/sites/default/files/styles/news_image_for_home_page/public/PSAAPlogo-HPC.png?itok=YInoTtvu)
Lawrence Livermore National Laboratory (LLNL) will provide significant computing resources to students and faculty from nine universities that were newly selected for participation in the NNSA’s Predictive Science Academic Alliance Program (PSAAP).
![3 lab's logos on puzzle piece: Berkeley, Oak Ridge, Livermore](/sites/default/files/styles/news_image_for_home_page/public/puzzle-hpc.jpg?itok=qBFD1-jN)
When it comes to solving complex technical issues for GPU-accelerated supercomputers, the national labs have found that tackling them is "better together."
![Spack podcast episode header](/sites/default/files/styles/news_image_for_home_page/public/FFS030_header-SPACK-LLNL.png?itok=gVgFZycR)
An interview with Todd Gamblin of Lawrence Livermore National Laboratory about his current research and his involvement in the Spack project.
![Shown is a simulation of arterial blood flow using HARVEY, a fluid dynamics software developed by Lawrence Fellow Amanda Randles. Visualization by Liam Krauss/LLNL.](/sites/default/files/styles/news_image_for_home_page/public/aorta-LLNL-hpc-news.png?itok=XcmX9MO3)
Lawrence Livermore researchers and collaborators have combined machine learning, 3D printing and high performance computing simulations to accurately model blood flow in the aorta.
![tungsten](/sites/default/files/styles/news_image_for_home_page/public/tungsten-LLNL.png?itok=LjwmfNUn)
By combining simulations with high-speed videos taken during the laser powder-bed fusion process, LLNL scientists were able to visualize the ductile-to-brittle transition in 3D-printed tungsten in real time.
![Cerebras installation](/sites/default/files/styles/news_image_for_home_page/public/Cerebras-Install-LLNL_0.png?itok=9e3wqfGd)
LLNL and Cerebras Systems have installed the company’s CS-1 artificial intelligence (AI) computer into Lassen, making LLNL the first institution to integrate the cutting-edge AI platform with a large-scale supercomputer.
![Simulated strength of shaking from a magnitude 7.0 Hayward Fault earthquake showing peak ground velocity (colorbar) and seismograms (blue) at selected locations (triangles).](/sites/default/files/styles/news_image_for_home_page/public/seismic-LLNL.png?itok=kMNo5v4b)
Simulated strength of shaking from a magnitude 7.0 Hayward Fault earthquake showing peak ground velocity (colorbar) and seismograms (blue) at selected locations (triangles).
![2 researchers with GEM](/sites/default/files/styles/news_image_for_home_page/public/gem-LLNL.png?itok=lgsu5Z5W)
LLNL physicist Alison Saunders and doctoral fellow Marco Echeverria meet online to discuss their progress enhancing codes used during high-energy experiments.
![Screenshot of Nicole introducing Ian Karlin as talk show guest](/sites/default/files/styles/news_image_for_home_page/public/next-platform_0.png?itok=-zLu2ino)
Dr. Ian Karlin (LLNL) discusses integrating AI hardware into HPC systems and workflows, followed by a talk with Dr. Brian Van Essen (LLNL) on the software integration of AI accelerators in HPC.
![Peter Lindstrom HPC screenshot](/sites/default/files/styles/news_image_for_home_page/public/pl-hpc.png?itok=-4lIHqH3)
ZFP is a compressed representation of multidimensional floating-point arrays, which are ubiquitous in high-performance computing.
![Computing Facility Construction](/sites/default/files/styles/news_image_for_home_page/public/computing-Facility-HPC-news_0.png?itok=mjlh6nqc)
To meet the needs of tomorrow's supercomputers, NNSA's LLNL has broken ground on its Exascale Computing Facility Modernization (ECFM) project, which will substantially upgrade the mechanical and electrical capabilities of the Livermore Computing Center.
![Magma Supercomputer](/sites/default/files/styles/news_image_for_home_page/public/magma.png?itok=OyZ5MSYh)
With 5.4 petaflops of peak performance packed into 760 compute nodes, Magma delivers a lot of computing capability in a small space, generating a considerable amount of heat.
![podcast screenshot](/sites/default/files/styles/news_image_for_home_page/public/spack_1.png?itok=9k4l4D9M)
Elaine Raybourn interviews Todd Gamblin about the Spack project's experience working remotely.
![Todd Gamblin](/sites/default/files/styles/news_image_for_home_page/public/todd.png?itok=bGTIEZzN)
Spack is an open-source package manager that has become well known in the high-performance computing (HPC) community for the value it adds to the software deployment process.
![Corona supercomputing cluster](/sites/default/files/styles/news_image_for_home_page/public/corona-hpc-LLNL_0.png?itok=G7no_Lb5)
Under a new agreement, AMD will supply upgraded graphics accelerators for Lawrence Livermore National Laboratory’s Corona supercomputing cluster, expected to nearly double the system’s peak compute power.
![Coronavirus model](/sites/default/files/styles/news_image_for_home_page/public/virus_hpc.png?itok=ziE72KvW)
The White House announced the launch of the COVID-19 HPC Consortium to provide COVID-19 researchers worldwide with access to the world’s most powerful high performance computing resources that can significantly advance the pace of scientific discovery in the fight to stop the virus.
![Magma Supercomputer](/sites/default/files/styles/news_image_for_home_page/public/Magma--2020-LLNL-news.png?itok=lgh82d19)
This week, LLNL highlighted one of the latest additions to its computing arsenal: Magma, a Penguin Computing "Relion" system composed of 752 nodes with Intel Xeon Platinum 9242 (Cascade Lake-AP) processors.
![El Capitan](/sites/default/files/styles/news_image_for_home_page/public/el_capitan-hpc-news_0.png?itok=2gLeGdRc)
Lawrence Livermore National Laboratory (LLNL), Hewlett Packard Enterprise (HPE) and Advanced Micro Devices Inc. (AMD) today announced the selection of AMD as the node supplier for El Capitan, projected to be the world’s most powerful supercomputer when it is fully deployed in 2023.
![Cosmin Petra and Ignacio Aravena](/sites/default/files/styles/news_image_for_home_page/public/grid_hpc.png?itok=oqAftOZl)
LLNL bested more than two dozen teams to place first overall in Challenge 1 of the DOE Grid Optimization Competition, aimed at developing a more reliable, resilient, and secure U.S. electrical grid.
![Podcast title and audio image](/sites/default/files/styles/news_image_for_home_page/public/tzanio-podcast.png?itok=3-5CyDGx)
Led by LLNL's Tzanio Kolev, the Center for Efficient Exascale Discretizations (CEED) is a hub for high-order mathematical methods to increase application efficiency and performance.