IEEE, the world's largest technical professional organization, announced it has elevated Bronis de Supinski to the rank of fellow, recognizing LLNL's Livermore Computing chief technology officer (CTO) for his leadership in the design and use of large-scale computing systems.
To get a sense of the size and scope of the storage systems in use at Lawrence Livermore, we had a long conversation recently with Robin Goldstone, HPC strategist in the Advanced Technologies Office at Lawrence Livermore.
Sierra was one of six Lawrence Livermore National Laboratory supercomputers to make the latest TOP500 List of the most-powerful supercomputers in the world. Sierra held on to the No. 3 spot, achieving 94.6 petaflops on the High Performance LINPACK (HPL) benchmark.
Lawrence Livermore National Lab has been at the forefront of newer architectures over the last year, in particular across multiple systems: Mammoth, the expanded Corona, Lassen with its Cerebras chip, and Ruby, a top 100-class all-CPU (Intel Xeon Platinum) supercomputer.
Lawrence Livermore National Laboratory’s newest supercomputer, Ruby, a 6-petaflop Intel Xeon Platinum-based cluster, will be used for unclassified programmatic work in support of the National Nuclear Security Administration’s stockpile stewardship mission, open science and the search for therapeutics.
Listen to what’s coming in OpenMP 5.1 and beyond, how the C++ ecosystem is evolving, why Python in HPC, and have fun as these two razz each other.
Spack is now the deployment mechanism for Fugaku, the world's top supercomputer with its Arm base, and 1,300 packages on the Summit machine were handled by Spack. It is expected to gain further momentum with future exascale systems as well.
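Spack deployments like these are typically driven by an environment file. A minimal, hypothetical `spack.yaml` (the package names here are illustrative, not the actual Fugaku or Summit software stack) looks like:

```yaml
# spack.yaml -- a Spack environment describing the packages to deploy
spack:
  specs:
    - hdf5+mpi          # build HDF5 with MPI support enabled
    - zlib              # plain build with default variants
  concretizer:
    unify: true         # resolve all specs against one consistent DAG
```

Running `spack install` inside the environment concretizes and builds everything listed, which is how a single file can describe hundreds of packages on a given machine.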
The scientific computing and networking leadership of 17 Department of Energy (DOE) national laboratories will be showcased at SC20, the International Conference for High-Performance Computing, Networking, Storage and Analysis, taking place in November.
Funded by the Coronavirus Aid, Relief and Economic Security Act, Lawrence Livermore's new 'big memory' high performance computing cluster, Mammoth, will be used to perform genomics analysis, nontraditional simulations and graph analytics required by scientists working on COVID-19.
The San Joaquin Expanding Your Horizons Conference went virtual for the first time in its 28-year history. Pictured, LC's John Gyllenhaal presents The Magic of STEM workshop.
LLNL has installed a new artificial intelligence accelerator from SambaNova Systems into the Corona supercomputing cluster, allowing Lab researchers to run scientific simulations for inertial confinement fusion, COVID-19 and other basic science, while offloading AI calculations from those simulations.
Funding by the CARES Act enabled LLNL and industry partners to more than double the speed of the Corona supercomputing cluster to in excess of 11 petaflops of peak performance.
To understand exactly how metals respond to high-rate compression in molecular dynamics simulations, LLNL scientists use novel methods of in silico microscopy to reveal defects in the crystal lattice (green and red line objects and gray surface objects at the top) while removing all the atoms.
Lawrence Livermore National Laboratory (LLNL) will provide significant computing resources to students and faculty from nine universities that were newly selected for participation in the NNSA’s Predictive Science Academic Alliance Program (PSAAP).
When it comes to solving complex technical issues for GPU-accelerated supercomputers, the national labs have found that tackling them is “better together.”
An interview with Todd Gamblin from the Lawrence Livermore National Laboratory about the Spack project, discussing his current research project along with his involvement in Spack.
Lawrence Livermore researchers and collaborators have combined machine learning, 3D printing and high performance computing simulations to accurately model blood flow in the aorta.
By combining simulations with high-speed videos taken during the laser powder-bed fusion process, LLNL scientists were able to visualize the ductile-to-brittle transition in 3D-printed tungsten in real-time.
LLNL and Cerebras Systems have installed the company’s CS-1 artificial intelligence (AI) computer into Lassen, making LLNL the first institution to integrate the cutting-edge AI platform with a large-scale supercomputer.
Simulated strength of shaking from a magnitude 7.0 Hayward Fault earthquake showing peak ground velocity (colorbar) and seismograms (blue) at selected locations (triangles).
LLNL physicist Alison Saunders and doctoral fellow Marco Echeverria meet online to discuss their progress enhancing codes used during high-energy experiments.
Dr. Ian Karlin (Livermore National Lab) discusses AI hardware integration into HPC systems and workflows, followed by a talk on software integration of AI accelerators in HPC by Dr. Brian Van Essen (LLNL).
ZFP is a compressed representation of multidimensional floating-point arrays that are ubiquitous in high-performance computing.
To meet the needs of tomorrow’s supercomputers, NNSA's LLNL has broken ground on its ECFM project, which will substantially upgrade the mechanical and electrical capabilities of the Livermore Computing Center.
With 5.4 petaflops of peak performance crammed into 760 compute nodes, the system packs a lot of computing capability into a small space and generates a considerable amount of heat.