HPC News

Exascale in motion on earthquake risks

October 12, 2017

Assessing the hazards of large-magnitude earthquakes (greater than magnitude 6) on a regional scale (up to 100 kilometers) takes big machines. To resolve the frequencies important to engineering analysis of the built environment (10 Hz and above), numerical simulations of earthquake motions must be run on today's most powerful computers.
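
Some back-of-the-envelope arithmetic shows why. The inputs below (minimum shear wavespeed, points per wavelength, domain depth) are illustrative assumptions for this sketch, not figures from the study, but they show how grid counts explode as the resolved frequency rises:

```python
# Rough sizing of a regional ground-motion simulation grid. All inputs are
# illustrative assumptions for this sketch, not values from the study.
v_min = 500.0   # minimum shear wavespeed in soft soils (m/s), assumed
f_max = 10.0    # highest frequency to resolve (Hz)
ppw = 8         # grid points per wavelength, typical for finite differences

wavelength = v_min / f_max   # shortest wavelength to resolve: 50 m
h = wavelength / ppw         # required grid spacing: ~6.25 m

# Assumed regional domain: 100 km x 100 km in extent, 30 km deep
nx, ny, nz = (int(d / h) for d in (100e3, 100e3, 30e3))
print(f"grid spacing: {h:.2f} m")
print(f"grid points:  {nx * ny * nz:.3e}")  # ~1.2e12 points
```

At roughly a trillion grid points, storing even a handful of field variables per point runs to tens of terabytes per time step, which is why such runs are confined to leadership-class systems.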

read more

A quicker picker upper? Lab researchers eye papermaking improvements through HPC

October 4, 2017

Papermaking research, performed for an HPC4Manufacturing (HPC4Mfg) project with papermaking giant Procter & Gamble, resulted in the largest multi-scale model of paper products to date, simulating thousands of fibers in ParaDyn with resolution down to the micron scale.

read more

Transforming electrical grid resiliency with distributed energy resources

September 28, 2017

Normally, in a large-scale emergency, distributed energy resources (DERs) -- such as the energy produced by solar panels at customers' homes -- are shut off to protect the greater electrical grid. But a new project headed by Lawrence Livermore National Laboratory (LLNL) aims to utilize these resources for restoration and recovery operations, boosting the grid's ability to bounce back from a blackout or cascading outage, and potentially reducing customer reconnection time to a matter of hours.

read more

Metamaterials: Using Supercomputers to Mold Electromagnetics

July 24, 2017

Sandia researchers modeled the electromagnetics of complex systems on two DOE supercomputers that can solve tens of millions of problems in hours: Trinity (LANL) and Sequoia (LLNL). This research has led to impressive advances in metamaterials research that will boost these materials' flexibility, efficiency, adaptability, and other properties.

read more

DOE's HPC4Mfg seeks industry proposals to advance energy tech

June 12, 2017

The U.S. Department of Energy's High Performance Computing for Manufacturing Program, designed to spur the use of national lab supercomputing resources and expertise to advance innovation in energy efficient manufacturing, is seeking a new round of proposals from industry to compete for $3 million.

read more

Accelerating Simulation Software with Graphics Processing Units

May 8, 2017

To address the challenges of transitioning to the next generation of high performance computing (HPC), Livermore is bringing together designers of hardware, software, and applications to rethink and redesign their HPC elements and interactions for the exascale era (i.e., systems capable of a billion billion floating point operations per second, or 10^18 flops).

read more

Preparing for Sierra, LLNL's next state-of-the-art supercomputer

April 21, 2017

In late 2017, IBM will begin delivery of Sierra, the latest in a series of leading-edge Advanced Simulation and Computing (ASC) Program supercomputers. Delivering peak speeds of up to 150 petaflops (one petaflop is 10^15 floating-point operations per second), Sierra is projected to provide at least four to six times the performance of Sequoia, Livermore’s current flagship supercomputer. To run efficiently on Sierra, applications must be modified to achieve a level of task division and coordination well beyond what previous systems demanded.
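
A quick arithmetic check clarifies the projection: the peak-flops ratio alone would suggest a larger speedup than the quoted four to six times, and that gap is exactly what applications must close through restructuring.

```python
# Peak-flops ratio vs. projected application speedup (figures from the article).
sequoia_peak_pf = 20.0    # Sequoia peak, petaflops
sierra_peak_pf = 150.0    # Sierra peak, petaflops ("up to")

print(f"peak ratio: {sierra_peak_pf / sequoia_peak_pf:.1f}x")  # 7.5x
# The article projects only 4-6x in practice: real applications approach the
# peak ratio only if they expose enough fine-grained parallelism for Sierra's GPUs.
```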

read more

Computational innovation boosts manufacturing

January 16, 2017

The DOE's High Performance Computing for Manufacturing (HPC4Mfg) Program aims to advance clean-energy technologies, increase the efficiency of manufacturing processes, accelerate innovation, reduce the time it takes to bring new technologies to market, and improve the quality of products. The program unites the world-class high-performance computing (HPC) resources and expertise of Lawrence Livermore and other national laboratories with U.S. manufacturers to deliver solutions that could revolutionize the manufacturing industry.

read more

Two LLNL CTS-1 cluster systems named to TOP500 supercomputer list

December 12, 2016

Two Penguin Computing systems installed at Lawrence Livermore National Laboratory, Quartz and Jade, were ranked 41st and 42nd on the TOP500 list of the world's fastest supercomputers. The announcement came during the SC16 supercomputing conference, held November 13–18, 2016, in Salt Lake City, Utah. The systems were procured under NNSA’s Tri-Laboratory Commodity Technology Systems program, or CTS-1, to bolster computing for national security at Los Alamos, Sandia, and Lawrence Livermore national laboratories.

read more

High-Performance Computing Takes Aim at Cancer

November 23, 2016

A historic partnership between the Department of Energy (DOE) and the National Cancer Institute (NCI) is applying the formidable computing resources at Livermore and other DOE national laboratories to advance cancer research and treatment. The effort will help researchers and physicians better understand the complexity of cancer, choose the best treatment options for every patient, and reveal possible patterns hidden in vast patient and experimental data sets.

read more

LLNL team’s molecular dynamics code among Gordon Bell Prize finalists

November 21, 2016

A Lawrence Livermore team's dramatically improved first-principles molecular dynamics code, which promises to enable new computer simulation applications, was one of the finalists for the 2016 Gordon Bell Prize. The team presented its groundbreaking project at the 2016 supercomputing conference (SC16), held in Salt Lake City, Utah, November 13–18, 2016. The Livermore team's submission was titled "Modeling Dilute Solutions using First-Principles Molecular Dynamics: Computing more than a Million Atoms with over a Million Cores." Using a robust new algorithm, the Livermore team has developed an O(N)-complexity solver for electronic structure problems with fully controllable numerical error.
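
To see why O(N) complexity matters at this scale, consider a toy cost model (the constants are arbitrary, and this is not the Livermore team's algorithm): conventional dense eigensolvers for electronic structure scale cubically with atom count, so a thousandfold increase in atoms costs a billionfold more work, while a linear-scaling method costs only a thousandfold more.

```python
# Toy cost model contrasting O(N^3) dense diagonalization with an O(N)
# localized solver. Constants are arbitrary; not the Livermore algorithm.
def cost_cubic(n_atoms: int, c: float = 1e-9) -> float:
    return c * n_atoms**3   # dense eigensolver work grows cubically

def cost_linear(n_atoms: int, c: float = 1e-3) -> float:
    return c * n_atoms      # localized O(N) solver work grows linearly

for n in (1_000, 10_000, 100_000, 1_000_000):
    print(f"{n:>9} atoms: O(N^3) ~ {cost_cubic(n):.1e}  O(N) ~ {cost_linear(n):.1e}")
# Going from 1e3 to 1e6 atoms: cubic cost grows 1e9-fold, linear only 1e3-fold.
```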

read more

Lawrence Livermore to lead "co-design" center for exascale computing ecosystem

November 11, 2016

Lawrence Livermore National Laboratory (LLNL) was one of four national labs selected to lead a "co-design" center by the Department of Energy's Exascale Computing Project (ECP) as part of a four-year, $48 million funding award. Each co-design center will receive $3 million annually. LLNL's Tzanio Kolev is the director of the newly established ECP co-design Center for Efficient Exascale Discretizations (CEED). "Co-design" refers to the collaborative and interdisciplinary engineering process for developing capable exascale systems.

read more

Scientists selected to lead Exascale Computing Project software development

November 10, 2016

Lawrence Livermore scientists are among those who have been awarded funding to develop software for the Department of Energy's Exascale Computing Project (ECP). The ECP selected 35 software development proposals representing 25 research and academic organizations; Livermore computer scientists will lead six of the projects and are collaborators on another seven. The awards, totaling $34 million for the first year of funding, cover many components of the software stack for exascale systems.

read more

ASML taps Lawrence Livermore to develop extreme ultraviolet light sources for chip manufacturing

October 25, 2016

Lawrence Livermore National Laboratory (LLNL) and ASML Holding NV (ASML) have successfully established plasma simulation capabilities to advance extreme ultraviolet (EUV) light sources toward the manufacturing of next-generation semiconductors. Under a cooperative research and development agreement (CRADA), ASML is leveraging LLNL's expertise in lasers and plasma physics and its ability to perform complex, large-scale modeling and simulation using high-performance computing (HPC).

read more

Laboratory and Norwegian researchers collaborate to use HPC to improve cancer screening

October 5, 2016

Laboratory computer scientists and Norwegian researchers are collaborating to apply high performance computing (HPC) to the analysis of medical data to improve screening for cervical cancer. The convergence of high performance computing, big data, and life science is enabling the development of personalized medicine. "Delivering care tailored to the needs of the individual, rather than population averages, has the potential to transform the delivery of healthcare," says LLNL's Ghaleb Abdulla, Livermore lead on the collaboration with Norway and director of LLNL's Institute for Scientific Computing Research.

read more

National labs' researchers join effort to develop applications under Exascale Computing Project

September 9, 2016

The Department of Energy's (DOE) Exascale Computing Project (ECP) announced its first round of funding with the selection of 15 application development proposals for full funding and seven proposals for seed funding, representing teams from 45 research and academic organizations, including Lawrence Livermore National Laboratory (LLNL). The awards, totaling $39.8 million, target advanced modeling and simulation solutions to specific challenges supporting key DOE missions in science, clean energy, and national security.

read more

DOE HPC4Mfg Program to help jumpstart clean energy technologies

August 30, 2016

A U.S. Department of Energy (DOE) program designed to spur the use of high performance supercomputers to advance U.S. manufacturing has funded 13 new industry projects for a total of $3.8 million. Experts at Lawrence Livermore and other DOE national laboratories will work directly with manufacturing industry members to teach them how to adopt or advance their use of high performance computing (HPC) to address manufacturing challenges with a goal of increasing energy efficiency, reducing environmental impacts, and advancing clean energy technologies.

RAND and LLNL partner to demonstrate water resource management

August 25, 2016

Researchers from the RAND Corporation and Lawrence Livermore National Laboratory (LLNL) have joined forces to combine high-performance computing with innovative public policy analysis to improve planning for particularly complex issues such as water resource management. Building on previous work conducted by RAND on the Colorado River Basin in 2012, RAND and the High Performance Computing Innovation Center (HPCIC) at LLNL hosted a joint workshop to employ high-performance computer simulations to stress-test several water management strategies over a vast number of plausible future scenarios in near real time.

read more

Energy Department to invest $16 million to accelerate computer design of materials

August 16, 2016

The Department of Energy will invest $16 million over the next four years to accelerate the design of new materials through the use of supercomputers. Two four-year projects will take advantage of superfast computers at DOE national laboratories by developing software to design fundamentally new functional materials destined to revolutionize applications in alternative and renewable energy, electronics, and a wide range of other fields. Lawrence Livermore will use its Vulcan supercomputer to study transition metal oxides with the goal of developing new theoretical methods and simulation capabilities for the predictive calculation of the properties of complicated materials for energy applications.

read more

LLNL dedicates new unclassified supercomputer facility

June 29, 2016

On June 29, 2016, officials from the Department of Energy's National Nuclear Security Administration (NNSA) and local government representatives dedicated a new supercomputing facility at LLNL. Charles Verdon, LLNL principal associate director for Weapons and Complex Integration (WCI), presided over the ceremony. The $9.8 million modular and sustainable facility provides the Laboratory flexibility to accommodate future advances in computer technology and meet a rapidly growing demand for unclassified high-performance computing (HPC).

The new dual-level building consists of a 6,000-square-foot machine floor flanked by support space. The main computer structure is flexible in design to allow for expansion and the testing of future computer technology advances. In-house modeling and simulation expertise in energy-efficient building design was used in drawing up the facility's specifications, including heating, ventilation, and air conditioning systems that meet federal sustainable design requirements for energy conservation. The flexible design will also accommodate future liquid cooling solutions for HPC systems.

read more

HPC to play major role in Cancer Moonshot initiative

June 29, 2016

In January 2016, President Obama signed a Presidential Memorandum establishing a first-of-its-kind federal task force to end cancer as we know it. Led by Vice President Joe Biden, the Cancer Moonshot is a national effort to double the rate of progress—to make a decade's worth of advances in cancer prevention, diagnosis, treatment, and care in five years—and to ultimately end cancer. Lawrence Livermore National Laboratory's (LLNL) high-performance computing will play a major role in the initiative.

The Department of Energy (DOE) has launched three new pilot projects focused on bringing together nearly 100 cancer researchers, care providers, computer scientists, and engineers to apply the nation's most advanced supercomputing capabilities to analyze data from preclinical models in cancer, molecular interaction data, and cancer surveillance data across four DOE national laboratories. The supercomputing resources of the Collaboration of Oak Ridge, Argonne and Livermore (CORAL), led by Lawrence Livermore and typically used for stockpile stewardship, will be applied to biology to refine the understanding of the mechanisms leading to cancer development and to accelerate the development of promising therapies that are more effective and less toxic.

read more

Preparing for the power demands of exascale systems

June 9, 2016

When considering the challenges of exascale computing, having sufficient power is right at the top of the list. Sequoia, which at 20 petaflops is currently our top HPC system, draws more than 9 MW of power, equivalent to the draw of more than 1,000 average homes. Systems like Sierra, Livermore’s next advanced technology HPC system, spec’d at 120-150 petaflops peak, could potentially draw three times as much.
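
The article's own figures also imply the efficiency gain such a system would need. A rough sketch (actual vendor numbers will differ):

```python
# Flops-per-watt arithmetic implied by the figures above (rough sketch only).
sequoia_pf, sequoia_mw = 20.0, 9.0     # petaflops, megawatts
sierra_pf, sierra_mw = 150.0, 3 * 9.0  # "three times as much" upper bound

def gflops_per_watt(pf: float, mw: float) -> float:
    return (pf * 1e6) / (mw * 1e6)     # PF -> GF, MW -> W

print(f"Sequoia: {gflops_per_watt(sequoia_pf, sequoia_mw):.1f} GF/W")  # ~2.2
print(f"Sierra:  {gflops_per_watt(sierra_pf, sierra_mw):.1f} GF/W")    # ~5.6
# Even at triple the power budget, Sierra must be ~2.5x more energy efficient.
```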

When tens of megawatts of power are on the line, advanced power management is needed to balance highly fluctuating power demands against power availability. This requires orchestration of resources and real-time insight into the entire operational facility and energy grid. Even small interruptions during high-performance compute cycles can derail a job and disrupt power-grid management as well. To mitigate potential problems, LLNL has turned to OSIsoft, a company with deep roots in data collection, aggregation, and storage. OSIsoft helps LLNL track and analyze streams of operational data from computing racks, cooling systems, energy utilities, and other equipment, storing the data at a central control point for the life of the assets.
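
A minimal sketch of the kind of streaming aggregation this involves, with all names hypothetical (this is not OSIsoft's actual PI System API): keep a rolling window of power readings per asset and flag sudden spikes.

```python
# Hypothetical telemetry aggregator: rolling stats over rack-power readings.
# Names and thresholds are illustrative; not OSIsoft's PI System API.
from collections import deque

class PowerAggregator:
    """Keep a rolling window of power samples (kW) for one asset."""

    def __init__(self, window: int = 60):
        self.samples = deque(maxlen=window)   # last `window` readings

    def record(self, kw: float) -> None:
        self.samples.append(kw)

    def mean_kw(self) -> float:
        return sum(self.samples) / len(self.samples) if self.samples else 0.0

    def spike(self, threshold_pct: float = 20.0) -> bool:
        """Flag a reading that jumps sharply above the rolling mean."""
        if len(self.samples) < 2:
            return False
        return self.samples[-1] > self.mean_kw() * (1 + threshold_pct / 100)

rack = PowerAggregator()
for reading in (41.0, 42.5, 40.8, 55.9):   # synthetic samples
    rack.record(reading)
print(rack.mean_kw(), rack.spike())        # 45.05 True
```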

read more

Lawrence Livermore to receive brain-inspired TrueNorth supercomputer

March 29, 2016

Lawrence Livermore will receive a first-of-a-kind brain-inspired supercomputing platform for deep learning developed by IBM Research. Based on a breakthrough neurosynaptic computer chip called IBM TrueNorth, the scalable platform will process the equivalent of 16 million neurons and 4 billion synapses and consume the energy equivalent of a hearing aid battery—a mere 2.5 watts of power.

read more

Shaving Time to Test Antidotes for Nerve Agents

March 1, 2016

Powered by Livermore Computing's world-class supercomputers Vulcan and Sequoia, LLNL researchers are simulating the energy requirements for candidate drug molecules to permeate cell membranes, shaving weeks off compound testing by determining in advance how readily the molecules will enter cells to perform their activity.

read more

DOE funds "HPC for manufacturing" projects

February 17, 2016

Lawrence Livermore National Laboratory (LLNL) and partners announced 10 new industry projects to advance manufacturing using high-performance computing (HPC) under a DOE program. Industry projects ranging from improved turbine blades for aircraft engines and reduced heat loss in electronics to waste reduction in paper manufacturing and improved fiberglass production are among the first to be selected for funding and partnerships with national labs under the U.S. Department of Energy's (DOE) High Performance Computing for Manufacturing (HPC4Mfg) Program. LLNL leads the program and partners with Lawrence Berkeley and Oak Ridge National Laboratories (LBNL and ORNL).

Each of the 10 Phase I projects will be funded at approximately $300,000 for a total of just under $3 million. Selected companies will partner with national labs, which will provide expertise in and access to high performance computing systems aimed at high-impact challenges. The Advanced Manufacturing Office (AMO) within DOE's Office of Energy Efficiency and Renewable Energy (EERE) created this program to advance clean energy technologies, increase the efficiency of manufacturing processes, accelerate innovation, shorten the time it takes to bring new technologies to market, and improve the quality of products.

read more

Sequoia enables Gordon Bell prize-winning simulation

December 7, 2015

The full power of Lawrence Livermore’s Sequoia supercomputer played a key role in the Earth mantle convection simulation that won the 2015 Gordon Bell Prize, announced at the SC15 supercomputing conference. LLNL’s onsite IBM analyst Roy Musselman and Livermore Computing’s Scott Futral were acknowledged for their contributions to the project in carrying out the Sequoia calculations.

The work was published in the Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. Earth simulation images courtesy of J. Rudi et al.

HPCWire Editor’s Choice Award given to tri-lab collaboration

November 19, 2015

The collaboration of Oak Ridge, Argonne and Lawrence Livermore (CORAL) that will bring the Sierra supercomputer to the Lab in 2018 has been recognized by HPCWire with an Editor’s Choice Award for Best HPC Collaboration between Government and Industry.

read more

Sequoia simulation unveils whole-body blood flow

November 15, 2015

Livermore scientists and collaborators have accomplished three-dimensional, high-resolution simulations of whole-body blood flow on 1,572,864 cores of Sequoia, LLNL's IBM Blue Gene/Q system. Simulations of hemodynamics in the systemic arterial tree can potentially have a tremendous impact on the diagnosis and treatment of patients suffering from vascular disease.

read more


State grant enables energy-saving retrofit

November 12, 2015

Supercomputers at Lawrence Livermore National Laboratory (LLNL) will be retrofitted with liquid cooling systems under a California Energy Commission (CEC) grant to assess potential energy savings. "Reducing power consumption and the associated costs at data centers and high performance computing facilities is a leading concern of the HPC community," says Anna Maria Bailey, LLNL high performance computing facility manager. "Addressing this issue is critical as we develop ever more powerful next-generation supercomputers."

read more

Livermore awards contract to provide 7+ petaflops of capacity computing

October 21, 2015

Penguin Computing will receive $39 million to provide more than 7 petaflops of “capacity” systems, called the Commodity Technology Systems-1 (CTS-1), to Los Alamos, Sandia and Lawrence Livermore national laboratories. The new systems will leverage the Open Compute Tundra Extreme Scale (ES) series architecture to support national security workloads at the three national laboratories.

LLNL and Rensselaer Polytechnic Institute to promote industry adoption of supercomputing

September 16, 2015

Lawrence Livermore National Laboratory (LLNL) and the Rensselaer Polytechnic Institute (RPI) will combine decades of expertise to help American industry and businesses expand use of high performance computing (HPC) under a signed memorandum of understanding. Livermore and RPI will look to bridge the gap between the levels of computing conducted at their institutions and the typical levels found in industry. Scientific and engineering software applications capable of running on HPC platforms are a prime area of interest.

read more

LLNL breaks ground on unclassified supercomputing facility

May 28, 2015

Lawrence Livermore National Laboratory broke ground today on a modular and sustainable supercomputing facility that will provide a flexible infrastructure able to accommodate the Laboratory’s growing demand for high performance computing (HPC).

The $9.875 million building, located on the Laboratory’s east side, will ensure computer room space to support the Advanced Simulation and Computing (ASC) Program’s unclassified HPC systems. ASC is the high-performance simulation effort of the National Nuclear Security Administration’s (NNSA) stockpile stewardship program to ensure the safety, security and reliability of the nation’s nuclear deterrent without testing.

“Unclassified high performance computing is critical to the stockpile stewardship program’s success and the need for this capability will continue to grow,” said Laboratory Director Bill Goldstein. “Modernizing the Lab’s computing infrastructure will enable us to better exploit next-generation supercomputers for NNSA by tapping the talents of top academic and private sector partners.”

read more

Revolutionary processing-in-memory architecture on the horizon

March 1, 2015

EVERYTHING changes. Nowhere is that maxim more apparent than in the world of computing. From smartphones and tablets to mainframes and supercomputers, the system architecture—how a machine’s nodes and network are designed—evolves rapidly as new versions replace old. As home computer users know, systems can change dramatically between generations, especially in a field where five years is a long time. Computational scientists at Lawrence Livermore and other Department of Energy (DOE) national laboratories must continually prepare for the next increase in computational power so that the transition to a new machine does not arrest efforts to meet important national missions.

That next jump in power will be a big one, as new machines begin to approach exascale computing. Exascale systems will process 10^18 floating-point operations per second (flops), making them 1,000 times faster than the petascale systems that arrived in the late 2000s. Computational scientists will need to address a number of high-performance computing (HPC) challenges to ensure that these systems can meet the rising performance demands and operate within strict power constraints.

read more

Research and visualization featured on Chemical Physics Letters cover

February 16, 2015

The Laboratory's work on many-body density functional tight binding (DFTB) models for carbon, described in "Multi-center semi-empirical quantum models for carbon" below, was featured on the cover of Chemical Physics Letters along with a visualization of the simulations.

read more

LLNL trained a neural network with 15 billion parameters

February 16, 2015

We present a work-in-progress snapshot of learning with a 15-billion-parameter deep learning network on HPC architectures, applied to the largest publicly available natural image and video dataset released to date. Recent advancements in unsupervised deep neural networks suggest that scaling up such networks in both model and training dataset size can yield significant improvements in the learning of concepts at the highest layers. We train our three-layer deep neural network on the Yahoo! Flickr Creative Commons 100M dataset.
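
For a sense of scale, a chain of fully connected layers reaches that parameter count quickly. The layer sizes below are hypothetical round numbers chosen only so the total lands near 15 billion; the paper's actual connectivity is not described here and was likely sparser.

```python
# Parameter count for a chain of fully connected layers (weights + biases).
# Layer sizes are hypothetical, chosen only to illustrate the ~15B scale.
def dense_params(sizes: list[int]) -> int:
    return sum(a * b + b for a, b in zip(sizes, sizes[1:]))

layers = [200_000, 50_000, 50_000, 50_000]  # input + three hidden layers (assumed)
print(f"{dense_params(layers):,} parameters")  # 15,000,150,000 (~15B)
```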


Multi-center semi-empirical quantum models for carbon

February 16, 2015

We report on the development of many-body density functional tight binding (DFTB) models for carbon, which include either explicit or implicit calculation of multi-center terms in the Hamiltonian. We show that all of our methods yield accurate eigenstates and eigenfunctions for both ambient diamond and transitions to molten, metallic states. We then determine a three-body repulsive energy to compute accurate equation of state and structural properties for carbon under these conditions. Our results indicate a straightforward approach by which many-body effects can be included in DFTB, thus extending the method to a wide variety of systems and thermodynamic conditions.
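
In schematic form, the decomposition the abstract describes extends the standard DFTB total energy, which pairs a band-structure term and a Coulomb term with a two-body repulsive potential, by adding a fitted three-body repulsive contribution (generic notation for illustration, not the paper's exact expression):

```latex
% Generic DFTB energy decomposition with an added three-body repulsive term.
% Schematic notation for illustration; not the paper's exact expression.
\begin{equation}
  E_{\mathrm{tot}}
    = \sum_{i}^{\mathrm{occ}} \langle \psi_i \,|\, \hat{H}_0 \,|\, \psi_i \rangle
    + E_{\mathrm{Coul}}
    + \sum_{i<j} V^{(2)}_{\mathrm{rep}}\!\left(r_{ij}\right)
    + \sum_{i<j<k} V^{(3)}_{\mathrm{rep}}\!\left(r_{ij}, r_{ik}, r_{jk}\right)
\end{equation}
```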

read more