HPC News

New ‘Unify’ File Systems Deliver Fast I/O Performance over Distributed Storage

November 26, 2018

It’s no secret that high performance computing (HPC) systems are growing in complexity and capability. As LLNL brings Sierra (one of the world’s fastest supercomputers) online, computer scientists are investigating performance efficiency improvements at all levels of next-generation HPC architectures.

read more

DOE Machines Dominate Record-Breaking SC18

November 20, 2018

They say everything’s bigger in Texas, and the 30th anniversary of the annual International Conference for High Performance Computing, Networking, Storage and Analysis (SC18), held Nov. 11-16 in Dallas, did not disappoint. The conference, which broke records for attendees and exhibitors, saw Lawrence Livermore National Laboratory (LLNL) once again make its presence felt on the world’s biggest HPC stage.

read more

Call for HPC for Energy Innovation proposals

November 14, 2018

The U.S. Department of Energy's (DOE) High Performance Computing for Energy Innovation (HPC4EI) Initiative has issued its first joint solicitation for the High Performance Computing for Manufacturing Program (HPC4Mfg) and the High Performance Computing for Materials Program (HPC4Mtls).

The call for proposals seeks American companies interested in collaborating with DOE’s national laboratories on one-year projects to apply high-performance computing (HPC) modeling, simulation and data analysis to key challenges in U.S. manufacturing and material development.

read more

Sierra honored with Top Supercomputing Achievement from HPCwire

November 13, 2018

On November 13, 2018, the high performance computing publication HPCwire awarded Lawrence Livermore National Laboratory (LLNL) and Oak Ridge National Laboratory (ORNL) their Editors’ Choice and Readers’ Choice Awards for the Top Supercomputing Achievement of 2018, recognizing the launch of the world’s two fastest computing systems.

Representatives from LLNL and ORNL accepted the awards for Sierra and Summit at the 2018 International Conference for High Performance Computing, Networking, Storage and Analysis (SC18) in Dallas, Texas.

read more

Corona: New Computing Cluster Coming to Livermore

November 12, 2018

Lawrence Livermore National Laboratory, in partnership with Penguin Computing, AMD and Mellanox Technologies, will accept delivery of Corona, a new unclassified high-performance computing (HPC) cluster that will provide unique capabilities for Lab researchers and industry partners to explore data science, machine learning and big data analytics.

read more about Corona

Sierra reaches higher altitudes, takes No. 2 spot on list of world's fastest supercomputers

November 12, 2018

Sierra, Lawrence Livermore National Laboratory’s (LLNL) newest supercomputer, rose to second place on the list of the world’s fastest computing systems, TOP500 List representatives announced November 12, 2018, at the International Conference for High Performance Computing, Networking, Storage and Analysis conference (SC18) in Dallas.

read more

Lawrence Livermore unveils NNSA’s Sierra, world’s third fastest supercomputer

October 26, 2018

Sierra, one of the fastest supercomputers in the world, will serve the National Nuclear Security Administration’s three nuclear security laboratories, providing high-fidelity simulations in support of NNSA’s core mission of ensuring the safety, security and effectiveness of the nation’s nuclear stockpile. [VIDEO]

read more

LLNL/LBNL team named as Gordon Bell Award finalists

September 20, 2018

A team of scientists and physicists headed by LLNL and LBNL has been named as one of six finalists for the prestigious 2018 Gordon Bell Award, one of the world’s top honors in supercomputing. They were supported in this work by the Sierra Integration Team of Livermore Computing.

read more

LLNL applies HPC to improve understanding of traumatic brain injury

July 2, 2018

A multi-institutional team of scientists and engineers plan to simultaneously challenge DOE’s supercomputing resources, advance artificial intelligence capabilities and enable a precision medicine approach for TBI.

read more

Supercomputers: Life and Death of a Neutron

June 6, 2018

The team participating in the latest study developed a way to improve their calculations of gA using an unconventional approach and supercomputers at ORNL and LLNL.

read more

Hardware and Integration Update

May 31, 2018

In an audio discussion, HI Director Terri Quinn (Lawrence Livermore National Laboratory) describes how HI performs its mission and what its top goals are.

read more

Leading a Revolution in Design

April 30, 2018

LLNL researchers are using HPC codes and systems to transform how engineers create complex parts with additive manufacturing technologies.

read more

New exascale system for earth simulation

April 23, 2018

After four years of development, the Energy Exascale Earth System Model (E3SM) will be released to the broader scientific community this month.

read more

DOE announces request for proposal for LLNL's next-generation exascale supercomputer

April 9, 2018

DOE Secretary Rick Perry announces the release of a request for proposals for development of new exascale supercomputers, including LLNL's El Capitan.

read more

Machine learning models could save lives through personalized sepsis diagnostics

April 4, 2018

Machine learning models developed at LLNL in conjunction with Kaiser Permanente can more accurately characterize a patient's progression through the stages of sepsis.

read more

LLNL/U.K. officials ink agreement to collaborate on HPC research, ensure competitiveness

February 14, 2018

Lawrence Livermore National Laboratory (LLNL) and the United Kingdom’s governing body for scientific research on Monday announced the signing of a new three-year agreement aimed at improving U.S. and U.K. industries through high performance computing, promoting research collaborations and boosting economic competitiveness in the two countries.

read more

DOE's HPC4Manufacturing program seeks industry proposals for energy advances

February 1, 2018

The Department of Energy (DOE) on Feb. 1 announced up to $3 million will be available to U.S. manufacturers for public/private projects aimed at applying high performance computing to industry challenges for the advancement of energy innovation.

read more

DOE announces funding for new HPC4Manufacturing industry projects

January 11, 2018

The Department of Energy’s (DOE) Advanced Manufacturing Office (AMO) today announced $1.87 million in funding for seven new industry projects under an ongoing initiative designed to utilize DOE’s high-performance computing (HPC) resources and expertise to advance U.S. manufacturing and clean-energy technologies.

read more

DOE announces first awardees for new HPC4Materials for 'Severe Environments'

January 11, 2018

The Department of Energy’s (DOE) Office of Fossil Energy (FE) today announced the funding of $450,000 for the first two private-public partnerships under a brand-new initiative aimed at discovering, designing and scaling up production of novel materials for severe environments.

read more

Lab-led HPC for Manufacturing project wins 'Innovation Excellence' award at SC17

November 22, 2017

An HPC for Manufacturing project aimed at saving time and money for paper product manufacturers earned an HPC Innovation Excellence Award at the 2017 SuperComputing Conference (SC17) in Denver on Nov. 14.

read more

Siting the Sierra Supercomputer

November 20, 2017

Work is moving fast and furious in the Livermore Computing Complex at Lawrence Livermore National Laboratory (LLNL), where siting and installation for Sierra, the Lab’s next advanced technology high-performance supercomputer, is kicking into high gear.

read more

HPCwire Award Winners Use LC Resources

November 17, 2017

Lawrence Livermore National Laboratory (LLNL) researchers won two HPCwire Editor’s Choice awards for their work in applying high-performance computing (HPC) to solve complex challenges. The awards were presented at SC17 in Denver.

read more

Sudden Changes at Ultra-High Pressure

October 31, 2017

Livermore physicist Jon Belof and a team of physicists, engineers, and computational scientists are subjecting matter to extreme conditions and simulating experiments with high-performance computers to study phase transitions at ultrahigh pressures.

read more

Exascale in motion on earthquake risks

October 12, 2017

Assessing large magnitude (greater than 6 on the Richter scale) earthquake hazards on a regional (up to 100 kilometers) scale takes big machines. To resolve the frequencies important to engineering analysis of the built environment (up to 10 Hz or higher), numerical simulations of earthquake motions must be done on today's most powerful computers.

read more

A quicker picker upper? Lab researchers eye papermaking improvements through HPC

October 4, 2017

Papermaking research, performed for an HPC4Manufacturing (HPC4Mfg) project with the papermaking giant Procter & Gamble, resulted in the largest multi-scale model of paper products to date, simulating thousands of fibers in ParaDyn with resolution down to the micron scale.

read more

Transforming electrical grid resiliency with distributed energy resources

September 28, 2017

Normally, in a large-scale emergency, distributed energy resources (DERs) -- such as the energy produced by solar panels at customers' homes -- are shut off to protect the greater electrical grid. But a new project headed by Lawrence Livermore National Laboratory (LLNL) aims to utilize these resources for restoration and recovery operations, boosting the grid's ability to bounce back from a blackout or cascading outage, and potentially reducing customer reconnection time to a matter of hours.

read more

Metamaterials: Using Supercomputers to Mold Electromagnetics

July 24, 2017

Sandia researchers modeled the electromagnetics of complex systems on two DOE supercomputers that can solve tens of millions of problems in hours: Trinity (LANL) and Sequoia (LLNL). This research has led to impressive advances in metamaterials research that will boost the substances’ flexibility, efficiency, adaptability and other properties.

read more

DOE's HPC4Mfg seeks industry proposals to advance energy tech

June 12, 2017

The U.S. Department of Energy's High Performance Computing for Manufacturing Program, designed to spur the use of national lab supercomputing resources and expertise to advance innovation in energy efficient manufacturing, is seeking a new round of proposals from industry to compete for $3 million.

read more

Accelerating Simulation Software with Graphics Processing Units

May 8, 2017

To address the challenges of transitioning to the next generation of high performance computing (HPC), Livermore is bringing together designers of hardware, software, and applications to rethink and redesign their HPC elements and interactions for the exascale era (i.e., systems capable of a billion billion floating-point operations per second, or 10^18 flops).

read more

Preparing for Sierra, LLNL's next state-of-the-art supercomputer

April 21, 2017

In late 2017, IBM will begin delivery of Sierra, the latest in a series of leading-edge Advanced Simulation and Computing (ASC) Program supercomputers. Delivering peak speeds of up to 150 petaflops (one petaflop is 10^15 floating-point operations per second), Sierra is projected to provide at least four to six times the performance of Sequoia, Livermore’s current flagship supercomputer. To run efficiently on Sierra, applications must be modified to achieve a level of task division and coordination well beyond what previous systems demanded.
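A quick back-of-the-envelope check, using only the figures quoted above (Sequoia’s 20-petaflop peak and Sierra’s projected 150-petaflop peak), shows why the projected four-to-six-times application speedup is notable:

```python
# Sketch comparing peak speeds quoted in this item.
# Figures: Sequoia peaks at 20 petaflops; Sierra at up to 150 petaflops.
PETAFLOP = 10**15  # floating-point operations per second

sequoia_peak = 20 * PETAFLOP
sierra_peak = 150 * PETAFLOP

peak_ratio = sierra_peak / sequoia_peak
print(f"Raw peak ratio: {peak_ratio:.1f}x")  # 7.5x
```

The projected four-to-six-times application performance sits below the 7.5x raw peak ratio, which underscores the article’s point: delivered speedup depends on restructuring applications for the new machine’s parallelism, not on peak flops alone.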

read more

Computational innovation boosts manufacturing

January 16, 2017

The DOE's High Performance Computing for Manufacturing (HPC4Mfg) Program aims to advance clean-energy technologies, increase the efficiency of manufacturing processes, accelerate innovation, reduce the time it takes to bring new technologies to market, and improve the quality of products. The program unites the world-class high-performance computing (HPC) resources and expertise of Lawrence Livermore and other national laboratories with U.S. manufacturers to deliver solutions that could revolutionize the manufacturing industry.

read more

Two LLNL CTS-1 cluster systems named to TOP500 supercomputer list

December 12, 2016

Two Penguin Computing systems installed at Lawrence Livermore National Laboratory, Quartz and Jade, were ranked 41st and 42nd on the TOP500 list of the world's fastest supercomputers. The announcement came during the SC16 supercomputing conference, held November 13–18, 2016, in Salt Lake City, Utah. The systems were procured under NNSA’s Tri-Laboratory Commodity Technology Systems program, or CTS-1, to bolster computing for national security at Los Alamos, Sandia, and Lawrence Livermore national laboratories.

read more

High-Performance Computing Takes Aim at Cancer

November 23, 2016

A historic partnership between the Department of Energy (DOE) and the National Cancer Institute (NCI) is applying the formidable computing resources at Livermore and other DOE national laboratories to advance cancer research and treatment. The effort will help researchers and physicians better understand the complexity of cancer, choose the best treatment options for every patient, and reveal possible patterns hidden in vast patient and experimental data sets.

read more

LLNL team’s molecular dynamics code among Gordon Bell Prize finalists

November 21, 2016

A Lawrence Livermore team's dramatically improved first-principles molecular dynamics code that promises to enable new computer simulation applications was one of the finalists for the 2016 Gordon Bell Prize. The team presented its ground-breaking project at the 2016 supercomputing conference (SC16), held in Salt Lake City, Utah, November 13–18, 2016. "Modeling Dilute Solutions using First-Principles Molecular Dynamics: Computing more than a Million Atoms with over a Million Cores" was the title of the Livermore team's submission for the competition. Using a robust new algorithm, the Livermore team has developed an O(N)-complexity solver for electronic structure problems with fully controllable numerical error.

read more

Lawrence Livermore to lead "co-design" center for exascale computing ecosystem

November 11, 2016

Lawrence Livermore National Laboratory (LLNL) was one of four national labs selected to lead a "co-design" center by the Department of Energy's Exascale Computing Project (ECP) as part of a four-year, $48 million funding award. Each co-design center will receive $3 million annually. LLNL's Tzanio Kolev is the director of the newly established ECP co-design Center for Efficient Exascale Discretizations (CEED). "Co-design" refers to the collaborative and interdisciplinary engineering process for developing capable exascale systems.

read more

Scientists selected to lead Exascale Computing Project software development

November 10, 2016

Lawrence Livermore scientists are among those who have been awarded funding to develop software for the Department of Energy's Exascale Computing Project (ECP). The ECP selected 35 software development proposals representing 25 research and academic organizations; Livermore computer scientists will lead six of the projects and are collaborators on another seven. The awards, totaling $34 million for the first year of funding, cover many components of the software stack for exascale systems.

read more

ASML taps Lawrence Livermore to develop extreme EUV for chip manufacturing

October 25, 2016

Lawrence Livermore National Laboratory (LLNL) and ASML Holding NV (ASML) have successfully established plasma simulation capabilities to advance extreme ultraviolet (EUV) light sources toward the manufacturing of next-generation semiconductors. Under a cooperative research and development agreement (CRADA), ASML is leveraging LLNL's expertise in lasers and plasma physics and the ability to perform complex, large-scale modeling and simulation using high-performance computing (HPC).

read more

Laboratory and Norwegian researchers collaborate to use HPC to improve cancer screening

October 5, 2016

Laboratory computer scientists and Norwegian researchers are collaborating to apply high performance computing (HPC) to the analysis of medical data to improve screening for cervical cancer. The convergence of high performance computing, big data, and life science is enabling the development of personalized medicine. "Delivering care tailored to the needs of the individual, rather than population averages, has the potential to transform the delivery of healthcare," says LLNL's Ghaleb Abdulla, Livermore lead on the collaboration with Norway and director of LLNL's Institute for Scientific Computing Research.

read more

National labs' researchers join effort to develop applications under Exascale Computing Project

September 9, 2016

The Department of Energy's (DOE) Exascale Computing Project (ECP) announced its first round of funding with the selection of 15 application development proposals for full funding and seven proposals for seed funding, representing teams from 45 research and academic organizations, including Lawrence Livermore National Laboratory (LLNL). The awards, totaling $39.8 million, target advanced modeling and simulation solutions to specific challenges supporting key DOE missions in science, clean energy, and national security.

read more

DOE HPC4Mfg Program to help jumpstart clean energy technologies

August 30, 2016

A U.S. Department of Energy (DOE) program designed to spur the use of high performance supercomputers to advance U.S. manufacturing has funded 13 new industry projects for a total of $3.8 million. Experts at Lawrence Livermore and other DOE national laboratories will work directly with manufacturing industry members to teach them how to adopt or advance their use of high performance computing (HPC) to address manufacturing challenges with a goal of increasing energy efficiency, reducing environmental impacts, and advancing clean energy technologies.

RAND and LLNL partner to demonstrate water resource management

August 25, 2016

Researchers from the RAND Corporation and Lawrence Livermore National Laboratory (LLNL) have joined forces to combine high-performance computing with innovative public policy analysis to improve planning for particularly complex issues such as water resource management. Building on previous work conducted by RAND on the Colorado River Basin in 2012, RAND and the High Performance Computing Innovation Center (HPCIC) at LLNL hosted a joint workshop to employ high-performance computer simulations to stress-test several water management strategies over a vast number of plausible future scenarios in near real time.

read more

Energy Department to invest $16 million to accelerate computer design of materials

August 16, 2016

The Department of Energy will invest $16 million over the next four years to accelerate the design of new materials through the use of supercomputers. Two four-year projects will take advantage of superfast computers at DOE national laboratories by developing software to design fundamentally new functional materials destined to revolutionize applications in alternative and renewable energy, electronics, and a wide range of other fields. Lawrence Livermore will use its Vulcan supercomputer to study transition metal oxides with the goal of developing new theoretical methods and simulation capabilities for the predictive calculation of the properties of complicated materials for energy applications.

read more

LLNL dedicates new unclassified supercomputer facility

June 29, 2016

On June 29, 2016, officials from the Department of Energy's National Nuclear Security Administration (NNSA) and local government representatives dedicated a new supercomputing facility at LLNL. Charles Verdon, LLNL principal associate director for Weapons and Complex Integration (WCI), presided over the ceremony. The $9.8 million modular and sustainable facility provides the Laboratory flexibility to accommodate future advances in computer technology and meet a rapidly growing demand for unclassified high-performance computing (HPC).

The new dual-level building consists of a 6,000-square-foot machine floor flanked by support space. The main computer structure is flexible in design to allow for expansion and the testing of future computer technology advances. In-house modeling and simulation expertise in energy-efficient building design was used in drawing up the specifications for the facility, including heating, ventilation, and air conditioning systems to meet federal sustainable design requirements to promote energy conservation. The flexible design will accommodate future liquid cooling solutions for HPC systems.

read more

HPC to play major role in Cancer Moonshot initiative

June 29, 2016

In January 2016, President Obama signed a Presidential Memorandum establishing a first-of-its-kind federal task force to end cancer as we know it. Led by Vice President Joe Biden, the Cancer Moonshot is a national effort to double the rate of progress—to make a decade's worth of advances in cancer prevention, diagnosis, treatment, and care in five years—and to ultimately end cancer. Lawrence Livermore National Laboratory's (LLNL) high-performance computing will play a major role in the initiative.

The Department of Energy (DOE) has launched three new pilot projects focused on bringing together nearly 100 cancer researchers, care providers, computer scientists and engineers to apply the nation's most advanced supercomputing capabilities to analyze data from preclinical models in cancer, molecular interaction data, and cancer surveillance data across four DOE national laboratories. The Collaboration of Oak Ridge, Argonne and Livermore (CORAL) supercomputing systems, led by Lawrence Livermore and typically used for stockpile stewardship, will be applied to biology to refine the understanding of the mechanisms leading to cancer development and to accelerate the development of promising therapies that are more effective and less toxic.

read more

Preparing for the power demands of exascale systems

June 9, 2016

When considering the challenges of exascale computing, having sufficient power is right at the top of the list. Sequoia, which at 20 petaflops is currently our top HPC system, draws more than 9 MW of power—equivalent to the energy draw of more than 1,000 average homes. Systems like Sierra, Livermore’s next advanced technology HPC system that is spec’d at 120-150 petaflops peak, could potentially draw three times as much.

When tens of megawatts of power are on the line, advanced power management is needed to balance highly fluctuating power demands against power availability. This requires orchestration of resources and real-time insight into the entire operational facility and energy grid. Even small interruptions during high-performance compute cycles can derail a job and disrupt power-grid management as well. To mitigate potential problems, LLNL has turned to OSIsoft, a company with deep roots in data collection, aggregation, and storage. OSIsoft helps LLNL track and analyze streams of operational data from computing racks, cooling systems, energy utilities and other equipment and stores it at a central control point for the life of the assets.
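The figures quoted above (Sequoia's 20 petaflops against a draw of more than 9 MW) imply a rough efficiency baseline. This is a back-of-the-envelope sketch, not an official measurement:

```python
# Rough energy-efficiency estimate from the figures quoted above:
# Sequoia delivers 20 petaflops while drawing more than 9 MW.
PETAFLOP = 10**15

flops = 20 * PETAFLOP   # Sequoia peak, flops
power_watts = 9.0e6     # approximate draw, watts

gflops_per_watt = flops / power_watts / 1e9
print(f"~{gflops_per_watt:.1f} gigaflops per watt")  # ~2.2
```

By the same arithmetic, a Sierra-class machine at 120-150 petaflops drawing roughly three times the power would land around 4-6 gigaflops per watt, which illustrates why power efficiency, not just raw capacity, dominates exascale planning.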

read more

Lawrence Livermore to receive brain-inspired TrueNorth supercomputer

March 29, 2016

Lawrence Livermore will receive a first-of-a-kind brain-inspired supercomputing platform for deep learning developed by IBM Research. Based on a breakthrough neurosynaptic computer chip called IBM TrueNorth, the scalable platform will process the equivalent of 16 million neurons and 4 billion synapses and consume the energy equivalent of a hearing aid battery—a mere 2.5 watts of power.

read more

Shaving Time to Test Antidotes for Nerve Agents

March 1, 2016

Powered by Livermore Computing's world-class supercomputers, Vulcan and Sequoia, LLNL researchers are currently simulating the energy requirements for candidate drug molecules to permeate cell membranes, shaving weeks off compound testing by determining in advance how readily they’ll enter cells to perform their activity.

read more

DOE funds "HPC for manufacturing" projects

February 17, 2016

Lawrence Livermore National Laboratory (LLNL) and partners announced 10 new industry projects to advance manufacturing using high-performance computing (HPC) under a DOE program. Industry projects ranging from improved turbine blades for aircraft engines and reduced heat loss in electronics to waste reduction in paper manufacturing and improved fiberglass production are among the first to be selected for funding and partnerships with national labs under the U.S. Department of Energy's (DOE) High Performance Computing for Manufacturing (HPC4Mfg) Program. LLNL leads the program and partners with Lawrence Berkeley and Oak Ridge National Laboratories (LBNL and ORNL).

Each of the 10 Phase I projects will be funded at approximately $300,000 for a total of just under $3 million. Selected companies will partner with national labs, which will provide expertise in and access to high performance computing systems aimed at high-impact challenges. The Advanced Manufacturing Office (AMO) within DOE's Office of Energy Efficiency and Renewable Energy (EERE) created this program to advance clean energy technologies, increase the efficiency of manufacturing processes, accelerate innovation, shorten the time it takes to bring new technologies to market, and improve the quality of products.

read more

Sequoia enables Gordon Bell prize-winning simulation

December 7, 2015

The full power of Lawrence Livermore’s Sequoia supercomputer played a key role in the Earth mantle convection simulation that won the 2015 Gordon Bell Prize, announced at SC15 Supercomputing Conference. LLNL’s onsite IBM analyst Roy Musselman and Livermore Computing’s Scott Futral were acknowledged for their contributions to the project in carrying out the Sequoia calculations.

Published in Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. Earth simulation images courtesy of J. Rudi et al.

HPCwire Editor’s Choice Award given to tri-lab collaboration

November 19, 2015

The collaboration of Oak Ridge, Argonne and Lawrence Livermore (CORAL) that will bring the Sierra supercomputer to the Lab in 2018 has been recognized by HPCwire with an Editor’s Choice Award for Best HPC Collaboration between Government and Industry.

read more

Sequoia simulation unveils whole-body blood flow

November 15, 2015

Livermore scientists and collaborators accomplish three-dimensional, high-resolution simulations of whole-body blood flow on 1,572,864 cores of Blue Gene/Q, also known as Sequoia. Blood flow simulations of hemodynamics in the systemic arterial tree can potentially have a tremendous impact on the diagnosis and treatment of patients suffering from vascular disease.

read more

State grant enables energy-saving retrofit

November 12, 2015

Supercomputers at Lawrence Livermore National Laboratory (LLNL) will be retrofitted with liquid cooling systems under a California Energy Commission (CEC) grant to assess potential energy savings. “Reducing power consumption and the associated costs at data centers and high performance computing facilities is a leading concern of the HPC community,” says Anna Maria Bailey, LLNL high performance computing facility manager. “Addressing this issue is critical as we develop ever more powerful next-generation supercomputers.”

read more

Livermore awards contract to provide 7+ petaflops of capacity computing

October 21, 2015

Penguin Computing will receive $39 million to provide more than 7 petaflops of “capacity” systems, called the Commodity Technology Systems-1 (CTS-1), to Los Alamos, Sandia and Lawrence Livermore national laboratories. The new systems will leverage the Open Compute Tundra Extreme Scale (ES) series architecture to support national security workloads at the three national laboratories.

LLNL and Rensselaer Polytechnic Institute to promote industry adoption of supercomputing

September 16, 2015

Lawrence Livermore National Laboratory (LLNL) and the Rensselaer Polytechnic Institute (RPI) will combine decades of expertise to help American industry and businesses expand use of high performance computing (HPC) under a signed memorandum of understanding. Livermore and RPI will look to bridge the gap between the levels of computing conducted at their institutions and the typical levels found in industry. Scientific and engineering software applications capable of running on HPC platforms are a prime area of interest.

read more

LLNL breaks ground on unclassified supercomputing facility

May 28, 2015

Lawrence Livermore National Laboratory broke ground today on a modular and sustainable supercomputing facility that will provide a flexible infrastructure able to accommodate the Laboratory’s growing demand for high performance computing (HPC).

The $9.875 million building, located on the Laboratory’s east side, will ensure computer room space to support the Advanced Simulation and Computing (ASC) Program’s unclassified HPC systems. ASC is the high-performance simulation effort of the National Nuclear Security Administration’s (NNSA) stockpile stewardship program to ensure the safety, security and reliability of the nation’s nuclear deterrent without testing.

“Unclassified high performance computing is critical to the stockpile stewardship program’s success and the need for this capability will continue to grow,” said Laboratory Director Bill Goldstein. “Modernizing the Lab’s computing infrastructure will enable us to better exploit next-generation supercomputers for NNSA by tapping the talents of top academic and private sector partners.”

read more

Revolutionary processing-in-memory architecture on the horizon

March 1, 2015

EVERYTHING changes. Nowhere is that maxim more apparent than in the world of computing. From smartphones and tablets to mainframes and supercomputers, the system architecture—how a machine’s nodes and network are designed—evolves rapidly as new versions replace old. As home computer users know, systems can change dramatically between generations, especially in a field where five years is a long time. Computational scientists at Lawrence Livermore and other Department of Energy (DOE) national laboratories must continually prepare for the next increase in computational power so that the transition to a new machine does not arrest efforts to meet important national missions.

That next jump in power will be a big one, as new machines begin to approach exascale computing. Exascale systems will process 10^18 floating-point operations per second (flops), making them 1,000 times faster than the petascale systems that arrived in the late 2000s. Computational scientists will need to address a number of high-performance computing (HPC) challenges to ensure that these systems can meet the rising performance demands and operate within strict power constraints.

read more

Multi-center semi-empirical quantum models for carbon

February 16, 2015

We report on the development of many-body density functional tight binding (DFTB) models for carbon, which include either explicit or implicit calculation of multi-center terms in the Hamiltonian. We show that all of our methods yield accurate eigenstates and eigenfunctions for both ambient diamond and transitions to molten, metallic states. We then determine a three-body repulsive energy to compute accurate equation of state and structural properties for carbon under these conditions. Our results indicate a straightforward approach by which many-body effects can be included in DFTB, thus extending the method to a wide variety of systems and thermodynamic conditions.

read more

Research and visualization featured on Chemical Physics Letters cover

February 16, 2015

read more

LLNL trained a neural network with 15 billion parameters

February 16, 2015

We present a work-in-progress snapshot of learning with a 15-billion-parameter deep learning network on HPC architectures, applied to the largest publicly available natural image and video dataset released to date. Recent advancements in unsupervised deep neural networks suggest that scaling up such networks in both model and training dataset size can yield significant improvements in the learning of concepts at the highest layers. We train our three-layer deep neural network on the Yahoo! Flickr Creative Commons 100M dataset.
