Sierra, Livermore’s latest advanced technology high-performance computing system, joined LLNL’s lineup of supercomputers in 2018. The system provides computational resources that are essential for nuclear weapon scientists to fulfill the National Nuclear Security Administration’s stockpile stewardship mission through simulation in lieu of underground testing. Advanced Simulation and Computing (ASC) Program scientists and engineers use Sierra to assess the performance of nuclear weapon systems and to run nuclear weapon science and engineering calculations. These calculations are necessary to understand key physics issues, knowledge that later makes its way into the integrated design codes. This work on Sierra also has important implications for other global and national challenges, such as nonproliferation and counterterrorism.
The IBM-built Sierra supercomputer provides more than six times the sustained throughput performance and more than five times the sustained scalable science performance of Sequoia, with a peak of 125 petaflops. Sierra, which combines two types of processor chips (IBM Power9 CPUs and NVIDIA Volta graphics processing units), is also more than five times more power efficient than Sequoia, with a peak power consumption of approximately 11 megawatts.
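As a reader's check rather than an official breakdown, the 125-petaflop peak quoted above is simply the sum of the CPU and GPU peaks listed in the system details table below, and the GPU share works out to roughly 7 double-precision teraflops per V100:

$$
4.666\ \mathrm{PFLOPS\ (CPUs)} + 120.960\ \mathrm{PFLOPS\ (GPUs)} = 125.626\ \mathrm{PFLOPS\ (peak)},
\qquad
\frac{120.960\ \mathrm{PFLOPS}}{17{,}280\ \mathrm{GPUs}} \approx 7\ \mathrm{TFLOPS\ per\ GPU}
$$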
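Separately, as a purely illustrative sketch (not taken from LC documentation), the short CUDA program below shows how a user could confirm the per-node GPU layout given in the system details table, where each compute node presents four Volta devices. The file name, messages, and error handling are assumptions made for illustration only.

```cuda
// query_gpus.cu (illustrative name) -- list the GPUs visible on one node.
// Build with: nvcc query_gpus.cu -o query_gpus   (assumes a CUDA toolkit is available)
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    // On a Sierra compute node this should report the 4 GPUs per node listed in the table.
    std::printf("Visible GPUs: %d\n", count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, i) != cudaSuccess) continue;
        std::printf("  GPU %d: %s, %.1f GiB memory, %d SMs\n",
                    i, prop.name,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                    prop.multiProcessorCount);
    }
    return 0;
}
```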
| Zone | SCF |
| Vendor | IBM |
| User-Available Nodes | Login Nodes*: 5; Batch Nodes: 4,284; Debug Nodes: 36; Total Nodes: 4,474 |
| CPUs | Architecture: IBM Power9; Cores/Node: 44; Total Cores: 190,080 |
| GPUs | Architecture: NVIDIA V100 (Volta); Total GPUs: 17,280; GPUs per Compute Node: 4 |
| Memory Total (GiB) | 1,382,400 |
| CPU Memory/Node (GiB) | 256 |
| Peak Performance | CPUs: 4.666 PFLOPS; GPUs: 120.960 PFLOPS; CPUs+GPUs: 125.626 PFLOPS |
| Clock Speed (GHz) | 3.4 |
| Peak single-CPU memory bandwidth (GiB/s) | 170 |
| OS | RHEL |
| Interconnect | IB EDR |
| Scheduling Policy (main batch queue) | node-scheduled |
| Scheduler | LSF |
| Recommended location for parallel file space | /p/gpfs1 |
| Program | ASC |
| Class | ATS-2, CORAL-1 |
| Year Commissioned | 2018 |
| Compilers | |
| Documentation | |