Lassen

Lassen is similar to the classified Sierra system, but smaller in size: 23 petaflops peak performance vs. Sierra's 125 petaflops. Lassen was ranked #10 on the June 2019 Top500 list.

* Login nodes: lassen[708-709]
NOTE: Most numbers below are for compute nodes only, not login or service nodes.

Job Limits

Each LC platform is a shared resource. Users are expected to adhere to the following usage policies to ensure that the resources can be effectively and productively used by everyone. You can view the policies on a system itself by running:

news job.lim.MACHINENAME
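
For example, to display the current limits while logged into Lassen itself:

news job.lim.lassen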

Web version of Lassen Job Limits

There are 788 compute nodes, each with 40 usable POWER9 cores, 4 NVIDIA Volta V100 GPUs, and 256 GB of memory.

Jobs are scheduled per node. Lassen has three scheduling pools (queues); an example job submission is sketched after the notes below.

  •  pdebug—36 nodes
  •  pwdev—12 nodes
  •  pbatch—740 nodes
Queue        Max nodes / job    Max runtime
--------------------------------------------
pdebug           18 (*)            2 hrs
pwdev          see (**)           12 hrs (**)
pbatch            256              12 hrs
--------------------------------------------

(*) pdebug is intended for debugging, visualization, and other inherently interactive work. It is NOT intended for production work. Do not use pdebug to run batch jobs. Do not chain jobs to run one after the other. Do not use more than half of the nodes during normal business hours. Individuals who misuse the pdebug queue in this or any similar manner will be denied access to it.

(**) pwdev is for SD code developers to run short compile, debugging, and CI jobs. Only users in the pwdev bank have access to this pool.

  • 3 nodes max per user during daytime hours
  • jobs can be up to 4 hours during daytime hours
  • daytime hours are 0800-1800 California time Mon-Fri
  • to prevent runaway jobs, there's a technical maximum per job of 12 hours
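
As a sketch of how submissions to these queues look (assuming the IBM Spectrum LSF bsub interface used on LC's CORAL-class systems; the bank name "myBank" and the script name are placeholders for your own):

# Interactive debug session: 1 node in pdebug for 30 minutes
bsub -Is -nnodes 1 -W 30 -q pdebug -G myBank /bin/bash

# Batch production run: 16 nodes in pbatch for 12 hours (-W takes [hours:]minutes)
bsub -nnodes 16 -W 12:00 -q pbatch -G myBank ./run_my_job.sh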

Hardware

Each node has two 22-core 3.45 GHz IBM POWER9 processors. Two of the cores on each socket are reserved for system use, leaving 40 usable cores per node. The vast majority of each node's cycles are provided by its four NVIDIA Volta V100 GPUs. Each node also has 256 GB of system memory and 64 GB of GPU memory (16 GB per GPU). The nodes are connected by Mellanox EDR InfiniBand.
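
A minimal launch sketch, assuming the jsrun launcher used on CORAL-class systems (the executable name ./my_app is a placeholder), maps one MPI rank to each GPU and splits the 40 usable cores evenly:

# 2 nodes, 4 resource sets per node, each with 1 task, 10 cores, and 1 GPU
jsrun -n 8 -r 4 -a 1 -c 10 -g 1 -b rs ./my_app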

Contact

Please call or send email to the LC Hotline if you have questions.

LC Hotline | phone: 925-422-4531 | email: lc-hotline@llnl.gov

Zone                                               CZ
Vendor                                             IBM
User-Available Nodes
  Login Nodes*                                     2
  Batch Nodes                                      756
  Total Nodes                                      795
CPUs
  CPU Architecture                                 IBM POWER9
  Cores/Node                                       44
  Total Cores                                      34,848
GPUs
  GPU Architecture                                 NVIDIA V100 (Volta)
  Total GPUs                                       3,168
  GPUs per compute node                            4
  GPU peak performance (TFLOP/s double precision)  7.00
Memory Total (GB)                                  253,440
CPU Memory/Node (GB)                               256
Peak Performance
  Peak TFLOPS (CPUs)                               855.0
  Peak TFLOPS (GPUs)                               22,192.2
  Peak TFLOPS (CPUs+GPUs)                          23,047.20
Clock Speed (GHz)                                  3.5
Peak single CPU memory bandwidth (GB/s)            170
OS                                                 RHEL
Interconnect                                       IB EDR
Recommended location for parallel file space
Program                                            ASC+M&IC
Class                                              ATS-2, CORAL-1
Year Commissioned                                  2018