Corona Supercomputer

These stats are necessarily simplified. See notes below.

NOTE: Because this machine contains a mix of GPU types, some of the fields that would normally hold a single value cannot be shown in the summary below. Here is additional information:

GPU peak performance (TFLOP/s double precision): MI60 = 7.4

GPU Memory/Node (GB): MI60 = 32 GB

Peak TFLOPs (GPUs): MI60 = 2,427 TF

In addition, each node has local NVRAM storage mounted as /l/ssd (GB): 1500
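
As a sketch of how the node-local SSD might be used inside a job (the file name below is a placeholder, not part of the system configuration):

df -h /l/ssd              # confirm the node-local SSD mount and its free space
cp input.dat /l/ssd/      # stage a hypothetical input file onto fast local storage
export TMPDIR=/l/ssd      # direct temporary files to the local SSD for this job step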

Job Limits

Each LC platform is a shared resource. Users are expected to adhere to the following usage policies to ensure that the resources can be effectively and productively used by everyone. You can view the policies on a system itself by running:

news job.lim.MACHINENAME
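
For example, on Corona the placeholder becomes the machine's name (assuming the news item follows the naming convention above):

news job.lim.corona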

Web version of Corona Job Limits

Hardware

There are 121 compute nodes, each with 256 GB of memory. All compute nodes have AMD Rome processors with 48 cores/node. Each compute node has 8 AMD MI50 GPUs with 32 GB of memory. The nodes are interconnected via InfiniBand QDR (QLogic).
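
As a rough way to confirm these per-node resources from a compute node, standard Linux and ROCm command-line tools can be used (a sketch; rocm-smi availability depends on the node's ROCm installation):

lscpu        # processor model and core count
free -g      # node memory in GB
rocm-smi     # enumerate the AMD GPUs on the node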

Scheduling

Corona jobs are scheduled through Flux. Slurm wrappers are also loaded by default.

Jobs are scheduled per node. All nodes are in one queue.

The maximum time limit is 24 hours.
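
A minimal Flux submission sketch, assuming a batch script named run.sh (the script name, node count, and times are placeholders; see the Flux documentation for the full option set):

# Submit a two-node batch job at the 24-hour maximum time limit
flux batch -N 2 -t 24h ./run.sh

# Or start an interactive one-node allocation for an hour
flux alloc -N 1 -t 1h

Because Slurm wrappers are loaded by default, equivalent sbatch/salloc-style submissions may also work.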

For more information about running on Corona, see: https://lc.llnl.gov/confluence/display/LC/Compiling+and+running+on+Coro…

Scratch Disk Space: see the CZ File Systems web page at https://lc.llnl.gov/fsstatus/fsstatus.cgi

Documentation

Contact

Please call or send email to the LC Hotline if you have questions.

LC Hotline
Phone: 925-422-4531
Email: lc-hotline@llnl.gov

Zone: CZ
Vendor: Penguin

User-Available Nodes
Login Nodes*: 3
Batch Nodes: 105
Debug Nodes: 16
Total Nodes: 121

CPUs
CPU Architecture: 121 nodes, AMD Rome
Cores/Node: 48
Total Cores: 5,808

GPUs
GPU Architecture: 121 nodes, 8x AMD MI50
Total GPUs: 968
GPUs per compute node: 8
GPU peak performance (TFLOP/s, double precision): 10,948.00
GPU global memory (GB): 3,872.00

Memory Total (GB): 34,848
CPU Memory/Node (GB): 256
Peak single CPU memory bandwidth (GB/s): 153

OS: TOSS 4
Interconnect: IB HDR
Parallel job type: multiple nodes per job
Recommended location for parallel file space:
Program: ASC, M&IC, CARES
Class: Other
Password Authentication: OTP, Kerberos
Year Commissioned: 2019
Compilers

See Compilers page