NOTE: This system currently has limited availability.

El Capitan resides in the CZ while undergoing usability testing. When fully deployed, it will be moved to the SCF (see documentation on the LC Zones page).

A comparison of ATS-4 machines

|                               | El Capitan | Tuolumne | El Dorado* | RZAdams |
| ----------------------------- | ---------- | -------- | ---------- | ------- |
| Nodes                         | 11,136     | 1,152    | 384        | 128     |
| MI300As per node              | 4          | 4        | 4          | 4       |
| MI300As total                 | 44,544     | 4,608    | 1,536      | 512     |
| Node Peak (DP TFLOP/s)        | 250.8      | 250.8    | 250.8      | 250.8   |
| System Peak (DP PFLOP/s)      | 2,792.9    | 288.4    | 96.3       | 32.1    |
| System Peak (SP PFLOP/s)      | 3,688.2    | 381.5    | 127.2      | 42.4    |
| System Peak (HP PFLOP/s)      | 17,639.4   | 1,824.8  | 608.2      | 202.8   |
| Node Memory (GiB, all HBM3)   | 512        | 512      | 512        | 512     |
| System Memory (TiB)           | 5,568      | 576      | 192        | 64      |
| Compute cabinets              | 87         | 9        | 3          | 1       |
| Peak Power (MW)               | 34.8       | 3.6      | 1.2        | 0.4     |
| Total Rabbit Modules          | 696        | 72       | 24         | 8       |
| November 2024 Top500 position | 1          | 10       | 20         | 49      |


*El Dorado is sited at Sandia National Laboratories.
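Because every ATS-4 node uses the same MI300A configuration (250.8 DP TFLOP/s, 4 APUs, 512 GiB of HBM3 per node), the system-level rows in the table are essentially the per-node figures multiplied by the node count. A minimal sketch of that arithmetic is below; the per-node values and node counts are copied from the table, while the unit conversions and rounding are our own, so results may differ slightly from the published figures.

```python
# Sketch: scale the per-node figures from the comparison table up to system level.
# Node counts and per-node values are copied from the table above; the unit
# conversions (TFLOP/s -> PFLOP/s, GiB -> TiB) and rounding are assumptions.
machines = {"El Capitan": 11_136, "Tuolumne": 1_152, "El Dorado": 384, "RZAdams": 128}

NODE_PEAK_DP_TFLOPS = 250.8   # peak DP TFLOP/s per node
APUS_PER_NODE = 4             # MI300As per node
NODE_MEMORY_GIB = 512         # HBM3 per node

for name, nodes in machines.items():
    system_peak_pflops = nodes * NODE_PEAK_DP_TFLOPS / 1_000  # TFLOP/s -> PFLOP/s
    total_apus = nodes * APUS_PER_NODE
    system_memory_tib = nodes * NODE_MEMORY_GIB / 1_024       # GiB -> TiB
    print(f"{name}: {system_peak_pflops:,.1f} DP PFLOP/s, "
          f"{total_apus:,} MI300As, {system_memory_tib:,.0f} TiB of HBM3")
```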

El Capitan In-Depth

Zone: SCF
Vendor: HPE Cray

User-Available Nodes
  Login Nodes*: 32 nodes, elcap[1001-1016,12121-12136]
  Batch Nodes: 11,039
  Debug Nodes: 64
  Total Nodes: 11,103

APUs
  APU Architecture: AMD MI300A

CPUs
  CPU Architecture: 4th Generation AMD EPYC
  Cores/Node: 96
  Total Cores: 1,065,888

GPUs
  GPU Architecture: CDNA 3
  Total GPUs: 44,412
  GPUs per compute node: 4
  GPU peak performance (TFLOP/s, double precision): 68.00
  GPU global memory (GB): 512.00

Memory Total (GB): 5,684,736
Clock Speed (GHz): 2.0
OS: TOSS 4
Interconnect: HPE Slingshot 11
Parallel job type: multiple nodes per job
Recommended location for parallel file space:
Class: ATS-4, CORAL-2
Password Authentication: OTP, Kerberos, ssh keys
Year Commissioned: 2024
Compilers:
Documentation:
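The aggregate figures above follow from the per-node values multiplied by the node count: reading "Total Nodes" as batch plus debug nodes (11,039 + 64 = 11,103) reproduces the listed core, GPU, and memory totals. The sketch below checks that bookkeeping; the per-node values are copied from the table, and the batch-plus-debug reading is our inference from the listed counts, not an official definition.

```python
# Sketch: consistency check of the aggregate figures in "El Capitan In-Depth".
# Per-node values and node counts are copied from the table; treating Total
# Nodes as batch + debug nodes is an inference from the listed counts.
batch_nodes = 11_039
debug_nodes = 64
total_nodes = batch_nodes + debug_nodes              # 11,103

cores_per_node = 96
gpus_per_node = 4
memory_per_node_gb = 512

print("Total Nodes:      ", f"{total_nodes:,}")                        # 11,103
print("Total Cores:      ", f"{total_nodes * cores_per_node:,}")       # 1,065,888
print("Total GPUs:       ", f"{total_nodes * gpus_per_node:,}")        # 44,412
print("Memory Total (GB):", f"{total_nodes * memory_per_node_gb:,}")   # 5,684,736
```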