*The Dane login nodes are node numbers 1, 2, 3, 4, 5, 6, 7, and 16.

Total cores: 112/node

Total threads: 224/node

Max turbo frequency: 3.8 GHz

Cache: 105 MB/CPU; 210 MB/node

Memory type: DDR5-4800

Number of memory channels: 8/CPU; 16/node

Thermal Design Power (TDP): 350 W/CPU

Job Limits

Each LC platform is a shared resource. Users are expected to adhere to the following usage policies to ensure that the resources can be used effectively and productively by everyone. You can view the policies on the system itself by running:

news job.lim.dane

Web Version of Dane Job Limits

Each Dane node is based on the Intel Sapphire Rapids processor, with 56 cores per socket, 2 sockets per node, and 256 GB of DDR5 memory.
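
One way to confirm this layout from a shell on a node is lscpu (a quick sanity check, not an official procedure):

lscpu | grep -E '^(CPU\(s\)|Thread|Core|Socket)'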

Scheduling

Batch jobs are scheduled through SLURM.

  • pdebug—40 nodes (4480 cores), interactive use only.
  • pbatch—1456 nodes (163072 cores), batch use only.
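
For example, a batch job can be directed to a specific pool by naming the partition with -p. A minimal sketch (the node count, time limit, and script name myjob.sh are placeholders):

# Request 4 pbatch nodes for 12 hours
sbatch -p pbatch -N 4 -t 12:00:00 myjob.sh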

Pools               Max nodes/user       Max runtime
---------------------------------------------------
 pdebug                    20(*)            1 hour
 pbatch                   520              24 hours
---------------------------------------------------

(*) Jobs in pdebug are limited to 20 nodes on a PER USER basis, not a PER JOB basis, to allow other users access. Pdebug is scheduled using fairshare and jobs are core-scheduled, not node-scheduled.

Do NOT run computationally intensive work on the login nodes. There are a limited number of login nodes, and they are meant primarily for editing files and launching jobs. When a login node is laggy, it is usually because a user has started a compile on it.

Pdebug is intended for debugging, visualization, and other inherently interactive work. It is not intended for production work. Do not use pdebug to run batch jobs, and do not chain jobs to run one after another. Individuals who misuse the pdebug queue in this or any similar manner may be denied access to it.
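
Interactive pdebug work is typically started with salloc rather than sbatch; a minimal sketch (the node count and time limit are illustrative):

# Start an interactive session on one pdebug node for 30 minutes
salloc -p pdebug -N 1 -t 30:00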

Pdebug nodes are core-scheduled. To allocate whole nodes, add the --exclusive flag to your sbatch or salloc command, as in the example below.
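
For example (the pool, node count, and time limit are illustrative):

# Allocate 2 whole pdebug nodes, rather than individual cores, for 1 hour
salloc -p pdebug -N 2 -t 1:00:00 --exclusive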

Interactive access to a batch node is allowed while you have a batch job running on that node, and only for the purpose of monitoring your job. When logging into a batch node, be mindful of the impact your work has on the other jobs running on the node.
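
To see which nodes your running jobs occupy before logging in, squeue can list them; a sketch (the node name dane123 below is hypothetical):

# List your running jobs and the nodes they occupy
squeue -u $USER -t RUNNING -o '%.10i %.9P %N'
# Then ssh to one of the listed nodes to monitor your job
ssh dane123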

Scratch Disk Space: Consult the CZ File Systems web page.

Contact

Please call or send email to the LC Hotline if you have questions. Phone: 925-422-4531 | Email: lc-hotline@llnl.gov

Zone                          CZ
Vendor
User-Available Nodes
  Login Nodes*                8 nodes: dane[1,2,3,4,5,6,7,16]
  Batch Nodes                 1,496
  Total Nodes                 1,544
CPUs
  CPU Architecture            Intel Sapphire Rapids
  Cores/Node                  112
  Total Cores                 167,552
Memory Total (GB)             382,976
CPU Memory/Node (GB)          256
Peak Performance
  Peak TFLOPS (CPUs)          10,723.0
Clock Speed (GHz)             2.0
OS                            TOSS4
Interconnect                  Cornelis Networks
Parallel job type             multiple nodes per job
Recommended location for parallel file space
Program                       ASC, M&IC
Password Authentication      OTP, Kerberos, ssh keys
Compilers                     See Compilers page