[Image: Bengal supercomputer]

Job Limits

Each LC platform is a shared resource. Users are expected to adhere to the following usage policies to ensure that the resources can be effectively and productively used by everyone. You can view the policies directly on a system by running:

news job.lim.MACHINENAME
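
For example, to display the job limits for Bengal itself (assuming the system's short name is simply bengal):

news job.lim.bengal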

Web version of Bengal Job Limits

There are six machines available on the SCF. Please consider your needs to decide which machine is best suited to your work.

  • Agate—a 48-node machine for up to 36-core jobs (no interconnect)
  • Bengal—a machine for small to large jobs (24-hour job limit)
  • Mica—a small machine for debugging (8-hour job limit)
  • Jadeita—a machine for small jobs
  • Magma—a machine for small to medium jobs
  • Jade—a machine for large jobs (4 jobs/user limit)

Bengal is now in Limited Availability mode, and all jobs are scheduled per node in one pool. Remember that you can run using the guests bank to obtain additional resources. Use of the guests bank will not impact your normal bank priority.
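
For example, a job could be submitted against the guests bank as follows (a sketch assuming banks are specified through Slurm's standard -A/--account option; myjob.sh is a placeholder script name):

sbatch -A guests myjob.sh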

Hardware

Each Bengal node is based on the Intel Sapphire Rapids processor, with 56 cores per socket, 2 sockets per node (112 cores per node), and 256 GB of DDR5 memory.

Scheduling

  • pdebug—40 nodes (4,480 cores), interactive use only.
  • pbatch—1072 nodes (120,064 cores), batch use only.
Pools               Max nodes/job       Max runtime
---------------------------------------------------
pdebug                    20(*)            1 hour
pbatch                   110(**)          36 hours
---------------------------------------------------

(*) pdebug is intended for debugging, visualization, and other inherently interactive work. It is NOT intended for production work. Do not use pdebug to run batch jobs, and do not chain jobs to run one after another. Individuals who misuse the pdebug queue in this or any similar manner will be denied access to the pdebug queue.
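
For example, a one-node interactive debugging session within the pdebug limits might be requested as follows (a sketch using the standard Slurm salloc command; the one-hour time limit matches the pdebug maximum above):

salloc -p pdebug -N 1 -t 1:00:00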

(**) pbatch is for large jobs.

  • Jobs are node scheduled
  • There is a limit of 110 nodes per user at any one time (see the example submission below)
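
As referenced above, a minimal pbatch submission that stays within these limits might look like the following (a sketch only; the node count, time limit, and executable name are illustrative assumptions):

#!/bin/bash
#SBATCH -p pbatch
#SBATCH -N 64                  # nodes requested; must stay within the 110-node per-user limit
#SBATCH -t 24:00:00            # within the 36-hour pbatch maximum

srun -N 64 -n 7168 ./my_app    # 64 nodes x 112 cores/node = 7,168 tasks; my_app is a placeholder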

Interactive access to a batch node is allowed only while you have a batch job running on that node, and only for the purpose of monitoring your job. When logging into a batch node, be mindful of the impact your work has on the other jobs running on the node.
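
To see which batch nodes your job currently occupies before logging into one, use the standard Slurm queue listing; the NODELIST column shows the nodes assigned to each of your jobs:

squeue -u $USER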

SBATCH SCRIPT:  #SBATCH  -n  number_of_cores

ALWAYS use the sbatch -n option when your job will use more than one task.
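
For example, a minimal batch script using the -n option might look like the following (a sketch only; the task count, partition, time limit, and executable name are illustrative assumptions):

#!/bin/bash
#SBATCH -n 224                 # number of tasks (two full nodes at 112 cores/node)
#SBATCH -p pbatch
#SBATCH -t 02:00:00

srun -n 224 ./my_parallel_app  # my_parallel_app is a placeholder executable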

Zone                                            SCF
Vendor                                          Dell
User-Available Nodes
    Login Nodes*                                9
    Batch Nodes                                 1,122
    Total Nodes                                 1,158
CPUs
    CPU Architecture                            Intel Sapphire Rapids
    Cores/Node                                  112
    Total Cores                                 125,664
    Memory Total (GB)                           287,232
    CPU Memory/Node (GB)                        256
Peak Performance
    Peak TFLOPS (CPUs)                          7,966.2
    Clock Speed (GHz)                           2.0
OS                                              TOSS4
Interconnect                                    Cornelis Networks
Parallel job type                               multiple nodes per job
Recommended location for parallel file space
Program                                         ASC, M&IC
Password Authentication                         OTP, Kerberos, ssh keys
Year Commissioned                               2023
Compilers                                       See Compilers page