*Login node: rzansel61

NOTE: Most numbers are for compute nodes only, not login or service nodes.

Job Limits

Each LC platform is a shared resource. Users are expected to adhere to the following usage policies to ensure that the resources can be effectively and productively used by everyone. You can view the policies on a system itself by running:

news job.lim.MACHINENAME
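On RZAnsel, substituting the machine name gives the command below (the lowercase cluster name is assumed here, as is typical for LC news items):

news job.lim.rzansel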

Web Version of the RZAnsel Job Limits

There is one login node and 54 pdebug (interactive) nodes; there are no batch nodes. A maximum time of 12 hours per job is set for work done outside of daytime hours.

Scheduling

RZAnsel jobs are scheduled using LSF. Scheduling limits are not technically enforced, so users are expected to monitor their own usage and keep within the current limits while following the policies below:

  • Users will avoid computationally intensive work on the login node.
  • A user can have a maximum of 3 nodes with a runtime of up to 4 hours in the queue during the day, with the following exceptions:
    • If there are fewer than 6 down nodes, the limit increases to 4 nodes.
    • An occasional debugging job may use 4 nodes for a maximum of one hour, as long as it is the user's only job in the queue.
  • The queue can be checked by typing "lsfjobs" at the prompt (see the example after this list).
  • Daytime is 0800-1730, Monday-Friday.
  • No production runs are allowed; development and debugging only.
  • We are all family and expect developers to play nice. However, if someone's job(s) have taken over the machine:
    • Call them or send them an email.
    • Email ramblings-help@llnl.gov with a screenshot so we can take care of the situation by killing work that violates policy.
  • This approach will be monitored, and limits will be modified as needed.
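For concreteness, the commands below sketch how a session within these limits might be requested and monitored. This is a minimal sketch: it assumes the pdebug queue named above and LSF's standard options on LC CORAL-class systems (-nnodes for node count, -W for the wall-clock limit in minutes, -Is for an interactive shell); check the system documentation for the authoritative forms.

# Show the current queue and your own jobs:
lsfjobs

# Request an interactive allocation within the daytime limit
# (3 nodes for up to 4 hours = 240 minutes):
bsub -q pdebug -nnodes 3 -W 240 -Is /bin/bash

# Occasional debugging exception: 4 nodes for at most 1 hour,
# allowed only if this is your only job in the queue:
bsub -q pdebug -nnodes 4 -W 60 -Is /bin/bash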


Zone: RZ
Vendor: IBM
User-Available Nodes:
  Login Nodes*: 1
  Total Nodes: 62
CPUs:
  CPU Architecture: IBM Power9
  Cores/Node: 44
  Total Cores: 2,376
GPUs:
  GPU Architecture: NVIDIA V100 (Volta)
  Total GPUs: 216
  GPUs per Compute Node: 4
  GPU Peak Performance (TFLOP/s, double precision): 7.00
Memory Total (GB): 17,280
CPU Memory/Node (GB): 256
Peak Performance:
  Peak TFLOPS (CPUs): 58.0
  Peak TFLOPS (GPUs): 1,512.0
  Peak TFLOPS (CPUs+GPUs): 1,570.0
Clock Speed (GHz): 3.4
Peak Single-CPU Memory Bandwidth (GB/s): 170
OS: RHEL
Interconnect: IB EDR
Recommended Location for Parallel File Space:
Program: ASC
Class: ATS-2, CORAL-1
Year Commissioned: 2018
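As a quick consistency check on the totals above (assuming the 54 pdebug nodes are the compute nodes the note at the top refers to, and assuming 16 GB of HBM2 per V100, which is not stated in the table):

54 nodes x 44 cores/node = 2,376 cores
54 nodes x 4 GPUs/node = 216 GPUs
216 GPUs x 7.00 TFLOP/s = 1,512.0 TFLOPS (GPUs)
58.0 + 1,512.0 = 1,570.0 TFLOPS (CPUs+GPUs)
54 nodes x 256 GB/node + 216 GPUs x 16 GB = 13,824 + 3,456 = 17,280 GB total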
Documentation