
Job Limits

Each LC platform is a shared resource. Users are expected to adhere to the following usage policies to ensure that the resources can be used effectively and productively by everyone. You can view the policies on the system itself by running:

news job.lim.MACHINENAME
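
On RZGenie, for example, this would be:

    news job.lim.rzgenie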

A web version of the RZGenie job limits is also available.

There are two login nodes and 43 pdebug (interactive) nodes; there are no batch nodes. Each node has 36 2.1 GHz Intel Xeon cores and 128 GB of memory. Hyperthreading is currently enabled.

Moab is not used on this partition. Jobs are scheduled per core. There is one scheduling pool:

pdebug: 1,548 cores (43 nodes)
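
For example, a short interactive debugging run on the pdebug pool, kept within the daytime per-user processor limit described below, might look like the following (the executable name and task counts are illustrative only):

    srun -p pdebug -N 2 -n 72 -t 1:00:00 ./my_app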

Scheduling

RZGenie jobs are scheduled using SLURM. Scheduling is not technically enforced, so users are expected to monitor their own behavior and keep themselves within the current limits by following the policies below:

  • Users will not compile on the login nodes during daytime hours.
  • A user can have a maximum of 108 processors with a runtime of up to 4 hours in the queue during the day, with the following exception:
    • An occasional debugging job that uses 109-162 processors for a maximum of one hour can be run, as long as it is the user's only job in the queue.
  • The queue (sorted by user) can be found by typing "squeue -S u" at the prompt after setting the environment variable (see the example after this list):

    SQUEUE_FORMAT='%.7i %.9P %.8j %.8u %.2t %.10M %.6D %.4C %R'
  • Daytime is 0800-2000, Monday through Friday, not including holidays.
  • No production runs are allowed; development and debugging only.
  • Users will avoid computationally intensive work on the login nodes.
  • We are all family and expect developers to play nice. However, if someone's job(s) have taken over the machine:
    • Call them or send them an email.
    • Email ramblings-help@llnl.gov with a screenshot so we can take care of the situation by killing work that violates policy.
  • This approach will be revisited later, and additional limits will be set if necessary. If someone monopolizes the machine, developers can always shift to other RZ resources.
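
For example, assuming a bash-family shell (csh users would use setenv instead), the variable can be set and the queue displayed sorted by user with:

    export SQUEUE_FORMAT='%.7i %.9P %.8j %.8u %.2t %.10M %.6D %.4C %R'
    squeue -S u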

Scratch Disk Space: Consult the RZ File Systems web page:

https://rzlc.llnl.gov/fsstatus/fsstatus.cgi

 

System Configuration

Zone: RZ
Vendor: Penguin

User-Available Nodes
  Login Nodes*: 2
  Batch Nodes: 0
  Debug Nodes: 43
  Total Nodes: 48

CPUs
  CPU Architecture: Intel Xeon E5-2695 v4
  Cores/Node: 36
  Total Cores: 1,728
  Memory Total (GB): 6,144
  CPU Memory/Node (GB): 128

Peak Performance
  Peak TFLOPS (CPUs): 58.1
  Clock Speed (GHz): 2.1
  Peak single CPU memory bandwidth (GB/s): 77

OS: TOSS 4
Interconnect: Cornelis Networks Omni-Path
Parallel job type: multiple nodes per job
Recommended location for parallel file space:
Program: ASC
Class: CTS-1
Compilers:
Documentation:

*2 nodes: rzgenie[2,5]