Pascal

* Login node: pascal83

Job Limits

Each LC platform is a shared resource. Users are expected to adhere to the following usage policies to ensure that the resources can be effectively and productively used by everyone. You can view the policies on a system itself by running:

news job.lim.MACHINENAME
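For example, on Pascal itself the news item should be named job.lim.pascal (an assumption based on the MACHINENAME pattern above; substitute whatever name the system reports):

  news job.lim.pascal    # MACHINENAME replaced with the machine's name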

A web version of the Pascal job limits is also available.

Hardware

  • Each compute node has 2 18-core Intel Xeon E5-2695 v4 processors @2.1 GHz, 256 GiB of memory and 2 NVIDIA P100 (Pascal) GPUs. 
  • The login node pascal83 has 128 GiB of memory and no GPUs. 
  • Hyperthreading is enabled, so each compute node presents 72 logical cores (two hardware threads per physical core); a quick check is sketched after this list.
  • Pascal uses a Mellanox EDR (100 Gb/s) InfiniBand interconnect.
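To confirm this layout from a shell on a compute node, standard Linux and NVIDIA tools can be used (nothing here is Pascal-specific, and it assumes the NVIDIA utilities are on the default path):

  # Expect 2 sockets x 18 cores, 2 threads per core (72 logical CPUs)
  lscpu | grep -E 'Socket|Core|Thread'

  # Expect two Tesla P100 devices listed
  nvidia-smi -L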

Scheduling

Pascal jobs are scheduled through SLURM. Nodes are allocated to jobs whole (scheduling is per node). There are two node-scheduling queues (partitions):

  • pvis—32 nodes (1152 cores), visualization use*
  • pbatch—131 nodes (4716 cores), only batch use permitted.
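Current node counts and availability for these queues can be checked with standard Slurm commands, for example:

  # Show node totals and state for the two partitions
  sinfo -p pvis,pbatch

  # Show your own pending/running jobs in pbatch
  squeue -p pbatch -u $USER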

   Queue      Max nodes/job   Max runtime   Default time   Max jobs/user
   ----------------------------------------------------------------------
   pvis **         16          12 hours      30 minutes         n/a
   pbatch          32          24 hours      30 minutes         n/a
   ----------------------------------------------------------------------

*  The pvis queue is for visualization work. Interactive and debugging work may be done in this queue using standby (`--qos=standby`); example commands are sketched after these notes.
** Good neighbor use of pvis queue: As a courtesy to others, we ask that pvis users not use more than half the nodes at any one time (through multiple jobs or in one job).
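As a sketch only (the partition names, node counts, and time limits come from the table above; the job name, bank, and executable are placeholders), a pbatch submission might look like:

  #!/bin/bash
  #SBATCH --partition=pbatch      # batch-only queue: max 32 nodes, 24-hour limit
  #SBATCH --nodes=4               # nodes are allocated whole
  #SBATCH --time=02:00:00         # request well under the 24-hour maximum
  #SBATCH --job-name=my_sim       # placeholder name
  #SBATCH --account=mybank        # placeholder; use your own LC bank

  srun --ntasks-per-node=36 ./my_app   # one task per physical core

Submit the script with sbatch. An interactive standby session on pvis, within the good-neighbor limits above, might be requested with:

  salloc --partition=pvis --qos=standby --nodes=1 --time=00:30:00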


Contact

Please call or send email to the LC Hotline if you have questions: phone 925-422-4531, email lc-hotline@llnl.gov.


Zone                                             CZ
Vendor                                           Penguin Computing

User-Available Nodes
   Login Nodes*                                  1
   Batch Nodes                                   130
   Total Nodes                                   164

CPUs
   CPU Architecture                              Intel Xeon E5-2695 v4
   Cores/Node                                    36
   Total Cores                                   5,904

GPUs
   GPU Architecture                              NVIDIA Tesla P100 (Pascal)
   Total GPUs                                    326
   GPUs per compute node                         2
   GPU peak performance (TFLOP/s, double precision)   5.00
   GPU global memory (GB)                        16.00

Memory Total (GB)                                41,984
CPU Memory/Node (GB)                             256

Peak Performance
   Peak TFLOPS (CPUs)                            198.3
   Peak TFLOPS (GPUs)                            1,727.8
   Peak TFLOPS (CPUs+GPUs)                       1,926.10
   Clock Speed (GHz)                             2.1
   Peak single CPU memory bandwidth (GB/s)       77

OS                                               TOSS 4
Interconnect                                     Cornelis Networks Omni-Path
Parallel job type                                multiple nodes per user
Recommended location for parallel file space
Program                                          ASC, M&IC
Class                                            CTS, VIS
Password Authentication                          OTP, Kerberos