CSLIC is a resource reserved for moving files between LC file systems and HPSS archival storage.

Zone                       SCF
Vendor                     N/A
User-Available Nodes
  Total Nodes              10
CPU Memory/Node (GB)       128
Clock Speed (GHz)          2.6
OS                         TOSS 3
Password Authentication    OTP or Kerberos

Job Limits

Each LC platform is a shared resource. Users are expected to adhere to the following usage policies to ensure that the resources can be used effectively and productively by everyone. You can view the policies on the system itself by running:

news job.lim.MACHINENAME
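
For example, on CSLIC the command would be (assuming the news item follows the usual machine-name convention):

news job.lim.cslic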

Web Version of CSLIC Job Limits

There are 10 nodes on CSLIC, and each node has 36 cores (dual eighteen-core sockets). CSLIC is intended for transferring files between file systems and HPSS Archival storage.

IMPORTANT NOTE: Production and other computational codes should be run on other production machines, not on CSLIC.

You may run file transfers on the interactive login nodes. Batch jobs are scheduled per core. If you wish to run with more than one task per node, please use:

sbatch / salloc / srun: '-n <ntasks>'
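
For example, a minimal sketch requesting four tasks on a single node (the script name is hypothetical):

sbatch -N 1 -n 4 transfer_files.sh

The same -n option applies to salloc and srun for interactive runs.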

CSLIC has 1 scheduling pool (or partition):

Pools                   Max nodes/job    Max runtime
----------------------------------------------------
pbatch                       1             infinite
----------------------------------------------------
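
A minimal submission to the pbatch pool might look like the following (the script name is hypothetical):

sbatch -p pbatch -N 1 archive_files.sh

The -p option selects the pool (partition); pbatch is the only pool on CSLIC.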

If future usage dictates, the CSLIC limits will be adjusted.

Documentation

Documentation files can be found in /usr/local/docs:

  • linux.basics
  • mpi.basics
  • LustreBasics.pdf
  • lustre.basics
  • FileSystemUse
  • lustre-purge

Hardware

There are 10 nodes on CSLIC, and each node has 36 cores (dual eighteen-core sockets).

  • login: cslic[2-9]
  • pbatch: cslic[10,11]

Contact

Please call or send email to the LC Hotline if you have questions.

LC Hotline | phone: 925-422-4531 | email: lc-hotline@llnl.gov