Content tagged with #Technical-Bulletins

August 21, 2018.
As a forum to share information and important announcements, the Livermore Computing (LC) users meetings are scheduled routinely to get feedback and, most importantly, to hear from users about how the machines are used and what LC can do to accommodate their needs. This page contains the agenda for the next users meeting scheduled on August 21, 2018.

May 10, 2018.
The purge policy outlined herein supersedes the purge policy in Technical Bulletin Nos. 351, 411 and 492. This updated purge policy will go into effect on all of the /p/lscratch* Lustre file systems on the Open and Secure Computing Facilities (OCF and SCF) beginning May 10, 2018.

February 19, 2018.
UPDATED: November 5, 2018
Livermore Computing has established a schedule for the retirement of a number of our older Lustre file systems that have fallen off warranty. Each file system will go through a similar process during its retirement. Initially, each file system will be remounted on current clients in read-only mode for a two-week period; jobs holding up the re-mounting of the file system will be killed. This period allows users to ensure appropriate changes have been made to their applications. For the remaining four weeks, the file system will be mounted in read-only mode *only* on the SLIC clusters (oslic, rzslic, cslic), allowing users to copy off any data they wish to keep. After the six-week period, the file systems will be retired and their data will no longer be retrievable.
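For users planning a copy-off during the read-only window, the minimal Python sketch below shows one way to mirror a directory with rsync from a SLIC login node; the source and destination paths are hypothetical examples, not LC-designated locations.

# Minimal sketch: copy data off a retiring (read-only) Lustre file system.
# The paths below are hypothetical examples; substitute your own directories.
import subprocess
import sys

SOURCE = "/p/lscratchd/username/project"       # retiring file system (read-only)
DESTINATION = "/p/lscratchh/username/project"  # replacement file system

# rsync -a preserves permissions and timestamps, and the transfer can be
# safely re-run if it is interrupted.
result = subprocess.run(
    ["rsync", "-a", "--progress", SOURCE + "/", DESTINATION + "/"],
    check=False,
)
sys.exit(result.returncode)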

January 25, 2018.
DEPRECATED: This TB has been superseded by TB#526
The High Performance Storage System (HPSS) archival storage system at Livermore Computing (LC) imposes a yearly growth-based quota. Reset each year on fiscal year boundaries, the FY18 (10/1/2017-9/30/2018) storage quotas per user are detailed in the following technical bulletin.

January 23, 2018.
As a forum to share information and important announcements, the Livermore Computing (LC) users meetings are scheduled routinely to get feedback and, most importantly, to hear from users about how the machines are used and what LC can do to accommodate their needs. This page contains the agenda for the next users meeting scheduled on February 8, 2018.

July 7, 2017.
On Wednesday, July 12, the Livermore Computing Data Storage Group will release a new version of HTAR on all open (OCF) and secure (SCF) production machines. HTAR—a utility that combines a flexible file-bundling tool with fast parallel access to high performance storage—allows users to store and selectively retrieve very large sets of files efficiently. This update adds the ability to specify entries to exclude when creating a new archive file.
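The exact syntax of the new exclude option is documented in the htar man page. As a hedge against version differences, the Python sketch below achieves the same effect client-side by filtering the file list before invoking htar with only its long-standing -c, -v, and -f options; the directory name, archive name, and patterns are hypothetical examples.

# Minimal sketch: build an HTAR archive while skipping unwanted entries by
# filtering the file list in Python before calling htar. The new built-in
# exclude option may be more convenient; see the htar man page for its syntax.
import fnmatch
import os
import subprocess

SOURCE_DIR = "results"              # directory to archive (hypothetical)
ARCHIVE = "results.tar"             # archive name in HPSS (hypothetical)
EXCLUDE_PATTERNS = ["*.o", "*.tmp"] # entries to leave out

members = []
for root, _dirs, files in os.walk(SOURCE_DIR):
    for name in files:
        if not any(fnmatch.fnmatch(name, pat) for pat in EXCLUDE_PATTERNS):
            members.append(os.path.join(root, name))

# htar -c creates a new archive, -v lists members as they are stored,
# and -f names the archive file in HPSS storage.
subprocess.run(["htar", "-cvf", ARCHIVE] + members, check=True)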

June 4, 2017.
The license to run the Moab Workload Manager (Moab commands) will expire this year on September 30 and will not be renewed. The Tri-Labs have elected to activate SLURM’s scheduling plugin to provide the functionality formerly provided by Moab. Machines running TOSS 3 are already running SLURM scheduling.
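For users translating existing job scripts and habits, the sketch below lists commonly cited equivalences between Moab commands and native Slurm commands; it reflects standard Slurm usage rather than any LC-specific wrapper, and some Moab options have no one-to-one counterpart.

# Minimal sketch: common Moab commands and their native Slurm counterparts.
# This is a general-purpose reference, not an LC-specific translation layer;
# consult the Slurm man pages for options without a direct equivalent.
MOAB_TO_SLURM = {
    "msub job.sh":    "sbatch job.sh",             # submit a batch script
    "showq":          "squeue",                    # list queued/running jobs
    "checkjob 1234":  "scontrol show job 1234",    # details for one job
    "canceljob 1234": "scancel 1234",              # cancel a job
    "showres":        "scontrol show reservation", # list reservations
}

if __name__ == "__main__":
    for moab, slurm in MOAB_TO_SLURM.items():
        print(f"{moab:<16} -> {slurm}")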

March 20, 2017.
Livermore Computing (LC) is pleased to announce the deployment schedule for new /g home directory file systems. These new NAS systems consist of four NetApp 8040 servers.

February 21, 2017.
Effective February 23, access to the Enterprise (EN) Bitbucket server (https://mystash.llnl.gov) from the Collaborative Zone (CZ) will only be allowed via SSH git connections, not via HTTPS. Going forward, users will need to Enable SSH Access in their Enterprise Bitbucket user profile. Note that users are advised to secure their SSH private key with a passphrase to prevent unauthorized access to their repositories.
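As a starting point, the Python sketch below generates a passphrase-protected key and re-points an existing clone at an SSH URL. The port (7999 is the Bitbucket Server default) and the project/repository path are assumptions; copy the exact SSH clone URL shown on your repository page in Bitbucket.

# Minimal sketch: create a passphrase-protected SSH key and switch an
# existing clone from HTTPS to SSH. The SSH URL below is an assumption
# (7999 is the Bitbucket Server default port, and the project/repo path
# is hypothetical); use the clone URL shown in the Bitbucket UI.
import subprocess

# ssh-keygen prompts interactively for a passphrase, which is what keeps
# the private key protected if the file is ever exposed.
subprocess.run(
    ["ssh-keygen", "-t", "ed25519", "-f", "mystash_key",
     "-C", "mystash.llnl.gov access"],
    check=True,
)

# Run inside an existing clone to re-point its origin remote at SSH.
subprocess.run(
    ["git", "remote", "set-url", "origin",
     "ssh://git@mystash.llnl.gov:7999/myproject/myrepo.git"],
    check=True,
)

The contents of the resulting mystash_key.pub file are what get pasted into the SSH access section of the user profile.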

January 19, 2017.
Livermore Computing (LC) is pleased to announce the deployment of lscratchh, a new parallel file system on the OCF. The file system will be approximately 15 PB, will run an advanced version of Lustre, and will be mounted on Quartz and OSLIC only. Because Quartz users are currently writing to lscratche and lscratchd, they will need to move their data to lscratchh.
