Technical Bulletins

October 20, 2011.
It is Livermore Computing (LC) policy that user directories may not have world read, write, or execute permissions set, which would allow all users to view, modify, or execute the files they contain. To enforce this policy, LC will monitor the permissions on top-level directories in several LC file systems.
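Users can check their own directories for compliance. The following is a minimal Python sketch (not an official LC tool) that reports and clears the world permission bits on a directory; it is equivalent to running chmod o-rwx on that directory.

    import os
    import stat

    # Mask covering the world (other) read, write, and execute bits.
    WORLD = stat.S_IROTH | stat.S_IWOTH | stat.S_IXOTH

    home = os.path.expanduser("~")
    mode = os.stat(home).st_mode

    if mode & WORLD:
        print(home, "has world permissions set:", stat.filemode(mode))
        # Remove the offending bits, like `chmod o-rwx`.
        os.chmod(home, mode & ~WORLD)
    else:
        print(home, "complies with the policy.")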

September 26, 2011.
To address the need for Restricted Zone (RZ) users to move data between their desktops and the RZ, Livermore Computing (LC) has deployed a machine named rzstage. It provides easy, secure file transfers via SCP or SFTP between RZ users' desktop machines on the LLNL Enterprise Network and file systems mounted in the RZ.
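As a minimal sketch of such a transfer, the Python snippet below pushes a file from a desktop to the RZ through rzstage using scp. The username, full hostname, and destination path are illustrative assumptions; consult LC documentation for the actual rzstage address and mount points.

    import subprocess

    user = "jdoe"                  # hypothetical LC username
    host = "rzstage.llnl.gov"      # assumed full hostname for rzstage
    src = "results.tar.gz"         # local file on the EN desktop
    dest = f"{user}@{host}:/g/g0/{user}/"  # hypothetical RZ home directory path

    # Initiate the transfer from the desktop side with scp.
    subprocess.run(["scp", src, dest], check=True)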

August 10, 2011.
Livermore Computing (LC) is continuing the process of moving its Open Computing Facility (OCF) resources from the LLNL Enterprise Network (EN) into a multi-zone High Performance Computing (HPC) Enclave. This Technical Bulletin contains brief background information about the HPC Enclave, provides the current schedule of important upcoming events, and answers some frequently asked questions that LC has received during customer meetings and briefings.

August 3, 2011.
The creation of the High Performance Computing (HPC) Enclave will require preventing Restricted Zone (RZ) users from executing code or scripts from a Lustre file system while on an RZ compute platform. All executables must reside in an RZ user's /g/g* home directory or on another RZ file system such as /nfs/tmp2, /usr/gapps, /usr/local, or /usr/global.
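A hypothetical helper (not an LC utility) can check whether an executable lives on one of the file systems named above before a job tries to run it; the Lustre scratch path in the example is illustrative.

    import os

    # File system prefixes permitted for RZ executables, per the bulletin.
    ALLOWED = ("/g/g", "/nfs/tmp2", "/usr/gapps", "/usr/local", "/usr/global")

    def is_allowed(executable):
        # Resolve symlinks so a link into Lustre is not mistaken for a home path.
        return os.path.realpath(executable).startswith(ALLOWED)

    print(is_allowed("/g/g20/jdoe/bin/sim"))        # True: home directory
    print(is_allowed("/p/lscratcha/jdoe/bin/sim"))  # False: a Lustre scratch path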

May 16, 2011.
This bulletin details major changes to the Open Computing Facility (OCF) that will be made during the next five months as Livermore Computing (LC) moves toward supporting a Livermore Valley Open Campus.

February 2, 2009.
Most CHAOS 4 machines (Atlas, Eos, Hera, Juno, Minos, Rhea, and Zeus) use network address translation (NAT) for their internal compute nodes. These internal nodes have no network interfaces accessible to outside machines: outbound connections are sent through a routing node, and NAT changes their apparent IP address. Because the nodes cannot communicate in both directions with the external world, storage tools do not work on them and have not worked properly since the machines were installed. With the incorporation of iorun into these tools, batch jobs can now access storage.

December 16, 2008.
The Livermore Computing (LC) Data Storage Group (DSG) recently released the utility HSI (Hierarchical Storage Interface) on all open (OCF) and secure (SCF) production machines. HSI is a powerful interface for users of the High Performance Storage System (HPSS). It provides a familiar UNIX-style environment for working within HPSS while automatically taking advantage of HPSS capabilities, such as high-speed parallel file transfers, without requiring special user interaction where possible.
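HSI can also be driven non-interactively from scripts. The Python sketch below wraps one-shot hsi invocations; the archive paths are illustrative, while mkdir, ls, and the "put local : hpss" form are standard HSI commands.

    import subprocess

    def hsi(command):
        # hsi accepts a quoted command string for one-shot, non-interactive use.
        subprocess.run(["hsi", command], check=True)

    hsi("mkdir run42")                                # create an archive directory
    hsi("put results.tar.gz : run42/results.tar.gz")  # store a local file in HPSS
    hsi("ls -l run42")                                # verify the file arrived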

August 11, 2008.
The High Performance Storage System (HPSS), the archival storage system for the Livermore Computing (LC) open and secure computing facilities, provides different classes of service (COS) for storing files. The Data Storage Group has configured these classes to make the best possible use of available hardware resources. To date, the user interface automatically selects the COS in which to store a file.

April 14, 2008.
Hopper, the graphical file management tool, can do much more than transfer files. Here are a few of the additional features Hopper has to offer.
