Livermore Computing (LC) is continuing the process of moving its Open Computing Facility (OCF) resources from the LLNL Enterprise Network (EN) into a multi-zone High Performance Computing (HPC) Enclave. This Technical Bulletin contains brief background information about the HPC Enclave, provides the current schedule of important upcoming events, and answers some frequently asked questions that LC has received during customer meetings and briefings.


New collaborations and programs vital to the growth of LLNL require LC to develop a new model for accessing unclassified LC resources. Consequently, all LC OCF resources are being moved from the LLNL EN into an enclave with the following user-accessible zones:

  • The Collaboration Zone (CZ) will contain most of the existing HPC systems and will be accessible to current OCF users, foreign nationals (FNs) and sensitive country foreign nationals (SCFNs). Systems include Sierra, uBGL, Hera, Atlas, Ansel, Aztec, Prism, Edge, Hive, uDawn, and Oslic.
  • The Restricted Zone (RZ) will contain a number of smaller systems serving a subset of the LC user community. RZ access will require an alternate authentication mechanism. Systems include rzedgelet, rzstagg, rzthriller, rzdawndev, rzzeus, rzslic, rzalastor, and rzgw.

Unique and independent /g/g* home directories and /nfs/tmp2 file systems will exist on both the RZ and CZ. Please rest assured that no user data will be deleted from home directories.


Schedule of Events
August 11
  • rzslic becomes available to RZ users.
  • /nfs/cz_tmp2 becomes available to all users.
August 23
  • RZ users will no longer be able to access their HPSS archival storage home directory (e.g., /users/u*/username) from CZ systems. Access to archival storage by RZ users will be allowed only from rzslic or any other RZ systems. File sharing within archival storage between RZ and CZ user directories will continue to be supported until September 13.
  • Zeus moves into the RZ and is renamed rzzeus.
August 30
  • All OCF systems will be unavailable from 7:00 a.m. on August 30 until 7:00 a.m. on August 31 while the zones are established.
  • /nfs/tmp2 moves into the RZ; /nfs/cz_tmp2 moves into the CZ and is renamed /nfs/tmp2.
  • RZ users' /g/g* home directories move into the RZ; CZ-only users' /g/g* home directories move into the CZ.
August 31
  • OCF systems return to service with the HPC Enclave zones established and the new access rules in effect.
September 13
  • RZ users' archival storage home directory permissions will be locked down, preventing further sharing of archival storage data through the use of group permissions.
October 18
  • SCFNs permitted access to the CZ.
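The group-permission sharing that the September 13 lockdown ends works along these lines. This is an illustrative sketch only: it uses a temporary directory in place of a real archival storage home directory (the /users/u*/username layout from the bulletin), and the file name is hypothetical.

```shell
DEMO=$(mktemp -d)                      # stands in for /users/u*/username
echo "shared results" > "$DEMO/results.tar"

# Granting group read on the file, plus group read/execute (traverse)
# on the directory, is what lets other group members fetch the data.
chmod g+rx "$DEMO"
chmod g+r  "$DEMO/results.tar"

ls -ld "$DEMO" "$DEMO/results.tar"
```

After the lockdown, these group bits on RZ users' archival storage home directories will no longer be honored as a sharing mechanism.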

Frequently Asked Questions

As an RZ user:

  • How do I access RZ systems? You will access RZ systems with a CRYPTOcard through the gateway system, rzgw. The CRYPTOcard will be given to you by August 23. If you are off site, you will also need to use VPN to log in to any RZ system.
  • Once I have logged into rzgw, how do I get to the RZ systems? Once on rzgw you can log in to any RZ system using your PIN and RSA SecurID token code. You may also install SSH keys in your home directory on rzgw and the other RZ systems to ease the login process from rzgw to the RZ systems.
  • When will I need to use the CRYPTOcard? Beginning August 31, you will need to use the CRYPTOcard to log in to rzgw. From rzgw you can SSH to any RZ system.
  • How do I access my OCF archival storage directory? You will only be able to access your OCF archival storage directory from RZ systems; we recommend you access it from rzslic.
  • How do I run jobs on RZ systems? You will submit jobs as you do now; however, executables must be located in your home directory or /nfs/tmp2, not in /p/lscratch*.
  • To where do I write my output? Output from large parallel jobs should be written to /p/lscratch*; other output may be written to the RZ /nfs/tmp2 directory.
  • I currently run on Aztec. Will there be a similar system in the RZ? Yes. A new, 32-node system named rzcereal will be available on August 31 for serial and on-node parallel jobs.
  • Although I am an RZ user, I have accounts and banks on CZ systems. Can I still run jobs on CZ systems? Absolutely. However, on August 31, your CZ home directory will be empty except for default dot files. You will need to re-create your user environment in the CZ.
  • How do I move data between the RZ and CZ? You may put the data into one of the /p/lscratch* file systems, which are shared by both the CZ and RZ. If necessary, you may then copy the data to your home directory.
  • Although I am an RZ user, I run big simulations on the CZ and generate a lot of data. How can I put it into archival storage? Large output datasets should be written to /p/lscratch*. To move the data to archival storage, log in to an RZ system (e.g., rzslic) and copy it to storage from there.
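The SSH-key setup suggested above can be sketched as follows. This is a minimal example assuming OpenSSH; the key filename and comment are illustrative, and the onward hop requires an actual RZ account.

```shell
# Generate a keypair on rzgw for onward hops to RZ systems. An empty
# passphrase is shown only for brevity; a passphrase plus ssh-agent is
# the safer choice.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keygen -t rsa -f ~/.ssh/id_rsa_rz -N '' -C 'rzgw to RZ systems'

# Authorize the key: append the public half to authorized_keys in your
# home directory on each RZ system you want to reach.
cat ~/.ssh/id_rsa_rz.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# From rzgw, an onward hop then needs no token code (hostname from the
# bulletin's RZ system list):
#   ssh -i ~/.ssh/id_rsa_rz rzzeus
```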

As a CZ-only user:

  • How do I access CZ systems? You will access CZ systems via SSH using your PIN and RSA SecurID token code, as you do now. VPN will not be required for access to CZ systems after August 31, even from off site.
  • How do I access the OCF archival storage? As you do now. There is no change.
  • How do I run jobs on CZ systems? As you do now. There is no change.
  • To where do I write my output? Where you do now. There is no change.
  • Can I access or run jobs on RZ systems? No.
  • I have an allocation on Zeus. Where will I run my jobs after Zeus is moved into the RZ? Banks on Zeus that remain on the CZ have been moved to Atlas. You may begin using those banks immediately.

For all OCF users:

  • The current /nfs/tmp2 file system will be moved into the RZ on August 30. On August 11, the /nfs/cz_tmp2 file system will become available. Between August 11 and August 30, you may copy your data freely from the current /nfs/tmp2 directory to the /nfs/cz_tmp2 directory. If you will compute on the CZ, you should use this window of time to move your data. On August 30, /nfs/cz_tmp2 will be renamed /nfs/tmp2.
  • All OCF systems will be unavailable from 7:00 a.m. on August 30 through 7:00 a.m. on August 31.
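The /nfs/tmp2-to-/nfs/cz_tmp2 copy described above amounts to a recursive, attribute-preserving copy. The sketch below uses a temporary directory in place of the real mount points so it can run anywhere; on LC systems you would substitute the actual /nfs/tmp2 and /nfs/cz_tmp2 paths, and the project subdirectory name is hypothetical.

```shell
WORK=$(mktemp -d)                      # sandbox standing in for the NFS mounts
SRC="$WORK/nfs_tmp2/myproject"         # stands in for /nfs/tmp2/$USER/myproject
DST="$WORK/nfs_cz_tmp2/myproject"      # stands in for /nfs/cz_tmp2/$USER/myproject

mkdir -p "$SRC" "$DST"
echo "sample output" > "$SRC/run01.dat"

# -a preserves permissions and timestamps; re-run the same copy later to
# pick up files created before the August 30 cutover.
cp -a "$SRC/." "$DST/"

ls "$DST"
```

rsync -a "$SRC/" "$DST/" does the same job and is more efficient on repeat runs, since it transfers only files that have changed.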

A PDF of TB469 is available for download and distribution.