Quotas for File Systems and Storage
| LC Storage Type | Tier 1 | Tier 2 | Tier 3 | Snapshots? | Backed up to tape? |
|---|---|---|---|---|---|
| Parallel File Systems /p/lustre[1,2,3] (Common) | 20TB/1M†† | 75TB/25M†† Fill out this form to request a Tier 2 increase | Contact the LC Hotline to initiate a conversation with Livermore Computing and programmatic stakeholders | No | No |
| /p/lustre4 (El Capitan) | 100TB/10M | 500TB/100M†† Fill out this form to request a Tier 2 increase | Contact the LC Hotline | No | No |
| /p/lustre5 (Tuolumne) | 50TB/5M | 100TB/50M†† Fill out this form to request a Tier 2 increase | Contact the LC Hotline | No | No |
| /p/gpfs* (Sierra GPFS) | 50TB/5M†† | 400TB/40M†† Fill out this form to request a Tier 2 increase | Contact the LC Hotline | No | No |
| /p/gpfs* (Lassen), /p/vast*† | 20TB/1M†† | 75TB/25M†† Fill out this form to request a Tier 2 increase | Contact the LC Hotline | No | No |
| NAS/NFS Project /usr/workspace, /usr/WS* | 2TB/10M†† | 4TB/25M†† Fill out this form to request a Tier 2 increase | Contact the LC Hotline | Yes | No |
| /usr/gapps† | 10GB | 30GB Fill out this form to request a Tier 2 increase | Contact the LC Hotline | Yes | Yes |
| /usr/gdata† | 10GB | 30GB Fill out this form to request a Tier 2 increase | Contact the LC Hotline | Yes | Yes |
| Home Directory /g/g* | 24GB | Tier 2 form not available. Contact the LC Hotline to initiate a conversation with Livermore Computing and programmatic stakeholders | N/A | Yes | Yes |
| Archive (HPSS) /users/u* | 300TB | N/A | N/A | No | No |
| Object Storage S3-protocol StorageGRID | 4TB | N/A | N/A | No | No |
† These quotas are per directory, not per user as in all other cases.
†† For quotas written with a "/", the second number is the inode limit, in millions (M).
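To see where you stand against these quotas, the standard quota tools can be queried directly from a login node. The sketch below is illustrative rather than LC-specific guidance: the /p/lustre1 mount point is an assumed example, and the lfs and quota commands must be available on the system.

```python
import getpass
import subprocess

user = getpass.getuser()

# Lustre quota (parallel file systems): reports block and inode usage
# against the Tier limits in the table above. /p/lustre1 is illustrative;
# substitute the file system you actually use.
subprocess.run(["lfs", "quota", "-h", "-u", user, "/p/lustre1"], check=True)

# NFS quota (home directories and /usr/workspace), in human-readable units.
subprocess.run(["quota", "-s"], check=True)
```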
Home Directories
- All production OCF (CZ and RZ), iSNSI, and SCF LC systems share home directories that reside on the global file system /g, which is NFS-mounted from several dedicated servers.
- See Using LC File Systems and Home Directories for details.
Parallel File Systems
- Parallel file systems are large, shared file systems used for parallel I/O.
- An overview of the different types of parallel file systems deployed at LC can be found on the Parallel File Systems page, which also includes details on LC's quota policy for these systems.
- All machines except the CORAL I machines use the open-source Lustre parallel file system.
- CORAL I machines (Lassen, RZAnsel, Sierra) use GPFS from IBM.
- For additional information, see Using LC File Systems and the LC Resources Tutorials' Parallel File Systems information; a short striping sketch follows below.
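Parallel I/O throughput on Lustre depends in part on how files are striped across storage targets. As a minimal sketch (the directory path and stripe count are assumptions for illustration), striping can be set on a directory so that files created there inherit the layout:

```python
import os
import subprocess

# Illustrative path on a Lustre file system; use a directory of your own.
target = "/p/lustre1/example_stripe_dir"
os.makedirs(target, exist_ok=True)

# Stripe files created in this directory across 4 OSTs (storage targets).
subprocess.run(["lfs", "setstripe", "-c", "4", target], check=True)

# Show the striping layout that new files in the directory will inherit.
subprocess.run(["lfs", "getstripe", target], check=True)
```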
Temporary File Systems
- Various temporary file systems (global and local) are used for system purposes, for temporary I/O storage local to each machine, or for large temporary I/O storage shared among many machines.
- See Using LC File Systems and Temporary File Systems for details; a brief usage sketch follows below.
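One common pattern for node-local temporary I/O, sketched below under the assumption that the environment provides a TMPDIR variable pointing at local scratch (details vary by LC system), is to stage intermediate files in a directory that is removed automatically:

```python
import os
import tempfile

# Prefer scratch space advertised via TMPDIR; otherwise use the default.
scratch = os.environ.get("TMPDIR")

with tempfile.TemporaryDirectory(dir=scratch) as tmp:
    work_file = os.path.join(tmp, "intermediate.dat")
    with open(work_file, "wb") as f:
        f.write(b"\x00" * 1024)  # stand-in for real intermediate output
    # ... read work_file back during the computation ...
# The directory and everything in it are deleted when the block exits.
```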
/usr/workspace File Systems
- 2 TB of NFS-mounted file space is provided for each user and group.
- These file systems are similar to home directories but are not backed up. For details, see Using LC File Systems and /usr/workspace File Systems.
/usr/gapps and /usr/gdata File Systems
- LC provides shared, collaborative, NFS-mounted file space for user-developed and user-supported applications and data on LC systems.
- See /usr/gapps, /usr/gdata File Systems and /usr/gapps File System for details.
Archival Storage
- LC maintains massive on-premises streaming media (tape) capabilities for archival storage.
- The High Performance Archival Storage System (HPSS) is available from all LC production systems.
- For more information, see Using LC Archival Storage or Archival HPSS Storage; an htar sketch follows below.
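HPSS is typically driven from LC systems with the hsi and htar utilities. The following is a minimal sketch rather than a complete recipe; the directory and archive names are hypothetical, and available options may differ by configuration.

```python
import subprocess

# Bundle a local directory into a tar archive written directly into HPSS.
# "results" and "results.tar" are hypothetical names.
subprocess.run(["htar", "-c", "-v", "-f", "results.tar", "results"], check=True)

# List the archive contents back from HPSS to confirm the transfer.
subprocess.run(["htar", "-t", "-v", "-f", "results.tar"], check=True)
```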
Object Storage
- See here for specifics on using AWS S3 and on-premises S3-protocol data stores in LC.
- LC offers smaller-scale, individual-use, on-premises object storage for HPC-enabling use cases in the form of NetApp StorageGRID. See the quota table above for limits.
- These on-premises deployments utilize the widely accepted S3 protocol.
- LC recognizes a growing demand for larger-scale on-premises object stores adjacent to HPC compute. We are examining various technologies to broaden access to our existing highly scaled-out HPC storage through S3-compatible mechanisms. If you have such a need, contact us through the LC Hotline to discuss your use case. A minimal S3 client sketch follows below.
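Because StorageGRID speaks the S3 protocol, standard S3 tooling can talk to it once pointed at the LC endpoint. Below is a minimal sketch using boto3; the endpoint URL, bucket name, and credential setup are placeholders for illustration, not actual LC values.

```python
import boto3

# The endpoint is a placeholder; LC supplies the real endpoint and
# credentials when a StorageGRID allocation is provisioned.
s3 = boto3.client(
    "s3",
    endpoint_url="https://storagegrid.example.llnl.gov",  # hypothetical
)

# Upload a file, then list the bucket to confirm it arrived.
s3.upload_file("results.tar", "my-bucket", "results.tar")
for obj in s3.list_objects_v2(Bucket="my-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```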
File System Status
- The file system status pages display real-time status information on LC file systems, including which file systems are mounted by each cluster and the file system bandwidth between clusters.
- CZ: CZ File System Status or mylc.llnl.gov
- RZ: RZ File System Status or rzmylc.llnl.gov
- SCF: SCF File System Status or mylc.llnl.gov
- iSNSI: File system status - TBA