## GPFS File System
In all cases, the GPFS filesystems exist only on their individual clusters.
| Zone | Machine | Mount | Capacity (PiB) |
|---|---|---|---|
| CZ | Lassen | /p/gpfs1 | 24 |
| RZ | RZAnsel | /p/gpfs1 | 1.5 |
| SCF | Sierra | /p/gpfs1 | 140 |
## Lustre File System
**Note:** Large systems such as Quartz, Jade, RZTopaz, and Zin have significantly more bandwidth to Lustre.
System details can be found on LC Confluence.
| CZ File System | Mount | Capacity (PiB) |
|---|---|---|
| lustre1 | /p/lustre1 (/p/czlustre1 on rzslic) | 15 |
| lustre2 | /p/lustre2 (/p/czlustre2 on rzslic) | 24 |
| lustre3 | /p/lustre3 (only on oslic, rzslic, surface, pascal) | 8 |
| RZ File System | Mount | Capacity (PiB) |
|---|---|---|
| lustre1 | /p/lustre1 | 20 |
| SCF File System | Mount | Capacity (PiB) |
|---|---|---|
| lustre1 | /p/lustre1 | 15 |
| lustre2 | /p/lustre2 | 15 |
| SNSI File System | Mount | Capacity (PiB) |
|---|---|---|
| lustre1 | /p/lustre1 | 2.5 |
## VAST File System
VAST is a new platform for use in the CZ. Details on the VAST deployment can be found in Technical Bulletin #538.
| CZ File System | Mount | Initial Capacity (PiB) |
|---|---|---|
| vast1 | /p/vast1 (/p/czvast1 on rzslic) | 5.2 |
## Quotas
Quotas will be in place to monitor and regulate both the total amount of data and the total number of files (inodes) used by each user. The implementation will include both hard quotas and soft quotas. A hard quota prevents new file writes from succeeding until the user has freed enough space to drop below the threshold. The soft quota is a lower threshold, set at 90% of the hard threshold; this is the quota listed for the user. When the soft quota is hit, a grace period begins during which the user must clear space or files, and I/O performance will likely be throttled. If the grace period expires and the user has not reduced usage below their quota, the soft quota is enforced as a hard quota and new writes will fail.
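To make the lifecycle above concrete, here is a minimal Python sketch of the soft/hard quota policy as described; the 20TB threshold, 10% hard-quota margin, and 7-day grace period are illustrative assumptions, not LC's actual settings.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Quota:
    soft_bytes: int   # the user's quota; set at 90% of the hard threshold
    hard_bytes: int   # absolute ceiling: writes above this always fail
    grace: timedelta  # time allowed over the soft quota before writes are blocked
    soft_hit_at: Optional[datetime] = None  # when usage first exceeded the soft quota

    def check_write(self, usage: int, now: datetime) -> str:
        """Classify a write attempt under the soft/hard quota policy."""
        if usage >= self.hard_bytes:
            return "blocked: hard quota reached"
        if usage >= self.soft_bytes:
            if self.soft_hit_at is None:
                self.soft_hit_at = now  # soft quota hit: grace period starts
            if now - self.soft_hit_at > self.grace:
                return "blocked: grace period expired, soft quota enforced as hard"
            return "allowed, but I/O is likely throttled during the grace period"
        self.soft_hit_at = None  # usage back below the soft quota; grace resets
        return "allowed"

# Illustrative numbers only: 20TB soft quota, hard quota 10% higher, 7-day grace.
q = Quota(soft_bytes=20 * 10**12, hard_bytes=22 * 10**12, grace=timedelta(days=7))
print(q.check_write(usage=21 * 10**12, now=datetime.now()))  # within the grace period
```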
There are three tiers of quotas for users, as shown below. Movement of a user from Tier 1 to Tier 2 is permanent and may be requested using the form referenced in the table or by contacting the LC Hotline. Users who need capacity or file counts beyond Tier 2 can provide justification for a custom amount to their account coordinator, who can then implement the change for up to six months.
| LC Storage Type | Tier 1 | Tier 2 | Tier 3 | Snapshots? | Backed up to tape? |
|---|---|---|---|---|---|
| Parallel File Systems /p/lustre* | 20TB/1M†† | 75TB/25M†† (fill out this form to request a Tier 2 increase) | Contact the LC Hotline to initiate a conversation with Livermore Computing and programmatic stakeholders | No | No |
| Parallel File Systems /p/gpfs* | 50TB/5M†† | 400TB/40M†† (fill out this form to request a Tier 2 increase) | Contact the LC Hotline to initiate a conversation with Livermore Computing and programmatic stakeholders | No | No |
| NAS/NFS Project /usr/workspace, /usr/WS* | 2TB/10M†† | 4TB/25M†† (fill out this form to request a Tier 2 increase) | Contact the LC Hotline to initiate a conversation with Livermore Computing and programmatic stakeholders | Yes | No |
| NAS/NFS Project /usr/gapps† | 10GB† | 30GB† (fill out this form to request a Tier 2 increase) | Contact the LC Hotline to initiate a conversation with Livermore Computing and programmatic stakeholders | Yes | Yes |
| NAS/NFS Project /usr/gdata† | 10GB† | 30GB† (fill out this form to request a Tier 2 increase) | Contact the LC Hotline to initiate a conversation with Livermore Computing and programmatic stakeholders | Yes | Yes |
| Home Directory /g/g* | 24GB | Tier 2 form not available | Contact the LC Hotline to initiate a conversation with Livermore Computing and programmatic stakeholders | Yes | Yes |
| Archive (HPSS) /users/u* | 300TB | Tier 2 form not available | Contact the LC Hotline to initiate a conversation with Livermore Computing and programmatic stakeholders | No | No |
† These quotas are per directory rather than per user, unlike all other cases.
†† For quotas written with a "/", the second number is the inode (file count) limit, where M denotes millions.
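To see where your current usage stands against these limits on a Lustre file system, the standard Lustre client command `lfs quota` reports both block and inode usage per user. A minimal sketch follows; the default mount point is just one of the CZ file systems listed above.

```python
import getpass
import subprocess

def show_lustre_quota(mount_point: str = "/p/lustre1") -> None:
    """Print the current user's block and inode usage on a Lustre mount."""
    user = getpass.getuser()
    # lfs quota -u <user> reports that user's usage and limits;
    # -h prints sizes in human-readable units.
    subprocess.run(["lfs", "quota", "-h", "-u", user, mount_point], check=True)

if __name__ == "__main__":
    show_lustre_quota()
```

Comparing the reported soft and hard limits against the tier values in the table above shows how close you are to entering the grace period.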