Quartz Users,

Livermore Computing (LC) is pleased to announce the deployment of lscratchh, a new parallel file system on the OCF. The file system will be approximately 15 PB, will run an advanced version of Lustre, and will be mounted on Quartz and OSLIC only. Because Quartz users are currently writing to lscratche and lscratchd, it will be necessary for you to move your data to lscratchh.
On Monday, January 23rd, OSLIC will mount /p/lscratchh, enabling users to begin moving their data from lscratchd and lscratche to lscratchh.
On Wednesday, January 25th, Quartz will be temporarily partitioned into two separate 7-SU, ~1,300-node systems. The first system will be named Quartzold and will mount lscratchd and lscratche. The second will be called Quartz, and it will mount lscratchh. To facilitate the movement of data from lscratchd and lscratche to lscratchh, all three file systems will be mounted on OSLIC. You will also be able to use rsync, dcp2, or Hopper's Synchronize or TurboSync features to move files between file systems. See below for details about performing file transfers.
During this phase, Quartz users will be permitted to submit jobs to both the Quartz and Quartzold systems. All accounts, banks, and groups currently on Quartz will be duplicated on Quartzold. Logins and job submissions to these two systems will be as follows:
Quartzold (mounts lscratchd and lscratche)
- login: ssh quartzold
- job submission: #MSUB -q pbatchold or #MSUB -q pdebugold
Quartz (mounts lscratchh)
- login: ssh quartz
- job submission: #MSUB -q pbatch or #MSUB -q pdebug
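As an illustration, a minimal batch script targeting the new Quartz partition might look like the sketch below. The job name, node count, walltime, and application are placeholders; only the #MSUB -q line differs between the two systems (use pbatchold or pdebugold for Quartzold).

```shell
#!/bin/sh
# Sketch of a Moab batch script -- resource values are placeholders.
#MSUB -N myjob             # job name (placeholder)
#MSUB -l nodes=4           # node count (placeholder)
#MSUB -l walltime=1:00:00  # walltime (placeholder)
#MSUB -q pbatch            # pbatchold instead to run on Quartzold

# On Quartz this job sees lscratchh; on Quartzold, lscratchd and lscratche.
srun ./my_app
```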
On Wednesday, March 1st, Quartz and Quartzold will be reconfigured back into one system (named Quartz) mounting only lscratchh.
File Transfer Suggestions
LC recommends using the rsync command or the Synchronize Directory operation in Hopper for copying files from one Lustre file system to another. These methods preserve the timestamps, groups, and permissions of the copied files. An additional advantage is that if the initial transfer fails or is interrupted, the command can be reissued without re-copying data that has already been transferred.
For example, to migrate data from /p/lscratche/kelly/mydata to /p/lscratchh/kelly/mydata, use either the rsync command or Hopper as shown below.
Using the rsync command (on OSLIC):
rsync -av /p/lscratche/kelly/mydata/ /p/lscratchh/kelly/mydata
Using the Hopper file management GUI (on OSLIC):
- Launch Hopper and navigate to /p/lscratche/kelly
- Open a second Hopper window (Connect, Connect to Local), and navigate to /p/lscratchh/kelly
- If necessary, create the directory /p/lscratchh/kelly/mydata
- From the first (lscratche) window, drag the “mydata” folder and drop it on top of the “mydata” folder in the second (lscratchh) window.
- Choose “Synchronize this Directory” from the pop-up menu and press OK.
- You can exit from the Hopper client and the transfer will continue. You can monitor the transfer from the Transfer Manager dialog, available under the Windows menu.
- Hopper will automatically retry a failed Synchronize operation twice, which helps recover from transient file system errors.
- You can also manually re-launch a previous Synchronize operation by pressing the Resubmit button in the Transfer Manager.