...
File System | Suitable for ... | Technology | Features | Quota |
---|---|---|---|---|
HOME | Permanent files, e.g. profile, utilities, sources, libraries, etc. | Lustre (on ws1 and ws2) | | 100 GB |
TCWORK | Permanent large files. Main storage for your jobs' and experiments' input and output files. | Lustre (on ws1 and ws2) | | 50 TB |
SCRATCHDIR | Big temporary data for an individual session or job; not as fast as TMPDIR but higher capacity. Files accessible from the whole cluster. | Lustre (on ws1 and ws2) | | part of TCWORK quota |
TMPDIR | Fast temporary data for an individual session or job, small files only. Local to every node. | SSD on shared nodes (*f QoSs) | | 3 GB per session/job by default. Customisable up to 40 GB with ... |
| | RAM on exclusive compute nodes (*p QoSs) | | no limit (maximum memory of the node) |
...
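In practice the distinction matters mostly inside batch jobs: small, I/O-intensive temporary files belong in TMPDIR, bulky intermediate data in SCRATCHDIR, and anything that must persist in TCWORK. Below is a minimal sketch of that pattern, assuming the corresponding environment variables are set in the job environment; the input and output file names are purely illustrative.

```bash
#!/bin/bash
# Illustrative job body: stage work in fast/large temporary space,
# keep only the results in permanent storage.
set -euo pipefail

# Small scratch files: node-local SSD or RAM (TMPDIR, limited size).
workdir="$TMPDIR/run_$$"
mkdir -p "$workdir"

# Large intermediate data: Lustre-backed SCRATCHDIR (shared, bigger quota).
bigtmp="$SCRATCHDIR/intermediate_$$"
mkdir -p "$bigtmp"

# Inputs and final outputs live in TCWORK (permanent storage).
cp "$TCWORK/experiments/input.dat" "$workdir/"   # hypothetical input path

# ... run the model or tool here, writing bulky intermediates to "$bigtmp" ...

cp "$workdir/result.dat" "$TCWORK/experiments/"  # hypothetical output file

# Temporary areas are not meant for permanent storage; clean up explicitly.
rm -rf "$workdir" "$bigtmp"
```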
Note |
---|
There is no PERM or SCRATCH, and the corresponding environment variables will not be defined. |
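If you are porting scripts that relied on $SCRATCH or $PERM elsewhere, one option is a defensive fallback onto the filesystems that do exist here. This is only a sketch; the choice of fallback targets is an assumption, not a documented mapping.

```bash
# Fall back to SCRATCHDIR/TCWORK when SCRATCH/PERM are not defined,
# and fail early with a clear message if neither is available.
SCRATCH="${SCRATCH:-${SCRATCHDIR:?neither SCRATCH nor SCRATCHDIR is defined}}"
PERM="${PERM:-${TCWORK:?neither PERM nor TCWORK is defined}}"
```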
Info |
---|
HOME, TCWORK and SCRATCHDIR are all based on the Lustre parallel filesystem for maximum reliability. They will not be accessible from outside the HPCF, including from VDI instances or the VMs running the ecFlow servers. |
The storage server to use is controlled by the environment variable STHOST, which may take the values "ws1" or "ws2". This variable needs to be defined when logging in, as well as for all jobs that run in batch. If you log in interactively without passing the environment variable, you will be prompted to choose the desired STHOST:
...
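For batch jobs, the simplest approach is to define STHOST before anything that depends on it runs. A minimal sketch of the top of a job script follows; the default value chosen here is illustrative only.

```bash
#!/bin/bash
# Illustrative batch job header: scheduler directives would go above this point.

# Choose the storage server for this job; ws1 and ws2 are the documented values.
export STHOST=${STHOST:-ws2}

echo "This job will use storage on: $STHOST"
```

For interactive sessions, exporting STHOST in your shell (or answering the login prompt) achieves the same effect.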