...
Time Critical Option 2 users, or zids, have a special set of filesystems different from those of regular users. They are served from different storage servers in different computing halls, and are not kept in sync automatically. It is the user's responsibility to ensure the required files and directory structures are present on both sides, and to synchronise them if and when needed (a sketch of one possible way to do this is shown after the table below). This means, for example, that zids will have two HOMEs, one on each storage host. All the following storage locations can be referenced by the corresponding environment variables, which will be defined automatically for each session or job.
File System | Suitable for ... | Technology | Features | Quota |
---|---|---|---|---|
HOME | permanent files, e.g. profile, utilities, sources, libraries, etc. | Lustre (on ws1 and ws2) | | 100 GB |
TCWORK | permanent large files. Main storage for your jobs' and experiments' input and output files. | Lustre (on ws1 and ws2) | | 50 TB |
SCRATCHDIR | Big temporary data for an individual session or job; not as fast as TMPDIR but higher capacity. Files accessible from the whole cluster. | Lustre (on ws1 and ws2) | | part of TCWORK quota |
TMPDIR | Fast temporary data for an individual session or job, small files only. Local to every node. | SSD on shared nodes (*f QoSs) | | 3 GB per session/job by default. Customisable up to 40 GB with ... |
| | RAM on exclusive compute nodes (*p QoSs) | | no limit (maximum memory of the node) |
Note |
---|
There is no PERM or SCRATCH, and the corresponding environment variables will not be defined. |
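Since the two sides are not kept in sync automatically, you may want to script the synchronisation yourself. The snippet below is only a sketch of one possible approach: the host name `ws2-login`, the reachability of the other side over ssh/rsync, and the `my_experiment` directory are all assumptions for illustration, not documented interfaces. Check the actual host names and the recommended procedure for your zid before relying on anything like this.

```bash
#!/bin/bash
# Sketch only: "ws2-login" is a hypothetical name for the other storage host,
# and reachability via rsync over ssh is an assumption.
OTHER_SIDE="ws2-login"

# Mirror HOME onto the second HOME, removing files deleted on this side.
rsync -av --delete "$HOME/" "${OTHER_SIDE}:${HOME}/"

# Selected TCWORK directories can be copied the same way
# ("my_experiment" is just a placeholder directory name).
rsync -av "$TCWORK/my_experiment/" "${OTHER_SIDE}:${TCWORK}/my_experiment/"
```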
...
As a zid, you will be able to access the "t*" time-critical QoSes, which have a higher priority than their standard "n*" counterparts:
QoS name | Type | Suitable for... | Shared nodes | Maximum jobs per user | Default / Max Wall Clock Limit | Default / Max CPUs | Default / Max Memory |
---|---|---|---|---|---|---|---|
tf | fractional | serial and small parallel jobs. | Yes | - | 1 day / 1 day | 1 / 64 | 8 GB / 128 GB |
tp | parallel | parallel jobs requiring more than half a node | No | - | 6 hours / 1 day | - | - |
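For illustration, a minimal batch job requesting the time-critical fractional QoS could look like the sketch below. The directives are standard Slurm options; the job name, executable and resource values are placeholders, chosen to stay within the tf limits in the table above.

```bash
#!/bin/bash
#SBATCH --qos=tf                  # time-critical fractional QoS (shared nodes)
#SBATCH --job-name=tc2_example
#SBATCH --ntasks=4                # within the 1 / 64 CPU range for tf
#SBATCH --mem=16G                 # within the 8 GB / 128 GB range for tf
#SBATCH --time=06:00:00           # within the 1-day wall clock limit
#SBATCH --output=tc2_example.%j.out

# Run from the main storage area for job input and output.
cd "$TCWORK"
srun ./my_model                   # "my_model" is a placeholder executable
```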
ecFlow settings
Warning |
---|
While the availability of virtual infrastructure to run ecFlow servers remains limited, you may start your ecFlow servers on the dedicated HPCF node in the interim so that you can run your suites, as detailed in HPC2020: Using ecFlow. However, keep the new model in mind when migrating or designing your solution. |
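As a rough sketch of what starting a server there might look like (the module name, the use of the `ecflow_start.sh` utility shipped with ecFlow, and the default port convention are assumptions here; follow HPC2020: Using ecFlow for the actual procedure):

```bash
# Sketch only: module name and port convention are assumptions.
module load ecflow                          # load the ecFlow installation
ecflow_start.sh -d "$HOME/ecflow_server"    # keep server files under HOME

# ecflow_start.sh conventionally uses port 1500 + your numeric user id;
# point the client at this server and check that it responds.
export ECF_HOST=$(hostname)
export ECF_PORT=$((1500 + $(id -u)))
ecflow_client --ping
```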
...