...
**Note:** There is no PERM or SCRATCH, and the corresponding environment variables will not be defined.
Selecting the STHOST
The storage server to use is controlled by the environment variable STHOST, which may take the values "ws1" or "ws2". This variable must be defined when logging in, and also in all jobs that run in batch. If you log in interactively without passing the environment variable, you will be prompted to choose the desired STHOST:
...
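One way to avoid the interactive prompt, assuming the login nodes accept forwarded environment variables, is to export STHOST locally and forward it with OpenSSH's SendEnv option. This is only a sketch: the host name below is a placeholder, and forwarding only works if the server's AcceptEnv configuration permits it.

```bash
# Define the storage server locally, then forward it at login.
# "hpc-login" is a placeholder host name, not the real one.
export STHOST=ws1                     # or ws2
ssh -o SendEnv=STHOST hpc-login
```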
**Note:** If you are submitting a ksh job, make sure you include this line right after the SBATCH directives header:
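Batch jobs in general also need STHOST defined, as noted above. Below is a minimal bash sketch of a job script that exports it right after the directives header; the job name, QoS, and wall-clock limit are illustrative assumptions (the QoS names are described in the next section):

```bash
#!/bin/bash
#SBATCH --job-name=sthost-example   # hypothetical job name
#SBATCH --qos=tf                    # time-critical fractional QoS, see below
#SBATCH --time=01:00:00

# Select the storage server for this job (ws1 or ws2).
export STHOST=ws1

echo "Running with storage server: $STHOST"
```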
High-priority batch access
As a zid, you will be able to access the "t*" time-critical QoSes, which have a higher priority than their standard "n*" counterparts:
| QoS name | Type | Suitable for... | Shared nodes | Maximum jobs per user | Default / Max Wall Clock Limit | Default / Max CPUs | Default / Max Memory |
|---|---|---|---|---|---|---|---|
| tf | fractional | serial and small parallel jobs | Yes | - | 1 day / 1 day | 1 / 64 | 8 GB / 128 GB |
| tp | parallel | parallel jobs requiring more than half a node | No | - | 6 hours / 1 day | - | - |
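For instance, a QoS can be requested either at submission time or with a directive in the job script. A short sketch (the script name and task count are placeholders; keep the task count within the tf limits above):

```bash
# Request the time-critical fractional QoS on the command line...
sbatch --qos=tf my_job.sh

# ...or equivalently with a directive inside the job script:
#SBATCH --qos=tf
#SBATCH --ntasks=4
```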
Ecflow settings
**Warning:** While the availability of virtual infrastructure to run ecFlow servers remains limited, you may start your ecFlow servers on the interim HPCF dedicated node so that you can run your suites, as detailed in HPC2020: Using ecFlow. However, keep the new model in mind when migrating or designing your solution.
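As an illustration only, starting a server on that node might look like the sketch below, assuming the ecflow_start.sh wrapper shipped with ecFlow is available there; the module name, port number, and home directory are placeholders, and HPC2020: Using ecFlow remains the authoritative reference.

```bash
# Hypothetical sketch: start an ecFlow server and check it responds.
module load ecflow                       # assumption: ecFlow provided as a module
ecflow_start.sh -p 3141 -d $HOME/ecflow_server
ecflow_client --port 3141 --ping
```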
...