...
Info |
---|
You can also use these options as command line arguments to sbatch. |
General directives
Directive | Description | Default
---|---|---
--account=<account> | Project account for resource accounting and billing purposes. | default project account for the user
--job-name=<name> | A descriptive name for the job. | script name
--chdir=... | Working directory of the job. The output and error files can be defined relative to this directory. | submitting directory
--output=<path> | Path to the file where standard output is redirected. Special placeholders for job id (%j) and the execution node (%N). | slurm-%j.out
--error=<path> | Path to the file where standard error is redirected. Special placeholders for job id (%j) and the execution node (%N). | output value
--qos=<qos> | Quality of Service (or queue) where the job is to be submitted. Check the available queues for the platform. | normal
--time=<time> | Wall clock limit of the job. Note that this is not a CPU time limit. The format can be: m, m:s, h:m:s, d-h, d-h:m or d-h:m:s. | qos default time limit
--mail-type=<type> | Notify the user by email when certain event types occur. Valid values are: BEGIN, END, FAIL, REQUEUE and ALL. | disabled
--mail-user=<email> | Email address to which notifications are sent. | submitting user
...
--export=<vars> | Export variables to the job, comma-separated entries of the form VAR=VALUE. ALL means export the entire environment from the submitting shell into the job; NONE means starting a fresh session. |
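Putting several of the general directives together, a job script might look like the following sketch (the job name, log paths, email address and time limit are illustrative placeholders, not values mandated by the platform):

```shell
#!/bin/bash
#SBATCH --job-name=my-analysis
#SBATCH --qos=normal
#SBATCH --time=01:30:00
#SBATCH --output=my-analysis-%j.out
#SBATCH --error=my-analysis-%j.err
#SBATCH --mail-type=END,FAIL
#SBATCH --mail-user=user@example.com

# The body of the script runs on the allocated node
echo "Job started on $(hostname)"
```

Any of these `#SBATCH` lines could equally be passed on the command line, e.g. `sbatch --time=01:30:00 job.sh`, as noted in the Info box above.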
Directives for resource allocation - non-serial jobs
Note |
---|
These directives are not available for ECGATE or Linux Clusters outside a parallel queue |
...
Directive | Description | Default
---|---|---
--ntasks=<tasks> | Allocate resources for the specified number of parallel tasks. Note that a job requesting more than one task must be submitted to a parallel queue. There might not be any parallel queue configured on the cluster. | 1
--nodes=<nodes> | Allocate <nodes> number of nodes to the job. | 1
--cpus-per-task=<threads> | Allocate <threads> number of CPUs for every task. Use for threaded applications. | 1
--ntasks-per-node=<tasks> | Allocate a maximum of <tasks> tasks on every node. | node capacity
--threads-per-core=<threads> | Allocate <threads> threads on every core (hyperthreading). | core thread capacity
--mem-per-cpu=<mem> | Allocate <mem> memory for each task. | |
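As a sketch, a hybrid MPI/OpenMP job could combine these directives as follows. The queue name, task and thread counts, and the application name are assumptions for illustration; check the queues available on your platform:

```shell
#!/bin/bash
#SBATCH --qos=parallel        # assumed name of a parallel queue; platform-specific
#SBATCH --ntasks=8            # 8 parallel (e.g. MPI) tasks
#SBATCH --cpus-per-task=4     # 4 threads per task, for a threaded application
#SBATCH --ntasks-per-node=4   # spread: at most 4 tasks on each node

# Match the OpenMP thread count to the allocated CPUs per task
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# srun launches the tasks across the allocation; my_hybrid_app is a placeholder
srun ./my_hybrid_app
```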
Tip |
---|
See man sbatch or https://slurm.schedmd.com/sbatch.html for the complete list of directives and their options. |
Job variables
Inside a job, you can benefit from some variables defined by SLURM automatically. Some examples are:
- SLURM_JOBID
- SLURM_NODELIST
- SLURM_SUBMIT_DIR
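For instance, a job script might print these variables to label its output (a minimal sketch; the variables are only defined inside a running job):

```shell
#!/bin/bash
#SBATCH --job-name=vars-demo

# Defined automatically by SLURM for every job
echo "Job ID:               $SLURM_JOBID"
echo "Allocated nodes:      $SLURM_NODELIST"
echo "Submission directory: $SLURM_SUBMIT_DIR"
```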
Job arrays
Job arrays offer a mechanism for submitting and managing collections of similar jobs quickly and easily. The array index values are specified using the --array or -a option of the sbatch command. The option argument can be specific array index values, a range of index values, and an optional step size, as shown in the examples below. Each job in the array will have the environment variable SLURM_ARRAY_TASK_ID set to its own array index value.
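The three argument forms (specific values, a range, and a range with a step size) can be sketched as follows, assuming a job script called job.sh:

```shell
# Specific index values: submits jobs with indices 1, 2 and 5
sbatch --array=1,2,5 job.sh

# Range of index values: submits jobs with indices 0 to 31
sbatch --array=0-31 job.sh

# Range with a step size: indices 0 to 20 in steps of 4 (0, 4, 8, 12, 16, 20)
sbatch --array=0-20:4 job.sh
```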
...
Tip |
---|
The --array option can also be used inside the job script as a job directive. For example: |
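A minimal sketch of a job script using --array as a directive (the index range and input file names are illustrative):

```shell
#!/bin/bash
#SBATCH --job-name=array-demo
#SBATCH --array=1-10

# Each member of the array processes its own input file,
# selected via SLURM_ARRAY_TASK_ID (1 to 10 here)
./process input.$SLURM_ARRAY_TASK_ID
```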
Job arrays or other multiple concurrent jobs using IDLIf you are running a job array or other multiple concurrent jobs on lxc that call IDL then it is good to constrain these to run on a small number of nodes to limit the number of IDL licences requested. To do this add the --constraint=idl option to the scripts job directives:
|
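As a sketch, the constraint would sit alongside the array specification (the index range and the IDL routine name are placeholders):

```shell
#!/bin/bash
#SBATCH --array=1-50
#SBATCH --constraint=idl   # restrict to IDL-tagged nodes, capping licence usage

# Run an IDL routine for this array member; my_routine is a placeholder
idl -e my_routine
```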
...