Scheduling jobs

Instructor note

Total: 75 min (Teaching: 45 min | Discussion: 0 min | Breaks: 0 min | Exercises: 30 min)

Objectives

  • Questions

    • What is a scheduler and why is it used?

    • How do I launch a program to run on any one node in the cluster?

    • How do I capture the output of a program that is run on a node in the cluster?

  • Objectives

    • Run a simple Hello World style program on the cluster.

    • Submit a simple Hello World style script to the cluster.

    • Use the batch system command line tools to monitor the execution of your job.

    • Inspect the output and error files of your jobs.

  • Keypoints

    • The scheduler handles how compute resources are shared between users.

    • Everything you do should be run through the scheduler.

    • A job is just a shell script.

    • If in doubt, request more resources than you will need.

Job scheduler

An HPC system might have thousands of nodes and thousands of users. How do we decide who gets what and when? How do we ensure that a task is run with the resources it needs? This job is handled by a special piece of software called the scheduler. On an HPC system, the scheduler manages which jobs run where and when.

The following illustration compares the tasks of a job scheduler to those of a waiter in a restaurant. If you have ever had to wait in a queue to get into a popular restaurant, you may now understand why your job sometimes does not start instantly, as it would on your laptop.

Compare a job scheduler to a waiter in a restaurant

The scheduler used in this lesson is SLURM. Although SLURM is not used everywhere, running jobs is quite similar regardless of what software is being used. The exact syntax might change, but the concepts remain the same.

Discussion

Have you experienced having to wait to get into a popular restaurant?

Creating our test job

The most basic use of the scheduler is to run a command non-interactively. Any command (or series of commands) that you want to run on the cluster is called a job, and the process of using a scheduler to run the job is called batch job submission.

In this case, the job we want to run is just a shell script. Let’s create a demo shell script to run as a test. The landing pad will have a number of terminal-based text editors installed. Use whichever you prefer. Unsure? nano is a pretty good, basic choice.

Instructor note

What is a terminal text editor?

## Create a file named example-job.sh using your favourite editor
## The content should match what the `cat` command
## displays below
[MY_USER_NAME@CLUSTER_NAME ~]$ nano example-job.sh 
[MY_USER_NAME@CLUSTER_NAME ~]$ cat example-job.sh

#!/bin/bash

echo -n "This script is running on "
hostname

## Make the file executable
[MY_USER_NAME@CLUSTER_NAME ~]$ chmod +x example-job.sh

Run the script on the login node.

Instructor note

Remind learners what login nodes and compute nodes are.

[MY_USER_NAME@CLUSTER_NAME ~]$ ./example-job.sh

This script is running on <HOSTNAME_OF_CLUSTER_LOGIN_NODE>

When launched like this, the script runs on the login node.

There is a distinction between running the job through the scheduler and just “running it”. To submit this job to the scheduler, we use the sbatch command.

Note

Find your project account(s)

All users on HPC clusters in Norway are part of at least one project. Each project is allocated an amount of credit; the currency used is “CPU-hours”. When communicating with the scheduler, you should indicate which project account should be charged.

# Most likely your output will be different,
# but you should see at least one project

[MY_USER_NAME@CLUSTER_NAME ~]$ projects
nn9736k
nn9997k
nn9999k
nn9999o

Warning

You need to replace <PROJECT_NAME> in the following command with one of the project names printed by the projects command above.

Discussion

Do you have any examples of automation that you can relate to the scheduler?

Instructor note

Tricks to bypass the queue during the course:

--account=nn9970k --partition=nn9970k_course_CPU

[MY_USER_NAME@CLUSTER_NAME ~]$ sbatch --account=<PROJECT_NAME> --mem=1G --time=01:00 example-job.sh

And that’s all we need to do to submit a job. Our work is done – now the scheduler takes over and tries to run the job for us. While the job is waiting to run, it goes into a list of jobs called the queue. To check on our job’s status, we check the queue using the command squeue -u $USER.

[MY_USER_NAME@CLUSTER_NAME ~]$ squeue -u $USER

JOBID   PARTITION NAME      USER    ST  TIME  NODES  NODELIST(REASON)
137860  normal    example-  usernm  R   0:02  1      c5-59

The best way to check our job’s status is with squeue. Of course, running squeue repeatedly to check on things can be a little tiresome. To see a real-time view of our jobs, we can pass the --iterate argument to squeue, together with how frequently (in seconds) the output should be refreshed.

Warning

  1. Press Ctrl-C when you want to stop the command.

  2. This test job may end before the first iteration starts; if that happens, we will see the output with a later job that takes more time.

[MY_USER_NAME@CLUSTER_NAME ~]$ squeue -u $USER --iterate 5

Where’s the output?

On the login node, this script printed output to the terminal – but when we submit it with sbatch, nothing appears in the terminal. Where did it go?

Cluster job output is typically redirected to a file in the directory you launched it from. Use ls to find and read the file.
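For example, with the job ID 137860 from the squeue output above (your job ID will differ):

[MY_USER_NAME@CLUSTER_NAME ~]$ ls slurm-*.out
slurm-137860.out
[MY_USER_NAME@CLUSTER_NAME ~]$ cat slurm-137860.out

This script is running on c5-59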

Customising a job

The job we just ran used all the scheduler’s default options. In a real-world scenario, that’s probably not what we want. The default options represent a reasonable minimum. Chances are, we will need more cores, more memory, more time, among other special considerations. To get access to these resources we must customize our job script.

Comments in UNIX shell scripts (denoted by #) are typically ignored, but there are exceptions. For instance the special #! comment at the beginning of scripts specifies what program should be used to run it (you’ll typically see #!/bin/bash). Schedulers like SLURM also have a special comment used to denote special scheduler-specific options. Though these comments differ from scheduler to scheduler, SLURM’s special comment is #SBATCH. Anything following the #SBATCH comment is interpreted as an instruction to the scheduler.

Let’s illustrate this by example. By default, a job’s name is the name of the script, but the --job-name option can be used to change the name of a job. Add an option to the script:

[MY_USER_NAME@CLUSTER_NAME ~]$ cat example-job.sh

#!/bin/bash

#SBATCH --job-name=new_name

echo -n "This script is running on "
hostname
echo "This script has finished successfully."

sleep 60

Instructor note

Explain what sleep 60 does.

https://open.pages.sigma2.no/job-script-generator/

Submit the job (using sbatch <SLURM arguments> example-job.sh, as in the example below) and monitor it:
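For example, reusing the account and memory flags from the earlier submission (the one-hour time limit is illustrative and more than covers the sleep 60):

[MY_USER_NAME@CLUSTER_NAME ~]$ sbatch --account=<PROJECT_NAME> --mem=1G --time=01:00:00 example-job.sh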

[MY_USER_NAME@CLUSTER_NAME ~]$ squeue -u $USER

JOBID ACCOUNT        NAME     ST REASON   START_TIME TIME TIME_LEFT NODES CPUS
38191 <PROJECT_NAME> new_name PD Priority N/A        0:00 1:00:00   1     1

Fantastic, we’ve successfully changed the name of our job!

Exercise

Find out how to change the default output and error log file names. By default, both are written to the same file, slurm-<JOBID>.out.

Hint: You can use the manual pages for SLURM utilities to find out more about their capabilities. On the command line, these are accessed through the man utility: run man <program-name>. You can find the same information online by searching for “man sbatch” (https://slurm.schedmd.com/sbatch.html).

[MY_USER_NAME@CLUSTER_NAME ~]$ man sbatch
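One possible solution (the file names here are illustrative): use the --output and --error options of sbatch, where the pattern %j is replaced by the job ID:

#SBATCH --output=my_job-%j.out
#SBATCH --error=my_job-%j.err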

Resource requests

But what about more important changes, such as the number of cores and memory for our jobs? One thing that is absolutely critical when working on an HPC system is specifying the resources required to run a job. This allows the scheduler to find the right time and place to schedule our job. If you do not specify requirements (such as the amount of time you need), you will likely be stuck with your site’s default resources, which is probably not what you want.

The following are several key resource requests (a combined example follows the list):

Discussion

  • --ntasks=<ntasks> or -n <ntasks>: How many CPU cores does your job need, in total?

  • --time <days-hours:minutes:seconds> or -t <days-hours:minutes:seconds>: How much real-world time (walltime) will your job take to run? The <days> part can be omitted.

  • --mem=<megabytes>: How much memory on a node does your job need in megabytes? You can also specify gigabytes by adding a little “G” afterwards (example: --mem=5G).

  • --nodes=<nnodes> or -N <nnodes>: How many separate machines does your job need to run on? Note that if you set ntasks to a number greater than what one machine can offer, Slurm will set this value automatically.
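Putting these together, the top of a job script might look like this (a sketch; the values are illustrative):

#!/bin/bash
#SBATCH --account=<PROJECT_NAME>
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --time=0-00:30:00
#SBATCH --mem=2G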

Note that just requesting these resources does not make your job run faster! We’ll talk more about how to make sure that you’re using resources effectively in a later episode of this lesson.

Discussion

Discuss the scontrol and sacct commands. A couple of starting points are sketched below.
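For example, to inspect the job from the earlier example (the job ID is illustrative; scontrol works best while the job is still in the queue, sacct also afterwards):

[MY_USER_NAME@CLUSTER_NAME ~]$ scontrol show job 38191
[MY_USER_NAME@CLUSTER_NAME ~]$ sacct -j 38191 --format=JobID,JobName,Elapsed,State,MaxRSS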

Examples

Submit a job that will use 1 full node and 1 minute of walltime.

[MY_USER_NAME@CLUSTER_NAME ~ ]$ cat example-job.sh
   
#!/bin/bash
#SBATCH --nodes=1 --exclusive
#SBATCH --time=00:01:00

echo -n "This script is running on "
sleep 60 # time in seconds
hostname
echo "This script has finished successfully."

[MY_USER_NAME@CLUSTER_NAME ~]$ sbatch <SLURM arguments> example-job.sh

Binding resource request

Resource requests are typically binding. If you exceed them, your job will be killed. Let’s use walltime as an example. We will request 60 seconds of walltime, and attempt to run a job for two minutes.

[MY_USER_NAME@CLUSTER_NAME ~]$ cat example-job.sh

  #!/bin/bash
  #SBATCH --nodes=1
  #SBATCH --time=00:01:00
  #SBATCH --account=<PROJECT_NAME> 
  #SBATCH --mem=1G
  #SBATCH --job-name=long_job

  echo -n "This script is running on ... "
  sleep 120 # time in seconds
  hostname
  echo "This script has finished successfully."

Submit the job and wait for it to finish. Once it has finished, check the log file.

[MY_USER_NAME@CLUSTER_NAME ~]$ sbatch  example-job.sh
[MY_USER_NAME@CLUSTER_NAME ~]$ watch -n 15 squeue -u $USER
....
[MY_USER_NAME@CLUSTER_NAME ~]$ cat slurm-38193.out

This script is running on ...
slurmstepd: error: *** JOB 38193 ON c1-14 CANCELLED AT 2021-07-02T16:35:48
DUE TO TIME LIMIT ***

Our job was killed for exceeding the amount of resources it requested. Although this appears harsh, it is actually a feature. Strict adherence to resource requests allows the scheduler to find the best possible place for your jobs. Even more importantly, it ensures that another user cannot use more resources than they’ve been given. If another user messes up and accidentally attempts to use all the cores or memory on a node, SLURM will either restrain their job to the requested resources or kill it outright. Other jobs on the node will be unaffected. This means that one user cannot mess up the experience of others; the only jobs affected by a mistake in scheduling will be their own.

Cancelling a job

Sometimes we’ll make a mistake and need to cancel a job. This can be done with the scancel command. Let’s submit a job and then cancel it using its job number (remember to change the walltime so that it runs long enough for you to cancel it before it is killed!).

Warning

Please make sure to cancel the job you submitted using the correct job ID (copy-pasting the job ID from the example will not work).

[MY_USER_NAME@CLUSTER_NAME ~]$ cat example-job.sh

  #!/bin/bash
  #SBATCH --nodes=1
  #SBATCH --time=00:10:00
  #SBATCH --account=<PROJECT_NAME>
  #SBATCH --mem=1G
  #SBATCH --job-name=long_job

  echo -n "Script has started ... "
  sleep 300 # time in seconds

[MY_USER_NAME@CLUSTER_NAME ~]$ sbatch example-job.sh

Submitted batch job 38759

[MY_USER_NAME@CLUSTER_NAME ~]$ squeue -u $USER

JOBID ACCOUNT        NAME     ST REASON   TIME TIME_LEFT NODES CPUS
38759 <PROJECT_NAME> long_job PD Priority 0:00 10:00     1     1

Now cancel the job with its job number (printed in your terminal). A clean return of your command prompt indicates that the request to cancel the job was successful.

[MY_USER_NAME@CLUSTER_NAME ~]$ scancel 38759
# ... Note that it might take a minute for the job to disappear from the queue ...
[MY_USER_NAME@CLUSTER_NAME ~]$ squeue -u $USER

JOBID  USER  ACCOUNT  NAME  ST  REASON  START_TIME  TIME  TIME_LEFT  NODES CPUS
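If you want to cancel all of your own jobs at once, scancel also accepts a username instead of a job ID:

[MY_USER_NAME@CLUSTER_NAME ~]$ scancel -u $USER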

Other types of jobs

Up to this point, we’ve focused on running jobs in batch mode. SLURM also provides the ability to start an interactive session.

Very frequently, there are tasks that need to be done interactively. Creating an entire job script might be overkill, but the amount of resources required is too much for a login node to handle. A good example of this might be building a genome index for alignment with a tool like HISAT2. Fortunately, we can run these types of tasks as a one-off with srun, as in the sketch below.
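For example, a one-off command on a compute node might look like this (a sketch; the resource flags are illustrative):

[MY_USER_NAME@CLUSTER_NAME ~]$ srun --account=<PROJECT_NAME> --time=05:00 --mem=1G hostname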

Interactive jobs

Instead of running on a login node, you can ask the queue system to allocate compute resources for you, and once assigned, you can run commands interactively for as long as requested. Below is an example.

Warning

Interactive jobs require that you maintain a stable, uninterrupted connection to the cluster. Therefore we use a terminal multiplexer like tmux or screen.

[MY_USER_NAME@CLUSTER_NAME ~]$ salloc --account=nn9970k --time=30:00 --nodes=1 --ntasks=1 --mem=4G

Find out available resources on a compute node

Use an interactive login to access a compute node and find out the number of cores and the amount of memory. How much of it is at your disposal? One possible approach is sketched below.
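One possible approach, once the interactive allocation from the example above has started (a sketch; nproc and free are standard Linux tools, and the compute-node prompt is illustrative):

[COMPUTE_NODE]$ nproc    # CPU cores visible on the node
[COMPUTE_NODE]$ free -h  # total and available memory, human-readable

Compare what you see with the resources you actually requested.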

Keeping interactive jobs alive

Interactive jobs stop when you disconnect from the login node, either by choice or because of internet connection problems. To keep a job alive you can use a terminal multiplexer like tmux.

tmux allows you to run processes as usual in your standard bash shell.

You start tmux on the login node before you get an interactive Slurm session with srun and then do all the work in it. In case of a disconnect you simply reconnect to the login node and attach to the tmux session again by typing:

$ tmux attach

Or in case you have multiple sessions running:

$ tmux list-session
$ tmux attach -t SESSION_NUMBER
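To make reattaching easier, you can also give a session a name when you start it (the session name here is illustrative):

$ tmux new -s hpc-work
$ tmux attach -t hpc-work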

As long as the tmux session is not closed or terminated (e.g. by a server restart), your session should continue. One problem with our systems is that the tmux session is bound to the particular login server you are connected to. So if you start a tmux session on login-1 on SAGA and next time you are randomly connected to login-2, you first have to connect to login-1 again by:

$ ssh login-1

To detach from a tmux session without closing it, press Ctrl-B (that is, the Ctrl key and “b” simultaneously, which is the standard tmux prefix) and then “d” (without the quotation marks). To close a session, just end the bash session with either Ctrl-D or by typing exit. You can get a list of all tmux commands with Ctrl-B followed by ? (question mark). See also this page for a short tutorial of tmux. Otherwise, working inside a tmux session is almost the same as a normal bash session.