Working on a remote HPC system

Instructor note

Total: 30 min (Teaching: 30 min | Discussion: 0 min | Breaks: 0 min | Exercises: 0 min)

Prerequisites

  • An active user account on one of the clusters

  • Basic UNIX knowledge

  • Familiarity with the UNIX terminal

Objectives

  • Questions

    • What is an HPC system?

    • How does an HPC system work?

    • How do I log on to a remote HPC system?

  • Keypoints

    • An HPC system is a set of networked machines.

    • Connect to a remote HPC system.

    • Understand the general HPC system architecture.

    • HPC systems typically provide login nodes and a set of worker nodes.

    • The resources found on independent (worker) nodes can vary in volume and type (amount of RAM, processor architecture, availability of network mounted file systems, etc.).

    • Files saved on a shared file system are available on all nodes.

Prerequisites

To work on a remote HPC system you must have a terminal, or some other application that can act as a terminal, for instance VSCode.

All Linux and macOS laptops have a terminal, and nowadays all Windows laptops do too.

See the UNIX-for-HPC intro lesson if you are not sure how to find your terminal.

Test if ssh is installed before proceeding

Check ssh version

To verify if ssh is installed on your system, run the following command in your terminal:

[user@laptop ~]$ ssh -V

If ssh is installed, you will see output similar to:

OpenSSH_x.x xx, OpenSSL xxx xxx

This confirms that ssh is installed and ready to use.
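The same check can also be scripted. The sketch below uses the POSIX `command -v` builtin, which succeeds only if a program is found on the PATH:

```shell
# Check whether an ssh client is available before proceeding.
# 'command -v' succeeds only if the program exists on the PATH.
if command -v ssh >/dev/null 2>&1; then
    echo "ssh client found"
else
    echo "ssh client missing"
fi
```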

Danger

[user@laptop ~]$ ssh -V
Command 'ssh' not found

This means ssh is not installed. Inform the instructor before proceeding.

Get a user account and access to a project

Apply for a user account and project access at https://www.sigma2.no/how-apply-user-account

Logging in

The first step in using a cluster is to establish a connection from your laptop to the cluster, via the Internet and/or your organization’s network. We use a program called a Secure Shell (ssh) client for this. It establishes a temporary encrypted connection between your laptop and the compute cluster.

../_images/connect-to-remote.svg

Go ahead and log in to the cluster:

See examples of the syntax to log into the cluster below. The word before the @ symbol, i.e. MY_USER_NAME here, is the user account name on that cluster.

Make sure to replace MY_USER_NAME with your own user account name. When the system asks you for your password, type it and press Enter.

Note

The characters you type after the password prompt are not displayed on the screen. Normal output will resume once you type the password and press Enter.

[user@laptop ~]$ ssh MY_USER_NAME@saga.sigma2.no
(MY_USER_NAME@saga.sigma2.no) One-time password (OATH) for 'MY_USER_NAME': 
(MY_USER_NAME@saga.sigma2.no) Password:
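Tip: if you log in often, you can save the hostname and user name in your OpenSSH client configuration file. This is an optional convenience, not required for the lesson; the alias name saga below is just an example:

```
# ~/.ssh/config  -- 'saga' is an example alias name
Host saga
    HostName saga.sigma2.no
    User MY_USER_NAME
```

With this entry in place, ssh saga is equivalent to the full command above.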

Where are we?

Many users think of a high-performance computing installation as one giant machine, and assume that the machine they have logged on to is the entire cluster. So what is really happening? Which computer have we logged on to? You can check the name of the current computer with the hostname command. (You may also notice that the current hostname is part of our prompt!)

[MY_USER_NAME@login-1.SAGA ~]$ hostname

login-1.saga.sigma2.no

The machine is typically called login-N.<system> - where N is a number and <system> includes the system name, like saga or betzy. So this suggests that this is a special machine of type “login” and that there are several of these. Let’s explain how a typical compute cluster is built up.

Nodes

../_images/connect-to-remote-login-node.svg

Individual computers that compose a cluster are typically called nodes (although you will also hear people call them servers, computers and machines). On a cluster, there are different types of nodes for different types of tasks. The node where you are right now is called the head node, login node, landing pad, or submit node. A login node serves as an access point or a gateway to the cluster. As a gateway, it is well suited for uploading and downloading files, setting up software, and running quick tests. It should never be used for doing actual work.

The real work on a cluster gets done by the worker (or compute) nodes. Worker nodes come in many shapes and sizes, but generally are dedicated to long or hard tasks that require a lot of computational resources.

Scheduling system

All interaction with the worker nodes is handled by a specialized piece of software called a scheduler (the scheduler used in this lesson is SLURM; see the Scheduling jobs episode). We’ll learn more about how to use the scheduler to submit jobs next, but for now, it can also tell us more about the worker nodes.

For example, we can view all the worker nodes with the sinfo command, which is a command belonging to our SLURM scheduler system.

[MY_USER_NAME@login-1.SAGA ~]$ sinfo

PARTITION    AVAIL  TIMELIMIT  NODES  STATE NODELIST
normal*      up 7-00:00:00      1  drain c1-10
normal*      up 7-00:00:00     70   resv c1-[19-22,24....
normal*      up 7-00:00:00    108    mix c1-[8,17],c2....
normal*      up 7-00:00:00    141  alloc c1-[1-7,9,11....
bigmem       up 14-00:00:0     10    mix c3-[33-34,51....
bigmem       up 14-00:00:0     25  alloc c3-[29-32,35....
bigmem       up 14-00:00:0      1   idle c6-6
accel        up 14-00:00:0      6    mix c7-[3-8]
accel        up 14-00:00:0      2  alloc c7-[1-2]
optimist     up   infinite      1  drain c1-10
optimist     up   infinite     70   resv c1-[19-22,24....
optimist     up   infinite    124    mix c1-[8,17],c2....
optimist     up   infinite    168  alloc c1-[1-7,9,11....
optimist     up   infinite      1   idle c6-6
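The sinfo output can be summarized with standard shell tools. As a sketch, the snippet below saves a few of the lines shown above to a local file and sums the node counts (fourth column) per state (fifth column); on the cluster you would pipe sinfo directly instead:

```shell
# A few lines of the sinfo output above, saved locally for illustration.
cat > sinfo.txt <<'EOF'
normal*      up 7-00:00:00      1  drain c1-10
normal*      up 7-00:00:00     70   resv c1-[19-22,24....
bigmem       up 14-00:00:0      1   idle c6-6
accel        up 14-00:00:0      6    mix c7-[3-8]
EOF

# Sum the node counts (column 4) per node state (column 5).
awk '{count[$5] += $4} END {for (s in count) print s, count[s]}' sinfo.txt
# prints, in some order: drain 1, resv 70, idle 1, mix 6
```

On the cluster itself you would skip the header line, e.g. sinfo | awk 'NR > 1 {count[$5] += $4} END {for (s in count) print s, count[s]}'.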

What’s in a node?

All the nodes in an HPC system have the same components as your own laptop or desktop: CPUs (sometimes also called processors or cores), memory (or RAM), and disk space.

CPUs are a computer’s tool for actually running programs and calculations.

Information about a current task is stored in the computer’s memory.

Disk refers to all storage that can be accessed like a file system. This is generally storage that can hold data permanently, i.e. data is still there even if the computer has been restarted. While this storage can be local (a hard drive installed inside the node), it is more common for nodes to connect to a shared, remote fileserver or cluster of servers.

../_images/node-hardware.svg

Note

Many HPC clusters have a variety of nodes optimized for particular workloads. Some nodes may have larger amounts of memory, or specialized resources such as Graphics Processing Units (GPUs).
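You can inspect these resources yourself on any Linux node with standard tools (a sketch; the free command is Linux-specific and may not exist on every system, so it is guarded here):

```shell
nproc                                  # number of CPU cores available to you
command -v free >/dev/null && free -h  # total and available memory (RAM)
df -h .                                # disk space behind the current directory
```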

Shared file systems

Files stored on a node can be stored either on a shared file system or a local file system. Files that are stored on a shared file system are available to all nodes (login or compute) that have this shared file system mounted.

An example of a shared filesystem is your home directory; another is a project area, such as a folder under /cluster/projects.

../_images/shared-file-system.svg

Local file systems

An example of a local filesystem is the scratch file system available on a particular worker node. A local filesystem is not shared with other nodes.

What’s in your home directory?

When you log in to a system you land in your home directory. This is your private work area. Only you have access to the files and folders in your home directory.

The system administrators may have configured your home directory with some helpful files, folders, and links (shortcuts) to space reserved for you on other filesystems. Or it can just be empty. Take a look around and see what you can find.

Note

Hint: The shell commands pwd and ls may come in handy.

  • pwd - print working directory

  • ls - list directory

See the UNIX-for-HPC intro course for information about these commands if you are not acquainted with them.

Use pwd to print the working directory path:

[MY_USER_NAME@login-1.SAGA ~]$ pwd

/cluster/home/MY_USER_NAME

Exercise

Compare your path with a colleague. The deepest layer (MY_USER_NAME) is uniquely yours - are there differences at higher levels of the path compared to the other participants?

You can run ls to list the directory contents, though it’s possible nothing will show up (if no files have been provided). To be sure, use the -a flag to show hidden files, too.

[MY_USER_NAME@CLUSTER_NAME ~]$ ls -a

At a minimum, this will show two entries: . (the current directory) and .. (the parent directory).
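As a small illustration (in a scratch directory, with hypothetical file names), you can see the difference the -a flag makes:

```shell
# Create a scratch directory with one hidden and one visible file.
mkdir -p ls-demo && cd ls-demo
touch .hidden visible

ls      # lists only: visible
ls -a   # additionally lists: .  ..  .hidden
```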

Note

Once you start placing files and folders in your home directory, it will look different from other people's home directories.

Next up!

With all of this in mind, we will now cover how to talk to the cluster’s scheduler, and use it to start running our scripts and programs!