Working on a remote HPC system

Instructor note

Total: 30 min (Teaching: 30 min | Discussion: 0 min | Breaks: 0 min | Exercises: 0 min)

Prerequisites

  • An active user account on one of the clusters

  • Basic UNIX knowledge

  • Familiarity with the UNIX terminal

Objectives

  • Questions

    • What is an HPC system?

    • How does an HPC system work?

    • How do I log on to a remote HPC system?

  • Keypoints

    • An HPC system is a set of networked machines.

    • Connect to a remote HPC system.

    • Understand the general HPC system architecture.

    • HPC systems typically provide login nodes and a set of worker nodes.

    • The resources found on independent (worker) nodes can vary in volume and type (amount of RAM, processor architecture, availability of network mounted file systems, etc.).

    • Files saved on a shared file system are available on all nodes.

Get a user account and access to a project

 https://www.sigma2.no/how-apply-user-account

Logging in

The first step in using a cluster is to establish a connection from your laptop to the cluster, via the Internet and/or your organization’s network. We use a program called the Secure Shell (ssh) client for this; it establishes a temporary encrypted connection between your laptop and the compute cluster.

Connect to cluster

Make sure you have an SSH client installed on your laptop. Refer to the UNIX intro to HPC section for more details.

Test if ssh is installed before proceeding

Check ssh version

[user@laptop ~]$ ssh -V

OpenSSH_x.x xx, OpenSSL xxx xxx

ssh is installed; OK to continue.

Danger

[user@laptop ~]$ ssh -V

Command ‘ssh’ not found

ssh is not installed; inform the instructor.

Warning

If your Windows version does not provide a terminal with an ssh client, please install Git for Windows (https://gitforwindows.org/) to get one.

Go ahead and log in to the cluster:

See an example of the login syntax below. The word before the @ symbol (MY_USER_NAME here) is the user account name on that cluster.

Make sure to replace MY_USER_NAME with your own user account name. When the system asks you for your password, type it and press Enter.

Note

The characters you type after the password prompt are not displayed on the screen. Normal output will resume once you type the password and press Enter.

[user@laptop ~]$ ssh MY_USER_NAME@saga.sigma2.no
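
Tip: if you connect to the same cluster often, you can store the host name and your user name in your SSH client configuration, so that a short alias is enough. This is optional, and the alias saga and file contents below are only an illustration:

[user@laptop ~]$ cat ~/.ssh/config
Host saga
    HostName saga.sigma2.no
    User MY_USER_NAME

[user@laptop ~]$ ssh saga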

Where are we?

Many users are tempted to think of a high-performance computing installation as one giant, magical machine. Sometimes, people will assume that the computer they’ve logged onto is the entire computing cluster. So what’s really happening? What computer have we logged on to? The name of the current computer we are logged onto can be checked with the hostname command. (You may also notice that the current hostname is also part of our prompt!)

[MY_USER_NAME@login-1.SAGA ~]$ hostname

login-1.saga.sigma2.no

Nodes

[Figure: connecting from your machine to the remote login node]

Individual computers that compose a cluster are typically called nodes (although you will also hear people call them servers, computers and machines). On a cluster, there are different types of nodes for different types of tasks. The node where you are right now is called the head node, login node, landing pad, or submit node. A login node serves as an access point or a gateway to the cluster. As a gateway, it is well suited for uploading and downloading files, setting up software, and running quick tests. It should never be used for doing actual work.
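
Uploading and downloading, for example, can be done from your laptop’s terminal with scp, which copies files over the same SSH connection; the file names below are only placeholders:

[user@laptop ~]$ scp my_script.sh MY_USER_NAME@saga.sigma2.no:~/
[user@laptop ~]$ scp MY_USER_NAME@saga.sigma2.no:~/results.txt .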

The real work on a cluster gets done by the worker (or compute) nodes. Worker nodes come in many shapes and sizes, but generally are dedicated to long or hard tasks that require a lot of computational resources.

All interaction with the worker nodes is handled by a specialized piece of software called a scheduler (the scheduler used in this lesson is SLURM; it is covered in the Scheduling jobs episode). We’ll learn more about how to use the scheduler to submit jobs next, but for now it can also give us information about the worker nodes.

For example, we can view all the worker nodes with the sinfo command, which is part of the SLURM scheduler.

[MY_USER_NAME@login-1.SAGA ~]$ sinfo

PARTITION    AVAIL  TIMELIMIT  NODES  STATE NODELIST
normal*      up 7-00:00:00      1  drain c1-10
normal*      up 7-00:00:00     70   resv c1-[19-22,24....
normal*      up 7-00:00:00    108    mix c1-[8,17],c2....
normal*      up 7-00:00:00    141  alloc c1-[1-7,9,11....
bigmem       up 14-00:00:0     10    mix c3-[33-34,51....
bigmem       up 14-00:00:0     25  alloc c3-[29-32,35....
bigmem       up 14-00:00:0      1   idle c6-6
accel        up 14-00:00:0      6    mix c7-[3-8]
accel        up 14-00:00:0      2  alloc c7-[1-2]
optimist     up   infinite      1  drain c1-10
optimist     up   infinite     70   resv c1-[19-22,24....
optimist     up   infinite    124    mix c1-[8,17],c2....
optimist     up   infinite    168  alloc c1-[1-7,9,11....
optimist     up   infinite      1   idle c6-6
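
In this output, the STATE column tells you whether nodes are fully allocated (alloc), partially allocated (mix), free (idle), reserved (resv), or not accepting new jobs (drain). If you want to list, for example, only the idle nodes in one partition, sinfo can filter by partition and state; the partition name normal below is just an example and will differ between clusters:

[MY_USER_NAME@login-1.SAGA ~]$ sinfo -p normal -t idle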

A cluster also includes specialized machines used for managing disk storage, user authentication, and other infrastructure-related tasks. Although we do not typically log on to or interact with these machines directly, they enable a number of key features, such as ensuring that our user account and files are available throughout the HPC system.

What’s in a node?

All the nodes in an HPC system have the same components as your own laptop or desktop: CPUs (sometimes also called processors or cores), memory (or RAM), and disk space.

CPUs are a computer’s tool for actually running programs and calculations.

Information about a current task is stored in the computer’s memory.

Disk refers to all storage that can be accessed like a file system. This is generally storage that can hold data permanently, i.e. the data is still there even if the computer has been restarted. While this storage can be local (a hard drive installed inside the node), it is more common for nodes to connect to a shared, remote file server or cluster of servers.

Node anatomy

Shared file systems

As mentioned, files stored on a node can be stored either on a shared file system or a local file system. Files that are stored on a shared file system are available to all nodes (login or compute) that have this shared file system mounted.

Examples of shared file systems are your home directory and project folders such as those under /cluster/projects.

An example of a local file system is the scratch file system on a particular worker node.
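
A quick way to check which file system a given directory lives on is to point df at that path; the Type column then shows whether it is a network or parallel file system (shared) or a local device. Using your home directory as an example path:

[MY_USER_NAME@login-1.SAGA ~]$ df -hT $HOME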

Shared storage

What’s in your home directory?

The system administrators may have configured your home directory with some helpful files, folders, and links (shortcuts) to space reserved for you on other filesystems. Take a look around and see what you can find.

Home directory contents vary from user to user. Please discuss any differences you spot with your neighbors.

Hint: The shell commands pwd and ls may come in handy.

Use pwd to print the working directory path:

[MY_USER_NAME@login-1.SAGA ~]$ pwd

/cluster/home/MY_USER_NAME

The deepest layer should differ: MY_USER_NAME is uniquely yours. Are there differences in the path at higher levels?

You can run ls to list the directory contents, though it’s possible nothing will show up (if no files have been provided). To be sure, use the -a flag to show hidden files, too.

[MY_USER_NAME@CLUSTER_NAME ~]$ ls -a

At a minimum, this will show the current directory (.) and the parent directory (..).

If both of you have empty directories, they will look identical. If you or your neighbor has used the system before, there may be differences.

Explore Your Computer

Note that, if you’re logged in to the remote computer cluster, you need to log out first. To do so, press Ctrl+d or type exit:

[MY_USER_NAME@CLUSTER_NAME ~]$ exit
[user@laptop ~]$

Explore your laptop!

Try to find out the number of CPUs and amount of memory available on your personal computer.

There are several ways to do this. Most operating systems have a graphical system monitor, like the Windows Task Manager. More detailed information can be found on the command line:

  • Use system utilities

    [user@laptop ~]$ nproc --all
    [user@laptop ~]$ free -mh
    
  • Read from /proc

    [user@laptop ~]$ cat /proc/cpuinfo
    [user@laptop ~]$ cat /proc/meminfo
    
  • Run system monitor

    [user@laptop ~]$ top -n 1
    
    [user@laptop ~]$ top -n 1 -u $USER  # only processes owned by the current user
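
  • macOS equivalents (the commands above are Linux-specific; if your laptop runs macOS, these sysctl calls are rough equivalents)

    [user@laptop ~]$ sysctl -n hw.ncpu     # number of logical CPUs
    [user@laptop ~]$ sysctl -n hw.memsize  # total RAM in bytes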
    

Explore The Login Node

Now compare the resources of your computer with those of the login node.

  [user@laptop ~]$ ssh MY_USER_NAME@saga.sigma2.no

  [MY_USER_NAME@login-3.SAGA ~]$ nproc --all
  64

  [MY_USER_NAME@login-3.SAGA ~]$ free -m
               total        used        free      shared  buff/cache   available
  Mem:          773152       32325      369097           6      376744      740826
  Swap:           4095           0        4095

  [MY_USER_NAME@login-3.SAGA ~]$ cat /proc/meminfo | head -n5
  MemTotal:       791707708 kB
  MemFree:        377944708 kB
  MemAvailable:   758595312 kB
  Buffers:            1652 kB
  Cached:         361764044 kB
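
Another way to get a structured summary of the CPU layout on the login node is lscpu; the exact output will differ between clusters:

  [MY_USER_NAME@login-3.SAGA ~]$ lscpu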

We can also have a look at the filesystem with the command df.

   [MY_USER_NAME@login-3.SAGA ~]$ df -Th  | grep -v tmpfs
   Filesystem                              Type      Size  Used Avail Use% Mounted on
   /dev/mapper/xcatvg-root                 xfs       7.3T   74G  7.2T   1% /
   /dev/sdb2                               xfs      1018M  222M  797M  22% /boot
   /dev/sdb1                               vfat      256M  7.0M  249M   3% /boot/efi
   export.dl-nird.sigma2.no:/data/lake1    nfs4      8.1P  7.0P  1.1P  87% /nird/datalake 
   export.ts-nird.sigma2.no:/project/fs1   nfs4       19P   14P  4.9P  75% /nird/projects
   export.dl-nird.sigma2.no:/backup/ts/hpc nfs4      5.8P  2.6P  3.3P  44% /cluster/backup/hpc
   beegfs_nodev                            beegfs    5.8P  2.6P  3.3P  44% /cluster
   cvmfs2                                  fuse      9.8G  5.6G  4.3G  57% /cvmfs/software.eessi.io

Which local file systems (ext, tmp, xfs, zfs) you see will depend on which login node (or, later on, compute node) you are on. Networked file systems (beegfs, cifs, gpfs, nfs, pvfs) will be similar, but may include your user account name, depending on how they are mounted.

Compare your computer, the head node and the worker node

Compare your laptop’s number of processors and memory with the numbers you see on the cluster head node and worker node.

What implications do you think the differences might have on running your research work on the different systems and nodes?

Differences Between Nodes

Many HPC clusters have a variety of nodes optimized for particular workloads. Some nodes may have a larger amount of memory, or specialized resources such as Graphics Processing Units (GPUs).
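
You do not need to log in to a worker node to see its resources; the scheduler can report them. With SLURM, for example, sinfo can print each node’s CPU count, memory, and generic resources such as GPUs, and scontrol can show the details of a single node. The node name c1-1 below is only an example:

[MY_USER_NAME@login-1.SAGA ~]$ sinfo -N -o "%N %c %m %G" | head
[MY_USER_NAME@login-1.SAGA ~]$ scontrol show node c1-1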

Next up!

With all of this in mind, we will now cover how to talk to the cluster’s scheduler, and use it to start running our scripts and programs!