Working on a remote HPC system

Instructor note

Total: 30 min (Teaching: 30 min | Discussion: 0 min | Breaks: 0 min | Exercises: 0 min)

Prerequisites

  • An active user account on one of the clusters

  • Basic UNIX knowledge

  • Familiarity with the UNIX terminal

Objectives

  • Questions

    • What is an HPC system?

    • How does an HPC system work?

    • How do I log on to a remote HPC system?

  • Keypoints

    • An HPC system is a set of networked machines.

    • Connect to a remote HPC system.

    • Understand the general HPC system architecture.

    • HPC systems typically provide login nodes and a set of worker nodes.

    • The resources found on independent (worker) nodes can vary in volume and type (amount of RAM, processor architecture, availability of network mounted file systems, etc.).

    • Files saved on one node are available on all nodes.

Get a user account and access to a project

 https://www.sigma2.no/how-apply-user-account

Logging in

The first step in using a cluster is to establish a connection from our laptop to the cluster, via the Internet and/or your organization’s network. We use a program called the Secure SHell (or ssh) client for this. Make sure you have an SSH client installed on your laptop. Refer to the UNIX intro to HPC section for more details.

Connect to the cluster

Instructor note

Remind learners what ssh is

Warning

If you are on a Windows version that does not have a terminal, please install Git for Windows (https://gitforwindows.org/) to get a terminal

Test if ssh is installed before proceeding

Check ssh version

[user@laptop ~]$ ssh -V

OpenSSH_x.x xx, OpenSSL xxx xxx

ssh is installed; OK to continue

Danger

[user@laptop ~]$ ssh -V

Command ‘ssh’ not found

Not OK; inform the instructor

Go ahead and log in to the cluster:

Remember to replace MY_USER_NAME with the username supplied by the instructors. You may be asked for your password.

Warning

The characters you type after the password prompt are not displayed on the screen. Normal output will resume once you type the password and press Enter.

[user@laptop ~]$ ssh MY_USER_NAME@saga.sigma2.no

You are logging in using a program known as the secure shell or ssh. This establishes a temporary encrypted connection between your laptop and the compute cluster. The word before the @ symbol, MY_USER_NAME here, is the user account name that you have permission to use on the cluster.
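
As an aside: if you plan to connect often, you can avoid retyping your password on every login by using SSH keys. A minimal sketch (optional; check your site's policy on keys first) is to generate a key pair on your laptop and copy the public key to the cluster:

[user@laptop ~]$ ssh-keygen -t ed25519
[user@laptop ~]$ ssh-copy-id MY_USER_NAME@saga.sigma2.no

After this, ssh authenticates with the key instead; you will still be asked for the key's passphrase if you set one.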

Where are we?

Many users are tempted to think of a high-performance computing installation as one giant, magical machine. Sometimes, people will assume that the computer they’ve logged onto is the entire computing cluster. So what’s really happening? What computer have we logged on to? The name of the computer we are currently logged onto can be checked with the hostname command. (You may notice that the current hostname is also part of our prompt!)

[MY_USER_NAME@login-1.SAGA ~]$ hostname

login-1.saga.sigma2.no

What’s in your home directory?

The system administrators may have configured your home directory with some helpful files, folders, and links (shortcuts) to space reserved for you on other filesystems. Take a look around and see what you can find.

Home directory contents vary from user to user. Please discuss any differences you spot with your neighbors.

Hint: The shell commands pwd and ls may come in handy.

Use pwd to print the working directory path:

[MY_USER_NAME@login-1.SAGA ~]$ pwd

/cluster/home/MY_USER_NAME

The deepest layer should differ: MY_USER_NAME is uniquely yours. Are there differences in the path at higher levels?

You can run ls to list the directory contents, though it’s possible nothing will show up (if no files have been provided). To be sure, use the -a flag to show hidden files, too.

[MY_USER_NAME@CLUSTER_NAME ~]$ ls -a

At a minimum, this will show the current directory as ., and the parent directory as ...
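
For example, a freshly provisioned home directory might look something like this; the exact dotfiles are site-dependent, so treat the listing as illustrative only:

[MY_USER_NAME@CLUSTER_NAME ~]$ ls -a
.  ..  .bash_history  .bash_profile  .bashrc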

If both of you have empty directories, they will look identical. If you or your neighbor has used the system before, there may be differences. What are you working on?

Nodes

[Figure: connecting from your laptop to the remote login node]

Individual computers that compose a cluster are typically called nodes (although you will also hear people call them servers, computers and machines). On a cluster, there are different types of nodes for different types of tasks. The node where you are right now is called the head node, login node, landing pad, or submit node. A login node serves as an access point to the cluster. As a gateway, it is well suited for uploading and downloading files, setting up software, and running quick tests. It should never be used for doing actual work.

The real work on a cluster gets done by the worker (or compute) nodes. Worker nodes come in many shapes and sizes, but generally are dedicated to long or hard tasks that require a lot of computational resources.

All interaction with the worker nodes is handled by a specialized piece of software called a scheduler (the scheduler used in this lesson is Slurm; see the Scheduling jobs section). We’ll learn more about how to use the scheduler to submit jobs next, but for now, it can also tell us more information about the worker nodes.

For example, we can view all the worker nodes with the sinfo command.

[MY_USER_NAME@login-1.SAGA ~]$ sinfo

PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
normal*      up 7-00:00:00      1  drain c1-10
normal*      up 7-00:00:00     70   resv c1-[19-22,24....
normal*      up 7-00:00:00    108    mix c1-[8,17],c2....
normal*      up 7-00:00:00    141  alloc c1-[1-7,9,11....
bigmem       up 14-00:00:0     10    mix c3-[33-34,51....
bigmem       up 14-00:00:0     25  alloc c3-[29-32,35....
bigmem       up 14-00:00:0      1   idle c6-6
accel        up 14-00:00:0      6    mix c7-[3-8]
accel        up 14-00:00:0      2  alloc c7-[1-2]
optimist     up   infinite      1  drain c1-10
optimist     up   infinite     70   resv c1-[19-22,24....
optimist     up   infinite    124    mix c1-[8,17],c2....
optimist     up   infinite    168  alloc c1-[1-7,9,11....
optimist     up   infinite      1   idle c6-6
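
Beyond this partition summary, sinfo can also report per-node hardware using standard Slurm format flags (%N is the node name, %c the CPU count, %m the memory in MB); the numbers you see will of course depend on the cluster:

[MY_USER_NAME@login-1.SAGA ~]$ sinfo -N -o "%N %c %m" | head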

There are also specialized machines used for managing disk storage, user authentication, and other infrastructure-related tasks. Although we do not typically log on to or interact with these machines directly, they enable a number of key features like ensuring our user account and files are available throughout the HPC system.

What’s in a node?

All the nodes in an HPC system have the same components as your own laptop or desktop: CPUs (sometimes also called processors or cores), memory (or RAM), and disk space. CPUs are a computer’s tool for actually running programs and calculations. Information about a current task is stored in the computer’s memory. Disk refers to all storage that can be accessed like a file system. This is generally storage that can hold data permanently, i.e. data is still there even if the computer has been restarted. While this storage can be local (a hard drive installed inside the node), it is more common for nodes to connect to a shared, remote fileserver or cluster of servers.

Node anatomy

Explore Your Computer

Note that, if you’re logged in to the remote computer cluster, you need to log out first. To do so, type Ctrl+d or exit:

[MY_USER_NAME@CLUSTER_NAME ~]$ exit
[user@laptop ~]$

Try to find out the number of CPUs and amount of memory available on your personal computer. There are several ways to do this. Most operating systems have a graphical system monitor, like the Windows Task Manager. More detailed information can be found on the command line (macOS users: see the note after this list):

  • Use system utilities

    [user@laptop ~]$ nproc --all
    [user@laptop ~]$ free -mh
    
  • Read from /proc

    [user@laptop ~]$ cat /proc/cpuinfo
    [user@laptop ~]$ cat /proc/meminfo
    
  • Run system monitor

    [user@laptop ~]$ top -n 1
    
    [user@laptop ~]$ top -n 1 -u $USER # only the current user's processes
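
If your laptop runs macOS rather than Linux, /proc does not exist and some of the utilities above are missing; the built-in sysctl command provides similar information (hw.memsize is reported in bytes):

[user@laptop ~]$ sysctl -n hw.ncpu
[user@laptop ~]$ sysctl -n hw.memsize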
    

Explore The Login Node

Now compare the resources of your computer with those of the head node.

  [user@laptop ~]$ ssh MY_USER_NAME@saga.sigma2.no
  [MY_USER_NAME@login-1.SAGA ~]$ nproc --all
  [MY_USER_NAME@login-1.SAGA ~]$ free -m

  You can get more information about the processors using lscpu,
  and a lot of detail about the memory by reading the file /proc/meminfo.
  Note the | (pipe) in the command below: it sends the output of the command
  on its left into the command on its right, so head -n5 keeps only the
  first five lines of what cat prints:

  [MY_USER_NAME@login-1.SAGA ~]$ cat /proc/meminfo | head -n5
   MemTotal:       792051868 kB
   MemFree:        15642864 kB
   MemAvailable:   738986424 kB
   Buffers:             900 kB
   Cached:         723319712 kB
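
  The same information can be extracted without a pipe by letting grep
  search the file directly (picking one field, MemTotal, as an example):

  [MY_USER_NAME@login-1.SAGA ~]$ grep MemTotal /proc/meminfo
   MemTotal:       792051868 kB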


  You can also explore the available filesystems using df, which shows disk free space.
  The -h flag renders the sizes in a human-friendly format, i.e., GB instead of B.
  The -T flag shows what type of filesystem each resource is.

  [MY_USER_NAME@login-4.SAGA ~]$ df -Th  | grep -v tmpfs
   Filesystem                                     Type      Size  Used Avail Use% Mounted on
   /dev/mapper/xcatvg-root                        xfs        17G  3,6G   13G  22% /
   /dev/sdb2                                      xfs      1016M  151M  865M  15% /boot
   /dev/sdb1                                      vfat      200M   12M  189M   6% /boot/efi
   /dev/mapper/xcatvg-tmp                         xfs       125G  444M  125G   1% /tmp
   /dev/mapper/xcatvg-var                         xfs        17G  2,2G   15G  14% /var
   /dev/mapper/xcatvg-localscratch                xfs       7,2T  303M  7,2T   1% /localscratch
   export.nird:/trd-project4                      nfs       1,5P  1,2P  293T  81% /trd-project4
   export.nird:/trd-project3                      nfs       2,5P  2,3P  207T  92% /trd-project3
   export.nird:/trd-project4/nird/projects        nfs       1,5P  1,2P  293T  81% /nird/projects/nird
   export.nird:/trd-project1                      nfs       3,9P  3,4P  425T  90% /trd-project1
   export.nird:/trd-project2                      nfs       1,9P  1,5P  354T  82% /trd-project2
   beegfs_nodev                                   beegfs    5,8P  3,4P  2,5P  59% /cluster
   robinhood.betzy.sigma2.no:/cluster/backup/saga nfs4      6,9P  5,2P  1,7P  77% /cluster/backup/saga
   

The local filesystems (ext, tmp, xfs, zfs) will depend on whether you’re on the same login node (or compute node, later on). Networked filesystems (beegfs, cifs, gpfs, nfs, pvfs) will be similar, but may include your username, depending on how they are mounted.

Shared file systems

This is an important point to remember: files saved on one node (computer) are often available everywhere on the cluster!
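
If you want to see this for yourself, a minimal sketch (assuming your project account, here the hypothetical MY_PROJECT, may submit jobs) is to create a file on the login node and then list it from a compute node through the scheduler:

[MY_USER_NAME@login-1.SAGA ~]$ touch proof-of-sharing
[MY_USER_NAME@login-1.SAGA ~]$ srun --account=MY_PROJECT --time=00:01:00 --mem=500M ls proof-of-sharing
proof-of-sharing

The ls runs on whichever worker node the scheduler picks, yet it sees the file you just created on the login node.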

Shared storage

Compare Your Computer, the Head Node and the Worker Node

Compare your laptop’s number of processors and memory with the numbers you see on the cluster head node and worker node. Discuss the differences with your neighbor.

What implications do you think the differences might have on running your research work on the different systems and nodes?
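
You do not have to log in to a worker node to inspect its resources; on a Slurm cluster the scheduler will report them. As a sketch (the node name c1-1 is just an example taken from the sinfo listing above):

[MY_USER_NAME@login-1.SAGA ~]$ scontrol show node c1-1 | grep -E "CPUTot|RealMemory"

The CPUTot and RealMemory fields give the node’s CPU count and its memory in MB.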

Differences Between Nodes

Many HPC clusters have a variety of nodes optimized for particular workloads. Some nodes may have larger amounts of memory, or specialized resources such as Graphics Processing Units (GPUs).
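
On a Slurm cluster you can check where such resources live by asking sinfo for the generic resources (GRES) per partition; GPUs, if present, show up in the %G column (the format flag is standard Slurm, the output is site-specific):

[MY_USER_NAME@login-1.SAGA ~]$ sinfo -o "%P %G"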

With all of this in mind, we will now cover how to talk to the cluster’s scheduler, and use it to start running our scripts and programs!