Working on a remote HPC system
Instructor note
Total: 30 min (Teaching: 30 min | Discussion: 0 min | Breaks: 0 min | Exercises: 0 min)
Prerequisites
An active user account on one of the clusters
Basic UNIX knowledge
Familiarity with the UNIX terminal
Objectives
Connect to a remote HPC system.
Understand the general HPC system architecture.
Questions
What is an HPC system?
How does an HPC system work?
How do I log on to a remote HPC system?
Keypoints
An HPC system is a set of networked machines.
HPC systems typically provide login nodes and a set of worker nodes.
The resources found on independent (worker) nodes can vary in volume and type (amount of RAM, processor architecture, availability of network-mounted file systems, etc.).
Files saved on one node are available on all nodes.
Get a user account and access to a project
To use the national clusters (Saga, Fram, Betzy), apply for a user account via Sigma2: https://www.sigma2.no/how-apply-user-account
For Fox, apply for access through Educloud Research: https://research.educloud.no/
Logging in
The first step in using a cluster is to establish a connection from your laptop to the cluster, via the Internet and/or your organization's network. We use a program called the Secure SHell client (ssh) for this, which establishes a temporary encrypted connection between your laptop and the cluster.
Make sure you have an SSH client installed on your laptop. Refer to the UNIX intro to HPC section for more details.
Test if ssh is installed before proceeding
Check ssh version
[user@laptop ~]$ ssh -V
OpenSSH_x.x xx, OpenSSL xxx xxx
ssh is installed; OK to continue.
Danger
[user@laptop ~]$ ssh -V
Command ‘ssh’ not found
ssh is not installed; inform the instructor.
Warning
If you are on a Windows version that does not have a terminal, please install Git for Windows (https://gitforwindows.org/) in order to get one.
Go ahead and log in to the cluster:
See examples of the syntax used to log in to the clusters below. The word before the @ symbol, i.e. MY_USER_NAME here, is the user account name on that cluster. Make sure to replace MY_USER_NAME with your own user account name. When the system asks you for your password, type it and press Enter.
Note
The characters you type after the password prompt are not displayed on the screen. Normal output will resume once you type the password and press Enter.
[user@laptop ~]$ ssh MY_USER_NAME@saga.sigma2.no
[user@laptop ~]$ ssh MY_USER_NAME@fram.sigma2.no
[user@laptop ~]$ ssh MY_USER_NAME@betzy.sigma2.no
[user@laptop ~]$ ssh MY_USER_NAME@fox.educloud.no
One-Time Code:
Password:
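If this is your first time connecting to a given cluster, ssh will not yet recognize the machine and will ask you to confirm its identity before continuing. The prompt below is a sketch of typical OpenSSH output; the fingerprint shown is a placeholder, not the real fingerprint of any of these clusters. Answer yes to continue:
The authenticity of host 'saga.sigma2.no (<IP address>)' can't be established.
ED25519 key fingerprint is SHA256:<fingerprint>.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes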
Where are we?
Very often, users are tempted to think of a high-performance computing installation as one giant, magical machine. Sometimes, people will assume that the computer they have logged onto is the entire computing cluster. So what is really happening? Which computer have we logged on to? The name of the current computer can be checked with the hostname command. (You may also notice that the current hostname is part of our prompt!)
[MY_USER_NAME@login-1.SAGA ~]$ hostname
login-1.saga.sigma2.no
[MY_USER_NAME@login-1.FRAM ~]$ hostname
login-1.fram.sigma2.no
[MY_USER_NAME@login-1.BETZY ~]$ hostname
login-1.betzy.sigma2.no
[MY_USER_NAME@login-4 ~]$ hostname
login-4.fox.ad.fp.educloud.no
Nodes
Individual computers that compose a cluster are typically called nodes (although you will also hear people call them servers, computers and machines). On a cluster, there are different types of nodes for different types of tasks. The node where you are right now is called the head node, login node, landing pad, or submit node. A login node serves as an access point or a gateway to the cluster. As a gateway, it is well suited for uploading and downloading files, setting up software, and running quick tests. It should never be used for doing actual work.
The real work on a cluster gets done by the worker (or compute) nodes. Worker nodes come in many shapes and sizes, but generally are dedicated to long or hard tasks that require a lot of computational resources.
All interaction with the worker nodes is handled by a specialized piece of software called a scheduler (the scheduler used in this lesson is Slurm, covered in the Scheduling jobs episode). We'll learn more about how to use the scheduler to submit jobs next, but for now, it can also tell us more information about the worker nodes.
For example, we can view all the worker nodes with the sinfo command, which belongs to our Slurm scheduler system.
[MY_USER_NAME@login-1.SAGA ~]$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
normal* up 7-00:00:00 1 drain c1-10
normal* up 7-00:00:00 70 resv c1-[19-22,24....
normal* up 7-00:00:00 108 mix c1-[8,17],c2....
normal* up 7-00:00:00 141 alloc c1-[1-7,9,11....
bigmem up 14-00:00:0 10 mix c3-[33-34,51....
bigmem up 14-00:00:0 25 alloc c3-[29-32,35....
bigmem up 14-00:00:0 1 idle c6-6
accel up 14-00:00:0 6 mix c7-[3-8]
accel up 14-00:00:0 2 alloc c7-[1-2]
optimist up infinite 1 drain c1-10
optimist up infinite 70 resv c1-[19-22,24....
optimist up infinite 124 mix c1-[8,17],c2....
optimist up infinite 168 alloc c1-[1-7,9,11....
optimist up infinite 1 idle c6-6
[MY_USER_NAME@login-2.FRAM ~]$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
normal* up 7-00:00:00 3 drain* c10-4,c27-...
normal* up 7-00:00:00 1 down* c86-4
normal* up 7-00:00:00 1 drng c24-10
normal* up 7-00:00:00 135 resv c3-8,c5-[1...
normal* up 7-00:00:00 792 alloc c3-[1-7,9-...
normal* up 7-00:00:00 53 idle c9-12,c10-...
bigmem up 14-00:00:0 1 mix c8-8
bigmem up 14-00:00:0 7 alloc c8-[1-6],h...
bigmem up 14-00:00:0 2 idle c8-7,hugem...
optimist up infinite 135 resv c3-8,c5-[1...
optimist up infinite 792 alloc c3-[1-7,9-...
optimist up infinite 53 idle c9-12,c10-...
corona_01 up 14-00:00:0 3 drain* c10-4,c27-...
corona_01 up 14-00:00:0 1 down* c86-4
corona_01 up 14-00:00:0 1 drng c24-10
[MY_USER_NAME@login-1.BETZY ~]$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
normal* up 4-00:00:00 6 drain* b[2211,3171,...
normal* up 4-00:00:00 5 down* b[2270,2272,...
normal* up 4-00:00:00 42 drain b[2117,2119,...
normal* up 4-00:00:00 1183 alloc b[1101-1130,...
normal* up 4-00:00:00 79 idle b[1131-1134,...
normal* up 4-00:00:00 20 down b[2249-2250,...
preproc up 1-00:00:00 1 drain* b5202
preproc up 1-00:00:00 1 mix b5201
preproc up 1-00:00:00 5 idle b[5203-5206],preproc-1
ns9039k up 7-00:00:00 1 drain* b5245
accel up 7-00:00:00 3 mix b[5301-5302,5304]
accel up 7-00:00:00 1 idle b5303
[MY_USER_NAME@login-4 ~]$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
normal* up 5-00:00:00 1 down~ c1-5
normal* up 5-00:00:00 1 drain* c1-12
normal* up 5-00:00:00 3 mix c1-[26-28]
normal* up 5-00:00:00 19 idle c1-[6-11,13-25]
accel up 7-00:00:00 2 mix gpu-[2,4]
accel up 7-00:00:00 3 idle gpu-[1,5-6]
bigmem up 7-00:00:00 1 idle c1-29
ifi_accel up 7-00:00:00 1 mix gpu-4
ifi_accel up 7-00:00:00 2 idle gpu-[5-6]
ifi_bigmem up 7-00:00:00 1 idle c1-29
pods up 7-00:00:00 2 idle c1-[6-7]
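If the full sinfo listing is hard to read, you can restrict it to a single partition with the standard Slurm -p flag. A minimal sketch, assuming a partition named accel exists on your cluster (as in the Fox listing above):
[MY_USER_NAME@CLUSTER_NAME ~]$ sinfo -p accel
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
accel up 7-00:00:00 2 mix gpu-[2,4]
accel up 7-00:00:00 3 idle gpu-[1,5-6]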
A cluster also includes specialized machines for managing disk storage, user authentication, and other infrastructure-related tasks. Although we do not typically log on to or interact with these machines directly, they enable a number of key features, like ensuring that our user account and files are available throughout the HPC system.
What’s in a node?
All the nodes in an HPC system have the same components as your own laptop or desktop: CPUs (sometimes also called processors or cores), memory (or RAM), and disk space.
CPUs are a computer’s tool for actually running programs and calculations.
Information about a current task is stored in the computer’s memory.
Disk refers to all storage that can be accessed like a file system. This is generally storage that can hold data permanently, i.e. data is still there even if the computer has been restarted. While this storage can be local (a hard drive installed inside it), it is more common for nodes to connect to a shared, remote fileserver or cluster of servers.
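On a Linux node you can inspect these components yourself. For example, lscpu (a standard Linux utility) summarizes the processor layout; the output below is illustrative and will differ between nodes:
[MY_USER_NAME@CLUSTER_NAME ~]$ lscpu | head -n 4
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              64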
What’s in your home directory?
The system administrators may have configured your home directory with some helpful files, folders, and links (shortcuts) to space reserved for you on other filesystems. Take a look around and see what you can find.
Home directory contents vary from user to user. Please discuss any differences you spot with your neighbors.
Hint: The shell commands pwd and ls may come in handy.
Use pwd to print the working directory path:
[MY_USER_NAME@login-1.SAGA ~]$ pwd
/cluster/home/MY_USER_NAME
[MY_USER_NAME@login-1.FRAM ~]$ pwd
/cluster/home/MY_USER_NAME
[MY_USER_NAME@login-1.BETZY ~]$ pwd
/cluster/home/MY_USER_NAME
[MY_USER_NAME@login-4 ~]$ pwd
/fp/homes01/u01/MY_USER_NAME
The deepest layer should differ: MY_USER_NAME is uniquely yours. Are there differences in the path at higher levels?
You can run ls to list the directory contents, though it's possible nothing will show up (if no files have been provided). To be sure, use the -a flag to show hidden files, too.
[MY_USER_NAME@CLUSTER_NAME ~]$ ls -a
At a minimum, this will show the current directory (.) and the parent directory (..).
If both of you have empty directories, they will look identical. If you or your neighbor has used the system before, there may be differences.
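For example, a home directory that has already been used might contain shell configuration files. The file names below are hypothetical examples; yours may differ:
[MY_USER_NAME@CLUSTER_NAME ~]$ ls -a
.  ..  .bash_history  .bash_profile  .bashrc  .ssh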
Explore Your Computer
Note that, if you’re logged in to the remote computer cluster, you need to log out first.
To do so, type Ctrl+d or exit:
[MY_USER_NAME@CLUSTER_NAME ~]$ exit
[user@laptop ~]$
Explore your laptop!
Try to find out the number of CPUs and amount of memory available on your personal computer.
There are several ways to do this. Most operating systems have a graphical system monitor, like the Windows Task Manager. More detailed information can be found on the command line:
Use system utilities
[user@laptop ~]$ nproc --all
[user@laptop ~]$ free -mh
Read from /proc
[user@laptop ~]$ cat /proc/cpuinfo
[user@laptop ~]$ cat /proc/meminfo
Run system monitor
[user@laptop ~]$ top -n 1
[user@laptop ~]$ top -n 1 -u $USER  # only current user processes
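Note that /proc only exists on Linux. If your laptop runs macOS, the standard sysctl utility provides the equivalent information (hw.memsize reports the total RAM in bytes; the numbers below are illustrative):
[user@laptop ~]$ sysctl -n hw.ncpu
8
[user@laptop ~]$ sysctl -n hw.memsize
17179869184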
Explore The Login Node
Now compare the resources of your computer with those of the login node.
[user@laptop ~]$ ssh MY_USER_NAME@saga.sigma2.no
[MY_USER_NAME@login-3.SAGA ~]$ nproc --all
64
[MY_USER_NAME@login-3.SAGA ~]$ free -m
total used free shared buff/cache available
Mem: 773152 32325 369097 6 376744 740826
Swap: 4095 0 4095
[MY_USER_NAME@login-3.SAGA ~]$ cat /proc/meminfo | head -n5
MemTotal: 791707708 kB
MemFree: 377944708 kB
MemAvailable: 758595312 kB
Buffers: 1652 kB
Cached: 361764044 kB
[user@laptop ~]$ ssh MY_USER_NAME@fram.sigma2.no
[MY_USER_NAME@login-1.FRAM ~]$ nproc --all
64
[MY_USER_NAME@login-1.FRAM ~]$ free -m
total used free shared buff/cache available
Mem: 127561 10642 3557 20 114184 116919
Swap: 999 203 796
[MY_USER_NAME@login-1.FRAM ~]$ cat /proc/meminfo | head -n5
MemTotal: 130622940 kB
MemFree: 3052240 kB
MemAvailable: 119623944 kB
Buffers: 4436 kB
Cached: 112621500 kB
[user@laptop ~]$ ssh MY_USER_NAME@betzy.sigma2.no
[MY_USER_NAME@login-3.BETZY ~]$ nproc --all
128
[MY_USER_NAME@login-3.BETZY ~]$ free -m
total used free shared buff/cache available
Mem: 515289 111826 19047 4842 393380 403462
Swap: 0 0 0
[MY_USER_NAME@login-3.BETZY ~]$ cat /proc/meminfo | head -n5
MemTotal: 527656092 kB
MemFree: 17932092 kB
MemAvailable: 413916888 kB
Buffers: 1232 kB
Cached: 393924196 kB
[user@laptop ~]$ ssh ec-MY_USER_NAME@fox.educloud.no
[ec-MY_USER_NAME@login-3 ~]$ nproc --all
256
[ec-MY_USER_NAME@login-3 ~]$ free -m
total used free shared buff/cache available
Mem: 514862 21515 490460 6 5976 493347
Swap: 10239 0 10239
[ec-MY_USER_NAME@login-3 ~]$ cat /proc/meminfo | head -n5
MemTotal: 527219484 kB
MemFree: 498382440 kB
MemAvailable: 501338560 kB
Buffers: 6416 kB
Cached: 5205416 kB
We can also have a look at the filesystem with the command df.
[MY_USER_NAME@login-3.SAGA ~]$ df -Th | grep -v tmpfs
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/xcatvg-root xfs 7.3T 74G 7.2T 1% /
/dev/sdb2 xfs 1018M 222M 797M 22% /boot
/dev/sdb1 vfat 256M 7.0M 249M 3% /boot/efi
export.dl-nird.sigma2.no:/data/lake1 nfs4 8.1P 7.0P 1.1P 87% /nird/datalake
export.ts-nird.sigma2.no:/project/fs1 nfs4 19P 14P 4.9P 75% /nird/projects
export.dl-nird.sigma2.no:/backup/ts/hpc nfs4 5.8P 2.6P 3.3P 44% /cluster/backup/hpc
beegfs_nodev beegfs 5.8P 2.6P 3.3P 44% /cluster
cvmfs2 fuse 9.8G 5.6G 4.3G 57% /cvmfs/software.eessi.io
[MY_USER_NAME@login-1.FRAM ~]$ df -Th | grep -v tmpfs
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/system_vg-root_lv xfs 16G 5.0G 11G 32% /
/dev/sda2 xfs 1014M 224M 791M 23% /boot
/dev/sda1 vfat 1022M 7.0M 1016M 1% /boot/efi
/dev/mapper/system_vg-tmp_lv xfs 3.9G 83M 3.9G 3% /tmp
/dev/mapper/system_vg-var_lv xfs 30G 8.7G 22G 29% /var
10.3.5.18@o2ib,10.4.5.18@o2ib1:10.3.5.19@o2ib,10.4.5.19@o2ib1:/scratch lustre 2.3P 2.2P 47T 98% /cluster
export.ts-nird.sigma2.no:/project/fs1 nfs4 19P 14P 4.9P 75% /nird/projects
cvmfs2 fuse 9.8G 3.0G 6.8G 31% /cvmfs/pilot.eessi-hpc.org
cvmfs2 fuse 9.8G 3.0G 6.8G 31% /cvmfs/pilot.nessi.no
cvmfs2 fuse 9.8G 3.0G 6.8G 31% /cvmfs/software.eessi.io
export.dl-nird.sigma2.no:/data/lake1 nfs4 8.1P 7.0P 1.1P 87% /nird/datalake
export.dl-nird.sigma2.no:/backup/ts/hpc nfs4 12P 12P 155T 99% /cluster/backup/hpc
export.dl-nird.sigma2.no:/backup/ts/hpc-home-fram nfs4 12P 12P 155T 99% /cluster/backup/home
[MY_USER_NAME@login-3.BETZY ~]$ df -Th | grep -v tmpfs
Filesystem Type Size Used Avail Use% Mounted on
/dev/md127 xfs 223G 29G 195G 13% /
/dev/md126 vfat 1023M 7.0M 1016M 1% /boot/efi
10.145.200.23@o2ib:10.145.200.22@o2ib:/cluster lustre 6.9P 3.7P 3.1P 55% /cluster
export.ts-nird.sigma2.no:/project/fs1 nfs4 19P 14P 4.9P 75% /nird/projects
export.dl-nird.sigma2.no:/data/lake1 nfs4 8.1P 7.0P 1.1P 87% /nird/datalake
export.dl-nird.sigma2.no:/backup/ts/hpc nfs4 12P 12P 155T 99% /cluster/backup/hpc
cvmfs2 fuse 9.8G 2.0G 7.9G 20% /cvmfs/pilot.nessi.no
cvmfs2 fuse 9.8G 2.0G 7.9G 20% /cvmfs/software.eessi.io
[ec-MY_USER_NAME@login-3 ~]$ df -Th | grep -v tmpfs
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/internvg-root xfs 2.0G 754M 1.3G 37% /
/dev/mapper/internvg-usr xfs 25G 4.6G 21G 19% /usr
/dev/nvme0n1p1 xfs 3.5T 55G 3.5T 2% /localscratch
/dev/sda2 xfs 1014M 401M 614M 40% /boot
/dev/mapper/internvg-home xfs 1014M 40M 975M 4% /home
/dev/mapper/internvg-opt xfs 2.0G 976M 1.1G 48% /opt
/dev/mapper/internvg-var xfs 16G 3.6G 13G 23% /var
/dev/sda1 vfat 599M 7.1M 592M 2% /boot/efi
/dev/mapper/internvg-tmp xfs 31G 985M 31G 4% /tmp
fp-projects01 gpfs 801T 454T 347T 57% /fp/projects01
fp-fox01 gpfs 400T 67T 334T 17% /cluster
fp-homes01 gpfs 33T 12T 21T 37% /fp/homes01
The local filesystems (ext, tmp, xfs, zfs) will depend on whether you’re on the same login node (or compute node, later on). Networked filesystems (beegfs, cifs, gpfs, nfs, pvfs) will be similar – but may include your user account name, depending on how it is mounted.
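To see only the filesystem that holds your home directory, you can pass it to df directly (standard df behavior; the output below reuses the Saga figures from above):
[MY_USER_NAME@login-3.SAGA ~]$ df -h $HOME
Filesystem      Size  Used Avail Use% Mounted on
beegfs_nodev    5.8P  2.6P  3.3P  44% /cluster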
Compare your computer, the head node and the worker node
Compare your laptop’s number of processors and memory with the numbers you see on the cluster head node and worker node.
What implications do you think the differences might have on running your research work on the different systems and nodes?
Differences Between Nodes
Many HPC clusters have a variety of nodes optimized for particular workloads. Some nodes may have a larger amount of memory, or specialized resources such as Graphics Processing Units (GPUs).
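You can ask Slurm itself how many CPUs and how much memory each node type offers, using sinfo's standard output-format specifiers (%N node names, %c CPUs, %m memory in megabytes, %G generic resources such as GPUs). The node names and numbers below are illustrative:
[MY_USER_NAME@CLUSTER_NAME ~]$ sinfo -N -o "%N %c %m %G" | head -n 3
NODELIST CPUS MEMORY GRES
c1-6 64 360448 (null)
gpu-1 64 501000 gpu:4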