Working on a remote HPC system
Instructor note
Total: 30 min (Teaching: 30 min | Discussion: 0 min | Breaks: 0 min | Exercises: 0 min)
Prerequisites
An active user account on one of the clusters
Basic UNIX knowledge
Familiarity with the UNIX terminal
Objectives
Connect to a remote HPC system.
Understand the general HPC system architecture.
Questions
What is an HPC system?
How does an HPC system work?
How do I log on to a remote HPC system?
Keypoints
An HPC system is a set of networked machines.
HPC systems typically provide login nodes and a set of worker nodes.
The resources found on independent (worker) nodes can vary in volume and type (amount of RAM, processor architecture, availability of network mounted file systems, etc.).
Files saved on one node are available on all nodes.
Get a user account and access to a project
Apply for a user account on the national clusters (Saga, Fram, Betzy): https://www.sigma2.no/how-apply-user-account
For Fox, apply through Educloud Research: https://research.educloud.no/
Logging in
The first step in using a cluster is to establish a connection from our laptop to the cluster, via the Internet and/or your organization's network. We use a program called the Secure Shell (or ssh) client for this. Make sure you have an SSH client installed on your laptop. Refer to the UNIX intro to HPC section for more details.
Instructor note
Remind what ssh is
Warning
If you are on a Windows version that does not have a terminal, please install Git for Windows (https://gitforwindows.org/) to get a terminal.
Test if ssh is installed before proceeding
Check ssh version
[user@laptop ~]$ ssh -V
OpenSSH_x.x xx, OpenSSL xxx xxx
If you see a version string like the above, ssh is installed and you are OK to continue.
Danger
[user@laptop ~]$ ssh -V
Command ‘ssh’ not found
If you see this instead, ssh is not installed; inform the instructor before proceeding.
Go ahead and log in to the cluster. Remember to replace MY_USER_NAME with the username supplied by the instructors. You may be asked for your password.
Warning
The characters you type after the password prompt are not displayed on the screen. Normal output will resume once you type the password and press Enter.
[user@laptop ~]$ ssh MY_USER_NAME@saga.sigma2.no
[user@laptop ~]$ ssh MY_USER_NAME@fram.sigma2.no
[user@laptop ~]$ ssh MY_USER_NAME@betzy.sigma2.no
[user@laptop ~]$ ssh MY_USER_NAME@fox.educloud.no
One-Time Code:
Password:
You are logging in using a program known as the secure shell or ssh. This establishes a temporary encrypted connection between your laptop and the compute cluster. The word before the @ symbol, e.g. MY_USER_NAME here, is the user account name that you have permission to use on the cluster.
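If you log in frequently, you can save typing by adding a host alias to your SSH client configuration. Below is a minimal sketch using the standard OpenSSH ~/.ssh/config format; the alias saga is just an example name, and MY_USER_NAME is the placeholder used throughout this lesson:

Host saga
    HostName saga.sigma2.no
    User MY_USER_NAME

With this entry in place, ssh saga is equivalent to the full command above.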
Where are we?
Many users are tempted to think of a high-performance computing installation as one giant, magical machine. Sometimes, people will assume that the computer they have logged onto is the entire computing cluster. So what's really happening? What computer have we logged on to? The name of the current computer we are logged onto can be checked with the hostname command. (You may also notice that the current hostname is part of our prompt!)
[MY_USER_NAME@login-1.SAGA ~]$ hostname
login-1.saga.sigma2.no
[MY_USER_NAME@login-1.fram.sigma2.no ~]$ hostname
login-1.fram.sigma2.no
[MY_USER_NAME@login-1.BETZY ~]$ hostname
login-1.betzy.sigma2.no
[MY_USER_NAME@login-4 ~]$ hostname
login-4.fox.ad.fp.educloud.no
What’s in your home directory?
The system administrators may have configured your home directory with some helpful files, folders, and links (shortcuts) to space reserved for you on other filesystems. Take a look around and see what you can find.
Home directory contents vary from user to user. Please discuss any differences you spot with your neighbors.
Hint: The shell commands pwd and ls may come in handy.
Use pwd to print the working directory path:
[MY_USER_NAME@login-1.SAGA ~]$ pwd
/cluster/home/MY_USER_NAME
[MY_USER_NAME@login-1.fram.sigma2.no ~]$ pwd
/cluster/home/MY_USER_NAME
[MY_USER_NAME@login-1.BETZY ~]$ pwd
/cluster/home/MY_USER_NAME
[MY_USER_NAME@login-4 ~]$ pwd
/fp/homes01/u01/MY_USER_NAME
The deepest layer should differ: MY_USER_NAME is uniquely yours. Are there differences in the path at higher levels?
You can run ls to list the directory contents, though it's possible nothing will show up (if no files have been provided). To be sure, use the -a flag to show hidden files, too.
[MY_USER_NAME@CLUSTER_NAME ~]$ ls -a
At a minimum, this will show the current directory as `.` and the parent directory as `..`.
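For example, on a brand-new account the listing might look something like this (illustrative only; the exact set of hidden files varies from system to system):

[MY_USER_NAME@CLUSTER_NAME ~]$ ls -a
.  ..  .bash_history  .bash_profile  .bashrc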
If both of you have empty directories, they will look identical. If you or your neighbor has used the system before, there may be differences. What are you working on?
Nodes
Individual computers that compose a cluster are typically called nodes (although you will also hear people call them servers, computers and machines). On a cluster, there are different types of nodes for different types of tasks. The node where you are right now is called the head node, login node, landing pad, or submit node. A login node serves as an access point to the cluster. As a gateway, it is well suited for uploading and downloading files, setting up software, and running quick tests. It should never be used for doing actual work.
The real work on a cluster gets done by the worker (or compute) nodes. Worker nodes come in many shapes and sizes, but generally are dedicated to long or hard tasks that require a lot of computational resources.
All interaction with the worker nodes is handled by a specialized piece of software called a scheduler (the scheduler used in this lesson is Slurm; see the Scheduling jobs section). We'll learn more about how to use the scheduler to submit jobs next, but for now, it can also tell us more information about the worker nodes.
For example, we can view all the worker nodes with the sinfo command.
[MY_USER_NAME@login-1.SAGA ~]$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
normal* up 7-00:00:00 1 drain c1-10
normal* up 7-00:00:00 70 resv c1-[19-22,24....
normal* up 7-00:00:00 108 mix c1-[8,17],c2....
normal* up 7-00:00:00 141 alloc c1-[1-7,9,11....
bigmem up 14-00:00:0 10 mix c3-[33-34,51....
bigmem up 14-00:00:0 25 alloc c3-[29-32,35....
bigmem up 14-00:00:0 1 idle c6-6
accel up 14-00:00:0 6 mix c7-[3-8]
accel up 14-00:00:0 2 alloc c7-[1-2]
optimist up infinite 1 drain c1-10
optimist up infinite 70 resv c1-[19-22,24....
optimist up infinite 124 mix c1-[8,17],c2....
optimist up infinite 168 alloc c1-[1-7,9,11....
optimist up infinite 1 idle c6-6
[MY_USER_NAME@login-2.fram.sigma2.no ~]$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
normal* up 7-00:00:00 3 drain* c10-4,c27-...
normal* up 7-00:00:00 1 down* c86-4
normal* up 7-00:00:00 1 drng c24-10
normal* up 7-00:00:00 135 resv c3-8,c5-[1...
normal* up 7-00:00:00 792 alloc c3-[1-7,9-...
normal* up 7-00:00:00 53 idle c9-12,c10-...
bigmem up 14-00:00:0 1 mix c8-8
bigmem up 14-00:00:0 7 alloc c8-[1-6],h...
bigmem up 14-00:00:0 2 idle c8-7,hugem...
optimist up infinite 135 resv c3-8,c5-[1...
optimist up infinite 792 alloc c3-[1-7,9-...
optimist up infinite 53 idle c9-12,c10-...
corona_01 up 14-00:00:0 3 drain* c10-4,c27-...
corona_01 up 14-00:00:0 1 down* c86-4
corona_01 up 14-00:00:0 1 drng c24-10
[MY_USER_NAME@login-1.BETZY ~]$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
normal* up 4-00:00:00 6 drain* b[2211,3171,...
normal* up 4-00:00:00 5 down* b[2270,2272,...
normal* up 4-00:00:00 42 drain b[2117,2119,...
normal* up 4-00:00:00 1183 alloc b[1101-1130,...
normal* up 4-00:00:00 79 idle b[1131-1134,...
normal* up 4-00:00:00 20 down b[2249-2250,...
preproc up 1-00:00:00 1 drain* b5202
preproc up 1-00:00:00 1 mix b5201
preproc up 1-00:00:00 5 idle b[5203-5206],preproc-1
ns9039k up 7-00:00:00 1 drain* b5245
accel up 7-00:00:00 3 mix b[5301-5302,5304]
accel up 7-00:00:00 1 idle b5303
[MY_USER_NAME@login-4 ~]$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
normal* up 5-00:00:00 1 down~ c1-5
normal* up 5-00:00:00 1 drain* c1-12
normal* up 5-00:00:00 3 mix c1-[26-28]
normal* up 5-00:00:00 19 idle c1-[6-11,13-25]
accel up 7-00:00:00 2 mix gpu-[2,4]
accel up 7-00:00:00 3 idle gpu-[1,5-6]
bigmem up 7-00:00:00 1 idle c1-29
ifi_accel up 7-00:00:00 1 mix gpu-4
ifi_accel up 7-00:00:00 2 idle gpu-[5-6]
ifi_bigmem up 7-00:00:00 1 idle c1-29
pods up 7-00:00:00 2 idle c1-[6-7]
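The STATE column shows what each group of nodes is currently doing: alloc means fully allocated to jobs, mix means partially allocated, idle means free, and drain or down means unavailable. If the full listing is overwhelming, sinfo can also summarize or filter it; for example, using standard Slurm options:

[MY_USER_NAME@CLUSTER_NAME ~]$ sinfo --summarize    # one line per partition
[MY_USER_NAME@CLUSTER_NAME ~]$ sinfo --states=idle  # show only nodes that are free right now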
There are also specialized machines used for managing disk storage, user authentication, and other infrastructure-related tasks. Although we do not typically log on to or interact with these machines directly, they enable a number of key features like ensuring our user account and files are available throughout the HPC system.
What’s in a node?
All the nodes in an HPC system have the same components as your own laptop or desktop: CPUs (sometimes also called processors or cores), memory (or RAM), and disk space. CPUs are a computer’s tool for actually running programs and calculations. Information about a current task is stored in the computer’s memory. Disk refers to all storage that can be accessed like a file system. This is generally storage that can hold data permanently, i.e. data is still there even if the computer has been restarted. While this storage can be local (a hard drive installed inside it), it is more common for nodes to connect to a shared, remote fileserver or cluster of servers.
Explore Your Computer
Note that, if you’re logged in to the remote computer cluster, you need to log out first.
To do so, type Ctrl+d
or exit
:
[MY_USER_NAME@CLUSTER_NAME ~]$ exit
[user@laptop ~]$
Try to find out the number of CPUs and amount of memory available on your personal computer. There are several ways to do this. Most operating systems have a graphical system monitor, like the Windows Task Manager. More detailed information can be found on the command line:
Use system utilities
[user@laptop ~]$ nproc --all
[user@laptop ~]$ free -mh
Read from /proc
[user@laptop ~]$ cat /proc/cpuinfo
[user@laptop ~]$ cat /proc/meminfo
Run system monitor
[user@laptop ~]$ top -n 1
[user@laptop ~]$ top -n 1 -u $USER  # only the current user's processes
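The commands above assume a Linux laptop. On macOS there is no nproc and no /proc filesystem, but the same information is available through the standard sysctl utility:

[user@laptop ~]$ sysctl -n hw.ncpu     # number of logical CPUs
[user@laptop ~]$ sysctl -n hw.memsize  # physical memory in bytes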
Explore The Login Node
Now compare the resources of your computer with those of the head node.
[user@laptop ~]$ ssh MY_USER_NAME@saga.sigma2.no
[MY_USER_NAME@login-1.SAGA ~]$ nproc --all
[MY_USER_NAME@login-1.SAGA ~]$ free -m
You can get more information about the processors using `lscpu`,
and a lot of detail about the memory by reading the file `/proc/meminfo`:
# The pipe (|) sends the output of cat to head, which prints only the first five lines
[MY_USER_NAME@login-1.SAGA ~]$ cat /proc/meminfo | head -n5
MemTotal: 792051868 kB
MemFree: 15642864 kB
MemAvailable: 738986424 kB
Buffers: 900 kB
Cached: 723319712 kB
You can also explore the available filesystems using `df` to show **d**isk **f**ree space.
The `-h` flag renders the sizes in a human-friendly format, e.g., GB instead of B. The **t**ype
flag `-T` shows what kind of filesystem each resource is.
[MY_USER_NAME@login-1.SAGA ~]$ df -Th | grep -v tmpfs
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/xcatvg-root xfs 17G 3,6G 13G 22% /
/dev/sdb2 xfs 1016M 151M 865M 15% /boot
/dev/sdb1 vfat 200M 12M 189M 6% /boot/efi
/dev/mapper/xcatvg-tmp xfs 125G 444M 125G 1% /tmp
/dev/mapper/xcatvg-var xfs 17G 2,2G 15G 14% /var
/dev/mapper/xcatvg-localscratch xfs 7,2T 303M 7,2T 1% /localscratch
export.nird:/trd-project4 nfs 1,5P 1,2P 293T 81% /trd-project4
export.nird:/trd-project3 nfs 2,5P 2,3P 207T 92% /trd-project3
export.nird:/trd-project4/nird/projects nfs 1,5P 1,2P 293T 81% /nird/projects/nird
export.nird:/trd-project1 nfs 3,9P 3,4P 425T 90% /trd-project1
export.nird:/trd-project2 nfs 1,9P 1,5P 354T 82% /trd-project2
beegfs_nodev beegfs 5,8P 3,4P 2,5P 59% /cluster
robinhood.betzy.sigma2.no:/cluster/backup/saga nfs4 6,9P 5,2P 1,7P 77% /cluster/backup/saga
[user@laptop ~]$ ssh MY_USER_NAME@fram.sigma2.no
[MY_USER_NAME@login-2.fram.sigma2.no ~]$ nproc --all
[MY_USER_NAME@login-2.fram.sigma2.no ~]$ free -m
You can get more information about the processors using `lscpu`,
and a lot of detail about the memory by reading the file `/proc/meminfo`:
# The pipe (|) sends the output of cat to head, which prints only the first five lines
[MY_USER_NAME@login-2.fram.sigma2.no ~]$ cat /proc/meminfo | head -n5
MemTotal: 131006352 kB
MemFree: 66140852 kB
MemAvailable: 121282580 kB
Buffers: 0 kB
Cached: 50358492 kB
You can also explore the available filesystems using `df` to show **d**isk **f**ree space.
The `-h` flag renders the sizes in a human-friendly format, e.g., GB instead of B. The **t**ype
flag `-T` shows what kind of filesystem each resource is.
[MY_USER_NAME@login-2.fram.sigma2.no ~]$ df -Th | grep -v tmpfs
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/xcatvg-root xfs 17G 3,6G 13G 22% /
/dev/sdb2 xfs 1016M 151M 865M 15% /boot
/dev/sdb1 vfat 200M 12M 189M 6% /boot/efi
/dev/mapper/xcatvg-tmp xfs 125G 444M 125G 1% /tmp
/dev/mapper/xcatvg-var xfs 17G 2,2G 15G 14% /var
/dev/mapper/xcatvg-localscratch xfs 7,2T 303M 7,2T 1% /localscratch
export.nird:/trd-project4 nfs 1,5P 1,2P 293T 81% /trd-project4
export.nird:/trd-project3 nfs 2,5P 2,3P 207T 92% /trd-project3
export.nird:/trd-project4/nird/projects nfs 1,5P 1,2P 293T 81% /nird/projects/nird
export.nird:/trd-project1 nfs 3,9P 3,4P 425T 90% /trd-project1
export.nird:/trd-project2 nfs 1,9P 1,5P 354T 82% /trd-project2
beegfs_nodev beegfs 5,8P 3,4P 2,5P 59% /cluster
robinhood.betzy.sigma2.no:/cluster/backup/saga nfs4 6,9P 5,2P 1,7P 77% /cluster/backup/saga
[user@laptop ~]$ ssh MY_USER_NAME@betzy.sigma2.no
[MY_USER_NAME@login-2.BETZY ~]$ nproc --all
[MY_USER_NAME@login-2.BETZY ~]$ free -m
You can get more information about the processors using `lscpu`,
and a lot of detail about the memory by reading the file `/proc/meminfo`:
# The pipe (|) sends the output of cat to head, which prints only the first five lines
[MY_USER_NAME@login-2.BETZY ~]$ cat /proc/meminfo | head -n5
MemTotal: 528019692 kB
MemFree: 54527924 kB
MemAvailable: 512437536 kB
Buffers: 209684 kB
Cached: 454194704 kB
You can also explore the available filesystems using `df` to show **d**isk **f**ree space.
The `-h` flag renders the sizes in a human-friendly format, e.g., GB instead of B. The **t**ype
flag `-T` shows what kind of filesystem each resource is.
[MY_USER_NAME@login-2.BETZY ~]$ df -Th | grep -v tmpfs
Filesystem Type Size Used Avail Use% Mounted on
/dev/md3 ext4 215G 26G 178G 13% /
/dev/md0 ext4 990M 89M 835M 10% /boot
/dev/md1 vfat 256M 9,7M 247M 4% /boot/efi
10.145.200.13:/exports/md/slurm nfs4 7,3T 1,5T 5,4T 22% /etc/slurm
export.nird-trd.sigma2.no:/trd-project1 nfs 3,9P 3,4P 425T 90% /trd-project1
export.nird-trd.sigma2.no:/trd-project2 nfs 1,9P 1,5P 354T 82% /trd-project2
export.nird-trd.sigma2.no:/trd-project4 nfs 1,5P 1,2P 293T 81% /trd-project4
export.nird-trd.sigma2.no:/trd-project3 nfs 2,5P 2,3P 207T 92% /trd-project3
export.nird-trd.sigma2.no:/trd-project4/nird/projects nfs 1,5P 1,2P 293T 81% /nird/projects/nird
10.145.200.23@o2ib:10.145.200.22@o2ib:/cluster lustre 6,9P 5,2P 1,7P 77% /cluster
[user@laptop ~]$ ssh MY_USER_NAME@fox.educloud.no
[MY_USER_NAME@login-4 ~]$ nproc --all
[MY_USER_NAME@login-4 ~]$ free -m
You can get more information about the processors using `lscpu`,
and a lot of detail about the memory by reading the file `/proc/meminfo`:
# The pipe (|) sends the output of cat to head, which prints only the first five lines
[MY_USER_NAME@login-4 ~]$ cat /proc/meminfo | head -n5
MemTotal: 527508932 kB
MemFree: 125151596 kB
MemAvailable: 508687276 kB
Buffers: 1115916 kB
Cached: 384488656 kB
You can also explore the available filesystems using `df` to show **d**isk **f**ree space.
The `-h` flag renders the sizes in a human-friendly format, e.g., GB instead of B. The **t**ype
flag `-T` shows what kind of filesystem each resource is.
[MY_USER_NAME@login-4 ~]$ df -Th | grep -v tmpfs
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/xcatvg-root xfs 17G 3,6G 13G 22% /
/dev/sdb2 xfs 1016M 151M 865M 15% /boot
/dev/sdb1 vfat 200M 12M 189M 6% /boot/efi
/dev/mapper/xcatvg-tmp xfs 125G 444M 125G 1% /tmp
/dev/mapper/xcatvg-var xfs 17G 2,2G 15G 14% /var
/dev/mapper/xcatvg-localscratch xfs 7,2T 303M 7,2T 1% /localscratch
export.nird:/trd-project4 nfs 1,5P 1,2P 293T 81% /trd-project4
export.nird:/trd-project3 nfs 2,5P 2,3P 207T 92% /trd-project3
export.nird:/trd-project4/nird/projects nfs 1,5P 1,2P 293T 81% /nird/projects/nird
export.nird:/trd-project1 nfs 3,9P 3,4P 425T 90% /trd-project1
export.nird:/trd-project2 nfs 1,9P 1,5P 354T 82% /trd-project2
beegfs_nodev beegfs 5,8P 3,4P 2,5P 59% /cluster
robinhood.betzy.sigma2.no:/cluster/backup/saga nfs4 6,9P 5,2P 1,7P 77% /cluster/backup/saga
The local filesystems (ext, tmp, xfs, zfs) you see will depend on which login node (or, later on, compute node) you are on. Networked filesystems (beegfs, cifs, gpfs, lustre, nfs, pvfs) will be similar, but their mount paths may include MY_USER_NAME, depending on how they are mounted.
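Whichever filesystem your files live on, du reports how much space they occupy. For example, to total up your home directory (this may take a moment on a networked filesystem):

[MY_USER_NAME@CLUSTER_NAME ~]$ du -sh $HOME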
Compare Your Computer, the Head Node and the Worker Node
Compare your laptop’s number of processors and memory with the numbers you see on the cluster head node and worker node. Discuss the differences with your neighbor.
What implications do you think the differences might have on running your research work on the different systems and nodes?
Differences Between Nodes
Many HPC clusters have a variety of nodes optimized for particular workloads. Some nodes may have a larger amount of memory, or specialized resources such as Graphical Processing Units (GPUs).
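You can ask the scheduler to show such differences directly. As a sketch, the following uses standard sinfo format specifiers (%P for partition name, %m for memory per node in megabytes, %G for generic resources such as GPUs) to compare partitions at a glance:

[MY_USER_NAME@CLUSTER_NAME ~]$ sinfo -o "%P %m %G"

On the clusters above, the bigmem partitions should stand out by their memory values and the accel partitions by their gpu entries.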
With all of this in mind, we will now cover how to talk to the cluster’s scheduler, and use it to start running our scripts and programs!