Why use a Cluster?

Instructor note

Total: 20 min (Teaching: 10 min | Discussion: 10 min | Breaks: 0 min | Exercises: 0 min)

Objectives

  • Keypoints

    • High Performance Computing (HPC) typically involves connecting to very large computing systems located elsewhere in the world.

    • These systems can be used to do work that would be impossible, or much slower, on smaller systems.

    • To get the most benefit, the problem to be solved should be broken down into smaller components that different machines can work on at the same time.

    • The standard way of interacting with such systems is via a command line interface, typically the Bash shell.

Welcome to HPC

Discussion

Frequently, research problems that use computing can outgrow the capabilities of the desktop or laptop computer where they started:

  • A statistics student wants to cross-validate a model. This involves running the model 2400 times, and each run takes an hour. On a laptop, only one run fits at a time, so the full cross-validation would take 2400 hours, about 100 days.

  • A genomics researcher has been working with small datasets of sequence data, but will soon receive a new type of sequencing data that is 10 times as large. It is already challenging to open the current datasets on a laptop; analyzing the larger ones will probably crash it.

  • An engineer is using a fluid dynamics package that has an option to run in parallel. So far, this option has not been used on a desktop. In going from 2D to 3D simulations, the simulation time has more than tripled, so it might be useful to take advantage of that option.

In all these cases, access to more computers is needed, and those computers should be usable at the same time.

A standard laptop for standard tasks

Today, people who write code or analyze data typically work on laptops.

A standard laptop

Let’s dissect what resources the programs running on a laptop require:

  • the keyboard and/or touchpad is used to tell the computer what to do (Input)

  • the internal computing resources, the Central Processing Unit (CPU) and memory, perform the calculations (Processing)

  • the display depicts progress and results (Output)

Schematically, this can be reduced to the following:

The Von Neumann architecture


When tasks take too long


When the task at hand becomes computationally heavy, the operations are typically outsourced from the local laptop or desktop to somewhere else. Take, for example, the task of finding directions for your next vacation. The capabilities of your laptop are usually not enough to compute that route on the spot: finding the shortest path through a network takes on the order of v log v time, where v (the number of vertices) represents the intersections in your map. Instead of doing this yourself, you use a website, which in turn runs on a server that is almost certainly not in the same room as you.
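To make that complexity claim concrete, here is a minimal sketch of one classic shortest-path method, Dijkstra's algorithm with a binary heap. The lesson does not prescribe a particular algorithm, and the toy road map and all names below are illustrative assumptions. With a binary heap the running time is O((v + e) log v), which for sparse networks such as road maps (where e grows roughly like v) is on the order of v log v.

```python
import heapq

def shortest_path_length(graph, start, goal):
    """Dijkstra's algorithm with a binary heap.

    `graph` maps each intersection to a dict of
    {neighbour: road_length}.
    """
    distances = {start: 0}
    queue = [(0, start)]         # (distance so far, intersection)
    visited = set()
    while queue:
        dist, node = heapq.heappop(queue)
        if node == goal:
            return dist          # first pop of goal is the shortest
        if node in visited:
            continue
        visited.add(node)
        for neighbour, length in graph[node].items():
            new_dist = dist + length
            if new_dist < distances.get(neighbour, float("inf")):
                distances[neighbour] = new_dist
                heapq.heappush(queue, (new_dist, neighbour))
    return float("inf")          # goal unreachable

# A toy map: four intersections, distances in km (illustrative only).
roads = {
    "home":     {"junction": 2, "highway": 5},
    "junction": {"highway": 1, "beach": 7},
    "highway":  {"beach": 3},
    "beach":    {},
}
print(shortest_path_length(roads, "home", "beach"))  # -> 6
```

On a map with millions of intersections, even this efficient algorithm involves far too much work to run comfortably on a laptop, which is why the computation is delegated to a remote server.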

A server

Note that a server is mostly a noisy computer mounted in a rack cabinet, which in turn resides in a data center. The internet has made it possible for these data centers to be far away from your laptop. What people call the cloud is mostly a web service where you can rent such servers by providing your credit card details and requesting remote resources that meet your requirements. This is often handled through an online, browser-based interface listing the various machines available and their capacities in terms of processing power, memory, and storage.

The server itself has no display or input devices attached to it. Most importantly, it has much more storage, memory, and compute capacity than your laptop will ever have. Either way, you need a local device (laptop, workstation, mobile phone, or tablet) to interact with this remote machine.

When one server is not enough

If the computational task or analysis is too demanding for a single server, larger agglomerations of servers are used. These go by the names “clusters” or “supercomputers”.

A rack with servers

The method of providing the input data, configuring the program options, and retrieving the results is quite different from using a plain laptop. Moreover, the graphical interface is often discarded in favor of the command line. This imposes a double paradigm shift on prospective users, who are asked to:

  1. work with the command line interface (CLI), rather than a graphical user interface (GUI)

  2. work with a distributed set of computers (called nodes) rather than the machine attached to their keyboard & mouse

Note

What is an HPC system?

The term HPC system describes a stand-alone resource for computationally intensive workloads. Such systems are typically composed of a multitude of integrated processing and storage elements, designed to handle high volumes of data and/or large numbers of floating-point operations per second (FLOPS) with the highest possible performance. For example, all of the machines on the Top-500 list are HPC systems. To meet these demands, an HPC resource must exist in a specific, fixed location: networking cables can only stretch so far, and electrical and optical signals can travel only so fast.

The word cluster is often used for small- to moderate-scale HPC resources less impressive than the Top-500. Clusters are often maintained in computing centers that support several such systems, all sharing common networking and storage to support compute-intensive tasks.

Note

Parallel computing

The full potential of an HPC system is realized when processing can take place in parallel, that is, when a larger computation is divided into many smaller computations that can be carried out independently. These smaller computations can then run simultaneously on multiple processor cores, sometimes distributed across multiple compute nodes.

GPUs can perform thousands of simultaneous computations, which makes it possible, for example, to distribute the training process in machine learning.
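As a minimal illustration of this idea (not part of the original lesson), the Python sketch below distributes 2400 independent model runs, echoing the statistics example earlier, across the CPU cores of one machine. The run_model function is a hypothetical stand-in for real work such as one cross-validation iteration.

```python
from multiprocessing import Pool

def run_model(seed):
    """Stand-in for one independent model run; replace with real work."""
    total = 0
    for i in range(1_000_000):
        total += (seed * i) % 7
    return total

if __name__ == "__main__":
    # Each of the 2400 runs is independent, so a pool of worker
    # processes can execute them simultaneously, one per core.
    with Pool() as pool:
        results = pool.map(run_model, range(2400))
    print(len(results), "runs finished")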

Note

Difference between an HPC cluster and the cloud

In contrast to typical cloud resources, an HPC cluster has the following characteristics:

  • All computers involved are located in the same location (e.g. in the same room)

  • All computers are connected to each other with a very fast local area network (LAN)

  • All computers are architecturally identical and run the same operating system

  • A set of optimized software is installed and accessible from all computers

  • All computers have access to shared storage: if you place a file on one machine, you can access it from all the other machines

  • All computers have synchronized clocks

  • A scheduler manages the work (covered in a later lesson)

A cloud service could also offer access to an HPC cluster as part of the service.