
Summary of the computing clusters

The Department of Chemistry and Materials Science has two computing clusters for computational chemistry research and teaching:

  • puhuri.pub.chemistrylab.aalto.fi (288 CPU cores)
  • wihuri.pub.chemistrylab.aalto.fi (96 CPU cores)

The clusters run the Rocks Clusters Linux distribution and use a gigabit Ethernet interconnect. They have been built up in several phases, with the following configurations:

Cluster   Phase   Nodes   CPU cores/node   CPU type                    Memory/node (GB)   Memory/CPU core (GB)   Disk/node (TB)
Wihuri    2014    4       12               Xeon E5-2630 v2 (2.6 GHz)   64                 5.33                   2
Wihuri    2015    4       12               Xeon E5-2620 v3 (2.4 GHz)   64                 5.33                   2
Puhuri    2016    4       36               Xeon E5-2697 v4 (2.3 GHz)   128                3.56                   2
Puhuri    2017    4       36               Xeon Gold 6140 (2.3 GHz)    192                5.33                   2

General guidelines

  • Each cluster consists of a frontend server and computing nodes. Users' home directories (/home/<userid>) are visible both on the frontend and on the computing nodes.
  • All jobs must be run on the computing nodes, using the queuing system! Running them directly on the frontend is strictly forbidden. (A minimal submission sketch follows this list.)
  • Only short pre- and post-processing tasks related to the jobs can be performed on the frontend (a rule of thumb for such tasks: max. 1 minute, 1 CPU, 1 GB memory). Interactive sessions should be used for tasks that consume more resources.
  • The home directories should not be considered a reliable location for long-term data storage (they are backed up daily, but only to a USB disk). Copy all important files regularly to your own workstation and keep personal backups.
  • Temporary directory on the frontend and all nodes is /chemtemp
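
As a minimal sketch of queue usage, assuming the Sun Grid Engine (SGE) scheduler that Rocks Clusters commonly ships with (the job name, program name, and parallel environment below are placeholders; check the cluster's own job submission pages for the actual queue and resource names):

  #!/bin/bash
  #$ -N myjob              # job name (placeholder)
  #$ -cwd                  # run in the directory the job was submitted from
  #$ -pe mpi 12            # request 12 slots, i.e. one Wihuri node; "mpi" is a hypothetical PE name
  #$ -l h_rt=24:00:00      # wall-time limit of 24 hours
  ./my_program input.dat > output.log   # my_program is a placeholder

If this script is saved as myjob.sh, it is submitted with qsub myjob.sh, and its state can be followed with qstat.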

Connecting to the computing clusters

  • You can connect to the clusters using SSH (see the example after this list)
  • The clusters can only be accessed from the Aalto network or over the Aalto VPN (first start the VPN, then connect with SSH)
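
For example, to open an SSH session on puhuri (replace username with your own Aalto user ID):

  ssh username@puhuri.pub.chemistrylab.aalto.fi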

Connecting from a Windows computer

  • To use a computing cluster, you need an SSH client. For Windows, PuTTY is an excellent (and free) SSH client. Just download putty.exe to any folder and execute it. No installation is necessary. 
  • To transfer files between a cluster and your workstation, you can use SFTP. WinSCP is a reasonably good SFTP client.

Connecting from a Mac computer 

  • On a Mac, you can use either the native Terminal application or, for example, the iTerm terminal emulator.
  • To transfer files between a cluster and your workstation, you have several options:
    (1) using any SFTP client, for example FileZilla
    (2) using scp on the command line; for example, to copy a file from puhuri to your computer: scp username@puhuri.pub.chemistrylab.aalto.fi:/home/path_to_the_file/filename ./
    (3) using the rsync utility (see the sketch after this list)
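
As a sketch, the following rsync command mirrors a results directory from puhuri to your workstation (the directory names are placeholders; -a preserves permissions and timestamps, -v lists the transferred files):

  rsync -av username@puhuri.pub.chemistrylab.aalto.fi:/home/username/results/ ./results/

Because rsync only transfers files that have changed, the same command also works well for the regular personal backups recommended above.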

Working on the cluster

  • You will need a basic knowledge of Linux to use the cluster (good tutorials are available)
  • Any text files on the cluster can be edited over the SSH terminal connection by using a text editor. nano is an easy-to-use editor with a menu system. vi is a very convenient and fast editor with a bit steeper learning curve (a short vi reference might help). 
  • To edit a file, just execute nano myfile.txt or vi myfile.txt.

Computational chemistry software

All computing clusters are equipped with the JCHEM software package, which includes a number of programs for computational chemistry research and teaching. The JCHEM package greatly simplifies the day-to-day life of the users by providing unified interfaces for the management and usage of the various program packages. Guidelines for using the software are divided into separate sub-topic pages.

Note that the JCHEM software package is also installed on the CSC Puhti supercomputer. See the instructions for setting up JCHEM at CSC.

SSH logins to compute nodes

Normally it is not necessary to log in directly to the compute nodes. However, if a calculation crashes and leaves important files in the temporary directory of a node (/chemtemp), it may be necessary to operate on that compute node via SSH. The file <job name>.batch-log in the job directory contains all the information on the job, including the full path of the temporary directory on the compute node.

Connecting works with the normal SSH approach (ssh <node>), for example ssh compute-0-4. After moving to the temporary directory (cd /chemtemp/TM_4578), the important files can be copied normally to your cluster home directory, which is also available on the compute nodes (cp mydata.dat /home/antti/). You can then exit the compute node, and the files will be available on the frontend.
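
Put together, a typical recovery session might look like this (the node name, directory, and file names are the examples used above):

  ssh compute-0-4              # log in to the compute node
  cd /chemtemp/TM_4578         # move to the job's temporary directory
  cp mydata.dat /home/antti/   # copy the important files to your home directory
  exit                         # return to the frontend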

Please note that it is forbidden to log in directly to a compute node to perform any actual computational work. Use Interactive sessions for this instead.
