Summary of the available computing clusters and servers
The Department of Chemistry and Materials Science has the following computing resources for computational chemistry research and teaching:
- wihuri.pub.chemistrylab.aalto.fi (96 CPU cores)
- puhuri.pub.chemistrylab.aalto.fi (288 CPU cores)
- suhari.pub.chemistrylab.aalto.fi (48 CPU cores)
- mylly1.pub.chemistrylab.aalto.fi (128 CPU cores)
- mylly2.pub.chemistrylab.aalto.fi (128 CPU cores)
The detailed configurations are as follows:
| Cluster/server | Year | Nodes | CPU cores/node | CPU model | Memory/node (GB) | Memory/CPU core (GB) | Disk/node (TB) | OS and notes |
|---|---|---|---|---|---|---|---|---|
| Wihuri (stage 1) | 2014 | 4 | 12 | Xeon E5-2630 v2 (2.6 GHz) | 64 | 5.33 | 2 | Rocks Clusters 6.1 (Gigabit interconnect between nodes) |
| Wihuri (stage 2) | 2015 | 4 | 12 | Xeon E5-2620 v3 (2.4 GHz) | 64 | 5.33 | 2 | |
| Puhuri (stage 1) | 2016 | 4 | 36 | Xeon E5-2697 v4 (2.3 GHz) | 128 | 3.56 | 2 | Rocks Clusters 7.1 (Gigabit interconnect between nodes) |
| Puhuri (stage 2) | 2017 | 4 | 36 | Xeon Gold 6140 (2.3 GHz) | 192 | 5.33 | 2 | |
| Suhari | 2018 | 1 | 48 | AMD EPYC 7451 (2.3/2.9 GHz) | 256 | 5.33 | 4 | CentOS 7.9 |
| Mylly1 | 2020 | 1 | 128 | AMD EPYC 7702 (2.0/2.6 GHz) | 256 | 2 | 3 | CentOS Stream 8. GPU: PNY GeForce GTX 1080 Ti 11 GB (Pascal) |
| Mylly2 | 2020 | 1 | 128 | AMD EPYC 7702 (2.0/2.6 GHz) | 256 | 2 | 3 | CentOS Stream 8. GPU: PNY GeForce GTX 1080 Ti 11 GB (Pascal) |
- Wihuri and Puhuri are computing clusters that consist of a frontend server and computing nodes. Users' home directories (/home/<userid>) are visible both on the frontend and on the computing nodes.
- All jobs must be run on the computing nodes through the queuing system! Running jobs directly on the frontend is strictly forbidden.
- Only short pre- and post-processing tasks related to the jobs can be performed on the frontend (a rule of thumb for such tasks: max. 1 minute, 1 CPU, 1 GB memory). Interactive sessions should be used for tasks that consume more resources.
- The home directories should not be considered a reliable location for long-term data storage (they are backed up only daily, to a USB disk). Copy all important files regularly to your own workstation and keep personal backups.
- Temporary directory on the frontend and all nodes is /chemtemp
Connecting to the computing servers
- You can connect to the servers using SSH
- The servers can only be reached from the Aalto network or through the Aalto VPN (first start the VPN, then connect with SSH)
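For example, to open an SSH connection to puhuri (substitute your own Aalto user ID for <userid>; the host names are listed above):

```
ssh <userid>@puhuri.pub.chemistrylab.aalto.fi
```

On the first connection, SSH asks you to accept the server's host key before prompting for your password.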
Connecting from a Windows computer
- To use a computing server, you need an SSH client. For Windows, PuTTY is an excellent (and free) SSH client. Just download putty.exe to any folder and execute it. No installation is necessary.
- To transfer files between a cluster and your workstation, you can use SFTP. WinSCP is a reasonably good SFTP client.
Connecting from a Mac computer
- On a Mac, you can use either the native Terminal application or, for example, the iTerm terminal emulator.
- To transfer files between a cluster and your workstation, you have several options:
(1) any SFTP client, for example FileZilla
(2) the command line in Terminal or iTerm. For example, to copy a file from puhuri to your computer: scp <userid>@puhuri.pub.chemistrylab.aalto.fi:/home/path_to_the_file/filename ./
(3) the rsync utility
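As a sketch of the rsync option, the following mirrors a results directory from puhuri to the current directory on your workstation (the user ID and the directory name results/ are placeholders):

```
rsync -av <userid>@puhuri.pub.chemistrylab.aalto.fi:/home/<userid>/results/ ./results/
```

The -a option preserves permissions and timestamps and -v lists the transferred files; re-running the same command copies only the files that have changed since the last run, which makes rsync convenient for regular backups.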
Working on the computing servers
- You will need a basic knowledge of Linux to use the servers (good tutorials are available)
- Any text files on the server can be edited over the SSH terminal connection by using a text editor. nano is an easy-to-use editor with a menu system. vi is a very convenient and fast editor with a bit steeper learning curve (a short vi reference might help).
- To edit a file, just execute nano myfile.txt or vi myfile.txt.
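As a short vi reference, these standard keybindings cover most everyday editing tasks:

```
i          enter insert mode (press Esc to return to command mode)
x          delete the character under the cursor
dd         delete the current line
/pattern   search forward for "pattern" (n repeats the search)
:w         save the file
:q         quit (use :q! to quit without saving)
:wq        save and quit
```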
Computational chemistry software
All computing servers are equipped with the JCHEM software package, which includes a number of programs for computational chemistry research and teaching. The JCHEM package greatly simplifies the day-to-day life of the users by providing unified interfaces for the management and usage of the various program packages. Guidelines for using the software have been divided into the following sub-topics:
- Managing software modules (jmod and setting up .modules file)
- Submitting jobs (jsub, jnodes)
- Monitoring and managing jobs (jstat, jdel)
- Interactive sessions (jinter)
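A typical session with the JCHEM tools might look as follows. The commands are those listed above, but the arguments shown are illustrative assumptions; check the corresponding guidelines for the exact syntax:

```
jmod            # manage the software modules loaded in your environment
jnodes          # check the current load on the compute nodes
jsub job.inp    # submit a job to the queuing system (input file name is illustrative)
jstat           # monitor your queued and running jobs
jdel <jobid>    # remove a job from the queue if needed
jinter          # start an interactive session on a compute node
```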
SSH logins to compute nodes (Wihuri and Puhuri only)
Normally it is not necessary to log in directly to the compute nodes. However, if a calculation crashes and leaves important files in the temporary directory of the node (/chemtemp), it may be necessary to operate on the compute node via SSH. The file <job name>.batch-log in the job directory contains all the information on the job, including the full path of the temporary directory on the compute node.
Connecting works with the normal SSH approach (ssh <node>), for example ssh compute-0-4. After moving to the temporary directory (cd /chemtemp/TM_4578), the important files can be copied normally to your cluster home directory, which is also available on the compute nodes (cp mydata.dat /home/antti/). After you exit the compute node, the files will be available on the frontend.
Please note that it is forbidden to login directly to a compute node to perform any actual computational work. Use Interactive sessions for this, instead.
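Putting the recovery steps together as a transcript (the node name, temporary directory, and file names are the examples used above; take the real values from the batch-log file of your job):

```
ssh compute-0-4               # log in to the node named in the job's batch-log
cd /chemtemp/TM_4578          # move to the job's temporary directory
cp mydata.dat /home/antti/    # copy the important files to your home directory
exit                          # leave the node; the files are now visible on the frontend
```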