
How to use A+, how to create a course, where do I find instructions, and the questions I have on A+

  • We provide an A+ orientation meeting each year in early June for summer interns whose tasks relate to A+. You'll get the exact date and an invitation from the HR team.
  • If needed, a minimal intro to using Git can also be included.
  • The recorded events are shared in the A+ Panopto folder; the orientation events in particular are in the A+ Intro subfolder.



  • For interns: you need at least a laptop and, depending on your needs, also a screen, mouse, keyboard, or other equipment
    • Contact CS IT to tell them what you need and to agree on when to pick up the equipment
    • Pick up the equipment from CS IT in CS building room A243
  • What your laptop should include:
  • macOS is also an easy alternative, if CS IT has some machines "on the shelf". With Windows, a Linux virtual machine is the best choice.

Technical guides

Best practices


Links to git repositories

Quick start guide explains how to start A+ course development with the Aplus-manual course. You can run the course on your own computer with the A+ Docker containers.

The Aplus-manual can be used for testing A+ features. There is also a separate test-course that focuses on using all A+ features with many different settings, so that it is easier to start testing them manually. The test-course can of course be expanded with new A+ features (and many features are still missing there). You can create pull requests for the test-course too!

The A+ architecture is briefly explained in these pages:

Our work is mostly located under the apluslms organisation. The Acos server has its own organisation. Course (content) repositories are private and located in the Aalto GitLab server. Many course repos are under the course group.

You don't need to clone or fork all of them. Do that when you need to. Many repositories are listed here, so you can find them when needed.

Installing Docker Compose

Docker Compose ("docker-compose") is another tool besides Docker ("docker"). CS IT should have installed Docker into Aalto work laptops for you (i.e. there should be a package "docker-ce" and a command "docker"). Unfortunately, Ubuntu repositories have outdated versions of docker-compose, so it should be installed manually. It can be installed into the user's home directory without administration privileges. Docker-compose is just a single binary executable file.

NOTE: this is not needed on macOS as docker-compose is included in the Docker for Mac application.

Installation can be completed by copy-pasting and executing the following commands in a terminal window. If the output of the last lines is ok, log out from the desktop and log back in so that the command works without the full path.

Run this in terminal to install docker-compose
set -k

## Prepare:
# next will create location for binaries in your home
mkdir -p "$HOME/.local/bin"
# then we make executables in that folder visible as commands by adding the above location to the PATH variable
# we do this by changing the PATH variable in .profile, which is executed by login shells (e.g. when you log in via the GUI)
echo 'export PATH="$HOME/.local/bin:$PATH"' >> "$HOME/.profile"
# zsh (the shell you are using) doesn't read .profile, so we make it do that (optional step).
echo "[ -e \"\$HOME/.profile\" ] && emulate sh -c '. \"\$HOME/.profile\"'" >> "$HOME/.zprofile"

## Download docker-compose
# find out the newest version
compose_version=$(curl -LSs | grep -F '"tag_name":' | cut -d'"' -f4)
# download the docker-compose
curl -LSs -o "$HOME/.local/bin/docker-compose" "$compose_version/docker-compose-$(uname -s)-$(uname -m)"
# make the binary executable
chmod +x "$HOME/.local/bin/docker-compose"

## Verify
~/.local/bin/docker-compose -v
md5sum ~/.local/bin/docker-compose
# should print:
#   docker-compose version 1.25.5, build 8a1c60f6
#   3485ce0470f084d338732c541873339a  /u/XX/<user>/unix/.local/bin/docker-compose
# or:
#   docker-compose version 1.26.2, build eefe0d31
#   218a4d71308268cd4b9c31e208c9bf4c  /u/XX/<user>/unix/.local/bin/docker-compose

The following example shows what it should look like. Note that the script is pasted one block at a time and only the last part provides any feedback. Furthermore, if the version number is the same, the hash on the last line should match yours as well; if you get a newer version, it will differ.

Everything should work now, but if you are unsure, you can compare your local files to these examples:

# this is ~/.profile

# add ~/.local/bin to PATH
export PATH="$HOME/.local/bin:$PATH"

# set your terminal editors to nano (default would be vim)
export EDITOR=nano
export VISUAL=nano
# this is ~/.zprofile

# zsh doesn't load .profile by default, so do that
[ -e "$HOME/.profile" ] && emulate sh -c '. "$HOME/.profile"'
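After logging back in, you can verify that the PATH change took effect with a quick check like this (this check is an addition for illustration; the grep line prints the directory only if it is on PATH):

```shell
# list PATH entries one per line and look for the new directory
echo "$PATH" | tr ':' '\n' | grep -x "$HOME/.local/bin" && echo "PATH ok"
command -v docker-compose    # should resolve to ~/.local/bin/docker-compose
```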


Show exit codes in the terminal (OPTIONAL)

This configuration is not required for your environment to work for A+ development, but it might still be useful.

Sometimes crashed and failed commands go unnoticed. To help with that, you can include program exit codes in the prompt. If you drop this .zshrc into your home folder, your shell will start to look like the following image.

After downloading that file, move it to your home folder and rename it from home.zshrc to .zshrc (yes, it starts with a dot).
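For example, assuming your browser saved the file to ~/Downloads (the download path is an assumption):

```shell
mv ~/Downloads/home.zshrc ~/.zshrc
```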

NOTE: this only works for zsh, so bash users (e.g. macOS users) need something different.
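The number such a prompt shows is simply the shell's $? variable, i.e. the exit code of the previous command:

```shell
true;  echo $?        # prints 0 (success)
false; echo $?        # prints 1 (failure)
(exit 42); echo $?    # prints 42 (programs may return any code from 0 to 255)
```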

Use nano (i.e. not vim) for git commits with nice colors (OPTIONAL)

In this part, we configure our terminal text editor to nano, which might be easier to use than the typical default, vim. In addition, we add custom syntax coloring for git operations.

First, let's set our default editor

Run this in terminal to set VISUAL and EDITOR to nano
echo 'export EDITOR=nano' >> "$HOME/.profile"
echo 'export VISUAL=nano' >> "$HOME/.profile"

Now, your ~/.profile should look something like the example file in the section Installing Docker Compose. You need to log out and back in for the above changes to take effect.

Second, let's configure nano to show some colors and to react to mouse clicks. To do that, create the following files:

# See /etc/nanorc for more default settings, which can be copied here.

## Remember the used search/replace strings for the next session.
set historylog

## Display line numbers to the left of the text.
#set linenumbers
# instead use: nano -l file.txt
# or press: alt+shift+3 (alt+#)

## Enable mouse support, if available for your system.  When enabled,
## mouse clicks can be used to place the cursor, set the mark (with a
## double click), and execute shortcuts.  The mouse will work in the X
## Window System, and on the console when gpm is running.
set mouse

# system styles
include "/usr/share/nano/*.nanorc"
# my styles
include "~/.nano/git.nanorc"

If you need to copy text from an open nano session, press SHIFT while selecting text with the mouse. You can also disable the mouse by commenting out the corresponding line, or enable line numbers; both parameters have comments in the file above.

For the second file, you might need to create the folder ~/.nano.

# This code is free software; you can redistribute it and/or modify it under
# the terms of the new BSD License.
# Copyright (c) 2010, Sebastian Staud
# Copyright (c) 2020, Jaakko Kantojärvi

# A nano configuration file to enable syntax highlighting of some Git-specific
# files with the GNU nano text editor

## This syntax format is used for git config files
syntax "git-config" "git(config|modules)$|\.git/config$"

color brightcyan "\<(true|false)\>"
color cyan "^[[:space:]]*[^=]*="
color brightmagenta "^[[:space:]]*\[.*\]$"
color yellow ""(\\.|[^"])*"|'(\\.|[^'])*'"
color brightblack "(^|[[:space:]]+)#([^{].*)?$"
color ,green "[[:space:]]+$"

## This syntax format is used for commit (and tag) messages
syntax "git-commit" "COMMIT_EDITMSG|TAG_EDITMSG"

# Commit message
#color yellow ".*"
# Warn when a commit message exceeds 50 and 72 chars per line
#color brightyellow "^(.{1,50})$"
color brightwhite "^(.{1,50})$"
color yellow "^(.{51}).*$"
color red "^(.{73}).*$"

# Headers in the end of the message:
color white         "^[A-Za-z_-]+: .*$"
color brightblue    "^[A-Za-z_-]+: "
color brightgreen   "^([Cc]lose[sd]?|[Ff]ix(e[sd])?|[Rr]esolve[sd]?) ((#|GH-)[0-9]+|https?://[^ ]*)$"
color green         "^([Cc]lose[sd]?|[Ff]ix(e[sd])?|[Rr]esolve[sd]?)"

# Comments
color brightblack "^#.*"

# Branch status
color brightmagenta "^# [A-Za-z]+[^:]*:$"
color brightred   "^#[[:space:]]Your branch and '[^']+"
color brightblack "^#[[:space:]]Your branch and '"
color brightwhite "^#[[:space:]]On branch [^ ]+"
color brightblack "^#[[:space:]]On branch "
color brightwhite "^#[[:space:]]Your branch is up to date with '[^']+"
color brightblack "^#[[:space:]]Your branch is up to date with '"

# Files changes
color white       "#[[:space:]](deleted|modified|new file|renamed):[[:space:]].*"
color red         "#[[:space:]]deleted:"
color green       "#[[:space:]]modified:"
color brightgreen "#[[:space:]]new file:"
color brightblue  "#[[:space:]]renamed:"

# Untracked filenames
color white "^#	[^?*:;{}\\]+(/|\.?[^/?*:;{}\\]+)$"

# Rebase actions:
color brightyellow  "^# interactive rebase in progress.*"
color yellow        "^#[[:space:]]+(pick|reword|edit|squash|fixup|exec|break|drop|label|reset|merge) .*"
color brightwhite   "^#[[:space:]]+(pick|reword|edit|squash|fixup|exec|break|drop|label|reset|merge)"

# Commit IDs
color brightblue "[0-9a-f]{7,40}"

# Recolor hash symbols
color brightblack "^#"

# Trailing spaces (+LINT is not ok, git uses tabs)
color ,red "[[:space:]]+$"

## This syntax format is used for interactive rebasing
syntax "git-rebase-todo" "git-rebase-todo"

# Default
color yellow ".*"

# Comments
color brightblack "^#.*"

# Rebase commands
color green         "^(e|edit) [0-9a-f]{7,40}"
color green         "^# e, edit"
color brightgreen   "^(f|fixup) [0-9a-f]{7,40}"
color brightgreen   "^# f, fixup"
color brightwhite   "^(p|pick) [0-9a-f]{7,40}"
color brightwhite   "^# p, pick"
color blue          "^(r|reword) [0-9a-f]{7,40}"
color blue          "^# r, reword"
color brightred     "^(s|squash) [0-9a-f]{7,40}"
color brightred     "^# s, squash"
color brightblue    "^(m|merge) (-[Cc] [0-9a-f]{7,40}|[^ ]+)"
color brightred     "^(m|merge) "
color brightred     "^# m, merge"
color red           "^(d|drop) [0-9a-f]{7,40}"
color red           "^# d, drop"
color white         "^(x|exec)[[:space:]]+[^ ]+.*$"
color brightyellow  "^(x|exec) "
color brightyellow  "^# x, exec"
color white         "^(b|break)"
color white         "^# b, break"
color brightblue    "^(l|t|label|reset) [^ ]+"
color white         "^(l|t|label|reset) "
color white         "^# l, label"
color white         "^# t, reset"

# Commit IDs
color brightblue "[0-9a-f]{7,40}"

# Recolor hash symbols
color brightblack "^#"

Now, when you issue a command that uses an editor (e.g. git commit), you should see nano with some nice colors.

Creating Python virtual environments

Python virtual environments allow you to install Python packages (libraries and tools) without interfering with the system-wide Python installation. Virtual environments are isolated from each other, so you can, for example, install different versions of the same package in different environments. You typically create a new venv for each app that requires a lot of dependencies, for example, one venv for a-plus and another for the mooc-grader, assuming you develop both systems. We recommend creating the venv directory outside the application directory, e.g., outside the a-plus source code directory.

Since we can run A+ and the MOOC-Grader in Docker containers, we typically only need virtual environments for running certain Django commands like "test", "makemigrations", "makemessages", and "compilemessages".

The example below creates a new venv in the directory venv_aplus under the current working directory.

Creation of Python virtual environment
aptdcon --install python3-venv
python3 -m venv venv_aplus
source venv_aplus/bin/activate
pip install --upgrade pip setuptools
pip install wheel
# You have created the virtual environment now, but usually you also want to install dependencies from some app to the venv.
# For example for A+, change directory to the a-plus (source code) directory that you cloned from git. It contains the required pip packages in the file requirements.txt.
pip install -r requirements.txt

The source command is used to activate the venv again at another time (in a new terminal window or after rebooting the machine): source venv_aplus/bin/activate
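The activate/deactivate cycle can be sketched as follows (the /tmp path is only an example); activation simply puts the venv's bin directory first in PATH:

```shell
python3 -m venv /tmp/demo_venv        # create a throwaway venv
. /tmp/demo_venv/bin/activate         # activate: python and pip now come from the venv
command -v python                     # -> /tmp/demo_venv/bin/python
deactivate                            # back to the system python
```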

A+ local development setup

Course developers usually do not modify the A+ source code, so they do not need to clone the a-plus git repo. They can run the latest released versions of the A+ Docker containers in order to test their courses. The container version is set in docker-compose.yml. There is also a utility repo for debugging A+ and mooc-grader using VSCode in a docker container; the src/srv distinction below applies to it too (the path is set to /srv/ in that repo).

A+ and MOOC-grader developers need to mount the source code to the Docker containers so that they can run their own version of the source code in the container. Example paths for the volumes field in docker-compose.yml (the full example docker-compose.yml file is also below):

  • /home/user/a-plus/:/src/aplus/:ro
  • /home/user/mooc-grader/:/src/grader/:ro
  • The first path before the colon is the local path that you need to fix for your own computer.
  • A-plus is installed in /srv/aplus in the run-aplus-front container. You can mount a development version of the source code to /src/aplus. The container will then copy it to /srv/aplus and compile the translation files. If you mount directly to /srv/aplus, you need to compile the translation files manually beforehand, but on the other hand, Django can then reload the code and restart the server without restarting the whole container when you edit the source code files.
  • The same applies to the run-mooc-grader container, but the paths inside the container are /src/grader and /srv/grader.
  • You should add the following symbolic link to the root of the mooc-grader directory on your computer: ln -s /srv/courses/ exercises
    • With the symbolic link, the run-mooc-grader container finds the course directory even when you mount your own mooc-grader source code into the container.
  • The run-mooc-grader container normally mounts the course directory to /srv/courses/default
  • In the run-mooc-grader container, if you do not use the symbolic link "exercises → /srv/courses" and if you mount the mooc-grader code to /srv/grader, then you need to change the course directory mount to /srv/grader/courses/default in docker-compose.yml.

docker-compose.yml with A+ development mounts
version: '3'

# service names (grader, plus) follow the aplus-manual convention
services:
  grader:
    image: apluslms/run-mooc-grader:1.8
    volumes:
      - data:/data
      - /var/run/docker.sock:/var/run/docker.sock
      - /tmp/aplus:/tmp/aplus
      # This file should be in a course directory and
      # the course directory is mounted to the MOOC-Grader container.
      - .:/srv/courses/default:ro
      # mount the mooc-grader source code to one of the two options:
      - /home/user/mooc-grader:/src/grader:ro
      #- /home/user/mooc-grader:/srv/grader
      #- .:/srv/grader/courses/default:ro
    ports:
      - "8080:8080"
  plus:
    image: apluslms/run-aplus-front:1.8
    volumes:
      - data:/data
      # mount the A+ source code to one of these:
      - /home/user/a-plus:/src/aplus:ro
      #- /home/user/a-plus:/srv/aplus:ro
    ports:
      - "8000:8000"
    depends_on:
      - grader

volumes:
  data:

aplus/ is needed when you run Django commands (such as makemessages, compilemessages, makemigrations). It should contain the settings below. Note that when you run the run-aplus-front container, it does not use the file from the root of the a-plus source code.

DEBUG = True
BASE_URL = 'http://localhost:8000/'

Debugging a python program running inside docker using VSCode

This only tells you how to start the debugger. For how to actually use it, see the VSCode debugging documentation.

  1. Install the python extension to VSCode if you haven't already:
  2. The program to be debugged needs to be started using a program called debugpy. The easiest way to install and run it is to add sh -c "pip install debugpy -t /tmp && python /tmp/debugpy --listen 0.0.0.0:5678 <python script and args>" to the command docker runs. E.g. if using docker-compose and the command is python3 runserver 0.0.0.0:8000 (or ["python3", "", "runserver", "0.0.0.0:8000"] in the .yml file), you would set the command to sh -c "pip install debugpy -t /tmp && python /tmp/debugpy --listen 0.0.0.0:5678 runserver 0.0.0.0:8000" (or ["sh", "-c", "pip install debugpy -t /tmp && python /tmp/debugpy --listen 0.0.0.0:5678 runserver 0.0.0.0:8000"] in the .yml file). You can also add the flag --wait-for-client to the debugpy command to stop the program from running before the debugger is attached.
  3. Expose the port 5678 to your host machine. E.g. add "<some port>:5678" to the ports list in the .yml file for docker-compose where <some port> is a port you can choose to be used for the debugging.
  4. Add a vscode launch configuration for debugging the program by adding the following to the configurations list in the launch.json file:
        {
            "name": "<some name>",
            "type": "python",
            "request": "attach",
            "connect": {
                "host": "localhost",
                "port": <the exposed port chosen earlier>
            },
            "pathMappings": [
                {
                    "localRoot": "<path to the code directory on the host machine>",
                    "remoteRoot": "<path to the code directory in the docker container>"
                }
            ]
        }
    If you want to step into third-party libraries or some other code that is not within the localRoot directory, you'll need to add more items to the pathMappings list explaining where to find the code. See the VSCode debugging documentation for variables you can use in the localRoot and remoteRoot fields (${workspaceFolder} would probably be useful). I have found that sometimes the variables don't work right with the python debugger, so you might want to use absolute paths if possible.

Now you should see the newly added launch configuration in the list of debuggable things (top of the Run and Debug tab). Starting it once the program in the docker container is running should attach the debugger to it.

Example configs:

version: '3'

services:
  plus:
    image: apluslms/run-aplus-front:1.12
    command: ["sh", "-c", "pip install debugpy -t /tmp && python /tmp/debugpy --listen 0.0.0.0:5678 runserver 0.0.0.0:8000"]
    volumes:
      - data:/data
      - ./a-plus:/srv/aplus:ro
    environment:
      APLUS_LOCAL_SETTINGS: "/srv/aplus/aplus/" # this replaces the local settings from the run-aplus-front image
    ports:
      - "5678:5678"

volumes:
  data:
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Attach: A+",
            "type": "python",
            "request": "attach",
            "connect": {
                "host": "localhost",
                "port": 5678
            },
            "pathMappings": [
                {
                    "localRoot": "${workspaceFolder}",
                    "remoteRoot": "/srv/aplus"
                },
                {
                    "localRoot": "/home/aplus/aplus/venvs/a-plus/lib/python3.8/site-packages/",
                    "remoteRoot": "/usr/local/lib/python3.7/dist-packages/"
                },
                {
                    "localRoot": "/home/aplus/aplus/aplus-messaging/aplus_auth/",
                    "remoteRoot": "/aplus-auth/aplus_auth/"
                }
            ],
            "justMyCode": false
        }
    ]
}
Git tutorials and guides


This is NOT a tutorial. If you are unfamiliar with git or unsure of what you are doing, read some tutorials and/or ask for help. If you are not sure what a command does, read the documentation.

Reading documentation in the terminal:
General: man git
Command specific: git <command> --help OR man git-<command> OR git help <command>

COMMANDS                                                           EXAMPLE (where 'origin' is the upstream and 'fork' my remote repository)
--------                                                           ------------------------------------------------------------------------
git fetch <remote>                                                 git fetch origin
git checkout <branch to be updated>                                git checkout master
git merge --ff-only <remote>/<branch> <branch to be updated>       git merge --ff-only origin/master master
# OPTIONAL: if you wish to update your fork as well
git push <your remote fork> <branch>                               git push fork master

# start with updating your local repo if it's not up to date

git checkout <branch or commit where you wish to begin your new branch>
git checkout -b <name of new branch>

# make wanted changes

git status
git add <file> # do this for all wanted files; if you want just part of a file, use -p
# if you want to add all tracked files, use -u

git commit
# write commit message

# Make new changes and commits as needed

git push <your remote> <branch>

If ready for a pull request, you can create it in GitHub

# start with cleaning the branch history if applicable (no PR yet, but there are fixups or things to squash)

git checkout <branch>
git rebase <branch to rebase onto>    # Example: git rebase master

git checkout <branch>
# make the desired changes
git add <file>        # for all the changed files

If an entirely new commit:
git commit
# write commit message

If a fixup:
git commit --fixup=<commit>

git push <your remote> <branch>

git checkout <branch to clean up>
git rebase -i <commit or tag or branch that is before the things you want to clean>
# if you have fixup-commits, you can also use:
# git rebase -i --autosquash <commit>

# reorder commits to wished order, mark commits that should be squashed to the previous ones with f or s appropriately, etc.
# In Nano, CTRL+K (cuts the line you are currently on) and CTRL+U (pastes the line) may be very useful for reordering the commits

# if the pull request is done and approved, then we can force push the cleaned branch
git push -f <your remote fork> <branch>

  • CTRL+Z pauses whatever you had in progress
    • You can see what all you have on pause with the command jobs
    • If you have just one job on pause, you can return to it with the command fg
    • If you have several jobs on pause, you can return to the job you wish with the command fg %2, where the number is the number of the job
  • CTRL+S pauses terminal output (flow control); keypresses made in the meantime are stored in a buffer
    • CTRL+Q resumes the connection (and executes everything in the buffer)
    • If you have written something that's in the buffer that you DO NOT want to be executed, you can just close the terminal tab or terminal

VSCode partial staging

VSCode allows partial staging. That is, it allows you to stage (add to the next commit) specific lines:

  1.  In the Source Control tab: click on the file you want to stage lines from under "Changes"
    • This should open a diff view with the original file on left and changed on the right
  2. Select the lines you want to stage from the changed file
  3. Right-click and choose "Stage Selected Ranges"
    • Whole lines are staged at once; you cannot stage only half of a line
    • You don't need to select the whole line to stage it; the whole line is staged as long as part of it is selected
    • Single lines can be staged by just right-clicking on them without needing to select

You can also use this to revert a change by choosing "Revert Selected Ranges".

A similar process can be used to unstage changes from the files under "Staged Changes" in the Source Control tab.

Django tutorials (Python web framework)


Manual testing

Manual testing is always required as well, even if you have a lot of automated tests. Mount your source code into the A+ and MOOC-Grader Docker containers and run them in a test course. Think about what can be tested automatically and what should be tested manually.

Unit testing

  • It is easier to run these Django commands in your host machine than in a container. Thus, you need to create a Python virtual environment (instructions are located in this page) and install the requirements (pip packages) from the project (A+ or MOOC-Grader) into the venv.
  • Running unit tests in a Django project: python test
    • To save time, you can run only a certain test suite with a parameter: python test exercise.tests.ExerciseTest.test_base_exercise_absolute_url
  • New features should be accompanied with adequate unit tests. Create tests for at least the most important cases and also edge cases.
  • Unit tests are made for the backend Django code. We don't have unit tests for frontend JavaScript code (and it is not possible to add any JS unit tests now).
  • We don't have enough automatic tests for our old code base, but at least we should test new features better and always include automatic tests.

Selenium tests

Selenium tests run a browser such as Firefox and execute interactions in the web page according to the test code. They test whether the page behaves correctly. Selenium tests are a form of functional testing that tests the system from the end user's perspective instead of testing the system's internal functions like unit tests do.

  • A+ has some old Selenium tests. They cover only a small part of the system.
  • Selenium tests are a bit difficult to write and they may easily break when the HTML code of the pages is changed again later.
  • Let's use Selenium tests for only the most important parts of the system, especially when bugs are hard to find manually. For example, end-of-course features, results/points page that teachers use for final grading of the course.
  • A+ has some documentation about its Selenium tests:

Type hints

For VSCode, install the django-types package to your venv using pip: `pip3 install django-types`.

Type hints must be added to the following things:

  • Function parameters and return types (even if the function doesn't return anything)
  • Variables if the type cannot be inferred by the type checker

Typing rules:

  • The type should be the one that the design dictates (as opposed to what the usage dictates)
    • Scenario: A function is designed to take any type that inherits from class A but currently only type B (which inherits from A) is ever passed to the function.
      The type hint should be A because that is what the function is designed to take.
    • Scenario: A method doesn't use some of its parameters but they are required due to overriding.
      The type hints should be whatever all the other overrides of the method take (i.e. what the type hints of the overridden method are).
    • Scenario: An abstract method that is designed to be overridden by other classes.
      Again, choose the types that the overriding versions are designed to take. There should be some common ancestor for them.
    • Overriding methods can tighten the type hints if it is designed such that the overriding version only takes that type  
    • Only use Any if the types really can be anything, and the type cannot be inferred using generics or TypeVar
  • Unknown types are not allowed 
  • Mark types inside lists and dicts
  • Whenever you modify a function and it doesn't have type hints, add them

Git and pull request guidelines

Read these:

  • git commit messages are important and they should explain the reason and background for the changes:
  • Pull Request Etiquette:
    • Responsibilities of author (person who creates the pull request) and reviewer (who accepts it)
    • Includes good reasoning why good commit messages are important (in addition to the "hidden documentation" above)
    • Though, doesn't include how to give positive comments
  • About the style of comments and behaviour on pull requests
  • There is a pull request template in the A+ repositories. It is used in the pull request description when you make a new pull request. The template reminds you to write a clear description and to discuss how the code has been and should be tested.
  • The pull request title should be written in the imperative form: "Add this feature", NOT "Adds some feature" or "This PR adds a feature".
  • The pull request description is written like normal text. Be clear and concise, but include all relevant information so that readers understand the topic and the need for the changes. Compare writing the description to writing email messages.
  • Naming the git branch for the pull request: the name does not technically affect anything, but it is best to use very short and clear names that describe the topic of the pull request. You will more easily recognize your own git branches later when they have descriptive names. You may include the issue number in the branch name if you like that.

Markku's comments:

  • You create new PRs for new features and fixes that logically belong together. So, one PR for one feature. The contents of one PR should constitute a logical set of changes for the whole feature. A pull request does not always add a new feature, since there are also bug fixes and other necessary changes without adding new features.
  • You modify the PR by pushing new commits when you fix bugs and other issues that have been pointed out in the code review and testing.
  • You should clean up the git history before creating the PR. While the PR is being reviewed, you make changes in new commits and do not squash so that reviewers can easily follow the latest changes (the newest commits). At the end when everything has been polished, you clean up the git history again (git rebase, squash etc.) and force-push.
  • There can be more than one commit. Each commit should constitute a sensible set of changes that somewhat belong together. The code should usually always "work" after each commit, so don't leave the code in a completely broken state after a commit (syntax errors, immediate crashes when executing the code). Of course, not every detail can work after a single commit when large features are split into many commits, but there are usually logical subcomponents or such.
  • Massive commits are hard to read when developers track changes afterwards (for example, while bug hunting). A large number of very small commits is also hard to digest.

Tuomas's comments on new features that require refactoring:

  • Split the commits in two steps: first make the necessary changes and then implement the new feature
    • This is not an iron-clad rule, sometimes it is difficult to split the changes like this 
    • That is, a single commit should only contain refactoring or new feature code, not both.
    • E.g. you could have first 3 commits that refactor the old code and then 2 commits that implement the new feature
    • This documents the code well and makes reviewing the PR much easier
    • The VSCode partial staging would probably help in this
  • The product breaking due to a commit does not make the commit automatically bad
    • The purpose of a commit is to document the code and changes
    • Whatever breaks, must be fixed later in the same PR
    • If a PR contains commits that break the product, it must be merged using a merge commit to mark the commits in the PR as belonging to a bigger whole. 

Reviewing someone else's pull request:

  • Start a review in the pull request (Github has a button for that) and add comments to the code lines that you want to discuss. You may, for example, suggest changes, ask about unclear things (that should maybe be implemented better or clarified with a comment), and point out possible bugs.
  • It is possible to write comments in Github without starting a review, but we prefer to start a review when you are really reviewing the whole pull request. Individual comments could be used if you had to comment on something without reviewing the whole pull request.
  • The pull request discussion tab is used for comments that are not tied to any specific line of code.
  • When there are conversations on certain code lines, either the developer or the reviewer can resolve the conversation, which collapses it (it can still be opened later, however). There is no strict rule about who should resolve these conversations. If the developer is certain that they have finished the requested changes correctly, the developer can resolve the conversation after pushing the changes. Otherwise, the reviewer or the manager can resolve it after verifying that the code has been fixed correctly.

UPDATE 26.6.2020 based on discussion (TODO: clean up the pull request and issue guide)

  • There should normally be a GitHub issue about something before you implement the changes and submit a pull request.
  • A GitHub issue should describe a problem and a use case (what the user needs or wants and why). The issue should not be a direct statement "do this".
    • We have many old issues that are orders ("do this"), but then it is important to find out the actual problem. That can be discussed in the issue. The original idea ("do this in that way") may actually be bad and the problem should be solved with a different approach.
  • In the issue, discuss ideas about how to solve the problem and what kind of implementation would be good. Describe the overall idea of the implementation before you start coding. This way, we can agree on the approach before you write a lot of code and submit a pull request.
  • Since we have agreed about the approach for implementation before submitting the PR, we won't argue about the approach in the PR discussion.
  • The PR discussion and review can concentrate on smaller issues and details in the code (like coding style, inefficient code, or security problems). The overall architecture should already be good since it has been agreed on in the issue discussion. Remember to link the PR to the issue.
  • The PR discussion should stay on topic and discuss the changes of the PR. Discussion about the actual problem/feature is done in the issue, not the PR. Usually, the discussion about the problem should be completed before submitting the PR and there should be no need to debate about the original problem anymore in the PR discussion.
  • If new concerns appear during the PR discussion that are not directly related to the original issue, then a new issue can be created from those. Let's keep the PR discussion on topic!
  • Draft pull requests can be used for sharing coding ideas at an early stage. For example, if you want to discuss some code design details before finishing the code, a draft PR is a good way to share the code with others and discuss it.
  • Draft PRs are not ready for normal, full reviews and no such reviews should be done on them.
  • If you have a draft PR, you can force-push the finished code to the git branch at the end and convert it to a normal pull request.
  • Normal pull requests are ready for review. Reviews may suggest changes to the code, but massive changes should not be needed at this stage since the problem has been discussed in the issue beforehand.
  • After reviews have been done and the fixed code is finished, the git history shall be cleaned. Small fixes should usually be squashed into the appropriate commits and the code may need to be rebased onto the upstream master.

Jaakko's comment on 26.6.2020:

The goal of the process is to ensure that there are no huge problems at PR time, but they can still happen. Sometimes we may need to move back to the issue, discuss it with the new information in mind, and choose what, if anything, should be done. With a clear idea of the problem, the use case, and an implementation idea, that should not happen too often, though.
In short, the development, implementation, or review can uncover information that was not available during the issue discussion. If that happens, I think a good idea would be to comment in the issue, "The code/thing/idea (link to the code if relevant) made me realize that we have not considered Y" or similar. This can mean that the PR is closed (if it is way off the new plan), the PR is redirected, or the PR is accepted but does not close the issue and another PR is created to continue. Keep in mind that new issues should be created if that is more suitable.
(Interestingly, the original paper about the waterfall development model described a process where you can return to previous steps if needed.)



  1. Maybe a link to the GitHub repositories could be good (Aplus, mooc-grader, rst-tools, A+ manual ...).

    About the PR, I think that we could start creating one with some basic information, and provide a link to that file.

  2. I read this guide, how-to-contribute-to-open-source-getting-started-with-git (at least the first three articles), and it explains the process of creating a PR pretty well (git setup, forking repositories, remote branches, rebasing, and some other topics). You can read it, and perhaps add the link under the Git and pull request guidelines section. I think we can create a copy of those guidelines and use them in our projects.