This page lists the research topics that are currently available in the Secure Systems group or with our industry partners. Each topic can be structured either as a special assignment or as an MSc thesis, depending on the interests, background and experience of the student. If you are interested in a particular topic, send e-mail to the contact person listed (as well as the responsible supervisor: N. Asokan, Tuomas Aura, Lachlan Gunn or Janne Lindqvist), explaining your background and interests.
All topics have one or more of the following keywords:
- PLATSEC: Platform security
- NETSEC: Network security
- ML & SEC: Machine learning and security/privacy
- USABLE: Usable security
- OTHER: Other security research themes
Available research topics for students in the Secure Systems Group
PLATSEC Private outsourced computation with Blinded Memory
Outsourcing computational tasks to cloud services can be cheap and convenient, but requires a lot of trust in the provider. How do we know that the cloud provider won't suffer a data breach, or sell our private data?
Blinded Memory (BliMe) aims to modify a CPU so that it can make a convincing promise to its clients that it will not reveal their sensitive data outside the system, whether through malware, software vulnerabilities, or even side-channel attacks (such as measuring execution times, or more advanced attacks like the well-known Meltdown and Spectre). It does this by using special hardware to import data from outside and then applying a taint-tracking policy to ensure that the data is not revealed to anyone except the original client. If a program tries to leak sensitive data, it will crash.
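To give a flavour of what such a taint-tracking policy means in practice, here is a minimal Python sketch of the idea. The class and function names are invented for this illustration, and the rule shown is a simplification, not the actual BliMe hardware policy.

```python
# Toy model of hardware taint tracking (illustrative only; names are invented
# for this sketch and do not correspond to the real BliMe design).

from dataclasses import dataclass

@dataclass(frozen=True)
class Word:
    value: int
    tainted: bool = False   # True if derived from the client's sensitive data

class TaintFault(Exception):
    """Raised when an operation would leak tainted data."""

def alu_add(a: Word, b: Word) -> Word:
    # Computation on tainted data is allowed; the result stays tainted.
    return Word((a.value + b.value) & 0xFFFFFFFF, a.tainted or b.tainted)

def branch_if_nonzero(cond: Word) -> bool:
    # Control flow must not depend on tainted data: execution time would
    # reveal it (a timing side channel), so this toy policy faults instead.
    if cond.tainted:
        raise TaintFault("branch depends on sensitive data")
    return cond.value != 0

def send_to_network(w: Word) -> int:
    # Tainted data may only leave the system back to the original client;
    # here we simply forbid exporting it in the clear.
    if w.tainted:
        raise TaintFault("attempt to export sensitive data")
    return w.value

secret = Word(42, tainted=True)          # imported client data
public = Word(7)

total = alu_add(secret, public)          # fine: result is tainted
try:
    send_to_network(total)               # faults: would leak the secret
except TaintFault as e:
    print("blocked:", e)
```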
This is a large project, and there are lots of different ways that you might be able to contribute. Here are a few:
Formal verification
We have developed a formally-verified model of BliMe, written in F*. F* is a functional language intended for formal verification, from which it is possible to extract C or OCaml code.
- Formally-verified executable simulation. So far we have used F* to prove the security of our model, but we cannot test the model's accuracy because it is not directly executable. The goal of this project would be to take our generic F* description of the ISA and add the extra functionality needed to make it executable (a rough sketch of the idea is shown after this list).
- Unsafe registers/memory. The model verifies that, by applying taint tracking throughout the system, sensitive data cannot leak to the outside. However, we cannot apply taint tracking everywhere. Currently the program counter is the only such state we model, treated as a special case; everything else can potentially hold sensitive data. But a real system has other non-private state, such as memory mapped to peripherals, as well as complications like variable-length instructions. The goal of this project would be to extend the model to incorporate this kind of state, and to show that an appropriately chosen taint-tracking rule still preserves the privacy of client data.
- Microarchitectural side channels. We currently consider only an unpipelined, in-order CPU model, in which instructions complete one after another in the order in which they appear in memory. Real processors are more complex: to achieve high performance, execution is pipelined, with one instruction beginning to execute before previous instructions have finished, and in many processors instructions are also executed out of order, so that a later instruction can execute before an earlier one if the data it needs is already available. The goal of this project would be to model this kind of processor in order to verify the security of a pipelined, out-of-order, and ultimately speculative design.
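For the executable-simulation project mentioned above, the following Python sketch shows, in miniature, what a taint-tagged single-step interpreter can look like. The toy instruction set, state layout and fault rule are all invented for this example; the real project would instead make the F* model itself executable.

```python
# Minimal executable interpreter for a toy taint-tagged ISA.
# The ISA, state layout and policy below are invented for illustration;
# the actual project would execute the F* model of BliMe directly.

from dataclasses import dataclass, field

@dataclass
class State:
    pc: int = 0
    regs: list = field(default_factory=lambda: [(0, False)] * 8)  # (value, taint)

def step(state: State, program: list) -> State:
    op, *args = program[state.pc]
    if op == "addi":                       # rd <- rs + imm
        rd, rs, imm = args
        val, taint = state.regs[rs]
        state.regs[rd] = ((val + imm) & 0xFFFFFFFF, taint)
        state.pc += 1
    elif op == "beqz":                     # branch to target if rs == 0
        rs, target = args
        val, taint = state.regs[rs]
        if taint:
            raise RuntimeError("fault: branch on tainted register")
        state.pc = target if val == 0 else state.pc + 1
    else:
        raise RuntimeError(f"unknown opcode {op}")
    return state

# A tiny program: r1 <- r0 + 5; then loop forever while r2 == 0.
prog = [("addi", 1, 0, 5), ("beqz", 2, 1)]
s = State()
s.regs[0] = (10, True)                      # r0 holds tainted client data
s = step(s, prog)                           # allowed: taint propagates to r1
print(s.regs[1])                            # (15, True)
```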
GPUs
So far, BliMe has been implemented only for CPUs. But many real-world applications, such as machine learning, also use a GPU for computation. In this project, you will investigate how to extend BliMe to GPUs by taking advantage of the fact that GPU code tends to be simpler and more predictable than code written for CPUs, in order to achieve maximum performance. You will use static analysis to verify that such code does not leak sensitive data, thereby allowing BliMe to be extended to GPUs that support confidential computing, like Nvidia's H100.
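As a rough illustration of the static-analysis part (and not an actual BliMe-for-GPU design), the Python sketch below scans a tiny, made-up kernel-like intermediate representation for two obvious leaks: tainted values used as branch conditions or as memory addresses. The IR format and all names are assumptions made for this example.

```python
# Illustrative static taint analysis over a tiny, invented kernel IR.
# Real GPU kernels and the actual analysis would be far richer; this only
# sketches the idea of proving statically that no tainted value reaches a
# branch condition or a memory address.

def analyse(kernel, tainted_inputs):
    """Return a list of policy violations found in the kernel."""
    taint = set(tainted_inputs)
    violations = []
    for i, (op, dst, srcs) in enumerate(kernel):
        src_tainted = any(s in taint for s in srcs)
        if op in ("add", "mul", "mov"):
            # Data flow: the destination becomes tainted if any source is.
            if src_tainted:
                taint.add(dst)
            else:
                taint.discard(dst)
        elif op == "branch" and src_tainted:
            violations.append(f"line {i}: branch condition depends on secret")
        elif op in ("load", "store") and src_tainted:
            violations.append(f"line {i}: memory address depends on secret")
    return violations

# (op, destination, sources) -- all invented for this sketch
kernel = [
    ("mul",    "t0", ["secret", "w"]),   # fine: constant-time arithmetic
    ("add",    "t1", ["t0", "bias"]),    # fine: taint propagates to t1
    ("branch", None, ["t1"]),            # flagged: data-dependent control flow
    ("load",   "t2", ["t0"]),            # flagged: data-dependent address
]

print(analyse(kernel, tainted_inputs={"secret"}))
```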
Useful skills
This is a big project with a wide variety of tasks, so if you're interested then please get in contact, and we'll see how your background can fit in. In general, C or C++ programming skills and a basic information security background will be useful for this project, but any familiarity at all with the following might also be helpful:
- Functional programming
- Scala programming
- Formal verification
- Cryptographic protocols
For more information, please contact: Lachlan Gunn, email lachlan.gunn@aalto.fi
Further reading:
BliMe: Verifiably Secure Outsourced Computation with Hardware-Enforced Taint Tracking, https://arxiv.org/abs/2204.09649
PLATSEC Application isolation using SELinux
The Linux feature SELinux enforces a class of access control policy known as type enforcement. These policies are built up of rules like "web servers can read website files" and "web applications can open connections to a PostgreSQL database server". In practice, real systems are complex enough that it is difficult to write these rules by hand without giving applications more privileges than they need.
In this project, you will look at ways to automatically modify policies in order to reduce the privileges of applications without breaking the system. In particular, your goal will be to identify a set of policy changes that isolates an application from the rest of the system, except for an explicitly listed set of interactions that can be audited.
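To make this more concrete, here is a much-simplified Python sketch of the kind of analysis involved: given a set of allow rules (modelled here as plain tuples rather than a real compiled policy), it splits the rules involving a target application's domain into an explicitly allowed interface and a set of candidate rules to remove. The type names are common SELinux examples, but the rule set and the function are invented for illustration; a real tool would operate on the actual policy, for example via the setools Python bindings.

```python
# Simplified sketch of trimming an SELinux policy around one application.
# Rules are modelled as (source_type, target_type, class, permission) tuples;
# a real tool would read these from the compiled policy instead.

# Example allow rules, loosely based on common SELinux type names.
rules = [
    ("httpd_t", "httpd_sys_content_t", "file", "read"),                # serve website files
    ("httpd_t", "postgresql_port_t",   "tcp_socket", "name_connect"),  # talk to the database
    ("httpd_t", "shadow_t",            "file", "read"),                # far too broad
    ("sshd_t",  "shadow_t",            "file", "read"),                # unrelated domain
]

def isolate(rules, app_domain, allowed_interactions):
    """Split rules for app_domain into a kept interface and removal candidates."""
    keep, drop = [], []
    for rule in rules:
        src, tgt, cls, perm = rule
        if src != app_domain:
            keep.append(rule)                 # other domains are out of scope here
        elif (tgt, cls, perm) in allowed_interactions:
            keep.append(rule)                 # explicitly listed, auditable interaction
        else:
            drop.append(rule)                 # candidate privilege to remove
    return keep, drop

allowed = {
    ("httpd_sys_content_t", "file", "read"),
    ("postgresql_port_t", "tcp_socket", "name_connect"),
}

keep, drop = isolate(rules, "httpd_t", allowed)
print("rules to remove:", drop)   # [('httpd_t', 'shadow_t', 'file', 'read')]
```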
Useful knowledge:
- Programming skills in Python
- A general information security background (e.g. as provided by the course Information Security)
- Experience with the Linux operating system
For more information, please contact: Lachlan Gunn, email lachlan.gunn@aalto.fi