


Available research topics for students in the Secure Systems Group 

This page lists the research topics currently available in the Secure Systems group or with our industry partners. Each topic can be structured either as a special assignment or as an MSc thesis, depending on the student's interests, background, and experience. If you are interested in a particular topic, send an e-mail to the listed contact person (as well as the responsible professor: either N. Asokan or Tuomas Aura) explaining your background and interests.

All topics have one or more of the following keywords: 

PLATSEC Platform Security

NETSEC Network Security

ML & SEC Machine learning and Security/Privacy

USABLE Usable security and stylometry

OTHER Other systems security research themes




Fingerprinting Neural Network Models using their learned representations ML & SEC

Creators of machine learning models need solutions to demonstrate their ownership if their models are stolen. Watermarking has been proposed as a technique to enable demonstration of ownership. For instance, several recent proposals watermark deep neural network (DNN) models using backdooring: training them with additional mislabeled data [a,b]. 

In contrast to watermarking, fingerprinting would enable identifying and proving ownership of ML models without modifying them (by adding a watermark). Several methods have recently been introduced to extract and compare the representations learned by ML models, especially neural networks (NNs) [c]. The goal of this project is to assess whether NN representations can be used as a reliable technique to fingerprint NN models. It entails implementing and comparing existing representation-similarity measures such as CKA [d], CCA [e], and cosine similarity, and evaluating their ability to quantify both the high similarity between models derived from the same ML model (by fine-tuning, pruning, or distillation [f]) and the low similarity between models that do not share a common origin.
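As an illustration, here is a minimal sketch of linear CKA [d] computed between two models' activations on the same inputs; the function name and the NumPy setup are our own assumptions, not code from the cited papers:

    import numpy as np

    def linear_cka(X, Y):
        # X, Y: (n_examples, n_features) activation matrices produced by
        # two models on the same inputs; feature dimensions may differ.
        X = X - X.mean(axis=0, keepdims=True)   # centre each feature
        Y = Y - Y.mean(axis=0, keepdims=True)
        cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
        return cross / (np.linalg.norm(X.T @ X, ord="fro")
                        * np.linalg.norm(Y.T @ Y, ord="fro"))

A fingerprinting pipeline would compute such similarity scores between a suspect model and the original, then apply a threshold to decide whether the suspect was derived from the original.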

Requirements:

  • Programming skills in Python / PyTorch
  • Knowledge of Deep Learning, DNN/CNN model training and parametrisation 
  • Knowledge about image classification and how to train such models
  • Knowledge of fine-tuning and pruning is a plus

References:

  • [a] Adi, Y., Baum, C., Cisse, M., Pinkas, B., Keshet, J.: Turning your weakness into a strength: Watermarking deep neural networks by backdooring. In: 27th USENIX Security Symposium (USENIX Security 18). pp. 1615–1631 (2018)
  • [b] Zhang, J., Gu, Z., Jang, J., Wu, H., Stoecklin, M.P., Huang, H., Molloy, I.: Protecting intellectual property of deep neural networks with watermarking. In: Proceedings of the 2018 on Asia Conference on Computer and Communications Security. pp. 159–172 (2018)
  • [c] Morcos, A., Raghu, M. and Bengio, S., 2018. Insights on representational similarity in neural networks with canonical correlation. In Advances in Neural Information Processing Systems (pp. 5727-5736).
  • [d] Kornblith, Simon, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. "Similarity of neural network representations revisited." arXiv preprint arXiv:1905.00414 (2019).
  • [e] Raghu, M., Gilmer, J., Yosinski, J. and Sohl-Dickstein, J., 2017. SVCCA: Singular vector canonical correlation analysis for deep learning dynamics and interpretability. In Advances in Neural Information Processing Systems (pp. 6076-6085).
  • [f] Liu, K., Dolan-Gavitt, B., Garg, S.: Fine-pruning: Defending against backdooring attacks on deep neural networks. In: International Symposium on Research in Attacks, Intrusions, and Defenses. pp. 273–294. Springer (2018)

For further information: Please contact Samuel Marchal (samuel.marchal@aalto.fi)

Locality-Sensitive Neural-Network-Based Perceptual Hashing and Contextual Hashing ML & SEC

Note: this is actually two topics but in different domains (Computer Vision and Natural Language Processing).

Locality-sensitive hashing (LSH) is an embedding function that, for two similar inputs A and B, produces hashes Ah and Bh that are also similar. This differs from hashing in the cryptographic sense, where the distribution of outputs should reveal no information about the inputs. LSH is extremely useful for efficient search and storage. However, in many domains the similarity of two inputs may be straightforward for a human to determine, yet opaque to a hash function that is unaware of context, meaning, and representation. Hence, we need solutions that compensate for these deficiencies.
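For intuition, a minimal random-projection (SimHash-style) LSH sketch; the dimensions and names are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    dim, bits = 512, 64
    planes = rng.standard_normal((bits, dim))

    def simhash(x):
        # Sign pattern of random projections: vectors that are close in
        # cosine distance flip few bits; unrelated ones differ in ~half.
        return (planes @ x > 0).astype(np.uint8)

    a = rng.standard_normal(dim)
    b = a + 0.05 * rng.standard_normal(dim)   # slightly perturbed copy of a
    c = rng.standard_normal(dim)              # unrelated vector

    print(np.sum(simhash(a) != simhash(b)))   # small Hamming distance
    print(np.sum(simhash(a) != simhash(c)))   # roughly bits / 2

The projects would replace the random projection with a learned neural embedding, so that "similar" captures perceptual or semantic closeness rather than raw vector distance.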

In these projects, we are going to investigate and design novel locality-sensitive hash functions based on neural networks. Our focus is on two domains:

1) Perceptual hashing of images: an image that was subjected to a change of contrast, shifting, rotation, or inversion should produce a hash similar to that of the original.

2) Contextual hashing of text: sentences that have roughly the same meaning and similar grammatical structure should have similar hashes, e.g. "I think it was a great movie" vs. "In my opinion, this was a nice film".

Note: 1) There's going to be a programming pre-assignment. 2) Literature review can be done as a special assignment.

Requirements

  • MSc students in security, computer science, machine learning
  • Familiarity with both standard ML techniques and deep neural networks (NLP and image classification)
  • Good math and algorithmic skills (hashing in particular)
  • Strong programming skills in Python/Java/Scala/C/C++/Javascript (Python preferred as de facto language)

Nice to have

  • industry experience in software engineering or related
  • research experience
  • familiarity with adversarial machine learning

Some references:
[1]: Markus Kettunen, Erik Härkönen, Jaakko Lehtinen. E-LPIPS: Robust Perceptual Image Similarity via Random Transformation Ensembles

[2]: Lucas Bourtoule, Varun Chandrasekaran, Christopher Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, Nicolas Papernot. Machine Unlearning

[3]: A Visual Survey of Data Augmentation in NLP

[4]: Bian Yang, Fan Gu, Xiamu Niu. Block Mean Value Based Image Perceptual Hashing. In IIH-MSP '06: Proceedings of the 2006 International Conference on Intelligent Information Hiding and Multimedia

[5]: Locality-Sensitive Hashing

For further information: Sebastian Szyller (sebastian.szyller@aalto.fi) and Prof. N. Asokan.

Stealing GANs and Other Generative Models ML & SEC

In recent years, the field of black-box model extraction has been growing in popularity [1-5]. In a black-box model extraction attack, the adversary sends queries to a victim model sitting behind an API and obtains its predictions. It then uses the queries together with the predictions to reconstruct the model locally. Model extraction attacks are a threat to the Model-as-a-Service business model that is becoming a ubiquitous choice for ML offerings. Unfortunately, existing defense mechanisms are not sufficient, and it is likely that model extraction will always be a threat [5].

So far, model extraction attacks have been limited to a) simple models for various applications or b) complex DNN classification models, in both the image and text domains. In this work, we are going to explore techniques for stealing generative models. Generative models differ from regression and classification models in that the input-output interaction between the user and the model is vastly different (e.g. style transfer).
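For reference, a minimal sketch of the classification-style extraction loop that this topic would generalise; victim_api and surrogate are hypothetical names, and a real attack must also choose its queries carefully:

    import torch
    import torch.nn.functional as F

    def extract(victim_api, surrogate, queries, epochs=10, lr=1e-3):
        # Query the victim once, then train a local surrogate to match
        # its soft predictions on the same inputs.
        with torch.no_grad():
            victim_probs = victim_api(queries)
        opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            log_probs = F.log_softmax(surrogate(queries), dim=1)
            loss = F.kl_div(log_probs, victim_probs, reduction="batchmean")
            loss.backward()
            opt.step()
        return surrogate

For generative models there is no per-query label to imitate in this way, which is exactly what makes the topic open.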


Note: 1) There's going to be a programming pre-assignment. 2) Literature review can be done as a special assignment.

Requirements

  • MSc students in security, computer science, machine learning
  • Familiarity with both standard ML techniques and deep neural networks
  • Good math and algorithmic skills
  • Strong programming skills in Python/Java/Scala/C/C++/Javascript (Python preferred as de facto language)

Nice to have

  • industry experience in software engineering or related
  • research experience
  • familiarity with adversarial machine learning

Some references:
[1]: Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. 2017. Practical black-box attacks against machine learning. In ACM Symposium on Information, Computer and Communications Security. ACM, 506–519.

[2]: Florian Tramèr, Fan Zhang, Ari Juels, Michael K Reiter, and Thomas Ristenpart. 2016. Stealing machine learning models via prediction apis. In 25th USENIX Security Symposium. 601–618.

[3]: Mika Juuti, Sebastian Szyller, Samuel Marchal, and N. Asokan. 2019. PRADA: Protecting against DNN Model Stealing Attacks. In IEEE European Symposium on Security & Privacy. IEEE, 1–16.

[4]: Tribhuvanesh Orekondy, Bernt Schiele, and Mario Fritz. 2019. Knockoff Nets: Stealing Functionality of Black-Box Models. In CVPR. 4954–4963.

[5]: Buse Atli, Sebastian Szyller, Mika Juuti, Samuel Marchal and N. Asokan 2019. Extraction of Complex DNN Models: Real Threat or Boogeyman? To appear in AAAI-20 Workshop on Engineering Dependable and Secure Machine Learning Systems

For further information: Sebastian Szyller (sebastian.szyller@aalto.fi) and Prof. N. Asokan.

Ineffectiveness of Non-label-flipping Defenses and Watermarking Schemes Against Model Extraction Attacks ML & SEC

In recent years, the field of black-box model extraction has been growing in popularity [1-5]. In a black-box model extraction attack, the adversary sends queries to a victim model sitting behind an API and obtains its predictions. It then uses the queries together with the predictions to reconstruct the model locally. Model extraction attacks are a threat to the Model-as-a-Service business model that is becoming a ubiquitous choice for ML offerings. In this work, we will examine why defenses and watermarking schemes that do not flip the victim's prediction labels are ineffective against such attacks.


Note: 1) There's going to be a programming pre-assignment. 2) Literature review can be done as a special assignment.

Requirements

  • MSc students in security, computer science, machine learning
  • Familiarity with both standard ML techniques and deep neural networks
  • Good math and algorithmic skills
  • Strong programming skills in Python/Java/Scala/C/C++/Javascript (Python preferred as de facto language)

Nice to have

  • industry experience in software engineering or related
  • research experience
  • familiarity with adversarial machine learning

Some references:
[1]: Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. 2017. Practical black-box attacks against machine learning. In ACM Symposium on Information, Computer and Communications Security. ACM, 506–519.

[2]: Florian Tramèr, Fan Zhang, Ari Juels, Michael K Reiter, and Thomas Ristenpart. 2016. Stealing machine learning models via prediction apis. In 25th USENIX Security Symposium. 601–618.

[3]: Mika Juuti, Sebastian Szyller, Samuel Marchal, and N. Asokan. 2019. PRADA: Protecting against DNN Model Stealing Attacks. In IEEE European Symposium on Security & Privacy. IEEE, 1–16.

[4]: Tribhuvanesh Orekondy, Bernt Schiele, and Mario Fritz. 2019. Knockoff Nets: Stealing Functionality of Black-Box Models. In CVPR. 4954–4963.

[5]: Buse Atli, Sebastian Szyller, Mika Juuti, Samuel Marchal and N. Asokan 2019. Extraction of Complex DNN Models: Real Threat or Boogeyman? To appear in AAAI-20 Workshop on Engineering Dependable and Secure Machine Learning Systems

For further information: Sebastian Szyller (sebastian.szyller@aalto.fi) and Prof. N. Asokan.

Real-Time Universal Adversarial Attacks on Deep Policies  ML & SEC

It is well known that machine learning models are vulnerable to inputs constructed by adversaries to force misclassification. Adversarial examples [1] have been extensively studied in image classification models using deep neural networks (DNNs). Recent work [2] shows that reinforcement learning agents using deep reinforcement learning (DRL) algorithms are also susceptible to adversarial examples, since DRL uses neural networks to train a policy in the decision-making process [3]. Existing adversarial attacks [4-8] corrupt the agent's observations by perturbing the sensor readings, so that the resulting observation differs from the true environment the agent interacts with.

Current adversarial attacks against DRL agents [4-8] are generated per observation, which is infeasible in practice due to the dynamic nature of RL: creating observation-specific adversarial samples is usually slower than the agent's sampling rate, introducing a delay between old and new observations. The goal of this work is to design an approach for creating universal, additive adversarial perturbations that require almost no computation at attack time, as sketched below. This work will focus on different pre-computed attack strategies that are capable of changing the input before it is observed by the RL agent. We will analyze the effect of universal adversarial perturbations on Atari games, continuous-control simulators, and self-driving-car simulators trained with different DRL algorithms.
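To make the idea concrete, here is a minimal sketch of optimising a single universal additive perturbation over a batch of pre-recorded observations; the policy interface and hyperparameters are illustrative assumptions:

    import torch
    import torch.nn.functional as F

    def universal_perturbation(policy, observations, eps=0.01, steps=100, lr=1e-2):
        # Pre-compute one perturbation that pushes the policy away from
        # its clean action choices; at attack time it is simply added to
        # every incoming frame, so the online cost is negligible.
        with torch.no_grad():
            clean_actions = policy(observations).argmax(dim=1)
        delta = torch.zeros_like(observations[0], requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            logits = policy(observations + delta)
            loss = -F.cross_entropy(logits, clean_actions)  # gradient ascent
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)   # keep the perturbation small
        return delta.detach()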

Required skills:

  • MSc students in security, computer science, machine learning, robotics or automated systems

  • Basic understanding of both standard ML techniques and deep neural networks (you should have taken at least the Machine Learning: Basic Principles course from the SCI department, or a similar course)

  • Good knowledge of mathematical methods and algorithmic skills

  • Strong programming skills in Python/Java/Scala/C/C++/Javascript (Python preferred as de facto language)

Nice to have:

  • Familiarity with one of the topics: adversarial machine learning, reinforcement learning, statistics, control theory, artificial intelligence

  • Familiarity with deep learning frameworks (PyTorch, Tensorflow, Keras etc.)

  • Sufficient skills to work and interact in English

  • An interest in doing research, and in Atari-style arcade games

Some references for reading:

[1] Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. 2014. "Explaining and harnessing adversarial examples." 

[2] Huang, Sandy, et al. 2017. "Adversarial attacks on neural network policies."

[3] Mnih, Volodymyr, et al.  "Human-level control through deep reinforcement learning." Nature 518.7540 (2015): 529.

[4] Lin, Yen-Chen, et al. 2017. "Tactics of adversarial attack on deep reinforcement learning agents." IJCAI 2017.

[5] Behzadan, Vahid, and Arslan Munir. 2017. "Vulnerability of deep reinforcement learning to policy induction attacks." Springer, Cham.

[6] Kos, Jernej, and Dawn Song. 2017.  "Delving into adversarial attacks on deep policies." 

[7] Tretschk, Edgar, Seong Joon Oh, and Mario Fritz. 2018. "Sequential attacks on agents for long-term adversarial goals." 

[8] Xiao, Chaowei, et al. 2019. "Characterizing Attacks on Deep Reinforcement Learning." 

For further information:  Please contact Buse G. A. Tekgul  (buse.atli@aalto.fi), and prof. N. Asokan.

Guarantees of Differential Privacy in Overparameterised Models ML & SEC

Differential privacy allows us to use data without (statistically) revealing information about individuals. It is a parameterised approach that lets us control the amount of privacy our system should provide, with statistical (epsilon, delta)-bounded guarantees. In machine learning this means training models that cannot be linked to particular observations, modifying the learning algorithm, or modifying the input data itself. However, deep neural networks leak information about their training data (membership and property inference) because they are overparameterised and both the input data and the model's latent representations are high-dimensional.

In this work, we are going to analyse the shortcomings of existing methods for incorporating differential privacy into machine learning models, and work on creating a novel technique.
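As background, a minimal sketch of the per-example clipping and noising step at the core of DP-SGD [5]; the loop over single examples is deliberately naive, and the interface is an assumption rather than any library's API:

    import torch

    def dp_sgd_step(model, loss_fn, batch, clip=1.0, noise_mult=1.1, lr=0.1):
        # Clip each per-example gradient to norm <= clip, sum the clipped
        # gradients, add Gaussian noise calibrated to the clipping bound,
        # then take an ordinary SGD step on the noisy average.
        params = [p for p in model.parameters() if p.requires_grad]
        summed = [torch.zeros_like(p) for p in params]
        for x, y in batch:
            model.zero_grad()
            loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
            norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in params))
            scale = (clip / (norm + 1e-12)).clamp(max=1.0)
            for s, p in zip(summed, params):
                s.add_(p.grad * scale)
        with torch.no_grad():
            for s, p in zip(summed, params):
                s.add_(torch.randn_like(s) * noise_mult * clip)
                p.add_(s, alpha=-lr / len(batch))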

Note: 1) There's going to be a programming pre-assignment. 2) Literature review can be done as a special assignment.

Requirements

  • MSc students in security, computer science, machine learning
  • Familiarity with both standard ML techniques and deep neural networks
  • Good math and algorithmic skills (stats and probability in particular)
  • Strong programming skills in Python/Java/Scala/C/C++/Javascript (Python preferred as de facto language)

Nice to have

  • industry experience in software engineering or related
  • research experience
  • familiarity with differential privacy and/or anonymization techniques

Some references

[1]: Cynthia Dwork, Aaron Roth. 2014. Algorithmic Foundations of Differential Privacy

[2]: Reza Shokri, Vitaly Shmatikov. 2015. Privacy Preserving Deep Learning. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security Pages 1310-1321

[3]: Congzheng Song, Vitaly Shmatikov. 2019. Overlearning Reveals Sensitive Attributes

[4]: Lecuyer, M., Atlidakis, V., Geambasu, R., Hsu, D., and Jana, S. 2019. Certified Robustness to Adversarial Examples with Differential Privacy. In IEEE Symposium on Security and Privacy (SP) 2019

[5]: M. Abadi, A. Chu, I. Goodfellow, H. Brendan McMahan, I. Mironov, K. Talwar, and L. Zhang. 2016. Deep Learning with Differential Privacy. ArXiv e-prints.

For further information: Sebastian Szyller (sebastian.szyller@aalto.fi) and Prof. N. Asokan.

Latent Representations in Model Extraction Attacks ML & SEC

In recent years, the field of black-box model extraction has been growing in popularity [1-5]. In a black-box model extraction attack, the adversary sends queries to a victim model sitting behind an API and obtains its predictions. It then uses the queries together with the predictions to reconstruct the model locally. Model extraction attacks are a threat to the Model-as-a-Service business model that is becoming a ubiquitous choice for ML offerings. Unfortunately, existing defense mechanisms are not sufficient, and it is likely that model extraction will always be a threat [5]. However, despite the threat, the research community does not fully understand why model extraction works, or what its current shortcomings and limitations are.

In this work, we are going to examine the quality of the latent representations learned during model extraction attacks and study the relationship between the victim and stolen models. We will investigate the impact of robustness-increasing techniques (e.g. adversarial training) on the effectiveness of model extraction and, finally, formalise the field of model extraction attacks through the lens of transfer learning.
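One simple starting point for quantifying the victim-surrogate relationship is prediction agreement (fidelity); a minimal sketch, assuming both models are PyTorch classifiers over the same input space:

    import torch

    @torch.no_grad()
    def fidelity(victim, surrogate, loader):
        # Fraction of inputs on which the stolen model reproduces the
        # victim's top-1 prediction; 1.0 would be a perfect functional copy.
        agree, total = 0, 0
        for x, _ in loader:
            agree += (victim(x).argmax(1) == surrogate(x).argmax(1)).sum().item()
            total += x.shape[0]
        return agree / total

Contrasting such output-level agreement with representation-level similarity is one way to frame the questions above.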

Note: 1) There's going to be a programming pre-assignment. 2) Literature review can be done as a special assignment.

Requirements

  • MSc students in security, computer science, machine learning
  • Familiarity with both standard ML techniques and deep neural networks
  • Good math and algorithmic skills
  • Strong programming skills in Python/Java/Scala/C/C++/Javascript (Python preferred as de facto language)

Nice to have

  • industry experience in software engineering or related
  • research experience
  • familiarity with adversarial machine learning

Some references:
[1]: Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. 2017. Practical black-box attacks against machine learning. In ACM Symposium on Information, Computer and Communications Security. ACM, 506–519.

[2]: Florian Tramèr, Fan Zhang, Ari Juels, Michael K Reiter, and Thomas Ristenpart. 2016. Stealing machine learning models via prediction apis. In 25th USENIX Security Symposium. 601–618.

[3]: Mika Juuti, Sebastian Szyller, Samuel Marchal, and N. Asokan. 2019. PRADA: Protecting against DNN Model Stealing Attacks. In IEEE European Symposium on Security & Privacy. IEEE, 1–16.

[4]: Tribhuvanesh Orekondy, Bernt Schiele, and Mario Fritz. 2019. Knockoff Nets: Stealing Functionality of Black-Box Models. In CVPR. 4954–4963.

[5]: Buse Atli, Sebastian Szyller, Mika Juuti, Samuel Marchal and N. Asokan 2019. Extraction of Complex DNN Models: Real Threat or Boogeyman? To appear in AAAI-20 Workshop on Engineering Dependable and Secure Machine Learning Systems

[6]: Tero Karras, Samuli Laine, Timo Aila. A Style-Based Generator Architecture for Generative Adversarial Networks, in CVPR 2019

For further information: Sebastian Szyller (sebastian.szyller@aalto.fi) and Prof. N. Asokan.

ROP-gadget finder for PA-protected binaries  PLATSEC

Return-oriented programming (ROP) [1,2] is an exploitation technique that enables run-time attacks to execute code in a vulnerable process in the presence of security defenses such as W⊕X memory protection policies (e.g. an NX bit). In ROP, the adversary exploits a memory vulnerability to manipulate return addresses stored on the stack, thereby altering the program's backward-edge control flow. ROP allows Turing-complete attacks by chaining together multiple gadgets, i.e., adversary-chosen sequences of pre-existing program instructions ending in a return instruction that together perform the desired operations. Identifying ROP-gadget chains useful in attacks can be automated using gadget-finding tools such as ROPGadget [3] or ROPgenerator [4].

More advanced defenses that are effective against ROP and other control-flow hijacking attacks are becoming commonplace, and modern processor architectures even deploy hardware-assisted defenses designed to thwart ROP attacks. An example is the recently added support for pointer authentication (PA) in the ARMv8-A processor architecture [5], commonly used in devices like smartphones. PA is a low-cost technique for authenticating pointers so as to resist memory vulnerabilities. It has been shown to enable practical protection against memory vulnerabilities that corrupt return addresses or function pointers. However, current PA-based defenses are vulnerable to reuse attacks, in which the adversary reuses previously observed valid protected pointers. Current implementations of PA-based return address protection in GCC and LLVM mitigate reuse attacks, but cannot completely prevent them [6].

The objective of this topic is to design and implement a ROP-gadget finder that takes into account PA-based defenses such as GCC's and LLVM's -msign-return-address (GCC < 9.0) [7] / -mbranch-protection=pac-ret[+leaf] (GCC 9.0 and newer) [8]. These defenses cryptographically bind the return addresses stored on the stack to the stack pointer value at the time the address is pushed onto the stack. To exploit PA-protected return addresses in a ROP chain, the adversary must obtain signed return addresses that correspond to the value of the stack pointer at the time the ROP gadget executes its return instruction using the reused protected address.
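As a starting point, a minimal sketch of gadget discovery with the Capstone disassembler [9]; it assumes a raw AArch64 .text blob and a Capstone build that recognises the ARMv8.3-A authenticated return instructions, and all names and parameters are illustrative:

    from capstone import Cs, CS_ARCH_ARM64, CS_MODE_ARM

    md = Cs(CS_ARCH_ARM64, CS_MODE_ARM)

    def find_gadgets(code, base, max_len=5):
        # AArch64 instructions are a fixed 4 bytes, so slide a 4-byte-
        # aligned window over the section and keep short instruction
        # sequences ending in a return. A PA-aware finder must treat
        # 'retaa'/'retab' differently from plain 'ret': they only work
        # with a return address validly signed for the current SP.
        gadgets = []
        for off in range(0, len(code) - 4, 4):
            insns = list(md.disasm(code[off:off + 4 * max_len], base + off))
            for i, ins in enumerate(insns):
                if ins.mnemonic in ("ret", "retaa", "retab"):
                    gadgets.append((base + off, insns[:i + 1]))
                    break
        return gadgets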

NOTE: Part of this topic will be performed as a special assignment, which is a prerequisite for an eventual thesis topic.

Required skills:

  • Basic understanding of control-flow hijacking and ROP attacks
  • Basic understanding of the C runtime environment, assembler programming, and debugging tools (GDB).
  • Strong programming skills with one or more of the following programming languages: C/C++, Python, Ruby, Rust, Go, Java, Perl (C and/or Python preferred)

Nice to have:

  • Prior experience with ARMv8-A assembler programming (AArch64 instruction set).
  • Prior experience with the Capstone disassembly framework [9].
  • Basic understanding of ARM Pointer Authentication.

References:

[1]: Shacham. The geometry of innocent flesh on the bone: return-into-libc without function calls (on the x86). 
      In Proceedings of the 14th ACM conference on Computer and communications security (CCS '07). ACM, New York, NY, USA, 552-561. 2007.
      DOI: https://doi.org/10.1145/1315245.1315313
[2]: Kornau. Return Oriented Programming for the ARM Architecture. MSc thesis. Ruhr-Universität Bochum. 2009.
[3]: https://github.com/JonathanSalwan/ROPgadget
[4]: https://github.com/Boyan-MILANOV/ropgenerator
[5]: Qualcomm. Pointer Authentication on ARMv8.3. Whitepaper. 2017.
[6]: Liljestrand et al. PAC it up: Towards pointer integrity using ARM pointer authentication.
      In 28th USENIX Security Symposium, USENIX Security 2019, Santa Clara, CA, USA, August 14-16, pages 177–194, 2019
[7]: Using the GNU Compiler Collection (GCC 7.10): 3.18.1 AArch64 Options. [Retrieved 2019-09-10]
[8]: Using the GNU Compiler Collection (GCC 9.10): 3.18.1 AArch64 Options. [Retrieved 2019-09-10]
[9]: https://www.capstone-engine.org/

For further information: Please contact Thomas Nyman (thomas.nyman@aalto.fi), Hans Liljestrand (hans.liljestrand@aalto.fi) and prof. N. Asokan.


Byzantine fault tolerance with rich fault models  OTHER

Byzantine fault tolerance (BFT) has seen a resurgence due to the popularity of permissioned blockchains. Unlike Bitcoin-style proof-of-work consensus, BFT provides immediate confirmation of requests/transactions. Existing BFT protocols can generally tolerate up to a third of participants being faulty; Bitcoin, by comparison, can tolerate attackers controlling up to a third of the hash rate.

We are working to build a BFT system that can tolerate a richer variety of failure modes, for example:

  • Up to f nodes are malicious (this is the classical BFT)
  • Nodes controlling at most a given fraction of CPU power are malicious (this is like Bitcoin)
  • All nodes running software X are malicious (e.g. a zero-day vulnerability is found in some piece of software)
  • All nodes owned by company X become malicious (e.g. someone steals administrator credentials)
  • ...Several of the above with different thresholds...

In this project, you will develop BFT protocols using our C++-based consensus platform that can tolerate more "real-world" fault types like these, and gain experience in the development of distributed systems.
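To make the richer fault models concrete, here is a toy Python sketch (the actual platform is C++) that checks hypothetical fault events against the classical n >= 3f + 1 bound; all node tags and event names are illustrative:

    from itertools import combinations

    nodes = [
        {"id": 0, "software": "X", "owner": "A"},
        {"id": 1, "software": "Y", "owner": "A"},
        {"id": 2, "software": "X", "owner": "B"},
        {"id": 3, "software": "Y", "owner": "C"},
    ]

    def faulty_under(event, nodes):
        # Nodes that turn malicious if the fault event occurs, e.g. a
        # zero-day in software X or stolen credentials of owner A.
        key, value = event
        return {n["id"] for n in nodes if n[key] == value}

    def survives(events, nodes, max_events=1):
        # Check n >= 3f + 1 against every combination of at most
        # max_events simultaneous fault events.
        for r in range(1, max_events + 1):
            for combo in combinations(events, r):
                faulty = set().union(*(faulty_under(e, nodes) for e in combo))
                if len(nodes) < 3 * len(faulty) + 1:
                    return False, combo
        return True, None

    events = [("software", "X"), ("software", "Y"), ("owner", "A")]
    print(survives(events, nodes))  # a zero-day in X alone breaks this 4-node system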

Requirements:

  • Basic knowledge of C++

Nice to have:

  • Experience with network programming
  • Theoretical distributed systems knowledge

References:

1: M Castro, B Liskov, "Practical Byzantine fault tolerance", Proceedings of OSDI'99.
2: D Malkhi, M Reiter, "Byzantine quorum systems", Proceedings of STOC'97.

For further information: Please contact Lachlan Gunn (lachlan.gunn@aalto.fi) and Prof. N. Asokan.


Tor hidden service geolocation (special assignment only)  OTHER

Tor is the most well-known and most-used anonymity system, based on onion routing: data is relayed through several nodes with multiple layers of encryption. Each node strips a layer of encryption and routes the message to the next node in the chain until it reaches its destination.

Most users use Tor to provide client anonymity, but it can also provide server anonymity using a feature known as hidden services. This allows anyone to connect to a server using an onion address like abcdef123456789.onion. The address gives no information on the location of the server, but the time that it takes to communicate with it does.

In this project, you will build tools to measure the round-trip-times to hidden services, as well as within the Tor network. You will then build a statistical model of transit times through the network, which you will then use to estimate and visualise hidden service locations.
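For a first experiment, a rough application-layer RTT probe through a local Tor client's SOCKS proxy; a real measurement tool would time circuit-level traffic rather than whole HTTP requests:

    import time
    import requests  # assumes the requests and PySocks packages

    # Route traffic through a local Tor client (default SOCKS5 port 9050).
    proxies = {
        "http": "socks5h://127.0.0.1:9050",
        "https": "socks5h://127.0.0.1:9050",
    }

    def onion_rtt(url, samples=5):
        # Take several measurements and keep the minimum, which filters
        # out transient queuing delay along the circuit.
        times = []
        for _ in range(samples):
            t0 = time.monotonic()
            requests.get(url, proxies=proxies, timeout=60)
            times.append(time.monotonic() - t0)
        return min(times)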

Requirements: Basic knowledge of probability and statistics.

Nice to have:

  • C programming skills.

  • Familiarity with some cloud computing platform.

  • Familiarity with network programming.

Resources:

[1]: Lachlan J. Gunn, Heiki Pikker, Olaf Maennel, Andrew Allison, Derek Abbott (2017). "Geolocation of Tor hidden services: Initial results". 3rd Interdisciplinary Cyber Research Workshop, Tallinn, Estonia, pp. 67–69.
[2]: Nicholas Hopper, Eugene Y. Vasserman, and Eric Chan-Tin (2010). "How Much Anonymity does Network Latency Leak?". ACM Transactions on Information and System Security, 13(2), pp. 13:1–13:28.
[3]: Frank Cangialosi, Dave Levin, Neil Spring (2015). "Ting: Measuring and Exploiting Latencies Between All Tor Nodes". Proceedings of the 2015 Internet Measurement Conference, pp. 289–302.

For further information: Please contact Lachlan Gunn (lachlan.gunn@aalto.fi) and Prof. N. Asokan.


Research Topics with our Industry Partners


Huawei Internship/Thesis position: IoT secure lifecycle management   NETSEC      


The intern at the Mobile Security Laboratory, Huawei, Finland will participate in a research project whose target is to secure the lifecycle management (manufacturing, setup, pairing, use, and revocation) of IoT devices in the context of a home IoT network. In recent years, the number of IoT-capable home appliances has grown tremendously. Most of these appliances pair with smartphones, which play a central role as a command-and-control center for them. Typically, the pairing protocols, along with the command-and-control protocols between an IoT device and a smartphone, are proprietary. As a result, users are required to install dedicated apps to access and control each of these appliances. The project shall look into available specifications, like EAP-NOOB and LwM2M, to achieve easy-to-use, standards-based protocols and mechanisms for (i) setting up an IoT device in a user's home network; (ii) operating the device from authorized (controller) devices inside the home network as well as over the internet; and (iii) managing the devices.

In this project, the student will participate in the study of available industry standards, design protocols and communication stacks, and document them. Furthermore, a proof-of-concept will be implemented over a selected set of devices.

This internship can constitute a Master's thesis work or a PhD internship. Therefore, we especially look for students who have completed all of their MSc courses and are searching for an MSc thesis topic, or students with even further experience in networking design and IoT. The internship can commence on 1 September 2019 at the earliest.

We are looking for:

  • A person who has completed most of his/her M.Sc. courses (CS/E.Eng).

  • IoT / peer-to-peer networking experience

  • Protocol design or evaluation experience

  • Coding experience with some programming language, preferably C or Java

  • Sufficient skills to work and interact in English

  • Good teamwork skills

The following count as advantages:

  • Background (courses) in platform security, cryptography or equivalent

  • An interest in doing research and exploring new challenges.

The Mobile Security Laboratory in Huawei, Helsinki drives renewal and mastery in the field of platform- and device-related security technologies, primarily for mobile devices. Our topical expertise is in hardware-assisted isolation and system protection (hypervisor, TEE, Linux kernel hardening) as well as functions like device key management, access control, device attestation and integrity protection. We also conduct research in topics related to run-time / memory protection of compiled code, and in the distributed aspects of security in consumer IoT: platform security for sensors and actuators, as well as security protocols and mechanisms for context setup and lifecycle management in home IoT.

Contact persons: sandeep.tamrakar@huawei.com, philip.ginzboorg@huawei.com




Huawei: Pointer Authentication in the Linux kernel PLATSEC 

This M.Sc. topic is in collaboration with the Mobile Security Laboratory, Huawei, Finland and is part of a larger research project whose target is to use the recent ARMv8-A Pointer Authentication (PA) additions for memory safety. In this work, our aim is to prevent attacks that violate the integrity of in-kernel memory pointers. PA allows embedding Pointer Authentication Codes (PACs) into pointers, thereby providing a method to detect corrupted pointers and prevent their use. We will leverage current work which already implements a research prototype that uses PA, but is limited to user-space processes [1]. This MSc thesis work will focus on applying this protection to kernel space and adapting the kernel to accommodate PA.

The work will give the candidate deep insight into modifying the mobile kernel boot process and module loading. Existing PA kernel patches must also be modified to support PA management in the hypervisor or secure monitor. Finally, our existing research prototype will likely require modifications due to the differences in how an OS kernel and user-space applications are structured and run. We are looking for a candidate with an interest in embedded platform security, with C programming skills and some familiarity with the kernel and/or LLVM.

You will work with our in-house experts as well as with collaborating industrial and academic partners on the subject, but the topic is selected to be an M.Sc. thesis, i.e. we will adapt the scope and timeframe to suit a thesis work, and possibly an academic publication.

Requirements:

- An M.Sc. student close to finishing (CS/E.Eng).
- System / embedded coding experience in C/C++
- Sufficient skills to work and interact in English
- Good team-working skills

Advantage:

- Background (courses) in systems programming, platform security, or equivalent 
- An interest in doing research and exploring new challenges.

References:

[1]: Liljestrand et al. "PAC it up: Towards Pointer Integrity using ARM Pointer Authentication", arXiv, 2018. https://arxiv.org/abs/1811.09189

For further information: Please contact Hans Liljestrand (hans.liljestrand@aalto.fi) and Jan-Erik Ekberg (jan.erik.ekberg@huawei.com)

The Mobile Security Laboratory in Huawei, Helsinki drives renewal and mastery in the field of platform / device related security technologies for the mobile device. Our topical expertise is in hardware-assisted isolation and system protection (hypervisor, TEE, kernel hardening) as well as functions like device key management, attestation and integrity.


Huawei: Application Memory-Space Isolation for Execution  PLATSEC 

In this internship with the Mobile Security Laboratory, Huawei, Finland you will participate in a research project whose target is to prototype isolation contexts within an application's memory space on mobile phones. This is already available on servers using Intel SGX, but the ARM architecture provides a set of features (MMU management, execute-only memory) and company-internal solutions (where the ARM EL2 hypervisor mode is used to enforce memory protection in the kernel and applications) that we believe can be combined to achieve a similar, in-place isolation model inside the application for security-critical functionality.

You will work with Huawei in-house experts as well as with collaborating industrial and academic partners to explore these ideas, with the goal of building a proof-of-concept of the architecture. This will include a PoC on a mobile phone, but may also touch on the development tools needed to achieve the code separation required for running the solution.

This work topic will involve exploring processors and the code-execution fabric at a high level of detail. Therefore we expect from the applicants a basic level of familiarity with the operation of a modern CPU (e.g. memory management, interrupt handling, privilege levels), as well as how the firmware and OS operate in their support for the application ecosystem.

We believe that this internship, although formulated as an M.Sc. thesis work, can produce results that, if properly refined, could later be published and be counted towards graduate studies.

Requirements:

- A candidate with most courses towards his/her M.Sc. completed (CS/E.Eng).
- System / embedded coding experience in C, ARM/X86 assembler
- Experience in bare-metal programming / processor design or equivalent
- Sufficient skills to work and interact in English
- Good teamwork skills

The following count as advantages:

- Background (courses) in platform security, cryptography or equivalent
- An interest in doing research and exploring new challenges.

For further information: Please contact Jan-Erik Ekberg (jan.erik.ekberg@huawei.com) and Hans Liljestrand (hans.liljestrand@aalto.fi)

The Mobile Security Laboratory in Huawei, Helsinki drives renewal and mastery in the field of platform / device related security technologies for the mobile device. Our topical expertise is in hardware-assisted isolation and system protection (hypervisor, TEE, kernel hardening) as well as functions like device key management, attestation and integrity.



Huawei: Hardware-assisted application attestation and authentication in Android  PLATSEC 

This M.Sc. topic in the Huawei Mobile Security Laboratory Finland consists of participation in a research project whose target is to apply DICE (Device Identifier Composition Engine) to a mobile phone use case. DICE is an emerging standard for "a family of hardware and software techniques for hardware-based cryptographic device identity, attestation, and data encryption", especially targeted at IoT but also applicable to mobile phone ecosystems. This topic focuses on leveraging DICE in an application attestation scenario on a mobile phone, where we aim to build a proof-of-concept in which a networked service can attest its phone-application counterpart in a straightforward, easy manner.

The thesis subject touches on mobile phone (Android) system security, including the Android framework and Linux kernel, but also includes a service (networked) aspect. We are looking for a candidate with an interest in embedded / phone platform security, with both programming skills and a background in security / cryptography.

You will work with our in-house experts as well as with collaborating industrial and academic partners on the subject, but the topic is selected to be an M.Sc. thesis, i.e. we will adapt the scope and timeframe to suit a thesis work, and possibly an academic publication.

Requirements:

- An M.Sc. student close to finishing (CS/E.Eng).
- System / embedded coding experience in C, Java (Android)
- Sufficient skills to work and interact in English
- Good team-working skills

Advantage:

- Background (courses) in platform security, cryptography or equivalent
- An interest in doing research and exploring new challenges.

For further information: Please contact Jan-Erik Ekberg (jan.erik.ekberg@huawei.com) and Hans Liljestrand (hans.liljestrand@aalto.fi)

The Mobile Security Laboratory in Huawei, Helsinki drives renewal and mastery in the field of platform / device related security technologies for the mobile device. Our topical expertise is in hardware-assisted isolation and system protection (hypervisor, TEE, kernel hardening) as well as functions like device key management, attestation and integrity.



Huawei: Write-once memory subsystem for microcontrollers  PLATSEC 

This thesis work with the Huawei Mobile Security Laboratory extends an ongoing research project on write protection in Linux and mobile-device kernels to microcontrollers, and you will work with our in-house experts on this topic. The fundamental issue we are solving is that the (data) write protection provided by e.g. MPUs and MMUs is reversible, and can be undone in the case of a kernel attack. This is a likely avenue for the next generation of kernel attacks, now that many other avenues of attack have been hindered by recent advances in memory protection. The use case is to ascertain that we can keep IoT devices, such as sensors, functional and operating within parameters throughout their operational lifecycle.

This M.Sc. work involves making a proof of concept for a write-once memory subsystem in microcontrollers. Depending on the targeted hardware platforms, it consists of a minimal hardware block change that implements the write filtering of memory for a suitable microcontroller, and implementing the software framework (i.e., the write-once memory allocator) for the selected operating system.

Requirements:

- An M.Sc. student close to finishing (CS/E.Eng)
- System / embedded kernel coding experience in C/C++
- Sufficient skills to work and interact in English
- Good team-working skills

Advantage:

- Background (courses) in systems programming, platform security, or equivalent
- Linux kernel coding experience, course-work on RTOS, or equivalent
- Experience or interest in FPGA work

For further information: Please contact Jan-Erik Ekberg (jan.erik.ekberg@huawei.com) and Hans Liljestrand (hans.liljestrand@aalto.fi)

The Mobile Security Laboratory in Huawei, Helsinki drives renewal and mastery in the field of platform / device related security technologies for the mobile device. Our topical expertise is in hardware-assisted isolation and system protection (hypervisor, TEE, kernel hardening) as well as functions like device key management, attestation and integrity.



Huawei: First reference of the GP TPS security standard  PLATSEC 

In this internship with the Mobile Security Laboratory, Huawei, Finland you will participate in a research project whose target is to prototype a new security API (under standardization) for mobile phone and IoT use. GlobalPlatform (GP) has provided multiple specifications about Secure Components (SCs), such as Trusted Execution Environments (TEEs) and Secure Elements (SEs). Other types of non-GP SCs also exist, such as TPMs and HSMs. Adoption of these technologies, especially in consumer devices, has been surprisingly slow. One of the biggest reasons is that application developers do not have consistent ways to use SCs. One exception is Google's Android KeyStore, which provides a wide range of SC functionality to REE (Android) developers via the Java Cryptography Architecture (JCA), but unfortunately only for Android phones. As a consequence, the TPS Committee in GP is developing a new keystore solution, with the emphasis that the new keystore will be

1) based on an industry standard, not on a specification from a single vendor
2) lightweight and therefore also applicable to IoT endpoint devices

The Mobile Security Laboratory in Huawei is one of the driving forces in this new standardization activity, and we propose a thesis work with the intent of making a world-first prototype implementation of the TPS specification on a mobile phone. As the master's thesis writer, you will work in a small team with local experts to design and implement the TPS KeyStore API. The implementation work consists of one or several of the following individual tasks:

1) Implementing the Android Java TPS API binding, supporting all necessary algorithms and cryptographic modes, with a design that is extendable and maintainable over time.
2) Developing a system translation library that converts cryptographic Java calls into native (serialized) TPS KeyStore calls for different SCs. This library is linked to the application code.
3) Implementing a TPS KeyStore Trusted Application inside the Huawei TEE.

Requirements:

- Almost completed coursework for an M.Sc. (CS / E.Eng).
- System / embedded coding experience in C; an Android programming background counts as a plus.
- Sufficient skills to work and interact in English
- Good teamwork skills

Advantages:

- Background (courses) in platform security, cryptography or equivalent
- Experience with smart cards or trusted execution environments 
- An interest in doing research and exploring new challenges.

For further information: Please contact Jan-Erik Ekberg (jan.erik.ekberg@huawei.com) and Hans Liljestrand (hans.liljestrand@aalto.fi)

The Mobile Security Laboratory in Huawei, Helsinki drives renewal and mastery in the field of platform / device related security technologies for the mobile device. Our topical expertise is in hardware-assisted isolation and system protection (hypervisor, TEE, kernel hardening) as well as functions like device key management, attestation and integrity.




Huawei: M.Sc. thesis: ARM TrustZone-M support in RTOS  PLATSEC 


The intern at the Mobile Security Laboratory, Huawei, Finland will participate in a research project whose target is to secure Internet of Things (IoT) devices. One of the leading technologies in IoT security is Arm TrustZone for Cortex-M [1]. Arm TrustZone divides the microcontroller into two modes, according to memory address, called the Secure (Trusted) and Non-secure (Non-trusted) worlds. The Secure world isolates its resources, such as memories and peripherals, to protect the code and data loaded inside it from the Non-secure world. Code running in the Secure world can access both secure and non-secure memories and peripherals, whereas code running in the Non-secure world can access only non-secure memory and peripherals.

In this project, the student shall use Huawei LiteOS [2] as the real-time operating system (RTOS) for IoT devices. Huawei LiteOS is an open-source lightweight OS designed for IoT devices with Arm Cortex-M microcontrollers. At present, LiteOS does not include support for TrustZone for Cortex-M technology. Therefore, the primary task of this project is to enhance the security of LiteOS by adding TrustZone support, for which the student will be involved in:

  • Exploring Arm TrustZone for Cortex-M technology at a high level of detail. We therefore expect from the applicants a basic familiarity with the operation of modern microcontrollers.
  • Designing and implementation of
    • Boot sequence that initializes both the Secure and Non-secure world for LiteOS on Arm TrustZone based Cortex-M device.
    • An application that runs within a Secure world.
    • APIs for accessing Secure world functionality from a Non-secure world application.
  • Demonstrating how a Non-secure world application can call Secure world functionality.

This internship can constitute a Master’s thesis work. Therefore, we especially look for students who have completed the bulk of their MSc courses and are searching for an MSc thesis topic.

Requirements:

  • A person who has completed most of his/her M.Sc. courses (CS/E.Eng).

  • Coding experience for embedded systems in C, ARM assembler

  • Experience in bare-metal programming or equivalent

  • Sufficient skills to work and interact in English

  • Good teamwork skills

Advantage:

  • Background (courses) in platform security, cryptography or equivalent

  • An interest in doing research and exploring new challenges.

References:

[1]: Arm TrustZone  https://developer.arm.com/technologies/trustzone
[2]: Huawei LiteOS  https://www.huawei.com/minisite/liteos/en/

For further information: Please contact Jan-Erik Ekberg (jan.erik.ekberg@huawei.com) and Hans Liljestrand (hans.liljestrand@aalto.fi)

The Mobile Security Laboratory in Huawei, Helsinki drives renewal and mastery in the field of platform / device related security technologies for the mobile device. Our topical expertise is in hardware-assisted isolation and system protection (hypervisor, TEE, kernel hardening) as well as functions like device key management, attestation and integrity.




Reserved Research Topics
