

This page lists the research topics currently available in the Secure Systems group or with our industry partners. Each topic can be structured either as a special assignment or as an MSc thesis, depending on the student's interests, background and experience. If you are interested in a particular topic, send e-mail to the contact person listed (as well as to the responsible professor: N. Asokan, Tuomas Aura or Janne Lindqvist), explaining your background and interests.

All topics have one or more of the following keywords: 

PLATSEC Platform Security

NETSEC Network Security

ML & SEC Machine learning and Security/Privacy

USABLE Usable security

OTHER Other security research themes



Available research topics for students in the Secure Systems Group


Security engineering and usable security  USABLE

We are looking for students interested in security engineering, usable security and human-computer interaction. Examples of work done in the group can be found at the old website for the group, https://www.lindqvistlab.org/.

Supervisor: Prof. Janne Lindqvist (Department of Computer Science, Aalto University), Director Helsinki-Aalto Center for Information Security (HAIC).

Requirements: Background and interest in systems security, security engineering, data science, machine learning, modeling, human-computer interaction or social and behavioral sciences are required.

Nice to have:

References:

For further information: Please email Janne Lindqvist


Understanding video streaming user experiences USABLE

We are looking for students interested in understanding video streaming user experiences. Examples of work done in the group can be found at the old website for the group https://www.lindqvistlab.org/.

Supervisor: Prof. Janne Lindqvist (Department of Computer Science, Aalto University), Director Helsinki-Aalto Center for Information Security (HAIC).

Requirements: Background and interest in measuring user experience, modeling, human-computer interaction, computer science or social and behavioral sciences are required.

Nice to have:

References:

For further information: Please email Janne Lindqvist


Artificial intelligence and machine learning for systems security and privacy ML & SEC

We are looking for students interested in developing novel artificial intelligence and machine learning approaches to security engineering and to systems security and privacy. Examples of work done in the group can be found at the old website for the group, https://www.lindqvistlab.org/. See also these specific examples: http://jannelindqvist.com/publications/IMWUT19-fails.pdf and http://jannelindqvist.com/publications/NDSS19-robustmetrics.pdf

Supervisor: Prof. Janne Lindqvist (Department of Computer Science, Aalto University), Director Helsinki-Aalto Center for Information Security (HAIC).

Requirements: Background and interest in data science, machine learning, statistics and computational approaches to computer science are required.

Nice to have:

References:

For further information: Please email Janne Lindqvist


Multitasking and productivity tools USABLE

We are looking for students interested in understanding productivity tools and multitasking. Examples of work done in the group can be found at the old website for the group https://www.lindqvistlab.org/.

Supervisor: Prof. Janne Lindqvist (Department of Computer Science, Aalto University), Director Helsinki-Aalto Center for Information Security (HAIC).

Requirements: Background and interest in measuring user experience, modeling, human-computer interaction, computer science or social and behavioral sciences are required.

Nice to have:

References:

For further information: Please email Janne Lindqvist


Mixed methods HCI and security research OTHER

We are looking for students interested in pushing the envelope in mixed methods HCI and security research. Examples of work done in the group can be found at the old website for the group https://www.lindqvistlab.org/.

Supervisor: Prof. Janne Lindqvist (Department of Computer Science, Aalto University), Director Helsinki-Aalto Center for Information Security (HAIC).

Requirements: Background and interest in either qualitative or quantitative methods, willingness to learn new methods, and interest in user experience, modeling, human-computer interaction, computer science or social and behavioral sciences are required.

Nice to have:

References:

For further information: Please email Janne Lindqvist


Latent Representations in Model Extraction Attacks ML & SEC

In recent years, the field of black-box model extraction has been growing in popularity [1-5]. In a black-box model extraction attack, the adversary sends queries to the victim model sitting behind an API and obtains its predictions. It then uses the queries together with the predictions to reconstruct the model locally. Model extraction attacks are a threat to the Model-as-a-Service business model that is becoming the ubiquitous choice for ML offerings. Unfortunately, existing defense mechanisms are not sufficient, and it is likely that model extraction will always be a threat [5]. Despite the threat, however, the research community does not fully understand why model extraction works and what its current shortcomings and limitations are.

In this work, we are going to explore the quality of the latent representations learned during model extraction attacks and study the relationship between the victim and stolen models. We will investigate the impact of robustness-increasing techniques (e.g. adversarial training) on the effectiveness of model extraction and, finally, formalise the field of model extraction attacks through the lens of transfer learning.
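
A minimal sketch of the attack loop described above, to make the setting concrete (the victim and surrogate architectures, the random query distribution and the query budget are illustrative placeholders, not taken from any of the referenced papers):

    # Black-box model extraction sketch: query the victim's prediction API,
    # then train a local surrogate to imitate the returned predictions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    victim_model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # placeholder victim
    surrogate = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))     # attacker's local copy
    optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

    def victim_api(x):
        # Stand-in for the prediction API; the adversary sees only its outputs.
        with torch.no_grad():
            return F.softmax(victim_model(x), dim=1)

    for step in range(1000):                                  # query budget (assumption)
        x = torch.rand(64, 1, 28, 28)                         # attacker-chosen queries
        y_victim = victim_api(x)                              # predictions from the API
        loss = F.kl_div(F.log_softmax(surrogate(x), dim=1),   # match victim outputs
                        y_victim, reduction="batchmean")
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()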


Note: 1) There's going to be a programming pre-assignment. 2) Literature review can be done as a special assignment.

Requirements

  • MSc students in security, computer science, machine learning
  • familiarity with both standard ML techniques as well as deep neural networks
  • Good math and algorithmic skills
  • Strong programming skills in Python/Java/Scala/C/C++/Javascript (Python preferred as de facto language)

Nice to have

  • industry experience in software engineering or related
  • research experience
  • familiarity with adversarial machine learning

Some references:
[1]: Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. 2017. Practical black-box attacks against machine learning. In ACM Symposium on Information, Computer and Communications Security. ACM, 506–519.

[2]: Florian Tramèr, Fan Zhang, Ari Juels, Michael K Reiter, and Thomas Ristenpart. 2016. Stealing machine learning models via prediction apis. In 25th USENIX Security Symposium. 601–618.

[3]: Mika Juuti, Sebastian Szyller, Samuel Marchal, and N. Asokan. 2019. PRADA: Protecting against DNN Model Stealing Attacks. In IEEE European Symposium on Security & Privacy. IEEE, 1–16.

[4]: Tribhuvanesh Orekondy, Bernt Schiele, and Mario Fritz. 2019. Knockoff Nets: Stealing Functionality of Black-Box Models. In CVPR. 4954–4963.

[5]: Buse Atli, Sebastian Szyller, Mika Juuti, Samuel Marchal and N. Asokan 2019. Extraction of Complex DNN Models: Real Threat or Boogeyman? To appear in AAAI-20 Workshop on Engineering Dependable and Secure Machine Learning Systems

[6]: Tero Karras, Samuli Laine, Timo Aila. A Style-Based Generator Architecture for Generative Adversarial Networks, in CVPR 2019

For further information: Sebastian Szyller (sebastian.szyller@aalto.fi) and Prof. N. Asokan.


Guarantees of Differential Privacy in Overparameterised Models ML & SEC

Differential privacy allows us to use data without (statistically) revealing information about individuals. It is a parameterised approach that lets us control the amount of privacy we would like our system to have and provides statistical guarantees ((epsilon, delta)-bounded). In machine learning this means training models that cannot be linked to particular observations, or modifying the learning algorithm or the input data itself. However, deep neural networks leak information about training data (membership and property inference) because they are overparameterised and because both the input data and the model's latent representations are high-dimensional.

In this work, we are going to analyse the shortcomings of existing methods for incorporating differential privacy into machine learning models and work on creating a novel technique.
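
As a concrete reference point, below is a minimal sketch of a DP-SGD-style update in the spirit of Abadi et al. [5]; the model, the synthetic batch, the clipping norm C and the noise multiplier sigma are illustrative assumptions only:

    # Per-example gradient clipping followed by Gaussian noise (DP-SGD idea [5]).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    model = nn.Linear(20, 2)
    C, sigma, lr = 1.0, 1.1, 0.1          # clipping norm, noise multiplier, step size
    x, y = torch.randn(32, 20), torch.randint(0, 2, (32,))

    # Accumulate per-example gradients, each clipped to L2 norm <= C.
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for xi, yi in zip(x, y):
        model.zero_grad()
        F.cross_entropy(model(xi.unsqueeze(0)), yi.unsqueeze(0)).backward()
        norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in model.parameters()))
        scale = min(1.0, (C / (norm + 1e-12)).item())
        for s, p in zip(summed, model.parameters()):
            s += p.grad * scale

    # Add noise calibrated to the clipping norm, average over the batch, and step.
    with torch.no_grad():
        for s, p in zip(summed, model.parameters()):
            p -= lr * (s + torch.normal(0.0, sigma * C, size=s.shape)) / len(x)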


Note: 1) There's going to be a programming pre-assignment. 2) Literature review can be done as a special assignment.

Requirements

  • MSc students in security, computer science, machine learning
  • familiarity with both standard ML techniques as well as deep neural networks
  • Good math and algorithmic skills (stats and probability in particular)
  • Strong programming skills in Python/Java/Scala/C/C++/Javascript (Python preferred as de facto language)

Nice to have

  • industry experience in software engineering or related
  • research experience
  • familiarity with differential privacy and/or anonymization techniques

Some references

[1]: Cynthia Dwork, Aaron Roth. 2014. Algorithmic Foundations of Differential Privacy

[2]: Reza Shokri, Vitaly Shmatikov. 2015. Privacy Preserving Deep Learning. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security Pages 1310-1321

[3]: Congzheng Song, Vitaly Shmatikov. 2019. Overlearning Reveals Sensitive Attributes

[4]: Lecuyer, M., Atlidakis, V., Geambasu, R., Hsu, D., and Jana, S. 2019. Certified Robustness to Adversarial Examples with Differential Privacy. In IEEE Symposium on Security and Privacy (SP) 2019

[5]: M. Abadi, A. Chu, I. Goodfellow, H. Brendan McMahan, I. Mironov, K. Talwar, and L. Zhang. 2016. Deep Learning with Differential Privacy. ArXiv e-prints.

For further information: Sebastian Szyller (sebastian.szyller@aalto.fi) and Prof. N. Asokan.


Ineffectiveness of Non-label-flipping Defenses and Watermarking Schemes Against Model Extraction Attacks ML & SEC

In recent years, the field of black-box model extraction has been growing in popularity [1-5]. In a black-box model extraction attack, the adversary sends queries to the victim model sitting behind an API and obtains its predictions. It then uses the queries together with the predictions to reconstruct the model locally. Model extraction attacks are a threat to the Model-as-a-Service business model that is becoming the ubiquitous choice for ML offerings.
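
To make the watermarking side of the topic concrete, the sketch below shows how ownership is typically demonstrated with a label-flipping watermark (the kind of scheme the topic title contrasts with non-label-flipping defenses): the defender records a small trigger set of queries whose returned labels were deliberately flipped and later checks a suspect model's agreement with them. The trigger set and the suspect model are hypothetical placeholders:

    # Verifying a label-flipping watermark against a suspect (possibly stolen) model.
    import torch

    def watermark_accuracy(suspect_model, trigger_set):
        # trigger_set: list of (input tensor, deliberately flipped label) pairs
        # recorded by the defender when answering API queries.
        hits, total = 0, 0
        with torch.no_grad():
            for x, flipped_label in trigger_set:
                pred = suspect_model(x.unsqueeze(0)).argmax(dim=1).item()
                hits += int(pred == flipped_label)
                total += 1
        return hits / max(total, 1)

    # Agreement with the flipped labels far above chance is evidence that the
    # suspect model was trained on the defender's API responses.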


Note: 1) There's going to be a programming pre-assignment. 2) Literature review can be done as a special assignment.

Requirements

  • MSc students in security, computer science, machine learning
  • familiarity with both standard ML techniques as well as deep neural networks
  • Good math and algorithmic skills
  • Strong programming skills in Python/Java/Scala/C/C++/Javascript (Python preferred as de facto language)

Nice to have

  • industry experience in software engineering or related
  • research experience
  • familiarity with adversarial machine learning

Some references:
[1]: Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. 2017. Practical black-box attacks against machine learning. In ACM Symposium on Information, Computer and Communications Security. ACM, 506–519.

[2]: Florian Tramèr, Fan Zhang, Ari Juels, Michael K Reiter, and Thomas Ristenpart. 2016. Stealing machine learning models via prediction apis. In 25th USENIX Security Symposium. 601–618.

[3]: Mika Juuti, Sebastian Szyller, Samuel Marchal, and N. Asokan. 2019. PRADA: Protecting against DNN Model Stealing Attacks. In IEEE European Symposium on Security & Privacy. IEEE, 1–16.

[4]: Tribhuvanesh Orekondy, Bernt Schiele, and Mario Fritz. 2019. Knockoff Nets: Stealing Functionality of Black-Box Models. In CVPR. 4954–4963.

[5]: Buse Atli, Sebastian Szyller, Mika Juuti, Samuel Marchal and N. Asokan 2019. Extraction of Complex DNN Models: Real Threat or Boogeyman? To appear in AAAI-20 Workshop on Engineering Dependable and Secure Machine Learning Systems

For further information: Sebastian Szyller (sebastian.szyller@aalto.fi) and Prof. N. Asokan.


Locality-Sensitive Neural-Based Perceptual Hashing ML & SEC

Note: Special Assignment Only

Locality-sensitive hashing (LSH) is an embedding function that, for two similar inputs A and B, produces hashes h(A) and h(B) that are similar. This differs from hashing in the cryptographic sense, where the distribution of the outputs should not reveal information about the inputs. LSH is extremely useful for efficient search and storage. However, in many domains the similarity of two inputs may be straightforward for a human to determine but not for a hashing function that is unaware of context, meaning or representation. Hence, we need solutions that can compensate for these deficiencies.

In these projects, we are going to investigate and design novel locally sensitive hash functions based on neural networks. Our focus is on two domains:

1) Perceptual hashing of images: an image that has been subjected to a change of contrast, shifting, rotation or inversion should produce a hash similar to that of the original.

2) Contextual hashing of text: sentences that have roughly the same meaning and similar grammatical structure should have similar hashes, e.g. "I think it was a great movie" vs. "In my opinion, this was a nice film".
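
For the image domain, a minimal non-neural baseline in the spirit of block-mean hashing [4] illustrates the behaviour the learned hash functions should improve on; the block size and the use of the median as a threshold are illustrative assumptions:

    # Block-mean perceptual hash: similar images should give a small Hamming distance.
    import numpy as np

    def block_mean_hash(image, blocks=8):
        # image: 2-D grayscale array; returns a (blocks * blocks)-bit hash.
        h, w = image.shape
        bh, bw = h // blocks, w // blocks
        means = np.array([[image[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
                           for j in range(blocks)] for i in range(blocks)])
        return (means > np.median(means)).astype(np.uint8).flatten()

    def hamming(h1, h2):
        return int(np.count_nonzero(h1 != h2))

    img = np.random.rand(64, 64)
    brighter = np.clip(img + 0.05, 0, 1)                              # perceptually similar copy
    print(hamming(block_mean_hash(img), block_mean_hash(brighter)))   # expected: small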

Requirements

  • MSc students in security, computer science, machine learning
  • familiarity with both standard ML techniques as well as deep neural networks (NLP and image classification)
  • Good math and algorithmic skills (hashing in particular)
  • Strong programming skills in Python/Java/Scala/C/C++/Javascript (Python preferred as de facto language)

Nice to have

  • industry experience in software engineering or related
  • research experience
  • familiarity with adversarial machine learning

Some references:
[1]: Markus Kettunen, Erik Härkönen, Jaakko Lehtinen. E-LPIPS: Robust Perceptual Image Similarity via Random Transformation Ensembles

[2]: Lucas Bourtoule, Varun Chandrasekaran, Christopher Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, Nicolas Papernot. Machine Unlearning

[3]: A Visual Survey of Data Augmentation in NLP

[4]: Bian Yang, Fan Gu, Xiamu Niu. Block Mean Value Based Image Perceptual Hashing in IIH-MSP 06: Proceedings of the 2006 International Conference on Intelligent Information Hiding and Multimedia

[5]: Locality-Sensitive Hashing

For further information: Sebastian Szyller (sebastian.szyller@aalto.fi) and Prof. N. Asokan.


Federated Learning: Adaptive Attacks and Defenses in Model Watermarking ML & SEC

Note: Special Assignment Only

Watermarking deep neural networks has become a well-known approach to proving ownership of machine learning models in case they are stolen [1,2,3]. Watermarking methods should be robust against post-processing techniques that aim to remove the watermark from the model. Such post-processing can be applied by the adversary who stole the model after it has been deployed as a service.

Unlike traditional machine learning approaches, federated learning trains machine learning models on edge devices (referred to as clients or data owners) and then combines the results of all models into a single global model stored on a server [4]. Watermarking solutions can be integrated into federated learning when the server is the model owner and the clients are the data owners [5]. However, unlike with post-processing methods, adversaries acting as malicious clients can directly manipulate the model during the training phase to remove the effect of the watermark from the global model. In this work, we are going to design an adaptive attacker that tries to remove the watermark during the training phase of federated learning and propose new defense strategies against this type of attacker.
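
A conceptual sketch of where the watermark enters the training loop, loosely following the idea of server-side embedding after aggregation used in WAFFLE [5]; the client updates, the trigger set and the retraining schedule are illustrative assumptions:

    # Federated averaging with server-side watermark embedding (conceptual only).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def fed_avg(global_model, client_state_dicts):
        # Average client parameters into the global model (FedAvg [4]).
        avg = {k: torch.stack([sd[k].float() for sd in client_state_dicts]).mean(0)
               for k in global_model.state_dict()}
        global_model.load_state_dict(avg)

    def embed_watermark(global_model, trigger_x, trigger_y, steps=10, lr=1e-3):
        # After each aggregation round, the server briefly retrains the global
        # model on its secret trigger set so the watermark survives averaging.
        opt = torch.optim.SGD(global_model.parameters(), lr=lr)
        for _ in range(steps):
            loss = F.cross_entropy(global_model(trigger_x), trigger_y)
            opt.zero_grad()
            loss.backward()
            opt.step()

An adaptive malicious client in this topic would try to craft its local updates so that exactly this embedded behaviour is erased from the aggregated model.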

Notes:

1) You will be required to complete a programming pre-assignment as part of the interview process.

2) This topic can be structured as either a short (3-4 month) internship or a thesis topic, depending on the background and skills of the selected student.

Required skills:

  • Basic understanding of both standard ML techniques as well as deep neural networks (You should have taken at least the Machine Learning: Basic Principles course from the SCI department or a similar course)

  • Good knowledge of mathematical methods and algorithmic skills

  • Strong programming skills in Python/Java/Scala/C/C++/Javascript (Python preferred as de facto language)

  • Sufficient skills to work and interact in English

Nice to have:

  • Familiarity with federated learning or statistics

  • Familiarity with deep learning frameworks (PyTorch, Tensorflow, Keras etc.)

References: 

[1] Adi, Yossi, et al. 2018. "Turning your weakness into a strength: Watermarking deep neural networks by backdooring." 27th USENIX Security Symposium.

[2] Zhang, Jialong, et al. 2018. "Protecting intellectual property of deep neural networks with watermarking." Proceedings of the 2018 on Asia Conference on Computer and Communications Security.

[3] Darvish Rouhani et al. 2019. "DeepSigns: an end-to-end watermarking framework for ownership protection of deep neural networks." Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems.

[4] McMahan, Brendan, et al. "Communication-efficient learning of deep networks from decentralized data." Artificial Intelligence and Statistics. PMLR, 2017.

[5] Atli, Buse Gul, et al. 2020. "WAFFLE: Watermarking in Federated Learning." arXiv preprint arXiv:2008.07298

For further information: Please contact Buse G. A. Tekgul (buse.atli@aalto.fi) and Prof. N. Asokan.


Making high-integrity security policies usable in the real world PLATSEC

SELinux and other mandatory access control mechanisms have been used to secure operating systems, such as Android and normal Linux distributions. But with the great power of these mechanisms comes great complexity, and so far SELinux is used primarily by specialists as part of operating system development, and not by application developers. In this project you will remedy this by developing semi-automated and automated methods that analyse a system's access-control graph, using both data provided by application developers and heuristic application complexity measures, and provide useful data to the developer when deciding on their security boundaries. You will then incorporate these methods into a graphical tool that can be used by normal developers to design security policies that ensure the integrity of their applications.
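
The sketch below illustrates the kind of access-control-graph analysis meant here: SELinux allow rules are loaded into a directed graph, and simple reachability queries show which types a given domain can ultimately influence. The toy rules and the use of the networkx library are assumptions for illustration:

    # Build a graph from SELinux-style allow rules and query reachability.
    import re
    import networkx as nx

    rules = """
    allow httpd_t httpd_log_t:file { write append };
    allow logrotate_t httpd_log_t:file { read };
    allow logrotate_t shadow_t:file { read };
    """

    g = nx.DiGraph()
    for m in re.finditer(r"allow (\w+) (\w+):(\w+) \{ ([^}]+) \};", rules):
        src, tgt, cls, perms = m.groups()
        g.add_edge(src, tgt, cls=cls, perms=perms.split())

    # Types reachable from httpd_t, directly or transitively.
    print(nx.descendants(g, "httpd_t"))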

Required skills:
- Familiarity with Linux operating system security
- Basic programming skills in your preferred language

Nice to have:
- Familiarity with graph theory
- GUI programming experience

For further information: Contact Lachlan Gunn (lachlan.gunn@aalto.fi) and Prof. N. Asokan



Attack-tolerant execution environments PLATSEC

Modern computing devices contain a trusted execution environment (TEE) for handling sensitive operations and data, such as cryptographic keys, in an isolated environment. TEEs are intended to protect trusted applications from potentially dangerous application software, or even a compromised operating system. An important aspect of TEEs is remote attestation: the ability to provide evidence of the state of a TEE to a third party, for example that a trusted application it is communicating with is in fact running within a TEE.

Modern processors are complex and incorporate several mechanisms like caching, speculation, and out-of-order execution to improve performance. Several recent attacks, like the well-known Spectre and Meltdown attacks, exploit this complexity to compromise integrity and confidentiality of computation. Variants of these attacks, like Foreshadow, are applicable against TEEs. In particular, strong adversaries capable of mounting Foreshadow-like attacks can not only compromise data confidentiality, but also remote attestation guarantees by leaking the keys used by the platform. 

In this project, you will explore how to mitigate the impact of such attacks. As a first step, we plan to explore how remote attestation guarantees can still be retained even in the presence of attacks that may compromise confidentiality of trusted application data. We will use a TEE based on seL4, a microkernel operating system with formally proven isolation guarantees, and use it together with a smaller FPGA-based fixed-function TEE used only for cryptographic operations involved in the attestation protocol. Working with senior researchers, you will design and implement the FPGA-based TEE functionality that underpins this system. 

In the longer term, we want to explore more advanced hardware and software innovations that can retain application data confidentiality even in the presence of such strong adversaries.  This will involve modifying existing open-source processor designs to develop new defences against runtime and side-channel attacks.  Depending on your background and progress, this may also form part of this project.   
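
As a deliberately simplified illustration of the attestation flow mentioned above (a symmetric MAC stands in for the asymmetric signatures a real design would use; all names and values are assumptions):

    # Toy attestation: measure the trusted application, authenticate the
    # measurement with a device key, and let the verifier check it against
    # a known-good reference value.
    import hashlib
    import hmac
    import os

    DEVICE_KEY = os.urandom(32)                       # held only by the fixed-function TEE
    ta_binary = b"...trusted application image..."
    reference = hashlib.sha256(ta_binary).digest()    # known to the verifier

    def attest(nonce):
        measurement = hashlib.sha256(ta_binary).digest()
        quote = hmac.new(DEVICE_KEY, measurement + nonce, hashlib.sha256).digest()
        return measurement, quote

    def verify(nonce, measurement, quote):
        # A real protocol would verify a signature with a public key instead of
        # sharing the device key; this symmetric check is a simplification.
        expected = hmac.new(DEVICE_KEY, reference + nonce, hashlib.sha256).digest()
        return measurement == reference and hmac.compare_digest(quote, expected)

    nonce = os.urandom(16)
    print(verify(nonce, *attest(nonce)))              # True while the measured TA matches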

Requirements:

  • C programming experience 
  • Basic cryptographic knowledge (hash functions, digital signatures, etc.) 

Nice to have:

  • FPGA development experience 
  • Familiarity with computer architecture 

For further information: Contact Lachlan Gunn (lachlan.gunn@aalto.fi) and Prof. N. Asokan


Research Topics with our Industry Partners


Huawei: Runtime code adaptation on heterogeneous hardware (Internship)

Posted on November 25

The target of this internship is to study and solve some of the issues of compatibility (with an express intent to secure the end result) that arise where a binary codebase may need to run on, and quickly migrate between, processor cores with different supported micro-architectural functionality (assembler instructions).

In this research project, we want to explore how a piece of software can take advantage of certain CPU features (specifically security-relevant features, such as memory tagging) when they are available and emulate them when they are not – all in a transparent manner. This requirement brings many challenges, especially when running under an OS with preemption and transparent migration between CPUs.

In this work, the intern will work with our experts to analyze this problem more deeply, come up with a viable solution, and implement a prototype or a conceptual demonstrator. This internship can constitute a Master's thesis or a PhD internship. Therefore, we especially look for students who have completed all of their MSc courses and are searching for an MSc thesis topic, or students with even further experience in security or OSs.

We are looking for:

  • Students who have completed most of their M.Sc. Courses (CS/E.Eng), or higher,
    preferably some background in Security / Operating systems
  • System coding experience (C, ASM), compiler experience is a plus
  • Sufficient skills to work and interact in English
  • Good team-working skills
  • Students with an interest to do research and explore new challenges.

More information: https://apply.workable.com/huawei-technologies-oy-finland/j/C198447844/



Nokia Bell Labs: Domain-specific threat modeling framework for mobile communication systems NETSEC

We at Nokia Bell Labs are building a domain-specific threat modeling framework for mobile communication systems. Our framework is designed to model adversarial behavior in terms of its attack phases and to be used as a common taxonomy matrix. Taking inspiration from the MITRE ATT&CK framework, we have systematically organized publicly known attacks into various tactics and techniques to form the matrix by gleaning through a diverse and large body of security literature [1, 2].
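
As a hypothetical illustration of what one technique entry in such a matrix might look like when exported in a sharable format such as JSON (one of the tasks listed below), the field names, identifier and example technique are placeholders, not the actual Bell Labs schema:

    # Export a single tactic/technique entry of the threat matrix as JSON.
    import json

    entry = {
        "tactic": "Initial Access",
        "technique": "Interconnect signalling abuse for location queries",
        "id": "EX-T001",                               # placeholder identifier
        "description": "Adversary queries subscriber location via exposed "
                       "interconnect signalling interfaces.",
        "references": ["Rao, Holtmanns and Aura, Threat modeling framework "
                       "for mobile communication systems [1]"],
    }
    print(json.dumps(entry, indent=2))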

This has already attracted the attention of many key players in the mobile communication industry and is picking up momentum. Nokia Bell Labs is pioneering and leading the next steps by transforming this research activity into a community-driven initiative. We are looking for a student intern to contribute to these efforts; the work can also form part of your master's thesis or research topic. The student will be working with the security research team of Nokia Bell Labs in Espoo.

Tasks include:

  • Building a web tool to annotate, visualize, and export the threat matrix in a sharable format (e.g. JSON). An example of such a web-based tool can be found here
  • Modeling publicly known attacks on mobile communication systems using the matrix + web tool
    • Derive qualitative and quantitative insights from such attack statistics. 
  • Contributing towards extending the threat model and necessary backend infrastructure 
  • Contributing to research (conference) publications 
  • Collaborating with other stakeholders (e.g., mobile operators and equipment vendors)

Required background and skills

  • Full-stack web app development skills; preferably Node.js or Python. 
  • Experience with version controlling (e.g., git, SVN) and docker containers. 
  • Previous contributions to open source software and communities are very much beneficial. 
  • Background in information and network security is NOT required. But interest in these topics is very much expected.
  • Students from underrepresented communities (e.g., LGBTQ) are encouraged to apply. Please mention your preferred pronouns in the application materials. 

Please contact Sid Rao (sid.rao@nokia-bell-labs.com) or Yoan Miche (yoan.miche@nokia-bell-labs.com) if you have any further questions.  

References:

[1] Rao, Siddharth Prakash, Silke Holtmanns, and Tuomas Aura. "Threat modeling framework for mobile communication systems." arXiv preprint.

[2] Patrick Donegan. "Harden Stance briefing – An ATT&CK-like framework for telcos". [Link]


Huawei: Conditional Hardware tracing using Memory Tagging (Internship)

Posted on October 28

In this work, the intern at HSSL will participate in a research project where we intend to tie memory tagging to conditional execution and explore how such a feature could be used – and how it could be beneficial – for computer code. One practical, frequently occurring need is the ability to enable or disable traces at runtime with minimal execution impact. This is usually achieved by patching the code at runtime (see e.g. Linux kernel static keys); with the above-mentioned solution, the main idea would be to have the tracing code embedded in the binary but disable it (execute a NOP) when the instructions are tagged. In the AArch64 case, one bit of the memory tag would be associated with one instruction.

This internship can constitute a Master's thesis or a PhD internship. Therefore, we especially look for students who have completed all of their MSc courses and are searching for an MSc thesis topic, or students with even further experience in networking designs and IoT.

Requirements:

  • Students who have completed most of their M.Sc. Courses (CS/E.Eng), or higher,
    preferably some background in Security / Crypto / Operating systems / Compilers
  • System coding experience (C, assembler)
  • Sufficient skills to work and interact in English
  • Good team-working skills
  • Students with an interest to do research and explore new challenges.

More information: https://apply.workable.com/huawei-technologies-oy-finland/j/1D896DBA51/


Huawei: Securing Secure Enclaves (Internship)

Posted on October 28

In this work, the intern at HSSL will participate in a research project where we intend to further the evolution of memory protection for secure-enclave code, specifically with the Rust language and a memory-safe runtime. This in itself is not very novel; there are many research activities promoting Rust code for enclaves, since there is a good match: the enclave is inherently secured by hardware, and the promise of Rust is memory-safe code, so the enclave becomes very resistant to internal and external attacks. However, we intend to leverage the sweet spot at the intersection of Rust and WASM (WebAssembly) interpretation, which allows us to make enclaves live-migratable even between architectures. As it happens, Wasmtime, a popular WASM runtime, is implemented in Rust, and at the same time WASM is a well-supported compilation target for Rust. In this work, as part of the research team, the intern will design and prototype a few ideas in this area together with our experts.

This internship can constitute a Master's thesis or a PhD internship. Therefore, we especially look for students who have completed all of their MSc courses and are searching for an MSc thesis topic, or students with even further experience in security or OSs.

Requirements:

  • Students who have completed most of their M.Sc. Courses (CS/E.Eng), or higher,
    preferably some background in Security / Operating systems
  • System coding experience (C, Rust, WASM?)
  • Sufficient skills to work and interact in English
  • Good team-working skills
  • Students with an interest to do research and explore new challenges.

More information: https://apply.workable.com/huawei-technologies-oy-finland/j/ECA53D8196/


Huawei: Secure Configuration and Management of IoT Devices (Internship)

Posted on October 28

Using EAP for lifecycle management of IoT devices in SOHO networks is an open research problem. To address the challenge, the student will participate in a research project that explores how to configure the operational credentials in the device via EAP. This is necessary, but there is currently no standard way to do it. The goal is to design and experimentally verify ways to use EAP to bootstrap devices, including long-term credential provisioning. In this project, the student will participate in the study of available industry standards, design protocols and communication stacks, and document them. Furthermore, a proof-of-concept will be implemented on a selected set of devices.

This internship can constitute a Master's thesis or a PhD internship. Therefore, we especially look for students who have completed all of their MSc courses and are searching for an MSc thesis topic, or students with even further experience in networking designs and IoT.

Requirements

  • Students who have completed most of their M.Sc. Courses (CS/E.Eng), or higher,
    preferably some background in Security / Crypto / Operating systems / Embedded / CPUs/MCUs
  • System coding experience (C, assembler)
  • Sufficient skills to work and interact in English
  • Good team-working skills
  • Students with an interest to do research and explore new challenges.

More information: https://apply.workable.com/huawei-technologies-oy-finland/j/B7343940E4/


Huawei: Securing Register Spills in the Compiler (Internship)

Posted on October 28

The goal of this M.Sc. project is to study the impact of register spilling in the context of memory protection, and to design and implement remedies against this issue. As part of the research team, the intern will, together with our experts, prototype modifications to the Clang/LLVM compiler with the intent of curtailing the harmful effects of register spilling, while accounting for the performance impact of the end result.

This internship can constitute a Master's thesis or a PhD internship. Therefore, we especially look for students who have completed all of their MSc courses and are searching for an MSc thesis topic, or students with even further experience in networking designs and IoT.

Requirements

  • Students who have completed most of their M.Sc. Courses (CS/E.Eng), or higher,
    preferably some background in Security / Crypto / Operating systems / Compilers
  • System coding experience (C, assembler)
  • Sufficient skills to work and interact in English
  • Good team-working skills
  • Students with an interest to do research and explore new challenges

More information: https://apply.workable.com/huawei-technologies-oy-finland/j/E66BC6DE8A/


Huawei: CV/NLP Research Intern

Posted on October 28

We are looking for a motivated Research Intern who can take part in and contribute to our research in the area of Natural Language Processing (NLP), speech recognition or Computer Vision (CV).

Requirements

  • Be a Master's student or PhD candidate (preferred) in computer science, maths, statistics, computational linguistics, computer vision, ML & security or related areas.
  • Have course credits in AI/ML.
  • Be excited about using AI/ML in NLP or Computer Vision or Speech Recognition.
  • Have good software and Python programming skills.
  • Good level of English.

More information: https://apply.workable.com/huawei-technologies-oy-finland/j/FD9951B7B2/


Huawei: AI/NLP Research Intern

Posted on October 28

Now we are looking for a motivated AI/NLP Research Intern who can take part in and contribute to our research in the area of Natural Language Processing (NLP) or AI/Machine Learning, intersecting with cloud/application security or privacy.

The start time of this trainee position is negotiable; the duration is 4-6 months.

Requirements

  • Be a master's student or PhD candidate (preferred) in computer science, mathematics, computational linguistics, machine learning & security or related areas.
  • Have course credits in AI/ML and Security.
  • Be excited about using AI/ML/NLP in the context of security or privacy such as federated learning, web security, end to end encryption applications.
  • Have good software and Python programming skills.
  • Good level of English.

More information: https://apply.workable.com/huawei-technologies-oy-finland/j/1C8D4EF809/



Reserved Research Topics


Dataset Watermarking ML & SEC 

Suppose Alice has collected a valuable dataset. She licenses it to Bob to be used for specific purposes (e.g., computing various aggregate statistics, or to evaluate different data anonymization techniques). Alice does not want Bob to build a machine learning (ML) model using her dataset and monetize the model, for example, by making its prediction API accessible via a cloud server to paying customers. Alice wants a mechanism by which if Bob misbehaves by building a model and making it available to customers, she will be able to demonstrate to an independent verifier that Bob's model was in fact built using her dataset. This is the problem of dataset watermarking.

A related problem is ML model watermarking where Alice inserts a watermark into ML models that she builds which she can later use to demonstrate her ownership of the model. Watermarking is intended to deter model theft. In recent years, a number of ML model watermarking techniques have been proposed, against both white-box model theft [1] where the adversary (Bob) steals the model physically and black-box model extraction [2] where the adversary builds a surrogate model by querying the prediction API of Alice's victim model. 

While model watermarking has been studied extensively, dataset watermarking is relatively under-explored. Recently, two simple dataset watermarking methods have been proposed for image [3] and audio [4] classification datasets.

In this internship project, you will have the opportunity to work with senior researchers to analyse existing techniques for dataset watermarking, attempt to design circumvention methods against them, for example, by statistical analysis of latent space, and explore whether existing techniques can be improved or whether it is possible to design entirely new techniques that are effective and robust against such circumvention approaches.
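
As a simple point of comparison for the analysis, the sketch below shows the verification step of a backdoor-style dataset watermark: Alice plants a few samples carrying a secret trigger pattern with chosen labels in the licensed dataset, and later checks whether a suspect model agrees with those labels far above chance. This is only one baseline family of techniques (not necessarily the mechanism used in [3] or [4]), and the trigger pattern, labels and threshold are assumptions for illustration:

    # Verifying a trigger-based dataset watermark against a suspect model.
    import torch

    def add_trigger(x, value=1.0):
        x = x.clone()
        x[..., :3, :3] = value                 # small corner patch as the secret trigger
        return x

    def verify_ownership(suspect_model, clean_samples, target_label, threshold=0.5):
        triggered = torch.stack([add_trigger(x) for x in clean_samples])
        with torch.no_grad():
            preds = suspect_model(triggered).argmax(dim=1)
        agreement = (preds == target_label).float().mean().item()
        return agreement, agreement > threshold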

Notes:

1) You will be required to complete a programming pre-assignment as part of the interview process.

2) This topic can be structured as either a short (3-4 month) internship or a thesis topic, depending on the background and skills of the selected student.

If you are a CCIS MSc student (majoring in security, ML, or a related discipline), and have the following "required skills", this internship will suit you. The "nice-to-have" skills listed below will be strong plusses.

Required skills:

  • Basic understanding of both standard ML techniques as well as deep neural networks (You should have taken at least the Machine Learning: Basic Principles course from the SCI department or a similar course)

  • Good knowledge of mathematical methods and algorithmic skills

  • Strong programming skills in Python/Java/Scala/C/C++/Javascript (Python preferred as de facto language)

  • Strong ability to work and interact in English

Nice to have:

  • Familiarity with statistical signal processing

  • Familiarity with deep learning frameworks (PyTorch, Tensorflow, Keras etc.)

References: 

[1] Adi, Yossi, et al. 2018. "Turning your weakness into a strength: Watermarking deep neural networks by backdooring." 27th USENIX Security Symposium.

[2] Szyller, Sebastian, et al. 2019. "DAWN: Dynamic adversarial watermarking of neural networks."

[3] Sablayrolles, Alexandre, et al. 2020. "Radioactive data: tracing through training." (Open source Github code)

[4] Kim, Wansoo, and Kyogu Lee. 2020. "Digital Watermarking For Protecting Audio Classification Datasets." ICASSP IEEE 2020.

For further information: Please contact Buse G. A. Tekgul (buse.atli@aalto.fi) and Prof. N. Asokan.


Black-box Adversarial Attacks on Neural Policies  ML & SEC

It is well known that machine learning models are vulnerable to inputs that are constructed by adversaries to force misclassification. Adversarial examples have been extensively studied in image classification models using deep neural networks (DNNs). Recent work [1] shows that reinforcement learning agents using deep reinforcement learning (DRL) algorithms are also susceptible to adversarial examples, since DRL uses neural networks to train a policy in the decision-making process [2]. Existing adversarial attacks [3-6] try to corrupt the observation of the agent by perturbing the sensor readings, so that the resulting observation differs from the true environment that the agent interacts with.

This work will focus on black-box adversarial attacks, where the attacker has no knowledge about the agent and can only observe how the agent interacts with the environment. We will analyze how to use and combine imitation learning, apprenticeship learning via inverse reinforcement learning [7], and the transferability property of adversarial examples to generate black-box adversarial attacks against DRL.
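
A sketch of the transferability step mentioned above: once a surrogate policy has been trained (for example by imitating the victim agent), an FGSM-style perturbation of the observation is crafted on the surrogate and then replayed against the black-box victim. The surrogate architecture, observation shape and epsilon are illustrative assumptions:

    # Craft an adversarial observation on a local surrogate policy (white-box),
    # relying on transferability to affect the black-box victim agent.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    surrogate_policy = nn.Sequential(nn.Flatten(), nn.Linear(84 * 84, 6))  # placeholder

    def fgsm_on_observation(obs, epsilon=0.01):
        obs = obs.clone().requires_grad_(True)
        logits = surrogate_policy(obs)
        action = logits.argmax(dim=1)               # action the surrogate prefers
        loss = F.cross_entropy(logits, action)      # increase loss to push away from it
        loss.backward()
        return (obs + epsilon * obs.grad.sign()).detach()

    obs = torch.rand(1, 1, 84, 84)                  # e.g. an Atari-style frame
    adv_obs = fgsm_on_observation(obs)              # fed to the victim agent instead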

Note: There's going to be a programming pre-assignment.

Required skills:

  • MSc students in security, computer science, machine learning, robotics or automated systems

  • Basic understanding of both standard ML techniques as well as deep neural networks (You should have taken at least the Machine Learning: Basic Principles course from the SCI department or a similar course)

  • Good knowledge of mathematical methods and algorithmic skills

  • Strong programming skills in Python/Java/Scala/C/C++/Javascript (Python preferred as de facto language)

Nice to have:

  • Familiarity with one of the topics: adversarial machine learning, reinforcement learning, statistics, control theory, artificial intelligence

  • Familiarity with deep learning frameworks (PyTorch, Tensorflow, Keras etc.)

  • Sufficient skills to work and interact in English

  • An interest in arcade games

References:

[1] Huang, Sandy, et al. 2017. "Adversarial attacks on neural network policies.

[2] Mnih, Volodymyr, et al.  "Human-level control through deep reinforcement learning." Nature 518.7540 (2015): 529.

[3] Lin, Yen-Chen, et al. IJCAI-2017. "Tactics of adversarial attack on deep reinforcement learning agents."

[4] Kos, Jernej, and Dawn Song. 2017.  "Delving into adversarial attacks on deep policies." 

[5] Xiao, Chaowei, et al. 2019. "Characterizing Attacks on Deep Reinforcement Learning." 

[6] Hussenot, Léonard et al. 2019.  "CopyCAT: Taking Control of Neural Policies with Constant Attacks." arXiv:1905.12282 

[7] B. Piot, at al. 2017.  "Bridging the Gap Between Imitation Learning and Inverse Reinforcement Learning." in IEEE Transactions on Neural Networks and Learning Systems.

For further information: Please contact Buse G. A. Tekgul (buse.atli@aalto.fi) and Prof. N. Asokan.


Reserved topics should be moved to this section and taken away from the available sections!


