...

For further information: Sebastian Szyller (sebastian.szyller@aalto.fi) and Prof. N. Asokan.

Making high-integrity security policies usable in the real world PLATSEC

SELinux and other mandatory access control mechanisms have been used to secure operating systems such as Android and ordinary Linux distributions. But the great power of these mechanisms comes with great complexity, and so far SELinux is used primarily by specialists as part of operating-system development, not by application developers. In this project you will remedy this by developing semi-automated and automated methods that analyse a system's access-control graph, using both data provided by application developers and heuristic measures of application complexity, and that provide useful data to developers deciding on their security boundaries. You will then incorporate these methods into a graphical tool that ordinary developers can use to design security policies that ensure the integrity of their applications.
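
To give a flavor of the analysis involved, here is a minimal sketch in Python of integrity-oriented reachability over an access-control graph. The rule tuples, type names, and the set of "write-like" permissions are invented for illustration and do not reflect SELinux's actual policy language:

    # Minimal sketch: reachability analysis over a simplified SELinux-style
    # access-control graph. Rules, type names, and the write-like permission
    # set below are invented examples, not real policy.
    from collections import defaultdict, deque

    # (source domain, target type, permission) -- hypothetical policy fragment
    allow_rules = [
        ("init_t", "app_t", "transition"),
        ("app_t", "app_data_t", "write"),
        ("app_t", "system_config_t", "read"),
        ("untrusted_t", "app_data_t", "write"),  # potential integrity hole
    ]

    WRITE_LIKE = {"write", "append", "transition"}

    def build_graph(rules):
        """Edge source -> target for every permission that can modify the target."""
        graph = defaultdict(set)
        for src, dst, perm in rules:
            if perm in WRITE_LIKE:
                graph[src].add(dst)
        return graph

    def writers_reaching(graph, sensitive):
        """All domains from which a chain of write-like permissions reaches `sensitive`."""
        reverse = defaultdict(set)
        for src, dsts in graph.items():
            for dst in dsts:
                reverse[dst].add(src)
        seen, queue = set(), deque([sensitive])
        while queue:
            node = queue.popleft()
            for parent in reverse[node]:
                if parent not in seen:
                    seen.add(parent)
                    queue.append(parent)
        return seen

    graph = build_graph(allow_rules)
    print(writers_reaching(graph, "app_data_t"))  # {'app_t', 'untrusted_t', 'init_t'}

A real tool would parse the compiled policy, handle attributes, roles, and domain transitions properly, and rank the resulting write paths using the heuristic complexity measures mentioned above.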

Required skills:

  • Familiarity with Linux operating system security
  • Basic programming skills in your preferred language

Nice to have:

  • Familiarity with graph theory
  • GUI programming experience

For further information: Contact Lachlan Gunn (lachlan.gunn@aalto.fi) and Prof. N. Asokan.


...

Byzantine fault tolerance with rich fault models PLATSEC

Byzantine fault tolerance (BFT) has seen a resurgence due to the popularity of permissioned blockchains. Unlike Bitcoin-style proof-of-work consensus, BFT provides immediate confirmation of requests/transactions. Existing BFT protocols can generally tolerate up to a third of participants being faulty, whereas Bitcoin tolerates attackers controlling up to a third of the total hash rate.

We are working to build a BFT system that can tolerate a richer variety of failure modes, for example:

  • Up to f nodes are malicious (this is the classical BFT model)
  • Nodes controlling at most a given fraction of CPU power are malicious (this is like Bitcoin)
  • All nodes running software X are malicious (e.g. a zero-day vulnerability is found in some piece of software)
  • All nodes owned by company X become malicious (e.g. someone steals administrator credentials)
  • Several of the above combined, each with its own threshold

In this project, you will develop systems that can tolerate realistic, "real-world" fault types like these, and gain experience in the development of distributed systems.
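
To make the richer fault models concrete, here is a minimal sketch in Python of a predicate that decides whether a set of suspected-faulty nodes falls within a composite fault model. The node attributes, the threshold, and the rule itself are invented for illustration, not a protocol specification:

    # Minimal sketch: checking a candidate fault set against a composite
    # fault model. Node attributes and thresholds are invented examples.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Node:
        name: str
        software: str
        owner: str

    NODES = [
        Node("n1", "X", "acme"), Node("n2", "Y", "acme"),
        Node("n3", "X", "globex"), Node("n4", "Y", "initech"),
    ]

    F = 1  # classical threshold: any f nodes may fail arbitrarily

    def tolerable(faulty):
        """True if this fault set falls within at least one declared fault class."""
        if len(faulty) <= F:
            return True  # classical BFT: any f nodes
        if len({n.software for n in faulty}) == 1:
            return True  # one software stack compromised (zero-day)
        if len({n.owner for n in faulty}) == 1:
            return True  # one owner compromised (stolen credentials)
        return False

    # A zero-day in software "X" takes out n1 and n3 (more than f nodes),
    # yet the richer model still declares this survivable:
    print(tolerable({NODES[0], NODES[2]}))  # True

A protocol would use such a predicate to size its quorums: any two quorums must intersect in at least one node outside every tolerable fault set, in the style of Byzantine quorum systems [2].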

Requirements:

  • Basic network programming experience

Nice to have:

  • Knowledge of C++ and/or Rust
  • Theoretical distributed systems knowledge

References:

1: M. Castro, B. Liskov, "Practical Byzantine fault tolerance", Proceedings of OSDI '99.
2: D. Malkhi, M. Reiter, "Byzantine quorum systems", Proceedings of STOC '97.

For further information: Please contact Lachlan Gunn (lachlan.gunn@aalto.fi).

...

Research Topics with our Industry Partners

...

Posted on April 8, 2021

SSH: IT Support Trainee

SSH Communications Security Oyj is looking for an IT Support Trainee. SSH IT provides the services that all SSH personnel need to succeed in their work. We run services from mainframe platforms to cutting-edge lambda functions in the cloud. If you are interested in getting familiar with how IT operates and would like to learn device-management skills, what better way to spend the summer than joining SSH IT!

We are looking for a person who:

  • Is interested in challenging themselves
  • Is good at identifying a problem and, especially, at finding a solution
  • Has a basic understanding of how computers work
  • Has a basic understanding of how computers communicate over a network
  • Has good communication skills in English
  • Any programming skill is a plus

Please fill out the application form at https://careers.ssh.com and introduce yourself to us. Also tell us the period during which you would be available to work.


...

Nokia Bell Labs: Domain-specific threat modeling framework for mobile communication systems NETSEC

We at Nokia Bell Labs are building a domain-specific threat modeling framework for mobile communication systems. Our framework is designed to model adversarial behavior in terms of its attack phases and to serve as a common taxonomy matrix. Taking inspiration from the MITRE ATT&CK framework, we have combed through a diverse and large body of security literature and systematically organized publicly known attacks into the tactics and techniques that form the matrix [1, 2].

This work has already attracted the attention of many key players in the mobile communication industry and is picking up momentum. Nokia Bell Labs is pioneering and leading the next steps by transforming this research activity into a community-driven initiative. We are looking for a student intern to contribute to these efforts; the work can also be part of your master's thesis or a research topic. The student will be working with the security research team of Nokia Bell Labs in Espoo.

Tasks include:

  • Building a web tool to annotate, visualize, and export the threat matrix in a sharable format (e.g. JSON; a sketch follows this list). An example of such a web-based tool can be found here
  • Modeling publicly known attacks on mobile communication systems using the matrix and web tool
    • Deriving qualitative and quantitative insights from such attack statistics
  • Contributing towards extending the threat model and the necessary backend infrastructure
  • Contributing to research (conference) publications
  • Collaborating with other stakeholders (e.g., mobile operators and equipment vendors)
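
As a flavor of the export task, here is a minimal sketch in Python of an ATT&CK-style matrix serialized to shareable JSON. The tactic and technique names and the field layout are invented examples, not the actual taxonomy:

    # Minimal sketch: an ATT&CK-style threat matrix as tactics -> techniques,
    # serialized to shareable JSON. Names and fields are invented examples.
    import json

    matrix = {
        "tactics": [
            {
                "name": "Initial Access",
                "techniques": [
                    {"id": "T001", "name": "Rogue base station", "references": []},
                ],
            },
            {
                "name": "Discovery",
                "techniques": [
                    {"id": "T002", "name": "Subscriber info leak via signaling",
                     "references": []},
                ],
            },
        ],
    }

    # Export in a sharable format for the web tool and community contributions.
    with open("threat_matrix.json", "w") as f:
        json.dump(matrix, f, indent=2)

    # A simple quantitative insight: number of documented techniques per tactic.
    for tactic in matrix["tactics"]:
        print(tactic["name"], len(tactic["techniques"]))

The same structure is straightforward to render as a matrix in a web frontend and to diff across community contributions.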

Required background and skills

  • Full-stack web app development skills, preferably Node.js or Python.
  • Experience with version control (e.g., Git, SVN) and Docker containers.
  • Previous contributions to open-source software and communities are a strong plus.
  • A background in information and network security is NOT required, but interest in these topics is very much expected.
  • Students from underrepresented communities (e.g., LGBTQ) are encouraged to apply. Please mention your preferred pronouns in the application materials.

Please contact Sid Rao (sid.rao@nokia-bell-labs.com) or Yoan Miche (yoan.miche@nokia-bell-labs.com) if you have any further questions.

References:

[1] Rao, Siddharth Prakash, Silke Holtmanns, and Tuomas Aura. "Threat modeling framework for mobile communication systems." arXiv preprint.

[2] Patrick Donegan. "Harden Stance briefing – An ATT&CK-like framework for telcos". [Link]

...

Huawei: CV/NLP Research Intern

Posted on October 28

We are looking for a motivated Research Intern who can take part in and contribute to our research in the areas of Natural Language Processing (NLP), speech recognition, or Computer Vision (CV).

Requirements

  • Be a Master's student or PhD candidate (preferred) in computer science, maths, statistics, computational linguistics, computer vision, ML & security, or related areas.
  • Have course credits in AI/ML.
  • Be excited about using AI/ML in NLP, computer vision, or speech recognition.
  • Have good software and Python programming skills.
  • Have a good level of English.

More information: https://apply.workable.com/huawei-technologies-oy-finland/j/FD9951B7B2/

...

Huawei: AI/NLP Research Intern

Posted on October 28

We are now looking for a motivated AI/NLP Research Intern who can take part in and contribute to our research in the area of Natural Language Processing (NLP) or AI/machine learning, intersecting with cloud/application security or privacy.

The start time of this trainee position is negotiable; the duration is 4-6 months.

Requirements

  • Be a Master's student or PhD candidate (preferred) in computer science, mathematics, computational linguistics, machine learning & security, or related areas.
  • Have course credits in AI/ML and security.
  • Be excited about using AI/ML/NLP in the context of security or privacy, such as federated learning, web security, or end-to-end encryption applications.
  • Have good software and Python programming skills.
  • Have a good level of English.

More information: https://apply.workable.com/huawei-technologies-oy-finland/j/1C8D4EF809/

...

Reserved Research Topics

(Note: Reserved topics should be moved to this section and taken away from the available sections!)

...

Dataset Watermarking ML & SEC 

Suppose Alice has collected a valuable dataset. She licenses it to Bob to be used for specific purposes (e.g., computing various aggregate statistics, or to evaluate different data anonymization techniques). Alice does not want Bob to build a machine learning (ML) model using her dataset and monetize the model, for example, by making its prediction API accessible via a cloud server to paying customers. Alice wants a mechanism by which if Bob misbehaves by building a model and making it available to customers, she will be able to demonstrate to an independent verifier that Bob's model was in fact built using her dataset. This is the problem of dataset watermarking.

A related problem is ML model watermarking, where Alice inserts a watermark into the ML models she builds and can later use it to demonstrate her ownership of a model. Watermarking is intended to deter model theft. In recent years, a number of ML model watermarking techniques have been proposed, against both white-box model theft [1], where the adversary (Bob) steals the model physically, and black-box model extraction [2], where the adversary builds a surrogate model by querying the prediction API of Alice's victim model.

While model watermarking has been studied extensively, dataset watermarking is relatively under-explored. Recently, two simple dataset watermarking methods have been proposed for image [3] and audio [4] classification datasets.

In this internship project, you will have the opportunity to work with senior researchers to analyse existing techniques for dataset watermarking; attempt to design circumvention methods against them, for example by statistical analysis of the latent space; and explore whether existing techniques can be improved, or whether it is possible to design entirely new techniques that are effective and robust against such circumvention approaches.
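
For intuition, here is a minimal sketch in Python of the simplest trigger-based flavor of dataset marking, loosely in the spirit of [3] and [4] but greatly simplified; the trigger pattern, marking rate, and verification threshold are all invented for illustration:

    # Minimal sketch of trigger-style dataset marking, simplified for
    # illustration (the pattern, rate, and verification statistic below are
    # invented; they are not the methods of [3] or [4]).
    import numpy as np

    rng = np.random.default_rng(0)

    def watermark(images, labels, rate=0.01, target=0):
        """Stamp a secret 3x3 corner pattern on `rate` of samples, relabeled to `target`."""
        marked, y = images.copy(), labels.copy()
        idx = rng.choice(len(images), size=max(1, int(rate * len(images))), replace=False)
        marked[idx, :3, :3] = 1.0   # the secret trigger pattern
        y[idx] = target             # associate the trigger with one class
        return marked, y, idx

    def verify(predict_fn, probes, target=0, threshold=0.9):
        """Flag a suspect model if it maps trigger-stamped probes to `target`."""
        stamped = probes.copy()
        stamped[:, :3, :3] = 1.0
        return np.mean(predict_fn(stamped) == target) >= threshold

    # Toy usage with random "images" and a dummy classifier:
    X = rng.random((1000, 28, 28)); Y = rng.integers(0, 10, 1000)
    Xw, Yw, _ = watermark(X, Y)
    print(verify(lambda x: np.zeros(len(x), dtype=int), Xw[:50]))  # True for this dummy

Circumvention work would start exactly here, for example by asking whether the marked samples stand out as outliers in a trained model's latent space.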

Notes:

1) You will be required to complete a programming pre-assignment as part of the interview process.

2) This topic can be structured as either a short (3-4 month) internship or a thesis topic, depending on the background and skills of the selected student.

If you are a CCIS MSc student (majoring in security, ML, or a related discipline) and have the following required skills, this internship will suit you. The nice-to-have skills listed below will be strong pluses.

Required skills:

  • Basic understanding of both standard ML techniques and deep neural networks (you should have taken at least the Machine Learning: Basic Principles course from the SCI department, or a similar course)

  • Good knowledge of mathematical methods and algorithmic skills

  • Strong programming skills in Python/Java/Scala/C/C++/JavaScript (Python preferred as the de facto language)

  • Strong ability to work and interact in English

Nice to have:

  • Familiarity with statistical signal processing

  • Familiarity with deep learning frameworks (PyTorch, TensorFlow, Keras, etc.)

References: 

[1] Adi, Yossi, et al. 2018. "Turning your weakness into a strength: Watermarking deep neural networks by backdooring." 27th USENIX Security Symposium.

[2] Szyller, Sebastian, et al. 2019. "DAWN: Dynamic adversarial watermarking of neural networks."

[3] Sablayrolles, Alexandre, et al. 2020. "Radioactive data: tracing through training." (Open-source GitHub code)

[4] Kim, Wansoo, and Kyogu Lee. 2020. "Digital Watermarking For Protecting Audio Classification Datasets." ICASSP IEEE 2020.

For further information: Please contact Buse G. A. Tekgul (buse.atli@aalto.fi) and Prof. N. Asokan.

...

Black-box Adversarial Attacks on Neural Policies  ML & SEC

It is well known that machine learning models are vulnerable to inputs constructed by adversaries to force misclassification. Adversarial examples have been studied extensively in image classification models using deep neural networks (DNNs). Recent work [1] shows that agents trained with deep reinforcement learning (DRL) algorithms are also susceptible to adversarial examples, since DRL uses neural networks to train the policy used in decision making [2]. Existing adversarial attacks [3-6] corrupt the agent's observation by perturbing sensor readings, so that the observation differs from the true environment with which the agent interacts.

This work will focus on black-box adversarial attacks, where the attacker has no knowledge about the agent and can only observe how the agent interacts with the environment. We will analyze how to use and combine imitation learning, apprenticeship learning via inverse reinforcement learning [7], and the transferability property of adversarial examples to generate black-box adversarial attacks against DRL.
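
To make the transferability idea concrete, here is a minimal sketch in PyTorch: craft an FGSM perturbation against a surrogate policy (e.g., one trained by imitating the victim) and rely on transferability for the unseen victim. The surrogate interface and the attack budget are assumed for illustration:

    # Minimal sketch: untargeted FGSM on a surrogate policy, relying on
    # transferability for the black-box victim. `surrogate` is assumed to
    # map a batch of observations to action logits.
    import torch
    import torch.nn.functional as F

    def fgsm_on_surrogate(surrogate, obs, epsilon=0.01):
        """Perturb `obs` to reduce the surrogate's confidence in its chosen action."""
        obs = obs.clone().detach().requires_grad_(True)
        logits = surrogate(obs)
        action = logits.argmax(dim=-1)          # surrogate's preferred action
        loss = F.cross_entropy(logits, action)  # push observation away from it
        loss.backward()
        # A real attack would also clamp to the valid observation range.
        return (obs + epsilon * obs.grad.sign()).detach()

    # Usage (assuming a trained `surrogate` and a black-box `victim_policy`):
    #   adv_obs = fgsm_on_surrogate(surrogate, obs)
    #   victim_action = victim_policy(adv_obs).argmax(dim=-1)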

Note: There will be a programming pre-assignment.

Required skills:

  • MSc student in security, computer science, machine learning, robotics, or automated systems

  • Basic understanding of both standard ML techniques and deep neural networks (you should have taken at least the Machine Learning: Basic Principles course from the SCI department, or a similar course)

  • Good knowledge of mathematical methods and algorithmic skills

  • Strong programming skills in Python/Java/Scala/C/C++/JavaScript (Python preferred as the de facto language)

Nice to have:

  • Familiarity with one of the following topics: adversarial machine learning, reinforcement learning, statistics, control theory, artificial intelligence

  • Familiarity with deep learning frameworks (PyTorch, TensorFlow, Keras, etc.)

  • Sufficient skills to work and interact in English

  • An interest in arcade games

References:

[1] Huang, Sandy, et al. 2017. "Adversarial attacks on neural network policies."

[2] Mnih, Volodymyr, et al. 2015. "Human-level control through deep reinforcement learning." Nature 518.7540: 529.

[3] Lin, Yen-Chen, et al. 2017. "Tactics of adversarial attack on deep reinforcement learning agents." IJCAI 2017.

[4] Kos, Jernej, and Dawn Song. 2017. "Delving into adversarial attacks on deep policies."

[5] Xiao, Chaowei, et al. 2019. "Characterizing Attacks on Deep Reinforcement Learning."

[6] Hussenot, Léonard, et al. 2019. "CopyCAT: Taking Control of Neural Policies with Constant Attacks." arXiv:1905.12282.

[7] Piot, B., et al. 2017. "Bridging the Gap Between Imitation Learning and Inverse Reinforcement Learning." IEEE Transactions on Neural Networks and Learning Systems.

For further information: Please contact Buse G. A. Tekgul (buse.atli@aalto.fi) and Prof. N. Asokan.

...

Federated Learning: Adaptive Attacks and Defenses in Model Watermarking ML & SEC

Note: Special Assignment Only

Watermarking deep neural networks has become a well-known approach to proving ownership of machine learning models in case they are stolen [1,2,3]. Watermarking methods should be robust against post-processing methods that aim to remove the watermark from the model; such post-processing can be applied by the adversary who stole the model once it is deployed as a service.

Unlike traditional machine learning approaches, federated learning trains machine learning models at the edge devices (referred to as clients or data owners) and then combines the results of all models into a single global model stored on a server [4]. Watermarking solutions can be integrated into federated learning when the server is the model owner and the clients are data owners [5]. However, unlike with post-processing methods, adversaries acting as malicious clients can directly manipulate the model during the training phase to remove the effect of the watermark from the global model. In this work, we will design an adaptive attacker that tries to remove the watermark during the training phase of federated learning and propose new defense strategies against this type of attacker.
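
For intuition, here is a minimal sketch in NumPy of one training round: plain FedAvg aggregation [4] followed by a server-side watermark re-embedding step, a simplified stand-in for WAFFLE-style retraining [5]. The shapes, the malicious update, and the toy watermark loss are invented for illustration:

    # Minimal sketch: FedAvg aggregation with a server-side watermark
    # re-embedding step. Shapes, the malicious update, and the toy
    # watermark loss are invented examples (cf. [4, 5]).
    import numpy as np

    def fedavg(client_weights):
        """Average client model weights into a global model (FedAvg [4])."""
        return np.mean(client_weights, axis=0)

    def embed_watermark(global_w, wm_grad_fn, steps=5, lr=0.1):
        """Server briefly retrains on its watermark set after each round,
        counteracting clients that push the model away from the watermark."""
        w = global_w
        for _ in range(steps):
            w = w - lr * wm_grad_fn(w)  # gradient step on the watermark loss
        return w

    # One round: honest clients send small updates; a malicious client sends
    # an update crafted to unlearn the watermark; the server averages, then
    # re-embeds.
    rng = np.random.default_rng(0)
    global_w = rng.normal(size=10)
    honest = [global_w + 0.01 * rng.normal(size=10) for _ in range(9)]
    malicious = [global_w - 0.5 * rng.normal(size=10)]  # crude "unlearning" push
    wm_target = rng.normal(size=10)
    wm_grad = lambda w: 2 * (w - wm_target)             # toy watermark loss gradient
    global_w = embed_watermark(fedavg(honest + malicious), wm_grad)

An adaptive attacker would instead craft its updates against the re-embedding step itself, which is exactly the arms race this topic studies.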

Notes:

1) You will be required to complete a programming pre-assignment as part of the interview process.

2) This topic can be structured as either a short (3-4 month) internship or a thesis topic, depending on the background and skills of the selected student.

Required skills:

  • Basic understanding of both standard ML techniques and deep neural networks (you should have taken at least the Machine Learning: Basic Principles course from the SCI department, or a similar course)

  • Good knowledge of mathematical methods and algorithmic skills

  • Strong programming skills in Python/Java/Scala/C/C++/JavaScript (Python preferred as the de facto language)

  • Sufficient skills to work and interact in English

Nice to have:

  • Familiarity with federated learning or statistics

  • Familiarity with deep learning frameworks (PyTorch, TensorFlow, Keras, etc.)

References:

[1] Adi, Yossi, et al. 2018. "Turning your weakness into a strength: Watermarking deep neural networks by backdooring." 27th USENIX Security Symposium.

[2] Zhang, Jialong, et al. 2018. "Protecting intellectual property of deep neural networks with watermarking." Proceedings of the 2018 on Asia Conference on Computer and Communications Security.

[3] Darvish Rouhani, et al. 2019. "DeepSigns: an end-to-end watermarking framework for ownership protection of deep neural networks." Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems.

[4] McMahan, Brendan, et al. 2017. "Communication-efficient learning of deep networks from decentralized data." Artificial Intelligence and Statistics (AISTATS), PMLR.

[5] Atli, Buse Gul, et al. 2020. "WAFFLE: Watermarking in Federated Learning." arXiv preprint arXiv:2008.07298.

For further information: Please contact Buse G. A. Tekgul (buse.atli@aalto.fi) and Prof. N. Asokan.

...

Attack-tolerant execution environments PLATSEC

...