...

For further information: Sebastian Szyller (sebastian.szyller@aalto.fi) and Prof. N. Asokan.

Guarantees of Differential Privacy in Overparameterised Models ML & SEC

...

In this work, we will analyse the shortcomings of existing methods for incorporating differential privacy into machine learning models, and work towards a novel technique.
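To make the starting point concrete: the most common way of incorporating differential privacy into model training is the DP-SGD recipe of Abadi et al. [5], which clips each per-example gradient and adds Gaussian noise before the update. The sketch below is purely illustrative (the function name and parameter values are our own, not part of the project description):

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                lr=0.1, rng=None):
    """One illustrative DP-SGD update (after Abadi et al. [5]):
    clip each per-example gradient to clip_norm, average, add
    Gaussian noise scaled by noise_multiplier, and return the step."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down only if its L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise standard deviation follows the standard sigma = z * C / batch_size form.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads),
                       size=avg.shape)
    return lr * (avg + noise)
```

The privacy/utility tension this project targets is visible even here: the clipping bound and noise multiplier that yield a meaningful privacy guarantee also distort the averaged gradient, and the effect grows with model size.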

...

Requirements

  • MSc students in security, computer science, or machine learning
  • Familiarity with both standard ML techniques and deep neural networks
  • Good maths and algorithmic skills (statistics and probability in particular)
  • Strong programming skills in Python/Java/Scala/C/C++/JavaScript (Python preferred as the de facto language of ML)

Nice to have

  • Industry experience in software engineering or a related field
  • Research experience
  • Familiarity with differential privacy and/or anonymization techniques

...

[2]: Reza Shokri and Vitaly Shmatikov. 2015. Privacy-Preserving Deep Learning. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, 1310–1321.

[3]: Congzheng Song and Vitaly Shmatikov. 2019. Overlearning Reveals Sensitive Attributes.

[4]: Mathieu Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana. 2019. Certified Robustness to Adversarial Examples with Differential Privacy. In IEEE Symposium on Security and Privacy (SP).

[5]: Martin Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. Deep Learning with Differential Privacy. arXiv e-prints.

For further information: Sebastian Szyller (sebastian.szyller@aalto.fi) and Prof. N. Asokan.

Ineffectiveness of Non-label-flipping Defenses and Watermarking Schemes Against Model Extraction Attacks ML & SEC

...