...

Other systems security research themes


Table of Contents


...

Fingerprinting Neural Network Models Using Their Learned Representations ML & SEC

Creators of machine learning (ML) models need ways to demonstrate their ownership if their models are stolen. Watermarking has been proposed as one technique for demonstrating ownership. For instance, several recent proposals watermark deep neural network (DNN) models using backdooring: training them on additional, deliberately mislabeled data [a,b]. A minimal illustration of this idea follows below.
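
As a rough, self-contained sketch of the backdooring idea (not the exact procedure of [a] or [b]; the dataset sizes, trigger images, and variable names below are invented for illustration), a watermark trigger set can be mixed into the training data like this:

    import torch
    from torch.utils.data import ConcatDataset, TensorDataset

    n_classes = 10

    # Stand-in for the real (clean) training set.
    clean_train_set = TensorDataset(
        torch.rand(1000, 3, 32, 32),
        torch.randint(0, n_classes, (1000,)),
    )

    # Trigger set: inputs unrelated to the task, paired with deliberately
    # "wrong" labels chosen by the model owner. The trigger->label mapping
    # is the watermark.
    trigger_set = TensorDataset(
        torch.rand(100, 3, 32, 32),              # e.g. abstract noise images
        torch.randint(0, n_classes, (100,)),     # owner-chosen mislabels
    )

    # Training on the union backdoors the model: it memorises the trigger
    # labels, and reproducing them later serves as the proof of ownership.
    watermarked_train_set = ConcatDataset([clean_train_set, trigger_set])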

In contrast to watermarking, fingerprinting would make it possible to identify ML models and prove their ownership without modifying them (i.e., without embedding a watermark). Several methods have recently been introduced to extract and compare the representations learned by ML models, especially neural networks (NNs) [c]. The goal of this project is to assess whether NN representations can serve as a reliable fingerprint for NN models. It entails implementing and comparing existing representation-similarity measures, such as CKA [d], (SV)CCA [e], and cosine similarity, and evaluating their ability to capture both the high similarity between models derived from the same model (e.g., through fine-tuning, pruning, or distillation [f]) and the low similarity between independently trained models. A minimal sketch of one such measure follows below.
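
To make the comparison concrete, here is a minimal PyTorch sketch of linear CKA as defined by Kornblith et al. [d]; the toy activation matrices in the usage example are invented for illustration, and in practice the activations would be captured (e.g., via forward hooks) from a chosen layer of each model on the same probe inputs:

    import torch

    def linear_cka(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        """Linear CKA between activation matrices of shape
        (n_samples, n_features). Values near 1 indicate highly
        similar representations."""
        # Centre each feature across the sample axis.
        x = x - x.mean(dim=0, keepdim=True)
        y = y - y.mean(dim=0, keepdim=True)
        # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F), per [d].
        return (torch.linalg.norm(y.T @ x) ** 2
                / (torch.linalg.norm(x.T @ x) * torch.linalg.norm(y.T @ y)))

    # Toy usage: a matrix compared with a noisy copy of itself scores
    # near 1, while two independent random matrices score much lower.
    a = torch.randn(256, 512)
    b = torch.randn(256, 512)
    print(linear_cka(a, a + 0.01 * torch.randn_like(a)))  # close to 1
    print(linear_cka(a, b))                               # much lower

A fingerprinting scheme along these lines would flag a suspect model as derived from the original when its layer-wise similarity scores are consistently high on the same probe inputs.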

Requirements:

  • Programming skill in Python / PyTorch
  • Knowledge of Deep Learning, DNN/CNN model training and parametrisation 
  • Knowledge about image classification and how to train such models
  • Knowledge of fine-tuning and pruning is a plus

References:

  • [a] Adi, Y., Baum, C., Cisse, M., Pinkas, B., Keshet, J.: Turning your weakness into a strength: Watermarking deep neural networks by backdooring. In: 27th USENIX Security Symposium (USENIX Security 18). pp. 1615–1631 (2018)
  • [b] Zhang, J., Gu, Z., Jang, J., Wu, H., Stoecklin, M.P., Huang, H., Molloy, I.: Protecting intellectual property of deep neural networks with watermarking. In: Proceedings of the 2018 Asia Conference on Computer and Communications Security. pp. 159–172 (2018)
  • [c] Morcos, A., Raghu, M., Bengio, S.: Insights on representational similarity in neural networks with canonical correlation. In: Advances in Neural Information Processing Systems. pp. 5727–5736 (2018)
  • [d] Kornblith, S., Norouzi, M., Lee, H., Hinton, G.: Similarity of neural network representations revisited. arXiv preprint arXiv:1905.00414 (2019)
  • [e] Raghu, M., Gilmer, J., Yosinski, J., Sohl-Dickstein, J.: SVCCA: Singular vector canonical correlation analysis for deep learning dynamics and interpretability. In: Advances in Neural Information Processing Systems. pp. 6076–6085 (2017)
  • [f] Liu, K., Dolan-Gavitt, B., Garg, S.: Fine-pruning: Defending against backdooring attacks on deep neural networks. In: International Symposium on Research in Attacks, Intrusions, and Defenses. pp. 273–294. Springer (2018)

For further information: Please contact Samuel Marchal (samuel.marchal@aalto.fi)

Locality-Sensitive Neural-Based Perceptual Hashing and Contextual Hashing ML & SEC

...