...

Other systems security research themes


...

Real-Time Universal Adversarial Attacks on Deep Policies (ML & SEC)

It is well known that machine learning models are vulnerable to inputs that adversaries construct to force misclassification. Such adversarial examples [1] have been studied extensively in image classifiers based on deep neural networks (DNNs). Recent work [2] shows that agents trained with deep reinforcement learning (DRL) algorithms are also susceptible to adversarial examples, since DRL uses neural networks to learn a policy for decision making [3]. Existing adversarial attacks [4-8] corrupt the agent's observation by perturbing its sensor readings, so that the observation differs from the true state of the environment the agent interacts with.
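For intuition on how such inputs are crafted, here is a minimal sketch of the fast gradient sign method from [1] in PyTorch; model, x, y, and epsilon are generic placeholders rather than components of any particular agent in this project.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.01):
        # Clone the input and track gradients with respect to it.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Step by epsilon in the direction of the gradient sign; the
        # perturbation is bounded by epsilon in the L-infinity norm.
        return (x + epsilon * x.grad.sign()).detach()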

Current adversarial attacks against DRL agents [4-8] are generated per observation, which is infeasible in practice due to the dynamic nature of RL: crafting an observation-specific adversarial example is usually slower than the agent's sampling rate, introducing a delay between old and new observations. The goal of this work is to design an approach for creating universal, additive adversarial perturbations that require almost no computation at attack time. The work will focus on pre-computed attack strategies that can modify the input before it is observed by the RL agent. We will analyze the effect of universal adversarial perturbations on Atari games, continuous-control simulators, and self-driving-car simulators trained with different DRL algorithms.
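To make the real-time property concrete, the sketch below shows how a pre-computed universal perturbation could be added to every observation at essentially zero per-step cost; the file name, frame shape, and pixel range are illustrative assumptions, not this project's design.

    import numpy as np

    # Hypothetical perturbation optimized offline over recorded observations
    # (e.g. an 84x84 Atari-style frame); loaded once before the episode.
    delta = np.load("universal_perturbation.npy")

    def perturb(observation, epsilon=8 / 255):
        # Per step the attack costs one addition and two clips, so it
        # trivially keeps up with the agent's sampling rate.
        adv = observation + np.clip(delta, -epsilon, epsilon)
        return np.clip(adv, 0.0, 1.0)  # stay inside the valid pixel range

Because delta is fixed across observations, no gradient computation happens at attack time; all optimization is moved offline.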

...

Required skills:

  • MSc students in security, computer science, machine learning, robotics, or automated systems

  • Basic understanding of standard ML techniques as well as deep neural networks (you should have completed at least the Machine Learning: Basic Principles course from the SCI department, or a comparable course)

  • Good knowledge of mathematical methods and strong algorithmic skills

  • Strong programming skills in Python/Java/Scala/C/C++/JavaScript (Python preferred as the de facto language of the field)

Nice to have:

  • Familiarity with one of the following topics: adversarial machine learning, reinforcement learning, statistics, control theory, artificial intelligence

  • Familiarity with deep learning frameworks (PyTorch, TensorFlow, Keras, etc.)

  • Sufficient skills to work and interact in English

  • An interest in doing research, and in arcade games

...

[1] Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples." arXiv preprint arXiv:1412.6572 (2014).

[2] Huang, Sandy, et al. "Adversarial attacks on neural network policies." arXiv preprint arXiv:1702.02284 (2017).

[3] Mnih, Volodymyr, et al. "Human-level control through deep reinforcement learning." Nature 518.7540 (2015): 529-533.

[4] Lin, Yen-Chen, et al. "Tactics of adversarial attack on deep reinforcement learning agents." arXiv preprint arXiv:1703.06748 (2017).

[5] Behzadan, Vahid, and Arslan Munir. "Vulnerability of deep reinforcement learning to policy induction attacks." International Conference on Machine Learning and Data Mining in Pattern Recognition. Springer, Cham, 2017.

[6] Kos, Jernej, and Dawn Song. "Delving into adversarial attacks on deep policies." arXiv preprint arXiv:1705.06452 (2017).

[7] Tretschk, Edgar, Seong Joon Oh, and Mario Fritz. "Sequential attacks on agents for long-term adversarial goals." arXiv preprint arXiv:1805.12487 (2018).

[8] Xiao, Chaowei, et al. "Characterizing attacks on deep reinforcement learning." arXiv preprint arXiv:1907.09470 (2019).

For further information: please contact Buse G. A. Tekgul (buse.atli@aalto.fi) and Prof. N. Asokan.

...