
...

For further information: Contact Lachlan Gunn (lachlan.gunn@aalto.fi) and Prof. N. Asokan

Attack-tolerant execution environments PLATSEC

Modern computing devices contain a trusted execution environment (TEE) for handling sensitive operations and data, such as cryptographic keys, in an isolated environment. TEEs are intended to protect trusted applications from potentially dangerous application software, or even a compromised operating system. An important aspect of TEEs is remote attestation: the ability to provide evidence of the state of a TEE to a third party, for example to show that the trusted application it is communicating with is in fact running within a TEE.

Modern processors are complex and incorporate several mechanisms, such as caching, speculation, and out-of-order execution, to improve performance. Several recent attacks, like the well-known Spectre and Meltdown attacks, exploit this complexity to compromise the integrity and confidentiality of computation. Variants of these attacks, like Foreshadow, are applicable against TEEs. In particular, strong adversaries capable of mounting Foreshadow-like attacks can compromise not only data confidentiality but also remote attestation guarantees, by leaking the keys used by the platform.

In this project, you will explore how to mitigate the impact of such attacks. As a first step, we plan to explore how remote attestation guarantees can be retained even in the presence of attacks that compromise the confidentiality of trusted application data. We will use a TEE based on seL4, a microkernel operating system with formally proven isolation guarantees, together with a smaller FPGA-based fixed-function TEE used only for the cryptographic operations involved in the attestation protocol. Working with senior researchers, you will design and implement the FPGA-based TEE functionality that underpins this system.
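To make the attestation flow concrete, here is a minimal sketch of the kind of fixed-function operation the FPGA component could provide: measure a trusted application, bind the measurement to a verifier-supplied nonce, and return signed evidence. This is only an illustration of the general quote-and-verify pattern, not the project's actual protocol or interfaces; in particular, the HMAC with a pre-shared key stands in for the platform's real attestation key and signature scheme.

import hashlib
import hmac
import os

# Hypothetical pre-provisioned device attestation key (stands in for the
# platform signing key that the FPGA TEE would protect).
DEVICE_KEY = os.urandom(32)

def measure(trusted_app_binary: bytes) -> bytes:
    """Measurement of the trusted application: a hash of its binary."""
    return hashlib.sha256(trusted_app_binary).digest()

def quote(measurement: bytes, nonce: bytes) -> bytes:
    """Produce attestation evidence binding the measurement to the
    verifier's nonce (freshness)."""
    return hmac.new(DEVICE_KEY, measurement + nonce, hashlib.sha256).digest()

def verify(evidence: bytes, expected_measurement: bytes, nonce: bytes) -> bool:
    """Verifier-side check: recompute the expected evidence and compare."""
    expected = hmac.new(DEVICE_KEY, expected_measurement + nonce,
                        hashlib.sha256).digest()
    return hmac.compare_digest(evidence, expected)

# Example run: the verifier sends a fresh nonce, the device responds.
app = b"trusted application binary"
nonce = os.urandom(16)
evidence = quote(measure(app), nonce)
assert verify(evidence, measure(app), nonce)
assert not verify(evidence, measure(b"tampered binary"), nonce)

In the actual system, the evidence would be produced with an asymmetric signature under a key held only by the FPGA TEE, so that verifiers need only the corresponding public key.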

In the longer term, we want to explore more advanced hardware and software innovations that can retain application data confidentiality even in the presence of such strong adversaries.  This will involve modifying existing open-source processor designs to develop new defences against runtime and side-channel attacks.  Depending on your background and progress, this may also form part of this project.   

Requirements:

  • C programming experience 
  • Basic cryptographic knowledge (hash functions, digital signatures, etc.) 

Nice to have:

  • FPGA development experience 
  • Familiarity with computer architecture 

For further information: Contact Lachlan Gunn (lachlan.gunn@aalto.fi) and Prof. N. Asokan

Byzantine fault tolerance with rich fault models PLATSEC

Byzantine fault tolerance (BFT) has seen a resurgence due to the popularity of permissioned blockchains.  Unlike Bitcoin-style proof-of-work consensus, BFT provides immediate confirmation of requests/transactions.  Existing BFT protocols can generally tolerate up to a third of participants being faulty; Bitcoin, by contrast, tolerates attackers controlling up to a third of the total hash rate.

We are working to build a BFT system that can tolerate a richer variety of failure modes, for example:

  • Up to f nodes are malicious (this is the classical BFT setting)
  • Nodes controlling at most a given fraction of the total CPU power are malicious (this is like Bitcoin)
  • All nodes running software X are malicious (e.g., a zero-day vulnerability is found in some piece of software)
  • All nodes owned by company X become malicious (e.g., someone steals administrator credentials)
  • Several of the above in combination, each with its own threshold

In this project, you will develop systems that can tolerate "real-world" fault types like these, and gain experience in the development of distributed systems.
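As a rough illustration of what a richer fault model might look like, the sketch below describes nodes by attributes and expresses the fault assumption as a set of scenarios: a classical f-threshold plus attribute-based classes such as "all nodes running software X". A set of nodes is then an admissible fault set if some single scenario covers it. The attribute names, thresholds, and API here are hypothetical, not the project's actual design.

from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    name: str
    owner: str      # e.g. which company operates the node
    software: str   # e.g. which implementation it runs

# Hypothetical deployment of six nodes.
NODES = [
    Node("n1", owner="A", software="impl-x"),
    Node("n2", owner="A", software="impl-y"),
    Node("n3", owner="B", software="impl-x"),
    Node("n4", owner="B", software="impl-y"),
    Node("n5", owner="C", software="impl-x"),
    Node("n6", owner="C", software="impl-y"),
]

# Fault assumption: any ONE of these scenarios may occur.
FAULT_SCENARIOS = [
    ("threshold", 2),        # up to f = 2 arbitrary nodes
    ("software", "impl-x"),  # every node running impl-x is bad
    ("owner", "B"),          # every node owned by B is bad
]

def admissible_fault_set(faulty: set[Node]) -> bool:
    """True if some single scenario can explain all faulty nodes."""
    for kind, value in FAULT_SCENARIOS:
        if kind == "threshold" and len(faulty) <= value:
            return True
        if kind == "software" and all(n.software == value for n in faulty):
            return True
        if kind == "owner" and all(n.owner == value for n in faulty):
            return True
    return False

# Two arbitrary malicious nodes are covered by the threshold scenario...
assert admissible_fault_set({NODES[0], NODES[3]})
# ...and so are all three nodes running impl-x, even though that exceeds f.
assert admissible_fault_set({NODES[0], NODES[2], NODES[4]})
# But this mixed set of five nodes is not explained by any single scenario.
assert not admissible_fault_set({NODES[0], NODES[1], NODES[2], NODES[3], NODES[5]})

A protocol built on such a model would then require quorums whose pairwise intersections contain at least one node outside every admissible fault set, generalizing the usual 2f+1-of-3f+1 quorum rule (cf. reference 2 below).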

Requirements:

  • Basic network programming experience

Nice to have:

  • Knowledge of C++ and/or Rust
  • Theoretical distributed systems knowledge

References:

1: M Castro, B Liskov, "Practical Byzantine fault tolerance", Proceedings of OSDI'99.
2: D Malkhi, M Reiter, "Byzantine quorum systems", Proceedings of STOC'97.

For further information: Please contact Lachlan Gunn (lachlan.gunn@aalto.fi)

Research Topics with our Industry Partners

Nokia Bell Labs: Domain-specific threat modeling framework for mobile communication systems NETSEC

At Nokia Bell Labs, we are developing a domain-specific threat modeling framework for mobile communication systems. The framework is designed to model adversarial behavior in terms of its attack phases and to serve as a common taxonomy matrix. Taking inspiration from the MITRE ATT&CK framework, we have systematically organized publicly known attacks into tactics and techniques, gleaned from a large and diverse body of security literature, to form the matrix [1, 2].

This work has already attracted the attention of many key players in the mobile communication industry and is picking up momentum. Nokia Bell Labs is pioneering and leading the next steps by transforming this research activity into a community-driven initiative. We are looking for a student intern to contribute to these efforts; the work can also form part of your master's thesis or research topic. The student will work with the security research team of Nokia Bell Labs in Espoo.

Tasks include:

  • Building a web tool to annotate, visualize, and export the threat matrix in a shareable format (e.g., JSON; a hypothetical example entry is sketched after this list). An example of such a web-based tool can be found here
  • Modeling publicly known attacks on mobile communication systems using the matrix and the web tool
    • Deriving qualitative and quantitative insights from such attack statistics
  • Contributing towards extending the threat model and the necessary backend infrastructure
  • Contributing to research (conference) publications
  • Collaborating with other stakeholders (e.g., mobile operators and equipment vendors)
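For illustration, the snippet below sketches what a single exported matrix entry might look like as JSON. The field names, identifier, and values are hypothetical and do not reflect the framework's actual schema.

import json

# Hypothetical entry in an ATT&CK-style threat matrix for mobile networks.
entry = {
    "technique_id": "MOB-T001",   # made-up identifier
    "name": "Subscriber location tracking via signalling queries",
    "tactic": "Reconnaissance",
    "interfaces": ["SS7", "Diameter"],
    "references": ["Rao et al., 'Threat modeling framework for mobile communication systems'"],
}

# Export in a shareable format, as the web tool would.
print(json.dumps(entry, indent=2))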

Required background and skills

  • Full-stack web app development skills, preferably with Node.js or Python.
  • Experience with version control (e.g., Git, SVN) and Docker containers.
  • Previous contributions to open-source software and communities are very beneficial.
  • A background in information and network security is NOT required, but interest in these topics is very much expected.
  • Students from underrepresented communities (e.g., LGBTQ) are encouraged to apply. Please mention your preferred pronouns in the application materials.

Please contact Sid Rao (sid.rao@nokia-bell-labs.com) or Yoan Miche (yoan.miche@nokia-bell-labs.com) if you have any further questions.  

References:

[1] Rao, Siddharth Prakash, Silke Holtmanns, and Tuomas Aura. "Threat modeling framework for mobile communication systems." arXiv preprint.

[2] Patrick Donegan. "Harden Stance briefing – An ATT&CK-like framework for telcos". [Link]

Huawei: CV/NLP Research Intern

Posted on October 28

We are looking for a motivated Research Intern who can take part in and contribute to our research in the area of Natural Language Processing (NLP), speech recognition or Computer Vision (CV).

Requirements

  • Be a master's student or PhD candidate (preferred) in computer science, mathematics, statistics, computational linguistics, computer vision, ML & security, or related areas.
  • Have course credits in AI/ML.
  • Be excited about using AI/ML in NLP, computer vision, or speech recognition.
  • Have good software and Python programming skills.
  • Have a good level of English.

More information: https://apply.workable.com/huawei-technologies-oy-finland/j/FD9951B7B2/

Huawei: AI/NLP Research Intern

Posted on October 28

We are looking for a motivated AI/NLP Research Intern who can take part in and contribute to our research in the area of Natural Language Processing (NLP) or AI/machine learning, intersecting with cloud/application security or privacy.

The start time of this trainee position is negotiable; the duration is 4-6 months.

Requirements

  • Be a master's student or PhD candidate (preferred) in computer science, mathematics, computational linguistics, machine learning & security or related areas.
  • Have course credits in AI/ML and Security.
  • Be excited about using AI/ML/NLP in the context of security or privacy, such as federated learning, web security, or end-to-end encrypted applications.
  • Have good software and Python programming skills.
  • Have a good level of English.

More information: https://apply.workable.com/huawei-technologies-oy-finland/j/1C8D4EF809/

Reserved Research Topics

Reserved topics should be moved to this section and taken away from the available sections.

Dataset Watermarking ML & SEC 

Suppose Alice has collected a valuable dataset. She licenses it to Bob to be used for specific purposes (e.g., computing various aggregate statistics, or evaluating different data anonymization techniques). Alice does not want Bob to build a machine learning (ML) model using her dataset and monetize the model, for example by making its prediction API accessible via a cloud server to paying customers. Alice wants a mechanism such that, if Bob misbehaves by building a model and making it available to customers, she can demonstrate to an independent verifier that Bob's model was in fact built using her dataset. This is the problem of dataset watermarking.

A related problem is ML model watermarking, where Alice inserts a watermark into the ML models she builds, which she can later use to demonstrate her ownership of a model. Watermarking is intended to deter model theft. In recent years, a number of ML model watermarking techniques have been proposed, against both white-box model theft [1], where the adversary (Bob) steals the model physically, and black-box model extraction [2], where the adversary builds a surrogate model by querying the prediction API of Alice's victim model.
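To give a flavour of the black-box extraction setting, the toy sketch below queries a victim classifier's prediction API and fits a surrogate on the returned labels. The victim model, probe distribution, and training loop are made up purely for illustration and are not the method of [2].

import numpy as np

rng = np.random.default_rng(0)

# Victim model: a linear classifier the attacker can only query.
w_victim = np.array([1.5, -2.0])

def prediction_api(x: np.ndarray) -> np.ndarray:
    """Black-box prediction API: returns labels only, no parameters."""
    return (x @ w_victim > 0).astype(float)

# Attacker: query the API on probe inputs...
queries = rng.normal(size=(1000, 2))
labels = prediction_api(queries)

# ...and fit a surrogate by logistic regression (plain gradient descent).
w_surrogate = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-queries @ w_surrogate))
    w_surrogate -= 0.1 * queries.T @ (p - labels) / len(queries)

# The surrogate now agrees with the victim on most fresh inputs.
test = rng.normal(size=(1000, 2))
agreement = np.mean(prediction_api(test) == (test @ w_surrogate > 0))
print(f"surrogate/victim agreement: {agreement:.2%}")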

While model watermarking has been studied extensively, dataset watermarking is relatively under-explored. Recently, two simple dataset watermarking methods have been proposed, for image [3] and audio [4] classification datasets.

In this internship project, you will have the opportunity to work with senior researchers to analyse existing techniques for dataset watermarking, attempt to design circumvention methods against them (for example, by statistical analysis of the latent space), and explore whether existing techniques can be improved, or whether entirely new techniques can be designed that are effective and robust against such circumvention approaches.
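As a toy illustration of the general idea (not the specific methods of [3] or [4]), the sketch below embeds a watermark into a small fraction of an image classification dataset by stamping a fixed trigger pattern and relabeling the stamped samples; a model trained on the data would then reveal the watermark by responding to the trigger. All shapes, rates, and labels here are invented.

import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 10
TARGET_CLASS = 7          # watermarked samples are relabeled to this class
WATERMARK_RATE = 0.02     # fraction of the dataset that carries the trigger

def stamp_trigger(image: np.ndarray) -> np.ndarray:
    """Stamp a small fixed pattern into the bottom-right corner."""
    marked = image.copy()
    marked[-3:, -3:] = 1.0
    return marked

def watermark_dataset(images: np.ndarray, labels: np.ndarray):
    """Return a watermarked copy of (images, labels) plus the trigger indices."""
    n = len(images)
    idx = rng.choice(n, size=int(WATERMARK_RATE * n), replace=False)
    wm_images, wm_labels = images.copy(), labels.copy()
    for i in idx:
        wm_images[i] = stamp_trigger(wm_images[i])
        wm_labels[i] = TARGET_CLASS
    return wm_images, wm_labels, idx

def verify(predict, probe_images: np.ndarray) -> float:
    """Fraction of trigger-stamped probes that a suspect model assigns to
    TARGET_CLASS; values far above 1/NUM_CLASSES suggest the model was
    trained on the watermarked dataset."""
    stamped = np.stack([stamp_trigger(x) for x in probe_images])
    return float(np.mean(predict(stamped) == TARGET_CLASS))

# Demonstration on random "images"; a real pipeline would train a model on
# the watermarked data and pass its prediction function to verify().
images = rng.random(size=(500, 28, 28))
labels = rng.integers(0, NUM_CLASSES, size=500)
wm_images, wm_labels, _ = watermark_dataset(images, labels)

unrelated_model = lambda x: rng.integers(0, NUM_CLASSES, size=len(x))
print("trigger response of an unrelated model:", verify(unrelated_model, images[:100]))

Circumvention work in the project would then ask, for example, whether such triggers can be detected and removed by statistical analysis before training.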

Notes:

1) You will be required to complete a programming pre-assignment as part of the interview process.

2) This topic can be structured as either a short (3-4 month) internship or a thesis topic, depending on the background and skills of the selected student.

If you are a CCIS MSc student (majoring in security, ML, or a related discipline), and have the following "required skills", this internship will suit you. The "nice-to-have" skills listed below will be strong plusses.

Required skills:

  • Basic understanding of both standard ML techniques and deep neural networks (you should have taken at least the Machine Learning: Basic Principles course from the SCI department, or a similar course)

  • Good knowledge of mathematical methods and strong algorithmic skills

  • Strong programming skills in Python/Java/Scala/C/C++/JavaScript (Python preferred, as the de facto language in this area)

  • Strong ability to work and interact in English

Nice to have:

  • Familiarity with statistical signal processing

  • Familiarity with deep learning frameworks (PyTorch, TensorFlow, Keras, etc.)

References: 

[1] Adi, Yossi, et al. 2018. "Turning your weakness into a strength: Watermarking deep neural networks by backdooring." 27th USENIX Security Symposium.

[2] Szyller, Sebastian, et al. 2019. "DAWN: Dynamic adversarial watermarking of neural networks."

[3] Sablayrolles, Alexandre, et al. 2020. "Radioactive data: tracing through training." (Open source Github code)

[4] Kim, Wansoo, and Kyogu Lee. 2020. "Digital Watermarking For Protecting Audio Classification Datasets." ICASSP IEEE 2020.

For further information: Please contact Buse G. A. Tekgul (buse.atli@aalto.fi) and Prof. N. Asokan.

Black-box Adversarial Attacks on Neural Policies  ML & SEC

It is well known that machine learning models are vulnerable to inputs constructed by adversaries to force misclassification. Adversarial examples have been studied extensively in image classification models based on deep neural networks (DNNs). Recent work [1] shows that agents trained with deep reinforcement learning (DRL) algorithms are also susceptible to adversarial examples, since DRL uses neural networks to represent the policy used in decision making [2]. Existing adversarial attacks [3-6] corrupt the agent's observations by perturbing sensor readings, so that the resulting observations differ from the true environment the agent interacts with.

This work will focus on black-box adversarial attacks, where the attacker has no knowledge of the agent's internals and can only observe how the agent interacts with the environment. We will analyze how to use and combine imitation learning, apprenticeship learning via inverse reinforcement learning [7], and the transferability property of adversarial examples to generate black-box adversarial attacks against DRL.
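As a rough sketch of the transfer-based approach, the code below clones a surrogate policy from observed (state, action) pairs by behavioral cloning, then crafts an FGSM-style perturbation against the surrogate in the hope that it transfers to the real agent. The environment, policy shapes, and hyperparameters are invented for illustration and are not the project's actual setup.

import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, EPS = 4, 0.05

# Stand-in for the victim agent: the attacker can only watch it act.
w_true = rng.normal(size=STATE_DIM)
victim_action = lambda s: int(s @ w_true > 0)

# Step 1: behavioral cloning. Record (state, action) pairs from observation...
states = rng.normal(size=(2000, STATE_DIM))
actions = np.array([victim_action(s) for s in states], dtype=float)

# ...and fit a surrogate policy (logistic regression via gradient descent).
w_sur = np.zeros(STATE_DIM)
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-states @ w_sur))
    w_sur -= 0.1 * states.T @ (p - actions) / len(states)

# Step 2: FGSM on the surrogate. Push the state in the direction that flips
# the surrogate's preferred action, hoping the perturbation transfers.
def attack(state: np.ndarray) -> np.ndarray:
    p = 1.0 / (1.0 + np.exp(-state @ w_sur))
    grad = (p - round(p)) * w_sur          # gradient of loss w.r.t. the input
    return state + EPS * np.sign(grad)

# Step 3: check transfer to the (black-box) victim on borderline states.
test = rng.normal(size=(1000, STATE_DIM)) * 0.1
flipped = np.mean([victim_action(s) != victim_action(attack(s)) for s in test])
print(f"victim actions flipped by transferred perturbation: {flipped:.1%}")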

Note: There's going to be a programming pre-assignment.

Required skills:

  • MSc student in security, computer science, machine learning, robotics, or automated systems

  • Basic understanding of both standard ML techniques and deep neural networks (you should have taken at least the Machine Learning: Basic Principles course from the SCI department, or a similar course)

  • Good knowledge of mathematical methods and strong algorithmic skills

  • Strong programming skills in Python/Java/Scala/C/C++/JavaScript (Python preferred, as the de facto language in this area)

Nice to have:

  • Familiarity with at least one of the following topics: adversarial machine learning, reinforcement learning, statistics, control theory, artificial intelligence

  • Familiarity with deep learning frameworks (PyTorch, TensorFlow, Keras, etc.)

  • Sufficient skills to work and interact in English

  • An interest in arcade games

References:

[1] Huang, Sandy, et al. 2017. "Adversarial attacks on neural network policies."

[2] Mnih, Volodymyr, et al.  "Human-level control through deep reinforcement learning." Nature 518.7540 (2015): 529.

[3] Lin, Yen-Chen, et al. IJCAI-2017. "Tactics of adversarial attack on deep reinforcement learning agents."

[4] Kos, Jernej, and Dawn Song. 2017.  "Delving into adversarial attacks on deep policies." 

[5] Xiao, Chaowei, et al. 2019. "Characterizing Attacks on Deep Reinforcement Learning." 

[6] Hussenot, Léonard, et al. 2019. "CopyCAT: Taking Control of Neural Policies with Constant Attacks." arXiv:1905.12282.

[7] B. Piot, et al. 2017. "Bridging the Gap Between Imitation Learning and Inverse Reinforcement Learning." IEEE Transactions on Neural Networks and Learning Systems.

For further information: Please contact Buse G. A. Tekgul (buse.atli@aalto.fi) and Prof. N. Asokan.
