CloSer Workshop 2017: Posters and Demos


Private Membership Test Protocol with Low Communication Complexity

Poster and Demo. Sara Ramezanian

We introduce a protocol with low communication complexity that enables clients to search a server's database privately. We use homomorphic encryption to hide the client's search item while performing the computation on the server's database.
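The core idea can be sketched with an additively homomorphic scheme: the client encrypts its item x, and for each database entry d the server homomorphically computes an encryption of r·(x − d) for a fresh random r, which decrypts to 0 exactly when x = d. The toy Paillier implementation and the function names below are illustrative assumptions, not the protocol from the poster (which additionally compresses communication); the key sizes are far too small for real security.

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic); parameters are
# deliberately tiny and only illustrate the mechanism.
def keygen(p=1117, q=1123):
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)              # valid because we fix g = n + 1 below
    return n, (lam, mu)

def enc(n, m):
    n2 = n * n
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (1 + m * n) * pow(r, n, n2) % n2   # g^m * r^n with g = n + 1

def dec(n, lam, mu, c):
    n2 = n * n
    return (pow(c, lam, n2) - 1) // n * mu % n

# Client side: encrypt the search item; the server never sees x in the clear.
def client_query(n, x):
    return enc(n, x)

# Server side: for each database item d, compute Enc(r * (x - d)).
# The random scalar r hides the value of x - d; the ciphertext decrypts
# to 0 if and only if x == d.
def server_respond(n, c_x, database):
    n2 = n * n
    out = []
    for d in database:
        c_diff = c_x * enc(n, (-d) % n) % n2          # Enc(x) * Enc(-d) = Enc(x - d)
        out.append(pow(c_diff, random.randrange(1, n), n2))  # homomorphic scalar mult
    random.shuffle(out)               # hide which entry (if any) matched
    return out

def client_check(n, lam, mu, responses):
    return any(dec(n, lam, mu, c) == 0 for c in responses)
```

Note that this naive variant returns one ciphertext per database entry; achieving low communication complexity requires batching or compressed set representations on top of it.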

The Circle Game: Scalable Private Membership Test Using Trusted Hardware

Poster. Sandeep Tamrakar and Andrew Paverd

Malware checking is changing from being a local service to a cloud-assisted one where users’ devices query a cloud server, which hosts a dictionary of malware signatures, to check if particular applications are potentially malware. Whilst such an architecture gains all the benefits of cloud-based services, it opens up a major privacy concern since the cloud service can infer personal traits of the users based on the lists of applications queried by their devices.

In the poster, we present a simple PMT approach using a carousel: circling the entire dictionary through trusted hardware on the cloud server. We show how the carousel approach, using different data structures to represent the dictionary, can be realized on two different commercial hardware security architectures (ARM TrustZone and Intel SGX). Through extensive experimental analysis, we show that our carousel approach surprisingly outperforms Path ORAM on the same hardware by supporting a much higher query arrival rate while guaranteeing acceptable response latency for individual queries.
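The carousel idea can be sketched in a few lines: the trusted hardware streams the entire dictionary past a batch of pending queries, so its memory-access pattern depends only on the dictionary, not on the queries. This is a simplified assumption-laden sketch (plain list instead of the data structures discussed in the poster, and no enclave code):

```python
# Minimal sketch of the carousel: one full linear pass over the
# dictionary updates a hit bit for every pending query. The access
# pattern is independent of the queries, which is what hides them
# from the (untrusted) host observing the trusted hardware.
def carousel_pmt(dictionary, queries):
    hits = [0] * len(queries)
    for entry in dictionary:              # the "carousel" pass
        for i, q in enumerate(queries):
            hits[i] |= int(entry == q)    # update regardless of match
    return [bool(h) for h in hits]
```

Batching many queries per carousel revolution is what makes the approach scale to high query arrival rates, at the cost of response latency bounded by one revolution.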

Privacy-Preserving Deep Learning Prediction

Poster, Demo, and Paper. Jian Liu

Cloud-based prediction models are increasingly popular but put privacy at risk: clients sending prediction requests must disclose potentially sensitive data to the server. In this paper, we explore the problem of privacy-preserving predictions: after each prediction, the server learns nothing about the clients' input, and the clients learn nothing about the model. We present a technique that transforms a trained neural network into its privacy-preserving form.
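One common building block for such transformations is evaluating a linear layer on additively masked inputs, split into an offline phase (input-independent precomputation) and a fast online phase. The sketch below is a generic illustration under strong assumptions (a trusted dealer, a single linear layer, and no model hiding), not the paper's actual protocol; all function names are hypothetical.

```python
import random

P = 2**31 - 1  # illustrative prime modulus for field arithmetic

def matvec(W, v):
    # W·v over the field Z_P
    return [sum(w * x for w, x in zip(row, v)) % P for row in W]

def dealer(W, dim):
    # Offline phase: a trusted dealer samples a random mask r and gives
    # the client (r, W·r). Both are independent of the client's input.
    r = [random.randrange(P) for _ in range(dim)]
    return r, matvec(W, r)

def client_online(x, r):
    # Client sends only x - r, which is uniformly random mod P,
    # so the server learns nothing about x.
    return [(a - b) % P for a, b in zip(x, r)]

def server_online(W, masked):
    return matvec(W, masked)          # W·(x - r)

def client_finish(server_out, Wr):
    # W·(x - r) + W·r = W·x : the client recovers the layer output.
    return [(a + b) % P for a, b in zip(server_out, Wr)]
```

In this simplified form only the client's input is hidden; protecting the model and handling non-linear activations require the heavier machinery of the full protocol.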