George Danezis

Researcher at University College London

Publications: 213
Citations: 12,903

George Danezis is an academic researcher from University College London. The author has contributed to research in topics: Anonymity & Traffic analysis. The author has an h-index of 59 and has co-authored 209 publications receiving 11,516 citations. Previous affiliations of George Danezis include the University of Cambridge & Microsoft.

Papers
Journal ArticleDOI

PriPAYD: Privacy-Friendly Pay-As-You-Drive Insurance

TL;DR: This work presents PriPAYD, a system in which premium calculations are performed locally in the vehicle and only aggregated data are sent to the insurance company, so no location information is leaked.
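
A minimal sketch of the idea in this summary, under hypothetical assumptions (the tariff rates, record fields, and message format below are illustrative, not taken from the paper): the premium is computed from raw GPS records inside the vehicle, and only the aggregated bill leaves it.

```python
from dataclasses import dataclass

@dataclass
class GpsRecord:
    # One raw location sample, kept only inside the vehicle's on-board unit.
    lat: float
    lon: float
    km_driven: float
    night_time: bool   # night driving priced higher in this toy tariff

# Hypothetical tariff rates (illustrative only).
DAY_RATE_PER_KM = 0.05
NIGHT_RATE_PER_KM = 0.09

def local_premium(records: list[GpsRecord]) -> float:
    """Compute the premium on the on-board unit; raw locations never leave it."""
    total = 0.0
    for r in records:
        rate = NIGHT_RATE_PER_KM if r.night_time else DAY_RATE_PER_KM
        total += rate * r.km_driven
    return round(total, 2)

def message_to_insurer(records: list[GpsRecord]) -> dict:
    # Only the aggregated figure is transmitted; no coordinates are included.
    return {"billing_period": "2024-01", "premium_eur": local_premium(records)}

if __name__ == "__main__":
    trip = [GpsRecord(51.52, -0.13, 12.4, False), GpsRecord(51.53, -0.12, 8.1, True)]
    print(message_to_insurer(trip))
```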
Proceedings ArticleDOI

Sphinx: A Compact and Provably Secure Mix Format

TL;DR: Sphinx is a cryptographic message format used to relay anonymized messages within a mix network. It is more compact than any comparable scheme and supports a full set of security features: indistinguishable replies, hiding the path length and relay position, and unlinkability for each leg of a message's journey over the network.
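
The toy sketch below only illustrates the layered ("onion") wrapping that a mix format such as Sphinx builds on; it is not the Sphinx construction itself (a real design derives per-hop keys and compact headers cryptographically and would never use this SHA-256 XOR keystream), just a runnable picture of one layer being added and stripped per relay.

```python
import hashlib
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream from SHA-256 in counter mode (illustrative only, not secure)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def wrap(message: bytes, path_keys: list[bytes]) -> tuple[bytes, list[bytes]]:
    """Add one encryption layer per relay; the last relay's layer goes on first."""
    nonces = []
    packet = message
    for key in reversed(path_keys):
        nonce = os.urandom(16)
        packet = xor(packet, keystream(key, nonce, len(packet)))
        nonces.append(nonce)
    return packet, list(reversed(nonces))

def unwrap_one(packet: bytes, key: bytes, nonce: bytes) -> bytes:
    """Each relay strips exactly one layer with its own key."""
    return xor(packet, keystream(key, nonce, len(packet)))

if __name__ == "__main__":
    keys = [b"relay-1-key", b"relay-2-key", b"relay-3-key"]
    pkt, nonces = wrap(b"hello mixnet", keys)
    for k, n in zip(keys, nonces):   # relays process the packet in path order
        pkt = unwrap_one(pkt, k, n)
    print(pkt)  # b"hello mixnet"
```

In a real mix format the per-hop nonces and routing information travel inside the packet header rather than alongside it; they are passed separately here only to keep the sketch short.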
Book ChapterDOI

Quantifying location privacy: the case of sporadic location exposure

TL;DR: In this article, the authors propose a systematic way to quantify users' location privacy by modeling both the location-based applications and the location-privacy preserving mechanisms (LPPMs), and by considering a well-defined adversary model.
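
A worked toy example of the kind of metric such a framework can use, assuming location privacy is measured as the adversary's expected estimation error; the grid coordinates, distances, and posterior below are hypothetical.

```python
import math

def distance(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Euclidean distance on a toy grid (km); a real model would use geographic distance."""
    return math.dist(a, b)

def expected_error(true_loc: tuple[float, float],
                   posterior: dict[tuple[float, float], float]) -> float:
    """Privacy as the adversary's expected estimation error:
    sum over candidate locations of P(candidate | observation) * distance to the truth."""
    return sum(p * distance(loc, true_loc) for loc, p in posterior.items())

if __name__ == "__main__":
    true_loc = (0.0, 0.0)
    # Hypothetical posterior the adversary infers after observing an obfuscated check-in.
    posterior = {(0.0, 0.0): 0.5, (1.0, 0.0): 0.3, (0.0, 2.0): 0.2}
    print(f"privacy (expected error, km): {expected_error(true_loc, posterior):.2f}")
```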
Book ChapterDOI

Mix-Networks with Restricted Routes

TL;DR: In this article, a mix network topology based on sparse expander graphs is presented, in which each mix communicates with only a few neighbouring mixes, and this restricted topology is compared with fully connected mix networks and mix cascades.
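
A small sketch of the restricted-routes idea, with a toy topology generator standing in for the paper's expander-graph construction: each mix is assigned only a few neighbours, and client routes are random walks over those links. The parameters and helper names are assumptions for illustration.

```python
import random

def sparse_mix_topology(n_mixes: int, degree: int, seed: int = 0) -> dict[int, list[int]]:
    """Give each mix a small fixed set of neighbours it may forward to.
    (Toy stand-in for a sparse expander; not the paper's exact construction.)"""
    rng = random.Random(seed)
    topology = {}
    for mix in range(n_mixes):
        others = [m for m in range(n_mixes) if m != mix]
        topology[mix] = rng.sample(others, degree)
    return topology

def sample_route(topology: dict[int, list[int]], length: int, seed: int = 1) -> list[int]:
    """Random walk over the restricted topology: the only routes clients may use."""
    rng = random.Random(seed)
    route = [rng.randrange(len(topology))]
    while len(route) < length:
        route.append(rng.choice(topology[route[-1]]))
    return route

if __name__ == "__main__":
    topo = sparse_mix_topology(n_mixes=16, degree=3)
    print(sample_route(topo, length=5))
```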
Proceedings ArticleDOI

Learning Universal Adversarial Perturbations with Generative Models

TL;DR: This work introduces universal adversarial networks, a generative network that is capable of fooling a target classifier when its generated output is added to a clean sample from a dataset.
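
A minimal PyTorch sketch of a universal adversarial network under toy assumptions: the generator and classifier below are small stand-ins, the perturbation budget eps is hypothetical, and the untargeted loss is one simple choice rather than the paper's exact training objective.

```python
import torch
import torch.nn as nn

# Toy stand-in models; in practice the classifier is a pretrained image model and is frozen.
generator = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
classifier = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
for p in classifier.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
eps = 0.1                   # perturbation budget (hypothetical value)
z = torch.randn(1, 100)     # one fixed noise vector -> one universal perturbation

for step in range(200):
    delta = eps * generator(z)             # universal perturbation, bounded by eps
    x = torch.rand(64, 784)                # stand-in for a batch of clean samples
    logits_adv = classifier(x + delta)     # the same delta is added to every clean sample
    y_clean = classifier(x).argmax(dim=1)  # labels the classifier assigns to clean inputs
    # Untargeted attack: push the classifier away from its own clean-input labels.
    loss = -nn.functional.cross_entropy(logits_adv, y_clean)
    opt.zero_grad()
    loss.backward()
    opt.step()
```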