Author

Carmela Troncoso

Other affiliations: IMDEA, Katholieke Universiteit Leuven, University of Vigo
Bio: Carmela Troncoso is an academic researcher from École Polytechnique Fédérale de Lausanne. The author has contributed to research in topics: Traffic analysis & Anonymity. The author has an h-index of 29 and has co-authored 127 publications receiving 3,383 citations. Previous affiliations of Carmela Troncoso include IMDEA & Katholieke Universiteit Leuven.


Papers
Proceedings ArticleDOI
16 Oct 2012
TL;DR: This work proposes the first methodology, to the best of the authors' knowledge, that enables a designer to find the optimal LPPM for an LBS given each user's service-quality constraints against an adversary implementing the optimal inference algorithm, and develops two linear programs that output the best LPPM strategy and its corresponding optimal inference attack.
Abstract: The mainstream approach to protecting the location privacy of mobile users in location-based services (LBSs) is to alter the users' actual locations in order to reduce the location information exposed to the service provider. The location obfuscation algorithm behind an effective location-privacy preserving mechanism (LPPM) must consider three fundamental elements: the privacy requirements of the users, the adversary's knowledge and capabilities, and the maximal tolerated service quality degradation stemming from the obfuscation of true locations. We propose the first methodology, to the best of our knowledge, that enables a designer to find the optimal LPPM for an LBS given each user's service-quality constraints against an adversary implementing the optimal inference algorithm. Such an LPPM is the one that maximizes the expected distortion (error) that the optimal adversary incurs in reconstructing the actual location of a user, while fulfilling the user's service-quality requirement. We formalize the mutual optimization of user-adversary objectives (location privacy vs. correctness of localization) by using the framework of Stackelberg Bayesian games. In such a setting, we develop two linear programs that output the best LPPM strategy and its corresponding optimal inference attack. Our optimal user-centric LPPM can be easily integrated into the mobile devices users employ to access LBSs. We validate the efficacy of our game-theoretic method against real location traces. Our evaluation confirms that the optimal LPPM strategy is superior to a straightforward obfuscation method, and that the optimal localization attack outperforms a Bayesian inference attack.

423 citations
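The optimization described in this paper can be written down as a small linear program. Below is a minimal sketch of the Stackelberg formulation on a toy one-dimensional world using scipy; the locations, the uniform prior, and the quality bound Q_MAX are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch of the Stackelberg LP on a toy 1-D world. Locations,
# the uniform prior, and Q_MAX are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

locs = np.array([0.0, 1.0, 2.0])           # toy locations on a line
n = len(locs)
psi = np.full(n, 1.0 / n)                  # prior over true locations (assumed uniform)
d = np.abs(locs[:, None] - locs[None, :])  # distance, used here for both adversary
                                           # error and service-quality loss
Q_MAX = 0.6                                # max tolerated expected quality loss (assumed)

# Variables: f[r, r'] (obfuscation probabilities, flattened row-major),
# then x[r'] (the user's guaranteed payoff per observed location).
nf = n * n
c = np.concatenate([np.zeros(nf), -np.ones(n)])  # linprog minimizes, so negate

A_ub, b_ub = [], []
# x[r'] <= sum_r psi[r] * f(r'|r) * d(r_hat, r) for every (r', r_hat):
# the optimal adversary answers each observation with its best r_hat.
for rp in range(n):
    for rhat in range(n):
        row = np.zeros(nf + n)
        for r in range(n):
            row[r * n + rp] = -psi[r] * d[rhat, r]
        row[nf + rp] = 1.0
        A_ub.append(row)
        b_ub.append(0.0)
# Service quality: expected true-to-reported distance stays below Q_MAX.
row = np.zeros(nf + n)
for r in range(n):
    for rp in range(n):
        row[r * n + rp] = psi[r] * d[r, rp]
A_ub.append(row)
b_ub.append(Q_MAX)

# Each true location's row of f must be a probability distribution.
A_eq = np.zeros((n, nf + n))
b_eq = np.ones(n)
for r in range(n):
    A_eq[r, r * n:(r + 1) * n] = 1.0

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (nf + n))
f_opt = res.x[:nf].reshape(n, n)
print("optimal obfuscation f(r'|r):\n", f_opt.round(3))
print("expected error of the optimal attack:", round(-res.fun, 3))
```

At the optimum each x[r'] equals the adversary's best response for that observed location, so the objective value is the expected error the optimal attack incurs, which is exactly the privacy quantity the paper maximizes.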

Proceedings Article
25 Jan 2011
TL;DR: This paper argues that engineering systems according to the privacy by design principles requires the development of generalizable methodologies that build upon the principle of data minimization, and presents a summary of two case studies in which privacy is achieved by minimizing different types of data.
Abstract: The design and implementation of privacy requirements in systems is a difficult problem and requires the translation of complex social, legal and ethical concerns into systems requirements. The concept of “privacy by design” has been proposed to serve as a guideline on how to address these concerns. “Privacy by design” consists of a number of principles that can be applied from the onset of systems development to mitigate privacy concerns and achieve data protection compliance. However, these principles remain vague and leave many open questions about their application when engineering systems. In this paper we show how starting from data minimization is a necessary and foundational first step to engineer systems in line with the principles of privacy by design. We first discuss what data minimization can mean from a security engineering perspective. We then present a summary of two case studies in which privacy is achieved by minimizing different types of data, according to the purpose of each application. First, we present a privacy-preserving ePetition system, in which users' privacy is guaranteed by hiding their identities from the provider while revealing their votes. Second, we study a road tolling system, in which users have to be identified for billing reasons and data minimization is applied to protect further sensitive information (in this case location information). The case studies make evident that the application of data minimization does not necessarily imply anonymity, but may also be achieved by means of concealing information related to identifiable individuals. In fact, different kinds of data minimization are possible, and each system requires careful crafting of the data minimization best suited to its purpose. Most importantly, the two case studies underline that the interpretation of privacy by design principles requires specific engineering expertise, contextual analysis, and a balancing of multilateral security and privacy interests. They show that privacy by design is a productive space in which there is no one way of solving the problems. Based on our analysis of the two case studies, we argue that engineering systems according to the privacy by design principles requires the development of generalizable methodologies that build upon the principle of data minimization. However, the complexity of this engineering task demands caution against reducing such methodologies to “privacy by design checklists” that can easily be ticked off for compliance reasons while not mitigating some of the risks that privacy by design is meant to address.

268 citations

Proceedings ArticleDOI
01 Feb 2018
TL;DR: In this article, the authors present a game-based definition of membership inference on aggregate location time-series, and cast it as a classification problem where machine learning can be used to distinguish whether or not a target user is part of the aggregates.
Abstract: Aggregate location data is often used to support smart services and applications, such as generating live traffic maps or predicting visits to businesses. In this paper, we present the first study on the feasibility of membership inference attacks on aggregate location time-series. We introduce a game-based definition of the adversarial task, and cast it as a classification problem where machine learning can be used to distinguish whether or not a target user is part of the aggregates. We empirically evaluate the power of these attacks on both raw and differentially private aggregates using two real-world mobility datasets. We find that membership inference is a serious privacy threat, and show how its effectiveness depends on the adversary's prior knowledge, the characteristics of the underlying location data, as well as the number of users and the timeframe on which aggregation is performed. Although differentially private defenses can indeed reduce the extent of the attacks, they also yield a significant loss in utility. Moreover, a strategic adversary mimicking the behavior of the defense mechanism can greatly limit the protection they provide. Overall, our work presents a novel methodology geared to evaluate membership inference on aggregate location data in real-world settings and can be used by providers to assess the quality of privacy protection before data release or by regulators to detect violations.

212 citations
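The game-based classification framing lends itself to a compact illustration. The sketch below, on synthetic traces, builds the adversary's "in"/"out" training aggregates and scores distinguishability with AUC; the group sizes, count-based aggregation, and logistic-regression classifier are assumptions for illustration, not the paper's exact setup.

```python
# Minimal sketch of the game-based membership inference task on
# synthetic data; the paper evaluates real mobility datasets, and all
# sizes here are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N_USERS, N_LOCS, N_EPOCHS, GROUP = 200, 10, 24, 50

# One visited location per user per epoch (synthetic traces).
traces = rng.integers(0, N_LOCS, size=(N_USERS, N_EPOCHS))

def aggregate(user_ids):
    """Per-epoch location counts over a group: the released statistic."""
    agg = np.zeros((N_EPOCHS, N_LOCS))
    for u in user_ids:
        agg[np.arange(N_EPOCHS), traces[u]] += 1
    return agg.ravel()

# The adversary's training data: aggregates computed with the target
# (user 0) included, and aggregates computed without her.
X, y = [], []
for _ in range(300):
    others = rng.choice(np.arange(1, N_USERS), size=GROUP - 1, replace=False)
    X.append(aggregate(np.append(others, 0)))   # target in
    y.append(1)
    out_group = rng.choice(np.arange(1, N_USERS), size=GROUP, replace=False)
    X.append(aggregate(out_group))              # target out
    y.append(0)
X, y = np.array(X), np.array(y)

# Distinguishing the two cases is the membership inference game;
# AUC near 0.5 would mean the aggregates hide membership.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
print("membership inference AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```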

Posted Content
TL;DR: This system, referred to as DP3T, provides a technological foundation to help slow the spread of SARS-CoV-2 by simplifying and accelerating the process of notifying people who might have been exposed to the virus so that they can take appropriate measures to break its transmission chain.
Abstract: This document describes and analyzes a system for secure and privacy-preserving proximity tracing at large scale. This system, referred to as DP3T, provides a technological foundation to help slow the spread of SARS-CoV-2 by simplifying and accelerating the process of notifying people who might have been exposed to the virus so that they can take appropriate measures to break its transmission chain. The system aims to minimise privacy and security risks for individuals and communities and guarantee the highest level of data protection. The goal of our proximity tracing system is to determine who has been in close physical proximity to a COVID-19 positive person and thus exposed to the virus, without revealing the contact's identity or where the contact occurred. To achieve this goal, users run a smartphone app that continually broadcasts an ephemeral, pseudo-random ID representing the user's phone and also records the pseudo-random IDs observed from smartphones in close proximity. When a patient is diagnosed with COVID-19, she can upload pseudo-random IDs previously broadcast from her phone to a central server. Prior to the upload, all data remains exclusively on the user's phone. Other users' apps can use data from the server to locally estimate whether the device's owner was exposed to the virus through close-range physical proximity to a COVID-19 positive person who has uploaded their data. In case the app detects a high risk, it will inform the user.

172 citations
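The broadcast-and-match mechanism can be sketched compactly. The code below follows the spirit of the design, with a daily key ratchet and PRF-derived ephemeral IDs, but it is a simplified illustration and not the DP3T specification; the epoch count and derivation labels are assumptions.

```python
# Simplified sketch of rotating ephemeral IDs with a daily key ratchet,
# in the spirit of the design above; labels and epoch counts are
# assumptions, and this is not the DP3T specification.
import hmac
import os
from hashlib import sha256

EPOCHS_PER_DAY = 96  # e.g. one EphID per 15-minute epoch (assumed)

def next_day_key(sk: bytes) -> bytes:
    """Daily key ratchet: SK_t = H(SK_{t-1}), so past keys stay unrecoverable."""
    return sha256(sk).digest()

def ephemeral_ids(sk_day: bytes) -> list:
    """Derive the day's broadcast EphIDs from the daily key with a PRF."""
    day_seed = hmac.new(sk_day, b"broadcast key", sha256).digest()
    return [hmac.new(day_seed, i.to_bytes(4, "big"), sha256).digest()[:16]
            for i in range(EPOCHS_PER_DAY)]

# Day 1: phone A starts a key chain; day 2: B records three EphIDs it
# heard from A over Bluetooth.
sk_day1 = os.urandom(32)
sk_day2 = next_day_key(sk_day1)
heard_by_b = set(ephemeral_ids(sk_day2)[:3])

# After diagnosis, A uploads the first daily key of her infectious
# window; B ratchets it forward, regenerates the EphIDs locally, and
# checks for matches without learning who or where.
candidates = set(ephemeral_ids(sk_day1)) | set(ephemeral_ids(next_day_key(sk_day1)))
print("exposure epochs detected by B:", len(heard_by_b & candidates))
```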

Proceedings ArticleDOI
04 Oct 2010
TL;DR: It is shown that constructing cloaking regions based on the users' locations does not reliably relate to location privacy, and it is argued that this technique may even be detrimental to users' location privacy.
Abstract: There is a rich collection of literature that aims at protecting the privacy of users querying location-based services. One of the most popular location-privacy techniques consists in cloaking users' locations such that k users appear as potential senders of a query, thus achieving k-anonymity. This paper analyzes the effectiveness of k-anonymity approaches for protecting location privacy in the presence of various types of adversaries. Unraveling the scheme reveals the inconsistency between its components, mainly the cloaking mechanism and the k-anonymity metric. We show that constructing cloaking regions based on the users' locations does not reliably relate to location privacy, and argue that this technique may even be detrimental to users' location privacy. The uncovered flaws imply that the existing k-anonymity scheme is a tattered cloak for protecting location privacy.

158 citations
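For concreteness, here is a minimal sketch of the cloaking mechanism under analysis, in which the sender hides inside the bounding box of her k nearest users; the coordinates and choice of k are illustrative assumptions. The closing comment restates the paper's core observation: a region can satisfy k-anonymity while leaking the sender's location almost exactly.

```python
# Minimal sketch of the analyzed cloaking mechanism: report the bounding
# box of the sender and her k-1 nearest users. Coordinates and k are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
users = rng.uniform(0, 100, size=(50, 2))  # user positions, known to the server
k = 5

def cloak(sender_idx):
    """Bounding box covering the sender and her k-1 nearest neighbours."""
    dists = np.linalg.norm(users - users[sender_idx], axis=1)
    group = users[np.argsort(dists)[:k]]   # the k potential senders
    return group.min(axis=0), group.max(axis=0)

lo, hi = cloak(0)
print("cloaking region:", lo.round(1), "->", hi.round(1))
print("region side lengths:", (hi - lo).round(2))
# In a dense area this k-anonymous box shrinks toward a point, so the
# reported region can pin the sender down almost exactly: the k-anonymity
# metric is satisfied while location privacy is not.
```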


Cited by
Journal ArticleDOI

08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at the time, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, touching on newer models, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations


Book ChapterDOI
04 Oct 2019
TL;DR: Zero-knowledge proofs are defined as proofs that convey no additional knowledge beyond the correctness of the proposition in question, and the first examples of zero-knowledge proof systems are given for the languages of quadratic residuosity and quadratic nonresiduosity.
Abstract: Usually, a proof of a theorem contains more knowledge than the mere fact that the theorem is true. For instance, to prove that a graph is Hamiltonian it suffices to exhibit a Hamiltonian tour in it; however, this seems to contain more knowledge than the single bit Hamiltonian/non-Hamiltonian. In this paper a computational complexity theory of the “knowledge” contained in a proof is developed. Zero-knowledge proofs are defined as those proofs that convey no additional knowledge other than the correctness of the proposition in question. Examples of zero-knowledge proof systems are given for the languages of quadratic residuosity and quadratic nonresiduosity. These are the first examples of zero-knowledge proofs for languages not known to be efficiently recognizable.

1,962 citations
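The quadratic-residuosity proof system the abstract mentions follows a simple commit/challenge/response round. Below is a toy sketch of one round; the small hard-coded modulus is purely illustrative, and a real instance needs a large modulus and many independent repetitions.

```python
# Toy sketch of one commit/challenge/response round of the
# quadratic-residuosity proof system. The tiny hard-coded modulus is
# purely illustrative; a real instance needs a large modulus.
import random

n = 3233             # 61 * 53, a toy modulus (assumed for illustration)
x = 123              # prover's secret witness
y = pow(x, 2, n)     # public claim: y is a quadratic residue mod n

def round_of_proof() -> bool:
    r = random.randrange(1, n)
    t = pow(r, 2, n)              # prover commits to a fresh random square
    b = random.getrandbits(1)     # verifier's coin flip
    z = (r * pow(x, b, n)) % n    # reply with r (b=0) or r*x (b=1); either
                                  # value alone reveals nothing about x
    return pow(z, 2, n) == (t * pow(y, b, n)) % n  # verifier's check

# A cheating prover passes each round with probability at most 1/2,
# so repeating drives the soundness error down exponentially.
print("verifier accepts all rounds:", all(round_of_proof() for _ in range(40)))
```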