Topic

Information privacy

About: Information privacy is a research topic. Over its lifetime, 25,412 publications have been published within this topic, receiving 579,611 citations. The topic is also known as: data privacy & data protection.


Papers
Journal ArticleDOI
19 Mar 2012
TL;DR: This paper utilizes periodic pseudonyms generated using blind signature and relies on reputation transfer between these pseudonyms, and is robust against reputation corruption and a prototype implementation demonstrates that the associated overheads are minimal.
Abstract: Reputation systems rate the contributions to participatory sensing campaigns from each user by associating a reputation score. The reputation scores are used to weed out incorrect sensor readings. However, an adversary can deanonmyize the users even when they use pseudonyms by linking the reputation scores associated with multiple contributions. Since the contributed readings are usually annotated with spatiotemporal information, this poses a serious breach of privacy for the users. In this paper, we address this privacy threat by proposing a framework called IncogniSense. Our system utilizes periodic pseudonyms generated using blind signature and relies on reputation transfer between these pseudonyms. The reputation transfer process has an inherent trade-off between anonymity protection and loss in reputation. We investigate by means of extensive simulations several reputation cloaking schemes that address this tradeoff in different ways. Our system is robust against reputation corruption and a prototype implementation demonstrates that the associated overheads are minimal.

124 citations
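The abstract above does not spell out IncogniSense's actual cloaking schemes, but the anonymity/reputation trade-off it describes can be illustrated with a minimal rounding-based cloak. The function name `cloak_reputation` and the `granularity` parameter are hypothetical, not from the paper:

```python
import math

def cloak_reputation(score: float, granularity: float = 5.0) -> float:
    """Round a reputation score down to a coarse level so that many users
    share each cloaked value, making pseudonyms harder to link.
    A larger granularity gives stronger anonymity but loses more reputation."""
    return math.floor(score / granularity) * granularity

# A user transferring reputation to a fresh pseudonym carries only the
# cloaked value; the difference is the reputation sacrificed for anonymity.
old_score = 23.7
carried = cloak_reputation(old_score)   # 20.0
loss = old_score - carried              # 3.7
```

Coarser granularity enlarges the set of users sharing each cloaked value (better anonymity) at the cost of a larger expected reputation loss, which is exactly the trade-off the paper's cloaking schemes navigate.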

Journal ArticleDOI
TL;DR: The datafication model, wherein new personal information is deduced by employing predictive analytics on already-gathered data, will take the thinking beyond current preoccupation with whether or not individuals’ consent was secured for data collection to privacy issues arising from the development of new information on individuals' likely behavior through analysis of already collected data.
Abstract: In the age of big data we need to think differently about privacy. We need to shift our thinking from definitions of privacy (what privacy is) to models of privacy (how privacy works). Moreover, in addition to the existing models of privacy—the surveillance model and the capture model—we need to also consider a new model: the datafication model presented in this article, wherein new personal information is deduced by employing predictive analytics on already-gathered data. These three models of privacy supplement each other; they are not competing understandings of privacy. This broadened approach will take our thinking beyond the current preoccupation with whether or not individuals’ consent was secured for data collection, to privacy issues arising from the development of new information on individuals’ likely behavior through analysis of already collected data—this new information can violate privacy but does not call for consent.

124 citations

Journal ArticleDOI
01 Jan 2017
TL;DR: In this paper, the authors proposed the plausible deniability criterion for releasing sensitive data, where an output record can be released only if a certain amount of input records are indistinguishable, up to a privacy parameter.
Abstract: Releasing full data records is one of the most challenging problems in data privacy. On the one hand, many of the popular techniques such as data de-identification are problematic because of their dependence on the background knowledge of adversaries. On the other hand, rigorous methods such as the exponential mechanism for differential privacy are often computationally impractical for releasing high-dimensional data, or cannot preserve high utility of the original data due to their extensive data perturbation. This paper presents a criterion called plausible deniability that provides a formal privacy guarantee, notably for releasing sensitive datasets: an output record can be released only if a certain amount of input records are indistinguishable, up to a privacy parameter. This notion does not depend on the background knowledge of an adversary. Also, it can be checked efficiently by privacy tests. We present mechanisms to generate synthetic datasets with similar statistical properties to the input data and the same format. We study this technique both theoretically and experimentally. A key theoretical result shows that, with proper randomization, the plausible deniability mechanism generates differentially private synthetic data. We demonstrate the efficiency of this generative technique on a large dataset; it is shown to preserve the utility of the original data with respect to various statistical analyses and machine-learning measures.

124 citations
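The release criterion in the abstract above can be sketched as a privacy test: a synthetic record passes only if at least k input records could plausibly have generated it, with generation probabilities within a factor γ of one another. The function names `plausibly_deniable` and `gen_prob` and the toy distance-based generative model are illustrative assumptions, not the paper's actual mechanism:

```python
import math

def plausibly_deniable(output_record, dataset, gen_prob, k=3, gamma=2.0):
    """Privacy test: count the input records that are indistinguishable as
    plausible seeds of `output_record` (generation probability within a
    multiplicative factor `gamma` of the best match).  The record may be
    released only if at least k such records exist."""
    probs = [gen_prob(x, output_record) for x in dataset]
    p_max = max(probs)
    indistinguishable = sum(1 for p in probs if p > 0 and p_max / p <= gamma)
    return indistinguishable >= k

# Toy generative model: probability decays with distance from the seed record.
def gen_prob(seed, output):
    return math.exp(-abs(seed - output))

inputs = [1.0, 1.1, 0.9, 10.0]
plausibly_deniable(1.0, inputs, gen_prob)    # True: three plausible seeds
plausibly_deniable(10.0, inputs, gen_prob)   # False: only one plausible seed
```

Because the test depends only on the generative model and the released record, not on what an adversary already knows, it captures the background-knowledge independence the abstract emphasizes.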

Journal ArticleDOI
Gerard Salton1
01 Mar 1981

124 citations

Journal ArticleDOI
TL;DR: This paper considers the problem of secure data aggregation in a distributed setting, while ensuring differential privacy of the result, and introduces a new distributed privacy mechanism with noise drawn from the Laplace distribution, which achieves smaller redundant noise with efficiency.
Abstract: This paper considers the problem of secure data aggregation (mainly summation) in a distributed setting, while ensuring differential privacy of the result. We study secure multiparty addition protocols using well-known security schemes: Shamir’s secret sharing, perturbation-based schemes, and various encryption schemes. We supplement our study with our new enhanced encryption scheme EFT, which is efficient and fault tolerant. Differential privacy of the final result is achieved by either a distributed Laplace or Geometric mechanism (DLPA or DGPA, respectively), while approximate differential privacy is achieved by diluted mechanisms. Distributed random noise is generated collectively by all participants, each of which draws random variables from one of several distributions: Gamma, Gauss, Geometric, or their diluted versions. We introduce a new distributed privacy mechanism with noise drawn from the Laplace distribution, which achieves smaller redundant noise with efficiency. We compare the complexity and security characteristics of the protocols with different differential privacy mechanisms and security schemes. More importantly, we implemented all protocols and present an experimental comparison of their performance and scalability in a real distributed environment. Based on the evaluations, we identify our security scheme and Laplace DLPA as the most efficient for secure distributed data aggregation with differential privacy.

124 citations
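One standard construction behind distributed Laplace mechanisms like the DLPA described above uses the infinite divisibility of the Laplace distribution: each of n parties adds the difference of two Gamma(1/n, λ) draws, and the shares sum to a single Laplace(λ) sample, so no individual party ever sees the total noise. The function names below are illustrative, and this sketch simulates the aggregation locally rather than running a real secure-multiparty protocol:

```python
import random

def partial_laplace_noise(n_parties: int, scale: float) -> float:
    """One party's noise share: the difference of two Gamma(1/n, scale)
    draws.  Summed over all n parties, the shares add up to a single
    Laplace(scale) sample by infinite divisibility."""
    g1 = random.gammavariate(1.0 / n_parties, scale)
    g2 = random.gammavariate(1.0 / n_parties, scale)
    return g1 - g2

def noisy_sum(values, scale=1.0):
    """Simulated secure aggregation: each party submits its value plus its
    noise share; the aggregate equals the true sum plus Laplace(scale) noise."""
    n = len(values)
    return sum(v + partial_laplace_noise(n, scale) for v in values)
```

Because every party contributes a share, the protocol tolerates curious aggregators: the final sum carries exactly the Laplace noise differential privacy requires, with no single participant able to subtract it out.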


Network Information
Related Topics (5)
The Internet
213.2K papers, 3.8M citations
88% related
Server
79.5K papers, 1.4M citations
85% related
Encryption
98.3K papers, 1.4M citations
84% related
Social network
42.9K papers, 1.5M citations
83% related
Wireless network
122.5K papers, 2.1M citations
82% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    562
2022    1,226
2021    1,535
2020    1,634
2019    1,255
2018    1,277