Topic

Privacy software

About: Privacy software is a research topic. Over its lifetime, 8,597 publications have been published within this topic, receiving 237,304 citations.


Papers
Journal ArticleDOI
TL;DR: In this article, the authors designed an experiment in which a shopping search engine interface clearly and compactly displays privacy policy information, and they found that when privacy information is made more salient and accessible, some consumers are willing to pay a premium to purchase from privacy protective websites.
Abstract: Although online retailers detail their privacy practices in online privacy policies, this information often remains invisible to consumers, who seldom make the effort to read and understand those policies. This paper reports on research undertaken to determine whether a more prominent display of privacy information will cause consumers to incorporate privacy considerations into their online purchasing decisions. We designed an experiment in which a shopping search engine interface clearly and compactly displays privacy policy information. When such information is made available, consumers tend to purchase from online retailers who better protect their privacy. In fact, our study indicates that when privacy information is made more salient and accessible, some consumers are willing to pay a premium to purchase from privacy protective websites. This result suggests that businesses may be able to leverage privacy protection as a selling point.

823 citations

Journal ArticleDOI
TL;DR: This survey explores the most relevant limitations of IoT devices and their solutions, presents a classification of IoT attacks, and analyzes the security issues in different layers.
Abstract: Internet-of-Things (IoT) devices are everywhere in our daily lives. They are used in our homes and hospitals, and are deployed outdoors to monitor and report environmental changes, prevent fires, and provide many other beneficial functions. However, all of these benefits can come with huge risks of privacy loss and security issues. To secure IoT devices, many research works have been conducted to counter those problems and find ways to eliminate those risks, or at least minimize their effects on users' privacy and security requirements. The survey consists of four segments. The first segment explores the most relevant limitations of IoT devices and their solutions. The second presents a classification of IoT attacks. The next segment focuses on mechanisms and architectures for authentication and access control. The last segment analyzes the security issues in different layers.

804 citations


Proceedings ArticleDOI
Frank McSherry1, Ilya Mironov1
28 Jun 2009
TL;DR: This work considers the problem of producing recommendations from collective user behavior while simultaneously providing guarantees of privacy for these users, and finds that several of the leading approaches in the Netflix Prize competition can be adapted to provide differential privacy, without significantly degrading their accuracy.
Abstract: We consider the problem of producing recommendations from collective user behavior while simultaneously providing guarantees of privacy for these users. Specifically, we consider the Netflix Prize data set, and its leading algorithms, adapted to the framework of differential privacy. Unlike prior privacy work concerned with cryptographically securing the computation of recommendations, differential privacy constrains a computation in a way that precludes any inference about the underlying records from its output. Such algorithms necessarily introduce uncertainty--i.e., noise--to computations, trading accuracy for privacy. We find that several of the leading approaches in the Netflix Prize competition can be adapted to provide differential privacy, without significantly degrading their accuracy. To adapt these algorithms, we explicitly factor them into two parts, an aggregation/learning phase that can be performed with differential privacy guarantees, and an individual recommendation phase that uses the learned correlations and an individual's data to provide personalized recommendations. The adaptations are non-trivial, and involve both careful analysis of the per-record sensitivity of the algorithms to calibrate noise, as well as new post-processing steps to mitigate the impact of this noise. We measure the empirical trade-off between accuracy and privacy in these adaptations, and find that we can provide non-trivial formal privacy guarantees while still outperforming the Cinematch baseline Netflix provides.

750 citations
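The abstract above describes the core of differential privacy: clamp each record's influence, then add noise calibrated to the per-record sensitivity of the computation. A minimal sketch of that idea for a mean rating, assuming bounded ratings and a public record count (the function names and parameters are illustrative, not taken from the paper):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample zero-mean Laplace noise via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_mean_rating(ratings, lo=1.0, hi=5.0, epsilon=1.0):
    """Epsilon-differentially-private estimate of a mean rating.

    Clamping each rating to [lo, hi] bounds the effect of any single
    record on the mean at (hi - lo) / n, so adding Laplace noise with
    scale sensitivity / epsilon gives epsilon-DP (n treated as public).
    """
    n = len(ratings)
    clamped = [min(max(r, lo), hi) for r in ratings]
    sensitivity = (hi - lo) / n
    return sum(clamped) / n + laplace_noise(sensitivity / epsilon)
```

Here epsilon directly trades accuracy for privacy: a smaller epsilon means larger noise, which is exactly the empirical trade-off the paper measures on the Netflix algorithms.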

Proceedings ArticleDOI
22 May 2011
TL;DR: This paper provides a formal framework for the analysis of LPPMs that captures the prior information available to the attacker and the various attacks he can perform, and it clarifies the difference between three aspects of the adversary's inference attacks, namely their accuracy, certainty, and correctness.
Abstract: It is a well-known fact that the progress of personal communication devices leads to serious concerns about privacy in general, and location privacy in particular. As a response to these issues, a number of Location-Privacy Protection Mechanisms (LPPMs) have been proposed during the last decade. However, their assessment and comparison remain problematic because of the absence of a systematic method to quantify them. In particular, the assumptions about the attacker's model tend to be incomplete, with the risk of a possibly wrong estimation of the users' location privacy. In this paper, we address these issues by providing a formal framework for the analysis of LPPMs; in particular, it captures the prior information that might be available to the attacker and the various attacks that he can perform. The privacy of users and the success of the adversary in his location-inference attacks are two sides of the same coin. We revise location privacy by giving a simple, yet comprehensive, model to formulate all types of location-information disclosure attacks. Thus, by formalizing the adversary's performance, we propose and justify the right metric to quantify location privacy. We clarify the difference between three aspects of the adversary's inference attacks, namely their accuracy, certainty, and correctness. We show that correctness determines the privacy of users. In other words, the expected estimation error of the adversary is the metric of users' location privacy. We rely on well-established statistical methods to formalize and implement the attacks in a tool: the Location-Privacy Meter, which measures the location privacy of mobile users, given various LPPMs. In addition to evaluating some example LPPMs using our tool, we assess the appropriateness of some popular metrics for location privacy: entropy and k-anonymity. The results show a lack of satisfactory correlation between these two metrics and the success of the adversary in inferring the users' actual locations.

742 citations
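The correctness metric described above (expected estimation error) can be sketched directly: given the adversary's posterior distribution over candidate locations, the user's privacy is the posterior-weighted distance from each candidate to the true location. A minimal illustration on a hypothetical grid; the coordinates and function names are assumptions, not taken from the paper:

```python
def manhattan(a, b):
    """Grid (Manhattan) distance between two cells."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def expected_estimation_error(posterior, true_loc, dist=manhattan):
    """Location privacy as the adversary's expected distance error:
    the sum over candidate cells of posterior probability times the
    distance from that cell to the user's true location."""
    return sum(p * dist(loc, true_loc) for loc, p in posterior.items())

# The adversary puts 60% mass on the true cell, but the residual mass
# on distant cells still leaves the user a nonzero expected error.
posterior = {(0, 0): 0.6, (0, 1): 0.3, (5, 5): 0.1}
print(expected_estimation_error(posterior, (0, 0)))  # 1.3
```

Note that an adversary who is highly certain but wrong still yields a large expected error, which is precisely the accuracy/certainty/correctness distinction the paper draws.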


Network Information
Related Topics (5)

Topic                      Papers    Citations   Related
The Internet               213.2K    3.8M        82%
Encryption                 98.3K     1.4M        82%
Social network             42.9K     1.5M        80%
Server                     79.5K     1.4M        79%
Wireless sensor network    142K      2.4M        78%
Performance
Metrics
No. of papers in the topic in previous years
Year   Papers
2023   85
2022   190
2021   10
2020   4
2019   9
2018   59