Open Access · Posted Content

The Limits of Differential Privacy (and its Misuse in Data Release and Machine Learning)

TL;DR
Differential privacy is not a silver bullet for all privacy problems, but it can be a step forward in the right direction.
Abstract
Differential privacy (DP) is a neat privacy definition that can co-exist with certain well-defined data uses in the context of interactive queries. However, DP is neither a silver bullet for all privacy problems nor a replacement for all previous privacy models. In fact, extreme care should be exercised when trying to extend its use beyond the setting it was designed for. This paper reviews the limitations of DP and its misuse for individual data collection, individual data release, and machine learning.
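To make the interactive-query setting concrete, here is a minimal sketch of the classic Laplace mechanism for a counting query. The function names and toy data are illustrative only, not taken from the paper:

```python
import math
import random

def laplace_sample(scale: float) -> float:
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Answer a counting query with epsilon-DP using the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one record
    changes the true count by at most 1), so the noise scale is 1/epsilon.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_sample(1.0 / epsilon)

# Toy interactive query: "how many people are 40 or older?"
ages = [34, 51, 29, 62, 45, 38]
noisy_answer = dp_count(ages, lambda a: a >= 40, epsilon=1.0)
```

This is exactly the well-defined setting the abstract refers to: a trusted curator perturbs each query answer, rather than releasing the data itself.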


Citations
Posted Content

Graph Neural Networks in Recommender Systems: A Survey

TL;DR: This article provides a taxonomy of GNN-based recommendation models according to the types of information used and the recommendation tasks addressed, and systematically analyzes the challenges of applying GNNs to different types of data.
Posted Content

Achieving Security and Privacy in Federated Learning Systems: Survey, Research Challenges and Future Directions

TL;DR: This paper examines security and privacy attacks on FL, critically surveys solutions proposed in the literature to mitigate each attack, and sketches ways to tackle the open problem of attaining both security and privacy protection.
Journal ArticleDOI

Statistically Valid Inferences from Privacy-Protected Data

TL;DR: This work builds on the standard of “differential privacy,” corrects for biases induced by privacy-preserving procedures, provides a proper accounting of uncertainty, and imposes minimal constraints on the choice of statistical methods and quantities estimated.
Posted Content

$k$-Anonymity in Practice: How Generalisation and Suppression Affect Machine Learning Classifiers

TL;DR: This work offers a systematic comparison and detailed investigation of how different k-anonymisation algorithms affect the results of machine learning models, and finds that Mondrian has the most appealing properties for subsequent classification.
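As a minimal illustration of the k-anonymity condition such algorithms enforce (the helper name and toy records below are made up for this sketch):

```python
from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    """True if every combination of quasi-identifier values occurs in at
    least k rows -- the standard k-anonymity condition after
    generalisation and suppression have been applied."""
    counts = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(c >= k for c in counts.values())

# Toy generalised records: zip code and age have already been coarsened.
rows = [
    {"zip": "481**", "age": "30-39", "disease": "flu"},
    {"zip": "481**", "age": "30-39", "disease": "cancer"},
    {"zip": "130**", "age": "20-29", "disease": "flu"},
]
full_ok = is_k_anonymous(rows, ["zip", "age"], k=2)      # the last row is unique
pair_ok = is_k_anonymous(rows[:2], ["zip", "age"], k=2)  # one group of size 2
```

The survey's question is precisely how the generalisation needed to make `is_k_anonymous` pass degrades downstream classifiers.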
Journal ArticleDOI

Balancing data privacy and usability in the federal statistical system

TL;DR: This essay argues that the discussion of federal statistical system change has not given proper consideration to the reduced social benefits of data availability and usability relative to the value of increased privacy protection, and recommends a more balanced benefit-cost framework for assessing these trade-offs.
References
Book

The Algorithmic Foundations of Differential Privacy

TL;DR: The preponderance of this monograph is devoted to fundamental techniques for achieving differential privacy, and application of these techniques in creative combinations, using the query-release problem as an ongoing example.
Proceedings ArticleDOI

Deep Learning with Differential Privacy

TL;DR: In this paper, the authors develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy, and demonstrate that they can train deep neural networks with nonconvex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
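The core step that paper popularised, per-example gradient clipping followed by Gaussian noise, can be sketched in plain Python; the function name and list-based gradient representation are illustrative, not the authors' implementation:

```python
import math
import random

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr):
    """One DP-SGD step: clip each example's gradient to L2 norm clip_norm,
    sum the clipped gradients, add Gaussian noise with standard deviation
    noise_multiplier * clip_norm, average over the batch, and apply."""
    dim = len(params)
    summed = [0.0] * dim
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for i in range(dim):
            summed[i] += g[i] * scale
    n = len(per_example_grads)
    sigma = noise_multiplier * clip_norm
    noisy_avg = [(summed[i] + random.gauss(0.0, sigma)) / n for i in range(dim)]
    return [params[i] - lr * noisy_avg[i] for i in range(dim)]

# With noise_multiplier = 0 the step reduces to plain clipped SGD,
# which makes the clipping effect easy to see.
new_params = dp_sgd_step(
    params=[0.0, 0.0],
    per_example_grads=[[100.0, 0.0], [0.0, 3.0]],
    clip_norm=1.0,
    noise_multiplier=0.0,
    lr=1.0,
)
```

Clipping bounds each example's influence on the update, which is what makes the added Gaussian noise sufficient for a differential-privacy guarantee.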
Proceedings Article

Privacy in pharmacogenetics: an end-to-end case study of personalized warfarin dosing

TL;DR: It is concluded that current DP mechanisms do not simultaneously improve genomic privacy while retaining desirable clinical efficacy, highlighting the need for new mechanisms that should be evaluated in situ using the general methodology introduced by this work.
Proceedings ArticleDOI

No free lunch in data privacy

TL;DR: This paper argues that an individual's privacy is preserved when it is possible to limit an attacker's inference about the individual's participation in the data-generating process, which is different from limiting inference about the presence of a tuple.
Journal ArticleDOI

Federated Learning With Differential Privacy: Algorithms and Performance Analysis

TL;DR: This paper proposes a novel framework based on the concept of differential privacy in which artificial noise is added to parameters on the clients' side before aggregation, termed noising before model aggregation FL (NbAFL).
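The "noising before aggregation" idea can be sketched in a few lines; the names and parameter shapes below are illustrative, not the NbAFL implementation:

```python
import random

def client_perturb(local_params, sigma):
    """Each client adds Gaussian noise to its model parameters *before*
    uploading them -- the 'noising before aggregation' step."""
    return [w + random.gauss(0.0, sigma) for w in local_params]

def server_aggregate(all_client_params):
    """The server averages the already-noised parameter vectors."""
    n = len(all_client_params)
    dim = len(all_client_params[0])
    return [sum(c[i] for c in all_client_params) / n for i in range(dim)]

# Two toy clients; sigma = 0 recovers plain federated averaging.
uploads = [client_perturb([1.0, 2.0], sigma=0.0),
           client_perturb([3.0, 4.0], sigma=0.0)]
global_params = server_aggregate(uploads)
```

Because the noise is injected client-side, parameters are protected even from an honest-but-curious server, at the cost of a noisier aggregate than server-side perturbation would give.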