Open Access Book Chapter DOI

Our data, ourselves: privacy via distributed noise generation

TL;DR: This paper proposes efficient distributed protocols for generating shares of random noise, secure against malicious participants; the noise generation yields a distributed implementation of the privacy-preserving statistical databases described in recent papers.
Abstract
In this work we provide efficient distributed protocols for generating shares of random noise, secure against malicious participants. The purpose of the noise generation is to create a distributed implementation of the privacy-preserving statistical databases described in recent papers [14,4,13]. In these databases, privacy is obtained by perturbing the true answer to a database query by the addition of a small amount of Gaussian or exponentially distributed random noise. The computational power of even a simple form of these databases, when the query is just of the form ∑_i f(d_i), that is, the sum over all rows i in the database of a function f applied to the data in row i, has been demonstrated in [4]. A distributed implementation eliminates the need for a trusted database administrator. The results for noise generation are of independent interest. The generation of Gaussian noise introduces a technique for distributing shares of many unbiased coins with fewer executions of verifiable secret sharing than would be needed using previous approaches (reduced by a factor of n). The generation of exponentially distributed noise uses two shallow circuits: one for generating many arbitrarily but identically biased coins at an amortized cost of two unbiased random bits apiece, independent of the bias, and the other to combine bits of appropriate biases to obtain an exponential distribution.
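The abstract describes the basic mechanism: answer a query of the form ∑_i f(d_i) and perturb the true answer with a small amount of random noise. A minimal, centralized sketch of that mechanism follows; the paper's actual contribution is generating such noise distributedly, without a trusted administrator, and the function and parameter names here (`noisy_sum`, `sigma`) are illustrative, not from the paper.

```python
import random

def noisy_sum(rows, f, sigma):
    """Return the true answer sum_i f(d_i) perturbed by Gaussian noise.

    A centralized sketch only: the cited work shows how mutually
    distrustful parties can jointly generate this noise so that no
    single trusted curator ever sees the exact answer.
    """
    true_answer = sum(f(d) for d in rows)
    return true_answer + random.gauss(0.0, sigma)

# Toy database: one bit per row, query counts the 1s.
db = [0, 1, 1, 0, 1]
answer = noisy_sum(db, lambda d: d, sigma=1.0)
```

With `sigma=0` the mechanism degenerates to the exact (non-private) answer, which makes the privacy/accuracy trade-off explicit: larger `sigma` means more privacy and less accuracy.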


Citations
Book

The Algorithmic Foundations of Differential Privacy

TL;DR: The preponderance of this monograph is devoted to fundamental techniques for achieving differential privacy, and application of these techniques in creative combinations, using the query-release problem as an ongoing example.
Journal Article

Calibrating noise to sensitivity in private data analysis

TL;DR: The study is extended to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f, i.e., the maximum amount by which a single argument to f can change its output.
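The calibration idea in this TL;DR can be sketched concretely: scale the noise to the function's sensitivity. The sketch below uses Laplace noise (the distribution used in that line of work) sampled as the difference of two exponentials; the names `laplace_mechanism`, `sensitivity`, and `epsilon` are illustrative, not taken from the paper.

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Perturb true_value with Laplace noise of scale sensitivity/epsilon.

    Sketch of noise calibrated to sensitivity: the more a single
    argument can change f's output, the larger the noise must be.
    A Laplace(0, b) sample is the difference of two independent
    exponentials with mean b.
    """
    b = sensitivity / epsilon
    noise = random.expovariate(1.0 / b) - random.expovariate(1.0 / b)
    return true_value + noise
```

For a counting query (sensitivity 1), the noise scale is just 1/epsilon, so stricter privacy (smaller epsilon) directly means noisier answers.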
Book Chapter DOI

Differential privacy: a survey of results

TL;DR: This survey recalls the definition of differential privacy and two basic techniques for achieving it, and shows some interesting applications of these techniques, presenting algorithms for three specific tasks and three general results on differentially private learning.
Proceedings Article DOI

Deep Learning with Differential Privacy

TL;DR: In this paper, the authors develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy, and demonstrate that they can train deep neural networks with nonconvex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
Proceedings Article DOI

Practical Secure Aggregation for Privacy-Preserving Machine Learning

TL;DR: In this paper, the authors proposed a secure aggregation of high-dimensional data for federated deep neural networks, which allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner without learning each user's individual contribution.
References
Proceedings Article DOI

Randomized byzantine generals

TL;DR: A randomized solution for the Byzantine Generals Problem that produces Byzantine Agreement within a fixed, small expected number of computational rounds, independent of the number n of processes and the bound t on the number of faulty processes.
Proceedings Article

Randomness is linear in space

TL;DR: Of independent interest is the main technical tool: a procedure which extracts randomness from a defective random source using a small additional number of truly random bits.
Proceedings Article DOI

The bit extraction problem or t-resilient functions

TL;DR: The question addressed is for what values of n, m, and t the adversary necessarily fails to bias the outcome of f : {0,1}^n → {0,1}^m when restricted to setting t of the input bits of f.
Book Chapter DOI

Unconditionally secure constant-rounds multi-party computation for equality, comparison, bits and exponentiation

TL;DR: In this paper, it was shown that if a set of players hold shares of a value $a \in \mathbb{F}_p$ for some prime p (where the set of shares is written $[a]_p$), it is possible to compute, in constant rounds and with unconditional security, sharings of the bits of a, i.e., sharings $[a_0]_p, \ldots, [a_{l-1}]_p$ such that $l = \lceil \log_2 p \rceil$ and $a_0, \ldots, a_{l-1} \in \{0,1\}$.
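The functionality this TL;DR describes can be illustrated in the clear: decompose a field element into its l = ⌈log₂ p⌉ bits. The protocol's contribution is computing *sharings* of these bits securely in constant rounds; the sketch below (with an illustrative name `bit_decompose`) only shows what is being computed, not how it is shared.

```python
import math

def bit_decompose(a, p):
    """Plain (insecure) version of the bit-decomposition functionality.

    Returns the bits a_0, ..., a_{l-1} of a, least significant first,
    with l = ceil(log2 p). The cited protocol outputs secret sharings
    [a_0]_p, ..., [a_{l-1}]_p of exactly these values.
    """
    l = math.ceil(math.log2(p))
    return [(a >> i) & 1 for i in range(l)]
```

Recombining the bits as ∑ a_i · 2^i recovers a, which is the correctness condition the secure protocol must preserve on shared values.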
Journal Article

Recent Developments in Explicit Constructions of Extractors.

TL;DR: This manuscript is a survey of recent developments in extractors and focuses on explicit constructions of extractors following Trevisan’s breakthrough result.