Proceedings ArticleDOI
SecureML: A System for Scalable Privacy-Preserving Machine Learning
Payman Mohassel, Yupeng Zhang
pp. 19–38
TLDR
This paper presents new and efficient protocols for privacy-preserving linear regression, logistic regression, and neural-network training using the stochastic gradient descent method, and implements the first privacy-preserving system for training neural networks.
Abstract
Machine learning is widely used in practice to produce predictive models for applications such as image processing and speech and text recognition. These models are more accurate when trained on large amounts of data collected from different sources. However, such massive data collection raises privacy concerns. In this paper, we present new and efficient protocols for privacy-preserving machine learning for linear regression, logistic regression, and neural-network training using the stochastic gradient descent method. Our protocols fall in the two-server model, where data owners distribute their private data among two non-colluding servers, which train various models on the joint data using secure two-party computation (2PC). We develop new techniques to support secure arithmetic operations on shared decimal numbers, and propose MPC-friendly alternatives to non-linear functions such as sigmoid and softmax that are superior to prior work. We implement our system in C++. Our experiments validate that our protocols are several orders of magnitude faster than state-of-the-art implementations of privacy-preserving linear and logistic regression, and scale to millions of data samples with thousands of features. We also implement the first privacy-preserving system for training neural networks.
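Two of the abstract's building blocks can be sketched concretely: representing decimals as fixed-point integers under additive secret sharing (where each server can truncate its own share locally after a multiplication), and the paper's piecewise-linear replacement for the sigmoid. The sketch below runs in the clear for illustration only; the ring size and precision are illustrative choices, and the real system performs these steps inside 2PC (e.g. with multiplication triples), which is not shown here.

```python
import random

random.seed(0)

SCALE_BITS = 13
SCALE = 1 << SCALE_BITS          # fixed-point fractional precision (assumed value)
RING = 1 << 64                   # shares live in the ring Z_{2^64}

def encode(x):
    """Embed a real number as a fixed-point element of Z_{2^64}."""
    return round(x * SCALE) % RING

def share(v):
    """Additive 2-out-of-2 secret sharing between the two servers."""
    s0 = random.randrange(RING)
    return s0, (v - s0) % RING

def reconstruct(s0, s1):
    v = (s0 + s1) % RING
    return v - RING if v >= RING // 2 else v   # map back to a signed value

def truncate_shares(s0, s1, d=SCALE_BITS):
    """Local truncation in the style of SecureML: after a multiplication
    the shared value carries SCALE**2, and each server shifts its OWN
    share; the reconstruction is correct up to +/-1 in the last
    fixed-point digit with overwhelming probability."""
    return s0 >> d, (RING - ((RING - s1) >> d)) % RING

def mpc_sigmoid(x):
    """The paper's MPC-friendly stand-in for the logistic function:
    a three-piece linear approximation needing only comparisons."""
    return 0.0 if x < -0.5 else (1.0 if x > 0.5 else x + 0.5)

# Demo: share the (scale-squared) product 1.5 * -0.25, truncate each
# share locally, and check the result is -0.375 up to one fixed-point unit.
prod = (encode(1.5) * encode(-0.25)) % RING
t0, t1 = truncate_shares(*share(prod))
approx = reconstruct(t0, t1) / SCALE
assert abs(approx - (-0.375)) <= 1 / SCALE
```

The point of the local truncation rule is that it needs no interaction between the servers, which is what keeps decimal arithmetic cheap inside the protocol.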
Citations
Journal ArticleDOI
Federated Machine Learning: Concept and Applications
TL;DR: This work introduces a comprehensive secure federated-learning framework that includes horizontal federated learning, vertical federated learning, and federated transfer learning, and provides a comprehensive survey of existing work on the subject.
Posted Content
Federated Machine Learning: Concept and Applications
TL;DR: This work proposes building data networks among organizations based on federated mechanisms as an effective solution to allow knowledge to be shared without compromising user privacy.
Posted Content
Advances and Open Problems in Federated Learning
Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Hubert Eichner, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaid Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konečný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Ozgur, Rasmus Pagh, Mariana Raykova, Hang Qi, Daniel Ramage, Ramesh Raskar, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao +58 more
TL;DR: Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
Proceedings ArticleDOI
Exploiting Unintended Feature Leakage in Collaborative Learning
TL;DR: This paper proposes passive and active inference attacks that exploit leakage of information about participants' training data in federated learning: a participant can infer the presence of exact data points, as well as properties that hold only for a subset of the training data and are independent of the properties the joint model is meant to capture.
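One concrete leakage channel of the kind this work exploits can be sketched simply: in a text model with an embedding layer, the gradient row for a token is nonzero only if that token occurred in the training batch, so anyone observing a participant's update learns which tokens that participant used. The function name and shapes below are illustrative, not from the paper.

```python
import numpy as np

def tokens_revealed_by_update(embedding_grad, vocab):
    """Sketch of the embedding-gradient leakage: embedding_grad has one
    row per vocabulary token, and a row is nonzero exactly when that
    token appeared in the batch that produced the update."""
    return [tok for i, tok in enumerate(vocab)
            if np.any(embedding_grad[i] != 0)]

# Hypothetical update: only token 'b' occurred in the batch.
grad = np.zeros((3, 4))
grad[1] = 0.5
leaked = tokens_revealed_by_update(grad, ["a", "b", "c"])
assert leaked == ["b"]
```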
Journal ArticleDOI
Current and future perspectives of liquid biopsies in genomics-driven oncology.
TL;DR: The potential of liquid biopsies is highlighted by studies that show they can track the evolutionary dynamics and heterogeneity of tumours and can detect very early emergence of therapy resistance, residual disease and recurrence, but their analytical validity and clinical utility must be rigorously demonstrated before this potential can be realized.
References
Book ChapterDOI
Public-key cryptosystems based on composite degree residuosity classes
TL;DR: A new trapdoor mechanism is proposed and three encryption schemes are derived: a trapdoor permutation and two homomorphic probabilistic encryption schemes computationally comparable to RSA, which are provably secure under appropriate assumptions in the standard model.
Proceedings ArticleDOI
Universally composable security: a new paradigm for cryptographic protocols
TL;DR: The notion of universally composable security was introduced in this paper for defining the security of cryptographic protocols; it guarantees security even when a secure protocol is composed with an arbitrary set of protocols, or more generally when the protocol is used as a component of a larger system.
Proceedings ArticleDOI
Deep Learning with Differential Privacy
TL;DR: In this paper, the authors develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy, and demonstrate that they can train deep neural networks with nonconvex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
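The core training step described here can be sketched as follows: bound each example's influence by clipping its gradient norm, average, then add Gaussian noise calibrated to the clipping bound. The parameter values below (clipping norm, noise multiplier, learning rate) are illustrative assumptions, and the privacy accounting that the paper refines is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, per_example_grads, clip_norm=1.0, noise_mult=1.1, lr=0.1):
    """One gradient step in the style of differentially private SGD:
    per-example clipping bounds sensitivity, Gaussian noise masks any
    single example's contribution."""
    clipped = [g / max(1.0, np.linalg.norm(g) / clip_norm)
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_example_grads),
                       size=avg.shape)
    return w - lr * (avg + noise)

# With the noise switched off, a gradient of norm 10 is scaled down to
# norm clip_norm before being applied.
step = dp_sgd_step(np.zeros(3), [np.array([10.0, 0.0, 0.0])],
                   noise_mult=0.0, lr=1.0)
assert np.allclose(step, [-1.0, 0.0, 0.0])
```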
Proceedings ArticleDOI
Protocols for secure computations
TL;DR: The author gives a precise formulation of this general problem and describes three ways of solving it by use of one-way functions, with applications to secret voting, private querying of databases, oblivious negotiation, playing mental poker, etc.
Proceedings ArticleDOI
Privacy-Preserving Deep Learning
Reza Shokri, Vitaly Shmatikov
TL;DR: This paper presents a practical system that enables multiple parties to jointly learn an accurate neural-network model for a given objective without sharing their input datasets, and exploits the fact that the optimization algorithms used in modern deep learning, namely, those based on stochastic gradient descent, can be parallelized and executed asynchronously.
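The selective-sharing idea summarized above can be sketched as a simple rule: after a local SGD pass, a participant uploads only the fraction of gradient coordinates with the largest magnitude, keeping the rest local. The function name and the sharing fraction below are illustrative assumptions, not the system's actual API.

```python
import numpy as np

def select_gradients_to_share(grad, frac=0.1):
    """Sketch of selective parameter sharing: keep only the top
    |frac * size| coordinates of the gradient by magnitude, zeroing
    the rest, so that a participant reveals only part of its update."""
    k = max(1, int(frac * grad.size))
    top_idx = np.argsort(np.abs(grad))[-k:]
    shared = np.zeros_like(grad)
    shared[top_idx] = grad[top_idx]
    return shared

# Sharing 40% of a 5-coordinate gradient keeps the two largest entries.
g = np.array([0.1, -5.0, 0.2, 3.0, 0.0])
assert np.allclose(select_gradients_to_share(g, frac=0.4),
                   [0.0, -5.0, 0.0, 3.0, 0.0])
```

Trading off how much is shared against model accuracy is exactly the knob this line of work studies; the later inference-attack papers cited above show that even such partial updates can still leak training data.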