Journal ArticleDOI

Privacy-Preserving Distributed Multi-Task Learning against Inference Attack in Cloud Computing

TLDR
This paper proposes a privacy-preserving distributed multi-task learning scheme for machine learning as a service (MLaaS) in cloud computing that protects participants' training data against inference attacks.
Abstract
Because of the powerful computing and storage capabilities of cloud computing, machine learning as a service (MLaaS) has recently been valued by organizations for machine learning training over s...

Citations
Journal ArticleDOI

Learning in Your “Pocket”: Secure Collaborative Deep Learning With Membership Privacy

TL;DR: The authors propose Sigma, a privacy-preserving collaborative deep learning mechanism that allows participating organizations to train a collective model without exposing their local training data to one another, using a single-server-aided private collaborative architecture.
Journal ArticleDOI

A Review on Security Issues and Solutions for Precision Health in Internet-of-Medical-Things Systems

TL;DR: In this paper, the authors present an IoMT system model consisting of three layers: the sensing layer, the network layer, and the cloud infrastructure layer; they discuss security vulnerabilities and threats and review existing security techniques and schemes corresponding to each system component.
Journal ArticleDOI

Multi-Task Model Personalization for Federated Supervised SVM in Heterogeneous Networks

TL;DR: In this paper, an efficient iterative distributed method based on the alternating direction method of multipliers (ADMM) is proposed for support vector machines (SVMs), tackling federated classification and regression.
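The splitting behind such ADMM-based federated training can be sketched minimally. For brevity this toy solves a least-squares fit rather than the hinge-loss SVM of the paper; the structure (local closed-form solves, global averaging, dual updates) is the consensus-ADMM pattern. All sizes and the penalty `rho` are illustrative assumptions.

```python
import numpy as np

# Sketch of consensus ADMM for federated model fitting. Each party solves a
# local subproblem on its private shard; only models, not data, are averaged.

rng = np.random.default_rng(4)
d, rho = 5, 1.0
w_true = rng.normal(size=d)

# Two parties, each holding a private shard of noiseless linear data.
shards = []
for _ in range(2):
    X = rng.normal(size=(80, d))
    shards.append((X, X @ w_true))

z = np.zeros(d)                       # global consensus variable
W = [np.zeros(d) for _ in shards]     # local models
U = [np.zeros(d) for _ in shards]     # scaled dual variables

for _ in range(50):
    for i, (X, y) in enumerate(shards):
        # Local solve: argmin_w 0.5*||Xw - y||^2 + (rho/2)*||w - z + u||^2
        A = X.T @ X + rho * np.eye(d)
        W[i] = np.linalg.solve(A, X.T @ y + rho * (z - U[i]))
    z = np.mean([w + u for w, u in zip(W, U)], axis=0)  # consensus average
    for i in range(len(shards)):
        U[i] += W[i] - z                                # dual ascent step

print("consensus error:", np.linalg.norm(z - w_true))
```

Swapping the local least-squares solve for a hinge-loss subproblem recovers the SVM variant; the coordination pattern is unchanged.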
References
Proceedings ArticleDOI

Membership Inference Attacks Against Machine Learning Models

TL;DR: This work quantitatively investigates how machine learning models leak information about the individual data records on which they were trained and empirically evaluates the inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon.
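The confidence-gap intuition behind membership inference can be sketched minimally. Here a 1-nearest-neighbour classifier stands in for an overfit target model (it is perfectly confident on its own training points); the confidence proxy and threshold are illustrative assumptions, not the shadow-model construction of the paper.

```python
import numpy as np

# Minimal sketch of a confidence-based membership inference attack:
# the attacker guesses "member" whenever the target model is unusually
# confident on a query point, exploiting overfitting.

rng = np.random.default_rng(0)
members = rng.normal(0.0, 1.0, size=(50, 2))      # target's training data
non_members = rng.normal(0.0, 1.0, size=(50, 2))  # same distribution, unseen

def target_confidence(x, train):
    """Confidence proxy: decays with distance to the nearest training point."""
    d = np.min(np.linalg.norm(train - x, axis=1))
    return np.exp(-d)

threshold = 0.9  # attacker flags "member" above this confidence

def guess_member(x):
    return target_confidence(x, members) > threshold

tp = sum(guess_member(x) for x in members)      # members correctly flagged
fp = sum(guess_member(x) for x in non_members)  # non-members wrongly flagged
total = len(members) + len(non_members)
attack_accuracy = (tp + (len(non_members) - fp)) / total
print(f"attack accuracy: {attack_accuracy:.2f}")
```

Because members sit at zero distance from the training set, the attack always recovers them; its overall accuracy then depends on how often unseen points happen to fall near a training point.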
Proceedings ArticleDOI

Privacy-Preserving Deep Learning

TL;DR: This paper presents a practical system that enables multiple parties to jointly learn an accurate neural-network model for a given objective without sharing their input datasets, and exploits the fact that the optimization algorithms used in modern deep learning, namely, those based on stochastic gradient descent, can be parallelized and executed asynchronously.
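The selective parameter-sharing idea described above can be sketched in miniature: each party runs local gradient steps on its own data and uploads only a fraction of gradient coordinates to a shared model. The linear model, sizes, and sharing fraction below are illustrative assumptions, not the paper's neural-network setup.

```python
import numpy as np

# Hypothetical sketch of selective gradient sharing: parties download the
# global model, compute a local gradient, and upload only the
# largest-magnitude coordinates, so no raw data leaves a party.

rng = np.random.default_rng(1)
d = 10
w_true = rng.normal(size=d)

parties = []
for _ in range(2):
    X = rng.normal(size=(100, d))
    parties.append((X, X @ w_true))   # each party's private data

w_global = np.zeros(d)                # parameter-server copy
share_frac, lr = 0.5, 0.05            # upload 50% of coordinates per round

for _ in range(200):
    for X, y in parties:
        w = w_global.copy()                     # download global model
        grad = X.T @ (X @ w - y) / len(y)       # local least-squares gradient
        k = int(share_frac * d)
        top = np.argsort(-np.abs(grad))[:k]     # largest-magnitude coordinates
        w_global[top] -= lr * grad[top]         # upload only those updates

print("parameter error:", np.linalg.norm(w_global - w_true))
```

Even with half the coordinates withheld each round, the shared model still converges, since the selection rotates across coordinates as their errors shrink.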
Proceedings ArticleDOI

Exploiting Unintended Feature Leakage in Collaborative Learning

TL;DR: In this article, passive and active inference attacks are proposed that exploit leakage of information about participants' training data in federated learning: an adversarial participant can infer the presence of exact data points, as well as properties that hold only for a subset of the training data and are independent of the properties of the joint model.
Journal ArticleDOI

Differentially Private Empirical Risk Minimization

TL;DR: This work proposes a new method, objective perturbation, for privacy-preserving machine learning algorithm design, and shows both theoretically and empirically that this method is superior to the previous state of the art, output perturbation, in managing the inherent tradeoff between privacy and learning performance.
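Objective perturbation can be sketched in a few lines: instead of adding noise to the trained model (output perturbation), a random linear term b·w is added to the regularized objective before minimizing. The noise calibration below is schematic, not a verified (epsilon, delta) accounting, and all sizes are illustrative.

```python
import numpy as np

# Minimal sketch of objective perturbation for private ERM: minimize
# mean logistic loss + (lam/2)||w||^2 + (b.w)/n, with b drawn randomly.

rng = np.random.default_rng(2)
n, d, lam, eps = 200, 5, 0.1, 1.0

X = rng.normal(size=(n, d))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))  # ||x|| <= 1
y = np.sign(X @ rng.normal(size=d) + 0.1 * rng.normal(size=n))

# Perturbation b: norm ~ Gamma(d, 2/eps), direction uniform on the sphere.
norm_b = rng.gamma(shape=d, scale=2.0 / eps)
b = rng.normal(size=d)
b *= norm_b / np.linalg.norm(b)

w = np.zeros(d)
for _ in range(2000):  # gradient descent on the perturbed objective
    margins = y * (X @ w)
    grad_loss = -(X * (y / (1 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= 0.5 * (grad_loss + lam * w + b / n)

train_acc = np.mean(np.sign(X @ w) == y)
print(f"training accuracy: {train_acc:.2f}")
```

Because the noise enters the objective rather than the solution, the minimizer itself absorbs the perturbation, which is the source of the favorable privacy/utility tradeoff the paper establishes.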
Journal ArticleDOI

Privacy-Preserving Deep Learning via Additively Homomorphic Encryption

TL;DR: This paper presents a privacy-preserving deep learning system in which many learning participants perform neural-network-based deep learning over the combined dataset of all participants, without revealing any participant's local data.
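The additive homomorphism that makes such encrypted gradient aggregation possible can be demonstrated with a toy Paillier cryptosystem: multiplying two ciphertexts decrypts to the sum of the plaintexts, so a server can aggregate encrypted updates it cannot read. The tiny primes below are for demonstration only, never for real use.

```python
import math
import random

# Toy Paillier cryptosystem illustrating the additive homomorphism:
# E(m1) * E(m2) mod n^2 decrypts to m1 + m2.

p, q = 1009, 1013
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def encrypt(m, rng=random.Random(3)):
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = rng.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Two parties encrypt their local updates; the server adds them blindly.
c1, c2 = encrypt(41), encrypt(17)
aggregate = (c1 * c2) % n2     # ciphertext multiplication ...
print(decrypt(aggregate))      # ... prints 58, the sum of the plaintexts
```

In the paper's setting the plaintexts are (quantized) gradient values, so the server learns only the aggregate model update, never any individual participant's gradients.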