Journal ArticleDOI
Privacy-Preserving Distributed Multi-Task Learning against Inference Attack in Cloud Computing
TL;DR: Because of the powerful computing and storage capability of cloud computing, machine learning as a service (MLaaS) has recently been valued by organizations for machine learning training.
Citations
Proceedings ArticleDOI
The Promising Role of Representation Learning for Distributed Computing Continuum Systems
TL;DR: This paper discusses the promising role of representation learning (ReL) for distributed computing continuum systems (DCCS) in terms of different aspects, including device condition monitoring, predictions, and management of the systems, and provides a list of ReL algorithms and their pitfalls, which helps DCCS by considering various constraints.
Journal ArticleDOI
DisBezant: Secure and Robust Federated Learning Against Byzantine Attack in IoT-Enabled MTS
TL;DR: DisBezant proposes a credibility-based mechanism to resist Byzantine attacks on non-IID (not independent and identically distributed) datasets, which are usually gathered from heterogeneous ships.
Journal ArticleDOI
Governance and sustainability of distributed continuum systems: a big data approach
TL;DR: In this paper, a general governance and sustainability architecture for distributed computing continuum systems (DCCS) is proposed that reflects the human body's self-healing model. The proposed model has three stages: first, it analyzes system data to acquire knowledge; second, it leverages that knowledge to monitor and predict future conditions; and third, it takes further actions to autonomously solve any issue or to alert administrators.
References
Journal ArticleDOI
Multi-key privacy-preserving deep learning in cloud computing
TL;DR: This work presents a basic scheme based on multi-key fully homomorphic encryption (MK-FHE), proposes a hybrid-structure scheme combining a double decryption mechanism with FHE, and proves that both multi-key privacy-preserving deep learning schemes over encrypted data are secure.
Journal ArticleDOI
Privacy Preserving Deep Computation Model on Cloud for Big Data Feature Learning
TL;DR: To improve the efficiency of big data feature learning, the paper proposes a privacy-preserving deep computation model that offloads the expensive operations to the cloud, using the BGV encryption scheme and employing cloud servers to perform the high-order back-propagation algorithm on encrypted data efficiently for deep computation model training.
Journal ArticleDOI
Calibrating Noise to Sensitivity in Private Data Analysis
TL;DR: A very clean definition of privacy, now known as differential privacy, and a measure of its loss are provided, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f.
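The idea summarized above, adding noise whose scale is calibrated to the function's sensitivity, is commonly realized as the Laplace mechanism. A minimal sketch follows; the function and parameter names are illustrative, not taken from the paper:

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Release `value` with Laplace noise of scale sensitivity/epsilon.

    Illustrative names; `sensitivity` is the max change in the query's
    output when one individual's record is added or removed.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon
    return value + rng.laplace(loc=0.0, scale=scale)

# A counting query has sensitivity 1: one record changes the count by at most 1.
true_count = 42
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

Smaller epsilon means stronger privacy and proportionally larger noise, since the noise scale is sensitivity/epsilon.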
Journal ArticleDOI
An Efficient Privacy-Preserving Outsourced Calculation Toolkit With Multiple Keys
TL;DR: It is proved that the proposed EPOM achieves the goal of secure integer number processing without resulting in privacy leakage of data to unauthorized parties.
Journal ArticleDOI
Dynamic Differential Privacy for ADMM-Based Distributed Classification Learning
Tao Zhang, Quanyan Zhu, et al.
TL;DR: This paper develops two methods to provide differential privacy to distributed learning algorithms over a network: it decentralizes the learning algorithm using the alternating direction method of multipliers (ADMM), and it proposes dual variable perturbation and primal variable perturbation to provide dynamic differential privacy.
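The paper perturbs ADMM primal and dual variables; the toy sketch below illustrates only the shared principle, bounding the sensitivity of a node's local update by clipping and then perturbing it before it is exchanged. All names and the least-squares setting are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def noisy_local_update(w, X, y, epsilon, clip=1.0, rng=None):
    """One node's local gradient, clipped and perturbed before sharing.

    Clipping bounds the update's norm (and hence its sensitivity), so
    Laplace noise with scale proportional to clip/epsilon can mask any
    single record's influence before the update leaves the node.
    """
    rng = np.random.default_rng() if rng is None else rng
    grad = X.T @ (X @ w - y) / len(y)      # local least-squares gradient
    norm = np.linalg.norm(grad)
    if norm > clip:                        # bound sensitivity by clipping
        grad = grad * (clip / norm)
    noise = rng.laplace(scale=2 * clip / epsilon, size=grad.shape)
    return grad + noise                    # perturbed update sent to neighbors
```

In a full ADMM scheme the same calibrate-then-perturb step would be applied to the primal or dual variables at each iteration, with the privacy budget split across iterations.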