Journal ArticleDOI
DisBezant: Secure and Robust Federated Learning Against Byzantine Attack in IoT-Enabled MTS
Xindi Ma, Qi Jiang, Mohammad Shojafar, Mamoun Alazab, Sachin Kumar, Saru Kumari +5 more
- Vol. 24, pp. 2492-2502
TL;DR: DisBezant, as described in this paper, proposes a credibility-based mechanism to resist Byzantine attacks on non-iid (not independent and identically distributed) datasets, which are usually gathered from heterogeneous ships.
Abstract:
With the intelligentization of the Maritime Transportation System (MTS), Internet of Things (IoT) and machine learning technologies have been widely used to achieve intelligent control and route planning for ships. As an important branch of machine learning, federated learning is the first choice for training an accurate joint model without sharing ships’ data directly. However, many challenges remain unsolved when using federated learning in IoT-enabled MTS, such as privacy preservation and Byzantine attacks. To surmount these challenges, a novel mechanism, namely DisBezant, is designed to achieve secure and Byzantine-robust federated learning in IoT-enabled MTS. Specifically, a credibility-based mechanism is proposed to resist Byzantine attacks on non-iid (not independent and identically distributed) datasets, which are usually gathered from heterogeneous ships. The credibility is introduced to measure the trustworthiness of the knowledge uploaded by ships and is updated based on their shared information in each epoch. Then, we design an efficient privacy-preserving gradient aggregation protocol based on a secure two-party computation protocol. With the help of a central server, we can accurately recognise the Byzantine attackers and update the global model parameters privately. Furthermore, we theoretically discuss the privacy preservation and efficiency of DisBezant. To verify the effectiveness of DisBezant, we evaluate it over three real datasets, and the results demonstrate that DisBezant can efficiently and effectively achieve Byzantine-robust federated learning. Even when 40% of the participating nodes are Byzantine attackers, DisBezant can still recognise them and ensure accurate model training.
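The credibility idea in the abstract can be illustrated with a toy sketch. This is not the paper's actual protocol (in particular, the secure two-party computation layer is omitted), and the function names and the exponential down-weighting rule are illustrative assumptions: the server keeps a per-client credibility score, down-weights gradients that deviate from the credibility-weighted aggregate, and renormalises each epoch.

```python
import numpy as np

def credibility_weighted_aggregate(grads, cred):
    """Aggregate client gradients, weighting each row by its credibility."""
    w = cred / cred.sum()
    return (grads * w[:, None]).sum(axis=0)

def update_credibility(grads, cred, temp=1.0):
    """Shrink the credibility of gradients far from the current aggregate,
    then renormalise so the scores remain a probability vector."""
    agg = credibility_weighted_aggregate(grads, cred)
    dist = np.linalg.norm(grads - agg, axis=1)
    new_cred = cred * np.exp(-dist / temp)
    return new_cred / new_cred.sum()
```

In a toy run with four honest clients uploading similar gradients and one sign-flipping attacker, the attacker's credibility drops sharply after a single update, so its gradient contributes little to subsequent aggregates.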
Citations
Journal ArticleDOI
A survey on federated learning: challenges and applications
TL;DR: Federated learning (FL), as discussed by the authors, is a secure distributed machine learning paradigm that addresses the issue of data silos in building a joint model; its unique distributed training mode and the advantages of its secure aggregation mechanism make it well suited to practical applications with strict privacy requirements.
Journal ArticleDOI
Federated learning for green shipping optimization and management
TL;DR: In this paper, a two-stage method based on federated learning and optimization techniques is developed to predict ship fuel consumption and optimize ship sailing speed, achieving both information sharing and data privacy protection.
Journal ArticleDOI
RTGA: Robust ternary gradients aggregation for federated learning
TL;DR: In this article, the authors propose the robust ternary gradient aggregation (RTGA) algorithm, which can efficiently handle different attacks using two novel mechanisms; its client-side quantization mechanism compresses gradients using only two bits to store each coordinate of the gradient vector.
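The two-bit ternary compression mentioned in this TL;DR can be sketched as stochastic ternarization, a generic technique rather than RTGA's actual mechanism; the function names are made up. Each coordinate is mapped to {-1, 0, +1} times a scale, which fits in two bits and is unbiased in expectation:

```python
import numpy as np

def ternarize(grad, rng=None):
    """Stochastically quantize a gradient to {-1, 0, +1} times a scale s.
    Coordinates are kept with probability |g_i| / s, so E[t * s] = grad."""
    rng = rng or np.random.default_rng(0)
    s = np.abs(grad).max()
    if s == 0.0:
        return np.zeros_like(grad), 0.0
    keep = rng.random(grad.shape) < np.abs(grad) / s
    return np.sign(grad) * keep, s

def dequantize(ternary, s):
    """Recover an (unbiased) approximation of the original gradient."""
    return ternary * s
```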
Journal ArticleDOI
Federated learning for energy constrained devices: a systematic mapping study
TL;DR: In this paper, the authors conduct a systematic mapping study on federated ML optimization techniques for energy-constrained IoT devices and provide a structured overview of the field using a set of carefully chosen research questions.
Peer Review
On the Pitfalls of Security Evaluation of Robust Federated Learning
TL;DR: In this paper, the experimental setups used to evaluate the robustness of FL poisoning defenses are investigated, and the potential repercussions of such setups on the key conclusions these works draw about the robustness of the proposed defenses are discussed.
References
Posted Content
Federated Learning: Strategies for Improving Communication Efficiency
Jakub Konečný, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, Dave Bacon +5 more
TL;DR: Two ways to reduce the uplink communication costs are proposed: structured updates, where the user directly learns an update from a restricted space parametrized using a smaller number of variables, e.g. either low-rank or a random mask; and sketched updates, which learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling.
Proceedings ArticleDOI
Practical Secure Aggregation for Privacy-Preserving Machine Learning
Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H. Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, Karn Seth +8 more
TL;DR: In this paper, the authors proposed a secure aggregation of high-dimensional data for federated deep neural networks, which allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner without learning each user's individual contribution.
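The core trick behind such secure aggregation can be sketched as pairwise additive masking. This is a heavily simplified sketch: the real protocol also survives client dropouts via secret-shared keys, and the pairwise seed agreement (here a plain dictionary) would come from a key exchange.

```python
import numpy as np

MOD = 2 ** 32

def masked_update(x, my_id, peer_ids, pair_seeds):
    """Blind a client's update with one shared mask per peer. The client
    with the smaller id adds the mask, the other subtracts it, so all
    masks cancel when the server sums every client's masked vector."""
    y = np.asarray(x, dtype=np.int64) % MOD
    for pid in peer_ids:
        rng = np.random.default_rng(pair_seeds[frozenset((my_id, pid))])
        mask = rng.integers(0, MOD, size=y.shape)
        y = (y + mask) % MOD if my_id < pid else (y - mask) % MOD
    return y
```

The server learns only the sum of the raw vectors, never any individual contribution, which matches the property described in the TL;DR.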
Proceedings ArticleDOI
Privacy-Preserving Deep Learning
Reza Shokri, Vitaly Shmatikov +1 more
TL;DR: This paper presents a practical system that enables multiple parties to jointly learn an accurate neural-network model for a given objective without sharing their input datasets, and exploits the fact that the optimization algorithms used in modern deep learning, namely, those based on stochastic gradient descent, can be parallelized and executed asynchronously.
Proceedings ArticleDOI
Deep Learning with Differential Privacy
TL;DR: This work develops new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy, and demonstrates that deep neural networks can be trained with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
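The clip-then-add-noise core of that approach can be sketched as below; the moments-accountant privacy analysis is omitted, and the parameter values and function name are illustrative assumptions.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_mult=1.1, rng=None):
    """One DP-SGD-style aggregation: clip each example's gradient to
    clip_norm, sum, add Gaussian noise scaled to the clip bound, average."""
    rng = rng or np.random.default_rng(0)
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    # Scale down any row whose norm exceeds clip_norm; leave small rows alone.
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    noisy = clipped.sum(axis=0) + rng.normal(0.0, noise_mult * clip_norm,
                                             per_example_grads.shape[1])
    return noisy / len(per_example_grads)
```

Clipping bounds each example's influence; the noise then masks any single example's contribution, which is what yields the formal differential-privacy guarantee.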
Proceedings ArticleDOI
SecureML: A System for Scalable Privacy-Preserving Machine Learning
Payman Mohassel, Yupeng Zhang +1 more
TL;DR: This paper presents new and efficient protocols for privacy-preserving machine learning for linear regression, logistic regression, and neural network training using the stochastic gradient descent method, and implements the first privacy-preserving system for training neural networks.
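The secret-sharing substrate of such two-party systems can be sketched as additive sharing over a 32-bit ring. This is a minimal sketch: SecureML's actual protocols add Beaver-triple multiplication and fixed-point truncation, which are omitted here.

```python
import numpy as np

MOD = 2 ** 32

def share(x, rng):
    """Split x into two additive shares: s0 + s1 == x (mod 2**32).
    Each share alone is uniformly random and reveals nothing about x."""
    s0 = rng.integers(0, MOD, size=np.shape(x), dtype=np.uint64)
    s1 = (np.asarray(x, dtype=np.uint64) - s0) % MOD
    return s0, s1

def reconstruct(s0, s1):
    """Recombine the two shares to recover the secret."""
    return (s0 + s1) % MOD
```

Linear operations run locally on shares: each party adds its shares of two secrets and the reconstruction of the sums is correct, which is why the linear steps of SGD are cheap in this model.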