Open Access Journal ArticleDOI

Distributed additive encryption and quantization for privacy preserving federated deep learning

TLDR
This work develops a practical, computationally efficient encryption-based protocol for federated deep learning in which the key pairs are collaboratively generated without the help of a third party; the protocol combines quantization of the model parameters on the clients with an approximated aggregation on the server.
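The two building blocks named in the summary, client-side quantization and additive aggregation of protected parameters, can be illustrated with a short sketch. This is a minimal illustration rather than the paper's actual scheme: the real protocol relies on collaboratively generated key pairs and additively homomorphic encryption, whereas the pairwise masks, function names, and constants below are assumptions chosen only to show why additivity lets the server recover the aggregate and nothing else.

```python
# Minimal sketch (not the paper's protocol): uniform quantization of client
# parameters plus pairwise additive masks that cancel in the sum, standing in
# for additively homomorphic encryption. All names and constants are illustrative.
import numpy as np

def quantize(params, num_bits=8, clip=1.0):
    """Map float parameters in [-clip, clip] to integers in [0, 2**num_bits - 1]."""
    levels = 2 ** num_bits - 1
    clipped = np.clip(params, -clip, clip)
    return np.round((clipped + clip) / (2 * clip) * levels).astype(np.int64)

def aggregate(client_params, num_bits=8, clip=1.0, modulus=2 ** 32):
    """Approximate average of client parameters from masked, quantized uploads."""
    n = len(client_params)
    dim = client_params[0].size
    levels = 2 ** num_bits - 1
    rng = np.random.default_rng(0)

    # Pairwise masks: client i adds r, client j subtracts it, so the masks
    # vanish in the modular sum seen by the server.
    masks = [np.zeros(dim, dtype=np.int64) for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = rng.integers(0, modulus, size=dim)
            masks[i] = (masks[i] + r) % modulus
            masks[j] = (masks[j] - r) % modulus

    # Each client uploads only its masked, quantized parameters.
    uploads = [(quantize(p, num_bits, clip) + m) % modulus
               for p, m in zip(client_params, masks)]

    # Server: the modular sum reveals only the aggregate of the quantized values.
    total = np.zeros(dim, dtype=np.int64)
    for u in uploads:
        total = (total + u) % modulus

    # Dequantize the summed integers and divide by n: an approximated average.
    summed_float = total / levels * (2 * clip) - n * clip
    return summed_float / n

# Example: three clients, five parameters each.
clients = [np.random.default_rng(s).uniform(-1, 1, 5) for s in (1, 2, 3)]
print(aggregate(clients))  # close to np.mean(clients, axis=0)
```

Because each upload is masked, the server learns only the modular sum of the quantized parameters, from which it decodes an approximate average; this is the sense in which the aggregation is "approximated" by the quantization step.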
About
This article was published in Neurocomputing on 2021-11-06 and is currently open access. It has received 21 citations to date. The article focuses on the topics of Encryption and Distributed key generation.


Citations
Journal ArticleDOI

Federated learning on non-IID data: A survey

TL;DR: In this article, a detailed analysis of the influence of non-IID data on both parametric and non-parametric machine learning models in both horizontal and vertical federated learning is provided.
Journal ArticleDOI

Application of Robust Zero-Watermarking Scheme Based on Federated Learning for Securing the Healthcare Data

TL;DR: In this article, a robust zero-watermarking scheme based on federated learning is proposed to address the privacy and security issues of the teledermatology healthcare framework. The scheme is suited to the specific requirements of medical images: it neither changes the important information contained in the images nor divulges private data.
Journal ArticleDOI

A federated data-driven evolutionary algorithm

TL;DR: In this paper, a federated data-driven evolutionary optimization framework is proposed that can perform data-driven optimization when the data are distributed across multiple devices; a sorted model aggregation method is developed for aggregating local surrogates based on radial-basis-function networks.
Journal ArticleDOI

Privacy-Preserving Aggregation in Federated Learning: A Survey

TL;DR: This survey aims to bridge the gap between the large number of studies on PPFL, in which PPAgg is adopted to provide a privacy guarantee, and the lack of a comprehensive survey of the PPAgg protocols applied in FL systems.
Journal ArticleDOI

Practical Private Aggregation in Federated Learning Against Inference Attack

TL;DR: Wang et al. propose a federated learning framework that protects the data privacy of worker devices against inference attacks with minimal accuracy cost and low computation and communication costs, and that does not rely on secure pairwise communication channels.
References
Journal ArticleDOI

Long short-term memory

TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
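As a rough illustration of the constant-error-carousel idea, the sketch below implements a single step of an LSTM cell in its modern form with a forget gate (which the original 1997 formulation did not yet include); the additive update of the cell state is what lets error signals flow across long time lags. The weight layout and names are assumptions made for the example.

```python
# Minimal sketch of one LSTM cell step (modern variant with a forget gate).
# The additive update of the cell state c is the "constant error carousel".
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """x: input vector; h_prev, c_prev: previous hidden and cell states.
    W (4h x d), U (4h x h), b (4h,) stack the input, forget, output gate
    and candidate parameters; the layout is illustrative."""
    z = W @ x + U @ h_prev + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    g = np.tanh(g)
    c = f * c_prev + i * g      # additive cell-state update (error carousel)
    h = o * np.tanh(c)          # gated hidden output
    return h, c
```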
Journal ArticleDOI

Deep learning

TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Journal ArticleDOI

ImageNet classification with deep convolutional neural networks

TL;DR: A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.
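The "dropout" regularization mentioned in this summary can be sketched in a few lines. The sketch below uses the inverted-dropout variant common today (the original work instead rescaled weights at test time); the function name and defaults are illustrative.

```python
# Illustrative sketch of (inverted) dropout: randomly zero activations during
# training and rescale so the expected activation matches inference behaviour.
import numpy as np

def dropout(activations, p_drop=0.5, training=True, rng=None):
    if not training or p_drop == 0.0:
        return activations
    rng = rng if rng is not None else np.random.default_rng()
    keep_mask = rng.random(activations.shape) >= p_drop
    return activations * keep_mask / (1.0 - p_drop)
```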
Posted Content

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Batch Normalization normalizes layer inputs for each training mini-batch to reduce internal covariate shift in deep neural networks, and achieves state-of-the-art performance on ImageNet.
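A minimal sketch of the mini-batch normalization described above, for one fully connected layer's inputs; the running statistics used at inference time and the parameter updates are omitted, and the names are illustrative.

```python
# Minimal sketch of batch normalization: normalize each feature with the
# mini-batch statistics, then apply the learned scale (gamma) and shift (beta).
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """x: (batch, features); gamma, beta: (features,)."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```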
Journal ArticleDOI

Deep learning in neural networks

TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, and reviews deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, as well as indirect search for short programs encoding deep and large networks.