Privacy-Preserving Deep Learning via Additively Homomorphic Encryption.
Citations
2,593 citations
Cites background or methods from "Privacy-Preserving Deep Learning vi..."
...The above architecture is proved to prevent data leakage to the semi-honest server if gradient aggregation is done with SMC [9] or homomorphic encryption [51]....
[...]
...However, no security guarantee is provided, and these gradients may actually leak important data information [51] when exposed together with data structure, such as in the case of image pixels....
[...]
...A horizontal federated learning system typically assumes honest participants and security against an honest-but-curious server [9, 51]....
[...]
...The authors of [51] used additively homomorphic encryption to preserve the privacy of gradients and enhance the security of the system....
[...]
...A typical assumption is that the participants are honest whereas the server is honest but curious; therefore, no leakage of information from any participants to the server is allowed [51]....
[...]
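The excerpts above repeatedly credit [51] with using additively homomorphic encryption so the honest-but-curious server aggregates gradients without seeing them. As a minimal illustration of that additive property, here is a toy textbook-Paillier sketch (the primes, helper names, and quantized gradient values are illustrative assumptions, not the paper's actual implementation, and the key sizes are far too small for real security):

```python
import math
import random

def keygen(p, q):
    # textbook Paillier with g = n + 1; p, q are toy primes, NOT secure sizes
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)              # valid because L(g^lam mod n^2) = lam when g = n + 1
    return n, (lam, mu, n)

def encrypt(n, m):
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:        # r must be a unit mod n
        r = random.randrange(1, n)
    # (1 + n)^m = 1 + m*n (mod n^2), blinded by the random factor r^n
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    lam, mu, n = sk
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n

def add_ciphertexts(n, cs):
    # additive homomorphism: the product of ciphertexts encrypts the sum
    out = 1
    for c in cs:
        out = (out * c) % (n * n)
    return out

# three participants encrypt quantized gradient values; the server
# multiplies ciphertexts, aggregating without seeing any plaintext
n, sk = keygen(47, 59)
grads = [3, 5, 7]
agg = add_ciphertexts(n, [encrypt(n, g) for g in grads])
print(decrypt(sk, agg))   # 15
```

Only the key holder(s) can decrypt the aggregate, which is why the excerpts describe this as security against a semi-honest server rather than against malicious participants.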
1,317 citations
Cites background or methods from "Privacy-Preserving Deep Learning vi..."
...al model parameters. Security Analysis. The above architecture is proved to prevent data leakage to the semi-honest server if gradient aggregation is done with SMC [9] or homomorphic encryption [51]. But it may be subject to attack in another security model by a malicious participant training a Generative Adversarial Network (GAN) in the collaborative learning process [29]. 2.4.2 Vertical Federa...
[...]
...om an optimization algorithm like Stochastic Gradient Descent (SGD) [41, 58]; however, no security guarantee is provided, and these gradients may actually leak important data information [51] when exposed together with data structure, such as in the case of image pixels. Researchers have considered the situation when one of the members of a federated learning system maliciously attacks oth...
[...]
...e centralized model together with other data owners. A secure aggregation scheme to protect the privacy of aggregated user updates under their federated learning framework is also introduced [9]. Ref. [51] uses additively homomorphic encryption for model parameter aggregation to provide security against the central server. In [60], a multi-task style federated learning system is proposed to allow multiple sites to com...
[...]
...: X_i = X_j, Y_i = Y_j, I_i ≠ I_j, ∀ D_i, D_j, i ≠ j (2). Security Definition. A horizontal federated learning system typically assumes honest participants and security against an honest-but-curious server [9, 51]. That is, only the server can compromise... (Source: Q. Yang et al., ACM Trans. Intell. Syst. Technol., Vol. 10, No. 2, Article 12, February 2019. Figure: (a) Horizontal Federated Learning; (b) Vert...)
[...]
... or cloud server. A typical assumption is that the participants are honest whereas the server is honest-but-curious; therefore, no leakage of information from any participant to the server is allowed [51]. The training process of such a system usually contains the following four steps: •Step 1: participants locally compute training gradients, mask a selection of gradients with encryption [51], differen...
[...]
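The four-step process above says participants mask their gradients before upload; besides homomorphic encryption [51], the excerpts name SMC-based secure aggregation [9] as the other option. A toy sketch of the pairwise additive-masking idea behind such schemes (names and the 16-bit modulus are illustrative assumptions; real protocols derive the shared masks via key agreement and handle participant dropout):

```python
import random

def mask_updates(updates, modulus=2**16):
    # each ordered pair (i, j), i < j, shares a random mask; participant i
    # adds it and participant j subtracts it, so masks cancel in the sum
    n = len(updates)
    masks = {(i, j): random.randrange(modulus)
             for i in range(n) for j in range(i + 1, n)}
    masked = []
    for i, u in enumerate(updates):
        m = u
        for j in range(n):
            if i < j:
                m = (m + masks[(i, j)]) % modulus
            elif j < i:
                m = (m - masks[(j, i)]) % modulus
        masked.append(m)
    return masked

# the server only sees masked values, yet their sum equals the true sum
updates = [3, 5, 7]
masked = mask_updates(updates)
print(sum(masked) % 2**16)   # 15
```

Each individual masked value is uniformly random to the server, but every pairwise mask appears once with a plus sign and once with a minus sign, so the aggregate is exact.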
701 citations
Cites methods from "Privacy-Preserving Deep Learning vi..."
...Although both the encryption techniques presented in [153] and [79] can prevent the curious server from extracting infor-...
[...]
...In [153], the homomorphic encryption technique is introduced to protect privacy of participants’ shared parameters from an honest-but-curious server....
[...]
565 citations
450 citations
References
7,244 citations
7,008 citations
"Privacy-Preserving Deep Learning vi..." refers background in this paper
...For decryption and CPA security, see the paper [25]....
[...]
5,311 citations
3,475 citations
"Privacy-Preserving Deep Learning vi..." refers background in this paper
...1) Asynchronous SGD (ASGD) [16], [27], No Privacy Protection: Both our system and that of [28] rely on the fact that neural networks can be trained via a variant of SGD called asynchronous SGD [16], [27] with data parallelism and model parallelism....
[...]
...Our system achieves identical accuracy to a corresponding deep learning system (i.e., asynchronous SGD (ASGD)) trained over the joint dataset of all participants....
[...]
...3) Our System: Our system can be called gradients-encrypted ASGD for the following reasons....
[...]
...5] can be called gradients-selective ASGD for the following reasons....
[...]
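The snippets above contrast plain asynchronous SGD with the paper's "gradients-encrypted ASGD", where the server applies gradient updates without seeing plaintexts. A toy single-process sketch of the underlying download–compute–upload ASGD pattern (the linear model, data shards, and learning rate are hypothetical; the encryption step is indicated only in a comment):

```python
import random

# toy linear model y = w * x; two participants hold disjoint data shards
data = {0: [(1.0, 2.0), (2.0, 4.0)], 1: [(3.0, 6.0), (4.0, 8.0)]}
w = 0.0           # global parameter held by the server
lr = 0.01

def local_gradient(shard, w):
    # gradient of mean squared error for y = w * x on one local shard
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

for step in range(200):
    p = random.choice([0, 1])          # participants act asynchronously
    g = local_gradient(data[p], w)     # computed on private local data
    # in gradients-encrypted ASGD, g would be uploaded under additive HE
    # and applied to (encrypted) parameters without server-side decryption
    w -= lr * g

print(round(w, 2))   # w is close to the true slope 2.0
```

Because the server only ever adds (possibly encrypted) gradients into the parameters, the learning trajectory matches plain ASGD, which is the accuracy claim quoted in the excerpts.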
2,944 citations