Open Access Posted Content
Secure Outsourced Matrix Computation and Application to Neural Networks
TLDR
In this paper, the authors propose a secure matrix computation mechanism based on homomorphic encryption that can be applied to most existing HE schemes and achieves reasonable performance for practical use; for example, their implementation takes 9.21 seconds to multiply two encrypted square matrices of order 64 and 2.56 seconds to transpose a square matrix of order 64.
Abstract
Homomorphic Encryption (HE) is a powerful cryptographic primitive for addressing privacy and security issues when outsourcing computation on sensitive data to an untrusted environment. Compared with secure Multi-Party Computation (MPC), HE has advantages in supporting non-interactive operations and saving on communication costs. However, it has not yet yielded an optimal solution for modern learning frameworks, partially due to a lack of efficient matrix computation mechanisms. In this work, we present a practical solution to encrypt a matrix homomorphically and perform arithmetic operations on encrypted matrices. Our solution includes a novel matrix encoding method and an efficient evaluation strategy for basic matrix operations such as addition, multiplication, and transposition. We also explain how to encrypt more than one matrix in a single ciphertext, yielding better amortized performance. Our solution is generic in the sense that it can be applied to most existing HE schemes. It also achieves reasonable performance for practical use; for example, our implementation takes 9.21 seconds to multiply two encrypted square matrices of order 64 and 2.56 seconds to transpose a square matrix of order 64. Our secure matrix computation mechanism underpins our new framework EDM, which stands for encrypted data and encrypted model. To the best of our knowledge, this is the first work that supports secure evaluation of the prediction phase on both encrypted data and an encrypted model, whereas previous work only supported applying a plain model to encrypted data. As a benchmark, we report an experimental result on classifying handwritten images using convolutional neural networks (CNN). Our implementation on the MNIST dataset takes 28.59 seconds to compute ten likelihoods of 64 input images simultaneously, yielding an amortized rate of 0.45 seconds per image.
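To make the evaluation strategy concrete, the following is a minimal plaintext sketch (Python/NumPy) of the rotation-based multiplication as we understand it from the paper: a d x d product is computed as a sum of d elementwise products of two permuted matrices. Homomorphically, each permutation is evaluated on a ciphertext that packs the whole matrix, using slot rotations and plaintext masks, so the product costs O(d) ciphertext-ciphertext multiplications; the function names below are our own illustrative choices, not code from the paper.

```python
# Plaintext simulation of the rotation-based encrypted matrix product.
# In the real scheme each d x d matrix is packed row-major into the slots
# of one ciphertext, and sigma/tau/phi/psi are evaluated with homomorphic
# slot rotations plus plaintext masks; here we apply them to plain arrays.
import numpy as np

def sigma(A):
    """sigma(A)[i, j] = A[i, (i + j) mod d]: aligns A's rows by diagonal."""
    d = A.shape[0]
    return np.array([[A[i, (i + j) % d] for j in range(d)] for i in range(d)])

def tau(B):
    """tau(B)[i, j] = B[(i + j) mod d, j]: aligns B's columns by diagonal."""
    d = B.shape[0]
    return np.array([[B[(i + j) % d, j] for j in range(d)] for i in range(d)])

def phi(M):
    """Shift every row one position to the left (cyclic column rotation)."""
    return np.roll(M, -1, axis=1)

def psi(M):
    """Shift every column one position up (cyclic row rotation)."""
    return np.roll(M, -1, axis=0)

def rotation_matmul(A, B):
    """A @ B as sum_{k=0}^{d-1} phi^k(sigma(A)) * psi^k(tau(B)), elementwise."""
    d = A.shape[0]
    Ak, Bk = sigma(A), tau(B)
    acc = np.zeros_like(Ak)
    for _ in range(d):
        acc = acc + Ak * Bk          # one SIMD (slot-wise) multiply-and-add
        Ak, Bk = phi(Ak), psi(Bk)    # one rotation of each operand
    return acc

d = 4
A = np.random.randint(0, 10, (d, d))
B = np.random.randint(0, 10, (d, d))
assert np.array_equal(rotation_matmul(A, B), A @ B)
```

Transposition is evaluated in the same spirit with a single permutation of the packed slots, which is consistent with it being cheaper than multiplication in the benchmarks above.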
Citations
Proceedings ArticleDOI
CHET: an optimizing compiler for fully-homomorphic neural-network inferencing
Roshan Dathathri, Olli Saarikivi, Hao Chen, Kim Laine, Kristin E. Lauter, Saeed Maleki, Madanlal Musuvathi, Todd Mytkowicz
TL;DR: CHET is a domain-specific optimizing compiler designed to make programming FHE applications easier; it generates homomorphic circuits that outperform expert-tuned circuits and makes it easy to switch across different encryption schemes.
Proceedings ArticleDOI
Efficient Multi-Key Homomorphic Encryption with Packed Ciphertexts with Application to Oblivious Neural Network Inference
TL;DR: This paper presents multi-key variants of two HE schemes with packed ciphertexts, together with new relinearization algorithms that are simpler and faster than the previous method of Chen et al. (TCC 2017).
Journal ArticleDOI
Privacy-preserving cloud computing on sensitive data: A survey of methods, products and challenges
TL;DR: This survey covers technologies that allow privacy-aware outsourcing of storage and processing of sensitive data to public clouds and reviews masking methods for outsourced data based on data splitting and anonymization, in addition to cryptographic methods covered in other surveys.
Journal ArticleDOI
A training-integrity privacy-preserving federated learning scheme with trusted execution environment
TL;DR: This paper proposes a new privacy-preserving federated learning scheme, based on a Trusted Execution Environment (TEE), that guarantees the integrity of deep learning processes, along with a training-integrity protocol for the scheme in which causative attacks can be detected.
Posted Content
HEAX: An Architecture for Computing on Encrypted Data
TL;DR: This paper presents HEAX, a novel hardware architecture for FHE that achieves unprecedented performance improvements, together with a new highly parallelizable architecture for the number-theoretic transform (NTT), which may be of independent interest since the NTT is used in many lattice-based cryptographic systems.
References
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: As discussed by the authors, state-of-the-art performance was achieved by a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: In this paper, the authors investigated the effect of convolutional network depth on accuracy in the large-scale image recognition setting and showed that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Journal ArticleDOI
ImageNet classification with deep convolutional neural networks
TL;DR: A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images of the ImageNet LSVRC-2010 contest into 1000 different classes; a recently developed regularization method called "dropout" proved to be very effective.
Posted Content
TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek G. Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vijay K. Vasudevan, Fernanda B. Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng
TL;DR: This paper describes the TensorFlow interface and an implementation of that interface built at Google, which has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields.
Posted Content
ADADELTA: An Adaptive Learning Rate Method
TL;DR: This paper presents ADADELTA, a novel per-dimension learning rate method for gradient descent that dynamically adapts over time using only first-order information and has minimal computational overhead beyond vanilla stochastic gradient descent.
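Since the summary only names the method's key property, here is a minimal NumPy sketch of the ADADELTA update rule from that paper; the helper name adadelta_step and the toy quadratic objective are our own, for illustration.

```python
# ADADELTA: per-dimension steps derived from running RMS values of gradients
# and updates, so no global learning rate needs to be tuned.
import numpy as np

def adadelta_step(x, grad, Eg2, Edx2, rho=0.95, eps=1e-6):
    """One ADADELTA update using only first-order (gradient) information."""
    Eg2 = rho * Eg2 + (1 - rho) * grad ** 2                  # running E[g^2]
    dx = -(np.sqrt(Edx2 + eps) / np.sqrt(Eg2 + eps)) * grad  # RMS-scaled step
    Edx2 = rho * Edx2 + (1 - rho) * dx ** 2                  # running E[dx^2]
    return x + dx, Eg2, Edx2

# Toy usage: minimize f(x) = ||x||^2, whose gradient is 2x.
x = np.array([3.0, -2.0])
Eg2, Edx2 = np.zeros_like(x), np.zeros_like(x)
for _ in range(2000):
    x, Eg2, Edx2 = adadelta_step(x, 2.0 * x, Eg2, Edx2)
print(x)  # heads toward the minimizer [0, 0]
```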