Open Access · Posted Content

Secure Outsourced Matrix Computation and Application to Neural Networks.

TLDR
In this paper, the authors propose a secure matrix computation mechanism based on homomorphic encryption that can be applied to most existing HE schemes and achieves reasonable performance for practical use; for example, their implementation takes 9.21 seconds to multiply two encrypted square matrices of order 64 and 2.56 seconds to transpose a square matrix of order 64.
Abstract
Homomorphic Encryption (HE) is a powerful cryptographic primitive for addressing privacy and security issues that arise when computation on sensitive data is outsourced to an untrusted environment. Compared to secure Multi-Party Computation (MPC), HE has the advantages of supporting non-interactive operations and saving on communication costs. However, it has not yet yielded an optimal solution for modern learning frameworks, partly due to the lack of efficient matrix computation mechanisms. In this work, we present a practical solution for encrypting a matrix homomorphically and performing arithmetic operations on encrypted matrices. Our solution includes a novel matrix encoding method and an efficient evaluation strategy for basic matrix operations such as addition, multiplication, and transposition. We also explain how to encrypt more than one matrix in a single ciphertext, yielding better amortized performance. Our solution is generic in the sense that it can be applied to most of the existing HE schemes. It also achieves reasonable performance for practical use; for example, our implementation takes 9.21 seconds to multiply two encrypted square matrices of order 64 and 2.56 seconds to transpose a square matrix of order 64. Our secure matrix computation mechanism underlies our new framework E2DM, which stands for encrypted data and encrypted model. To the best of our knowledge, this is the first work that supports secure evaluation of the prediction phase on both encrypted data and an encrypted model, whereas previous work only supported applying a plain model to encrypted data. As a benchmark, we report an experimental result for classifying handwritten images using convolutional neural networks (CNN). Our implementation on the MNIST dataset takes 28.59 seconds to compute ten likelihoods for 64 input images simultaneously, yielding an amortized rate of 0.45 seconds per image.
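The matrix encoding idea in the abstract can be illustrated on plaintext vectors. The sketch below is a plaintext simulation under stated assumptions, not the paper's actual HE implementation: it assumes a SIMD-style packing in which a d × d matrix is flattened row-major into d² "slots" of one ciphertext, so that homomorphic addition acts slot-wise and transposition becomes a fixed permutation of slots (which, under encryption, would be realized with rotations and plaintext masks rather than direct indexing). The helper names (`pack`, `slotwise_add`, `transpose_packed`) are hypothetical.

```python
def pack(A):
    # Row-major flattening of a d x d matrix into d^2 "slots",
    # mimicking SIMD packing of many values into a single ciphertext.
    return [x for row in A for x in row]

def unpack(v, d):
    # Inverse of pack: rebuild the d x d matrix from the slot vector.
    return [v[i * d:(i + 1) * d] for i in range(d)]

def slotwise_add(u, v):
    # Homomorphic addition of two packed ciphertexts acts slot-wise,
    # so matrix addition needs no data movement at all.
    return [a + b for a, b in zip(u, v)]

def transpose_packed(v, d):
    # Transposition is a fixed permutation of slots: output slot
    # i*d + j receives input slot j*d + i. Under encryption this
    # permutation would be built from rotations and masks.
    return [v[(k % d) * d + (k // d)] for k in range(d * d)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
v = pack(A)                                  # [1, 2, 3, 4]
print(unpack(slotwise_add(v, pack(B)), 2))   # [[6, 8], [10, 12]]
print(unpack(transpose_packed(v, 2), 2))     # [[1, 3], [2, 4]]
```

The point of the simulation is that once a matrix lives in slots, matrix operations reduce to slot-wise arithmetic plus slot permutations, which is exactly the cost model the paper's evaluation strategy optimizes.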



Citations
Proceedings ArticleDOI

CHET: an optimizing compiler for fully-homomorphic neural-network inferencing

TL;DR: CHET is a domain-specific optimizing compiler designed to make programming FHE applications easier; it generates homomorphic circuits that outperform expert-tuned circuits and makes it easy to switch between different encryption schemes.
Proceedings ArticleDOI

Efficient Multi-Key Homomorphic Encryption with Packed Ciphertexts with Application to Oblivious Neural Network Inference

TL;DR: This paper presents multi-key variants of two HE schemes with packed ciphertexts, along with new relinearization algorithms that are simpler and faster than the previous method by Chen et al. (TCC 2017).
Journal ArticleDOI

Privacy-preserving cloud computing on sensitive data: A survey of methods, products and challenges

TL;DR: This survey covers technologies that allow privacy-aware outsourcing of storage and processing of sensitive data to public clouds, reviewing masking methods for outsourced data based on data splitting and anonymization, in addition to the cryptographic methods covered in other surveys.
Journal ArticleDOI

A training-integrity privacy-preserving federated learning scheme with trusted execution environment

TL;DR: A new privacy-preserving federated learning scheme is proposed that guarantees the integrity of deep learning processes using the Trusted Execution Environment (TEE), together with a training-integrity protocol for this scheme in which causative attacks can be detected.
Posted Content

HEAX: An Architecture for Computing on Encrypted Data

TL;DR: HEAX, a novel hardware architecture for FHE that achieves unprecedented performance improvements, is presented, along with a new highly parallelizable architecture for the number-theoretic transform (NTT), which is of independent interest since the NTT is used in many lattice-based cryptographic systems.
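Since the HEAX summary highlights the number-theoretic transform, a brief plaintext sketch of the textbook NTT may help; this is not HEAX's hardware design. The NTT is a discrete Fourier transform over the integers modulo a prime q, with a primitive n-th root of unity in place of the complex root e^{2πi/n}. The parameters below (prime modulus 17, root 4 for length n = 4, where 4 has multiplicative order 4 mod 17) are purely illustrative.

```python
def ntt(a, root, mod):
    # Naive O(n^2) number-theoretic transform: a DFT over Z_mod using
    # a primitive n-th root of unity instead of a complex exponential.
    # Production implementations (and HEAX's hardware) use O(n log n)
    # butterfly networks, but the math is the same.
    n = len(a)
    return [sum(a[j] * pow(root, i * j, mod) for j in range(n)) % mod
            for i in range(n)]

def intt(A, root, mod):
    # Inverse transform: evaluate at powers of root^{-1} and scale
    # by n^{-1}, both computed via Fermat's little theorem (mod prime).
    n = len(A)
    inv_n = pow(n, mod - 2, mod)
    inv_root = pow(root, mod - 2, mod)
    return [(x * inv_n) % mod for x in ntt(A, inv_root, mod)]

a = [1, 2, 3, 4]
A = ntt(a, 4, 17)
print(intt(A, 4, 17))   # round-trips back to [1, 2, 3, 4]
```

In lattice-based HE, polynomial multiplication is done coefficient-wise in the NTT domain, which is why a fast, parallel NTT unit dominates the performance of hardware accelerators like HEAX.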
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network, consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieved state-of-the-art performance on ImageNet.
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Journal ArticleDOI

ImageNet classification with deep convolutional neural networks

TL;DR: A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.
Posted Content

ADADELTA: An Adaptive Learning Rate Method

Matthew D. Zeiler
- 22 Dec 2012 - 
TL;DR: A novel per-dimension learning rate method for gradient descent, called ADADELTA, is presented; it dynamically adapts over time using only first-order information and has minimal computational overhead beyond vanilla stochastic gradient descent.
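The ADADELTA update described above can be sketched in a few lines. This is a minimal list-based version written from the method's published update rule; the defaults rho=0.95 and eps=1e-6 are typical values, not taken from any specific implementation.

```python
def adadelta_step(x, grad, state, rho=0.95, eps=1e-6):
    # One ADADELTA update. state holds two per-dimension running
    # averages: E[g^2] (squared gradients) and E[dx^2] (squared updates).
    Eg2, Edx2 = state
    new_x = []
    for i, (xi, gi) in enumerate(zip(x, grad)):
        Eg2[i] = rho * Eg2[i] + (1 - rho) * gi * gi
        # Per-dimension step: RMS of past updates over RMS of gradients,
        # so no global learning rate needs to be tuned.
        dx = -(((Edx2[i] + eps) ** 0.5) / ((Eg2[i] + eps) ** 0.5)) * gi
        Edx2[i] = rho * Edx2[i] + (1 - rho) * dx * dx
        new_x.append(xi + dx)
    return new_x, (Eg2, Edx2)

# Toy usage: minimize f(x) = x^2, whose gradient is 2x.
x, state = [1.0], ([0.0], [0.0])
for _ in range(2000):
    x, state = adadelta_step(x, [2 * x[0]], state)
print(x)   # x has moved toward the minimum at 0
```

Note how the eps in the numerator bootstraps the very first steps (when E[dx^2] is still zero) and the units of the update match the units of the parameter, which is the paper's motivation over plain AdaGrad.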