Open Access Proceedings Article

Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware

TLDR
Slalom is a framework that securely delegates execution of all linear layers in a DNN from a TEE to a faster, yet untrusted, co-located processor.
Abstract
As Machine Learning (ML) gets applied to security-critical or sensitive domains, there is a growing need for integrity and privacy for outsourced ML computations. A pragmatic solution comes from Trusted Execution Environments (TEEs), which use hardware and software protections to isolate sensitive computations from the untrusted software stack. However, these isolation guarantees come at a price in performance, compared to untrusted alternatives. This paper initiates the study of high performance execution of Deep Neural Networks (DNNs) in TEEs by efficiently partitioning DNN computations between trusted and untrusted devices. Building upon an efficient outsourcing scheme for matrix multiplication, we propose Slalom, a framework that securely delegates execution of all linear layers in a DNN from a TEE (e.g., Intel SGX or Sanctum) to a faster, yet untrusted, co-located processor. We evaluate Slalom by running DNNs in an Intel SGX enclave, which selectively delegates work to an untrusted GPU. For canonical DNNs (VGG16, MobileNet and ResNet variants) we obtain 6x to 20x increases in throughput for verifiable inference, and 4x to 11x for verifiable and private inference.
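The "efficient outsourcing scheme for matrix multiplication" that Slalom builds on is based on Freivalds' algorithm: the enclave lets the untrusted GPU compute a product and then verifies it with random vector probes in O(n^2) time instead of recomputing it in O(n^3). The sketch below is an illustrative simplification, not Slalom's implementation — the paper works over a finite field and adds blinding for privacy, while this version uses floating-point NumPy and shows only the verification idea; the function name `freivalds_check` is ours.

```python
import numpy as np

def freivalds_check(A, B, C, reps=20):
    """Probabilistically verify that C == A @ B.

    Each round draws a random 0/1 vector r and checks
    A @ (B @ r) == C @ r, which costs O(n^2) per round.
    A wrong C is accepted with probability at most 2**-reps.
    """
    n = C.shape[1]
    rng = np.random.default_rng()
    for _ in range(reps):
        r = rng.integers(0, 2, size=(n, 1))  # random binary probe vector
        if not np.allclose(A @ (B @ r), C @ r):
            return False  # untrusted result rejected
    return True  # accepted; false-accept probability <= 2**-reps
```

In the Slalom setting, the expensive product `C = A @ B` would be computed by the untrusted GPU, while the trusted enclave runs only the cheap check before using `C` in the next DNN layer.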


Citations
Journal Article

VerifyNet: Secure and Verifiable Federated Learning

TL;DR: VerifyNet is proposed as the first privacy-preserving and verifiable federated learning framework; it claims that an adversary cannot deceive users by forging proofs unless it can solve the NP-hard problem adopted in the model.
Proceedings Article

CrypTFlow: Secure TensorFlow Inference

TL;DR: In this article, the authors present CrypTFlow, a system that converts TensorFlow inference code into Secure Multi-Party Computation (MPC) protocols at the push of a button.
Journal Article

Privacy and Security Issues in Deep Learning: A Survey

TL;DR: This paper briefly introduces four types of attacks on deep learning (DL) and the corresponding privacy-preserving techniques, and summarizes recent attack and defense methods for DL privacy and security.
Proceedings Article

CryptGPU: Fast Privacy-Preserving Machine Learning on the GPU

TL;DR: CryptGPU is a system for privacy-preserving machine learning that implements all operations on the GPU (graphics processing unit) and achieves state-of-the-art performance on convolutional neural networks.