
Showing papers on "Secure multi-party computation published in 2018"


Proceedings ArticleDOI
15 Oct 2018
TL;DR: A general framework for privacy-preserving machine learning is designed and implemented, and used to obtain new solutions for training linear regression, logistic regression, and neural network models, with variants of each building block that are secure against malicious adversaries who deviate arbitrarily.
Abstract: Machine learning is widely used to produce models for a range of applications and is increasingly offered as a service by major technology companies. However, the required massive data collection raises privacy concerns during both training and prediction stages. In this paper, we design and implement a general framework for privacy-preserving machine learning and use it to obtain new solutions for training linear regression, logistic regression and neural network models. Our protocols are in a three-server model wherein data owners secret share their data among three servers who train and evaluate models on the joint data using three-party computation (3PC). Our main contribution is a new and complete framework ($\text{ABY}^3$) for efficiently switching back and forth between arithmetic, binary, and Yao 3PC which is of independent interest. Many of the conversions are based on new techniques that are designed and optimized for the first time in this paper. We also propose new techniques for fixed-point multiplication of shared decimal values that extends beyond the three-party case, and customized protocols for evaluating piecewise polynomial functions. We design variants of each building block that are secure against malicious adversaries who deviate arbitrarily. We implement our system in C++. Our protocols are up to four orders of magnitude faster than the best prior work, hence significantly reducing the gap between privacy-preserving and plaintext training.
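
As a rough illustration of the arithmetic layer described above, the following sketch (not the paper's ABY3 protocol or API; the ring size, precision, and function names are assumptions) shows how decimal values can be encoded as fixed-point ring elements and split into 3-out-of-3 additive shares, with addition performed locally on shares.

```python
# Minimal sketch: fixed-point encoding over the ring Z_{2^64} and 3-out-of-3
# additive secret sharing. Parameters and names are illustrative, not ABY3's.
import secrets

RING = 2 ** 64          # arithmetic ring Z_{2^64}
FRAC_BITS = 13          # fractional precision of the fixed-point encoding

def encode(x: float) -> int:
    """Map a real number to a ring element by scaling and rounding."""
    return round(x * (1 << FRAC_BITS)) % RING

def decode(v: int) -> float:
    """Map a ring element back to a real number (top half interpreted as negative)."""
    if v >= RING // 2:
        v -= RING
    return v / (1 << FRAC_BITS)

def share(v: int, n: int = 3):
    """Split v into n additive shares that sum to v modulo the ring size."""
    shares = [secrets.randbelow(RING) for _ in range(n - 1)]
    shares.append((v - sum(shares)) % RING)
    return shares

def reconstruct(shares) -> int:
    return sum(shares) % RING

# Each data owner secret-shares its value; addition is purely local on shares.
a, b = encode(3.25), encode(-1.5)
sa, sb = share(a), share(b)
sum_shares = [(x + y) % RING for x, y in zip(sa, sb)]
print(decode(reconstruct(sum_shares)))   # ~1.75
```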

451 citations


Proceedings ArticleDOI
15 Oct 2018
TL;DR: This tutorial provides a comprehensive coverage of SMC techniques, starting from precise definitions and fundamental techniques, and includes state-of-the-art protocols for oblivious transfer (OT) and OT extension in the presence of semi-honest and malicious users.

Abstract: Secure multi-party computation (SMC) is an emerging topic which has been drawing growing attention during recent decades. There are many examples which show the importance of SMC constructions in practice, such as privacy-preserving decision making and machine learning, auctions, private set intersection, and others. In this tutorial, we provide a comprehensive coverage of SMC techniques, starting from precise definitions and fundamental techniques. Subsequently, a significant portion of the tutorial focuses on recent advances in general SMC constructions. We cover garbled circuit evaluation (GCE) and linear secret sharing (LSS), which are commonly used for secure two-party and multi-party computation, respectively. The coverage includes both standard adversarial models: semi-honest and malicious. For GCE, we start with Yao's original garbled circuits construction [30] for semi-honest adversaries and subsequently cover its recent optimizations such as the "free XOR," the garbled row reduction, the half-gates optimization, and the use of AES NI techniques. We follow with a discussion of techniques for making GCE resilient to malicious behavior, which includes the cut-and-choose approach and additional techniques to deter known attacks in the presence of malicious participants. In addition, we include state-of-the-art protocols for oblivious transfer (OT) and OT extension in the presence of semi-honest and malicious users. For LSS, we start from standard solutions for the semi-honest adversarial model including [5, 28] and subsequently move to recent efficient constructions for semi-honest and malicious adversarial models. The coverage includes different types of corruption thresholds (with and without honest majority), which imply different guarantees with respect to abort.
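
To make the "free XOR" optimization mentioned above concrete, here is a minimal sketch, assuming 128-bit labels: every wire's two labels differ by one global secret offset R, so an XOR gate needs no garbled table and no cryptographic operations. This shows only the label invariant, not a full garbling scheme.

```python
# Minimal sketch of the "free XOR" idea for garbled circuits, not a full
# garbling scheme: every wire gets two labels that differ by a single global
# offset R, so an XOR gate is evaluated by XORing labels, with no table.
import secrets

KAPPA = 128
R = secrets.randbits(KAPPA) | 1          # global offset (low bit often fixed for point-and-permute)

def new_wire():
    """Return (label_for_0, label_for_1) with label_1 = label_0 XOR R."""
    l0 = secrets.randbits(KAPPA)
    return l0, l0 ^ R

a0, a1 = new_wire()
b0, b1 = new_wire()

# XOR gate: the garbler defines the output-0 label as a0 ^ b0.
# The evaluator just XORs whatever two labels it holds; the invariant holds:
c0 = a0 ^ b0
assert (a1 ^ b0) == c0 ^ R    # inputs (1,0) -> output label for 1
assert (a0 ^ b1) == c0 ^ R    # inputs (0,1) -> output label for 1
assert (a1 ^ b1) == c0        # inputs (1,1) -> output label for 0
print("free-XOR invariant holds")
```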

311 citations


Posted Content
TL;DR: A new framework for privacy preserving deep learning that allows one to implement complex privacy preserving constructs such as Federated Learning, Secure Multiparty Computation, and Differential Privacy while still exposing a familiar deep learning API to the end-user is detailed.
Abstract: We detail a new framework for privacy preserving deep learning and discuss its assets. The framework puts a premium on ownership and secure processing of data and introduces a valuable representation based on chains of commands and tensors. This abstraction allows one to implement complex privacy preserving constructs such as Federated Learning, Secure Multiparty Computation, and Differential Privacy while still exposing a familiar deep learning API to the end-user. We report early results on the Boston Housing and Pima Indian Diabetes datasets. While the privacy features apart from Differential Privacy do not impact the prediction accuracy, the current implementation of the framework introduces a significant overhead in performance, which will be addressed at a later stage of the development. We believe this work is an important milestone introducing the first reliable, general framework for privacy preserving deep learning.

302 citations


Proceedings ArticleDOI
29 May 2018
TL;DR: Chameleon, as presented in this paper, is a hybrid (mixed-protocol) framework for secure function evaluation (SFE) which enables two parties to jointly compute a function without disclosing their private inputs, and it supports signed fixed-point numbers.
Abstract: We present Chameleon, a novel hybrid (mixed-protocol) framework for secure function evaluation (SFE) which enables two parties to jointly compute a function without disclosing their private inputs. Chameleon combines the best aspects of generic SFE protocols with the ones that are based upon additive secret sharing. In particular, the framework performs linear operations in the ring $\mathbb{Z}_{2^l}$ using additively secret shared values and nonlinear operations using Yao's Garbled Circuits or the Goldreich-Micali-Wigderson protocol. Chameleon departs from the common assumption of additive or linear secret sharing models where three or more parties need to communicate in the online phase: the framework allows two parties with private inputs to communicate in the online phase under the assumption of a third node generating correlated randomness in an offline phase. Almost all of the heavy cryptographic operations are precomputed in an offline phase which substantially reduces the communication overhead. Chameleon is both scalable and significantly more efficient than the ABY framework (NDSS'15) it is based on. Our framework supports signed fixed-point numbers. In particular, Chameleon's vector dot product of signed fixed-point numbers improves the efficiency of mining and classification of encrypted data for algorithms based upon heavy matrix multiplications. Our evaluation of Chameleon on a 5 layer convolutional deep neural network shows 133x and 4.2x faster executions than Microsoft CryptoNets (ICML'16) and MiniONN (CCS'17), respectively.
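
The following sketch illustrates the general pattern of a helper-assisted online phase (it is not Chameleon's implementation; the ring size and helper role here are assumptions): a third node deals a Beaver multiplication triple in an offline phase, and two parties holding additive shares over $\mathbb{Z}_{2^l}$ multiply online using only cheap openings.

```python
# Minimal sketch of Beaver-triple multiplication on two-party additive shares
# over Z_{2^l}, with a helper generating the triple offline. Illustrative only.
import secrets

L = 64
MOD = 1 << L

def share2(v):
    s0 = secrets.randbelow(MOD)
    return s0, (v - s0) % MOD

def open2(s0, s1):
    return (s0 + s1) % MOD

# Offline: helper samples a triple (a, b, c = a*b) and shares it to the parties.
a, b = secrets.randbelow(MOD), secrets.randbelow(MOD)
c = (a * b) % MOD
a_sh, b_sh, c_sh = share2(a), share2(b), share2(c)

# Online: parties hold shares of private inputs x and y.
x, y = 12345, 678
x_sh, y_sh = share2(x), share2(y)

# Mask the input shares with the triple and open the masked values.
e = open2((x_sh[0] - a_sh[0]) % MOD, (x_sh[1] - a_sh[1]) % MOD)   # e = x - a
f = open2((y_sh[0] - b_sh[0]) % MOD, (y_sh[1] - b_sh[1]) % MOD)   # f = y - b

# z_i = c_i + e*b_i + f*a_i (one party also adds e*f) gives shares of x*y.
z0 = (c_sh[0] + e * b_sh[0] + f * a_sh[0] + e * f) % MOD
z1 = (c_sh[1] + e * b_sh[1] + f * a_sh[1]) % MOD
assert open2(z0, z1) == (x * y) % MOD
print("Beaver multiplication of shared values is correct")
```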

258 citations


Posted Content
TL;DR: Chameleon combines the best aspects of generic SFE protocols with the ones that are based upon additive secret sharing, and improves the efficiency of mining and classification of encrypted data for algorithms based upon heavy matrix multiplications.
Abstract: We present Chameleon, a novel hybrid (mixed-protocol) framework for secure function evaluation (SFE) which enables two parties to jointly compute a function without disclosing their private inputs. Chameleon combines the best aspects of generic SFE protocols with the ones that are based upon additive secret sharing. In particular, the framework performs linear operations in the ring $\mathbb{Z}_{2^l}$ using additively secret shared values and nonlinear operations using Yao's Garbled Circuits or the Goldreich-Micali-Wigderson protocol. Chameleon departs from the common assumption of additive or linear secret sharing models where three or more parties need to communicate in the online phase: the framework allows two parties with private inputs to communicate in the online phase under the assumption of a third node generating correlated randomness in an offline phase. Almost all of the heavy cryptographic operations are precomputed in an offline phase which substantially reduces the communication overhead. Chameleon is both scalable and significantly more efficient than the ABY framework (NDSS'15) it is based on. Our framework supports signed fixed-point numbers. In particular, Chameleon's vector dot product of signed fixed-point numbers improves the efficiency of mining and classification of encrypted data for algorithms based upon heavy matrix multiplications. Our evaluation of Chameleon on a 5 layer convolutional deep neural network shows 133x and 4.2x faster executions than Microsoft CryptoNets (ICML'16) and MiniONN (CCS'17), respectively.

193 citations


Journal ArticleDOI
02 Jan 2018
TL;DR: In this paper, the authors provide an overview of existing PSI protocols in various security models, and propose a new PSI protocol whose runtime is superior to that of existing protocols.
Abstract: Private set intersection (PSI) allows two parties to compute the intersection of their sets without revealing any information about items that are not in the intersection. It is one of the best studied applications of secure computation and many PSI protocols have been proposed. However, the variety of existing PSI protocols makes it difficult to identify the solution that performs best in a respective scenario, especially since they were not compared in the same setting. In addition, existing PSI protocols are several orders of magnitude slower than an insecure naive hashing solution, which is used in practice. In this article, we review the progress made on PSI protocols and give an overview of existing protocols in various security models. We then focus on PSI protocols that are secure against semi-honest adversaries and take advantage of the most recent efficiency improvements in Oblivious Transfer (OT) extension, propose significant optimizations to previous PSI protocols, and suggest a new PSI protocol whose runtime is superior to that of existing protocols. We compare the performance of the protocols, both theoretically and experimentally, by implementing all protocols on the same platform, give recommendations on which protocol to use in a particular setting, and evaluate the progress on PSI protocols by comparing them to the currently employed insecure naive hashing protocol. We demonstrate the feasibility of our new PSI protocol by processing two sets with a billion elements each.
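
For context, the insecure naive hashing baseline that the article benchmarks against can be sketched in a few lines (the salt, item format, and hash choice are illustrative): each party exchanges salted hashes of its items and the intersection is computed on the hash values. Its weakness is that low-entropy items can be brute-forced from their hashes, which is what motivates true PSI protocols.

```python
# Sketch of the *insecure* naive hashing baseline, not a PSI protocol:
# membership of guessable items leaks, since anyone can recompute the hashes.
import hashlib

def hashes(items, salt=b"session-salt"):
    return {hashlib.sha256(salt + x.encode()).hexdigest(): x for x in items}

alice = ["alice@example.com", "bob@example.com", "carol@example.com"]
bob = ["bob@example.com", "dave@example.com"]

h_alice, h_bob = hashes(alice), hashes(bob)
intersection = [h_alice[h] for h in h_alice.keys() & h_bob.keys()]
print(intersection)   # ['bob@example.com']
```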

158 citations


Book ChapterDOI
29 Apr 2018
TL;DR: New two-round MPC protocols are provided under the minimal assumption that two-round oblivious transfer (OT) exists; if the assumed OT protocol is secure against semi-honest (respectively, malicious) adversaries, then so is the resulting MPC protocol.
Abstract: We provide new two-round multiparty secure computation (MPC) protocols assuming the minimal assumption that two-round oblivious transfer (OT) exists. If the assumed two-round OT protocol is secure against semi-honest adversaries (in the plain model) then so is our two-round MPC protocol. Similarly, if the assumed two-round OT protocol is secure against malicious adversaries (in the common random/reference string model) then so is our two-round MPC protocol. Previously, two-round MPC protocols were only known under relatively stronger computational assumptions. Finally, we provide several extensions.

135 citations


Journal ArticleDOI
TL;DR: The authors apply the homomorphic encryption scheme of Cheon et al. (ASIACRYPT 2017) for efficient arithmetic over real numbers and devise a new encoding method to reduce the storage of the encrypted database; the submission based on this work was selected as the best solution of Track 3 at the iDASH privacy and security competition 2017.
Abstract: Security concerns have been raised since big data became a prominent tool in data analysis. For instance, many machine learning algorithms aim to generate prediction models using training data which contain sensitive information about individuals. The cryptography community is considering secure computation as a solution for privacy protection. In particular, practical requirements have triggered research on the efficiency of cryptographic primitives. This paper presents a method to train a logistic regression model without information leakage. We apply the homomorphic encryption scheme of Cheon et al. (ASIACRYPT 2017) for efficient arithmetic over real numbers, and devise a new encoding method to reduce the storage of the encrypted database. In addition, we adapt Nesterov’s accelerated gradient method to reduce the number of iterations as well as the computational cost while maintaining the quality of the output classifier. Our method shows a state-of-the-art performance of a homomorphic encryption system in a real-world application. The submission based on this work was selected as the best solution of Track 3 at the iDASH privacy and security competition 2017. For example, it took about six minutes to obtain a logistic regression model given a dataset consisting of 1579 samples, each of which has 18 features with a binary outcome variable. We present a practical solution for outsourcing analysis tools such as logistic regression analysis while preserving data confidentiality.
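
The iteration that the paper evaluates under homomorphic encryption is Nesterov's accelerated gradient for logistic regression. The plaintext sketch below shows only that iteration (the data, learning rate, and momentum values are illustrative); the CKKS encodings and the polynomial approximation of the sigmoid are elided.

```python
# Plaintext sketch of Nesterov's accelerated gradient for logistic regression.
# Illustrative hyperparameters and synthetic data; no homomorphic encryption here.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nesterov_logreg(X, y, iters=25, lr=0.5, gamma=0.9):
    n, d = X.shape
    w = np.zeros(d)          # model weights
    v = np.zeros(d)          # momentum (look-ahead) term
    for _ in range(iters):
        grad = X.T @ (sigmoid(X @ (w + gamma * v)) - y) / n   # gradient at look-ahead point
        v = gamma * v - lr * grad
        w = w + v
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
true_w = np.array([1.5, -2.0, 0.5, 0.0])
y = (sigmoid(X @ true_w) > rng.uniform(size=200)).astype(float)
print(nesterov_logreg(np.c_[X, np.ones(200)], y)[:4])   # recovered weights (roughly)
```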

129 citations


Book ChapterDOI
19 Aug 2018
TL;DR: Protocols for secure multiparty computation enable a set of parties to compute a function of their inputs without revealing anything but the output and the security properties of the protocol must be preserved in the presence of adversarial behavior.
Abstract: Protocols for secure multiparty computation enable a set of parties to compute a function of their inputs without revealing anything but the output. The security properties of the protocol must be preserved in the presence of adversarial behavior. The two classic adversary models considered are semi-honest (where the adversary follows the protocol specification but tries to learn more than allowed by examining the protocol transcript) and malicious (where the adversary may follow any arbitrary attack strategy). Protocols for semi-honest adversaries are often far more efficient, but in many cases the security guarantees are not strong enough.

127 citations


Proceedings Article
03 Dec 2018
TL;DR: This work presents a distributed learning approach that combines differential privacy with secure multi-party computation, explores two popular methods of differential privacy, output perturbation and gradient perturbation, and advances the state-of-the-art for both methods in the distributed learning setting.
Abstract: Distributed learning allows a group of independent data owners to collaboratively learn a model over their data sets without exposing their private data. We present a distributed learning approach that combines differential privacy with secure multi-party computation. We explore two popular methods of differential privacy, output perturbation and gradient perturbation, and advance the state-of-the-art for both methods in the distributed learning setting. In our output perturbation method, the parties combine local models within a secure computation and then add the required differential privacy noise before revealing the model. In our gradient perturbation method, the data owners collaboratively train a global model via an iterative learning algorithm. At each iteration, the parties aggregate their local gradients within a secure computation, adding sufficient noise to ensure privacy before the gradient updates are revealed. For both methods, we show that the noise can be reduced in the multi-party setting by adding the noise inside the secure computation after aggregation, asymptotically improving upon the best previous results. Experiments on real world data sets demonstrate that our methods provide substantial utility gains for typical privacy requirements.
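
A small numeric sketch of the core observation, under illustrative parameters (this is not the paper's mechanism or code): when aggregation happens inside a secure computation, one noise draw calibrated to the aggregate replaces one draw per party, so the released model is less noisy.

```python
# Toy comparison of per-party noise versus a single noise draw added after
# aggregation (the secure-aggregation step itself is elided; values illustrative).
import numpy as np

rng = np.random.default_rng(1)
m = 10                         # number of data owners
local_models = rng.normal(loc=1.0, scale=0.1, size=(m, 5))
sensitivity, epsilon = 1.0, 1.0
scale = sensitivity / epsilon  # Laplace scale for one party's release

# Naive: each party perturbs its own model before sharing.
noisy_locals = local_models + rng.laplace(scale=scale, size=local_models.shape)
avg_naive = noisy_locals.mean(axis=0)

# Inside MPC: aggregate first, then add one noise draw scaled to the average.
avg_mpc = local_models.mean(axis=0) + rng.laplace(scale=scale / m, size=5)

true_avg = local_models.mean(axis=0)
print("error with per-party noise:       ", np.linalg.norm(avg_naive - true_avg))
print("error with noise after aggregation:", np.linalg.norm(avg_mpc - true_avg))
```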

123 citations


Journal ArticleDOI
TL;DR: Simulation results indicate that the secure MPC-based protocol can be a viable privacy-preserving data aggregation mechanism since it not only reduces the overhead with respect to FHE but also almost matches the performance of the Paillier cryptosystem when it is used within a properly sized AMI network.

Journal ArticleDOI
TL;DR: In this paper, a distributed projected gradient-based algorithm is proposed for secure multiparty computation with perfect correctness; the correctness and computational efficiency of the proposed algorithms are verified by two case studies of power systems.

Proceedings ArticleDOI
15 Oct 2018
TL;DR: This work suggests a new approach for fast generation of pseudo-random instances of VOLE via a deterministic local expansion of a pair of short correlated seeds and no interaction, and provides the first example of compressing a non-trivial and cryptographically useful correlation with good concrete efficiency.
Abstract: Oblivious linear-function evaluation (OLE) is a secure two-party protocol allowing a receiver to learn any linear combination of a pair of field elements held by a sender. OLE serves as a common building block for secure computation of arithmetic circuits, analogously to the role of oblivious transfer (OT) for boolean circuits. A useful extension of OLE is vector OLE (VOLE), allowing the receiver to learn any linear combination of two vectors held by the sender. In several applications of OLE, one can replace a large number of instances of OLE by a smaller number of instances of VOLE. This motivates the goal of amortizing the cost of generating long instances of VOLE. We suggest a new approach for fast generation of pseudo-random instances of VOLE via a deterministic local expansion of a pair of short correlated seeds and no interaction. This provides the first example of compressing a non-trivial and cryptographically useful correlation with good concrete efficiency. Our VOLE generators can be used to enhance the efficiency of a host of cryptographic applications. These include secure arithmetic computation and non-interactive zero-knowledge proofs with reusable preprocessing. Our VOLE generators are based on a novel combination of function secret sharing (FSS) for multi-point functions and linear codes in which decoding is intractable. Their security can be based on variants of the learning parity with noise (LPN) assumption over large fields that resist known attacks. We provide several constructions that offer tradeoffs between different efficiency measures and the underlying intractability assumptions.
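
For reference, the VOLE correlation itself can be written down directly with a trusted dealer, as in the sketch below (the field size and names are assumptions); the paper's contribution is to generate this correlation from short correlated seeds without a dealer, which the sketch does not attempt.

```python
# Minimal sketch of the vector-OLE correlation: sender holds (u, v), receiver
# holds (x, w) with w = u*x + v over a prime field. Dealer-based, illustrative.
import secrets

P = 2**61 - 1        # a prime field

def dealer_vole(n):
    """Sender gets (u, v); receiver gets (x, w) with w_i = u_i * x + v_i mod P."""
    u = [secrets.randbelow(P) for _ in range(n)]
    v = [secrets.randbelow(P) for _ in range(n)]
    x = secrets.randbelow(P)
    w = [(ui * x + vi) % P for ui, vi in zip(u, v)]
    return (u, v), (x, w)

(sender_u, sender_v), (recv_x, recv_w) = dealer_vole(4)
assert all(w == (u * recv_x + v) % P for u, v, w in zip(sender_u, sender_v, recv_w))
print("VOLE correlation holds for", len(recv_w), "entries")
```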

Journal ArticleDOI
TL;DR: The widening range of applications of a cryptographic technique called Multi-Party Computation is discussed, ranging from securing small high-value items such as cryptographic keys through to securing an entire database.
Abstract: We discuss the widely increasing range of applications of a cryptographic technique called Multi-Party Computation. For many decades this was perceived to be of purely theoretical interest, but now it has started to find application in a number of use cases. We highlight in this paper a number of these, ranging from securing small high value items such as cryptographic keys, through to securing an entire database.

Journal ArticleDOI
TL;DR: Private comparison is a primitive for many cryptographic tasks; recently, several schemes for quantum private comparison (QPC) have been proposed, in which two users can compare the equality of their private inputs.
Abstract: Private comparison is a primitive for many cryptographic tasks, and recently several schemes for the quantum private comparison (QPC) have been proposed, where two users can compare the equality of...


Posted Content
TL;DR: This work proposes three-party secure computation protocols for various NN building blocks such as matrix multiplication, convolutions, Rectified Linear Units, Maxpool, normalization and so on, achieving full security against one semi-honest corruption and privacy against one malicious corruption in the sense of Araki et al. (CCS'16).
Abstract: Neural Networks (NN) provide a powerful method for machine learning training and inference. To effectively train, it is desirable for multiple parties to combine their data — however, doing so conflicts with data privacy. In this work, we provide novel three-party secure computation protocols for various NN building blocks such as matrix multiplication, convolutions, Rectified Linear Units, Maxpool, normalization and so on. This enables us to construct three-party secure protocols for training and inference of several NN architectures such that no single party learns any information about the data. Experimentally, we implement our system over Amazon EC2 servers in different settings. Our work advances the state-of-the-art of secure computation for neural networks in three ways: Scalability: We are the first work to provide neural network training on Convolutional Neural Networks (CNNs) that have an accuracy of >99% on the MNIST dataset; Performance: For secure inference, our system outperforms prior 2 and 3-server works (SecureML, MiniONN, Chameleon, Gazelle) by 6x-113x (with larger gains obtained in more complex networks). Our total execution times are 2-4x faster than even just the online times of these works. For secure training, compared to the only prior work (SecureML) that considered a much smaller fully connected network, our protocols are 79x and 7x faster than their 2 and 3-server protocols. In the WAN setting, these improvements are more dramatic and we obtain an improvement of 553x! Security: Our protocols provide two kinds of security: full security (privacy and correctness) against one semi-honest corruption and the notion of privacy against one malicious corruption [Araki et al. CCS’16]. All prior works only provide semi-honest security and ours is the first system to provide any security against malicious adversaries for the secure computation of complex algorithms such as neural network inference and training. Our gains come from a significant improvement in communication through the elimination of expensive garbled circuits and oblivious transfer protocols.

Journal ArticleDOI
01 Nov 2018
TL;DR: Shrinkwrap is introduced, a private data federation that offers data owners a differentially private view of the data held by others to improve their performance over oblivious query processing and provides a trade-off between result accuracy and query evaluation performance.
Abstract: A private data federation is a set of autonomous databases that share a unified query interface offering in-situ evaluation of SQL queries over the union of the sensitive data of its members. Owing to privacy concerns, these systems do not have a trusted data collector that can see all their data and their member databases cannot learn about individual records of other engines. Federations currently achieve this goal by evaluating queries obliviously using secure multiparty computation. This hides the intermediate result cardinality of each query operator by exhaustively padding it. With cascades of such operators, this padding accumulates to a blow-up in the output size of each operator and a proportional loss in query performance. Hence, existing private data federations do not scale well to complex SQL queries over large datasets. We introduce Shrinkwrap, a private data federation that offers data owners a differentially private view of the data held by others to improve their performance over oblivious query processing. Shrinkwrap uses computational differential privacy to minimize the padding of intermediate query results, achieving up to a 35X performance improvement over oblivious query processing. When the query needs differentially private output, Shrinkwrap provides a trade-off between result accuracy and query evaluation performance.
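
A toy sketch of the padding idea, under illustrative parameters and with the secure evaluation and the handling of under-estimates elided (this is not Shrinkwrap's actual mechanism): an intermediate result is padded to a noisy, slack-added cardinality instead of its worst-case size.

```python
# Toy sketch: pad an intermediate operator output to a differentially private
# over-estimate of its cardinality rather than the worst case. Clamping the
# noisy value below at the true size is a simplification that a real system
# must handle more carefully. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(7)

def dp_padded_size(true_card, worst_case, epsilon=0.5, slack=20):
    """Return a padded cardinality: true size + Laplace noise + slack, capped at worst case."""
    noisy = true_card + rng.laplace(scale=1.0 / epsilon) + slack
    return int(min(max(noisy, true_card), worst_case))

true_card, worst_case = 1_200, 1_000_000
print(dp_padded_size(true_card, worst_case))   # e.g. ~1,220 rows instead of 1,000,000
```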

Posted Content
TL;DR: This paper presents a method to train a logistic regression model without information leakage, and applies the homomorphic encryption scheme of Cheon et al. (ASIACRYPT 2017) for an efficient arithmetic over real numbers, and devise a new encoding method to reduce storage of encrypted database.
Abstract: Security concerns have been raised since big data became a prominent tool in data analysis. For instance, many machine learning algorithms aim to generate prediction models using training data which contain sensitive information about individuals. The cryptography community is considering secure computation as a solution for privacy protection. In particular, practical requirements have triggered research on the efficiency of cryptographic primitives. This paper presents a method to train a logistic regression model without information leakage. We apply the homomorphic encryption scheme of Cheon et al. (ASIACRYPT 2017) for efficient arithmetic over real numbers, and devise a new encoding method to reduce the storage of the encrypted database. In addition, we adapt Nesterov’s accelerated gradient method to reduce the number of iterations as well as the computational cost while maintaining the quality of the output classifier. Our method shows a state-of-the-art performance of a homomorphic encryption system in a real-world application. The submission based on this work was selected as the best solution of Track 3 at the iDASH privacy and security competition 2017. For example, it took about six minutes to obtain a logistic regression model given a dataset consisting of 1579 samples, each of which has 18 features with a binary outcome variable. We present a practical solution for outsourcing analysis tools such as logistic regression analysis while preserving data confidentiality.

Proceedings ArticleDOI
15 Oct 2018
TL;DR: HyCC is presented, a tool-chain for automated compilation of ANSI C programs into hybrid protocols that efficiently and securely combine multiple MPC protocols with optimizing compilation, scheduling, and partitioning, making highly efficient hybrid MPC accessible to developers without a cryptographic background.
Abstract: While secure multi-party computation (MPC) is a vibrant research topic and a multitude of practical MPC applications have been presented recently, their development is still a tedious task that requires expert knowledge. Previous works have made first steps in compiling high-level descriptions from various source descriptions into MPC protocols, but only looked at a limited set of protocols. In this work we present HyCC, a tool-chain for automated compilation of ANSI C programs into hybrid protocols that efficiently and securely combine multiple MPC protocols with optimizing compilation, scheduling, and partitioning. As a result, our compiled protocols are able to achieve performance numbers that are comparable to hand-built solutions. For the MiniONN neural network (Liu et al., CCS 2017), our compiler improves performance of the resulting protocol by more than a factor of $3$. Thus, for the first time, highly efficient hybrid MPC becomes accessible for developers without cryptographic background.

Proceedings ArticleDOI
08 May 2018
TL;DR: This work investigates the security of the Intel implementation of the EPID protocol, identifying an implementation weakness that leaks information via a cache side channel and showing that a malicious attestation provider can use the leaked information to break the unlinkability guarantees of EPID.
Abstract: Intel Software Guard Extensions (SGX) allows users to perform secure computation on platforms that run untrusted software. To validate that the computation is correctly initialized and that it executes on trusted hardware, SGX supports attestation providers that can vouch for the user’s computation. Communication with these attestation providers is based on the Extended Privacy ID (EPID) protocol, which not only validates the computation but is also designed to maintain the user’s privacy. In particular, EPID is designed to ensure that the attestation provider is unable to identify the host on which the computation executes. In this work we investigate the security of the Intel implementation of the EPID protocol. We identify an implementation weakness that leaks information via a cache side channel. We show that a malicious attestation provider can use the leaked information to break the unlinkability guarantees of EPID. We analyze the leaked information using a lattice-based approach for solving the hidden number problem, which we adapt to the zero-knowledge proof in the EPID scheme, extending prior attacks on signature schemes.

Journal ArticleDOI
TL;DR: The aim of this survey is to systematize and present the cutting-edge technologies in this area by presenting security threats and requirements, followed by other factors that should be considered when constructing secure computation outsourcing schemes.
Abstract: The rapid development of cloud computing promotes a wide deployment of data and computation outsourcing to cloud service providers by resource-limited entities. Based on a pay-per-use model, a client without enough computational power can easily outsource large-scale computational tasks to a cloud. Nonetheless, the issue of security and privacy becomes a major concern when the customer’s sensitive or confidential data is not processed in a fully trusted cloud environment. Recently, a number of publications have been proposed to investigate and design specific secure outsourcing schemes for different computational tasks. The aim of this survey is to systematize and present the cutting-edge technologies in this area. It starts by presenting security threats and requirements, followed by other factors that should be considered when constructing secure computation outsourcing schemes. In an organized way, we then dwell on the existing secure outsourcing solutions to different computational tasks such as matrix computations, mathematical optimization, and so on, treating data confidentiality as well as computation integrity. Finally, we provide a discussion of the literature and a list of open challenges in the area.

Journal ArticleDOI
TL;DR: Experimental results are demonstrated, illustrating that the merits of the proposed methods, such as low computational complexity, high embedding capacity, and real reversibility, are achieved.

Proceedings ArticleDOI
01 Apr 2018
TL;DR: This work explored adding private-data support to Hyperledger Fabric using secure multiparty computation (MPC): the peers store encryptions of their private data on the chain and use secure MPC whenever such private data is needed in a transaction.
Abstract: Hyperledger Fabric is a "permissioned" blockchain architecture, providing a consistent distributed ledger, shared by a set of "peers." As with every blockchain architecture, the core principle of Hyperledger Fabric is that all the peers must have the same view of the shared ledger, making it challenging to support private data for the different peers. Extending Hyperledger Fabric to support private data (that can influence transactions) would open the door to many exciting new applications, in areas from healthcare to commerce, insurance, finance, and more. In this work we explored adding private-data support to Hyperledger Fabric using secure multiparty computation (MPC). Specifically, in our solution the peers store on the chain encryption of their private data, and use secure MPC whenever such private data is needed in a transaction. This solution is very general, allowing in principle to base transactions on any combination of public and private data. We created a demo of our solution over Hyperledger Fabric v1.0, implementing a bidding system where sellers can list assets on the ledger with a secret reserve price, and bidders publish their bids on the ledger but keep secret the bidding price itself. We implemented a smart contract (aka "chaincode") that runs the auction on this secret data, using a simple secure-MPC protocol that was built using the EMP-toolkit library. The chaincode itself was written in Go, and we used the SWIG library to make it possible to call our protocol implementation in C++. We identified two basic services that should be added to Hyperledger Fabric to support our solution, and are now working on implementing them.

Posted Content
TL;DR: This work presents a framework for experimenting with secure multi-party computation directly in TensorFlow, gives an open source implementation of a state-of-the-art protocol and reports on concrete benchmarks using typical models from private machine learning.
Abstract: We present a framework for experimenting with secure multi-party computation directly in TensorFlow. By doing so we benefit from several properties valuable to both researchers and practitioners, including tight integration with ordinary machine learning processes, existing optimizations for distributed computation in TensorFlow, high-level abstractions for expressing complex algorithms and protocols, and an expanded set of familiar tooling. We give an open source implementation of a state-of-the-art protocol and report on concrete benchmarks using typical models from private machine learning.

Proceedings ArticleDOI
17 Jun 2018
TL;DR: It is shown that for basic operations such as addition and multiplication, the proposed scheme offers an order-wise gain, in terms of the number of servers needed, compared to approaches formed by concatenating job splitting with conventional MPC approaches.
Abstract: In this paper, we introduce limited-sharing multiparty computation, in which there is a network of workers (processors) and a set of sources, each having access to a massive matrix as a private input. These sources aim to offload the task of computing a polynomial function of the matrices to the workers, while preserving the privacy of data. We also assume that the load of the link between each source and each worker is upper bounded by a fraction $c$ of each input matrix, for some $c\in\{1, \frac{1}{2},\frac{1}{3}, \ldots\}$. The objective is to minimize the number of workers needed to perform the computation, such that even if an arbitrary subset of $t-1$ workers, for some $t\in \mathbb{N}$, collude, they cannot gain any information about the input matrices. This framework extends the conventional problem of multi-party computation, where the complexity of computation in each worker is not a constraint. We propose a novel sharing scheme, called polynomial sharing, and several procedures for basic operations such as addition and multiplication of two matrices and transposition of a matrix, and show that any polynomial function of the input matrices can be calculated using the proposed sharing scheme and the above procedures, subject to the problem constraints. We show that for basic operations such as addition and multiplication, the proposed scheme offers an order-wise gain, in terms of the number of servers needed, compared to approaches formed by concatenating job splitting with conventional MPC approaches.
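
A minimal sketch of the threshold-sharing building block, shown entry-wise as plain Shamir sharing over an illustrative prime field (the paper's polynomial sharing additionally packs the c blocks of each matrix into coefficients to reduce the per-link load, which is not shown here): any t-1 shares reveal nothing and any t shares reconstruct.

```python
# Minimal sketch of Shamir-style polynomial sharing of a single matrix entry
# with threshold t; reconstruction via Lagrange interpolation at zero.
import secrets

P = 2**31 - 1   # prime field (illustrative)

def share_value(v, t, points):
    """Degree-(t-1) polynomial with constant term v, evaluated at the given points."""
    coeffs = [v] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P for x in points]

def reconstruct(points, shares):
    """Lagrange interpolation at zero from t (point, share) pairs."""
    total = 0
    for j, (xj, yj) in enumerate(zip(points, shares)):
        num = den = 1
        for m, xm in enumerate(points):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

t, worker_points = 3, [1, 2, 3, 4, 5]
secret_entry = 424242
shares = share_value(secret_entry, t, worker_points)
assert reconstruct(worker_points[:t], shares[:t]) == secret_entry
print("reconstructed:", reconstruct(worker_points[:t], shares[:t]))
```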

Proceedings ArticleDOI
01 Jan 2018
TL;DR: It is shown that additive HSS for non-trivial functions, even the AND of two input bits, implies non-interactive key exchange, and is therefore unlikely to be implied by public-key encryption or even oblivious transfer.
Abstract: Homomorphic secret sharing (HSS) is the secret sharing analogue of homomorphic encryption. An HSS scheme supports a local evaluation of functions on shares of one or more secret inputs, such that the resulting shares of the output are short. Some applications require the stronger notion of additive HSS, where the shares of the output add up to the output over some finite Abelian group. While some strong positive results for HSS are known under specific cryptographic assumptions, many natural questions remain open. We initiate a systematic study of HSS, making the following contributions. - A definitional framework. We present a general framework for defining HSS schemes that unifies and extends several previous notions from the literature, and cast known results within this framework. - Limitations. We establish limitations on information-theoretic multi-input HSS with short output shares via a relation with communication complexity. We also show that additive HSS for non-trivial functions, even the AND of two input bits, implies non-interactive key exchange, and is therefore unlikely to be implied by public-key encryption or even oblivious transfer. - Applications. We present two types of applications of HSS. First, we construct 2-round protocols for secure multiparty computation from a simple constant-size instance of HSS. As a corollary, we obtain 2-round protocols with attractive asymptotic efficiency features under the Decisional Diffie-Hellman (DDH) assumption. Second, we use HSS to obtain nearly optimal worst-case to average-case reductions in P. This in turn has applications to fine-grained average-case hardness and verifiable computation.
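
A toy illustration of the additive-HSS syntax for a linear function (values and modulus are illustrative; the interesting cases in the paper are non-linear functions such as AND): the input is additively shared, each server evaluates locally without interaction, and the output shares add up to f(x).

```python
# Toy additive HSS for the linear function f(x) = 7*x: additive input shares,
# local evaluation, output shares that sum to f(x). Illustrative only.
import secrets

MOD = 2**32

def share(v):
    s0 = secrets.randbelow(MOD)
    return s0, (v - s0) % MOD

def f_local(share_of_x):
    """Local evaluation of f(x) = 7*x on a single share."""
    return (7 * share_of_x) % MOD

x = 1234
x0, x1 = share(x)
y0, y1 = f_local(x0), f_local(x1)     # the two servers never communicate
assert (y0 + y1) % MOD == (7 * x) % MOD
print("output shares add up to f(x) =", (y0 + y1) % MOD)
```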

Book ChapterDOI
19 Aug 2018
TL;DR: This work constructs several round-optimal n-party protocols, tolerating any \(t<\frac{n}{2}\) corruptions, and studies the exact round complexity of secure multiparty computation in the honest majority setting.
Abstract: We study the exact round complexity of secure multiparty computation (MPC) in the honest majority setting We construct several round-optimal n-party protocols, tolerating any \(t<\frac{n}{2}\) corruptions

Journal ArticleDOI
TL;DR: This paper provides new methods for garbling that are secure solely under the assumption that the primitive used (e.g., AES) is a pseudorandom function.
Abstract: Protocols for secure computation enable mutually distrustful parties to jointly compute on their private inputs without revealing anything but the result. Over recent years, secure computation has become practical and considerable effort has been made to make it more and more efficient. A highly important tool in the design of two-party protocols is Yao’s garbled circuit construction (Yao 1986), and multiple optimizations on this primitive have led to performance improvements in orders of magnitude over the last years. However, many of these improvements come at the price of making very strong assumptions on the underlying cryptographic primitives being used (e.g., that AES is secure for related keys, that it is circular-secure, and even that it behaves like a random permutation when keyed with a public fixed key). The justification behind making these strong assumptions has been that otherwise it is not possible to achieve fast garbling and thus fast secure computation. In this paper, we take a step back and examine whether it is really the case that such strong assumptions are needed. We provide new methods for garbling that are secure solely under the assumption that the primitive used (e.g., AES) is a pseudorandom function. Our results show that in many cases, the penalty incurred is not significant, and so a more conservative approach to the assumptions being used can be adopted.
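
As a sketch of garbling from a pseudorandom function alone (HMAC-SHA256 stands in for the PRF here; trial decryption with a zero-padding check replaces point-and-permute and the other optimizations, and this is not the paper's construction), a single AND gate can be garbled and evaluated as follows.

```python
# Minimal sketch: garble one AND gate using only a PRF (HMAC-SHA256 as a
# stand-in). The evaluator trial-decrypts rows and keeps the one whose
# plaintext ends in zero padding. Illustrative, not an optimized scheme.
import hmac, hashlib, secrets

KAPPA = 16  # label length in bytes

def prf(key_a, key_b, data):
    return hmac.new(key_a + key_b, data, hashlib.sha256).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and(gate_id=b"g0"):
    wires = {w: (secrets.token_bytes(KAPPA), secrets.token_bytes(KAPPA)) for w in "abc"}
    table = []
    for va in (0, 1):
        for vb in (0, 1):
            out_label = wires["c"][va & vb] + b"\x00" * KAPPA   # padding marks a valid row
            pad = prf(wires["a"][va], wires["b"][vb], gate_id)
            table.append(xor(pad, out_label))
    secrets.SystemRandom().shuffle(table)
    return wires, table

def evaluate(label_a, label_b, table, gate_id=b"g0"):
    pad = prf(label_a, label_b, gate_id)
    for row in table:
        cand = xor(pad, row)
        if cand.endswith(b"\x00" * KAPPA):   # padding check identifies the right row
            return cand[:KAPPA]
    raise ValueError("no row decrypted")

wires, table = garble_and()
# Evaluate on inputs a=1, b=0: the result should be the label for c=0.
assert evaluate(wires["a"][1], wires["b"][0], table) == wires["c"][0]
print("garbled AND gate evaluates correctly")
```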

Journal ArticleDOI
TL;DR: A novel TiOISS scheme based on PBVCS using the exclusive-OR operation is proposed, which does not need complex computation in the revealing process and can be used in real-time applications.
Abstract: Perfect black visual cryptography scheme (PBVCS) shares a binary secret image into n shadows. Stacking any $$k(k