
Showing papers on "Secure multi-party computation published in 2021"


Proceedings ArticleDOI
27 May 2021
TL;DR: In this article, the authors present SAFELearn, a generic design for efficient private federated learning systems that uses secure aggregation to protect against inference attacks that require access to individual clients' model updates.
Abstract: Federated learning (FL) is an emerging distributed machine learning paradigm which addresses critical data privacy issues in machine learning by enabling clients, using an aggregation server (aggregator), to jointly train a global model without revealing their training data. It thereby not only improves privacy but is also efficient, as it uses the computation power and data of potentially millions of clients for training in parallel. However, FL is vulnerable to so-called inference attacks by malicious aggregators which can infer information about clients' data from their model updates. Secure aggregation restricts the central aggregator to only learn the summation or average of the updates of clients. Unfortunately, existing protocols for secure aggregation for FL suffer from high communication and computation costs and require many communication rounds. In this work, we present SAFELearn, a generic design for efficient private FL systems that uses secure aggregation to protect against inference attacks that require access to individual clients' model updates. It is flexibly adaptable to the efficiency and security requirements of various FL applications and can be instantiated with MPC or FHE. In contrast to previous works, we only need 2 rounds of communication in each training iteration, do not use any expensive cryptographic primitives on clients, tolerate dropouts, and do not rely on a trusted third party. We implement and benchmark an instantiation of our generic design with secure two-party computation. Our implementation aggregates 500 models with more than 300K parameters in less than 0.5 seconds.
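To make the secure-aggregation idea concrete, here is a minimal pairwise-masking sketch in Python (a generic toy, not SAFELearn's MPC/FHE instantiation; all parameters and names are illustrative): each client masks its update with random vectors agreed pairwise with the other clients, and the masks cancel when the aggregator sums the masked updates.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim, mod = 4, 8, 2**32

updates = [rng.integers(0, 1000, dim) for _ in range(n_clients)]

# Pairwise masks: client i adds m[(i, j)] and client j subtracts it, so every mask cancels in the sum.
masks = {(i, j): rng.integers(0, mod, dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

def masked_update(i):
    x = updates[i].copy()
    for (a, b), m in masks.items():
        if a == i:
            x = (x + m) % mod
        elif b == i:
            x = (x - m) % mod
    return x

# The aggregator sums masked updates and learns only the aggregate, not any individual update.
aggregate = sum(masked_update(i) for i in range(n_clients)) % mod
assert np.array_equal(aggregate, sum(updates) % mod)
```

Handling client dropouts, which SAFELearn tolerates, additionally requires a mechanism to cancel or reconstruct the masks of clients that disappear mid-round; that machinery is omitted here.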

88 citations


Proceedings ArticleDOI
23 May 2021
TL;DR: CryptGPU as discussed by the authors is a system for privacy-preserving machine learning that implements all operations on the GPU (graphics processing unit) and achieves state-of-the-art performance on convolutional neural networks.
Abstract: We introduce CryptGPU, a system for privacy-preserving machine learning that implements all operations on the GPU (graphics processing unit). Just as GPUs played a pivotal role in the success of modern deep learning, they are also essential for realizing scalable privacy-preserving deep learning. In this work, we start by introducing a new interface to losslessly embed cryptographic operations over secret-shared values (in a discrete domain) into floating-point operations that can be processed by highly-optimized CUDA kernels for linear algebra. We then identify a sequence of "GPU-friendly" cryptographic protocols to enable privacy-preserving evaluation of both linear and non-linear operations on the GPU. Our microbenchmarks indicate that our private GPU-based convolution protocol is over 150× faster than the analogous CPU-based protocol; for non-linear operations like the ReLU activation function, our GPU-based protocol is around 10× faster than its CPU analog. With CryptGPU, we support private inference and training on convolutional neural networks with over 60 million parameters as well as handle large datasets like ImageNet. Compared to the previous state-of-the-art, our protocols achieve a 2× to 8× improvement in private inference for large networks and datasets. For private training, we achieve a 6× to 36× improvement over prior state-of-the-art. Our work not only showcases the viability of performing secure multiparty computation (MPC) entirely on the GPU to newly enable fast privacy-preserving machine learning, but also highlights the importance of designing new MPC primitives that can take full advantage of the GPU’s computing capabilities.
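One reason linear layers are comparatively cheap in such protocols is that they act linearly on additive secret shares, so each party can evaluate them locally on its own share. The NumPy sketch below illustrates this for a 1-D convolution with a public kernel; it is purely illustrative and omits CryptGPU's floating-point embedding and its triple-based handling of secret kernels.

```python
import numpy as np

rng = np.random.default_rng(1)
mod = 2**31 - 1                         # toy prime modulus

x = rng.integers(0, 100, 16)            # secret input signal
k = np.array([1, 2, 3])                 # public 1-D kernel

share0 = rng.integers(0, mod, x.shape)  # party 0's additive share
share1 = (x - share0) % mod             # party 1's share: x = share0 + share1 (mod p)

# Each party convolves its own share locally; by linearity the results are shares of conv(x, k).
y0 = np.convolve(share0, k) % mod
y1 = np.convolve(share1, k) % mod
assert np.array_equal((y0 + y1) % mod, np.convolve(x, k) % mod)
```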

82 citations


Journal ArticleDOI
TL;DR: The existing secure computation sub-protocols involved in SecRCNN, including division, exponentiation and logarithm, are improved and can dramatically reduce the number of messages exchanged during the iterative approximation process based on the coordinate rotation digital computer algorithm.
Abstract: In this paper, we propose a lightweight privacy-preserving Faster R-CNN framework (SecRCNN) for object detection in medical images. Faster R-CNN is one of the most outstanding deep learning models for object detection. Using SecRCNN, healthcare centers can efficiently complete privacy-preserving computations of Faster R-CNN via the additive secret sharing technique and edge computing. To implement SecRCNN, we design a series of interactive protocols to perform the three stages of Faster R-CNN, namely feature map extraction, region proposal and regression and classification. To improve the efficiency of SecRCNN, we improve the existing secure computation sub-protocols involved in SecRCNN, including division, exponentiation and logarithm. The newly proposed sub-protocols can dramatically reduce the number of messages exchanged during the iterative approximation process based on the coordinate rotation digital computer algorithm. Moreover, the effectiveness, efficiency and security of SecRCNN are demonstrated through comprehensive theoretical analysis and extensive experiments. The experimental findings show that the communication overhead in computing division, logarithm and exponentiation decreases to 36.19%, 73.82% and 43.37%, respectively.

56 citations


Book ChapterDOI
01 Apr 2021
TL;DR: This work focuses on a simple local leakage model, where the adversary can apply an arbitrary function of a bounded output length to the secret state of each party, but cannot otherwise learn joint information about the states.
Abstract: We consider the following basic question: to what extent are standard secret sharing schemes and protocols for secure multiparty computation that build on them resilient to leakage? We focus on a simple local leakage model, where the adversary can apply an arbitrary function of a bounded output length to the secret state of each party, but cannot otherwise learn joint information about the states.
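For intuition about the local leakage model (a toy illustration, not a result from the paper), the sketch below Shamir-shares a secret over a small prime field, applies a one-bit local leakage function independently to every share, and reconstructs from a threshold of full shares:

```python
import random

p = 97
random.seed(0)

def shamir_share(secret, t, n):
    # degree-(t-1) polynomial with constant term = secret, evaluated at x = 1..n
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at 0 over GF(p)
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p
    return total

shares = shamir_share(secret=42, t=3, n=5)
leakage = [y & 1 for _, y in shares]        # local leakage: one bit from each share, independently
print(leakage, reconstruct(shares[:3]))     # any 3 full shares still reconstruct 42
```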

53 citations


Book ChapterDOI
17 Oct 2021
TL;DR: A barrier to obtaining a 1-round implementation via a single FSS scheme is identified, showing that this would require settling a major open problem in the area of FSS: namely, a PRG-based FSS for the class of bit-conjunction functions.
Abstract: Boyle et al. (TCC 2019) proposed a new approach for secure computation in the preprocessing model building on function secret sharing (FSS), where a gate g is evaluated using an FSS scheme for the related offset family \(g_r(x)=g(x+r)\). They further presented efficient FSS schemes based on any pseudorandom generator (PRG) for the offset families of several useful gates g that arise in “mixed-mode” secure computation. These include gates for zero test, integer comparison, ReLU, and spline functions. The FSS-based approach offers significant savings in online communication and round complexity compared to alternative techniques based on garbled circuits or secret sharing.
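To make the offset-family mechanism concrete (this paraphrases the Boyle et al. framework the chapter builds on, not this paper's new results): the dealer secret-shares random input/output masks \(r^{\mathsf{in}}, r^{\mathsf{out}}\) and gives the two parties FSS keys \(k_0, k_1\); online, the parties reveal only the masked wire value \(\hat{x} = x + r^{\mathsf{in}}\) and locally compute output shares satisfying
\[
\mathrm{Eval}(k_0,\hat{x}) + \mathrm{Eval}(k_1,\hat{x}) = g(\hat{x} - r^{\mathsf{in}}) + r^{\mathsf{out}} = g(x) + r^{\mathsf{out}},
\]
i.e., a freshly masked output wire, so each gate costs only a single opening of \(\hat{x}\) in the online phase.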

52 citations


Journal ArticleDOI
TL;DR: This article designs eight secure computation protocols to allow the cloud server to efficiently execute basic integer and floating-point computations and proposes an efficient privacy-preserving outsourced support vector machine scheme (EPoSVM), designed for IoMT deployment.
Abstract: As the use of machine learning in the Internet-of-Medical Things (IoMT) settings increases, so do the data privacy concerns. Therefore, in this article, we propose an efficient privacy-preserving outsourced support vector machine scheme (EPoSVM), designed for IoMT deployment. To securely train the support vector machine (SVM), we design eight secure computation protocols to allow the cloud server to efficiently execute basic integer and floating-point computations. The proposed scheme protects training data privacy and guarantees the security of the trained SVM model. The security analysis proves that our proposed protocols and EPoSVM satisfy both security and privacy protection requirements. Findings from the performance evaluation using two real-world disease data sets also demonstrate the efficiency and effectiveness of EPoSVM in achieving the same classification accuracy as a general SVM.

51 citations


Book ChapterDOI
01 Jul 2021
TL;DR: Ciampi, Ostrovsky, Siniscalchi and Visconti, as mentioned in this paper, closed the gap for two-party protocols by constructing a four-round secure computation protocol from polynomial-time assumptions.
Abstract: Secure multi-party computation (MPC) is a central cryptographic task that allows a set of mutually distrustful parties to jointly compute some function of their private inputs, where security should hold in the presence of a malicious adversary that can corrupt any number of parties. Despite extensive research, the precise round complexity of this "standard-bearer" cryptographic primitive is unknown. Recently, Garg, Mukherjee, Pandey and Polychroniadou (EUROCRYPT 2016) demonstrated that the round complexity of any MPC protocol relying on black-box proofs of security in the plain model must be at least four. Following this work, Ananth, Choudhuri and Jain (CRYPTO 2017) and, independently, Brakerski, Halevi, and Polychroniadou (TCC 2017) made progress towards solving this question and constructed four-round protocols based on non-polynomial-time assumptions. More recently, Ciampi, Ostrovsky, Siniscalchi and Visconti (TCC 2017) closed the gap for two-party protocols by constructing a four-round protocol from polynomial-time assumptions. In another work, Ciampi, Ostrovsky, Siniscalchi and Visconti (TCC 2017) showed how to design a four-round multi-party protocol for the specific case of multi-party coin-tossing.

44 citations


Journal ArticleDOI
TL;DR: It is proved that Preyer facilitates patient dynamic treatment policymaking without leaking sensitive information to unauthorized parties.
Abstract: In this paper, we propose a privacy-preserving reinforcement learning framework for a patient-centric dynamic treatment regime, which we refer to as Preyer. Using Preyer, a patient-centric treatment strategy can be made spontaneously while preserving the privacy of the patient's current health state and the treatment decision. Specifically, we first design a new storage and computation method to support noninteger processing for multiple encrypted domains. A new secure plaintext length control protocol is also proposed to avoid plaintext overflow after executing secure computation repeatedly. Moreover, we design a new privacy-preserving reinforcement learning framework with experience replay to build the model for secure dynamic treatment policymaking. Furthermore, we prove that Preyer facilitates patient dynamic treatment policymaking without leaking sensitive information to unauthorized parties. We also demonstrate the utility and efficiency of Preyer using simulations and analysis.

44 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a secure logistic regression training protocol and its implementation, with a new subprotocol to securely compute the activation function, and a series of cryptographic engineering optimizations to improve the performance.
Abstract: In biomedical applications, valuable data is often split between owners who cannot openly share the data because of privacy regulations and concerns. Training machine learning models on the joint data without violating privacy is a major technology challenge that can be addressed by combining techniques from machine learning and cryptography. When collaboratively training machine learning models with the cryptographic technique named secure multi-party computation, the price paid for keeping the data of the owners private is an increase in computational cost and runtime. A careful choice of machine learning techniques and of algorithmic and implementation optimizations is a necessity to enable practical secure machine learning over distributed data sets. Such optimizations can be tailored to the kind of data and machine learning problem at hand. Our setup involves secure two-party computation protocols, along with a trusted initializer that distributes correlated randomness to the two computing parties. We use a gradient descent based algorithm for training a logistic-regression-like model with a clipped ReLU activation function, and we break down the algorithm into corresponding cryptographic protocols. Our main contributions are a new protocol for computing the activation function that requires neither secure comparison protocols nor Yao's garbled circuits, and a series of cryptographic engineering optimizations to improve the performance. For our largest gene expression data set, we train a model that requires over 7 billion secure multiplications; the training completes in about 26.90 s in a local area network. The implementation in this work is a further optimized version of the implementation with which we won first place in Track 4 of the iDASH 2019 secure genome analysis competition. In this paper, we present a secure logistic regression training protocol and its implementation, with a new subprotocol to securely compute the activation function. To the best of our knowledge, we present the fastest existing secure multi-party computation implementation for training logistic regression models on high dimensional genome data distributed across a local area network.
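The trusted initializer's correlated randomness is used in the style of the classic Beaver-triple multiplication trick; the sketch below shows that generic textbook construction over a toy modulus (it is not the paper's exact protocols, and the new activation-function subprotocol is not shown):

```python
import random

random.seed(2)
p = 2**61 - 1        # toy prime modulus

def share(v):
    s0 = random.randrange(p)
    return s0, (v - s0) % p

# Trusted initializer: correlated randomness a, b, c = a*b (mod p), handed out as shares.
a, b = random.randrange(p), random.randrange(p)
c = (a * b) % p
a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(c)

# Secret inputs, additively shared between the two computing parties.
x, y = 123456789, 987654321
x0, x1 = share(x); y0, y1 = share(y)

# One round: the parties open d = x - a and e = y - b (these reveal nothing about x, y).
d = ((x0 - a0) + (x1 - a1)) % p
e = ((y0 - b0) + (y1 - b1)) % p

# Local share computation: z = c + d*b + e*a + d*e equals x*y.
z0 = (c0 + d * b0 + e * a0 + d * e) % p
z1 = (c1 + d * b1 + e * a1) % p
assert (z0 + z1) % p == (x * y) % p
```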

33 citations


Book ChapterDOI
16 Aug 2021
TL;DR: This work initiates the study of fluid MPC, where parties can dynamically join and leave the computation, and constructs information-theoretic fluid MPC protocols in the honest-majority setting that achieve maximal fluidity.
Abstract: Existing approaches to secure multiparty computation (MPC) require all participants to commit to the entire duration of the protocol. As interest in MPC continues to grow, it is inevitable that there will be a desire to use it to evaluate increasingly complex functionalities, resulting in computations spanning several hours or days.
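A core mechanic behind such dynamic participation is handing the secret-shared state from the outgoing committee to the incoming one. Below is a minimal additive re-sharing sketch (illustrative only, not the paper's information-theoretic construction):

```python
import random

random.seed(3)
p = 2**61 - 1
secret = 42

# Outgoing 3-member committee holds additive shares of the computation state.
old = [random.randrange(p) for _ in range(2)]
old.append((secret - sum(old)) % p)

# Hand-off: each outgoing member re-shares its share to the 4-member incoming committee.
new_size = 4
new_shares = [0] * new_size
for s in old:
    subs = [random.randrange(p) for _ in range(new_size - 1)]
    subs.append((s - sum(subs)) % p)
    for j in range(new_size):
        new_shares[j] = (new_shares[j] + subs[j]) % p

# The incoming committee now holds fresh shares of the same secret state.
assert sum(new_shares) % p == secret
```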

32 citations


Journal ArticleDOI
TL;DR: The proposed Secure Decentralized Training Framework for Privacy Preserving Deep Learning models is capable of working in a decentralized network setting that does not need a trusted third-party server while simultaneously ensuring the privacy of local data with a low cost of communication bandwidth.

Journal ArticleDOI
TL;DR: This paper proposes the first decentralized and fair hierarchical threshold secret sharing (HTSS) scheme using blockchain, and demonstrates that the scheme can run reasonably fast and is practical.

Journal ArticleDOI
TL;DR: In this article, a synthesis between homomorphic encryption and secure multiparty computation is proposed to provide a mathematical guarantee of privacy for multisite medical data sharing, which can accelerate the pace of medical research while offering additional incentives for health care and research institutes to employ common data interoperability standards.
Abstract: Multisite medical data sharing is critical in modern clinical practice and medical research. The challenge is to conduct data sharing that preserves individual privacy and data utility. The shortcomings of traditional privacy-enhancing technologies mean that institutions rely upon bespoke data sharing contracts. The lengthy process and administration induced by these contracts increases the inefficiency of data sharing and may disincentivize important clinical treatment and medical research. This paper provides a synthesis between 2 novel advanced privacy-enhancing technologies—homomorphic encryption and secure multiparty computation (defined together as multiparty homomorphic encryption). These privacy-enhancing technologies provide a mathematical guarantee of privacy, with multiparty homomorphic encryption providing a performance advantage over separately using homomorphic encryption or secure multiparty computation. We argue multiparty homomorphic encryption fulfills legal requirements for medical data sharing under the European Union’s General Data Protection Regulation which has set a global benchmark for data protection. Specifically, the data processed and shared using multiparty homomorphic encryption can be considered anonymized data. We explain how multiparty homomorphic encryption can reduce the reliance upon customized contractual measures between institutions. The proposed approach can accelerate the pace of medical research while offering additional incentives for health care and research institutes to employ common data interoperability standards.

Journal ArticleDOI
TL;DR: The new definition of utilities is proved to be coincident (and compatible) with the former definitions under the asymmetric-information scenario, working towards the achievement of security properties in enterprise information systems.
Abstract: It has become a central question to guarantee the security of sensitive data in enterprise information systems. Actually, different enterprises may hold asymmetric information due to their scales. ...

Journal ArticleDOI
TL;DR: A blockchain-empowered distributed SD-CPS framework is proposed to realize consensus and distributed resource management by offloading data in a hybrid network paradigm that combines cloud computing and EC.
Abstract: The evolution of the Internet of Things (IoT) places an increased emphasis on extending devices' computing and storage capabilities by relying on cloud/edge computing (EC) for cyber–physical systems (CPSs). Especially in software-defined CPS (SD-CPS), different software-defined networking (SDN) controllers share information and cooperate to make global decisions. To further enhance system security during the information sharing process, we introduce blockchain technology into SD-CPS. However, because many security-related decisions are sensitive to latency, it is vital to minimize the system latency in blockchain-empowered SD-CPS. In this article, a blockchain-empowered distributed SD-CPS framework is proposed to realize consensus and distributed resource management by offloading data in a hybrid network paradigm that combines cloud computing and EC. Moreover, to adaptively implement offloading and control strategies while guaranteeing data security, we design a resource management scheme that reduces system latency and provides flexibility of cooperation. To foster intelligence, we formulate the joint communication, computation, and consensus problems as a Markov decision process and use deep reinforcement learning to balance resource allocation, reduce latency, and guarantee data security. Compared with other schemes, simulation results verify the effectiveness of the proposed scheme, which performs better in self-adaptive decision making and system delay reduction.

Proceedings ArticleDOI
12 Nov 2021
TL;DR: In this article, the authors present a highly scalable secure computation of graph algorithms, which hides all information about the topology of the graph or other input values associated with nodes or edges.
Abstract: We present a highly-scalable secure computation of graph algorithms, which hides all information about the topology of the graph or other input values associated with nodes or edges. The setting is where all nodes and edges of the graph are secret-shared between multiple servers, and a secure computation protocol is run between these servers. While the method is general, we demonstrate it in a 3-server setting with an honest majority, with either semi-honest security or full security. A major technical contribution of our work is replacing the usage of secure sort protocols with secure shuffles, which are much more efficient. Full security against malicious behavior is achieved by adding an efficient verification for the shuffle operation, and computing circuits using fully secure protocols. We demonstrate the applicability of this technology by implementing two major algorithms: computing breadth-first search (BFS), which is also useful for contact tracing on private contact graphs, and computing maximal independent set (MIS). We implement both algorithms, with both semi-honest and full security, and run them within seconds on graphs of millions of elements.

Proceedings ArticleDOI
26 Oct 2021
TL;DR: Wang et al. as discussed by the authors proposed a large-scale secure gradient tree boosting model (XGB) under the vertically federated learning setting, which employs secure multi-party computation techniques to avoid leaking intermediate information during training and stores the output model in a distributed manner in order to minimize information release.
Abstract: Privacy-preserving machine learning has drawn increasing attention recently, especially as various privacy regulations come into force. In this situation, Federated Learning (FL) appears to facilitate privacy-preserving joint modeling among multiple parties. Although many federated algorithms have been extensively studied, there is still a lack of secure and practical gradient tree boosting models (e.g., XGB) in the literature. In this paper, we aim to build large-scale secure XGB under the vertically federated learning setting. We guarantee data privacy from three aspects. Specifically, (1) we employ secure multi-party computation techniques to avoid leaking intermediate information during training, (2) we store the output model in a distributed manner in order to minimize information release, and (3) we provide a novel algorithm for secure XGB prediction with the distributed model. Furthermore, by proposing secure permutation protocols, we improve the training efficiency and make the framework scale to large datasets. We conduct extensive experiments on both public datasets and real-world datasets, and the results demonstrate that our proposed XGB models provide not only competitive accuracy but also practical performance.

Journal ArticleDOI
TL;DR: In this article, the authors employ the technique of software bounded model checking (SBMC), which reduces the problem to a bounded state space, which is automatically searched exhaustively using a SAT solver as a backend.
Abstract: Card-based cryptography provides simple and practicable protocols for performing secure multi-party computation (MPC) with just a deck of cards. For the sake of simplicity, this is often done using cards with only two symbols, e.g., ♣ and ♥. Within this paper, we target the setting where all cards carry distinct symbols, catering for use-cases with commonly available standard decks and a weaker indistinguishability assumption. As of yet, the literature provides only three protocols and no proofs for non-trivial lower bounds on the number of cards. As such complex proofs (handling very large combinatorial state spaces) tend to be involved and error-prone, we propose using formal verification for finding protocols and proving lower bounds. In this paper, we employ the technique of software bounded model checking (SBMC), which reduces the problem to a bounded state space, which is automatically searched exhaustively using a SAT solver as a backend.

Journal ArticleDOI
TL;DR: This paper designs a protocol which securely computes any Boolean circuit with only a single shuffle, and the number of cards required is proportional to the size of the circuit to be computed.

Journal ArticleDOI
TL;DR: A blockchain based secure computation offloading scheduling scheme is proposed that embraces the blockchain based trust management paradigm and smart contract enabled Deep Reinforcement Learning (DRL) algorithm, and a novel three-valued subjective logic (3VSL) scheme is adopted to obtain a more comprehensive reputation.
Abstract: With the emergence of computation intensive vehicular applications, computation offloading based on mobile edge computing (MEC) has become a promising paradigm in resource constrained vehicular cloud networks (VCNs). However, when doing computation offloading in a VCN, malicious service providers can cause serious security concerns on the content offloading. To address that, in this paper a blockchain based secure computation offloading scheduling scheme is proposed. It embraces the blockchain based trust management paradigm and smart contract enabled Deep Reinforcement Learning (DRL) algorithm. As for the trust management, the long-term reputation and short-term trust variability are jointly considered. Specifically, a novel three-valued subjective logic (3VSL) scheme is adopted to obtain a more comprehensive reputation, and the statistics of behavioral transitions can provide a short-term trust variability to timely capture the malicious behaviors. In addition, to securely update, validate, and store the trust information, we propose a hierarchical blockchain framework that comprises vehicular blockchain, RSU blockchain, and cloud blockchain. Furthermore, a smart contract enabled DRL algorithm is proposed to implement the secure and intelligent computation offloading scheduling in a VCN. Simulations are conducted to verify the effectiveness of the proposed scheme.

Proceedings ArticleDOI
23 May 2021
TL;DR: In this article, the authors design and implement the first MPC protocol for distributed generation of an RSA modulus that can support thousands of parties and offers security against active corruption of an arbitrary number of parties.
Abstract: In this work, we design and implement the first protocol for distributed generation of an RSA modulus that can support thousands of parties and offers security against active corruption of an arbitrary number of parties. In a nutshell, we first design a highly optimized protocol for this scale that is secure against passive corruptions, and then amplify its security to withstand active corruptions using lightweight succinct zero-knowledge proofs. Our protocol achieves security with "identifiable abort," where a corrupted party is identified whenever the protocol aborts, and supports public verifiability.Our protocol against passive corruptions extends the recent work of Chen et al. (CRYPTO 2020) that, in turn, is based on the blueprint introduced in the original work of Boneh-Franklin protocol (CRYPTO 1997, J. ACM, 2001). Specifically, we reduce the task of sampling a modulus to secure distributed multiplication, which we implement via an efficient threshold additively homomorphic encryption scheme based on the Ring-LWE assumption. This results in a protocol where the (amortized) per-party communication cost grows logarithmically in the number of parties. In order to minimize the work done by the parties, we employ a "publicly verifiable" coordinator that is connected to all parties and only performs computations on public data.We implemented both the passive and the active variants of our protocol and ran experiments using 2 to 4,000 parties. This is the first implementation of any MPC protocol that can scale to more than 1,000 parties. For generating a 2048-bit modulus among 1,000 parties, our passive protocol executed in under 6 minutes and the active variant ran in under 25 minutes.

Proceedings ArticleDOI
12 Nov 2021
TL;DR: In this paper, the authors present a differentially private top-k algorithm for very small data sets (hundreds of values) using semi-honest computation parties distributed over the Internet.
Abstract: Private learning of top-k, i.e., the k most frequent values also called heavy hitters, is a common industry scenario: Companies want to privately learn, e.g., frequently typed new words to improve suggestions on mobile devices, often used browser settings, telemetry data of frequent crashes, heavily shared articles, etc. Real-world deployments often use local differential privacy, where distributed users share locally randomized data with an untrusted server. Central differential privacy, on the other hand, assumes access to the raw data and applies the randomization only once, on the aggregated result. These solutions either require large numbers of users for high accuracy (local model) or a trusted third party (central model). We present multi-party computation protocols HH and PEM of sketches (succinct data structures) to efficiently compute differentially private top-k: HH has running time linear in the size of the data and is applicable for very small data sets (hundreds of values), and PEM is sublinear in the data domain and provides better accuracy than HH for large data sizes. Our approaches are efficient (practical running time, requiring no output reconstruction as other sketches) and more accurate than local differential privacy even for a small number of users. In our experiments we were able to securely compute differentially private top-k in less than 10 minutes using 3 semi-honest computation parties distributed over the Internet with inputs from hundreds of users (HH) and input size that is independent of the user count (PEM, excluding count aggregation).
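For intuition about the differential-privacy side only, the toy below computes a central-model noisy top-k on raw counts; HH and PEM instead compute such a result inside MPC over sketches, and the noise scale here is purely illustrative:

```python
import collections
import numpy as np

rng = np.random.default_rng(4)
data = ["cat", "dog", "cat", "fish", "cat", "dog", "dog", "dog"]
epsilon, k = 1.0, 2

counts = collections.Counter(data)
# Perturb each candidate's count with Laplace noise, then report the k highest noisy counts.
noisy = {value: count + rng.laplace(scale=2.0 / epsilon) for value, count in counts.items()}
top_k = sorted(noisy, key=noisy.get, reverse=True)[:k]
print(top_k)    # most likely ['dog', 'cat'] on this toy data
```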

Proceedings Article
06 Dec 2021
TL;DR: In this article, the authors present CrypTen, a software framework that exposes secure MPC primitives via abstractions that are common in modern machine learning frameworks, such as tensor computations, automatic differentiation, and modular neural networks.
Abstract: Secure multi-party computation (MPC) allows parties to perform computations on data while keeping that data private. This capability has great potential for machine-learning applications: it facilitates training of machine-learning models on private data sets owned by different parties, evaluation of one party's private model using another party's private data, etc. Although a range of studies implement machine-learning models via secure MPC, such implementations are not yet mainstream. Adoption of secure MPC is hampered by the absence of flexible software frameworks that "speak the language" of machine-learning researchers and engineers. To foster adoption of secure MPC in machine learning, we present CrypTen: a software framework that exposes popular secure MPC primitives via abstractions that are common in modern machine-learning frameworks, such as tensor computations, automatic differentiation, and modular neural networks. This paper describes the design of CrypTen and measures its performance on state-of-the-art models for text classification, speech recognition, and image classification. Our benchmarks show that CrypTen's GPU support and high-performance communication between (an arbitrary number of) parties allow it to perform efficient private evaluation of modern machine-learning models under a semi-honest threat model. For example, two parties using CrypTen can securely predict phonemes in speech recordings using Wav2Letter faster than real-time. We hope that CrypTen will spur adoption of secure MPC in the machine-learning community.
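A minimal usage sketch based on CrypTen's publicly documented tensor API (requires installing the crypten package; exact behavior may vary across versions):

```python
import torch
import crypten

crypten.init()   # single-process demo; real deployments run one process per party

x = crypten.cryptensor(torch.tensor([1.0, 2.0, 3.0]))   # secret-shared tensors
w = crypten.cryptensor(torch.tensor([0.5, -1.0, 2.0]))

y = (x * w).sum() + 1.0            # PyTorch-style arithmetic on encrypted values
print(y.get_plain_text())          # reveals the result only for this demo; approximately 5.5
```

In a real deployment each party runs its own process and only secret shares cross the network; get_plain_text is used here just to check the demo result.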

DOI
07 Dec 2021
TL;DR: Salvia as mentioned in this paper is an implementation of Secure Aggregation (SA) for Python users in the Flower FL framework, which is robust against client dropouts and exposes a flexible and easy-to-use API that is compatible with various machine learning frameworks.
Abstract: Federated Learning (FL) allows parties to learn a shared prediction model by delegating the training computation to clients and aggregating all the separately trained models on the server. To prevent private information being inferred from local models, Secure Aggregation (SA) protocols are used to ensure that the server is unable to inspect individual trained models as it aggregates them. However, current implementations of SA in FL frameworks have limitations, including vulnerability to client dropouts or configuration difficulties. In this paper, we present Salvia, an implementation of SA for Python users in the Flower FL framework. Based on the SecAgg(+) protocols for a semi-honest threat model, Salvia is robust against client dropouts and exposes a flexible and easy-to-use API that is compatible with various machine learning frameworks. We show that Salvia's experimental performance is consistent with SecAgg(+)'s theoretical computation and communication complexities.

Journal ArticleDOI
23 Feb 2021-Sensors
TL;DR: In this article, the authors used homomorphic encryption, secret sharing and zero-knowledge proofs to construct a publicly verifiable secure MPC protocol consisting of two parts, an on-chain computation phase and an off-chain preprocessing phase and integrated the protocol as part of the chaincode in Hyperledger Fabric to protect the privacy of transaction data.
Abstract: The development of information technology has brought great convenience to our lives, but at the same time, the unfairness and privacy issues brought about by traditional centralized systems cannot be ignored. Blockchain is a peer-to-peer and decentralized ledger technology that has the characteristics of transparency, consistency, traceability and fairness, but it reveals private information in some scenarios. Secure multi-party computation (MPC) guarantees enhanced privacy and correctness, so many researchers have been trying to combine secure MPC with blockchain to deal with privacy and trust issues. In this paper, we used homomorphic encryption, secret sharing and zero-knowledge proofs to construct a publicly verifiable secure MPC protocol consisting of two parts—an on-chain computation phase and an off-chain preprocessing phase—and we integrated the protocol as part of the chaincode in Hyperledger Fabric to protect the privacy of transaction data. Experiments showed that our solution performed well on a permissioned blockchain. Most of the time taken to complete the protocol was spent on communication, so the performance has a great deal of room to grow.
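The additively homomorphic ingredient can be illustrated in isolation with the third-party python-paillier (phe) package; this shows only ciphertext addition and scalar multiplication, not the paper's secret-sharing, zero-knowledge, or Hyperledger Fabric chaincode components:

```python
from phe import paillier   # third-party python-paillier package

public_key, private_key = paillier.generate_paillier_keypair()

a = public_key.encrypt(17)
b = public_key.encrypt(25)

total = a + b        # homomorphic addition of two ciphertexts
scaled = a * 3       # multiplication of a ciphertext by a public scalar
print(private_key.decrypt(total), private_key.decrypt(scaled))   # 42 51
```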

Book ChapterDOI
16 Aug 2021
TL;DR: In this article, it was shown that quantum-hard one-way functions imply simulation-secure quantum oblivious transfer (QOT), which is known to suffice for secure computation of arbitrary quantum functionalities.
Abstract: We prove that quantum-hard one-way functions imply simulation-secure quantum oblivious transfer (QOT), which is known to suffice for secure computation of arbitrary quantum functionalities. Furthermore, our construction only makes black-box use of the quantum-hard one-way function.

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a privacy-preserving multi-party skyline query on encrypted data using additive homomorphic and proxy re-encryption cryptosystems to improve the efficiency of comparison.
Abstract: One existing challenge associated with large scale skyline queries on cloud services, particularly when dealing with private information such as biomedical data, is supporting multi-party queries with curious-but-honest parties on encrypted data. In addition, existing solutions designed for performing secure skyline queries incur significant communication and computation costs due to ciphertext calculation. Thus, in this paper, we demonstrate the potential of supporting privacy-preserving multi-party skyline queries on encrypted data using additive homomorphic and proxy re-encryption cryptosystems. However, the secure computation based on these cryptosystems will further slow down query efficiency. To improve the efficiency of comparison on encrypted data, we redesign two lightweight secure comparison protocols. Meanwhile, we present an efficient method named “blind-reading” to securely obtain the skyline point. We also propose a novel method, Privacy Matrix, designed to reduce the scale of the dataset so that the computational cost is significantly decreased without privacy leakage. Then, we construct our secure skyline query protocol by integrating lightweight secure comparison protocols, “blind-reading” and Privacy Matrix techniques. Finally, we evaluate the security of our protocol, where we show it is secure without leaking information. The performance evaluation also shows that our proposed approach significantly improves the efficiency (at least $\times 4.5$ faster) compared to the state-of-the-art and scales to query processing over large datasets.
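As a reminder of what the protocol computes (shown here in the clear, with no privacy, and with smaller values treated as better): a point belongs to the skyline if no other point dominates it.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every dimension and strictly better in at least one
    (smaller is better in this toy example)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

points = [(1, 9), (3, 3), (2, 8), (5, 1), (4, 4)]
skyline = [p for p in points if not any(dominates(q, p) for q in points)]
print(skyline)   # [(1, 9), (3, 3), (2, 8), (5, 1)] -- only (4, 4) is dominated
```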

Book ChapterDOI
25 Jun 2021
TL;DR: In this article, the authors proposed a new federated learning scheme based on secure multi-party computation (SMC) and differential privacy, which prevents inference during the learning process as well as inference of the output.
Abstract: Federated learning ensures that the quality of the model is uncompromised while the resulting global model is consistent with the model trained by directly collecting user data. However, the risk of data being inferred must still be considered in federated learning. Furthermore, inference on the learning outcome in a federated learning environment must be constrained so that no one except the owner of the data can infer the data from any outcome. In this paper, we propose a new federated learning scheme based on secure multi-party computation (SMC) and differential privacy. The scheme prevents inference during the learning process as well as inference of the output. Meanwhile, the scheme protects users' local data during the learning process and ensures the correctness of the results even when users exit midway through the process.
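The way SMC and differential privacy typically compose in such schemes can be sketched as follows (illustrative parameters only; the aggregation masking itself is as in the secure-aggregation sketch earlier on this page): each client adds a small share of the total Gaussian noise before aggregation, so the revealed sum is differentially private while individual updates stay hidden.

```python
import numpy as np

rng = np.random.default_rng(5)
n_clients, dim, sigma = 10, 4, 1.0

updates = [rng.normal(size=dim) for _ in range(n_clients)]

# Each client adds only a 1/sqrt(n) slice of the Gaussian noise, so the *sum* carries
# noise of standard deviation sigma while no clear-text individual update is ever revealed.
noisy_updates = [u + rng.normal(scale=sigma / np.sqrt(n_clients), size=dim) for u in updates]

aggregate = np.sum(noisy_updates, axis=0)   # what secure aggregation would reveal
print(aggregate)
```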

Proceedings ArticleDOI
09 Jun 2021
TL;DR: In this article, a secure version of the classical Yannakakis algorithm for computing free-connex join-aggregate queries is described. But the protocol can be used in the secure two-party computation model, where the parties would like to evaluate a query without revealing their own data.
Abstract: In this paper, we describe a secure version of the classical Yannakakis algorithm for computing free-connex join-aggregate queries. This protocol can be used in the secure two-party computation model, where the parties would like to evaluate a query without revealing their own data. Our protocol presents a dramatic improvement over the state-of-the-art protocol based on Yao's garbled circuit. In theory, its cost (both running time and communication) is linear in data size and polynomial in query size, whereas that of the garbled circuit is polynomial in data size and exponential in query size. This translates to a reduction in running time in practice from years to minutes, as tested on a number of TPC-H queries of varying complexity.
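For intuition, the plain (non-secure) two-pass Yannakakis idea on a tiny acyclic join looks like this; the paper's contribution is carrying out such linear-time passes under secure two-party computation.

```python
from collections import defaultdict

R = [(1, "a"), (2, "b"), (3, "c")]     # R(x, y)
S = [("a", 10), ("a", 20), ("c", 5)]   # S(y, z)

# Pass 1 (semi-join reduction): keep only R-tuples whose join key appears in S.
ys_in_S = {y for y, _ in S}
R_reduced = [(x, y) for x, y in R if y in ys_in_S]

# Pass 2 (aggregation): SUM(z) grouped by x over R ⋈ S, never materializing dangling tuples.
sum_z_by_y = defaultdict(int)
for y, z in S:
    sum_z_by_y[y] += z
result = {x: sum_z_by_y[y] for x, y in R_reduced}
print(result)   # {1: 30, 3: 5}
```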

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper simultaneously investigated the security and computation offloading problems in a multi-user MECCO system with blockchain, and proposed a trustworthy access control using blockchain, which can protect cloud resources against illegal offloading behaviours.
Abstract: For current and future Internet of Things (IoT) networks, mobile edge-cloud computation offloading (MECCO) has been regarded as a promising means to support delay-sensitive IoT applications. However, offloading mobile tasks to the cloud is vulnerable to security issues due to malicious mobile devices (MDs). How to implement offloading to alleviate computation burdens at MDs while guaranteeing high security in the mobile edge cloud is a challenging problem. In this paper, we simultaneously investigate the security and computation offloading problems in a multi-user MECCO system with blockchain. First, to improve the offloading security, we propose a trustworthy access control using blockchain, which can protect cloud resources against illegal offloading behaviours. Then, to tackle the computation management of authorized MDs, we formulate a computation offloading problem by jointly optimizing the offloading decisions, the allocation of computing resources and radio bandwidth, and smart contract usage. This optimization problem aims to minimize the long-term system costs of latency, energy consumption and smart contract fees among all MDs. To solve the proposed offloading problem, we develop an advanced deep reinforcement learning algorithm using a double-dueling Q-network. Evaluation results from real experiments and numerical simulations demonstrate the significant advantages of our scheme over existing approaches.