
Showing papers on "Verifiable secret sharing" published in 2014



Proceedings ArticleDOI
03 Nov 2014
TL;DR: A novel homomorphic hashing technique is developed that allows us to efficiently authenticate computations, at the same cost as if the data were in the clear, avoiding a $10^4$ overhead which would occur with a naive approach.
Abstract: We study the task of verifiable delegation of computation on encrypted data. We improve previous definitions in order to tolerate adversaries that learn whether or not clients accept the result of a delegated computation. In this strong model, we construct a scheme for arbitrary computations and highly efficient schemes for delegation of various classes of functions, such as linear combinations, high-degree univariate polynomials, and multivariate quadratic polynomials. Notably, the latter class includes many useful statistics. Using our solution, a client can store a large encrypted dataset on a server, query statistics over this data, and receive encrypted results that can be efficiently verified and decrypted. As a key contribution for the efficiency of our schemes, we develop a novel homomorphic hashing technique that allows us to efficiently authenticate computations, at the same cost as if the data were in the clear, avoiding a $10^4$ overhead which would occur with a naive approach. We support our theoretical constructions with extensive implementation tests that show the practical feasibility of our schemes.
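
To illustrate the general idea behind homomorphic hashing, here is a minimal sketch of the standard multiplicatively homomorphic hash H(m) = g^m mod p, which lets a verifier authenticate a linear combination from per-message hashes alone. This is the generic technique, not the paper's specific construction; the prime, generator, and data below are toy assumptions for demonstration.

```python
# Minimal sketch: H(m) = g^m mod p is homomorphic for linear
# combinations, since H(sum_i c_i * m_i) = prod_i H(m_i)^{c_i}.
# Toy parameters only; a real scheme wraps this in a full authenticator.

p = 2**127 - 1          # toy prime modulus (demo-sized assumption)
q = p - 1               # exponents live mod the group order
g = 3                   # toy generator

def H(m: int) -> int:
    return pow(g, m % q, p)

msgs   = [42, 7, 19]    # the "dataset"
coeffs = [5, 11, 2]     # public query coefficients

y = sum(c * m for c, m in zip(coeffs, msgs)) % q   # claimed result

# Verifier recomputes the hash of the result homomorphically,
# using only the per-message hashes (no plaintext data needed):
expected = 1
for c, m in zip(coeffs, msgs):
    expected = (expected * pow(H(m), c, p)) % p

assert expected == H(y)
print("linear combination authenticated")
```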

184 citations


Journal ArticleDOI
TL;DR: This work reports an experimental demonstration of graph state-based quantum secret sharing--an important primitive for a quantum network with applications ranging from secure money transfer to multiparty quantum computation.
Abstract: Quantum communication schemes rely on cryptographically secure quantum resources to distribute private information. Here, the authors show that graph states—nonlocal states based on networks of qubits—can be exploited to implement quantum secret sharing of quantum and classical information.

122 citations


Posted Content
TL;DR: Buffet is a built system that provides inexpensive RAM and dynamic control flow for verifiable computation, allowing the programmer to express programs in an expansive subset of C (disallowing only goto and function pointers).
Abstract: Recent work on proof-based verifiable computation has resulted in built systems that employ tools from complexity theory and cryptography to address a basic problem in systems security: allowing a local computer to outsource the execution of a program while providing the local computer with a guarantee of integrity and the remote computer with a guarantee of privacy. However, support for programs that use RAM and control flow has been problematic. State of the art systems either restrict the use of these constructs (e.g., requiring static loop bounds), incur sizeable overhead on every step, or pay tremendous costs when the constructs are invoked. This paper describes Buffet, a built system that solves these problems by providing inexpensive “a la carte” RAM and dynamic control flow. Buffet composes an elegant prior approach to RAM with a novel adaptation of techniques from the compilers literature. Buffet allows the programmer to express programs in an expansive subset of C (disallowing only “goto” and function pointers), can handle essentially any example in the verifiable computation literature, and achieves the best performance in the area by multiple orders of magnitude.

111 citations


Book ChapterDOI
17 Aug 2014
TL;DR: A homomorphic signature scheme for a class of functions C allows a client to sign and upload elements of a data set D to a server; at any later point, the server can derive a (publicly verifiable) signature certifying that some y is the result of computing some f ∈ C on D. This paper constructs the first alternatives to the Boneh-Freeman (Eurocrypt 2011) construction.
Abstract: A homomorphic signature scheme for a class of functions \(\mathcal{C}\) allows a client to sign and upload elements of some data set D on a server. At any later point, the server can derive a (publicly verifiable) signature that certifies that some y is the result of computing some \(f\in\mathcal{C}\) on the basic data set D. This primitive has been formalized by Boneh and Freeman (Eurocrypt 2011), who also proposed the only known construction for the class of multivariate polynomials of fixed degree d ≥ 1. In this paper we construct new homomorphic signature schemes for such functions. Our schemes provide the first alternatives to the one of Boneh-Freeman, and improve over their solution in three main aspects. First, our schemes do not rely on random oracles. Second, we obtain security in a stronger, fully-adaptive model: while the solution of Boneh-Freeman requires the adversary to query messages in a given data set all at once, our schemes can tolerate adversaries that query one message at a time, in a fully-adaptive way. Third, signature verification is more efficient (in an amortized sense) than computing the function from scratch. The latter property opens the way to using homomorphic signatures for publicly verifiable computation on outsourced data. Our schemes rely on a new assumption on leveled graded encodings, which we show to hold in a generic model.

109 citations


Book ChapterDOI
07 Sep 2014
TL;DR: A generic construction is provided that transforms a voting scheme that is verifiable against an honest bulletin board and an honest registration authority (weak verifiability) into a voting scheme that is verifiable under the weaker trust assumption that the registration authority and the bulletin board are not simultaneously dishonest (strong verifiability).
Abstract: Most electronic voting schemes aim at providing verifiability: voters should trust the result without having to rely on some authorities. Actually, even a prominent voting system like Helios cannot fully achieve verifiability since a dishonest bulletin board may add ballots. This problem is called ballot stuffing. In this paper we give a definition of verifiability in the computational model to account for a malicious bulletin board that may add ballots. Next, we provide a generic construction that transforms a voting scheme that is verifiable against an honest bulletin board and an honest registration authority (weak verifiability) into a voting scheme that is verifiable under the weaker trust assumption that the registration authority and the bulletin board are not simultaneously dishonest (strong verifiability). This construction simply adds a registration authority that sends private credentials to the voters, and publishes the corresponding public credentials. We further provide simple and natural criteria that imply weak verifiability. As an application of these criteria, we formally prove that the latest variant of Helios by Bernhard, Pereira and Warinschi is weakly verifiable. By applying our generic construction we obtain a Helios-like scheme that has ballot privacy and strong verifiability, and thus prevents ballot stuffing. The resulting voting scheme, Helios-C, retains the simplicity of Helios and has been implemented and tested.
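
A minimal sketch of the credential idea in the generic construction, under assumptions: Ed25519 signatures from the `cryptography` package stand in for Helios-C's actual credential mechanism, and the ballot bytes are a placeholder.

```python
# Sketch: the registration authority issues each voter a private
# signing credential and publishes the matching public credential,
# so a dishonest bulletin board cannot stuff ballots it cannot sign.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Registration authority: one credential per voter.
voter_sk = Ed25519PrivateKey.generate()   # sent privately to the voter
voter_pk = voter_sk.public_key()          # published with the roll

ballot = b"encrypted-ballot-bytes"        # placeholder ciphertext
signature = voter_sk.sign(ballot)

# Anyone auditing the bulletin board checks each posted ballot against
# the published credentials; unsigned insertions are rejected.
try:
    voter_pk.verify(signature, ballot)
    print("ballot accepted")
except InvalidSignature:
    print("ballot stuffing attempt rejected")
```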

93 citations


Journal ArticleDOI
TL;DR: This study presents a secure Boolean-based secret image sharing scheme that uses a random image generating function to generate a random image from secret images or shared images, which efficiently increases the sharing capacity without requiring the random image itself to be shared.

89 citations


Journal ArticleDOI
TL;DR: A dynamic-identity-based remote user authentication scheme is proposed that employs smart cards to achieve mutual authentication; no verifier table is needed.
Abstract: People are always concerned about privacy. Techniques tracing specific user activities may damage privacy seriously. To protect user privacy, a dynamic-identity-based remote user authentication scheme will be proposed in this manuscript. This scheme employs smart cards to achieve mutual authentication, and no verifier table is needed. Via this scheme, transmitted identities are always fresh such that untraceability is ensured, and a user can change his or her password after being authenticated successfully. Copyright © 2013 John Wiley & Sons, Ltd.
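
As a generic illustration of why dynamic identities give untraceability (an assumption-laden sketch, not the paper's protocol, which additionally lets the server recover the real identity via the smart card):

```python
# Sketch: the identity transmitted on the wire is a fresh pseudonym
# each session, so an eavesdropper cannot link sessions to one user.
import hashlib, os

def dynamic_id(real_id: bytes, session_nonce: bytes) -> str:
    return hashlib.sha256(real_id + session_nonce).hexdigest()

wire_ids = {dynamic_id(b"alice", os.urandom(16)) for _ in range(3)}
assert len(wire_ids) == 3   # three sessions, three unlinkable identities
print(wire_ids)
```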

84 citations


Book ChapterDOI
11 May 2014
TL;DR: This paper constructs the first efficient threshold CCA-secure keyed-homomorphic encryption scheme with publicly verifiable ciphertexts; the underlying proofs do not involve quadratic pairing product equations and do not rely on a chosen-ciphertext-secure encryption scheme.
Abstract: Verifiability is central to building protocols and systems with integrity. Initially, efficient methods employed the Fiat-Shamir heuristic. Since 2008, the Groth-Sahai techniques have been the most efficient in constructing non-interactive witness indistinguishable and zero-knowledge proofs for algebraic relations in the standard model. For the important task of proving membership in linear subspaces, Jutla and Roy (Asiacrypt 2013) gave significantly more efficient proofs in the quasi-adaptive setting (QA-NIZK). For membership of the row space of a t×n matrix, their QA-NIZK proofs save Ω(t) group elements compared to Groth-Sahai. Here, we give QA-NIZK proofs made of a constant number of group elements – regardless of the number of equations or the number of variables – and additionally prove them unbounded simulation-sound. Unlike previous unbounded simulation-sound Groth-Sahai-based proofs, our construction does not involve quadratic pairing product equations and does not rely on a chosen-ciphertext-secure encryption scheme. Instead, we build on structure-preserving signatures with homomorphic properties. We apply our methods to design new and improved CCA2-secure encryption schemes. In particular, we build the first efficient threshold CCA-secure keyed-homomorphic encryption scheme (i.e., where homomorphic operations can only be carried out using a dedicated evaluation key) with publicly verifiable ciphertexts.
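
For concreteness, the linear-subspace languages in question have the following flavor (notation adapted; see Jutla and Roy for the precise quasi-adaptive formulation):

```latex
% Fix a group $\mathbb{G} = \langle g \rangle$ of prime order $p$ and
% a matrix $\mathbf{M} \in \mathbb{Z}_p^{t \times n}$:
\[
  \mathcal{L}_{\mathbf{M}}
  \;=\;
  \bigl\{\, g^{\mathbf{x}\mathbf{M}} \in \mathbb{G}^{n}
      \;:\; \mathbf{x} \in \mathbb{Z}_p^{t} \,\bigr\}.
\]
% The prover shows that a vector of group elements has exponents in the
% row space of $\mathbf{M}$; Groth-Sahai proofs for this cost $\Omega(t)$
% group elements, while the proofs above have constant size.
```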

84 citations


Patent
21 Mar 2014
TL;DR: In this paper, the authors present a system and methods for providing authentication of items such as goods, as they are passed through commerce and change ownership, by creating a publicly verifiable live audit trail using digital signatures, asymmetric key cryptography, an item labeling system, and a mobile authentication client application.
Abstract: Systems and methods for providing authentication of items such as goods, as they are passed through commerce and change ownership. As described herein, the systems and methods certify authenticity of product origin and provide proof of current ownership by creating a publicly verifiable live audit trail using digital signatures, asymmetric (public/secret) key cryptography, an item labeling system, and a mobile authentication client application to capture information about transactions related to an item, perform transactions and record transactions in conjunction with the back end servers.
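
A minimal sketch of the hash-chained audit-trail idea: each transfer record commits to the previous record's hash, so an item's ownership history is publicly checkable end to end. Digital signatures, item labeling, and server-side anchoring, which the patent also covers, are omitted; all names below are illustrative.

```python
import hashlib, json

def record_hash(record: dict) -> str:
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

trail = []
prev = "0" * 64                                  # genesis marker
for owner in ["manufacturer", "distributor", "retail-buyer"]:
    rec = {"item": "serial-12345", "owner": owner, "prev": prev}
    trail.append(rec)
    prev = record_hash(rec)

# Audit: recompute the chain and confirm every link matches.
h = "0" * 64
for rec in trail:
    assert rec["prev"] == h, "audit trail broken"
    h = record_hash(rec)
print("ownership history verifies")
```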

72 citations


Journal ArticleDOI
TL;DR: This work proposes a dynamic quantum secret sharing (DQSS) protocol based on the measurement property of the Greenberger–Horne–Zeilinger state and the controlled-NOT gate, in which an agent can obtain a shadow of the secret key by simply performing a measurement on single photons.
Abstract: This work proposes a new dynamic quantum secret sharing (DQSS) protocol using the measurement property of the Greenberger–Horne–Zeilinger state and the controlled-NOT gate. In the proposed DQSS protocol, an agent can obtain a shadow of the secret key by simply performing a measurement on single photons. In comparison with the existing DQSS protocols, it provides better qubit efficiency and an easy way to add a new agent. The proposed protocol is also free from eavesdropping and collusion attacks, and allows an honesty check on a revoked agent.
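
For background, the measurement correlation that makes GHZ states natural for secret sharing (the textbook three-party case of Hillery, Bužek and Berthiaume, not the full DQSS protocol):

```latex
% The three-party GHZ state
\[
  \lvert \mathrm{GHZ} \rangle
  \;=\; \tfrac{1}{\sqrt{2}} \left( \lvert 000 \rangle + \lvert 111 \rangle \right)
\]
% is stabilized by $X \otimes X \otimes X$, so when all three parties
% measure in the X basis, their $\pm 1$ outcomes multiply to $+1$.
% Hence the dealer's outcome bit equals the XOR of the two agents'
% bits: neither agent alone learns it, but together they recover it.
```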


Proceedings Article
20 Aug 2014
TL;DR: This work presents the design and prototype implementation of a novel VC scheme that achieves orders of magnitude speed-up in comparison with the state of the art, and builds and evaluates TRUESET, a system that can verifiably compute any polynomial-time function expressed as a circuit consisting of "set gates" such as union, intersection, difference and set cardinality.
Abstract: Verifiable computation (VC) enables thin clients to efficiently verify the computational results produced by a powerful server. Although VC was initially considered to be mainly of theoretical interest, over the last two years impressive progress has been made on implementing VC. Specifically, we now have open-source implementations of VC systems that handle all classes of computations expressed either as circuits or in the RAM model. Despite this very encouraging progress, new enhancements in the design and implementation of VC protocols are required to achieve truly practical VC for real-world applications. In this work, we show that for functions that can be expressed efficiently in terms of set operations (e.g., a subset of SQL queries) VC can be enhanced to become drastically more practical: We present the design and prototype implementation of a novel VC scheme that achieves orders of magnitude speed-up in comparison with the state of the art. Specifically, we build and evaluate TRUESET, a system that can verifiably compute any polynomial-time function expressed as a circuit consisting of "set gates" such as union, intersection, difference and set cardinality. Moreover, TRUESET supports hybrid circuits consisting of both set gates and traditional arithmetic gates. Therefore, it does not lose any of the expressiveness of previous schemes--this also allows the user to choose the most efficient way to represent different parts of a computation. By expressing set computations as polynomial operations and introducing a novel Quadratic Polynomial Program technique, our experiments show that TRUESET achieves prover performance speed-up ranging from 30x to 150x and up to 97% evaluation key size reduction compared to the state-of-the-art.
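
A minimal sketch of the "sets as polynomials" encoding that set gates build on: a set S maps to the polynomial with S's elements as roots, so a (disjoint) union becomes a polynomial product and membership becomes a root check. The field and sets below are toy assumptions; TRUESET proves such polynomial identities inside its Quadratic Polynomial Program rather than recomputing them.

```python
P = 2_147_483_647  # toy prime field modulus

def poly_from_set(S):
    """Coefficients (low degree first) of prod_{s in S} (x - s) mod P."""
    coeffs = [1]
    for s in S:
        nxt = [0] * (len(coeffs) + 1)
        for i, a in enumerate(coeffs):
            nxt[i]     = (nxt[i] - s * a) % P   # the (-s) term
            nxt[i + 1] = (nxt[i + 1] + a) % P   # the x term
        coeffs = nxt
    return coeffs

def poly_mul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % P
    return out

A, B = {3, 5}, {7, 11}                  # disjoint toy sets
union_poly = poly_from_set(A | B)
assert union_poly == poly_mul(poly_from_set(A), poly_from_set(B))

def is_member(x, coeffs):               # membership = root of p_S
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P == 0

assert is_member(7, union_poly) and not is_member(4, union_poly)
print("set gates reduce to polynomial arithmetic")
```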

Journal ArticleDOI
TL;DR: This comment shows that any malicious CSP or malicious organizer can generate a valid response that passes verification even after deleting all the stored data; i.e., Zhu et al.'s CPDP scheme does not satisfy the property of knowledge soundness.
Abstract: Provable data possession (PDP) is a probabilistic proof technique for cloud service providers (CSPs) to prove the clients' data integrity without downloading the whole data. In 2012, Zhu et al. proposed the construction of an efficient PDP scheme for multicloud storage. They studied the existence of multiple CSPs to cooperatively store and maintain the clients' data. Then, based on homomorphic verifiable response and hash index hierarchy, they presented a cooperative PDP (CPDP) scheme from the bilinear pairings. They claimed that their scheme satisfied the security property of knowledge soundness. Unfortunately, as this comment shows, any malicious CSP or malicious organizer (O) can generate a valid response that passes verification even if they have deleted all the stored data, i.e., Zhu et al.'s CPDP scheme cannot satisfy the property of knowledge soundness. We then discuss the origin and severity of the security flaws, which imply that an attacker can get paid without storing the clients' data. It is important to clarify this fact in order to design more secure and practical CPDP schemes in Zhu et al.'s system architecture and security model.

Book ChapterDOI
26 Mar 2014
TL;DR: In this article, a verifiable evaluation of hierarchical set operations (unions, intersections and set-differences) applied to a collection of dynamically changing sets of elements from a given domain is presented.
Abstract: We study the problem of verifiable delegation of computation over outsourced data, whereby a powerful worker maintains a large data structure for a weak client in a verifiable way. Compared to the well-studied problem of verifiable computation, this setting imposes additional difficulties since the verifier also needs to check the consistency of updates succinctly and without maintaining large state. We present a scheme for verifiable evaluation of hierarchical set operations (unions, intersections and set-differences) applied to a collection of dynamically changing sets of elements from a given domain. The verification cost incurred is proportional only to the size of the final outcome set and to the size of the query, and is independent of the cardinalities of the involved sets. The cost of updates is optimal, involving O(1) modular operations per update. Our construction extends that of [Papamanthou et al., CRYPTO 2011] and relies on a modified version of the extractable collision-resistant hash function (ECRH) construction, introduced in [Bitansky et al., ITCS 2012], that can be used to succinctly hash univariate polynomials.

Book ChapterDOI
26 Mar 2014
TL;DR: VOS generalizes the notion of Oblivious RAM (ORAM) in that it allows the server to perform computation, and also explicitly considers data integrity and freshness, and is applied to Dynamic Proofs of Retrievability and RAM-model secure multi-party computation.
Abstract: We formalize the notion of Verifiable Oblivious Storage (VOS), where a client outsources the storage of data to a server while ensuring data confidentiality, access pattern privacy, and integrity and freshness of data accesses. VOS generalizes the notion of Oblivious RAM (ORAM) in that it allows the server to perform computation, and also explicitly considers data integrity and freshness. We show that allowing server-side computation enables us to construct asymptotically more efficient VOS schemes whose bandwidth overhead cannot be matched by any ORAM scheme, due to a known lower bound by Goldreich and Ostrovsky. Specifically, for large block sizes we can construct a VOS scheme with constant bandwidth per query; further, answering queries requires only poly-logarithmic server computation. We describe applications of VOS to Dynamic Proofs of Retrievability, and RAM-model secure multi-party computation.

Proceedings ArticleDOI
15 Jul 2014
TL;DR: This paper introduces a new "packed" proactive secret sharing (PPSS) scheme, where the amortized communication and the amortized computational cost of maintaining each individual secret are optimal, resolving a long-standing problem in this area.
Abstract: In PODC 1991, Ostrovsky and Yung [35] introduced the proactive security model, where corruptions spread throughout the network, analogous to the spread of a virus or a worm. The PODC 2006 distinguished lecture by Danny Dolev, which also appears in the PODC 2006 proceedings, lists the above work as one of PODC's "Century Papers at the First Quarter-Century Milestone" [22]. At the very center of this work is the notion of proactive secret sharing schemes. Secret sharing schemes allow a dealer to distribute a secret among a group of parties such that while the group of parties jointly possess the secret, no sufficiently small subset of the parties can learn any information about the secret. The secret can be reconstructed only when a sufficient number of shares are combined together. Most secret sharing schemes assume that an adversary can only corrupt some fixed number of the parties over the entire lifetime of the secret; such a model is unrealistic when, over a long enough period of time, an adversary can eventually corrupt all parties or a large enough fraction that exceeds such a threshold. More specifically, in the proactive security model, the adversary is not limited in the number of parties it can corrupt, but rather in the rate of corruption with respect to a "rebooting" rate. Ostrovsky and Yung proposed the first proactive secret sharing scheme, which received a lot of follow-up attention. In the same paper, Ostrovsky and Yung also showed that constructing a general-purpose secure multiparty computation (MPC) protocol in the proactive security model is feasible as long as the rate of corruption is a constant fraction of the parties. Their result, however, was shown only for stand-alone security and incurred a large polynomial communication overhead for each gate of the computation. Following the initial work defining the proactive security model, numerous cryptographic primitives and distributed protocols have been adapted to the proactive security model, such as proactively secure threshold encryption, proactive Byzantine agreement, proactive key management, proactive digital signatures, and many others. All these results use proactive secret sharing schemes. In this paper, we introduce a new "packed" proactive secret sharing (PPSS) scheme, where the amortized communication and the amortized computational cost of maintaining each individual secret is optimal (e.g., a constant rate), resolving a long-standing problem in this area. Assuming secure point-to-point channels and authenticated, reliable broadcast over a synchronous network, our PPSS scheme can tolerate a 1/3-ε (resp. 1/2-ε) corruption rate against a malicious adversary, and is perfectly (resp. statistically) UC-secure, whereas all previous proactive secret sharing schemes have been secure under cryptographic assumptions only. As an application of our PPSS scheme, we show how to construct a proactive multiparty computation (PMPC) protocol with the same threshold as the PPSS scheme and near-linear communication complexity. The PMPC problem is very general and implies, for example, proactive Byzantine agreement. Our PMPC result also matches the asymptotic communication complexity of the best known MPC results in the "classical" model of stationary faults [19].
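
A minimal sketch of the proactive refresh step that proactive secret sharing is built on: each period the parties add a fresh random sharing of zero to their shares, so shares stolen in different periods cannot be combined, while the secret stays unchanged. This is plain Shamir sharing over a toy field, not the paper's packed PPSS.

```python
import random

P = 2_147_483_647                       # toy prime field
T, N = 2, 5                             # degree-2 polynomial => threshold 3

def share(secret, deg=T, n=N):
    coeffs = [secret] + [random.randrange(P) for _ in range(deg)]
    return {x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)}

def reconstruct(points):
    secret = 0
    for xi, yi in points.items():       # Lagrange interpolation at 0
        lag = 1
        for xj in points:
            if xj != xi:
                lag = lag * xj % P * pow(xj - xi, -1, P) % P
        secret = (secret + yi * lag) % P
    return secret

shares = share(1234)
refresh = share(0)                       # fresh random sharing of zero
new_shares = {x: (shares[x] + refresh[x]) % P for x in shares}

old_subset = {x: shares[x] for x in (1, 2, 3)}
new_subset = {x: new_shares[x] for x in (1, 2, 3)}
assert reconstruct(old_subset) == reconstruct(new_subset) == 1234

# Mixing shares across periods (the proactive adversary's only option)
# yields garbage, with overwhelming probability:
mixed = {1: shares[1], 2: new_shares[2], 3: new_shares[3]}
assert reconstruct(mixed) != 1234
print("refresh preserves the secret; cross-period shares are useless")
```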

Journal ArticleDOI
TL;DR: This paper proposes the first MTSS based on the Asmuth–Bloom's SS which is unconditionally secure and one unique feature is that each shareholder needs to keep only one private share.

Journal ArticleDOI
TL;DR: This paper introduces the security problem that an adversary can obtain the secret when there are more than t participants in Shamir's secret reconstruction, and proposes a secure secret reconstruction scheme that prevents the adversary from obtaining the secret.
Abstract: In Shamir's (t, n) secret sharing (SS) scheme, the secret s is divided into n shares by a dealer and is shared among n shareholders in such a way that any t or more shares can reconstruct this secret, but fewer than t shares cannot obtain any information about the secret s. In this paper, we introduce the security problem that an adversary can obtain the secret when there are more than t participants in Shamir's secret reconstruction. We propose a secure secret reconstruction scheme that prevents the adversary from obtaining the secret. In our scheme, Lagrange components, which are linear combinations of shares, are used to reconstruct the secret. Lagrange components can protect shares unconditionally. We show that this scheme can be extended to design a multi-secret sharing scheme. All existing multi-secret sharing schemes are based on some cryptographic assumption, such as a secure one-way function or the hardness of the discrete logarithm problem; our proposed multi-secret sharing scheme, in contrast, is unconditionally secure. Copyright © 2013 John Wiley & Sons, Ltd.
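
For reference, the Lagrange components in question are the per-participant summands of Shamir reconstruction (standard notation assumed):

```latex
% Participant $i$, holding share $(x_i, y_i)$ of the degree-$(t-1)$
% polynomial $f$, contributes
\[
  c_i \;=\; y_i \prod_{\substack{j=1 \\ j \neq i}}^{t}
            \frac{x_j}{x_j - x_i} \pmod{p},
  \qquad\text{so that}\qquad
  s \;=\; f(0) \;=\; \sum_{i=1}^{t} c_i \pmod{p}.
\]
% The proposed scheme has participants release (combinations of) such
% components rather than raw shares, which is what protects the shares
% when more than t parties, possibly including an adversary, take part
% in reconstruction.
```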

Posted Content
TL;DR: Verifiable oblivious storage (VOS) as discussed by the authors generalizes the notion of oblivious RAM (ORAM) in that it allows the server to perform computation, and also explicitly considers data integrity and freshness.
Abstract: We formalize the notion of Verifiable Oblivious Storage (VOS), where a client outsources the storage of data to a server while ensuring data confidentiality, access pattern privacy, and integrity and freshness of data accesses. VOS generalizes the notion of Oblivious RAM (ORAM) in that it allows the server to perform computation, and also explicitly considers data integrity and freshness. We show that allowing server-side computation enables us to construct asymptotically more efficient VOS schemes whose bandwidth overhead cannot be matched by any ORAM scheme, due to a known lower bound by Goldreich and Ostrovsky. Specifically, for large block sizes we can construct a VOS scheme with constant bandwidth per query; further, answering queries requires only poly-logarithmic server computation. We describe applications of VOS to Dynamic Proofs of Retrievability, and RAM-model secure multi-party computation.

Proceedings ArticleDOI
19 Jul 2014
TL;DR: In this paper, a peered secure web bulletin board suite of protocols has been formally verified against an idealised specification of the bulletin board behaviour, based on the Event-B modelling and refinement approach.
Abstract: The Secure Web Bulletin Board (WBB) is a key component of verifiable election systems. However, there is very little in the literature on their specification, design and implementation, and there are no formally analysed designs. The WBB is used in the context of election verification to publish evidence of voting and tallying that voters and officials can check, and where challenges can be launched in the event of malfeasance. In practice, the election authority has responsibility for implementing the web bulletin board correctly and reliably, and will wish to ensure that it behaves correctly even in the presence of failures and attacks. To ensure robustness, an implementation will typically use a number of peers to be able to provide a correct service even when some peers go down or behave dishonestly. In this paper we propose a new protocol to implement such a Web Bulletin Board, motivated by the needs of the vVote verifiable voting system. Using a distributed algorithm increases the complexity of the protocol and requires careful reasoning in order to establish correctness. Here we use the Event-B modelling and refinement approach to establish correctness of the peered design against an idealised specification of the bulletin board behaviour. In particular we have shown that for n peers, a threshold of t > 2n/3 peers behaving correctly is sufficient to ensure correct behaviour of the bulletin board distributed design. The algorithm also behaves correctly even if honest or dishonest peers temporarily drop out of the protocol and then return. The verification approach also establishes that the protocols used within the bulletin board do not interfere with each other. This is the first time a peered secure web bulletin board suite of protocols has been formally verified.

Book ChapterDOI
25 May 2014
TL;DR: Evaluation of the reference implementations of the trusty URIs shows that these desired properties are indeed accomplished by the approach, and that it remains practical even for very large files.
Abstract: To make digital resources on the web verifiable, immutable, and permanent, we propose a technique to include cryptographic hash values in URIs. We call them trusty URIs and we show how they can be used for approaches like nanopublications to make not only specific resources but their entire reference trees verifiable. Digital artifacts can be identified not only on the byte level but on more abstract levels such as RDF graphs, which means that resources keep their hash values even when presented in a different format. Our approach sticks to the core principles of the web, namely openness and decentralized architecture, is fully compatible with existing standards and protocols, and can therefore be used right away. Evaluation of our reference implementations shows that these desired properties are indeed accomplished by our approach, and that it remains practical even for very large files.
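
A minimal sketch of the byte-level trusty-URI idea, under assumptions: the actual specification defines module codes and RDF-graph-level hashing, and the base URI and encoding details below are illustrative only.

```python
# Sketch: append a hash of the artifact's content to its URI, so anyone
# who fetches the artifact can verify it against the URI itself.
import base64, hashlib

def trusty_uri(base: str, content: bytes) -> str:
    digest = hashlib.sha256(content).digest()
    tag = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return f"{base}{tag}"

def verify(uri: str, base: str, content: bytes) -> bool:
    return uri == trusty_uri(base, content)

data = b"example nanopublication bytes"
uri = trusty_uri("http://example.org/np/", data)
assert verify(uri, "http://example.org/np/", data)
assert not verify(uri, "http://example.org/np/", data + b"tampered")
print(uri)
```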

Journal ArticleDOI
TL;DR: A relation between the notions of verifiable random functions (VRFs) and identity-based key encapsulation mechanisms (IB-KEMs) is shown and a direct construction of VRFs from VRF-suitable IB-K EMs is proposed.
Abstract: In this paper we show a relation between the notions of verifiable random functions (VRFs) and identity-based key encapsulation mechanisms (IB-KEMs). In particular, we propose a class of IB-KEMs that we call VRF-suitable, and we propose a direct construction of VRFs from VRF-suitable IB-KEMs. Informally, an IB-KEM is VRF-suitable if it provides what we call unique decapsulation (i.e., given a ciphertext C produced with respect to an identity ID, all the secret keys corresponding to an identity ID′ decapsulate to the same value, even if ID ≠ ID′), and it satisfies an additional property that we call pseudo-random decapsulation. In a nutshell, pseudo-random decapsulation means that if one decapsulates a ciphertext C, produced with respect to an identity ID, using the decryption key corresponding to any other identity ID′, the resulting value looks random to a polynomially bounded observer. Our construction is of interest both from a theoretical and a practical perspective. Indeed, apart from establishing a connection between two seemingly unrelated primitives, our methodology is direct in the sense that, in contrast to most previous constructions, it avoids the inefficient Goldreich–Levin hardcore bit transformation. As an additional contribution, we propose a new VRF-suitable IB-KEM based on the decisional ℓ-weak Bilinear Diffie–Hellman Inversion assumption. Interestingly, when applying our transformation to this scheme, we obtain a new VRF construction that is secure under the same assumption, and it efficiently supports a large input space.

Posted Content
TL;DR: This is the first time a peered secure web bulletin board suite of protocols has been formally verified, and it is shown that for n peers, a threshold of t > 2n/3 peers behaving correctly is sufficient to ensure correct behaviour of the bulletin board distributed design.
Abstract: The Web Bulletin Board (WBB) is a key component of verifiable election systems. It is used in the context of election verification to publish evidence of voting and tallying that voters and officials can check, and where challenges can be launched in the event of malfeasance. In practice, the election authority has responsibility for implementing the web bulletin board correctly and reliably, and will wish to ensure that it behaves correctly even in the presence of failures and attacks. To ensure robustness, an implementation will typically use a number of peers to be able to provide a correct service even when some peers go down or behave dishonestly. In this paper we propose a new protocol to implement such a Web Bulletin Board, motivated by the needs of the vVote verifiable voting system. Using a distributed algorithm increases the complexity of the protocol and requires careful reasoning in order to establish correctness. Here we use the Event-B modelling and refinement approach to establish correctness of the peered design against an idealised specification of the bulletin board behaviour. In particular we show that for n peers, a threshold of t > 2n/3 peers behaving correctly is sufficient to ensure correct behaviour of the bulletin board distributed design. The algorithm also behaves correctly even if honest or dishonest peers temporarily drop out of the protocol and then return. The verification approach also establishes that the protocols used within the bulletin board do not interfere with each other. This is the first time a peered web bulletin board suite of protocols has been formally verified.

Book ChapterDOI
06 Sep 2014
TL;DR: This paper proposes verifiable homomorphic encryption (VHE), which enables verifiable computation on outsourced encrypted data in the cloud.
Abstract: On one hand, homomorphic encryption allows a cloud server to perform computation on outsourced encrypted data but provides no verifiability that the computation is correct. On the other hand, homomorphic authenticator, such as homomorphic signature with public verifiability and homomorphic MAC with private verifiability, guarantees authenticity of computation over outsourced data but does not provide data confidentiality. Since cloud servers are usually operated by third-party providers which are almost certain to be outside the trust domain of cloud users, neither homomorphic encryption nor homomorphic authenticator suffices for verifiable computation on outsourced encrypted data in the cloud. In this paper, we propose verifiable homomorphic encryption (VHE), which enables verifiable computation on outsourced encrypted data.

Journal ArticleDOI
TL;DR: In the new scheme, each authorized subset of participants is able to recover both the secret and cover images losslessly whereas non-authorized subsets obtain no information about the secret image.

Proceedings Article
01 Jul 2014
TL;DR: This paper presents a new End-to-End (E2E) verifiable e-voting protocol for large-scale elections, called Direct Recording Electronic with Integrity (DRE-i), which does not involve any Tallying Authorities (TAs), and calls this new category “self-enforcing electronic voting”.
Abstract: This paper presents a new End-to-End (E2E) verifiable e-voting protocol for large-scale elections, called Direct Recording Electronic with Integrity (DRE-i). In contrast to all other E2E verifiable voting schemes, ours does not involve any Tallying Authorities (TAs). The design of DRE-i is based on the hypothesis that existing E2E voting protocols' universal dependence on TAs is a key obstacle to their practical deployment. In DRE-i, the need for TAs is removed by applying novel encryption techniques such that after the election multiplying the ciphertexts together will cancel out random factors and permit anyone to verify the tally. We describe how to apply the DRE-i protocol to enforce the tallying integrity of a DRE-based election held at a set of supervised polling stations. Each DRE machine directly records votes, just as in existing real-world DRE deployment. But unlike ordinary DRE machines, in DRE-i the machine must publish additional audit data to allow public verification of the tally. If the machine attempts to cheat by altering either votes or audit data, then the public verification of the tallying integrity will fail. To improve system reliability, we further present a fail-safe mechanism to allow graceful recovery from the effect of missing or corrupted ballots in a publicly verifiable and privacy-preserving manner. Finally, we compare DRE-i with previous related voting schemes and show several improvements in security, efficiency and usability. This highlights the promising potential of a new category of voting systems that are E2E verifiable and TA-free. We call this new category "self-enforcing electronic voting".
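
The flavor of the cancellation trick, sketched in the style of the "open vote network" of Hao, Ryan and Zieliński that DRE-i builds on (details simplified):

```latex
% With per-ballot secrets $x_i$, define
\[
  y_i \;=\; \sum_{j<i} x_j \;-\; \sum_{j>i} x_j ,
  \qquad\text{so that}\qquad
  \sum_i x_i y_i \;=\; 0 .
\]
% If each published ballot has the form $g^{x_i y_i} g^{v_i}$ with vote
% $v_i \in \{0,1\}$, then multiplying all ballots gives
\[
  \prod_i g^{x_i y_i} g^{v_i} \;=\; g^{\sum_i v_i},
\]
% so anyone can read off the tally (a small exponent) from the public
% data, with no tallying authority holding a decryption key.
```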

Journal ArticleDOI
TL;DR: The existing common coin protocol is extended to make it compatible with the new AVSS protocol that shares multiple secrets simultaneously; the resulting common coin protocol is more communication-efficient than all existing common coin protocols.
Abstract: We present an efficient, optimally-resilient Asynchronous Byzantine Agreement (ABA) protocol involving $n = 3t+1$ parties over a completely asynchronous network, tolerating a computationally unbounded Byzantine adversary capable of corrupting at most $t$ out of the $n$ parties. In comparison with the best known optimally-resilient ABA protocols of Canetti and Rabin (STOC 1993) and Abraham et al. (PODC 2008), our protocol is significantly more efficient in terms of the communication complexity. Our ABA protocol is built on a new statistical asynchronous verifiable secret sharing (AVSS) protocol with optimal resilience. Our AVSS protocol significantly improves the communication complexity of the only known statistical and optimally-resilient AVSS protocol of Canetti et al. Our AVSS protocol is further built on an asynchronous primitive called asynchronous weak commitment (AWC), while the AVSS of Canetti et al. is built on the primitive called asynchronous weak secret sharing (AWSS). We observe that AWC has weaker requirements than AWSS and hence it can be designed more efficiently than AWSS. The common coin primitive is one of the most important building blocks for the construction of an ABA protocol. In this paper, we extend the existing common coin protocol to make it compatible with our new AVSS protocol that shares multiple secrets simultaneously. As a byproduct, our new common coin protocol is more communication-efficient than all the existing common coin protocols.

Proceedings ArticleDOI
12 Jan 2014
TL;DR: By considering rational arguments, in which the prover is additionally restricted to be computationally bounded, the class NC1, of search problems computable by log-space uniform circuits of O(log n)-depth, admits rational protocols that are simultaneously one-round and polylog(n) time verifiable.
Abstract: Rational proofs, recently introduced by Azar and Micali (STOC 2012), are a variant of interactive proofs in which the prover is neither honest nor malicious, but rather rational. The advantage of rational proofs over their classical counterparts is that they allow for extremely low communication and verification time. Azar and Micali demonstrated their potential by giving a one message rational proof for #SAT, in which the verifier runs in time O(n), where $n$ denotes the instance size. In a follow-up work (EC 2013), Azar and Micali proposed "super-efficient" and interactive versions of rational proofs and argued that they capture precisely the class TC0 of constant-depth, polynomial-size circuits with threshold gates. In this paper, we show that by considering rational arguments, in which the prover is additionally restricted to be computationally bounded, the class NC1, of search problems computable by log-space uniform circuits of O(log n)-depth, admits rational protocols that are simultaneously one-round and polylog(n) time verifiable. This demonstrates the potential of rational arguments as a way to extend the notion of "super-efficient" rational proofs beyond the class TC0. The low interaction nature of our protocols, along with their sub-linear verification time, make them well suited for delegation of computation. While they provide a weaker (yet arguably meaningful) guarantee of soundness, they compare favorably with each of the known delegation schemes in at least one aspect. They are simple, rely on standard complexity hardness assumptions, provide a correctness guarantee for all instances, and do not require preprocessing.

Book ChapterDOI
12 Oct 2014
TL;DR: In this paper, the authors proposed a verifiable outsourcing of matrix multiplications that favorably compares with the state-of-the-art in terms of the number of modulo multiplications.
Abstract: With the emergence of cloud computing services, a resource-constrained client can outsource its computationally-heavy tasks to cloud providers. Because such service providers might not be fully trusted by the client, the need to verify integrity of the returned computation result arises. The ability to do so is called verifiable delegation or verifiable outsourcing. Furthermore, the data used in the computation may be sensitive and it is often desired to protect it from the cloud throughout the computation. In this work, we put forward solutions for verifiable outsourcing of matrix multiplications that favorably compare with the state of the art. Our goal is to minimize the cost of verifying the result without increasing overhead associated with other aspects of the scheme. In our scheme, the cost of verifying the result of computation uses only a single modulo exponentiation and the number of modulo multiplications linear in the size of the output matrix. This cost can be further reduced to avoid all cryptographic operations if the cloud is rational. A rational cloud is neither honest nor arbitrarily malicious, but rather economically motivated with the sole purpose of maximizing its monetary reward. We extend our core constructions with several desired features such as data protection, public verifiability, and computation chaining.
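
As a point of reference, the classic Freivalds check is the standard non-cryptographic way to verify an outsourced matrix product in O(n²) time per round; the paper's rational-cloud variant likewise avoids cryptographic operations. This sketch is the textbook check, not the paper's full scheme, which adds data protection, public verifiability, and computation chaining.

```python
import random

def freivalds(A, B, C, rounds=20):
    """Accept C = A*B with error probability <= 2**-rounds, in O(n^2)
    work per round instead of the O(n^3) cost of recomputing A*B."""
    n = len(A)
    for _ in range(rounds):
        r = [random.randrange(2) for _ in range(n)]
        Br  = [sum(B[i][j] * r[j]  for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr  = [sum(C[i][j] * r[j]  for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False        # caught a wrong product
    return True

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]
bad  = [[19, 22], [43, 51]]
assert freivalds(A, B, good) and not freivalds(A, B, bad)
print("outsourced product verified")
```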