
Showing papers presented at "Theory of Cryptography Conference in 2012"


Book Chapter
19 Mar 2012
TL;DR: In this paper, the authors extend the definition of verifiable computation in two important directions: public delegation and public verifiability, which have important applications in many practical delegation scenarios.
Abstract: The wide variety of small, computationally weak devices and the growing number of computationally intensive tasks make it appealing to delegate computation to data centers. However, outsourcing computation is useful only when the returned result can be trusted, which makes verifiable computation (VC) a must for such scenarios. In this work we extend the definition of verifiable computation in two important directions: public delegation and public verifiability, which have important applications in many practical delegation scenarios. Yet, existing VC constructions based on standard cryptographic assumptions fail to achieve these properties. As the primary contribution of our work, we establish an important (and somewhat surprising) connection between verifiable computation and attribute-based encryption (ABE), a primitive that has been widely studied. Namely, we show how to construct a VC scheme with public delegation and public verifiability from any ABE scheme. The VC scheme verifies any function in the class of functions covered by the permissible ABE policies (currently Boolean formulas). This scheme enjoys a very efficient verification algorithm that depends only on the output size. Efficient delegation, however, requires the ABE encryption algorithm to be cheaper than the original function computation. Strengthening this connection, we show a construction of a multi-function verifiable computation scheme from an ABE scheme with outsourced decryption, a primitive defined recently by Green, Hohenberger and Waters (USENIX Security 2011). A multi-function VC scheme allows the verifiable evaluation of multiple functions on the same preprocessed input. In the other direction, we also explore the construction of an ABE scheme from verifiable computation protocols.
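
To make the ABE-to-VC connection concrete, here is a minimal sketch of the kind of construction the abstract describes for a Boolean function F, using two independent key-policy ABE instances, one keyed for F and one for its complement. The ABE interface and all names (vc_keygen, vc_probgen, ...) are illustrative assumptions, not the paper's exact scheme; SHA-256 stands in for the one-way function that makes verification public.

```python
# Hedged sketch of VC-from-ABE for a Boolean function F (assumed key-policy ABE interface).
import os, hashlib

class ABE:
    """Assumed key-policy ABE: dec succeeds iff the key's policy accepts the ciphertext's attribute."""
    def setup(self):               raise NotImplementedError   # -> (mpk, msk)
    def keygen(self, msk, policy): raise NotImplementedError   # -> decryption key for `policy`
    def enc(self, mpk, attr, msg): raise NotImplementedError   # encrypt msg under attribute attr
    def dec(self, key, ct):        raise NotImplementedError   # -> msg if policy(attr) == 1, else None

def vc_keygen(abe, F):
    (mpk0, msk0), (mpk1, msk1) = abe.setup(), abe.setup()
    ek = (abe.keygen(msk0, lambda x: 1 - F(x)),   # key for the complement of F
          abe.keygen(msk1, F))                    # key for F itself
    return (mpk0, mpk1), ek                       # public key, evaluation key (both publishable)

def vc_probgen(abe, pk, x):
    m0, m1 = os.urandom(32), os.urandom(32)       # fresh random "answer tokens"
    sigma = (abe.enc(pk[0], x, m0), abe.enc(pk[1], x, m1))
    vk = (hashlib.sha256(m0).digest(), hashlib.sha256(m1).digest())
    return sigma, vk                              # vk is public, so anyone can verify

def vc_compute(abe, ek, sigma):
    # Exactly one decryption succeeds, releasing the token corresponding to F(x).
    return abe.dec(ek[1], sigma[1]) or abe.dec(ek[0], sigma[0])

def vc_verify(vk, token):
    if token is None:                           return None   # reject
    if hashlib.sha256(token).digest() == vk[1]: return 1      # evidence that F(x) = 1
    if hashlib.sha256(token).digest() == vk[0]: return 0      # evidence that F(x) = 0
    return None
```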

292 citations


Book Chapter
19 Mar 2012
TL;DR: New algorithms (and new analyses of existing algorithms) in both the interactive and non-interactive settings are given, and a reduction based on the IDC framework shows that an efficient, private algorithm for computing sufficiently accurate rank-1 matrix approximations would lead to an improved efficient algorithm for releasing private synthetic data for graph cuts.
Abstract: In this paper we study the problem of approximately releasing the cut function of a graph while preserving differential privacy, and give new algorithms (and new analyses of existing algorithms) in both the interactive and non-interactive settings. Our algorithms in the interactive setting are achieved by revisiting the problem of releasing differentially private, approximate answers to a large number of queries on a database. We show that several algorithms for this problem fall into the same basic framework, and are based on the existence of objects which we call iterative database construction algorithms. We give a new generic framework in which new (efficient) IDC algorithms give rise to new (efficient) interactive private query release mechanisms. Our modular analysis simplifies and tightens the analysis of previous algorithms, leading to improved bounds. We then give a new IDC algorithm (and therefore a new private, interactive query release mechanism) based on the Frieze/Kannan low-rank matrix decomposition. This new release mechanism gives an improvement on prior work in a range of parameters where the size of the database is comparable to the size of the data universe (such as releasing all cut queries on dense graphs). We also give a non-interactive algorithm for efficiently releasing private synthetic data for graph cuts with error O(|V|^{1.5}). Our algorithm is based on randomized response and a non-private implementation of the SDP-based, constant-factor approximation algorithm for cut-norm due to Alon and Naor. Finally, we give a reduction based on the IDC framework showing that an efficient, private algorithm for computing sufficiently accurate rank-1 matrix approximations would lead to an improved efficient algorithm for releasing private synthetic data for graph cuts. We leave finding such an algorithm as our main open problem.
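
As a concrete illustration of the randomized-response component mentioned above, here is a minimal sketch: perturb each potential edge independently and answer cut queries with a simple unbiased estimator. This is only the front end; the paper's non-interactive algorithm additionally post-processes the noisy graph with the Alon-Naor cut-norm approximation, which is omitted here, and the function names are illustrative.

```python
# Randomized response on the edge set, plus an unbiased cut estimator (illustrative sketch).
import random, math

def randomized_response(adj, eps):
    """adj: dict mapping unordered pairs (i, j), i < j, to 0/1 edge indicators."""
    p = math.exp(eps) / (1.0 + math.exp(eps))       # keep each bit with probability p
    return {e: (a if random.random() < p else 1 - a) for e, a in adj.items()}, p

def estimate_cut(noisy, p, S, T):
    """Unbiased estimate of the number of edges between vertex sets S and T."""
    est = 0.0
    for (i, j), b in noisy.items():
        if (i in S and j in T) or (i in T and j in S):
            est += (b - (1 - p)) / (2 * p - 1)      # E[term] equals the true edge indicator
    return est
```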

192 citations


Book Chapter
19 Mar 2012
TL;DR: This work reduces both the CRS length and the prover's computational complexity from quadratic to quasilinear in the circuit size of Groth's NIZK circuit satisfiability argument by using a recent construction of progression-free sets.
Abstract: In 2010, Groth constructed the only previously known sublinear-communication NIZK circuit satisfiability argument in the common reference string model. We optimize Groth's argument by, in particular, reducing both the CRS length and the prover's computational complexity from quadratic to quasilinear in the circuit size. We also use a (presumably) weaker security assumption, and have tighter security reductions. Our main contribution is to show that the complexity of Groth's basic arguments is dominated by the quadratic number of monomials in certain polynomials. We collapse the number of monomials to quasilinear by using a recent construction of progression-free sets.

191 citations


Book Chapter
19 Mar 2012
TL;DR: These are the first efficient constructions for a broad class of natural predicates such as quoting, subsets, weighted sums, averages, and Fourier transforms that provably satisfy this strong security notion.
Abstract: In tandem with recent progress on computing on encrypted data via fully homomorphic encryption, we present a framework for computing on authenticated data via the notion of slightly homomorphic signatures, or P-homomorphic signatures. With such signatures, it is possible for a third party to derive a signature on the object m′ from a signature of m as long as P(m,m′)=1 for some predicate P which captures the "authenticatable relationship" between m′ and m. Moreover, a derived signature on m′ reveals no extra information about the parent m. Our definition is carefully formulated to provide one unified framework for a variety of distinct concepts in this area, including arithmetic, homomorphic, quotable, redactable, transitive signatures and more. It includes being unable to distinguish a derived signature from a fresh one even when given the original signature. The inability to link derived signatures to their original sources prevents some practical privacy and linking attacks, which is a challenge not satisfied by most prior works. Under this strong definition, we then provide generic constructions for all univariate and closed predicates, and specific efficient constructions for a broad class of natural predicates such as quoting, subsets, weighted sums, averages, and Fourier transforms. To our knowledge, these are the first efficient constructions for these predicates (excluding subsets) that provably satisfy this strong security notion.

111 citations


Book Chapter
19 Mar 2012
TL;DR: In this article, the authors combine the techniques of Hardt and Talwar [11] and McGregor et al. to obtain linear lower bounds on distortion when providing differential privacy for a (contrived) class of low-sensitivity queries.
Abstract: This paper is about private data analysis, in which a trusted curator holding a confidential database responds to real vector-valued queries. A common approach to ensuring privacy for the database elements is to add appropriately generated random noise to the answers, releasing only these noisy responses. A line of study initiated in [7] examines the amount of distortion needed to prevent privacy violations of various kinds. The results in the literature vary according to several parameters, including the size of the database, the size of the universe from which data elements are drawn, the "amount" of privacy desired, and for the purposes of the current work, the arity of the query. In this paper we sharpen and unify these bounds. Our foremost result combines the techniques of Hardt and Talwar [11] and McGregor et al. [13] to obtain linear lower bounds on distortion when providing differential privacy for a (contrived) class of low-sensitivity queries. (A query has low sensitivity if the data of a single individual has small effect on the answer.) Several structural results follow as immediate corollaries: — We separate so-called counting queries from arbitrary low-sensitivity queries, proving the latter requires more noise, or distortion, than does the former; — We separate (ε,0)-differential privacy from its well-studied relaxation (ε,δ)-differential privacy, even when δ ∈ 2^{-o(n)} is negligible in the size n of the database, proving the latter requires less distortion than the former; — We demonstrate that (ε,δ)-differential privacy is much weaker than (ε,0)-differential privacy in terms of mutual information of the transcript of the mechanism with the database, even when δ ∈ 2^{-o(n)} is negligible in the size n of the database. We also simplify the lower bounds on noise for counting queries in [11] and also make them unconditional. Further, we use a characterization of (ε,δ)-differential privacy from [13] to obtain lower bounds on the distortion needed to ensure (ε,δ)-differential privacy for ε, δ > 0. We next revisit the LP decoding argument of [10] and combine it with a recent result of Rudelson [15] to improve on a result of Kasiviswanathan et al. [12] on noise lower bounds for privately releasing ℓ-way marginals.

110 citations


Book Chapter
19 Mar 2012
TL;DR: This work defines two new notions of extractable hash functions, proposes an instantiation based on the knowledge of exponent in an RSA group, and builds succinct zero-knowledge arguments in the CRS model.
Abstract: We propose a 2-party UC-secure protocol that can compute any function securely. The protocol requires only two messages, communication that is poly-logarithmic in the size of the circuit description of the function, and the workload for one of the parties is also only poly-logarithmic in the size of the circuit. This implies, for instance, delegatable computation that requires no expensive off-line phase and remains secure even if the server learns whether the client accepts its results. To achieve this, we define two new notions of extractable hash functions, propose an instantiation based on the knowledge of exponent in an RSA group, and build succinct zero-knowledge arguments in the CRS model.

108 citations


Book Chapter
19 Mar 2012
TL;DR: In this paper, it was shown that the free-XOR technique cannot be proven secure based on correlation robustness alone; somewhat surprisingly, some form of circular security is also required.
Abstract: Yao's garbled-circuit approach enables constant-round secure two-party computation of any function. In Yao's original construction, each gate in the circuit requires the parties to perform a constant number of encryptions/decryptions and to send/receive a constant number of ciphertexts. Kolesnikov and Schneider (ICALP 2008) proposed an improvement that allows XOR gates to be evaluated "for free," incurring no cryptographic operations and zero communication. Their "free-XOR" technique has proven very popular, and has been shown to improve performance of garbled-circuit protocols by up to a factor of 4. Kolesnikov and Schneider proved security of their approach in the random oracle model, and claimed that (an unspecified variant of) correlation robustness suffices; this claim has been repeated in subsequent work, and similar ideas have since been used in other contexts. We show that the free-XOR technique cannot be proven secure based on correlation robustness alone; somewhat surprisingly, some form of circular security is also required. We propose an appropriate definition of security for hash functions capturing the necessary requirements, and prove security of the free-XOR approach when instantiated with any hash function satisfying our definition. Our results do not impact the security of the free-XOR technique in practice, or imply an error in the free-XOR work, but instead pin down the assumptions needed to prove security.
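
For readers unfamiliar with the technique under discussion, the following is a minimal sketch of the free-XOR wire-labeling idea (label length and helper names are illustrative): with a global secret offset Delta, the label for bit v on wire w is K_w XOR (v * Delta), so an XOR gate needs no garbled table and no cryptographic operations.

```python
# Free-XOR wire labels: XOR gates are evaluated by XORing the active input labels.
import os

N = 16                                            # label length in bytes (illustrative)
xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))

Delta = os.urandom(N)                             # global offset, known only to the garbler

def new_wire():
    k0 = os.urandom(N)
    return k0, xor(k0, Delta)                     # (label for 0, label for 1)

def garble_xor(wire_a, wire_b):
    k0 = xor(wire_a[0], wire_b[0])                # zero-label of the output wire
    return k0, xor(k0, Delta)

# Evaluation: given the active labels for bits va and vb, their XOR equals
# k0 XOR ((va ^ vb) * Delta), i.e. the correct output label for va ^ vb.
a, b = new_wire(), new_wire()
c = garble_xor(a, b)
assert xor(a[1], b[0]) == c[1] and xor(a[1], b[1]) == c[0]
```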

104 citations


Book Chapter
19 Mar 2012
TL;DR: A general construction of deterministic encryption schemes that unifies prior work and gives novel schemes, and extends the result about computational entropy to the average case, which is useful in reasoning about leakage-resilient cryptography.
Abstract: We propose a general construction of deterministic encryption schemes that unifies prior work and gives novel schemes. Specifically, its instantiations provide: — A construction from any trapdoor function that has sufficiently many hardcore bits. — A construction that provides "bounded" multi-message security from lossy trapdoor functions. The security proofs for these schemes are enabled by three tools that are of broader interest: — A weaker and more precise sufficient condition for semantic security on a high-entropy message distribution. Namely, we show that to establish semantic security on a distribution M of messages, it suffices to establish indistinguishability for all conditional distributions M|E, where E is an event of probability at least 1/4. (Prior work required indistinguishability on all distributions of a given entropy.) — A result about computational entropy of conditional distributions. Namely, we show that conditioning on an event E of probability p reduces the quality of computational entropy by a factor of p and its quantity by log_2(1/p). — A generalization of the leftover hash lemma to correlated distributions. We also extend our result about computational entropy to the average case, which is useful in reasoning about leakage-resilient cryptography: leaking λ bits of information reduces the quality of computational entropy by a factor of 2^λ and its quantity by λ.
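
Stated as formulas, the two entropy-loss claims above read roughly as follows (a hedged restatement in HILL-style notation, where ε is the "quality" and k the "quantity" of computational entropy; the symbols are ours, not the paper's):

```latex
% Conditioning on an event E with Pr[E] >= p:
\[
  H^{\mathrm{HILL}}_{\varepsilon,\,s}(X) \ \ge\ k
  \quad\Longrightarrow\quad
  H^{\mathrm{HILL}}_{\varepsilon/p,\,s'}(X \mid E) \ \ge\ k - \log_2\!\frac{1}{p}.
\]
% Average-case version, for leakage Z of \lambda bits:
\[
  H^{\mathrm{HILL}}_{\varepsilon,\,s}(X) \ \ge\ k
  \quad\Longrightarrow\quad
  \widetilde{H}^{\mathrm{HILL}}_{\varepsilon\cdot 2^{\lambda},\,s'}(X \mid Z) \ \ge\ k - \lambda .
\]
```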

92 citations


Book Chapter
19 Mar 2012
TL;DR: A general compiler that transforms any cryptographic scheme into a functionally equivalent scheme which is resilient to any continual leakage, and does not make use of public key encryption, which was required in all previous works.
Abstract: Physical cryptographic devices inadvertently leak information through numerous side-channels. Such leakage is exploited by so-called side-channel attacks, which often allow for a complete security breach. A recent trend in cryptography is to propose formal models to incorporate leakage into the model and to construct schemes that are provably secure within them. We design a general compiler that transforms any cryptographic scheme, e.g., a block-cipher, into a functionally equivalent scheme which is resilient to any continual leakage provided that the following three requirements are satisfied: (i) in each observation the leakage is bounded, (ii) different parts of the computation leak independently, and (iii) the randomness that is used for certain operations comes from a simple (non-uniform) distribution. In contrast to earlier work on leakage-resilient circuit compilers, which relied on computational assumptions, our results are purely information-theoretic. In particular, we do not make use of public key encryption, which was required in all previous works.

87 citations


Book Chapter
19 Mar 2012
TL;DR: A variant of the UC theorem is proved that enables modular design and analysis of protocols even in the face of general, non-modular leakage.
Abstract: We put forth a framework for expressing security requirements from interactive protocols in the presence of arbitrary leakage. The framework allows capturing different levels of leakage-tolerance of protocols, namely the preservation (or degradation) of security, under coordinated attacks that include various forms of leakage from the secret states of participating components. The framework extends the universally composable (UC) security framework. We also prove a variant of the UC theorem that enables modular design and analysis of protocols even in the face of general, non-modular leakage. We then construct leakage-tolerant protocols for basic tasks, such as secure message transmission, message authentication, commitment, oblivious transfer and zero-knowledge. A central component in several of our constructions is the observation that resilience to adaptive party corruptions (in some strong sense) implies leakage-tolerance in an essentially optimal way.

75 citations


Book Chapter
19 Mar 2012
TL;DR: It is demonstrated that the notion of smooth projective hash functions can be useful to design round-optimal privacy-preserving interactive protocols and is suitable for designing schemes that rely on standard security assumptions in the standard model with a common-reference string.
Abstract: In 2008, Groth and Sahai proposed a powerful suite of techniques for constructing non-interactive zero-knowledge proofs in bilinear groups. Their proof systems have found numerous applications, including group signature schemes, anonymous voting, and anonymous credentials. In this paper, we demonstrate that the notion of smooth projective hash functions can be useful to design round-optimal privacy-preserving interactive protocols. We show that this approach is suitable for designing schemes that rely on standard security assumptions in the standard model with a common-reference string and are more efficient than those obtained using the Groth-Sahai methodology. As an illustration of our design principle, we construct an efficient oblivious signature-based envelope scheme and a blind signature scheme, both round-optimal.
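
Since smooth projective hashing is the central tool here, a minimal sketch may help: the classic SPHF for the Diffie-Hellman language, in the style of Cramer-Shoup, written with toy group parameters chosen purely for illustration. None of this is the paper's concrete instantiation.

```python
# SPHF for {(u, v) = (g^r, h^r)}: hashing with the secret key agrees with hashing
# from the projection key plus a witness, and is statistically random off the language.
import secrets

q, p = 11, 23                        # toy subgroup order q and safe prime p = 2q + 1 (illustration only)
g, h = 4, 9                          # two elements of the order-q subgroup of Z_p^*

def hashkg():                        # hashing key hk = (alpha, beta)
    return secrets.randbelow(q), secrets.randbelow(q)

def projkg(hk):                      # projection key hp = g^alpha * h^beta
    a, b = hk
    return (pow(g, a, p) * pow(h, b, p)) % p

def hash_with_hk(hk, word):          # private evaluation: u^alpha * v^beta
    (a, b), (u, v) = hk, word
    return (pow(u, a, p) * pow(v, b, p)) % p

def hash_with_hp(hp, r):             # public evaluation, given a witness r for the word
    return pow(hp, r, p)

# Correctness on a word in the language: both evaluations agree.
r = secrets.randbelow(q)
word = (pow(g, r, p), pow(h, r, p))
hk = hashkg(); hp = projkg(hk)
assert hash_with_hk(hk, word) == hash_with_hp(hp, r)
```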

Book Chapter
19 Mar 2012
TL;DR: Three 3-round proofs and arguments with negligible soundness error are constructed satisfying two relaxed notions of zero-knowledge (ZK): weak ZK and witness hiding (WH).
Abstract: We construct 3-round proofs and arguments with negligible soundness error satisfying two relaxed notions of zero-knowledge (ZK): weak ZK and witness hiding (WH). At the heart of our constructions lie new techniques based on point obfuscation with auxiliary input (AIPO). It is known that such protocols cannot be proven secure using black-box reductions (or simulation). Our constructions circumvent these lower bounds, utilizing AIPO (and extensions) as the "non-black-box component" in the security reduction.

Book Chapter
19 Mar 2012
TL;DR: The use of cryptography is eliminated from the online phase of recent protocols for multiparty coin-flipping and MPC with partial fairness and the first unconditional construction of a complete primitive for fully secure function evaluation whose complexity does not grow with the complexity of the function being evaluated is presented.
Abstract: Motivated by problems in secure multiparty computation (MPC), we study a natural extension of identifiable secret sharing to the case where an arbitrary number of players may be corrupted. An identifiable secret sharing scheme is a secret sharing scheme in which the reconstruction algorithm, after receiving shares from all players, either outputs the correct secret or publicly identifies the set of all cheaters (players who modified their original shares) with overwhelming success probability. This property is impossible to achieve without an honest majority. Instead, we settle for having the reconstruction algorithm inform each honest player of the correct set of cheaters. We show that this new notion of secret sharing can be unconditionally realized in the presence of arbitrarily many corrupted players. We demonstrate the usefulness of this primitive by presenting several applications to MPC without an honest majority. Complete primitives for MPC. We present the first unconditional construction of a complete primitive for fully secure function evaluation whose complexity does not grow with the complexity of the function being evaluated. This can be used for realizing fully secure MPC using small and stateless tamper-proof hardware. A previous completeness result of Gordon et al. (TCC 2010) required the use of cryptographic signatures. Applications to partial fairness. We eliminate the use of cryptography from the online phase of recent protocols for multiparty coin-flipping and MPC with partial fairness (Beimel et al., Crypto 2010 and Crypto 2011). This is a corollary of a more general technique for unconditionally upgrading security against fail-stop adversaries with preprocessing to security against malicious adversaries. Finally, we complement our positive results by a negative result on identifying cheaters in unconditionally secure MPC. It is known that MPC without an honest majority can be realized unconditionally in the OT-hybrid model, provided that one settles for "security with abort" (Kilian, 1988). That is, the adversary can decide whether to abort the protocol after learning the outputs of corrupted players. We show that such protocols cannot be strengthened so that all honest players agree on the identity of a corrupted player in the event that the protocol aborts, even if a broadcast primitive can be used. This is contrasted with the computational setting, in which this stronger notion of security can be realized under standard cryptographic assumptions (Goldreich et al., 1987).

Book Chapter
19 Mar 2012
TL;DR: The notion of all-but-one perfectly sound threshold hash proof systems, which can be seen as (threshold) hash proof systems with publicly verifiable and simulation-sound proofs, is defined, and it is shown that this notion generically implies threshold cryptosystems combining the aforementioned properties.
Abstract: In threshold cryptography, private keys are divided into n shares, each one of which is given to a different server in order to avoid single points of failure. In the case of threshold public-key encryption, at least t ≤ n servers need to contribute to the decryption process. A threshold primitive is said to be robust if no coalition of t malicious servers can prevent remaining honest servers from successfully completing private key operations. So far, most practical non-interactive threshold cryptosystems, where no interactive conversation is required among decryption servers, were only proved secure against static corruptions. In the adaptive corruption scenario (where the adversary can corrupt servers at any time, based on its complete view), all existing robust threshold encryption schemes that also resist chosen-ciphertext attacks (CCA) required, until recently, interaction in the decryption phase. A specific method (in composite order groups) for getting rid of interaction was recently suggested, leaving open the question of more generic frameworks and constructions with better security and better flexibility (i.e., compatibility with distributed key generation). This paper describes a general construction of adaptively secure robust non-interactive threshold cryptosystems with chosen-ciphertext security. We define the notion of all-but-one perfectly sound threshold hash proof systems that can be seen as (threshold) hash proof systems with publicly verifiable and simulation-sound proofs. We show that this notion generically implies threshold cryptosystems combining the aforementioned properties. Then, we provide efficient instantiations under well-studied assumptions in bilinear groups (e.g., in such groups of prime order). These instantiations have a tighter security proof and are indeed compatible with distributed key generation protocols.

Book Chapter
19 Mar 2012
TL;DR: It is shown that sufficiently strong collision-resistant hash functions are seed-dependent condensers that produce outputs with min-entropy k' = m - O(log t), i.e. logarithmic entropy deficiency.
Abstract: We initiate a study of randomness condensers for sources that are efficiently samplable but may depend on the seed of the condenser. That is, we seek functions Cond : {0,1}^n × {0,1}^d → {0,1}^m such that if we choose a random seed S ← {0,1}^d, and a source X = A(S) is generated by a randomized circuit A of size t such that X has min-entropy at least k given S, then Cond(X;S) should have min-entropy at least some k′ given S. The distinction from the standard notion of randomness condensers is that the source X may be correlated with the seed S (but is restricted to be efficiently samplable). Randomness extractors of this type (corresponding to the special case where k′ = m) have been implicitly studied in the past (by Trevisan and Vadhan, FOCS '00). We show that: — Unlike extractors, we can have randomness condensers for samplable, seed-dependent sources whose computational complexity is smaller than the size t of the adversarial sampling algorithm A. Indeed, we show that sufficiently strong collision-resistant hash functions are seed-dependent condensers that produce outputs with min-entropy k' = m - O(log t), i.e. logarithmic entropy deficiency. — Randomness condensers suffice for key derivation in many cryptographic applications: when an adversary has negligible success probability (or negligible "squared advantage" [3]) for a uniformly random key, we can use instead a key generated by a condenser whose output has logarithmic entropy deficiency. — Randomness condensers for seed-dependent samplable sources that are robust to side information generated by the sampling algorithm imply soundness of the Fiat-Shamir Heuristic when applied to any constant-round, public-coin interactive proof system.
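
In display form, the requirement described above reads as follows (a hedged restatement; the notation is ours):

```latex
\[
  \mathrm{Cond}: \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m, \qquad
  S \leftarrow \{0,1\}^d,\quad X = A(S),\ |A| \le t,
\]
\[
  H_\infty(X \mid S) \ \ge\ k
  \quad\Longrightarrow\quad
  H_\infty\!\big(\mathrm{Cond}(X;S) \,\big|\, S\big) \ \ge\ k',
  \qquad\text{e.g. } k' = m - O(\log t)\ \text{for the CRHF-based condenser.}
\]
```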

Book Chapter
19 Mar 2012
TL;DR: In this article, it was shown that the 6-round Feistel construction is seq-indifferentiable from a random invertible permutation, which is equivalent to pub-indifference for stateless ideal primitives.
Abstract: We show that the Feistel construction with six rounds and random round functions is publicly indifferentiable from a random invertible permutation (a result that is not known to hold for full indifferentiability). Public indifferentiability (pub-indifferentiability for short) is a variant of indifferentiability introduced by Yoneyama et al. [29] and Dodis et al. [12] where the simulator knows all queries made by the distinguisher to the primitive it tries to simulate, and is useful to argue the security of cryptosystems where all the queries to the ideal primitive are public (as e.g. in many digital signature schemes). To prove the result, we introduce a new and simpler variant of indifferentiability, that we call sequential indifferentiability (seq-indifferentiability for short) and show that this notion is in fact equivalent to pub-indifferentiability for stateless ideal primitives. We then prove that the 6-round Feistel construction is seq-indifferentiable from a random invertible permutation. We also observe that sequential indifferentiability implies correlation intractability, so that the Feistel construction with six rounds and random round functions yields a correlation intractable invertible permutation, a notion we define analogously to correlation intractable functions introduced by Canetti et al. [4].
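
For concreteness, here is a minimal executable sketch of the object being analyzed: a 6-round Feistel permutation over 2n-bit strings whose round functions are independent, lazily sampled random functions. The block size and helper names are illustrative only.

```python
# 6-round Feistel with random round functions, plus its inverse.
import os

N = 8                                              # n in bytes (illustrative)
xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))

def random_function():
    table = {}
    def F(x):
        if x not in table:
            table[x] = os.urandom(N)               # lazily sample a random n-bit output
        return table[x]
    return F

ROUNDS = [random_function() for _ in range(6)]

def feistel(left, right, rounds=ROUNDS):
    for F in rounds:
        left, right = right, xor(left, F(right))   # (L, R) -> (R, L xor F(R))
    return left, right

def feistel_inverse(left, right, rounds=ROUNDS):
    for F in reversed(rounds):
        left, right = xor(right, F(left)), left    # undo one round
    return left, right

L, R = os.urandom(N), os.urandom(N)
assert feistel_inverse(*feistel(L, R)) == (L, R)
```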

Book Chapter
19 Mar 2012
TL;DR: This result suggests that in order to prove the security of a given instantiation of RSA-FDH, one should use a non-black box security proof, or use specific properties of the RSA group that are not captured by its multiplicative structure alone.
Abstract: The hash-and-sign RSA signature is one of the most elegant and well-known signature schemes, extensively used in a wide variety of cryptographic applications. Unfortunately, the only existing analysis of this popular signature scheme is in the random oracle model, where the resulting idealized signature is known as the RSA Full Domain Hash signature scheme (RSA-FDH). In fact, prior work has shown several "uninstantiability" results for various abstractions of RSA-FDH, where the RSA function was replaced by a family of trapdoor random permutations, or the hash function instantiating the random oracle could not be keyed. These abstractions, however, do not allow the reduction and the hash function instantiation to use the algebraic properties of the RSA function, such as the multiplicative group structure of ℤ_N^*. In contrast, the multiplicative property of the RSA function is critically used in many standard model analyses of various RSA-based schemes. Motivated by closing this gap, we consider the setting where the RSA function representation is generic (i.e., black-box) but multiplicative, whereas the hash function itself is in the standard model, and can be keyed and exploit the multiplicative properties of the RSA function. This setting abstracts all known techniques for designing provably secure RSA-based signatures in the standard model, and aims to address the main limitations of prior uninstantiability results. Unfortunately, we show that it is still impossible to reduce the security of RSA-FDH to any natural assumption even in our model. Thus, our result suggests that in order to prove the security of a given instantiation of RSA-FDH, one should use a non-black-box security proof, or use specific properties of the RSA group that are not captured by its multiplicative structure alone. We complement our negative result with a positive result, showing that the RSA-FDH signatures can be proven secure under the standard RSA assumption, provided that the number of signing queries is a priori bounded.
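
For reference, the scheme under discussion is simple to state. The sketch below shows hash-and-sign RSA-FDH with a toy key and a toy full-domain hash (SHA-256 reduced mod N); both are placeholders for illustration, not a secure instantiation.

```python
# RSA-FDH: sign(m) = H(m)^d mod N, verify by checking sigma^e = H(m) mod N.
import hashlib

def fdh(msg, N):                                   # toy "full-domain" hash into Z_N
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

def sign(msg, d, N):
    return pow(fdh(msg, N), d, N)

def verify(msg, sigma, e, N):
    return pow(sigma, e, N) == fdh(msg, N)

# Tiny demo key (p = 61, q = 53), purely illustrative.
p, q = 61, 53
N, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))                  # e * d = 1 mod phi(N)
assert verify(b"hello", sign(b"hello", d, N), e, N)
```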

Book Chapter
19 Mar 2012
TL;DR: The authors introduce the notion of collusion-resistant obfuscation, defined with respect to average-case secure obfuscation (Hohenberger et al., TCC 2007), and provide a construction of a functional re-encryption scheme for any function F with a polynomial-size domain that satisfies this notion.
Abstract: We introduce a natural cryptographic functionality called functional re-encryption. Informally, this functionality, for a public-key encryption scheme and a function F with n possible outputs, transforms ("re-encrypts") an encryption of a message m under an "input public key" pk into an encryption of the same message m under one of the n "output public keys", namely the public key indexed by F(m). In many settings, one might require that the program implementing the functional re-encryption functionality should reveal nothing about either the input secret key sk or the function F. As an example, consider a user Alice who wants her email server to share her incoming mail with one of a set of n recipients according to an access policy specified by her function F, but who wants to keep this access policy private from the server. Furthermore, in this setting, we would ideally obtain an even stronger guarantee: that this information remains hidden even when some of the n recipients may be corrupted. To formalize these issues, we introduce the notion of collusion-resistant obfuscation and define this notion with respect to average-case secure obfuscation (Hohenberger et al. - TCC 2007). We then provide a construction of a functional re-encryption scheme for any function F with a polynomial-size domain and show that it satisfies this notion of collusion-resistant obfuscation. We note that collusion-resistant security can be viewed as a special case of dependent auxiliary input security (a setting where virtually no positive results are known), and this notion may be of independent interest. Finally, we show that collusion-resistant obfuscation of functional re-encryption for a function F gives a way to obfuscate F in the sense of Barak et al. (CRYPTO 2001), indicating that this task is impossible for arbitrary (polynomial-time computable) functions F.

Book Chapter
19 Mar 2012
TL;DR: This work follows the approach of constructive cryptography, questioning the justification for the existing (game-based) security definitions, and finds that some are too weak, such as INT-PTXT, some artificially strong, such as INT-CTXT, and others unsuitable for symmetric encryption, such as IND-CCA.
Abstract: Traditional security definitions in the context of secure communication specify properties of cryptographic schemes. For symmetric encryption schemes, these properties are intended to capture the protection of the confidentiality or the integrity of the encrypted messages. A vast variety of such definitions has emerged in the literature and, despite the efforts of previous work, the relations and interplay of many of these notions (which are a priori not composable) are unexplored. Also, the exact guarantees implied by the properties are hard to understand. In constructive cryptography, notions such as confidentiality and integrity appear as attributes of channels, i.e., the communication itself. This makes the guarantees achieved by cryptographic schemes explicit, and leads to security definitions that are composable. In this work, we follow the approach of constructive cryptography, questioning the justification for the existing (game-based) security definitions. In particular, we compare these definitions with related constructive notions and find that some are too weak, such as INT-PTXT, or artificially strong, such as INT-CTXT. Others appear unsuitable for symmetric encryption, such as IND-CCA.

Book Chapter
19 Mar 2012
TL;DR: ZK-PCPs can be used to get black-box variants of this result with improved round complexity, as well as an unconditional zero-knowledge variant of Micali's non-interactive CS Proofs (FOCS '94) in the Random Oracle Model.
Abstract: We revisit the question of Zero-Knowledge PCPs, studied by Kilian, Petrank, and Tardos (STOC '97). A ZK-PCP is defined similarly to a standard PCP, except that the view of any (possibly malicious) verifier can be efficiently simulated up to a small statistical distance. Kilian et al. obtained a ZK-PCP for NEXP in which the proof oracle is in EXP^NP. They also obtained a ZK-PCP for NP in which the proof oracle is computable in polynomial-time, but this ZK-PCP is only zero-knowledge against bounded-query verifiers who make at most an a priori fixed polynomial number of queries. The existence of ZK-PCPs for NP with efficient oracles and arbitrary polynomial-time malicious verifiers was left open. This question is motivated by the recent line of work on cryptography using tamper-proof hardware tokens: an efficient ZK-PCP (for any language) is equivalent to a statistical zero-knowledge proof using only a single stateless token sent to the verifier. We obtain the following results regarding efficient ZK-PCPs: Negative Result on Efficient ZK-PCPs. Assuming that the polynomial time hierarchy does not collapse, we settle the above question in the negative for ZK-PCPs in which the verifier is nonadaptive (i.e. the queries only depend on the input and secret randomness but not on the PCP answers). Simplifying Bounded-Query ZK-PCPs. The bounded-query zero-knowledge PCP of Kilian et al. starts from a weakly-sound bounded-query ZK-PCP of Dwork et al. (CRYPTO '92) and amplifies its soundness by introducing and constructing a new primitive called locking scheme -- an unconditional oracle-based analogue of a commitment scheme. We simplify the ZK-PCP of Kilian et al. by presenting an elementary new construction of locking schemes. Our locking scheme is purely combinatorial. Black-Box Sublinear ZK Arguments via ZK-PCPs. Kilian used PCPs to construct sublinear-communication zero-knowledge arguments for NP which make a non-black-box use of collision-resistant hash functions (STOC '92). We show that ZK-PCPs can be used to get black-box variants of this result with improved round complexity, as well as an unconditional zero-knowledge variant of Micali's non-interactive CS Proofs (FOCS '94) in the Random Oracle Model.

Book Chapter
19 Mar 2012
TL;DR: This paper invalidates the evidence that schemes relying on both the cancelling and projecting properties of composite-order bilinear groups cannot be converted to prime-order bilinear groups, by proving secure the prime-order version of the Meiklejohn-Shacham-Freeman blind signature scheme, and thereby obtains a 2-move (i.e., round-optimal) blind signature scheme based on the decisional linear assumption in the common reference string model.
Abstract: At Eurocrypt 2010, Freeman proposed a transformation from pairing-based schemes in composite-order bilinear groups to equivalent ones in prime-order bilinear groups. His transformation can be applied to pairing-based cryptosystems exploiting only one of two properties of composite-order bilinear groups: cancelling and projecting. At Asiacrypt 2010, Meiklejohn, Shacham, and Freeman showed that prime-order bilinear groups according to Freeman's construction cannot have the two properties simultaneously, except with negligible probability, and, as an instance of implausible conversion, proposed a (partially) blind signature scheme whose security proof exploits both the cancelling and projecting properties of composite-order bilinear groups. In this paper, we invalidate their evidence by presenting a security proof of the prime-order version of their blind signature scheme. Our security proof follows a different strategy and exploits only the projecting property. Instead of the cancelling property, a new property on prime-order bilinear groups that we call translating, whose existence was not known in composite-order bilinear groups, plays an important role in the security proof. With this proof, we obtain a 2-move (i.e., round-optimal) (partially) blind signature scheme (without random oracles) based on the decisional linear assumption in the common reference string model, which is of independent interest. As the second contribution of this paper, we construct prime-order bilinear groups that possess both the cancelling and projecting properties at the same time by considering more general base groups. That is, we take a rank-n ℤ_p-submodule of ℤ_p^{n^2}, instead of ℤ_p^n, to be a base group G, and consider the projections into its rank-1 submodules. We show that the subgroup decision assumption on this base group G holds in the generic bilinear group model for n=2, and provide an efficient membership-checking algorithm to G, which was trivial in the previous setting. Consequently, it is still open whether there exists a cryptosystem on composite-order bilinear groups that cannot be constructed on prime-order bilinear groups.

Book Chapter
19 Mar 2012
TL;DR: Evidence in support of the view that small bias is a good measure of pseudorandomness for local functions with large stretch is given, by demonstrating that resilience to linear distinguishers implies resilience to a larger class of attacks for such functions.
Abstract: We consider pseudorandom generators in which each output bit depends on a constant number of input bits. Such generators have appealingly simple structure: they can be described by a sparse input-output dependency graph G and a small predicate P that is applied at each output. Following the works of Cryan and Miltersen (MFCS '01) and Mossel et al. (FOCS '03), we ask: which graphs and predicates yield "small-bias" generators (that fool linear distinguishers)? We identify an explicit class of degenerate predicates and prove the following. For most graphs, all non-degenerate predicates yield small-bias generators, f: {0,1}^n → {0,1}^m, with output length m = n^{1+ε} for some constant ε > 0. Conversely, we show that for most graphs, degenerate predicates are not secure against linear distinguishers, even when the output length is linear m = n + Ω(n). Taken together, these results expose a dichotomy: every predicate is either very hard or very easy, in the sense that it either yields a small-bias generator for almost all graphs or fails to do so for almost all graphs. As a secondary contribution, we give evidence in support of the view that small bias is a good measure of pseudorandomness for local functions with large stretch. We do so by demonstrating that resilience to linear distinguishers implies resilience to a larger class of attacks for such functions.
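
The object under study is easy to picture. Below is a minimal sketch of a d-local generator given a dependency graph G and a predicate P; the particular predicate and the random graph are illustrative placeholders, not the paper's degenerate/non-degenerate classification.

```python
# A "local" generator: output bit j applies predicate P to the d seed bits indexed by G[j].
import random

n, m, d = 64, 128, 5                               # input length, output length, locality

def random_graph(n, m, d):
    return [random.sample(range(n), d) for _ in range(m)]

def xor_and(bits):                                 # example 5-ary predicate: x1^x2^x3^(x4&x5)
    return bits[0] ^ bits[1] ^ bits[2] ^ (bits[3] & bits[4])

def local_prg(x, graph, P=xor_and):
    return [P([x[i] for i in S]) for S in graph]

G = random_graph(n, m, d)
seed = [random.randrange(2) for _ in range(n)]
output = local_prg(seed, G)                        # m bits, each depending on only d seed bits
```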

Book Chapter
19 Mar 2012
TL;DR: This work constructs IBE schemes that are secure against a bounded number of collusions, starting with underlying PKE schemes which possess linear homomorphisms over their keys, and gives a transformation that takes any PKE satisfying Linear Key Homomorphism, Identity Map Compatibility, and the Linear Hash Proof Property and translates it into an IBE secure against bounded collusions.
Abstract: In this work, we show how to construct IBE schemes that are secure against a bounded number of collusions, starting with underlying PKE schemes which possess linear homomorphisms over their keys. In particular, this enables us to exhibit a new (bounded-collusion) IBE construction based on the quadratic residuosity assumption, without any need to assume the existence of random oracles. The new IBE's public parameters are of size O(tλ log I), where I is the total number of identities which can be supported by the system, t is the number of collusions which the system is secure against, and λ is a security parameter. While the number of collusions is bounded, we note that an exponential number of total identities can be supported. More generally, we give a transformation that takes any PKE satisfying Linear Key Homomorphism, Identity Map Compatibility, and the Linear Hash Proof Property and translates it into an IBE secure against bounded collusions. We demonstrate that these properties are more general than our quadratic residuosity-based scheme by showing how a simple PKE based on the DDH assumption also satisfies these properties.

Book Chapter
19 Mar 2012
TL;DR: This work shows a construction of a constant-round simultaneously resettable witness-indistinguishable argument of knowledge (simresWIAoK, for short) for any NP language and shows two applications of simresWIAoK: the first constant-round simultaneously resettable zero-knowledge argument of knowledge in the Bare Public-Key Model; and the first simultaneously resettable identification scheme which follows the knowledge extraction paradigm.
Abstract: In this work, we study simultaneously resettable arguments of knowledge. As our main result, we show a construction of a constant-round simultaneously resettable witness-indistinguishable argument of knowledge (simresWIAoK, for short) for any NP language. We also show two applications of simresWIAoK: the first constant-round simultaneously resettable zero-knowledge argument of knowledge in the Bare Public-Key Model; and the first simultaneously resettable identification scheme which follows the knowledge extraction paradigm.

Book Chapter
19 Mar 2012
TL;DR: In this article, the authors investigated the existence and inherent complexity of computational extractors from both theoretical and practical angles, with particular focus on their relationship to pseudorandomness.
Abstract: Computational extractors are efficient procedures that map a source of sufficiently high min-entropy to an output that is computationally indistinguishable from uniform. By relaxing the statistical closeness property of traditional randomness extractors one hopes to improve the efficiency and entropy parameters of these extractors, while keeping their utility for cryptographic applications. In this work we investigate computational extractors and consider questions of existence and inherent complexity from the theoretical and practical angles, with particular focus on the relationship to pseudorandomness. An obvious way to build a computational extractor is via the "extract-then-prg" method: apply a statistical extractor and use its output to seed a PRG. This approach carries with it the entropy cost inherent to implementing statistical extractors, namely, the source entropy needs to be substantially higher than the PRG's seed length. It also requires a PRG and thus relies on one-way functions. We study the necessity of one-way functions in the construction of computational extractors and determine matching lower and upper bounds on the "black-box efficiency" of generic constructions of computational extractors that use a one-way permutation as an oracle. Under this efficiency measure we prove a direct correspondence between the complexity of computational extractors and that of pseudorandom generators, showing the optimality of the extract-then-prg approach for generic constructions of computational extractors and confirming the intuition that to build a computational extractor via a PRG one needs to make up for the entropy gap intrinsic to statistical extractors. On the other hand, we show that with stronger cryptographic primitives one can have more entropy- and computationally-efficient constructions. In particular, we show a construction of a very practical computational extractor from any weak PRF without resorting to statistical extractors.
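
A minimal sketch of the generic "extract-then-prg" template discussed above, with toy stand-ins for both components (a keyed BLAKE2b call in place of a universal-hash extractor and SHAKE-256 in place of a PRG); these are assumptions for illustration, not the paper's constructions.

```python
# extract-then-prg: condense the weak source with a seeded extractor, then stretch with a PRG.
import hashlib, secrets

def toy_extract(source: bytes, seed: bytes, out_len: int = 32) -> bytes:
    # Placeholder for a universal-hash (leftover-hash-lemma style) extractor.
    return hashlib.blake2b(source, key=seed, digest_size=out_len).digest()

def toy_prg(key: bytes, out_len: int) -> bytes:
    # Placeholder PRG: expand the extracted key with an XOF.
    return hashlib.shake_256(key).digest(out_len)

def computational_extractor(source: bytes, seed: bytes, out_len: int) -> bytes:
    return toy_prg(toy_extract(source, seed), out_len)

seed = secrets.token_bytes(16)
weak_source = secrets.token_bytes(64)              # stands in for a high-min-entropy source
out = computational_extractor(weak_source, seed, 1024)
```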

Book Chapter
19 Mar 2012
TL;DR: The result shows that VRFs are strictly stronger and cannot be constructed (in a black-box way) from primitives such as public-key encryption, oblivious transfer, and key agreement, and also shows that unique signatures cannot be instantiated from ATDPs.
Abstract: Verifiable random functions (VRFs) are pseudorandom functions with the additional property that the owner of the seed SK can issue publicly-verifiable proofs for the statements "f(SK,x)=y", for any input x. Moreover, the output of VRFs is guaranteed to be unique, which means that y = f(SK,x) is the only value that can be proven to be the image of x. Despite their popularity, constructing VRFs seems to be a challenging task and only a few constructions based on specific number-theoretic problems are known. Basing a scheme on general assumptions is still an open problem. Towards this direction, Brakerski et al. showed that verifiable random functions cannot be constructed from one-way permutations in a black-box way. In this paper we continue the study of the relationship between VRFs and well-established cryptographic primitives. Our main result is a separation of VRFs and adaptive trapdoor permutations (ATDPs) in a black-box manner. This result sheds light on the nature of VRFs and is interesting for at least three reasons: — First, the separation result of Brakerski et al. gives the impression that VRFs belong to the "public-key world", and thus their relationship with other public-key primitives is interesting. Our result, however, shows that VRFs are strictly stronger and cannot be constructed (in a black-box way) from primitives such as public-key encryption (even CCA-secure), oblivious transfer, and key agreement. — Second, the notion of VRFs is closely related to weak verifiable random functions and verifiable pseudorandom generators which are both implied by TDPs. Dwork and Naor (FOCS 2000) asked whether there are transformations between the verifiable primitives similar to the case of "regular" PRFs and PRGs. Here, we give a negative answer to this problem showing that the case of verifiable random functions is essentially different. — Finally, our result also shows that unique signatures cannot be instantiated from ATDPs. While it is well known that standard signature schemes are equivalent to OWFs, we essentially show that the uniqueness property is what changes the relations between primitives.

Book Chapter
19 Mar 2012
TL;DR: A new compression function is constructed from two parallel calls to an ideal primitive from 2n to n bits, using the Szemerédi-Trotter theorem over finite fields to bound the number of full compression function evaluations an adversary can make in terms of the number of queries to the underlying primitives.
Abstract: We present a new construction of a compression function H: {0,1}^{3n} → {0,1}^{2n} that uses two parallel calls to an ideal primitive (an ideal blockcipher or a public random function) from 2n to n bits. This is similar to the well-known MDC-2 or the recently proposed MJH by Lee and Stam (CT-RSA'11). However, unlike these constructions, we show already in the compression function that an adversary limited (asymptotically in n) to O(2^{2n(1-δ)/3}) queries (for any δ > 0) has disappearing advantage in finding collisions. A key component of our construction is the use of the Szemerédi-Trotter theorem over finite fields to bound the number of full compression function evaluations an adversary can make, in terms of the number of queries to the underlying primitives. Moreover, for the security proof we rely on a new abstraction that refines and strengthens existing techniques. We believe that this framework elucidates existing proofs and we consider it of independent interest.

Book Chapter
19 Mar 2012
TL;DR: An affirmative answer to the above question is given, presenting a direct construction of adaptive PRFs from non-adaptive ones: a composition of the non-adaptive PRF with an appropriate pairwise independent hash function.
Abstract: Unlike the standard notion of pseudorandom functions (PRF), a non-adaptive PRF is only required to be indistinguishable from random in the eyes of a non-adaptive distinguisher (i.e., one that prepares its oracle calls in advance). A recent line of research has studied the possibility of a direct construction of adaptive PRFs from non-adaptive ones, where direct means that the constructed adaptive PRF uses only a few (ideally, a constant number of) calls to the underlying non-adaptive PRF. Unfortunately, this study has only yielded negative results, showing that "natural" such constructions are unlikely to exist (e.g., Myers [EUROCRYPT '04], Pietrzak [CRYPTO '05, EUROCRYPT '06]). We give an affirmative answer to the above question, presenting a direct construction of adaptive PRFs from non-adaptive ones. Our construction is extremely simple: a composition of the non-adaptive PRF with an appropriate pairwise independent hash function.
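
The construction itself is essentially a one-liner. Here is a hedged sketch in which HMAC-SHA256 stands in for the underlying non-adaptive PRF and h(x) = ax + b mod P is the pairwise-independent hash; both stand-ins and the parameters are illustrative assumptions, not the paper's formal objects.

```python
# Compose a (presumed) non-adaptive PRF with a pairwise-independent hash of the input.
import hmac, hashlib, secrets

P = 2**127 - 1                                     # prime modulus for the hash family (toy choice)

def sample_hash():
    # h(x) = a*x + b mod P with a, b uniform is a pairwise-independent family.
    return secrets.randbelow(P), secrets.randbelow(P)

def adaptive_prf(key: bytes, h, x: int) -> bytes:
    a, b = h
    hashed = (a * x + b) % P                       # pairwise-independent preprocessing of the input
    return hmac.new(key, hashed.to_bytes(16, "big"), hashlib.sha256).digest()

key, h = secrets.token_bytes(32), sample_hash()
tag = adaptive_prf(key, h, 42)
```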

Book Chapter
19 Mar 2012
TL;DR: In this paper, it was shown that under the assumption that subexponentially hard one-way functions exist, rSƵK = SǫK, and that RRSR ⊆ rSǵK, the proof system can be constructed in O(log κ) rounds, where κ is the security parameter and all simulators are black-box.
Abstract: Two central notions of Zero Knowledge that provide strong, yet seemingly incomparable security guarantees against malicious verifiers are those of Statistical Zero Knowledge and Resettable Zero Knowledge. The current state of the art includes several feasibility and impossibility results regarding these two notions separately. However, the question of achieving Resettable Statistical Zero Knowledge (i.e., Resettable Zero Knowledge and Statistical Zero Knowledge simultaneously) for non-trivial languages remained open. In this paper, we show: — Resettable Statistical Zero Knowledge with unbounded prover: under the assumption that sub-exponentially hard one-way functions exist, rSZK = SZK. In other words, every language that admits a Statistical Zero-Knowledge (SZK) proof system also admits a Resettable Statistical Zero-Knowledge (rSZK) proof system. (Further, the result can be re-stated unconditionally provided there exists a sub-exponentially hard language in SZK). Moreover, under the assumption that (standard) one-way functions exist, all languages L such that the complement of L is random self-reducible, admit a rSZK; in other words: co-RSR ⊆ rSZK. — Resettable Statistical Zero Knowledge with efficient prover: efficient-prover Resettable Statistical Zero-Knowledge proof systems exist for all languages that admit hash proof systems (e.g., QNR, QR, DDH, DCR). Furthermore, for these languages we construct a two-round resettable statistical witness-indistinguishable argument system. The round complexity of our proof systems is O(log κ), where κ is the security parameter, and all our simulators are black-box.

Book Chapter
19 Mar 2012
TL;DR: This work constructs weakly secure signatures and one-way functions, for which standard hardness amplification results are known to hold, but for which hardness does not amplify beyond just negligible.
Abstract: If we have a problem that is mildly hard, can we create a problem that is significantly harder? A natural approach to hardness amplification is the "direct product"; instead of asking an attacker to solve a single instance of a problem, we ask the attacker to solve several independently generated ones. Interestingly, proving that the direct product amplifies hardness is often highly non-trivial, and in some cases may be false. For example, it is known that the direct product (i.e. "parallel repetition") of general interactive games may not amplify hardness at all. On the other hand, positive results show that the direct product does amplify hardness for many basic primitives such as one-way functions, weakly-verifiable puzzles, and signatures. Even when positive direct product theorems are shown to hold for some primitive, the parameters are surprisingly weaker than what we may have expected. For example, if we start with a weak one-way function that no poly-time attacker can break with probability > 1/2, then the direct product provably amplifies hardness to some negligible probability. Naturally, we would expect that we can amplify hardness exponentially, all the way to 2^{-n} probability, or at least to some fixed/known negligible such as n^{-log n} in the security parameter n, just by taking sufficiently many instances of the weak primitive. Although it is known that such parameters cannot be proven via black-box reductions, they may seem like reasonable conjectures, and, to the best of our knowledge, are widely believed to hold. In fact, a conjecture along these lines was introduced in a survey of Goldreich, Nisan and Wigderson (ECCC '95). In this work, we show that such conjectures are false by providing simple but surprising counterexamples. In particular, we construct weakly secure signatures and one-way functions, for which standard hardness amplification results are known to hold, but for which hardness does not amplify beyond just negligible. That is, for any negligible function ε(n), we instantiate these primitives so that the direct product can always be broken with probability ε(n), no matter how many copies we take.
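
For readers new to the terminology, here is a minimal sketch of the k-wise direct product of a (weakly) one-way function, the amplification construction whose limits this paper establishes; the inner function is an arbitrary placeholder.

```python
# k-wise direct product: f^k(x_1, ..., x_k) = (f(x_1), ..., f(x_k)); inverting it
# requires inverting every coordinate.
import hashlib, os

def weak_owf(x: bytes) -> bytes:                   # placeholder for a weakly one-way function
    return hashlib.sha256(x).digest()

def direct_product(xs):
    return tuple(weak_owf(x) for x in xs)

xs = tuple(os.urandom(16) for _ in range(8))       # k = 8 independent instances
ys = direct_product(xs)
```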