
Showing papers presented at the "Theory of Cryptography Conference" in 2004


Book ChapterDOI
19 Feb 2004
TL;DR: In this paper, the authors introduce and motivate indifferentiability, a generalization of the notion of indistinguishability of two systems, which leads to a generalized notion of reducibility of one system to another and applies in settings where a possible adversary has access to additional information about the internal state of the involved systems.
Abstract: The goals of this paper are two-fold. First we introduce and motivate a generalization of the fundamental concept of the indistinguishability of two systems, called indifferentiability. This immediately leads to a generalization of the related notion of reducibility of one system to another. In contrast to the conventional notion of indistinguishability, indifferentiability is applicable in settings where a possible adversary is assumed to have access to additional information about the internal state of the involved systems, for instance the public parameter selecting a member from a family of hash functions.

496 citations


Proceedings Article
01 Jan 2004
TL;DR: This paper considers an adversary that has full (and indeed adaptive) access to any leaked information; shows that some of the basic theorems and intuitions of traditional cryptography no longer hold in a physically observable setting; and constructs pseudorandom generators that are provably secure against all physical-observation attacks.
Abstract: Complexity-theoretic cryptography considers only abstract notions of computation, and hence cannot protect against attacks that exploit the information leakage (via electromagnetic fields, power consumption, etc.) inherent in the physical execution of any cryptographic algorithm. Such “physical observation attacks” bypass the impressive barrier of mathematical security erected so far, and successfully break mathematically impregnable systems. The great practicality and the inherent availability of physical attacks threaten the very relevance of complexity-theoretic security. To respond to the present crisis, we put forward physically observable cryptography: a powerful, comprehensive, and precise model for defining and delivering cryptographic security against an adversary that has access to information leaked from the physical execution of cryptographic algorithms. Our general model allows for a variety of adversaries. In this paper, however, we focus on the strongest possible adversary, so as to capture what is cryptographically possible in the worst possible, physically observable setting. In particular, we: consider an adversary that has full (and indeed adaptive) access to any leaked information; show that some of the basic theorems and intuitions of traditional cryptography no longer hold in a physically observable setting; and construct pseudorandom generators that are provably secure against all physical-observation attacks.

433 citations


Book ChapterDOI
19 Feb 2004
TL;DR: Starting with the seminal paper of Impagliazzo and Rudich [17], there has been a large body of work showing that various cryptographic primitives cannot be reduced to each other via “black-box” reductions.
Abstract: Starting with the seminal paper of Impagliazzo and Rudich [17], there has been a large body of work showing that various cryptographic primitives cannot be reduced to each other via “black-box” reductions. The common interpretation of these results is that there are inherent limitations in using a primitive as a black box, and that these impossibility results can be overcome only by explicitly using the code of the primitive in the construction.

332 citations


Book ChapterDOI
19 Feb 2004
TL;DR: This is the first paper providing a simple framework for translating security proofs from the logic setting to the standard computational setting for the case of powerful active adversaries that have total control of the communication network.
Abstract: We present a general method to prove security properties of cryptographic protocols against active adversaries, when the messages exchanged by the honest parties are arbitrary expressions built using encryption and concatenation operations. The method allows one to express security properties and carry out proofs using a simple logic-based language, where messages are represented by syntactic expressions, and does not require dealing with probability distributions or asymptotic notation explicitly. Still, we show that the method is sound, meaning that logic statements can be naturally interpreted in the computational setting in such a way that if a statement holds true for any abstract (symbolic) execution of the protocol in the presence of a Dolev-Yao adversary, then its computational interpretation is also correct in the standard computational model where the adversary is an arbitrary probabilistic polynomial time program. This is the first paper providing a simple framework for translating security proofs from the logic setting to the standard computational setting for the case of powerful active adversaries that have total control of the communication network.

255 citations
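
To make “messages are represented by syntactic expressions” concrete, here is a minimal sketch (our illustration, not the paper's formal system) of symbolic terms and the deduction closure available to a Dolev-Yao adversary, which can split pairs and decrypt only under keys it already knows:

```python
# Symbolic Dolev-Yao terms as tuples: ("atom", a), ("key", k),
# ("pair", t1, t2), ("enc", body, key). The adversary's knowledge is
# saturated under projection of pairs and decryption with known keys.

def close(knowledge):
    """Return the Dolev-Yao closure of a set of symbolic terms."""
    know = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(know):
            derived = []
            if t[0] == "pair":                  # split a pair
                derived += [t[1], t[2]]
            if t[0] == "enc" and t[2] in know:  # decrypt with a known key
                derived.append(t[1])
            for d in derived:
                if d not in know:
                    know.add(d)
                    changed = True
    return know

# The adversary observes {m}_k paired with k2, and separately learns k.
msg = ("pair", ("enc", ("atom", "m"), ("key", "k")), ("key", "k2"))
known = close({msg, ("key", "k")})
assert ("atom", "m") in known   # derivable: split the pair, then decrypt
```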


Book ChapterDOI
19 Feb 2004
TL;DR: Traditionally, secure cryptographic algorithms provide security against an adversary with only black-box access to the secret information of honest parties; this paper argues that such models are not always adequate, since security may completely break under feasible attacks that tamper with the secret key.
Abstract: Traditionally, secure cryptographic algorithms provide security against an adversary who has only black-box access to the secret information of honest parties. However, such models are not always adequate. In particular, the security of these algorithms may completely break under (feasible) attacks that tamper with the secret key.

254 citations


Book ChapterDOI
19 Feb 2004
TL;DR: It is shown that if a system consists of a polynomial number of arbitrary ideal subsystems such that each of them has a secure implementation in the sense of blackbox simulatability, then one can securely replace all ideal subsystems with their respective secure counterparts without destroying the blackbox simulatability relation.
Abstract: We consider compositional properties of reactive systems that are secure in a cryptographic sense. We follow the well-known simulatability approach of modern cryptography, i.e., the specification is an ideal system and a real system should in some sense simulate this ideal one. We show that if a system consists of a polynomial number of arbitrary ideal subsystems such that each of them has a secure implementation in the sense of blackbox simulatability, then one can securely replace all ideal subsystems with their respective secure counterparts without destroying the blackbox simulatability relation. We further prove our theorem for universal simulatability by showing that blackbox simulatability implies universal simulatability under reasonable assumptions. We show all our results with concrete security.

151 citations


Book ChapterDOI
Yehuda Lindell1
19 Feb 2004
TL;DR: It is proved that there exist many functionalities that cannot be securely computed in the setting of concurrent self composition; a communication complexity lower bound is also proved for protocols that securely compute a large class of functionalities in this setting.
Abstract: In the setting of concurrent self composition, a single protocol is executed many times concurrently by a single set of parties. In this paper, we prove that there exist many functionalities that cannot be securely computed in this setting. We also prove a communication complexity lower bound on protocols that securely compute a large class of functionalities in this setting. Specifically, we show that any protocol that computes a functionality from this class and remains secure for m concurrent executions, must have bandwidth of at least m bits. Our results hold for the plain model (where no trusted setup phase is assumed), and for the case that the parties may choose their inputs adaptively, based on previously obtained outputs. While proving our impossibility result, we also show that for many functionalities, security under concurrent self composition (where a single secure protocol is run many times) is actually equivalent to the seemingly more stringent requirement of security under concurrent general composition (where a secure protocol is run concurrently with other arbitrary protocols). This observation has significance beyond the impossibility results that are derived by it for concurrent self composition.

135 citations


Book ChapterDOI
19 Feb 2004
TL;DR: This work presents a perfect secret sharing scheme for threshold secret sharing in groups with hierarchical structure that uses Birkhoff interpolation, i.e., the construction of a polynomial according to an unstructured set of point and derivative values.
Abstract: We consider the problem of threshold secret sharing in groups with hierarchical structure. In such settings, the secret is shared among a group of participants that is partitioned into levels. The access structure is then determined by a sequence of threshold requirements: a subset of participants is authorized if it has at least k_0 members from the highest level, as well as at least k_1 > k_0 members from the two highest levels, and so forth. Such problems may occur in settings where the participants differ in their authority or level of confidence and the presence of higher level participants is imperative to allow the recovery of the common secret. Even though secret sharing in hierarchical groups has been studied extensively in the past, none of the existing solutions addresses the simple setting where, say, a bank transfer should be signed by three employees, at least one of whom must be a department manager. We present a perfect secret sharing scheme for this problem that, unlike most secret sharing schemes that are suitable for hierarchical structures, is ideal. As in Shamir’s scheme, the secret is represented as the free coefficient of some polynomial. The novelty of our scheme is the usage of polynomial derivatives in order to generate lesser shares for participants of lower levels. Consequently, our scheme uses Birkhoff interpolation, i.e., the construction of a polynomial according to an unstructured set of point and derivative values. A substantial part of our discussion is dedicated to the question of how to assign identities to the participants from the underlying finite field so that the resulting Birkhoff interpolation problem will be well posed. In the course of this discussion, we borrow some results from the theory of Birkhoff interpolation over ℝ and import them to the context of finite fields.

121 citations
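
A toy sketch of the mechanics described above (our illustration, with ad hoc toy parameters): the secret is the free coefficient of a polynomial, lower-level participants receive derivative values ("lesser shares"), and reconstruction solves a Birkhoff linear system. The identities below happen to make the system well posed; choosing identities so that this always holds is exactly the question the abstract highlights.

```python
# Hierarchical secret sharing via polynomial derivatives (toy sketch).
import random

P = 2**31 - 1          # prime modulus (toy choice)
DEG = 2                # polynomial a0 + a1*x + a2*x^2; a0 is the secret

def deriv_eval(coeffs, x, order):
    """Evaluate the order-th derivative of the polynomial at x, mod P."""
    total = 0
    for i, c in enumerate(coeffs):
        if i < order:
            continue
        fall = 1
        for j in range(order):                 # falling factorial i(i-1)...
            fall = fall * (i - j) % P
        total = (total + c * fall * pow(x, i - order, P)) % P
    return total

def share(secret, participants):
    """participants: list of (identity x, derivative order d)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(DEG)]
    return [(x, d, deriv_eval(coeffs, x, d)) for x, d in participants]

def reconstruct(shares):
    """Solve the Birkhoff linear system for the coefficients; return a0."""
    n = DEG + 1
    rows = []
    for x, d, v in shares[:n]:
        row = []
        for i in range(n):                     # coefficient of a_i in the
            if i < d:                          # d-th derivative at x
                row.append(0)
            else:
                fall = 1
                for j in range(d):
                    fall = fall * (i - j) % P
                row.append(fall * pow(x, i - d, P) % P)
        rows.append(row + [v])
    for col in range(n):                       # Gauss-Jordan elimination mod P
        piv = next(r for r in range(col, n) if rows[r][col])
        rows[col], rows[piv] = rows[piv], rows[col]
        inv = pow(rows[col][col], P - 2, P)
        rows[col] = [c * inv % P for c in rows[col]]
        for r in range(n):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[col])]
    return rows[0][n]                          # a0 = the secret

# One manager (order-0 share) and two employees (order-1 "lesser" shares).
shares = share(1234567, [(1, 0), (2, 1), (3, 1)])
assert reconstruct(shares) == 1234567
```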


Book ChapterDOI
19 Feb 2004
TL;DR: A constant round protocol for Oblivious Transfer in Maurer's bounded storage model that has only 5 messages and uses constructions of almost t-wise independent permutations, randomness extractors and averaging samplers from the theory of derandomization.
Abstract: We present a constant round protocol for Oblivious Transfer in Maurer's bounded storage model. In this model, a long random string R is initially transmitted and each of the parties interacts based on a small portion of R. Even though the portions stored by the honest parties are small, security is guaranteed against any malicious party that remembers almost all of the string R. Previous constructions for Oblivious Transfer in the bounded storage model required polynomially many rounds of interaction. Our protocol has only 5 messages. We also improve other parameters, such as the number of bits transferred and the probability of immaturely aborting the protocol due to failure. Our techniques utilize explicit constructions from the theory of derandomization. In particular, we use constructions of almost t-wise independent permutations, randomness extractors and averaging samplers.

116 citations
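
For intuition about the model (a sketch of the storage mechanic only, not the paper's OT protocol): the long random string R streams past once, each honest party keeps a small random portion, and security rests on even a much larger, but still bounded, adversary being unable to remember almost all of R.

```python
# The bounded storage mechanic in miniature: R is broadcast once, and a
# party with storage bound K keeps K randomly chosen positions of it.
# (Simulation only -- here R sits in a list; in the model no party can
# hold all of R.)
import random

N, K = 10**6, 1000                     # |R| and the honest storage bound
R = [random.getrandbits(1) for _ in range(N)]

def store(k):
    """An honest bounded-storage party keeps k random positions of R."""
    return {i: R[i] for i in random.sample(range(N), k)}

alice, bob = store(K), store(K)
common = set(alice) & set(bob)         # positions both parties kept
# Expected overlap is K*K/N (about 1 here); actual protocols amplify
# such overlaps using extractors, samplers, and interactive selection.
print(len(common), {i: alice[i] for i in common})
```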


Book ChapterDOI
19 Feb 2004
TL;DR: A mix-net is a cryptographic protocol executed by a set of mix-servers that provides anonymity for a group of senders.
Abstract: A mix-net is a cryptographic protocol executed by a set of mix-servers that provides anonymity for a group of senders. The main application is electronic voting.
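
A toy decryption mix-net in the Chaum tradition (structure only: textbook RSA without padding and deterministic encryption, wholly insecure as written): senders onion-encrypt their ballots, and each server strips one layer and shuffles, so the output order is unlinkable to the input order.

```python
# Toy Chaumian decryption mix-net. Tiny demo-only RSA keys; moduli are
# chosen in decreasing size so each layer's ciphertext fits the next.
import random

def keygen(p, q, e=65537):
    n, phi = p * q, (p - 1) * (q - 1)
    return (n, e, pow(e, -1, phi))               # public (n, e); secret d

keys = [keygen(10007, 10009), keygen(9973, 9967), keygen(9949, 9941)]

def wrap(m):
    """Sender: encrypt to the last server first, the first server last."""
    for n, e, _ in reversed(keys):
        m = pow(m, e, n)
    return m

def mix(batch, j):
    n, _, d = keys[j]
    out = [pow(c, d, n) for c in batch]          # strip layer j
    random.shuffle(out)                          # hide the correspondence
    return out

batch = [wrap(v) for v in [5, 17, 42]]           # e.g. encoded votes
for j in range(len(keys)):
    batch = mix(batch, j)
assert sorted(batch) == [5, 17, 42]              # plaintexts, order hidden
```

Real mix-nets use randomized encryption (identical votes above would produce identical onions) and, especially for voting, proofs that each server mixed correctly.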

Book ChapterDOI
19 Feb 2004
TL;DR: The negative result on the Random Oracle Methodology is extended to address signature schemes that sign only “short” messages and are “memoryless” (i.e., the only thing kept between different signature generations is the initial signing-key).
Abstract: In earlier work, we described a “pathological” example of a signature scheme that is secure in the Random Oracle Model, but for which no secure implementation exists. For that example, however, it was crucial that the scheme is able to sign “long messages” (i.e., messages whose length is not a-priori bounded). This left open the possibility that the Random Oracle Methodology is sound with respect to signature schemes that sign only “short” messages (i.e., messages of a-priori bounded length, smaller than the length of the keys in use), and are “memoryless” (i.e., the only thing kept between different signature generations is the initial signing-key). In this work, we extend our negative result to address such signature schemes. A key ingredient in our proof is a new type of interactive proof systems, which may be of independent interest.

Book ChapterDOI
19 Feb 2004
TL;DR: The use of random oracles to achieve universal composability of commitment protocols is motivated and two constructions are given which allow to turn a given non-interactive commitment scheme into a non-Interactive universally composable commitment scheme in the random oracle model.
Abstract: In the setting of universal composability [Can01], commitments cannot be implemented without additional assumptions such as that of a publicly available common reference string [CF01]. Here, as an alternative to commitments in the common reference string model, the use of random oracles to achieve universal composability of commitment protocols is motivated. Special emphasis is put on security in the situation when the additional “helper functionality” is replaced by a realizable primitive. This contribution gives two constructions which allow one to turn a given non-interactive commitment scheme into a non-interactive universally composable commitment scheme in the random oracle model. For both constructions the binding and the hiding properties remain valid when collision-free hash functions are used instead of random oracles. Moreover, the second construction in this case even preserves the property of perfect binding.
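
For orientation, a sketch of the kind of non-interactive commitment such constructions start from (the basic random-oracle-style primitive, our illustration rather than the paper's transformation): commit by hashing fresh randomness with the message, open by revealing both; modeled as a random oracle, the hash makes this hiding and binding.

```python
# Minimal hash-based non-interactive commitment (random-oracle-style
# sketch; not the paper's UC construction): c = H(r || m).
import hashlib, os

def commit(msg: bytes):
    r = os.urandom(32)                       # fresh committer randomness
    c = hashlib.sha256(r + msg).digest()     # the commitment string
    return c, (msg, r)                       # publish c, keep the opening

def verify(c, opening):
    msg, r = opening
    return hashlib.sha256(r + msg).digest() == c

c, opening = commit(b"bid: 100")
assert verify(c, opening)                    # opens correctly
```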

Book ChapterDOI
19 Feb 2004
TL;DR: In this article, an extended version of COT (ECOT) is introduced, which additionally allows proofs of relations among committed bits, and an efficient protocol that securely realizes an ECOT functionality in the universal-composability (UC) framework is presented.
Abstract: Committed Oblivious Transfer (COT) is a useful cryptographic primitive that combines the functionalities of bit commitment and oblivious transfer. In this paper, we introduce an extended version of COT (ECOT) which additionally allows proofs of relations among committed bits, and we construct an efficient protocol that securely realizes an ECOT functionality in the universal-composability (UC) framework. Our construction is more efficient than previous (non-UC) constructions of COT, involving only a constant number of exponentiations and communication rounds. Using the ECOT functionality as a building block, we construct efficient UC protocols for general two-party and multi-party functionalities, each gate requiring a constant number of ECOT’s.

Book ChapterDOI
19 Feb 2004
TL;DR: It is shown that tag-based non-malleability (tnm) suffices to construct an efficient universally-composable secure message transmission (SMT) protocol, for which the only previ- ous solution was based on a public key encryption functionality whose security is equivalent to non- malleability.
Abstract: We explore whether non-malleability is necessary for the applications typically used to motivate it, and propose two alternatives. The first we call weak non-malleability (wnm) and show that it suffices to achieve secure contract bidding (the application for which non-malleability was initially introduced), despite being strictly weaker than non-malleability. The second we call tag-based non-malleability (tnm), and show that it suffices to construct an efficient universally-composable secure message transmission (SMT) protocol, for which the only previous solution was based on a public key encryption functionality whose security is equivalent to non-malleability. We also demonstrate constructions for wnm and tnm encryption schemes that are simpler than known constructions of non-malleable encryption schemes.

Book ChapterDOI
19 Feb 2004
TL;DR: A new technique for proving the adaptive indistinguishability of two systems, each composed of some component systems, is presented, using only the fact that corresponding component systems are non-adaptively indistinguishable.
Abstract: A new technique for proving the adaptive indistinguishability of two systems, each composed of some component systems, is presented, using only the fact that corresponding component systems are non-adaptively indistinguishable. The main tool is the definition of a special monotone condition for a random system F, relative to another random system G, whose probability of occurring for a given distinguisher D is closely related to the distinguishing advantage ε of D for F and G; namely, it is lower and upper bounded by ε and ε(1 + ln(1/ε)), respectively.

Book ChapterDOI
19 Feb 2004
TL;DR: This work considers the problem of constructing a constant-round zero-knowledge proof system for all languages in NP and proposes an alternative solution that admits a considerably simpler analysis.
Abstract: We consider the problem of constructing a constant-round zero-knowledge proof system for all languages in NP. This problem has been previously addressed by Goldreich and Kahan (Jour. of Cryptology, 1996). Following recent works on concurrent zero-knowledge, we propose an alternative solution that admits a considerably simpler analysis.

Book ChapterDOI
19 Feb 2004
TL;DR: It is proved in this paper that any OT protocol that can be constructed based on a PassiveUNC and is secure against a passive adversary can be transformed using a generic “compiler” into an OT protocol based on a UNC which is secure against an active adversary.
Abstract: In a paper from EuroCrypt’99, Damgard, Kilian and Salvail show various positive and negative results on constructing Bit Commitment (BC) and Oblivious Transfer (OT) from Unfair Noisy Channels (UNC), i.e., binary symmetric channels where the error rate is only known to be in a certain interval [γ..δ] and can be chosen adversarially. They also introduce a related primitive called PassiveUNC. We prove in this paper that any OT protocol that can be constructed based on a PassiveUNC and is secure against a passive adversary can be transformed using a generic “compiler” into an OT protocol based on a UNC which is secure against an active adversary. Apart from making positive results easier to prove in general, this also allows correcting a problem in the EuroCrypt’99 paper: there, a positive result was claimed on constructing from UNC an OT that is secure against active cheating. We point out that the proof sketch given for this was incomplete, and we show that a correct proof of a much stronger result follows from our general compilation result and a new technique for transforming between weaker versions of OT with different parameters.

Book ChapterDOI
19 Feb 2004
TL;DR: This work presents an implementation of oblivious transfer based on a collection of dense trapdoor permutations, i.e., trapdoor permutations with the property that the permutation domains are polynomially dense in the set of all strings of a particular length.
Abstract: Until recently, the existence of collections of trapdoor permutations (TDP) was believed (and claimed) to imply almost all of the major cryptographic primitives, including public-key encryption (PKE), oblivious transfer (OT), and non-interactive zero-knowledge (NIZK). It was recently realized, however, that the commonly accepted general definition of TDP needs to be strengthened slightly in order to make the security proofs of TDP-based OT go through. We present an implementation of oblivious transfer based on a collection of dense trapdoor permutations. The latter is a collection of trapdoor permutations with the property that the permutation domains are polynomially dense in the set of all strings of a particular length. Previous TDP-based implementations of oblivious transfer assumed an enhancement of the hardness assumption (of the collection).
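
The TDP-based OT in question is usually presented in the Even-Goldreich-Lempel style, sketched below (toy RSA as the permutation, a hash in place of a hardcore predicate, semi-honest parties; illustrative, not secure as written). Note the receiver sampling one value directly from the domain without knowing a preimage: making that sampling possible from uniform random strings is precisely where domain denseness, or alternatively an enhanced hardness assumption, enters.

```python
# EGL-style 1-out-of-2 oblivious transfer from a trapdoor permutation.
import hashlib, random

p, q, e = 10007, 10009, 65537                  # toy RSA as the TDP
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))
f = lambda x: pow(x, e, n)                     # public permutation
f_inv = lambda y: pow(y, d, n)                 # sender's trapdoor
h = lambda x: hashlib.sha256(str(x).encode()).digest()[0] & 1  # 1-bit mask

def receiver_choose(b):
    x = random.randrange(1, n)
    y = [None, None]
    y[b] = f(x)                                # preimage known
    y[1 - b] = random.randrange(1, n)          # sampled with NO preimage
    return x, y

def sender_respond(y, m0, m1):                 # m0, m1 are bits
    return [m0 ^ h(f_inv(y[0])), m1 ^ h(f_inv(y[1]))]

b, (m0, m1) = 1, (0, 1)
x, y = receiver_choose(b)
z = sender_respond(y, m0, m1)
assert z[b] ^ h(x) == (m0, m1)[b]              # receiver learns m_b only
```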

Book ChapterDOI
19 Feb 2004
TL;DR: A cryptosystem that is RCCA secure has full CCA2 security except for the little detail that it may be possible to modify a ciphertext into another ciphertext containing the same plaintext.
Abstract: Recently Canetti, Krawczyk and Nielsen defined the notion of replayable adaptive chosen ciphertext attack (RCCA) secure encryption. Essentially a cryptosystem that is RCCA secure has full CCA2 security except for the little detail that it may be possible to modify a ciphertext into another ciphertext containing the same plaintext.
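
The “little detail” is concrete for rerandomizable schemes such as ElGamal: anyone, with no key at all, can map a ciphertext to a different-looking ciphertext of the same plaintext, which full CCA2 security forbids and RCCA deliberately tolerates. A toy sketch (illustrative parameters, not a secure instantiation):

```python
# ElGamal-style rerandomization: a fresh-looking ciphertext of the
# same plaintext, computed without any secret key.
import random

P = 2**127 - 1            # a Mersenne prime (toy group modulus)
G = 3                     # base element (illustrative choice)

sk = random.randrange(2, P - 1)
pk = pow(G, sk, P)

def enc(m):
    r = random.randrange(2, P - 1)
    return pow(G, r, P), m * pow(pk, r, P) % P

def rerandomize(ct):                    # no secret key needed
    a, b = ct
    s = random.randrange(2, P - 1)
    return a * pow(G, s, P) % P, b * pow(pk, s, P) % P

def dec(ct):
    a, b = ct
    return b * pow(a, P - 1 - sk, P) % P      # b / a^sk

m = 123456789
ct1 = enc(m)
ct2 = rerandomize(ct1)                  # a different ciphertext...
assert ct1 != ct2 and dec(ct2) == m     # ...of the same plaintext
```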

Book ChapterDOI
19 Feb 2004
TL;DR: The reduction proposed by Crepeau and Kilian was proved secure against any receiver by Yao, in the case where perfect commitments are used; but what happens to the security of the above reduction when computationally secure commitments are used instead of perfect ones?
Abstract: Quantum 2-party cryptography differs from its classical counterpart in at least one important way: given black-box access to a perfect commitment scheme there exists a secure 1-2 quantum oblivious transfer. This reduction, proposed by Crepeau and Kilian, was proved secure against any receiver by Yao, in the case where perfect commitments are used. However, quantum commitments would normally be based on computational assumptions. A natural question therefore arises: what happens to the security of the above reduction when computationally secure commitments are used instead of perfect ones?

Book ChapterDOI
19 Feb 2004
TL;DR: This paper proposes a modification of Okamoto’s definition of security which still captures security against chosen message attack, and yet enables the design of concrete and reasonably efficient designated confirmer signature schemes which can be proved secure without resorting to random oracle assumptions as previously done.
Abstract: Since designated confirmer signature schemes were introduced by Chaum and formalized by Okamoto, a number of attempts have been made to design designated confirmer signature schemes which are efficient and at the same time provably secure under standard cryptographic assumptions. Yet, there has been a consistent gap in security claims and analysis between all generic theoretical proposals and any concrete implementation proposal one can envision using in practice. In this paper we propose a modification of Okamoto’s definition of security which still captures security against chosen message attack, and yet enables the design of concrete and reasonably efficient designated confirmer signature schemes which can be proved secure without resorting to random oracle assumptions as previously done. In particular, we present simple transformations of the digital signature schemes of Cramer-Shoup, Goldwasser-Micali-Rivest and Gennaro-Halevi-Rabin into secure designated confirmer signature schemes. We prove security of the schemes obtained under the same security assumption made by the digital signature scheme transformed and an encryption scheme we use as a tool.

Book ChapterDOI
19 Feb 2004
TL;DR: It is shown that if Z_n^* is a pseudo-free abelian group, then Z_n^* also satisfies the Strong RSA Assumption; being pseudo-free may be the strongest natural cryptographic assumption one can make about a group such as Z_n^*.
Abstract: We explore the notion of a pseudo-free group, first introduced by Hohenberger [Hoh03], and provide an alternative stronger definition. We show that if Z_n^* is a pseudo-free abelian group (as we conjecture), then Z_n^* also satisfies the Strong RSA Assumption [FO97,CS00,BP97]. Being a “pseudo-free abelian group” may be the strongest natural cryptographic assumption one can make about a group such as Z_n^*. More generally, we show that a pseudo-free group satisfies several standard cryptographic assumptions, such as the difficulty of computing discrete logarithms.
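
For reference, the Strong RSA Assumption invoked above, in its standard textbook form (not specific to this paper): for N a product of two large primes and a random g in Z_N^*, no efficient adversary can output a pair (x, e) with

```latex
\[
  x^{e} \equiv g \pmod{N}, \qquad e > 1 .
\]
```

Unlike the ordinary RSA assumption, here the adversary may choose the exponent e itself; pseudo-freeness is strong enough to rule even this out.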

Book ChapterDOI
19 Feb 2004
TL;DR: It is shown that, under reasonable (although not standard) assumptions, there exists a one-message proof system for every language in NP that satisfies the following relaxed form of zero knowledge:.
Abstract: We investigate whether it is possible to obtain any meaningful type of zero-knowledge proofs using a one-message (i.e., non-interactive) proof system. We show that, under reasonable (although not standard) assumptions, there exists a one-message proof system for every language in NP that satisfies the following relaxed form of zero knowledge:

Book ChapterDOI
19 Feb 2004
TL;DR: This paper presents the first protocol for password-authenticated key exchange in the standard model (with no common reference string or set-up assumptions other than the shared password), which uses several heavy tools and has a complicated analysis.
Abstract: Goldreich and Lindell (CRYPTO ‘01) recently presented the first protocol for password-authenticated key exchange in the standard model (with no common reference string or set-up assumptions other than the shared password). However, their protocol uses several heavy tools and has a complicated analysis.

Book ChapterDOI
19 Feb 2004
TL;DR: A new model for non-interactive zero-knowledge is considered where security is not based on a common reference string, but where prover and verifier are assumed to possess appropriately correlated secret keys, with immediate applications to non-interactive verification of undeniable signatures and pseudorandom function values.
Abstract: We consider a new model for non-interactive zero-knowledge where security is not based on a common reference string, but where prover and verifier are assumed to possess appropriately correlated secret keys. We present efficient proofs for equality of discrete logarithms in this model with unconditional soundness and zero-knowledge. This has immediate applications to non-interactive verification of undeniable signatures and pseudorandom function values. Another application is the following: a set of l servers, of which less than l/2 are corrupt, hold shares of a secret integer s. A client C specifies g in some finite group G, and the servers want to allow the client to compute g^s non-interactively, i.e., by sending information to C only once. This has immediate applications in threshold cryptography. Using our proof system, the problem can be solved as efficiently as the fastest previous solutions that either required interaction or had to rely on the random oracle model for a proof of security. The price we pay is the need to establish the secret key material once and for all. We present an alternative solution to the problem that is also non-interactive and where clients need no secret keys. This comes at the expense of more communication and the assumption that less than l/3 of the servers are corrupt.
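
The server-to-client computation described above can be made concrete with the standard “Lagrange in the exponent” recombination, sketched below with toy parameters (the recombination step only; the paper's contribution is letting each server non-interactively prove its contribution correct, which is omitted here):

```python
# Servers hold Shamir shares s_i of s over F_q; each sends g^{s_i};
# the client combines them with Lagrange coefficients in the exponent.
# Toy group: the order-11 subgroup of Z_23^* generated by g = 2.
import random

P, q, g = 23, 11, 2                       # 2 has order 11 mod 23

def shamir(s, t, xs):
    coeffs = [s] + [random.randrange(q) for _ in range(t - 1)]
    return {x: sum(c * x**k for k, c in enumerate(coeffs)) % q for x in xs}

def lagrange_at_zero(xs):
    lams = {}
    for i in xs:
        num = den = 1
        for j in xs:
            if j != i:
                num = num * (-j) % q
                den = den * (i - j) % q
        lams[i] = num * pow(den, q - 2, q) % q
    return lams

s = 7
shares = shamir(s, 2, [1, 2, 3])          # threshold 2, three servers
msgs = {i: pow(g, sh, P) for i, sh in shares.items()}  # one msg per server

subset = [1, 3]                           # any 2 servers suffice
lams = lagrange_at_zero(subset)
gs = 1
for i in subset:
    gs = gs * pow(msgs[i], lams[i], P) % P
assert gs == pow(g, s, P)                 # client now holds g^s
```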

Book ChapterDOI
19 Feb 2004
TL;DR: This paper notes that all functions can be computed securely in the information-theoretic setting given access to a black box for some complete function such as AND, which gives rise to two types of functions: those that can be computed without a black box (“easy”) and those that cannot (“hard”).
Abstract: Secure computation is one of the most fundamental cryptographic tasks. It is known that all functions can be computed securely in the information theoretic setting, given access to a black box for some complete function such as AND. However, without such a black box, not all functions can be securely computed. This gives rise to two types of functions, those that can be computed without a black box (“easy”) and those that cannot (“hard”). However, no further distinction among the hard functions is made.
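
To see why a black box for a complete function such as AND suffices, here is a minimal GMW-style two-party sketch (our illustration, semi-honest setting): inputs are split into XOR shares, XOR gates cost nothing locally, and each AND gate consumes one call to an idealized AND box.

```python
# Two-party circuit evaluation on XOR-shared bits with an ideal AND box.
import random

def share(bit):
    a = random.getrandbits(1)
    return a, bit ^ a                       # XOR shares for party 0 and 1

def ideal_and_box(x0, x1, y0, y1):
    """The trusted black box: takes shares of x and y, returns fresh
    shares of x AND y, revealing nothing else to either party."""
    product = (x0 ^ x1) & (y0 ^ y1)
    r = random.getrandbits(1)
    return r, product ^ r

# Securely compute f(x, y) = (x AND y) XOR x on shared inputs.
x0, x1 = share(1)
y0, y1 = share(1)
z0, z1 = ideal_and_box(x0, x1, y0, y1)      # AND gate: needs the box
out0, out1 = z0 ^ x0, z1 ^ x1               # XOR gate: purely local
assert out0 ^ out1 == (1 & 1) ^ 1           # reconstructed output
```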

Book ChapterDOI
19 Feb 2004
TL;DR: The novelty of reversing the order of these steps, i.e. first encrypting and then compressing, is investigated, and it is shown that the theoretical compression gain is unchanged by performing encryption before compression.
Abstract: When it is desired to transmit redundant data over an insecure and bandwidth-constrained channel, it is customary to first compress the redundant data and then encrypt it for security reasons. In this paper, we investigate the novelty of reversing the order of these steps, i.e. first encrypting and then compressing. Although counter-intuitive, we show surprisingly that through the use of coding with side information principles, this reversal in order is indeed possible. In fact, for lossless compression, we show that the theoretical compression gain is unchanged by performing encryption before compression. We show that the cryptographic security of the reversed system is directly related to the strength of the key generator.
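
A toy instance of the reversal (our illustration of the coding-with-side-information principle, not the paper's actual codes): a redundant 7-bit block with a single set bit is one-time-padded; the compressor, who never sees the key, transmits only the 3-bit Hamming syndrome of the ciphertext; the decoder, who knows the key, uses it as side information to recover the plaintext exactly, achieving the same 7-to-3 compression the plaintext's redundancy would have allowed before encryption.

```python
# Encrypt first, compress after: syndrome compression of a one-time-
# padded block, with the key as decoder side information.
import random

# Parity-check matrix of the Hamming(7,4) code: column i is the binary
# representation of i+1, so weight-1 patterns have distinct syndromes.
H = [[(col >> row) & 1 for col in range(1, 8)] for row in range(3)]

def syndrome(bits):
    return tuple(sum(h * b for h, b in zip(row, bits)) % 2 for row in H)

pos = random.randrange(7)                       # redundant source: one set bit
m = [1 if i == pos else 0 for i in range(7)]
k = [random.getrandbits(1) for _ in range(7)]   # one-time-pad key
c = [a ^ b for a, b in zip(m, k)]               # encrypt first...

s = syndrome(c)                                 # ...compress after: 7 -> 3 bits

# Decoder: knows k, receives only s; by linearity s XOR H*k = H*m.
hm = tuple(a ^ b for a, b in zip(s, syndrome(k)))
val = sum(bit << row for row, bit in enumerate(hm))   # index of the set bit
m_hat = [1 if i + 1 == val else 0 for i in range(7)]
assert m_hat == m                               # plaintext fully recovered
```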

Book ChapterDOI
19 Feb 2004
TL;DR: This work re-casts the construction of 2-round zero-knowledge proof systems secure against provers that are resource-bounded during the interaction in the language of list-decoding.
Abstract: Dwork and Stockmeyer showed 2-round zero-knowledge proof systems secure against provers which are resource-bounded during the interaction [6]. The resources considered are running time and advice (the amount of precomputed information). We re-cast this construction in the language of list-decoding. This perspective leads to the following improvements: