
Showing papers in "IACR Cryptology ePrint Archive in 1997"


Posted Content
N. Asokan, Victor Shoup, Michael Waidner
TL;DR: In this paper, the authors present a protocol that allows two players to exchange digital signatures over the Internet in a fair way, so that either each player gets the other's signature, or neither player does.
Abstract: We present a new protocol that allows two players to exchange digital signatures over the Internet in a fair way, so that either each player gets the other's signature, or neither player does. The obvious application is where the signatures represent items of value, for example, an electronic check or airline ticket. The protocol can also be adapted to exchange encrypted data. It relies on a trusted third party, but is "optimistic," in that the third party is only needed in cases where one player crashes or attempts to cheat. A key feature of our protocol is that a player can always force a timely and fair termination, without the cooperation of the other player, even in a completely asynchronous network. A specialization of our protocol can be used for contract signing; this specialization is not only more efficient, but also has the important property that the third party can be held accountable for its actions: if it ever cheats, this can be detected and proven.
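To illustrate the "optimistic" shape described above (normal runs never involve the third party; a crash or cheat triggers a resolve step), here is a toy Python sketch. Everything in it, the hash-based "signatures" included, is a hypothetical stand-in for illustration, not the Asokan-Shoup-Waidner protocol itself.

```python
# Toy sketch of the *optimistic* fair-exchange pattern described above.
# NOT the actual protocol: real signatures are replaced by a hash-based
# stand-in, and all names here are hypothetical.
import hashlib

def toy_sign(key: bytes, msg: bytes) -> bytes:
    """Stand-in for a real digital signature (illustration only)."""
    return hashlib.sha256(key + msg).digest()

class TrustedThirdParty:
    """Contacted only if the normal exchange fails ("optimistic")."""
    def resolve(self, evidence: dict) -> str:
        # In the real protocol the TTP checks the evidence and either
        # completes the exchange or issues an abort token, so the honest
        # party always terminates fairly and in a timely way.
        return "completed" if evidence["first_message_valid"] else "aborted"

def optimistic_exchange(alice_key, bob_key, contract, bob_responds=True):
    # 1. Alice commits to the exchange.
    m1 = toy_sign(alice_key, b"promise:" + contract)
    # 2. Bob commits in return.
    m2 = toy_sign(bob_key, b"promise:" + contract) if bob_responds else None
    if m2 is None:
        # Bob crashed or is cheating: Alice goes to the TTP with her
        # evidence and gets a definite outcome without Bob's cooperation.
        return TrustedThirdParty().resolve({"first_message_valid": True})
    # 3./4. Both reveal their actual signatures; neither can stop halfway
    #       without giving the other grounds to resolve via the TTP.
    return toy_sign(alice_key, contract), toy_sign(bob_key, contract)

print(optimistic_exchange(b"ka", b"kb", b"sell ticket #42"))
print(optimistic_exchange(b"ka", b"kb", b"sell ticket #42", bob_responds=False))
```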

537 citations


Posted Content
TL;DR: This paper shows how to transform PIR schemes into SPIR schemes (with information-theoretic privacy), paying a constant factor in communication complexity, and introduces a new cryptographic primitive, called conditional disclosure of secrets, which the authors believe may be a useful building block for the design of other cryptographic protocols.
Abstract: Private information retrieval (PIR) schemes allow a user to retrieve the i-th bit of an n-bit data string x, replicated in k ≥ 2 databases (in the information-theoretic setting) or in k ≥ 1 databases (in the computational setting), while keeping the value of i private. The main cost measure for such a scheme is its communication complexity. In this paper we introduce a model of symmetrically-private information retrieval (SPIR), where the privacy of the data, as well as the privacy of the user, is guaranteed. That is, in every invocation of a SPIR protocol, the user learns only a single physical bit of x and no other information about the data. Previously known PIR schemes severely fail to meet this goal. We show how to transform PIR schemes into SPIR schemes (with information-theoretic privacy), paying a constant factor in communication complexity. To this end, we introduce and utilize a new cryptographic primitive, called conditional disclosure of secrets, which we believe may be a useful building block for the design of other cryptographic protocols. In particular, we get a k-database SPIR scheme of complexity O(n^(1/(2k-1))) for every constant k ≥ 2 and an O(log n)-database SPIR scheme of complexity O(log^2 n · log log n). All our schemes require only a single round of interaction, and are resilient to any dishonest behavior of the user. These results also yield the first implementation of a distributed version of (n choose 1)-OT (1-out-of-n oblivious transfer) with information-theoretic security and sublinear communication complexity.
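To make the underlying PIR model concrete, here is the classic two-database scheme with information-theoretic user privacy (plain PIR with linear communication, not SPIR, and not one of the paper's sublinear schemes): each database alone sees a uniformly random index set, yet the two answers XOR to the desired bit.

```python
# Classic 2-database information-theoretic PIR (plain PIR, not SPIR).
# The user hides i by sending each database a random-looking index set;
# either query alone is a uniformly random subset of {0..n-1}.
import secrets

def query(n: int, i: int):
    s1 = {j for j in range(n) if secrets.randbits(1)}   # uniform subset
    s2 = s1 ^ {i}                                       # differs only at i
    return s1, s2

def answer(db, s):
    bit = 0
    for j in s:
        bit ^= db[j]                                    # XOR of selected bits
    return bit

db = [1, 0, 1, 1, 0, 0, 1, 0]   # data string x, replicated in both databases
i = 5
s1, s2 = query(len(db), i)
# The XOR of the two answers is exactly x_i, since the sets differ only at i.
assert answer(db, s1) ^ answer(db, s2) == db[i]
```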

418 citations


Posted Content
TL;DR: In this paper, the authors investigate the practical potential of the universal one-way hash functions (UOWHFs) of Naor and Yung and propose the "XOR tree" as their main construction.
Abstract: Recent attacks on the cryptographic hash functions MD4 and MD5 make it clear that (strong) collision-resistance is a hard-to-achieve goal. We look towards a weaker notion, the universal one-way hash functions (UOWHFs) of Naor and Yung, and investigate their practical potential. The goal is to build UOWHFs not based on number theoretic assumptions, but from the primitives underlying current cryptographic hash functions like MD5 and SHA-1. Pursuing this goal leads us to new questions. The main one is how to extend a compression function to a full-fledged hash function in this new setting. We show that the classic Merkle-Damgård method used in the standard setting fails for these weaker kinds of hash functions, and we present some new methods that work. Our main construction is the “XOR tree.” We also consider the problem of input length-variability and present a general solution.
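For reference, the classic Merkle-Damgård method the abstract refers to iterates a compression function over message blocks after length-strengthening. A minimal sketch follows, with a toy SHA-256-based compression function; the paper's point is that this plain iteration fails to preserve the UOWHF property, which is what motivates the XOR-tree construction.

```python
# Minimal Merkle-Damgard iteration (the classic method the abstract says
# fails to preserve the UOWHF property). The compression function is a
# toy stand-in built from SHA-256, purely for illustration.
import hashlib

BLOCK = 32              # compression-function block size in bytes (toy choice)
IV = b"\x00" * 32       # fixed initial chaining value

def compress(chain: bytes, block: bytes) -> bytes:
    """Toy compression function f: chain x block -> chain."""
    return hashlib.sha256(chain + block).digest()

def md_hash(msg: bytes) -> bytes:
    # Merkle-Damgard strengthening: append the message length, then pad
    # with zeros to a whole number of blocks.
    msg += len(msg).to_bytes(8, "big")
    msg += b"\x00" * (-len(msg) % BLOCK)
    h = IV
    for off in range(0, len(msg), BLOCK):
        h = compress(h, msg[off:off + BLOCK])   # iterate over the blocks
    return h

print(md_hash(b"hello").hex())
```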

190 citations


Posted Content
TL;DR: In this article, the authors present an NP-arguments that achieve negligible error probability and computational zero-knowledge in four rounds of interaction, assuming only the existence of a one-way function.
Abstract: We fill a gap in the theory of zero-knowledge protocols by presenting NP-arguments that achieve negligible error probability and computational zero-knowledge in four rounds of interaction, assuming only the existence of a one-way function. This result is optimal in the sense that four rounds and a one-way function are each individually necassary to achieve a negligible error zero-knowledge argument for NP.

50 citations


Posted Content
TL;DR: Oracle hashing, as introduced in this paper, is a hash function that, like a random oracle, hides all partial information on its input, but is probabilistic: different applications to the same input result in different hash values, while the ability to verify whether a given hash value was generated from a given input is retained.
Abstract: The random oracle model is a very convenient setting for designing cryptographic protocols. In this idealised model all parties have access to a common, public random function, called a random oracle. Protocols in this model are often very simple and efficient; also the analysis is often clearer. However, we do not have a general mechanism for transforming protocols that are secure in the random oracle model into protocols that are secure in real life. In fact, we do not even know how to meaningfully specify the properties required from such a mechanism. Instead, it is a common practice to simply replace - often without mathematical justification - the random oracle with a 'cryptographic hash function' (e.g., MD5 or SHA). Consequently, the resulting protocols have no meaningful proofs of security. We propose a research program aimed at rectifying this situation by means of identifying, and subsequently realizing, the useful properties of random oracles. As a first step, we introduce a new primitive that realises a specific aspect of random oracles. This primitive, called oracle hashing, is a hash function that, like random oracles, 'hides all partial information on its input'. A salient property of oracle hashing is that it is probabilistic: different applications to the same input result in different hash values. Still, we maintain the ability to verify whether a given hash value was generated from a given input. We describe constructions of oracle hashing, as well as applications where oracle hashing successfully replaces random oracles.
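The following toy sketch illustrates only the interface of oracle hashing as described: hashing is probabilistic (fresh randomness each call), yet any hash value can be verified against a candidate input. This naive salted construction is not claimed to achieve the "hides all partial information" guarantee; the paper's actual constructions are different.

```python
# Toy illustration of the *interface* of oracle hashing: probabilistic
# hashing plus verification. NOTE: this naive salted construction does
# NOT achieve the paper's hiding guarantee; it only shows the syntax.
import hashlib, secrets

def ohash(x: bytes) -> tuple[bytes, bytes]:
    r = secrets.token_bytes(16)                    # fresh randomness per call
    return r, hashlib.sha256(r + x).digest()       # (randomness, digest)

def overify(x: bytes, h: tuple[bytes, bytes]) -> bool:
    r, digest = h
    return hashlib.sha256(r + x).digest() == digest

h1 = ohash(b"secret input")
h2 = ohash(b"secret input")
assert h1 != h2                      # same input, different hash values...
assert overify(b"secret input", h1)  # ...yet both remain verifiable
assert overify(b"secret input", h2)
```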

21 citations


Posted Content
TL;DR: In this paper, the authors present a new paradigm for the design of collision-free hash functions and derive several specific functions from it, all of which use a standard hash function, assumed ideal, and some algebraic operations.
Abstract: We present a simple, new paradigm for the design of collision-free hash functions. Any function emanating from this paradigm is incremental. (This means that if a message x which I have previously hashed is modified to x′, then rather than having to re-compute the hash of x′ from scratch, I can quickly "update" the old hash value to the new one, in time proportional to the amount of modification made in x to get x′.) Also, any function emanating from this paradigm is parallelizable, which is useful for hardware implementation. We derive several specific functions from our paradigm. All use a standard hash function, assumed ideal, and some algebraic operations. The first function, MuHASH, uses one modular multiplication per block of the message, making it reasonably efficient, and significantly faster than previous incremental hash functions. Its security is proven, based on the hardness of the discrete logarithm problem. A second function, AdHASH, is even faster, using additions instead of multiplications, with security proven given either that approximation of the length of shortest lattice vectors is hard or that the weighted subset sum problem is hard. A third function, LtHASH, is a practical variant of recent lattice-based functions, with security proven based, again, on the hardness of shortest lattice vector approximation.
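A minimal sketch of the multiplicative idea behind MuHASH as described: hash each (index, block) pair with a standard hash function and combine the results by modular multiplication, which is exactly what makes the result incremental and parallelizable. The modulus and block encoding below are illustrative assumptions, not the paper's parameters.

```python
# Sketch of the MuHASH idea: combine per-block hashes by modular
# multiplication. The modulus and encoding here are toy choices; a real
# instantiation works in a suitably chosen large prime-order group.
import hashlib

P = 2**127 - 1   # toy prime modulus, for illustration only

def h(i: int, block: bytes) -> int:
    """Standard hash (assumed ideal) applied to the indexed block."""
    d = hashlib.sha256(i.to_bytes(8, "big") + block).digest()
    return int.from_bytes(d, "big") % P

def muhash(blocks: list[bytes]) -> int:
    acc = 1
    for i, b in enumerate(blocks):
        acc = (acc * h(i, b)) % P          # one modular multiplication/block
    return acc

def update(old_hash: int, i: int, old_block: bytes, new_block: bytes) -> int:
    """Incrementality: replace block i without rehashing the message."""
    inv = pow(h(i, old_block), -1, P)      # divide out the old block's hash
    return (old_hash * inv % P) * h(i, new_block) % P  # multiply in the new

blocks = [b"block0", b"block1", b"block2"]
H = muhash(blocks)
blocks[1] = b"BLOCK1!"
# Updating is proportional to the modification, not the message length.
assert update(H, 1, b"block1", b"BLOCK1!") == muhash(blocks)
```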

18 citations


Posted Content
TL;DR: In this paper, the authors formalize two notions of what it means for a collection of functions to be negligible and prove them equivalent, so that any cryptographic primitive can be said to have a specific associated security level; they also reconcile different definitions of negligible-error arguments and computational proofs of knowledge that have appeared in the literature.
Abstract: In theoretical cryptography, one formalizes the notion of an adversary’s success probability being “too small to matter” by asking that it be a negligible function of the security parameter. We argue that the issue that really arises is what it might mean for a collection of functions to be “negligible.” We consider (and define) two such notions, and prove them equivalent. Roughly, this enables us to say that any cryptographic primitive has a specific associated “security level.” In particular we say this for any one-way function. We also reconcile different definitions of negligible error arguments and computational proofs of knowledge that have appeared in the literature. Although the motivation is cryptographic, the main result is purely about negligible functions.
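For concreteness, here is the standard single-function notion that the paper extends to collections of functions:

```latex
% Standard definition of a negligible function (the single-function
% notion that the paper generalizes to collections of functions).
\[
  f:\mathbb{N}\to\mathbb{R}_{\ge 0}\ \text{is negligible}
  \iff
  \forall c > 0\ \exists N\ \forall n > N:\; f(n) < n^{-c}.
\]
```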

16 citations


Posted Content
TL;DR: In this paper, the authors study CBC authentication for real-time applications in which the length of the message is not known until the message ends and in which, since the application is real-time, it is not possible to defer processing the authentication until after the message ends.
Abstract: The Cipher Block Chaining (CBC) Message Authentication Code (MAC) is an authentication method which is widely used in practice. It is well known that the use of the CBC MAC for variable-length messages is not secure, and a few rules of thumb for the correct use of the CBC MAC are known by folklore. The first rigorous proof of the security of the CBC MAC, when used on fixed-length messages, was given only recently by Bellare et al. [3]. They also suggested variants of the CBC MAC that handle variable-length messages, but in these variants the length of the message has to be known in advance (i.e., before the message is processed). We study CBC authentication of real-time applications in which the length of the message is not known until the message ends and in which, since the application is real-time, it is not possible to defer processing the authentication until after the message ends. We first consider a variant of the CBC MAC, which we call the encrypted CBC MAC (EMAC), which handles messages of variable, unknown lengths. Computing EMAC on a message is virtually as simple and as efficient as computing the standard CBC MAC on the message. We provide a rigorous proof that its security is implied by the security of the underlying block cipher. Next, we argue that the basic CBC MAC is secure when applied to a prefix-free message space. A message space can be made prefix-free by also authenticating the (usually hidden) last character which marks the end of the message.
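A sketch of the encrypted-CBC-MAC idea as described: run the standard CBC MAC under one key, then encrypt the final value under a second key. The block cipher below is a toy SHA-256-based stand-in so the example is self-contained, not a secure implementation; the padding byte that marks the message end echoes the prefix-free remark above.

```python
# Sketch of the encrypted CBC MAC (EMAC) idea: CBC-MAC the message
# under k1, then encrypt the result under a second key k2. The "block
# cipher" is a toy SHA-256-based stand-in, NOT a real cipher.
import hashlib

BLOCK = 16

def toy_cipher(key: bytes, block: bytes) -> bytes:
    """Toy stand-in for a block cipher E_k (illustration only)."""
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_mac(k1: bytes, msg: bytes) -> bytes:
    # Append an end-of-message marker byte, then zero-pad: this makes the
    # message space prefix-free, as the abstract's last remark suggests.
    msg += b"\x80" + b"\x00" * (-(len(msg) + 1) % BLOCK)
    tag = b"\x00" * BLOCK
    for off in range(0, len(msg), BLOCK):
        tag = toy_cipher(k1, xor(tag, msg[off:off + BLOCK]))  # CBC chaining
    return tag

def emac(k1: bytes, k2: bytes, msg: bytes) -> bytes:
    # The extra encryption under k2 is what handles messages whose length
    # is not known in advance.
    return toy_cipher(k2, cbc_mac(k1, msg))

print(emac(b"key-one", b"key-two", b"streamed message of unknown length").hex())
```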

11 citations


Posted Content
TL;DR: This work introduces a probabilistic coding scheme which, in addition to the standard coding-theoretic requirements, has the feature that any constant fraction of the bits in the (randomized) codeword yields no information about the message being encoded.

4 citations


Posted Content
TL;DR: In this paper, the authors describe an implementation of Schnorr's lattice-based factoring approach using stronger lattice basis reduction techniques, incorporating ideas of [SH95] and [R97], which completely factors 60-bit integers in less than 3 hours.
Abstract: We address the problem of factoring a large composite number using lattice reduction algorithms. Schnorr [Sc93] has shown that, under reasonable number-theoretic assumptions, this problem can be reduced to a simultaneous Diophantine approximation problem. The latter, in turn, can be solved by finding sufficiently many ℓ1-short vectors in a suitably defined lattice. Using lattice basis reduction algorithms, Schnorr and Euchner applied the reduction technique of [Sc93] to 40-bit integers. Their implementation needed several hours to compute a 5% fraction of the solution, i.e., 6 out of the 125 congruences necessary to factorize the composite. In this report we describe a more efficient implementation using stronger lattice basis reduction techniques, incorporating ideas of [SH95] and [R97]. For 60-bit integers our algorithm yields a complete factorization in less than 3 hours.
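For context, here is the standard, textbook way such a collection of smooth congruences yields a factorization once enough of them are found (the lattice machinery above is what finds the relations; the combining step below is not specific to this paper):

```latex
% Each short vector yields a congruence between smooth numbers mod N:
\[
  u_j \equiv v_j \pmod{N}, \qquad
  u_j = \prod_i p_i^{e_{ij}}, \quad v_j = \prod_i p_i^{f_{ij}} .
\]
% Multiplying a suitable subset of relations (found by linear algebra
% over GF(2)) makes every prime exponent even, giving a congruence of
% squares, from which a nontrivial factor follows with probability >= 1/2:
\[
  x^2 \equiv y^2 \pmod{N}, \qquad \gcd(x - y,\, N) \notin \{1, N\}.
\]
```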

4 citations