
Showing papers by "Moni Naor" published in 1996


Proceedings ArticleDOI
01 Jul 1996
TL;DR: This work proposes a novel property of encryption protocols and shows that if an encryption protocol enjoying this property is used, instead of a standard encryption scheme, then known constructions become adaptively secure.
Abstract: A fundamental problem in designing secure multi-party protocols is how to deal with adaptive adversaries (i.e., adversaries that may choose the corrupted parties during the course of the computation), in a setting where the channels are insecure and secure communication is achieved by cryptographic primitives based on computational limitations of the adversary. It turns out that the power of an adaptive adversary is greatly affected by the amount of information gathered upon the corruption of a party. This amount of information models the extent to which uncorrupted parties are trusted to carry out instructions that cannot be externally verified, such as erasing records of past configurations. It has been shown that if the parties are trusted to erase such records, then adaptively secure computation can be carried out using known primitives. However, this total trust in parties may be unrealistic in many scenarios. An important question, open since 1986, is whether adaptively secure multi-party computation can be carried out in the "insecure channel" setting, even if no party is thoroughly trusted. Our main result is an affirmative resolution of this question for the case where even uncorrupted parties may deviate from the protocol by keeping records of all past configurations. We first propose a novel property of encryption protocols and show that if an encryption protocol enjoying this property is used, instead of a standard encryption scheme, then known constructions become adaptively secure. Next we construct, based on the standard RSA assumption, an encryption protocol that enjoys this property. We also consider parties that, even when corrupted, may internally deviate from their protocols in arbitrary ways, as long as no external test can detect faulty behavior. We show that in this case no non-trivial protocol can be proven adaptively secure using black-box simulation. This holds even if the communication channels are totally secure.

598 citations


Journal ArticleDOI
TL;DR: This work considers simple means by which two people may determine whether they possess the same information without revealing anything else to each other in case of disagreement.
Abstract: We consider simple means by which two people may determine whether they possess the same information, without revealing anything else to each other in case of disagreement.
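As a toy illustration of the problem (this is not one of the paper's protocols, and the salted-hash approach below is an assumption for the sketch), two parties holding high-entropy secrets can compare salted digests. With low-entropy secrets this leaks guessable information, which is exactly the difficulty the paper's simple protocols are designed to avoid:

```python
import hashlib
import secrets

def digest(value: str, salt: bytes) -> bytes:
    """Hash the secret together with a jointly chosen public salt."""
    return hashlib.sha256(salt + value.encode()).digest()

# The parties agree on a fresh public salt for this comparison.
salt = secrets.token_bytes(16)

alice_secret = "the eagle flies at midnight"
bob_secret = "the eagle flies at midnight"

# Each side reveals only its digest; equal digests imply equal secrets
# (up to hash collisions). A low-entropy secret could be brute-forced
# offline from the digest, so this leaks more than the paper allows.
print(digest(alice_secret, salt) == digest(bob_secret, salt))  # True
```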

235 citations


Journal ArticleDOI
TL;DR: Very efficient constructions for a pseudorandom generator and for a universal one-way hash function based on the intractability of the subset-sum problem for certain dimensions are shown.
Abstract: We show very efficient constructions for a pseudorandom generator and for a universal one-way hash function based on the intractability of the subset-sum problem for certain dimensions. (Pseudorandom generators can be used for private-key encryption and universal one-way hash functions for signature schemes.) The increase in efficiency in our construction is due to the fact that many bits can be generated/hashed with one application of the assumed one-way function. All of our constructions can be implemented in NC using an optimal number of processors.
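A minimal sketch of the subset-sum generator idea, assuming public random weights modulo 2^m and the illustrative parameter choice m = n + 1 (the paper analyzes for which dimensions the underlying subset-sum problem is intractable):

```python
import secrets

n = 128      # seed length in bits
m = n + 1    # output length; subset sum is conjectured hard when m is close to n

# Public parameters: n random weights modulo 2^m, fixed once and known to all.
weights = [secrets.randbelow(2**m) for _ in range(n)]

def subset_sum_gen(seed_bits):
    """Map an n-bit seed to the m-bit subset sum of the selected weights."""
    total = sum(w for bit, w in zip(seed_bits, weights) if bit) % (2**m)
    return [(total >> i) & 1 for i in range(m)]

seed = [secrets.randbelow(2) for _ in range(n)]
output = subset_sum_gen(seed)  # n bits in, n + 1 bits out; iterate for more stretch
```

Note that one application of the assumed one-way function produces many output bits at once, which is the source of the efficiency gain the abstract mentions.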

227 citations


Posted Content
TL;DR: In this paper, the authors proposed an alternative model for reconstruction with a different set of operations (which they call the "Cover" semi-group), which achieves a better contrast than is possible in the previous model.
Abstract: In Eurocrypt 1994 we proposed a new type of cryptographic scheme, which can decode concealed images without any cryptographic computations, by placing two transparencies on top of each other and using the decoder's (human) visual system. One of the drawbacks of that proposal was a loss in contrast: a black pixel is translated in the reconstruction into a black region, but a white pixel is translated into a grey region (half black and half white). In this paper we propose an alternative model for reconstruction with a different set of operations (which we call the "Cover" semi-group). In this model we are able to obtain a better contrast than is possible in the previous one.

149 citations


Book ChapterDOI
10 Apr 1996
TL;DR: In this article, the authors proposed an alternative model for reconstruction with a different set of operations (which they call the "Cover" semi-group), which achieves a better contrast than is possible in the previous model.
Abstract: In Eurocrypt 1994 we proposed a new type of cryptographic scheme, which can decode concealed images without any cryptographic computations, by placing two transparencies on top of each other and using the decoder's (human) visual system. One of the drawbacks of that proposal was a loss in contrast: a black pixel is translated in the reconstruction into a black region, but a white pixel is translated into a grey region (half black and half white). In this paper we propose an alternative model for reconstruction with a different set of operations (which we call the "Cover" semi-group). In this model we are able to obtain a better contrast than is possible in the previous one.
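For context, a sketch of the original Eurocrypt 1994 two-transparency scheme that this paper improves on; it exhibits exactly the contrast loss described above (white pixels reconstruct as half-black regions). This is the baseline model, not the new "Cover" semi-group construction:

```python
import random

def share_image(bitmap):
    """2-out-of-2 visual secret sharing with a 1x2 subpixel expansion.

    bitmap: 2D list of pixels, 1 = black, 0 = white.
    """
    share1, share2 = [], []
    for row in bitmap:
        r1, r2 = [], []
        for pixel in row:
            pat = random.choice([(0, 1), (1, 0)])  # one black, one white subpixel
            r1.extend(pat)
            # white pixel: identical patterns; black pixel: complementary patterns
            r2.extend(pat if pixel == 0 else (1 - pat[0], 1 - pat[1]))
        share1.append(r1)
        share2.append(r2)
    return share1, share2

def stack(s1, s2):
    """Lay one transparency over the other: a subpixel is black if either is."""
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]

# Black pixels stack to fully black subpixel pairs; white pixels stack to
# half-black (grey) pairs -- the contrast loss the new model improves on.
```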

143 citations


Proceedings ArticleDOI
01 Jul 1996
TL;DR: This paper introduces digital signets, a new technique for protecting digital content from illegal redistribution and motivates the study of the previously unexamined class of incompressible functions, analysis of which adds a cryptographic twist to communication complexity.
Abstract: The problem of protecting digital content – software, video, documents, music, etc. – from illegal redistribution by an authorized user is the focus of considerable industrial and academic effort. In the absence of special-purpose tamperproof hardware, the problem has no cryptographically secure solution: once a legitimate user has purchased the content, the user, by definition, has access to the material and can therefore capture it and redistribute it. A number of techniques have been suggested or are currently employed to make redistribution either inconvenient or traceable. In this paper we introduce digital signets, a new technique for protecting digital content from illegal redistribution. The work motivates the study of the previously unexamined class of incompressible functions, analysis of which adds a cryptographic twist to communication complexity.

122 citations


Journal Article
TL;DR: In this paper, Luby and Rackoff showed that two Feistel permutations are sufficient together with initial and final pair-wise independent permutations for constructing a pseudo-random permutation.
Abstract: Luby and Rackoff showed a method for constructing a pseudo-random permutation from a pseudo-random function. The method is based on composing four (or three for weakened security) so-called Feistel permutations, each of which requires the evaluation of a pseudo-random function. We reduce somewhat the complexity of the construction and simplify its proof of security by showing that two Feistel permutations are sufficient together with initial and final pair-wise independent permutations. The revised construction and proof provide a framework in which similar constructions may be brought up and their security can be easily proved. We demonstrate this by presenting some additional adjustments of the construction that achieve the following: reduce the success probability of the adversary; provide a construction of pseudo-random permutations with large input size using pseudo-random functions with small input size; and provide a construction of a pseudo-random permutation using a single pseudo-random function.
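A rough sketch of the Feistel core, with HMAC-SHA256 standing in for the pseudo-random function (an assumption for illustration; the initial and final pair-wise independent permutations of the construction are indicated only in a comment):

```python
import hmac, hashlib, os

HALF = 16  # bytes per half-block

def prf(key: bytes, msg: bytes) -> bytes:
    """Stand-in pseudo-random function (HMAC-SHA256, truncated to a half-block)."""
    return hmac.new(key, msg, hashlib.sha256).digest()[:HALF]

def feistel(left: bytes, right: bytes, key: bytes):
    """One Feistel permutation: (L, R) -> (R, L xor F_key(R))."""
    mask = prf(key, right)
    return right, bytes(a ^ b for a, b in zip(left, mask))

# Two Feistel rounds; the full construction sandwiches them between initial
# and final pair-wise independent permutations (omitted here).
k1, k2 = os.urandom(32), os.urandom(32)
L, R = os.urandom(HALF), os.urandom(HALF)
L, R = feistel(L, R, k1)
L, R = feistel(L, R, k2)
```

Each round is invertible given the key (recompute the mask from the new left half), which is why composing Feistel permutations yields a permutation rather than just a function.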

111 citations


Patent
Cynthia Dwork, Moni Naor, Florian Pestoni
03 Dec 1996
TL;DR: In this paper, a system and method for producing verified signatures on documents such as checks and affidavits is described, where a customer who is to obtain a verified signature, at some point in time, registers with a signatory authority, and a secret key, having public and private components, is established uniquely for that customer.
Abstract: A system and method are provided for producing verified signatures on documents such as checks and affidavits. Initially, a customer who will at some point need a verified signature registers with a signatory authority, and a secret key, having public and private components, is established uniquely for that customer. When a document requires a verified signature, the customer presents the document and proof of his/her identity, such as a preprogrammed computer-interfaceable card, to a signature system. Typically, such a system would be available at an institution, such as an office, bank, or post office, where such services are routinely used. The system accesses the archive of the private portion of the customer's key and generates an encoded signature based, in part, on the content of the document. Accordingly, when a recipient of the document later wishes to verify the signature, the recipient uses the customer's public key to decode the signature. It is then straightforward to verify the signature against the content of the document.
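The underlying sign-then-verify flow can be sketched with an off-the-shelf RSA signature; the `cryptography` package and the PSS parameters below are illustrative assumptions, not the patent's specified encoding:

```python
# Requires the third-party `cryptography` package.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Registration: a key pair is established uniquely for the customer.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

document = b"Pay to the order of J. Smith: $100"

# Signing: the encoded signature depends on the content of the document.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(document, pss, hashes.SHA256())

# Verification: the recipient checks the signature against the document;
# this raises InvalidSignature if the document was altered.
public_key.verify(signature, document, pss, hashes.SHA256())
```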

105 citations


Posted Content
TL;DR: In this article, the authors proposed a method of controlling the access to a secure database via quorum systems, where only the servers in a complete quorum can collectively grant (or revoke) access permission.
Abstract: We suggest a method of controlling the access to a secure database via quorum systems. A quorum system is a collection of sets (quorums) every two of which have a nonempty intersection. Quorum systems have been used for a number of applications in the area of distributed systems. We propose a separation between access servers, which are protected and trustworthy, but may be outdated, and the data servers, which may all be compromised. The main paradigm is that only the servers in a complete quorum can collectively grant (or revoke) access permission. The method we suggest ensures that, after authorization is revoked, a cheating user Alice will not be able to access the data even if many access servers still consider her authorized and even if the complete raw database is available to her. The method has a low overhead in terms of communication and computation. It can also be converted into a distributed system for issuing secure signatures. An important building block in our method is the use of secret sharing schemes that realize the access structures of quorum systems. We provide several efficient constructions of such schemes which may be of interest in their own right.
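As a toy instance of the main building block, a majority quorum system over five access servers can be realized by Shamir 3-out-of-5 secret sharing (the paper constructs schemes for general quorum access structures; the prime field and parameters below are illustrative):

```python
import secrets

P = 2**127 - 1  # prime field for the shares

def make_shares(secret, n, t):
    """Shamir t-out-of-n sharing: any t servers (a complete quorum) recover it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 over GF(P)."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

# Majority quorums over 5 access servers: any 3 servers form a quorum.
key = secrets.randbelow(P)
shares = make_shares(key, n=5, t=3)
assert reconstruct(shares[:3]) == key  # a complete quorum grants access
```

Any two majorities intersect, so no two disjoint groups of servers can grant conflicting permissions, while fewer than a quorum of compromised servers learn nothing about the key.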

53 citations


Proceedings ArticleDOI
01 Jan 1996
TL;DR: In this paper, the authors proposed a method of controlling the access to a secure database via quorum systems, where only the servers in a complete quorum can collectively grant (or revoke) access permission.
Abstract: We suggest a method of controlling the access to a secure database via quorum systems. A quorum system is a collection of sets (quorums) every two of which have a nonempty intersection. Quorum systems have been used for a number of applications in the area of distributed systems. We propose a separation between access servers which are protected and trustworthy, but may be outdated, and the data servers which may all be compromised. The main paradigm is that only the servers in a complete quorum can collectively grant (or revoke) access permission. The method we suggest ensures that after authorization is revoked, a cheating user Alice will not be able to access the data even if many access servers still consider her authorized, and even if the complete raw database is available to her. The method has a low overhead in terms of communication and computation. It can also be converted into a distributed system for issuing secure signatures.

36 citations


Proceedings ArticleDOI
01 Jul 1996
TL;DR: This paper provides a class of distributions where efficient learning with an evaluator is possible, but coming up with a generator that approximates the given distribution is infeasible, and shows that some distributions may be learned to within any ε > 0, but the learned hypothesis must be of size proportional to 1/ε.
Abstract: Kearns et al. [18] defined two notions for learning a distribution D. The first is with a generator, where the learner presents a generator that outputs a distribution identical or close to D. The other is with an evaluator, where the learner presents a procedure that on input x evaluates correctly (or approximates) the probability that x is generated by D. They showed an example where efficient learning by a generator is possible, but learning by an evaluator is computationally infeasible. Though it may seem that generation is, in general, easier than evaluation, in this paper we show that the converse may be true: we provide a class of distributions where efficient learning with an evaluator is possible, but coming up with a generator that approximates the given distribution is infeasible. We also show that some distributions may be learned (with either a generator or an evaluator) to within any ε > 0, but the learned hypothesis must be of size proportional to 1/ε. This is in contrast to the distribution-free PAC model, where the size of the hypothesis can always be proportional to log 1/ε. Introduction: The problem of learning a distribution D from several independent samples from D occupies a large part of Statistics and Pattern Recognition. However, only recently did Kearns et al. [18] put it in a computational setting. They distinguished between two notions of learning: evaluation and generation. In evaluation the goal of the learner is to be able to evaluate (or approximate) D[y], the probability that D generates y, given y. In generation the goal of the learner is to construct a distribution D' (i.e., a machine or a circuit whose output distribution is D' on uniformly random inputs) such that D' is as close as possible to D. In the setting of [18] there is a class of distributions D_n over {0,1}^n. Members of D_n are generated by polynomial-sized circuits. Such a class may be learned efficiently with a generator (with confidence δ and approximation ε): assuming that D ∈ D_n but is unknown, then given enough samples there is an (efficient) …
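To fix the two notions, here is a toy distribution over {0,1}^n for which both views happen to be easy; the paper's point is that for some distribution classes one view is feasible while the other is not:

```python
import random
from math import prod

# A toy distribution D over {0,1}^n: each bit is 1 independently with prob. p.
n, p = 8, 0.3

def generator():
    """Generator view of D: output a sample distributed according to D."""
    return tuple(1 if random.random() < p else 0 for _ in range(n))

def evaluator(x):
    """Evaluator view of D: on input x, return Pr[D outputs x]."""
    return prod(p if bit else 1 - p for bit in x)

sample = generator()
print(sample, evaluator(sample))
```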

Posted Content
TL;DR: The revised construction and proof provide a framework in which similar constructions may be brought up and their security can be easily proved. This is demonstrated by presenting some additional adjustments of the construction, such as reducing the success probability of the adversary.
Abstract: Luby and Rackoff showed a method for constructing a pseudo-random permutation from a pseudo-random function. The method is based on composing four (or three for weakened security) so-called Feistel permutations, each of which requires the evaluation of a pseudo-random function. We reduce somewhat the complexity of the construction and simplify its proof of security by showing that two Feistel permutations are sufficient together with initial and final pair-wise independent permutations. The revised construction and proof provide a framework in which similar constructions may be brought up and their security can be easily proved. We demonstrate this by presenting some additional adjustments of the construction that achieve the following: reduce the success probability of the adversary; provide a construction of pseudo-random permutations with large input size using pseudo-random functions with small input size; and provide a construction of a pseudo-random permutation using a single pseudo-random function.

01 Jan 1996
TL;DR: In this paper, the role of randomness for the decisional complexity in algebraic decision trees was considered, i.e., the number of comparisons ignoring all other computation, and it was shown that in general the randomized complexity is logarithmic in the size of the decision tree.
Abstract: We consider the role of randomness for the decisional complexity in algebraic decision (or computation) trees, i.e., the number of comparisons ignoring all other computation. Recently Ting and Yao showed that the problem of finding the maximum of n elements has decisional complexity O(log^2 n) (1994, Inform. Process. Lett. 49, 39–43). In contrast, Rabin showed in 1972 an Ω(n) bound for the deterministic case (1972, J. Comput. System Sci. 6, 639–650). We point out that their technique is applicable to several problems for which corresponding Ω(n) lower bounds hold. We show that in general the randomized decisional complexity is logarithmic in the size of the decision tree. We then turn to the question of the number of random bits needed to obtain the Ting and Yao result. We provide a deterministic O(k log n) algorithm for finding the elements which are larger than a given element, given a bound k on the number of such elements. We use this algorithm to obtain an algorithm for finding the maximum that uses O(log^2 n) random bits and O(log^2 n) queries.