
Showing papers by "Moni Naor published in 2001"


Proceedings ArticleDOI
01 Apr 2001
TL;DR: A set of techniques for the rank aggregation problem is developed and their performance compared to that of well-known methods; a primary goal is to design rank aggregation techniques that can effectively combat spam in Web searches.
Abstract: We consider the problem of combining ranking results from various sources. In the context of the Web, the main applications include building meta-search engines, combining ranking functions, selecting documents based on multiple criteria, and improving search precision through word associations. We develop a set of techniques for the rank aggregation problem and compare their performance to that of well-known methods. A primary goal of our work is to design rank aggregation techniques that can effectively combat "spam," a serious problem in Web searches. Experiments show that our methods are simple, efficient, and effective.
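One classical positional method that rank-aggregation work of this kind measures against is the Borda count. The sketch below is our hedged illustration of that baseline (the function name and toy rankings are ours, not the paper's): each item earns points by its position in each input list, and the summed scores define the aggregate order.

```python
# Hedged sketch: Borda-count rank aggregation, a classical baseline for
# combining ranked lists (illustrative, not the paper's own method).

def borda_aggregate(rankings):
    """Combine several ranked lists into one by summing Borda scores.

    Each ranking is a list of items, best first. An item in position i of
    a ranking of length n earns n - i points; items absent from a ranking
    earn 0 from it. Higher total score ranks higher.
    """
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for i, item in enumerate(ranking):
            scores[item] = scores.get(item, 0) + (n - i)
    return sorted(scores, key=lambda item: -scores[item])

# Three "search engines" rank four pages; one list tries to boost page d.
print(borda_aggregate([["a", "b", "c", "d"],
                       ["b", "a", "c", "d"],
                       ["d", "a", "b", "c"]]))   # → ['a', 'b', 'd', 'c']
```

A single outlier list cannot lift d above the consistently ranked a and b, which is the intuition behind using aggregation to dampen spam.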

1,982 citations


Book ChapterDOI
19 Aug 2001
TL;DR: In this paper, the Subset-Cover framework is proposed for the stateless receiver case, where the users do not (necessarily) update their state from session to session, and sufficient conditions that guarantee the security of a revocation algorithm in this class are provided.
Abstract: We deal with the problem of a center sending a message to a group of users such that some subset of the users is considered revoked and should not be able to obtain the content of the message. We concentrate on the stateless receiver case, where the users do not (necessarily) update their state from session to session. We present a framework called the Subset-Cover framework, which abstracts a variety of revocation schemes including some previously known ones. We provide sufficient conditions that guarantee the security of a revocation algorithm in this class. We describe two explicit Subset-Cover revocation algorithms; these algorithms are very flexible and work for any number of revoked users. The schemes require storage at the receiver of log N and 1/2 log² N keys respectively (N is the total number of users), and in order to revoke r users the required message lengths are r log N and 2r keys respectively. We also provide a general traitor tracing mechanism that can be integrated with any Subset-Cover revocation scheme that satisfies a "bifurcation property". This mechanism does not need an a priori bound on the number of traitors and does not expand the message length by much compared to the revocation of the same set of traitors. The main improvements of these methods over previously suggested methods, when adapted to the stateless scenario, are: (1) reducing the message length to O(r) regardless of the coalition size while maintaining a single decryption at the user's end; (2) providing a seamless integration between the revocation and tracing so that the tracing mechanism does not require any change to the revocation algorithm.
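The log N storage bound of the first scheme corresponds to a key-per-tree-node pattern: users sit at the leaves of a full binary tree and each user holds the keys of the nodes on its leaf-to-root path. The sketch below is our hedged illustration of that pattern (the heap-style node numbering and function name are ours), checking that the key count matches the stated bound.

```python
# Hedged sketch of the key-storage pattern consistent with the paper's
# log N bound: each user (a leaf of a full binary tree over N users)
# stores one key per node on its leaf-to-root path.

import math

def user_key_nodes(leaf, num_users):
    """Return the tree-node indices whose keys user `leaf` stores.

    Nodes are numbered heap-style: root is 1, children of v are 2v and
    2v + 1, leaves are num_users .. 2*num_users - 1. num_users is assumed
    to be a power of 2.
    """
    node = num_users + leaf          # index of the user's leaf
    path = []
    while node >= 1:
        path.append(node)
        node //= 2                   # move to the parent
    return path

N = 16
keys = user_key_nodes(5, N)
print(keys)                                   # → [21, 10, 5, 2, 1]
assert len(keys) == int(math.log2(N)) + 1     # log N + 1 keys per user
```

Revoking users then amounts to covering the non-revoked leaves with complete subtrees and encrypting under each subtree's key, which is where the r log N message-length bound comes from.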

1,277 citations


Proceedings ArticleDOI
01 May 2001
TL;DR: An elegant and remarkably simple algorithm ("the threshold algorithm", or TA) is analyzed and shown to be optimal in a much stronger sense than FA: it is essentially optimal not just for some monotone aggregation functions but for all of them, and not just in a high-probability sense but over every database.
Abstract: Assume that each object in a database has m grades, or scores, one for each of m attributes. For example, an object can have a color grade, that tells how red it is, and a shape grade, that tells how round it is. For each attribute, there is a sorted list, which lists each object and its grade under that attribute, sorted by grade (highest grade first). There is some monotone aggregation function, or combining rule, such as min or average, that combines the individual grades to obtain an overall grade. To determine objects that have the best overall grades, the naive algorithm must access every object in the database, to find its grade under each attribute. Fagin has given an algorithm (“Fagin's Algorithm”, or FA) that is much more efficient. For some distributions on grades, and for some monotone aggregation functions, FA is optimal in a high-probability sense. We analyze an elegant and remarkably simple algorithm (“the threshold algorithm”, or TA) that is optimal in a much stronger sense than FA. We show that TA is essentially optimal, not just for some monotone aggregation functions, but for all of them, and not just in a high-probability sense, but over every database. Unlike FA, which requires large buffers (whose size may grow unboundedly as the database size grows), TA requires only a small, constant-size buffer. We distinguish two types of access: sorted access (where the middleware system obtains the grade of an object in some sorted list by proceeding through the list sequentially from the top), and random access (where the middleware system requests the grade of an object in a list, and obtains it in one step). We consider the scenarios where random access is either impossible, or expensive relative to sorted access, and provide algorithms that are essentially optimal for these cases as well.
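The threshold algorithm described above is short enough to sketch directly. Below is a hedged top-1 version with min as the monotone aggregation function; the lists and names are illustrative, random access is simulated with dictionaries, and every object is assumed to appear in every sorted list.

```python
# Hedged sketch of the threshold algorithm (TA) for top-1. At each depth of
# parallel sorted access, every object seen gets its overall grade via
# (simulated) random access; TA halts once the best overall grade reaches
# the threshold computed from the last grades seen under sorted access.

def threshold_top1(sorted_lists, agg=min):
    """sorted_lists: one (object, grade) list per attribute, best grade first.
    Returns (best_object, best_overall_grade)."""
    grades = [dict(lst) for lst in sorted_lists]     # simulated random access
    best_obj, best = None, float("-inf")
    for depth in range(len(sorted_lists[0])):
        last = []                                    # grades seen at this depth
        for lst in sorted_lists:
            obj, g = lst[depth]
            last.append(g)
            overall = agg(gr[obj] for gr in grades)  # random-access lookups
            if overall > best:
                best_obj, best = obj, overall
        if best >= agg(last):    # threshold: no unseen object can do better
            return best_obj, best
    return best_obj, best

colors = [("a", 0.9), ("b", 0.8), ("c", 0.5)]   # sorted by color grade
shapes = [("b", 0.95), ("a", 0.7), ("c", 0.6)]  # sorted by shape grade
print(threshold_top1([colors, shapes]))          # → ('b', 0.8)
```

On this toy input TA stops after two rounds of sorted access without ever fully reading object c, and its only buffered state is the current best answer, matching the constant-size-buffer claim.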

908 citations


Proceedings ArticleDOI
09 Jan 2001
TL;DR: This paper presents several significant improvements to oblivious transfer protocols of strings, in particular providing the first two-round OT protocol whose security analysis does not invoke the random oracle model.
Abstract: 1 Introduction. Oblivious Transfer (OT) protocols allow one party, the sender, to transmit part of its inputs to another party, the chooser, in a manner that protects both of them: the sender is assured that the chooser does not receive more information than it is entitled to, while the chooser is assured that the sender does not learn which part of the inputs it received. OT is used as a key component in many applications of cryptography. Its computational requirements are quite demanding and they are likely to be the bottleneck in many applications that invoke it. 1.1 Contributions. This paper presents several significant improvements to oblivious transfer (OT) protocols of strings, in particular: (i) improving the efficiency of applications which make many invocations of oblivious transfer; (ii) providing the first two-round OT protocol whose security analysis does not invoke the random oracle model.

832 citations


Proceedings ArticleDOI
06 Jul 2001
TL;DR: This work proposes a new methodology for designing secure protocols, utilizing the communication complexity tree (or branching program) representation of f, and exemplifies a protocol for the Millionaires problem, which is more efficient than previously known ones in either communication or computation.
Abstract: A secure function evaluation protocol allows two parties to jointly compute a function f(x,y) of their inputs in a manner not leaking more information than necessary. A major result in this field is: “any function f that can be computed using polynomial resources can be computed securely using polynomial resources” (where “resources” refers to communication and computation). This result follows by a general transformation from any circuit for f to a secure protocol that evaluates f. Although the resources used by protocols resulting from this transformation are polynomial in the circuit size, they are much higher (in general) than those required for an insecure computation of f. We propose a new methodology for designing secure protocols, utilizing the communication complexity tree (or branching program) representation of f. We start with an efficient (insecure) protocol for f and transform it into a secure protocol. In other words, “any function f that can be computed using communication complexity c can be computed securely using communication complexity that is polynomial in c and a security parameter”. We show several simple applications of this new methodology resulting in protocols efficient either in communication or in computation. In particular, we exemplify a protocol for the Millionaires problem, where two participants want to compare their values but reveal no other information. Our protocol is more efficient than previously known ones in either communication or computation.

198 citations


Journal Article
TL;DR: In this paper, the Subset-Cover framework is proposed for the stateless receiver case, where the users do not (necessarily) update their state from session to session, and sufficient conditions that guarantee the security of a revocation algorithm in this class are provided.
Abstract: We deal with the problem of a center sending a message to a group of users such that some subset of the users is considered revoked and should not be able to obtain the content of the message. We concentrate on the stateless receiver case, where the users do not (necessarily) update their state from session to session. We present a framework called the Subset-Cover framework, which abstracts a variety of revocation schemes including some previously known ones. We provide sufficient conditions that guarantee the security of a revocation algorithm in this class. We describe two explicit Subset-Cover revocation algorithms; these algorithms are very flexible and work for any number of revoked users. The schemes require storage at the receiver of log N and 1/2 log² N keys respectively (N is the total number of users), and in order to revoke r users the required message lengths are r log N and 2r keys respectively. We also provide a general traitor tracing mechanism that can be integrated with any Subset-Cover revocation scheme that satisfies a bifurcation property. This mechanism does not need an a priori bound on the number of traitors and does not expand the message length by much compared to the revocation of the same set of traitors. The main improvements of these methods over previously suggested methods, when adapted to the stateless scenario, are: (1) reducing the message length to O(r) regardless of the coalition size while maintaining a single decryption at the user's end; (2) providing a seamless integration between the revocation and tracing so that the tracing mechanism does not require any change to the revocation algorithm.

110 citations


Proceedings ArticleDOI
06 Jul 2001
TL;DR: In this paper, a hash table based on open addressing was proposed to solve the dynamic perfect hashing problem, with expected amortized insertion and deletion time O(1) for fixed-size records.
Abstract: Many data structures give away much more information than they were intended to. Whenever privacy is important, we need to be concerned that it might be possible to infer information from the memory representation of a data structure that is not available through its “legitimate” interface. Word processors that quietly maintain old versions of a document are merely the most egregious example of a general problem. We deal with data structures whose current memory representation does not reveal their history. We focus on dictionaries, where this means revealing nothing about the order of insertions or deletions. Our first algorithm is a hash table based on open addressing, allowing O(1) insertion and search. We also present a history independent dynamic perfect hash table that uses space linear in the number of elements inserted and has expected amortized insertion and deletion time O(1). To solve the dynamic perfect hashing problem we devise a general scheme for history independent memory allocation. For fixed-size records this is quite efficient, with insertion and deletion both linear in the size of the record. Our variable-size record scheme is efficient enough for dynamic perfect hashing but not for general use. The main open problem we leave is whether it is possible to implement a variable-size record scheme with low overhead.
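One classical way to make an open-addressing layout depend only on the set of stored keys is ordered (priority-based) linear probing: on a collision, the higher-priority key keeps the slot and the displaced key continues probing. The sketch below is our hedged illustration of that general idea (not necessarily the paper's exact construction); all names are ours, and deletions are not handled.

```python
# Hedged sketch: ordered linear probing as an illustration of history
# independence. At each occupied slot the key whose (hash, key) pair is
# smaller wins the slot; the loser keeps probing. The resulting memory
# image is a function of the key set alone, not of insertion order.

def hi_insert(table, key, h):
    i = h(key) % len(table)
    while table[i] is not None:
        if (h(key), key) < (h(table[i]), table[i]):
            table[i], key = key, table[i]   # displace the lower-priority key
        i = (i + 1) % len(table)
    table[i] = key

def build(keys, size=8, h=lambda k: k):
    table = [None] * size
    for k in keys:
        hi_insert(table, k, h)
    return table

# Two insertion histories of the same set yield identical memory images.
assert build([17, 9, 1]) == build([1, 9, 17])
print(build([17, 9, 1]))   # → [None, 1, 9, 17, None, None, None, None]
```

An observer who dumps the table's memory after either history sees the same bytes, which is exactly the "reveal nothing about the order of insertions" property the abstract asks for.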

95 citations


Journal Article
TL;DR: In this paper, two new methodologies for the design of efficient secure protocols are proposed that differ with respect to their underlying computational models, yielding protocols more efficient than previously known ones in either communication or computation.
Abstract: We suggest two new methodologies for the design of efficient secure protocols that differ with respect to their underlying computational models. In one methodology we utilize the communication complexity tree (or branching program) representation of f: we start with an efficient (insecure) protocol for f and transform it into a secure protocol. In other words, "any function f that can be computed using communication complexity c can be computed securely using communication complexity that is polynomial in c and a security parameter". The second methodology uses the circuit computing f, enhanced with look-up tables, as its underlying computational model. It is possible to simulate any RAM machine in this model with polylogarithmic blowup. Hence it is possible to start with a computation of f on a RAM machine and transform it into a secure protocol. We show many applications of these new methodologies resulting in protocols efficient either in communication or in computation. In particular, we exemplify a protocol for the "millionaires problem", where two participants want to compare their values but reveal no other information. Our protocol is more efficient than previously known ones in either communication or computation.

32 citations


Posted Content
TL;DR: Two new methodologies for the design of efficient secure protocols that differ with respect to their underlying computational models are suggested, including a protocol for the "millionaires problem" that is more efficient than previously known ones in either communication or computation.
Abstract: We suggest two new methodologies for the design of efficient secure protocols that differ with respect to their underlying computational models. In one methodology we utilize the communication complexity tree (or branching program) representation of f: we start with an efficient (insecure) protocol for f and transform it into a secure protocol. In other words, "any function f that can be computed using communication complexity c can be computed securely using communication complexity that is polynomial in c and a security parameter". The second methodology uses the circuit computing f, enhanced with look-up tables, as its underlying computational model. It is possible to simulate any RAM machine in this model with polylogarithmic blowup. Hence it is possible to start with a computation of f on a RAM machine and transform it into a secure protocol. We show many applications of these new methodologies resulting in protocols efficient either in communication or in computation. In particular, we exemplify a protocol for the "millionaires problem", where two participants want to compare their values but reveal no other information. Our protocol is more efficient than previously known ones in either communication or computation.

31 citations


Proceedings ArticleDOI
09 Jan 2001
TL;DR: In this article, the authors show how to construct pseudo-random permutations that satisfy a certain cycle restriction, for example that the permutation be cyclic (consisting of one cycle containing all the elements) or an involution (a self-inverse permutation) with no fixed points.
Abstract: We show how to construct pseudo-random permutations that satisfy a certain cycle restriction, for example that the permutation be cyclic (consisting of one cycle containing all the elements) or an involution (a self-inverse permutation) with no fixed points. The construction can be based on any (unrestricted) pseudo-random permutation. The resulting permutations are defined succinctly and their evaluation at a given point is efficient. Furthermore, they enjoy a fast forward property, i.e., it is possible to iterate them at a very small cost.
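A standard way to obtain a cyclic permutation from an arbitrary one is conjugation: for any permutation pi on {0, ..., N-1}, the map sigma = pi⁻¹ ∘ c ∘ pi, where c(x) = x + 1 mod N, is a single N-cycle, and sigma iterated t times is just pi⁻¹(pi(x) + t mod N). The sketch below is our hedged illustration of that idea; a real instantiation would use a cryptographic PRP for pi, whereas `random.sample` here is only a stand-in.

```python
# Hedged sketch: conjugating the canonical N-cycle c(x) = x + 1 mod N by a
# permutation pi yields a cyclic permutation sigma = pi^-1 . c . pi, and
# iterating sigma t times costs one pi, one addition, and one pi-inverse
# (the "fast forward" property). pi here is a seeded stand-in, not a PRP.

import random

N = 16
pi = random.Random(0).sample(range(N), N)   # stand-in for a pseudo-random permutation
pi_inv = [0] * N
for x, y in enumerate(pi):
    pi_inv[y] = x

def sigma_iter(x, t=1):
    """Apply sigma t times in O(1) work, independent of t."""
    return pi_inv[(pi[x] + t) % N]

# sigma is a single N-cycle: starting anywhere, N steps visit every element.
orbit, x = [], 0
for _ in range(N):
    orbit.append(x)
    x = sigma_iter(x)
assert sorted(orbit) == list(range(N)) and x == 0

# Fast forward agrees with step-by-step iteration.
y = 3
for _ in range(7):
    y = sigma_iter(y)
assert y == sigma_iter(3, t=7)
print("orbit of 0:", orbit)
```

The involution case can be handled with the same conjugation trick by replacing c with a fixed-point-free self-inverse map such as x XOR 1 (for even N).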

27 citations



Proceedings Article
08 Jul 2001
TL;DR: Mechanism Design, the algorithmic component of Game Theory, is the synthesis of protocols for selfish parties to achieve certain properties when aggregating their preferences to decide on some "social choice".
Abstract: Mechanism Design is the algorithmic component of Game Theory, the synthesis of protocols for selfish parties to achieve certain properties. A protocol is a method to aggregate the preferences of the parties in order to decide on some "social choice," where typical examples include: deciding whether a community should build a bridge, how to route packets in a network, and who wins an auction. Each party has a utility function which expresses how much it values each possible outcome of the protocol. The goal is to design a protocol where the winning strategies achieve the social choice. Recently Mechanism Design has received attention from computer scientists in light of the above applications; see [19, 20].

Posted Content
TL;DR: In this paper, a hash table based on open addressing was proposed to solve the dynamic perfect hashing problem, with expected amortized insertion and deletion time O(1) for fixed-size records.
Abstract: Many data structures give away much more information than they were intended to. Whenever privacy is important, we need to be concerned that it might be possible to infer information from the memory representation of a data structure that is not available through its “legitimate” interface. Word processors that quietly maintain old versions of a document are merely the most egregious example of a general problem. We deal with data structures whose current memory representation does not reveal their history. We focus on dictionaries, where this means revealing nothing about the order of insertions or deletions. Our first algorithm is a hash table based on open addressing, allowing O(1) insertion and search. We also present a history independent dynamic perfect hash table that uses space linear in the number of elements inserted and has expected amortized insertion and deletion time O(1). To solve the dynamic perfect hashing problem we devise a general scheme for history independent memory allocation. For fixed-size records this is quite efficient, with insertion and deletion both linear in the size of the record. Our variable-size record scheme is efficient enough for dynamic perfect hashing but not for general use. The main open problem we leave is whether it is possible to implement a variable-size record scheme with low overhead.

Posted Content
TL;DR: In this article, the Subset-Cover framework is proposed for the stateless receiver case, where the users do not (necessarily) update their state from session to session, and sufficient conditions for the security of a revocation algorithm in this class are provided.
Abstract: We deal with the problem of a center sending a message to a group of users such that some subset of the users is considered revoked and should not be able to obtain the content of the message. We concentrate on the stateless receiver case, where the users do not (necessarily) update their state from session to session. We present a framework called the Subset-Cover framework, which abstracts a variety of revocation schemes including some previously known ones. We provide sufficient conditions that guarantee the security of a revocation algorithm in this class. We describe two explicit Subset-Cover revocation algorithms; these algorithms are very flexible and work for any number of revoked users. The schemes require storage at the receiver of log N and 1/2 log² N keys respectively (N is the total number of users), and in order to revoke r users the required message lengths are r log N and 2r keys respectively. We also provide a general traitor tracing mechanism that can be integrated with any Subset-Cover revocation scheme that satisfies a “bifurcation property”. This mechanism does not need an a priori bound on the number of traitors and does not expand the message length by much compared to the revocation of the same set of traitors. The main improvements of these methods over previously suggested methods, when adapted to the stateless scenario, are: (1) reducing the message length to O(r) regardless of the coalition size while maintaining a single decryption at the user’s end; (2) providing a seamless integration between the revocation and tracing so that the tracing mechanism does not require any change to the revocation algorithm.

Journal ArticleDOI
TL;DR: This work provides a deterministic O(k log n) algorithm for finding the elements which are larger than a given element, given a bound k on the number of these elements, and uses this algorithm to obtain an algorithm for finding the maximum using O(log² n) random bits and O(log² n) queries.
Abstract: We consider the role of randomness for the decisional complexity in algebraic decision (or computation) trees, i.e., the number of comparisons ignoring all other computation. Recently Ting and Yao showed that the problem of finding the maximum of n elements has decisional complexity O(log² n) (1994, Inform. Process. Lett., 49, 39–43). In contrast, Rabin showed in 1972 an Ω(n) bound for the deterministic case (1972, J. Comput. System Sci., 6, 639–650). We point out that their technique is applicable to several problems for which corresponding Ω(n) lower bounds hold. We show that in general the randomized decisional complexity is logarithmic in the size of the decision tree. We then turn to the question of the number of random bits needed to obtain the Ting and Yao result. We provide a deterministic O(k log n) algorithm for finding the elements which are larger than a given element, given a bound k on the number of these elements. We use this algorithm to obtain an O(log² n) random bits and O(log² n) queries algorithm for finding the maximum.

Posted Content
TL;DR: In this paper, two new methodologies for the design of efficient secure protocols are proposed that differ with respect to their underlying computational models, yielding protocols more efficient than previously known ones in either communication or computation.
Abstract: We suggest two new methodologies for the design of efficient secure protocols that differ with respect to their underlying computational models. In one methodology we utilize the communication complexity tree (or branching program) representation of f: we start with an efficient (insecure) protocol for f and transform it into a secure protocol. In other words, "any function f that can be computed using communication complexity c can be computed securely using communication complexity that is polynomial in c and a security parameter". The second methodology uses the circuit computing f, enhanced with look-up tables, as its underlying computational model. It is possible to simulate any RAM machine in this model with polylogarithmic blowup. Hence it is possible to start with a computation of f on a RAM machine and transform it into a secure protocol. We show many applications of these new methodologies resulting in protocols efficient either in communication or in computation. In particular, we exemplify a protocol for the "millionaires problem", where two participants want to compare their values but reveal no other information. Our protocol is more efficient than previously known ones in either communication or computation.