
Showing papers by "Moni Naor" published in 1998


Proceedings ArticleDOI
23 May 1998
TL;DR: In this paper, the authors introduce the notion of an (α, β) timing constraint and show that if the adversary is constrained by an (α, β) assumption, then there exist four-round almost concurrent zero-knowledge interactive proofs and perfect concurrent zero-knowledge arguments for every language in NP.
Abstract: Concurrent executions of a zero-knowledge protocol by a single prover (with one or more verifiers) may leak information and may not be zero-knowledge in toto. In this article, we study the problem of maintaining zero-knowledge. We introduce the notion of an (α, β) timing constraint: for any two processors P1 and P2, if P1 measures α elapsed time on its local clock and P2 measures β elapsed time on its local clock, and P2 starts after P1 does, then P2 will finish after P1 does. We show that if the adversary is constrained by an (α, β) assumption then there exist four-round almost concurrent zero-knowledge interactive proofs and perfect concurrent zero-knowledge arguments for every language in NP. We also address the more specific problem of Deniable Authentication, for which we propose several particularly efficient solutions. Deniable Authentication is of independent interest, even in the sequential case; our concurrent solutions yield sequential solutions without recourse to timing, that is, in the standard model.
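To make the constraint concrete, here is a minimal sketch, with a hypothetical trace format and a simplified reading of the definition, that checks whether two observed executions are consistent with an (α, β) timing constraint:

```python
def satisfies_timing_constraint(intervals_p1, intervals_p2, alpha, beta):
    """Check whether two observed executions respect an (alpha, beta) timing
    constraint: whenever P1 measures alpha on its local clock and P2 measures
    beta on its local clock, and P2 starts (in global time) after P1 does,
    then P2 must also finish after P1 does.

    Each interval is a (global_start, global_finish, local_duration) tuple --
    a hypothetical trace format chosen only for this illustration.
    """
    for s1, f1, d1 in intervals_p1:
        if d1 < alpha:              # P1 did not measure a full alpha period
            continue
        for s2, f2, d2 in intervals_p2:
            if d2 < beta:           # P2 did not measure a full beta period
                continue
            if s2 >= s1 and f2 < f1:
                return False        # started later yet finished earlier: violation
    return True

# Example: with alpha = beta = 1.0, the second trace violates the constraint.
ok = satisfies_timing_constraint([(0.0, 2.0, 1.0)], [(0.5, 2.5, 1.0)], 1.0, 1.0)
bad = satisfies_timing_constraint([(0.0, 2.0, 1.0)], [(0.5, 1.5, 1.0)], 1.0, 1.0)
print(ok, bad)   # True False
```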

397 citations


Proceedings Article
26 Jan 1998
TL;DR: This solution represents certificate revocation lists by authenticated dictionaries that support efficient verification of whether a certificate is in the list, as well as efficient updates, and is compatible with, e.g., X.500 certificates.
Abstract: A new solution is suggested for the problem of certificate revocation. This solution represents Certificate Revocation Lists by an authenticated search data structure. The process of verifying whether a certificate is in the list or not, as well as updating the list, is made very efficient. The suggested solution gains in scalability, communication costs, robustness to parameter changes, and update rate. Comparisons to the following solutions are included: 'traditional' CRLs (Certificate Revocation Lists), Micali's Certificate Revocation System (CRS), and Kocher's Certificate Revocation Trees (CRT). Finally, a scenario is considered in which certificates are not revoked but are frequently issued for short-term periods. Based on the authenticated search data structure scheme, a certificate update scheme is presented in which all certificates are updated by a common message. The suggested solutions for the certificate revocation and certificate update problems are better than current solutions with respect to communication costs, update rate, and robustness to changes in parameters, and are compatible with, e.g., X.500 certificates.
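For intuition, here is a minimal Merkle-hash-tree sketch of authenticated membership proofs over a list of revoked certificate serial numbers. It illustrates only the general idea of an authenticated search structure, not the paper's construction (which is a sorted dictionary that also supports efficient updates and proofs of non-membership); all names below are illustrative.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build Merkle tree levels bottom-up over already-hashed leaves."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        levels.append([H(cur[i] + (cur[i + 1] if i + 1 < len(cur) else cur[i]))
                       for i in range(0, len(cur), 2)])
    return levels

def prove(levels, index):
    """Collect sibling hashes from leaf to root: a short membership proof."""
    proof = []
    for level in levels[:-1]:
        sib = index ^ 1
        proof.append(level[sib] if sib < len(level) else level[index])
        index //= 2
    return proof

def verify(root, leaf, index, proof):
    h = leaf
    for sib in proof:
        h = H(h + sib) if index % 2 == 0 else H(sib + h)
        index //= 2
    return h == root

# The directory signs only the root; a relying party checks one short proof.
revoked = sorted([b"cert-0017", b"cert-0042", b"cert-0099", b"cert-0123"])
leaves = [H(x) for x in revoked]
levels = build_tree(leaves)
root = levels[-1][0]
assert verify(root, H(b"cert-0042"), 1, prove(levels, 1))
```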

295 citations


Journal ArticleDOI
TL;DR: A general construction of perfect zero-knowledge arguments that can be based on any one-way permutation is shown; previously such arguments were known only under specific algebraic assumptions. The result is obtained via a construction of an information-theoretically secure bit-commitment protocol.
Abstract: "Perfect zero-knowledge arguments" is a cryptographic primitive which allows one polynomial-time player to convince another polynomial-time player of the validity of an NP statement, without revealing any additional information (in the information-theoretic sense). Here the security achieved is on-line: in order to cheat and validate a false theorem, the prover must break a cryptographic assumption on-line during the conversation, while the verifier cannot ever find any information unconditionally. Despite their practical and theoretical importance, it was previously only known how to implement zero-knowledge arguments based on specific algebraic assumptions. In this paper we show a general construction which can be based on any one-way permutation. The result is obtained by a construction of an information-theoretically secure bit-commitment protocol. The protocol is efficient (both parties are polynomial time) and can be based on any one-way permutation.
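For a sense of the hiding/binding profile involved, here is a toy Pedersen-style bit commitment: it is statistically hiding and computationally binding, but it rests on a specific algebraic assumption (hardness of discrete log), which is exactly the kind of dependence the paper removes by working from any one-way permutation. This is not the paper's protocol, and the parameters below are tiny toy values.

```python
import secrets

# Toy group parameters (far too small for real use): a safe prime p = 2q + 1
# and two generators g, h of the order-q subgroup of squares mod p; in a real
# setup log_g(h) must be unknown to the committer.
p, q = 2579, 1289
g, h = 4, 9

def commit(bit: int):
    """c = g^bit * h^r mod p: h^r is uniform in the subgroup, so c statistically
    hides the bit; opening the same c to both values would require log_g(h)."""
    r = secrets.randbelow(q)
    return pow(g, bit, p) * pow(h, r, p) % p, r

def open_commitment(c: int, bit: int, r: int) -> bool:
    return c == pow(g, bit, p) * pow(h, r, p) % p

c, r = commit(1)
assert open_commitment(c, 1, r)
```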

196 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present four novel constructions of quorum systems, all featuring optimal or near-optimal load and high availability. The best construction, analyzed via percolation theory, has a load of $O(1/\sqrt{n})$ and a failure probability of $\exp(-\Omega(\sqrt{n}))$ when the elements fail with probability $p < 1/2$.
Abstract: A quorum system is a collection of sets (quorums) every two of which intersect. Quorum systems have been used for many applications in the area of distributed systems, including mutual exclusion, data replication, and dissemination of information. Given a strategy to pick quorums, the load $L(S)$ is the access probability of the busiest element, minimized over the strategies. The capacity $\mathrm{Cap}(S)$ is the highest rate of quorum accesses that $S$ can handle, so $\mathrm{Cap}(S) = 1/L(S)$. The availability of a quorum system $S$ is the probability that at least one quorum survives, assuming that each element fails independently with probability $p$. A tradeoff between $L(S)$ and the availability of $S$ is shown. We present four novel constructions of quorum systems, all featuring optimal or near optimal load, and high availability. The best construction, based on paths in a grid, has a load of $O(1/\sqrt{n})$, and a failure probability of $\exp(-\Omega(\sqrt{n}))$ when the elements fail with probability $p < 1/2$. Moreover, even in the presence of faults, with exponentially high probability the load of this system is still $O(1/\sqrt{n})$. The analysis of this scheme is based on percolation theory.
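A simple way to see how $O(1/\sqrt{n})$ load arises is the classic grid quorum system, where a quorum is one full row plus one full column of a $d \times d$ grid ($n = d^2$). This is not the paper's Paths construction, just a hedged illustration of the load calculation with made-up helper names.

```python
import random
from collections import Counter

def grid_quorum(d, row, col):
    """Quorum = all cells of `row` plus all cells of `col` in a d x d grid."""
    return {(row, j) for j in range(d)} | {(i, col) for i in range(d)}

def estimate_load(d, trials=100_000):
    """Pick quorums uniformly at random and return the busiest cell's access
    frequency -- an empirical estimate of the load under this strategy."""
    counts = Counter()
    for _ in range(trials):
        counts.update(grid_quorum(d, random.randrange(d), random.randrange(d)))
    return max(counts.values()) / trials

d = 16                                     # n = d*d = 256 elements
print(estimate_load(d), 2 / d - 1 / d**2)  # empirical load vs. exact 2/d - 1/d^2
```

Under the uniform strategy each cell is hit with probability $2/d - 1/d^2 \approx 2/\sqrt{n}$, so the load is $O(1/\sqrt{n})$; per the abstract, the Paths construction achieves the same asymptotic load while also providing high availability.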

180 citations


Book ChapterDOI
23 Aug 1998
TL;DR: In this article, threshold tracing schemes are proposed: schemes designed to trace the source of keys only for decoders that decrypt with probability greater than some threshold q (a parameter of the scheme).
Abstract: This work presents threshold tracing schemes. Tracing schemes trace the source of keys which are used in pirate decoders for sensitive or proprietary data (such as pay-TV programs). Previous tracing schemes were designed to operate against any decoder which decrypts with a non-negligible success probability. We introduce threshold tracing schemes which are only designed to trace the source of keys of decoders which decrypt with probability greater than some threshold q (which is a parameter). These schemes present a dramatic reduction in the overhead compared to the previous constructions of tracing schemes.

138 citations


Patent
05 Jun 1998
TL;DR: In this patent, a method is proposed for secure accounting and auditing of a communications network that operates in an environment in which many servers serve an even larger number of clients (e.g. the web) and in which the interaction between servers and clients must be metered.
Abstract: A method for secure accounting and auditing of a communications network operates in an environment in which many servers serve an even larger number of clients (e.g. the web), and are required to meter the interaction between servers and clients (e.g. counting the number of clients that were served by a server). The method (metering process) is very efficient and does not require extensive usage of any new communication channels. The metering is secure against fraud attempts by servers which inflate the number of their clients and against clients that attempt to disrupt the metering process. Several secure and efficient constructions of this method are based on efficient cryptographic techniques; they are also very accurate and preserve the privacy of the clients.

87 citations


Book ChapterDOI
31 May 1998
TL;DR: An environment in which many servers serve an even larger number of clients, and in which it is required to meter the interaction between servers and clients, is considered; several secure and efficient constructions of metering systems, based on efficient cryptographic techniques, are suggested.
Abstract: We consider an environment in which many servers serve an even larger number of clients (e.g. the web), and it is required to meter the interaction between servers and clients. More specifically, it is desired to count the number of clients that were served by a server. A major possible application is to measure the popularity of web pages in order to decide on advertisement fees. The metering process must be very efficient and should not require extensive usage of any new communication channels. The metering should also be secure against fraud attempts by servers which inflate the number of their clients and against clients that attempt to disrupt the metering process. We suggest several secure and efficient constructions of metering systems, based on efficient cryptographic techniques. They are also very accurate and can preserve the privacy of the clients.
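To see how a server can end up with a short proof of k visits, here is a deliberately simplified, single-server, single-timeframe sketch in the spirit of a secret-sharing-based metering system: the audit agency gives each client one point on a secret degree-(k-1) polynomial, and a server visited by k distinct clients can interpolate the polynomial at 0 and present that value as its proof. The field, the names, and the stripped-down structure are illustrative assumptions, not the paper's full construction (which handles many servers and many timeframes).

```python
import secrets

P = 2**61 - 1                      # a Mersenne prime field for the toy example
k = 5                              # number of client visits a server must prove

# Audit agency: a secret polynomial of degree k-1; the proof value is poly(0).
coeffs = [secrets.randbelow(P) for _ in range(k)]
def poly(x):
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def client_token(client_id):
    """What the agency gives a client; the client forwards it when visiting."""
    return client_id, poly(client_id)

def proof_of_k_visits(tokens):
    """Server side: Lagrange-interpolate poly(0) from k distinct client tokens."""
    pts = list({x: y for x, y in tokens}.items())[:k]
    assert len(pts) == k, "need tokens from at least k distinct clients"
    result = 0
    for i, (xi, yi) in enumerate(pts):
        num, den = 1, 1
        for j, (xj, _) in enumerate(pts):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        result = (result + yi * num * pow(den, P - 2, P)) % P
    return result

tokens = [client_token(c) for c in range(1, k + 1)]
assert proof_of_k_visits(tokens) == poly(0)   # the agency can verify this value
```

A server with fewer than k distinct tokens cannot interpolate, which is the sense in which inflating the client count is prevented in this toy variant.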

86 citations


Journal ArticleDOI
TL;DR: The method suggested ensures that, after authorization is revoked, a cheating user Alice will not be able to access the data even if many access servers still consider her authorized and even if the complete raw database is available to her.
Abstract: We suggest a method of controlling the access to a secure database via quorum systems. A quorum system is a collection of sets (quorums) every two of which have a nonempty intersection. Quorum systems have been used for a number of applications in the area of distributed systems. We propose a separation between access servers, which are protected and trustworthy, but may be outdated, and the data servers, which may all be compromised. The main paradigm is that only the servers in a complete quorum can collectively grant (or revoke) access permission. The method we suggest ensures that, after authorization is revoked, a cheating user Alice will not be able to access the data even if many access servers still consider her authorized and even if the complete raw database is available to her. The method has a low overhead in terms of communication and computation. It can also be converted into a distributed system for issuing secure signatures. An important building block in our method is the use of secret sharing schemes that realize the access structures of quorum systems. We provide several efficient constructions of such schemes which may be of interest in their own right.
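As a hedged illustration of what it means for a secret sharing scheme to realize the access structure of a quorum system, the sketch below XOR-shares a secret independently for each quorum among that quorum's members, so that exactly the sets containing a complete quorum can reconstruct it. This replication-based construction is generic but inefficient; the paper provides far more efficient schemes, and all names here are illustrative.

```python
import secrets

def xor_bytes(chunks, length):
    """XOR together a list of equal-length byte strings (empty list -> zeros)."""
    out = bytes(length)
    for c in chunks:
        out = bytes(a ^ b for a, b in zip(out, c))
    return out

def share_for_quorums(secret, quorums):
    """For each quorum, split `secret` into an independent XOR sharing among its
    members; shares[server] maps quorum index -> that server's piece."""
    shares = {}
    for qi, quorum in enumerate(quorums):
        members = sorted(quorum)
        pieces = [secrets.token_bytes(len(secret)) for _ in members[:-1]]
        pieces.append(xor_bytes(pieces + [secret], len(secret)))  # completes the XOR
        for server, piece in zip(members, pieces):
            shares.setdefault(server, {})[qi] = piece
    return shares

def reconstruct(shares, quorum, qi, length):
    """Any complete quorum can XOR its pieces for that quorum back into the secret."""
    return xor_bytes([shares[s][qi] for s in quorum], length)

quorums = [{"A", "B"}, {"B", "C"}, {"A", "C"}]     # every two quorums intersect
secret = b"file-encryption-key!"
shares = share_for_quorums(secret, quorums)
assert reconstruct(shares, {"B", "C"}, qi=1, length=len(secret)) == secret
```

A set of servers that contains no complete quorum is missing at least one piece from every quorum's sharing and therefore learns nothing about the secret.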

85 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present an existentially unforgeable signature scheme that for a reasonable setting of parameters requires at most six times the amount of time needed to generate a signature using plain RSA.
Abstract: A signature scheme is existentially unforgeable if, given any polynomial (in the security parameter) number of pairs $(m_1, S(m_1)), (m_2, S(m_2)), \ldots, (m_k, S(m_k))$, where $S(m)$ denotes the signature on the message $m$, it is computationally infeasible to generate a pair $(m_{k+1}, S(m_{k+1}))$ for any message $m_{k+1} \notin \{m_1, \ldots, m_k\}$. We present an existentially unforgeable signature scheme that for a reasonable setting of parameters requires at most six times the amount of time needed to generate a signature using "plain" RSA (which is not existentially unforgeable). We point out applications where our scheme is desirable.
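The abstract notes that "plain" (textbook) RSA is not existentially unforgeable. The toy sketch below, with tiny illustrative parameters, shows the classic multiplicative forgery that makes this concrete: signatures on two queried messages combine into a valid signature on a message that was never queried, which is exactly the kind of attack an existentially unforgeable scheme must rule out.

```python
# Toy textbook-RSA signatures (tiny primes, illustration only; Python 3.8+ for pow(e, -1, phi)).
p_, q_ = 61, 53
n = p_ * q_                      # 3233
e = 17
phi = (p_ - 1) * (q_ - 1)        # 3120
d = pow(e, -1, phi)

def sign(m: int) -> int:         # "plain" RSA signature on an integer message
    return pow(m, d, n)

def verify(m: int, s: int) -> bool:
    return pow(s, e, n) == m % n

queried = []
def oracle(m: int) -> int:       # signing oracle that records what was asked
    queried.append(m)
    return sign(m)

# Multiplicative forgery: S(m1) * S(m2) mod n is a valid signature on m1*m2 mod n.
m1, m2 = 17, 23
s1, s2 = oracle(m1), oracle(m2)
m_new, s_new = (m1 * m2) % n, (s1 * s2) % n
print(m_new not in queried and verify(m_new, s_new))   # True: existential forgery
```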

50 citations


Journal Article
TL;DR: In this article, the authors present threshold tracing schemes, which are designed to trace the source of keys only for decoders that decrypt with probability greater than some threshold q (a parameter); these schemes achieve a dramatic reduction in overhead compared to previous constructions of tracing schemes.
Abstract: This work presents threshold tracing schemes. Tracing schemes trace the source of keys which are used in pirate decoders for sensitive or proprietary data (such as pay-TV programs). Previous tracing schemes were designed to operate against any decoder which decrypts with a non-negligible success probability. We introduce threshold tracing schemes which are only designed to trace the source of keys of decoders which decrypt with probability greater than some threshold q (which is a parameter). These schemes present a dramatic reduction in the overhead compared to the previous constructions of tracing schemes. We argue that in many applications it is only required to protect against pirate decoders which have a decryption probability very close to 1 (for example, TV decoders). In such applications it is therefore very favorable to use threshold tracing schemes.

46 citations



Book ChapterDOI
23 Aug 1998
TL;DR: In this paper, the relationship between unpredictable functions and pseudo-random functions is studied and a transformation of the former to the latter using a unique application of the Goldreich-Levin hard core bit is proposed.
Abstract: This paper studies the relationship between unpredictable functions (which formalize the concept of a MAC) and pseudo-random functions. We show an efficient transformation of the former to the latter using a unique application of the Goldreich-Levin hard-core bit (taking the inner-product with a random vector r): While in most applications of the GL-bit the random vector r may be public, in our setting this is not the case. The transformation is only secure when r is secret and treated as part of the key. In addition, we consider weaker notions of unpredictability and their relationship to the corresponding notions of pseudo-randomness. Using these weaker notions we formulate the exact requirements of standard protocols for private-key encryption, authentication and identification. In particular, this implies a simple construction of a private-key encryption scheme from the standard challenge-response identification scheme.
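A minimal sketch of the transformation described above, with HMAC-SHA256 standing in for an arbitrary unpredictable function (an assumption made only for illustration): each output bit of the constructed function is the inner product, mod 2, of the MAC's output with a random vector r that is kept secret as part of the key.

```python
import hashlib
import hmac
import secrets

def gl_bit(block: bytes, r: bytes) -> int:
    """Goldreich-Levin hard-core bit: inner product mod 2 of two bit strings."""
    acc = 0
    for a, b in zip(block, r):
        acc ^= a & b
    return bin(acc).count("1") % 2      # parity of all the AND-ed bits

def prf_bit(k: bytes, r: bytes, x: bytes) -> int:
    """One output bit of the constructed pseudo-random function: the inner
    product of the MAC's output with r, where r is secret and part of the key.
    HMAC-SHA256 stands in here for an arbitrary unpredictable function."""
    tag = hmac.new(k, x, hashlib.sha256).digest()
    return gl_bit(tag, r)

key = (secrets.token_bytes(32), secrets.token_bytes(32))   # (k, r) -- both secret
print(prf_bit(key[0], key[1], b"some input"))
```

As the abstract stresses, r must be treated as part of the secret key here; publishing it, as is common in other uses of the Goldreich-Levin bit, would not give a secure construction.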

Journal Article
TL;DR: In this paper, the relationship between unpredictable functions and pseudo-random functions is studied and a transformation of the former to the latter using a unique application of the Goldreich-Levin hard core bit is proposed.
Abstract: This paper studies the relationship between unpredictable functions (which formalize the concept of a MAC) and pseudo-random functions. We show an efficient transformation of the former to the latter using a unique application of the Goldreich-Levin hard-core bit (taking the inner-product with a random vector r): While in most applications of the GL-bit the random vector r may be public, in our setting this is not the case. The transformation is only secure when r is secret and treated as part of the key. In addition, we consider weaker notions of unpredictability and their relationship to the corresponding notions of pseudo-randomness. Using these weaker notions we formulate the exact requirements of standard protocols for private-key encryption, authentication and identification. In particular, this implies a simple construction of a private-key encryption scheme from the standard challenge-response identification scheme.

Journal ArticleDOI
01 Apr 1998
TL;DR: This work presents schemes that measure the amount of service requested from servers by clients; the schemes are secure and efficient and provide a short proof for the metered data.
Abstract: The majority of Internet revenues come from connectivity and advertisement fees, yet there are almost no means to secure the accounting processes which determine these fees from fraudulent behavior, e.g. a scheme to provide reliable usage information regarding a Web site. There is an enormous financial incentive for the Web site to inflate this data, and therefore measurement schemes should be secure against malicious behavior of the site. Measurement schemes which are based on sampling are relatively protected from corrupt behavior of Web sites but do not provide meaningful data about small and medium scale sites. We present schemes that measure the amount of service requested from servers by clients. The schemes are secure and efficient and provide a short proof for the metered data. Immediate applications are a secure measurement of visits to a Web site and a secure usage based accounting mechanism between networks.

Journal Article
TL;DR: In this article, the authors present schemes that measure the amount of service requested from servers by clients, which are secure and efficient and provide a short proof for the metered data.