Author

J. Lawrence Carter

Bio: J. Lawrence Carter is an academic researcher from IBM. The author has contributed to research in topics: Hash function & Double hashing. The author has an h-index of 8 and has co-authored 11 publications receiving 4,796 citations.

Papers
Journal ArticleDOI
J. Lawrence Carter, Mark N. Wegman
TL;DR: An input-independent, average linear-time algorithm for storage and retrieval on keys, which makes a random choice of hash function from a suitable class of hash functions.

2,886 citations

Journal ArticleDOI
Mark N. Wegman, J. Lawrence Carter
TL;DR: Several new classes of hash functions with certain desirable properties are exhibited, and two novel applications for hashing which make use of these functions are introduced, including a provably secure authentication technique for sending messages over insecure lines and an application to testing sets for equality.

1,586 citations

Proceedings ArticleDOI
04 May 1977
TL;DR: An input-independent, average linear-time algorithm for storage and retrieval on keys, which makes a random choice of hash function from a suitable class of hash functions.
Abstract: This paper gives an input-independent, average linear-time algorithm for storage and retrieval on keys. The algorithm makes a random choice of hash function from a suitable class of hash functions. Given any sequence of inputs, the expected time (averaging over all functions in the class) to store and retrieve elements is linear in the length of the sequence. The number of references to the data base required by the algorithm for any input is extremely close to the theoretical minimum for any possible hash function with randomly distributed inputs. We present three suitable classes of hash functions which can also be evaluated rapidly. The ability to analyze the cost of storage and retrieval without worrying about the distribution of the input yields, as corollaries, improvements on the bounds of several algorithms.
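
The following is a minimal sketch (in Python) of the idea behind the paper: a chained hash table that, at construction time, draws its hash function at random from a universal class of the form h(x) = ((a*x + b) mod p) mod m, so that the expected cost of any fixed sequence of stores and retrievals is linear regardless of the input distribution. The prime, table size, and class shown here are illustrative choices, not the specific classes analyzed in the paper.

    import random

    class UniversalHashTable:
        # Chained hash table whose hash function is drawn at random from the
        # multiplicative class h_{a,b}(x) = ((a*x + b) mod p) mod m.
        # The constants below are illustrative, not taken from the paper.
        P = (1 << 61) - 1          # a prime larger than any key we expect

        def __init__(self, num_buckets=1024):
            self.m = num_buckets
            self.a = random.randrange(1, self.P)   # random nonzero multiplier
            self.b = random.randrange(0, self.P)   # random offset
            self.buckets = [[] for _ in range(self.m)]

        def _hash(self, key):
            return ((self.a * key + self.b) % self.P) % self.m

        def store(self, key, value):
            bucket = self.buckets[self._hash(key)]
            for i, (k, _) in enumerate(bucket):
                if k == key:
                    bucket[i] = (key, value)       # overwrite an existing key
                    return
            bucket.append((key, value))

        def retrieve(self, key):
            for k, v in self.buckets[self._hash(key)]:
                if k == key:
                    return v
            return None

Because a and b are chosen randomly for each table, no fixed input sequence can force more than constant expected work per store or retrieve.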

362 citations

Proceedings ArticleDOI
29 Oct 1979
TL;DR: Several new classes of hash functions with certain desirable properties are exhibited, and two novel applications for hashing which make use of these functions are introduced, including a provably secure authentication technique for sending messages over insecure lines.
Abstract: In this paper we exhibit several new classes of hash functions with certain desirable properties, and introduce two novel applications for hashing which make use of these functions. One class of functions is small, yet is almost universal₂. If the functions hash n-bit long names into m-bit indices, then specifying a member of the class requires only O((m + log₂log₂(n)) log₂(n)) bits, as compared to O(n) bits for earlier techniques. For long names, this is about a factor of m larger than the lower bound of m + log₂(n) - log₂(m) bits. An application of this class is a provably secure authentication technique for sending messages over insecure lines. A second class of functions satisfies a much stronger property than universal₂. We present an application to testing sets for equality. The authentication technique allows the receiver to be certain that a message is genuine. An 'enemy' - even one with infinite computer resources - cannot forge or modify a message without detection. The set equality technique allows the operations 'add member to set', 'delete member from set' and 'test two sets for equality' to be performed in expected constant time and with less than a specified probability of error.
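
As a rough illustration of the set-equality application (a sketch of the general idea only, not the exact scheme from the paper), each element can be mapped to a fixed-width pseudorandom fingerprint and a set summarized by the XOR of its members' fingerprints; add, delete, and equality test then take constant time, with a small, controllable probability of error on the equality test.

    import hashlib

    class SetSignature:
        # Compact randomized signature of a set supporting constant-time add,
        # delete, and equality testing.  Two signatures must share the same
        # seed to be comparable.  Elements are assumed to be added at most once.
        BITS = 64

        def __init__(self, seed):
            self.seed = seed
            self.sig = 0

        def _fingerprint(self, element):
            # Stand-in for a hash drawn from a strong class: a keyed 64-bit
            # digest (an illustrative choice, not the paper's construction).
            data = repr((self.seed, element)).encode()
            digest = hashlib.blake2b(data, digest_size=self.BITS // 8).digest()
            return int.from_bytes(digest, "big")

        def add(self, element):
            self.sig ^= self._fingerprint(element)

        def delete(self, element):
            self.sig ^= self._fingerprint(element)   # XOR is its own inverse

        def equals(self, other):
            # Equal sets always agree; distinct sets collide only with small probability.
            return self.sig == other.sig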

171 citations

Proceedings ArticleDOI
05 May 1982
TL;DR: This paper gives theoretical justification for the use of several signature testing techniques for VLSI chips and shows that, for these methods, the probability that masking will occur is small.
Abstract: Several methods for testing VLSI chips can be classified as signature methods. Both conventional and signature testing methods apply a number of test patterns to the inputs of the circuit. The difference is that a conventional method examines each output, while a signature method first accumulates the outputs in some data compression device, then examines the signature - the final contents of the accumulator - to see if it agrees with the signature produced by a good chip. Signature testing methods have several advantages, but they run the risk that masking may occur. Masking is said to occur if a faulty chip and a good chip behave differently on the test patterns, but the signatures are identical. When masking occurs, the signature testing method will incorrectly conclude that the chip is good, whereas a conventional method would discover that the chip is defective. This paper gives theoretical justification for the use of several signature testing techniques. We show that for these methods, the probability that masking will occur is small. An important difference between this and other work is that our results require very few assumptions about the behavior of faulty chips. They hold even in the presence of so-called correlated errors, or even if the circuit were subject to sabotage. When we speak of the probability of masking, we use the probabilistic approach of Gill, Rabin and others. That is, we introduce randomness into the testing method in a way which can be controlled by the designer. Thus, one theorem assumes that the order of the input patterns - or the patterns themselves - is random; another assumes that the connections between the chip and the signature accumulator are made randomly, and a third assumes that the signature accumulator itself incorporates a random choice. Most of the results of this paper use a particularly simple and practical signature accumulator based on a linear feedback shift register.
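
Below is a minimal sketch of the kind of signature accumulator the paper analyzes: a linear feedback shift register that compresses the circuit's output stream into a short signature. The register width and tap positions are illustrative choices, and in the paper's randomized variants the accumulator, the wiring to it, or the test ordering would incorporate a random choice.

    class LFSRSignature:
        # Shifts the circuit's output bits through a linear feedback shift
        # register; the final register contents are the signature compared
        # against the signature produced by a known-good chip.
        def __init__(self, width=16, taps=(16, 14, 13, 11)):
            # Tap positions (1 = least significant bit) are illustrative.
            self.width = width
            self.taps = taps
            self.state = 0
            self.mask = (1 << width) - 1

        def shift_in(self, bit):
            feedback = bit & 1
            for t in self.taps:
                feedback ^= (self.state >> (t - 1)) & 1
            self.state = ((self.state << 1) | feedback) & self.mask

        def accumulate(self, output_bits):
            for b in output_bits:
                self.shift_in(b)

        def signature(self):
            return self.state

Masking corresponds to a faulty output stream that happens to drive the register to the same final state as the good stream; the paper bounds the probability of this event under the randomized testing schemes above.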

29 citations


Cited by
Book
01 Jan 1996
TL;DR: A valuable reference for the novice as well as for the expert who needs a wider scope of coverage within the area of cryptography, this book provides easy and rapid access to information and includes more than 200 algorithms and protocols.

Abstract: From the Publisher: A valuable reference for the novice as well as for the expert who needs a wider scope of coverage within the area of cryptography, this book provides easy and rapid access to information and includes more than 200 algorithms and protocols; more than 200 tables and figures; more than 1,000 numbered definitions, facts, examples, notes, and remarks; and over 1,250 significant references, including brief comments on each paper.

13,597 citations

Journal ArticleDOI
TL;DR: A protocol for coin-tossing by exchange of quantum messages is presented, which is secure against traditional kinds of cheating, even by an opponent with unlimited computing power, but ironically can be subverted by use of a still subtler quantum phenomenon, the Einstein-Podolsky-Rosen paradox.

5,126 citations

Book
01 Jan 1995
TL;DR: This book introduces the basic concepts in the design and analysis of randomized algorithms and presents basic tools such as probability theory and probabilistic analysis that are frequently used in algorithmic applications.
Abstract: For many applications, a randomized algorithm is either the simplest or the fastest algorithm available, and sometimes both. This book introduces the basic concepts in the design and analysis of randomized algorithms. The first part of the text presents basic tools such as probability theory and probabilistic analysis that are frequently used in algorithmic applications. Algorithmic examples are also given to illustrate the use of each tool in a concrete setting. In the second part of the book, each chapter focuses on an important area to which randomized algorithms can be applied, providing a comprehensive and representative selection of the algorithms that might be used in each of these areas. Although written primarily as a text for advanced undergraduates and graduate students, this book should also prove invaluable as a reference for professionals and researchers.

4,412 citations

Journal ArticleDOI
TL;DR: The bulk-synchronous parallel (BSP) model is introduced as a candidate bridging model between parallel software and hardware, with results quantifying its efficiency both in implementing high-level language features and algorithms and in being implemented in hardware.

Abstract: The success of the von Neumann model of sequential computation is attributable to the fact that it is an efficient bridge between software and hardware: high-level languages can be efficiently compiled onto this model, yet it can be efficiently implemented in hardware. The author argues that an analogous bridge between software and hardware is required for parallel computation if that is to become as widely used. This article introduces the bulk-synchronous parallel (BSP) model as a candidate for this role, and gives results quantifying its efficiency both in implementing high-level language features and algorithms and in being implemented in hardware.
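
For reference, the efficiency results in this line of work are usually expressed through the standard BSP superstep cost model; the small sketch below states that commonly cited formula (the parameter names are the usual ones and are not taken from this abstract).

    def bsp_superstep_cost(w_max, h_max, g, l):
        # Commonly cited BSP cost of one superstep: the maximum local work on
        # any processor, plus the cost of routing an h-relation at g time
        # units per message, plus the barrier synchronization latency l.
        return w_max + h_max * g + l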

3,885 citations

Journal ArticleDOI
TL;DR: Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
Abstract: A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
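
The core mapping can be sketched as follows (a toy illustration, not the actual Chord implementation): nodes and keys are hashed onto the same circular identifier space, and a key is assigned to the first node whose identifier is equal to or follows it. Real Chord resolves this successor in O(log N) hops using finger tables rather than scanning a full node list as this sketch does.

    import bisect
    import hashlib

    class ChordStyleRing:
        # Toy consistent-hashing ring: each key is owned by its successor node
        # on a 2^M-point identifier circle.  M and the hash are illustrative.
        M = 32

        def __init__(self, node_names):
            self.ring = sorted((self._ident(name), name) for name in node_names)
            self.ids = [ident for ident, _ in self.ring]

        def _ident(self, name):
            digest = hashlib.sha1(name.encode()).digest()
            return int.from_bytes(digest[:4], "big") % (1 << self.M)

        def lookup(self, key):
            # Successor of the key's identifier, wrapping around the circle.
            pos = bisect.bisect_left(self.ids, self._ident(key)) % len(self.ids)
            return self.ring[pos][1]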

3,518 citations