Author

Michael Merritt

Other affiliations: Bell Labs, Lawrence Livermore National Laboratory, AT&T
Bio: Michael Merritt is an academic researcher from AT&T Labs. The author has contributed to research in topics: Shared memory & Distributed shared memory. The author has an h-index of 33 and has co-authored 86 publications receiving 6,227 citations. Previous affiliations of Michael Merritt include Bell Labs and Lawrence Livermore National Laboratory.


Papers
Journal ArticleDOI
TL;DR: This paper reviews some of the accomplishments of the theoretical community during the past two decades, notes an apparent disconnect between theoretical and practical concerns, and speculates on future synergy between the two.
Abstract: The field of distributed computing started around 1970 when people began to imagine a future world of multiple interconnected computers operating collectively. The theoretical challenge was to define what a computational problem would be in such a setting and to explore what could and could not be accomplished in a realistic setting in which the different computers fell under different administrative structures, operated at different speeds under the control of uncoordinated clocks, and sometimes failed in unpredictable ways. Meanwhile, the practical problem was to turn the vision into reality by building networks and networking equipment, communication protocols, and useful distributed applications. The theory of distributed computing became recognized as a distinct discipline with the holding of the first ACM Principles of Distributed Computing conference in 1982. This paper reviews some of the accomplishments of the theoretical community during the past two decades, notes an apparent disconnect between theoretical and practical concerns, and speculates on future synergy between the two.

25 citations

Book ChapterDOI
06 Oct 1988
TL;DR: This theory allows careful statement of the correctness conditions to be satisfied by transaction-processing algorithms, as well as clear and concise description of such algorithms, and serves as a framework for rigorous correctness proofs.
Abstract: This paper describes some results of a recent project to develop a theory for reasoning about atomic transactions. This theory allows careful statement of the correctness conditions to be satisfied by transaction-processing algorithms, as well as clear and concise description of such algorithms. It also serves as a framework for rigorous correctness proofs.

25 citations

Proceedings ArticleDOI
01 Jun 1992
TL;DR: This paper provides the first formal definition of the processor/memory interface, gives a formal specification and correctness proof of a release-consistent nonblocking shared memory, and provides new insights into memory systems and programs for nonblocking shared memories, areas that are not well understood.
Abstract: Specifications of shared memories generally assume that processors block, awaiting the response to each memory request, e.g. awaiting the return value for a read operation. On the other hand, studies have shown that substantial performance gain can be obtained by permitting a processor to have multiple memory reads/writes in progress at a time, and indeed high-performance multiprocessors such as the Tera Computer permit such nonblocking memory accesses. Formalizing correctness conditions for nonblocking shared memories requires a generalization of the processor/memory interface to specify accesses to be done concurrently, indicate when an order must be preserved even among concurrently-requested accesses, and permit out-of-order responses to memory requests. This paper provides the first formal definition of such an interface. Sequential consistency and linearizability are defined with respect to this general interface, as natural correctness conditions for nonblocking shared memories. Sequential consistency in turn is used in the formal specification of relaxed consistency models on nonblocking shared memories, models that support sequential consistency only for a class of well-behaved (data-race-free or PL) programs. Finally, the framework is illustrated by studying a particular relaxed consistency model, release consistency. Extending the results of a previous paper, we give a formal specification and correctness proof of a release-consistent nonblocking shared memory. This work provides new insights into memory systems and programs for nonblocking shared memories, areas that are not well understood.
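As a concrete illustration, here is a minimal sketch (my own, not the paper's formal model) of a tagged, nonblocking processor/memory interface in Python: the processor issues requests without waiting, several requests can be in flight at once, and responses carry the request tag because they may return out of order.

    import queue
    import random
    import threading
    import time

    class NonblockingMemory:
        """Toy tagged-request memory; responses may return out of order."""
        def __init__(self, servers=2):
            self.store = {}
            self.requests = queue.Queue()
            self.responses = queue.Queue()
            for _ in range(servers):
                threading.Thread(target=self._serve, daemon=True).start()

        def submit(self, tag, op, addr, value=None):
            # The processor does not block here; it keeps issuing requests.
            self.requests.put((tag, op, addr, value))

        def _serve(self):
            while True:
                tag, op, addr, value = self.requests.get()
                time.sleep(random.uniform(0.0, 0.01))  # variable latency
                if op == "write":
                    self.store[addr] = value
                    self.responses.put((tag, "ack"))
                else:  # read
                    self.responses.put((tag, self.store.get(addr)))

    mem = NonblockingMemory()
    mem.submit(1, "write", "x", 42)  # two accesses in flight concurrently
    mem.submit(2, "read", "x")
    for _ in range(2):
        # The read (tag 2) may be answered before the write (tag 1), in
        # which case it returns None: exactly why the interface must let
        # programs say when order must be preserved among requests.
        print(mem.responses.get())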

24 citations

Proceedings Article
29 Aug 1988
TL;DR: A rigorous framework for analyzing timestamp-based concurrency control and recovery algorithms for nested transactions is presented, and it is shown that local static atomicity of each object is sufficient to ensure global serializability.
Abstract: We present a rigorous framework for analyzing timestamp-based concurrency control and recovery algorithms for nested transactions. We define a local correctness property, local static atomicity, that affords useful modularity. We show that local static atomicity of each object is sufficient to ensure global serializability. We present generalizations of algorithms due to Reed and Herlihy, and show that each ensures local static atomicity.
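To give a flavor of the algorithms being generalized, here is a simplified, non-nested sketch (my own illustration in the spirit of Reed's multiversion timestamp ordering, not the paper's algorithm) of an object that tracks read and write timestamps per version and rejects writes that would invalidate a past read.

    import bisect

    class MVObject:
        """Multiversion timestamp ordering, single-object toy version.
        Each version records (write_ts, value, max_read_ts)."""
        def __init__(self, initial):
            self.versions = [(0, initial, 0)]

        def read(self, ts):
            # Return the version with the largest write_ts <= ts.
            i = bisect.bisect_right([w for w, _, _ in self.versions], ts) - 1
            w, v, r = self.versions[i]
            self.versions[i] = (w, v, max(r, ts))  # remember this reader
            return v

        def write(self, ts, value):
            i = bisect.bisect_right([w for w, _, _ in self.versions], ts) - 1
            _, _, r = self.versions[i]
            if r > ts:
                # A later transaction already read the state this write
                # would change; accepting it would break serializability.
                raise RuntimeError("write rejected; abort the transaction")
            self.versions.insert(i + 1, (ts, value, ts))

    x = MVObject(0)
    x.write(5, 10)
    assert x.read(7) == 10  # reads the version written at timestamp 5
    try:
        x.write(6, 99)      # timestamp 7 already read the ts-5 version
    except RuntimeError as e:
        print(e)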

23 citations

Proceedings ArticleDOI
01 Aug 2001
TL;DR: MPLS is transformed into a flexible and robust method for forwarding packets in a network, and the different schemes suggested are evaluated experimentally to demonstrate that the restoration schemes perform well in actual topologies.
Abstract: A new general theory about restoration of network paths is first introduced. The theory pertains to restoration of shortest paths in a network following failure, e.g., we prove that a shortest path in a network after removing k edges is the concatenation of at most k + 1 shortest paths in the original network. The theory is then combined with efficient path concatenation techniques in MPLS (multi-protocol label switching) to achieve powerful schemes for restoration in MPLS-based networks. We thus transform MPLS into a flexible and robust method for forwarding packets in a network. Finally, the different schemes suggested are evaluated experimentally on three large networks (a large ISP, the AS graph of the Internet, and the full Internet topology). These experiments demonstrate that the restoration schemes perform well in actual topologies.
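The k + 1 concatenation theorem is easy to check empirically. The following sketch (my own, assuming the networkx package) removes k edges from an unweighted graph, takes a shortest path in the damaged graph, and greedily counts how many segments are needed so that each segment is a shortest path in the original graph; by the theorem the count is at most k + 1.

    import networkx as nx

    def pieces_needed(G, s, t, removed_edges):
        """Greedily split the post-failure shortest path into maximal
        segments that are shortest paths in the original graph G."""
        H = G.copy()
        H.remove_edges_from(removed_edges)
        path = nx.shortest_path(H, s, t)  # a shortest path after failures
        dist = dict(nx.all_pairs_shortest_path_length(G))
        pieces, i = 0, 0
        while i < len(path) - 1:
            j = i + 1
            # Extend the segment while it remains a shortest path in G.
            while j + 1 < len(path) and dist[path[i]][path[j + 1]] == j + 1 - i:
                j += 1
            pieces, i = pieces + 1, j
        return pieces

    G = nx.grid_2d_graph(5, 5)
    removed = [((2, 1), (2, 2)), ((1, 2), (2, 2))]    # k = 2 failed edges
    print(pieces_needed(G, (0, 0), (4, 4), removed))  # theorem: at most 3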

22 citations


Cited by
Patent
30 Sep 2010
TL;DR: In this article, the authors propose a secure content distribution method for a configurable general-purpose electronic commercial transaction/distribution control system, comprising a process for encapsulating digital information in one or more digital containers, a process for encrypting at least a portion of the digital information, a process for associating at least partially secure control information for managing interactions with the encrypted digital information and/or digital container, and a process for delivering one or more digital containers to a digital information user.
Abstract: PROBLEM TO BE SOLVED: Electronic content information providers lack a commercially secure and effective method for a configurable, general-purpose electronic commercial transaction/distribution control system. SOLUTION: In this system, which has at least one protected processing environment for safely controlling at least a portion of the decoding of digital information, a secure content distribution method comprises: a process for encapsulating digital information in one or more digital containers; a process for encrypting at least a portion of the digital information; a process for associating at least partially secure control information for managing interactions with the encrypted digital information and/or digital container; a process for delivering one or more digital containers to a digital information user; and a process for using a protected processing environment to safely control at least a portion of the decoding of the digital information.
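As a toy illustration of the container idea (my own sketch, not the patented mechanism; it assumes the third-party cryptography package), the payload is encrypted and control information travels with it, so a stand-in for the protected processing environment can consult that control information before decoding.

    import json
    from cryptography.fernet import Fernet

    def make_container(payload: bytes, control: dict, key: bytes) -> bytes:
        """Encapsulate encrypted content together with control info."""
        token = Fernet(key).encrypt(payload)
        return json.dumps({"control": control,
                           "content": token.decode()}).encode()

    def open_container(container: bytes, key: bytes, user: str) -> bytes:
        """Stand-in for a protected processing environment: it checks
        the control information before allowing decryption."""
        doc = json.loads(container)
        if user not in doc["control"]["allowed_users"]:
            raise PermissionError("control information denies decryption")
        return Fernet(key).decrypt(doc["content"].encode())

    key = Fernet.generate_key()
    box = make_container(b"pay-per-view content",
                         {"allowed_users": ["alice"]}, key)
    print(open_container(box, key, "alice"))  # permitted
    # open_container(box, key, "eve")         # raises PermissionError

A real system would also protect the control information and the key themselves; this sketch only shows the encapsulate/associate/deliver/decode structure the claims describe.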

7,643 citations

Journal ArticleDOI
TL;DR: In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process.
Abstract: The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the "Byzantine Generals" problem.

4,389 citations

Book
01 Jan 1996
TL;DR: This book familiarizes readers with important problems, algorithms, and impossibility results in the area, and teaches readers how to reason carefully about distributed algorithms: to model them formally, devise precise specifications for their required behavior, prove their correctness, and evaluate their performance with realistic measures.
Abstract: In Distributed Algorithms, Nancy Lynch provides a blueprint for designing, implementing, and analyzing distributed algorithms. She directs her book at a wide audience, including students, programmers, system designers, and researchers. Distributed Algorithms contains the most significant algorithms and impossibility results in the area, all in a simple automata-theoretic setting. The algorithms are proved correct, and their complexity is analyzed according to precisely defined complexity measures. The problems covered include resource allocation, communication, consensus among distributed processes, data consistency, deadlock detection, leader election, global snapshots, and many others. The material is organized according to the system model: first by the timing model and then by the interprocess communication mechanism. The material on system models is isolated in separate chapters for easy reference. The presentation is completely rigorous, yet is intuitive enough for immediate comprehension. This book familiarizes readers with important problems, algorithms, and impossibility results in the area: readers can then recognize the problems when they arise in practice, apply the algorithms to solve them, and use the impossibility results to determine whether problems are unsolvable. The book also provides readers with the basic mathematical tools for designing new algorithms and proving new impossibility results. In addition, it teaches readers how to reason carefully about distributed algorithms: to model them formally, devise precise specifications for their required behavior, prove their correctness, and evaluate their performance with realistic measures.

Table of Contents: 1. Introduction; 2. Modelling I: Synchronous Network Model; 3. Leader Election in a Synchronous Ring; 4. Algorithms in General Synchronous Networks; 5. Distributed Consensus with Link Failures; 6. Distributed Consensus with Process Failures; 7. More Consensus Problems; 8. Modelling II: Asynchronous System Model; 9. Modelling III: Asynchronous Shared Memory Model; 10. Mutual Exclusion; 11. Resource Allocation; 12. Consensus; 13. Atomic Objects; 14. Modelling IV: Asynchronous Network Model; 15. Basic Asynchronous Network Algorithms; 16. Synchronizers; 17. Shared Memory versus Networks; 18. Logical Time; 19. Global Snapshots and Stable Properties; 20. Network Resource Allocation; 21. Asynchronous Networks with Process Failures; 22. Data Link Protocols; 23. Partially Synchronous System Models; 24. Mutual Exclusion with Partial Synchrony; 25. Consensus with Partial Synchrony
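For a taste of the book's material, here is a minimal sketch (my own) of the LCR leader-election algorithm for a synchronous unidirectional ring, an early example in the book: each process relays the largest UID it has seen, and the process whose own UID travels all the way around elects itself leader.

    def lcr_leader_election(uids):
        """LCR on a synchronous unidirectional ring; uids[i] is the
        unique ID of process i. Returns the elected leader's UID."""
        n = len(uids)
        send = list(uids)  # round 0: every process sends its own UID
        while True:
            recv = [send[(i - 1) % n] for i in range(n)]  # one sync round
            send = [None] * n
            for i, m in enumerate(recv):
                if m is None:
                    continue          # neighbor sent nothing this round
                if m == uids[i]:
                    return uids[i]    # own UID came back around: leader
                if m > uids[i]:
                    send[i] = m       # relay larger UIDs, drop smaller

    assert lcr_leader_election([3, 7, 2, 9, 5]) == 9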

4,340 citations

Proceedings ArticleDOI
16 Jul 2001
TL;DR: This paper presents a suite of security building blocks optimized for resource-constrained environments and wireless communication, and shows that they are practical even on minimal hardware: the performance of the protocol suite easily matches the data rate of the network.
Abstract: As sensor networks edge closer towards widespread deployment, security issues become a central concern. So far, much research has focused on making sensor networks feasible and useful, and has not concentrated on security. We present SPINS, a suite of security building blocks optimized for resource-constrained environments and wireless communication. SPINS has two secure building blocks: SNEP and μTESLA. SNEP provides the following important baseline security primitives: data confidentiality, two-party data authentication, and data freshness. A particularly hard problem is to provide efficient broadcast authentication, which is an important mechanism for sensor networks. μTESLA is a new protocol which provides authenticated broadcast for severely resource-constrained environments. We implemented the above protocols, and show that they are practical even on minimal hardware: the performance of the protocol suite easily matches the data rate of our network. Additionally, we demonstrate that the suite can be used for building higher level protocols.
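The core of μTESLA is a one-way key chain: keys are generated by repeated hashing and disclosed in reverse order, so a receiver can authenticate each disclosed key against an earlier commitment. A minimal sketch of that mechanism (my own; it ignores loose time synchronization and the per-packet MACs):

    import hashlib
    import hmac

    def make_key_chain(seed: bytes, n: int):
        """chain[i] = H^(n-i)(seed); chain[0] is the public commitment,
        and keys chain[1], chain[2], ... are disclosed one per interval."""
        chain = [seed]
        for _ in range(n):
            chain.append(hashlib.sha256(chain[-1]).digest())
        chain.reverse()
        return chain

    def verify_disclosed_key(commitment: bytes, key: bytes, interval: int):
        """Receiver check: hashing the disclosed key 'interval' times
        must reproduce the commitment distributed at setup."""
        k = key
        for _ in range(interval):
            k = hashlib.sha256(k).digest()
        return hmac.compare_digest(k, commitment)

    chain = make_key_chain(b"base-station secret", 10)
    commitment = chain[0]  # given to receivers at bootstrap
    assert verify_disclosed_key(commitment, chain[3], 3)  # interval-3 key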

2,703 citations

Journal ArticleDOI
TL;DR: This paper presents a simple logic for describing the beliefs of trustworthy parties involved in authentication protocols and the evolution of these beliefs as a consequence of communication, and gives the results of the analysis of four published protocols.
Abstract: Authentication protocols are the basis of security in many distributed systems, and it is therefore essential to ensure that these protocols function correctly. Unfortunately, their design has been extremely error prone. Most of the protocols found in the literature contain redundancies or security flaws. A simple logic has allowed us to describe the beliefs of trustworthy parties involved in authentication protocols and the evolution of these beliefs as a consequence of communication. We have been able to explain a variety of authentication protocols formally, to discover subtleties and errors in them, and to suggest improvements. In this paper we present the logic and then give the results of our analysis of four published protocols, chosen either because of their practical importance or because they serve to illustrate our method.
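To show the flavor of the logic, here is a toy encoding (my own, not the authors' notation) of one BAN inference step, the message-meaning rule for shared keys: if P believes K is a good key shared with Q and P sees a message X encrypted under K, then P believes Q once said X.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SharedKey:  # belief: "p and q share good key k"
        p: str
        q: str
        k: str

    @dataclass(frozen=True)
    class Sees:       # observation: "p sees {x} encrypted under k"
        p: str
        x: str
        k: str

    @dataclass(frozen=True)
    class Said:       # conclusion: "p believes q once said x"
        p: str
        q: str
        x: str

    def message_meaning(beliefs, obs):
        """Apply the message-meaning rule for shared keys."""
        out = set()
        for b in beliefs:
            if isinstance(b, SharedKey) and b.k == obs.k \
                    and obs.p in (b.p, b.q):
                other = b.q if obs.p == b.p else b.p
                out.add(Said(obs.p, other, obs.x))
        return out

    beliefs = {SharedKey("A", "B", "Kab")}
    print(message_meaning(beliefs, Sees("A", "Na", "Kab")))
    # -> {Said(p='A', q='B', x='Na')}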

2,638 citations