
Showing papers by Michael Merritt published in 1992


Proceedings ArticleDOI
04 May 1992
TL;DR: A combination of asymmetric (public-key) and symmetric (secret-key) cryptography that allows two parties sharing a common password to exchange confidential and authenticated information over an insecure network is introduced.
Abstract: Classic cryptographic protocols based on user-chosen keys allow an attacker to mount password-guessing attacks. A combination of asymmetric (public-key) and symmetric (secret-key) cryptography that allows two parties sharing a common password to exchange confidential and authenticated information over an insecure network is introduced. In particular, a protocol relying on the counter-intuitive notion of using a secret key to encrypt a public key is presented. Such protocols are secure against active attacks, and have the property that the password is protected against offline dictionary attacks.
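
Why that counter-intuitive move helps can be seen in a toy sketch. The names and the XOR cipher below are illustrative stand-ins, assuming (as the protocol requires) that a freshly generated public key is indistinguishable from random bits:

    # Toy demo: decrypting the intercepted message under any candidate
    # password yields equally random-looking bytes, so a recorded
    # transcript gives a dictionary attacker nothing to test guesses against.
    import hashlib
    import os

    def xor_cipher(key: bytes, data: bytes) -> bytes:
        # One-shot XOR against a SHA-256-derived pad; a stand-in for a real cipher.
        pad = hashlib.sha256(key).digest()
        return bytes(a ^ b for a, b in zip(data, pad))

    ephemeral_public_key = os.urandom(16)   # models a fresh, random-looking public key
    wire = xor_cipher(b"real-password", ephemeral_public_key)

    for guess in (b"real-password", b"123456", b"letmein"):
        print(guess.decode(), "->", xor_cipher(guess, wire).hex())

All three lines of output look uniformly random, so an eavesdropper cannot tell the right guess from the wrong ones without interacting with a live party.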

1,571 citations


Patent
24 Sep 1992
TL;DR: In this article, the authors proposed a cryptographic communication system, which employs a novel combination of public and private key cryptography, allowing two parties, who share only a relatively insecure password, to bootstrap a computationally secure cryptographic system over an insecure network.
Abstract: A cryptographic communication system. The system, which employs a novel combination of public and private key cryptography, allows two parties, who share only a relatively insecure password, to bootstrap a computationally secure cryptographic system over an insecure network. The system is secure against active and passive attacks, and has the property that the password is protected against off-line "dictionary" attacks. If Alice and Bob are two parties who share the password P, one embodiment of the system involves the following steps: (1) Alice generates a random public key E, encrypts it with P, and sends P(E) to Bob; (2) Bob decrypts to get E, encrypts a random secret key R with E, and sends E(R) to Alice; (3) Alice decrypts to get R, generates a random challenge CA, and sends R(CA) to Bob; (4) Bob decrypts to get CA, generates a random challenge CB, and sends R(CA, CB) to Alice; (5) Alice decrypts to get (CA, CB), compares the first against her challenge, and sends R(CB) to Bob if they are equal; (6) Bob decrypts and compares with the earlier challenge; and (7) Alice and Bob can use R as a shared secret key to protect the session.
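
The seven steps can be followed end to end in a toy, runnable sketch. The tiny fixed-prime RSA keypair and the XOR keystream below are insecure stand-ins chosen only to keep the example self-contained; a real embodiment would use vetted public-key and symmetric ciphers:

    import hashlib
    import os

    def keystream_xor(key: bytes, data: bytes) -> bytes:
        # Toy symmetric cipher: XOR against a SHA-256-derived keystream.
        out = bytearray()
        counter = 0
        while len(out) < len(data):
            out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
            counter += 1
        return bytes(a ^ b for a, b in zip(data, out))

    # Toy RSA keypair with tiny fixed primes (61, 53); insecure by design.
    N, E_EXP = 61 * 53, 17
    D_EXP = pow(E_EXP, -1, 60 * 52)            # modular inverse (Python 3.8+)

    password = b"shared password P"

    # (1) Alice encrypts her public key E under P and sends P(E).
    msg1 = keystream_xor(password, N.to_bytes(2, "big") + E_EXP.to_bytes(2, "big"))

    # (2) Bob decrypts to get E, picks a random secret key R, sends E(R).
    pub = keystream_xor(password, msg1)
    n, e = int.from_bytes(pub[:2], "big"), int.from_bytes(pub[2:], "big")
    R = os.urandom(1)[0] % n
    msg2 = pow(R, e, n)

    # (3) Alice decrypts to get R and sends her challenge R(CA).
    assert pow(msg2, D_EXP, N) == R
    key = R.to_bytes(2, "big")
    c_a = os.urandom(8)
    msg3 = keystream_xor(key, c_a)

    # (4) Bob decrypts to get CA and replies with R(CA, CB).
    c_b = os.urandom(8)
    msg4 = keystream_xor(key, keystream_xor(key, msg3) + c_b)

    # (5) Alice checks CA and, on a match, returns R(CB).
    plain = keystream_xor(key, msg4)
    assert plain[:8] == c_a
    msg5 = keystream_xor(key, plain[8:])

    # (6) Bob checks CB; (7) both sides now use R as the session key.
    assert keystream_xor(key, msg5) == c_b
    print("handshake complete; session key R established")

Note that the keystream here is reused across messages purely to keep the toy short; an actual instantiation needs a real cipher with per-message nonces.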

236 citations


Proceedings ArticleDOI
01 Oct 1992
TL;DR: This paper addresses synchronization and coordination problems in distributed systems that employ unreliable shared memory, presenting algorithms that solve the consensus problem and simulate reliable shared-memory objects even when the available memory objects are faulty.
Abstract: This paper addresses problems which arise in the synchronization and coordination of distributed systems which employ unreliable shared memory. We present algorithms which solve the consensus problem, and which simulate reliable shared-memory objects, despite the fact that the available memory objects (e.g., read/write registers, test-and-set registers, read-modify-write registers) may be faulty.
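
A standard construction in this spirit (not necessarily the paper's own algorithm) masks up to f arbitrarily faulty read/write registers by replicating each value across 2f + 1 base registers and taking a majority on reads; a minimal single-writer sketch:

    from collections import Counter

    F = 1                                  # number of tolerated register faults
    registers = [None] * (2 * F + 1)       # base registers; any F may misbehave

    def reliable_write(value):
        for i in range(len(registers)):    # write the value to every replica
            registers[i] = value

    def reliable_read():
        # With at most F faulty replicas, the last value written appears at
        # least F + 1 times, so the majority is always correct.
        value, count = Counter(registers).most_common(1)[0]
        assert count >= F + 1, "too many faults to mask"
        return value

    reliable_write("decided: 1")
    registers[0] = "garbage"               # one base register goes bad
    print(reliable_read())                 # still returns "decided: 1"

The paper's algorithms address the harder cases this sketch sidesteps, such as concurrent access and solving consensus itself.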

50 citations


Proceedings ArticleDOI
01 Jun 1992
TL;DR: This paper provides the first formal definition of the processor/memory interface, gives a formal specification and correctness proof of a release consistent nonblocking shared memory, and provides new insights into memory systems and programs for nonblocking shared memories, areas that are not well understood.
Abstract: Specifications of shared memories generally assume that processors block, awaiting the response to each memory request, e.g., awaiting the return value for a read operation. On the other hand, studies have shown that substantial performance gains can be obtained by permitting a processor to have multiple memory reads/writes in progress at a time, and indeed high-performance multiprocessors such as the Tera Computer permit such nonblocking memory accesses. Formalizing correctness conditions for nonblocking shared memories requires a generalization of the processor/memory interface to specify accesses to be done concurrently, indicate when an order must be preserved even among concurrently-requested accesses, and permit out-of-order responses to memory requests. This paper provides the first formal definition of such an interface. Sequential consistency and linearizability are defined with respect to this general interface, as natural correctness conditions for nonblocking shared memories. Sequential consistency in turn is used in the formal specification of relaxed consistency models on nonblocking shared memories, models that support sequential consistency only for a class of well-behaved (data-race-free or PL) programs. Finally, the framework is illustrated by studying a particular relaxed consistency model, release consistency. Extending the results of a previous paper, we give a formal specification and correctness proof of a release consistent nonblocking shared memory. This work provides new insights into memory systems and programs for nonblocking shared memories, areas that are not well understood.
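
The shape of that generalized interface can be modeled in a few lines. The class and method names below are illustrative, not the paper's formalism: requests carry tags, the processor issues them without waiting, and completions may return in any order:

    import itertools
    import random
    from collections import deque

    class NonblockingMemory:
        """Toy model: tagged reads/writes are issued without blocking and
        may complete out of request order."""

        def __init__(self):
            self.mem = {}
            self.pending = deque()          # issued but unanswered requests
            self.tags = itertools.count()

        def issue(self, op, addr, value=None):
            tag = next(self.tags)
            self.pending.append((tag, op, addr, value))
            return tag                      # the processor continues immediately

        def deliver_one(self):
            # Model out-of-order completion: service a random pending request.
            i = random.randrange(len(self.pending))
            self.pending.rotate(-i)
            tag, op, addr, value = self.pending.popleft()
            self.pending.rotate(i)
            if op == "write":
                self.mem[addr] = value
                return tag, "ack"
            return tag, self.mem.get(addr, 0)

    m = NonblockingMemory()
    t1 = m.issue("write", "x", 42)          # two accesses in flight at once
    t2 = m.issue("read", "x")
    while m.pending:
        print(m.deliver_one())              # responses may arrive out of order

Depending on which completion is delivered first, the read returns 0 or 42; the interface alone permits both outcomes, and it is precisely conditions like sequential consistency and linearizability, restated over such an interface, that determine which are legal.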

24 citations


Journal ArticleDOI
TL;DR: Two orphan management algorithms ensure that orphans only see states of the shared data that they could also see if they were not orphans, and when used in combination with any correct concurrency control algorithm, they guarantee that all computations, orphan as well as nonorphan, see consistent states.
Abstract: In a distributed system, node failures, network delays, and other unpredictable occurrences can result in orphan computations: subcomputations that continue to run but whose results are no longer needed. Several algorithms have been proposed to prevent such computations from seeing inconsistent states of the shared data. In this paper, two such orphan management algorithms are analyzed. The first is an algorithm implemented in the Argus distributed-computing system at MIT, and the second is an algorithm proposed at Carnegie-Mellon. The algorithms are described formally, and complete proofs of their correctness are given. The proofs show that the fundamental concepts underlying the two algorithms are very similar, in that each can be regarded as an implementation of the same high-level algorithm. By exploiting properties of information flow within transaction management systems, the algorithms ensure that orphans see only states of the shared data that they could also see if they were not orphans. When the algorithms are used in combination with any correct concurrency control algorithm, they guarantee that all computations, orphan as well as nonorphan, see consistent states of the shared data.
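
One mechanism often described for this problem, sketched here with illustrative names (this is not the Argus or Carnegie-Mellon implementation): nodes version their state with a crash count, computations piggyback the counts of the states they have depended on, and a computation holding a stale count is treated as an orphan and refused further access to shared data:

    crash_counts = {"A": 0, "B": 0}          # authoritative per-node crash counts

    def depends_on(node, seen):
        # Record that a computation has observed this node's current state.
        seen[node] = crash_counts[node]

    def is_orphan(seen):
        # A stale dependency means the computation may rest on lost state.
        return any(crash_counts[n] > c for n, c in seen.items())

    work = {}
    depends_on("A", work)                    # the computation reads state at node A
    crash_counts["A"] += 1                   # node A crashes and recovers
    print(is_orphan(work))                   # True: deny it access to shared data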

7 citations