
Showing papers by "Michael Merritt" published in 1999


Proceedings ArticleDOI
01 May 1999
TL;DR: A fast, wait-free (2k−1)-renaming algorithm that takes O(k²) time and makes extensive use of tools and techniques developed by Attiya and Fouren.

Abstract: We describe a fast, wait-free (2k−1)-renaming algorithm which takes O(k²) time, where k is the contention, the number of processes actually taking steps in a given run. The algorithm makes extensive use of tools and techniques developed by Attiya and Fouren [AF98]. Other extensions, including a fast (long-lived) atomic snapshot algorithm, are briefly discussed.
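
As context for the abstract above, the sketch below shows a Moir–Anderson-style "splitter", a standard shared-memory building block used in many wait-free renaming constructions. It is only an illustration of the kind of read/write primitive such algorithms compose, not the Afek–Merritt (2k−1)-renaming algorithm itself; the class and method names are invented for the example.

```python
# Illustrative sketch only: a Moir-Anderson-style "splitter", a standard
# building block in wait-free renaming algorithms. This is NOT the
# Afek-Merritt (2k-1)-renaming algorithm from the paper above.
# With CPython's GIL, each single attribute access below is effectively
# atomic, which suffices for a sketch.

class Splitter:
    """Among the processes that visit, at most one returns 'stop',
    not all return 'left', and not all return 'right'."""

    def __init__(self):
        self.last = None        # read/write register: last process to enter
        self.door_open = True   # read/write register: one-shot "door"

    def visit(self, pid):
        self.last = pid
        if not self.door_open:
            return "right"
        self.door_open = False          # close the door behind us
        if self.last == pid:
            return "stop"               # a stopping process may claim a name here
        return "left"
```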

70 citations


Journal ArticleDOI
01 Nov 1999
TL;DR: This result enables one to verify systems specified by I/O automata through model checkers such as COSPAN or SMV, which operate on models with synchronous parallel composition; the translation generalizes to other interleaving models, although in each case the translation map must match the specific model.
Abstract: The I/O automaton paradigm of Lynch and Tuttle models asynchrony through an interleaving parallel composition. The recognition that such interleaving models can in fact be viewed as special cases of synchronous parallel composition has been very limited. Let 𝒜 be any set of finite-state I/O automata drawing actions from a fixed finite set containing a subset Δ. In this article we establish a translation T from 𝒜 to a class ℒ of ω-automata closed under a synchronous parallel composition, for which T is monotonic with respect to implementation relative to Δ, and linear with respect to composition. Thus, for A^1, …, A^n, B^1, …, B^m ∈ 𝒜 with A = A^1 ∨ … ∨ A^n and B = B^1 ∨ … ∨ B^m, if Δ is the set of actions common to both A and B, then A implements B (in the sense of I/O automata) if and only if the ω-automaton language containment (T(A^1) ⊗ … ⊗ T(A^n)) ⊂ (T(B^1) ⊗ … ⊗ T(B^m)) obtains, where ∨ denotes the interleaving parallel composition on 𝒜 and ⊗ denotes the synchronous parallel composition on ℒ. For the class ℒ, we use the L-process model of ω-automata. This result enables one to verify systems specified by I/O automata through model checkers such as COSPAN or SMV, which operate on models with synchronous parallel composition. The translation technique generalizes to other interleaving models, although in each case the translation map must match the specific model.
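
To make the interleaving model concrete, here is a small, hypothetical sketch of composing two finite labeled transition systems in the I/O-automaton style: components synchronize on shared actions and interleave on the rest. It only illustrates the asynchronous source model of the translation; it does not implement the translation T to ω-automata/L-processes, and it ignores the input/output distinction and input-enabledness of real I/O automata. All names and data representations are invented for the example.

```python
# Toy sketch of interleaving composition: synchronize on shared actions,
# interleave on the rest. Not the paper's translation T.
from itertools import product

def interleaving_compose(A, B):
    """A and B are tuples (states, init, actions, trans),
    where trans is a set of triples (state, action, next_state)."""
    states_a, init_a, acts_a, trans_a = A
    states_b, init_b, acts_b, trans_b = B
    shared = acts_a & acts_b
    states = set(product(states_a, states_b))
    trans = set()
    for (p, q) in states:
        for (s, a, s2) in trans_a:
            if s == p and a not in shared:
                trans.add(((p, q), a, (s2, q)))        # A moves alone
        for (t, b, t2) in trans_b:
            if t == q and b not in shared:
                trans.add(((p, q), b, (p, t2)))        # B moves alone
        for (s, a, s2) in trans_a:
            for (t, b, t2) in trans_b:
                if s == p and t == q and a == b and a in shared:
                    trans.add(((p, q), a, (s2, t2)))   # synchronize on a shared action
    return states, (init_a, init_b), acts_a | acts_b, trans

# Example: a one-shot sender and a receiver synchronizing on "msg".
sender   = ({"s0", "s1"}, "s0", {"msg"}, {("s0", "msg", "s1")})
receiver = ({"r0", "r1"}, "r0", {"msg", "ack"},
            {("r0", "msg", "r1"), ("r1", "ack", "r0")})
composed = interleaving_compose(sender, receiver)
```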

14 citations


Proceedings ArticleDOI
01 May 1999
TL;DR: It is shown that sequential consistency and linearizability cannot be distinguished by the timing conditions previously considered in the context of counting networks; thus, in contexts where these constraints apply, it is possible to rely on the stronger semantics of linearizability, which simplifies proofs and enhances compositionality.
Abstract: We compare the impact of timing conditions on implementing sequentially consistent and linearizable counters using (uniform) counting networks in distributed systems. For counting problems in application domains which do not require linearizability but will run correctly if only sequential consistency is provided, the results of our investigation, and their potential payoffs, are threefold: First, we show that sequential consistency and linearizability cannot be distinguished by the timing conditions previously considered in the context of counting networks; thus, in contexts where these constraints apply, it is possible to rely on the stronger semantics of linearizability, which simplifies proofs and enhances compositionality. Second, we identify local timing conditions that support sequential consistency but not linearizability; thus, we suggest weaker, easily implementable timing conditions that are likely to be sufficient in many applications. Third, we show that any kind of synchronization that is too weak to support even sequential consistency may violate it significantly for some counting networks; hence, we identify timing conditions that are to be totally ruled out for specific applications that rely critically on either sequential consistency or linearizability.
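
For readers unfamiliar with counting networks, the sketch below shows the basic element, a balancer, and a width-2 counting-network counter built from a single balancer. It is a generic textbook-style illustration rather than a construction from the paper, and the class names are invented for the example.

```python
# Illustrative sketch only (not from the paper): a single "balancer", the
# basic element of a counting network. Tokens leave a balancer alternately
# on its top and bottom wires; in a width-2 network a token exiting wire i
# on that wire's v-th visit returns the value 2*v + i.
import threading

class Balancer:
    def __init__(self):
        self._toggle = 0
        self._lock = threading.Lock()       # models an atomic toggle

    def traverse(self):
        with self._lock:
            out = self._toggle
            self._toggle ^= 1
            return out                      # 0 = top wire, 1 = bottom wire

class Width2Counter:
    """Counting-network-style shared counter of width 2."""
    def __init__(self):
        self.balancer = Balancer()
        self.exit_counts = [0, 0]
        self.exit_locks = [threading.Lock(), threading.Lock()]

    def get_next(self):
        wire = self.balancer.traverse()
        with self.exit_locks[wire]:
            v = self.exit_counts[wire]
            self.exit_counts[wire] += 1
        return 2 * v + wire                 # tokens receive 0, 1, 2, 3, ...
```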

12 citations


Journal ArticleDOI
TL;DR: It is shown that in the context of multiobjects, fetch & add objects are less powerful than swap objects, which in turn are less powerful than queue objects. A restricted notion of implementation, called direct implementation, is introduced, and it is shown that if objects of type Y have a direct implementation from objects of type X, then Y-based multiobjects can also be implemented from X-based multiobjects.
Abstract: We consider shared memory systems that support multiobject operations in which processes may simultaneously access several objects in one atomic operation. We provide upper and lower bounds on the synchronization power (consensus number) of multiobject systems as a function of the type and the number of objects that may be simultaneously accessed in one atomic operation. These bounds imply that known classifications of component objects fail to characterize the synchronization power of their combination. In particular, we show that in the context of multiobjects, fetch & add objects are less powerful than swap objects, which in turn are less powerful than queue objects. This stands in contrast to the fact that swap can be implemented from fetch & add. Herein we introduce a restricted notion of implementation, called direct implementation. We show that, if objects of type Y have a direct implementation from objects of type X, then Y-based multiobjects can also be implemented from X-based multiobjects. Using this observation, we derive results such as: there are no direct implementations of swap or queue objects from any collection of commutative objects (e.g., fetch & add, test & set).
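
As a concrete instance of the consensus-number classification the abstract refers to, the following folklore sketch shows that a single swap object solves wait-free consensus for two processes (swap has consensus number 2). It is a standard illustration, not the paper's multiobject construction; the class names are invented for the example.

```python
# Illustrative sketch only (standard folklore, not the paper's construction):
# two-process wait-free consensus from a single swap object.
import threading

class SwapObject:
    def __init__(self, initial=None):
        self._value = initial
        self._lock = threading.Lock()       # models the atomic swap operation

    def swap(self, new):
        with self._lock:
            old = self._value
            self._value = new
            return old

class TwoProcessConsensus:
    def __init__(self):
        self._swap = SwapObject(initial=None)   # None plays the role of "empty"

    def decide(self, my_value):
        previous = self._swap.swap(my_value)
        # The first process to swap sees "empty" and wins with its own value;
        # the second sees the winner's value and adopts it.
        return my_value if previous is None else previous
```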

7 citations


Posted Content
TL;DR: This work presents a protocol whose cost is on the order of the number of tolerated failures, and shows how relaxing the consistency requirement to a probabilistic guarantee can reduce the associated cost, effectively to a constant.
Abstract: A secure reliable multicast protocol enables a process to send a message to a group of recipients such that all correct destinations receive the same message, despite the malicious efforts of fewer than a third of the total number of processes, including the sender. This has been shown to be a useful tool in building secure distributed services, albeit with a cost that typically grows linearly with the size of the system. For very large networks, for which this is prohibitive, we present two approaches for reducing the cost: First, we show a protocol whose cost is on the order of the number of tolerated failures. Secondly, we show how relaxing the consistency requirement to a probabilistic guarantee can reduce the associated cost, effectively to a constant.
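
For background, the sketch below shows the quorum arithmetic behind Bracha-style secure reliable broadcast, the classical kind of protocol whose system-size-linear cost the abstract refers to. It is not the failure-scaled or probabilistic protocol of the paper; the names are invented for the example, and the threshold shown is the standard one for n > 3f.

```python
# Hypothetical sketch of Bracha-style echo quorums: with n processes and
# f < n/3 Byzantine, a receiver accepts a message once it has collected
# matching echoes from more than (n + f) / 2 distinct processes, so any two
# accepting quorums intersect in at least one correct process.

def echo_threshold(n: int, f: int) -> int:
    """Smallest number of matching echoes that guarantees agreement."""
    assert 3 * f < n, "tolerates strictly fewer than a third faulty"
    return (n + f) // 2 + 1

class EchoCollector:
    def __init__(self, n: int, f: int):
        self.threshold = echo_threshold(n, f)
        self.echoes = {}                       # message -> set of sender ids

    def on_echo(self, sender: int, message: bytes) -> bool:
        """Record an echo; return True once the message can be accepted."""
        self.echoes.setdefault(message, set()).add(sender)
        return len(self.echoes[message]) >= self.threshold
```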

2 citations