
Showing papers by "Rachid Guerraoui published in 2008"


Proceedings ArticleDOI
20 Feb 2008
TL;DR: Opacity is defined as a property of concurrent transaction histories and its graph-theoretical interpretation is given, and it is proved that every single-version TM system that uses invisible reads and does not abort non-conflicting transactions requires, in the worst case, Ω(k) steps for an operation to terminate.
Abstract: Transactional memory (TM) is perceived as an appealing alternative to critical sections for general-purpose concurrent programming. Despite the large amount of recent work on TM implementations, however, very little effort has been devoted to precisely defining what guarantees these implementations should provide. A formal description of such guarantees is necessary in order to check the correctness of TM systems, as well as to establish TM optimality results and inherent trade-offs. This paper presents opacity, a candidate correctness criterion for TM implementations. We define opacity as a property of concurrent transaction histories and give its graph-theoretical interpretation. Opacity captures precisely the correctness requirements that have been intuitively described by many TM designers. Most TM systems we know of do ensure opacity. As a first approximation, opacity can be viewed as an extension of the classical database serializability property, with the additional requirement that even non-committed transactions are prevented from accessing inconsistent states. Capturing this requirement precisely, in the context of general objects, and without precluding pragmatic strategies that are often used by modern TM implementations, such as versioning, invisible reads, lazy updates, and open nesting, is not trivial. As a use case of opacity, we prove the first lower bound on the complexity of TM implementations. Basically, we show that every single-version TM system that uses invisible reads and does not abort non-conflicting transactions requires, in the worst case, Ω(k) steps for an operation to terminate, where k is the total number of objects shared by transactions. This (tight) bound precisely captures an inherent trade-off in the design of TM systems.
The bound also highlights a fundamental gap between systems in which transactions can be fully isolated from the outside environment, e.g., databases or certain specialized transactional languages, and systems that lack such isolation capabilities, e.g., general TM frameworks.
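The inconsistent-state requirement can be made concrete with a toy example (not from the paper): under a single-version TM without read validation, a doomed reader transaction may observe a mix of old and new values and misbehave before it ever gets a chance to abort.

```python
# Toy illustration (not from the paper) of the state opacity rules out.
# All committed transactions preserve the invariant x + y == 10, yet a
# single-version TM without read validation can let a doomed reader
# observe a mixed snapshot.

store = {"x": 5, "y": 5}

t1_x = store["x"]                  # transaction T1 reads x == 5

# T2 commits an invariant-preserving update between T1's two reads.
store["x"], store["y"] = 2, 8

t1_y = store["y"]                  # T1 reads y == 8

# T1 observes (5, 8): x + y == 13, a state no committed execution ever
# produced. Serializability tolerates this if T1 later aborts; opacity
# does not, because T1 might first divide by zero or loop on the bad values.
observed = t1_x + t1_y
print(observed)  # 13
```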

500 citations


Book ChapterDOI
22 Sep 2008
TL;DR: Adaptive Validation STM is introduced, which is probabilistically permissive with respect to opacity; that is, every opaque history is accepted by AVSTM with positive probability, andAVSTM guarantees lock freedom.
Abstract: We introduce the notion of permissiveness in transactional memories (TM). Intuitively, a TM is permissive if it never aborts a transaction when it need not. More specifically, a TM is permissive with respect to a safety property p if the TM accepts every history that satisfies p. Permissiveness, like safety and liveness, can be used as a metric to compare TMs. We illustrate that it is impractical to achieve permissiveness deterministically, and then show how randomization can be used to achieve permissiveness efficiently. We introduce Adaptive Validation STM (AVSTM), which is probabilistically permissive with respect to opacity; that is, every opaque history is accepted by AVSTM with positive probability. Moreover, AVSTM guarantees lock freedom. Owing to its permissiveness, AVSTM outperforms other STMs by up to 40% in read-dominated workloads in high contention scenarios. But, in low contention scenarios, the book-keeping done by AVSTM to achieve permissiveness makes AVSTM, on average, 20-30% worse than existing STMs.

93 citations


Proceedings ArticleDOI
18 Aug 2008
TL;DR: It is shown how to use f-AME to establish a shared secret group key, which can be used to implement a secure, reliable and authenticated long-lived communication service.
Abstract: We study the problem of secure communication in a multi-channel, single-hop radio network with a malicious adversary that can cause collisions and spoof messages. We assume no pre-shared secrets or trusted-third-party infrastructure. The main contribution of this paper is f-AME: a randomized (f)ast-(A)uthenticated (M)essage (E)xchange protocol that enables nodes to exchange messages in a reliable and authenticated manner. It runs in O(|E| t² log n) time and has optimal resilience to disruption, where E is the set of pairs of nodes that need to swap messages, n is the total number of nodes, C the number of channels, and t

82 citations


Proceedings ArticleDOI
14 Jun 2008
TL;DR: It is proved that OFTMs cannot ensure disjoint-access-parallelism (in a strict sense), which may result in artificial "hot spots" and thus limit the performance of OFTMs.
Abstract: This paper studies obstruction-free software transactional memory systems (OFTMs). These systems are appealing, for they combine the atomicity property of transactions with a liveness property that ensures the commitment of every transaction that eventually encounters no contention. We precisely define OFTMs and establish two of their fundamental properties. First, we prove that the consensus number of such systems is 2. This indicates that OFTMs cannot be implemented with plain read/write shared memory, on the one hand, but, on the other hand, do not require powerful universal objects, such as compare-and-swap. Second, we prove that OFTMs cannot ensure disjoint-access-parallelism (in a strict sense). This may result in artificial "hot spots" and thus limit the performance of OFTMs.

65 citations


Proceedings ArticleDOI
07 Jun 2008
TL;DR: It is shown that, under certain conditions, the verification problem can be reduced to a finite-state problem, and the use of the method is illustrated by proving the correctness of several STMs, including two-phase locking, DSTM, TL2, and optimistic concurrency control.
Abstract: Model checking software transactional memories (STMs) is difficult because of the unbounded number, length, and delay of concurrent transactions and the unbounded size of the memory. We show that, under certain conditions, the verification problem can be reduced to a finite-state problem, and we illustrate the use of the method by proving the correctness of several STMs, including two-phase locking, DSTM, TL2, and optimistic concurrency control. The safety properties we consider include strict serializability and opacity; the liveness properties include obstruction freedom, livelock freedom, and wait freedom. Our main contribution lies in the structure of the proofs, which are largely automated and not restricted to the STMs mentioned above. In a first step we show that every STM that enjoys certain structural properties either violates a safety or liveness requirement on some program with two threads and two shared variables, or satisfies the requirement on all programs. In the second step we use a model checker to prove the requirement for the STM applied to a most general program with two threads and two variables. In the safety case, the model checker constructs a simulation relation between two carefully constructed finite-state transition systems, one representing the given STM applied to a most general program, and the other representing a most liberal safe STM applied to the same program. In the liveness case, the model checker analyzes fairness conditions on the given STM transition system.

62 citations


Journal ArticleDOI
TL;DR: It is shown that wait-free contention managers, unlike their non-blocking counterparts, impose an inherent non-trivial overhead even in contention-free executions.
Abstract: It is considered good practice in concurrent computing to devise shared object implementations that ensure a minimal obstruction-free progress property and delegate the task of boosting liveness to independent generic oracles called contention managers. This paper determines necessary and sufficient conditions to implement wait-free and non-blocking contention managers, i.e., contention managers that ensure wait-freedom (resp. non-blockingness) of any associated obstruction-free object implementation. The necessary conditions hold even when universal objects (like compare-and-swap) or random oracles are available in the implementation of the contention manager. On the other hand, the sufficient conditions assume only basic read/write objects, i.e., registers. We show that failure detector ⋄P is the weakest to convert any obstruction-free algorithm into a wait-free one, and Ω*, a new failure detector which we introduce in this paper, and which is strictly weaker than ⋄P but strictly stronger than Ω, is the weakest to convert any obstruction-free algorithm into a non-blocking one. We also address the issue of minimizing the overhead imposed by contention management in low contention scenarios. We propose two intermittent failure detectors I_Ω* and I_⋄P that are in a precise sense equivalent to, respectively, Ω* and ⋄P, but allow for reducing the cost of failure detection in eventually synchronous systems when there is little contention. We present two contention managers: a non-blocking one and a wait-free one, that use, respectively, I_Ω* and I_⋄P. When there is no contention, the first induces very little overhead whereas the second induces some non-trivial overhead. We show that wait-free contention managers, unlike their non-blocking counterparts, impose an inherent non-trivial overhead even in contention-free executions.

61 citations


Proceedings ArticleDOI
18 Aug 2008
TL;DR: It is shown that an adaptive adversary can significantly hamper the spreading of a rumor, while an oblivious adversary cannot, and this latter fact implies that there exist message-efficient asynchronous (randomized) consensus protocols, in the context of an oblivious adversaries.
Abstract: In this paper, we study the complexity of gossip in an asynchronous, message-passing fault-prone distributed system. In short, we show that an adaptive adversary can significantly hamper the spreading of a rumor, while an oblivious adversary cannot. This latter fact implies that there exist message-efficient asynchronous (randomized) consensus protocols in the context of an oblivious adversary. In more detail, we summarize our results as follows. If the adversary is adaptive, we show that a randomized asynchronous gossip algorithm cannot terminate in fewer than O(f(d + δ)) time steps unless Ω(n + f²) messages are exchanged, where n is the total number of processes, f is the number of tolerated crash failures, d is the maximum communication delay for the specific execution in question, and δ is the bound on relative process speeds in that execution. The lower bound result is to be contrasted with deterministic synchronous gossip algorithms that, even against an adaptive adversary, require only O(polylog n) time steps and O(n polylog n) messages. In the case of an oblivious adversary, we present three different randomized, asynchronous algorithms that provide different trade-offs between time complexity and message complexity. The first algorithm is based on the epidemic paradigm, and completes in O((n/(n-f)) log² n (d + δ)) time steps using O(n log³ n (d + δ)) messages, with high probability. The second algorithm relies on more rapid dissemination of the rumors, yielding a constant-time (w.r.t. n) gossip protocol: for every constant ε

55 citations


Proceedings ArticleDOI
14 Jun 2008
TL;DR: A generalization of the atomic snapshot object, called the partial snapshot object, is introduced; it stores a vector of values, and implementations of its partial scan operation are investigated that are more efficient than the complete scans of traditional snapshot objects.
Abstract: We introduce a generalization of the atomic snapshot object, which we call the partial snapshot object. This object stores a vector of values. Processes may write components of the vector individually or atomically read any subset of the components. We investigate implementations of the latter partial scan operation that are more efficient than the complete scans of traditional snapshot objects. We present an algorithm that is based on a new implementation of the active set abstraction, which may be of independent interest.
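A sequential sketch may help fix the interface (illustrative only: the paper's contribution is making the partial scan atomic and efficient under concurrency, which this single-threaded toy ignores).

```python
# Sequential sketch of the partial snapshot interface (illustrative
# only: the whole point of the paper is making partial_scan atomic and
# cheap under concurrency, which this single-threaded toy ignores).

class PartialSnapshot:
    def __init__(self, n):
        self.components = [None] * n

    def update(self, i, value):
        """Write component i individually."""
        self.components[i] = value

    def partial_scan(self, indices):
        """Read only the requested subset of components."""
        return {i: self.components[i] for i in indices}

    def scan(self):
        """A traditional snapshot's full scan: the all-indices case."""
        return self.partial_scan(range(len(self.components)))

ps = PartialSnapshot(4)
ps.update(0, "a")
ps.update(2, "c")
print(ps.partial_scan([0, 2]))  # {0: 'a', 2: 'c'}
```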

45 citations


Book ChapterDOI
22 Sep 2008
TL;DR: The weakest failure detector to solve the set-agreement problem in a message-passing system where processes may fail by crashing is determined.
Abstract: In the set-agreement problem, n processes seek to agree on at most n - 1 different values. This paper determines the weakest failure detector to solve this problem in a message-passing system where processes may fail by crashing. This failure detector, called the Loneliness detector and denoted L, outputs one of two values, "true" or "false", such that: (1) there is at least one process where L always outputs "false", and (2) if only one process is correct, L eventually outputs "true" at this process.
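The two properties of L can be stated as a small executable check against hand-written detector histories. This merely tests the specification quoted above, not an implementation of a failure detector; the helper name satisfies_L is ours.

```python
# Toy check of the two properties of the Loneliness detector L against
# hand-written sample detector histories. This merely tests the
# specification quoted above; it is not an implementation of L, and the
# helper name satisfies_L is ours.

def satisfies_L(outputs, correct):
    """outputs[p]: sequence of values L emits at process p over time;
    correct: the set of processes that never crash."""
    # (1) There is at least one process where L always outputs "false".
    prop1 = any(all(v == "false" for v in hist) for hist in outputs.values())
    # (2) If only one process is correct, L eventually outputs "true"
    #     at that process (here: "true" appears at some point).
    prop2 = True
    if len(correct) == 1:
        (p,) = correct
        prop2 = "true" in outputs[p]
    return prop1 and prop2

# A run where process 1 crashes early and process 2, the only correct
# process, eventually feels "lonely".
outputs = {1: ["false", "false"],
           2: ["false", "false", "true", "true"]}
print(satisfies_L(outputs, correct={2}))  # True
```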

39 citations


Book ChapterDOI
22 Sep 2008
TL;DR: It is shown that tolerating asynchronous periods does not increase storage overhead during synchronous periods, and that the storage used is lower in synchronous periods, which are considered common in practice, than in asynchronous periods.
Abstract: We study erasure-coded atomic register implementations in an asynchronous crash-recovery model. Erasure coding provides a cheap and space-efficient way to tolerate failures in a distributed system. This paper presents ORCAS, Optimistic eRasure-Coded Atomic Storage, which consists of two separate implementations, ORCAS-A and ORCAS-B. In terms of storage space used, ORCAS-A is more efficient in systems where we expect a large number of concurrent writes, whereas ORCAS-B is more suitable if not many writes are invoked concurrently. Compared to replication-based implementations, both ORCAS implementations significantly save on storage space. The implementations are optimistic in the sense that the storage used is lower in synchronous periods, which are considered common in practice, than in asynchronous periods. Indirectly, we show that tolerating asynchronous periods does not increase storage overhead during synchronous periods.
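The space argument behind erasure coding can be illustrated with the simplest possible code, a single XOR parity over two data fragments; this sketch is unrelated to the actual ORCAS protocols.

```python
# Minimal erasure-coding sketch: two data fragments plus one XOR parity
# fragment. Any single fragment can be lost and the data rebuilt, at
# 1.5x storage instead of the 3x of keeping three full replicas. This
# only illustrates the space argument, not the ORCAS algorithms.

def encode(data: bytes):
    """Split data into two halves and derive an XOR parity fragment."""
    half = (len(data) + 1) // 2
    a, b = data[:half], data[half:].ljust(half, b"\0")
    parity = bytes(x ^ y for x, y in zip(a, b))
    return a, b, parity

data = b"atomic register"
a, b, parity = encode(data)

# Suppose fragment b is lost: rebuild it from a and the parity.
rebuilt_b = bytes(x ^ y for x, y in zip(a, parity))
assert (a + rebuilt_b)[:len(data)] == data

print(len(a) + len(b) + len(parity), "bytes stored for", len(data), "bytes of data")
```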

37 citations


Book ChapterDOI
19 Aug 2008
TL;DR: This paper presents the first deterministic specification automata for strict serializability and opacity in STMs, using an antichain-based tool, and shows these deterministic specifications to be equivalent to more intuitive, nondeterministic specification automata (which are too large to be determinized automatically).
Abstract: Software transactional memory (STM) offers a disciplined concurrent programming model for exploiting the parallelism of modern processor architectures. This paper presents the first deterministic specification automata for strict serializability and opacity in STMs. Using an antichain-based tool, we show our deterministic specifications to be equivalent to more intuitive, nondeterministic specification automata (which are too large to be determinized automatically). Using deterministic specification automata, we obtain a complete verification tool for STMs. We also show how to model and verify contention management within STMs. We automatically check the opacity of popular STM algorithms, such as TL2 and DSTM, with a universal contention manager. The universal contention manager is nondeterministic and establishes correctness for all possible contention management schemes.

Journal ArticleDOI
TL;DR: This position paper reflects about the distinguishing features of these memory transactions with respect to their database cousins.
Abstract: Transactions are back in the spotlight! They are emerging in concurrent programming languages under the name of transactional memory (TM). Their new role? Concurrency control on new multi-core processors. From afar they look the same as good ol' database transactions. But are they really? In this position paper, we reflect about the distinguishing features of these memory transactions with respect to their database cousins. Disclaimer: By its very nature, this position paper does not try to avoid subjectivity.

Proceedings ArticleDOI
12 Jun 2008
TL;DR: Flexotasks is presented, a single system that allows different isolation policies and mechanisms to be combined in an orthogonal manner, subsuming four previously proposed models as well as making it possible to use new combinations best suited to the needs of particular applications.
Abstract: The disadvantages of unconstrained shared-memory multi-threading in Java, especially with regard to latency and determinism in realtime systems, have given rise to a variety of language extensions that place restrictions on how threads allocate, share, and communicate memory, leading to order-of-magnitude reductions in latency and jitter. However, each model makes different trade-offs with respect to expressiveness, efficiency, enforcement, and latency, and no one model is best for all applications. In this paper we present Flexible Task Graphs (Flexotasks), a single system that allows different isolation policies and mechanisms to be combined in an orthogonal manner, subsuming four previously proposed models as well as making it possible to use new combinations best suited to the needs of particular applications. We evaluate our implementation on top of the IBM WebSphere Real Time Java virtual machine using both a microbenchmark and a 30 KLOC avionics collision detector. We show that Flexotasks are capable of executing periodic threads at 10 KHz with a standard deviation of 1.2μs and that they achieve significantly better performance than RTSJ's scoped memory constructs while remaining impervious to interference from global garbage collection.

01 Jan 2008
TL;DR: It is highlighted that providing genuinely unbounded transactions is a hard and complicated task, but one full of interesting technical and research problems, and that solutions to these problems should be evaluated against large-scale benchmarks, like STMBench7.
Abstract: Software transactional memory (STM) is a promising technique for writing concurrent programs. So far, most STM approaches have been experimentally evaluated with small-scale microbenchmarks. In this paper, we present several surprising results from implementing and experimenting with STMBench7 – a large scale benchmark for STMs. First, all STMs we used crashed, at some point or another, when running STMBench7. This was mainly due to memory management limitations. This means that, in practice, none of the tested STMs was truly unbounded and dynamic, which are the actual motivations for moving away from hardware transactional memories (HTM). Performance results we gathered also differ from previously published results. We found, for instance, that conflict detection and contention management have the biggest performance impact, way more than other aspects, like the choice of lock-based or obstruction-free implementation, as typically highlighted. Implementation of STMBench7 with various STMs also revealed several programming related issues such as the lack of support for external libraries and only partial support for object oriented features. These issues prove to be a major limitation when adapting STMs for production use. Our work is by no means a bashing of prior work on STMs. All STMs we considered are very well designed and implemented. What we highlight here is that providing genuinely unbounded transactions is a hard and complicated task, but full of interesting technical and research problems. Solutions to these problems should be evaluated against large scale benchmarks, like STMBench7.

Book ChapterDOI
22 Sep 2008
TL;DR: The first optimally-resilient algorithm, ASAP, is presented, which solves consensus as soon as possible in an eventually synchronous system, i.e., a system that, from some time GST onwards, delivers messages in a timely fashion.
Abstract: This paper addresses the following question: what is the minimum-sized synchronous window needed to solve consensus in an otherwise asynchronous system? In answer to this question, we present the first optimally-resilient algorithm, ASAP, that solves consensus as soon as possible in an eventually synchronous system, i.e., a system that, from some time GST onwards, delivers messages in a timely fashion. ASAP guarantees that, in an execution with at most f failures, every process decides no later than round GST + f + 2, which is optimal.

Journal ArticleDOI
TL;DR: This paper addresses the question of the weakest failure detector to solve consensus among a number k > n of processes that communicate using shared objects of a type T with consensus power n, and shows that Ω_n is necessary, and also the weakest, to boost the power of (n + 1)-ported one-shot deterministic types from n to any k > n.
Abstract: The power of an object type T can be measured as the maximum number n of processes that can solve consensus using only objects of T and registers. This number, denoted cons(T), is called the consensus power of T. This paper addresses the question of the weakest failure detector to solve consensus among a number k > n of processes that communicate using shared objects of a type T with consensus power n. In other words, we seek a failure detector that is sufficient and necessary to "boost" the consensus power of a type T from n to k. It was shown in Neiger (Proceedings of the 14th Annual ACM Symposium on Principles of Distributed Computing (PODC), pp. 100-109, 1995) that a certain failure detector, denoted Ω_n, is sufficient to boost the power of a type T from n to k, and it was conjectured that Ω_n was also necessary. In this paper, we prove this conjecture for one-shot deterministic types. We first show that, for any one-shot deterministic type T with cons(T) = n, Ω_n is necessary to boost the power of T from n to any k > n. Our result generalizes, in a precise sense, the result of the weakest failure detector to solve consensus in asynchronous message-passing systems (Chandra et al. in J ACM 43(4):685-722, 1996). As a corollary, we show that Ω_t is the weakest failure detector to boost the resilience level of a distributed shared memory system, i.e., to solve consensus among n > t processes using (t - 1)-resilient objects of consensus power t.

Journal ArticleDOI
TL;DR: This article presents a general characterization of indulgence in an abstract computing model that encompasses various communication and resilience schemes and uses this characterization to establish several results about the inherent power and limitations of indulgent algorithms.
Abstract: An indulgent algorithm is a distributed algorithm that, besides tolerating process failures, also tolerates unreliable information about the interleaving of the processes. This article presents a general characterization of indulgence in an abstract computing model that encompasses various communication and resilience schemes. We use our characterization to establish several results about the inherent power and limitations of indulgent algorithms.

01 Oct 2008
TL;DR: The trials and tribulations of Alice and Bob capture the fundamental difficulty shared by several n-player problems, including reliable broadcast, leader election, static k-selection, and t-resilient consensus, and provide round complexity lower bounds—and (nearly) tight upper bounds—for each of those problems.
Abstract: How efficiently can a malicious device disrupt a single-hop wireless network? Imagine two honest players attempting to exchange information in the presence of a malicious adversary that can disrupt communication by jamming or overwriting messages. Assume the adversary has a broadcast budget of β, unknown to the honest players. We show that communication can be delayed for 2β + Θ(lg |V|) rounds, where V is the set of values that the honest players may transmit. We then derive, via reduction to this 3-player game, round complexity lower bounds for several classical n-player problems: 2β + Θ(lg |V|) for reliable broadcast, 2β + Θ(log n) for leader election, and 2β + Θ(k lg(|V|/k)) for static k-selection. We also consider an extension of our adversary model that includes up to t crash failures. Here we show a bound of 2β + Θ(t) rounds for binary consensus. We provide tight, or nearly tight, upper bounds for all four problems. From these results we can derive bounds on the efficiency of malicious disruption, stated in terms of two new metrics: jamming gain (the ratio of rounds delayed to adversarial broadcasts) and disruption-free complexity (the rounds required to terminate in the special case of no disruption). Two key conclusions of this study: (1) all the problems considered feature semantic vulnerabilities that allow an adversary to disrupt termination more efficiently than simple jamming (i.e., all have a jamming gain greater than 1); and (2) for all the problems considered, the round complexity grows linearly with the number of bits to be communicated (i.e., all have an Ω(lg |V|) or Ω(lg n) disruption-free complexity).

Proceedings ArticleDOI
18 Aug 2008
TL;DR: It is shown that the information about process failures that is necessary and sufficient to implement a register shared by two particular processes is sufficient but not necessary to implement set agreement, which indicates that the register abstraction is too weak to implement the set agreement one.
Abstract: One of the most celebrated results of the theory of distributed computing is the impossibility, in an asynchronous system of n processes that communicate through shared memory registers, of solving the set agreement problem, where the processes need to decide on up to n-1 among their n initial values. In short, the result indicates that the register abstraction is too weak to implement the set agreement one. This paper explores the relation between these abstractions in a message passing system where a register is not a given physical device but is rather itself implemented by processes communicating through message passing. We show that, maybe surprisingly, the information about process failures that is necessary and sufficient to implement a register shared by two particular processes is sufficient but not necessary to implement set agreement. We later generalize this result by considering k-set agreement, where the processes can decide on up to k values, and comparing it with a register shared by any particular subset of 2k processes. We prove that, for 1 ≤ k ≤ n/2, (a) any failure information that is sufficient to implement a register shared by 2k processes is sufficient to implement (n-k)-set agreement but (b) failure information that is sufficient for (n-k)-set agreement is not sufficient for a register shared by 2k processes. We also prove that (c) failure information that is sufficient for a register shared by 2k processes is not sufficient for (n-k-1)-set agreement.

Journal ArticleDOI
TL;DR: The notion of atomicity in the crash-recovery context is revisited and a generic algorithm that emulates an atomic memory is introduced that is instantiated for various settings according to whether processes have access to local stable storage, and whether, in every execution, a sufficient number of processes are assumed not to crash.
Abstract: This article considers the problem of robustly emulating a shared atomic memory over a distributed message-passing system where processes can fail by crashing and possibly recover. We revisit the notion of atomicity in the crash-recovery context and introduce a generic algorithm that emulates an atomic memory. The algorithm is instantiated for various settings according to whether processes have access to local stable storage, and whether, in every execution of the algorithm, a sufficient number of processes are assumed not to crash. We establish the optimality of specific instances of our algorithm in terms of resilience, log complexity (number of stable storage accesses needed in every read or write operation), as well as time complexity (number of communication steps needed in every read or write operation). The article also discusses the impact of considering a multiwriter versus a single-writer memory, as well as the impact of weakening the consistency of the memory by providing safe or regular semantics instead of atomicity.
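The crash-stop flavor of such an emulation is the classic majority-quorum scheme, sketched below as a purely sequential simulation. This is not the article's crash-recovery algorithm; it only illustrates the two-phase read and the role of majorities, and for simplicity it always "contacts" the same fixed majority, whereas a real implementation uses any majority and relies on the fact that two majorities always intersect.

```python
# Classic majority-quorum emulation of a single-writer atomic register,
# sketched as a sequential simulation in the crash-stop setting. This is
# the flavor of algorithm the article builds on, not its crash-recovery
# protocol; for simplicity we always "contact" the first `majority`
# replicas, whereas the real algorithm uses any majority and relies on
# the fact that two majorities always intersect.

class Replica:
    def __init__(self):
        self.ts, self.val = 0, None      # timestamp and stored value

def write(replicas, majority, ts, val):
    # The (single) writer tags each value with an increasing timestamp
    # and delivers it to a majority; replicas keep only newer values.
    for r in replicas[:majority]:
        if ts > r.ts:
            r.ts, r.val = ts, val

def read(replicas, majority):
    # Phase 1: query a majority, adopt the highest-timestamped value.
    ts, val = max(((r.ts, r.val) for r in replicas[:majority]),
                  key=lambda tv: tv[0])
    # Phase 2: write it back, so no later read can return an older value.
    write(replicas, majority, ts, val)
    return val

replicas = [Replica() for _ in range(5)]   # tolerates 2 crashes
write(replicas, majority=3, ts=1, val="A")
print(read(replicas, majority=3))  # A
```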

Proceedings ArticleDOI
07 Jan 2008
TL;DR: This work proposes an extensible scheme, named ESE, that ensures efficient insertion of new types, efficient subtyping tests, and small space usage; ESE has comparable insertion times to the most efficient existing dynamic scheme while outperforming it by a factor of 2-3 in terms of space usage.
Abstract: The subtyping test consists of checking whether a type t is a descendant of a type r (Agrawal et al. 1989). We study how to perform such a test efficiently, assuming a dynamic hierarchy where new types are inserted at run-time. The goal is to achieve time and space efficiency, even as new types are inserted. We propose an extensible scheme, named ESE, that ensures (1) efficient insertion of new types, (2) efficient subtyping tests, and (3) small space usage. On the one hand, ESE provides comparable test times to the most efficient existing static schemes (e.g., Zibin et al. (2001)). On the other hand, ESE has comparable insertion times to the most efficient existing dynamic scheme (Baehni et al. 2007), while outperforming it by a factor of 2-3 in terms of space usage.
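For contrast with ESE, the naive dynamic scheme simply stores each type's full ancestor set: insertion and tests are trivial, but space grows linearly with the hierarchy. This is a hypothetical sketch for illustration, not the ESE scheme.

```python
# Naive dynamic subtyping sketch (NOT the ESE scheme): each type keeps
# its full transitive ancestor set, so run-time insertion and subtyping
# tests are trivial, but per-type space is linear in the hierarchy
# size -- the very cost that compact schemes such as ESE avoid.

class Type:
    def __init__(self, name, parents=()):
        self.name = name
        self.ancestors = {self}          # every type descends from itself
        for p in parents:
            self.ancestors |= p.ancestors

def is_subtype(t, r):
    """Is type t a descendant of (or equal to) type r?"""
    return r in t.ancestors

obj = Type("Object")
num = Type("Number", [obj])
integer = Type("Integer", [num])         # inserted at run-time
print(is_subtype(integer, obj), is_subtype(obj, num))  # True False
```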

Book ChapterDOI
15 Dec 2008
TL;DR: Byzantine fault-tolerant state machine replication has reached a reasonable level of maturity as an appealing, software-based technique for building robust distributed services with commodity hardware.
Abstract: Byzantine fault-tolerant state machine replication (BFT) has reached a reasonable level of maturity as an appealing, software-based technique for building robust distributed services with commodity hardware. The current tendency, however, is to implement a new BFT protocol from scratch for each new application and network environment. This is notoriously difficult. Modern BFT protocols each require more than 20,000 lines of sophisticated C code, and proving their correctness involves an entire PhD. Maintaining and testing each new protocol seems just impossible.

Proceedings ArticleDOI
17 Jun 2008
TL;DR: The arbitrary protocol is proposed: a tree-based replica control protocol that can be configured based on the frequencies of read and write operations in order to provide lower system load than existing tree replication protocols, yet with comparable cost and availability.
Abstract: Traditional replication protocols that arrange logically the replicas into a tree structure have reasonable availability, low communication costs but induce high system load. We propose in this paper the arbitrary protocol: a tree-based replica control protocol that can be configured based on the frequencies of read and write operations in order to provide lower system load than existing tree replication protocols, yet with comparable cost and availability. Our protocol enables the shifting from one configuration into another by just modifying the structure of the tree. There is no need to implement a new protocol whenever the frequencies of read and write operations change. At the heart of our protocol lies the new idea of logical and physical levels in a tree. In short, read operations are carried out on any physical node of every physical level of the tree whereas the write operation is performed on all physical nodes of a single physical level of the tree. We discuss optimal configurations, proving in particular a new lower bound, of independent interest, for the case of a binary tree.

Book ChapterDOI
19 Aug 2008
TL;DR: SOAR enables checking the atomicity of a single-writer multi-reader register implementation with polynomial complexity, and outperforms comparable approaches by more than an order of magnitude already in executions with only 6 read/write operations.
Abstract: This paper presents SOAR: the first oblivious atomicity assertion with polynomial complexity. SOAR enables checking the atomicity of a single-writer multi-reader register implementation. The basic idea underlying the low overhead induced by SOAR lies in greedily checking, in a backward manner, specific points of an execution where register operations could be linearized, rather than exploring all possible precedence relations among these. We illustrate the use of SOAR by implementing it in +CAL. The resulting automatic verification outperforms comparable approaches by more than an order of magnitude already in executions with only 6 read/write operations. This difference increases to 3-4 orders of magnitude in the "negative" scenario, i.e., when checking some non-atomic execution, with only 5 operations. For example, checking atomicity of every possible execution of a single-writer single-reader (SWSR) register with at most 2 write and 3 read operations with the state-of-the-art oblivious assertion takes more than 58 hours to complete, whereas SOAR takes just 9 seconds.

Journal ArticleDOI
TL;DR: It is shown that any set of deterministic object types that, combined with registers, implements weak consensus also implements consensus; in contrast, there is a non-deterministic type that implements weak consensus among any number of processes but, combined with registers, cannot implement consensus even among two processes.

01 Jan 2008
TL;DR: The problem of detecting freeriders in peer-to-peer epidemic content dissemination applications, such as gossip-based and mesh-based systems, is addressed, and LiFT, a Lightweight Freerider-Tracking Protocol relying on accountability, is presented; it is the first protocol that detects freeriders in a randomized push-based application without using cryptography.
Abstract: This paper addresses the problem of detecting freeriders in peer-to-peer epidemic content dissemination applications, such as gossip-based and mesh-based systems. We present LiFT, a Lightweight Freerider-Tracking Protocol, relying on accountability: each peer logs a digest of its past interactions with other peers and tracks abnormal behaviors by cross-checking its history log with other peers. LiFT is the first protocol that detects freeriders in a randomized push-based application without using cryptography. Such applications are challenging because incentives à la Tit-for-Tat are not applicable and the random selection of communication partners prevents the use of any deterministic verification of the logs. We present a theoretical analysis of LiFT, backed up by extensive simulations. Additionally, we report on our experiments on the illustrative example of a gossip-based content dissemination protocol. In this setting, we show that after 50 gossip periods, LiFT detects freeriders (that freeride the protocol by 10%) over 99% of the time and that only 0.1% of the honest nodes are wrongfully accused.