
Showing papers by "Dan Alistarh" published in 2012


Journal ArticleDOI
TL;DR: This paper introduces a novel technique for simulating, in a fault-prone asynchronous shared memory, executions of an asynchronous and failure-prone message-passing system in which some fragments appear synchronous to some processes, and derives the size of the minimal window of synchrony needed to solve set agreement.
Abstract: Set agreement is a fundamental problem in distributed computing in which processes collectively choose a small subset of values from a larger set of proposals. The impossibility of fault-tolerant set agreement in asynchronous networks is one of the seminal results in distributed computing. In synchronous networks, too, the complexity of set agreement has been a significant research challenge that has now been resolved. Real systems, however, are neither purely synchronous nor purely asynchronous. Rather, they tend to alternate between periods of synchrony and periods of asynchrony. Nothing specific is known about the complexity of set agreement in such a "partially synchronous" setting. In this paper, we address this challenge, presenting the first (asymptotically) tight bound on the complexity of set agreement in such systems. We introduce a novel technique for simulating, in a fault-prone asynchronous shared memory, executions of an asynchronous and failure-prone message-passing system in which some fragments appear synchronous to some processes. We use this simulation technique to derive a lower bound on the round complexity of set agreement in a partially synchronous system by a reduction from asynchronous wait-free set agreement. Specifically, we show that every set agreement protocol requires at least $\lfloor\frac{t}{k}\rfloor + 2$ synchronous rounds to decide. We present an (asymptotically) matching algorithm that relies on a distributed asynchrony detection mechanism to decide as soon as possible during periods of synchrony. From these two results, we derive the size of the minimal window of synchrony needed to solve set agreement. By relating synchronous, asynchronous and partially synchronous environments, our simulation technique is of independent interest. In particular, it allows us to obtain a new lower bound on the complexity of early deciding k-set agreement complementary to that of Gafni et al. (SIAM J. Comput. 40(1):63–78, 2011), and to re-derive the combinatorial topology lower bound of Guerraoui et al. (Theor. Comput. Sci. 410(6–7):570–580, 2009) in an algorithmic way.
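
For illustration only (the numeric instance below is ours, not the paper's), the stated lower bound can be instantiated directly: with t the number of tolerated crash failures and k the set agreement parameter,

$$\left\lfloor \frac{t}{k} \right\rfloor + 2 \quad\text{synchronous rounds are needed, e.g. } t = 10,\ k = 3 \;\Rightarrow\; \left\lfloor \tfrac{10}{3} \right\rfloor + 2 = 5.$$

Setting k = 1 specializes the same formula to t + 2 rounds for the consensus case.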

23 citations


Proceedings ArticleDOI
20 Oct 2012
TL;DR: The To-Do Tree concurrent data structure is introduced, improving on the best known randomized and deterministic upper bounds; a new analysis technique handles the complex dependencies between the processes' coin flips and their scheduling and tightly bounds the work needed to perform subsets of the tasks.
Abstract: Asynchronous task allocation is a fundamental problem in distributed computing in which p asynchronous processes must execute a set of m tasks. Also known as write-all or do-all, this problem has been studied extensively, both independently and as a key building block for various distributed algorithms. In this paper, we break new ground on this classic problem: we introduce the To-Do Tree concurrent data structure, which improves on the best known randomized and deterministic upper bounds. In the presence of an adaptive adversary, the randomized To-Do Tree algorithm has $O(m + p \log p \log^2 m)$ work complexity. We then show that there exists a deterministic variant of the To-Do Tree algorithm with work complexity $O(m + p \log^5 m \log^2 \max(m, p))$. For all values of m and p, our algorithms are within log factors of the $\Omega(m + p \log p)$ lower bound for this problem. The key technical ingredient in our results is a new approach for analyzing concurrent executions against a strong adaptive scheduler. This technique allows us to handle the complex dependencies between the processes' coin flips and their scheduling, and to tightly bound the work needed to perform subsets of the tasks.
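
Below is a minimal, sequential sketch of the idea behind such a structure (our own simplification for illustration; the actual To-Do Tree is concurrent, uses atomic operations, and its random descent is analyzed against an adaptive adversary): a complete binary tree over the m task slots in which each internal node counts the unfinished tasks below it, and a process repeatedly descends from the root toward a random unfinished leaf.

import random

# Illustrative sketch only: a sequential "to-do tree" over m task slots.
# The names and structure here are our assumptions, not the paper's code;
# they only show the basic shape: a complete binary tree whose internal
# nodes remember how many unfinished tasks remain in their subtree.
class ToDoTree:
    def __init__(self, tasks):
        self.m = len(tasks)
        self.tasks = tasks
        # Round the leaf level up to a power of two for an array-based tree.
        self.size = 1
        while self.size < self.m:
            self.size *= 2
        # pending[v] = number of unfinished leaves in the subtree rooted at v.
        self.pending = [0] * (2 * self.size)
        for i in range(self.m):
            self.pending[self.size + i] = 1
        for v in range(self.size - 1, 0, -1):
            self.pending[v] = self.pending[2 * v] + self.pending[2 * v + 1]

    def do_one(self):
        """Descend from the root toward a random unfinished leaf and run its task."""
        if self.pending[1] == 0:
            return False  # all tasks are done
        v = 1
        while v < self.size:
            children = [c for c in (2 * v, 2 * v + 1) if self.pending[c] > 0]
            v = random.choice(children)
        self.tasks[v - self.size]()   # execute the chosen task
        while v >= 1:                 # propagate completion back up to the root
            self.pending[v] -= 1
            v //= 2
        return True

# Usage: keep doing tasks until every task has been performed.
tree = ToDoTree([lambda i=i: print(f"task {i}") for i in range(5)])
while tree.do_one():
    pass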

20 citations


Book ChapterDOI
30 Jun 2012
TL;DR: This paper presents the first early-deciding upper bounds for synchronous renaming, in which the running time adapts to the actual number of failures f in the execution, and shows that renaming can be solved in constant time if the number of failures f is limited to $O(\sqrt{n})$, while for general f ≤ n − 1 renaming can always be solved in $O(\log f)$ communication rounds.
Abstract: Renaming is a fundamental problem in distributed computing, in which a set of n processes need to pick unique names from a namespace of limited size. In this paper, we present the first early-deciding upper bounds for synchronous renaming, in which the running time adapts to the actual number of failures f in the execution. We show that, surprisingly, renaming can be solved in constant time if the number of failures f is limited to $O(\sqrt{n})$, while for general f ≤ n − 1 renaming can always be solved in $O(\log f)$ communication rounds. In the wait-free case, i.e. for f = n − 1, our upper bounds match the $\Omega(\log n)$ lower bound of Chaudhuri et al. [13].
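
Read as a function of the actual number of failures f, the results above say (this is only a restatement of the bounds quoted in the abstract, not a new result):

$$T(f) = \begin{cases} O(1) & \text{if } f = O(\sqrt{n}), \\ O(\log f) & \text{for any } f \le n - 1, \end{cases}$$

so in the wait-free case f = n − 1 the upper bound becomes $O(\log n)$, matching the $\Omega(\log n)$ lower bound.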

8 citations


DOI
01 Jan 2012
TL;DR: It is suggested that deterministic implementations of shared-memory data structures do not scale well in terms of worst-case time complexity, and a promising direction for future work is to extend randomized renaming techniques to obtain efficient implementations of concurrent data structures.
Abstract: One of the key trends in computing over the past two decades has been increased distribution, both at the processor level, where multi-core architectures are now the norm, and at the system level, where many key services are currently distributed over multiple machines. Thus, understanding the power and limitations of computing in a concurrent, distributed setting is one of the major challenges in Computer Science. In this thesis, we analyze the complexity of implementing concurrent data structures in asynchronous shared memory systems. We focus on the complexity of a classic distributed coordination task called renaming, in which a set of processes need to pick distinct names from a small set of identifiers. We present the first tight bounds for the time complexity of this problem, both for deterministic and randomized implementations, solving a long-standing open problem in the field. For deterministic algorithms, we prove a tight linear lower bound; for randomized solutions, we provide logarithmic upper and lower bounds on time complexity. Together, these results show an exponential separation between deterministic and randomized renaming solutions. Importantly, the lower bounds extend to implementations of practical shared-memory data structures, such as queues, stacks, and counters. From a technical perspective, this thesis highlights new connections between the distributed renaming problem and other fundamental objects, such as sorting networks, mutual exclusion, and counters. In particular, we show that sorting networks can be used to obtain optimal randomized solutions to renaming, and that, in turn, the existence of these solutions implies a linear lower bound on the complexity of the problem. In sum, the results in this thesis suggest that deterministic implementations of shared-memory data structures do not scale well in terms of worst-case time complexity. On the positive side, we emphasize randomization as a natural alternative, which can circumvent the deterministic lower bounds with high probability. Thus, a promising direction for future work is to extend our randomized renaming techniques to obtain efficient implementations of concurrent data structures.
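
The separation claimed above can be summarized as follows (a restatement of the thesis results, with n the number of processes; the thesis states the bounds as linear and logarithmic, respectively):

$$\text{deterministic renaming: } \Theta(n) \text{ steps} \qquad\text{vs.}\qquad \text{randomized renaming: } \Theta(\log n) \text{ steps},$$

an exponential gap, since $n = 2^{\log n}$.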

2 citations


Journal ArticleDOI
TL;DR: This paper introduces a transformation technique from synchronous algorithms to indulgent algorithms, which induces only a constant overhead in terms of time complexity in well-behaved executions and works for the class of colorless distributed tasks.
Abstract: Synchronous distributed algorithms are easier to design and prove correct than algorithms that tolerate asynchrony. Yet, in the real world, networks experience asynchrony and other timing anomalies. In this paper, we address the question of how to efficiently transform an algorithm that relies on synchronous timing into an algorithm that tolerates asynchronous executions. We introduce a transformation technique from synchronous algorithms to indulgent algorithms (Guerraoui, in PODC, pp. 289-297, 2000), which induces only a constant overhead in terms of time complexity in well-behaved executions. Our technique is based on a new abstraction we call an asynchrony detector, which the participating processes implement collectively. The resulting transformation works for the class of colorless distributed tasks, including consensus and set agreement. Interestingly, we also show that our technique is relevant for colored tasks, by applying it to the renaming problem, to obtain the first indulgent renaming algorithm.
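
A minimal sketch of what a per-process asynchrony detector might look like (the names and structure below are our assumptions, not the paper's code; the paper's detector is implemented collectively by the processes and is more refined): a round is flagged as soon as an expected message fails to arrive on time, after which the transformed algorithm abandons the fast synchronous path and falls back to the safe asynchronous one.

# Illustrative sketch: each process keeps one detector instance and feeds it,
# at the end of every round, the set of processes whose message arrived in time.
class AsynchronyDetector:
    def __init__(self, all_processes):
        self.alive = set(all_processes)  # processes still believed to be responsive
        self.flag = False                # True once the execution stops looking synchronous

    def end_of_round(self, round_no, senders):
        """Report the senders heard from in round `round_no`; returns the current flag."""
        missing = self.alive - set(senders)
        if missing:
            # The missing processes either crashed or their messages were late;
            # conservatively stop trusting synchrony at this process.
            self.flag = True
            self.alive -= missing
        return self.flag

# Usage:
d = AsynchronyDetector({"p1", "p2", "p3"})
print(d.end_of_round(1, {"p1", "p2", "p3"}))  # False: round 1 looked synchronous
print(d.end_of_round(2, {"p1", "p3"}))        # True: p2's message did not arrive in time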

2 citations


Proceedings ArticleDOI
25 Jun 2012
TL;DR: It is shown that the overhead of composition can be negligible in the case of some important shared-memory abstractions, and a more general light-weight specification is presented that allows the designer to transfer very little state between modules, by taking advantage of the semantics of the implemented object.
Abstract: Decades of research in distributed computing have led to a variety of perspectives on what it means for a concurrent algorithm to be efficient, depending on model assumptions, progress guarantees, and complexity metrics. It is therefore natural to ask whether one could compose algorithms that perform efficiently under different conditions, so that the composition preserves the performance of the original components when their conditions are met. In this paper, we evaluate the cost of composing shared-memory algorithms. First, we formally define the notion of safely composable algorithms and we show that every sequential type has a safely composable implementation, as long as enough state is transferred between modules. Since such generic implementations are inherently expensive, we present a more general light-weight specification that allows the designer to transfer very little state between modules, by taking advantage of the semantics of the implemented object. Using this framework, we implement a composed long-lived test-and-set object, with the property that each of its modules is asymptotically optimal with respect to the progress condition it ensures, while the entire implementation only uses objects with consensus number at most two. Thus, we show that the overhead of composition can be negligible in the case of some important shared-memory abstractions.
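
A small sketch of the composition pattern, under our own naming (this is not the paper's API): each module either decides or hands the next module only a lightweight summary of its outcome, rather than its entire internal state.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Outcome:
    decided: bool           # did this module already produce the result?
    value: Optional[bool]   # the result, if decided (e.g. "won" for test-and-set)
    summary: object         # the small piece of state handed to the next module

def compose(modules: list[Callable[[object], Outcome]], initial_summary=None) -> bool:
    """Run the modules in order; stop at the first one that decides."""
    summary = initial_summary
    for module in modules:
        out = module(summary)
        if out.decided:
            return out.value
        summary = out.summary   # transfer only the lightweight summary
    raise RuntimeError("the last module is expected to decide")

# Toy example: a "fast" module that gives up under contention and a backup
# module that always decides, needing only the fast path's one-word outcome.
def fast_module(_):
    return Outcome(decided=False, value=None, summary="fast path saw contention")

def backup_module(summary):
    return Outcome(decided=True, value=True, summary=None)

print(compose([fast_module, backup_module]))  # True

In the paper's construction the modules are concurrent objects (for example, a long-lived test-and-set assembled from objects with consensus number at most two); the point of the sketch is only that, by exploiting the object's semantics, the state transferred between modules can be kept small.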

2 citations