Conference

International Symposium on Distributed Computing 

About: The International Symposium on Distributed Computing is an academic conference. It publishes mainly in the areas of distributed algorithms and algorithm design. Over its lifetime, the conference has published 1,961 papers, which have received 23,856 citations.


Papers
Book Chapter
18 Sep 2006
TL;DR: This paper introduces transactional locking II (TL2), a software transactional memory (STM) algorithm based on a combination of commit-time locking and a novel global version-clock validation technique; on benchmarks it runs ten-fold faster than a single lock.
Abstract: The transactional memory programming paradigm is gaining momentum as the approach of choice for replacing locks in concurrent programming. This paper introduces the transactional locking II (TL2) algorithm, a software transactional memory (STM) algorithm based on a combination of commit-time locking and a novel global version-clock based validation technique. TL2 improves on state-of-the-art STMs in the following ways: (1) unlike all other STMs, it fits seamlessly with any system's memory life-cycle, including those using malloc/free; (2) unlike all other lock-based STMs, it efficiently avoids periods of unsafe execution, that is, using its novel version-clock validation, user code is guaranteed to operate only on consistent memory states; and (3) in a sequence of high-performance benchmarks, while providing these new properties, it delivered overall performance comparable to (and in many cases better than) that of all former STM algorithms, both lock-based and non-blocking. Perhaps more importantly, on various benchmarks, TL2 delivers performance that is competitive with the best hand-crafted fine-grained concurrent structures. Specifically, it is ten-fold faster than a single lock. We believe these characteristics make TL2 a viable candidate for deployment of transactional memory today, long before hardware transactional support is available.
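The core of the scheme described above, commit-time locking combined with a global version clock, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the VersionedCell and Txn classes, the use of ReentrantLock, and the abort-by-exception convention are assumptions made for brevity.

```java
import java.util.*;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of TL2-style commit-time locking plus global version-clock validation.
final class VersionedCell {
    final ReentrantLock lock = new ReentrantLock(); // commit-time lock
    volatile long version;                          // version stamp written at commit
    volatile int value;
}

final class Txn {
    static final AtomicLong GLOBAL_CLOCK = new AtomicLong();

    final long readVersion = GLOBAL_CLOCK.get();    // sample the global clock at txn start
    final Map<VersionedCell, Integer> writeSet = new LinkedHashMap<>();
    final Set<VersionedCell> readSet = new HashSet<>();

    int read(VersionedCell c) {
        Integer pending = writeSet.get(c);
        if (pending != null) return pending;        // read-your-own-writes
        int v = c.value;
        // post-validation (the full algorithm also checks before the read)
        if (c.version > readVersion || c.lock.isLocked())
            throw new IllegalStateException("abort: inconsistent read");
        readSet.add(c);
        return v;
    }

    void write(VersionedCell c, int v) { writeSet.put(c, v); }

    void commit() {
        List<VersionedCell> locked = new ArrayList<>();
        try {
            for (VersionedCell c : writeSet.keySet()) {             // 1. lock the write-set
                if (!c.lock.tryLock()) throw new IllegalStateException("abort: busy");
                locked.add(c);
            }
            long writeVersion = GLOBAL_CLOCK.incrementAndGet();     // 2. bump the global clock
            for (VersionedCell c : readSet)                         // 3. validate the read-set
                if (c.version > readVersion)
                    throw new IllegalStateException("abort: validation failed");
            for (Map.Entry<VersionedCell, Integer> e : writeSet.entrySet()) {
                e.getKey().value = e.getValue();                    // 4. write back
                e.getKey().version = writeVersion;                  //    and stamp the new version
            }
        } finally {
            for (VersionedCell c : locked) c.lock.unlock();
        }
    }
}
```

A transaction that aborts would simply be retried from the start with a freshly sampled read version.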

891 citations

Book Chapter
03 Oct 2001
TL;DR: This work presents a new non-blocking implementation of concurrent linked-lists supporting linearizable insertion and deletion operations; it is conceptually simpler and substantially faster than previous schemes.
Abstract: We present a new non-blocking implementation of concurrent linked-lists supporting linearizable insertion and deletion operations. The new algorithm provides substantial benefits over previous schemes: it is conceptually simpler and our prototype operates substantially faster.
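The central trick in this line of work is to split deletion into two steps: a CAS first marks the victim node's next pointer (logical deletion), and a second CAS unlinks it (physical deletion). The sketch below illustrates that idea with Java's AtomicMarkableReference; it is a simplified illustration that omits the cleanup of marked nodes during traversal performed by the full algorithm, so it should not be read as the paper's algorithm verbatim.

```java
import java.util.concurrent.atomic.AtomicMarkableReference;

// Sketch of a lock-free sorted integer set using mark-then-unlink deletion.
public class LockFreeList {
    static final class Node {
        final int key;
        final AtomicMarkableReference<Node> next;
        Node(int key, Node next) { this.key = key; this.next = new AtomicMarkableReference<>(next, false); }
    }

    // head and tail sentinels bound every traversal
    private final Node head = new Node(Integer.MIN_VALUE, new Node(Integer.MAX_VALUE, null));

    public boolean insert(int key) {
        while (true) {
            Node pred = head, curr = pred.next.getReference();
            while (curr.key < key) { pred = curr; curr = curr.next.getReference(); }
            if (curr.key == key) return false;                       // already present
            Node node = new Node(key, curr);
            // linearizable insertion: a single CAS swings pred.next from curr to node
            if (pred.next.compareAndSet(curr, node, false, false)) return true;
        }
    }

    public boolean delete(int key) {
        while (true) {
            Node pred = head, curr = pred.next.getReference();
            while (curr.key < key) { pred = curr; curr = curr.next.getReference(); }
            if (curr.key != key) return false;                       // not found
            boolean[] marked = {false};
            Node succ = curr.next.get(marked);
            if (marked[0]) return false;                             // already logically deleted
            // 1. logical delete: mark curr's next pointer
            if (!curr.next.compareAndSet(succ, succ, false, true)) continue;
            // 2. physical delete: try to unlink curr (best effort in this sketch)
            pred.next.compareAndSet(curr, succ, false, false);
            return true;
        }
    }
}
```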

528 citations

Book Chapter
26 Sep 2005
TL;DR: This paper considers four dimensions of the STM design space and presents a new Adaptive STM (ASTM) system that adjusts to the offered workload, allowing it to match the performance of the best known existing system on every tested workload.
Abstract: Software Transactional Memory (STM) is a generic synchronization construct that enables automatic conversion of correct sequential objects into correct nonblocking concurrent objects. Recent STM systems, though significantly more practical than their predecessors, display inconsistent performance: differing design decisions cause different systems to perform best in different circumstances, often by dramatic margins. In this paper we consider four dimensions of the STM design space: (i) when concurrent objects are acquired by transactions for modification; (ii) how they are acquired; (iii) what they look like when not acquired; and (iv) the non-blocking semantics for transactions (lock-freedom vs. obstruction-freedom). In this 4-dimensional space we highlight the locations of two leading STM systems: the DSTM of Herlihy et al. and the OSTM of Fraser and Harris. Drawing motivation from the performance of a series of application benchmarks, we then present a new Adaptive STM (ASTM) system that adjusts to the offered workload, allowing it to match the performance of the best known existing system on every tested workload.
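As an illustration of what "adjusting to the offered workload" can mean along dimension (i), the sketch below switches between eager (open-time) and lazy (commit-time) object acquisition based on an observed abort rate. The policy names, the 1,000-transaction window, and the 20% threshold are invented for the example and are not taken from the ASTM paper.

```java
// Hypothetical adaptation loop: pick an acquire policy from recent commit/abort counts.
enum AcquirePolicy { EAGER, LAZY }   // dimension (i): when objects are acquired

final class AdaptiveStmRuntime {
    private volatile AcquirePolicy policy = AcquirePolicy.EAGER;
    private long commits, aborts;

    synchronized void recordCommit() { commits++; maybeAdapt(); }
    synchronized void recordAbort()  { aborts++;  maybeAdapt(); }

    AcquirePolicy currentPolicy() { return policy; }

    private void maybeAdapt() {
        long total = commits + aborts;
        if (total < 1000) return;                    // adapt over a window of transactions
        double abortRate = (double) aborts / total;
        // invented heuristic: high contention favours lazy (commit-time) acquisition,
        // low contention favours eager acquisition with its cheaper commits
        policy = abortRate > 0.2 ? AcquirePolicy.LAZY : AcquirePolicy.EAGER;
        commits = aborts = 0;
    }
}
```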

259 citations

Book Chapter
28 Oct 2002
TL;DR: This paper builds CAS2 from CAS1 and, in fact, builds an arbitrary multiword compare-and-swap (CASN), providing compelling evidence that current primitives are not only universal in the theoretical sense introduced by Herlihy, but are also universal in their use as foundations for practical algorithms.
Abstract: Work on non-blocking data structures has proposed extending processor designs with a compare-and-swap primitive, CAS2, which acts on two arbitrary memory locations. Experience suggested that current operations, typically single-word compare-and-swap (CAS1), are not expressive enough to be used alone in an efficient manner. In this paper we build CAS2 from CAS1 and, in fact, build an arbitrary multiword compare-and-swap (CASN). Our design requires only the primitives available on contemporary systems, reserves a small and constant amount of space in each word updated (either 0 or 2 bits) and permits non-overlapping updates to occur concurrently. This provides compelling evidence that current primitives are not only universal in the theoretical sense introduced by Herlihy, but are also universal in their use as foundations for practical algorithms. This provides a straightforward mechanism for deploying many of the interesting non-blocking data structures presented in the literature that have previously required CAS2.
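The construction rests on per-operation descriptors installed word by word with single-word CAS. The sketch below shows that two-phase structure (install descriptors, then write back or roll back); it deliberately omits the helping protocol by which competing threads complete a descriptor they encounter, which is what makes the published construction non-blocking, so it is an illustration of the descriptor idea rather than the paper's algorithm.

```java
import java.util.concurrent.atomic.AtomicReference;

// Simplified multi-word CAS built from single-word CAS via a shared descriptor.
final class Casn {
    static final class Descriptor {
        final AtomicReference<Object>[] addrs;   // carried so helpers could finish the op in the full scheme
        final Object[] expected, update;
        volatile boolean succeeded;
        Descriptor(AtomicReference<Object>[] a, Object[] e, Object[] u) {
            addrs = a; expected = e; update = u;
        }
    }

    static boolean casn(AtomicReference<Object>[] addrs, Object[] expected, Object[] update) {
        Descriptor d = new Descriptor(addrs, expected, update);
        int installed = 0;
        // Phase 1: claim each word by CASing its expected value to the descriptor.
        for (; installed < addrs.length; installed++) {
            if (!addrs[installed].compareAndSet(expected[installed], d)) break;
        }
        d.succeeded = (installed == addrs.length);
        // Phase 2: replace each installed descriptor with the new value on success,
        // or restore the old value on failure (roll back).
        for (int i = 0; i < installed; i++) {
            addrs[i].compareAndSet(d, d.succeeded ? update[i] : expected[i]);
        }
        return d.succeeded;
    }
}
```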

251 citations

Book Chapter
28 Oct 2002
TL;DR: An algorithm that emulates atomic read/write shared objects in a dynamic network setting; it guarantees atomicity for arbitrary patterns of asynchrony and failure and satisfies a variety of conditional performance properties.
Abstract: This paper presents an algorithm that emulates atomic read/write shared objects in a dynamic network setting. To ensure availability and fault-tolerance, the objects are replicated. To ensure atomicity, reads and writes are performed using quorum configurations, each of which consists of a set of members plus sets of read-quorums and write-quorums. The algorithm is reconfigurable: the quorum configurations may change during computation, and such changes do not cause violations of atomicity. Any quorum configuration may be installed at any time. The algorithm tolerates processor stopping failure and message loss. The algorithm performs three major tasks, all concurrently: reading and writing objects, introducing new configurations, and "garbage-collecting" obsolete configurations. The algorithm guarantees atomicity for arbitrary patterns of asynchrony and failure. The algorithm satisfies a variety of conditional performance properties, based on timing and failure assumptions. In the "normal case", the latency of read and write operations is at most 8d, where d is the maximum message delay.
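Underlying the reads and writes is the familiar two-phase quorum pattern: a query phase collects tag/value pairs from a read-quorum, and a propagation phase pushes the chosen pair to a write-quorum before the operation returns. The sketch below shows only that static pattern; reconfiguration, concurrent garbage collection of old configurations, and the 8d latency analysis, which are the paper's contributions, are not modelled here, and all names are illustrative.

```java
import java.util.*;

// Sketch of two-phase quorum reads and writes over a fixed configuration.
final class QuorumRegister {
    static final class Tagged {
        final long tag; final int value;
        Tagged(long t, int v) { tag = t; value = v; }
    }
    interface Replica {
        Tagged query();          // return the replica's current (tag, value)
        void update(Tagged t);   // adopt (tag, value) if the tag is newer
    }

    private final List<Replica> readQuorum, writeQuorum;
    QuorumRegister(List<Replica> readQuorum, List<Replica> writeQuorum) {
        this.readQuorum = readQuorum; this.writeQuorum = writeQuorum;
    }

    int read() {
        Tagged latest = maxTag();                        // phase 1: query a read-quorum
        for (Replica r : writeQuorum) r.update(latest);  // phase 2: propagate before returning
        return latest.value;
    }

    void write(int value) {
        Tagged latest = maxTag();                        // phase 1: learn the highest tag
        // a real implementation would pair the counter with a writer id to break ties
        Tagged next = new Tagged(latest.tag + 1, value);
        for (Replica r : writeQuorum) r.update(next);    // phase 2: install at a write-quorum
    }

    private Tagged maxTag() {
        Tagged best = new Tagged(Long.MIN_VALUE, 0);
        for (Replica r : readQuorum) {
            Tagged t = r.query();
            if (t.tag > best.tag) best = t;
        }
        return best;
    }
}
```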

226 citations

Performance Metrics
No. of papers from the Conference in previous years
Year    Papers
2023    1
2022    47
2021    18
2020    124
2019    98
2018    220