Showing papers by "Michael Spear published in 2021"


Book ChapterDOI
01 Jan 2021
TL;DR: This chapter describes how to use consensus objects to build a universal construction, an algorithm for implementing a linearizable concurrent object for any sequential object type, and presents two such constructions, one lock-free and one wait-free.
Abstract: This chapter describes how to use consensus objects to build a universal construction, an algorithm for implementing a linearizable concurrent object for any sequential object type. It presents two algorithms, a lock-free one and a wait-free one. These universal constructions demonstrate that consensus objects are universal; that is, they can be used to build a wait-free linearizable implementation of any object type.

2 citations


Book ChapterDOI
01 Jan 2021
TL;DR: This chapter begins the study of practical concurrency by exploring the impact of system architecture on the performance of spin locks, and shows how to exploit knowledge of the architecture to design locking algorithms that reduce contention and so improve performance.
Abstract: This chapter begins our study of practical concurrency by exploring the impact of system architecture on the performance of spin locks. Understanding the memory hierarchy and how processors communicate is critical to being able to write effective concurrent programs. We examine how to exploit knowledge of the architecture to design locking algorithms that reduce contention and so improve performance.

1 citation


Proceedings ArticleDOI
06 Jul 2021
TL;DR: The Transactional Data Structure Library (TDSL) methodology improves the programmability and performance of concurrent software by making it possible for programmers to compose multiple concurrent data structure operations into coarse-grained transactions.
Abstract: The Transactional Data Structure Library (TDSL) methodology improves the programmability and performance of concurrent software by making it possible for programmers to compose multiple concurrent data structure operations into coarse-grained transactions. Like transactional memory, TDSL enables arbitrarily many operations on arbitrarily many data structures to appear to other threads as a single atomic, isolated transaction. Like concurrent data structures, the individual operations on a TDSL data structure are optimized to avoid artificial contention. We introduce techniques for reducing false conflicts in TDSL implementations. Our approach allows expressing the postconditions of operations entirely via semantic properties, instead of through low-level structural properties. Our design is general enough to support lists, deques, ordered and unordered maps, and vectors. It supports richer programming interfaces than are available in existing TDSL implementations. It is also capable of precise memory management, which is necessary in low-level languages like C++.

1 citation


Proceedings ArticleDOI
07 Jul 2021
TL;DR: The skip vector is a high-performance concurrent data structure based on the skip list; it flattens the index and data layers into vectors to increase spatial locality, reduce synchronization overhead, and avoid much of the costly pointer chasing that skip lists incur.
Abstract: We present the skip vector, a novel high-performance concurrent data structure based on the skip list. The key innovation in the skip vector is to flatten the index and data layers of the skip list into vectors. This increases spatial locality, reduces synchronization overhead, and avoids much of the costly pointer chasing that skip lists incur. We evaluate a skip vector implementation in C++. Our implementation coordinates interactions among threads by utilizing optimistic traversal with sequence locks. To ensure memory safety, it employs hazard pointers; this leads to tight bounds on wasted space, but due to the skip vector design, does not lead to high overhead. Performance of the skip vector for small data set sizes is higher than for a comparable skip list, and as the amount of data increases, the benefits of the skip vector over a skip list increase.

Book ChapterDOI
01 Jan 2021
TL;DR: This chapter shows how to decompose and analyze such applications, introducing the notions of work and span, and introduces thread pools, an efficient and robust mechanism for executing such applications that insulates the programmer from platform-dependent details.
Abstract: Some applications break down naturally into many parallel tasks. This chapter shows how to decompose and analyze such applications, introducing the notions of work and span. The chapter also introduces thread pools, an efficient and robust mechanism for executing such applications that insulates the programmer from platform-dependent details. Finally, it examines work stealing and other techniques for distributing the tasks among threads in a thread pool, and shows how to implement work stealing efficiently using specialized double-ended queues.

Book ChapterDOI
01 Jan 2021
TL;DR: This chapter covers several useful patterns for distributed coordination: combining, counting, diffraction, and sampling; some of these techniques are deterministic; others use randomization.
Abstract: This chapter covers several useful patterns for distributed coordination: combining, counting, diffraction, and sampling. Some of these techniques are deterministic; others use randomization. We cover two basic structures underlying these patterns, trees and combinatorial networks. Although these techniques support a high degree of parallelism with high throughput, they can also increase latency for uncontended execution.

Book ChapterDOI
01 Jan 2021
TL;DR: This chapter begins the study of the foundations of concurrent computation by examining its most basic primitive, the read–write register, and presents a series of constructions of increasingly powerful registers, culminating in an arbitrary-sized atomic multi-reader, multi-writer register.
Abstract: This chapter begins our study of the foundations of concurrent computation by examining its most basic primitive, the read–write register. A register is a single location of shared memory. We characterize a register by the values it can store, the number of threads that can access it and what operations they can do, and the properties it guarantees. Starting from a one-bit register that supports only a single reader and a single writer, and guarantees the values read only if the read does not overlap a write, we present a series of constructions of increasingly powerful registers, culminating in an arbitrary-sized atomic multi-reader, multi-writer register. We also show how to use atomic registers to implement an atomic snapshot object, which can read multiple registers atomically. Although most of the algorithms presented in this chapter are impractical for real systems, they illustrate useful techniques for designing and reasoning about concurrent systems.