
Showing papers on "Concurrent data structure" published in 2009


Proceedings ArticleDOI
11 Aug 2009
TL;DR: An inherent tradeoff for implementations of transactional memories is proved: they cannot be both disjoint-access parallel and have read-only transactions that are invisible and always terminate successfully.
Abstract: Transactional memory (TM) is a promising approach for designing concurrent data structures, and it is essential to develop a better understanding of the formal properties that TM implementations can achieve. Two fundamental properties of TM implementations are disjoint-access parallelism, which is critical for their scalability, and the invisibility of read operations, which reduces memory contention. This paper proves an inherent tradeoff for implementations of transactional memories: they cannot be both disjoint-access parallel and have read-only transactions that are invisible and always terminate successfully. In fact, a lower bound of Ω(t) is proved on the number of writes needed to implement a read-only transaction of t items that terminates successfully in a disjoint-access-parallel TM implementation. The results are proved assuming strict serializability and therefore also hold under opacity. It is shown how to extend the results to the weaker consistency conditions of serializability and snapshot isolation.
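To make the notion of read visibility concrete, the following C11 sketch (not the paper's construction; the names tm_item and tm_read_* are illustrative) contrasts a visible read, which performs one shared write per item to announce the reader, with an invisible read, which writes nothing and therefore cannot be detected by concurrent writers. The Ω(t) bound says that, in a disjoint-access-parallel TM, an always-terminating read-only transaction over t items cannot avoid roughly this per-item write cost.

```c
/* Minimal sketch, not the paper's construction: visible vs. invisible reads. */
#include <stdatomic.h>
#include <stdint.h>

typedef struct {
    _Atomic uint64_t value;    /* the data word                         */
    _Atomic uint32_t readers;  /* count of currently registered readers */
} tm_item;

/* Visible read: register before reading so that writers can see the reader.
 * (Deregistration at commit time is omitted from this sketch.)            */
uint64_t tm_read_visible(tm_item *it) {
    atomic_fetch_add(&it->readers, 1);   /* the extra shared write per item */
    return atomic_load(&it->value);
}

/* Invisible read: no shared write at all; memory contention is avoided,
 * but a concurrent writer has no way to know this transaction read the item. */
uint64_t tm_read_invisible(tm_item *it) {
    return atomic_load(&it->value);
}
```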

56 citations


Book ChapterDOI
28 Mar 2009
TL;DR: This study formally shows that sequential consistency and linearizability can be characterized in terms of observational refinement, and provides a new understanding of sequential consistency and linearizability in terms of abstraction of dependency between computation steps of client programs.
Abstract: Concurrent data structures are usually designed to satisfy correctness conditions such as sequential consistency and linearizability. In this paper, we consider the following fundamental question: what guarantees are provided by these conditions for client programs? We formally show that these conditions can be characterized in terms of observational refinement. Our study also provides a new understanding of sequential consistency and linearizability in terms of abstraction of dependency between computation steps of client programs.

47 citations


Book ChapterDOI
23 Sep 2009
TL;DR: A contention-sensitive data structure is a concurrent data structure in which the overhead introduced by locking is eliminated in the common cases, when there is no contention, or when processes with non-interfering operations access it concurrently.
Abstract: A contention-sensitive data structure is a concurrent data structure in which the overhead introduced by locking is eliminated in the common cases, when there is no contention, or when processes with non-interfering operations access it concurrently. When a process invokes an operation on a contention-sensitive data structure, in the absence of contention or interference, the process must be able to complete its operation in a small number of steps and without using locks. Using locks is permitted only when there is interference. We formally define the notion of contention-sensitive data structures, propose four general transformations that facilitate devising such data structures, and illustrate the benefits of the approach by implementing a contention-sensitive consensus algorithm, a contention-sensitive double-ended queue, and a contention-sensitive election algorithm. Finally, we generalize the result so that locking can also be avoided when contention is low.
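The two-path structure described above can be illustrated with a small C11 sketch (not taken from the paper; the stack and the bound on fast-path attempts are illustrative): an operation first attempts a bounded number of lock-free CAS steps and acquires a lock only after interference is detected; the lock merely throttles contenders, which still publish with CAS so the lock-free fast path remains correct.

```c
/* Sketch of a contention-sensitive push: lock-free in the absence of
 * interference, lock-based only when repeated interference is detected. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

typedef struct node { int val; struct node *next; } node;

typedef struct {
    _Atomic(node *)  top;
    pthread_mutex_t  lock;   /* taken only on the contended slow path */
} cs_stack;

#define FAST_ATTEMPTS 3

static int try_push(cs_stack *s, node *n) {
    node *old = atomic_load(&s->top);
    n->next = old;
    return atomic_compare_exchange_strong(&s->top, &old, n);
}

void cs_push(cs_stack *s, int v) {
    node *n = malloc(sizeof *n);
    n->val = v;

    /* Fast path: without contention the first CAS succeeds and the
     * operation completes in a constant number of steps, lock-free. */
    for (int i = 0; i < FAST_ATTEMPTS; i++)
        if (try_push(s, n)) return;

    /* Slow path: interference detected; serialize contenders behind a lock.
     * Publishing still uses CAS, so fast-path pushers stay correct.        */
    pthread_mutex_lock(&s->lock);
    while (!try_push(s, n))
        ;  /* retry until the CAS succeeds */
    pthread_mutex_unlock(&s->lock);
}
```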

38 citations


Proceedings ArticleDOI
15 Jun 2009
TL;DR: Shoal is a system for checking data-sharing in multithreaded programs; it introduces a new concept called groups, collections of objects that all have the same sharing mode, and demonstrates the necessity and practicality of groups by applying Shoal to a wide range of concurrent C programs.
Abstract: SharC is a recently developed system for checking data-sharing in multithreaded programs. Programmers specify sharing rules (read-only, protected by a lock, etc.) for individual objects, and the SharC compiler enforces these rules using static and dynamic checks. Violations of these rules indicate unintended data sharing, which is the underlying cause of harmful data-races. Additionally, SharC allows programmers to change the sharing rules for a specific object using a sharing cast, to capture the fact that sharing rules for an object often change during the object's lifetime. SharC was successfully applied to a number of multi-threaded C programs. However, many programs are not readily checkable using SharC because their sharing rules, and changes to sharing rules, effectively apply to whole data structures rather than to individual objects. We have developed a system called Shoal to address this shortcoming. In addition to the sharing rules and sharing cast of SharC, our system includes a new concept that we call groups. A group is a collection of objects all having the same sharing mode. Each group has a distinguished member called the group leader. When the sharing mode of the group leader changes by way of a sharing cast, the sharing mode of all members of the group also changes. This operation is made sound by maintaining the invariant that at the point of a sharing cast, the only external pointer into the group is the pointer to the group leader. The addition of groups allows checking safe concurrency at the level of data structures rather than at the level of individual objects. We demonstrate the necessity and practicality of groups by applying Shoal to a wide range of concurrent C programs (the largest approaching a million lines of code). In all benchmarks groups entail low annotation burden and no significant additional performance overhead.
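The group-leader invariant can be illustrated with the following plain-C sketch; it does not use SharC's or Shoal's actual annotation syntax, only comments standing in for sharing modes. Because the head node is the only external pointer into the list, handing it to another thread (conceptually, a sharing cast applied to the leader) changes the effective sharing mode of every node at once.

```c
/* Illustrative only: plain C, not SharC/Shoal annotations. All list nodes
 * form one group whose leader is the head; the only external pointer into
 * the group is the pointer to the leader. */
#include <pthread.h>
#include <stdlib.h>

typedef struct node { int val; struct node *next; } node;   /* group members */

typedef struct {
    node            *leader;  /* the group leader: the only external pointer */
    pthread_mutex_t  lock;    /* mode after the hand-off: protected by lock  */
} shared_list;

/* Built by one thread in "private" mode: no other thread can reach the nodes. */
node *build_private_list(int n) {
    node *head = NULL;
    for (int i = 0; i < n; i++) {
        node *x = malloc(sizeof *x);
        x->val  = i;
        x->next = head;
        head = x;             /* only `head` points into the group */
    }
    return head;
}

/* The hand-off: after this, every node in the group is treated as
 * lock-protected, because the only way in is via the published leader. */
void publish(shared_list *dst, node *head) {
    pthread_mutex_lock(&dst->lock);   /* dst->lock must be initialized */
    dst->leader = head;               /* whole group changes mode via its leader */
    pthread_mutex_unlock(&dst->lock);
}
```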

28 citations


Journal ArticleDOI
TL;DR: This paper presents a software library of non-blocking abstract data types that have been designed to facilitate lock-free programming for non-experts and provides experimental results that show that the library can considerably improve the performance of software systems.
Abstract: An essential part of programming for multi-core and multi-processor systems is efficient and reliable means for sharing data. Lock-free data structures are well suited for this purpose, although they are known to be very complex to design. In this paper, we present a software library of non-blocking abstract data types that have been designed to facilitate lock-free programming for non-experts. The system provides: i) efficient implementations of the most commonly used data types in concurrent and sequential software design, ii) a lock-free memory management system, and iii) a run-time system. The library provides clear semantics that are at least as strong as those of corresponding lock-based implementations of the respective data types. Our software library can be used to facilitate lock-free programming; its design enables the programmer to: i) replace lock-based components of sequential or parallel code easily and efficiently, and ii) use well-tuned concurrent algorithms inside a software or hardware transactional system. In the paper we describe the design and functionality of the system. We also provide experimental results that show that the library can considerably improve the performance of software systems.
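As an example of the kind of component such a library exposes behind a simple interface, here is a sketch of a Treiber-style lock-free stack in C11; this is not code from the described library. Memory reclamation and ABA protection, which the library handles with its own lock-free memory manager, are deliberately omitted.

```c
/* Sketch of a lock-free (Treiber) stack: push and pop use compare-and-swap
 * instead of locks. Safe memory reclamation is intentionally left out.     */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

typedef struct node { int val; struct node *next; } node;
typedef struct { _Atomic(node *) top; } lf_stack;

void lf_push(lf_stack *s, int v) {
    node *n = malloc(sizeof *n);
    n->val = v;
    node *old = atomic_load(&s->top);
    do {
        n->next = old;              /* link onto the current top           */
    } while (!atomic_compare_exchange_weak(&s->top, &old, n));
}

bool lf_pop(lf_stack *s, int *out) {
    node *old = atomic_load(&s->top);
    while (old != NULL &&
           !atomic_compare_exchange_weak(&s->top, &old, old->next))
        ;                           /* CAS failure reloads `old`; retry    */
    if (old == NULL) return false;  /* stack was empty                     */
    *out = old->val;
    /* NOTE: freeing `old` here would be unsafe (ABA, use-after-free)
     * without hazard pointers or another reclamation scheme.              */
    return true;
}
```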

12 citations


Journal ArticleDOI
Maurice Herlihy
TL;DR: Writing lock-free algorithms, like writing device drivers and cosine routines, requires some care and expertise, but the resulting algorithms can be simple and straightforward to use.
Abstract: Writing lock-free algorithms, like writing device drivers and cosine routines, requires some care and expertise.

9 citations


Patent
12 Jun 2009
TL;DR: The multiprocessor system of the invention includes processors that execute multiple threads processing data, and a data processing control unit that determines a condition the order in which the processors execute the threads must satisfy and starts the threads so that the order satisfies the condition.
Abstract: Conventionally, when the amount of data to be processed decreases for only some of the threads, the processing efficiency of the whole transaction degrades. A multiprocessor system of the invention includes processors for executing multiple threads that process data, and a data processing control unit for determining a condition that the order in which the processors execute the threads must satisfy and for starting execution of the threads so that the order satisfies the condition.

4 citations


Proceedings Article
30 Jul 2009
TL;DR: A formal verification of a recent concurrent data structure, Scalable NonZero Indicators, is described; it proves that the algorithm satisfies linearizability by showing a trace refinement relation from the concrete implementation to its abstract specification.
Abstract: Concurrent algorithms are notoriously difficult to design correctly, and high-performance algorithms that make little or no use of locks even more so. In this paper, we describe a formal verification of a recent concurrent data structure, Scalable NonZero Indicators. The algorithm supports incrementing, decrementing, and querying the shared counter in an efficient and linearizable way without blocking. The algorithm is highly non-trivial, and proving its correctness is challenging. We have proved that the algorithm satisfies linearizability by showing a trace refinement relation from the concrete implementation to its abstract specification. These models are specified in CSP and verified automatically using the model checking toolkit PAT.
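For readers unfamiliar with the data structure, the sketch below shows only the Arrive/Depart/Query interface of a non-zero indicator, collapsed to a single atomic counter. The verified algorithm is a hierarchical, far more scalable construction; this flat version merely illustrates the linearizable behaviour that the refinement proof targets.

```c
/* Flat, single-counter sketch of the non-zero-indicator interface.
 * Not the verified hierarchical algorithm; illustration only.      */
#include <stdatomic.h>
#include <stdbool.h>

typedef struct { _Atomic long surplus; } snzi_flat;

void snzi_arrive(snzi_flat *s) { atomic_fetch_add(&s->surplus, 1); }
void snzi_depart(snzi_flat *s) { atomic_fetch_sub(&s->surplus, 1); }

/* Query: true iff more arrivals than departures have taken effect. */
bool snzi_query(snzi_flat *s) { return atomic_load(&s->surplus) > 0; }
```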

4 citations


Patent
Emad Omara, John Joseph Duffy
31 Mar 2009
TL;DR: An enumerable concurrent data structure referred to as a concurrent bag is provided; it is accessible by concurrent threads and includes a set of local lists, configured as a linked list, together with a dictionary.
Abstract: An enumerable concurrent data structure referred to as a concurrent bag is provided. The concurrent bag is accessible by concurrent threads and includes a set of local lists configured as a linked list and a dictionary. The dictionary includes an entry for each local list that identifies the thread that created the local list and the location of the local list. Each local list includes a set of data elements configured as a linked list. A global lock on the concurrent bag and local locks on each local list allow operations that involve enumeration to be performed on the concurrent bag.
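A rough C sketch of this organization is shown below; it is illustrative only, not the patented implementation, and the dictionary is simplified to an array indexed by a small thread id. Adds touch only the caller's local list under its local lock, while enumeration takes the global lock and then every local lock so the bag cannot change while it is walked.

```c
/* Illustrative concurrent-bag layout: per-thread local lists plus a
 * simplified "dictionary" (array indexed by thread id), with a global
 * lock and per-list local locks to support enumeration.               */
#include <pthread.h>
#include <stdlib.h>

#define MAX_THREADS 64

typedef struct elem { int val; struct elem *next; } elem;

typedef struct {
    elem            *head;
    pthread_mutex_t  lock;                 /* local lock for this thread's list */
} local_list;

typedef struct {
    local_list       lists[MAX_THREADS];   /* the "dictionary": thread -> list  */
    pthread_mutex_t  global_lock;          /* held during enumeration           */
} concurrent_bag;

void bag_init(concurrent_bag *b) {
    pthread_mutex_init(&b->global_lock, NULL);
    for (int i = 0; i < MAX_THREADS; i++) {
        b->lists[i].head = NULL;
        pthread_mutex_init(&b->lists[i].lock, NULL);
    }
}

/* Add: touches only the calling thread's local list. */
void bag_add(concurrent_bag *b, int tid, int v) {
    elem *e = malloc(sizeof *e);
    e->val = v;
    local_list *l = &b->lists[tid];
    pthread_mutex_lock(&l->lock);
    e->next = l->head;
    l->head = e;
    pthread_mutex_unlock(&l->lock);
}

/* Enumerate: take the global lock, then every local lock, so no thread
 * can mutate its list while the callback walks the whole bag.          */
void bag_enumerate(concurrent_bag *b, void (*visit)(int)) {
    pthread_mutex_lock(&b->global_lock);
    for (int i = 0; i < MAX_THREADS; i++) pthread_mutex_lock(&b->lists[i].lock);
    for (int i = 0; i < MAX_THREADS; i++)
        for (elem *e = b->lists[i].head; e; e = e->next) visit(e->val);
    for (int i = 0; i < MAX_THREADS; i++) pthread_mutex_unlock(&b->lists[i].lock);
    pthread_mutex_unlock(&b->global_lock);
}
```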

2 citations


Patent
Yoshihiko Nishihata
12 Jun 2009
TL;DR: A multiprocessor system is provided that includes a plurality of processors executing multiple threads to process data, and a means which, based on the amount of data to be processed by each thread, determines a condition that the order in which the processors execute the threads should satisfy and starts to execute each thread so that the condition is satisfied.
Abstract: Conventionally, when the amount of data to be processed increases for only some of the threads, the processing efficiency of the whole transaction degrades. A multiprocessor system of the invention includes a plurality of processors executing multiple threads to process data, and a means which, based on the amount of data to be processed by each thread, determines a condition that the order in which the plurality of processors execute the threads should satisfy and starts to execute each thread so that the condition is satisfied.

1 citation


Patent
18 Jun 2009
TL;DR: A concurrent data structure allows synchronization to be elided for read accesses, and each processing resource maintains an indicator that shows whether it has reached a safe point.
Abstract: A concurrent data structure allows synchronization to be elided for read accesses. Processing resources that remove one or more elements of the concurrent data structure are allowed to delete the elements only after all other processing resources have reached a safe point. Each processing resource maintains an indicator that indicates whether the processing resource has reached a safe point (i.e., will not access the concurrent data structure). When the indicators indicate that all processing resources have reached a safe point, elements of the data structure may be deleted.
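The safe-point scheme is close in spirit to quiescent-state-based reclamation. The following C11 sketch is an assumption about that general technique, not the patent's exact mechanism: each thread bumps a per-thread epoch counter at its safe points, and a remover frees an already-unlinked element only after every other thread has advanced past the snapshot taken at unlink time.

```c
/* Quiescent-state style sketch: readers need no synchronization; deletion
 * waits until every other thread has reported a safe point.              */
#include <stdatomic.h>
#include <stdlib.h>

#define MAX_THREADS 8

/* One counter per thread, bumped whenever that thread is at a safe point,
 * i.e. holds no reference into the concurrent data structure.            */
static _Atomic unsigned long safe_epoch[MAX_THREADS];

void reached_safe_point(int tid) {
    atomic_fetch_add(&safe_epoch[tid], 1);
}

/* Free an element that has already been unlinked from the structure, but
 * only after every other thread has passed a safe point since the unlink. */
void deferred_free(int self_tid, void *unlinked_elem) {
    unsigned long snapshot[MAX_THREADS];
    for (int i = 0; i < MAX_THREADS; i++)
        snapshot[i] = atomic_load(&safe_epoch[i]);

    for (int i = 0; i < MAX_THREADS; i++) {
        if (i == self_tid)
            continue;                      /* the remover is trivially safe */
        while (atomic_load(&safe_epoch[i]) == snapshot[i])
            ;  /* spin until thread i reports a new safe point
                  (a real system would also handle idle or blocked threads) */
    }
    free(unlinked_elem);
}
```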