Topic

Transactional memory

About: Transactional memory is a research topic. Over its lifetime, 2365 publications have been published within this topic, receiving 60818 citations.


Papers
Proceedings ArticleDOI
07 Jan 2008
TL;DR: This paper develops semantics and type systems for the constructs of the Automatic Mutual Exclusion (AME) programming model and models STM systems that use in-place update, optimistic concurrency, lazy conflict detection, and roll-back.
Abstract: Software Transactional Memory (STM) is an attractive basis for the development of language features for concurrent programming. However, the semantics of these features can be delicate and problematic. In this paper we explore the tradeoffs between semantic simplicity, the viability of efficient implementation strategies, and the flexibility of language constructs. Specifically, we develop semantics and type systems for the constructs of the Automatic Mutual Exclusion (AME) programming model; our results apply also to other constructs, such as atomic blocks. With this semantics as a point of reference, we study several implementation strategies. We model STM systems that use in-place update, optimistic concurrency, lazy conflict detection, and roll-back. These strategies are correct only under non-trivial assumptions that we identify and analyze. One important source of errors is that some efficient implementations create dangerous 'zombie' computations where a transaction keeps running after experiencing a conflict; the assumptions confine the effects of these computations.

154 citations
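
As an illustration of the implementation strategy the paper models, the sketch below (hypothetical names, not the AME system itself) shows optimistic concurrency with lazy conflict detection: reads record a version, conflicts are only discovered at commit-time validation, and that window is exactly where a 'zombie' transaction can keep computing on stale values. For simplicity this sketch buffers writes; the systems modeled in the paper update in place and roll back via an undo log.

    #include <atomic>
    #include <cstdint>
    #include <unordered_map>

    // Hypothetical sketch of optimistic concurrency with lazy conflict detection.
    struct VersionedWord {
        std::atomic<uint64_t> version{0};
        std::atomic<uint64_t> value{0};
    };

    struct Transaction {
        std::unordered_map<VersionedWord*, uint64_t> read_set;   // word -> version observed
        std::unordered_map<VersionedWord*, uint64_t> write_set;  // word -> buffered new value

        uint64_t read(VersionedWord& w) {
            auto it = write_set.find(&w);
            if (it != write_set.end()) return it->second;         // read-own-write
            read_set.emplace(&w, w.version.load());               // no lock taken: optimistic
            return w.value.load();                                 // may already be stale
        }

        void write(VersionedWord& w, uint64_t val) { write_set[&w] = val; }

        bool commit() {
            // Lazy detection: conflicts are only noticed here. Until this point a
            // conflicting transaction keeps running as a "zombie" on stale reads.
            // (A real STM must also lock the write set so validation and publication
            // appear atomic; that step is omitted in this sketch.)
            for (const auto& [w, seen] : read_set)
                if (w->version.load() != seen) return false;       // conflict: abort
            for (const auto& [w, val] : write_set) {
                w->value.store(val);
                w->version.fetch_add(1);
            }
            return true;
        }
    };

    int main() {
        VersionedWord x;
        Transaction tx;
        tx.write(x, tx.read(x) + 1);
        return tx.commit() ? 0 : 1;
    }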

Proceedings ArticleDOI
11 Mar 2007
TL;DR: This system is the first to demonstrate that transactions integrate well with an unmanaged language, and can perform as well as fine-grain locking while providing the programming ease of coarse-grain locking even in an unmanaged environment.
Abstract: Transactional memory offers significant advantages for concurrency control compared to locks. This paper presents the design and implementation of transactional memory constructs in an unmanaged language. Unmanaged languages pose a unique set of challenges to transactional memory constructs - for example, lack of type and memory safety, use of function pointers, aliasing of local variables, and others. This paper describes novel compiler and runtime mechanisms that address these challenges and optimize the performance of transactions in an unmanaged environment. We have implemented these mechanisms in a production-quality C compiler and a high-performance software transactional memory runtime. We measure the effectiveness of these optimizations and compare the performance of lock-based versus transaction-based programming on a set of concurrent data structures and the SPLASH-2 benchmark suite. On a 16 processor SMP system, the transaction-based version of the SPLASH-2 benchmarks scales much better than the coarse-grain locking version and performs comparably to the fine-grain locking version. Compiler optimizations significantly reduce the overheads of transactional memory so that, on a single thread, the transaction-based version incurs only about 6.4% overhead compared to the lock-based version for the SPLASH-2 benchmark suite. Thus, our system is the first to demonstrate that transactions integrate well with an unmanaged language, and can perform as well as fine-grain locking while providing the programming ease of coarse-grain locking even on an unmanaged environment.

154 citations
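
To make the compiler's role concrete, here is a minimal C++ sketch of the kind of rewriting described above: a shared-memory access inside an atomic region becomes calls to read/write barriers provided by an STM runtime. The barrier names and the trivial single-lock stub bodies are invented for illustration and are not the runtime interface the paper describes.

    #include <cstdint>
    #include <mutex>

    // Hypothetical barrier API standing in for compiler-emitted calls. The stub
    // bodies use one global lock so the sketch runs; a real STM runtime performs
    // optimistic, versioned accesses instead.
    struct TxDescriptor { std::unique_lock<std::mutex> guard; };
    static std::mutex g_stm_lock;

    TxDescriptor stm_begin() { return TxDescriptor{std::unique_lock<std::mutex>(g_stm_lock)}; }
    uint64_t stm_read_u64(TxDescriptor&, const uint64_t* addr) { return *addr; }
    void stm_write_u64(TxDescriptor&, uint64_t* addr, uint64_t v) { *addr = v; }
    bool stm_commit(TxDescriptor& tx) { tx.guard.unlock(); return true; }  // stub never conflicts

    uint64_t counter = 0;

    // Source-level view:      atomic { counter = counter + 1; }
    // Compiler-expanded view: each shared access becomes a barrier call, and the
    // whole region retries until stm_commit reports success.
    void increment_counter() {
        for (;;) {
            TxDescriptor tx = stm_begin();
            uint64_t tmp = stm_read_u64(tx, &counter);
            stm_write_u64(tx, &counter, tmp + 1);
            if (stm_commit(tx)) break;
        }
    }

    int main() { increment_counter(); }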

Robert Ennals
01 Jan 2005
TL;DR: This paper argues that obstruction-freedom is not an important property for software transactional memory, and demonstrates that, if the authors are prepared to drop the goal of obstruction-freedom, software transactional memory can be made significantly faster.
Abstract: Much previous work on Software Transactional Memory has gone to great lengths to be “obstruction-free” — meaning that a transaction is guaranteed to make progress when all other transactions are suspended. In this paper we argue that obstruction-freedom is not an important property for software transactional memory, and demonstrate that, if we are prepared to drop the goal of obstruction-freedom, software transactional memory can be made significantly faster.

153 citations
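
A hedged sketch of the style of design this argument points toward (invented names, not Ennals's actual implementation): a blocking STM that locks each word in place at encounter time and keeps an undo log, so objects need no extra indirection, at the price of giving up any progress guarantee when a lock cannot be acquired.

    #include <atomic>
    #include <cstdint>
    #include <utility>
    #include <vector>

    // Hypothetical blocking STM word: lock in place at encounter time, update in
    // place, keep an undo log for rollback. Not obstruction-free: a transaction
    // that cannot get a lock simply aborts (or could wait).
    struct LockedWord {
        std::atomic<bool> locked{false};
        uint64_t value{0};
    };

    struct BlockingTx {
        std::vector<std::pair<LockedWord*, uint64_t>> undo_log;   // word -> old value

        bool write(LockedWord& w, uint64_t val) {
            bool expected = false;
            // A fuller version would recognise words this transaction already owns.
            if (!w.locked.compare_exchange_strong(expected, true))
                return false;                          // contended: caller aborts and retries
            undo_log.emplace_back(&w, w.value);        // remember old value for rollback
            w.value = val;                             // in-place update while holding the lock
            return true;
        }

        void commit() {                                // publish by releasing the locks
            for (auto& entry : undo_log) entry.first->locked.store(false);
            undo_log.clear();
        }

        void abort() {                                 // undo in reverse order, then unlock
            for (auto it = undo_log.rbegin(); it != undo_log.rend(); ++it) {
                it->first->value = it->second;
                it->first->locked.store(false);
            }
            undo_log.clear();
        }
    };

    int main() {
        LockedWord w;
        BlockingTx tx;
        if (tx.write(w, 42)) tx.commit(); else tx.abort();
    }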

Proceedings ArticleDOI
20 Feb 2008
TL;DR: This paper presents Cluster-STM, an STM designed for high performance on large-scale commodity clusters; its design addresses several novel issues posed by this domain, including aggregating communication, managing locality, and distributing transactional metadata onto the nodes.
Abstract: While there has been extensive work on the design of software transactional memory (STM) for cache coherent shared memory systems, there has been no work on the design of an STM system for very large scale platforms containing potentially thousands of nodes. In this work, we present Cluster-STM, an STM designed for high performance on large-scale commodity clusters. Our design addresses several novel issues posed by this domain, including aggregating communication, managing locality, and distributing transactional metadata onto the nodes. We also re-evaluate several STM design choices previously studied for cache-coherent machines and conclude that, in some cases, different choices are appropriate on clusters. Finally, we show that our design scales well up to 512 processors. This is because on a cluster, the main barrier to STM scalability is the remote communication overhead imposed by the STM operations, and our design aggregates most of that communication with the communication of the underlying data.

151 citations
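
The two cluster-specific ideas named above, placing each datum's transactional metadata on a home node and aggregating remote accesses into per-node messages, can be sketched as follows; all names and the block-cyclic mapping are illustrative assumptions, not the Cluster-STM interface.

    #include <cstdint>
    #include <cstdio>
    #include <map>
    #include <vector>

    // Hedged sketch: every address has a home node that holds its data and
    // transactional metadata, and remote accesses bound for the same node are
    // batched so one message replaces many round trips.
    using GlobalAddr = uint64_t;

    inline int home_node(GlobalAddr a, int num_nodes) {
        // Illustrative block-cyclic mapping over 64-byte blocks.
        return static_cast<int>((a / 64) % static_cast<uint64_t>(num_nodes));
    }

    struct RemoteReadBatch {
        std::map<int, std::vector<GlobalAddr>> per_node;   // destination node -> addresses

        void add(GlobalAddr a, int num_nodes) {
            per_node[home_node(a, num_nodes)].push_back(a);
        }

        // One message per destination node instead of one per address.
        template <class SendFn>
        void flush(SendFn&& send_to_node) {
            for (auto& [node, addrs] : per_node) send_to_node(node, addrs);
            per_node.clear();
        }
    };

    int main() {
        RemoteReadBatch batch;
        for (GlobalAddr a : {GlobalAddr{0x100}, GlobalAddr{0x140}, GlobalAddr{0x4000}})
            batch.add(a, /*num_nodes=*/4);
        batch.flush([](int node, const std::vector<GlobalAddr>& addrs) {
            std::printf("send %zu reads to node %d\n", addrs.size(), node);
        });
    }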

Proceedings ArticleDOI
01 Dec 2007
TL;DR: This paper examines different organizations to achieve hardware-efficient and accurate TM signatures, and finds that implementing each signature with a single k-hash-function Bloom filter (True Bloom signature) is inefficient, as it requires multi-ported SRAMs.
Abstract: Transactional Memory (TM) systems must track the read and write sets--items read and written during a transaction--to detect conflicts among concurrent transactions. Several TMs use signatures, which summarize unbounded read/write sets in bounded hardware at a performance cost of false positives (conflicts detected when none exists). This paper examines different organizations to achieve hardware-efficient and accurate TM signatures. First, we find that implementing each signature with a single k-hash-function Bloom filter (True Bloom signature) is inefficient, as it requires multi-ported SRAMs. Instead, we advocate using k single-hash-function Bloom filters in parallel (Parallel Bloom signature), using area-efficient single-ported SRAMs. Our formal analysis shows that both organizations perform equally well in theory and our simulation-based evaluation shows this to hold approximately in practice. We also show that by choosing high-quality hash functions we can achieve signature designs noticeably more accurate than the previously proposed implementations. Finally, we adapt Pagh and Rodler's cuckoo hashing to implement Cuckoo-Bloom signatures. While this representation does not support set intersection, it mitigates false positives for the common case of small read/write sets and performs like a Bloom filter for large sets.

150 citations
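
A minimal sketch of the Parallel Bloom organization described above, assuming simple multiplicative hash functions rather than the hardware hash functions evaluated in the paper: each of the k filters owns its own bit array and one hash function, so an insert sets exactly one bit per filter and a membership test ANDs one lookup per filter.

    #include <array>
    #include <bitset>
    #include <cstdint>

    // Hedged sketch of a Parallel Bloom signature: k single-hash-function filters,
    // each with its own bit array (modelling a single-ported SRAM).
    constexpr int kFilters = 4;            // k
    constexpr int kBitsPerFilter = 256;    // bits in each filter's array

    struct ParallelBloomSignature {
        std::array<std::bitset<kBitsPerFilter>, kFilters> filters;

        static uint32_t hash(uint64_t addr, int i) {
            // Illustrative multiplicative mixes, one per filter.
            static const uint64_t mult[kFilters] = {
                0x9E3779B97F4A7C15ull, 0xC2B2AE3D27D4EB4Full,
                0x165667B19E3779F9ull, 0x27D4EB2F165667C5ull };
            return static_cast<uint32_t>((addr * mult[i]) >> 40) % kBitsPerFilter;
        }

        void insert(uint64_t addr) {                  // record a read or written address
            for (int i = 0; i < kFilters; ++i) filters[i].set(hash(addr, i));
        }

        bool may_contain(uint64_t addr) const {       // false positives possible, never false negatives
            for (int i = 0; i < kFilters; ++i)
                if (!filters[i].test(hash(addr, i))) return false;
            return true;
        }

        void clear() { for (auto& f : filters) f.reset(); }   // e.g. when the transaction commits
    };

    int main() {
        ParallelBloomSignature read_sig;
        read_sig.insert(0x7f32a040);                            // address read by this transaction
        return read_sig.may_contain(0x7f32a040) ? 0 : 1;        // incoming write: conflict reported
    }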


Network Information
Related Topics (5)
Compiler: 26.3K papers, 578.5K citations, 87% related
Cache: 59.1K papers, 976.6K citations, 86% related
Parallel algorithm: 23.6K papers, 452.6K citations, 84% related
Model checking: 16.9K papers, 451.6K citations, 84% related
Programming paradigm: 18.7K papers, 467.9K citations, 83% related
Performance Metrics
No. of papers in the topic in previous years:
Year    Papers
2023    16
2022    40
2021    29
2020    63
2019    70
2018    88