Topic

Transactional memory

About: Transactional memory is a research topic. Over its lifetime, 2365 publications have been published within this topic, receiving 60818 citations.


Papers
Patent
16 Sep 2015
TL;DR: In this patent, a transactional memory system salvages a hardware transaction by recording information about an about-to-fail handler, stopping transactional execution at a first instruction in the code region when a failure is pending, and executing the about-to-fail handler using the recorded information.
Abstract: A transactional memory system salvages a hardware transaction. A processor of the transactional memory system records information about an about-to-fail handler for transactional execution of a code region, and records information about a lock elided to begin transactional execution of the code region. The processor detects a pending point of failure in the code region during the transactional execution, and based on the detecting, stops transactional execution at a first instruction in the code region and executes the about-to-fail handler using the information about the about-to-fail handler. The processor, executing the about-to-fail handler, acquires the lock using the information about the lock, commits speculative state of the stopped transactional execution, and starts non-transactional execution at a second instruction following the first instruction in the code region.

2 citations
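
As a rough illustration of the salvage flow above, the following Java sketch mirrors the described steps: record the about-to-fail handler and the elided lock, detect a pending failure, stop, acquire the lock, commit the speculative state, and continue non-transactionally. Java exposes no hardware-TM primitives, so the TxRuntime helper and its methods (beginElided, pendingFailure, commitSpeculativeState) are hypothetical placeholders, not the patent's actual interface.

```java
import java.util.concurrent.locks.ReentrantLock;

/**
 * Conceptual sketch of the salvage flow described above. TxRuntime and its
 * methods are hypothetical stand-ins for processor facilities.
 */
public class SalvageSketch {

    /** Hypothetical stand-in for hardware transactional-execution facilities. */
    static class TxRuntime {
        boolean beginElided(ReentrantLock lock) { return true; }  // elide the lock, start HTM
        boolean pendingFailure()                { return false; } // is the next access about to abort?
        void    commitSpeculativeState()        { }               // make speculative writes visible
    }

    private final ReentrantLock lock = new ReentrantLock();
    private final TxRuntime tx = new TxRuntime();

    /** Runs a code region split at the instruction that is about to fail. */
    void runCodeRegion(Runnable upToFirstInstruction, Runnable fromSecondInstruction) {
        // Record the about-to-fail handler and the elided lock before starting.
        Runnable aboutToFailHandler = () -> {
            lock.lock();                  // acquire the lock that was elided
            tx.commitSpeculativeState();  // salvage: keep the speculative work done so far
        };

        boolean transactional = tx.beginElided(lock);
        if (!transactional) {
            lock.lock();                  // fallback path: plain lock acquisition
        }
        try {
            upToFirstInstruction.run();   // transactional execution of the code region
            if (transactional && tx.pendingFailure()) {
                aboutToFailHandler.run();     // stop at the first instruction and salvage
                fromSecondInstruction.run();  // continue non-transactionally at the second instruction
                return;
            }
            fromSecondInstruction.run();
            if (transactional) tx.commitSpeculativeState();
        } finally {
            if (lock.isHeldByCurrentThread()) {
                lock.unlock();
            }
        }
    }
}
```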

01 Jan 2008
TL;DR: This paper's analysis across a wide range of applications reveals that simply stalling before arbitrating helps side-step conflicts and avoid wrong decisions; the paper also evaluates a mixed conflict detection mode that combines the best of eager and lazy.
Abstract: In the search for high performance, most transactional memory (TM) systems execute atomic blocks concurrently and must thus be prepared for data conflicts. These conflicts must be detected and the system must choose a policy in terms of when and how to manage the resulting contention. Conflict detection essentially determines when the conflict manager is invoked, which can be dealt with eagerly (when the transaction reads/writes the location), lazily at commit time, or somewhere in between. In this paper, we analyze the interaction between conflict detection and contention manager heuristics. We show that this has a significant impact on exploitation of available parallelism and overall throughput. First, our analysis across a wide range of applications reveals that simply stalling before arbitrating helps side-step conflicts and avoid making the wrong decision. HTM systems that don’t support stalling after detecting a conflict seem to be prone to cascaded aborts and livelock. Second, we show that the time at which the contention manager is invoked is an important policy decision: lazy systems are inherently more robust while eager systems seem prone to pathologies, sometimes introduced by the contention manager itself. Finally, we evaluate a mixed conflict detection mode that combines the best of eager and lazy. It resolves write-write conflicts early, saving wasted work, and read-write conflicts lazily, allowing the reader to commit/serialize prior to the writer while executing concurrently.

2 citations
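
A minimal sketch of the two policy ideas discussed above: stalling briefly before arbitrating a write-write conflict, and deferring read-write conflicts to commit time (the mixed mode). The Transaction type, the timestamp heuristic, and the backoff parameters are illustrative assumptions, not the paper's actual mechanism.

```java
import java.util.concurrent.ThreadLocalRandom;

/**
 * Sketch of stall-before-arbitrate contention management plus a mixed
 * eager/lazy conflict detection policy. Types and heuristics are invented
 * for illustration.
 */
public class ConflictPolicySketch {

    enum ConflictType { WRITE_WRITE, READ_WRITE }

    static class Transaction {
        volatile boolean aborted;
        final long startTime = System.nanoTime();
    }

    /** Returns true if the calling transaction may keep running. */
    static boolean stallThenArbitrate(Transaction me, Transaction other, ConflictType type) {
        if (type == ConflictType.READ_WRITE) {
            // Mixed mode: handle read-write conflicts lazily at commit time so
            // the reader can commit/serialize before the writer.
            return true;
        }
        // Write-write: resolve eagerly to save wasted work, but stall first,
        // since many conflicts disappear on their own.
        backoff();
        if (other.aborted) return true;          // conflict resolved itself while stalling
        // Simple arbitration: the older transaction wins (timestamp heuristic).
        if (me.startTime <= other.startTime) {
            other.aborted = true;                // abort the younger transaction
            return true;
        }
        me.aborted = true;
        return false;
    }

    private static void backoff() {
        try {
            // Short randomized stall (nanosecond-granularity sleep).
            Thread.sleep(0, ThreadLocalRandom.current().nextInt(1_000, 50_000));
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```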

Proceedings ArticleDOI
23 Mar 2012
TL;DR: This paper first presents basic concepts of transactional memory, then analyzes several key implementation technologies, and finally introduces some typical TM systems in detail.
Abstract: To solve synchronization problems in parallel programming, such as deadlock, priority inversion, and convoying, the concept of transactional memory (TM) was introduced. In this paper, some basic concepts of TM are presented first, and several key implementation technologies are analyzed. Then some typical TM systems are introduced in detail. Finally, examples using OpenTM and the Intel C++ STM Compiler Prototype Edition, which extend OpenMP and C++ respectively, are given.

2 citations
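
To make the motivation concrete, the sketch below contrasts a lock-based transfer that can deadlock with the same operation expressed as an atomic block. The Stm.atomic helper is a hypothetical placeholder (here backed by a single global lock) for what an atomic-block extension such as OpenTM or an STM compiler would provide; real TM runtimes execute the block optimistically and retry on conflict.

```java
/**
 * Illustration of the deadlock problem TM is meant to remove. Stm.atomic is
 * a hypothetical placeholder, not a standard Java API.
 */
public class TransferSketch {

    static class Account {
        final Object lock = new Object();
        long balance;
        Account(long balance) { this.balance = balance; }
    }

    // Lock-based version: if two threads concurrently transfer a->b and b->a,
    // each can hold one lock while waiting forever for the other (deadlock).
    static void transferWithLocks(Account from, Account to, long amount) {
        synchronized (from.lock) {
            synchronized (to.lock) {
                from.balance -= amount;
                to.balance += amount;
            }
        }
    }

    // Transactional version: the runtime detects conflicts and retries one of
    // the transactions, so no global lock-ordering discipline is needed.
    static void transferWithTm(Account from, Account to, long amount) {
        Stm.atomic(() -> {
            from.balance -= amount;
            to.balance += amount;
        });
    }

    /** Hypothetical stand-in for a TM runtime; here just a single global lock. */
    static final class Stm {
        private static final Object GLOBAL = new Object();
        static void atomic(Runnable body) {
            synchronized (GLOBAL) { body.run(); }  // placeholder: real TM runs optimistically
        }
    }
}
```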

Proceedings ArticleDOI
12 Nov 1997
TL;DR: The behaviour of the store supports the view that a scalable cached "transactional" store architecture is a practical objective for high performance based on parallel computation across distributed memories.
Abstract: The development of scalable architectures at store levels of a layered model has concentrated on processor parallelism balanced against scalable memory bandwidth, primarily through distributed memory structures of one kind or another. A great deal of attention has been paid to hiding the distribution of memory to produce a single store image across the memory structure. It is unlikely that the distribution and concurrency aspects of scalable computing can be completely hidden at that level. This paper argues for a store layer which respects the need for caching and replication, and which does so at an "object"-level granularity of memory use. These facets are interrelated through atomic processes, leading to an interface for the store which is strongly transactional in character. The paper describes the experimental performance of such a layer on a scalable multi-computer architecture. The behaviour of the store supports the view that a scalable cached "transactional" store architecture is a practical objective for high performance based on parallel computation across distributed memories.

2 citations
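
The interface the abstract argues for can be pictured roughly as follows: a store that hands out whole objects, caches and replicates them behind the scenes, and groups reads and writes into atomic units. All names here (ObjectStore, ObjectId, Tx, InMemoryStore) are invented for illustration and do not come from the paper; distribution, caching, and replication are elided.

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative sketch of an object-granularity transactional store interface. */
public class ObjectStoreSketch {

    /** Opaque identifier for an object held by the store. */
    record ObjectId(long value) { }

    /** A unit of atomicity over object-granularity reads and writes. */
    interface Tx {
        Object read(ObjectId id);             // fetch a whole object
        void   write(ObjectId id, Object v);  // buffer an update to a whole object
        void   commit();                      // make all buffered updates visible atomically
        void   abort();                       // discard buffered updates
    }

    /** The store hands out transactions; caching and replication would live behind this. */
    interface ObjectStore {
        Tx begin();
    }

    /** Trivial single-node stand-in, just to make the interface concrete. */
    static class InMemoryStore implements ObjectStore {
        private final Map<ObjectId, Object> objects = new HashMap<>();

        @Override
        public Tx begin() {
            Map<ObjectId, Object> writes = new HashMap<>();
            return new Tx() {
                public Object read(ObjectId id) {
                    synchronized (InMemoryStore.this) {
                        return writes.getOrDefault(id, objects.get(id));
                    }
                }
                public void write(ObjectId id, Object v) { writes.put(id, v); }
                public void commit() {
                    synchronized (InMemoryStore.this) { objects.putAll(writes); }
                }
                public void abort() { writes.clear(); }
            };
        }
    }
}
```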

Book ChapterDOI
24 Sep 2015
TL;DR: This paper describes how composable memory transactions can be implemented in Java as a state-passing monad, in which transactional blocks are compiled into an intermediate monadic language.
Abstract: Transactional memory is a new programming abstraction that simplifies concurrent programming. This paper describes the parallel implementation of a Java extension for writing composable memory transactions. Transactions are composable, i.e., they can be combined to generate new transactions, and are first-class values, i.e., they can be passed as arguments to methods and returned as the result of a method call. We describe how composable memory transactions can be implemented in Java as a state-passing monad, in which transactional blocks are compiled into an intermediate monadic language. We show that this intermediate language can support different transactional algorithms, such as TL2 [9] and SwissTM [10]. The implementation described here also provides the high-level construct retry, which allows possibly-blocking transactions to be composed in sequence. Although our prototype implementation is in Java using BGGA Closures, it could be implemented in any language that supports objects and closures in some way, e.g. C#, C++, and Python.

2 citations
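
A minimal sketch of the state-passing-monad idea, assuming a hypothetical Tx interface with pure/bind, a retry combinator, and an atomically runner; the read/write log and commit validation (where a TL2- or SwissTM-style algorithm would plug in) are left as stubs.

```java
import java.util.function.Function;

/**
 * Transactions as a state-passing monad. TxLog, the commit protocol, and the
 * blocking behaviour of retry are greatly simplified.
 */
public class MonadicTmSketch {

    /** Read/write log threaded through the computation (the "state"). */
    static class TxLog { /* read set, write set, version info ... */ }

    /** Thrown by retry(); the runner re-executes the transaction. */
    static class RetryException extends RuntimeException { }

    /** A transaction is a function from the log to a result: the monad. */
    @FunctionalInterface
    interface Tx<A> {
        A run(TxLog log);

        /** return / unit */
        static <A> Tx<A> pure(A value) { return log -> value; }

        /** bind: sequence two transactions, threading the same log. */
        default <B> Tx<B> bind(Function<A, Tx<B>> next) {
            return log -> next.apply(this.run(log)).run(log);
        }
    }

    /** retry: abort and re-run the enclosing transaction. */
    static <A> Tx<A> retry() {
        return log -> { throw new RetryException(); };
    }

    /** Run a transaction to completion; re-execute on retry. */
    static <A> A atomically(Tx<A> tx) {
        while (true) {
            try {
                TxLog log = new TxLog();
                A result = tx.run(log);
                // A real runtime would validate the log and commit here,
                // e.g. with a TL2-style global version clock.
                return result;
            } catch (RetryException e) {
                // Simplification: spin; a real runtime blocks until a read location changes.
            }
        }
    }

    public static void main(String[] args) {
        // Transactions are first-class values and compose with bind.
        Tx<Integer> t = Tx.pure(20).bind(x -> Tx.pure(x + 22));
        System.out.println(atomically(t)); // prints 42
    }
}
```

Because transactions are ordinary values of type Tx<A>, they can be passed to and returned from methods and combined with bind, which is the composability property the paper emphasizes.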


Network Information
Related Topics (5)
Compiler: 26.3K papers, 578.5K citations (87% related)
Cache: 59.1K papers, 976.6K citations (86% related)
Parallel algorithm: 23.6K papers, 452.6K citations (84% related)
Model checking: 16.9K papers, 451.6K citations (84% related)
Programming paradigm: 18.7K papers, 467.9K citations (83% related)
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    16
2022    40
2021    29
2020    63
2019    70
2018    88