Topic

Transactional memory

About: Transactional memory is a research topic. Over its lifetime, 2365 publications have been published within this topic, receiving 60818 citations.


Papers
Patent
10 Dec 2019
TL;DR: In a processor capable of executing transactional memory transactions, the branch predictor speculatively predicts the outcome of branch instructions, such as taken/not-taken, the target address, and the target instruction; prediction information generated during a transaction is buffered and loaded into the predictor only when the transaction completes.
Abstract: In a branch predictor in a processor capable of executing transactional memory transactions, the branch predictor speculatively predicts the outcome of branch instructions, such as taken/not-taken, the target address and the target instruction. Branch prediction information is buffered during a transaction, and is only loaded into the branch predictor when the transaction is completed. The branch prediction information is discarded if the transaction aborts.
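An illustrative software sketch of the buffering scheme described above (this is not the patented hardware design; all names and types are invented for the sketch): prediction updates made inside a transaction accumulate in a transaction-local buffer and are merged into the predictor table only at commit, while an abort simply drops the buffer.

```haskell
-- Illustrative model of transaction-buffered branch prediction updates.
import qualified Data.Map.Strict as M

type Addr      = Int
data Outcome   = Taken | NotTaken deriving (Show, Eq)
type Predictor = M.Map Addr Outcome   -- committed prediction state
type TxBuffer  = M.Map Addr Outcome   -- updates buffered during a transaction

-- Record a branch outcome observed inside a transaction (buffered only).
recordInTx :: Addr -> Outcome -> TxBuffer -> TxBuffer
recordInTx = M.insert

-- On commit, load the buffered updates into the predictor.
commitTx :: TxBuffer -> Predictor -> Predictor
commitTx buf table = M.union buf table   -- buffered entries win on conflict

-- On abort, the buffered updates are discarded; the predictor is unchanged.
abortTx :: TxBuffer -> Predictor -> Predictor
abortTx _ table = table
```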
Proceedings ArticleDOI
06 Sep 2022
TL;DR: This paper introduces a new abstraction called Open Transactional Actions (OTAs) that provides a framework for wrapping non-transactional resources in a transactional layer; the authors argue that OTAs could be used by expert programmers to implement useful system libraries and to give a transactional semantics to fast linearizable data structures, i.e., transactional boosting.
Abstract: This paper addresses the problem of accessing external resources from inside transactions in STM Haskell, and for that purpose introduces a new abstraction called Open Transactional Actions (OTAs) that provides a framework for wrapping non-transactional resources in a transactional layer. OTAs allow the programmer to access resources through IO actions from inside transactions, and also to register commit and abort handlers: the former are used to make the accesses to resources visible to other transactions at commit time, and the latter to undo changes in the resource if the transaction has to roll back. OTAs, once started, are guaranteed to run to completion before the hosting transaction can be aborted, ensuring that if a resource is accessed, its corresponding commit and abort actions are properly registered. We believe that OTAs could be used by expert programmers to implement useful system libraries and also to give a transactional semantics to fast linearizable data structures, i.e., transactional boosting. As a proof of concept, we present examples that use OTAs to implement transactional file access and transactionally boosted data types that are faster than pure STM Haskell in most cases.
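For context, a minimal STM Haskell fragment using only the standard Control.Concurrent.STM API (the paper's own OTA combinators are not reproduced here) shows the restriction that motivates OTAs: plain IO, such as file access, cannot appear inside `atomically`, so external resources need a wrapper layer with commit and abort handlers.

```haskell
-- Standard STM Haskell: only STM actions may run inside `atomically`,
-- so a transaction cannot directly perform IO such as writing a log file.
import Control.Concurrent.STM

transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  modifyTVar' from (subtract amount)
  modifyTVar' to   (+ amount)
  -- appendFile "transfer.log" "..."  -- rejected by the type checker:
  --                                  -- IO () is not an STM action

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 25)
  readTVarIO b >>= print   -- prints 25
```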
Book ChapterDOI
09 May 2013
TL;DR: This paper proposes an approach in which each cycle in the PG is detected cooperatively, in parallel, by the transactions that form it, rather than each transaction detecting cycles individually, and shows that the average execution time and communication cost of all transactions can be decreased.
Abstract: CS (Conflict Serializability) is a recently proposed, more relaxed correctness criterion that can increase transactional memory’s parallelism. The DDA (Distributed Dependency-Aware) model was recently proposed as the first implementation of CS in distributed STM (Software Transactional Memory). However, its transactions detect conflicts individually by searching for cycles in the PG (Precedence Graph), which causes extra runtime overhead, especially when transactions access many objects or the PG is large. In this paper, we propose an approach in which each cycle in the PG is detected cooperatively, in parallel, by the transactions that form it, rather than each transaction detecting cycles individually. Experimental results show that the average execution time and communication cost of all transactions, including aborted ones, decrease to 76% and 78% of those in DDA, respectively. Its speedup over baseDSTM, which employs two-phase locking, is up to 2.56×.
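A minimal sketch of the cycle check that underlies both DDA and the proposed approach: a naive sequential depth-first search over a precedence graph. The paper's contribution, having the transactions on a cycle detect it cooperatively in parallel in a distributed STM, is not reproduced here; the types and names below are assumptions made for illustration.

```haskell
-- Naive cycle detection on a precedence graph (PG): an edge t1 -> t2 means
-- t1 must be serialized before t2; a cycle means the transactions involved
-- cannot all commit under conflict serializability.
import qualified Data.Map.Strict as M
import qualified Data.Set as S

type TxId = Int
type PG   = M.Map TxId [TxId]

hasCycle :: PG -> Bool
hasCycle pg = any (reaches S.empty) (M.keys pg)
  where
    -- Depth-first search that reports a cycle when it revisits a node
    -- already on the current path.
    reaches onPath v
      | v `S.member` onPath = True
      | otherwise =
          any (reaches (S.insert v onPath)) (M.findWithDefault [] v pg)
```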
Patent
31 Jul 2015
TL;DR: In this article, a memory node for fault-tolerant computing includes a log pipeline coupled to a compute node; the memory node holds an active log whose entries, received from the log pipeline, record changes to a set of transactional memory pages in the compute node.
Abstract: A memory node for fault-tolerant computing includes a log pipeline coupled to a compute node. The memory node contains a set of active pages and an active log whose entries, received from the log pipeline, record changes to a set of transactional memory pages in the compute node. The memory node processes the active log to keep each active page either in a fully recovered, fresh state or in a stale state that lags the corresponding transactional memory page by no more than a predetermined delta of log entries, with heavily accessed and critical pages acted on before other pages.
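An illustrative software model of the log-replay idea described above (not the patented design; page contents and names are simplified assumptions): each log entry names a page and its new contents, and applying a prefix of the log leaves every page either fully fresh or stale by at most its unapplied entries.

```haskell
-- Illustrative model of processing an active log to refresh active pages.
import qualified Data.Map.Strict as M

type PageId   = Int
type Page     = String                 -- page contents, simplified
data LogEntry = LogEntry PageId Page

-- Apply a batch of log entries to the set of active pages, in order.
applyLog :: [LogEntry] -> M.Map PageId Page -> M.Map PageId Page
applyLog entries pages =
  foldl (\ps (LogEntry pid new) -> M.insert pid new ps) pages entries

-- Count how many log entries a given page is still behind (its "delta").
staleness :: PageId -> [LogEntry] -> Int
staleness pid unapplied =
  length [p | LogEntry p _ <- unapplied, p == pid]
```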
Proceedings ArticleDOI
19 Oct 2008
TL;DR: Transactional memory is a means of simplifying mutual exclusion in shared-memory applications; at its core, it provides programmers with a simple language primitive, the atomic block.
Abstract: While the number of cores in the CPUs of modern computers has been steadily increasing, improvements in the way we program concurrent applications have proceeded at a slower pace. One idea that seems to have gained a fair amount of traction is transactional memory (TM). Transactional memory is a means of simplifying mutual exclusion in shared-memory applications. While it is a complex topic, at its core, transactional memory provides a simple language primitive to programmers, the atomic block. Code executing inside the atomic block runs as if no other threads were executing at the same time.
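A minimal example of the atomic-block idea using GHC's STM, one concrete transactional memory implementation (other TM systems expose the same idea through different syntax): the body passed to `atomically` behaves as if no other thread executed concurrently.

```haskell
-- `atomically` plays the role of the atomic block: the two updates below are
-- seen by other threads either both applied or not at all.
import Control.Concurrent.STM

creditBoth :: TVar Int -> TVar Int -> Int -> IO ()
creditBoth x y n = atomically $ do
  modifyTVar' x (+ n)
  modifyTVar' y (+ n)

main :: IO ()
main = do
  x <- newTVarIO 0
  y <- newTVarIO 0
  creditBoth x y 10
  (,) <$> readTVarIO x <*> readTVarIO y >>= print   -- prints (10,10)
```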

Network Information
Related Topics (5)
Compiler: 26.3K papers, 578.5K citations, 87% related
Cache: 59.1K papers, 976.6K citations, 86% related
Parallel algorithm: 23.6K papers, 452.6K citations, 84% related
Model checking: 16.9K papers, 451.6K citations, 84% related
Programming paradigm: 18.7K papers, 467.9K citations, 83% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    16
2022    40
2021    29
2020    63
2019    70
2018    88