
Transactional memory

About: Transactional memory is a research topic. Over the lifetime, 2365 publications have been published within this topic receiving 60818 citations.


Papers
Posted Content
TL;DR: This paper compares the level of concurrency one can obtain by converting a sequential program into a concurrent one using optimistic or pessimistic techniques, and proposes a list-based set algorithm that is optimal in the sense that it accepts all correct concurrent schedules.
Abstract: Modern concurrent programming benefits from a large variety of synchronization techniques. These include conventional pessimistic locking, as well as optimistic techniques based on conditional synchronization primitives or transactional memory. Yet, it is unclear which of these approaches better leverage the concurrency inherent to multi-cores. In this paper, we compare the level of concurrency one can obtain by converting a sequential program into a concurrent one using optimistic or pessimistic techniques. To establish fair comparison of such implementations, we introduce a new correctness criterion for concurrent programs, defined independently of the synchronization techniques they use. We treat a program's concurrency as its ability to accept a concurrent schedule, a metric inspired by the theories of both databases and transactional memory. We show that pessimistic locking can provide strictly higher concurrency than transactions for some applications whereas transactions can provide strictly higher concurrency than pessimistic locks for others. Finally, we show that combining the benefits of the two synchronization techniques can provide strictly more concurrency than any of them individually. We propose a list-based set algorithm that is optimal in the sense that it accepts all correct concurrent schedules. As we show via experimentation, the optimality in terms of concurrency is reflected by scalability gains.
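The pessimistic-versus-optimistic contrast above can be sketched with a toy list-based set. This is a minimal illustration, not the paper's algorithm: the class names and the version-counter scheme are hypothetical, chosen only to show the two styles — a pessimistic set that holds a lock for the whole operation, and an optimistic set that reads without locking and validates a version counter in a short commit step, retrying on conflict.

```python
import threading

class PessimisticSet:
    """List-based set guarded by a single lock: every operation
    holds the lock end to end, so operations never overlap."""
    def __init__(self):
        self._items = []                 # kept sorted
        self._lock = threading.Lock()

    def add(self, x):
        with self._lock:
            if x in self._items:
                return False
            self._items.append(x)
            self._items.sort()
            return True

class OptimisticSet:
    """Optimistic variant: read without locking, then validate a
    version counter inside a short commit step; retry on conflict."""
    def __init__(self):
        self._items = []
        self._version = 0
        self._lock = threading.Lock()    # guards only the commit step

    def add(self, x):
        while True:
            snapshot = self._version     # optimistic read phase
            present = x in self._items
            with self._lock:             # short commit phase
                if self._version != snapshot:
                    continue             # concurrent update: retry
                if present or x in self._items:
                    return False
                self._items.append(x)
                self._items.sort()
                self._version += 1
                return True
```

The optimistic version admits more concurrency when conflicts are rare (readers never block each other), but pays retries under contention — which is exactly the trade-off the paper quantifies.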

4 citations

Proceedings ArticleDOI
01 May 2018
TL;DR: DMP-TM (Dynamic Memory Partitioning-TM) is a novel HyTM algorithm that leverages operating system-level memory protection mechanisms to detect conflicts between HTM and STM transactions, achieving robust performance even in unfavourable workloads that exhibit high contention between the STM and HTM paths.

Abstract: Transactional Memory (TM) is an emerging paradigm that promises to significantly ease the development of parallel programs. Hybrid TM (HyTM) is probably the most promising implementation of the TM abstraction, which seeks to combine the high efficiency of hardware implementations (HTM) with the robustness and flexibility of software-based ones (STM). Unfortunately, existing Hybrid TM systems are known to suffer from high overheads to guarantee correct synchronization between concurrent transactions executing in hardware and software. This article introduces DMP-TM (Dynamic Memory Partitioning-TM), a novel HyTM algorithm that exploits, to the best of our knowledge for the first time in the literature, the idea of leveraging operating system-level memory protection mechanisms to detect conflicts between HTM and STM transactions. This innovative design allows for employing highly scalable STM implementations, while avoiding instrumentation on the HTM path. This allows DMP-TM to achieve up to ~ 20× speedups compared to state-of-the-art Hybrid TM solutions in uncontended workloads. Further, thanks to the use of simple and lightweight self-tuning mechanisms, DMP-TM achieves robust performance even in unfavourable workloads that exhibit high contention between the STM and HTM paths.
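The partitioning idea behind DMP-TM can be mimicked with a toy ownership table. To be clear, this is only a hedged sketch of the concept: in DMP-TM the cross-partition access is caught by a real OS page-protection fault (e.g. via mprotect), whereas the model below uses an explicit ownership check, and all class and field names are illustrative, not the system's API.

```python
class PartitionedHeap:
    """Toy model of dynamic memory partitioning: each page is owned
    by either the HTM side or the STM side.  A write from the other
    side is detected (in DMP-TM, via an OS page fault; here, via an
    explicit check) and migrates the page to the writer's partition."""
    def __init__(self, n_pages):
        self.owner = ["HTM"] * n_pages   # initial partition: all pages HTM-owned
        self.data = [0] * n_pages
        self.migrations = 0              # count of detected cross-partition writes

    def write(self, side, page, value):
        if self.owner[page] != side:     # cross-partition access detected
            self.migrations += 1         # would be a page-protection fault in DMP-TM
            self.owner[page] = side      # migrate the page to the faulting side
        self.data[page] = value
```

The point of the design is visible even in this sketch: as long as each side stays within its own partition, writes proceed with no instrumentation at all; only cross-partition accesses pay a detection cost.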

4 citations

Proceedings ArticleDOI
04 Jan 2015
TL;DR: This paper proposes ReDstm, a modular and non-intrusive framework for DTM that supports multiple data replication models in a general purpose programming language (Java), and shows its application in the implementation of distributed software transactional memories with different replication models.
Abstract: Distributed transactional memory (DTM) presents itself as a highly expressive and programmer-friendly model for concurrency control in distributed programming. Current DTM systems make use of both data distribution and replication as a way of providing scalability and fault tolerance, but both techniques have advantages and drawbacks. As such, each one is suitable for different target applications and deployment environments. In this paper we address the support of different data replication models in DTM. To that end we propose ReDstm, a modular and non-intrusive framework for DTM that supports multiple data replication models in a general-purpose programming language (Java). We show its application in the implementation of distributed software transactional memories with different replication models, and evaluate the framework via a set of well-known benchmarks, analysing the impact of the different replication models on memory usage and transaction throughput.
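The distinction between replication models that the paper evaluates can be illustrated with a toy replicated key-value store. This is a sketch under stated assumptions, not ReDstm's design: the class, the deterministic placement rule, and the `replication_degree` parameter are all hypothetical, used only to contrast full replication (every node holds every key) with partial replication (each key lives on k of the n nodes).

```python
class ReplicatedStore:
    """Toy model of replication models in a distributed store.
    replication_degree == n_nodes gives full replication;
    a smaller degree gives partial replication."""
    def __init__(self, n_nodes, replication_degree):
        self.nodes = [dict() for _ in range(n_nodes)]
        self.k = replication_degree

    def replicas_for(self, key):
        # illustrative placement: k consecutive nodes starting at hash(key)
        start = hash(key) % len(self.nodes)
        return [(start + i) % len(self.nodes) for i in range(self.k)]

    def put(self, key, value):
        for n in self.replicas_for(key):     # write to every replica
            self.nodes[n][key] = value

    def get(self, key):
        return self.nodes[self.replicas_for(key)[0]].get(key)
```

Even this toy makes the trade-off concrete: full replication lets any node serve any read but multiplies memory usage and write traffic by n, while partial replication caps both at k at the cost of routing reads to a replica — the memory/throughput tension the paper measures.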

4 citations

Posted Content
TL;DR: It is shown that the total number of remote memory references (RMRs) that take place in an execution of a progressive TM in which n concurrent processes perform transactions on a single data item might reach Ω(n log n), which appears to be the first RMR complexity lower bound for transactional memory.
Abstract: Transactional memory (TM) allows concurrent processes to organize sequences of operations on shared data items into atomic transactions. A transaction may commit, in which case it appears to have executed sequentially, or it may abort, in which case no data item is updated. The TM programming paradigm emerged as an alternative to conventional fine-grained locking techniques, offering ease of programming and compositionality. Though typically themselves implemented using locks, TMs hide the inherent issues of lock-based synchronization behind a clean transactional programming interface. In this paper, we explore the inherent time and space complexity of lock-based TMs, with a focus on the most popular class of progressive lock-based TMs. We derive that a progressive TM might force a read-only transaction to perform a quadratic (in the number of data items it reads) number of steps and access a linear number of distinct memory locations, closing the question of the inherent cost of read validation in TMs. We then show that the total number of remote memory references (RMRs) that take place in an execution of a progressive TM in which n concurrent processes perform transactions on a single data item might reach Ω(n log n), which appears to be the first RMR complexity lower bound for transactional memory.
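The quadratic read-validation cost referred to above is easy to see in a sketch. Assuming a simple versioned store (the class and field names below are hypothetical, not from the paper), a transaction that re-validates its entire read set after every read performs 1 + 2 + ... + k = k(k+1)/2 validation steps for k reads:

```python
class ProgressiveTMReader:
    """Sketch of incremental read validation in a progressive STM:
    after each new read, the transaction rechecks the version of every
    item read so far, so k reads cost k(k+1)/2 validation steps."""
    def __init__(self, store):
        self.store = store              # item -> (value, version)
        self.read_set = {}              # item -> version observed at read time
        self.validation_steps = 0

    def read(self, item):
        value, version = self.store[item]
        self.read_set[item] = version
        # incremental validation: recheck the whole read set
        for it, seen in self.read_set.items():
            self.validation_steps += 1
            if self.store[it][1] != seen:
                raise RuntimeError("abort: read set invalidated")
        return value
```

Here the quadratic blow-up is an artifact of re-validating everything on each read; the paper's contribution is showing this cost is inherent for progressive TMs, not an implementation accident.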

4 citations

01 Jan 2010
TL;DR: Reports early results from applying hardware transactional memory to high-performance computing applications.
Abstract: Early results using hardware transactional memory for high-performance computing applications

4 citations


Network Information
Related Topics (5)
Compiler
26.3K papers, 578.5K citations
87% related
Cache
59.1K papers, 976.6K citations
86% related
Parallel algorithm
23.6K papers, 452.6K citations
84% related
Model checking
16.9K papers, 451.6K citations
84% related
Programming paradigm
18.7K papers, 467.9K citations
83% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    16
2022    40
2021    29
2020    63
2019    70
2018    88