scispace - formally typeset
Topic

Transactional memory

About: Transactional memory is a research topic in concurrent programming. Over the lifetime of the topic, 2,365 publications have been published, receiving 60,818 citations.


Papers
Proceedings ArticleDOI
10 Feb 2018
TL;DR: This paper proposes a notion of transactional DRF and proves that, if a TM satisfies a condition generalizing opacity and a program using it is DRF assuming strong atomicity, then the program indeed has strongly atomic semantics.
Abstract: Transactional memory (TM) facilitates the development of concurrent applications by letting the programmer designate certain code blocks as atomic. Programmers using a TM often would like to access the same data both inside and outside transactions, e.g., to improve performance or to support legacy code. In this case, programmers would ideally like the TM to guarantee strong atomicity, where transactions can be viewed as executing atomically also with respect to non-transactional accesses. Since guaranteeing strong atomicity for arbitrary programs is prohibitively expensive, researchers have suggested guaranteeing it only for certain data-race free (DRF) programs, particularly those that follow the privatization idiom: from some point on, threads agree that a given object can be accessed non-transactionally. Supporting privatization safely in a TM is nontrivial, because this often requires correctly inserting transactional fences, which wait until all active transactions complete. Unfortunately, there is currently no consensus on a single definition of transactional DRF, in particular, because no existing notion of DRF takes into account transactional fences. In this paper we propose such a notion and prove that, if a TM satisfies a certain condition generalizing opacity and a program using it is DRF assuming strong atomicity, then the program indeed has strongly atomic semantics. We show that our DRF notion allows the programmer to use privatization idioms. We also propose a method for proving our generalization of opacity and apply it to the TL2 TM.

4 citations
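The privatization idiom and transactional fence described in the abstract above can be sketched with a toy lock-based STM. This is a hypothetical illustration only: the class and function names are invented, a single global lock stands in for a real TM's optimistic concurrency control, and the fence is modeled as waiting for the active-transaction count to drain.

```python
import threading

class ToySTM:
    """Toy STM sketch (illustration only): one global lock serializes
    transactions; a condition variable tracks how many transactions are
    active so a transactional fence can wait for them to complete."""

    def __init__(self):
        self._lock = threading.Lock()     # serializes transactions
        self._cv = threading.Condition()  # guards the active-transaction count
        self._active = 0

    def atomic(self, fn):
        """Run fn as a transaction (here: under the global lock)."""
        with self._cv:
            self._active += 1
        try:
            with self._lock:
                return fn()
        finally:
            with self._cv:
                self._active -= 1
                self._cv.notify_all()

    def fence(self):
        """Transactional fence: block until all active transactions finish."""
        with self._cv:
            while self._active > 0:
                self._cv.wait()


stm = ToySTM()
shared = {"counter": 0, "privatized": False}

def privatize_and_update():
    # 1. Inside a transaction, announce that `counter` is now private.
    stm.atomic(lambda: shared.__setitem__("privatized", True))
    # 2. Fence: wait for any in-flight transaction that might still be
    #    accessing `counter` transactionally.
    stm.fence()
    # 3. From here on, threads agree the object is private, so it can be
    #    accessed non-transactionally (no TM instrumentation needed).
    shared["counter"] += 1

privatize_and_update()
```

The point of the idiom is step 2: without the fence, a transaction that started before the privatizing transaction committed could still observe or overwrite the now-private object, which is exactly the race the paper's transactional-DRF notion is designed to rule out.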

Patent
27 Jun 2012
TL;DR: In this paper, a transactional memory system for a distributed environment is described, comprising a transaction logic module, a transaction management module, a shared data management module and a network communication module.
Abstract: The invention belongs to the field of parallel programming, in particular to a transactional memory system for a distributed environment, which comprises a transaction logic module, a transaction management module, a shared data management module and a network communication module. The transaction logic module is responsible for the basic functions of a single transaction; the transaction management module is responsible for managing the transactions existing in the system; the shared data management module is responsible for managing all distributed shared data in the system and the transactional operations on that data; and the network communication module is responsible for receiving network communication messages from the shared data management module and transmitting them to the shared data management module on the target node. The transactional memory system can control the consistency of distributed shared variables in a distributed environment, enabling distributed programs to access distributed shared variables transactionally without using distributed locks to control the consistency of the shared variables.

4 citations
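The four-module decomposition described in the patent abstract above can be sketched as a class skeleton. All class and method names below are invented for illustration; the patent specifies only the modules' responsibilities, not an API.

```python
class TransactionLogic:
    """Basic functions of a single transaction (hypothetical API)."""
    def begin(self):
        return {"writes": {}}
    def commit(self, tx, store):
        store.update(tx["writes"])

class TransactionManager:
    """Tracks the multiple transactions existing in the system."""
    def __init__(self):
        self.active = set()
    def register(self, tx_id):
        self.active.add(tx_id)

class SharedDataManager:
    """Owns the distributed shared data and transactional operations on it."""
    def __init__(self):
        self.store = {}

class NetworkModule:
    """Forwards shared-data messages to the SharedDataManager on the
    target node (stubbed here as local delivery only)."""
    def __init__(self, sdm):
        self.sdm = sdm
    def deliver(self, key, value):
        self.sdm.store[key] = value

class DistributedTM:
    """Wires the four modules together, mirroring the decomposition
    described in the abstract."""
    def __init__(self):
        self.shared_data = SharedDataManager()
        self.network = NetworkModule(self.shared_data)
        self.tx_manager = TransactionManager()
        self.tx_logic = TransactionLogic()
```

The key design point the abstract emphasizes is that consistency of shared variables is enforced through the transactional layer rather than through distributed locks; in this skeleton that would live in `SharedDataManager` and `TransactionLogic.commit`.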

Proceedings ArticleDOI
12 Aug 2007
TL;DR: LSTM is the first nonblocking implementation of dynamic software transactional memory with O(k) local step complexity and contention, where k is the size of the operation's data set.
Abstract: LSTM (local software transactional memory) is the first nonblocking implementation of dynamic software transactional memory with O(k)-local step complexity and contention, where k is the size of the operation's data set.

4 citations

Proceedings ArticleDOI
TL;DR: This paper introduces the abstraction of Heterogeneous Transactional Memory (HeTM), which reduces the complexity of programming heterogeneous CPU-GPU systems, together with a speculative implementation named SHeTM.
Abstract: Modern heterogeneous computing architectures, which couple multi-core CPUs with discrete many-core GPUs (or other specialized hardware accelerators), enable unprecedented peak performance and energy efficiency levels. Unfortunately, though, developing applications that can take full advantage of the potential of heterogeneous systems is a notoriously hard task. This work takes a step towards reducing the complexity of programming heterogeneous systems by introducing the abstraction of Heterogeneous Transactional Memory (HeTM). HeTM provides programmers with the illusion of a single memory region, shared among the CPUs and the (discrete) GPU(s) of a heterogeneous system, with support for atomic transactions. Besides introducing the abstract semantics and programming model of HeTM, we present the design and evaluation of a concrete implementation of the proposed abstraction, which we named Speculative HeTM (SHeTM). SHeTM makes use of a novel design that leverages speculative techniques and aims at hiding the inherently large communication latency between CPUs and discrete GPUs and at minimizing inter-device synchronization overhead. SHeTM is based on a modular and extensible design that allows for easily integrating alternative TM implementations on the CPU and GPU sides, providing the flexibility to adopt, on either side, the TM implementation (e.g., in hardware or software) that best fits the application's workload and the architectural characteristics of the processing unit. We demonstrate the efficiency of SHeTM via an extensive quantitative study based both on synthetic benchmarks and on a port of a popular object caching system.

4 citations


Network Information
Related Topics (5)
Compiler: 26.3K papers, 578.5K citations (87% related)
Cache: 59.1K papers, 976.6K citations (86% related)
Parallel algorithm: 23.6K papers, 452.6K citations (84% related)
Model checking: 16.9K papers, 451.6K citations (84% related)
Programming paradigm: 18.7K papers, 467.9K citations (83% related)
Performance Metrics
No. of papers in the topic in previous years

Year  Papers
2023  16
2022  40
2021  29
2020  63
2019  70
2018  88