Topic

Transactional memory

About: Transactional memory is a research topic. Over its lifetime, 2365 publications have been published within this topic, receiving 60818 citations.


Papers
Book Chapter
24 Feb 2011
TL;DR: This paper introduces two FPGA-based emulation systems, built only from off-the-shelf components, that respectively use a centralized and a distributed approach; it presents their hardware and software design and compares both architectures against a lock-based multiprocessor prototype.
Abstract: In this paper we discuss the development of two emulation platforms for transactional memory systems on a single Field Programmable Gate Array (FPGA). We introduce two systems, integrating only off-the-shelf components, that respectively use a centralized and a distributed approach, and present their hardware and software design. We analyze and compare these two architectures to a lock-based multiprocessor prototype, discussing the trade-offs in terms of design complexity, performance and scalability.

1 citation

Proceedings Article
04 May 2009
TL;DR: This work reports on the experience of modifying Apache's cache module to employ transactional memory instead of locks, a process referred to as transactification, and presents performance results from running Apache on a 32-core machine, showing that there are scenarios where the performance of the STM-based version is close to that of the lock-based version.
Abstract: Apache is a large-scale industrial multi-process and multithreaded application, which uses lock-based synchronization. We report on our experience in modifying Apache's cache module to employ transactional memory instead of locks, a process we refer to as transactification; we are not aware of any previous efforts to transactify legacy software of such a large scale. Along the way, we learned some valuable lessons about which tools one should use, which parts of the code one should transactify and which are better left untouched, as well as about the intricacies of commit handlers. We also stumbled across weaknesses of existing software transactional memory (STM) toolkits, leading us to identify desirable features they are currently lacking. Finally, we present performance results from running Apache on a 32-core machine, showing that there are scenarios where the performance of the STM-based version is close to that of the lock-based version. These results suggest that there are applications for which the overhead of using a software-only implementation of transactional memory is insignificant.

1 citation
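To picture the transactification described in the paper above, the following is a minimal, hypothetical Java sketch (not the paper's Apache/C code, and not a real STM toolkit API; all class and method names are illustrative only). The same cache update is written once behind a coarse lock and once as an optimistic transaction that reads a snapshot, builds a private copy, and publishes it atomically, retrying on conflict.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: the same cache update written lock-based and as an
// optimistic "transaction" that retries on conflict, mirroring the
// lock-to-STM transformation the paper calls transactification.
public class CacheTransactification {

    // Lock-based variant: coarse-grained mutual exclusion around the shared map.
    static final Map<String, String> lockedCache = new HashMap<>();

    static synchronized void putLocked(String key, String value) {
        lockedCache.put(key, value);
    }

    // Transactional variant: read a snapshot, build a new version privately,
    // then publish it atomically; retry if another thread committed first.
    static final AtomicReference<Map<String, String>> cache =
            new AtomicReference<>(new HashMap<>());

    static void putTransactional(String key, String value) {
        while (true) {
            Map<String, String> snapshot = cache.get();            // begin: consistent snapshot
            Map<String, String> updated = new HashMap<>(snapshot); // speculative, thread-private copy
            updated.put(key, value);                               // transactional write
            if (cache.compareAndSet(snapshot, updated)) {          // commit: validate and publish
                return;
            }
            // conflict detected: another update committed first, so retry with a fresh snapshot
        }
    }

    public static void main(String[] args) {
        putLocked("/index.html", "cached page");
        putTransactional("/index.html", "cached page");
        System.out.println(cache.get());
    }
}

A real STM toolkit, as used in the paper, applies this speculate-validate-retry pattern automatically and at the granularity of individual memory locations rather than whole-map copies, which is where the software overhead discussed in the abstract comes from.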

01 Jan 2008
TL;DR: This thesis introduces a dynamic contention manager that monitors current performance and selects the appropriate contention-management policy accordingly, yielding higher average performance over time than existing static implementations.
Abstract: ADAPTIVE SOFTWARE TRANSACTIONAL MEMORY: DYNAMIC CONTENTION MANAGEMENT by Joel Cameron Frank. This thesis addresses the problem of contention management in Software Transactional Memory (STM), which is a scheme for managing shared memory in a concurrent programming environment. STM views shared memory in a way similar to that of a database; read and write operations are handled through transactions, with changes to the shared memory becoming permanent through commit operations. Research on this subject reveals that there are currently varying methods for collision detection, data validation, and contention management, each of which is preferred in different situations. This thesis introduces a dynamic contention manager that monitors current performance and chooses the proper contention manager accordingly. Performance calculations, and subsequent polling of the underlying library, are minimized. As a result, this adaptive contention manager yields a higher average performance level over time when compared with existing static implementations.

1 citation
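As a rough illustration of the dynamic contention management described above, here is a small hypothetical Java sketch; the interface and policy names are invented, not taken from the thesis. Two simple policies sit behind a common interface, and an adaptive manager measures commit throughput over a fixed window of commits and switches to whichever policy last performed better.

// Hypothetical sketch of dynamic contention management. The STM is assumed to
// call resolveConflict() when two transactions collide and notifyCommit()
// after every successful commit.
interface ContentionManager {
    /** Return true to abort the other (enemy) transaction, false to back off and retry. */
    boolean resolveConflict(long myStartTime, long enemyStartTime);
}

class BackoffManager implements ContentionManager {
    public boolean resolveConflict(long myStartTime, long enemyStartTime) {
        try { Thread.sleep(1); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return false; // wait briefly, then let the caller retry
    }
}

class AggressiveManager implements ContentionManager {
    public boolean resolveConflict(long myStartTime, long enemyStartTime) {
        return myStartTime < enemyStartTime; // older transaction wins, younger one is aborted
    }
}

class AdaptiveManager implements ContentionManager {
    private final ContentionManager[] policies = { new BackoffManager(), new AggressiveManager() };
    private final double[] throughput = new double[policies.length]; // commits/second in last window
    private int active = 0;
    private long windowCommits = 0;
    private long windowStart = System.nanoTime();
    private static final long WINDOW = 10_000;                       // commits per measurement window

    public synchronized boolean resolveConflict(long myStartTime, long enemyStartTime) {
        return policies[active].resolveConflict(myStartTime, enemyStartTime);
    }

    /** Re-evaluate the active policy once per window of committed transactions. */
    public synchronized void notifyCommit() {
        if (++windowCommits < WINDOW) return;
        long now = System.nanoTime();
        throughput[active] = windowCommits / ((now - windowStart) / 1e9);
        windowCommits = 0;
        windowStart = now;
        int other = 1 - active;
        // switch if the other policy was faster the last time it ran, or has never been sampled
        if (throughput[other] == 0.0 || throughput[other] > throughput[active]) {
            active = other;
        }
    }
}

Limiting the measurement to one timestamp and one division per window reflects the thesis's point that performance calculations and polling of the underlying library should be minimized.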

Proceedings Article
01 Sep 2010
TL;DR: QuickTM, a new hardware transactional memory (HTM) architecture, is proposed; it incorporates three features that address known bottlenecks in existing HTM architectures, handles transaction overflow gracefully, and outperforms the current overflow-aware HTM proposal, OneTM-concurrent, by 12% on average.
Abstract: Transactional Memory (TM) is an emerging technology which simplifies concurrency control in a parallel program. In this paper we propose QuickTM, a new hardware transactional memory (HTM) architecture. It incorporates three features to address known bottlenecks in existing HTM architectures. First, we propose hardware-only dynamic detection of true-shared variables. Our results show that true-shared variables account for only about 20% of the commit set of any transaction; the rest can be completely disregarded during the commit phase. This shortens every commit phase drastically, resulting in a significant overall speed-up. Second, we keep both the speculative and the last committed version local to each processor. This is beneficial when a transaction is repeated in a loop: the processor's requests are satisfied from the L1 data cache (L1D) itself. Furthermore, since both versions are maintained locally, the commit action involves only a broadcast of addresses. Third, we propose a mechanism to address overflow in transactions. In our proposal, each processor continues to run transactions even if one processor has overflowed its L1D. Our technique eliminates the stall of a thread even if it conflicts with the overflowed transaction. The overflowed transaction commits in place and periodically broadcasts its write-set addresses, termed a “partial commit”. This gradually reduces conflicts and allows other threads to progress towards commit. Moreover, the technique does not require any additional hardware at any memory hierarchy level beyond L1. QuickTM outperforms the state-of-the-art scalable HTM architecture, Scalable-TCC, on average by 20% in the latest TM benchmark suite, STAMP. It outperforms the original TCC proposal with serialized commit by 28% on average. The maximum speed-ups achieved in these two cases are 43% and 67%, respectively. Our proposal handles transaction overflow gracefully and outperforms the current overflow-aware HTM proposal, OneTM-concurrent, by 12% on average.

1 citation
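The first of the three features above, filtering the commit set down to true-shared variables, can be pictured with a small software model. The sketch below is purely illustrative Java (QuickTM implements this detection dynamically in cache hardware, and every name here is invented): only write-set addresses that some other core has actually read survive into the commit broadcast, which is why dropping the remaining roughly 80% of the commit set shortens the commit phase.

import java.util.HashSet;
import java.util.Set;

// Software model of the idea only; QuickTM does this with cache hardware, not code.
// At commit time, only write-set addresses that some other core has actually read
// ("true-shared") are broadcast; thread-private writes are skipped entirely.
public class CommitSetFilter {

    static Set<Long> writeSet = new HashSet<>();      // addresses written by the committing transaction
    static Set<Long> readByOthers = new HashSet<>();  // addresses other cores have requested

    static Set<Long> commitBroadcast() {
        Set<Long> broadcast = new HashSet<>(writeSet);
        broadcast.retainAll(readByOthers);            // keep only true-shared addresses
        return broadcast;
    }

    public static void main(String[] args) {
        writeSet.add(0x1000L);        // a genuinely shared counter
        writeSet.add(0x2000L);        // a thread-private scratch buffer
        readByOthers.add(0x1000L);
        System.out.println("broadcast only: " + commitBroadcast()); // only 0x1000 is sent
    }
}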

Proceedings Article
01 Oct 2014
TL;DR: This work considers the multi-versioning (MV) model of using multiple object versions in DTM to avoid unnecessary aborts, and presents a transactional scheduler, called the partial rollback-based transactional scheduler (PTS), for a multi-versioned DTM model.
Abstract: In-memory transactional data grids, often referred to as NoSQL data grids, demand high concurrency for scalability and high performance in data-intensive applications. As an alternative concurrency control model, distributed transactional memory (DTM) promises to alleviate the difficulties of lock-based distributed synchronization. We consider the multi-versioning (MV) model of using multiple object versions in DTM to avoid unnecessary aborts. MV transactional memory inherently guarantees commits of read-only transactions, but limits concurrency of write transactions. We present a transactional scheduler, called the partial rollback-based transactional scheduler (PTS), for a multi-versioned DTM model. The model supports multiple object versions to exploit concurrency of read-only transactions, and detects conflicts of write transactions at the object level. Instead of aborting a transaction, PTS assigns backoff times to conflicting transactions, and the transaction is rolled back partially. Our implementation, integrated with a popular open-source transactional in-memory data store (i.e., Red Hat's Infinispan), reveals that PTS improves transactional throughput over MV DTM without PTS by as much as 2.4×.

1 citation
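The multi-versioning property that makes read-only transactions always commit can be sketched in a few lines of Java. This is a hypothetical illustration, not the paper's Infinispan or PTS code: each object keeps committed versions keyed by a commit timestamp, and a read-only transaction reads the newest version no newer than its start timestamp, so a concurrent writer can never force it to abort. PTS's backoff assignment and partial rollback of conflicting write transactions are omitted here.

import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.atomic.AtomicLong;

// Minimal multi-versioning sketch: each object keeps committed versions keyed
// by commit timestamp, so a read-only transaction can always read the version
// visible at its start time and never needs to abort.
public class MultiVersionedObject<T> {

    private static final AtomicLong clock = new AtomicLong(0); // global commit timestamp

    // committed versions, ordered by commit timestamp
    private final TreeMap<Long, T> versions = new TreeMap<>();

    public MultiVersionedObject(T initial) {
        versions.put(0L, initial);
    }

    /** Write transaction: install a new committed version at a fresh timestamp. */
    public synchronized long commitNewVersion(T value) {
        long ts = clock.incrementAndGet();
        versions.put(ts, value);
        return ts;
    }

    /** Read-only transaction: read the newest version no newer than its start timestamp. */
    public synchronized T readAt(long startTimestamp) {
        Map.Entry<Long, T> e = versions.floorEntry(startTimestamp);
        return e.getValue();   // always succeeds: some version <= startTimestamp exists
    }

    /** Start timestamp for a new transaction (its snapshot point). */
    public static long begin() {
        return clock.get();
    }

    public static void main(String[] args) {
        MultiVersionedObject<String> account = new MultiVersionedObject<>("balance=100");
        long readerStart = begin();               // read-only transaction begins
        account.commitNewVersion("balance=50");   // a concurrent writer commits afterwards
        // the reader still sees its snapshot and commits without aborting
        System.out.println(account.readAt(readerStart)); // prints balance=100
    }
}

In this model a write transaction simply installs a new version at a fresh timestamp; in the paper's DTM setting, it is conflicts among these write transactions that PTS handles with backoff and partial rollback instead of outright aborts.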


Network Information
Related Topics (5)
Compiler: 26.3K papers, 578.5K citations, 87% related
Cache: 59.1K papers, 976.6K citations, 86% related
Parallel algorithm: 23.6K papers, 452.6K citations, 84% related
Model checking: 16.9K papers, 451.6K citations, 84% related
Programming paradigm: 18.7K papers, 467.9K citations, 83% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    16
2022    40
2021    29
2020    63
2019    70
2018    88