Topic
Transactional memory
About: Transactional memory is a research topic. Over its lifetime, 2,365 publications have been published within this topic, receiving 60,818 citations.
Papers published on a yearly basis
Papers
04 Jun 2012
TL;DR: This article extends two DTM algorithms, the Transactional Forwarding Algorithm and SCORe, with support for open-nested transactions and implements them in two frameworks for running distributed transactions, Hyflow and Infinispan.
Abstract: Distributed Transactional Memory (DTM) is a recent but promising model for programming distributed systems. It aims to present programmers with a simple-to-use distributed concurrency-control abstraction (transactions) while maintaining performance and scalability similar to distributed fine-grained locks, and it avoids the complications usually associated with such locks (e.g., distributed deadlocks). Building upon the previously proposed Transactional Forwarding Algorithm (TFA), we add support for open-nested transactions. We discuss the mechanisms and performance implications of such nesting and identify the cases where open nesting is warranted, as well as the relevant parameters for such a decision. To the best of our knowledge, our work contributes the first implementation of a DTM system with support for open-nested transactions.
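The open-nesting mechanism this abstract describes can be illustrated with a minimal sketch (hypothetical names and API — this is not Hyflow's or Infinispan's interface): a committed open-nested sub-transaction publishes its effects immediately and registers a semantic compensating action that runs only if the enclosing transaction later aborts.

```python
# Minimal sketch of open nesting (hypothetical API, not the paper's system).
# An open-nested sub-transaction commits and releases its state at once,
# leaving behind a compensating action for the parent to run on abort.

class OpenNestedTxn:
    def __init__(self):
        self.compensations = []  # undo actions for committed open sub-txns

    def run_open_nested(self, action, compensate):
        """Run a sub-transaction; on commit, publish its effects
        immediately and remember how to semantically undo them."""
        action()
        self.compensations.append(compensate)

    def abort(self):
        # Parent aborted: compensate committed sub-transactions in reverse.
        for undo in reversed(self.compensations):
            undo()


counter = {"value": 0}
txn = OpenNestedTxn()
txn.run_open_nested(
    action=lambda: counter.__setitem__("value", counter["value"] + 1),
    compensate=lambda: counter.__setitem__("value", counter["value"] - 1),
)
assert counter["value"] == 1  # sub-txn committed: effect visible at once
txn.abort()                   # parent aborts -> compensation undoes it
assert counter["value"] == 0
```

The trade-off the paper studies follows directly from this shape: open nesting reduces false conflicts by releasing state early, at the cost of writing and running semantic compensations.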
24 citations
06 Feb 2014
TL;DR: This paper seeks to identify a sweet spot between permissiveness and efficiency by introducing the Time-Warp Multi-version algorithm (TWM), and it demonstrates the practicality of this approach through an extensive experimental study showing an average performance improvement of 65% in high-concurrency scenarios.
Abstract: The notion of permissiveness in Transactional Memory (TM) translates to aborting a transaction only when it cannot be accepted in any history that guarantees the correctness criterion. This property is neglected by most TMs, which, in order to maximize the implementation's efficiency, resort to aborting transactions under overly conservative conditions. In this paper we seek to identify a sweet spot between permissiveness and efficiency by introducing the Time-Warp Multi-version algorithm (TWM). TWM is based on the key idea of allowing an update transaction that has performed stale reads (i.e., missed the writes of concurrently committed transactions) to be serialized by committing it in the past, which we call a time-warp commit. At its core, TWM uses a novel, lightweight validation mechanism with little computational overhead. TWM also guarantees that read-only transactions can never be aborted. Further, TWM guarantees Virtual World Consistency, a safety property deemed particularly relevant in the context of TM. We demonstrate the practicality of this approach through an extensive experimental study in which we compare TWM with four other TMs and show an average performance improvement of 65% in high-concurrency scenarios.
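A simplified sketch can show the multi-version substrate such an algorithm builds on and why read-only transactions need never abort: every committed write is stamped with a timestamp, and a reader always sees the latest version no newer than its own snapshot. This is only the versioned-store idea, not TWM's time-warp validation itself, and all names are illustrative.

```python
# Simplified multi-version store: readers at an old snapshot still see a
# consistent state, so read-only transactions never abort. This sketches
# the substrate TWM builds on, not the time-warp commit algorithm.

import bisect

class MVStore:
    def __init__(self):
        self.clock = 0
        self.versions = {}  # key -> sorted list of (commit_ts, value)

    def commit_write(self, key, value):
        self.clock += 1
        self.versions.setdefault(key, []).append((self.clock, value))

    def snapshot(self):
        return self.clock  # a read-only transaction's start timestamp

    def read(self, key, snap_ts):
        # Latest version with commit_ts <= snap_ts.
        vs = self.versions.get(key, [])
        i = bisect.bisect_right(vs, (snap_ts, float("inf")))
        return vs[i - 1][1] if i else None


store = MVStore()
store.commit_write("x", 1)
snap = store.snapshot()        # read-only transaction starts here
store.commit_write("x", 2)     # a concurrent writer commits
assert store.read("x", snap) == 1               # reader keeps its snapshot
assert store.read("x", store.snapshot()) == 2   # new readers see the update
```

TWM's contribution sits on top of such versioning: an update transaction that read an old snapshot may still commit by being serialized "in the past", provided a lightweight validation shows no later transaction observed state it would invalidate.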
24 citations
21 Jan 2009
TL;DR: The costs of strong isolation are reduced by customizing isolation barriers for their observed usage: hot swapping tightens or loosens the hypothesized access pattern while preserving strong isolation, and a family of optimization hypotheses balances verification cost against generality.
Abstract: Speed improvements in today's processors have largely been delivered in the form of multiple cores, increasing the importance of abstractions that ease parallel programming. Software transactional memory (STM) addresses many of the complications of concurrency by providing a simple and composable model for safe access to shared data structures. Software transactions extend a language with an atomic primitive that declares that the effects of a block of code should not be interleaved with actions executing concurrently on other threads. Adding barriers to shared memory accesses provides atomicity, consistency and isolation. Strongly isolated STMs preserve the safety properties of transactions for all memory operations in a program, not just those inside an atomic block. Isolation barriers are added to non-transactional loads and stores in such a system to prevent those accesses from observing or corrupting a partially completed transaction. Strong isolation is especially important when integrating transactions into an existing language and memory model. Isolation barriers have a prohibitive performance overhead, however, so most STM proposals have chosen not to provide strong isolation. In this paper we reduce the costs of strong isolation by customizing isolation barriers for their observed usage. The customized barriers provide accelerated execution by blocking threads whose accesses do not follow the expected pattern. We use hot swap to tighten or loosen the hypothesized pattern, while preserving strong isolation. We introduce a family of optimization hypotheses that balance verification cost against generality. We demonstrate the feasibility of dynamic barrier optimization by implementing it in a bytecode-rewriting Java STM. Feedback-directed customization reduces the overhead of strong isolation from 505% to 38% across 11 non-transactional benchmarks; persistent feedback data further reduces the overhead to 16%. Dynamic optimization accelerates a multi-threaded transactional benchmark by 31% for weakly-isolated execution and 34% for strongly-isolated execution.
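The role of an isolation barrier can be sketched in a few lines (a hypothetical, heavily simplified per-location scheme — real STMs use ownership records and far cheaper fast paths): a non-transactional read first checks whether a transaction owns the location and blocks until the owner finishes, so it can never observe a partially completed transaction.

```python
# Hypothetical sketch of a strong-isolation read barrier: non-transactional
# reads block while any transaction owns the location, so they never see a
# partially completed transaction. Real STM barriers are far cheaper.

import threading

class Location:
    def __init__(self, value):
        self.value = value
        self.owner_done = threading.Event()
        self.owner_done.set()  # no transaction owns this location initially

    def txn_acquire(self):
        self.owner_done.clear()   # a transaction now owns the location

    def txn_release(self):
        self.owner_done.set()     # the transaction committed or aborted

    def read_barrier(self):
        self.owner_done.wait()    # block non-transactional readers on owners
        return self.value


loc = Location(10)
assert loc.read_barrier() == 10   # fast path: no owner, no blocking
loc.txn_acquire()
loc.value = 20                    # transactional write in progress
loc.txn_release()                 # commit makes the update visible
assert loc.read_barrier() == 20
```

The paper's optimization then amounts to specializing this barrier per observed access pattern so that, in the common case, the check costs almost nothing.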
24 citations
28 Mar 2010
TL;DR: This paper carefully studies a range of factors that can adversely influence transactional memory performance.
Abstract: Transactional memory promises to generalize transactional programming to mainstream languages and data structures. The purported benefit of transactions is that they are easier to program correctly than fine-grained locking and perform just as well. This performance claim is not always borne out because an application may violate a common-case assumption of the TM designer or because of external system effects. This paper carefully studies a range of factors that can adversely influence transactional memory performance.
24 citations
04 May 2008
TL;DR: A novel energy-efficient hardware semaphore construction is proposed and evaluated, in which cores spin on local scratchpad memory, reducing the load on the shared bus.
Abstract: We evaluate the energy-efficiency and performance of a number of synchronization mechanisms adapted for embedded devices. We focus on simple hardware accelerators for common software synchronization patterns. We compare the energy efficiency of a range of shared memory benchmarks using both spin-locks and a simple hardware transactional memory. In most cases, transactional memory provides both significantly reduced energy consumption and increased throughput. We also consider applications that employ concurrency patterns based on semaphores, such as pipelines and barriers. We propose and evaluate a novel energy-efficient hardware semaphore construction in which cores spin on local scratchpad memory, reducing the load on the shared bus.
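The "spin on local memory" idea has a well-known software analogue (offered here as an analogy, not as the paper's hardware design): queue locks in the MCS style, where each waiting thread spins on its own node's flag rather than on a single shared word, which in hardware keeps spin traffic off the shared bus.

```python
# Software analogy of spinning on local memory: an MCS-style queue lock.
# Each waiter spins on its OWN node's flag, not a shared location; a
# threading.Lock stands in for the atomic tail swap a real MCS lock uses.

import threading
import time

class Node:
    __slots__ = ("locked", "next")
    def __init__(self):
        self.locked = False
        self.next = None

class MCSLock:
    def __init__(self):
        self.tail = None
        self._swap = threading.Lock()  # stand-in for an atomic swap

    def acquire(self, node):
        node.locked, node.next = True, None
        with self._swap:
            pred, self.tail = self.tail, node   # enqueue at the tail
        if pred is not None:
            pred.next = node
            while node.locked:    # spin only on this thread's own node
                time.sleep(0)     # yield the GIL; hardware would busy-spin

    def release(self, node):
        with self._swap:
            if self.tail is node:     # no successor: lock becomes free
                self.tail = None
                return
        while node.next is None:      # successor is mid-enqueue; wait
            time.sleep(0)
        node.next.locked = False      # hand the lock to the successor


lock, total = MCSLock(), [0]

def worker():
    node = Node()
    for _ in range(200):
        lock.acquire(node)
        total[0] += 1             # critical section
        lock.release(node)

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
assert total[0] == 400
```

In the paper's setting the same principle is realized in hardware: each core polls its private scratchpad instead of a shared bus location, so waiting consumes neither bus bandwidth nor (with appropriate clock gating) much energy.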
23 citations