Topic
Transactional memory
About: Transactional memory is a research topic. Over its lifetime, 2,365 publications have been published within this topic, receiving 60,818 citations.
Papers published on a yearly basis
Papers
03 Feb 1992
TL;DR: The authors propose a distributed shared memory model based on a paged, segmented two-level address space and an extended set of memory operations that support mapping between local and global address spaces and mapping of processes to transactions.
Abstract: The authors propose a distributed shared memory model based on a paged, segmented two-level address space and an extended set of memory operations. In addition to the traditional read and write operations, the memory model includes operations that support mapping between local and global address spaces and mapping of processes to transactions. An architecture and associated algorithm are outlined for a virtual memory management unit that provides concurrency control for transactions. Although the traditional concept of the transaction is assumed, only the aspects of concurrency control and coherence are addressed.
9 citations
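The paper's MMU-based concurrency control mechanism is not reproduced here, but the underlying idea of transactional concurrency control can be sketched in software: each transaction buffers its writes and validates its reads at commit time. A minimal sketch, assuming a versioned shared memory (all class and method names below are hypothetical, not from the paper):

```python
# Minimal sketch of optimistic transactional concurrency control:
# reads record a per-location version, writes are buffered, and
# commit validates the read set before publishing the write set.

class Memory:
    """Shared memory with a version counter per address."""
    def __init__(self):
        self.data = {}      # address -> value
        self.version = {}   # address -> version counter

    def read(self, addr):
        return self.data.get(addr, 0), self.version.get(addr, 0)

class Transaction:
    def __init__(self, mem):
        self.mem = mem
        self.read_set = {}   # address -> version observed
        self.write_set = {}  # address -> buffered new value

    def read(self, addr):
        if addr in self.write_set:        # read-your-own-writes
            return self.write_set[addr]
        value, ver = self.mem.read(addr)
        self.read_set[addr] = ver
        return value

    def write(self, addr, value):
        self.write_set[addr] = value      # buffered until commit

    def commit(self):
        # Validate: abort if any location we read has changed since.
        for addr, ver in self.read_set.items():
            if self.mem.version.get(addr, 0) != ver:
                return False              # conflict detected -> abort
        # Publish the write set, bumping each location's version.
        for addr, value in self.write_set.items():
            self.mem.data[addr] = value
            self.mem.version[addr] = self.mem.version.get(addr, 0) + 1
        return True

mem = Memory()
t1 = Transaction(mem)
t1.write("x", 42)
t1_committed = t1.commit()     # no conflicts: succeeds

t2 = Transaction(mem)
t2.read("x")                   # observes version of x
writer = Transaction(mem)
writer.write("x", 7)
writer.commit()                # x changes under t2's feet
t2.write("y", 1)
t2_committed = t2.commit()     # validation fails: t2 aborts
```

The sketch illustrates why a transaction's writes never become visible to others before a successful validation, which is the coherence property the paper's hardware unit is designed to enforce.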
01 Oct 2018
TL;DR: In this paper, a software protocol combined with a persistent memory controller is proposed to ensure the atomicity of transactions on persistent-memory-resident data and to maintain consistency between the order in which processors perform stores and the order in which the updated values become durable.
Abstract: Emerging persistent memory technologies (also known as PM, non-volatile DIMMs, or storage class memory, SCM) hold tremendous promise for accelerating popular data-management applications like in-memory databases. However, programmers now need to ensure the atomicity of transactions on persistent-memory-resident data and to maintain consistency between the order in which processors perform stores and the order in which the updated values become durable. The problem is especially challenging when high-performance isolation mechanisms like hardware transactional memory (HTM) are used for concurrency control. This work shows how HTM transactions can be ordered correctly and atomically into PM through a novel software protocol combined with a persistent memory controller, without requiring changes to processor cache hardware or HTM protocols. In contrast, previous approaches require significant changes to existing processor microarchitectures. Our approach, evaluated using both micro-benchmarks and the STAMP suite, compares well with standard (volatile) HTM transactions. It also yields significant gains in throughput and latency in comparison with persistent transactional locking.
9 citations
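The paper's hardware/software protocol is not reproduced here, but the ordering problem it addresses (updates must not become durable before their log entries) can be illustrated with a classic redo-log sketch. A simulated persist barrier stands in for real flush/fence instructions; all names are hypothetical:

```python
# Illustrative redo-log ordering for durable transactions.
# PM is simulated: a store becomes durable only after flush(),
# which plays the role of a persist barrier (e.g. flush + fence).

class PersistentMemory:
    def __init__(self):
        self.durable = {}   # what survives a crash
        self.pending = {}   # cached stores, not yet durable

    def store(self, key, value):
        self.pending[key] = value

    def flush(self):        # persist barrier: drain cached stores
        self.durable.update(self.pending)
        self.pending.clear()

def durable_tx(pm, updates):
    # 1. Persist the redo log *before* any in-place update,
    #    then persist a commit marker.
    pm.store("log", dict(updates))
    pm.flush()
    pm.store("log_committed", True)
    pm.flush()
    # 2. Only now apply the updates in place.
    for k, v in updates.items():
        pm.store(k, v)
    pm.flush()
    # 3. Truncate the log once the data is durable.
    pm.store("log_committed", False)
    pm.flush()

pm = PersistentMemory()
durable_tx(pm, {"balance": 100})
```

If a crash occurred between steps 1 and 3, recovery could replay the committed log; the barriers between steps are exactly the ordering constraints that become hard to enforce when the stores happen inside an HTM transaction, which is the gap the paper's memory controller fills.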
07 Oct 2015
TL;DR: A general model for HyTM implementations is proposed that captures the ability of hardware transactions to buffer memory accesses, and that captures for the first time the trade-off between the degree of hardware-software TM concurrency and the amount of instrumentation overhead.
Abstract: Several Hybrid Transactional Memory (HyTM) schemes have recently been proposed to complement the fast, but best-effort, nature of Hardware Transactional Memory (HTM) with a slow, reliable software backup. However, the costs of providing concurrency between hardware and software transactions in HyTM are still not well understood.
In this paper, we propose a general model for HyTM implementations, which captures the ability of hardware transactions to buffer memory accesses. The model allows us to formally quantify and analyze the amount of overhead instrumentation caused by the potential presence of software transactions. We prove that (1) it is impossible to build a strictly serializable HyTM implementation that has both uninstrumented reads and writes, even for very weak progress guarantees, and (2) the instrumentation cost incurred by a hardware transaction in any progressive opaque HyTM is linear in the size of the transaction's data set. We further describe two implementations which exhibit optimal instrumentation costs for two different progress conditions. In sum, this paper proposes the first formal HyTM model and captures for the first time the trade-off between the degree of hardware-software TM concurrency and the amount of instrumentation overhead.
8 citations
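The lower bound concerns per-location instrumentation: a software-path read cannot simply load a value, it must also consult per-location metadata to detect concurrent writers, so the cost grows with the data set. A rough sketch of one such instrumented read, in the style of a sequence lock (all names hypothetical, not the paper's construction):

```python
# Sketch of a per-location instrumented read on a HyTM software path.
# Each location carries a sequence number; readers retry until they
# see a consistent (unlocked, unchanged) snapshot. This per-location
# check is the kind of instrumentation the paper proves unavoidable.

import threading

class OwnedLocation:
    def __init__(self, value=0):
        self.value = value
        self.seq = 0                 # even = quiescent, odd = writer active
        self._lock = threading.Lock()

    def instrumented_read(self):
        while True:
            s = self.seq
            v = self.value
            # Consistent only if no writer was active (even seq)
            # and the seq did not change while we read the value.
            if s % 2 == 0 and s == self.seq:
                return v, s

    def locked_write(self, value):
        with self._lock:
            self.seq += 1            # odd: announce write in progress
            self.value = value
            self.seq += 1            # even: write complete

loc = OwnedLocation()
loc.locked_write(5)
v, s = loc.instrumented_read()
```

An uninstrumented read would skip the metadata check entirely; result (1) of the paper says a strictly serializable HyTM cannot afford that for both reads and writes.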
01 Oct 2015
TL;DR: This paper presents methods based on hardware transactional memory (HTM) for executing OpenMP barrier, critical, and taskwait directives without blocking, and shows a 73% performance improvement over traditional locking approaches and 23% better performance than other HTM approaches on critical sections.
Abstract: OpenMP applications with abundant parallelism are often characterized by high performance. Unfortunately, OpenMP applications with many synchronization or serialization points perform poorly because of blocking, i.e. the threads have to wait for each other. In this paper, we present methods based on hardware transactional memory (HTM) for executing OpenMP barrier, critical, and taskwait directives without blocking. Although HTM is still relatively new in the Intel and IBM architectures, we experimentally show a 73% performance improvement over traditional locking approaches, and 23% better performance than other HTM approaches on critical sections. Speculation over barriers can decrease execution time by up to 41%. We expect that future systems with HTM support and more cores will benefit even more from our approach, as they are more likely to block.
8 citations
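Executing a critical section with HTM typically follows a lock-elision pattern: run the section speculatively, and only after repeated aborts fall back to acquiring the real lock. The paper's runtime is not reproduced here; the sketch below simulates HTM aborts with a counter (a real implementation would use hardware begin/abort primitives such as Intel RTM's `_xbegin`), and all names are hypothetical:

```python
# Sketch of the HTM fast path / lock fallback pattern for a
# non-blocking critical section (simulated HTM aborts).

import threading

fallback_lock = threading.Lock()
MAX_RETRIES = 3

def htm_execute(critical_section, simulate_abort=0):
    """Try the section speculatively; after MAX_RETRIES aborts,
    give up and take the fallback lock."""
    aborts = 0
    for _ in range(MAX_RETRIES):
        # A real HTM transaction may abort at any point (conflict,
        # capacity, interrupt); here aborts are simulated.
        if aborts < simulate_abort:
            aborts += 1
            continue                 # transaction aborted: retry
        if fallback_lock.locked():
            continue                 # lock held: speculating would conflict
        critical_section()           # speculative execution commits
        return "htm"
    with fallback_lock:              # slow path: mutual exclusion
        critical_section()
    return "lock"

counter = [0]
path = htm_execute(lambda: counter.__setitem__(0, counter[0] + 1),
                   simulate_abort=5)  # always aborts -> falls back to lock
```

The fast path never serializes threads that do not actually conflict, which is where the reported speedups over plain locking come from; the fallback preserves correctness when speculation keeps failing.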
01 Jan 2013
TL;DR: This paper provides a first study investigating the combination of different error detection mechanisms with transactional memory, with the objective of improving energy efficiency, and introduces an analytical model that allows recommendations for future work.
Abstract: The power envelope has become a major issue in the design of computer systems. One way of reducing energy consumption is to downscale the voltage of microprocessors. However, this does not come without costs: by decreasing the voltage, the likelihood of failures increases drastically, and without mechanisms for reliability the systems would no longer operate. Reliability requires (1) error detection and (2) error recovery mechanisms. In this paper we provide a first study investigating the combination of different error detection mechanisms with transactional memory, with the objective of improving energy efficiency. We notably introduce an analytical model that allows us to give recommendations for future work.
8 citations
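The appeal of transactional memory for error recovery is that a transaction's rollback machinery doubles as the recovery mechanism: if an error is detected before commit, the transaction's buffered state is simply discarded and re-executed. A minimal sketch of that pairing, with simulated fault injection (hypothetical names, not the paper's model):

```python
# Sketch: transaction-style rollback as the error-recovery half of a
# detect-and-recover scheme, as studied for low-voltage operation.
# Errors are injected via a scripted detector.

def run_with_recovery(tx_body, detect_error, max_retries=10):
    """Execute tx_body into a private write buffer; commit only if no
    error is detected, otherwise discard the buffer and re-execute."""
    for attempt in range(max_retries):
        buffer = {}                  # transaction-private write buffer
        tx_body(buffer)
        if not detect_error():
            return buffer, attempt   # commit: buffer becomes visible
        # error detected: buffer is dropped, nothing was published
    raise RuntimeError("unrecoverable: too many faulty executions")

faults = iter([True, True, False])   # two injected faults, then clean

result, retries = run_with_recovery(
    lambda buf: buf.__setitem__("x", 42),
    detect_error=lambda: next(faults))
```

The energy question the paper models is precisely this trade-off: lower voltage saves energy per execution but raises the fault rate, and each detected fault costs a re-execution.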