Topic

Transactional memory

About: Transactional memory is a research topic. Over its lifetime, 2365 publications have been published within this topic, receiving 60818 citations.


Papers
Journal ArticleDOI
TL;DR: Transactional memory is a leading paradigm for designing concurrent applications for tomorrow's multi-core architectures and is being seriously considered by industry, both as part of software solutions and as the basis for novel hardware designs.
Abstract: Transactional memory is a leading paradigm for designing concurrent applications for tomorrow's multi-core architectures. It follows and draws much inspiration from earlier research on concurrent data structures and concurrency control. Quite remarkably, it has succeeded in breaking out of the research community and is being seriously considered by industry, both as part of software solutions and as the basis for novel hardware designs.
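For readers new to the topic, the sketch below shows what the paradigm looks like in code. It is not taken from the paper above; it uses GCC's transactional memory extension (compile with g++ -fgnu-tm -pthread), where a __transaction_atomic block executes atomically with respect to all other transactions.

```cpp
// Minimal illustration of the TM programming model (not from the paper above).
// Compile with: g++ -fgnu-tm -pthread example.cpp
#include <cstdio>
#include <thread>

static long balance_a = 100;
static long balance_b = 0;

void transfer(long amount, int reps) {
    for (int i = 0; i < reps; ++i) {
        __transaction_atomic {        // the whole block commits atomically
            balance_a -= amount;
            balance_b += amount;
        }
    }
}

int main() {
    std::thread t1(transfer, 1, 100000);
    std::thread t2(transfer, -1, 100000);
    t1.join();
    t2.join();
    // The invariant a + b == 100 holds because each transfer was atomic.
    std::printf("a=%ld b=%ld total=%ld\n",
                balance_a, balance_b, balance_a + balance_b);
    return 0;
}
```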

15 citations

Proceedings ArticleDOI
24 Oct 2012
TL;DR: The Atomic Dataflow model (ADF) is presented: a new task-based parallel programming model for C/C++ that integrates dataflow abstractions into the shared memory programming model and employs transactional memory to guarantee atomicity of shared memory updates.
Abstract: In this paper we present the Atomic Dataflow model (ADF), a new task-based parallel programming model for C/C++ which integrates dataflow abstractions into the shared memory programming model. The ADF model provides pragma directives that allow a programmer to organize a program into a set of tasks and to explicitly define input data for each task. The task dependency information is conveyed to the ADF runtime system, which constructs the dataflow task graph and builds the necessary infrastructure for dataflow execution. Additionally, the ADF model allows tasks to share data. The key idea is that computation is triggered by dataflow between tasks but that, within a task, execution occurs by making atomic updates to common mutable state. To that end, the ADF model employs transactional memory, which guarantees atomicity of shared memory updates. We show examples that illustrate how the programmability of shared memory can be improved using the ADF model. Moreover, our evaluation shows that the ADF model performs well in comparison with programs parallelized using OpenMP and transactional memory.
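To make the abstract above concrete, here is a hedged sketch of the ADF idea, not the paper's actual API: the directives shown in comments are illustrative placeholders for ADF's pragma syntax, the dataflow edge between a producer and a consumer task is modelled with a token queue, and the shared histogram is updated inside a transaction, with GCC's -fgnu-tm extension standing in for the TM supplied by the ADF runtime.

```cpp
// Sketch of the ADF structure: dataflow triggers the consumer task; shared
// state is updated atomically.  Compile with: g++ -fgnu-tm -pthread adf.cpp
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

std::queue<int> tokens;                 // dataflow edge: producer -> consumer
std::mutex q_mtx;
std::condition_variable q_cv;
int histogram[16] = {};                 // common mutable state shared by tasks

// #pragma adf task output(tokens)      -- illustrative directive, not ADF syntax
void producer() {
    for (int i = 0; i < 100; ++i) {
        std::lock_guard<std::mutex> g(q_mtx);
        tokens.push((i * 7) % 16);      // each token triggers the consumer
        q_cv.notify_one();
    }
    std::lock_guard<std::mutex> g(q_mtx);
    tokens.push(-1);                    // end-of-stream token
    q_cv.notify_one();
}

// #pragma adf task input(tokens)       -- illustrative directive, not ADF syntax
void consumer() {
    for (;;) {
        std::unique_lock<std::mutex> g(q_mtx);
        q_cv.wait(g, [] { return !tokens.empty(); });
        int v = tokens.front();
        tokens.pop();
        g.unlock();
        if (v < 0) return;
        __transaction_atomic {          // atomic update of common mutable state
            ++histogram[v];
        }
    }
}

int main() {
    std::thread p(producer), c(consumer);
    p.join();
    c.join();
    for (int b : histogram) std::printf("%d ", b);
    std::printf("\n");
    return 0;
}
```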

15 citations

Proceedings ArticleDOI
09 Mar 2011
TL;DR: Hybrid binary rewriting is proposed, which aims to automatically instrument all shared memory accesses in critical sections of x86 binaries, while achieving overhead close to that obtained when performing manual instrumentation at the source code level.
Abstract: Memory access instrumentation is fundamental to many applications such as software transactional memory systems, profiling tools and race detectors. We examine the problem of efficiently instrumenting memory accesses in x86 machine code to support software transactional memory and profiling. We aim to automatically instrument all shared memory accesses in critical sections of x86 binaries, while achieving overhead close to that obtained when performing manual instrumentation at the source code level. The two primary options in building such an instrumentation system are static and dynamic binary rewriting: the former instruments binaries at link time before execution, while the latter instruments them at runtime. Static binary rewriting offers extremely low overhead but is hampered by the limits of static analysis. Dynamic binary rewriting is able to use runtime information but typically incurs higher overhead. This paper proposes an alternative: hybrid binary rewriting. Hybrid binary rewriting is built around the idea of a persistent instrumentation cache (PIC) that is associated with a binary and contains instrumented code from it. It supports two execution modes when using instrumentation: active and passive. In the active execution mode, a dynamic binary rewriting engine (PIN) is used to intercept execution and generate instrumentation into the PIC, which is an on-disk file. This execution mode can take full advantage of runtime information. Later, passive execution can be used, where instrumented code is executed out of the PIC. This allows us to attain overheads similar to those incurred with static binary rewriting. This instrumentation methodology enables a variety of static and dynamic techniques to be applied. For example, in passive mode, execution occurs directly from the original executable save for regions that require instrumentation. This has allowed us to build a low-overhead transactional memory profiler. We also demonstrate how we can use the combination of static and dynamic techniques to eliminate instrumentation for accesses to locations that are thread-private.
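The paper's hybrid PIC machinery is not reproduced here; the sketch below only illustrates the dynamic (active-mode) side it builds on, modelled on Pin's standard memory-trace example: a pintool that intercepts every memory read and write of the program running under Pin and logs it.

```cpp
// A minimal Pin tool (modelled on Pin's standard memory-trace example) that
// logs every memory read and write of the program it runs under.
#include <cstdio>
#include "pin.H"

static FILE* trace;

// Analysis routines: called before every instrumented memory access.
VOID RecordMemRead(VOID* ip, VOID* addr)  { std::fprintf(trace, "%p: R %p\n", ip, addr); }
VOID RecordMemWrite(VOID* ip, VOID* addr) { std::fprintf(trace, "%p: W %p\n", ip, addr); }

// Instrumentation routine: called once per instruction when first encountered.
VOID Instruction(INS ins, VOID* v) {
    UINT32 memOperands = INS_MemoryOperandCount(ins);
    for (UINT32 memOp = 0; memOp < memOperands; memOp++) {
        if (INS_MemoryOperandIsRead(ins, memOp))
            INS_InsertPredicatedCall(ins, IPOINT_BEFORE, (AFUNPTR)RecordMemRead,
                                     IARG_INST_PTR, IARG_MEMORYOP_EA, memOp, IARG_END);
        if (INS_MemoryOperandIsWritten(ins, memOp))
            INS_InsertPredicatedCall(ins, IPOINT_BEFORE, (AFUNPTR)RecordMemWrite,
                                     IARG_INST_PTR, IARG_MEMORYOP_EA, memOp, IARG_END);
    }
}

VOID Fini(INT32 code, VOID* v) { std::fclose(trace); }

int main(int argc, char* argv[]) {
    if (PIN_Init(argc, argv)) return 1;          // parse Pin's command line
    trace = std::fopen("memtrace.out", "w");
    INS_AddInstrumentFunction(Instruction, 0);   // register per-instruction hook
    PIN_AddFiniFunction(Fini, 0);
    PIN_StartProgram();                          // never returns
    return 0;
}
```

Built like any pintool and launched as pin -t memtrace.so -- ./app (the tool name is illustrative), this pays the dynamic-rewriting cost on every run; the paper's hybrid scheme instead persists the generated instrumentation into an on-disk PIC so that later runs can execute it passively.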

15 citations

01 Jan 2009
TL;DR: A novel tool, TMunit, is introduced to assist researchers in designing and optimizing transactional memories (TMs); it provides a domain-specific language for specifying workloads and tests the performance and semantics of TMs.
Abstract: Transactional memory (TM) is expected to become a widely used parallel programming paradigm for multi-core architectures. To reach this goal, we need tools that not only help develop TMs, but also test and evaluate them on a wide range of workloads. In this paper, we introduce a novel tool, TMunit, to assist researchers in designing and optimizing TMs. TMunit provides a domain-specific language for specifying workloads, and tests the performance and semantics of TMs. TMunit is freely available online. It comes with a test suite that compares the performance of TMs and explains their differences using semantics tests that outline behavioral characteristics.
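TMunit's actual DSL is not reproduced here. As a purely hypothetical illustration of what a combined performance-and-semantics test checks, the sketch below runs a fixed workload on several threads, reports commit throughput, and asserts an invariant that only holds if the updates were atomic, again using GCC's -fgnu-tm extension as the TM under test.

```cpp
// Hypothetical performance-and-semantics harness (not TMunit's DSL or API).
// Compile with: g++ -fgnu-tm -pthread harness.cpp
#include <cassert>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

static long x = 0, y = 0;                 // invariant under test: x == y

void workload(int ops) {
    for (int i = 0; i < ops; ++i)
        __transaction_atomic { ++x; ++y; }   // each transaction keeps x == y
}

int main() {
    const int threads = 4, ops = 100000;
    auto start = std::chrono::steady_clock::now();

    std::vector<std::thread> pool;
    for (int t = 0; t < threads; ++t) pool.emplace_back(workload, ops);
    for (auto& th : pool) th.join();

    double secs = std::chrono::duration<double>(
                      std::chrono::steady_clock::now() - start).count();

    assert(x == y && x == (long)threads * ops);        // semantics check
    std::printf("%.0f commits/s\n", threads * ops / secs);  // performance check
    return 0;
}
```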

15 citations

Proceedings ArticleDOI
24 Jan 2015
TL;DR: A thorough investigation of the interplay between memory allocators and software transactional memory (STM) systems shows that allocators can interfere with the way memory addresses are mapped to versioned locks on state-of-the-art software transactional memory implementations.
Abstract: Although dynamic memory management accounts for a significant part of the execution time on many modern software systems, its impact on the performance of transactional memory systems has been mostly overlooked. In order to shed some light on this subject, this paper conducts a thorough investigation of the interplay between memory allocators and software transactional memory (STM) systems. We show that allocators can interfere with the way memory addresses are mapped to versioned locks on state-of-the-art software transactional memory implementations. Moreover, we observed that key aspects of allocators such as false sharing avoidance, scalability, and locality have a drastic impact on the final performance. For instance, we have detected performance differences of up to 171% in the STAMP applications when using distinct allocators. We also show that optimizations at the STM level (such as caching transactional objects) are not effective when a modern allocator is already in use. All in all, our study highlights the importance of reporting the allocator used in the performance evaluation of transactional memory systems.
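To see why the allocator matters, consider the address-to-versioned-lock mapping used by TL2-style STMs. The sketch below (lock-table size and stripe granularity are illustrative constants) shows that the lock stripe an object hits is determined entirely by the address the allocator happens to return, so placement decides which objects share locks, and likewise which objects share cache lines.

```cpp
// Illustrative address-to-versioned-lock mapping of a TL2-style STM: the
// allocator's placement decisions determine which lock stripe each object uses.
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <cstdlib>

constexpr std::size_t kLockTableSize = 1 << 20;   // number of versioned locks (illustrative)
constexpr std::size_t kGrainShift    = 3;         // 8-byte stripes (illustrative)

static std::atomic<std::uint64_t> lock_table[kLockTableSize];

// Map a memory address to its versioned lock by stripe index.
std::atomic<std::uint64_t>& lock_for(const void* addr) {
    std::uintptr_t stripe = reinterpret_cast<std::uintptr_t>(addr) >> kGrainShift;
    return lock_table[stripe % kLockTableSize];
}

int main() {
    // Two small heap objects: depending on the allocator, they may land in the
    // same stripe (false conflicts / false sharing) or in different ones.
    int* a = static_cast<int*>(std::malloc(sizeof(int)));
    int* b = static_cast<int*>(std::malloc(sizeof(int)));
    std::printf("a -> lock %p\nb -> lock %p\n",
                static_cast<void*>(&lock_for(a)),
                static_cast<void*>(&lock_for(b)));
    std::free(a);
    std::free(b);
    return 0;
}
```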

15 citations


Network Information
Related Topics (5)
Compiler: 26.3K papers, 578.5K citations (87% related)
Cache: 59.1K papers, 976.6K citations (86% related)
Parallel algorithm: 23.6K papers, 452.6K citations (84% related)
Model checking: 16.9K papers, 451.6K citations (84% related)
Programming paradigm: 18.7K papers, 467.9K citations (83% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    16
2022    40
2021    29
2020    63
2019    70
2018    88