Topic

Transactional memory

About: Transactional memory is a research topic. Over its lifetime, 2,365 publications have been published within this topic, receiving 60,818 citations.


Papers
Proceedings ArticleDOI
02 Aug 1999
TL;DR: This paper identifies a minimal set of functionalities that applications must provide in order to participate in transactional coordination processes, and discusses how the missing database functionality can be added to arbitrary applications using transactional coordination agents.
Abstract: Composite systems are collections of autonomous, heterogeneous and distributed software applications. In these systems, data dependencies are continuously violated by local operations, and therefore coordination processes are necessary to guarantee overall correctness and consistency. Such coordination processes must be endowed with some form of execution guarantees, which require the participating subsystems to have certain database functionality (such as atomicity of local operations, order preservation, and either compensation of operations or the deferment of their commit). However, this functionality is not present in many applications and must be implemented by a transactional coordination agent coupled with the application. In this paper, we discuss the requirements to be met by the applications and their associated transactional coordination agents. We identify a minimal set of functionalities which the applications must provide in order to participate in transactional coordination processes, and we also discuss how the missing database functionality can be added to arbitrary applications using transactional coordination agents. Then, we identify the structure of a generic transactional coordination agent and provide an implementation example of a transactional coordination agent tailored to SAP R/3.

22 citations
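The agent functionality listed in the abstract above (atomic execution of local operations, order preservation, and either compensation or commit deferment) can be summarized as an interface. The C++ sketch below is a hypothetical illustration of that minimal contract; the class and method names are assumptions and do not reproduce the paper's agent design or its SAP R/3 implementation.

```cpp
#include <string>

// Hypothetical interface of a transactional coordination agent, sketching the
// minimal database functionality the paper asks for: atomic execution of local
// operations in submission order, plus either deferring their commit or
// compensating them after the fact. Names are illustrative only.
struct TransactionalCoordinationAgent {
    virtual ~TransactionalCoordinationAgent() = default;

    // Execute one local operation atomically; operations issued through the
    // agent must take effect in the order they were submitted.
    virtual bool execute(const std::string& operation) = 0;

    // Option 1: hold back the commit of executed operations until the
    // coordination process reaches a global decision.
    virtual void defer_commit() = 0;
    virtual void commit_deferred() = 0;
    virtual void abort_deferred() = 0;

    // Option 2: undo an already committed operation with a compensating one.
    virtual bool compensate(const std::string& operation) = 0;
};
```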

Proceedings ArticleDOI
04 Feb 2017
TL;DR: This paper demonstrates a multi-threaded DBT-based emulator that scales in an architecture-independent manner, explores the trade-offs that exist when emulating atomic operations across ISAs, and presents a novel approach for correct and scalable emulation of load-locked/store-conditional instructions based on hardware transactional memory (HTM).
Abstract: Speed, portability and correctness have traditionally been the main requirements for dynamic binary translation (DBT) systems. Given the increasing availability of multi-core machines as both emulation guests and hosts, scalability has emerged as an additional design objective. It has however been an elusive goal for two reasons: contention on common data structures such as the translation cache is difficult to avoid without hurting performance, and instruction set architecture (ISA) disparities between guest and host (such as mismatches in the memory consistency model and the semantics of atomic operations) can compromise correctness. In this paper we address these challenges in a simple and memory-efficient way, demonstrating a multi-threaded DBT-based emulator that scales in an architecture-independent manner. Furthermore, we explore the trade-offs that exist when emulating atomic operations across ISAs, and present a novel approach for correct and scalable emulation of load-locked/store-conditional instructions based on hardware transactional memory (HTM). By adding around 1000 lines of code to QEMU, we demonstrate the scalability of both user-mode and full-system emulation on a 64-core x86_64 host running x86_64 guest code, and a 12-core, 96-thread POWER8 host running x86_64 and Aarch64 guest code.

22 citations
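The load-locked/store-conditional idea above lends itself to a small illustration: wrap the emulated exclusive access in a hardware transaction, so that any intervening write by another thread aborts it and the store-conditional reports failure. The sketch below is not the paper's QEMU implementation; it is a stand-alone example using Intel RTM intrinsics (a TSX-capable CPU and -mrtm are assumed), and a real emulator would need a non-transactional fallback for persistent aborts.

```cpp
#include <immintrin.h>   // RTM intrinsics: _xbegin, _xend (compile with -mrtm)
#include <atomic>
#include <cstdint>

// Hypothetical helper illustrating LL/SC emulation via HTM: the region between
// the emulated load-exclusive and store-exclusive runs inside a hardware
// transaction. A concurrent write to *addr aborts the transaction, so the
// store-conditional fails and the guest retries, as LL/SC semantics require.
template <typename Modify>
bool emulate_ll_sc(std::atomic<uint32_t>* addr, Modify modify)
{
    if (_xbegin() == _XBEGIN_STARTED) {
        uint32_t old = addr->load(std::memory_order_relaxed);   // load-exclusive
        addr->store(modify(old), std::memory_order_relaxed);    // store-exclusive
        _xend();
        return true;    // store-conditional succeeded
    }
    return false;       // transaction aborted: report failure, guest loops back
}

// Example use: an atomic increment expressed as an LL/SC retry loop.
// (A real system needs a fallback path if HTM is unavailable or keeps aborting.)
void guest_atomic_increment(std::atomic<uint32_t>* counter)
{
    while (!emulate_ll_sc(counter, [](uint32_t v) { return v + 1; })) {
        // retry until the store-conditional succeeds
    }
}
```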

Proceedings ArticleDOI
09 Sep 2013
TL;DR: This work proposes a self-regulation approach of the concurrency level, which relies on a parametric analytical performance model aimed at predicting the scalability of the STM application as a function of the actual workload profile.
Abstract: Software Transactional Memory (STM) is recognized as an effective programming paradigm for concurrent applications. On the other hand, a core problem to cope with in STM deals with (dynamically) regulating the degree of concurrency, in order to deliver optimal performance. We address this problem by proposing a self-regulation approach of the concurrency level, which relies on a parametric analytical performance model aimed at predicting the scalability of the STM application as a function of the actual workload profile. The regulation scheme allows achieving optimal performance during the whole lifetime of the application via dynamic change of the number of concurrent threads according to the predictions by the model. The latter is customized for a specific application/platform through regression analysis, which is based on a lightweight sampling phase. We also present a real implementation of the model-based concurrency self-regulation architecture integrated within the open source TinySTM framework, and an experimental study based on standard STM benchmark applications.

22 citations
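For context, the outer control loop of such a self-regulation scheme can be sketched as follows. The paper chooses the thread count from a regression-fitted analytical performance model; this sketch substitutes plain hill climbing on measured commit throughput, and all names (g_committed, g_active_threads, regulator) are hypothetical.

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Hypothetical control loop for concurrency self-regulation. Worker threads
// increment g_committed on every commit and park themselves when their index
// is >= g_active_threads. The paper derives the target thread count from an
// analytical model; this sketch uses plain hill climbing instead.
std::atomic<long> g_committed{0};
std::atomic<int>  g_active_threads{1};

void regulator(int max_threads, std::chrono::milliseconds window)
{
    double last_throughput = 0.0;
    int direction = +1;                       // +1 = add a thread, -1 = remove one
    for (;;) {
        long before = g_committed.load();
        std::this_thread::sleep_for(window);
        double throughput =
            double(g_committed.load() - before) / double(window.count());
        if (throughput < last_throughput)
            direction = -direction;           // performance dropped: back off
        last_throughput = throughput;
        int next = g_active_threads.load() + direction;
        if (next < 1) next = 1;
        if (next > max_threads) next = max_threads;
        g_active_threads.store(next);         // workers observe and park/unpark
    }
}
```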

Proceedings ArticleDOI
23 Jun 2014
TL;DR: This work introduces a new implementation of condition variables, which uses transactions internally, which can be used from within both transactions and lock-based critical sections, and which is compatible with existing C/C++ interfaces for condition synchronization.
Abstract: Recent microprocessors and compilers have added support for transactional memory (TM). While state-of-the-art TM systems allow the replacement of lock-based critical sections with scalable, optimistic transactions, there is not yet an acceptable mechanism for supporting the use of condition variables in transactions. We introduce a new implementation of condition variables, which uses transactions internally, which can be used from within both transactions and lock-based critical sections, and which is compatible with existing C/C++ interfaces for condition synchronization. By moving most of the mechanism for condition synchronization into user-space, our condition variables have low overhead and permit flexible interfaces that can avoid some of the pitfalls of traditional condition variables. Performance evaluation on an unmodified PARSEC benchmark suite shows equivalent performance to lock-based code, and our transactional condition variables also make it possible to replace all locks in PARSEC with transactions.

22 citations
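One general way to make condition synchronization coexist with transactions is to split wait() into a registration step executed inside the critical section and a blocking step executed only after it ends, so that no thread ever sleeps inside a transaction. The eventcount-style class below is a hedged sketch of that idea only; it is not the paper's mechanism and does not reproduce its C/C++-compatible interface.

```cpp
#include <atomic>
#include <condition_variable>
#include <cstdint>
#include <mutex>

// Hypothetical eventcount-style condition variable: wait_prepare() is called
// inside the transaction (or lock-based critical section) to capture the
// current generation; wait_commit() blocks only after the critical section
// has ended, so no thread sleeps inside a transaction.
class TxCondVar {
    std::atomic<uint64_t> generation_{0};
    std::mutex m_;
    std::condition_variable cv_;

public:
    // Inside the critical section: record the generation we will wait on.
    uint64_t wait_prepare() {
        return generation_.load(std::memory_order_acquire);
    }

    // After the critical section ends: block until the generation moves on.
    void wait_commit(uint64_t ticket) {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] {
            return generation_.load(std::memory_order_acquire) != ticket;
        });
    }

    // Wake all current waiters; intended to be called from non-transactional code.
    void notify_all() {
        generation_.fetch_add(1, std::memory_order_acq_rel);
        std::lock_guard<std::mutex> lk(m_);
        cv_.notify_all();
    }
};
```

As with a traditional condition variable, a waiter re-checks its predicate (in a fresh transaction or critical section) after wait_commit() returns.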

01 Jan 2008
TL;DR: This approach is the first Hy-TM to combine three desirable features: execution whether or not the architectural support is present, execution of a single common code path, and immunity, for correctly synchronized programs, from the “privatization” problem.
Abstract: To reduce the overhead of Software Transactional Memory (STM), there are many recent proposals to build hybrid systems that use architectural support either to accelerate parts of a particular STM algorithm (Ha-TM), or to form a hybrid system allowing hardware transactions and software transactions to inter-operate in the same address space (Hy-TM). In this paper we introduce a Hy-TM design based on multi-reader, single-writer locking when a transaction tries to commit. This approach is the first Hy-TM to combine three desirable features: (i) execution whether or not the architectural support is present, (ii) execution of a single common code path, whether a transaction is running in software or hardware, and (iii) immunity, for correctly synchronized programs, from the “privatization” problem. Our architectural support can be any traditional HTM supporting bounded or unbounded-size transactions, along with an instruction to test whether or not the current thread is running inside a hardware transaction. With this, we carefully design the Hy-TM so that portions of its work can be elided when running a transaction in hardware mode. While not compared with the native HTM system, our simulations show that, when running with hardware support, the main runtime overheads of the STM system are elided: depending on the workload, the speedup with read-only transactions is up to 3.03× in single-thread execution and 61× in the 32-thread case, while with read-and-write transactions it reaches over 10×.

22 citations
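The execution structure described above, one common code path with STM bookkeeping elided in hardware mode, can be sketched with RTM intrinsics, where _xtest() stands in for the "am I inside a hardware transaction?" instruction. The single global fallback lock below is a deliberate simplification; the paper's design uses multi-reader, single-writer locking at commit time, and all names here are illustrative.

```cpp
#include <immintrin.h>   // _xbegin, _xend, _xabort, _xtest (compile with -mrtm)
#include <atomic>

// Hypothetical single global software-mode lock. A real Hy-TM would use the
// STM's own multi-reader/single-writer commit locks instead of one big lock.
static std::atomic<bool> g_sw_mode{false};

// The "am I in a hardware transaction?" query the design relies on; here it
// maps directly onto the RTM _xtest() instruction.
static inline bool tx_is_hw() { return _xtest() != 0; }

// One common code path: 'body' contains the transaction and can consult
// tx_is_hw() to skip its STM read/write instrumentation in hardware mode.
template <typename Body>
void tx_execute(Body body, int hw_retries = 3)
{
    for (int attempt = 0; attempt < hw_retries; ++attempt) {
        if (_xbegin() == _XBEGIN_STARTED) {
            // Subscribe to the software-mode lock so that a software
            // transaction committing concurrently aborts this hardware one.
            if (g_sw_mode.load(std::memory_order_relaxed))
                _xabort(0xff);
            body();
            _xend();
            return;
        }
    }
    // Software slow path: take the fallback lock and run with full STM
    // bookkeeping (tx_is_hw() reports false inside body here).
    while (g_sw_mode.exchange(true, std::memory_order_acquire)) {
        // simple spin; a sketch, not a production lock
    }
    body();
    g_sw_mode.store(false, std::memory_order_release);
}
```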


Network Information
Related Topics (5)
Compiler: 26.3K papers, 578.5K citations, 87% related
Cache: 59.1K papers, 976.6K citations, 86% related
Parallel algorithm: 23.6K papers, 452.6K citations, 84% related
Model checking: 16.9K papers, 451.6K citations, 84% related
Programming paradigm: 18.7K papers, 467.9K citations, 83% related
Performance Metrics
No. of papers in the topic in previous years:
Year  Papers
2023  16
2022  40
2021  29
2020  63
2019  70
2018  88