Topic

Transactional memory

About: Transactional memory is a research topic. Over its lifetime, 2,365 publications have been published within this topic, receiving 60,818 citations.


Papers
DOI
01 Jan 2010
TL;DR: Transactional memory (TM) is a promising paradigm for concurrent programming in which threads of an application communicate, and synchronize their actions, via in-memory transactions; each transaction can perform any number of operations on shared data and then either commit or abort.
Abstract: Transactional memory (TM) is a promising paradigm for concurrent programming, in which threads of an application communicate, and synchronize their actions, via in-memory transactions. Each transaction can perform any number of operations on shared data, and then either commit or abort. When the transaction commits, the effects of all its operations become immediately visible to other transactions; when it aborts, however, those effects are entirely discarded. Transactions are atomic: programmers get the illusion that every transaction executes all its operations instantaneously, at some single and unique point in time. The TM paradigm has raised a lot of hope for mastering the complexity of concurrent programming. The aim is to provide programmers with an abstraction, i.e., the transaction, that makes handling concurrency as easy as with coarse-grained locking, while exploiting the parallelism of the underlying multi-core or multi-processor hardware as well as hand-crafted fine-grained locking (which is typically an engineering challenge). It is thus not surprising to see a large body of work devoted to implementing this paradigm efficiently, and integrating it with common programming languages. Very little work, however, was devoted to the underlying theory and principles. The aim of this thesis is to provide theoretical foundations for transactional memory. This includes defining a model of a TM, as well as answering precisely when a TM implementation is correct, what kind of properties it can ensure, what the power and limitations of a TM are, and what inherent trade-offs are involved in designing a TM algorithm. In particular, this manuscript contains precise definitions of properties that capture the safety and progress semantics of TMs, as well as several fundamental results related to computability and complexity of TM implementations. While the focus of the thesis is on theory, its goal is to capture the common intuition behind the semantics of TMs and the properties of existing TM implementations.
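The commit/abort semantics described above can be made concrete with a small Java sketch of the programmer-facing interface. The names ToyAtomic and Account are illustrative, not from the thesis; atomicity here is obtained trivially with one global lock, whereas a real TM would run the block speculatively, track its read and write sets, and abort and retry it on conflict.

import java.util.concurrent.locks.ReentrantLock;

// Sketch of the programmer-facing TM interface: a block handed to atomic() appears
// to execute at a single point in time. Atomicity here comes from one global lock
// (coarse-grained locking); a real TM would execute the block optimistically and
// abort/retry it on conflict.
final class ToyAtomic {
    private static final ReentrantLock GLOBAL = new ReentrantLock();

    static void atomic(Runnable block) {
        GLOBAL.lock();
        try {
            block.run();      // "commit": effects become visible when the lock is released
        } finally {
            GLOBAL.unlock();
        }
    }
}

class Account {
    private long balance;

    Account(long initial) { balance = initial; }

    // Other threads never observe the intermediate state where the money has left
    // one account but not yet arrived in the other.
    static void transfer(Account from, Account to, long amount) {
        ToyAtomic.atomic(() -> {
            from.balance -= amount;
            to.balance += amount;
        });
    }
}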

6 citations

Proceedings ArticleDOI
01 Aug 2016
TL;DR: The notion of approximate consistency in transactional memory is introduced with K-opacity as a relaxed consistency property where read instructions in a read-only transaction may read one of the K most recent written values, while read instructions in an update transaction always read the latest value.
Abstract: In multi-version transactional memory, read-only transactions do not have to abort, while update transactions may abort. There are situations where system delays do not allow precise consistency, such as in large-scale network and database applications, due to network delays or other factors. In order to cope with such systems, we introduce here the notion of approximate consistency in transactional memory. We define K-opacity as a relaxed consistency property where read instructions in a read-only transaction may read one of the K most recent written values, while read instructions in an update transaction always read the latest value. The relaxed consistency for read-only transactions has two benefits: (i) it reduces space requirements, since a new object version is saved once every K object updates, which reduces the total number of saved object versions by a factor of K, and (ii) it reduces the number of aborts, since there is a smaller chance for read-only transactions to abort update transactions. This framework allows worst-case consistency guarantees together with good performance characteristics. In addition to correctness proofs, we demonstrate the performance benefits of our approach with experimental analysis. We tested our algorithm for different values of K using different benchmarks, and we observed that as K increases the number of aborts decreases while the throughput increases.
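A minimal Java sketch of the versioning bookkeeping behind benefit (i); the class and method names are ours, not the paper's, and all transaction machinery (conflict detection, commit, abort) is omitted. A version is archived only once every K writes, update transactions read the latest value, and read-only transactions may return the archived version, which is at most K writes stale.

// Sketch, under assumed names, of the K-opacity version bookkeeping.
final class KRelaxedRegister<T> {
    private final int k;     // archive a version once every k writes
    private T latest;        // value seen by update transactions
    private T archived;      // value that read-only transactions are allowed to return
    private long writes;

    KRelaxedRegister(int k, T initial) {
        this.k = k;
        this.latest = initial;
        this.archived = initial;
    }

    synchronized void write(T value) {
        latest = value;
        if (++writes % k == 0) {
            archived = value;        // a new version is saved once every k updates
        }
    }

    // Read inside an update transaction: always the latest value.
    synchronized T readLatest() {
        return latest;
    }

    // Read inside a read-only transaction: may be up to k writes stale, which is
    // what lets such transactions commit without aborting concurrent updaters.
    synchronized T readRelaxed() {
        return archived;
    }
}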

6 citations

Book
01 Jan 2015
TL;DR: This volume covers topics including hybrid MPI/OpenMP programming, hybrid object implementations and abortable objects, and the question of how many threads will be too many.
Abstract: Concurrent Systems: Hybrid Object Implementations and Abortable Objects.- Runtime-Aware Architectures.- MPI Thread-Level Checking for MPI+OpenMP Applications.- Event-Action Mappings for Parallel Tools Infrastructures.- Low-Overhead Detection of Memory Access Patterns and Their Time Evolution.- Automatic On-line Detection of MPI Application Structure with Event Flow Graphs.- Online Automated Reliability Classification of Queueing Models for Streaming Processing Using Support Vector Machines.- A Duplicate-Free State-Space Model for Optimal Task Scheduling.- On the Heterogeneity Bias of Cost Matrices when Assessing Scheduling Algorithms.- Hardware Round-Robin Scheduler for Single-ISA Asymmetric Multi-Core.- Moody Scheduling for Speculative Parallelization.- Allocating Jobs with Periodic Demands.- A Multi-Level Hypergraph Partitioning Algorithm Using Rough Set Clustering.- Non-preemptive Throughput Maximization for Speed-Scaling with Power-Down.- Scheduling Tasks from Selfish Multi-tasks Agents.- Locality and Balance for Communication-Aware Thread Mapping in Multicore Systems.- Concurrent Priority Queues Are not Good Priority Schedulers.- Load Balancing Prioritized Tasks via Work-Stealing.- Optimizing Task Parallelism with Library-Semantics-Aware Compilation.- Data Layout Optimization for Portable Performance.- Automatic Data Layout Optimizations for GPUs.- Performance Impacts with Reliable Parallel File Systems at Exascale Level.- Rapid Tomographic Image Reconstruction via Large-Scale Parallelization.- Software consolidation as an efficient energy and cost Saving Solution for a SaaS/PaaS Cloud Model.- VMPlaceS A Generic Tool to Investigate and Compare VM Placement Algorithms.- A Connectivity Model for Agreement in Dynamic Systems.- DFEP: Distributed Funding-based Edge Partitioning.- PR-STM: Priority Rule Based Software Transactions on the GPU.- Leveraging MPI-3 Shared-Memory Extensions for Efficient PGAS Runtime Systems.- A Practical Transactional Memory Interface.- A Multicore Parallelization of Continuous Skyline Queries on Data Streams.- A Fast and Scalable Graph Coloring Algorithm for Multi-core and Many-core Architectures.- A Composable Deadlock-Free Approach to Object-Based Isolation.- Scalable Data-Driven PageRank: Algorithms, System Issues & Lessons Learned.- How Many Threads Will Be Too Many? On the Scalability of OpenMP Implementations.- Efficient Nested Dissection for Multicore Architectures.- Scheduling Trees of Malleable Tasks for Sparse Linear Algebra.- Elastic Tasks: Unifying Task Parallelism and SPMD Parallelism with an Adaptive Runtime.- Semi-discrete Matrix-Free Formulation of 3D Elastic Full Waveform Inversion Modeling.- 10,000 Performance Models per Minute - Scalability of the UG4 Simulation Framework.- Exploiting Task-Based Parallelism in Bayesian Uncertainty Quantification.- Parallelization of an Advection-Diffusion Problem Arising in Edge Plasma Physics Using Hybrid MPI/OpenMP Programming.- Behavioral Non-Portability in Scientific Numeric Computing.- Fast Parallel Suffix Array on the GPU.- Effective Barrier Synchronization on Intel Xeon Phi Coprocessor.- High Performance Multi-GPU SpMV for Multi-component PDE-based Applications.- Accelerating Lattice Boltzmann Applications with OpenACC.- High-Performance and Scalable Design of MPI-3 RMA on Xeon Phi Clusters.- Improving Performance of Convolutional Neural Networks by Separable Filters on GPU.- Iterative Sparse Triangular Solves for Preconditioning.- Targeting the Parallella.- Systematic Fusion of CUDA Kernels for Iterative Sparse Linear System Solvers.- Efficient Execution of Multiple CUDA Applications using Transparent Suspend, Resume and Migration.

6 citations

Posted Content
TL;DR: Last-use opacity and strong last-use opacity are introduced, a pair of new TM safety properties meant to be a compromise between strong properties like opacity and minimal ones like serializability; they eliminate all but a small class of benign inconsistent views and pose no stringent conditions on transactions.
Abstract: Transactional memory (TM) is a concurrency control abstraction that allows the programmer to specify blocks of code to be executed atomically as transactions. However, since transactional code can contain just about any operation, attention must be paid to the state of shared variables at any given time. For example, contrary to a database transaction, if a TM transaction reads a stale value it may execute dangerous operations, like attempting to divide by zero, accessing an illegal memory address, or entering an infinite loop. Thus serializability is insufficient, and stronger safety properties are required in TM, which regulate what values can be read, even by transactions that abort. Hence, a number of TM safety properties were developed, including opacity, TMS1, and TMS2. However, such strong properties preclude using early release as a technique for optimizing TM, because they virtually forbid reading from live transactions. On the other hand, properties that do allow early release are either not strong enough to prevent any of the problems mentioned above (recoverability), or add additional conditions on transactions with early release that limit their applicability (elastic opacity, live opacity, virtual world consistency). This paper introduces last-use opacity, a new TM safety property that is meant to be a compromise between strong properties like opacity and weaker ones like serializability. The property eliminates all but a small class of inconsistent views and poses no stringent conditions on transactions. For illustration, we present a last-use opaque TM algorithm and show that it satisfies the new safety property.
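The danger of inconsistent views mentioned above can be illustrated with a small Java example of our own (not taken from the paper): a reader that observes a mix of old and new values can divide by zero even though it would eventually abort, which is exactly what properties like opacity and last-use opacity are designed to rule out.

// Illustrative only: writers maintain the invariant "divisor != 0 whenever enabled
// is true" by updating both fields together. A reader that sees the old value of
// 'enabled' and the new value of 'divisor' observes a state that never existed and
// divides by zero, even if it would later be aborted.
class ZombieHazard {
    static volatile boolean enabled = true;
    static volatile int divisor = 5;

    // Writer "transaction": under a TM these two writes would commit atomically.
    static void disable() {
        enabled = false;
        divisor = 0;
    }

    // Reader "transaction": safe on a consistent snapshot, but throws
    // ArithmeticException on the inconsistent view described above.
    static int compute() {
        boolean e = enabled;   // may still read true
        int d = divisor;       // may already read 0
        return e ? 100 / d : 0;
    }
}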

6 citations

Proceedings ArticleDOI
07 Dec 2011
TL;DR: This work has developed a system named SAW that decouples the synchronization mechanism from the application logic of a Java program and enables the programmer to statically select a suitable synchronization mechanism, either a lock or an STM.
Abstract: To rewrite a sequential program into a concurrent one, the programmer has to enforce atomic execution of a sequence of accesses to shared memory to avoid unexpected inconsistency. There are two means of enforcing this atomicity: one is the use of lock-based synchronization and the other is the use of software transactional memory (STM). However, it is difficult to predict which one is more suitable for an application than the other without trying both mechanisms, because their performance heavily depends on the application. We have developed a system named SAW that decouples the synchronization mechanism from the application logic of a Java program and enables the programmer to statically select a suitable synchronization mechanism, either a lock or an STM. We introduce annotations to specify critical sections and shared objects. In accordance with the annotated source program and the programmer's choice of a synchronization mechanism, SAW generates aspects representing the synchronization processing. By comparing the rewriting cost using SAW with that of applying each synchronization mechanism directly, we show that SAW relieves the programmer's burden. Through several benchmarks, we demonstrate that SAW is an effective way of switching synchronization mechanisms according to the characteristics of each application.
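The annotation-based style described above might look roughly as follows. The annotation names @Shared and @Atomic are illustrative guesses, not necessarily the ones SAW actually uses, and the aspect generator that weaves in the chosen lock- or STM-based synchronization is not shown.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical annotations in the spirit of SAW: the programmer marks shared state
// and critical sections; a separate tool generates either lock-based or STM-based
// synchronization for the marked regions.
@Retention(RetentionPolicy.CLASS)
@Target(ElementType.FIELD)
@interface Shared {}

@Retention(RetentionPolicy.CLASS)
@Target(ElementType.METHOD)
@interface Atomic {}

class Counter {
    @Shared
    private int value;          // marked as shared state

    @Atomic                     // critical section; lock or STM code is generated, not hand-written
    void increment() {
        value = value + 1;      // read-modify-write that must execute atomically
    }

    @Atomic
    int get() {
        return value;
    }
}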

6 citations


Network Information

Related Topics (5)

Compiler: 26.3K papers, 578.5K citations, 87% related
Cache: 59.1K papers, 976.6K citations, 86% related
Parallel algorithm: 23.6K papers, 452.6K citations, 84% related
Model checking: 16.9K papers, 451.6K citations, 84% related
Programming paradigm: 18.7K papers, 467.9K citations, 83% related
Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2023    16
2022    40
2021    29
2020    63
2019    70
2018    88