Topic

Transactional memory

About: Transactional memory is a research topic. Over its lifetime, 2365 publications have been published within this topic, receiving 60818 citations.


Papers
Proceedings ArticleDOI
01 Jun 2022
TL;DR: In this paper, the authors apply CSP (Communicating Sequential Processes) to formally analyze the components of the DPSTM v2 architecture, the data exchange process between components, and two different transaction processing modes.
Abstract: Transactional memory is designed to simplify the development of parallel programs and to improve their efficiency. PSTM (Python software transactional memory) mainly supports multi-core parallel programs based on the Python language. To better meet the requirements of developing distributed concurrent programs and to enhance system safety, DPSTM (distributed Python software transactional memory) was developed. Compared with PSTM, DPSTM offers higher operating efficiency and stronger fault tolerance. In this paper, we apply CSP (Communicating Sequential Processes) to formally analyze the components of the DPSTM v2 architecture, the data exchange process between components, and two different transaction processing modes. We use the model checker PAT (Process Analysis Toolkit) to model the DPSTM v2 architecture and verify eight properties, including deadlock freedom, ACI (atomicity, consistency, and isolation), sequential consistency, data server availability, read tolerance, and crash tolerance. The verification results show that the DPSTM v2 architecture guarantees all of the above properties. In particular, the system maintains normal operation when some of the data servers crash, ensuring the safety of the distributed system.
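Neither PSTM's nor DPSTM's API is reproduced on this page, but the transactional model being verified can be illustrated in plain Python. The sketch below is a minimal optimistic software transactional memory with hypothetical names (TVar, atomically): a transaction buffers its writes, records the version of each variable it reads, and commits only if validation finds no conflicting update, retrying otherwise. It illustrates the general technique, not DPSTM's implementation.

```python
import threading

class TVar:
    """Transactional variable: a value plus a version counter."""
    def __init__(self, value):
        self.value = value
        self.version = 0

_commit_lock = threading.Lock()  # serializes validation and commit

def atomically(txn_body):
    """Run txn_body(read, write) atomically; retry on conflict."""
    while True:
        read_set = {}    # TVar -> version observed at first read
        write_set = {}   # TVar -> buffered new value

        def read(tvar):
            if tvar in write_set:            # read-your-own-writes
                return write_set[tvar]
            read_set.setdefault(tvar, tvar.version)
            return tvar.value

        def write(tvar, value):
            write_set[tvar] = value

        result = txn_body(read, write)

        with _commit_lock:
            # Validate: commit only if nothing we read has changed.
            if all(tv.version == v for tv, v in read_set.items()):
                for tv, value in write_set.items():
                    tv.value = value
                    tv.version += 1
                return result
        # A concurrent commit invalidated a read: retry the transaction.

# Usage: a transfer between two accounts commits atomically or retries.
a, b = TVar(100), TVar(0)
atomically(lambda read, write: (write(a, read(a) - 10),
                                write(b, read(b) + 10)))
```

The distributed data servers and crash tolerance verified in the paper sit on top of a commit protocol like this one and are not modeled here.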
DissertationDOI
10 Jun 2022
TL;DR: In this dissertation, the author studies the transaction scheduling problem in several systems that differ in the variation of the intra-core communication cost of accessing shared resources, and makes several theoretical contributions providing tight, near-tight, and/or impossibility results on three performance evaluation metrics (execution time, communication cost, and load) for any transaction scheduling algorithm.
Abstract: Transactional memory provides an alternative synchronization mechanism that removes many limitations of traditional lock-based synchronization, making concurrent programs easier to write on modern multicore architectures than lock-based code. The fundamental module in a transactional memory system is the transaction, which represents a sequence of read and write operations performed atomically on a set of shared resources; transactions may conflict if they access the same shared resources. A transaction scheduling algorithm is used to handle these conflicts and schedule the transactions appropriately. In this dissertation, we study the transaction scheduling problem in several systems that differ in the variation of the intra-core communication cost of accessing shared resources. Symmetric communication costs imply tightly-coupled systems, asymmetric communication costs imply large-scale distributed systems, and partially asymmetric communication costs imply non-uniform memory access systems. We make several theoretical contributions, providing tight, near-tight, and/or impossibility results on three performance evaluation metrics (execution time, communication cost, and load) for any transaction scheduling algorithm. We then complement these theoretical results with experimental evaluations, wherever possible, showing their benefits in practical scenarios. To the best of our knowledge, the contributions of this dissertation are either the first of their kind or significant improvements over the best previously known results.
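To make the conflict notion concrete: if a transaction is modeled as a pair of read and write sets, two transactions conflict exactly when one writes a resource the other reads or writes. The Python sketch below uses that test to greedily pack pairwise non-conflicting transactions into execution rounds; the names and the greedy rule are illustrative assumptions, not the dissertation's cost-model-specific algorithms.

```python
def conflict(t1, t2):
    """Transactions (read_set, write_set) conflict when one writes
    a resource the other reads or writes."""
    r1, w1 = t1
    r2, w2 = t2
    return bool(w1 & (r2 | w2)) or bool(w2 & r1)

def greedy_rounds(txns):
    """Pack transactions into rounds containing no conflicting pair;
    rounds then execute one after another, each fully in parallel."""
    rounds = []
    for t in txns:
        for r in rounds:
            if not any(conflict(t, u) for u in r):
                r.append(t)
                break
        else:
            rounds.append([t])
    return rounds

# t1 and t2 contend on resource x; t3 touches only z.
t1 = (frozenset({"x"}), frozenset({"x"}))
t2 = (frozenset({"x", "y"}), frozenset({"y"}))
t3 = (frozenset({"z"}), frozenset({"z"}))
print(greedy_rounds([t1, t2, t3]))  # two rounds: [[t1, t3], [t2]]
```

The metrics studied in the dissertation (execution time, communication cost, load) then score such a schedule under the particular communication-cost model of the system.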
Patent
08 Oct 2019
TL;DR: The patented technology generates a first compiler output from input code containing dynamically typed variable information and a second compiler output that includes type check code to verify one or more type inferences associated with the first compiler output; the two outputs are executed in parallel via different threads.
Abstract: Systems, apparatuses and methods may provide for technology that generates a first compiler output based on input code that includes dynamically typed variable information and generates a second compiler output based on the input code, wherein the second compiler output includes type check code to verify one or more type inferences associated with the first compiler output. The technology may also execute the first compiler output and the second compiler output in parallel via different threads.
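The claims are deliberately broad, but the core idea, running a speculative compilation alongside a verifying one, can be sketched in a few lines of Python. Here fast_path stands in for the first compiler output (specialized under inferred types) and checked_path for the second (which re-verifies those inferences); both run on different threads, and the fast result is accepted only if the check passes. All names are hypothetical, and the patent does not prescribe this code.

```python
from concurrent.futures import ThreadPoolExecutor

def fast_path(a, b):
    # Stand-in for the first compiler output: specialized code that
    # simply assumes the inferred types (here, ints) are correct.
    return a + b

def checked_path(a, b):
    # Stand-in for the second compiler output: verifies the type
    # inferences before computing, raising if an inference was wrong.
    if not (isinstance(a, int) and isinstance(b, int)):
        raise TypeError("type inference violated: expected int operands")
    return a + b

def run_both(a, b):
    # Execute both outputs in parallel on different threads; trust the
    # speculative result only once the type checks have succeeded.
    with ThreadPoolExecutor(max_workers=2) as pool:
        fast = pool.submit(fast_path, a, b)
        check = pool.submit(checked_path, a, b)
        check.result()        # propagates TypeError on a bad inference
        return fast.result()

print(run_both(2, 3))  # 5
```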
Proceedings Article
01 Jan 2007
TL;DR: This paper considers a practical case where internal fabric buses and connectivity give a shared-memory characteristic to the architecture, and relies on static reconfigurability and high-level programming techniques to render automated memory access scheduling feasible in a deterministic manner.
Abstract: Reconfigurable fabrics are designed by tiling operators and memories. In the context of SoC, multiple local memories are critical for algorithmic performance, since they provide concurrent data accesses for the configured compute processes. This paper considers a practical case where internal fabric buses and connectivity give a shared-memory characteristic to the architecture. We rely on static reconfigurability and high-level programming techniques to render automated memory access scheduling feasible in a deterministic manner. A programming model has been developed to enable task synchronization on memory resources. Static scheduling is achieved at compile time by observing the sequence of operations in the concurrent processes and by synthesizing a controller program to support the best schedule of operations, favoring high throughput. The hardware target is a reconfigurable fabric designed at STMicroelectronics in a 65 nm process. This hardware/software solution is scalable and flexible and provides high throughput on memories, as shown on a graphics accelerator for an MPEG-4 deblocking application.
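As a toy illustration of deterministic, compile-time memory access scheduling, the Python sketch below assumes single-ported local memories (one access per cycle) and in-order issue within each process, then greedily assigns every access the earliest legal cycle. It models the scheduling idea only; the paper's synthesized controller and fabric details are not represented.

```python
def schedule_accesses(ops):
    """ops: (process, memory) pairs in program order. Returns
    (cycle, process, memory) triples such that each memory serves at
    most one access per cycle and each process issues in order."""
    mem_busy = {}     # memory  -> set of cycles already taken
    proc_next = {}    # process -> earliest cycle for its next access
    schedule = []
    for proc, mem in ops:
        t = proc_next.get(proc, 0)
        while t in mem_busy.setdefault(mem, set()):
            t += 1                      # memory port busy: wait a cycle
        mem_busy[mem].add(t)
        proc_next[proc] = t + 1
        schedule.append((t, proc, mem))
    return schedule

# Two processes contending for memory m0, then moving on to m1.
ops = [("p0", "m0"), ("p1", "m0"), ("p0", "m1"), ("p1", "m1")]
print(schedule_accesses(ops))
# [(0, 'p0', 'm0'), (1, 'p1', 'm0'), (1, 'p0', 'm1'), (2, 'p1', 'm1')]
```

Because the operation sequence is fixed at compile time, the resulting schedule is fully deterministic, which is the property the paper exploits.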
01 Jan 2010
TL;DR: This paper presents Modular Transactional Memory, a framework that allows programmers to extend STM with concurrency control algorithms tailored to the data structures they use in concurrent programs, and describes two example structures: a finite map that allows concurrent transactions to operate on disjoint sets of keys, and a non-deterministic channel that supports concurrent sources and sinks.
Abstract: Software transactional memory has the potential to greatly simplify development of concurrent software, by supporting safe composition of concurrent shared-state abstractions. However, STM semantics are defined in terms of low-level reads and writes on individual memory locations, so implementations are unable to take advantage of the properties of user-defined abstractions. Consequently, the performance of transactions over some structures can be disappointing. We present Modular Transactional Memory, our framework which allows programmers to extend STM with concurrency control algorithms tailored to the data structures they use in concurrent programs. We describe our implementation in Concurrent Haskell, and two example structures: a finite map which allows concurrent transactions to operate on disjoint sets of keys, and a non-deterministic channel which supports concurrent sources and sinks. Our approach is based on previous work by others on boosted and open-nested transactions, with one significant development: transactions are given types which denote the concurrency control algorithms they employ. Typed transactions offer a higher level of assurance for programmers reusing transactional code, and allow more flexible abstract concurrency control.
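The paper's implementation is in Concurrent Haskell, but the disjoint-key idea behind its finite map example can be sketched as a Python analogue: with per-key locking, transactions over disjoint key sets never block each other, while overlapping ones serialize. The class below is a hypothetical illustration of that boosted-style concurrency control, not the paper's typed-transaction API.

```python
import threading

class TxMap:
    """Transactional map sketch: per-key locks let transactions on
    disjoint key sets commit concurrently."""
    def __init__(self):
        self._data = {}
        self._key_locks = {}
        self._meta = threading.Lock()   # guards the lock table itself

    def _lock_for(self, key):
        with self._meta:
            return self._key_locks.setdefault(key, threading.Lock())

    def transact(self, keys, fn):
        """Atomically apply fn to the named keys. fn receives a
        snapshot dict and returns a dict of updated values."""
        # Acquire key locks in sorted order to avoid deadlock.
        locks = [self._lock_for(k) for k in sorted(keys)]
        for lock in locks:
            lock.acquire()
        try:
            snapshot = {k: self._data.get(k) for k in keys}
            self._data.update(fn(snapshot))
        finally:
            for lock in reversed(locks):
                lock.release()

m = TxMap()
# These two transactions touch disjoint keys, so neither waits.
m.transact({"a"}, lambda s: {"a": (s["a"] or 0) + 1})
m.transact({"b"}, lambda s: {"b": (s["b"] or 0) + 1})
print(m._data)  # {'a': 1, 'b': 1}
```

A location-level STM cannot exploit this disjointness, which is exactly the gap the paper's typed transactions are designed to close.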

Network Information
Related Topics (5)
Compiler: 26.3K papers, 578.5K citations (87% related)
Cache: 59.1K papers, 976.6K citations (86% related)
Parallel algorithm: 23.6K papers, 452.6K citations (84% related)
Model checking: 16.9K papers, 451.6K citations (84% related)
Programming paradigm: 18.7K papers, 467.9K citations (83% related)
Performance Metrics
No. of papers in the topic in previous years:

Year: Papers
2023: 16
2022: 40
2021: 29
2020: 63
2019: 70
2018: 88