Topic

Transactional memory

About: Transactional memory is a research topic. Over its lifetime, 2,365 publications have been published within this topic, receiving 60,818 citations.


Papers
Posted Content
TL;DR: A novel algorithm is described that reduces commutativity verification to reachability, so that off-the-shelf program analysis tools can perform the necessary reasoning; the underlying abstraction captures how the effects of two methods differ depending on the order in which they are applied, and abstracts away effects that are the same regardless of the order.
Abstract: Commutativity of data structure methods is of ongoing interest, with roots in the database community. In recent years commutativity has been shown to be a key ingredient to enabling multicore concurrency in contexts such as parallelizing compilers, transactional memory, speculative execution and, more broadly, software scalability. Despite this interest, it remains an open question as to how a data structure's commutativity specification can be verified automatically from its implementation. In this paper, we describe techniques to automatically prove the correctness of method commutativity conditions from data structure implementations. We introduce a new kind of abstraction that characterizes the ways in which the effects of two methods differ depending on the order in which the methods are applied, and abstracts away effects of methods that would be the same regardless of the order. We then describe a novel algorithm that reduces the problem to reachability, so that off-the-shelf program analysis tools can perform the reasoning necessary for proving commutativity. Finally, we describe a proof-of-concept implementation and experimental results, showing that our tool can verify commutativity of data structures such as a memory cell, counter, two-place Set, array-based stack, queue, and a rudimentary hash table. We conclude with a discussion of what makes a data structure's commutativity provable with today's tools and what needs to be done to prove more in the future.
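As a rough illustration of the commutativity property the paper verifies (not its verification technique, which works statically via abstraction and a reduction to reachability), the following sketch checks observational commutativity of two method calls dynamically; all names are illustrative:

```python
# Sketch: testing whether two method calls commute on a given state.
# This illustrates the *property* the paper verifies statically; the
# paper's technique is far more general than this brute-force check.
import copy

def commutes(state, op1, op2):
    """True if op1;op2 and op2;op1 from `state` yield the same final
    state and the same return values."""
    s12 = copy.deepcopy(state)
    r1a, r2a = op1(s12), op2(s12)
    s21 = copy.deepcopy(state)
    r2b, r1b = op2(s21), op1(s21)
    return s12 == s21 and (r1a, r2a) == (r1b, r2b)

# Example: two inserts into a set commute; an insert and a membership
# test commute only when they refer to different keys.
add = lambda x: (lambda s: s.add(x))          # returns None
contains = lambda x: (lambda s: x in s)

print(commutes(set(), add(1), add(2)))        # True
print(commutes(set(), add(1), contains(1)))   # False: return value differs
print(commutes(set(), add(1), contains(2)))   # True
```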

2 citations

Proceedings ArticleDOI
10 Aug 2009
TL;DR: The approach to concurrent execution of non-commuting operations from distinct boosted transactions is summarized, and novel techniques for performing recovery lazily and for detecting cyclic dependencies are described.
Abstract: Transactional boosting is a methodology that improves transaction performance by using data-structure commutativity and abstract locks for synchronization. We announce a method for concurrent execution of non-commuting operations from distinct boosted transactions. Abstract locks are passed from one transaction to the next, and dependencies are created, enforcing certain commit orders. We summarize the approach and describe novel techniques for (i) performing recovery lazily and (ii) detecting cyclic dependencies.
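A minimal sketch of the boosting discipline the abstract builds on, assuming a per-key abstract lock and a semantic undo log; the names are hypothetical, and the paper's actual contribution (running non-commuting operations concurrently by passing locks and tracking commit-order dependencies) is not shown:

```python
# Minimal sketch of transactional boosting over a set: each operation
# takes an abstract lock on its key (operations on distinct keys
# commute) and records its semantic inverse so the transaction can be
# rolled back. Only the basic boosting discipline is shown here.
import threading
from collections import defaultdict

class BoostedSet:
    def __init__(self):
        self.items = set()
        self.key_locks = defaultdict(threading.Lock)  # abstract lock per key

    def insert(self, txn, x):
        txn.acquire(self.key_locks[x])
        if x not in self.items:
            self.items.add(x)
            txn.undo_log.append(lambda: self.items.discard(x))  # inverse op

class Txn:
    def __init__(self):
        self.held, self.undo_log = [], []

    def acquire(self, lock):
        if lock not in self.held:
            lock.acquire()
            self.held.append(lock)

    def commit(self):
        self._release()

    def abort(self):
        for inverse in reversed(self.undo_log):  # undo in reverse order
            inverse()
        self._release()

    def _release(self):
        for lock in self.held:
            lock.release()
```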

2 citations

Patent
13 Feb 2013
TL;DR: In this patent, a method for implementing real-time software transactional memory is described: because the maximum number of rollbacks is bounded, the worst-case execution time of a task can be kept within a known range.
Abstract: The invention discloses a method for implementing real-time software transactional memory. Because the maximum number of rollbacks is bounded, the time for executing a transactional-memory code section is guaranteed to stay within a certain range. Blocking caused by a lower-priority task occurs in two situations: when the task has just been scheduled, and when the task is at the head of a queue and is awakened. In both cases the maximum blocking time is the time for a lower-priority task to execute its transactional memory section once (or, if it takes longer, the time for a commit and rollback), taken as the maximum over all tasks with lower priority than the current task. Because the number of tasks in the queue is fixed, the worst-case execution time of a task can be bounded within a certain range.
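A hedged sketch of the bounded-rollback discipline the patent describes; MAX_ROLLBACKS and the lock-based fallback are illustrative assumptions, not the patent's specified mechanism:

```python
# Sketch: executing a transaction with a bounded number of rollbacks.
# Bounding retries is what makes the worst-case execution time of the
# transactional section computable for real-time schedulability
# analysis. The retry budget and fallback below are assumptions.
import threading

MAX_ROLLBACKS = 3
fallback_lock = threading.Lock()

class Conflict(Exception):
    pass

def run_transaction(body):
    for attempt in range(MAX_ROLLBACKS + 1):
        try:
            return body()          # speculative attempt
        except Conflict:
            continue               # roll back and retry (bounded)
    # Once the retry budget is exhausted, run pessimistically so the
    # total time stays within the statically analyzed bound.
    with fallback_lock:
        return body()
```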

2 citations

01 Jan 2008
TL;DR: A method for model-checking safety and liveness properties of procedural programs is presented: a concrete procedural program is first augmented with a well-founded ranking function, and the augmented program is then abstracted by a finitary state abstraction.
Abstract: Transactional memory is a programming abstraction intended to simplify the synchronization of conflicting concurrent memory accesses without the difficulties associated with locks. In the first part of this thesis we provide a framework and tools that allow one to formally verify that a transactional memory implementation satisfies its specification. First we show how to specify transactional memory in terms of admissible interchanges of transaction operations, and give proof rules for showing that an implementation satisfies its specification. We illustrate how to verify correctness, first using a model checker for bounded instantiations, and subsequently by using a theorem prover, thus eliminating all bounds. We provide a mechanical proof of the soundness of the verification method, as well as mechanical proofs for several implementations from the literature, including one that supports non-transactional memory accesses. Procedural programs with unbounded recursion present a challenge to symbolic model checkers, since they ostensibly require the checker to model an unbounded call stack. In the second part of this thesis we present a method for model-checking safety and liveness properties of procedural programs. Our method works by first augmenting a concrete procedural program with a well-founded ranking function, and then abstracting the augmented program by a finitary state abstraction. Using procedure summarization, the procedural abstract program is then reduced to a finite-state system, which is model-checked for the property.
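As a toy illustration of specifying correctness through admissible interchanges of operations: adjacent operations of different transactions may be interchanged unless they conflict, so a history can be brought to a serial order exactly when its precedence graph is acyclic. The sketch below checks this classical conflict-serializability condition; the thesis's specification framework is considerably more general:

```python
# Toy "admissible interchange" check: two adjacent operations of
# different transactions may be swapped unless they conflict (same
# variable, at least one write). A history is conflict-serializable
# iff its precedence graph is acyclic.
def conflicts(a, b):
    (_, var1, kind1), (_, var2, kind2) = a, b
    return var1 == var2 and ('w' in (kind1, kind2))

def serializable(history):
    # history: list of (txn, var, 'r' or 'w') in execution order
    edges = set()
    for i, a in enumerate(history):
        for b in history[i + 1:]:
            if a[0] != b[0] and conflicts(a, b):
                edges.add((a[0], b[0]))   # a's txn must precede b's
    txns = {op[0] for op in history}

    def cyclic(node, stack, done):        # DFS cycle detection
        stack.add(node)
        for (u, v) in edges:
            if u == node:
                if v in stack or (v not in done and cyclic(v, stack, done)):
                    return True
        stack.discard(node)
        done.add(node)
        return False

    done = set()
    return not any(cyclic(t, set(), done) for t in txns if t not in done)

# T1 must precede T2 (variable y) and T2 must precede T1 (variable x):
# the precedence graph has a cycle, so no interchanges can serialize it.
h = [('T1', 'y', 'w'), ('T2', 'x', 'w'), ('T1', 'x', 'r'), ('T2', 'y', 'r')]
print(serializable(h))  # False
```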

2 citations

Dissertation
01 Jan 2010
TL;DR: The subjects addressed in this dissertation are not directly related to TM but may possibly be applied to transactional programs; some of the surveyed solutions use type-and-effect systems to guarantee the absence of data races and deadlocks.
Abstract: 2.2.3 Abstract Syntax Tree (AST)
Although more often used in program translation or compilation, an Abstract Syntax Tree (AST) [Jon03] is also a frequent starting point for static program analysis. An AST is another graph representation of a program, one concerned with source code structure. Unlike the CFG, which details the behavior of a program, the AST shows the relation between the different statements of a program. Since it is a syntactic representation, with a direct mapping to points in the source code, it is well suited to simple program transformations. Even though it is less adequate for program analysis, since it carries no context information, it is often the starting point for building more complex representations, such as a CFG.

2.2.4 Type and Effect Systems
Type systems make it possible to statically ensure that certain operations are performed only on data of a required type [FQ03b]. The types of values may be either declared or inferred during the analysis. They are implemented in most common-use languages through annotations that define the data type of each value. A type-checker examines the program before compilation to ensure that certain runtime errors arising from incorrect interaction between data types cannot occur, rejecting programs that are not correct according to the type system. A type system defines the available types, the type resulting from an interaction between two specific types, the allowed combinations, and thus the possible type errors. A type system works by trying to apply a type to a program.

Effect systems specify the effects that result from an operation [FQ03b]. Effect systems are typically extensions of type systems, in which case they are collectively called a type-and-effect system. These specify not only what kinds of values exist in the program, but also what happens to those values, maintaining the same idea of mechanically applying rules to check whether the program is correct with respect to the constraints of the system. Type-and-effect systems may be used to statically check a program with respect to almost any correctness criterion, including those related to concurrency. We will see in Section 2.4 how some solutions use type-and-effect systems to guarantee the absence of data races and deadlocks.

2.2.5 Symbolic Execution
One major drawback of dynamic program analysis is the difficulty of providing guarantees for all possible program inputs. Symbolic execution [Kin76] is a static analysis technique that gives us some knowledge of variable values at each point in the program. Instead of assigning literal values to variables, the flow of a program is followed using inferred symbolic values, which represent classes of values of the variables. A simple example is inferring the value of an integer variable: certain mathematical properties may be used to eliminate certain values. If two positive integers are multiplied, then the result is known to be positive as well, and this guarantee may be used to determine the control flow of certain sections of the program (a small sketch of this idea appears below).
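A minimal sketch of the sign reasoning used as an example in the symbolic execution passage above, tracking only the sign class of integer values; real symbolic executors track full path conditions:

```python
# Instead of concrete integers, variables carry the symbolic values
# 'pos', 'neg', 'zero' or 'unknown', and multiplication is evaluated
# on those classes. Tracking just signs is already enough to decide
# the branch below.
def sym_mul(a, b):
    if 'zero' in (a, b):
        return 'zero'
    if 'unknown' in (a, b):
        return 'unknown'
    return 'pos' if a == b else 'neg'   # equal signs multiply to positive

x, y = 'pos', 'pos'        # all we know: both inputs are positive
z = sym_mul(x, y)          # 'pos', so the branch below is always taken
if z == 'pos':
    print("branch `z > 0` is always taken")
```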
2.2.6 Discussion
Program analysis is a very wide subject, with many ramifications that are out of the scope of this work. We have presented a few general concepts, some of which are of particular interest to us and will be referred to later in this dissertation. The concepts introduced in the following sections and in the remainder of this document are defined in terms of the definitions presented in this section. In Section 2.3 we will see how some tools are used to provide program analysis for varied purposes, and in Section 2.4 we will see in which ways data race detection tools make use of program analysis. Later, in Chapters 3 and 4, many concepts presented here will also be invoked in order to explain the approaches taken to detecting anomalies. Although the subjects we have addressed are not directly related to TM, they may possibly be applied to transactional programs.

2.3 Program Transformation and Analysis Tools
Our goal is to scan TM programs for spots that may cause an anomaly. For this, we must build tools that parse a subject program and find pieces of code that match a set of rules. In this section we consider systems that attempt to simplify the process of defining analyses and transformations of programs. These transformation systems vary in many aspects. While some of them are confined to a single specific language (or a small set of languages), others have a language-definition component, typically through the use of a language-definition language. The approach to defining transformations may be through the use of a transformation language, or through imperative manipulation of an abstract program representation. In particular, each system generally falls into one of two categories; we shall call them rewriting systems and transformation frameworks. These are very different types of systems, whose differences are relevant to the scope of this thesis. However, because they have coincident goals, we present and compare them together. In the rest of this section we examine this difference more closely, as well as other forms of categorization. We then take a look at some of these solutions, list their features, and make a comparison between them.

2.3.1 Rewriting Engines vs Compiler Frameworks
We present two typical characterizations of transformation systems. These are mere guidelines, in the sense that a system that fits one of these classifications does not necessarily present all of the related attributes.

Rewriting systems follow a declarative approach. They are easier and quicker to use, and can be used for any language. This is because they are based on transformation languages, often divided into a language-definition language and a substitution language. A language-definition language defines a grammar, enumerating terminal symbols and tokens, much in the manner of Backus-Naur Form (BNF). Substitution languages define the output for each matched token, optionally according to free symbols that are to be matched with attributes of the matched token. This is, however, very limited compared with the context-dependent transformation available with the program analysis tools presented next. Many rewriting systems work by examples, i.e., "find all x=x+1 and replace them with x++" (a toy sketch of such a rule appears below). Examples of such systems include TXL [Cor06], ASF+SDF Meta-environment [vHKO02], and Stratego/XT [Vis04].
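A toy rewrite rule in the work-by-examples style just described, using a regular expression as a stand-in; actual rewriting systems such as TXL or Stratego/XT match parse trees produced from a grammar definition, which avoids the pitfalls of raw textual matching:

```python
# Toy rewrite rule: find every `v = v + 1` and replace it with `v++`.
import re

RULE = re.compile(r'\b(\w+)\s*=\s*\1\s*\+\s*1\b')  # v = v + 1, same v

def rewrite(source):
    return RULE.sub(r'\1++', source)

print(rewrite("i = i + 1; total = total + 1; j = k + 1;"))
# -> "i++; total++; j = k + 1;"  (j = k + 1 is unchanged: names differ)
```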
Compiler frameworks are more complex and powerful. Instead of making simple text transformations on input code, they manipulate some form of abstract representation of the target program, such as an Abstract Syntax Tree (AST). This representation is typically referred to as the Intermediate Representation (IR). They are composed of front-, mid-, and back-ends. The front-end parses the input and generates an instance of the IR, which is passed to the mid-end. The mid-end then manipulates the IR and applies all required transformations to the program structure. The modified IR is then passed on to the back-end, which unparses it, i.e., generates the output code from it. This architecture around IRs is well suited to handling programs written in a specific language, but makes it hard to adapt the system to a new one. These frameworks make a more semantic interpretation of the whole input program (which may span more than one file), and so this logic is spread throughout the framework. Although some provide facilities that allow them to be extended to new languages, this extension is not trivial and cannot be accomplished without modifying at least one component of the architecture. In contrast to the declarative environment used in rewriting systems, compiler frameworks take a more imperative approach. They provide libraries for representing and manipulating IR instances, often implemented in the same language they manipulate. Another difference from rewriters is that, because the whole program is handled in a static representation, much more context information from any part of the program can be used in the transformation or analysis. Note that representing a large part of the program in IR, or all of it, makes compiler frameworks much heavier than simple rewrite engines. Examples of such frameworks include ROSE [SQ03], LLVM [LA04], DMS-SRT [DMS04] and Polyglot [NCM03].

In conclusion, rewriting is very simple and quick to use, with light systems. Rewriters are great for quickly testing language extensions and for direct translation between languages. Compiler frameworks are less flexible, consisting of more complex systems, but they provide additional power by supplying the user with a specific and extensive API for IR manipulation (a small sketch of this style appears after this excerpt). This method can be much heavier: it is not cheap to handle an IR representing a program with millions of lines, compared with simple pattern detection and substitution. However, it does allow more complex program optimizations and analyses.

2.3.2 Examples of Analysis and Transformation Tools
We now present some of the most well-known systems, and try to understand how they relate.

ROSE: The ROSE Compiler Infrastructure [SQ03] was developed at the Lawrence Livermore National Laboratory by Daniel Quinlan. It is still under development but appears to be stable. Because it is open source and supports the addition of new front-ends and back-ends, it could support additional languages, but out of the box it is only designed for use with C, C++ and Fortran. ROSE allows attribute evaluation and IR manipulation through a specific C++ API. It supports out-of-the-box graphical representation of programs, and high-level analyses such as control flow and data flow.

LLVM: The Low Level Virtual Machine (LLVM) [LA04] is a compiler infrastructure designed for program optimization throughout th
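To contrast with textual rewriting, here is a small illustration of the front-end / mid-end / back-end pipeline described in the excerpt, using Python's own ast module as a stand-in for a compiler framework; ROSE or LLVM expose the same pattern through C++ APIs over much richer IRs:

```python
# Parse source to an IR (here, a Python AST), transform the IR
# imperatively in a mid-end pass, then unparse it back to source.
import ast

class ConstantFolder(ast.NodeTransformer):
    """Mid-end pass: fold `<const> + <const>` into a single constant."""
    def visit_BinOp(self, node):
        self.generic_visit(node)                 # fold children first
        if (isinstance(node.op, ast.Add)
                and isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)):
            return ast.copy_location(
                ast.Constant(node.left.value + node.right.value), node)
        return node

tree = ast.parse("x = 1 + 2 + y")               # front-end: source -> IR
tree = ast.fix_missing_locations(ConstantFolder().visit(tree))
print(ast.unparse(tree))                        # back-end: "x = 3 + y"
```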

2 citations


Network Information
Related Topics (5)
Compiler: 26.3K papers, 578.5K citations (87% related)
Cache: 59.1K papers, 976.6K citations (86% related)
Parallel algorithm: 23.6K papers, 452.6K citations (84% related)
Model checking: 16.9K papers, 451.6K citations (84% related)
Programming paradigm: 18.7K papers, 467.9K citations (83% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    16
2022    40
2021    29
2020    63
2019    70
2018    88