Book•

Concepts and applications of multilevel transactions and open nested transactions

03 Jan 1992 - pp. 515-553
About: The article was published on 1992-01-03 and is currently open access. It has received 272 citations to date. The article focuses on the topics: Nested transaction & Distributed transaction.
Citations
Proceedings Article•DOI•
27 Feb 2006
TL;DR: This paper presents a new implementation of transactional memory, log-based transactional memory (LogTM), that makes commits fast by storing old values to a per-thread log in cacheable virtual memory and storing new values in place.
Abstract: Transactional memory (TM) simplifies parallel programming by guaranteeing that transactions appear to execute atomically and in isolation. Implementing these properties includes providing data version management for the simultaneous storage of both new (visible if the transaction commits) and old (retained if the transaction aborts) values. Most (hardware) TM systems leave old values "in place" (the target memory address) and buffer new values elsewhere until commit. This makes aborts fast, but penalizes (the much more frequent) commits. In this paper, we present a new implementation of transactional memory, log-based transactional memory (LogTM), that makes commits fast by storing old values to a per-thread log in cacheable virtual memory and storing new values in place. LogTM makes two additional contributions. First, LogTM extends a MOESI directory protocol to enable both fast conflict detection on evicted blocks and fast commit (using lazy cleanup). Second, LogTM handles aborts in (library) software with little performance penalty. Evaluations running micro- and SPLASH-2 benchmarks on a 32-way multiprocessor support our decision to optimize for commit by showing that only 1-2% of transactions abort.
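The eager version management that LogTM optimizes for can be illustrated with a minimal sketch (Python for clarity; `UndoLog` and its method names are hypothetical, not LogTM's actual hardware interface): old values go to a per-thread log, new values are written in place, so commit just discards the log and abort unrolls it backwards.

```python
# Illustrative sketch of LogTM-style eager version management.
# Old values go to a per-thread undo log; new values are written in place.

class UndoLog:
    def __init__(self, memory):
        self.memory = memory          # shared "memory": address -> value
        self.log = []                 # per-thread undo log of (addr, old_value)

    def tx_write(self, addr, new_value):
        # Save the old value first, then update in place.
        self.log.append((addr, self.memory.get(addr)))
        self.memory[addr] = new_value

    def commit(self):
        # Fast commit: new values are already in place; just drop the log.
        self.log.clear()

    def abort(self):
        # Slow-path abort: unroll the log in reverse to restore old values.
        while self.log:
            addr, old = self.log.pop()
            self.memory[addr] = old


mem = {"x": 1}
tx = UndoLog(mem)
tx.tx_write("x", 2)
tx.abort()
assert mem["x"] == 1      # old value restored on abort
tx.tx_write("x", 3)
tx.commit()
assert mem["x"] == 3      # new value kept; log is empty
```

The asymmetry is the point: commit (the common case, per the paper's 1-2% abort rate) does no data movement at all, while abort pays for the log walk.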

724 citations

Proceedings Article•DOI•
10 Feb 2007
TL;DR: This paper proposes a hardware transactional memory system called LogTM Signature Edition (LogTM-SE), which uses signatures to summarize a transaction's read- and write-sets, detects conflicts on coherence requests (eager conflict detection), and allows cache victimization, unbounded nesting, thread context switching and migration, and paging.
Abstract: This paper proposes a hardware transactional memory (HTM) system called LogTM Signature Edition (LogTM-SE). LogTM-SE uses signatures to summarize a transaction's read- and write-sets and detects conflicts on coherence requests (eager conflict detection). Transactions update memory "in place" after saving the old value in a per-thread memory log (eager version management). Finally, a transaction commits locally by clearing its signature, resetting the log pointer, etc., while aborts must undo the log. LogTM-SE achieves two key benefits. First, signatures and logs can be implemented without changes to highly-optimized cache arrays because LogTM-SE never moves cached data, changes a block's cache state, or flash clears bits in the cache. Second, transactions are more easily virtualized because signatures and logs are software accessible, allowing the operating system and runtime to save and restore this state. In particular, LogTM-SE allows cache victimization, unbounded nesting (both open and closed), thread context switching and migration, and paging.
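Signature-based conflict detection can be sketched roughly as follows (a hypothetical illustration; LogTM-SE's real signatures are hardware bit vectors updated by address hashing, not Python objects). A signature is a conservative summary: membership tests may report false positives but never false negatives, so conflicts are never missed.

```python
# Illustrative sketch of signature-based read/write-set summaries.
# A fixed-size bitmask stands in for the hardware signature register.

SIG_BITS = 64

def sig_hash(addr):
    # Map an address to one of SIG_BITS bit positions.
    return hash(addr) % SIG_BITS

class TxSignature:
    def __init__(self):
        self.read_sig = 0
        self.write_sig = 0

    def record_read(self, addr):
        self.read_sig |= 1 << sig_hash(addr)

    def record_write(self, addr):
        self.write_sig |= 1 << sig_hash(addr)

    def conflicts_with_write(self, addr):
        # A remote write conflicts if the address may be in our
        # read or write set (conservative: false positives possible).
        bit = 1 << sig_hash(addr)
        return bool((self.read_sig | self.write_sig) & bit)

    def commit(self):
        # Local commit: flash-clear the signature.
        self.read_sig = self.write_sig = 0


tx = TxSignature()
tx.record_read("block_A")
assert tx.conflicts_with_write("block_A")   # definite member always detected
tx.commit()
assert not tx.conflicts_with_write("block_A")
```

Because the summary lives outside the cache arrays, evicting a block does not lose conflict-detection state, which is what enables victimization, context switching, and paging in the paper.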

384 citations


Cites background from "Concepts and applications of multil..."

  • ...To increase concurrency, some also argue for open nesting [30] which allows an inner transaction to commit its changes and release isolation before the outer transactions commit....


Journal Article•DOI•
TL;DR: This work presents a solution for implementing more reliable processes using exception handling and atomicity, describes a mechanism incorporating both transactions and exceptions, and introduces a validation technique for assessing the correctness of process specifications.
Abstract: Fault tolerance is a key requirement in process support systems (PSS), a class of distributed computing middleware encompassing applications such as workflow management systems and process centered software engineering environments. A PSS controls the flow of work between programs and users in networked environments based on a "metaprogram" (the process). The resulting applications are characterized by a high degree of distribution and a high degree of heterogeneity (properties that make fault tolerance both highly desirable and difficult to achieve). We present a solution for implementing more reliable processes by using exception handling, as it is used in programming languages, and atomicity, as it is known from the transaction concept in database management systems. We describe the mechanism incorporating both transactions and exceptions and present a validation technique for assessing the correctness of process specifications.

360 citations


Cites background from "Concepts and applications of multil..."

  • ...In addition, a considerable amount of work toward flexible recovery has been done in the context of advanced transaction models [11], [19], [18], [24], [21], [ 49 ], [48]....


Book•DOI•
01 Jan 2012
TL;DR: This chapter deals with the flexibility needs of both pre-specified and loosely-specified processes and elicits requirements for flexible process support in a PAIS.
Abstract: Traditionally, process-aware information systems (PAISs) have focused on the support of predictable and repetitive business processes. Even though such processes are suited to being fully pre-specified in a process model, flexibility is required to support dynamic process adaptations in case of exceptions. Flexibility is also needed to accommodate evolving business processes and to cope with business process variability. Furthermore, PAISs are increasingly used to support less structured processes, which can often be characterized as knowledge-intensive. Processes of this category are neither fully predictable nor repetitive, and therefore cannot be fully pre-specified at build-time. The (partial) unpredictability of these processes also demands a certain amount of looseness. This chapter deals with the flexibility needs of both pre-specified and loosely-specified processes and elicits requirements for flexible process support in a PAIS. In addition, the chapter discusses PAIS features needed to accommodate flexibility needs in practice, for example traceability, business compliance, and user support.

348 citations

Proceedings Article•DOI•
20 Oct 2006
TL;DR: This paper extends the recently-proposed flat Log-based Transactional Memory (LogTM) with nested transactions and proposes escape actions that allow trusted code to run outside the confines of the transactional memory system.
Abstract: Nested transactional memory (TM) facilitates software composition by letting one module invoke another without either knowing whether the other uses transactions. Closed nested transactions extend isolation of an inner transaction until the top-level transaction commits. Implementations may flatten nested transactions into the top-level one, resulting in a complete abort on conflict, or allow partial abort of inner transactions. Open nested transactions allow a committing inner transaction to immediately release isolation, which increases parallelism and expressiveness at the cost of both software and hardware complexity. This paper extends the recently-proposed flat Log-based Transactional Memory (LogTM) with nested transactions. Flat LogTM saves pre-transaction values in a log, detects conflicts with read (R) and write (W) bits per cache block, and, on abort, invokes a software handler to unroll the log. Nested LogTM supports nesting by segmenting the log into a stack of activation records and modestly replicating R/W bits. To facilitate composition with nontransactional code, such as language runtime and operating system services, we propose escape actions that allow trusted code to run outside the confines of the transactional memory system.
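The log segmentation described above, one activation record per nesting level, can be sketched as follows (illustrative only; `NestedUndoLog` and its method names are invented for this example, not Nested LogTM's actual interface). An inner abort unrolls only the innermost segment, while a closed-nested commit folds the segment into its parent so an outer abort can still undo it.

```python
# Illustrative sketch of an undo log segmented into a stack of
# activation records, one per nesting level.

class NestedUndoLog:
    def __init__(self, memory):
        self.memory = memory
        self.frames = []              # stack of per-level undo segments

    def begin(self):
        self.frames.append([])        # push an activation record

    def tx_write(self, addr, value):
        self.frames[-1].append((addr, self.memory.get(addr)))
        self.memory[addr] = value

    def commit_closed(self):
        # Closed nested commit: fold this level's undo entries into the
        # parent segment; at top level, simply discard them.
        done = self.frames.pop()
        if self.frames:
            self.frames[-1].extend(done)

    def abort_inner(self):
        # Partial abort: unroll only the innermost activation record.
        for addr, old in reversed(self.frames.pop()):
            self.memory[addr] = old


mem = {"x": 0, "y": 0}
log = NestedUndoLog(mem)
log.begin()                 # outer transaction
log.tx_write("x", 1)
log.begin()                 # inner transaction
log.tx_write("y", 1)
log.abort_inner()           # undoes only the inner writes
assert mem == {"x": 1, "y": 0}
log.commit_closed()         # outer commits; x stays updated
assert mem["x"] == 1
```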

229 citations


Cites background from "Concepts and applications of multil..."

  • ...Nested LogTM also supports open nested transactions [24, 32], which provide greater concurrency and richer semantics....


  • ...When an open nested transaction S commits within an enclosing transaction L, (a) the TM system releases data read or written by S, so that other transactions can access them without generating conflicts and (b) S may register commit and compensating actions to be run when transaction L commits or aborts, respectively [9, 32]....

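The commit and compensating actions in the excerpt above can be sketched as follows (a hypothetical illustration; class and method names are invented). An open-nested child releases isolation as soon as it commits, but registers handlers that the enclosing transaction runs when it finally commits or aborts, with compensations applied in reverse order.

```python
# Illustrative sketch of commit/compensating actions registered by an
# open-nested transaction with its enclosing transaction.

class OuterTx:
    def __init__(self):
        self.on_commit = []       # commit actions from open-nested children
        self.on_abort = []        # compensating actions

    def register(self, commit_action, compensating_action):
        # Called when an open-nested child commits and releases isolation.
        self.on_commit.append(commit_action)
        self.on_abort.append(compensating_action)

    def commit(self):
        for action in self.on_commit:
            action()

    def abort(self):
        # Compensate children in reverse commit order.
        for action in reversed(self.on_abort):
            action()


events = []
outer = OuterTx()
# An open-nested child commits immediately, but leaves behind a
# compensating action in case the outer transaction later aborts.
events.append("inner-committed")
outer.register(lambda: events.append("outer-committed"),
               lambda: events.append("inner-compensated"))
outer.abort()
assert events == ["inner-committed", "inner-compensated"]
```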

References
Book•
01 Aug 1990
TL;DR: This third edition of a classic textbook can be used to teach at the senior undergraduate and graduate levels and concentrates on fundamental theories as well as techniques and algorithms in distributed data management.
Abstract: This third edition of a classic textbook can be used to teach at the senior undergraduate and graduate levels. The material concentrates on fundamental theories as well as techniques and algorithms. The advent of the Internet and the World Wide Web, and, more recently, the emergence of cloud computing and streaming data applications, has forced a renewal of interest in distributed and parallel data management, while, at the same time, requiring a rethinking of some of the traditional techniques. This book covers the breadth and depth of this re-emerging field. The coverage consists of two parts. The first part discusses the fundamental principles of distributed data management and includes distribution design, data integration, distributed query processing and optimization, distributed transaction management, and replication. The second part focuses on more advanced topics and includes discussion of parallel database systems, distributed object management, peer-to-peer data management, web data management, data stream systems, and cloud computing. New in this Edition: New chapters, covering database replication, database integration, multidatabase query processing, peer-to-peer data management, and web data management. Coverage of emerging topics such as data streams and cloud computing Extensive revisions and updates based on years of class testing and feedback Ancillary teaching materials are available.

2,395 citations

Book•
01 Jan 1985
TL;DR: The method for implementing nested transactions is novel in that it uses locking for concurrency control and the necessary algorithms for locking, recovery, distributed commitment, and distributed deadlock detection for a nested transaction system are presented.
Abstract: This report addresses the issue of providing software for reliable distributed systems. In particular, we examine how to program a system so that the software continues to work in the face of a variety of failures of parts of the system. The design presented uses the concept of transactions: collections of primitive actions that are indivisible. The indivisibility of transactions insures that consistent results are obtained even when requests are processed concurrently or failures occur during a request. Our design permits transactions to be nested. Nested transactions provide nested universes of synchronization and recovery from failures. The advantages of nested transactions over single-level transactions are that they provide concurrency control within transactions by serializing subtransactions appropriately, and that they permit parts of a transaction to fail without necessarily aborting the entire transaction. The method for implementing nested transactions described in this report is novel in that it uses locking for concurrency control. We present the necessary algorithms for locking, recovery, distributed commitment, and distributed deadlock detection for a nested transaction system. While the design has not been implemented, it has been simulated.
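The locking design described above hinges on lock inheritance: when a subtransaction commits, its locks pass to its parent rather than being released, so siblings stay isolated until the top level commits. A minimal sketch (hypothetical names; the report's actual rules also cover lock modes, which are omitted here):

```python
# Illustrative sketch of lock inheritance in nested transactions:
# a lock may be acquired if it is free, already held by the requester,
# or retained only by one of the requester's ancestors.

class NestedLocks:
    def __init__(self):
        self.holders = {}             # addr -> transaction holding the lock
        self.parent = {}              # child tx -> parent tx

    def begin(self, tx, parent=None):
        if parent is not None:
            self.parent[tx] = parent

    def ancestors(self, tx):
        while tx in self.parent:
            tx = self.parent[tx]
            yield tx

    def acquire(self, tx, addr):
        holder = self.holders.get(addr)
        if holder is None or holder == tx or holder in self.ancestors(tx):
            self.holders[addr] = tx
            return True
        return False                  # conflicting lock held elsewhere

    def commit(self, tx):
        # On subtransaction commit, locks are inherited by the parent;
        # at top level (parent None) they become free.
        p = self.parent.get(tx)
        for addr, h in self.holders.items():
            if h == tx:
                self.holders[addr] = p


locks = NestedLocks()
locks.begin("T")                     # top-level transaction
locks.begin("T.1", parent="T")       # subtransaction of T
assert locks.acquire("T.1", "x")
assert not locks.acquire("U", "x")   # unrelated transaction is blocked
locks.commit("T.1")                  # lock inherited by parent T
assert locks.acquire("T", "x")       # T may now use the retained lock
```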

975 citations

Book•
01 Jun 1988
TL;DR: This paper describes some areas which require further study: the integration of the transaction concept with the notion of abstract data type, techniques to allow transactions to be composed of sub-transactions, and handling transactions which last for extremely long times.
Abstract: A transaction is a transformation of state which has the properties of atomicity (all or nothing), durability (effects survive failures) and consistency (a correct transformation). The transaction concept is key to the structuring of data management applications. The concept may have applicability to programming systems in general. This paper restates the transaction concepts and attempts to put several implementation approaches in perspective. It then describes some areas which require further study: (1) the integration of the transaction concept with the notion of abstract data type, (2) some techniques to allow transactions to be composed of sub-transactions, and (3) handling transactions which last for extremely long times (days or months).

759 citations

Journal Article•DOI•
TL;DR: This paper investigates how the semantic knowledge of an application can be used in a distributed database to process transactions efficiently and to avoid some of the delays associated with failures.
Abstract: This paper investigates how the semantic knowledge of an application can be used in a distributed database to process transactions efficiently and to avoid some of the delays associated with failures. The main idea is to allow nonserializable schedules which preserve consistency and which are acceptable to the system users. To produce such schedules, the transaction processing mechanism receives semantic information from the users in the form of transaction semantic types, a division of transactions into steps, compatibility sets, and countersteps. Using these notions, we propose a mechanism which allows users to exploit their semantic knowledge in an organized fashion. The strengths and weaknesses of this approach are discussed.
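The compatibility-set idea described above can be made concrete with a toy sketch (hypothetical transaction types and table; the paper's model also includes steps and countersteps, omitted here): a compatibility set names which transaction types may interleave between the steps of a given type, admitting nonserializable but semantically acceptable schedules.

```python
# Illustrative sketch of semantic compatibility sets: which transaction
# types may interleave between the steps of a running transaction.

COMPATIBILITY = {
    # A deposit tolerates other deposits interleaving between its steps
    # (the final balance is the same), but not an audit, which must see
    # a consistent total.
    "deposit": {"deposit"},
    "audit": set(),
}

def may_interleave(running_type, incoming_type):
    # The scheduler consults the running transaction's compatibility set.
    return incoming_type in COMPATIBILITY[running_type]


assert may_interleave("deposit", "deposit")      # allowed, though nonserializable
assert not may_interleave("deposit", "audit")    # audit must wait
```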

537 citations

Journal Article•DOI•
TL;DR: Two novel concurrency control algorithms for abstract data types are presented, and it is proved that both algorithms ensure a local atomicity property called dynamic atomicity, which means that they can be used in combination with any other algorithms that also ensure dynamic atomicity.
Abstract: Two novel concurrency control algorithms for abstract data types are presented that ensure serializability of transactions. It is proved that both algorithms ensure a local atomicity property called dynamic atomicity. The algorithms are quite general, permitting operations to be both partial and nondeterministic. The results returned by operations can be used in determining conflicts, thus allowing higher levels of concurrency than otherwise possible. The descriptions and proofs encompass recovery as well as concurrency control. The two algorithms use different recovery methods: one uses intentions lists, and the other uses undo logs. It is shown that conflict relations that work with one recovery method do not necessarily work with the other. A general correctness condition that must be satisfied by the combination of a recovery method and a conflict relation is identified.
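The intentions-list recovery method contrasted above can be sketched minimally (an illustrative sketch, not Weihl's formulation; the `IntentionsList` name is invented): updates are buffered and applied only at commit, so abort simply discards the buffer, the mirror image of the undo-log approach, where updates happen in place and abort must restore old values.

```python
# Illustrative sketch of intentions-list recovery: deferred updates
# applied at commit, discarded on abort.

class IntentionsList:
    def __init__(self, memory):
        self.memory = memory
        self.intentions = []          # buffered (addr, new_value) pairs

    def tx_write(self, addr, value):
        self.intentions.append((addr, value))   # memory untouched until commit

    def commit(self):
        for addr, value in self.intentions:     # apply buffered updates
            self.memory[addr] = value
        self.intentions.clear()

    def abort(self):
        self.intentions.clear()                 # cheap: just discard


mem = {"x": 0}
tx = IntentionsList(mem)
tx.tx_write("x", 5)
assert mem["x"] == 0        # update deferred until commit
tx.commit()
assert mem["x"] == 5
```

The trade-off is the reverse of an undo log: here abort is cheap and commit pays for applying the buffered writes, which is why a conflict relation tuned for one recovery method need not be correct for the other.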

318 citations