
Showing papers on "Concurrency control published in 1986"


Journal ArticleDOI
TL;DR: The Escrow Method offered here is designed to support nonblocking record updates by transactions that are “long lived” and thus require long periods to complete; because intermediate results are recoverable before commit, several advantages result.
Abstract: A method is presented for permitting record updates by long-lived transactions without forbidding simultaneous access by other users to records modified. Earlier methods presented separately by Gawlick and Reuter are comparable but concentrate on “hot-spot” situations, where even short transactions cannot lock frequently accessed fields without causing bottlenecks. The Escrow Method offered here is designed to support nonblocking record updates by transactions that are “long lived” and thus require long periods to complete. Recoverability of intermediate results prior to commit thus becomes a design goal, so that updates as of a given time can be guaranteed against memory or media failure while still retaining the prerogative to abort. This guarantee basically completes phase one of a two-phase commit, and several advantages result: (1) As with Gawlick's and Reuter's methods, high-concurrency items in the database will not act as a bottleneck; (2) transaction commit of different updates can be performed asynchronously, allowing natural distributed transactions; indeed, distributed transactions in the presence of delayed messages or occasional line disconnection become feasible in a way that we argue will tie up minimal resources for the purpose intended; and (3) it becomes natural to allow for human interaction in the middle of a transaction without loss of concurrent access or any special difficulty for the application programmer. The Escrow Method, like Gawlick's Fast Path and Reuter's Method, requires the database system to be an “expert” about the type of transactional updates performed, most commonly updates involving incremental changes to aggregate quantities. However, the Escrow Method is extendable to other types of updates.

271 citations
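
As a hedged illustration of the core idea, assuming a single aggregate quantity with a lower bound, an escrow update might be sketched in Python as follows; the class and method names are illustrative, not the paper's.

```python
class EscrowQuantity:
    """Aggregate value with escrow: a long transaction reserves an
    update without blocking concurrent users of the same field."""

    def __init__(self, value, low=0):
        self.value = value        # committed value
        self.low = low            # hard lower bound (e.g., stock cannot go negative)
        self.reserved = {}        # txn id -> amount held in escrow

    def escrow_decrement(self, txn, amount):
        # Grant only if the worst case (all holds commit) stays within bounds.
        worst_case = self.value - sum(self.reserved.values()) - amount
        if worst_case < self.low:
            return False          # would risk violating the invariant
        self.reserved[txn] = self.reserved.get(txn, 0) + amount
        return True               # hold granted; other transactions proceed

    def commit(self, txn):
        self.value -= self.reserved.pop(txn, 0)   # apply the escrowed change

    def abort(self, txn):
        self.reserved.pop(txn, 0)  # release the hold; nothing was applied


# A long transaction holds 10 units in escrow; others still update freely.
q = EscrowQuantity(value=100)
assert q.escrow_decrement("T1", 10)
assert q.escrow_decrement("T2", 50)   # concurrent, no blocking
q.commit("T1")
q.abort("T2")
assert q.value == 90
```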


Journal ArticleDOI
TL;DR: The goal of the work presented here is to establish the minimum set of modifications to LTMs that allow synchronized retrievals and distributed updates (whenever semantic conflicts can be resolved) while continuing to maintain a high degree of local DBMS autonomy.

85 citations


Journal ArticleDOI
15 Jun 1986
TL;DR: A model of multiple layers of abstraction is examined that explains this phenomenon and suggests an approach to building layered systems with transaction-oriented synchronization and rollback, which may make it easier to provide the high data integrity of reliable database transaction processing in a broader class of information systems.
Abstract: There are many examples of actions on abstract data types which can be correctly implemented with nonserializable and nonrecoverable schedules of reads and writes. We examine a model of multiple layers of abstraction that explains this phenomenon and suggests an approach to building layered systems with transaction-oriented synchronization and rollback. Our model may make it easier to provide the high data integrity of reliable database transaction processing in a broader class of information systems. We concentrate on the recovery aspects here; a technical report [Moss et al 85] has a more complete discussion of concurrency control.

79 citations
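
A hedged toy example of the phenomenon: the interleaving below is not conflict-serializable at the read/write level (T1 precedes T2 on x, T2 precedes T1 on y), yet because increments commute at the abstract-data-type level, the final state matches a serial execution. The code is illustrative, not from the paper.

```python
# Each increment is atomic at its own level of abstraction, but is
# implemented below as a separate low-level read and write.
data = {"x": 0, "y": 0}

def inc(key, delta):
    v = data[key]           # low-level read
    data[key] = v + delta   # low-level write

inc("x", 1)   # T1: r1(x) w1(x)
inc("x", 2)   # T2: r2(x) w2(x)
inc("y", 2)   # T2: r2(y) w2(y)
inc("y", 1)   # T1: r1(y) w1(y)

# Neither serial order T1;T2 nor T2;T1 produces this read/write schedule,
# yet the result equals both: increments commute.
assert data == {"x": 3, "y": 3}
```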


Journal ArticleDOI
TL;DR: This work studies and formalizes mutual exclusion mechanisms that are more general, and may provide higher reliability, than the voting mechanisms that have been proposed as solutions to this problem.
Abstract: A network partition can break a distributed computing system into groups of isolated nodes. When this occurs, a mutual exclusion mechanism may be required to ensure that isolated groups do not concurrently perform conflicting operations. We study and formalize these mechanisms in three basic scenarios: where there is a single conflicting type of action; where there are two conflicting types, but operations of the same type do not conflict; and where there are two conflicting types, but operations of one type do not conflict among themselves. For each scenario, we present applications that require mutual exclusion (e.g., name servers, termination protocols, concurrency control). In each case, we also present mutual exclusion mechanisms that are more general and that may provide higher reliability than the voting mechanisms that have been proposed as solutions to this problem.

73 citations
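
For the first scenario (a single conflicting type of action), a hedged Python sketch of one such generalization of voting, a coterie of pairwise-intersecting groups, is shown below; the particular groups are illustrative, not taken from the paper.

```python
from itertools import combinations

def is_coterie(groups):
    """Every pair of groups must intersect, so at most one network
    partition can assemble a full group and proceed."""
    return all(set(a) & set(b) for a, b in combinations(groups, 2))

def may_proceed(reachable_nodes, groups):
    """A partition may perform the exclusive action iff it contains
    some group entirely."""
    return any(set(g) <= set(reachable_nodes) for g in groups)

# A coterie over nodes {1..5} not expressible as simple majority voting:
coterie = [{1, 2}, {1, 3}, {2, 3, 4, 5}]
assert is_coterie(coterie)
assert may_proceed({1, 2, 4}, coterie)   # contains the group {1, 2}
assert not may_proceed({4, 5}, coterie)  # contains no group: must wait
```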


Journal Article
TL;DR: In this paper, the authors briefly discuss concurrency control in distributed database systems: its criterion of correctness, its algorithms and techniques, and implementation questions such as deadlock, locks in relational databases, and robustness.
Abstract: Concurrency control is necessary and important in any multiuser database system, especially a distributed one. In the paper we briefly discuss concurrency control in distributed database systems: its criterion of correctness, its algorithms and techniques, and some questions related to its implementation, such as deadlock, locks in relational databases, robustness, and so on.

72 citations


Book
01 Oct 1986

51 citations


Proceedings ArticleDOI
05 Feb 1986
TL;DR: The Time Warp mechanism is introduced as a new method for concurrency control in distributed database systems; its distinguishing features are the unification of transactions and data into the more general notion of object, and the use of object rollback as the fundamental tool for synchronization instead of blocking or abortion.
Abstract: In this paper we introduce the Time Warp mechanism as a new method for concurrency control in distributed database systems. Its major distinguishing features are first, a unification of transactions and data as two forms of the more general notion of object, and second, the use of object rollback as the fundamental tool for synchronization instead of blocking or abortion.

47 citations
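
The full mechanism (virtual time, antimessages) is beyond an abstract; as a hedged toy sketch in Python, assuming a single object receiving timestamped updates, rollback on a straggler might look like the following. All names are illustrative.

```python
class TimeWarpObject:
    """Processes updates optimistically in timestamp order; a straggler
    (an update older than ones already applied) forces a rollback."""

    def __init__(self, state):
        self.state = state
        self.history = []          # [(timestamp, state_before, update)]

    def receive(self, ts, update):
        # Straggler: roll back every update with a later timestamp ...
        rolled_back = []
        while self.history and self.history[-1][0] > ts:
            t, saved, u = self.history.pop()
            self.state = saved                 # restore checkpointed state
            rolled_back.append((t, u))
        # ... apply the straggler ...
        self._apply(ts, update)
        # ... then re-execute the rolled-back updates in timestamp order.
        for t, u in reversed(rolled_back):
            self._apply(t, u)

    def _apply(self, ts, update):
        self.history.append((ts, self.state, update))  # checkpoint first
        self.state = update(self.state)


obj = TimeWarpObject(state=0)
obj.receive(10, lambda s: s + 5)   # state 5
obj.receive(30, lambda s: s * 2)   # state 10
obj.receive(20, lambda s: s + 1)   # straggler: rollback, replay -> (0+5+1)*2
assert obj.state == 12
```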


Journal ArticleDOI
TL;DR: A method is proposed which uses a Petri net model to formally identify both the state and the state reachability tree of a distributed system, which are used to define systematically the boundaries of a conversation.
Abstract: The problems of error detection and recovery are examined in a number of concurrent processes expressed as a set of communicating sequential processes (CSP). A method is proposed which uses a Petri net model to formally identify both the state and the state reachability tree of a distributed system. These are used to define systematically the boundaries of a conversation, including the recovery and test lines which are essential parts of the fault-tolerant mechanism. The techniques are implemented using the OCCAM programming language, which is derived from CSP. The application of this method is shown by a control example.

36 citations


Proceedings ArticleDOI
01 Nov 1986
TL;DR: This paper describes several new optimistic concurrency control techniques for objects in distributed systems, proves their correctness and optimality properties, and characterizes the circumstances under which each is likely to be useful.
Abstract: A concurrency control technique is optimistic if it allows transactions to execute without synchronization, relying on commit-time validation to ensure serializability. This paper describes several new optimistic concurrency control techniques for objects in distributed systems, proves their correctness and optimality properties, and characterizes the circumstances under which each is likely to be useful. These techniques have the following novel aspects. First, unlike many methods that classify operations only as reads or writes, these techniques systematically exploit type-specific properties of objects to validate more interleavings. Necessary and sufficient validation conditions are derived directly from an object's data type specification. Second, these techniques are modular: they can be applied selectively on a per-object (or even per-operation) basis in conjunction with standard pessimistic techniques such as two-phase locking, permitting optimistic methods to be introduced exactly where they will be most effective. Third, when integrated with quorum-consensus replication, these techniques circumvent certain tradeoffs between concurrency and availability imposed by comparable pessimistic techniques. Finally, the accuracy and efficiency of validation are further enhanced by some technical improvements: distributed validation is performed as a side-effect of the commit protocol, and validation takes into account the results of operations, accepting certain interleavings that would have produced delays in comparable pessimistic schemes.

32 citations
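
A hedged Python sketch of the flavor of type-specific validation, assuming a toy bank-account type; the conflict table is illustrative, not derived from a real data type specification.

```python
def commutes(op_a, op_b):
    """Conflict relation for a toy account type: deposits commute with
    each other but not with a balance read; reads commute with reads."""
    table = {
        ("deposit", "deposit"): True,
        ("deposit", "read_balance"): False,   # result depends on order
        ("read_balance", "read_balance"): True,
    }
    return table.get(tuple(sorted((op_a, op_b))), False)

def validate(committing_ops, concurrent_ops):
    """Accept the transaction iff each of its operations commutes with
    every operation of the transactions it overlapped with."""
    return all(commutes(a, b) for a in committing_ops for b in concurrent_ops)

# Two depositors never invalidate each other, even without locks, which a
# read/write classification would have rejected as a write-write conflict:
assert validate(["deposit"], ["deposit"])
# A balance reader concurrent with a deposit must not be accepted blindly:
assert not validate(["read_balance"], ["deposit"])
```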


Proceedings Article
25 Aug 1986
TL;DR: A new architecture is presented that fully integrates local and global database management in a fashion transparent to the user; it exploits the workstation's local processing and uses the global mainframe for sharing and maintenance of consistency.
Abstract: This paper presents a new architecture that fully integrates local and global database management in a fashion transparent to the user. The architecture exploits the workstation's local processing and uses the global mainframe for sharing and maintenance of consistency. Two access path distribution protocols distribute data and processing by localizing uncommon paths to their requesting workstations while avoiding repetition of globally shared paths in workstations. A new concurrency control protocol is used which is founded on the deferred update strategy, the concept of differential files, and a new lock for derived objects.

32 citations


Journal ArticleDOI
D. Z. Badal1
TL;DR: The proposed algorithm has a hierarchical design intended to detect the most frequent deadlocks with maximum efficiency and is suitable for distributed computer systems.
Abstract: We propose a distributed deadlock detection algorithm for distributed computer systems. We consider two types of resources, depending on whether the remote resource lock granularity and mode can or cannot be determined without access to the remote resource site. We present the algorithm, its performance analysis, and an informal argument about its correctness. The proposed algorithm has a hierarchical design intended to detect the most frequent deadlocks with maximum efficiency.
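
A basic building block of any such detector, hedged as an illustration rather than the paper's hierarchical algorithm: a deadlock exists iff the transaction wait-for graph contains a cycle. A minimal Python sketch follows.

```python
def find_cycle(wait_for):
    """wait_for maps a transaction to the transactions whose locks it is
    waiting on. Returns one deadlocked cycle, or None."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in wait_for}
    stack = []

    def dfs(t):
        color[t] = GRAY
        stack.append(t)
        for u in wait_for.get(t, ()):
            if color.get(u, WHITE) == GRAY:           # back edge: a cycle
                return stack[stack.index(u):] + [u]
            if color.get(u, WHITE) == WHITE:
                found = dfs(u)
                if found:
                    return found
        color[t] = BLACK
        stack.pop()
        return None

    for t in list(wait_for):
        if color[t] == WHITE:
            found = dfs(t)
            if found:
                return found
    return None

# T1 waits for T2, T2 for T3, T3 for T1: a deadlock to detect and break.
assert find_cycle({"T1": ["T2"], "T2": ["T3"], "T3": ["T1"]}) == \
    ["T1", "T2", "T3", "T1"]
assert find_cycle({"T1": ["T2"], "T2": []}) is None
```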


Book ChapterDOI
09 Jul 1986
TL;DR: This research is directed towards producing a core database system that can be easily extended to meet the demands of new applications, and the use of object-oriented programming and rule-based specification techniques as bases for this work.
Abstract: The design of database systems capable of supporting non-traditional application areas such as expert systems and related AI applications, CAD/CAM and VLSI data management, scientific and statistical applications, and image/voice applications has recently emerged as an important direction of database system research. These new applications differ from conventional applications in a number of critical aspects, including data modeling requirements, processing functionality, concurrency control and recovery mechanisms, and access methods and storage structures. The goal of this project is to simplify the development of database management systems for new applications by designing and prototyping a new generation of database systems. Our research is directed towards producing a core database system that can be easily extended to meet the demands of new applications. This system will permit extensions to the data modeling, query processing, access method, storage structure, concurrency control, and recovery components of the system. We are currently considering the use of object-oriented programming and rule-based specification techniques as bases for this work.

Proceedings ArticleDOI
05 Feb 1986
TL;DR: A validation scheme which especially supports read transactions, so that the number of backups decreases substantially, is discussed, and a very simple solution for the starvation problem is presented.
Abstract: The original approach of optimistic concurrency control [2] has some serious weaknesses with respect to validation, long transactions and starvation. This paper first discusses design alternatives which avoid these disadvantages. Essential improvements can be reached by a new validation scheme which is called snapshot validation. The paper then discusses a validation scheme which especially supports read transactions, so that the number of backups decreases substantially. Finally a very simple solution for the starvation problem is presented. The proposal is perfectly consistent with the underlying optimistic approach.
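
For orientation, here is a hedged Python sketch of the classic commit-time validation test that the paper refines, not its snapshot validation scheme itself. A transaction is backed up whenever its read set overlaps the write set of a transaction that committed during its read phase; pure readers suffer most, which is the problem the paper attacks.

```python
def backward_validate(txn_read_set, overlapping_committed_write_sets):
    """Return True iff the transaction may commit without a backup."""
    return all(not (txn_read_set & ws)
               for ws in overlapping_committed_write_sets)

# A reader that saw x while a concurrent writer committed x is backed up:
assert not backward_validate({"x", "y"}, [{"x"}])
# Disjoint footprints validate successfully:
assert backward_validate({"y"}, [{"x"}, {"z"}])
```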

Book
02 Jan 1986
TL;DR: In this paper, the authors describe the design and proposed implementation of a new application program interface to a database management system, called a portal, which allows a program to request a collection of tuples at once and supports novel concurrency control schemes.
Abstract: This paper describes the design and proposed implementation of a new application program interface to a database management system. Programs which browse through a database making ad-hoc updates are not well served by conventional embedding of DBMS commands in programming languages. A new embedding is suggested which overcomes these deficiencies. This construct, called a portal, allows a program to request a collection of tuples at once and supports novel concurrency control schemes.
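
A hedged sketch of how the portal construct might be rendered in Python; the class and method names are hypothetical, chosen only to contrast batch retrieval with the usual tuple-at-a-time cursor.

```python
class Portal:
    """Browsing interface: fetch a whole collection of tuples at once,
    then scroll and update in place."""

    def __init__(self, execute_query):
        self._execute = execute_query
        self._buffer = []

    def open(self, query, batch_size=100):
        # One round trip returns a collection, not a single tuple.
        self._buffer = self._execute(query, limit=batch_size)
        return self._buffer

    def update(self, row_index, new_tuple):
        # Ad-hoc update against the browsed collection; a real system
        # would validate against concurrent changes before applying.
        self._buffer[row_index] = new_tuple


def fake_execute(query, limit):
    return [("alice", 1), ("bob", 2)][:limit]

p = Portal(fake_execute)
rows = p.open("SELECT * FROM users")
p.update(0, ("alice", 99))        # browse-and-update workload
assert rows[0] == ("alice", 99)
```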

Journal Article
TL;DR: A formal framework is developed for proving correctness of algorithms which implement nested transactions and a simple "action tree" data structure is defined, which describes the ancestor relationships among executing transactions and also describes the views which different transactions have of the data.
Abstract: A formal framework is developed for proving correctness of algorithms which implement nested transactions. In particular, a simple "action tree" data structure is defined, which describes the ancestor relationships among executing transactions and also describes the views which different transactions have of the data. A generalization of "serializability" to the domain of nested transactions with failures is defined. A characterization is given for this generalization of serializability, in terms of absence of cycles in an appropriate dependency relation on transactions. A slightly simplified version of Moss's locking algorithm is presented in detail, and a careful correctness proof is given. The style of correctness proof appears to be quite interesting in its own right. The description of the algorithm, from its initial specification to its detailed implementation, is presented as a series of "event-state algebra" levels, each of which "simulates" the previous one in a straightforward way.
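
A hedged Python sketch of the action-tree bookkeeping described here, assuming a simple key-value view per action; the representation is illustrative, not the paper's formal structure.

```python
class Action:
    """A node in the action tree: a (sub)transaction whose effects become
    visible to its parent only when it commits."""

    def __init__(self, parent=None):
        self.parent = parent
        self.children = []
        self.view = {}
        if parent:
            parent.children.append(self)
            self.view = dict(parent.view)   # inherit the parent's view

    def write(self, key, value):
        self.view[key] = value              # visible only to this subtree

    def commit(self):
        # Merge this subtransaction's view into the parent's.
        if self.parent:
            self.parent.view.update(self.view)

    def abort(self):
        # Discard the view; the parent is unaffected.
        self.view = {}


root = Action()
child = Action(parent=root)
child.write("x", 1)
assert "x" not in root.view    # an uncommitted child is invisible
child.commit()
assert root.view["x"] == 1     # the ancestor now sees the child's writes
```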

Proceedings ArticleDOI
05 Feb 1986
TL;DR: Multidimensional timestamp protocols are proposed whose class of accepted logs differs from all previously known classes defined in the literature, such as two-phase locking (2PL), strictly serializable (SSR), and timestamp ordering (TO); the implementation of the concurrency control algorithm for this new class is briefly discussed.
Abstract: We propose multidimensional timestamp protocols where each transaction has a timestamp vector of multiple elements. The timestamp vectors need not be distinct but do define a partial order. The serializability order among the transactions is determined by any topological sort of their timestamp vectors. The timestamp in our protocols is constructed dynamically, not just based on the starting/finishing time as in conservative and optimistic timestamp methods, and thus the concurrency control can be enforced based on more precise dependency information derived from the operations of the transactions. Several classes of logs have been identified based on the degree of concurrency which represents the number of logs accepted by a concurrency controller [12]. The class for our protocols is different from any previously known classes such as two phase locking (2PL), D-serializable (DSR), strictly serializable (SSR), timestamp ordering (TO), which have been defined in [5, 9, 12, 13]. If the dimension of the timestamp vector is one, then our protocols recognize the class timestamp ordering (TO). We will briefly discuss the implementation of the concurrency control algorithm for the new class.
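
A hedged Python sketch of the central idea, assuming two-element timestamp vectors; the componentwise partial order and the use of graphlib for the topological sort are illustrative choices, not the paper's construction.

```python
from graphlib import TopologicalSorter

def precedes(u, v):
    """Partial order on timestamp vectors: u <= v componentwise, u != v."""
    return all(a <= b for a, b in zip(u, v)) and u != v

txns = {"T1": (1, 0), "T2": (0, 1), "T3": (1, 1)}   # T1, T2 incomparable

# Build the dependency relation induced by the partial order ...
deps = {a: {b for b in txns if precedes(txns[b], txns[a])} for a in txns}
# ... and take any topological sort as the serializability order; the
# incomparable T1 and T2 may be serialized in either order.
order = list(TopologicalSorter(deps).static_order())
assert order.index("T3") > order.index("T1")
assert order.index("T3") > order.index("T2")
```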

Book
01 Jun 1986
TL;DR: Algorithms are given for ensuring the consistency of a distributed relational data base subject to multiple, concurrent updates, including mechanisms to correctly update multiple copies of objects.
Abstract: This paper contains algorithms for ensuring the consistency of a distributed relational data base subject to multiple, concurrent updates. Also included are mechanisms to correctly update multiple copies of objects and to continue operation when less than all machines in the network are operational. Together with [4] and [12], this paper constitutes the significant portions of the design for a distributed data base version of INGRES.


Proceedings ArticleDOI
01 May 1986
TL;DR: A model of a distributed database system is presented which provides a framework to study the performance of different concurrency control algorithms and performance criteria to evaluate different algorithms are discussed.
Abstract: In this paper, we analyze the performance of a concurrency control algorithm for replicated database systems. We present a model of a distributed database system which provides a framework to study the performance of different concurrency control algorithms. We discuss performance criteria to evaluate different algorithms. We use the model to analyze the performance of an algorithm for concurrency control in replicated database systems. The technique used in analysis is iterative and approximate. We plot a set of performance measures for several values of the model parameters. The results of analysis are compared against a simulation study.


Proceedings Article
25 Aug 1986
TL;DR: A scheme for transaction routing is proposed that significantly reduces the lock contention probability and leads to a substantial reduction in transaction response time; the reduction in inter-system data contention has a large impact on the performance of optimistic-type concurrency control.
Abstract: Multiple systems coupling incurs performance degradation due to inter-system (global) lock contention and database buffer invalidation. At high transaction rates, the level of inter-system interference can have a severe impact on performance. In this paper, we propose a scheme for transaction routing that reduces inter-system interference while keeping load nearly balanced. The routing decision is based on affinity relations defined between transactions and databases. A methodology, employing an integer linear programming technique, is developed to classify incoming transactions into affinity groups based on their database call reference patterns. Based on traces from two of IBM's high-volume single-system customers, we find that, at high transaction rates, the proposed affinity-based routing significantly reduces the lock contention probability and leads to a substantial reduction in transaction response time. Further, the reduction in inter-system data contention produces a large impact on the performance of optimistic-type concurrency control.
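
The paper derives affinity groups with an integer linear program; the Python sketch below substitutes a simple greedy rule, with entirely illustrative names, just to show the intent: transactions referencing the same databases land on the same system, keeping inter-system lock contention low.

```python
from collections import defaultdict

def route(txn_class, touched_dbs, assignment, load, n_systems):
    """Route a transaction to the system already owning most of the
    databases it references, breaking ties toward the lightest load."""
    votes = defaultdict(int)
    for db in touched_dbs:
        if db in assignment:
            votes[assignment[db]] += 1
    best = max(range(n_systems), key=lambda s: (votes[s], -load[s]))
    for db in touched_dbs:
        assignment.setdefault(db, best)   # future transactions follow
    load[best] += 1
    return best

assignment, load = {}, [0, 0]
s1 = route("debit", {"accounts"}, assignment, load, 2)
s2 = route("credit", {"accounts"}, assignment, load, 2)
assert s1 == s2   # same affinity group lands on the same system
```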

31 Dec 1986
TL;DR: A model of multiple layers of abstraction is examined that may make it easier to provide the high data integrity of reliable database transaction processing in a broader class of information systems.
Abstract: There are many examples of actions on abstract data types which can be correctly implemented with nonserializable and nonrecoverable schedules of reads and writes. We examine a model of multiple layers of abstraction that explains this phenomenon and suggests an approach to building layered systems with transaction-oriented synchronization and rollback. Our model may make it easier to provide the high data integrity of reliable database transaction processing in a broader class of information systems.

Journal ArticleDOI
TL;DR: In this paper, the authors present a proof method for correctness of concurrency control algorithms in a hierarchically decomposed database, and an application of the proof method to show the correctness of a two-phase-locking-based algorithm, called partitioned two-phase locking, for hierarchically decomposed databases.
Abstract: In a large integrated database, there often exists an “information hierarchy,” where both raw data and derived data are stored and used together. Therefore, among update transactions, there will often be some that perform only read accesses from a certain (i.e., the “raw” data) portion of the database and write into another (i.e., the “derived” data) portion. A conventional concurrency control algorithm would have treated such transactions as regular update transactions and subjected them to the usual protocols for synchronizing update transactions. In this paper such transactions are examined more closely. The purpose is to devise concurrency control methods that allow the computation of derived information to proceed without interfering with the updating of raw data. The first part of the paper presents a proof method for correctness of concurrency control algorithms in a hierarchically decomposed database. The proof method provides a framework for understanding the intricacies in dealing with hierarchically decomposed databases. The second part of the paper is an application of the proof method to show the correctness of a two-phase-locking-based algorithm, called partitioned two-phase locking, for hierarchically decomposed databases. This algorithm is a natural extension to the Version Pool method proposed previously in the literature.
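
As a hedged sketch of the intuition (not the partitioned two-phase locking protocol itself), the Python fragment below shows how a derived-data computation can read a committed version of a raw item from a version pool while an updater holds the write lock; all names are illustrative.

```python
class VersionedItem:
    """A raw-data item kept with a pool of committed versions so that
    derived-data readers never block raw-data updaters."""

    def __init__(self, value):
        self.committed = [(0, value)]   # (commit_timestamp, value) pool
        self.lock_holder = None         # write lock used by updaters

    def write(self, txn, ts, value):
        # Sketch only: writing and committing are collapsed into one step.
        assert self.lock_holder in (None, txn), "2PL: writer blocks writer"
        self.lock_holder = txn
        self.committed.append((ts, value))   # keep older versions

    def read_version(self, as_of_ts):
        """Derived-data computation: no lock, read from the version pool."""
        return max(v for v in self.committed if v[0] <= as_of_ts)[1]


raw = VersionedItem(10)
raw.write("U1", ts=5, value=20)              # updater holds the write lock
assert raw.read_version(as_of_ts=4) == 10    # reader proceeds regardless
assert raw.read_version(as_of_ts=5) == 20
```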

31 Dec 1986
TL;DR: A gentle introduction to nested transactions is offered, including presentations of concurrency control (in terms of locking), deadlock detection and avoidance, and recovery (in terms of shadow copies).
Abstract: The concept of atomic transactions has proved to be useful in thinking about and implementing concurrency control and recovery management in reliable multi-user systems operating on shared data. Nested transactions enhance concurrency and recovery semantics by providing more composable, finer grained control. Here is offered a gentle introduction to nested transactions, including presentations of concurrency control (in terms of locking), deadlock detection and avoidance, and recovery (in terms of shadow copies). The concepts extend to other concurrency control and recovery methods, such as timestamps and logging, though details are not included. While they have uses in centralized systems, nested transactions are especially helpful in distributed systems. To illustrate this, some simple distributed applications are sketched, as well as techniques for implementing nested transactions in distributed systems.
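
A minimal Python sketch of the nested-transaction locking rule, under the usual Moss-style convention that a subtransaction may acquire a lock held only by its ancestors, and that a committing child passes its locks to its parent rather than releasing them; names are illustrative.

```python
class NestedLock:
    def __init__(self):
        self.holders = set()   # transactions currently holding the lock

    def acquire(self, txn, ancestors):
        """Grant iff every current holder is an ancestor of the requester."""
        if all(h in ancestors for h in self.holders):
            self.holders.add(txn)
            return True
        return False           # conflict with an unrelated transaction

    def commit(self, txn, parent):
        """Anti-release: the parent inherits the lock on child commit."""
        if txn in self.holders:
            self.holders.discard(txn)
            self.holders.add(parent)


lock = NestedLock()
assert lock.acquire("parent", ancestors=set())
assert lock.acquire("child", ancestors={"parent"})    # ancestor holds it: OK
assert not lock.acquire("stranger", ancestors=set())  # unrelated: blocked
lock.commit("child", parent="parent")
assert lock.holders == {"parent"}
```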


Journal ArticleDOI
TL;DR: A Two-Phase Locking (2PL) derivative, W2PL, is presented; it is shown that in a system employing W2PL no ww synchronization is needed, that no transaction will restart on a READ request, and that, under the No Blind Writes assumption, no reader will be restarted or starved.

Journal ArticleDOI
A. Hac1
TL;DR: The author presents a new approach to modeling file systems using queueing networks; the lock model, based on an analysis of transaction execution in the system, allows multiple classes of transactions and shared files to be represented.
Abstract: The author presents a new approach to modeling file systems using queueing networks. The delays due to locking the files are modeled using service centers whose service times and probabilities of access are estimated from the values of measurable quantities. The model of a lock is based on the analysis of execution of transactions in the system. The lock for every file is modeled as a sequence of service centers. The decomposition method can be used to solve the model, which allows multiple classes of transactions and shared files to be represented. An example involving measurement data collected in a small business installation is given to compare performance measures provided by the simulation and analytic models.

Book
01 Jun 1986
TL;DR: This paper describes a study of the performance of centralized concurrency control algorithms and shows that in general, locking algorithms provide the best performance.
Abstract: This paper describes a study of the performance of centralized concurrency control algorithms. An algorithm-independent simulation framework was developed in order to support comparative studies of various concurrency control algorithms. We describe this framework in detail and present performance results which were obtained for what we believe to be a representative cross-section of the many proposed algorithms. The basic algorithms studied include four locking algorithms, two timestamp algorithms, and one optimistic algorithm. Also, we briefly summarize studies of several multiple version algorithms and several hierarchical algorithms. We show that, in general, locking algorithms provide the best performance.

01 Jan 1986
TL;DR: This work describes a layered design for a distributed operating system with distributed protocols that can be modified, or even completely changed, while the system is running, which makes it suitable for diverse applications.
Abstract: There is a need to design large database systems that are not rigid in their choice of algorithms and are responsive to faults/failures and performance degradation. To attack this challenge, we formalize and experiment with design principles that allow the implementation of an adaptable distributed system. By adaptable, we imply that systems can be reconfigured at run-time based on performance and continuity-of-operations requirements and load conditions. Our research focus is on algorithms for concurrency control, resiliency to site failures, network partitioning, and failure of communication systems. The strategies for dynamic reconfiguration of the software algorithms and determining their impact are being studied both theoretically and via experiments on a prototype system called RAID being developed at Purdue. We describe a layered design for a distributed operating system with distributed protocols that can be modified, or even completely changed, while the system is running. This capability will be a help in tuning the system to improve its performance and reliability. In addition, the increased flexibility of this design makes it suitable for diverse applications, and capable of incorporating new distributed systems technology as it becomes available, unlike existing systems.