
Showing papers on "Concurrency control published in 1983"


Journal ArticleDOI
TL;DR: This paper presents a theory for analyzing the correctness of concurrency control algorithms for multiversion database systems and uses the theory to analyze some new algorithms and some previously published ones.
Abstract: Concurrency control is the activity of synchronizing operations issued by concurrently executing programs on a shared database. The goal is to produce an execution that has the same effect as a serial (noninterleaved) one. In a multiversion database system, each write on a data item produces a new copy (or version) of that data item. This paper presents a theory for analyzing the correctness of concurrency control algorithms for multiversion database systems. We use the theory to analyze some new algorithms and some previously published ones.
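
As a rough illustration of the multiversion model described in this abstract (not the paper's own algorithms), the following Python sketch keeps every write as a separate timestamped version of a data item and lets a read pick the newest version no younger than the reading transaction's timestamp; the class and method names are hypothetical.

import bisect
from collections import defaultdict

class MultiversionStore:
    """Toy multiversion storage: every write appends a new (timestamp, value) version."""

    def __init__(self):
        # item -> sorted list of (write_timestamp, value) versions
        self._versions = defaultdict(list)

    def write(self, item, value, ts):
        """Each write on a data item produces a new copy (version) of that item."""
        bisect.insort(self._versions[item], (ts, value))

    def read(self, item, ts):
        """Return the value of the newest version written at or before `ts`."""
        versions = self._versions[item]
        i = bisect.bisect_right(versions, (ts, float("inf")))
        if i == 0:
            raise KeyError(f"no version of {item!r} visible at timestamp {ts}")
        return versions[i - 1][1]

store = MultiversionStore()
store.write("x", 10, ts=1)
store.write("x", 20, ts=5)
print(store.read("x", ts=3))   # -> 10, the version written at ts=1
print(store.read("x", ts=7))   # -> 20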

483 citations


Journal ArticleDOI
TL;DR: It is shown that all true deadlocks are detected and that no false deadlocks are reported, and the algorithms can be applied in distributed database and other message communication systems.
Abstract: Distributed deadlock models are presented for resource and communication deadlocks. Simple distributed algorithms for detection of these deadlocks are given. We show that all true deadlocks are detected and that no false deadlocks are reported. In our algorithms, no process maintains global information; all messages have an identical short length. The algorithms can be applied in distributed database and other message communication systems.
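
The sketch below is a single-process simulation of the probe idea for the resource (AND) deadlock model, under the simplifying assumption that the whole wait-for relation is available in one dictionary; in the actual distributed algorithms the probes travel as messages between sites, so the names and structure here are illustrative only.

from collections import deque

def detects_deadlock(wait_for, initiator):
    """Forward probes (initiator, sender, receiver) along wait-for edges; the
    initiator declares deadlock if a probe comes back to it. `wait_for` maps
    each process to the set of processes it is blocked on (AND model)."""
    queue = deque((initiator, initiator, blocked_on)
                  for blocked_on in wait_for.get(initiator, ()))
    forwarded = set()
    while queue:
        init, sender, receiver = queue.popleft()
        if receiver == init:
            return True                      # probe returned: a cycle exists
        if receiver in forwarded:
            continue                         # each process forwards probes once
        forwarded.add(receiver)
        for nxt in wait_for.get(receiver, ()):
            queue.append((init, receiver, nxt))
    return False

# P1 waits for P2, P2 waits for P3, P3 waits for P1: a true deadlock.
edges = {"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1"}}
print(detects_deadlock(edges, "P1"))             # -> True
print(detects_deadlock({"P1": {"P2"}}, "P1"))    # -> False: P2 is not blocked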

449 citations


Journal ArticleDOI
TL;DR: Multilevel atomicity, a new correctness criterion for database concurrency control, weakens the usual notion of serializability by permitting controlled interleaving among transactions and appears to be especially suitable for applications in which the set of transactions has a natural hierarchical structure based on the hierarchical structure of an organization.
Abstract: Multilevel atomicity, a new correctness criterion for database concurrency control, is defined. It weakens the usual notion of serializability by permitting controlled interleaving among transactions. It appears to be especially suitable for applications in which the set of transactions has a natural hierarchical structure based on the hierarchical structure of an organization. A characterization for multilevel atomicity, in terms of the absence of cycles in a dependency relation among transaction steps, is given. Some remarks are made concerning implementation.

201 citations


Proceedings ArticleDOI
17 Aug 1983
TL;DR: A theory for proving the correctness of algorithms that manage replicated data is presented, an extension of serializability theory, which is applied to three replicated data algorithms: Gifford's “quorum consensus” algorithm, Eager and Sevcik's “missing writes” algorithm, and Computer Corporation of America's “available copies” algorithm.
Abstract: A replicated database is a distributed database in which some data items are stored redundantly at multiple sites. The main goal is to improve system reliability. By storing critical data at multiple sites, the system can operate even though some sites have failed. However, few distributed database systems support replicated data, because it is difficult to manage as sites fail and recover. A replicated data algorithm has two parts. One is a discipline for reading and writing data item copies. The other is a concurrency control algorithm for synchronizing those operations. The read-write discipline ensures that if one transaction writes logical data item x, and another transaction reads or writes x, there is some physical manifestation of that logical conflict. The concurrency control algorithm synchronizes physical conflicts; it knows nothing about logical conflicts. In a correct replicated data algorithm, the physical manifestation of conflicts must be strong enough so that synchronizing physical conflicts is sufficient for correctness. This paper presents a theory for proving the correctness of algorithms that manage replicated data. The theory is an extension of serializability theory. We apply it to three replicated data algorithms: Gifford's “quorum consensus” algorithm, Eager and Sevcik's “missing writes” algorithm, and Computer Corporation of America's “available copies” algorithm.
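
Of the three algorithms analyzed, quorum consensus is the easiest to sketch. The toy Python class below assumes versioned copies and read/write quorums chosen so that r + w > n; it illustrates only the quorum-overlap idea, not Gifford's actual protocol or the paper's correctness theory, and all names are made up.

import random

class ReplicatedItem:
    """Toy quorum-consensus replica set for one logical data item. Each copy
    carries a version number; read quorum r and write quorum w are chosen so
    that r + w > n, so every read quorum overlaps the latest write quorum."""

    def __init__(self, n, r, w, initial=None):
        assert r + w > n, "quorums must overlap"
        self.n, self.r, self.w = n, r, w
        self.copies = [{"version": 0, "value": initial} for _ in range(n)]

    def read(self):
        quorum = random.sample(self.copies, self.r)
        return max(quorum, key=lambda c: c["version"])["value"]

    def write(self, value):
        # Learn the current version from a read quorum, then install the new
        # value (with a higher version) at a write quorum of copies.
        current = max(random.sample(self.copies, self.r),
                      key=lambda c: c["version"])["version"]
        for copy in random.sample(self.copies, self.w):
            copy["version"] = current + 1
            copy["value"] = value

item = ReplicatedItem(n=5, r=2, w=4, initial=0)
item.write(42)
print(item.read())   # -> 42: any 2 copies overlap the 4 copies just written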

175 citations


Journal ArticleDOI
TL;DR: A level of robustness termed maximal partial operability is identified; under the authors' models of concurrency control and robustness, it is the highest level attainable without significantly degrading performance.
Abstract: The problem of concurrency control in distributed database systems in which site and communication link failures may occur is considered. The possible range of failures is not restricted; in particular, failures may induce an arbitrary network partitioning. It is desirable to attain a high “level of robustness” in such a system; that is, these failures should have only a small impact on system operation. A level of robustness termed maximal partial operability is identified. Under our models of concurrency control and robustness, this robustness level is the highest level attainable without significantly degrading performance. A basis for the implementation of maximal partial operability is presented. To illustrate its use, it is applied to a distributed locking concurrency control method and to a method that utilizes timestamps. When no failures are present, the robustness modifications for these methods induce no significant additional overhead.

143 citations


Journal ArticleDOI
TL;DR: Transactions have proven to be a useful tool for constructing reliable database systems and are likely to be useful in many types of distributed systems, but existing mechanisms for synchronization, recovery, deadlock management, and communication are often inadequate to implement the required abstract types efficiently.
Abstract: Transactions have proven to be a useful tool for constructing reliable database systems and are likely to be useful in many types of distributed systems. To exploit transactions in a general purpose distributed system, each node can execute a transaction kernel that provides services necessary to support transactions at higher system levels. The transaction model that the kernel supports must permit arbitrary operations on the wide collection of data types used by programmers. New techniques must be developed for specifying the synchronization and recovery properties of abstract types that are used in transactions. Existing mechanisms for synchronization, recovery, deadlock management and communication are often inadequate to implement these types efficiently, and they must be adapted or replaced.

72 citations


01 Jan 1983
TL;DR: Two previously proposed schemes for improving the performance of concurrency control algorithms, multiple versions and granularity hierarchies, are examined; both were found to improve performance in situations where the cost of concurrency control was high, but were of little use otherwise.
Abstract: In database management systems, transactions are provided for constructing programs which appear to execute atomically. If more than one transaction is allowed to run at once, a concurrency control algorithm must be employed to properly synchronize their execution. Many concurrency control algorithms have been proposed, and this thesis examines the costs and performance characteristics associated with a number of these algorithms. Two models of concurrency control algorithms are described. The first is an abstract model which is used to evaluate and compare the relative storage and CPU costs of concurrency control algorithms. Three algorithms, two-phase locking, basic timestamp ordering, and serial validation, are evaluated using this model. It is found that the costs associated with two-phase locking are at least as low as those for the other two algorithms. The second model is a simulation model which is used to investigate the performance characteristics of concurrency control algorithms. Results are presented for seven different algorithms, including four locking algorithms, two timestamp algorithms, and serial validation. All performed about equally well in situations where conflicts between transactions were rare. When conflicts were more frequent, the algorithms which minimized the number of transaction restarts were generally found to be superior. In situations where several algorithms each restarted the same number of transactions, those which restarted transactions which had done less work tended to perform the best. Two previously proposed schemes for improving the performance of concurrency control algorithms, multiple versions and granularity hierarchies, are also examined. A new multiple version algorithm based on serial validation is presented, and performance results are given for this algorithm, the CCA version pool algorithm, and multiversion timestamp ordering. Unlike their single version counterparts, all three algorithms performed comparably under the workloads considered. Three new hierarchical concurrency control algorithms, based on serial validation, basic timestamp ordering, and multiversion timestamp ordering, are presented. Performance results are given for these algorithms and a hierarchical locking algorithm. All were found to improve performance in situations where the cost of concurrency control was high, but were of little use otherwise.
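
Among the algorithms compared in the thesis, two-phase locking is the most familiar. The sketch below is a minimal single-process lock manager illustrating strict 2PL (acquire shared or exclusive locks while running, release everything only at commit or abort); it is a teaching sketch with invented names, not the thesis's cost or simulation model.

import threading
from collections import defaultdict

class LockManager:
    """Minimal strict two-phase locking lock manager."""

    def __init__(self):
        self._lock = threading.Lock()
        self._holders = defaultdict(dict)   # item -> {txn: "S" or "X"}

    def acquire(self, txn, item, mode):
        """Try to lock `item` in mode "S" or "X"; return False on conflict."""
        with self._lock:
            holders = self._holders[item]
            for other, held in holders.items():
                if other != txn and (mode == "X" or held == "X"):
                    return False             # conflicting lock held by another txn
            holders[txn] = "X" if mode == "X" or holders.get(txn) == "X" else mode
            return True

    def release_all(self, txn):
        """Phase two: drop every lock the transaction holds (at commit/abort)."""
        with self._lock:
            for holders in self._holders.values():
                holders.pop(txn, None)

lm = LockManager()
assert lm.acquire("T1", "x", "S")
assert lm.acquire("T2", "x", "S")        # shared locks are compatible
assert not lm.acquire("T2", "x", "X")    # upgrade blocked by T1's shared lock
lm.release_all("T1")
assert lm.acquire("T2", "x", "X")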

66 citations


Proceedings ArticleDOI
29 Aug 1983
TL;DR: The effect of concurrency control methods on the performance of computer systems is analyzed in the context of a centralized database with a static lock request policy; the analysis based on no resampling of locks is shown to be quite accurate and to outperform, in accuracy, the simplified analysis with resampling of locks.
Abstract: The effect of concurrency control methods on the performance of computer systems is analyzed in the context of a centralized database with a static lock request policy, i.e., database transactions should acquire all locks before their activation. In the lock conflict model the L locks required by each transaction are uniformly distributed over the N locks in the database. The computer system is modelled as a queueing network. Two scheduling policies for transaction activation are considered: FCFS with and without skip. In each case the scheduling overhead for scanning the blocked transactions is taken into account. The number of transactions to be scanned is limited by a window size parameter. The system is analyzed using a hierarchical decomposition method, where the highest level model yields the mean user response time. The results of the approximate solution are validated using a detailed simulation, which shows that the analysis based on no resampling of locks is quite accurate and outperforms the simplified analysis with resampling of locks in accuracy. The effect of varying the values of parameters such as transaction size, granularity of locking, scheduling discipline for transaction activation, scheduling overhead, and window size on system performance is investigated.
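
One ingredient of this lock conflict model is easy to reproduce: the probability that two transactions, each requesting L locks drawn uniformly from the N locks in the database, collide on at least one lock. The snippet below computes only that basic quantity under the uniform-access assumption stated in the abstract; it is not the paper's hierarchical queueing-network analysis.

from math import comb

def conflict_probability(n_locks, l_per_txn):
    """Probability that two transactions, each holding l_per_txn locks drawn
    uniformly without replacement from n_locks items, share at least one lock."""
    disjoint = comb(n_locks - l_per_txn, l_per_txn) / comb(n_locks, l_per_txn)
    return 1.0 - disjoint

# Small database, moderately large transactions: conflicts are frequent.
print(round(conflict_probability(n_locks=100, l_per_txn=8), 3))
# Much larger database, same transaction size: conflicts become rare.
print(round(conflict_probability(n_locks=10_000, l_per_txn=8), 4))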

56 citations


Proceedings ArticleDOI
21 Mar 1983
TL;DR: In this paper, a formal framework is developed for proving correctness of algorithms which implement nested transactions; in particular, a simple "action tree" data structure is defined, which describes the ancestor relationships among executing transactions and also describes the views which different transactions have of the data.
Abstract: A formal framework is developed for proving correctness of algorithms which implement nested transactions. In particular, a simple "action tree" data structure is defined, which describes the ancestor relationships among executing transactions and also describes the views which different transactions have of the data. A generalization of "serializability" to the domain of nested transactions with failures is defined. A characterization is given for this generalization of serializability, in terms of absence of cycles in an appropriate dependency relation on transactions. A slightly simplified version of Moss's locking algorithm is presented in detail, and a careful correctness proof is given. The style of correctness proof appears to be quite interesting in its own right. The description of the algorithm, from its initial specification to its detailed implementation, is presented as a series of "event-state algebra" levels, each of which "simulates" the previous one in a straightforward way.
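
A minimal sketch of what an "action tree" might look like is given below: nodes record parent/child relationships and a status, and a committed subtransaction's effects are taken to become available to its parent. This is a loose, simplified reading of the nested-transaction model for illustration only; the paper's formal definitions, visibility rules, and locking algorithm are considerably richer, and all names here are invented.

class ActionNode:
    """One node of a toy action tree for nested transactions."""

    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.children, self.status = [], "active"
        if parent:
            parent.children.append(self)

    def ancestors(self):
        """Chain of enclosing transactions, nearest first."""
        node, chain = self.parent, []
        while node:
            chain.append(node)
            node = node.parent
        return chain

    def commit(self):
        # Simplified rule: a committed subtransaction's effects are
        # inherited by (become visible to) its parent transaction.
        self.status = "committed"

root = ActionNode("T")
child = ActionNode("T.1", parent=root)
grandchild = ActionNode("T.1.1", parent=child)
print([a.name for a in grandchild.ancestors()])   # -> ['T.1', 'T']
grandchild.commit()
print(grandchild.status, child.status)            # -> committed active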

54 citations




01 Jan 1983
TL;DR: A distributed solution to the deadlock detection problem when requests have AND/OR form is presented; deadlock is correctly detected for general resource requests of this form.
Abstract: The authors present a procedure to detect deadlock in a distributed system. The procedure is dynamic and distributed. Deadlock will be correctly detected for general resource requests of the form: 'Lock file A and file B at NY or lock file A and file B at LA'. The contribution of this paper is that it presents a distributed solution to the deadlock detection problem when requests have AND/OR form.

Proceedings ArticleDOI
17 Aug 1983
TL;DR: This paper presents a formal treatment of atomicity; it treats serializability and recoverability together, facilitating the precise analysis of on-line implementations, and focuses on local properties of components of a system, thus supporting modular design.
Abstract: Maintaining the consistency of long-lived, on-line data is a difficult task, particularly in a distributed system. A variety of researchers have suggested atomicity as a fundamental organizational concept for such systems. In this paper we present a formal treatment of atomicity. Our treatment is novel in three respects: First, we treat serializability and recoverability together, facilitating the precise analysis of online implementations. Second, we explore how to analyze user specified semantic information to achieve greater concurrency. Third, we focus on local properties of components of a system, thus supporting modular design. We present three local properties, verify that they ensure atomicity, and show that they are optimal. Previously published protocols are suboptimal. We show that these differences are the result of fundamental limitations in the model used to analyze those protocols; these limitations are not shared by our model.

Journal ArticleDOI
TL;DR: It is formally shown that the performance of a scheduler, i.e., the amount of parallelism that it supports, depends explicitly upon the amount of information that is available to the scheduler.
Abstract: A concurrency control mechanism (or a scheduler) is the component of a database system that safeguards the consistency of the database in the presence of interleaved accesses and update requests. We formally show that the performance of a scheduler, i.e., the amount of parallelism that it supports, depends explicitly upon the amount of information that is available to the scheduler. We point out that most previous work on concurrency control is simply concerned with specific points of this basic trade-off between performance and information. In fact, several of these approaches are shown to be optimal for the amount of information that they use.

Journal ArticleDOI
TL;DR: The chunk manager provides simple transaction management and concurrency control, and allocates, administers, and retrieves apparently contiguous chunks of data of arbitrary and varying size on disc, permitting students and research workers to rapidly assemble and test their own DBMS.
Abstract: The chunk manager provides simple transaction management and concurrency control, and allocates, administers, and retrieves apparently contiguous chunks of data of arbitrary and varying size on disc. This system is designed to permit students and research workers to rapidly assemble and test their own DBMS, supporting any data model. Currently it is being used to support PS-Algol, an implementation of DAPLEX, a relational system, and student database exercises. A chunk is similar to the common meaning of ‘record’ except that there is no implication of internal structure or consistent and static size. An efficient table processing capability is also provided. Locking and access control are implemented at the bag level to prevent conflicting use of data without excessive operating cost. The problems associated with concurrent access of shared bags have been studied, and the currently implemented and future proposed solutions are discussed.

Proceedings ArticleDOI
21 Mar 1983
TL;DR: Hierarchical versions of a validation algorithm, a timestamp algorithm, and a multiversion algorithm are given, and hierarchical algorithm issues relating to request escalation and distributed databases are discussed.
Abstract: This paper shows that granularity hierarchies may be used with many types of concurrency control algorithms. Hierarchical versions of a validation algorithm, a timestamp algorithm, and a multiversion algorithm are given, and hierarchical algorithm issues relating to request escalation and distributed databases are discussed as well. It is argued that these hierarchical algorithms should offer improved performance for certain transaction mixes.
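
For readers unfamiliar with granularity hierarchies, the sketch below shows the classic multigranularity locking scheme with intention locks, which is the baseline that the hierarchical validation, timestamp, and multiversion algorithms generalize from; it is not one of the paper's new algorithms, and the node names are invented.

# Classic compatibility matrix for hierarchical (multigranularity) locking.
COMPATIBLE = {
    ("IS", "IS"): True,  ("IS", "IX"): True,  ("IS", "S"): True,  ("IS", "X"): False,
    ("IX", "IS"): True,  ("IX", "IX"): True,  ("IX", "S"): False, ("IX", "X"): False,
    ("S",  "IS"): True,  ("S",  "IX"): False, ("S",  "S"): True,  ("S",  "X"): False,
    ("X",  "IS"): False, ("X",  "IX"): False, ("X",  "S"): False, ("X",  "X"): False,
}

def locks_for(path, mode):
    """Locks needed to access the last node of `path` (e.g. database ->
    file -> record) in mode "S" or "X": intention locks on every ancestor
    granule, and the real lock on the target granule."""
    intention = "IS" if mode == "S" else "IX"
    return [(node, intention) for node in path[:-1]] + [(path[-1], mode)]

print(locks_for(["db", "file_1", "record_7"], "X"))
# -> [('db', 'IX'), ('file_1', 'IX'), ('record_7', 'X')]
print(COMPATIBLE[("IX", "IS")], COMPATIBLE[("X", "IS")])   # -> True False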

Journal ArticleDOI
TL;DR: The underlying decision problems of serializability are defined and shown to be NP-complete in a model that is typical of most modern transaction-oriented database management systems; it is therefore most probable that both types of optimistic concurrency control cannot be implemented efficiently in the general case.

Journal ArticleDOI
TL;DR: This work describes a proof schema for analyzing concurrency control correctness and illustrates the proof schema by presenting two new concurrency control algorithms for distributed database systems.
Abstract: Concurrency control algorithms for database systems are usually regarded as methods for synchronizing Read and Write operations. Such methods are judged to be correct if they only produce serializable executions. However, Reads and Writes are sometimes inaccurate models of the operations executed by a database system. In such cases, serializability does not capture all aspects of concurrency control executions. To capture these aspects, we describe a proof schema for analyzing concurrency control correctness. We illustrate the proof schema by presenting two new concurrency algorithms for distributed database systems.

Proceedings ArticleDOI
21 Mar 1983
TL;DR: A model to help us understand, compare, and control the behavior of locking and timestamping is presented; the authors hope it will eventually play such a role and believe it is simple to understand and use.
Abstract: Many different algorithms have been proposed for database concurrency control, and many more can be synthesized by combining locking and timestamping. The correctness of these algorithms is already well understood; their performance is not. We need a model to help us understand, compare, and control the behavior of locking and timestamping. We present here a model which we hope will eventually play such a role, and which we believe is simple to understand and use.

Proceedings Article
31 Oct 1983
TL;DR: It is shown that both these classes yield additional concurrency through the use of multiple versions, and this characterization is used to derive the first general multiversion protocol which does not use transaction rollback as a means for ensuring serializability.
Abstract: Most database systems ensure the consistency of the data by means of a concurrency control scheme that uses a polynomial-time on-line scheduler. Papadimitriou and Kanellakis have shown that for the most general multiversion database model no such effective scheduler exists. In this paper we focus our attention on an efficient multiversion database model and derive necessary and sufficient conditions for ensuring serializability, and serializability without the use of transaction rollback, for this model. It is shown that both these classes yield additional concurrency through the use of multiple versions. This characterization is used to derive the first general multiversion protocol which does not use transaction rollback as a means for ensuring serializability.


Proceedings ArticleDOI
TL;DR: The paper investigates the suitability of several well-known abstraction mechanisms for database programming and presents some new abstraction mechanisms particularly designed to manage typical database problems like integrity and concurrency control.
Abstract: Databases contain vast amounts of highly related data accessed by programs of considerable size and complexity. Therefore, database programming has a particular need for high level constructs that abstract from details of data access, data manipulation, and data control. The paper investigates the suitability of several well-known abstraction mechanisms for database programming (e.g., control abstraction and functional abstraction). In addition, it presents some new abstraction mechanisms (access abstraction and transactional abstraction) particularly designed to manage typical database problems like integrity and concurrency control.


Journal ArticleDOI
TL;DR: An event order based model for specifying and analyzing concurrency control algorithms for distributed database systems has been presented in this article, where an expanded notion of history that includes the database access events as well as synchronization events is used to study the correctness, degree of concurrency, and other aspects of the algorithms such as deadlocks and reliability.
Abstract: An event order based model for specifying and analyzing concurrency control algorithms for distributed database systems has been presented. An expanded notion of history that includes the database access events as well as synchronization events is used to study the correctness, degree of concurrency, and other aspects of the algorithms such as deadlocks and reliability. The algorithms are mapped into serializable classes that have been defined based on the order of synchronization events such as lock points, commit point, arrival of a transaction, etc.

Book
01 Jan 1983
TL;DR: The paper discusses the use of versions in the optimistic approach in order to free read transactions from concurrency control altogether; the proposed concept does not restrict update transactions.
Abstract: The original approach of optimistic concurrency control has some serious weaknesses concerning long transactions and starvation. This paper first discusses design alternatives which avoid these disadvantages. Essential improvements can be reached by a new validation scheme which is called snapshot validation. The paper then discusses the use of versions in the optimistic approach in order to free read transactions from concurrency control altogether; the proposed concept does not restrict update transactions.
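
As background for the improvements discussed here, the sketch below shows the standard backward-validation skeleton of optimistic concurrency control: run without locks, then validate the read set against the write sets of transactions that committed in the meantime. It is the starting point the paper criticizes, not its snapshot validation or version scheme, and all names are illustrative.

class OptimisticTxn:
    """Backward-validation skeleton for optimistic concurrency control."""

    _committed = []          # (commit_number, write_set) of finished txns
    _commit_counter = 0

    def __init__(self, db):
        self.db = db
        self.start_cn = OptimisticTxn._commit_counter
        self.read_set, self.write_buffer = set(), {}

    def read(self, item):
        self.read_set.add(item)
        return self.write_buffer.get(item, self.db.get(item))

    def write(self, item, value):
        self.write_buffer[item] = value      # deferred until commit

    def commit(self):
        # Validate: fail if any txn that committed after we started wrote
        # something we read; otherwise install our deferred writes.
        for cn, writes in OptimisticTxn._committed:
            if cn > self.start_cn and writes & self.read_set:
                return False                 # validation failed: restart needed
        self.db.update(self.write_buffer)
        OptimisticTxn._commit_counter += 1
        OptimisticTxn._committed.append(
            (OptimisticTxn._commit_counter, set(self.write_buffer)))
        return True

db = {"x": 1, "y": 2}
t1, t2 = OptimisticTxn(db), OptimisticTxn(db)
t1.write("x", t1.read("x") + 10)
t2.write("y", t2.read("x") + 100)    # t2 reads x, which t1 is about to change
print(t1.commit())                   # -> True
print(t2.commit())                   # -> False: t2's read of x was invalidated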

Journal ArticleDOI
TL;DR: In this paper, the authors present the resiliency features of the optimistic approach to concurrency control and demonstrate how it lends itself to a design of a reliable distributed database system, including concurrency, integrity, and atomicity control.
Abstract: This paper presents the resiliency features of the optimistic approach to concurrency control and demonstrates how it lends itself to a design of a reliable distributed database system. The validation of concurrency control, integrity control, and atomicity control has been integrated. This integration provides a high degree of concurrency and continuity of operations in spite of failures of transactions, processors, and communication system.

Journal ArticleDOI
TL;DR: The Micros operating system executes on a modular multimicroprocessor system that provides system-wide high-level control as well as local operating systems for individual nodes.
Abstract: The Micros operating system executes on a modular multimicroprocessor system. It provides system-wide high-level control as well as local operating systems for individual nodes.

Book ChapterDOI
01 Jan 1983
TL;DR: This paper compares the performance of two concurrency control algorithms, two-phase locking and timestamp ordering, by analytically solving a queuing network which gives response times for the SABRE database machine.
Abstract: This paper compares the performance of two concurrency control algorithms: two-phase locking and timestamp ordering. This is achieved by analytically solving a queuing network which gives response times for the SABRE database machine. It is shown that locking is better than timestamp ordering when there is a high probability of conflict between transactions. However, if the mean number of requests per transaction is high, then timestamp ordering is the better technique; its advantage grows when the frequency of small transactions increases (i.e., when the number of requests per transaction follows a geometric distribution).
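
For reference, the basic timestamp ordering rule that is being compared against two-phase locking can be stated in a few lines: an operation that arrives "too late" relative to the item's read and write timestamps forces a restart. The Python sketch below shows only this textbook rule, not the SABRE machine's implementation or the paper's queuing model; names are illustrative.

class TimestampOrdering:
    """Textbook basic timestamp-ordering scheduler for one site."""

    def __init__(self):
        self.read_ts, self.write_ts, self.values = {}, {}, {}

    def read(self, txn_ts, item):
        if txn_ts < self.write_ts.get(item, 0):
            raise RuntimeError("restart: item already overwritten by a younger txn")
        self.read_ts[item] = max(self.read_ts.get(item, 0), txn_ts)
        return self.values.get(item)

    def write(self, txn_ts, item, value):
        if txn_ts < self.read_ts.get(item, 0) or txn_ts < self.write_ts.get(item, 0):
            raise RuntimeError("restart: a younger txn already read or wrote the item")
        self.write_ts[item] = txn_ts
        self.values[item] = value

db = TimestampOrdering()
db.write(txn_ts=1, item="x", value=10)
db.read(txn_ts=3, item="x")                   # ok: reader is younger than writer
try:
    db.write(txn_ts=2, item="x", value=99)    # too late: ts 3 already read x
except RuntimeError as e:
    print(e)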

Proceedings Article
31 Oct 1983
TL;DR: Here the authors assume serializability as the criterion of correctness for the actions performed by the transactions on the data items; that is, the interleaved execution of the transactions should be equivalent to some serial execution of those transactions.
Abstract: A database is viewed as a collection of data objects which can be read or written by concurrent transactions. Interleaving of updates can leave the database in an inconsistent state. A sufficient condition to guarantee consistency of the database is serializability of the actions (reads or writes) performed by the transactions on the data items, that is, the interleaved execution of the transactions should be equivalent to some serial execution of the transactions [1,2,7]. Here we will assume serializability as the criterion of correctness.
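
The serializability criterion assumed here is commonly checked, in theory, with a serialization graph: add an edge Ti -> Tj whenever an operation of Ti conflicts with and precedes one of Tj, and require the graph to be acyclic. The sketch below implements that standard conflict-serializability test; it follows the usual textbook formulation rather than anything specific to this paper.

from itertools import combinations

def is_conflict_serializable(schedule):
    """`schedule` is a list of (txn, op, item) with op in {"R", "W"}, in
    execution order. Two operations conflict if they are from different
    transactions, touch the same item, and at least one is a write."""
    graph = {}
    for (t1, op1, x1), (t2, op2, x2) in combinations(schedule, 2):
        if t1 != t2 and x1 == x2 and "W" in (op1, op2):
            graph.setdefault(t1, set()).add(t2)

    def has_cycle(node, on_path, finished):
        on_path.add(node)
        for nxt in graph.get(node, ()):
            if nxt in on_path or (nxt not in finished and
                                  has_cycle(nxt, on_path, finished)):
                return True
        on_path.remove(node)
        finished.add(node)
        return False

    finished = set()
    return not any(has_cycle(n, set(), finished)
                   for n in list(graph) if n not in finished)

# T1 and T2 both read x before either writes it: no equivalent serial order.
bad = [("T1", "R", "x"), ("T2", "R", "x"), ("T1", "W", "x"), ("T2", "W", "x")]
good = [("T1", "R", "x"), ("T1", "W", "x"), ("T2", "R", "x"), ("T2", "W", "x")]
print(is_conflict_serializable(bad))     # -> False
print(is_conflict_serializable(good))    # -> True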

Journal ArticleDOI
TL;DR: The two-phase deadlock detection protocol in the above paper detects false deadlocks, contrary to what the authors claim.
Abstract: The two-phase deadlock detection protocol in the above paper detects false deadlocks. This is contrary to what the authors claim. The false detection of deadlocks is shown using a counterexample.