
Showing papers on "Multiversion concurrency control published in 1983"


Journal ArticleDOI
TL;DR: This paper presents a theory for analyzing the correctness of concurrency control algorithms for multiversion database systems and uses the theory to analyze some new algorithms and some previously published ones.
Abstract: Concurrency control is the activity of synchronizing operations issued by concurrently executing programs on a shared database. The goal is to produce an execution that has the same effect as a serial (noninterleaved) one. In a multiversion database system, each write on a data item produces a new copy (or version) of that data item. This paper presents a theory for analyzing the correctness of concurrency control algorithms for multiversion database systems. We use the theory to analyze some new algorithms and some previously published ones.

483 citations
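The versioned-write idea in the abstract above can be sketched minimally (illustrative code, not from the paper; the class and names are hypothetical): every write appends a new timestamped version instead of overwriting, and a read at timestamp t sees the latest version written at or before t.

```python
class VersionedItem:
    """Each write creates a new version rather than overwriting in place."""

    def __init__(self):
        self.versions = []  # (write_timestamp, value) pairs

    def write(self, ts, value):
        self.versions.append((ts, value))

    def read(self, ts):
        # Latest version whose write timestamp is <= the reader's timestamp.
        visible = [(wts, v) for wts, v in self.versions if wts <= ts]
        return max(visible)[1] if visible else None

x = VersionedItem()
x.write(1, "a")
x.write(3, "b")
print(x.read(2))  # a reader at timestamp 2 still sees the older version: "a"
print(x.read(5))  # "b"
```

Because old versions are retained, readers never block writers: each reader simply selects the version consistent with its own timestamp.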


Journal ArticleDOI
TL;DR: The notion of intention modes from System R is extended to arbitrary lock modes, and the interaction among the classes of lock modes thus created is studied and used as a basis to define generalized update modes that correspond to arbitrary lock conversions.
Abstract: Locking is a frequently used concurrency control mechanism in database systems. Most systems offer one or more lock modes, usually read and write modes. Here, one operational lock mode is assumed for each database operation, and a criterion for "good" lock compatibility functions, called maximal permissiveness, is given. Operational modes are used as a basis to define generalized update modes that correspond to arbitrary lock conversions. The notion of intention modes from System R is extended to arbitrary lock modes, and the interaction among the classes of lock modes thus created is studied.

229 citations
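For background on the intention modes the abstract extends, the standard System R style compatibility matrix (IS, IX, S, SIX, X) can be encoded as below. This is the well-known textbook matrix, not the paper's generalized modes:

```python
# Classic multigranularity lock modes: intention-shared (IS),
# intention-exclusive (IX), shared (S), shared + intention-exclusive (SIX),
# and exclusive (X).
MODES = ["IS", "IX", "S", "SIX", "X"]

# Pairs of modes that may be held on the same node simultaneously.
_COMPATIBLE = {
    ("IS", "IS"), ("IS", "IX"), ("IS", "S"), ("IS", "SIX"),
    ("IX", "IS"), ("IX", "IX"),
    ("S",  "IS"), ("S",  "S"),
    ("SIX", "IS"),
}

def compatible(held, requested):
    """True if `requested` can be granted while `held` is in place."""
    return (held, requested) in _COMPATIBLE

print(compatible("IX", "IX"))  # True: two intention-exclusive locks coexist
print(compatible("IS", "X"))   # False: X conflicts with every other mode
```

Intention modes let a transaction lock a whole table in IS or IX while locking individual tuples in S or X underneath, which is what makes mixed-granularity locking cheap to check.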


01 Jan 1983
TL;DR: Two previously proposed schemes for improving the performance of concurrency control algorithms, multiple versions and granularity hierarchies, are examined; all were found to improve performance in situations where the cost of concurrency control was high, but were of little use otherwise.
Abstract: In database management systems, transactions are provided for constructing programs which appear to execute atomically. If more than one transaction is allowed to run at once, a concurrency control algorithm must be employed to properly synchronize their execution. Many concurrency control algorithms have been proposed, and this thesis examines the costs and performance characteristics associated with a number of these algorithms. Two models of concurrency control algorithms are described. The first is an abstract model which is used to evaluate and compare the relative storage and CPU costs of concurrency control algorithms. Three algorithms, two-phase locking, basic timestamp ordering, and serial validation, are evaluated using this model. It is found that the costs associated with two-phase locking are at least as low as those for the other two algorithms. The second model is a simulation model which is used to investigate the performance characteristics of concurrency control algorithms. Results are presented for seven different algorithms, including four locking algorithms, two timestamp algorithms, and serial validation. All performed about equally well in situations where conflicts between transactions were rare. When conflicts were more frequent, the algorithms which minimized the number of transaction restarts were generally found to be superior. In situations where several algorithms each restarted the same number of transactions, those which restarted transactions which had done less work tended to perform the best. Two previously proposed schemes for improving the performance of concurrency control algorithms, multiple versions and granularity hierarchies, are also examined. A new multiple version algorithm based on serial validation is presented, and performance results are given for this algorithm, the CCA version pool algorithm, and multiversion timestamp ordering. 
Unlike their single version counterparts, all three algorithms performed comparably under the workloads considered. Three new hierarchical concurrency control algorithms, based on serial validation, basic timestamp ordering, and multiversion timestamp ordering, are presented. Performance results are given for these algorithms and a hierarchical locking algorithm. All were found to improve performance in situations where the cost of concurrency control was high, but were of little use otherwise.

66 citations
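The two-phase locking discipline that the thesis finds cheapest can be illustrated with a minimal guard (hypothetical class, a sketch rather than the thesis's implementation): a transaction may acquire locks only while growing, and once it releases any lock it may never acquire another.

```python
class TwoPhaseTransaction:
    """Enforces the two-phase rule: all acquires precede all releases."""

    def __init__(self):
        self.held = set()
        self.shrinking = False  # becomes True at the first unlock

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violation: lock acquired after an unlock")
        self.held.add(item)

    def unlock(self, item):
        self.shrinking = True
        self.held.discard(item)

t = TwoPhaseTransaction()
t.lock("x")
t.lock("y")
t.unlock("x")       # the transaction enters its shrinking phase
try:
    t.lock("z")     # illegal: growing again after shrinking
except RuntimeError as e:
    print(e)
```

The two phases are what make the protocol serializable: every transaction's lock point (the instant it holds all its locks) gives a serial order.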



Journal ArticleDOI
TL;DR: This work presents a geometric method for studying concurrency control by locking that yields an exact characterization of safe locking policies and also of deadlock-free locking policies when there are only two transactions.
Abstract: We present a geometric method for studying concurrency control by locking. When there are only two transactions, our method yields an exact characterization of safe locking policies and also of deadlock-free locking policies. Our results can be extended to more than two transactions, but in that case the problem becomes NP-complete.

31 citations


Journal ArticleDOI
TL;DR: It is formally shown that the performance of a scheduler, i.e., the amount of parallelism that it supports, depends explicitly upon the amount of information that is available to the scheduler.
Abstract: A concurrency control mechanism (or a scheduler) is the component of a database system that safeguards the consistency of the database in the presence of interleaved accesses and update requests. We formally show that the performance of a scheduler, i.e., the amount of parallelism that it supports, depends explicitly upon the amount of information that is available to the scheduler. We point out that most previous work on concurrency control is simply concerned with specific points of this basic trade-off between performance and information. In fact, several of these approaches are shown to be optimal for the amount of information that they use.

30 citations


Proceedings ArticleDOI
01 May 1983
TL;DR: An abstract model of concurrency control algorithms is presented, allowing them to be specified in terms of the information that they require, the conditions under which blocking or restarts are called for, and the manner in which requests are processed.
Abstract: An abstract model of concurrency control algorithms is presented. The model facilitates implementation-independent descriptions of various algorithms, allowing them to be specified in terms of the information that they require, the conditions under which blocking or restarts are called for, and the manner in which requests are processed. The model also facilitates comparisons of the relative storage and CPU overheads of various algorithms based on their descriptions. Results are given for single-site versions of two-phase locking, basic timestamp ordering, and serial validation. Extensions which will allow comparisons of multiple version and distributed algorithms are discussed as well.

29 citations


Proceedings Article
31 Oct 1983

27 citations


Proceedings ArticleDOI
21 Mar 1983
TL;DR: Hierarchical versions of a validation algorithm, a timestamp algorithm, and a multiversion algorithm are given, and hierarchical algorithm issues relating to request escalation and distributed databases are discussed.
Abstract: This paper shows that granularity hierarchies may be used with many types of concurrency control algorithms. Hierarchical versions of a validation algorithm, a timestamp algorithm, and a multiversion algorithm are given, and hierarchical algorithm issues relating to request escalation and distributed databases are discussed as well. It is argued that these hierarchical algorithms should offer improved performance for certain transaction mixes.

24 citations


01 Mar 1983
TL;DR: A concurrency control scheme using multiple versions of data objects is presented which allows increased concurrency; it grants an appropriate version to each read request, states precisely when old versions can be discarded, and describes in detail how to eliminate the effects of aborted transactions.
Abstract: A concurrency control scheme using multiple versions of data objects is presented which allows increased concurrency. The scheme grants an appropriate version to each read request. Transactions issuing write requests which might destroy database integrity are aborted. It is precisely stated when old versions can be discarded, and how to eliminate the effects of aborted transactions is described in detail. The scheduler outputs only (multi-version) ww-serializable histories, which preserve database consistency. It is shown that any “D-serializable” history of Papadimitriou (J. Assoc. Comput. Mach. 26 (4) (1979), 631–653) (or “conflict-preserving serializable log” of Bernstein et al., IEEE Trans. Software Engrg. SE-5 (3) (1979), 203–216) is ww-serializable.

21 citations
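One concrete form of the abstract's rule, "grant an appropriate version to each read, abort writes that would destroy integrity", is the multiversion timestamp-ordering check sketched below (an illustration under assumed semantics, not necessarily the paper's exact scheme): a write is rejected if a transaction with a later timestamp has already read the version the write would supersede.

```python
class MVTOItem:
    def __init__(self, initial):
        # Each version: [write_ts, max_read_ts, value]
        self.versions = [[0, 0, initial]]

    def _visible(self, ts):
        # The version a transaction with timestamp ts is allowed to see.
        return max((v for v in self.versions if v[0] <= ts),
                   key=lambda v: v[0])

    def read(self, ts):
        v = self._visible(ts)
        v[1] = max(v[1], ts)  # remember the latest reader of this version
        return v[2]

    def write(self, ts, value):
        v = self._visible(ts)
        if v[1] > ts:
            # A later reader already saw v; inserting a version between
            # them would invalidate that read, so the writer must abort.
            return False
        self.versions.append([ts, ts, value])
        return True

x = MVTOItem("a")
x.read(5)               # a transaction with timestamp 5 reads "a"
print(x.write(3, "b"))  # False: would change what the ts-5 reader saw
print(x.write(6, "c"))  # True: no later reader is affected
```

Old versions become discardable once no active transaction has a timestamp small enough to read them, which matches the abstract's garbage-collection question.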


Journal ArticleDOI
TL;DR: The underlying decision problems of serializability are defined and shown to be NP-complete in a model typical of most modern transaction-oriented database management systems; it is therefore most probable that both optimistic concurrency control types cannot be implemented efficiently in the general case.

Journal ArticleDOI
TL;DR: This work describes a proof schema for analyzing concurrency control correctness and illustrates the proof schema by presenting two new concurrency algorithms for distributed database systems.
Abstract: Concurrency control algorithms for database systems are usually regarded as methods for synchronizing Read and Write operations. Such methods are judged to be correct if they only produce serializable executions. However, Reads and Writes are sometimes inaccurate models of the operations executed by a database system. In such cases, serializability does not capture all aspects of concurrency control executions. To capture these aspects, we describe a proof schema for analyzing concurrency control correctness. We illustrate the proof schema by presenting two new concurrency algorithms for distributed database systems.

Proceedings Article
31 Oct 1983
TL;DR: It is shown that both these classes yield additional concurrency through the use of multiple versions, and this characterization is used to derive the first general multiversion protocol which does not use transaction rollback as a means for ensuring serializability.
Abstract: Most database systems ensure the consistency of the data by means of a concurrency control scheme that uses a polynomial-time on-line scheduler. Papadimitriou and Kanellakis have shown that for the most general multiversion database model no such effective scheduler exists. In this paper we focus our attention on an efficient multiversion database model and derive necessary and sufficient conditions for ensuring serializability, and serializability without the use of transaction rollback, for this model. It is shown that both these classes yield additional concurrency through the use of multiple versions. This characterization is used to derive the first general multiversion protocol which does not use transaction rollback as a means for ensuring serializability.

Journal ArticleDOI
TL;DR: A technique is presented which allows locking of the smallest possible set of tuples even when the selection is based on joins with other relations.
Abstract: Access to a relation R in a relational database is sometimes based on how R joins with other relations rather than on what values appear in the attributes of R-tuples. Using simple predicate locks forces the entire relation to be locked in these cases. A technique is presented which allows locking of the smallest possible set of tuples even when the selection is based on joins with other relations. The algorithms are based on a generalization of tableaux. The tableaux used here can represent relational algebra queries with any of the domain comparison operators =, ≠, <, ≤, >, and ≥.

Book ChapterDOI
TL;DR: This paper is a review of recent theoretical work on the problems which arise when many users access the same database.


Proceedings ArticleDOI
TL;DR: The paper investigates the suitability of several well-known abstraction mechanisms for database programming and presents some new abstraction mechanisms particularly designed to manage typical database problems like integrity and concurrency control.
Abstract: Databases contain vast amounts of highly related data accessed by programs of considerable size and complexity. Therefore, database programming has a particular need for high level constructs that abstract from details of data access, data manipulation, and data control. The paper investigates the suitability of several well-known abstraction mechanisms for database programming (e.g., control abstraction and functional abstraction). In addition, it presents some new abstraction mechanisms (access abstraction and transactional abstraction) particularly designed to manage typical database problems like integrity and concurrency control.

Journal ArticleDOI
TL;DR: An event order based model for specifying and analyzing concurrency control algorithms for distributed database systems has been presented in this article, where an expanded notion of history that includes the database access events as well as synchronization events is used to study the correctness, degree of concurrency, and other aspects of the algorithms such as deadlocks and reliability.
Abstract: An event order based model for specifying and analyzing concurrency control algorithms for distributed database systems has been presented. An expanded notion of history that includes the database access events as well as synchronization events is used to study the correctness, degree of concurrency, and other aspects of the algorithms such as deadlocks and reliability. The algorithms are mapped into serializable classes that have been defined based on the order of synchronization events such as lock points, commit points, arrival of a transaction, etc.

Book
01 Jan 1983
TL;DR: The paper discusses the use of versions in the optimistic approach in order to free read transactions from concurrency control entirely; the proposed concept does not restrict update transactions.
Abstract: The original approach of optimistic concurrency control has some serious weaknesses concerning long transactions and starvation. This paper first discusses design alternatives which avoid these disadvantages. Essential improvements can be achieved by a new validation scheme which is called snapshot validation. The paper then discusses the use of versions in the optimistic approach in order to free read transactions from concurrency control entirely; the proposed concept does not restrict update transactions.

Journal ArticleDOI
TL;DR: In this paper, the authors present the resiliency features of optimistic approach to concurrency control and demonstrate how it lends itself to a design of a reliable distributed database system, including concurrency, integrity, and atomicity control.
Abstract: This paper presents the resiliency features of the optimistic approach to concurrency control and demonstrates how it lends itself to a design of a reliable distributed database system. The validation of concurrency control, integrity control, and atomicity control has been integrated. This integration provides a high degree of concurrency and continuity of operations in spite of failures of transactions, processors, and communication system.

Proceedings Article
31 Oct 1983
TL;DR: Here the authors assume serializability as the criterion of correctness of the actions performed by the transactions on the data items, that is, the interleaved execution of the transactions should be equivalent to some serial execution of the transactions.
Abstract: A database is viewed as a collection of data objects which can be read or written by concurrent transactions. Interleaving of updates can leave the database in an inconsistent state. A sufficient condition to guarantee consistency of the database is serializability of the actions (reads or writes) performed by the transactions on the data items, that is, the interleaved execution of the transactions should be equivalent to some serial execution of the transactions [1,2,7]. Here we will assume serializability as the criterion of correctness.
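The serializability criterion in the abstract is commonly tested via the precedence (conflict) graph: two operations conflict when they touch the same item from different transactions and at least one is a write, and a schedule is conflict-serializable exactly when the graph is acyclic. A minimal sketch (illustrative names, not from the paper):

```python
def is_conflict_serializable(schedule):
    """schedule: list of (txn, op, item) tuples with op in {"R", "W"}."""
    # An edge t1 -> t2 means some operation of t1 precedes and conflicts
    # with a later operation of t2.
    edges = set()
    for i, (t1, op1, x1) in enumerate(schedule):
        for t2, op2, x2 in schedule[i + 1:]:
            if t1 != t2 and x1 == x2 and "W" in (op1, op2):
                edges.add((t1, t2))

    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)

    # Serializable iff the precedence graph has no cycle (DFS coloring).
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def has_cycle(node):
        color[node] = GRAY
        for nxt in graph.get(node, ()):
            c = color.get(nxt, WHITE)
            if c == GRAY or (c == WHITE and has_cycle(nxt)):
                return True
        color[node] = BLACK
        return False

    nodes = {t for t, _, _ in schedule}
    return not any(color.get(n, WHITE) == WHITE and has_cycle(n)
                   for n in nodes)

# r1(x) w2(x) w1(x): edges T1 -> T2 and T2 -> T1 form a cycle,
# so this interleaving is equivalent to no serial execution.
print(is_conflict_serializable(
    [("T1", "R", "x"), ("T2", "W", "x"), ("T1", "W", "x")]))
```

An acyclic graph also yields the equivalent serial order directly: any topological sort of the transactions.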


01 Jan 1983
TL;DR: This thesis considers the problem of concurrency control in multiple-copy databases; several synchronization techniques are mentioned and a few algorithms for concurrency control are evaluated and compared.
Abstract: The declining cost of computer hardware and the increasing data processing needs of geographically dispersed organizations have led to substantial interest in distributed data management. These characteristics have led to reconsider the design of centralized data bases. Distributed databases have appeared as a result of those considerations. A number of advantages result from having duplicate copies of data in a distributed databases. Some of the se advantages are: increased data accesibility, more responsive data access, higher reliability, and load sharing. These and other benefits must be balanced against the additional cost and complexity introduced in doing so. This thesis considers the problem of concurrency control of multiple copy databases. Several synchroni zation techniques are mentioned and a few algorithms for concurrency control are evaluated and compared.