
Showing papers on "Multiversion concurrency control published in 1990"


Proceedings Article
Chandrasekaran Mohan
13 Aug 1990
TL;DR: ARIES/KVL, by also using for key-value locking the IX and SIX lock modes originally intended for table-level locking, is able to better exploit the semantics of operations to improve concurrency, compared to the System R index protocols.
Abstract: This paper presents a method, called ARIES/KVL (Algorithm for Recovery and Isolation Exploiting Semantics using Key-Value Locking), for concurrency control in B-tree indexes. A transaction may perform any number of nonindex and index operations, including range scans. ARIES/KVL guarantees serializability and it supports very high concurrency during tree traversals, structure modifications, and other operations. Unlike in System R, when one transaction is waiting for a lock on a key value in a page, reads and modifications of that page by other transactions are allowed. Further, transactions that are rolling back will never get into deadlocks. ARIES/KVL, by also using for key-value locking the IX and SIX lock modes that were originally intended for table-level locking, is able to better exploit the semantics of the operations to improve concurrency, compared to the System R index protocols. These techniques are also applicable to concurrency control for the classical links-based storage and access structures that are beginning to appear in modern systems.

245 citations
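The IX and SIX modes referred to above come from classical multigranularity locking; ARIES/KVL's twist is to apply them at the key-value level. As background, a minimal sketch of the standard compatibility matrix for the S, X, IS, IX, and SIX modes (textbook material, not code from the paper):

```python
# Standard multigranularity lock compatibility (Gray-style modes).
# ARIES/KVL reuses the intention modes IX and SIX for key values.
COMPATIBLE = {
    ("IS", "IS"): True,  ("IS", "IX"): True,   ("IS", "S"): True,
    ("IS", "SIX"): True, ("IS", "X"): False,
    ("IX", "IX"): True,  ("IX", "S"): False,   ("IX", "SIX"): False,
    ("IX", "X"): False,
    ("S", "S"): True,    ("S", "SIX"): False,  ("S", "X"): False,
    ("SIX", "SIX"): False, ("SIX", "X"): False,
    ("X", "X"): False,
}

def compatible(held: str, requested: str) -> bool:
    """True if a lock in `requested` mode can be granted while a lock
    in `held` mode is in force on the same key value."""
    return COMPATIBLE.get((held, requested),
                          COMPATIBLE.get((requested, held), False))
```

For example, compatible("IX", "IX") is True while compatible("S", "IX") is False, which is one source of the extra concurrency over using only S and X modes.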


Proceedings ArticleDOI
02 Apr 1990
TL;DR: It is demonstrated that under a policy that discards transactions whose constraints are not met, optimistic concurrency control outperforms locking over a wide range of system utilization.
Abstract: Performance studies of concurrency control algorithms for conventional database systems have shown that, under most operating circumstances, locking protocols outperform optimistic techniques. Real-time database systems have special characteristics - timing constraints are associated with transactions, performance criteria are based on satisfaction of these timing constraints, and scheduling algorithms are priority driven. In light of these special characteristics, results regarding the performance of concurrency control algorithms need to be re-evaluated. We show in this paper that the following parameters of the real-time database system - its policy for dealing with transactions whose constraints are not met, its knowledge of transaction resource requirements, and the availability of resources - have a significant impact on the relative performance of the concurrency control algorithms. In particular, we demonstrate that under a policy that discards transactions whose constraints are not met, optimistic concurrency control outperforms locking over a wide range of system utilization. We also outline why, for a variety of reasons, optimistic algorithms appear well-suited to real-time database systems.

222 citations


Proceedings ArticleDOI
05 Dec 1990
TL;DR: A new real-time optimistic concurrency control algorithm, WAIT-50, is presented that monitors transaction conflict states and gives precedence to urgent transactions in a controlled manner and is shown to provide significant performance gains over OPT-BC under a variety of operating conditions and workloads.
Abstract: The authors (1990) have shown that in real-time database systems that discard late transactions, optimistic concurrency control outperforms locking. Although the optimistic algorithm used in that study, OPT-BC, did not factor in transaction deadlines in making data conflict resolution decisions, it still outperformed a deadline-cognizant locking algorithm. A discussion is presented of why adding deadline information to optimistic algorithms is a nontrivial problem, and some alternative methods of doing so are described. A new real-time optimistic concurrency control algorithm, WAIT-50, is presented that monitors transaction conflict states and gives precedence to urgent transactions in a controlled manner. WAIT-50 is shown to provide significant performance gains over OPT-BC under a variety of operating conditions and workloads.

194 citations
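The 50 percent rule that gives WAIT-50 its name is easy to state: a transaction reaching its validation point is made to wait while half or more of the transactions it conflicts with have higher priority. A hedged sketch under that reading (the structures and field names are assumptions, not the paper's code):

```python
def wait_50(validating_txn, conflict_set):
    """Sketch of WAIT-50's wait control. `conflict_set` holds the
    transactions in conflict with the validator; `priority` would
    typically encode an earliest-deadline-first ordering. The exact
    bookkeeping in the paper may differ from this sketch."""
    if not conflict_set:
        return "commit"
    higher = sum(1 for t in conflict_set
                 if t.priority > validating_txn.priority)
    if higher / len(conflict_set) >= 0.5:
        return "wait"                        # give urgent transactions a chance
    return "commit_and_restart_conflicts"    # OPT-BC style broadcast commit
```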


Journal ArticleDOI
TL;DR: Several new optimistic concurrency control techniques for objects in decentralized distributed systems are described here, their correctness and optimality properties are proved, and the circumstances under which each is likely to be useful are characterized.
Abstract: An optimistic concurrency control technique is one that allows transactions to execute without synchronization, relying on commit-time validation to ensure serializability. Several new optimistic concurrency control techniques for objects in decentralized distributed systems are described here, their correctness and optimality properties are proved, and the circumstances under which each is likely to be useful are characterized. Unlike many methods that classify operations only as Reads or Writes, these techniques systematically exploit type-specific properties of objects to validate more interleavings. Necessary and sufficient validation conditions can be derived directly from an object's data type specification. These techniques are also modular: they can be applied selectively on a per-object (or even per-operation) basis in conjunction with standard pessimistic techniques such as two-phase locking, permitting optimistic methods to be introduced exactly where they will be most effective. These techniques can be used to reduce the algorithmic complexity of achieving high levels of concurrency, since certain scheduling decisions that are NP-complete for pessimistic schedulers can be validated after the fact in time independent of the level of concurrency. These techniques can also enhance the availability of replicated data, circumventing certain tradeoffs between concurrency and availability imposed by comparable pessimistic techniques.

154 citations
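As a baseline for the type-specific techniques above, commit-time validation in its plain read/write form can be sketched as backward validation: the committing transaction's read set is checked against the write sets of transactions that committed while it ran. The structures below are illustrative:

```python
def backward_validate(txn, committed_during_txn):
    """Classical read/write OCC validation. The techniques in the
    paper would instead derive validation conditions from each
    object's data type specification, admitting interleavings that
    this coarse read/write test rejects."""
    for other in committed_during_txn:
        if txn.read_set & other.write_set:   # both are sets of object ids
            return False                     # conflict: abort and restart
    return True                              # safe to commit and install writes
```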


Proceedings ArticleDOI
05 Dec 1990
TL;DR: A new concurrency control algorithm for real-time database systems is proposed, by which real-time scheduling and concurrency control can be integrated.
Abstract: A new concurrency control algorithm for real-time database systems is proposed, by which real-time scheduling and concurrency control can be integrated. The algorithm is founded on a priority-based locking mechanism to support time-critical scheduling by adjusting the serialization order dynamically in favor of high-priority transactions. Furthermore, it does not assume any knowledge about the data requirements or execution time of each transaction, making the algorithm very practical.

144 citations
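For orientation, the simplest priority-cognizant conflict rule looks like the sketch below; note that this plain priority-abort rule is only a stand-in, since the paper's algorithm avoids many such aborts by dynamically adjusting the serialization order instead:

```python
def on_lock_conflict(holder, requester):
    """Illustrative priority-abort resolution, NOT the paper's exact
    rule. The proposed algorithm re-orders serialization in favor of
    the high-priority requester where it safely can, rather than
    always restarting the low-priority holder."""
    if requester.priority > holder.priority:
        return "preempt_holder"   # high-priority transaction proceeds
    return "block_requester"      # conventional locking behavior
```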


Journal ArticleDOI
TL;DR: The problem of implementing a given logical concurrency model in such a multiprocessor is addressed, and simple rules are introduced to verify that a multiprocessor architecture adheres to the models.
Abstract: The presence of high-performance mechanisms in shared-memory multiprocessors such as private caches, the extensive pipelining of memory access, and combining networks may render a logical concurrency model complex to implement or inefficient. The problem of implementing a given logical concurrency model in such a multiprocessor is addressed. Two concurrency models are considered, and simple rules are introduced to verify that a multiprocessor architecture adheres to the models. The rules are applied to several examples of multiprocessor architectures.

122 citations


Book ChapterDOI
01 Mar 1990
TL;DR: Techniques are presented for the efficient handling of record ID lists, elimination of some locking, and determination of how many and which indexes to use, and opportunities for exploiting parallelism are identified.
Abstract: Many data base management systems' query optimizers choose at most one index for accessing the records of a table in a given query, even though many indexes may exist on the table. In spite of the fact that there are some systems which use multiple indexes, very little has been published about the concurrency control or query optimization implications (e.g., deciding how many indexes to use) of using multiple indexes. This paper addresses these issues and presents solutions to the associated problems. Techniques are presented for the efficient handling of record ID lists, elimination of some locking, and determination of how many and which indexes to use. The techniques are adaptive in the sense that the execution strategies may be modified at run-time (e.g., not use some indexes which were to have been used), if the assumptions made at optimization-time (e.g., about selectivities) turn out to be wrong. Opportunities for exploiting parallelism are also identified. A subset of our ideas has been implemented in IBM's DB2 V2R2 relational data base management system.

99 citations
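Multiple-index access of the kind described above typically intersects sorted record-ID (RID) lists, one list per qualifying index, before any data page is fetched. A minimal merge-style sketch (illustrative, not DB2's implementation):

```python
def intersect_rids(a, b):
    """Intersect two ascending RID lists; only records satisfying
    both index predicates survive and need to be fetched."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i]); i += 1; j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return out

# e.g. intersect_rids([3, 8, 12, 40], [8, 9, 40, 77]) -> [8, 40]
```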


Proceedings ArticleDOI
05 Feb 1990
TL;DR: The top-down approach emerges as a viable paradigm for ensuring the proper concurrent execution of global transactions in an HDDBS, and general schemes for local concurrency control with prespecified global serialization orders are presented.
Abstract: A heterogeneous distributed database system (HDDBS) is a system which integrates preexisting databases to support global applications accessing more than one database. An outline of approaches to concurrency control in HDDBSs is presented. The top-down approach emerges as a viable paradigm for ensuring the proper concurrent execution of global transactions in an HDDBS. The primary contributions of this work are the general schemes for local concurrency control with prespecified global serialization orders. Two approaches are outlined. One is intended for performance enhancement but violates design autonomy, while the other preserves local autonomy at the cost of generality (it does not apply to all local concurrency control protocols). This study is intended as a guide to concurrency control in this new environment.

93 citations


Proceedings ArticleDOI
07 May 1990
TL;DR: The authors show that the scheduling protocol gives correct schedules and is free of covert channels due to contention for access to data, i.e. the scheduler is data-conflict-secure.
Abstract: Consideration is given to the application of multiversion schedulers in multilevel secure database management systems (MLS/DBMSs). Transactions are vital for MLS/DBMSs because they provide transparency to concurrency and failure. Concurrent execution of transactions may lead to contention among subjects for access to data, which in MLS/DBMSs may lead to security problems. Multiversion schedulers reduce the contention for access to data by maintaining multiple versions. A description is given of the relation between schedules produced in MLS/DBMSs and those which are multiversion serializable. The authors also propose a secure multiversion scheduler. They show that the scheduling protocol gives correct schedules and is free of covert channels due to contention for access to data, i.e. the scheduler is data-conflict-secure.

85 citations
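Multiversion schedulers like the one above are commonly built on multiversion timestamp ordering (MVTO), whose read rule never blocks a reader: it returns the version with the largest write timestamp not exceeding the reader's timestamp. A minimal sketch (the data layout is assumed):

```python
def mvto_read(versions, reader_ts):
    """MVTO read rule. `versions` is a list of (write_ts, value)
    pairs for one data item. A high-level reader can be served an
    older committed version without delaying lower-level writers,
    which is the contention channel such schedulers aim to close."""
    visible = [(ts, val) for ts, val in versions if ts <= reader_ts]
    if not visible:
        raise LookupError("no version visible at this timestamp")
    return max(visible, key=lambda v: v[0])[1]
```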


Proceedings ArticleDOI
01 Jan 1990
TL;DR: It was shown that it is easier for a class of multiversion lock-based concurrency control algorithms to maintain temporal consistency of shared data when the conflicting transactions are close in the lengths of their periods.
Abstract: The authors present a model of typical hard real-time applications and the concepts of age and dispersion of data accessed by the real-time transactions. These are used to evaluate the performance of a class of multiversion lock-based concurrency control algorithms in maintaining temporal consistency of data in a real-time shared-data environment. It is shown that it is easier for such a concurrency control algorithm to maintain temporal consistency of shared data when the conflicting transactions are close in the lengths of their periods. The conflict pattern of the transactions has a more significant effect on the temporal inconsistency of data than the load level of the system. It is also desirable to have the transactions' periods within a small range. The best case is obtained when the faster transactions have higher utilizations. It is also shown that the use of the priority inheritance principle with the lock-based concurrency control algorithms can reduce transactions' blocking times and the number of transactions that access temporally inconsistent data, as well as the worst-case age and dispersion of data.

76 citations


Proceedings ArticleDOI
03 Dec 1990
TL;DR: Two new concurrency control algorithms that are compatible with common security policies are described, based on the multiversion timestamp ordering technique, and implemented with single-level subjects.
Abstract: The concurrency control algorithms used for standard database systems can conflict with the security policies of multilevel secure database systems. The authors describe two new concurrency control algorithms that are compatible with common security policies. They are based on the multiversion timestamp ordering technique, and are implemented with single-level subjects. The use of only single-level subjects cannot introduce any additional threat of compromise of mandatory security; the analysis focuses on correctness. One of these algorithms has been implemented for Trusted ORACLE.

Journal ArticleDOI
TL;DR: A new model for describing and reasoning about transaction-processing algorithms is presented, which provides a comprehensive, uniform framework for rigorous correctness proofs and general conditions for a concurrency control algorithm to be correct, i.e., to ensure that transactions appear to be atomic.

Patent
Mitsuru Kakimoto
10 Oct 1990
TL;DR: A method and an apparatus for concurrency control in a database system are presented, in which each transaction can be executed properly according to its scale; a timestamp appropriate for each transaction is determined from the estimated processing time and the current time.
Abstract: A method and an apparatus for concurrency control in a database system, in which each transaction can be executed properly according to its scale. In the apparatus, a processing time for each transaction is estimated, a timestamp appropriate for each transaction is determined from the estimated processing time and a current time, and concurrency control is carried out according to the determined timestamps for the transactions to be executed concurrently.
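One plausible reading of the patent's timestamp rule is sketched below; the exact way the estimated processing time and the current time are combined is an assumption here, not quoted from the patent:

```python
import time

def assign_timestamp(estimated_processing_time_s: float) -> float:
    """Illustrative: bias the timestamp by the transaction's scale,
    so that a large transaction is ordered as if it completed later
    and small transactions need not be serialized behind it."""
    return time.time() + estimated_processing_time_s
```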

Proceedings ArticleDOI
01 May 1990
TL;DR: A new multilevel concurrency protocol is presented that uses a semantics-based notion of conflict, which is weaker than commutativity, called recoverability, and operates according to relative conflict, a conflict notion based on the structure of operations.
Abstract: For next generation information systems, concurrency control mechanisms are required to handle high-level abstract operations and to meet high throughput demands. The currently available single-level concurrency control mechanisms for reads and writes are inadequate for future complex information systems. In this paper, we present a new multilevel concurrency protocol that uses a semantics-based notion of conflict, called recoverability, which is weaker than commutativity. Further, operations are scheduled according to relative conflict, a conflict notion based on the structure of operations. Performance evaluation via extensive simulation studies shows that with our multilevel concurrency control protocol, the performance improvement is significant when compared to that of a single-level two-phase locking based concurrency control scheme or to that of a multilevel concurrency control scheme based on commutativity alone. Further, simulation studies show that our new multilevel concurrency control protocol performs better even with resource contention.

Book ChapterDOI
01 Mar 1990
TL;DR: The problems involved in integrating concurrency control into object-oriented database systems are described and an adaptation of the locking technique is proposed to satisfy them, without unnecessarily restricting concurrency.
Abstract: In this paper, we describe the problems involved in integrating concurrency control into object-oriented database systems. The object-oriented approach places specific constraints on concurrency between transactions, which we discuss. We then propose an adaptation of the locking technique to satisfy them, without unnecessarily restricting concurrency. Finally, we analyse in detail the impacts of both the transaction model and the locking granularity imposed by the underlying system. The solutions proposed are illustrated with the O2 system developed by the Altair GIP.

Proceedings ArticleDOI
05 Feb 1990
TL;DR: An example of a software development application is considered, and the formal model of H. Korth and G. Speegle (1988) is applied to show how this example could be represented as a set of database transactions.
Abstract: An example of a software development application is considered, and the formal model of H. Korth and G. Speegle (1988) is applied to show how this example could be represented as a set of database transactions. It is shown that, although the standard notion of correctness (serializability) is too strict, the notion of correctness in the Korth and Speegle model allows sufficient concurrency with acceptable overhead. An extrapolation is made from this example to draw some conclusions regarding the potential usefulness of a formal approach to the management of long-duration design transactions.

Proceedings ArticleDOI
Alexander Thomasian, Erhard Rahm
28 May 1990
TL;DR: A distributed optimistic concurrency control method followed by locking is presented, in which locking is an integral part of distributed validation and two-phase commit; in the case of higher data contention levels, the hybrid OCC method allows a much higher maximum transaction throughput than distributed 2PL.
Abstract: A distributed optimistic concurrency control (OCC) method followed by locking is presented, in which locking is an integral part of distributed validation and two-phase commit. This OCC method ensures that a transaction failing its validation will not be reexecuted more than once, in general. Furthermore, deadlocks, which are difficult to handle in a distributed environment, are avoided by serializing lock requests. Implementation details are outlined, and the performance of the schemes is compared with distributed two-phase locking (2PL) through a detailed simulation, which incorporates queueing effects at the devices of the computer systems, buffer management, concurrency control, and commit processing. It is shown that in the case of higher data contention levels, the hybrid OCC method allows a much higher maximum transaction throughput than distributed 2PL. The performance of the method with respect to variable-size transactions is reported. It is shown that by restricting the number of restarts to one, the performance achieved for variable-size transactions is comparable to fixed-size transactions with the same mean size.
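At a very high level, the hybrid's control flow can be sketched as follows (the function names are assumptions): the first run is purely optimistic; validation acquires locks as part of two-phase commit, with lock requests serialized to avoid distributed deadlock; a transaction that fails validation is rerun under locking, bounding the number of restarts:

```python
def run(txn, execute_optimistically, validate_and_lock, rerun_with_locks):
    """Sketch of OCC followed by locking. The first execution never
    blocks; locks taken during validation are integrated with the
    two-phase commit. A failed validation triggers one rerun that
    holds locks, so the transaction is not reexecuted more than
    once, in general."""
    execute_optimistically(txn)
    if validate_and_lock(txn):      # serialized lock requests: deadlock-free
        return "commit"
    return rerun_with_locks(txn)    # second run proceeds under locking
```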

Proceedings ArticleDOI
28 May 1990
TL;DR: An efficient method of consistency control of replicated directories that achieves fast access to directories and high concurrency in updating directory replicas by taking advantage of special characteristics of directories is presented.
Abstract: An efficient method of consistency control of replicated directories is presented. By taking advantage of special characteristics of directories, the method achieves fast access to directories and high concurrency in updating directory replicas. The algorithm differs from conventional mechanisms for concurrency control of replicated data in two respects: it does not use global locks or global timestamp orderings, and update operations can proceed without synchronizing with one another. The algorithm can survive both node failure and network failure. The directory problem, design objectives, and related work are described. The system model and consistency control requirements are defined, and the data structures and algorithm are presented. The fault tolerance and recovery mechanism of the approach are discussed, as is the applicability of the algorithm. The approach is evaluated and compared with other work. The detailed algorithm and consistency proof are given.

01 Jul 1990
TL;DR: In this paper, the authors investigate an optimistic concurrency control approach for real-time transaction processing, which possesses the properties of deadlock freedom and predictable blocking time, and give solutions to the problem of transaction starvation.
Abstract: The two-phase locking approach widely used for concurrency control in database systems has some inherent disadvantages, such as deadlock and unpredictable blocking time. These appear to be serious problems with respect to real-time transaction processing, since in a real-time environment transactions need to meet their time constraints as well as their consistency requirements. We investigate an optimistic concurrency control approach for real-time transaction processing, integrated with CPU scheduling, which possesses the properties of deadlock freedom and predictable blocking time. We also give solutions to the problem of transaction starvation. The proposed optimistic concurrency control scheme is implemented on a real-time database testbed. The performance results show that the optimistic scheme outperforms two-phase locking even when the system is CPU bound.

Proceedings ArticleDOI
09 Oct 1990
TL;DR: It is shown that dynamic adaptability can result in performance benefits and that system reconfigurations can be accomplished dynamically with less cost than stopping the system, performing reconfiguration, and then restarting the system.
Abstract: A series of experiments is being conducted on the RAID distributed database system to study the performance and reliability implications of providing static and dynamic adaptability. The authors' studies of the cost of their adaptable implementation were conducted in the context of the concurrency controller and the replication controller. It is shown that adaptable implementations can be provided at costs comparable to those of special-purpose implementations. The experimentation with dynamic adaptability focuses on concurrency control. It is shown that dynamic adaptability can result in performance benefits and that system reconfiguration can be accomplished dynamically with less cost than stopping the system, performing reconfiguration, and then restarting the system. The authors' examination of the costs of providing greater data availability includes studying the replication control and atomicity control subsystems of RAID. The cost associated with increasing availability in an adaptable scheme of replication control and commit protocols is demonstrated.

Patent
Naofumi Sakai
24 May 1990
TL;DR: A data management method and system are presented for classifying shared data as new data, which may be updated by processing, or past data, which may not be updated.
Abstract: A data management method and system for classifying shared data as new data or past data, where the new data may be updated by processing and the past data may not be. The method and system have concurrency control, central control, and sharing of data by a plurality of users, wherein the past data is not subject to updating, thereby precluding the need for lock processing by concurrency control of past data. In addition, together with a query language, the capability of processing new and past data separately allows for an increase in concurrency control efficiency and smoother user operation.
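A minimal sketch of the classification's payoff (the field names are assumptions, not the patent's terminology): reads of past data, which by definition can never be updated, bypass the lock manager entirely:

```python
def read_item(txn, item, lock_manager):
    """Past (frozen) data has no possible concurrent writer, so no
    lock processing is needed for it; only new (updatable) data
    goes through conventional concurrency control."""
    if item.is_past:
        return item.value                      # lock-free read
    lock_manager.acquire(txn, item, mode="S")  # conventional path
    return item.value
```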

Proceedings ArticleDOI
Philip S. Yu, Daniel M. Dias
07 Mar 1990
TL;DR: It is shown that, with sufficient buffer, a new approach to buffer management can be adopted so that data items referenced by aborted transactions continue to be retained in memory for access during rerun.
Abstract: Under optimistic concurrency control (OCC) schemes, the buffer hit ratio and hence the abort probability of a rerun transaction can be affected by its previous runs, since the data items brought in from the previous runs may still be in memory. It is noted that this buffering effect on rerun transactions has been ignored in previous performance studies. In the present work the authors examine its effect on different OCC schemes. It is shown that, with sufficient buffer, a new approach to buffer management can be adopted so that data items referenced by aborted transactions continue to be retained in memory for access during rerun. By considering the I/O reduction during rerun, it is found that, at high contention levels, the broadcast OCC, which attempts to abort conflicting transactions as early as possible, can be inferior to the pure OCC, which only aborts a transaction at its commit time. Further, combining the two schemes, with pure OCC during the first run of a transaction and broadcast OCC during any reruns, can typically lead to better performance, especially at high contention levels.

Proceedings ArticleDOI
Philip S. Yu, Daniel M. Dias
05 Feb 1990
TL;DR: The proposed scheme reduces the blocking probability by deferring the blocking behavior of transactions to the later stages of their execution, which can lead to better performance at all data and resource contention levels than either conventional locking or the optimistic concurrency control schemes.
Abstract: The concurrency control method employed can be critical to the performance of transaction processing systems. The conventional locking scheme tends to suffer from the blocking phenomenon. The proposed scheme reduces the blocking probability by deferring the blocking behavior of transactions to the later stages of their execution. The transaction execution can then be divided into a nonblocking phase, in which transactions wait for locks but do not block other transactions, and a blocking phase, as in conventional locking. Conflicting accesses to data touched during the nonblocking phase can lead to transaction abort. By properly balancing the blocking and abort effects, the proposed scheme can lead to better performance at all data and resource contention levels than either conventional locking or the optimistic concurrency control schemes. Both simulation and analytical models are used for estimating the performance of this scheme.
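Under one reading of the abstract, the scheme can be sketched as below (the lock-manager interface is an assumption): accesses in the nonblocking phase take locks in a mode that never blocks other transactions, and only near commit does the transaction switch to conventional blocking locks:

```python
def acquire_for_access(txn, item, lock_manager):
    """Two execution phases per transaction. In the nonblocking
    phase a held lock never blocks others, so conflicts discovered
    later may abort this transaction instead; in the blocking phase
    behavior degenerates to conventional two-phase locking."""
    blocking = (txn.phase == "blocking")
    lock_manager.acquire(txn, item, blocks_others=blocking)
```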

Proceedings ArticleDOI
01 Jul 1990
TL;DR: This paper identifies the restriction, formalizes the hierarchical concurrency control approach, proves its correctness, and presents a new global concurrency control algorithm based on this hierarchical approach.
Abstract: A multidatabase system is a facility that allows access to data stored in multiple autonomous and possibly heterogeneous database systems. In order to support atomic updates across multiple database systems, a global concurrency control algorithm is required. Hierarchical concurrency control has been proposed as one possible approach for multidatabase systems. However, to apply this approach, some restrictions have to be imposed on the local concurrency control algorithms. In this paper, we identify the restriction. Based on this restriction, we formalize the hierarchical concurrency control approach and prove its correctness. A new global concurrency control algorithm based on this hierarchical approach is also presented.

Journal ArticleDOI
TL;DR: The effects of deadlocks, read:write ratio, cascading rollback, degree of concurrency, transaction-size mix, transaction wait time, and transaction blocking on the overall performance of seven different concurrency control algorithms are evaluated.

Proceedings ArticleDOI
01 May 1990
TL;DR: A secure protocol is given that guarantees one-copy serializability of concurrent transaction executions and can be implemented in such a way that the size of the trusted code (including the code required for concurrency and recovery) is small.
Abstract: In a multilevel secure database management system based on the replicated architecture, there is a separate database management system to manage data at or below each security level, and lower level data are replicated in all databases containing higher level data. In this paper, we address the open issue of concurrency control in such a system. We give a secure protocol that guarantees one-copy serializability of concurrent transaction executions and can be implemented in such a way that the size of the trusted code (including the code required for concurrency and recovery) is small.

Proceedings ArticleDOI
S. Wang
05 Apr 1990
TL;DR: A solution, based on the role of attributes in an object-oriented database (OODB), is proposed to eliminate shortcomings in existing concurrency control methods; it ensures that only the necessary units are involved and that the transaction boundary is narrowed as far as possible.
Abstract: The issue of concurrency control in object-oriented database systems is addressed. Two existing concurrency control methods, the optimistic approach and the granularity locking approach, are investigated. The determination of a fine transaction boundary is ignored in the optimistic approach, and the defined boundary is usually too large in the granularity locking approach. A solution, based on the role of attributes in an object-oriented database (OODB), is proposed to eliminate these shortcomings. The solution ensures that only the necessary units are involved and that the transaction boundary is narrowed as far as possible.

Proceedings ArticleDOI
05 Feb 1990
TL;DR: It is shown that for specific workloads, multiversion database systems offer performance improvements despite additional CPU and I/O costs involved in accessing old versions of data.
Abstract: A description is given of a detailed simulation study of the performance of multiversion database systems. Their characteristics and the extent to which they provide performance benefits over their single-version counterparts are investigated. First, the structure of a software prototyping environment for the evaluation of distributed database systems is presented. Using this environment, the authors show that for specific workloads, multiversion database systems offer performance improvements despite the additional CPU and I/O costs involved in accessing old versions of data. It is also shown that transaction size is one of the most critical parameters affecting system performance.

Proceedings ArticleDOI
28 May 1990
TL;DR: A replicated directory architecture for the proposed replication control protocol is designed, and it not only supports regeneration of replicated data items, but also provides inexpensive, high-availability directory services, which help maintain database availability.
Abstract: In replicated database systems, a replication control protocol is needed to ensure one-copy serializability. The author incorporates the concept of regeneration into the missing-partition dynamic voting scheme to design a replication control protocol. Like the original missing-partition dynamic voting scheme, this protocol supports an inexpensive read operation which accesses one copy, rather than all copies, of each data item read. By incorporating the concept of regeneration and keeping multiple versions for each data item in the database, higher data availability is maintained. To support data regeneration, a replicated directory architecture for the proposed replication control protocol is designed; it not only supports regeneration of replicated data items, but also provides inexpensive, high-availability directory services, which help maintain database availability.

Proceedings ArticleDOI
Akhil Kumar
05 Feb 1990
TL;DR: It is shown that full serializability can be guaranteed for escrow transactions by means of repeat semantics, that is, a transaction which fails is rerun in a special mode.
Abstract: Three issues in the context of escrow-based methods, are addressed. It is shown that full serializability can be guaranteed for escrow transactions by means of repeat semantics, that is, a transaction which fails is rerun in a special mode. The author describes a crash recovery protocol consistent with the concurrency control scheme used. An analysis is made, by means of a simulation model, of several different borrowing policies for escrow transactions and the best one is identified. >