
Showing papers on "Multiversion concurrency control published in 1993"


Proceedings ArticleDOI
01 Dec 1993
TL;DR: A new optimistic concurrency control algorithm is presented that can avoid such unnecessary restarts by adjusting serialization order dynamically, and it is demonstrated that the new algorithm outperforms the previous ones over a wide range of system workload.
Abstract: Studies concluded that for a variety of reasons, optimistic concurrency control appears well-suited to real-time database systems. In particular, they showed that in a real-time database system that discards tardy transactions, optimistic concurrency control outperforms locking. We show that the optimistic algorithms used in those studies incur restarts unnecessary to ensure data consistency. We present a new optimistic concurrency control algorithm that can avoid such unnecessary restarts by adjusting serialization order dynamically, and demonstrate that the new algorithm outperforms the previous ones over a wide range of system workloads. It appears that this algorithm is a promising candidate for a basic concurrency control mechanism for real-time database systems.
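The restart-avoidance idea can be made concrete with a small sketch. This is a hypothetical toy model (the `Txn` class and validation functions are illustrative, not taken from the paper): classical broadcast-commit OCC restarts every active transaction that read an item the committer wrote, while a dynamically adjusting validator serializes such a reader before the committer and restarts it only when the required serialization order would become contradictory.

```python
class Txn:
    """Toy transaction: read/write sets plus recorded ordering constraints."""
    def __init__(self, tid):
        self.tid = tid
        self.reads, self.writes = set(), set()
        self.must_follow = set()   # txn ids this txn must serialize after

def validate_broadcast(committer, active):
    """Classical OCC with broadcast commit: restart every active
    transaction that read an item the committer wrote."""
    return {t for t in active if t.reads & committer.writes}

def validate_adjusting(committer, active):
    """Adjusted serialization order: a transaction that merely read items
    the committer overwrote is serialized *before* the committer; it is
    restarted only if it must also serialize *after* the committer."""
    restarts = set()
    for t in active:
        read_old = bool(t.reads & committer.writes)     # forces t -> committer
        wrote_after = bool(t.writes & committer.reads)  # forces committer -> t
        if read_old and (wrote_after or committer.tid in t.must_follow):
            restarts.add(t)           # both orders required: not serializable
        elif wrote_after:
            t.must_follow.add(committer.tid)
    return restarts
```

A transaction that only read stale items survives validation here, which is exactly the class of "unnecessary restarts" the abstract refers to.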

119 citations


Proceedings ArticleDOI
01 Dec 1993
TL;DR: This paper presents a technique that is capable of supporting two major requirements for concurrency control in real-time databases: data temporal consistency and data logical consistency, as well as tradeoffs between these requirements.
Abstract: This paper presents a technique that is capable of supporting two major requirements for concurrency control in real-time databases: data temporal consistency and data logical consistency, as well as tradeoffs between these requirements. Our technique is based upon a real-time object-oriented database model in which each object has its own unique compatibility function that expresses the conditional compatibility of any two potential concurrent operations on the object. The conditions use the semantics of the object, such as allowable imprecision, along with current system state, such as time and the active operations on the object. Our concurrency control technique enforces the allowable concurrency expressed by the compatibility function by using semantic locking controlled by each individual object. The real-time object-oriented database model and process of evaluating the compatibility function to grant semantic locks are described.

81 citations


Journal ArticleDOI
01 Oct 1993
TL;DR: The results indicate that algorithms with updaters that lock-couple using exclusive locks perform poorly as compared to those that permit more optimistic index descents, and the need for a highly concurrent long-term lock holding strategy to obtain the full benefits of ahighly concurrent algorithm for index operations is demonstrated.
Abstract: A number of algorithms have been proposed to access B+-trees concurrently, but they are not well understood. In this article, we study the performance of various B+-tree concurrency control algorithms using a detailed simulation model of B+-tree operations in a centralized DBMS. Our study covers a wide range of data contention situations and resource conditions. In addition, based on the performance of the set of B+-tree concurrency control algorithms, which includes one new algorithm, we make projections regarding the performance of other algorithms in the literature. Our results indicate that algorithms with updaters that lock-couple using exclusive locks perform poorly as compared to those that permit more optimistic index descents. In particular, the B-link algorithms are seen to provide the most concurrency and the best overall performance. Finally, we demonstrate the need for a highly concurrent long-term lock holding strategy to obtain the full benefits of a highly concurrent algorithm for index operations.
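As a point of reference for the lock-coupling strategy the study evaluates, here is a minimal sketch of a lock-coupled B+-tree search (the structure is hypothetical and heavily simplified; real B-link algorithms additionally use right-sibling pointers so a descent need not hold two locks at once):

```python
import bisect
import threading

class Node:
    """Toy B+-tree node with its own latch."""
    def __init__(self, keys, children=None):
        self.lock = threading.Lock()
        self.keys = keys                  # sorted separator (or leaf) keys
        self.children = children or []    # empty list means leaf

def search_lock_coupling(root, key):
    """Descend while lock-coupling (crabbing): acquire the child's lock
    *before* releasing the parent's, so a concurrent restructuring cannot
    detach the path underneath the searcher."""
    node = root
    node.lock.acquire()
    while node.children:                                   # internal node
        child = node.children[bisect.bisect_right(node.keys, key)]
        child.lock.acquire()            # couple: child locked first...
        node.lock.release()             # ...then parent released
        node = child
    found = key in node.keys                               # reached a leaf
    node.lock.release()
    return found
```

Updaters that lock-couple with exclusive locks follow the same pattern but hold write locks, which is precisely what serializes them behind one another at the upper tree levels and motivates the more optimistic descents the article favors.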

76 citations


Proceedings ArticleDOI
P. Muth1, T.C. Rakow1, Gerhard Weikum, P. Brossler, Christof Hasse 
19 Apr 1993
TL;DR: A locking protocol for object-oriented database systems (OODBSs) is presented and it is shown that, using the locking protocol in an open-nested transaction, the locks of a subtransaction are released when the subtransaction completes, and only a semantic lock is held further by the parent of the subtransaction.
Abstract: A locking protocol for object-oriented database systems (OODBSs) is presented. The protocol can exploit the semantics of methods invoked on encapsulated objects. Compared to conventional page-oriented or record-oriented concurrency control protocols, the proposed protocol greatly improves the possible concurrency because commutative method executions on the same object are not considered as a conflict. An OODBS application example is presented. The principle of open-nested transactions is reviewed. It is shown that, using the locking protocol in an open-nested transaction, the locks of a subtransaction are released when the subtransaction completes, and only a semantic lock is held further by the parent of the subtransaction.
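The key idea, that commutative method executions on the same object do not conflict, can be sketched with a toy compatibility table (all names are illustrative, not the paper's protocol): two increments of a counter commute and may be locked concurrently, while a read conflicts with a pending increment.

```python
# Symmetric compatibility of method pairs on a counter-like object.
COMMUTES = {
    ('increment', 'increment'): True,   # inc;inc == inc;inc
    ('increment', 'read'): False,       # read observes different values
    ('read', 'read'): True,
}

def compatible(m1, m2):
    """Look the pair up in either order; unknown pairs conflict."""
    return COMMUTES.get((m1, m2), COMMUTES.get((m2, m1), False))

class SemanticLock:
    """Per-object semantic lock: grant a method lock only if the method
    commutes with every method currently locked on this object."""
    def __init__(self):
        self.held = []                  # methods currently locked

    def acquire(self, method):
        if all(compatible(method, h) for h in self.held):
            self.held.append(method)
            return True
        return False                    # caller must wait or abort
```

A page or record lock would treat both increments as write-write conflicts; the semantic table is where the extra concurrency comes from.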

72 citations


Journal ArticleDOI
TL;DR: A trace-driven simulation system for DB-sharing complexes has been developed that allows a realistic performance comparison of four different concurrency and coherency control protocols and investigates so-called on-request and broadcast invalidation schemes.
Abstract: Database Sharing (DB-sharing) refers to a general approach for building a distributed high performance transaction system. The nodes of a DB-sharing system are locally coupled via a high-speed interconnect and share a common database at the disk level. This is also known as a “shared disk” approach. We compare database sharing with the database partitioning (shared nothing) approach and discuss the functional DBMS components that require new and coordinated solutions for DB-sharing. The performance of DB-sharing systems critically depends on the protocols used for concurrency and coherency control. The frequency of communication required for these functions has to be kept as low as possible in order to achieve high transaction rates and short response times. A trace-driven simulation system for DB-sharing complexes has been developed that allows a realistic performance comparison of four different concurrency and coherency control protocols. We consider two locking and two optimistic schemes which operate either under central or distributed control. For coherency control, we investigate so-called on-request and broadcast invalidation schemes, and employ buffer-to-buffer communication to exchange modified pages directly between different nodes. The performance impact of random routing versus affinity-based load distribution and different communication costs is also examined. In addition, we analyze potential performance bottlenecks created by hot spot pages.
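A minimal sketch of the broadcast-invalidation side of coherency control, assuming a write-through shared disk (the class names and the write-through choice are illustrative assumptions, not details from the paper): each node caches pages in a local buffer, and an updating node broadcasts an invalidation so other nodes drop their now-stale copies.

```python
class ClusterNode:
    """One node of a DB-sharing cluster with its own page buffer."""
    def __init__(self, name, cluster):
        self.name, self.cluster = name, cluster
        self.buffer = {}                         # page_id -> cached value

    def read(self, page, disk):
        # Serve from the local buffer if present, else from the shared disk.
        return self.buffer.get(page, disk[page])

    def update(self, page, value, disk):
        self.buffer[page] = value
        disk[page] = value                       # write through to shared disk
        for other in self.cluster:               # broadcast invalidation
            if other is not self:
                other.buffer.pop(page, None)     # drop any stale copy
```

The on-request alternative the paper also studies would instead validate a buffered page against a central lock manager at access time, trading broadcast messages for per-access checks.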

70 citations


Proceedings ArticleDOI
01 Aug 1993
TL;DR: A unified model is developed that allows reasoning about the correctness of concurrency control and recovery within the same framework and captures schedules with semantically rich ADT actions in addition to classical read/write schedules.
Abstract: The classical theory of transaction management is based on two different and independent criteria for the correct execution of transactions. The first criterion, serializability, ensures correct execution of parallel transactions under the assumption that no failures occur. The second criterion, strictness, ensures correct recovery from failures. In this paper we develop a unified model that allows reasoning about the correctness of concurrency control and recovery within the same framework. We introduce the correctness criteria of (prefix-) reducibility and (prefix-) expanded serializability and investigate their relationships to the classical criteria. An important advantage of our model is that it captures schedules with semantically rich ADT actions in addition to classical read/write schedules.

58 citations


Proceedings ArticleDOI
01 Dec 1993
TL;DR: This paper identifies and discusses six concurrency control requirements that distinguish collaborative hypertext systems from multiuser hypertext systems, and examines how existing hyperbase systems fare with respect to the identified set of requirements.
Abstract: Traditional concurrency control techniques for database systems (transaction management based on locking protocols) have been successful in many multiuser settings, but these techniques are inadequate in open, extensible and distributed hypertext systems supporting multiple collaborating users. The term "multiple collaborating users" covers a group setting in which two or more users are engaged in a shared task. Group members can work simultaneously in the same computing environment, use the same set of tools and share a network of hypertext objects. Hyperbase (hypertext database) systems must provide special support for collaborative work, requiring adjustments and extensions to normal concurrency control techniques. Based on the experiences of two collaborative hypertext authoring systems, this paper identifies and discusses six concurrency control requirements that distinguish collaborative hypertext systems from multiuser hypertext systems. Approaches to the major issues (locking, notification control and transaction management) are examined from a supporting technologies point of view. Finally, we discuss how existing hyperbase systems fare with respect to the identified set of requirements. Many of the issues discussed in the paper are not limited to hypertext systems and apply to other collaborative systems as well.

56 citations


Journal ArticleDOI
TL;DR: This work introduces concurrency to the object-oriented language Eiffel through a set of Class Libraries and an associated concurrent programming design method that is well suited for client/server style distributed applications.
Abstract: We introduce concurrency to the object-oriented language Eiffel through a set of Class Libraries and an associated concurrent programming design method. This concurrency mechanism is well suited for client/server style distributed applications. The essential principles of sequential object-oriented programming offered by Eiffel are not sacrificed, since no changes are made to the Eiffel Language [19], or its run-time system. Our concurrency abstractions are presented as encapsulated behavior of Eiffel objects that can be inherited from the CONCURRENCY class.

49 citations



ReportDOI
02 Jan 1993
TL;DR: In this paper, the authors show how concurrency control locks can be used in multilevel systems without introducing covert channels.
Abstract: The concurrency control lock (e.g. file lock, table lock) has long been used as a canonical example of a covert channel in a database system. Locking is a fundamental concurrency control technique used in many kinds of computer systems besides database systems. Locking is generally considered to be interfering and hence unsuitable for multilevel systems. In this paper we show how such locks can be used for concurrency control, without introducing covert channels.
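The covert channel exists because a lower-level transaction can observe whether a higher-level transaction holds a lock by measuring its own lock waits, turning wait/no-wait into a signaling bit. One common way to close it, sketched here as a hedged toy and not necessarily this report's construction, is to never let a lower access class be delayed by a higher one: the higher-class holder yields (is restarted) instead.

```python
class SecureLockTable:
    """Toy multilevel lock table. A request from a lower access class is
    never delayed by a lock held at a higher class; the higher holder is
    restarted instead, so lock waits cannot signal downward."""
    def __init__(self):
        self.holders = {}   # item -> (txn, access_level)

    def request(self, item, txn, level):
        held = self.holders.get(item)
        if held is None:
            self.holders[item] = (txn, level)
            return 'granted'
        holder_txn, holder_level = held
        if holder_level > level:            # higher-class holder must yield
            self.holders[item] = (txn, level)
            return f'granted ({holder_txn} restarted)'
        return 'wait'                       # same/lower class: normal blocking
```

The cost of this fairness-free fix (higher classes may starve) is exactly the problem later secure protocols, such as the fair secure two-phase locking work listed below, try to address.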

41 citations


Journal ArticleDOI
TL;DR: An interesting feature of the framework is that the execution of read-only transactions becomes completely independent of the underlying concurrency control implementation, and the extension of the multiversion algorithms to a distributed environment becomes very simple.
Abstract: A version control mechanism is proposed that enhances the modularity and extensibility of multiversion concurrency control algorithms. The multiversion algorithms are decoupled into two components: version control and concurrency control. This permits modular development of multiversion protocols and simplifies the task of proving the correctness of these protocols. A set of procedures for version control is described that defines the interface with the version control component. It is shown that the same interface can be used by the database actions of both two-phase locking and time-stamp concurrency control protocols to access multiversion data. An interesting feature of the framework is that the execution of read-only transactions becomes completely independent of the underlying concurrency control implementation. Unlike other multiversion algorithms, read-only transactions in this scheme do not modify any version-related information, and therefore do not interfere with the execution of read-write transactions. The extension of the multiversion algorithms to a distributed environment becomes very simple.
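The reader-independence property described above is easy to see in a minimal multiversion store (an illustrative sketch, not the paper's interface): writers append timestamped versions, and a read-only transaction simply walks the version chain as of its start timestamp, touching no lock and no version bookkeeping.

```python
class MVStore:
    """Minimal multiversion store: writers append (commit_ts, value)
    versions; read-only transactions read as of a snapshot timestamp."""
    def __init__(self):
        self.versions = {}            # key -> list of (commit_ts, value)
        self.clock = 0                # logical commit timestamp counter

    def commit_write(self, key, value):
        self.clock += 1
        self.versions.setdefault(key, []).append((self.clock, value))

    def snapshot_read(self, key, as_of):
        # Newest version committed at or before the snapshot timestamp.
        for ts, value in reversed(self.versions.get(key, [])):
            if ts <= as_of:
                return value
        return None                   # no version visible to this snapshot
```

Because `snapshot_read` mutates nothing, swapping the updaters' concurrency control (two-phase locking vs. timestamping) leaves query execution untouched, which is the decoupling the framework formalizes.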

Journal ArticleDOI
TL;DR: A concurrency control protocol that performs better than traditional DDBSs in high-speed networks is developed that is at the heart of the overall functioning of the distributed system.
Abstract: The issues involved in developing a distributed database system (DDBS) in a high-speed environment are discussed. The inadequacy of existing database protocols in utilizing the gigabit network is described. A concurrency control protocol that performs better than traditional DDBSs in high-speed networks is developed. Both analytical and simulation results are presented. The focus is on the concurrency control aspect of DDBS since this protocol is at the heart of the overall functioning of the distributed system.


Proceedings ArticleDOI
01 Jun 1993
TL;DR: The use of properties of the application and the telecommunication systems to develop simple and efficient solutions to the concurrency control and recovery problems is discussed.
Abstract: In a research and technology application project at Bellcore, we used multidatabase transactions to model multisystem work flows of telecommunication applications. During the project a prototype scheduler for executing multi-database transactions was developed. Two of the issues addressed in this project were concurrent execution of multi-database transactions and their failure recovery. This paper discusses our use of properties of the application and the telecommunication systems to develop simple and efficient solutions to the concurrency control and recovery problems.

Journal ArticleDOI
TL;DR: A protocol for managing data in a replicated multiversion environment, where execution of read-only transactions or queries becomes completely independent of the underlying concurrency control and replica control mechanisms, and the data availability for read- only transactions increases significantly.
Abstract: Multiple versions of data are used in database systems to increase concurrency. The higher concurrency results since read-only transactions can be executed without any concurrency control overhead and, therefore, read-only transactions do not interfere with the execution of update transactions. Availability of data in a distributed environment is improved by data replication. We propose a protocol for managing data in a replicated multiversion environment, where execution of read-only transactions or queries becomes completely independent of the underlying concurrency control and replica control mechanisms, and the data availability for read-only transactions increases significantly since they can be executed as long as any one copy of the object is available in the system. In order to validate the feasibility of our approach, we developed a simple prototype to measure the performance improvement in the response times of queries. The results clearly establish the viability of the approach as a useful paradigm for the design of efficient and fault-tolerant distributed database systems.

Proceedings Article
01 Jan 1993
TL;DR: The Dynamic Directed Graph (DDG) policy exploits the rich structure of a knowledge base to support the interleaved concurrent execution of several user requests, thereby improving overall system performance.
Abstract: As the demand for ever larger knowledge bases grows, knowledge base management techniques assume paramount importance. In this paper we show that large multi-user knowledge bases need concurrency control. We discuss known techniques from database concurrency control and explain their inadequacies in the context of knowledge bases. We offer a concurrency control algorithm called the Dynamic Directed Graph (DDG) policy that addresses the specific needs of knowledge bases. The DDG policy exploits the rich structure of a knowledge base to support the interleaved concurrent execution of several user requests, thereby improving overall system performance. We give a proof of correctness of the proposed concurrency control algorithm and an analysis of its properties. We demonstrate that these results from concurrency control interact in interesting ways with knowledge base features, and highlight the importance of performance-oriented tradeoffs in the design of knowledge-based systems.

23 Jun 1993
TL;DR: This paper examines the R-tree as an index structure, modifies it to allow concurrent accesses, and investigates three different locking methods for concurrency control.
Abstract: Access to spatial objects is often required in many nonstandard database applications, such as GIS, VLSI and CAD. In this paper, we examine the R-tree as an index structure, and modify it to allow concurrent accesses. We investigate three different locking methods for concurrency control. The first method uses a single lock to lock the entire tree, allowing concurrent searches but only sequential updates. The second method locks the whole tree only when the splitting or merging of nodes in the tree is required. The third method uses the lock-coupling technique to lock individual nodes of the tree.

Journal ArticleDOI
TL;DR: This paper describes several distributed, lock-based, real-time concurrency control protocols and reports on the relative performance of the protocols in a distributed database environment.
Abstract: A real-time database system (RTDBS) is designed to provide timely response to the transactions of data-intensive applications. The transactions processed in a RTDBS are associated with real-time constraints typically in the form of deadlines. With the current database technology it is extremely difficult to provide schedules guaranteeing transaction deadlines. This difficulty comes from the unpredictability of transaction response times. Efficient resource scheduling algorithms and concurrency control protocols are required to schedule RTDB transactions so as to maximize the number of satisfied deadlines. In this paper, we describe several distributed, lock-based, real-time concurrency control protocols and report on the relative performance of the protocols in a distributed database environment. The protocols are different in the way real-time constraints of transactions are involved in controlling concurrent accesses to shared data. A detailed performance model of a distributed RTDBS was employed in the evaluation of concurrency control protocols.
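One family of such protocols resolves lock conflicts by transaction priority (typically deadline-derived). A toy single-lock sketch of the "high priority wins" scheme follows; it is illustrative only, since the paper's protocols are distributed and considerably more involved.

```python
class PriorityLock:
    """Toy priority-driven lock: a higher-priority requester preempts a
    lower-priority holder (the holder is aborted) instead of waiting,
    so urgent transactions are never blocked behind less urgent ones."""
    def __init__(self):
        self.holder = None              # (txn, priority) or None

    def request(self, txn, priority):
        if self.holder is None:
            self.holder = (txn, priority)
            return 'granted'
        holder_txn, holder_prio = self.holder
        if priority > holder_prio:      # requester is more urgent: preempt
            self.holder = (txn, priority)
            return f'granted ({holder_txn} aborted)'
        return 'blocked'                # less urgent requester waits
```

The trade-off the paper's evaluation explores is visible even here: preemption wastes the aborted transaction's work, while plain blocking risks priority inversion.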

Book ChapterDOI
20 Sep 1993
TL;DR: The interaction history of a document can be modelled as a tree of command objects, which does not only support recovery (undo/redo), but is also suitable for cooperation between distributed users working on a common document.
Abstract: The interaction history of a document can be modelled as a tree of command objects. This model not only supports recovery (undo/redo), but is also suitable for cooperation between distributed users working on a common document. Various coupling modes can be supported. Switching between modes is supported by regarding different versions of a document as different branches of the history. Branches can later be merged using a selective redo mechanism. Synchronous cooperation is supported by replicating the document state and exchanging command objects. Optimistic concurrency control can be applied because conflicting actions can later be undone automatically.
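A command-object history of this kind reduces to apply/undo pairs: each command records enough state to invert itself, which is what lets optimistic concurrency control undo conflicting actions after the fact. A minimal sketch (hypothetical names, with a flat dict standing in for the document):

```python
class Command:
    """One history entry: set a document field, remembering the old value
    so the command can later be undone (or selectively redone)."""
    _ABSENT = object()                     # sentinel: field did not exist

    def __init__(self, key, new_value):
        self.key, self.new = key, new_value
        self.old = Command._ABSENT

    def apply(self, doc):
        self.old = doc.get(self.key, Command._ABSENT)
        doc[self.key] = self.new

    def undo(self, doc):
        if self.old is Command._ABSENT:
            doc.pop(self.key, None)        # field was absent before
        else:
            doc[self.key] = self.old
```

Replaying a branch is then just applying its command list in order, and merging branches amounts to a selective redo of commands from one branch onto another.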

Proceedings ArticleDOI
12 May 1993
TL;DR: A mechanism called polytransactions is presented to automatically generate actions to restore the consistency between interdependent data, and the design of two concurrency control mechanisms for concurrent execution of polytransactions is given.
Abstract: Interdependent data are data objects in a cooperative information environment that are related by mutual consistency requirements. A flexible framework for specifying the dependency requirements of interdependent data using data dependency descriptors is discussed. A mechanism called polytransactions is presented to automatically generate actions to restore the consistency between interdependent data. The design of two concurrency control mechanisms for concurrent execution of polytransactions is given. The first is a deadlock-free graph-locking mechanism and the second is a variant of multiversion timestamps with rollback that never rejects operations arriving out of timestamp order. A conceptual system architecture is outlined for the execution of polytransactions. The notion of a multidatabase monitor is discussed.

Proceedings ArticleDOI
06 Oct 1993
TL;DR: A secure two-phase locking protocol that is shown to be free from covert channels arising due to data conflicts between transactions and that provides reasonably fair execution of all transactions, regardless of their access class, is presented.
Abstract: A secure concurrency control algorithm must, in addition to maintaining consistency of the database, be free from covert channels arising due to data conflicts between transactions. The existing secure concurrency control approaches are unfair to transactions at higher access classes. A secure two-phase locking protocol that is shown to be free from covert channels arising due to data conflicts between transactions and that provides reasonably fair execution of all transactions, regardless of their access class, is presented. A description of the protocol for a centralized database system is given, and the extensions that need to be provided in a distributed environment are discussed.

01 Feb 1993
TL;DR: This paper evaluates the performance of the Two-Shadow SCC algorithm (SCC-2S), a member of the SCC-nS family, which is notable for its minimal use of redundancy.
Abstract: Speculative Concurrency Control (SCC) is a new concurrency control approach especially suited for real-time database applications. It relies on the use of redundancy to ensure that serializable schedules are discovered and adopted as early as possible, thus increasing the likelihood of the timely commitment of transactions with strict timing constraints. In a recent publication by two of the authors, SCC-nS, a generic algorithm that characterizes a family of SCC-based algorithms was described, and its correctness established by showing that it only admits serializable histories. In this paper, we evaluate the performance of the Two-Shadow SCC algorithm (SCC-2S), a member of the SCC-nS family, which is notable for its minimal use of redundancy. In particular, we show that SCC-2S (as a representative of SCC-based algorithms) provides significant performance gains over the widely used Optimistic Concurrency Control with Broadcast Commit (OCC-BC), under a variety of operating conditions and workloads.

Book ChapterDOI
Chandrasekaran Mohan1
13 Oct 1993
TL;DR: This work discusses some of the issues involved in improving the availability and efficient accessibility of partitioned tables via parallelism, fine-granularity locking, transient versioning and partition independence, and outlines some solutions that have been proposed.
Abstract: A number of interesting problems arise in supporting the efficient and flexible storage, maintenance and manipulation of large volumes of data (e.g., >100 gigabytes of data in a single table). Very large tables are becoming common. Typically, high availability is an important requirement for such data. The currently-popular relational DBMSs have been very slow in providing the needed support. To make it possible for RDBMSs to be deployed for managing many large enterprises' operational data and to support complex queries efficiently, these features are very crucial. We discuss some of the issues involved in improving the availability and efficient accessibility of partitioned tables via parallelism, fine-granularity locking, transient versioning and partition independence. We outline some solutions that have been proposed. These solutions relate to algorithms for index building, utilities for fuzzy backups, incremental recovery and reorganization, buffer management, transient versioning, concurrency control and record management.

Proceedings ArticleDOI
19 Apr 1993
TL;DR: It is shown that CO is a necessary condition for guaranteeing 1SER over multiple autonomous RMs with mixed resources, and generic distributed CO algorithms, which guarantee the 1SER property over multiple RMs with mixed SV and MV resources, are presented.
Abstract: Multiversion (MV) based database systems allow queries (read-only transactions) to be executed without blocking updaters (read-write transactions), or being blocked by them. Such systems are becoming more and more common. In a multidatabase environment, transactions may span several single version (SV) based database systems, as well as MV based ones. The database systems may implement various concurrency control techniques. It is required that a globally correct concurrency control is guaranteed. Commitment ordering (CO) is a serializability concept that allows global serializability to be effectively achieved across multiple autonomous resource managers (RMs; e.g. database systems). The RMs may use different (any) concurrency control mechanisms. Generic distributed CO algorithms, which guarantee the 1SER property over multiple RMs with mixed SV and MV resources, are presented. It is shown that CO is a necessary condition for guaranteeing 1SER over multiple autonomous RMs with mixed resources.
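CO itself is a simple rule: the order in which transactions commit must not contradict their conflict (serialization) order. A toy coordinator sketch of the enforcement idea follows; it is illustrative only, not the paper's generic distributed algorithm. When a transaction asks to commit, any still-undecided transaction that must serialize before it is aborted, so the commit order stays consistent with the conflict order.

```python
class COCoordinator:
    """Toy commitment-ordering enforcer for named transactions."""
    def __init__(self):
        self.before = {}                 # txn -> txns that must precede it
        self.committed = []              # commit order actually produced
        self.aborted = set()

    def add_conflict(self, first, then):
        """Record that `first` conflicts with and precedes `then`."""
        self.before.setdefault(then, set()).add(first)

    def commit(self, txn):
        # Every undecided predecessor would violate the commit order if it
        # committed later, so it is aborted (CO's "abort set").
        for pred in self.before.get(txn, set()):
            if pred not in self.committed and pred not in self.aborted:
                self.aborted.add(pred)
        self.committed.append(txn)
```

Because the rule looks only at local conflicts and commit requests, autonomous RMs with different internal concurrency control can all enforce it independently, which is what makes global serializability achievable.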

01 Oct 1993
TL;DR: An algorithm is presented which extends the relatively new notion of speculative concurrency control by delaying the commitment of transactions, thus allowing other conflicting transactions to continue execution and commit rather than restart.
Abstract: This paper presents an algorithm which extends the relatively new notion of speculative concurrency control by delaying the commitment of transactions, thus allowing other conflicting transactions to continue execution and commit rather than restart. This algorithm propagates uncommitted data to other outstanding transactions thus allowing more speculative schedules to be considered. The algorithm is shown always to find a serializable schedule, and to avoid cascading aborts. Like speculative concurrency control, it considers strictly more schedules than traditional concurrency control algorithms. Further work is needed to determine which of these speculative methods performs better on actual transaction loads.

Book
29 Oct 1993
TL;DR: In this paper, the authors investigate the interoperability of transactions and another consistency model, atomic blocks, which provide exclusive access to an individual object, and explore the implications of 'uncontrolled' concurrency within the same COOPS.
Abstract: The transaction model provides a good solution to many programming problems, but has been recognized as too strict for many applications. Different parallel and distributed applications have different consistency requirements, so multiple concurrency control policies are needed. When data is shared among applications with different policies, then the policies must operate simultaneously and compatibly. The authors investigate the interoperability of transactions and another consistency model, atomic blocks, which provide exclusive access to an individual object. They explore the implications of 'uncontrolled' concurrency within the same COOPS. The work has been in the context of the MELD object-oriented programming language.

Proceedings ArticleDOI
06 Apr 1993
TL;DR: The proposed protocol employs priority-based conflict resolution schemes built on forward validation and utilizes the notion of lazy serialization implemented using dynamic timestamp allocation and dynamic adjustment of timestamp intervals, and is expected to produce transaction results in a timely manner.
Abstract: Transactions in real-time database systems are associated with certain timing constraints derived either from temporal consistency requirements of data or from requirements imposed on system reaction time. Fundamental requirements of real-time database systems are timeliness, i.e., the ability to produce expected transaction results early or at the right time, and predictability, i.e., the ability to function as deterministically as necessary to satisfy system specifications including timing constraints. There are a number of issues that have to be addressed in processing real-time transactions. To achieve the fundamental requirements, not only must conventional transaction processing mechanisms be tailored to take timing constraints into consideration, but new mechanisms that have not been required in conventional transaction processing must also be designed and added. In this paper, we focus on the problem of concurrency control for processing real-time transactions, and propose an optimistic concurrency control protocol. The proposed protocol employs priority-based conflict resolution schemes built on forward validation. In addition, it utilizes the notion of lazy serialization implemented using dynamic timestamp allocation and dynamic adjustment of timestamp intervals. With these features, the proposed protocol is expected to produce transaction results in a timely manner.
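Dynamic adjustment of timestamp intervals can be sketched in a few lines (hypothetical API, not the paper's): each transaction keeps an interval of admissible serialization timestamps rather than a fixed one; every detected conflict shrinks the interval, and the transaction must restart only when the interval becomes empty.

```python
class IntervalTxn:
    """Toy timestamp-interval transaction: the interval [lo, hi] holds
    all serialization timestamps still consistent with observed
    conflicts. Empty interval => the transaction must restart."""
    def __init__(self):
        self.lo, self.hi = 0, float('inf')

    def serialize_after(self, ts):
        """Constraint: this txn must serialize after an event at ts."""
        self.lo = max(self.lo, ts + 1)
        return self.lo <= self.hi       # False: interval empty, restart

    def serialize_before(self, ts):
        """Constraint: this txn must serialize before an event at ts."""
        self.hi = min(self.hi, ts - 1)
        return self.lo <= self.hi
```

Deferring the choice of a concrete timestamp until commit is the "lazy serialization" the abstract refers to: many conflicts that would abort a fixed-timestamp scheme merely narrow the interval here.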

Journal ArticleDOI
01 Dec 1993
TL;DR: This paper provides a survey of the concurrency control algorithms for a TDBMS and discusses future directions.
Abstract: Recently several algorithms have been proposed for concurrency control in a Trusted Database Management System (TDBMS). The various research efforts are examining the concurrency control algorithms developed for DBMSs and adapting them for a multilevel environment. This paper provides a survey of the concurrency control algorithms for a TDBMS and discusses future directions.


Proceedings ArticleDOI
19 Apr 1993
TL;DR: The concurrency control problem for multiversion database systems (MVDBSs) with system-imposed upper bounds on the total number of data item versions stored in the database is considered, and a formal concurrency control theory for KVDBSs is presented in terms of KV schedules.
Abstract: The concurrency control problem for multiversion database systems (MVDBSs) with system-imposed upper bounds on the total number of data item versions stored in the database is considered. Concurrency control theory for MVDBSs is reviewed. The inadequacy of this theory for analyzing concurrency control algorithms for k-version database systems (KVDBSs) is demonstrated. A formal concurrency control theory for KVDBSs is presented. It is developed in terms of KV schedules. The relationships among mono-, multi-, and KV schedules are summarized.