
Showing papers on "Concurrency control published in 1988"


Proceedings Article
29 Aug 1988
TL;DR: In this paper, the authors develop a new family of algorithms for scheduling real-time transactions with deadlines. The algorithms have four components: a policy to manage overloads, a policy for scheduling the CPU, a policy for scheduling access to data (i.e., concurrency control), and a policy for scheduling I/O requests on a disk device.
Abstract: This thesis has six chapters. Chapter 1 motivates the thesis by describing the characteristics of real-time database systems and the problems of scheduling transactions with deadlines. We also present a short survey of related work and discuss how this thesis has contributed to the state of the art. In Chapter 2 we develop a new family of algorithms for scheduling real-time transactions. Our algorithms have four components: a policy to manage overloads, a policy for scheduling the CPU, a policy for scheduling access to data (i.e., concurrency control), and a policy for scheduling I/O requests on a disk device. In Chapter 3, our scheduling algorithms are evaluated via simulation. Our chief result is that real-time scheduling algorithms can perform significantly better than a conventional non-real-time algorithm. In particular, the Least Slack (static evaluation) policy for scheduling the CPU, combined with the Wait Promote policy for concurrency control, produces the best overall performance. In Chapter 4 we develop a new set of algorithms for scheduling disk I/O requests with deadlines. Our model assumes the existence of a real-time database system which assigns deadlines to individual read and write requests. We also propose new techniques for handling requests without deadlines and requests with deadlines simultaneously. This approach greatly improves the performance of the algorithms and their ability to minimize missed deadlines. In Chapter 5 we evaluate the I/O scheduling algorithms using detailed simulation. Our chief result is that real-time disk scheduling algorithms can perform better than conventional algorithms. In particular, our algorithm FD-SCAN was found to be very effective across a wide range of experiments. Finally, in Chapter 6 we summarize our conclusions and discuss how this work has contributed to the state of the art. Also, we briefly explore some interesting new directions for continuing this research.
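
The Least Slack idea summarized above can be illustrated with a minimal sketch. This is a hedged toy version of static least-slack ordering only (the tuple layout, names, and numbers are invented for illustration); it is not the thesis's actual policy and omits the Wait Promote concurrency control entirely.

```python
def least_slack_order(transactions, now):
    """Order transactions by static least slack.

    slack = deadline - now - remaining_work; the transaction that can
    least afford to wait runs first.  Each transaction is a hypothetical
    (name, deadline, remaining_work) tuple."""
    return sorted(transactions, key=lambda t: t[1] - now - t[2])

txns = [("T1", 100, 30), ("T2", 50, 10), ("T3", 60, 40)]
# slacks at now=0: T1 -> 70, T2 -> 40, T3 -> 20
print([name for name, _, _ in least_slack_order(txns, now=0)])  # ['T3', 'T2', 'T1']
```

Note that T3 runs first even though T2 has the earlier deadline: slack accounts for remaining work, not just the deadline.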

682 citations


Journal ArticleDOI
TL;DR: Two novel concurrency control algorithms for abstract data types are presented, and it is proved that both ensure a local atomicity property called dynamic atomicity, which means that they can be used in combination with any other algorithms that also ensure dynamic atomicity.
Abstract: Two novel concurrency control algorithms for abstract data types are presented that ensure serializability of transactions. It is proved that both algorithms ensure a local atomicity property called dynamic atomicity. The algorithms are quite general, permitting operations to be both partial and nondeterministic. The results returned by operations can be used in determining conflicts, thus allowing higher levels of concurrency than otherwise possible. The descriptions and proofs encompass recovery as well as concurrency control. The two algorithms use different recovery methods: one uses intentions lists, and the other uses undo logs. It is shown that conflict relations that work with one recovery method do not necessarily work with the other. A general correctness condition that must be satisfied by the combination of a recovery method and a conflict relation is identified.

318 citations


Proceedings Article
29 Aug 1988
TL;DR: An overview of the techniques used to build a DBMS at Berkeley that will simultaneously provide high performance and high availability in transaction processing environments, in applications with complex ad-hoc queries and in Applications with large objects such as images or CAD layouts is presented.
Abstract: This paper presents an overview of the techniques we are using to build a DBMS at Berkeley that will simultaneously provide high performance and high availability in transaction processing environments, in applications with complex ad-hoc queries, and in applications with large objects such as images or CAD layouts. We plan to achieve these goals using a general purpose DBMS and operating system and a shared memory multiprocessor. The hardware and software tactics which we are using to accomplish these goals are described in this paper and include a novel “fast path” feature, a special purpose concurrency control scheme, a two-dimensional file system, exploitation of parallelism and a novel method to efficiently mirror disks. We strive for high performance in three different application areas: 1) transaction processing, 2) complex ad-hoc queries, and 3) management of large objects.

216 citations


Proceedings ArticleDOI
10 Oct 1988
TL;DR: A checkpoint algorithm is presented that benefits from the research in concurrency control, commit, and site recovery algorithms in transaction processing and does well if failures are infrequent by minimizing overhead during normal processing.
Abstract: A checkpoint algorithm is presented that benefits from the research in concurrency control, commit, and site recovery algorithms in transaction processing. In the authors' approach a number of checkpointing processes, a number of rollback processes, and computations on operational processes can proceed concurrently while tolerating the failure of an arbitrary number of processes. Each process takes checkpoints independently. During recovery after a failure, a process invokes a two-phase rollback algorithm. It collects information about relevant message exchanges in the system in the first phase and uses it in the second phase to determine both the set of processes that must roll back and the set of checkpoints up to which rollback must occur. Concurrent rollbacks are completed in the order of the priorities of the recovering processes. The proposed solution is optimistic in the sense that it does well if failures are infrequent by minimizing overhead during normal processing.
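
The second rollback phase described above is essentially a transitive closure over post-checkpoint message exchanges. A minimal sketch, assuming a hypothetical `received_from` map gathered in phase one (the data structure and names are invented for illustration, not taken from the paper):

```python
from collections import deque

def rollback_set(failed, received_from):
    """Compute the set of processes that must roll back with `failed`.

    received_from[q] lists processes that received a message q sent after
    q's most recent checkpoint.  Rolling q back makes those messages
    orphans, so their receivers must roll back too, transitively."""
    must, work = {failed}, deque([failed])
    while work:
        q = work.popleft()
        for p in received_from.get(q, ()):
            if p not in must:
                must.add(p)
                work.append(p)
    return must

deps = {"A": ["B"], "B": ["C"], "D": ["A"]}
print(sorted(rollback_set("A", deps)))  # ['A', 'B', 'C']
```

Process D stays out of the rollback set: it only sent a message to A, so A's rollback does not orphan anything D received.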

209 citations


Journal ArticleDOI
01 Mar 1988
TL;DR: This paper discusses solutions for two problems: finding a reasonable method for modeling real-time constraints for database transactions, and scheduling, since time constraints add a new dimension to concurrency control.
Abstract: Scheduling transactions with real-time requirements presents many new problems. In this paper we discuss solutions for two of these problems. The first is finding a reasonable method for modeling real-time constraints for database transactions. Traditional hard real-time constraints (e.g., deadlines) may be too limited; many transactions have soft deadlines, and a more flexible model is needed to capture these soft time constraints. The second problem we address is scheduling. Time constraints add a new dimension to concurrency control: not only must a schedule be serializable, but it also should meet the time constraints of all the transactions in the schedule.

205 citations


Journal ArticleDOI
01 Mar 1988
TL;DR: The protocol is based on the integration of a modular concurrency control theory with a real-time scheduling protocol called the priority ceiling protocol and supports the replication of data objects and avoids the formation of deadlocks.
Abstract: The concurrency control of transactions in a real-time database must satisfy not only the consistency constraints of the database but also the timing constraints of individual transactions. In this paper, we present a real-time concurrency control protocol that can be used in a distributed and decomposable real-time database. The protocol is based on the integration of a modular concurrency control theory with a real-time scheduling protocol called the priority ceiling protocol. This protocol supports the replication of data objects and avoids the formation of deadlocks. Finally, an analysis of the performance of this protocol is presented.

163 citations


Journal ArticleDOI
01 Jun 1988
TL;DR: A correctness condition for the concurrency control mechanism is formulated, and a protocol is proposed that allows concurrent execution of a set of global transactions in the presence of local ones while ensuring the consistency of the multidatabase and deadlock freedom.
Abstract: A formal model of data updates in a multidatabase environment is developed, and a theory of concurrency control in such an environment is presented. We formulate a correctness condition for the concurrency control mechanism and propose a protocol that allows concurrent execution of a set of global transactions in the presence of local ones. This protocol ensures the consistency of the multidatabase and deadlock freedom. We use the developed theory to prove the protocol's correctness and discuss complexity issues of implementing the proposed protocol.

137 citations


Journal ArticleDOI
TL;DR: Concurrent algorithms on search structures can achieve more parallelism than standard concurrency control methods would suggest, by exploiting the fact that many different search structure states represent one dictionary state.
Abstract: A dictionary is an abstract data type supporting the actions member, insert, and delete. A search structure is a data structure used to implement a dictionary. Examples include B trees, hash structures, and unordered lists. Concurrent algorithms on search structures can achieve more parallelism than standard concurrency control methods would suggest, by exploiting the fact that many different search structure states represent one dictionary state. We present a framework for verifying such algorithms and for inventing new ones. We give several examples, one of which exploits the structure of Banyan family interconnection networks. We also discuss the interaction between concurrency control and recovery as applied to search structures.
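
As one concrete illustration of search-structure concurrency, consider lock coupling (hand-over-hand locking) on a sorted linked list. This is a standard technique from this literature rather than the paper's own framework, and the sketch below is simplified: per-node locks let disjoint traversals proceed in parallel where a single structure-wide lock could not.

```python
import threading

class Node:
    def __init__(self, key, nxt=None):
        self.key, self.next = key, nxt
        self.lock = threading.Lock()

class SortedList:
    """Dictionary implemented as a sorted list with hand-over-hand locking:
    an inserter holds at most two node locks at a time while it advances."""
    def __init__(self):
        self.head = Node(float("-inf"))  # sentinel

    def insert(self, key):
        pred = self.head
        pred.lock.acquire()
        curr = pred.next
        if curr:
            curr.lock.acquire()
        while curr and curr.key < key:
            pred.lock.release()            # release behind, keep ahead locked
            pred, curr = curr, curr.next
            if curr:
                curr.lock.acquire()
        if not (curr and curr.key == key):  # skip duplicates
            pred.next = Node(key, curr)
        if curr:
            curr.lock.release()
        pred.lock.release()

    def member(self, key):
        # Unsynchronized traversal, shown without locks for brevity.
        node = self.head.next
        while node and node.key < key:
            node = node.next
        return bool(node and node.key == key)

s = SortedList()
for k in (3, 1, 2):
    s.insert(k)
print(s.member(2), s.member(5))  # True False
```

The point matches the abstract: many interleavings produce differently linked lists, yet all represent the same dictionary state.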

136 citations


Journal ArticleDOI
01 Jun 1988
TL;DR: In this paper, the authors describe transaction management in ORION, an object-oriented database system that supports a concurrency control mechanism based on extensions to the current theory of locking, and a transaction recovery mechanism based on conventional logging.
Abstract: In this paper, we describe transaction management in ORION, an object-oriented database system. The application environments for which ORION is intended led us to implement the notions of sessions of transactions, and hypothetical transactions (transactions which always abort). The object-oriented data model which ORION implements complicates locking requirements. ORION supports a concurrency control mechanism based on extensions to the current theory of locking, and a transaction recovery mechanism based on conventional logging.

127 citations


Proceedings ArticleDOI
01 Jan 1988
TL;DR: Two novel concurrency control algorithms for abstract data types are presented, and it is proved that both ensure a local atomicity property called dynamic atomicity, which means that they can be used in combination with any other algorithms that also ensure dynamic atomicity.
Abstract: Two novel concurrency control algorithms for abstract data types are presented. The algorithms ensure serializability of transactions by using conflict relations based on the commutativity of operations. It is proved that both algorithms ensure a local atomicity property called dynamic atomicity. This means that the algorithms can be used in combination with any other algorithms that also ensure dynamic atomicity. The algorithms are quite general, permitting operations to be both partial and nondeterministic. They permit the results returned by operations to be used in determining conflicts, thus permitting higher levels of concurrency than is otherwise possible. The descriptions and proofs encompass recovery as well as concurrency control. The two algorithms use different recovery methods: one uses intentions lists, and the other uses undo logs. It is shown that conflict relations that work with one recovery method do not necessarily work with the other. A general correctness condition that must be satisfied by the combination of a recovery method and a conflict relation is identified.
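
The idea of deriving conflicts from commutativity, with operation results taken into account, can be sketched on a toy set ADT. This table is an invented illustration of the general principle, not the paper's conflict relations:

```python
def commute(op1, op2):
    """Do two completed operations commute on a set ADT?

    Each operation is a hypothetical (name, arg, result) triple.  Results
    matter: insert(x) commutes with member(x)->True (x was already there),
    but conflicts with member(x)->False (insert would change the answer)."""
    (n1, a1, r1), (n2, a2, r2) = op1, op2
    if a1 != a2:
        return True          # operations on different elements always commute
    if n1 == n2 == "member":
        return True          # read-only operations commute
    if n1 == n2 == "insert":
        return True          # duplicate inserts: same final state, same results
    # one insert(x) and one member(x): commute only if member saw x present
    mem_result = r1 if n1 == "member" else r2
    return mem_result is True

print(commute(("insert", "a", "ok"), ("member", "a", True)))   # True
print(commute(("insert", "a", "ok"), ("member", "a", False)))  # False
```

Using results this way is exactly what buys the extra concurrency the abstract mentions: a result-blind relation would have to treat every insert/member pair on the same element as a conflict.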

124 citations


Proceedings Article
29 Aug 1988
TL;DR: The authors examine the performance of four representative algorithms - distributed 2PL, wound-wait, basic timestamp ordering, and a distributed optimistic algorithm - using a detailed simulation model of a distributed DBMS to shed light on some of the important issues of distributed concurrency control performance tradeoffs.
Abstract: Many concurrency control algorithms have been proposed for use in distributed database systems. Despite the large number of available algorithms, and the fact that distributed database systems are becoming a commercial reality, distributed concurrency control performance tradeoffs are still not well understood. In this paper the authors attempt to shed light on some of the important issues by studying the performance of four representative algorithms - distributed 2PL, wound-wait, basic timestamp ordering, and a distributed optimistic algorithm - using a detailed simulation model of a distributed DBMS. The authors examine the performance of these algorithms for various levels of contention, "distributedness" of the workload, and data replication. The results should prove useful to designers of future distributed database systems.

Proceedings ArticleDOI
13 Jun 1988
TL;DR: The design and implementation of a reliable group communication mechanism that guarantees a form of atomicity in that the messages are received by all operational members of the group or by none of them is presented.
Abstract: The design and implementation of a reliable group communication mechanism is presented. The mechanism guarantees a form of atomicity in that the messages are received by all operational members of the group or by none of them. Since the overhead in enforcing the order of messages is nontrivial, the mechanism provides two types of message transmission: one guarantees delivery of the messages in the same order to all members of a group, and the other guarantees only atomicity with messages delivered in some arbitrary order. The message-ordering property can be used to simplify distributed database and distributed processing algorithms. The mechanism can survive despite process, host, and communication failures.

Book ChapterDOI
14 Mar 1988
TL;DR: The fundamental theory of multi-level concurrency control and recovery is presented and it is shown how the theory helps to understand and explain in a systematic framework techniques that are in use in today's DBMSs.
Abstract: A useful approach to the design and description of complex data management systems is the decomposition of a system into a hierarchically organized collection of levels. In such a system, transaction management is distributed among the levels. This paper presents the fundamental theory of multi-level concurrency control and recovery. A model for the computation of multi-level transactions is introduced by generalizing from the well known single-level theory. Three basic principles, called commutation, reduction, and abstraction are explained. Using them enables one to explain and prove seemingly "tricky" implementation techniques as correct, by regarding them as multi-level algorithms. We show how the theory helps to understand and explain in a systematic framework techniques that are in use in today's DBMSs. We also discuss how and why multi-level algorithms may achieve better performance than single-level ones.

Journal ArticleDOI
01 Mar 1988
TL;DR: It is felt that long communication delays may be a factor in limiting the performance of real-time distributed database systems, so a concurrency control algorithm is presented whose performance is not limited by communication delays.
Abstract: Real-time database systems support applications which have severe performance constraints such as fast response time and continued operation in the face of catastrophic failures. Real-time database systems are still in the state of infancy, and issues and alternatives in their design are not very well explored. In this paper, we discuss issues in the design of real-time database systems and discuss different alternatives for resolving these issues. We discuss the aspects in which requirements and design issues of real-time database systems differ from those of conventional database systems. We discuss two approaches to design real-time database systems, viz., main memory resident databases and design by trading a feature (like serializability). We also discuss requirements in the design of real-time distributed database systems, and specifically discuss issues in the design of concurrency control and crash recovery. It is felt that long communication delays may be a factor in limiting the performance of real-time distributed database systems. We present a concurrency control algorithm for real-time distributed database systems whose performance is not limited by communication delays.

Proceedings ArticleDOI
01 Mar 1988
TL;DR: A new locking protocol that permits more concurrency than existing commutativity-based protocols is defined, and it is proved that the protocol satisfies hybrid atomicity, a local atomicity property that combines aspects of static and dynamic atomic protocols.
Abstract: We define a new locking protocol that permits more concurrency than existing commutativity-based protocols. The protocol uses timestamps generated when transactions commit to provide more information about the serialization order of transactions, and hence to weaken the constraints on conflicts. In addition, the protocol permits operations to be both partial and non-deterministic, and it permits results of operations to be used in choosing locks. The protocol exploits type-specific properties of objects; necessary and sufficient constraints on lock conflicts are defined directly from a data type specification. We give a complete formal description of the protocol, encompassing both concurrency control and recovery, and prove that the protocol satisfies hybrid atomicity, a local atomicity property that combines aspects of static and dynamic atomic protocols. We also show that the protocol is optimal in the sense that no hybrid atomic locking scheme can permit more concurrency.

Journal ArticleDOI
TL;DR: An approach to concurrency control is presented; it is based on the decomposition of both the database and the individual transactions, and is a generalization of serializability theory in that the set of permissible transaction schedules contains all the serializable schedules.
Abstract: An approach to concurrency control is presented; it is based on the decomposition of both the database and the individual transactions. This approach is a generalization of serializability theory in that the set of permissible transaction schedules contains all the serializable schedules. In addition to providing a higher degree of concurrency than that provided by serializability theory, this approach retains three important properties associated with serializability: the consistency of the database is preserved, the individual transactions are executed correctly, and the concurrency control approach is modular. The authors formalize the last concept. The associated failure recovery procedure is presented, as is the concept of failure safety (i.e. failure tolerance).

Proceedings ArticleDOI
01 Feb 1988
TL;DR: An optimistic concurrency-control algorithm is proposed that allows a subclass of global transactions to concurrently retrieve and update the multiple databases, while it places no restriction on the concurrence-control mechanisms used by each of the local DBMSs, thus maintaining local autonomy.
Abstract: The performance is studied of atomic updates across different database management systems (DBMSs). An optimistic concurrency-control algorithm is proposed that allows a subclass of global transactions to concurrently retrieve and update the multiple databases, while it places no restriction on the concurrency-control mechanisms used by each of the local DBMSs, thus maintaining local autonomy.

Journal ArticleDOI
TL;DR: The controlled-generator model of P.J. Ramadge and W.M. Wonham (1988) is used to formulate the concurrent execution of transactions in database systems as a control problem for a partially observed discrete-event dynamical system.
Abstract: The controlled-generator model of P.J. Ramadge and W.M. Wonham (1988) is used to formulate the concurrent execution of transactions in database systems as a control problem for a partially observed discrete-event dynamical system. The control objectives of this problem (for concurrency control and recovery) and the properties of some important transaction scheduling techniques are characterized in terms of the language generated by the controlled process and in terms of the state of an ideal complete-information scheduler. Results about the performance of these techniques are presented.

Journal ArticleDOI
TL;DR: A method is discussed for synchronizing operations on objects when the operations are invoked by transactions, which takes into consideration the granularity at which operations affect an object.
Abstract: A method is discussed for synchronizing operations on objects when the operations are invoked by transactions. The technique, which is motivated by a desire to make use of possible concurrency in accessing objects, takes into consideration the granularity at which operations affect an object. A dynamic method is presented for determining the compatibility of an invoked operation with respect to operations in progress. In making decisions, it utilizes the state of the object, the semantics of the uncommitted operations, the actual parameters of the invoked operation, and the effect of the operations on the objects. One of the attractive features of this technique is that a single framework can be used to deal with the problem of synchronizing access to simple objects as well as compound objects, i.e. objects in which some components are themselves objects.

Proceedings ArticleDOI
01 Jan 1988
TL;DR: It is shown how to achieve a fair solution to l-exclusion, a classical concurrency control problem previously solved assuming a very powerful form of atomic “test and set”, using safe registers alone and without introducing atomicity.
Abstract: Most of the research in concurrency control has been based on the existence of strong synchronization primitives such as test and set. Following Lamport, recent research promoting the use of weaker primitives, “safe” rather than “atomic,” has resulted in construction of atomic registers from safe ones, in the belief that they would be useful tools for process synchronization. We argue that the properties provided by atomic operations may be too powerful, masking core difficulties of problems and leading to inefficiency. We therefore advocate a different approach, to skip the intermediate step of achieving atomicity, and solve problems directly from safe registers. Though it has been shown that “test and set” cannot be implemented from safe registers, we show how to achieve a fair solution to l-exclusion, a classical concurrency control problem previously solved assuming a very powerful form of atomic “test and set”. We do so using safe registers alone and without introducing atomicity. The solution is based on the construction of a simple novel non-atomic synchronization primitive.

Proceedings ArticleDOI
01 Jun 1988
TL;DR: A Data Management System (DMS) for VLSI design is presented that supports hierarchical decomposition, multiple levels of abstraction, concurrency control and design evolution.
Abstract: A Data Management System (DMS) for VLSI design is presented that supports hierarchical decomposition, multiple levels of abstraction, concurrency control and design evolution. Our contribution is original in that we employ semantic data modeling techniques to derive a simple, yet powerful, data schema that represents the logical organization of VLSI design data. The resulting DMS provides an open framework for the integration of design tools and relieves the designer of the burden of organizing his design data.

Proceedings ArticleDOI
26 Sep 1988
TL;DR: A concurrency control model that supports cooperative data sharing among transactions that is relevant to applications that provide computer support for cooperative activities, such as office information systems, graphical programming environments, and CAD tools for electronic or mechanical domaim.
Abstract: We describe a concurrency control model that supports cooperative data sharing among transactions. Serializability is replaced by application- and data-specific correctness criteria that are explicitly defined by programmers. The model is relevant to applications that provide computer support for cooperative activities, such as office information systems, graphical programming environments, and CAD tools for electronic or mechanical domains. Its context is an object-oriented database: an object is accessed only by operations defined on its abstract type [ZW].

Journal ArticleDOI
TL;DR: A queueing network model for analyzing the performance of a distributed database testbed system is developed; the model reflects a functioning testbed system and is validated against empirical performance measurements.
Abstract: A queueing network model for analyzing the performance of a distributed database testbed system with a transaction workload is developed. The model includes the effects of the concurrency control protocol (two-phase locking with distributed deadlock detection), the transaction recovery protocol (write-ahead logging of before-images), and the commit protocol (centralized two-phase commit) used in the testbed system. The queueing model differs from previous analytical models in three major aspects. First, it is a model for a distributed transaction processing system. Second, it is more general and integrated than previous analytical models. Finally, it reflects a functioning distributed database testbed system and is validated against performance measurements.

Journal ArticleDOI
Theo Härder1
TL;DR: This paper proposes the use of global escrow services, which may be called asynchronously, and investigates the escrow mechanism for a data sharing environment where transactions running on multiple, independent processors must be efficiently synchronized without sacrificing their serializability.

Book ChapterDOI
01 Aug 1988
TL;DR: Arjuna as mentioned in this paper is a fault-tolerant distributed programming system supporting atomic actions using type-inheritance, which allows new types to be derived from, and inherit the capabilities of, old types.
Abstract: One of the key concepts available in many object-oriented programming languages is that of type-inheritance, which permits new types to be derived from, and inherit the capabilities of, old types. This paper describes how to exploit this property in a very simple fashion to implement object-oriented concurrency control. We show how by using type-inheritance, objects may control their own level of concurrency in a type-specific manner. Simple examples demonstrate the applicability of the approach. The implementation technique described here is being used to develop Arjuna, a fault-tolerant distributed programming system supporting atomic actions.

Proceedings ArticleDOI
05 Dec 1988
TL;DR: Using simulation and probabilistic analysis, this work studies the performance of an algorithm to read entire databases with locking concurrency control allowing multiple readers or an exclusive writer.
Abstract: Using simulation and probabilistic analysis, we study the performance of an algorithm to read entire databases with locking concurrency control allowing multiple readers or an exclusive writer. The algorithm runs concurrently with the normal transaction processing (on-the-fly) and locks the entities in the database one by one (incremental). The analysis compares different strategies to resolve the conflicts between the global read algorithm and update. Since the algorithm is parallel in nature, its interference with normal transactions is minimized in parallel and distributed databases. A simulation study shows that one variant of the algorithm can read the entire database with very little overhead and interference with the updates.
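
The incremental, on-the-fly idea can be sketched in a few lines: lock, copy, and release one entity at a time, so ordinary transactions are never blocked on more than one entity at once. This is an invented minimal illustration; the paper's shared/exclusive lock modes and its strategies for resolving conflicts between the global read and updates are omitted, and the names are hypothetical.

```python
import threading

def global_read(db, locks):
    """Incremental global read: visit entities in a fixed order, holding
    only one entity's lock at a time while it is copied."""
    snapshot = {}
    for key in sorted(db):     # fixed traversal order over entities
        with locks[key]:       # exclusive lock used here for simplicity
            snapshot[key] = db[key]
    return snapshot

db = {"x": 1, "y": 2}
locks = {k: threading.Lock() for k in db}
print(global_read(db, locks))  # {'x': 1, 'y': 2}
```

As the abstract notes, the interesting part is precisely what this sketch leaves out: without a conflict-resolution strategy, updates interleaved between the per-entity reads can make the copy inconsistent as a whole.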

Proceedings ArticleDOI
01 Feb 1988
TL;DR: The authors derive a closed-form expression for the transaction throughput as a function of workload parameters and the resource-access-time parameters; the closed-form expressions also yield a simple asymptotic analysis of the optimistic concurrency control protocols.
Abstract: The authors use a mean value model for data contention and a piecewise linear model for resource contention. To show the usefulness of this methodology, they compare three different optimistic concurrency control protocols for a centralized system. The authors derive a closed-form expression for the transaction throughput as a function of workload parameters and the resource-access-time parameters. The resource-access-time parameters can be derived using a simple analytical model. The closed-form expressions are very useful as a quick evaluation of different protocols and to gain insight about protocol performance over a wide range of model parameters. They also yield a simple asymptotic analysis of the optimistic concurrency control protocols. The authors apply the methodology to predict the performance of a testbed database system.

Book ChapterDOI
15 Aug 1988
TL;DR: This work proposes an approach to co-existence that blurs the boundary between the object-oriented execution environment and the database.
Abstract: Object-oriented systems could use much of the functionality of database systems to manage their objects. Persistence, object identity, storage management, distribution and concurrency control are some of the things that database systems traditionally handle well. Unfortunately there is a fundamental difference in philosophy between the object-oriented and database approaches, namely that of object independence versus data independence. We discuss the ways in which this difference in outlook manifests itself, and we consider the possibilities for resolving the two views, including the current work on object-oriented databases. We conclude by proposing an approach to co-existence that blurs the boundary between the object-oriented execution environment and the database.

Proceedings Article
29 Aug 1988
TL;DR: A rigorous framework for analyzing timestamp-based concurrency control and recovery algorithms for nested transactions is presented and it is shown that local static atomicity of each object is sufficient to ensure global serializability.
Abstract: We present a rigorous framework for analyzing timestamp-based concurrency control and recovery algorithms for nested transactions. We define a local correctness property, local static atomicity, that affords useful modularity. We show that local static atomicity of each object is sufficient to ensure global serializability. We present generalizations of algorithms due to Reed and Herlihy, and show that each ensures local static atomicity.
