
Showing papers on "Concurrency control published in 1981"


Journal ArticleDOI
TL;DR: This paper surveys concurrency control methods for distributed databases, decomposing the problem into two major subproblems, read-write and write-write synchronization, and describing a series of synchronization techniques for solving each subproblem.
Abstract: In this paper we survey, consolidate, and present the state of the art in distributed database concurrency control. The heart of our analysis is a decomposition of the concurrency control problem into two major subproblems: read-write and write-write synchronization. We describe a series of synchronization techniques for solving each subproblem and show how to combine these techniques into algorithms for solving the entire concurrency control problem. Such algorithms are called "concurrency control methods." We describe 48 principal methods, including all practical algorithms that have appeared in the literature plus several new ones. We concentrate on the structure and correctness of concurrency control algorithms. Issues of performance are given only secondary treatment.

1,124 citations


Proceedings ArticleDOI
29 Apr 1981
TL;DR: A careful distinction is made between design decisions concerning communications and design decisions concerning the responses to read/write requests, and two schemes for producing such controls are given.
Abstract: Associated with the write of a database entity is both the "before" or old value, and the "after" or new value. Concurrency can be increased by allowing other transactions to read the before values of a given transaction. The ramifications of allowing this, particularly on a distributed system in which limited communication is desirable, are investigated. A careful distinction is made between design decisions concerning communications and design decisions concerning the responses to read/write requests. Two schemes for producing such controls are given, one scheme for systems where processes are committed on termination, and the other for systems where commitment is made later.
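The idea of letting other transactions read a writer's before value can be sketched as follows. This is an illustrative in-memory model, not the paper's protocol; the `Store`/`Entity` names and the single-writer-per-entity rule are assumptions of the sketch.

```python
# Sketch: each entity keeps a committed "before" value and, while a
# writer is active, an uncommitted "after" value. Readers are always
# served the before value, so they never block on an in-progress writer.

class Entity:
    def __init__(self, value):
        self.before = value      # last committed value
        self.after = None        # uncommitted value, if a writer is active
        self.writer = None       # transaction holding the write, if any

class Store:
    def __init__(self):
        self.entities = {}

    def write(self, txn, key, value):
        e = self.entities.setdefault(key, Entity(None))
        if e.writer is not None and e.writer != txn:
            raise RuntimeError("write-write conflict")  # one writer at a time
        e.writer, e.after = txn, value

    def read(self, key):
        # Other transactions see the before value of any active writer.
        e = self.entities.get(key)
        return e.before if e else None

    def commit(self, txn):
        # Install the after values: they become the new before values.
        for e in self.entities.values():
            if e.writer == txn:
                e.before, e.after, e.writer = e.after, None, None

store = Store()
store.write("T1", "x", 10)
store.commit("T1")
store.write("T2", "x", 20)    # T2 is still uncommitted...
assert store.read("x") == 10  # ...so readers still see the before value
store.commit("T2")
assert store.read("x") == 20
```

Write-write conflicts still have to be resolved (here, crudely, by rejecting the second writer); only read-write concurrency is increased.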

124 citations


Proceedings Article
09 Sep 1981
TL;DR: The Transaction Monitoring Facility (TMF) is a component of the ENCOMPASS distributed data management system, which runs on the Tandem computer system, and provides continuous, fault-tolerant transaction processing in a decentralized, distributed environment.
Abstract: A transaction is an atomic update which takes a database from one consistent state to another consistent state. The Transaction Monitoring Facility (TMF) is a component of the ENCOMPASS distributed data management system, which runs on the Tandem [TM] computer system. TMF provides continuous, fault-tolerant transaction processing in a decentralized, distributed environment. Recovery from failures is transparent to user programs and does not require system halt or restart. Recovery from a failure which directly affects active transactions, such as the failure of a participating processor or the loss of communications between participating network nodes, is accomplished by means of the backout and restart of affected transactions. The implementation utilizes distributed audit trails of database activity and a decentralized transaction concurrency control mechanism.

103 citations


Book
01 Jul 1981
TL;DR: This book treats general purpose schedulers and their application in database systems, covering conflict-preserving schedulers, database manipulation, and concurrent dynamic logic.
Abstract: Database systems.- General purpose schedulers.- Logs.- Correctness criteria for general purpose schedulers.- Constructing general purpose schedulers.- Conflict-preserving schedulers.- Database description.- Database manipulation.- Concurrent dynamic logic.- Correctness of transaction systems.- Conclusions and directions for future work.

86 citations


Proceedings Article
09 Sep 1981
TL;DR: A method is developed which frees read transactions from any consideration of concurrency control; all responsibility for correct synchronization is assigned to the update transactions, which has the great advantage that, in case of conflicts between read transactions and update Transactions, no backup is performed.
Abstract: Recently, methods for concurrency control have been proposed which were called "optimistic". These methods do not consider access conflicts when they occur; instead, a transaction always proceeds, and at its end a check is performed whether a conflict has happened. If so, the transaction is backed up. This basic approach is investigated in two directions: First, a method is developed which frees read transactions from any consideration of concurrency control; all responsibility for correct synchronization is assigned to the update transactions. This method has the great advantage that, in case of conflicts between read transactions and update transactions, no backup is performed. Then, the application of optimistic solutions in distributed database systems is discussed, and a solution is presented.
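One way to free read transactions from concurrency control entirely is to keep multiple versions: a read transaction fixes a snapshot at its start and never validates or backs up. The sketch below illustrates that idea under simplifying assumptions (a single global update lock, monotonically increasing commit numbers); it is not the paper's algorithm, and the `MVStore` name is hypothetical.

```python
import threading

class MVStore:
    """Minimal multiversion sketch: updaters carry all synchronization
    (here, one global lock) and install new versions; read-only
    transactions pick a snapshot at start and can never conflict."""

    def __init__(self):
        self.versions = {}          # key -> list of (commit_no, value)
        self.commit_no = 0
        self.lock = threading.Lock()

    def update(self, writes):
        # All writes of an update transaction commit atomically.
        with self.lock:
            self.commit_no += 1
            for k, v in writes.items():
                self.versions.setdefault(k, []).append((self.commit_no, v))

    def begin_read(self):
        return self.commit_no       # snapshot point for a read transaction

    def read(self, snap, key):
        # Newest version committed at or before the snapshot.
        for cn, v in reversed(self.versions.get(key, [])):
            if cn <= snap:
                return v
        return None

db = MVStore()
db.update({"x": 1, "y": 1})
snap = db.begin_read()       # a read transaction starts here
db.update({"x": 2})          # a concurrent update does not disturb it
assert db.read(snap, "x") == 1
assert db.read(snap, "y") == 1
```

As in the paper's scheme, a read transaction is never backed up on a conflict with an update transaction; it simply keeps reading its snapshot.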

49 citations




Journal ArticleDOI
TL;DR: An experimental system designed as part of INRIA's Project Sirius, Delta implements a distributed executive for real-time transactional systems.
Abstract: An experimental system designed as part of INRIA's Project Sirius, Delta implements a distributed executive for real-time transactional systems.

24 citations



01 Jan 1981
TL;DR: This dissertation presents a language suitable for the specification of communications protocols that can be translated into an algebraic data type specification formalism, which allows properties of protocols to be proved using semi-automated support.
Abstract: This dissertation presents a language suitable for the specification of communications protocols. This language can be translated into an algebraic data type specification formalism, which allows properties of protocols to be proved using semi-automated support. As an example, two complex protocols, namely a connection establishment protocol actually being used in practice, and a protocol for concurrency control in distributed data bases, are specified and certain properties proved. A logical design error in the connection establishment protocol is also uncovered.

23 citations


Proceedings ArticleDOI
11 May 1981
TL;DR: The aim in this paper is to show that there is a mathematically inherent reason why existing systems enforce D-serializability (rather than just because of its simplicity): it is because they are based on locking.
Abstract: Our aim in this paper is to show that there is a mathematically inherent reason why existing systems enforce D-serializability (rather than just because of its simplicity): it is because they are based on locking. Our main result is a characterization of the power of locking which states that if a locking policy is safe then it must allow only D-serializable schedules. Furthermore, any such schedule can be produced by some safe locking policy. The rest of the paper is organized as follows. In Section 2 we formalize our concepts and describe the model. In Section 3 we characterize D-serializability in semantic terms. In Section 4 we examine when a set of transactions can be allowed to run safely by themselves, without locking or any intervention from the scheduler. Section 5 is concerned with locking policies, and in Section 6 we discuss some implications of our results.

Proceedings ArticleDOI
28 Oct 1981
TL;DR: It follows that, unless NP=PSPACE, a scheduler cannot simultaneously minimize communication and be computationally efficient, and this result captures the quantum jump in complexity of the transition from centralized to distributed concurrency control problems.
Abstract: We present a formal framework for distributed databases, and we study the complexity of the concurrency control problem in this framework. Our transactions are partially ordered sets, of actions, as opposed to the straight-line programs of the centralized case. The concurrency control algorithm, or scheduler, is itself a distributed program. Three notions of performance of the scheduler are studied and interrelated: (i) its parallelism, (ii) the computational complexity of the problems it needs to solve, and (iii) the cost of communication between the various parts of the scheduler. We show that the number of messages necessary and sufficient to support a given level of parallelism is equal to the minmax value of a combinatorial game. We show that this game is PSPACE-complete. It follows that, unless NP=PSPACE, a scheduler cannot simultaneously minimize communication and be computationally efficient. This result, we argue, captures the quantum jump in complexity of the transition from centralized to distributed concurrency control problems.

Journal ArticleDOI
TL;DR: Necessary and sufficient conditions which assure serializability and deadlock-freedom in the absence of a concurrency control are derived.
Abstract: A simple locking protocol is presented for transactions executing concurrently in a database. The locking protocol is not two-phase, but each entity in the database may be locked at most once by any transaction. The database is modeled by a directed graph whose vertices correspond to the entities, and whose arcs correspond to certain locking restrictions. Necessary and sufficient conditions which assure serializability and deadlock-freedom in the absence of a concurrency control are derived.
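A locking discipline with these properties can be illustrated by a tree-structured variant: after its first lock, a transaction may lock a vertex only while holding that vertex's parent, each entity is locked at most once, and unlocks may occur at any time, so the policy is not two-phase. The sketch below is illustrative only and does not reproduce the paper's exact graph conditions; all names are assumptions.

```python
class TreeProtocol:
    """Sketch of a non-two-phase, tree-structured locking discipline:
    each entity may be locked at most once per transaction, and every
    lock after the first requires holding the vertex's parent."""

    def __init__(self, parent):
        self.parent = parent       # vertex -> parent vertex (None = root)
        self.holder = {}           # vertex -> transaction holding the lock
        self.held = {}             # txn -> vertices currently held
        self.ever_locked = {}      # txn -> vertices ever locked

    def lock(self, txn, v):
        if v in self.holder:
            raise RuntimeError("lock conflict")
        ever = self.ever_locked.setdefault(txn, set())
        if v in ever:
            raise RuntimeError("each entity may be locked at most once")
        held = self.held.setdefault(txn, set())
        if ever and self.parent[v] not in held:
            raise RuntimeError("must hold parent to lock a non-first vertex")
        self.holder[v] = txn
        held.add(v)
        ever.add(v)

    def unlock(self, txn, v):
        # Unlike two-phase locking, unlocks may happen at any time.
        if self.holder.get(v) != txn:
            raise RuntimeError("lock not held")
        del self.holder[v]
        self.held[txn].discard(v)

tp = TreeProtocol({"a": None, "b": "a", "c": "b"})
tp.lock("T1", "a")
tp.lock("T1", "b")
tp.unlock("T1", "a")   # released before T1 finishes locking: not two-phase
tp.lock("T1", "c")
```

The parent-holding rule is what makes such non-two-phase disciplines serializable and deadlock-free on suitable graphs.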

Book ChapterDOI
TL;DR: The motivation for distributed computer systems in terms of possible system characteristics attained by distributing the computational resources is discussed, and the real-time control application environment is characterized.
Abstract: Publisher Summary This chapter describes real-time distributed computer systems in regard to their design and implementation. It discusses the motivation for distributed computer systems in terms of possible system characteristics attained by distributing the computational resources. The chapter also characterizes the real-time control application environment. Further, the chapter reviews the options and issues related to hardware and software designs for distributed systems, and examines the details of the design and implementation of the Honeywell Experimental Distributed Computing System (HXDP). Distributed computer systems essentially contain several computers and provide increased system availability and reliability. Their design is complex, involving the design of communications mechanisms in hardware and software, and the selection of policies and mechanisms for distributed system control. However, the complex design issues can have simple solutions in a well-understood application environment. Distributed systems are considered attractive as they provide efficiency, modularity, robustness to failure, and extensibility. Moreover, for real-time environments, these systems allow physical distribution of functionality, placing computing power at places where required.

Proceedings ArticleDOI
29 Apr 1981
TL;DR: Two concurrency control mechanisms, the SDD-1 system and Dynamic Timestamping Method, are evaluated in terms of protocol synchronization delays and average transaction response time by using simulation.
Abstract: Two concurrency control mechanisms, the SDD-1 system and the Dynamic Timestamping Method, are evaluated in terms of protocol synchronization delays and average transaction response time by using simulation. The relationship among average protocol synchronization delay, average transaction response time, average I/O service delay, communication delay, and other system parameters is analyzed by using regression analysis. The statistical distribution functions of transaction response times and synchronization delays are then examined to see if they fit exponential, Erlang, or some other distribution functions.

Proceedings Article
01 Jun 1981
TL;DR: This report has been published in the Proceedings of the Fifth Berkeley Conference on Distributed Data Management and Computer Networks.
Abstract: This report has been published in the Proceedings of the Fifth Berkeley Conference on Distributed Data Management and Computer Networks.

Proceedings ArticleDOI
29 Apr 1981
TL;DR: This work gives a necessary and sufficient condition for a concurrency control principle to be implementable by binary semaphores, and characterizes exactly those sets of locking primitives that are no more powerful than binary semaphores.
Abstract: We study the expressive power of locking primitives, as measured by their ability to implement different concurrency control principles. We give a necessary and sufficient condition for a concurrency control principle (abstractly, a set of histories) to be implementable by binary semaphores. Also, we characterize exactly those sets of locking primitives that are no more powerful than binary semaphores.


01 Jan 1981
TL;DR: A novel reflective light-polarizing lamination comprising a light polarizer laminated to the matte surface of aluminum foil and useful in display cells for field-effect transition liquid crystal displays.
Abstract: A novel reflective light-polarizing lamination comprising a light polarizer laminated to the matte surface of aluminum foil and useful in display cells for field-effect transition liquid crystal displays.

Book ChapterDOI
20 Oct 1981
TL;DR: An execution of a set of transactions is described by the sequence of read/write actions, a reads-from relation ϕ, and an overwrite relation ω.
Abstract: The interleaved execution of database transactions produces correctness problems. An execution is called correct, or serializable, if it is equivalent to a serial execution of the same transactions. An execution of a set of transactions is described by the sequence of the read/write actions (called a schedule), a reads-from relation ϕ, and an overwrite relation ω.
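A common way to decide serializability of such a schedule is the precedence (conflict) graph: the schedule is conflict-serializable iff that graph is acyclic. A minimal sketch follows; the action encoding and the function name are assumptions, and conflict serializability is a stricter notion than the ϕ/ω-based equivalence the abstract refers to.

```python
# A schedule is conflict-serializable iff its precedence graph is acyclic.
# Actions are (txn, op, item) tuples with op in {"r", "w"}; an edge runs
# from the earlier to the later transaction of each conflicting pair.

def serializable(schedule):
    edges = set()
    for i, (t1, op1, x1) in enumerate(schedule):
        for t2, op2, x2 in schedule[i + 1:]:
            if t1 != t2 and x1 == x2 and "w" in (op1, op2):
                edges.add((t1, t2))
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)

    # Cycle detection by depth-first search over the precedence graph.
    def cyclic(node, stack):
        if node in stack:
            return True
        return any(cyclic(m, stack | {node}) for m in graph.get(node, ()))

    return not any(cyclic(n, frozenset()) for n in graph)

# r1(x) w2(x) w1(x): edges T1->T2 (r-w) and T2->T1 (w-w) form a cycle.
assert not serializable([("T1", "r", "x"), ("T2", "w", "x"), ("T1", "w", "x")])
assert serializable([("T1", "r", "x"), ("T1", "w", "x"), ("T2", "w", "x")])
```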

01 Jan 1981
TL;DR: This thesis analyzes some of the concurrency control schemes in an effort to understand their relative merits, investigate the sensitivity of their performance to different parameters, and provide some guidelines for designing resilient concurrency control algorithms for distributed databases.
Abstract: One of the most critical problems in the implementation of distributed databases is that of concurrency control. The problem is to preserve the consistency of data (that may otherwise be destroyed by concurrent accesses). One of the major goals in the research of distributed databases is to develop a design methodology and guidelines for designing good concurrency control methods for a given system environment. In this thesis, we set out to approach that goal by analyzing some of the concurrency control schemes in an effort to understand their relative merits, to investigate the sensitivity of their performance to different parameters, and to provide some guidelines for designing resilient concurrency control algorithms for distributed databases. Several update synchronization algorithms are modelled and analyzed in this dissertation, including a resilient centralized locking algorithm, some distributed algorithms using timestamps, and some algorithms using clock messages in addition to timestamps. Results from the analysis allowed us to pinpoint inefficiencies and to suggest some new algorithms.

01 Feb 1981
TL;DR: This report surveys the published proposals on concurrency control and develops a taxonomy for the classification of concurrency control techniques for distributed database systems.
Abstract: One of the most important problems in the design of centralized and distributed database management systems is the problem of concurrency control. Even though many different solutions have been proposed, a unifying theory is still not in sight. This report attempts to survey all the published proposals on concurrency control. In particular, a taxonomy is developed for the classification of concurrency control techniques for distributed database systems. The survey of these twenty-some concurrency control mechanisms is presented in the framework of this taxonomy.

Journal ArticleDOI
TL;DR: The conflict graph for this database design is shown, and it is demonstrated that when classes i and j each run exactly one transaction, i and j respectively, the system is effectively deadlocked.
Abstract: The conflict graph for this database design is shown in Figure 1. SDD-1 protocol selection rules dictate that (transactions in) classes i and j obey protocol P3 with respect to each other. Assume that classes i and j each run exactly one transaction, i and j, respectively. The concurrency monitor at DMA will not process i's READ message until it receives a WRITE or NULLWRITE message from class j, for only then will it be able to determine whether the read condition attached to i's READ message is satisfied. Similarly, the concurrency monitor at DMB will not process j's READ message until it receives a WRITE or NULLWRITE message from class i. Since no WRITE or NULLWRITE message will be sent until a READ message is processed, the system is effectively deadlocked. Note that this result is independent of the timestamps assigned to transactions i and j, and of the timestamps used in the read conditions attached to their READ messages. Also note that the scheduling rule described in [3, p. 45] cannot prevent the deadlock, since class i has no messages pending at DMB, just as class j has no messages pending at DMA.

01 Jan 1981
TL;DR: This thesis is concerned with the problem of extending previous work on two-phase and non-two-phase locking protocols to achieve a higher degree of concurrency and at the same time deal effectively with the deadlock problem.
Abstract: With the ever growing popularity of database management, the systematic study of consistency-preserving concurrency control techniques has become very important. Two important issues that need to be considered are: (i) the level of concurrency and (ii) deadlocks. This thesis is concerned with the problem of extending previous work on two-phase and non-two-phase locking protocols to achieve a higher degree of concurrency and at the same time deal effectively with the deadlock problem. Our work with the non-two-phase protocols deals with the most general of the existing natural protocols that are defined for use with databases organized as directed acyclic graphs. An increased level of concurrency is attained by allowing lock conversions and/or by introducing new lock modes. When this is done, either deadlock-freedom is assured a priori or simple restrictions are introduced to reduce the cost of deadlock detection and recovery. In addition to extending existing protocols and proposing new ones, we also extend the existing theory of locking protocols by including lock conversions and the new modes of locking in the directed hypergraph model of locking protocols. In so doing, we obtain very general results concerning serializability and deadlock-freedom properties of all protocols satisfying a natural closure property. We propose and use some new proof techniques.



Journal ArticleDOI
01 Jan 1981
TL;DR: This paper disproves several results pertaining to database concurrency control and demonstrates that the notion of "weak consistency" introduced in [8] admits database states that are strictly inconsistent.
Abstract: This paper disproves several results pertaining to database concurrency control that are claimed in [8]. The results we disprove are: theorems 3.1, 3.2, and 3.6, which claim a polynomial-time algorithm for testing whether transaction schedules are serializable; and theorems 4.2 and 4.7, which claim a necessary and sufficient mechanism for preserving the "weak consistency" of databases. In addition, we demonstrate that the notion of "weak consistency" introduced in [8] admits database states that are strictly inconsistent.

01 Jan 1981
TL;DR: A new design approach is proposed which removes much of this duplication by defining independent subsystems used by both the DBMS and OS, and makes use of a logical information model which models the stored information in secondary storage.
Abstract: Data Base Management Systems (DBMSs) today are usually built as subsystems on top of an Operating System (OS). This design approach can lead to problems of reliability and efficient performance as well as forcing a duplication of functions between the DBMS and OS. A new design approach is proposed which removes much of this duplication by defining independent subsystems used by both the DBMS and OS. Specifically, an I/O and file support subsystem and a security subsystem are defined. Both subsystems make use of a logical information model which models the stored information in secondary storage. The new data base operating system organization and the logical information model are presented in detail. Design of the security subsystem is stressed throughout. The security subsystem is based on the access control model, and is extended with conditional predicates (Boolean expressions) to produce an access control model with content-dependent security policies. The access matrix is implemented using a combination of access lists and capabilities. A capability is created when an object is first referenced, and can be used for subsequent accesses. In addition, the security subsystem contains: (1) an authorization model, (2) a multiprocessing ability, (3) concurrency control, and (4) a mechanism to temporarily amplify the rights of a capability. A formal specification and proof of correctness of the security subsystem is also presented.

01 Jan 1981
TL;DR: A "passive" concurrency control technique is presented which makes use of the broadcast nature of the communications bus, and two algorithms are presented and shown to be robust with respect to communication and processor failures.
Abstract: Some large business database systems are characterized by a high volume of short transactions (e.g. credit/debit account). In such systems, data retrieval costs are fixed and unavoidable. However, overhead due to concurrency control and recovery protocols may be reduced, resulting in higher throughput and shorter response times. This research addresses the problems of concurrency control and recovery (collectively referred to as transaction management) in very large business applications. It is felt that traditional solutions to the problem (i.e. ever-larger centralized machines) will be inadequate as applications grow. An architecture for a database management system distributed over a local broadcast network is proposed. A "passive" concurrency control technique is presented which makes use of the broadcast nature of the communications bus. By eavesdropping for request messages on the broadcast bus, a single concurrency control node can perform conflict analysis for the entire system without explicit lock messages. Two algorithms are presented and shown to be robust with respect to communication and processor failures: a passive locking algorithm and a passive non-locking algorithm. Simulation results indicate that the passive schemes have very low overhead and perform better than corresponding distributed algorithms. Also, the cost of the recovery protocol necessary to ensure atomic commit at all sites (i.e. the distributed two-phase commit protocol) is shown to be quite high and, in many cases, overshadows the cost of concurrency control.
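The eavesdropping idea can be sketched as a single monitor that observes every request message on the bus and performs conflict analysis without any explicit lock messages. This is a hypothetical sketch, not the dissertation's algorithms, which additionally handle communication and processor failures; all names are assumptions.

```python
# Sketch of "passive" conflict analysis: a monitor node sees every
# request message broadcast on the bus and flags read-write or
# write-write conflicts between active transactions.

class PassiveMonitor:
    def __init__(self):
        self.active = {}   # txn -> set of (op, item) observed so far

    def observe(self, txn, op, item):
        # Called once per request message seen on the broadcast bus.
        for other, ops in self.active.items():
            if other != txn:
                for oop, oitem in ops:
                    if oitem == item and "w" in (op, oop):
                        return "conflict with %s" % other
        self.active.setdefault(txn, set()).add((op, item))
        return "ok"

    def commit(self, txn):
        # A commit message ends the transaction's conflict window.
        self.active.pop(txn, None)

m = PassiveMonitor()
assert m.observe("T1", "w", "acct42") == "ok"
assert m.observe("T2", "r", "acct42") == "conflict with T1"
m.commit("T1")
assert m.observe("T2", "r", "acct42") == "ok"
```

The point of the scheme is that the sites themselves send no lock traffic; the monitor derives everything from messages it would see anyway.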

Proceedings ArticleDOI
Paul Decitre1
04 May 1981
TL;DR: The distributed concurrency control algorithm is described as an improvement of the proposal made by Rosenkrantz, Stearns and Lewis, and key technical features such as deadlock prevention, wrong aborts, parallel execution, and the relation between concurrency control and commitment are detailed.
Abstract: As a continuation of the POLYPHEME study, the Cii-Honeywell Bull research center has launched a project on co-operating transactional systems with particular attention paid to distributed concurrency control and commitment.Following the presentation of the application-driven approach being taken, the distributed concurrency control algorithm is described as an improvement of the proposal made by Rosenkrantz, Stearns and Lewis. Salient technical features such as deadlock prevention, wrong aborts, parallel execution, and relation between concurrency control and commitment are detailed. Then the main choices are justified, and the rejected techniques criticized.
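The Rosenkrantz/Stearns/Lewis proposal prevents deadlock with transaction timestamps; under their wound-wait rule, an older requester aborts ("wounds") a younger lock holder, while a younger requester simply waits, so no wait-for cycle can form. A minimal sketch of that rule follows (illustrative only, not the improved Cii-Honeywell Bull algorithm described here; the class name is an assumption).

```python
# Wound-wait deadlock prevention: smaller timestamp = older transaction.
# On a lock conflict, an older requester wounds (aborts) the younger
# holder; a younger requester waits. Waits only ever go from younger to
# older, so the wait-for graph cannot contain a cycle.

class WoundWaitLockManager:
    def __init__(self):
        self.holders = {}         # item -> (txn, timestamp)
        self.aborted = set()      # transactions wounded so far

    def acquire(self, txn, ts, item):
        held = self.holders.get(item)
        if held is None:
            self.holders[item] = (txn, ts)
            return "granted"
        htxn, hts = held
        if ts < hts:
            # Requester is older: wound the younger holder and take over.
            self.aborted.add(htxn)
            self.holders[item] = (txn, ts)
            return "granted (wounded %s)" % htxn
        return "wait"             # requester is younger: safe to wait

lm = WoundWaitLockManager()
assert lm.acquire("T2", 2, "x") == "granted"
assert lm.acquire("T1", 1, "x") == "granted (wounded T2)"  # older wins
assert lm.acquire("T3", 3, "x") == "wait"                  # younger waits
```

A wounded transaction is restarted with its original timestamp, so it eventually becomes the oldest and cannot be wounded forever.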