
Showing papers on "Concurrency control published in 1979"


Journal ArticleDOI
TL;DR: A “majority consensus” algorithm which represents a new solution to the update synchronization problem for multiple copy databases is presented and can function effectively in the presence of communication and database site outages.
Abstract: A “majority consensus” algorithm which represents a new solution to the update synchronization problem for multiple copy databases is presented. The algorithm embodies distributed control and can function effectively in the presence of communication and database site outages. The correctness of the algorithm is demonstrated and the cost of using it is analyzed. Several examples that illustrate aspects of the algorithm operation are included in the Appendix.

1,136 citations
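The voting idea at the heart of the algorithm can be sketched compactly. The following is a minimal Python illustration, not Thomas's full algorithm (which also covers deferred voting, inter-site request forwarding, and timestamp-based rejection of obsolete requests); the class names and the simple timestamp scheme are assumptions made for the sketch.

```python
# Minimal sketch of the majority-voting idea for replicated updates.
# Illustrative only: the full algorithm also handles deferred votes,
# inter-site messaging, and resolution of obsolete requests.

from dataclasses import dataclass

@dataclass
class Replica:
    # Each replica stores a value and the timestamp of the last applied update.
    value: int = 0
    timestamp: int = 0

    def vote(self, base_ts: int) -> bool:
        # Accept only if the requester read a current copy; reject stale bases.
        return base_ts >= self.timestamp

def majority_update(replicas, base_ts, new_value, new_ts) -> bool:
    """Commit the update iff a majority of replicas vote to accept it."""
    yes_votes = sum(1 for r in replicas if r.vote(base_ts))
    if yes_votes * 2 > len(replicas):
        for r in replicas:                 # propagate to all copies on commit
            r.value, r.timestamp = new_value, new_ts
        return True
    return False                           # rejected: re-read and retry

replicas = [Replica(value=10, timestamp=5) for _ in range(5)]
print(majority_update(replicas, base_ts=5, new_value=42, new_ts=6))  # True
print(majority_update(replicas, base_ts=5, new_value=99, new_ts=7))  # False
```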


Journal ArticleDOI
TL;DR: In this article, the authors present algorithms for ensuring the consistency of a distributed relational data base subject to multiple concurrent updates and mechanisms to correctly update multiple copies of objects and to continue operation when less than all machines in the network are operational.
Abstract: This paper contains algorithms for ensuring the consistency of a distributed relational data base subject to multiple, concurrent updates. Also included are mechanisms to correctly update multiple copies of objects and to continue operation when less than all machines in the network are operational. Together with [4] and [12], this paper constitutes the significant portions of the design for a distributed data base version of INGRES.

342 citations


Journal ArticleDOI
TL;DR: It is shown why locking mechanisms lead to correct operation, that two proposed mechanisms for distributed environments are special cases of locking, and that a new version of locking allows more concurrency than past methods.
Abstract: An arbitrary interleaved execution of transactions in a database system can lead to an inconsistent database state. A number of synchronization mechanisms have been proposed to prevent such spurious behavior. To gain insight into these mechanisms, we analyze them in a simple centralized system that permits one read operation and one write operation per transaction. We show why locking mechanisms lead to correct operation, we show that two proposed mechanisms for distributed environments are special cases of locking, and we present a new version of locking that allows more concurrency than past methods. We also examine conflict graph analysis, the method used in the SDD-1 distributed database system, we prove its correctness, and we show that it can be used to substantially improve the performance of almost any synchronization mechanism.

248 citations
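As a flavor of why locking "leads to correct operation": the classic sufficient condition is two-phase locking, in which every lock is acquired before any lock is released. A minimal sketch under that assumption (hypothetical names, a single lock mode, no deadlock handling):

```python
# Two-phase locking sketch: all acquisitions precede all releases, the
# classic sufficient condition for serializable schedules.

class TwoPhaseTransaction:
    def __init__(self, lock_table):
        self.lock_table = lock_table   # item -> owning transaction or None
        self.held = set()
        self.shrinking = False         # once True, no further acquisitions

    def lock(self, item) -> bool:
        if self.shrinking:
            raise RuntimeError("2PL violation: lock after first unlock")
        if self.lock_table.get(item) not in (None, self):
            return False               # conflict: caller must wait or abort
        self.lock_table[item] = self
        self.held.add(item)
        return True

    def unlock_all(self):
        self.shrinking = True          # enter the shrinking phase
        for item in self.held:
            self.lock_table[item] = None
        self.held.clear()

table = {}
t = TwoPhaseTransaction(table)
assert t.lock("x") and t.lock("y")     # growing phase: read x, write y
t.unlock_all()                         # shrinking phase: release everything
```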


Proceedings ArticleDOI
30 May 1979
TL;DR: It is formally shown that the performance of a scheduler, i.e., the amount of parallelism that it supports, depends explicitly upon the amount of information that is available to the scheduler.
Abstract: A concurrency control mechanism (or a scheduler) is the component of a database system that safeguards the consistency of the database in the presence of interleaved accesses and update requests. We formally show that the performance of a scheduler, i.e., the amount of parallelism that it supports, depends explicitly upon the amount of information that is available to the scheduler. We point out that most previous work on concurrency control is simply concerned with specific points of this basic trade-off between performance and information. In fact, several of these approaches are shown to be optimal for the amount of information that they use.

111 citations


Proceedings ArticleDOI
03 Oct 1979
TL;DR: Two families of non-locking concurrency controls are presented and the methods used are "optimistic" in the sense that they rely mainly on transaction backup as a control mechanism, "hoping" that conflicts between transactions will not occur.
Abstract: Most current approaches to concurrency control in database systems rely on locking of data objects as a control mechanism. In this paper, two families of non-locking concurrency controls are presented. The methods used are "optimistic" in the sense that they rely mainly on transaction backup as a control mechanism, "hoping" that conflicts between transactions will not occur. Applications where these methods should be more efficient than locking are discussed.

91 citations
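The validation step can be sketched briefly. This is a much-simplified rendering of the read-set/write-set idea (serial validation only; the paper's families include further refinements such as parallel validation), with illustrative names:

```python
# Optimistic concurrency control sketch: run without locks, then validate
# the read set against write sets of transactions that committed meanwhile;
# on conflict, back up (restart) the transaction.

committed_write_sets = []   # list of (commit_number, write_set)
commit_counter = 0

class OptimisticTxn:
    def __init__(self):
        self.start_cn = commit_counter        # commit number at txn start
        self.read_set, self.write_set = set(), set()

    def read(self, item):  self.read_set.add(item)
    def write(self, item): self.write_set.add(item)

    def try_commit(self) -> bool:
        global commit_counter
        # No transaction that committed after we started may have written
        # anything we read ("hoping" this rarely happens).
        for cn, ws in committed_write_sets:
            if cn > self.start_cn and ws & self.read_set:
                return False                  # conflict: back up and retry
        commit_counter += 1
        committed_write_sets.append((commit_counter, set(self.write_set)))
        return True

t1, t2 = OptimisticTxn(), OptimisticTxn()
t1.read("a"); t1.write("a")
t2.read("a"); t2.write("b")
print(t1.try_commit())   # True
print(t2.try_commit())   # False: t1 wrote "a" after t2 started
```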


Book
01 Jan 1979
TL;DR: A new technique for analyzing the performance of update algorithms for replicated data in a distributed database based on queueing theory is developed and the results show that centralized control algorithms nearly always perform better than the more popular distributed control algorithms.
Abstract: In this thesis we study the performance of update algorithms for replicated data in a distributed database. In doing so, we also investigate several other related issues. We start by presenting a simple model of a distributed database which is suitable for studying updates and concurrency control. We also develop a performance model and a set of parameters which represent the most important performance features of a distributed database. The distributed database models are used to study the performance of update algorithms for replicated data. This is done in two steps. First the algorithms are analyzed in the case of completely replicated databases in a no-failure, update-only environment. Then, the restrictions that we made are eliminated one at a time, and the impact on the system performance of doing this is evaluated. For the first step, we develop a new technique for analyzing the performance of update algorithms. This iterative technique is based on queueing theory. Several well-known update algorithms are analyzed using this technique. The performance results are verified through detailed simulations of the algorithms. The results show that centralized control algorithms nearly always perform better than the more popular distributed control algorithms.

82 citations
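Illustrative only: the thesis develops its own iterative analysis of specific update algorithms; the fragment below merely shows the elementary M/M/1 response-time formula that such queueing analyses build on, with made-up parameter values.

```python
# Basic M/M/1 mean response time: the building block of queueing analyses
# of update processing at a single database site (parameters assumed).

def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean time an update spends queued plus in service, W = 1/(mu - lambda)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: utilization >= 1")
    return 1.0 / (service_rate - arrival_rate)

# e.g. updates arriving at 8/sec at a site that processes 10/sec:
print(mm1_response_time(8.0, 10.0))   # 0.5 seconds mean response time
```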


Proceedings ArticleDOI
29 Oct 1979
TL;DR: The theory of database concurrency control bears a superficial similarity to operating-systems-inspired concurrency theory.
Abstract: A database consists of entities which relate to each other in certain ways, i.e., they satisfy certain consistency constraints. Many times, when a user updates the database, he may have to violate temporarily these constraints in order to eventually transform the database into a new, consistent state. For this reason, atomic actions by the same user are grouped together into units of consistency called transactions. In practice, a transaction may be either an interactive session, or the execution of a user update program. When, however, many transactions access and update the same database concurrently, there must be some kind of coordination to ensure that the resulting sequence of interleaved atomic actions (or schedule) is correct. This means that all transactions have a consistent view of the data, and furthermore the database is left at the end in some consistent state. This required coordination is achieved via the concurrency control mechanism of the database. Considerable research effort has been devoted recently to the theoretical aspects of the design of such a system [EGLT, SLR, SK, KS, Pa, PBR, KP]. The theory of database concurrency control bears a superficial similarity to the operating-systems-inspired concurrency theory [KM, CD]. The difference is that in operating systems we have cooperating, monitoring, and monitored processes, and the goal is to prevent bad cooperation or management (e.g. indeterminacy, deadlocks). In databases, we have a population of users that are unaware of each other's presence...

80 citations


Proceedings ArticleDOI
06 Nov 1979
TL;DR: This paper derives necessary and sufficient conditions for the serializability of concurrent execution of transactions, regardless of the underlying mechanism controlling such execution, and outlines a proposal for a new approach to concurrency control which is motivated by the results presented.
Abstract: In this paper we report our preliminary results on the correctness of concurrency control and we outline a new approach to concurrency control. First, we derive necessary and sufficient conditions for the serializability of concurrent execution of transactions, regardless of the underlying mechanism (i.e., locking or timestamps) controlling such execution. We then outline a proposal for a new approach to concurrency control which is motivated by the results presented in the first part of the paper. The approach is based on detection of nonserializable actions and subsequent recovery, rather than the prevention or avoidance methods which are used by existing concurrency control mechanisms.

45 citations
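The classical statement of such a condition is that a schedule is serializable iff its conflict (precedence) graph over transactions is acyclic. A small sketch of that test, offered in the spirit of the paper's result rather than as its exact formulation:

```python
# Conflict-graph serializability test: add an edge t1 -> t2 whenever an
# operation of t1 conflicts with a later operation of t2 (same item,
# different transactions, at least one write); serializable iff acyclic.

def serializable(schedule) -> bool:
    """schedule: list of (txn_id, 'r' or 'w', item) in execution order."""
    edges = set()
    for i, (t1, op1, x1) in enumerate(schedule):
        for t2, op2, x2 in schedule[i + 1:]:
            if x1 == x2 and t1 != t2 and 'w' in (op1, op2):
                edges.add((t1, t2))

    nodes = {t for t, _, _ in schedule}
    WHITE, GRAY, BLACK = 0, 1, 2
    color = dict.fromkeys(nodes, WHITE)

    def has_cycle(u) -> bool:           # depth-first search for a back edge
        color[u] = GRAY
        for a, b in edges:
            if a == u and (color[b] == GRAY or
                           (color[b] == WHITE and has_cycle(b))):
                return True
        color[u] = BLACK
        return False

    return not any(color[n] == WHITE and has_cycle(n) for n in nodes)

print(serializable([(1, 'r', 'x'), (2, 'w', 'x'), (1, 'w', 'x')]))  # False
print(serializable([(1, 'r', 'x'), (1, 'w', 'x'), (2, 'w', 'x')]))  # True
```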


Proceedings ArticleDOI
29 Oct 1979
TL;DR: This paper is concerned with the problem of developing locking protocols for ensuring the consistency of database systems that are accessed concurrently by a number of independent transactions and shows the weak protocol to ensure consistency and deadlock-freedom only for databases that are organized as trees.
Abstract: This paper is concerned with the problem of developing locking protocols for ensuring the consistency of database systems that are accessed concurrently by a number of independent transactions. It is assumed that the database is modelled by a directed acyclic graph whose vertices correspond to the database entities, and whose arcs correspond to certain locking restrictions. Several locking protocols are presented. The weak protocol is shown to ensure consistency and deadlock-freedom only for databases that are organized as trees. For the databases that are organized as directed acyclic graphs, the strong protocol is presented. Discussion of SHARED and EXCLUSIVE locks is also included.

36 citations
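The essential rule of the tree case is that, after its first lock, a transaction may lock a vertex only while holding a lock on that vertex's parent, and may never re-lock a vertex it has released. A minimal sketch of that rule (illustrative names; exclusive locks only, no scheduling):

```python
# Tree-protocol rule checker: first lock anywhere; afterwards a vertex may
# be locked only while its parent is held, and never re-locked once released.

class TreeProtocolTxn:
    def __init__(self, parent):
        self.parent = parent          # vertex -> parent vertex in the tree
        self.held, self.released = set(), set()
        self.first_done = False

    def lock(self, v):
        if v in self.released:
            raise RuntimeError("protocol violation: re-locking a vertex")
        if self.first_done and self.parent.get(v) not in self.held:
            raise RuntimeError("protocol violation: parent not held")
        self.held.add(v)
        self.first_done = True

    def unlock(self, v):
        self.held.remove(v)
        self.released.add(v)

# Tree: "a" is the root; "b" and "c" are children of "a".
t = TreeProtocolTxn({"b": "a", "c": "a"})
t.lock("a"); t.lock("b")      # fine: "a" is held when locking child "b"
t.unlock("a")
try:
    t.lock("c")               # violation: parent "a" is no longer held
except RuntimeError as e:
    print(e)
```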




01 Jan 1979
TL;DR: The thesis presents a model for computation in a distributed information system in which the sites and communication links may fail, and discusses implementation techniques that could be used to limit the effects of failures in a real system to those described in the model.
Abstract: This dissertation presents a collection of protocols for coordinating transactions in a distributed information system. The system is modeled as a collection of processes that communicate only through message passing. Each process manages some portion of the data base, and several processes may cooperate in performing a single transaction. The thesis presents a model for computation in a distributed information system in which the sites and communication links may fail. The effects of such failures on the computation are described in the model. The thesis discusses implementation techniques that could be used to limit the effects of failures in a real system to those described in the model. A hierarchical protocol for coordinating transactions is presented: transactions are pre-analyzed to select the protocols needed to coordinate the processes that participate in the implementation of each transaction. This analysis can be used to guide the organization of the data base so as to minimize the amount of locking required in performing frequent or important transactions. An important aspect of this mechanism is that it allows transactions that cannot accurately be pre-analyzed to be performed and correctly synchronized without severely degrading the performance of the system in performing more predictable transactions. A novel approach to the problem of making updates at several different sites atomically is also discussed. This approach is based on the notion of a polyvalue, which is used to represent two or more possible values for a single data item. A polyvalue is created for an item involved in an update that has been delayed due to a failure. By assigning a polyvalue to such an item, that item can be made accessible to subsequent transactions, rather than remaining locked until the update can be completed. A polyvalue describes the possible values that may be correct for an item, depending on the outcome of transactions that have been interrupted by failures. Frequently, the most important effects of a transaction (such as the payment of money) can be determined without knowing the exact values of the items in the data base. A polyvalue for an item that is accessed by such a transaction may be sufficient to determine such effects. By using polyvalues, we can guarantee that a data item will not be made inaccessible by any failure other than a failure of the site that holds the item. A strong motivation for the development of these protocols is the desire that the individual sites of a distributed information system fail independently, and that a site or a group of sites be able to continue local processing operations when a failure has isolated them from the rest of the sites. Many of the previous coordination mechanisms have only considered the continued operation of the sites that remain with the system to be important. Another motivating factor for the development of these protocols is the idea that in many applications, the processing to be performed exhibits a high degree…
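The polyvalue idea lends itself to a small sketch. The class below is an illustrative reconstruction, not the thesis's implementation: an item blocked by an in-doubt update carries one value per possible outcome of the interrupted transaction, and a later transaction may proceed whenever its decision comes out the same for every possible value.

```python
# Polyvalue sketch: a data item whose true value depends on the outcome of
# a transaction interrupted by a failure (names and types are assumptions).

from dataclasses import dataclass

@dataclass
class Polyvalue:
    txn_id: int        # the in-doubt transaction whose outcome selects a value
    on_commit: float
    on_abort: float

    def possible_values(self):
        return {self.on_commit, self.on_abort}

    def resolve(self, committed: bool) -> float:
        # Once the interrupted transaction's fate is known, collapse to one value.
        return self.on_commit if committed else self.on_abort

# A balance of 100 with a delayed withdrawal of 30 left in doubt:
balance = Polyvalue(txn_id=17, on_commit=70.0, on_abort=100.0)
# A later transaction can still decide "balance >= 50" without waiting,
# because the predicate holds for every possible value:
print(all(v >= 50 for v in balance.possible_values()))  # True
print(balance.resolve(committed=True))                  # 70.0
```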

Proceedings ArticleDOI
01 Jun 1979
TL;DR: The purpose is to survey the literature on concurrency control, concentrating on three approaches—locking, majority consensus, and SDD-1 protocols—which together subsume the bulk of the literature.
Abstract: Whenever multiple users or programs access a data base concurrently, the problem of concurrency control arises. The problem is to synchronize concurrent interactions so that each reads consistent data from the data base, writes consistent data, and is ultimately processed to completion. In a distributed data base this problem is exacerbated because a concurrency control mechanism at one site cannot instantaneously know about interactions at other sites. No fewer than 30 papers on this topic have appeared to date. Our purpose is to survey this literature, concentrating on three approaches—locking, majority consensus, and SDD-1 protocols—which together subsume the bulk of the literature.

Proceedings ArticleDOI
03 Oct 1979
TL;DR: An efficient deadlock detection algorithm that requires only Sub-Wait-Graphs to be built and maintained is proposed, making use of this property.
Abstract: There are two deadlock detection methods in a distributed database. One is centralized, and the other is distributed. In this paper a distributed method is discussed. Sub-Wait-Graphs, which express the state of execution of transactions at individual sites, are introduced, and a sufficient condition for a global deadlock not to occur is given, based on the Sub-Wait-Graph. This sufficient condition makes it possible for deadlock detection to be separated into two phases, local deadlock detection and global deadlock detection. Also, an efficient deadlock detection algorithm that requires only Sub-Wait-Graphs to be built and maintained is proposed by making use of this property. The characteristics and effects of this algorithm are discussed.
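The local phase amounts to a cycle check on the site's wait-for graph. A minimal sketch of that phase (the graph encoding is an assumption; the paper's algorithm additionally exchanges Sub-Wait-Graph information among sites for the global phase):

```python
# Local deadlock detection: a deadlock exists at a site iff its wait-for
# graph (transaction -> transactions it is blocked behind) has a cycle.

def local_deadlock(waits_for) -> bool:
    visiting, done = set(), set()

    def cyclic(t) -> bool:               # depth-first search for a back edge
        visiting.add(t)
        for u in waits_for.get(t, ()):
            if u in visiting or (u not in done and cyclic(u)):
                return True
        visiting.discard(t)
        done.add(t)
        return False

    return any(t not in done and cyclic(t) for t in list(waits_for))

print(local_deadlock({1: {2}, 2: {3}}))          # False: just a wait chain
print(local_deadlock({1: {2}, 2: {3}, 3: {1}}))  # True: a deadlock cycle
```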

Journal ArticleDOI
D.J. Rypka, A.P. Lucido
TL;DR: Mode compatibility is defined and used to derive deadlock detection and avoidance methods that generalize well-known deadlock results for single unit resources by permitting greater concurrency while still guaranteeing data consistency.
Abstract: Logical resources are defined as shared passive entities that can be concurrently accessed by multiple processes. Concurrency restrictions depend upon the mode or manner in which a process may manipulate a resource. Models incorporating these single unit resources can be used to analyze information locking for consistency and integrity purposes. Mode compatibility is defined and used to derive deadlock detection and avoidance methods. These methods generalize well-known deadlock results for single unit resources by permitting greater concurrency while still guaranteeing data consistency. This model is applicable to the standard shared (read-only) and exclusive (read-write) access modes as well as a useful subset of those proposed in the CODASYL DBMS report.
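For the standard shared/exclusive modes, compatibility reduces to a small matrix consulted before granting an access. A sketch using that standard matrix (richer mode sets, e.g. from the CODASYL proposals, would simply add rows and columns):

```python
# Mode compatibility check: grant a requested mode only if it is compatible
# with every mode currently held on the resource.

COMPATIBLE = {
    ("S", "S"): True,    # two readers may share an item
    ("S", "X"): False,   # a held shared lock blocks a writer
    ("X", "S"): False,   # a held exclusive lock blocks a reader
    ("X", "X"): False,   # writers are mutually exclusive
}

def can_grant(requested, held_modes) -> bool:
    return all(COMPATIBLE[(held, requested)] for held in held_modes)

print(can_grant("S", ["S", "S"]))  # True: shared locks coexist
print(can_grant("X", ["S"]))       # False: must wait for the reader
```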

Journal ArticleDOI
A. Nader
TL;DR: A general view of the problems that arise in the design of distributed computer control systems and some of the present approaches to solve them are presented.

Book ChapterDOI
02 Jul 1979
TL;DR: A distributed system based on communication among disjoint processes is considered, in which each process is capable of achieving a post condition of its local space in such a way that the conjunction of local post conditions implies a global post condition of the whole system.
Abstract: We consider a distributed system based on communication among disjoint processes, in which each process is capable of achieving a post condition of its local space in such a way that the conjunction of local post conditions implies a global post condition of the whole system. We then augment the system with extra control communication, in order to achieve distributed termination, without adding new channels of communication. The algorithm is applied to a problem of sorted partition.

Journal ArticleDOI
TL;DR: This address draws a parallel between the major development phases of the first 20 years of concurrent programming and the present challenge of distributed computing.
Abstract: Delivered at COMPSAC 78, this address draws a parallel between the major development phases of the first 20 years of concurrent programming and the present challenge of distributed computing.


Proceedings ArticleDOI
03 Oct 1979
TL;DR: The organization of an autonomous processor supporting database management functions is presented, with the aim of achieving higher capability and performance.
Abstract: The organization of an autonomous processor supporting database management functions is presented. The DataBase Concurrent Processor (DBCP) can be thought of as a back-end data management machine for a general purpose host computer; it supports the relational data model directly in hardware, and is able to run concurrently a number of programs written with a relationally complete instruction set. DBCP is composed of a parallel organization of cells and a Coordination Unit (CU). Each cell supports and processes the tuples of a unique relation and consists of a special purpose microprocessor and a circulating serial memory. During a full memory revolution each microprocessor performs an access on its own memory block while, at the same time, the CU transmits all information necessary for the access to be made in the next revolution. The functional independence of the microprocessors allows the CU to organize different concurrency control strategies, trying to make maximum use of the cellular organization's processing capability. This concurrent processing capability at the back-end level better fits the multiuser nature of database systems (shared resources), and consequently higher capability and performance rates are expected.



Proceedings ArticleDOI
27 Nov 1979
TL;DR: This paper focuses on the concurrency control problem for a distributed database system; a new control philosophy called the hierarchical processing structure, derived from two distinct types of consistency, is proposed and provides the features below.
Abstract: This paper focuses on the concurrency control problem for a distributed database system. A new control philosophy called the hierarchical processing structure is proposed. Two different types of consistency are clearly defined, and the hierarchical processing structure is derived from these consistency types. This structure provides the following features: 1) the centralization of processing load on a particular site can be avoided; 2) two distinct types of updating mechanism are defined according to the two aspects of data consistency; 3) a comprehensible philosophy for concurrency control is established.


Proceedings ArticleDOI
06 Nov 1979
TL;DR: Recovery under a wide spectrum of system malfunctions is considered in a distributed database environment; the "domino effect" is modeled, and a rollback algorithm is developed.
Abstract: Recovery under a wide spectrum of system malfunctions is considered in a distributed database environment. Knowledge of the nature of all the processes that access the data is effectively used to design system recovery protocols for different classes of transactions. A policy for optimal checkpointing is derived through the use of a simple model, and an audit trail maintenance policy is outlined. The "domino effect" is modeled, and a rollback algorithm is developed.
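Illustrative only: the paper derives its own checkpointing policy from its model; the fragment below shows the classic first-order rule of the same flavor (a square-root trade-off between checkpoint cost and failure rate), with hypothetical parameter values.

```python
# First-order optimal checkpoint interval: balance checkpoint overhead
# against expected recomputation after a failure (parameters assumed).

import math

def optimal_checkpoint_interval(checkpoint_cost_s: float, mtbf_s: float) -> float:
    """Interval minimizing expected overhead: sqrt(2 * C * MTBF)."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# e.g. a checkpoint costs 30 s and failures occur about once a day:
print(optimal_checkpoint_interval(30.0, 86_400.0))  # ~2277 s between checkpoints
```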



Proceedings ArticleDOI
06 Nov 1979
TL;DR: It is found that existing ESS software structures can be described in the framework of operating systems and that in general operating system concepts can be positively applied to new designs to meet requirements of the switching environment.
Abstract: Operating systems that provide resource allocation, concurrency control, and administrative services are integral parts of general purpose computing systems. Electronic Switching Systems (ESS) are real-time systems that control the switching of a large number of phone calls. Control functions similar to those of operating systems exist in these ESSs, but they are seldom distinguished or structured as an operating system in the literature. This paper examines the characteristics of the telephone switching environment and applies commonly known operating system concepts to structuring the software and control of telephone switching. It is found that existing ESS software structures can be described in the framework of operating systems and that in general operating system concepts can be positively applied to new designs. However, special design considerations must be made in process structure, scheduling, and memory management to meet requirements of the switching environment.

Journal ArticleDOI
TL;DR: The properties of both "classical" computer systems, the batch computer system and the process control computer system, are examined, finding that their properties become similar and their differences reduce to parameter variations.