Showing papers on "Concurrency control" published in 1978


Proceedings Article
01 Jan 1978
TL;DR: The authors present algorithms for ensuring the consistency of a distributed relational data base subject to multiple concurrent updates, along with mechanisms to correctly update multiple copies of objects and to continue operation when not all machines in the network are operational.
Abstract: This paper contains algorithms for ensuring the consistency of a distributed relational data base subject to multiple, concurrent updates. Also included are mechanisms to correctly update multiple copies of objects and to continue operation when less than all machines in the network are operational. Together with [4] and [12], this paper constitutes the significant portions of the design for a distributed data base version of INGRES.

374 citations
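The abstract above centers on updating multiple copies of objects correctly while some machines are down. As a point of reference only, the sketch below shows one generic way multi-copy updating is often structured: deliver the update to every reachable copy and queue it for sites that are currently unavailable. It is an illustrative assumption, not the distributed INGRES design, and every name in it is invented.

```python
# Hypothetical sketch: propagate an update to every copy of an object,
# queueing it for sites that are currently down. A generic illustration
# of multi-copy update, not the distributed INGRES algorithm.

class ReplicaSet:
    def __init__(self, sites):
        # sites: mapping of site name -> is_up flag (assumed interface)
        self.sites = sites
        self.pending = {name: [] for name in sites}   # deferred updates per site

    def apply_update(self, update, send):
        """Send `update` to every live copy; queue it for unreachable sites."""
        for name, is_up in self.sites.items():
            if is_up:
                send(name, update)                     # deliver immediately
            else:
                self.pending[name].append(update)      # replay when the site recovers

    def site_recovered(self, name, send):
        """Replay queued updates in order once a failed site comes back."""
        for update in self.pending[name]:
            send(name, update)
        self.pending[name].clear()
        self.sites[name] = True


# Example usage with a stand-in transport function.
if __name__ == "__main__":
    replicas = ReplicaSet({"site_a": True, "site_b": False})
    replicas.apply_update(("emp", 42, {"salary": 100}),
                          send=lambda site, u: print(f"to {site}: {u}"))
    replicas.site_recovered("site_b",
                            send=lambda site, u: print(f"replay to {site}: {u}"))
```

A real design must also serialize concurrent updates and decide how conflicting updates to the copies are ordered, which is exactly the territory the paper's consistency algorithms cover.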


Journal ArticleDOI
TL;DR: The authors present designs for several distributed concurrency controls, demonstrate that they work correctly, investigate some of the implications of global consistency of a distributed database, and discuss phenomena that can prevent termination of application programs.
Abstract: A distributed database system is one in which the database is spread among several sites and application programs “move” from site to site to access and update the data they need. The concurrency control is that portion of the system that responds to the read and write requests of the application programs. Its job is to maintain the global consistency of the distributed database while ensuring that the termination of the application programs is not prevented by phenomena such as deadlock. We assume each individual site has its own local concurrency control which responds to requests at that site and can only communicate with concurrency controls at other sites when an application program moves from site to site, terminates, or aborts. This paper presents designs for several distributed concurrency controls and demonstrates that they work correctly. It also investigates some of the implications of global consistency of a distributed database and discusses phenomena that can prevent termination of application programs.

360 citations
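The abstract describes each site running its own local concurrency control that answers read and write requests made at that site. The fragment below is a minimal sketch of the kind of per-site lock table such a local control might keep, with shared locks for reads and exclusive locks for writes. The interface and granularity are assumptions for illustration, not the designs presented in the paper.

```python
# Minimal sketch of a per-site lock table for a local concurrency control:
# shared locks for reads, exclusive locks for writes. Illustrative only.

class LocalLockTable:
    def __init__(self):
        self.readers = {}   # item -> set of transaction ids holding read locks
        self.writer = {}    # item -> transaction id holding the write lock

    def request_read(self, txn, item):
        # A read is granted unless another transaction holds the write lock.
        holder = self.writer.get(item)
        if holder is not None and holder != txn:
            return False
        self.readers.setdefault(item, set()).add(txn)
        return True

    def request_write(self, txn, item):
        # A write is granted only if no other transaction holds any lock on the item.
        other_readers = self.readers.get(item, set()) - {txn}
        holder = self.writer.get(item)
        if other_readers or (holder is not None and holder != txn):
            return False
        self.writer[item] = txn
        return True

    def release_all(self, txn):
        # Called when an application program terminates or aborts at this site.
        for holders in self.readers.values():
            holders.discard(txn)
        self.writer = {i: t for i, t in self.writer.items() if t != txn}
```

A denied request would leave the requesting program waiting, which is why the paper must also deal with deadlock and other phenomena that can prevent termination.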


Book ChapterDOI
TL;DR: Though attention is focused mainly on the design and operation of the network itself, the formation of a unified distributed processing system from a network of separate computer systems always remains the ultimate objective.
Abstract: Publisher Summary The field of distributed processing and computer networking is growing at a very rapid rate within the industry, government, and university communities. While distributed processing systems may differ from computer networks both in perspective and in environment, they do have some common characteristics. This chapter presents both an expository survey and research results on local computer networks in general and loop computer networks in particular. Different types of local loop computer networks are surveyed along with typical design problem areas, such as the loop interface design, message transmission protocols, the network operating system, the network command language, the distributed programming system, the distributed data base system, and performance studies. Though attention is focused mainly on the design and operation of the network itself, the formation of a unified distributed processing system from a network of separate computer systems always remains the ultimate objective. Since the loop topology is adopted for the communication subnetwork and control is fully distributed among the nodes of the network, the resulting system is called a distributed loop computer network.

152 citations


Journal ArticleDOI
TL;DR: The method used by SDD-1 for updating data that are stored redundantly is described; data are stored redundantly at several sites to enhance the reliability and responsiveness of the system and to facilitate upward scaling of system capacity.
Abstract: SDD-1, A System for Distributed Databases, is a distributed database system being developed by Computer Corporation of America (CCA), Cambridge, MA. SDD-1 permits data to be stored redundantly at several database sites in order to enhance the reliability and responsiveness of the system and to facilitate upward scaling of system capacity. This paper describes the method used by SDD-1 for updating data that are stored redundantly.

119 citations
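The abstract is about keeping redundantly stored data consistent under updates. As a hedged illustration of the problem, the sketch below shows one classic, generic convergence rule: tag each update with a timestamp and let every copy apply only updates newer than what it has already seen. This is not a description of SDD-1's actual mechanism; it only makes concrete why redundant-update methods are needed.

```python
# Generic "apply only if newer" rule for redundant copies, tagged by
# timestamp. Illustrative of the redundant-update problem, not SDD-1.

from dataclasses import dataclass

@dataclass
class Copy:
    value: object = None
    last_ts: float = float("-inf")   # timestamp of the last applied update

    def apply(self, value, ts):
        """Apply an update only if it is newer than the copy's current state."""
        if ts > self.last_ts:
            self.value, self.last_ts = value, ts
            return True
        return False   # stale update: ignore it so copies still converge


# Updates may reach different copies in different orders, yet every copy
# ends up holding the value that carries the highest timestamp.
a, b = Copy(), Copy()
for copy, order in ((a, [(10, "x"), (12, "y")]), (b, [(12, "y"), (10, "x")])):
    for ts, val in order:
        copy.apply(val, ts)
assert a.value == b.value == "y"
```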


Journal ArticleDOI
E.D. Jensen1
TL;DR: A fundamental thesis of the HXDP project is that the benefits and cost-effectiveness of distributed computer systems depend on the judicious use of hardware to control software costs.
Abstract: The Honeywell Experimental Distributed Processor (HXDP) is a vehicle for research in the science and engineering of processor interconnection, executive control, and user software for a certain class of multiple-processor computers which we call "distributed computer" systems. Such systems are very unconventional in that they accomplish total system-wide executive control in the absence of any centralized procedure, data, or hardware. The primary benefits sought by this research are improvements over more conventional architectures (such as multiprocessors and computer networks) in extensibility, integrity, and performance. A fundamental thesis of the HXDP project is that the benefits and cost-effectiveness of distributed computer systems depend on the judicious use of hardware to control software costs.

57 citations


Proceedings Article
01 Jan 1978
TL;DR: Two protocols for the detection of deadlocks in distributed data bases are described, a hierarchically organized one and a distributed one; neither protocol requires that the global graph be built and maintained in order for deadlocks to be detected.
Abstract: This paper describes two protocols for the detection of deadlocks in distributed data bases: a hierarchically organized one and a distributed one. A graph model which depicts the state of execution of all transactions in the system is used by both protocols. A cycle in this graph is a necessary and sufficient condition for a deadlock to exist. Nevertheless, neither protocol requires that the global graph be built and maintained in order for deadlocks to be detected. In the case of the hierarchical protocol, the communications cost can be optimized if the topology of the hierarchy is appropriately chosen.

54 citations
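The graph condition the abstract relies on, that a cycle in the transaction wait-for graph is necessary and sufficient for deadlock, is easy to state in code. The sketch below checks a global wait-for graph for a cycle with depth-first search; it is only a baseline illustration, since the whole point of the paper's two protocols is to detect such cycles without materializing the global graph at one place.

```python
# Cycle detection in a wait-for graph: a cycle means deadlock.
# Baseline centralized check, for illustration only.

def has_deadlock(waits_for):
    """waits_for: dict mapping a transaction to the transactions it waits on."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in waits_for}

    def dfs(t):
        color[t] = GRAY
        for u in waits_for.get(t, ()):
            if color.get(u, WHITE) == GRAY:      # back edge: cycle found
                return True
            if color.get(u, WHITE) == WHITE and dfs(u):
                return True
        color[t] = BLACK
        return False

    return any(color[t] == WHITE and dfs(t) for t in waits_for)


# T1 waits for T2, T2 waits for T3, T3 waits for T1: a deadlock.
print(has_deadlock({"T1": ["T2"], "T2": ["T3"], "T3": ["T1"]}))   # True
print(has_deadlock({"T1": ["T2"], "T2": ["T3"], "T3": []}))       # False
```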



Proceedings ArticleDOI
31 May 1978
TL;DR: A centralized locking protocol is proposed to coordinate access to a distributed database and to maintain system consistency throughout normal and abnormal conditions; the protocol is robust in the face of crashes of any participating site as well as communication failures.
Abstract: A locking protocol to coordinate access to a distributed database and to maintain system consistency throughout normal and abnormal conditions is presented. The proposed protocol is robust in the face of crashes of any participating site, as well as communication failures. Recovery from any number of failures during normal operation or any of the recovery stages is supported. Recovery is done in such a way that maximum forward progress is achieved by the recovery procedures. Integration of virtually any locking discipline including predicate lock methods is permitted by this protocol. The locking algorithm operates, and operates correctly, when the network is partitioned, either intentionally or by failure of communication lines. Each partition is able to continue with work local to it, and operation merges gracefully when the partitions are reconnected. A subroutine of the protocol, that assures reliable communication among sites, is shown to have better performance than two-phase commit methods. For many topologies of interest, the delay introduced by the overall protocol is not a direct function of the size of the network. The communications cost is shown to grow in a relatively slow, linear fashion with the number of sites participating in the transaction. An informal proof of the correctness of the algorithm is also presented in this paper. The algorithm has as its core a centralized locking protocol with distributed recovery procedures. A centralized controller with local appendages at each site coordinates all resource control, with requests initiated by application programs at any site. However, no site experiences undue load. Recovery is broken down into three disjoint mechanisms: for single node recovery, merge of partitions, and reconstruction of the centralized controller and tables. The disjointness of the mechanisms contributes to comprehensibility and ease of proof. The paper concludes with a proposal for an extension aimed at optimizing operation of the algorithm to adapt to highly skewed distributions of activity. The extension applies nicely to interconnected computer networks.

10 citations
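The core structure the abstract names, a centralized controller that coordinates all resource control for requests initiated at any site, can be illustrated with a toy lock controller like the one below. All of the substance of the paper (the recovery mechanisms for single-node failure, partition merge, and controller reconstruction, and the reliable-communication subroutine) is omitted, and the names are assumptions made for the sketch.

```python
# Toy central lock controller: grant a lock if the resource is free,
# otherwise queue the request FIFO. Recovery machinery not shown.

from collections import deque

class CentralController:
    def __init__(self):
        self.owner = {}        # resource -> (site, txn) currently holding it
        self.wait_queue = {}   # resource -> FIFO queue of waiting (site, txn)

    def request(self, site, txn, resource):
        """Grant the lock if the resource is free, otherwise queue the request."""
        if resource not in self.owner:
            self.owner[resource] = (site, txn)
            return "granted"
        self.wait_queue.setdefault(resource, deque()).append((site, txn))
        return "queued"

    def release(self, site, txn, resource):
        """Release a lock and hand it to the next waiter, if any."""
        if self.owner.get(resource) != (site, txn):
            return None
        queue = self.wait_queue.get(resource)
        if queue:
            self.owner[resource] = queue.popleft()   # wake the next requester
        else:
            del self.owner[resource]
        return self.owner.get(resource)
```

In the paper's design the interesting part is precisely what this sketch leaves out: how the controller and its tables are reconstructed after crashes and how partitions merge without losing consistency.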



Proceedings ArticleDOI
01 Aug 1978
TL;DR: A distributed architecture for an interactive information system is described, and a scheme for coordinating concurrent access to its data is presented; the scheme is deadlock free and is carried out without the need for centralized control.
Abstract: A distributed architecture for an interactive information system is described, and a scheme for coordinating concurrent access to its data is presented. The scheme is deadlock free and is carried out without the need for centralized control. Conflicts are detected as they occur, and competing processes are given exclusive access to the data they need.

7 citations
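The abstract claims deadlock freedom with exclusive access and no centralized control. One classic, generic way to obtain that property, shown in the hedged sketch below, is to impose a global order on data items and have every process acquire its locks in that order, which makes wait-for cycles impossible. This is an illustration of the property, not necessarily the scheme proposed in the paper.

```python
# Ordered lock acquisition: acquiring locks in one global order prevents
# wait-for cycles, hence deadlock. Generic technique, illustrative only.

import threading

# One lock per data item; the global order is simply the sorted item name.
locks = {name: threading.Lock() for name in ("accounts", "inventory", "orders")}

def with_exclusive_access(items, work):
    """Acquire the locks for `items` in global (sorted) order, then run `work`."""
    ordered = sorted(items)
    for name in ordered:
        locks[name].acquire()
    try:
        return work()
    finally:
        for name in reversed(ordered):
            locks[name].release()

# Two competing callers both lock "accounts" before "orders",
# so neither can end up waiting on the other in a cycle.
result = with_exclusive_access(["orders", "accounts"], lambda: "updated")
```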