
Showing papers on "Concurrency control published in 1987"


Journal ArticleDOI
TL;DR: It is shown that differences in the underlying assumptions explain the seemingly contradictory performance results, and the question of how realistic the various assumptions are for actual database systems is addressed.
Abstract: A number of recent studies have examined the performance of concurrency control algorithms for database management systems. The results reported to date, rather than being definitive, have tended to be contradictory. In this paper, rather than presenting “yet another algorithm performance study,” we critically investigate the assumptions made in the models used in past studies and their implications. We employ a fairly complete model of a database environment for studying the relative performance of three different approaches to the concurrency control problem under a variety of modeling assumptions. The three approaches studied represent different extremes in how transaction conflicts are dealt with, and the assumptions addressed pertain to the nature of the database system's resources, how transaction restarts are modeled, and the amount of information available to the concurrency control algorithm about transactions' reference strings. We show that differences in the underlying assumptions explain the seemingly contradictory performance results. We also address the question of how realistic the various assumptions are for actual database systems.

446 citations


Journal ArticleDOI
TL;DR: A uniform model in which published algorithms can be cast is given, and the fundamental principles on which distributed deadlock detection schemes are based are presented, and a hierarchy of deadlock models is presented.
Abstract: The problem of deadlock detection in distributed systems has undergone extensive study. An important application relates to distributed database systems. A uniform model in which published algorithms can be cast is given, and the fundamental principles on which distributed deadlock detection schemes are based are presented. These principles represent mechanisms for developing distributed algorithms in general and deadlock detection schemes in particular. In addition, a hierarchy of deadlock models is presented; each model is characterized by the restrictions that are imposed upon the form resource requests can assume. The hierarchy includes the well-known models of resource and communication deadlock. Algorithms are classified according to both the underlying principles and the generality of resource requests they permit. A number of algorithms are discussed in detail, and their complexity in terms of the number of messages employed is compared. The point is made that correctness proofs for such algorithms using operational arguments are cumbersome and error prone and, therefore, that only completely formal proofs are sufficient for demonstrating correctness.

268 citations


Proceedings ArticleDOI
Won Kim1, Jay Banerjee1, Hong-Tai Chou1, Jorge F. Garza1, Darrel Woelk1 
01 Dec 1987
TL;DR: This paper provides a formal definition of the semantics of composite objects within an object-oriented data model, and describes their use as units of integrity control, storage and retrieval, and concurrency control in a prototype object-oriented database system the authors have implemented.
Abstract: Many applications in such domains as computer-aided design require the capability to define, store and retrieve as a single unit a collection of related objects known as a composite object. A composite object explicitly captures and enforces the IS-PART-OF integrity constraint between child and parent pairs of objects in a hierarchical collection of objects. Further, it can be used as a unit of storage and retrieval to enhance the performance of a database system. This paper provides a formal definition of the semantics of composite objects within an object-oriented data model, and describes their use as units of integrity control, storage and retrieval, and concurrency control in a prototype object-oriented database system we have implemented.

241 citations


Journal ArticleDOI
TL;DR: This paper examines the data management requirements of group work applications on the basis of experience with three prototype systems and on observations from the literature, and database and object management technologies that support these requirements are briefly surveyed.
Abstract: Data sharing is fundamental to computer-supported cooperative work: People share information through explicit communication channels and through their coordinated use of shared databases. This paper examines the data management requirements of group work applications on the basis of experience with three prototype systems and on observations from the literature. Database and object management technologies that support these requirements are briefly surveyed, and unresolved issues in the particular areas of access control and concurrency control are identified for future research.

212 citations


Proceedings ArticleDOI
01 Dec 1987
TL;DR: The Datacycle architecture is introduced, an attempt to exploit the enormous transmission bandwidth of optical systems to permit the implementation of high throughput multiprocessor database systems.
Abstract: The evolutionary trend toward a database-driven public communications network has motivated research into database architectures capable of executing thousands of transactions per second. In this paper we introduce the Datacycle architecture, an attempt to exploit the enormous transmission bandwidth of optical systems to permit the implementation of high throughput multiprocessor database systems. The architecture has the potential for unlimited query throughput, simplified data management, rapid execution of complex queries, and efficient concurrency control. We describe the logical operation of the architecture and discuss implementation issues in the context of a prototype system currently under construction.

167 citations


Journal ArticleDOI
TL;DR: It is shown that the choice of the best deadlock resolution strategy depends upon the level of data contention, the resource utilization levels, and the types of transactions, and guidelines are provided for selecting a deadlock resolution strategy for different operating regions.
Abstract: There is growing evidence that, for a fairly wide variety of database workloads and system configurations, locking is the concurrency control strategy of choice. With locking, of course, comes the possibility of deadlocks. Although the database literature is full of algorithms for dealing with deadlocks, very little in the way of practical performance information is available to a database system designer faced with the decision of choosing a good deadlock resolution strategy. This paper is an attempt to bridge this gap in our understanding of the behavior and performance of alternative deadlock resolution strategies. We employ a simulation model of a database environment to study the relative performance of several strategies based on deadlock detection, several strategies based on deadlock prevention, and a strategy based on timeouts. We show that the choice of the best deadlock resolution strategy depends upon the level of data contention, the resource utilization levels, and the types of transactions. We provide guidelines for selecting a deadlock resolution strategy for different operating regions.

110 citations
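The prevention side of the trade-off this paper evaluates can be made concrete with the two classic timestamp-based rules, Wait-Die and Wound-Wait. The sketch below is illustrative only and is not a reproduction of the paper's simulated strategies:

```python
def wait_die(requester_ts, holder_ts):
    """Wait-Die prevention rule: an older transaction (smaller timestamp)
    may wait for a younger lock holder; a younger requester is aborted
    ("dies") immediately, so no waits-for cycle can ever form."""
    return "wait" if requester_ts < holder_ts else "abort"


def wound_wait(requester_ts, holder_ts):
    """Wound-Wait prevention rule: an older requester preempts ("wounds")
    the younger holder; a younger requester simply waits."""
    return "preempt" if requester_ts < holder_ts else "wait"
```

Both rules always resolve conflicts in favor of the older transaction, which guarantees freedom from deadlock at the cost of restarts that a detection-based scheme would avoid.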


Proceedings ArticleDOI
01 Dec 1987
TL;DR: This paper re-examines issues related to concurrency control and the behavior of a block context, and describes the evolution of the solutions for overcoming these shortcomings along with the model of computation in ConcurrentSmalltalk.
Abstract: ConcurrentSmalltalk is an object-oriented concurrent programming language/system which has been running since late 1985. ConcurrentSmalltalk has the following features: upper-compatibility with Smalltalk-80; asynchronous method calls and CBox objects yield concurrency; atomic objects run one at a time so that each can serialize the many requests sent to it. Through experience in writing programs, some disadvantages have become apparent related to concurrency control and the behavior of a block context. In this paper, these issues are re-examined in detail, and then the evolution of the solutions for overcoming these shortcomings is described along with the model of computation in ConcurrentSmalltalk. New features are explained with an example program. The implementation of the ConcurrentSmalltalk virtual machine is also presented, along with its evaluation.

98 citations


Proceedings ArticleDOI
12 Oct 1987
TL;DR: An algorithm is given for the multi-writer version of the Concurrent Reading While Writing (CRWW) problem that solves the problem of allowing simultaneous access to arbitrarily sized shared data without requiring waiting, and hence avoids mutual exclusion.
Abstract: An algorithm is given for the multi-writer version of the Concurrent Reading While Writing (CRWW) problem. The algorithm allows simultaneous access to arbitrarily sized shared data without requiring waiting, and hence avoids mutual exclusion. This demonstrates that a quite complicated concurrency control problem can be solved without sacrificing the efficiency of parallelism. One very important aspect of the algorithm is the set of tools developed to prove its correctness. Without these tools, proving the correctness of a solution to a problem of this complexity would be very difficult.

97 citations
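The flavor of reading without mutual exclusion can be conveyed with a seqlock-style version counter: readers never block the writer, they simply retry until they observe a stable snapshot. This is a minimal single-process sketch of the general idea, not Peterson's actual multi-writer construction:

```python
class VersionedCell:
    """Sketch of reader-side wait-freedom in the CRWW spirit: the writer
    bumps a version counter around each update; a reader retries until it
    sees the same even version before and after copying the value."""

    def __init__(self, value):
        self._version = 0   # even: stable; odd: write in progress
        self._value = value

    def write(self, value):
        self._version += 1  # mark write in progress (odd)
        self._value = value
        self._version += 1  # publish (even again)

    def read(self):
        while True:
            v1 = self._version
            value = self._value
            v2 = self._version
            if v1 == v2 and v1 % 2 == 0:  # snapshot was consistent
                return value
```

No reader ever takes a lock, so readers cannot delay the writer; the cost is that a reader may have to retry while an update is in flight.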



Journal ArticleDOI
TL;DR: This work presents four new concurrency control protocols that eliminate all interference between read-only actions and update actions, and thus offer significantly improved performance for read- only actions.
Abstract: Typical concurrency control protocols for atomic actions, such as two-phase locking, perform poorly for long read-only actions. We present four new concurrency control protocols that eliminate all interference between read-only actions and update actions, and thus offer significantly improved performance for read-only actions. The protocols work by maintaining multiple versions of the system state; read-only actions read old versions, while update actions manipulate the most recent version. We focus on the problem of managing the storage required for old versions in a distributed system. One of the protocols uses relatively little space, but has a potentially significant communication cost. The other protocols use more space, but may be cheaper in terms of communication.

88 citations
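The core multiversion idea described above — read-only actions read old versions while updates install new ones — can be sketched in a few lines. The class below is an illustrative toy, not the paper's distributed storage-management protocols:

```python
class MultiVersionStore:
    """Sketch of multiversioning: each committed update installs a new
    version tagged with a commit timestamp; a read-only action reads as of
    the timestamp at which it started, so it never conflicts with updates."""

    def __init__(self):
        self.versions = {}  # key -> list of (commit_ts, value), ascending
        self.clock = 0

    def commit_write(self, key, value):
        self.clock += 1
        self.versions.setdefault(key, []).append((self.clock, value))
        return self.clock

    def snapshot_read(self, key, as_of_ts):
        # Newest version whose commit timestamp is <= the snapshot time.
        older = [v for ts, v in self.versions.get(key, []) if ts <= as_of_ts]
        return older[-1] if older else None
```

The storage-management problem the paper focuses on is visible even here: old versions accumulate and must eventually be garbage-collected once no read-only action can still need them.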


Proceedings ArticleDOI
01 Jun 1987
TL;DR: The development of the various branches of database theory is sketched, including the theory of relational databases, with such areas as dependency theory, universal-relation theory, and hypergraph theory.
Abstract: We briefly sketch the development of the various branches of database theory. One important branch is the theory of relational databases, including such areas as dependency theory, universal-relation theory, and hypergraph theory. A second important branch is the theory of concurrency control and distributed databases. Two other branches have not in the past been given the attention they deserve. One of these is “logic and databases,” and the second is “object-oriented database systems,” which to my thinking includes systems based on the network or hierarchical data models. Both these areas are going to be more influential in the future.

Patent
Glenn R. Thompson1, Yuri Breitbart1
24 Jul 1987
TL;DR: Disclosed is a concurrency control method and related system for permitting the correct execution of multiple concurrent transactions, each comprising one or more read and/or write operations, in a heterogeneous distributed database system.
Abstract: Disclosed is a concurrency control method and related system for permitting the correct execution of multiple concurrent transactions, each comprising one or more read and/or write operations, in a heterogeneous distributed database system. Read and write operations are processed at the required sites if no site cycles are detected. If any site cycles are detected in a read operation, then new sites are searched for. If any site cycle is detected for any of the sites for a write operation, then the transaction is aborted.

Journal ArticleDOI
01 Oct 1987
TL;DR: In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping parallel algorithms, operating system functions, application library, and multidiscipline interactions are investigated to ensure high performance.
Abstract: This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environment, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. We cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping parallel algorithms, operating system functions, application library, and multidiscipline interactions are investigated to ensure high performance. At the end, we assess the potentials of optical and neural technologies for developing future supercomputers.

Proceedings ArticleDOI
01 Jun 1987
TL;DR: This work considers the separation of rebalancing from updates in several database structures, such as B-trees for external and AVL-trees for internal structures, and shows how this separation can be implemented such that rebalancing is performed by local background processes.
Abstract: We consider the separation of rebalancing from updates in several database structures, such as B-trees for external and AVL-trees for internal structures. We show how this separation can be implemented such that rebalancing is performed by local background processes. Our solution implies that even simple locking schemes (without additional links and copies of certain nodes) for concurrency control are efficient in the sense that at any time only a small constant number of nodes must be locked.

Journal ArticleDOI
TL;DR: This experimental system features flexible document retrieval, a distributed architecture, and the capacity to store many very large documents.
Abstract: New technology is changing the way we store documents. This experimental system features flexible document retrieval, a distributed architecture, and the capacity to store many very large documents.

Journal ArticleDOI
TL;DR: The main advantage of this certification method is that it allows a chronological commit order which differs from the serialization one (thus avoiding rejections or delays of transactions which occur in usual certification methods or in classical locking or timestamping ones).
Abstract: This paper introduces, as an optimistic concurrency control method, a new certification method by means of intervals of timestamps, usable in a distributed database system. The main advantage of this method is that it allows a chronological commit order which differs from the serialization one (thus avoiding rejections or delays of transactions which occur in usual certification methods or in classical locking or timestamping ones). The use of the dependency graph permits both classifying this method among existing ones and proving it. The certification protocol is first presented under the hypothesis that transactions' certifications are processed in the same order on all the concerned sites; it is then extended to allow concurrent certifications of transactions.
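The interval mechanism can be sketched as follows: each transaction starts with the full timestamp range, every conflict shrinks the interval so the transaction serializes before or after the conflicting one, and certification succeeds iff the interval is still non-empty at commit. This is an illustrative simplification of the paper's protocol, not a reproduction of it:

```python
INF = float("inf")


def new_interval():
    """A fresh transaction may still serialize anywhere."""
    return (0, INF)


def serialize_after(interval, ts):
    """Conflict forces the transaction to serialize after timestamp ts."""
    lo, hi = interval
    return (max(lo, ts + 1), hi)


def serialize_before(interval, ts):
    """Conflict forces the transaction to serialize before timestamp ts."""
    lo, hi = interval
    return (lo, min(hi, ts - 1))


def certify(interval):
    """Certification succeeds iff some serialization timestamp remains."""
    lo, hi = interval
    return lo <= hi
```

Because the serialization timestamp is chosen from whatever remains of the interval, the commit order is free to differ from the serialization order, which is precisely the flexibility the paper highlights.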

Journal ArticleDOI
TL;DR: A distributed, multi-version, optimistic concurrency control scheme is described which is particularly advantageous in a query-dominant environment and is also free from deadlock and cascading rollback problems.
Abstract: Concurrency control algorithms have traditionally been based on locking and timestamp ordering mechanisms. Recently optimistic schemes have been proposed. In this paper a distributed, multi-version, optimistic concurrency control scheme is described which is particularly advantageous in a query-dominant environment. The drawbacks of the original optimistic concurrency control scheme, namely that inconsistent views may be seen by transactions (potentially causing unpredictable behavior) and that read-only transactions must be validated and may be rolled back, have been eliminated in the proposed algorithm. Read-only transactions execute in a completely asynchronous fashion and are therefore processed with very little overhead. Furthermore, the probability that read-write transactions are rolled back has been reduced by generalizing the validation algorithm. The effects of global transactions on local transaction processing are minimized. The algorithm is also free from deadlock and cascading rollback problems.

Proceedings ArticleDOI
01 Dec 1987
TL;DR: This paper exposes a general algorithm for the distributed detection of stable properties in distributed applications or systems that deals with every stable property of a fairly general class.
Abstract: When evaluated to true, a stable property remains true forever. Such a stable property may characterize important states of a computation. This is the case of deadlocked or terminated computations. In this paper we expose a general algorithm for the distributed detection of stable properties in distributed applications or systems. This distributed algorithm deals with every stable property of a fairly general class: in this sense the algorithm is generic. This was achieved using a methodical approach, with a strong distinction between the computation and control activities in the problem. Moreover, the detection method used by the algorithm is based on an observational mechanism.

Proceedings ArticleDOI
03 Feb 1987
TL;DR: The paper shows how the forward-chaining approach to deduction can flexibly be married with goal-directed aspects of best/easiest-first strategies and develops generally applicable differential iteration schemes that efficiently compute the fixpoint.
Abstract: Building on mature database technology, the paper provides new insights into efficient ways to evaluate recursive deduction rules. We show how the forward-chaining approach to deduction can flexibly be married with goal-directed aspects of best/easiest-first strategies. From the natural fixpoint semantics of recursion we develop generally applicable differential iteration schemes that efficiently compute the fixpoint. Surprisingly, the well-known Warshall algorithm turns out to be a descendant of this class of algorithms. Performance measurements suggest the former, as well as systolic Δ-algorithms with linear fixpoint equations, as candidates for incorporating a transitive closure operator in databases. As a next important step towards the integration of database technology and logic programming, we suggest profiting from the standard features of concurrency control and transaction management by effectively using them for the synchronization of parallel deductions.

Journal ArticleDOI
TL;DR: This paper explores an alternative approach to managing replicated data by presenting two replication methods in which concurrency control and replica management are handled by a single integrated protocol.
Abstract: A replicated object is a typed data object that is stored redundantly at multiple locations to enhance availability. Most techniques for managing replicated data have a two-level structure: At the higher level, a replica-control protocol reconstructs the object's state from its distributed components, and at the lower level, a standard concurrency-control protocol synchronizes accesses to the individual components. This paper explores an alternative approach to managing replicated data by presenting two replication methods in which concurrency control and replica management are handled by a single integrated protocol. These integrated protocols permit more concurrency than independent protocols, and they allow availability and concurrency to be traded off: Constraints on concurrency may be relaxed if constraints on availability are tightened, and vice versa. In general, constraints on concurrency and availability cannot be minimized simultaneously.
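As background for the integrated protocols described above, the classical independent replica-control constraint they generalize is the quorum intersection rule. A minimal sketch of that rule (not the paper's integrated protocols) for n copies of an object:

```python
def quorums_valid(n, r, w):
    """Classical quorum consensus constraints for n copies of an object:
    every read quorum of size r must intersect every write quorum of
    size w (r + w > n), and any two write quorums must intersect
    (2w > n), so a reader always sees the latest committed write."""
    return r + w > n and 2 * w > n
```

Shrinking r while growing w (or vice versa) is the simplest form of the availability/concurrency trade-off the paper studies: making one kind of operation cheaper necessarily makes the other more constrained.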

Proceedings ArticleDOI
01 Oct 1987
TL;DR: This work discusses object-orientation as more than an implementation paradigm, and shows how an object-oriented approach simplifies both use and implementation of engineering design systems.
Abstract: An object-oriented approach to management of engineering design data requires object persistence, object-specific rules for concurrency control and recovery, views, complex objects and derived data, and specialized treatment of operations, constraints, relationships and type descriptions. We discuss object-orientation as more than an implementation paradigm, and show how an object-oriented approach simplifies both use and implementation of engineering design systems.

Proceedings ArticleDOI
Steven E. Lucco1
01 Dec 1987
TL;DR: Sloop is a parallel language and environment that employs an object-oriented model for explicit parallel programming of MIMD multiprocessors that uses object relocation heuristics and coroutine scheduling to attain high performance.
Abstract: Sloop is a parallel language and environment that employs an object-oriented model for explicit parallel programming of MIMD multiprocessors. The Sloop runtime system transforms a network of processors into a virtual object space. A virtual object space contains a collection of objects that cooperate to solve a problem. Sloop encapsulates virtual object space semantics within the object type domain. This system-defined type provides an associative, asynchronous method by which one object gains access to another. It also provides an operation for specifying groups of objects that should, for efficiency, reside on the same physical processor, and supports exploitation of the topology of the underlying parallel machine. Domains also support the creation of indivisible objects, which provide implicit concurrency control. The encapsulation of these semantics within an object gives the programmer the power to construct an arbitrary hierarchy of virtual object spaces, facilitating debugging and program modularity. Sloop implementations are running on a bus-based multiprocessor, a hypercube multiprocessor, and on a heterogeneous network of workstations. The runtime system uses object relocation heuristics and coroutine scheduling to attain high performance.


Proceedings ArticleDOI
03 Feb 1987
TL;DR: An experimental object base management system called Gordion is presented, which provides permanence and sharing of objects for workstations within an object-oriented environment and its ability to communicate with multiple languages, introduction of new concurrency control primitives, and ability to manipulate objects of arbitrary size are presented.
Abstract: An experimental object base management system called Gordion is presented. Gordion is a server which provides permanence and sharing of objects for workstations within an object-oriented environment. Among the unique aspects of Gordion are: its ability to communicate with multiple languages, introduction of new concurrency control primitives, ability to manipulate objects of arbitrary size, and object sharing across the languages through a base set of classes. The system is currently interfaced to two languages, BiggerTalk and Zetalisp Flavors. Besides its language interface, Gordion has an interface for the system administrator, and an interface for debugging. Major functional components of the system are: concurrency control, storage, history and inquiry, and maintenance. Concurrent access to objects is regulated by four types of locks, and transactions encapsulate units of work for the system. The storage system uses a hashing scheme and Unix™ files to store objects. A discussion of the future prospects for Gordion concludes the paper.

01 May 1987
TL;DR: This paper analyzes two orphan elimination algorithms that have been proposed for nested transaction systems, describes the algorithms formally, and presents complete detailed proofs of correctness.
Abstract: In a distributed system, node crashes and network delays can result in orphaned computations: computations that are still running but whose results are no longer needed. Several algorithms have been proposed to detect and eliminate such computations before they can see inconsistent states of the shared, concurrently accessed data. This paper analyzes two orphan elimination algorithms that have been proposed for nested transaction systems, describes the algorithms formally, and presents complete detailed proofs of correctness. The authors' proofs are remarkably simple, and show that the fundamental concepts underlying the two algorithms are quite similar. In addition, it is shown formally that the algorithms can be used in combination with any correct concurrency control technique, thus providing formal justification for the informal claims made by the algorithms' designers. The results are a significant advance over earlier work in the area, in which it was extremely difficult to state and prove comparable results.

Journal ArticleDOI
TL;DR: It is found that, at high transaction rates, affinity based routing significantly reduces lock contention probability and leads to a substantial reduction in transaction response time, which produces a large impact on the performance of an optimistic type concurrency control strategy.

Proceedings ArticleDOI
03 Feb 1987
TL;DR: A property known as recoverability is identified that can be used to decrease the delay involved in processing non-commuting operations while still avoiding cascading aborts; to ensure the serializability of transactions, the recoverability relationship between transactions is forced to be acyclic.
Abstract: The concurrency of transactions executing on atomic data types can be enhanced through the use of semantic information about operations defined on these types. Hitherto, commutativity of operations has been exploited to provide enhanced concurrency while avoiding cascading aborts. We have identified a property known as recoverability which can be used to decrease the delay involved in processing non-commuting operations while still avoiding cascading aborts. When an invoked operation is recoverable with respect to an uncommitted operation, the invoked operation can be executed by forcing a commit-dependency between the invoked operation and the uncommitted operation; the transaction invoking the operation will not have to wait for the uncommitted operation to abort or commit. Further, this commit dependency only affects the order in which the operations should commit, if both commit; if either operation aborts, the other can still commit thus avoiding cascading aborts. To ensure the serializability of transactions, we force the recoverability relationship between transactions to be acyclic.
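The commit-dependency mechanism described above can be sketched as simple bookkeeping: a recoverable operation runs immediately, but the invoking transaction may not commit until the operations it depends on have resolved, and an abort of the earlier operation does not cascade. This is an illustrative toy, not the paper's protocol:

```python
class CommitDependencyManager:
    """Sketch of commit dependencies: when op B is recoverable with respect
    to uncommitted op A, B executes at once but must not commit before A
    commits or aborts. If A aborts, B may still commit (no cascading abort);
    only the commit *order* is constrained, not execution."""

    def __init__(self):
        self.depends_on = {}  # txn -> set of txns it must outlast at commit

    def add_dependency(self, later, earlier):
        self.depends_on.setdefault(later, set()).add(earlier)

    def resolve(self, txn):
        """Called when `txn` commits or aborts: release all its waiters."""
        for deps in self.depends_on.values():
            deps.discard(txn)

    def may_commit(self, txn):
        return not self.depends_on.get(txn)
```

The acyclicity requirement from the abstract corresponds here to never letting the `depends_on` relation form a cycle, since two transactions each waiting for the other to commit would block forever.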

Journal ArticleDOI
TL;DR: This paper shows how multiversion time-stamping protocols for atomicity can be extended to induce fewer delays and restarts by exploiting semantic information about objects such as queues, directories, or counters.
Abstract: Atomic transactions are a widely accepted approach to implementing and reasoning about fault-tolerant distributed programs. This paper shows how multiversion time-stamping protocols for atomicity can be extended to induce fewer delays and restarts by exploiting semantic information about objects such as queues, directories, or counters. This technique relies on static preanalysis of conflicts between operations, and incurs no additional runtime overhead. This technique is deadlock-free, and it is applicable to objects of arbitrary type.

Journal ArticleDOI
TL;DR: It is shown that timestamp-based techniques can be used to implement serial validation, and simulation results indicate that multiversion serial validation has significant performance advantages over the single version algorithm.
Abstract: This correspondence describes and analyzes two schemes for improving the performance of serial validation, an optimistic concurrency control algorithm proposed by Kung and Robinson. It is shown that timestamp-based techniques can be used to implement serial validation, yielding an equivalent algorithm with a much lower validation cost. A multiple version variant of serial validation is then presented, and simulation results indicate that multiversion serial validation has significant performance advantages over the single version algorithm.
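The timestamp-based formulation of serial validation can be sketched as a backward check at commit time: a transaction passes iff no transaction that committed after it started wrote an item it read. This is a minimal illustration of the Kung-Robinson-style check, not the correspondence's optimized algorithm:

```python
def validate(read_set, start_ts, committed):
    """Timestamp-based serial validation sketch: `committed` is a list of
    (commit_ts, write_set) pairs for already-committed transactions. The
    validating transaction passes iff none of them both committed after it
    started and wrote something it read."""
    for commit_ts, write_set in committed:
        if commit_ts > start_ts and write_set & read_set:
            return False  # stale read detected: restart the transaction
    return True
```

Keeping only commit timestamps and write sets is what makes this cheaper than re-running the original set-comparison loop of serial validation against every concurrent transaction.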

Journal ArticleDOI
TL;DR: This paper develops a performance model of timestamp-ordering concurrency control algorithms in a DDB that consists of five components: input data collection, transaction processing model, communication subnetwork model, conflict model, and performance measures estimation.
Abstract: A distributed database (DDB) consists of copies of data files (usually redundant) geographically distributed and managed on a computer network. One important problem in DDB research is that of concurrency control. This paper develops a performance model of timestamp-ordering concurrency control algorithms in a DDB. The performance model consists of five components: input data collection, transaction processing model, communication subnetwork model, conflict model, and performance measures estimation. In this paper we describe the conflict model in detail. We first determine the probability of transaction restarts, the probability of transaction blocking, and the delay due to blocking for the basic timestamp-ordering algorithm. We then develop conflict models for variations of the basic algorithm. These conflict models are illustrated by numerical examples.
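The restart and blocking events that this conflict model quantifies arise from the basic timestamp-ordering rule, which can be sketched briefly. The class below illustrates the rule being modeled, not the paper's performance model itself:

```python
class TimestampOrdering:
    """Sketch of basic timestamp ordering: each item remembers the largest
    timestamps that have read and written it; an operation arriving 'too
    late' restarts its transaction. These restarts are exactly the
    conflicts whose probability such models estimate."""

    def __init__(self):
        self.rts = {}  # item -> largest read timestamp seen
        self.wts = {}  # item -> largest write timestamp seen

    def read(self, txn_ts, item):
        if txn_ts < self.wts.get(item, 0):
            return "restart"  # item already overwritten by a younger txn
        self.rts[item] = max(self.rts.get(item, 0), txn_ts)
        return "ok"

    def write(self, txn_ts, item):
        if txn_ts < self.rts.get(item, 0) or txn_ts < self.wts.get(item, 0):
            return "restart"  # a younger txn has already read or written it
        self.wts[item] = txn_ts
        return "ok"
```

In a distributed setting each site applies this rule locally, and communication delay widens the window in which an operation can arrive too late, which is why the model couples the conflict component to the subnetwork component.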