
Showing papers on "Concurrency control" published in 1993


Patent
05 Feb 1993
TL;DR: In this article, the concurrency-control mechanism in a database-management system achieves high concurrency by using a lock-mode set larger than that conventionally employed for multi-granularity locking.
Abstract: The concurrency-control mechanism in a database-management system achieves high concurrency by using a lock-mode set larger than that conventionally employed for multi-granularity locking. In a system of key-valued locking in which locks on key-value ranges are acquired separately from the locks on the key values with which they are associated, the IX lock mode conventionally acquired on a range by update, insert, and delete operations is replaced with three separate lock modes respectively associated with those operations and invoked by them for range locking. In key-valued-locking systems in which ranges are locked commonly with the key values associated with them, the mode set is further expanded so that each mode represents a different combination of range and key-value locks.
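
To illustrate the mechanism, the sketch below (Python) shows how splitting the single IX range mode into operation-specific range modes can expose extra compatibility. The mode names and the matrix itself are invented for illustration and are not the patent's actual definitions.

    # Hypothetical mode set: S/X on individual key values, plus range modes
    # RI (insert), RU (update), RD (delete) replacing the one IX mode that
    # insert, update, and delete conventionally share. Illustrative only.
    _COMPAT_PAIRS = {
        frozenset(("S", "S")),
        frozenset(("S", "RU")),    # key-level X locks, not the range mode,
                                   # keep a scan from seeing a half-done update
        frozenset(("RI", "RI")),   # concurrent inserters of distinct keys
        frozenset(("RI", "RU")),
        frozenset(("RU", "RU")),
        frozenset(("RD", "RU")),
        frozenset(("RI", "RD")),
        frozenset(("RD", "RD")),
    }
    # Deliberately absent: ("S", "RI") and ("S", "RD") -- a range scan must
    # still be protected against phantom inserts and deletes.

    def compatible(a: str, b: str) -> bool:
        return frozenset((a, b)) in _COMPAT_PAIRS

    # A single IX mode forces all three writers to share one compatibility
    # row; split modes let the scheduler admit a scan alongside a pure-update
    # range lock while still blocking inserts under it.
    assert compatible("S", "RU") and not compatible("S", "RI")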

129 citations


Proceedings ArticleDOI
01 Dec 1993
TL;DR: A schedulability bound for SSP (Similarity Stack Protocol) is given and simulation results show that SSP is especially useful for scheduling real-time data access on multiprocessor systems.
Abstract: We propose a class of real-time data access protocols called SSP (Similarity Stack Protocol). The correctness of SSP schedules is justified by the concept of similarity, which allows different but sufficiently timely data to be used in a computation without adversely affecting the outcome. SSP schedules are deadlock-free, subject to limited blocking, and do not use locks. We give a schedulability bound for SSP and also report simulation results which show that SSP is especially useful for scheduling real-time data access on multiprocessor systems. Finally, we present a variation of SSP which can be implemented in an autonomous fashion in the sense that scheduling decisions can be made with local information only.
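
A minimal sketch of the similarity idea in Python (the data layout and bound are invented): two versions written within the application's tolerance are interchangeable, so a reader need not be ordered between the writes that produced them.

    from dataclasses import dataclass

    @dataclass
    class Version:
        value: float
        ts: float                  # time at which this version was written

    def similar(a: Version, b: Version, bound: float) -> bool:
        # Versions written within `bound` time units of each other are
        # "similar": a computation may read either without changing its
        # outcome in any way that matters to the application.
        return abs(a.ts - b.ts) <= bound

    # A scheduler may then ignore a read/write conflict on a sensor value
    # whenever the competing writes are similar -- no locks, no blocking.
    current, incoming = Version(20.1, ts=100.0), Version(20.2, ts=100.4)
    if similar(current, incoming, bound=0.5):
        reading = current.value    # safe to proceed with the older version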

123 citations


Proceedings ArticleDOI
01 Dec 1993
TL;DR: A new optimistic concurrency control algorithm is presented that can avoid such unnecessary restarts by adjusting serialization order dynamically, and it is demonstrated that the new algorithm outperforms the previous ones over a wide range of system workload.
Abstract: Previous studies concluded that, for a variety of reasons, optimistic concurrency control appears well suited to real-time database systems. In particular, they showed that in a real-time database system that discards tardy transactions, optimistic concurrency control outperforms locking. We show that the optimistic algorithms used in those studies incur restarts that are unnecessary for ensuring data consistency. We present a new optimistic concurrency control algorithm that avoids such unnecessary restarts by adjusting the serialization order dynamically, and demonstrate that the new algorithm outperforms the previous ones over a wide range of system workloads. This algorithm thus appears to be a promising candidate for the basic concurrency control mechanism of real-time database systems.
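
A hedged sketch of the dynamic-adjustment idea (Python; the interval representation and field names are invented, not the paper's algorithm verbatim): each active transaction keeps an interval of admissible serialization timestamps, and validation shrinks intervals instead of restarting transactions outright.

    class Active:
        def __init__(self, read_set, write_set):
            self.read_set, self.write_set = read_set, write_set
            self.lo, self.hi = 0.0, float("inf")   # admissible serialization slots

    def validate(v_reads, v_writes, v_ts, active_txns):
        # Validate a committing transaction with timestamp v_ts; an active
        # transaction restarts only if its interval becomes empty.
        must_restart = []
        for t in active_txns:
            if t.read_set & v_writes:     # t read what the committer wrote,
                t.hi = min(t.hi, v_ts)    # so t must serialize before it
            if t.write_set & v_reads:     # the committer read what t writes,
                t.lo = max(t.lo, v_ts)    # so t must serialize after it
            if t.lo >= t.hi:              # no consistent position remains
                must_restart.append(t)
        return must_restart

    t = Active(read_set={"x"}, write_set={"y"})
    assert validate(v_reads=set(), v_writes={"x"}, v_ts=5.0, active_txns=[t]) == []
    assert t.hi == 5.0    # t slid before the committer instead of restarting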

119 citations


Proceedings ArticleDOI
25 May 1993
TL;DR: The authors begin by introducing the notion of a purely replicated architecture and then present GroupDesign, a shared drawing tool implemented with this architecture that gives the best response time for the interface and reduces the number of undo and redo operations when conflicts occur.
Abstract: Computer-supported cooperative work (CSCW) is a rapidly growing field. Real-time groupware systems that allow a group of users to edit a shared document are addressed. The architecture and concurrency control algorithm used in this system are described. The algorithm is based on the semantics of the application and can be used by the developers of other groupware systems. The authors begin by introducing the notion of a purely replicated architecture and then present GroupDesign, a shared drawing tool implemented with this architecture. They then present the main parts of the algorithm that implement the distribution. The algorithm gives the best response time for the interface and reduces the number of undo and redo operations when conflicts occur.

110 citations


Proceedings ArticleDOI
25 May 1993
TL;DR: The author examines the compare-and-swap operation in the content of contemporary bus-based shared memory multiprocessors, and it is shown that the common techniques for reducing synchronization overhead in the presence of contention are inappropriate when used as the basis for nonblocking synchronization.
Abstract: An important class of concurrent objects are those that are nonblocking, that is, whose operations are not contained within mutually exclusive critical sections. A nonblocking object can be accessed by many threads at a time, yet update protocols based on atomic compare-and-swap operations can be used to guarantee the object's consistency. The author examines the compare-and-swap operation in the context of contemporary bus-based shared-memory multiprocessors, although the results generalize to distributed shared-memory multiprocessors. He describes an operating-system-based solution that permits the construction of a nonblocking compare-and-swap function on architectures that support only simpler atomic primitives such as test-and-set or atomic exchange. Several locking strategies that can be used to synthesize a compare-and-swap operation are evaluated, and it is shown that the common techniques for reducing synchronization overhead in the presence of contention are inappropriate when used as the basis for nonblocking synchronization. A simple synchronization strategy is described that has good performance because it avoids much of the synchronization overhead that normally occurs when there is contention.
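
The shape of the construction can be sketched in Python (hedged: the paper's operating-system support for surviving preemption of the lock holder, which is what makes the emulation effectively nonblocking, is omitted here):

    import threading

    _cas_lock = threading.Lock()   # stands in for a test-and-set spin lock

    def compare_and_swap(cell, expected, new):
        # Software CAS: atomic only relative to other calls of this
        # function. The OS-based handling of a preempted lock holder
        # described in the paper is not modeled.
        with _cas_lock:
            if cell[0] == expected:
                cell[0] = new
                return True
            return False

    def nonblocking_increment(cell):
        # Classic lock-free update loop built on CAS: read, compute, retry.
        while True:
            old = cell[0]
            if compare_and_swap(cell, old, old + 1):
                return

    counter = [0]
    workers = [threading.Thread(target=nonblocking_increment, args=(counter,))
               for _ in range(4)]
    for w in workers: w.start()
    for w in workers: w.join()
    assert counter[0] == 4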

105 citations


Journal ArticleDOI
TL;DR: This paper concentrates on modeling data contention; following prior work, the solutions of the data contention model are then coupled with a standard hardware resource contention model through an iteration.
Abstract: The concurrency control (CC) scheme employed can profoundly affect the performance of transaction-processing systems. In this paper, a simple unified approximate analysis methodology is developed to model the effect on system performance of data contention under different CC schemes and for different system structures. The paper concentrates on modeling data contention; following prior work, the solutions of the data contention model are coupled with a standard hardware resource contention model through an iteration. The methodology goes beyond previously published methods for analyzing CC schemes in terms of the generality of CC schemes and system structures that are handled. It is applied to analyze the performance of centralized transaction-processing systems using various optimistic- and pessimistic-type CC schemes and for both fixed-length and variable-length transactions. The accuracy of the analysis is demonstrated by comparison with simulations. It is also shown how the methodology can be applied to analyze the performance of distributed transaction-processing systems with replicated data.

103 citations


Journal ArticleDOI
TL;DR: The Actor model, a programming-language concept that provides basic building blocks for a wide variety of computational structures, unifies objects and concurrency and provides three mechanisms for developing modular and reusable components for concurrent systems.
Abstract: The Actor model programming-language concept, which provides basic building blocks for a wide variety of computational structures, is reviewed. The Actor model unifies objects and concurrency. Actors are autonomous, distributed, concurrently executing objects that can send each other messages asynchronously. The Actor model's communication abstractions and object-oriented design are discussed. Three mechanisms for developing modular and reusable components for concurrent systems are also discussed. These mechanisms are synchronizers, modular specifications of resource management policies, and protocol customization for dependability.
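
A minimal actor sketch in Python (invented for illustration; real Actor languages provide much more, e.g. the synchronizers mentioned above):

    import queue, threading, time

    class Actor:
        # Private state, an asynchronous mailbox, and one thread that
        # handles a single message at a time -- so the actor's own state
        # needs no locks.
        def __init__(self):
            self._mailbox = queue.Queue()
            threading.Thread(target=self._run, daemon=True).start()

        def send(self, msg):               # asynchronous: returns immediately
            self._mailbox.put(msg)

        def _run(self):
            while True:
                self.receive(self._mailbox.get())

        def receive(self, msg):
            raise NotImplementedError

    class Adder(Actor):
        def __init__(self):
            self.total = 0                 # set state before the thread starts
            super().__init__()
        def receive(self, msg):
            self.total += msg

    a = Adder()
    a.send(40); a.send(2)                  # fire-and-forget messages
    time.sleep(0.1)                        # crude: let the mailbox drain
    print(a.total)                         # 42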

99 citations


Journal ArticleDOI
01 Jan 1993
TL;DR: A model for nested transactions is proposed allowing for effective exploitation of intra-transaction parallelism, and it can be shown how “controlled downward inheritance” for hierarchical objects is achieved in nested transactions.
Abstract: The concept of nested transactions offers more decomposable execution units and finer-grained control over concurrency and recovery than "flat" transactions. Furthermore, it supports the decomposition of a "unit of work" into subtasks and their appropriate distribution in a computer system as a prerequisite of intra-transaction parallelism. However, to exploit its full potential, suitable granules of concurrency control as well as access modes for shared data are necessary. In this article, we investigate various issues of concurrency control for nested transactions. First, the mechanisms for cooperation and communication within nested transactions should not impede parallel execution of transactions among parent and children or among siblings. Therefore, a model for nested transactions is proposed allowing for effective exploitation of intra-transaction parallelism. Starting with a set of basic locking rules, we introduce the concept of "downward inheritance of locks" to make data manipulated by a parent available to its children. To support supervised and restricted access, this concept is refined to "controlled downward inheritance." The initial concurrency control scheme was based on S-X locks for "flat," non-overlapping data objects. In order to adjust this scheme for practical applications, a set of concurrency control rules is derived for generalized lock modes described by a compatibility matrix. Also, these rules are combined with a hierarchical locking scheme to improve selective access to data granules of varying sizes. After having tied together both types of hierarchies (transaction and object), it can be shown how "controlled downward inheritance" for hierarchical objects is achieved in nested transactions. Finally, problems of deadlock detection and resolution in nested transactions are considered.
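
The basic locking rule and downward inheritance can be sketched as follows (Python; a deliberately simplified model in which a request conflicts only with locks held by non-ancestors, and a committing child passes its locks to its parent; the article's controlled downward inheritance and generalized lock modes are omitted):

    CONFLICTS = {("S", "X"), ("X", "S"), ("X", "X")}

    class Txn:
        def __init__(self, parent=None):
            self.parent = parent
        def ancestors_and_self(self):
            t = self
            while t is not None:
                yield t
                t = t.parent

    class LockTable:
        def __init__(self):
            self.held = {}                     # item -> list of (txn, mode)

        def acquire(self, txn, item, mode):
            for holder, hmode in self.held.get(item, []):
                if (mode, hmode) in CONFLICTS and \
                        holder not in txn.ancestors_and_self():
                    return False               # blocked by a non-ancestor
            self.held.setdefault(item, []).append((txn, mode))
            return True

        def commit(self, txn):
            # Upward inheritance: the parent takes over the child's locks.
            for entry in self.held.values():
                entry[:] = [(txn.parent if t is txn else t, m) for t, m in entry]

    table, parent = LockTable(), Txn()
    child = Txn(parent=parent)
    assert table.acquire(parent, "a", "X")
    assert table.acquire(child, "a", "X")      # parent's lock is offered downward
    assert not table.acquire(Txn(), "a", "S")  # an unrelated transaction waits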

87 citations


Proceedings ArticleDOI
01 Dec 1993
TL;DR: This paper presents a technique that supports two major requirements for concurrency control in real-time databases, data temporal consistency and data logical consistency, as well as tradeoffs between these requirements.
Abstract: This paper presents a technique that is capable of supporting two major requirements for concurrency control in real-time databases: data temporal consistency and data logical consistency, as well as tradeoffs between these requirements. Our technique is based upon a real-time object-oriented database model in which each object has its own unique compatibility function that expresses the conditional compatibility of any two potential concurrent operations on the object. The conditions use the semantics of the object, such as allowable imprecision, along with current system state, such as time and the active operations on the object. Our concurrency control technique enforces the allowable concurrency expressed by the compatibility function by using semantic locking controlled by each individual object. The real-time object-oriented database model and the process of evaluating the compatibility function to grant semantic locks are described.
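
The core of the technique can be sketched in Python (the object model and the epsilon-based example function are invented; the paper's compatibility functions also consult time and other system state):

    class SemanticLockedObject:
        # Each object grants semantic locks itself: a request is admitted
        # only if the object's own compatibility function accepts it against
        # every operation already active on the object.
        def __init__(self, compatible):
            self._compatible = compatible
            self._active = []

        def request(self, op):
            if all(self._compatible(running, op) for running in self._active):
                self._active.append(op)
                return True
            return False                   # caller blocks or is rejected

        def release(self, op):
            self._active.remove(op)

    # Invented example: reads always coexist; a write may run under a read
    # if the total imprecision it introduces stays within a bound.
    EPSILON = 0.5
    def sensor_compat(running, requested):
        kinds = {running["kind"], requested["kind"]}
        if kinds == {"read"}:
            return True
        delta = abs(running.get("delta", 0)) + abs(requested.get("delta", 0))
        return kinds == {"read", "write"} and delta <= EPSILON

    obj = SemanticLockedObject(sensor_compat)
    assert obj.request({"kind": "read"})
    assert obj.request({"kind": "write", "delta": 0.3})      # tolerated imprecision
    assert not obj.request({"kind": "write", "delta": 0.3})  # second writer denied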

81 citations



Journal ArticleDOI
TL;DR: Two new concurrency control protocols are proposed: one is locking-based and prevents the priority inversion problem by scheduling data lock requests based on prioritizing data items; the second extends the basic timestamp-ordering method by involving the real-time priorities of transactions in the timestamp assignment procedure.

Proceedings ArticleDOI
P. Muth, T.C. Rakow, Gerhard Weikum, P. Brossler, Christof Hasse
19 Apr 1993
TL;DR: A locking protocol for object-oriented database systems (OODBSs) is presented, and it is shown that, using the locking protocol in an open-nested transaction, the locks of a subtransaction are released when the subtransaction completes, and only a semantic lock is held further by the parent of the subtransaction.
Abstract: A locking protocol for object-oriented database systems (OODBSs) is presented. The protocol can exploit the semantics of methods invoked on encapsulated objects. Compared to conventional page-oriented or record-oriented concurrency control protocols, the proposed protocol greatly improves the possible concurrency because commutative method executions on the same object are not considered as a conflict. An OODBS application example is presented. The principle of open-nested transactions is reviewed. It is shown that, using the locking protocol in an open-nested transaction, the locks of a subtransaction are released when the subtransaction completes, and only a semantic lock is held further by the parent of the subtransaction.
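
The conflict test at the heart of the protocol can be sketched in Python (the account methods and commutativity table are invented stand-ins for the paper's method semantics):

    # Two method executions conflict only if they do not commute. Invented
    # example: deposits commute with deposits and withdrawals; withdrawals
    # may fail on insufficient funds, so they do not commute with each
    # other; a balance inquiry commutes with nothing that writes.
    COMMUTING = {("deposit", "deposit"), ("deposit", "withdraw"),
                 ("withdraw", "deposit")}

    def commute(m1, m2):
        return (m1, m2) in COMMUTING

    class OpenNestedLockManager:
        def __init__(self):
            self.semantic = {}                 # obj -> list of (txn, method)

        def acquire(self, txn, obj, method):
            for holder, held in self.semantic.get(obj, []):
                if holder != txn and not commute(held, method):
                    return False
            self.semantic.setdefault(obj, []).append((txn, method))
            return True

        def commit_subtransaction(self, child, parent):
            # Open nesting: the child's low-level (page/record) locks would
            # be dropped here; only the semantic lock is retained, and it is
            # now held by the parent.
            for entry in self.semantic.values():
                entry[:] = [(parent if t == child else t, m) for t, m in entry]

    mgr = OpenNestedLockManager()
    assert mgr.acquire("T1", "acct", "deposit")
    assert mgr.acquire("T2", "acct", "withdraw")      # commutes: no conflict
    assert not mgr.acquire("T3", "acct", "get_balance")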

Journal ArticleDOI
TL;DR: A trace-driven simulation system for DB-sharing complexes has been developed that allows a realistic performance comparison of four different concurrency and coherency control protocols and investigates so-called on-request and broadcast invalidation schemes.
Abstract: Database Sharing (DB-sharing) refers to a general approach for building a distributed high performance transaction system. The nodes of a DB-sharing system are locally coupled via a high-speed interconnect and share a common database at the disk level. This is also known as a “shared disk” approach. We compare database sharing with the database partitioning (shared nothing) approach and discuss the functional DBMS components that require new and coordinated solutions for DB-sharing. The performance of DB-sharing systems critically depends on the protocols used for concurrency and coherency control. The frequency of communication required for these functions has to be kept as low as possible in order to achieve high transaction rates and short response times. A trace-driven simulation system for DB-sharing complexes has been developed that allows a realistic performance comparison of four different concurrency and coherency control protocols. We consider two locking and two optimistic schemes which operate either under central or distributed control. For coherency control, we investigate so-called on-request and broadcast invalidation schemes, and employ buffer-to-buffer communication to exchange modified pages directly between different nodes. The performance impact of random routing versus affinity-based load distribution and different communication costs is also examined. In addition, we analyze potential performance bottlenecks created by hot spot pages.

Proceedings ArticleDOI
01 Jun 1993
TL;DR: This paper proposes a new cost-conscious real-time transaction scheduling algorithm which considers dynamic costs associated with a transaction, and shows its superiority over the EDF-HP algorithm in simulations.
Abstract: Real-time databases are an important component of embedded real-time systems. In a real-time database context, transactions must not only maintain the consistency constraints of the database but must also satisfy the timing constraints specified for each transaction. Although several approaches have been proposed to integrate real-time scheduling and database concurrency control methods, none of them takes into account the dynamic cost of scheduling a transaction. In this paper, we propose a new cost-conscious real-time transaction scheduling algorithm which considers dynamic costs associated with a transaction. Our dynamic priority assignment algorithm adapts to changes in the system load without causing excessive numbers of transaction restarts. Our simulations show its superiority over the EDF-HP algorithm.
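
An illustrative (not the paper's) priority function showing the flavor of cost-conscious scheduling, where the work already invested in a transaction raises its priority so that nearly finished transactions are not casually restarted:

    from dataclasses import dataclass

    @dataclass
    class Txn:
        name: str
        deadline: float     # absolute deadline
        remaining: float    # estimated execution time still needed
        invested: float     # work already performed (lost on restart)

    def priority(t: Txn, now: float) -> float:
        # Invented formula: urgency grows as slack shrinks, weighted by the
        # work that a restart would throw away.
        slack = max(t.deadline - now - t.remaining, 1e-6)
        return (1.0 + t.invested) / slack

    # Plain earliest-deadline-first would rank `fresh` ahead of `almost`;
    # the cost term lets the nearly finished transaction keep the CPU.
    fresh  = Txn("fresh",  deadline=10.0, remaining=4.0, invested=0.0)
    almost = Txn("almost", deadline=12.0, remaining=1.0, invested=6.0)
    ranked = sorted([fresh, almost], key=lambda t: priority(t, now=0.0),
                    reverse=True)
    print([t.name for t in ranked])    # ['almost', 'fresh']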

Proceedings ArticleDOI
19 Apr 1993
TL;DR: A method for managing a remote backup database to provide protection from disasters that destroy the primary database is presented, and techniques are proposed for checkpointing the state of the backup system and for allowing new transaction activity to begin even as the backup is taking over after a primary failure.
Abstract: A method for managing a remote backup database to provide protection from disasters that destroy the primary database is presented. The method is general enough to accommodate the ARIES-type recovery and concurrency control methods as well as the methods used by other systems such as DB2, DL/I and IMS Fast Path. It provides high performance by exploiting parallelism and by reducing inputs and outputs through means such as log analysis and a buffer management policy different from the primary's. Techniques are proposed for checkpointing the state of the backup system so that recovery can be performed quickly in case the backup system fails, and for allowing new transaction activity to begin even as the backup is taking over after a primary failure. Some performance measurements taken from a prototype are also presented.

Journal ArticleDOI
TL;DR: In this paper, the authors describe load balancing techniques for the Time Warp distributed synchronization technique for object-oriented simulation, which distributes objects across nodes and provides optimistic concurrency control, and executes on a network of UNIX-based workstations.
Abstract: This paper describes load balancing techniques for the Time Warp distributed synchronization technique for object-oriented simulation. The Time Warp system distributes objects across nodes and provides optimistic concurrency control. Our implementation is Lisp-based and executes on a network of UNIX-based workstations, where system load varies depending on the number of users and processes. The technique first determines whether the load is unbalanced; if it is, the system determines which machine is the most underloaded and migrates an object to that machine.
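
The decision loop can be sketched in Python (thresholds and the load metric are invented; the described system measures load on shared UNIX hosts):

    def rebalance(node_loads, objects_on, imbalance=1.5):
        # node_loads: node -> observed load; objects_on: node -> [(obj, work)].
        # Returns an (obj, src, dst) migration, or None if the load is balanced.
        dst = min(node_loads, key=node_loads.get)     # most underloaded machine
        src = max(node_loads, key=node_loads.get)
        if node_loads[src] < imbalance * max(node_loads[dst], 1e-9):
            return None                               # step 1: is it unbalanced?
        obj, _ = max(objects_on[src], key=lambda p: p[1])
        return obj, src, dst                          # step 2: migrate an object

    loads = {"hostA": 8.0, "hostB": 2.0}
    objs = {"hostA": [("sim-obj-1", 5.0), ("sim-obj-2", 3.0)], "hostB": []}
    print(rebalance(loads, objs))    # ('sim-obj-1', 'hostA', 'hostB')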

Proceedings ArticleDOI
01 Aug 1993
TL;DR: A unified model is developed that allows reasoning about the correctness of concurrency control and recovery within the same framework and captures schedules with semantically rich ADT actions in addition to classical read/write schedules.
Abstract: The classical theory of transaction management is based on two different and independent criteria for the correct execution of transactions. The first criterion, serializability, ensures correct execution of parallel transactions under the assumption that no failures occur. The second criterion, strictness, ensures correct recovery from failures. In this paper we develop a unified model that allows reasoning about the correctness of concurrency control and recovery within the same framework. We introduce the correctness criteria of (prefix-) reducibility and (prefix-) expanded serializability and investigate their relationships to the classical criteria. An important advantage of our model is that it captures schedules with semantically rich ADT actions in addition to classical read/write schedules.

Proceedings ArticleDOI
01 Dec 1993
TL;DR: This paper identifies and discusses six concurrency control requirements that distinguish collaborative hypertext systems from multiuser hypertext systems and discusses how existing hyperbase systems fare with respect to the identified set of requirements.
Abstract: Traditional concurrency control techniques for database systems (transaction management based on locking protocols) have been successful in many multiuser settings, but these techniques are inadequate in open, extensible and distributed hypertext systems supporting multiple collaborating users. The term "multiple collaborating users" covers a group setting in which two or more users are engaged in a shared task. Group members can work simultaneously in the same computing environment, use the same set of tools and share a network of hypertext objects. Hyperbase (hypertext database) systems must provide special support for collaborative work, requiring adjustments and extensions to normal concurrency control techniques. Based on the experiences of two collaborative hypertext authoring systems, this paper identifies and discusses six concurrency control requirements that distinguish collaborative hypertext systems from multiuser hypertext systems. Approaches to the major issues (locking, notification control and transaction management) are examined from a supporting technologies point of view. Finally, we discuss how existing hyperbase systems fare with respect to the identified set of requirements. Many of the issues discussed in the paper are not limited to hypertext systems and apply to other collaborative systems as well.

Journal ArticleDOI
TL;DR: The optimized static algorithm was compared against the available copies method, a dynamic algorithm, to understand the relative performance of the two types of algorithms and found that if realistic reconfiguration times are assumed, then no one type of algorithm is uniformly better.
Abstract: Techniques for optimizing a static voting type algorithm are presented. Our basic optimization models are based on minimizing communications cost subject to given reliability constraints. Two models are presented; in the first model the reliability constraint is failure tolerance, while in the second it is availability. Other simpler models that are special cases of these two basic models and arise from making simplifying assumptions, such as equal vote values or constant inter-site communications costs, are also discussed. We describe a semi-exhaustive algorithm and efficient heuristics for solving each model. The algorithms utilize a novel signature-based method for identifying equivalent vote combinations, and an efficient procedure for computing availability. Computational results for the various algorithms are also given. Finally, the optimized static algorithm was compared against the available copies method, a dynamic algorithm, to understand the relative performance of the two types of algorithms. We found that if realistic reconfiguration times are assumed, then no one type of algorithm is uniformly better. The factors that influence relative performance have been identified.
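
For a flavor of the availability computation, here is a brute-force Python sketch (the paper's signature-based procedure is far more efficient; independent site failures are assumed):

    from itertools import product

    def quorum_availability(site_avail, votes, quorum):
        # Probability that the sites that happen to be up hold at least
        # `quorum` votes, enumerating all 2^n up/down states.
        total = 0.0
        for state in product((True, False), repeat=len(site_avail)):
            p = 1.0
            for up, a in zip(state, site_avail):
                p *= a if up else (1.0 - a)
            if sum(v for up, v in zip(state, votes) if up) >= quorum:
                total += p
        return total

    # Three sites at 90% availability, one vote each, majority quorum of 2:
    print(round(quorum_availability([0.9, 0.9, 0.9], [1, 1, 1], 2), 4))  # 0.972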

Patent
Verlyn Mark Johnson
14 Oct 1993
TL;DR: The control data locking protocol as mentioned in this paper allows a concurrency control manager and data store to permit concurrent dynamic access between those creating or modifying control data and those using the data in their execution sessions.
Abstract: The control data locking protocol allows a concurrency control manager and data store to permit concurrent dynamic access between those creating or modifying control data and those using the data in their execution sessions. The invention applies to the use of production rules in inferencing sessions or to the use of ordered processing of flow rules by a workflow manager. The invention grants inference locks, read locks and write locks. Pre-lock notification with negotiation is implemented. Post-lock notification is also implemented. Those who are using the rules and those who are modifying them can specify whether changes should be incorporated immediately. In case two requested locks conflict, users resolve the conflict before both are granted. Controlled invocation of conversion processing converts session data as necessary to allow changes in control data to be made while the control data is being used in existing execution sessions.

Proceedings ArticleDOI
19 Apr 1993
TL;DR: A distributed algorithm for dynamic data replication of an object in a distributed system is presented and it is shown that the cost of the algorithm is within a constant factor of the lower bound.
Abstract: A distributed algorithm for dynamic data replication of an object in a distributed system is presented. The algorithm changes the number of replicas and their location in the distributed system to optimize the amount of communication. The algorithm dynamically adapts the replication scheme of an object to the pattern of read-write requests in the distributed system. It is shown that the cost of the algorithm is within a constant factor of the lower bound.

Journal ArticleDOI
TL;DR: Data conflict security (DC-security), a property that implies a system is free of covert channels due to contention for access to data, is introduced, and a definition of DC-security based on noninterference is presented.
Abstract: Concurrent execution of transactions in database management systems (DBMSs) may lead to contention for access to data, which in a multilevel secure DBMS (MLS/DBMS) may lead to insecurity. Security issues involved in database concurrency control for MLS/DBMSs are examined, and it is shown how a scheduler can affect security. Data conflict security (DC-security), a property that implies a system is free of covert channels due to contention for access to data, is introduced. A definition of DC-security based on noninterference is presented. Two properties that constitute a necessary condition for DC-security are introduced along with two simpler necessary conditions. A class of schedulers called output-state-equivalent is identified for which another criterion implies DC-security. The criterion considers separately the behavior of the scheduler in response to those inputs that cause rollback and those that do not. The security properties of several existing scheduling protocols are characterized. Many are found to be insecure.

Proceedings ArticleDOI
Chandrasekaran Mohan
19 Apr 1993
TL;DR: The algorithm for recovery and isolation exploiting semantics for linear hashing with separators (ARIES/LHS), which controls concurrent operations on storage structures by different users, is presented.
Abstract: The algorithm for recovery and isolation exploiting semantics for linear hashing with separators (ARIES/LHS), which controls concurrent operations on storage structures by different users, is presented. The algorithm uses fine-granularity locking, guarantees serializability, and prevents rolling-back transactions from getting involved in deadlocks.

Book
01 Mar 1993
TL;DR: Concurrent Systems answers the need for a book on concurrent programming which serves to integrate operating systems and database concepts, and provides a foundation for later courses in these areas.
Abstract: From the Publisher: Concurrent Systems answers the need for a book on concurrent programming which serves to integrate operating systems and database concepts, and provides a foundation for later courses in these areas.

Proceedings Article
24 Aug 1993
TL;DR: A model of authorization for object-oriented databases which includes a set of policies, a structure for authorization rules and their administration, and evaluation algorithms is developed, and algorithms for access evaluation at compile-time and at run-time are discussed.
Abstract: Object-oriented databases are a recent and important development and many studies of them have been performed. These consider aspects such as data modeling, query languages, performance, and concurrency control. Relatively few studies address their security, a critical aspect in systems like these that have a complex and rich data structuring. We previously developed a model of authorization for object-oriented databases which includes a set of policies, a structure for authorization rules and their administration, and evaluation algorithms. In that model the high-level query requests were resolved into reads and writes at the authorization level. In this paper we extend the set of access primitives to include ways to control the execution of methods or functions. Policy issues are discussed first, and then algorithms for access evaluation at compile-time and at run-time.

Journal ArticleDOI
TL;DR: An interesting feature of the framework is that the execution of read-only transactions becomes completely independent of the underlying concurrency control implementation, and the extension of the multiversion algorithms to a distributed environment becomes very simple.
Abstract: A version control mechanism is proposed that enhances the modularity and extensibility of multiversion concurrency control algorithms. The multiversion algorithms are decoupled into two components: version control and concurrency control. This permits modular development of multiversion protocols and simplifies the task of proving the correctness of these protocols. A set of procedures for version control is described that defines the interface with the version control component. It is shown that the same interface can be used by the database actions of both two-phase locking and time-stamp concurrency control protocols to access multiversion data. An interesting feature of the framework is that the execution of read-only transactions becomes completely independent of the underlying concurrency control implementation. Unlike other multiversion algorithms, read-only transactions in this scheme do not modify any version-related information, and therefore do not interfere with the execution of read-write transactions. The extension of the multiversion algorithms to a distributed environment becomes very simple.
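
The decoupling can be sketched in Python (names invented): the version-control component below serves any concurrency control protocol on the read-write side, while read-only transactions just read as of their start timestamp and touch no lock state.

    import bisect

    class VersionPool:
        def __init__(self):
            self._versions = {}   # key -> sorted list of (commit_ts, value)

        def install(self, key, commit_ts, value):
            # Called when a read-write transaction commits, regardless of
            # whether it ran under 2PL, timestamp ordering, or anything else.
            bisect.insort(self._versions.setdefault(key, []), (commit_ts, value))

        def read_snapshot(self, key, start_ts):
            # Read-only transactions: latest version no newer than start_ts,
            # with no version-related bookkeeping modified by the reader.
            versions = self._versions.get(key, [])
            stamps = [ts for ts, _ in versions]
            i = bisect.bisect_right(stamps, start_ts)
            return versions[i - 1][1] if i else None

    pool = VersionPool()
    pool.install("x", commit_ts=10, value="old")
    pool.install("x", commit_ts=20, value="new")
    assert pool.read_snapshot("x", start_ts=15) == "old"   # stable snapshot
    assert pool.read_snapshot("x", start_ts=25) == "new"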

Journal ArticleDOI
TL;DR: High-Integrity Pearl, an extension to the Process and Experiment Automation Real-Time language (Pearl) which incorporates several principles from the real-time Euclid language, is described and its mechanisms for concurrency control, synchronization, allocation, time-bounded loops, and surveillance of events are discussed.
Abstract: High-Integrity Pearl (HI-Pearl), an extension to the Process and Experiment Automation Real-Time language (Pearl) which incorporates several principles from the real-time Euclid language, is described. The requirements of real-time software and the components of a real-time language are reviewed. HI-Pearl's mechanisms for concurrency control, synchronization, allocation, time-bounded loops, surveillance of events, parallelism, timing constraints, overload detection and handling, storage management, run tracing, and error detection and handling are discussed. HI-Pearl's schedulability analyzer, an automated tool to predict whether real-time software will adhere to its critical timing constraints, is also discussed.

Patent
30 Sep 1993
TL;DR: In this paper, a system for maintaining the integrity of two substantially identical databases across a computer network consisting of two central processing units interconnected by a communications network includes means within a lock manager for maintaining a lock database on one of the central processing unit, means within an application program for initiating a request for a lock to be placed on an identified entity in both of the databases to enable a data update transaction to be performed on the identified entity.
Abstract: A system for maintaining the integrity of two substantially identical databases across a computer network consisting of two central processing units interconnected by a communications network includes means within a lock manager for maintaining a lock database on one of the central processing units, means within an application program for initiating a request for a lock to be placed on an identified entity in both of the databases to enable a data update transaction to be performed on the identified entity in both of the databases, and means within the lock manager for locking the identified entity in both of the databases by establishing an entry for the identified entity in the lock database. Such a system for maintaining database integrity also includes means within the application program for entering the data update transaction in a transaction processor queue, means within a network processor for transmitting the data update transaction to a network processor on the other central processing unit, means within a transaction processor for performing the data update transaction on the identified entity and for initiating an unlock request on the identified entity after the data update transaction has been performed, and means within the lock manager for unlocking the identified entity by modifying the entry for the identified entity in the lock database.

Proceedings ArticleDOI
01 Dec 1993
TL;DR: It is shown that certain properties that have been claimed to be characteristic of real-time systems are sufficient in themselves to guarantee that the system will run serializably, without any extra effort having to be taken.
Abstract: A new approach to the problem of achieving serializability for real-time transaction systems is presented. It is shown that certain properties that have been claimed to be characteristic of real-time systems are sufficient in themselves to guarantee that the system will run serializably, without any extra effort having to be taken. These systems can be said to achieve serializability "for free".

Proceedings ArticleDOI
06 Dec 1993
TL;DR: An algorithm is presented that, for a given hierarchy of a set M of CFSMs, incrementally composes and reduces subsets of C FSMs in M for the detection of global deadlocks.
Abstract: In this paper we present an incremental approach to reachability analysis of distributed programs with synchronous communication and mailbox naming. Each process in a distributed program can be modeled as a communicating finite state machine (CFSM). A set of CFSMs is organized into a hierarchy. We present an algorithm that, for a given hierarchy of a set M of CFSMs, incrementally composes and reduces subsets of CFSMs in M. This incremental reachability analysis guarantees the detection of global deadlocks. We provide an algorithm for selecting a hierarchy for a set of CFSMs and show an incremental analysis of the gas station problem.
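
The pairwise composition step can be sketched in Python (the machine encoding is invented; "!m" sends and "?m" receives rendezvous together under synchronous communication, and a reachable product state with no successors is reported as a potential deadlock; in this simplification, intended terminal states would be flagged too):

    def complementary(a, b):
        return a[1:] == b[1:] and {a[0], b[0]} == {"!", "?"}

    def deadlocks(m1, m2):
        # Each machine is (start_state, {state: [(label, next_state), ...]}).
        (s1, t1), (s2, t2) = m1, m2
        seen, stack, dead = set(), [(s1, s2)], []
        while stack:
            state = stack.pop()
            if state in seen:
                continue
            seen.add(state)
            succs = [(n1, n2)
                     for a, n1 in t1.get(state[0], [])
                     for b, n2 in t2.get(state[1], [])
                     if complementary(a, b)]
            if not succs:
                dead.append(state)      # nothing can rendezvous here
            stack.extend(succs)
        return dead

    # A customer sends a request and then awaits a reply, but this faulty
    # server awaits a second request instead: both block on receives.
    customer = ("c0", {"c0": [("!req", "c1")], "c1": [("?rep", "c2")]})
    server   = ("s0", {"s0": [("?req", "s1")], "s1": [("?req", "s2")]})
    print(deadlocks(customer, server))    # [('c1', 's1')]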