
Showing papers on "Concurrency control published in 1992"


Journal ArticleDOI
Chandrasekaran Mohan1, Don Haderle1, Bruce G. Lindsay1, Hamid Pirahesh1, Peter Schwarz1 
TL;DR: ARIES, as discussed by the authors, is a transaction recovery method applicable not only to database management systems but also to persistent object-oriented languages, recoverable file systems and transaction-based operating systems.
Abstract: DB2™, IMS, and Tandem™ systems. ARIES is applicable not only to database management systems but also to persistent object-oriented languages, recoverable file systems and transaction-based operating systems. ARIES has been implemented, to varying degrees, in IBM's OS/2™ Extended Edition Database Manager, DB2, Workstation Data Save Facility/VM, Starburst and QuickSilver, and in the University of Wisconsin's EXODUS and Gamma database machine.

1,083 citations



Journal ArticleDOI
TL;DR: In the new protocol, transaction processing is shared effectively among nodes storing copies of the data, and both the response time experienced by transactions and the system throughput are improved significantly.
Abstract: A new protocol for maintaining replicated data that can provide both high data availability and low response time is presented. In the protocol, the nodes are organized in a logical grid. Existing protocols are designed primarily to achieve high availability by updating a large fraction of the copies, which provides some (although not significant) load sharing. In the new protocol, transaction processing is shared effectively among nodes storing copies of the data, and both the response time experienced by transactions and the system throughput are improved significantly. The authors analyze the availability of the new protocol and use simulation to study the effect of load sharing on the response time of transactions. They also compare the new protocol with a voting-based scheme.
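The abstract does not spell out the quorum construction, so the sketch below assumes one common grid-quorum formulation: a read quorum takes one copy from each column, and a write quorum takes every copy in one column plus one copy from each remaining column. The class and method names are invented for illustration, and the paper's exact quorums may differ.

```python
# Hypothetical sketch of a grid-style quorum scheme for replicated data.
# Assumes rows x cols copies arranged in a logical grid; the paper's exact
# quorum definitions may differ from this common formulation.

class GridQuorum:
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        # node (r, c) holds one copy of the data item
        self.nodes = [(r, c) for r in range(rows) for c in range(cols)]

    def read_quorum(self, pick_row=0):
        """One copy from each column (here: the copies in one row)."""
        return {(pick_row, c) for c in range(self.cols)}

    def write_quorum(self, full_col=0, pick_row=0):
        """All copies in one column, plus one copy from every other column."""
        column = {(r, full_col) for r in range(self.rows)}
        cover = {(pick_row, c) for c in range(self.cols)}
        return column | cover


if __name__ == "__main__":
    g = GridQuorum(rows=3, cols=4)
    rq, wq = g.read_quorum(pick_row=1), g.write_quorum(full_col=2, pick_row=0)
    # Any read quorum intersects any write quorum (they share the full column),
    # and any two write quorums intersect, which is what makes the scheme safe.
    assert rq & wq
```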

271 citations


Proceedings ArticleDOI
01 Jun 1992
TL;DR: A method called ARIES/IM (Algorithm for Recovery and Isolation Exploiting Semantics for Index Management) is presented for concurrency control and recovery of B+-trees; a subset of ARIES/IM has been implemented in the OS/2 Extended Edition Database Manager.
Abstract: This paper provides a comprehensive treatment of index management in transaction systems. We present a method, called ARIES/IM (Algorithm for Recovery and Isolation Exploiting Semantics for Index Management), for concurrency control and recovery of B+-trees. ARIES/IM guarantees serializability and uses write-ahead logging for recovery. It supports very high concurrency and good performance by (1) treating as the lock of a key the same lock as the one on the corresponding record data in a data page (e.g., at the record level), (2) not acquiring, in the interest of permitting very high concurrency, commit duration locks on index pages even during index structure modification operations (SMOs) like page splits and page deletions, and (3) allowing retrievals, inserts, and deletes to go on concurrently with SMOs. During restart recovery, any necessary redos of index changes are always performed in a page-oriented fashion (i.e., without traversing the index tree) and, during normal processing and restart recovery, whenever possible undos are performed in a page-oriented fashion. ARIES/IM permits different granularities of locking to be supported in a flexible manner. A subset of ARIES/IM has been implemented in the OS/2 Extended Edition Database Manager. Since the locking ideas of ARIES/IM have general applicability, some of them have also been implemented in SQL/DS and the VM Shared File System, even though those systems use the shadow-page technique for recovery.
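As a rough illustration of points (1) and (2) above, the sketch below gives a key the same commit-duration lock as its underlying record, while the leaf page is only latched for the duration of the physical insert. The classes and names are toy stand-ins invented here; they are not ARIES/IM's data structures, and SMO handling is not modeled.

```python
# Illustrative sketch only: record-level locks protect keys (the key's lock is
# the record's lock), while index pages are latched briefly for the physical
# operation and never locked to commit. Not the ARIES/IM implementation.
import threading

class LeafPage:
    def __init__(self):
        self.latch = threading.Lock()   # short-duration physical latch
        self.keys = {}                  # key -> record id

class LockManager:
    def __init__(self):
        self._locks = {}                # record id -> owning transaction

    def lock_record(self, txn, rid):
        # commit-duration logical lock on the record (and thus on its key)
        owner = self._locks.setdefault(rid, txn)
        if owner != txn:
            raise RuntimeError(f"{txn} must wait for {owner}")

def insert_key(txn, page, key, rid, lm):
    lm.lock_record(txn, rid)            # logical lock held until commit
    with page.latch:                    # physical latch released immediately
        page.keys[key] = rid

if __name__ == "__main__":
    page, lm = LeafPage(), LockManager()
    insert_key("T1", page, key=42, rid="R42", lm=lm)
```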

254 citations


Journal ArticleDOI
TL;DR: The authors extend the notion of structural testing criteria to concurrent programs and propose a hierarchy of supporting structural testing techniques, suitable for Ada or CSP-like languages.
Abstract: Although structural testing techniques are among the weakest available with regard to developing confidence in sequential programs, they are not without merit. The authors extend the notion of structural testing criteria to concurrent programs and propose a hierarchy of supporting structural testing techniques. Coverage criteria described include concurrency state coverage, state transition coverage and synchronization coverage. Requisite support tools include a static concurrency analyzer and either a program transformation system or a powerful run-time monitor. Also helpful is a controllable run-time scheduler. The techniques proposed are suitable for Ada or CSP-like languages. Best results are obtained for programs having only static naming of tasking objects.

217 citations


Journal ArticleDOI
TL;DR: A property known as recoverability can be used to decrease the delay involved in processing non-commuting operations while still avoiding cascading aborts; to ensure the serializability of transactions, the recoverability relationship between transactions is forced to be acyclic.
Abstract: The concurrency of transactions executing on atomic data types can be enhanced through the use of semantic information about operations defined on these types. Hitherto, commutativity of operations has been exploited to provide enhanced concurrency while avoiding cascading aborts. We have identified a property known as recoverability which can be used to decrease the delay involved in processing noncommuting operations while still avoiding cascading aborts. When an invoked operation is recoverable with respect to an uncommitted operation, the invoked operation can be executed by forcing a commit dependency between the invoked operation and the uncommitted operation; the transaction invoking the operation will not have to wait for the uncommitted operation to abort or commit. Further, this commit dependency only affects the order in which the operations should commit, if both commit; if either operation aborts, the other can still commit thus avoiding cascading aborts. To ensure the serializability of transactions, we force the recoverability relationship between transactions to be acyclic. Simulation studies, based on the model presented by Agrawal et al. [1], indicate that using recoverability, the turnaround time of transactions can be reduced. Further, our studies show enhancement in concurrency even when resource constraints are taken into consideration. The magnitude of enhancement is dependent on the resource contention; the lower the resource contention, the higher the improvement.
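A minimal sketch of the scheduling idea follows: when an invoked operation is recoverable with respect to a pending, uncommitted operation, it is executed immediately and a commit-order dependency is recorded instead of blocking, and the dependency graph is kept acyclic. The recoverability table and names below are placeholders, not the paper's definitions.

```python
# Hypothetical sketch: execute a recoverable operation immediately and record
# a commit-order dependency rather than waiting. Tables are placeholders.

class Scheduler:
    def __init__(self, recoverable):
        self.recoverable = recoverable        # (new_op, pending_op) -> bool
        self.pending = []                     # (txn, op) not yet committed
        self.commit_before = []               # (t1, t2): t1 must commit first

    def invoke(self, txn, op):
        for other_txn, other_op in self.pending:
            if other_txn == txn:
                continue
            if self.recoverable.get((op, other_op), False):
                # proceed now, but other_txn must commit before txn
                self._add_dependency(other_txn, txn)
            else:
                raise RuntimeError(f"{txn}:{op} must wait for {other_txn}:{other_op}")
        self.pending.append((txn, op))

    def _add_dependency(self, first, second):
        edge = (first, second)
        if edge not in self.commit_before:
            self.commit_before.append(edge)
        if self._has_cycle():
            raise RuntimeError("commit dependencies would become cyclic")

    def _has_cycle(self):
        # simple DFS cycle check over the commit-dependency graph
        graph = {}
        for a, b in self.commit_before:
            graph.setdefault(a, []).append(b)
        seen, stack = set(), set()

        def visit(n):
            if n in stack:
                return True
            if n in seen:
                return False
            seen.add(n); stack.add(n)
            cyclic = any(visit(m) for m in graph.get(n, []))
            stack.discard(n)
            return cyclic

        return any(visit(n) for n in list(graph))


if __name__ == "__main__":
    # assume an increment is recoverable w.r.t. a pending increment
    s = Scheduler(recoverable={("incr", "incr"): True})
    s.invoke("T1", "incr")
    s.invoke("T2", "incr")       # allowed: T1 just has to commit before T2
    print(s.commit_before)       # [('T1', 'T2')]
```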

212 citations


Journal ArticleDOI
TL;DR: The Datacycle™ architecture is a radical approach to database management that attempts to achieve a full separation of concerns between the needs of many applications and the database management system supporting those needs, even for applications with unusually high demands for flexibility in access to data and for performance.
Abstract: A database system is, by one definition [4], "a system whose overall purpose is to maintain information and to make that information available on demand." Database systems have evolved as part of a response to the desire to separate concerns. That is, the "application" should be responsible for manipulation of information and for its presentation to a user, and the "database system" should perform application-independent data management operations. A true separation, however, is difficult to achieve in practice. Often, the internal data structures and physical placement of data within the database system are optimized to achieve efficient performance for those queries known to be critical for the target application. The data schema itself may also reflect performance concerns. For particularly demanding applications, such as information filtering, database management may be completely customized to meet the needs of the application and even embedded within the application itself. The Datacycle™ architecture [2, 10] is a radical approach to database management. It attempts to achieve a full separation of concerns between the needs of many applications and the database management system supporting those needs, even for applications with unusually high demands for flexibility in access to data and for performance. In the Datacycle architecture, the contents of the entire database are broadcast cyclically over high-bandwidth communication facilities to multiple very large-scale integration (VLSI) data filters that perform complex associative search operations in parallel. The cyclic broadcasting and hardware filtering of information allow a different set of technology trade-offs affecting system architecture, functionality, and performance. The result is a unique combination of functionality and performance to support complex information management applications: • As in most database systems based on the relational model [3], the interface between the

181 citations


Proceedings ArticleDOI
01 Mar 1992
TL;DR: A transaction model for multidatabase system (MDBS) applications in which global subtransactions may be either compensatable or retriable is presented, along with a commit protocol and a concurrency control scheme that ensure all generated schedules are correct.
Abstract: A transaction model for multidatabase system (MDBS) applications in which global subtransactions may be either compensatable or retriable is presented. In this model compensation and retrying are used for recovery purposes. However, since such executions may no longer consist of atomic transactions, a correctness criterion that ensures that transactions see consistent database states is necessary. A commit protocol and a concurrency control scheme that ensure that all generated schedules are correct are also presented. The commit protocol eliminates the problem of blocking, which is characteristic of the standard 2PC protocol. The concurrency control protocol can be used in any MDBS environment irrespective of the concurrency control protocol followed by the local DBMSs in order to ensure serializability.
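A minimal sketch of the execution model described above: compensatable subtransactions commit at their local sites right away and are undone by compensation if the global transaction aborts, while retriable ones are retried until they succeed. The function names and the retry policy are illustrative assumptions, not the paper's protocol.

```python
# Illustrative sketch of executing global subtransactions that are either
# compensatable (commit locally, undo via compensation on global abort) or
# retriable (keep retrying until they succeed). Not the paper's exact protocol.

def run_global_transaction(subtransactions):
    """subtransactions: list of dicts with keys
       'run', 'kind' ('compensatable' | 'retriable'), and 'compensate'."""
    done = []                                   # compensatable steps to undo
    for st in subtransactions:
        if st["kind"] == "compensatable":
            try:
                st["run"]()
                done.append(st)
            except Exception:
                # global abort: compensate committed steps in reverse order
                for prev in reversed(done):
                    prev["compensate"]()
                return "aborted"
        else:                                   # retriable
            while True:
                try:
                    st["run"]()
                    break
                except Exception:
                    continue                    # retry until it succeeds
    return "committed"


if __name__ == "__main__":
    log = []
    steps = [
        {"kind": "compensatable",
         "run": lambda: log.append("debit A"),
         "compensate": lambda: log.append("credit A back")},
        {"kind": "retriable",
         "run": lambda: log.append("credit B"),
         "compensate": None},
    ]
    print(run_global_transaction(steps), log)
```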

167 citations


Proceedings ArticleDOI
03 Feb 1992
TL;DR: A novel approach to multiversion concurrency control that allows high-performance transaction systems to support long-running queries and has the potential for reducing the cost of versioning by grouping together queries to run against the same transaction-consistent view of the database is discussed.
Abstract: The authors discuss a novel approach to multiversion concurrency control that allows high-performance transaction systems to support long-running queries. The approach extends the multiversion locking algorithm developed by Computer Corporation of America by using record-level versioning and reserving a portion of each data page for caching prior versions that are potentially needed for the serializable execution of queries; on-page caching also enables an efficient approach to garbage collection of old versions. In addition, view sharing is introduced, which has the potential for reducing the cost of versioning by grouping together queries to run against the same transaction-consistent view of the database. Results from a simulation study are presented that indicate the approach is a viable alternative to level-one and level-two consistency locking when the portion of each data page reserved for prior versions is chosen appropriately.
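A minimal sketch of the record-level versioning idea: an updater pushes the prior version of a record into a small per-page cache before overwriting it, and a long-running query reads the newest version no younger than its start time. The data structures below are illustrative assumptions, not the paper's page layout.

```python
# Illustrative sketch of record-level versioning with a per-page cache of
# prior versions; long queries read a transaction-consistent view as of their
# start time. Structures are invented for the example, not the paper's layout.

class Page:
    def __init__(self, cache_slots=4):
        self.current = {}                 # record id -> (commit_ts, value)
        self.version_cache = {}           # record id -> list of older versions
        self.cache_slots = cache_slots

    def update(self, rid, value, commit_ts):
        if rid in self.current:
            # push the prior version into the on-page cache before overwriting
            self.version_cache.setdefault(rid, []).append(self.current[rid])
            self.version_cache[rid] = self.version_cache[rid][-self.cache_slots:]
        self.current[rid] = (commit_ts, value)

    def read_as_of(self, rid, query_start_ts):
        """Return the newest committed version not younger than the query."""
        ts, value = self.current[rid]
        if ts <= query_start_ts:
            return value
        for ts, value in reversed(self.version_cache.get(rid, [])):
            if ts <= query_start_ts:
                return value
        raise LookupError("needed version was garbage-collected")


if __name__ == "__main__":
    p = Page()
    p.update("r1", "v1", commit_ts=10)
    p.update("r1", "v2", commit_ts=20)
    print(p.read_as_of("r1", query_start_ts=15))   # -> 'v1'
```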

118 citations


Proceedings ArticleDOI
TL;DR: This paper illustrates the problem and shows how the encoding of the software development process in process-centered SDEs can be used to provide more appropriate concurrency control, and presents the concurrency control mechanism developed for the MARVEL SDE.
Abstract: Large scale software development processes often require cooperation among multiple teams of developers. To support such processes, SDEs must allow developers to interleave their access to the various components of the projects. This interleaving can lead to interference, which may corrupt the project components. In traditional database systems, the problem is avoided by enforcing serializability among concurrent transactions. In traditional software development, the problem has been addressed by introducing version and configuration management techniques combined with checkout/checkin mechanisms. Unfortunately, both of these solutions are too restrictive for SDEs because they enforce serialization of access to data, making cooperation unacceptably difficult. In this paper, I illustrate the problem and show how the encoding of the software development process in process-centered SDEs can be used to provide more appropriate concurrency control. I present the concurrency control mechanism I developed for the MARVEL SDE. This mechanism uses the process model in MARVEL to support the degree of cooperation specified in the development process.

96 citations


Proceedings ArticleDOI
Kun-Lung Wu1, Philip S. Yu1, Calton Pu
03 Feb 1992
TL;DR: The authors present divergence control methods for epsilon-serializability (ESR) in centralized databases, including designs of DC methods for well-known inconsistency specifications such as absolute value, age, and total number of nonserializably read data items.
Abstract: The authors present divergence control methods for epsilon-serializability (ESR) in centralized databases. ESR alleviates the strictness of serializability (SR) in transaction processing by allowing for limited inconsistency. The bounded inconsistency is automatically maintained by divergence control (DC) methods in a way similar to the manner in which SR is maintained by concurrency control mechanisms, but DC for ESR allows more concurrency. Concrete representative instances of divergence-control methods are described based on two-phase locking, timestamp ordering, and optimistic approaches. The applicability of ESR is demonstrated by presenting the designs of DC methods using other well-known inconsistency specifications, such as absolute value, age, and total number of nonserializably read data items.
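A minimal sketch of the lock-based flavor of this idea: a divergence controller in the style of two-phase locking lets a query read past a write lock, but only while the inconsistency it has accumulated stays within its epsilon bound. The bookkeeping below is an illustrative simplification; the paper's import/export-limit accounting is richer.

```python
# Illustrative sketch of lock-based divergence control for epsilon-
# serializability: a query may read a write-locked item as long as the number
# of potentially inconsistent reads stays within its epsilon bound.

class DivergenceController:
    def __init__(self):
        self.write_locks = {}             # item -> updating transaction
        self.imported = {}                # query -> inconsistent reads so far

    def write_lock(self, txn, item):
        owner = self.write_locks.setdefault(item, txn)
        if owner != txn:
            raise RuntimeError(f"{txn} blocks on {owner}")

    def read(self, query, item, epsilon):
        if item in self.write_locks:
            used = self.imported.get(query, 0)
            if used + 1 > epsilon:
                raise RuntimeError(f"{query} would exceed its epsilon bound")
            self.imported[query] = used + 1       # tolerate bounded divergence
        return f"value-of-{item}"


if __name__ == "__main__":
    dc = DivergenceController()
    dc.write_lock("T1", "x")
    dc.read("Q1", "x", epsilon=2)          # allowed: 1 inconsistent read
    dc.read("Q1", "x", epsilon=2)          # allowed: 2 inconsistent reads
    try:
        dc.read("Q1", "x", epsilon=2)      # third one exceeds the bound
    except RuntimeError as e:
        print(e)
```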

Journal ArticleDOI
TL;DR: A number of concurrency control concepts and transaction scheduling techniques that are applicable to high contention environments, and that do not rely on database semantics to reduce contention are considered.
Abstract: Future transaction processing systems may have substantially higher levels of concurrency due to reasons which include: (1) increasing disparity between processor speeds and data access latencies, (2) large numbers of processors, and (3) distributed databases. Another influence is the trend towards longer or more complex transactions. A possible consequence is substantially more data contention, which could limit total achievable throughput. In particular, it is known that the usual locking method of concurrency control is not well suited to environments where data contention is a significant factor. Here we consider a number of concurrency control concepts and transaction scheduling techniques that are applicable to high contention environments, and that do not rely on database semantics to reduce contention. These include access invariance and its application to prefetching of data, approximations to essential blocking such as wait depth limited scheduling, and phase dependent control. The performance of various concurrency control methods based on these concepts is studied using detailed simulation models. The results indicate that the new techniques can offer substantial benefits for systems with high levels of data contention.
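Wait-depth limited scheduling, one of the approximations to essential blocking mentioned above, can be sketched as follows for depth 1: a transaction may wait behind a lock holder only if that holder is not itself waiting; otherwise a transaction is aborted to keep wait chains shallow. The abort choice below is a placeholder policy, not the paper's exact rule set.

```python
# Illustrative sketch of wait-depth-limited (WDL) scheduling with depth 1: a
# requester may wait only behind a non-waiting lock holder, so wait chains
# never grow deeper than one. The abort decision here is a placeholder policy.

class WDLScheduler:
    def __init__(self):
        self.holder = {}        # item -> transaction holding its lock
        self.waiting_for = {}   # transaction -> transaction it waits behind

    def request(self, txn, item):
        owner = self.holder.get(item)
        if owner is None or owner == txn:
            self.holder[item] = txn
            return "granted"
        if owner in self.waiting_for:
            # granting the wait would create a chain of depth 2: abort instead
            return self.abort(txn)
        self.waiting_for[txn] = owner
        return "waiting"

    def abort(self, txn):
        self.waiting_for.pop(txn, None)
        for item, owner in list(self.holder.items()):
            if owner == txn:
                del self.holder[item]
        return f"{txn} aborted (wait depth limit)"


if __name__ == "__main__":
    s = WDLScheduler()
    print(s.request("T1", "x"))   # granted
    print(s.request("T2", "y"))   # granted
    print(s.request("T1", "y"))   # T1 now waits behind T2 (depth 1)
    print(s.request("T3", "x"))   # T3 would wait behind a waiter -> aborted
```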

Proceedings ArticleDOI
06 Dec 1992
TL;DR: An adaptive flow synchronization protocol that permits synchronized delivery of data to and from geographically distributed sites is presented; its contributions include an adaptive synchronization delay, the flexibility to maintain multiple synchronization groups, and a modular architecture.
Abstract: High-speed networks will facilitate the advent of multimedia and distributed applications. An adaptive flow synchronization protocol that permits synchronized delivery of data to and from geographically distributed sites is presented. Applications include inter-stream synchronization, synchronized delivery of information in a multisite conference, and synchronization for concurrency control in distributed computations. The contributions of this protocol in the area of flow synchronization are the introduction of an adaptive synchronization delay, the flexibility to maintain multiple synchronization groups, and the use of a modular architecture that permits the client application to tailor synchronization calculations to its service requirements. Network protocols capable of maintaining network clock synchronization in the millisecond range are used.

Journal ArticleDOI
E. Levy1, A. Silberschatz
TL;DR: An incremental scheme for performing recovery in main memory database systems (MMDBs), in parallel with transaction execution, is presented and a page-based incremental restart algorithm that enables the resumption of transaction processing as soon as the system is up is proposed.
Abstract: Recovery activities, like checkpointing and restart, in traditional database management systems are performed in a quiescent state where no transactions are active. This approach impairs the performance of online transaction processing systems, especially when a large volatile memory is used. An incremental scheme for performing recovery in main memory database systems (MMDBs), in parallel with transaction execution, is presented. A page-based incremental restart algorithm that enables the resumption of transaction processing as soon as the system is up is proposed. Pages are recovered individually and according to the demands of the post-crash transactions. A method for propagating updates from main memory to the backup database on disk is also provided. The emphasis is on decoupling the I/O activities related to the propagation to disk from the forward transaction execution in memory. The authors also construct a high-level recovery manager based on operation logging on top of the page-based algorithms. The proposed algorithms are motivated by the characteristics of large MMDBs, and exploit the technology of nonvolatile RAM.
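A minimal sketch of the page-based incremental restart idea: transaction processing resumes immediately after a crash, and a page is recovered (its redo records replayed) the first time a post-crash transaction touches it. The log layout and redo step below are illustrative assumptions, not the paper's algorithms.

```python
# Illustrative sketch of page-based incremental restart: after a crash, pages
# are recovered lazily, on first access by a post-crash transaction, by
# replaying per-page redo records. Log layout and redo are invented here.

class IncrementalRestartDB:
    def __init__(self, backup_pages, redo_log):
        self.pages = dict(backup_pages)       # page id -> stale page contents
        self.redo_log = redo_log              # page id -> ordered redo records
        self.recovered = set()

    def _recover(self, pid):
        for record_id, value in self.redo_log.get(pid, []):
            self.pages[pid][record_id] = value    # reapply committed updates
        self.recovered.add(pid)

    def read(self, pid, record_id):
        if pid not in self.recovered:
            self._recover(pid)                # recover this page on demand
        return self.pages[pid][record_id]


if __name__ == "__main__":
    db = IncrementalRestartDB(
        backup_pages={"P1": {"r1": "old"}, "P2": {"r2": "old"}},
        redo_log={"P1": [("r1", "new")]})
    print(db.read("P1", "r1"))   # 'new'  (P1 recovered on this access)
    print(db.read("P2", "r2"))   # 'old'  (P2 had no pending redo records)
```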

Proceedings ArticleDOI
01 Jun 1992
TL;DR: A range of concurrency control schemes are developed that ensure global serializability in an MDBS environment, and at the same time meet the requirements of a centralized database system.
Abstract: A Multidatabase System (MDBS) is a collection of local database management systems, each of which may follow a different concurrency control protocol. This heterogeneity makes the task of ensuring global serializability in an MDBS environment difficult. In this paper, we reduce the problem of ensuring global serializability to the problem of ensuring serializability in a centralized database system. We identify characteristics of the concurrency control problem in an MDBS environment, and additional requirements on concurrency control schemes for ensuring global serializability. We then develop a range of concurrency control schemes that ensure global serializability in an MDBS environment, and at the same time meet the requirements. Finally, we study the tradeoffs between the complexities of the various schemes and the degree of concurrency provided by each of them.

Journal ArticleDOI
TL;DR: This paper describes an approach based on the use of active objects with essentially explicit interfaces and bindings, and composition as a pragmatic alternative to inheritance, which complements object-oriented programming.
Abstract: The popularity of the object-oriented programming paradigm has stimulated research into its use for parallel and distributed programming. The major issues that affect such use are concurrency control, object interfaces, binding and inheritance. In this paper, we discuss the relative merits of current solutions to these issues and describe an approach based on the use of active objects with essentially explicit interfaces and bindings, and composition as a pragmatic alternative to inheritance. The key feature of our approach is the use of a configuration language to define program structure as a set of objects and their bindings. The configuration language includes facilities for hierarchic definition of composite objects, parameterisation of objects, conditional configurations and recursive definition of objects. This separate and explicit description of program structure complements object-oriented programming. The approach is illustrated by examples from the REX environment for the development of parallel and distributed software.

Proceedings ArticleDOI
01 Dec 1992
TL;DR: Ensemble is a prototype lock-based approach to object-oriented concurrent graphics editing that relies on Unix* 4.3bsd sockets and can be used as a stand-alone program or as an application in the University of Florida’s distributed conferencing system (DCS).
Abstract: Ensemble is an X-Windows based, object-oriented graphics editor based on the tgif graphics editor from UCLA. It relies on Unix 4.3BSD sockets and can be used as a stand-alone program or as an application in the University of Florida's distributed conferencing system (DCS). It uses implicitly placed write locks for concurrency control, with locks placed when an object is selected and removed when it is deselected. Multiple users may read or edit a file concurrently, with all users receiving updates whenever a lock is removed. Pointers are shared by mutual consent, so that users may collaborate to the degree desired. Ensemble is a prototype lock-based approach to object-oriented concurrent graphics editing.
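A minimal sketch of the implicit locking rule described above: selecting a graphical object acquires a write lock on it, and deselecting releases the lock and broadcasts the update to the other participants. The names are illustrative; Ensemble's actual socket protocol is not modeled.

```python
# Illustrative sketch of Ensemble-style implicit locking: selecting an object
# takes a write lock on it; deselecting releases the lock and pushes the
# change to every other user. Ensemble's real DCS/socket protocol is not shown.

class SharedCanvas:
    def __init__(self):
        self.locks = {}          # object id -> user holding its write lock
        self.listeners = []      # callbacks invoked when an update is released

    def select(self, user, obj):
        owner = self.locks.setdefault(obj, user)
        if owner != user:
            raise RuntimeError(f"{obj} is being edited by {owner}")

    def deselect(self, user, obj, new_state):
        if self.locks.get(obj) == user:
            del self.locks[obj]
            for notify in self.listeners:        # push update to all peers
                notify(obj, new_state)


if __name__ == "__main__":
    canvas = SharedCanvas()
    canvas.listeners.append(lambda obj, s: print(f"update: {obj} -> {s}"))
    canvas.select("alice", "rect-7")
    canvas.deselect("alice", "rect-7", new_state="moved to (10, 20)")
```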

Proceedings Article
23 Aug 1992
TL;DR: CO generalizes the popular Strong-Strict Two Phase Locking concept (S-S2PL; “release locks applied on behalf of a transaction only after the transaction has ended”) and exhibits deadlock-free executions when implemented as nonblocking (optimistic) concurrency control mechanisms.
Abstract: Commitment Ordering (CO) is a serializability concept, that allows global serializability to be effectively achieved across multiple autonomous Resource Managers (RMs). The RMs may use different (any) concurrency control mechanisms. Thus, CO provides a solution for the long standing global serializability problem. RM autonomy means that no concurrency control information is shared with other entities, except Atomic Commitment (AC) protocol (e.g. Two Phase Commitment 2PC) messages. CO is a necessary condition for guaranteeing global serializability across autonomous RMs. CO generalizes the popular Strong-Strict Two Phase Locking concept (S-S2PL; “release locks applied on behalf of a transaction only after the transaction has ended”). While S-S2PL is subject to deadlocks, CO exhibits deadlock-free executions when implemented as nonblocking (optimistic) concurrency control mechanisms.
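A minimal sketch of a generic commitment-ordering enforcer: when a transaction is about to commit, every still-undecided transaction with a conflict directed into it (one that would have had to commit first) is aborted, so that commit order follows conflict order. The conflict bookkeeping below is invented for illustration and is not the paper's CO algorithm as stated.

```python
# Illustrative sketch of enforcing Commitment Ordering: when T commits, any
# still-undecided transaction whose conflicting operation preceded one of T's
# (and thus should commit first) is aborted. Bookkeeping is invented here.

class COCoordinator:
    def __init__(self):
        self.precedes = []        # (t1, t2): t1 has a conflict directed into t2
        self.decided = set()      # committed or aborted transactions

    def record_conflict(self, earlier_txn, later_txn):
        self.precedes.append((earlier_txn, later_txn))

    def commit(self, txn):
        aborted = []
        for t1, t2 in self.precedes:
            if t2 == txn and t1 not in self.decided:
                # t1 should have committed before txn; abort it instead
                self.decided.add(t1)
                aborted.append(t1)
        self.decided.add(txn)
        return aborted


if __name__ == "__main__":
    co = COCoordinator()
    co.record_conflict("T1", "T2")   # T1's write precedes T2's conflicting read
    print(co.commit("T2"))           # ['T1'] -- T1 aborted to preserve CO
```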

Journal ArticleDOI
TL;DR: A parametric formal proof of liveness is developed based on the structure and initial state of the CGSPN model to study the correctness and performance of the Lamport concurrent algorithm to solve the mutual exclusion problem on machines lacking an atomic test and set instruction.
Abstract: A colored generalized stochastic Petri net (CGSPN) model was used to study the correctness and performance of the Lamport concurrent algorithm to solve the mutual exclusion problem on machines lacking an atomic test and set instruction. In particular, a parametric formal proof of liveness is developed based on the structure and initial state of the model. The performance evaluation is based on a Markovian analysis that exploits the symmetries of the model to reduce the cost of the numerical solution. Both kinds of analysis are supported by efficient algorithms. The potential of the GSPN modeling technique is illustrated on an academic but nontrivial example of an application from distributed systems.

Proceedings ArticleDOI
02 Dec 1992
TL;DR: It is shown that the earliest deadline scheduling principle, upon which a number of existing priority assignment policies are based, discriminates significantly against longer transactions; a novel dynamic priority assignment scheme is developed that improves the chances for long transactions to meet their time constraints, thereby providing a fairer mechanism for multiclass RTDBS transaction scheduling.
Abstract: The issue of priority assignment is addressed, in firm multiclass real-time database systems (RTDBSs) where classes are distinguished by their mean sizes. It is shown that the earliest deadline scheduling principle, upon which a number of existing priority assignment policies are based, discriminates significantly against longer transactions. This observation has motivated the development of a novel dynamic priority assignment scheme that improves the chances for long transactions to meet their time constraints, thereby providing a fairer mechanism for use in multiclass RTDBS transaction scheduling.

Proceedings ArticleDOI
22 Jun 1992
TL;DR: The aim is to develop a process theory that can be regarded as a kernel for languages based on asynchronous communication, like data flow, concurrent logic languages, and concurrent constraint programming.
Abstract: The authors study the paradigm of asynchronous process communication, as contrasted with the synchronous communication mechanism that is present in process algebra frameworks such as CCS, CSP, and ACP. They investigate semantics and axiomatizations with respect to various observability criteria: bisimulation, traces and abstract traces. The aim is to develop a process theory that can be regarded as a kernel for languages based on asynchronous communication, like data flow, concurrent logic languages, and concurrent constraint programming.

Proceedings ArticleDOI
01 Jun 1992
TL;DR: This work finds the finest partitioning of a set of transactions TranSet with the following property: if the partitioned transactions execute serializably, then TranSet executes serializably. This permits users to obtain more concurrency while preserving correctness.
Abstract: Chopping transactions into pieces is good for performance but may lead to non-serializable executions. Many researchers have reacted to this fact by either inventing new concurrency control mechanisms, weakening serializability, or both. We adopt a different approach. We assume a user who has only the degree 2 and degree 3 consistency options offered by the vast majority of conventional database systems; and knows the set of transactions that may run during a certain interval (users are likely to have such knowledge for online or real-time transactional applications). Given this information, our algorithm finds the finest partitioning of a set of transactions TranSet with the following property: if the partitioned transactions execute serializably, then TranSet executes serializably. This permits users to obtain more concurrency while preserving correctness. Besides obtaining more inter-transaction concurrency, chopping transactions in this way can enhance intra-transaction parallelism. The algorithm is inexpensive, running in O(n x (e + m)) time using a naive implementation where n is the number of edges in the conflict graph among the transactions, and m is the maximum number of accesses of any transaction. This makes it feasible to add as a tuning knob to practical systems.
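The correctness test behind chopping can be sketched as a graph check: pieces are nodes, sibling (S) edges join pieces of the same original transaction, conflict (C) edges join conflicting pieces of different transactions, and a chopping is safe if no cycle contains both an S edge and a C edge. The sketch below assumes that formulation; the finest-chopping search itself is not shown, and the brute-force cycle enumeration is only meant for tiny illustrative inputs.

```python
# Sketch of the SC-cycle test for a transaction chopping: a chopping is safe
# if no cycle in the chopping graph contains both an S (sibling) edge and a
# C (conflict) edge. Brute-force enumeration; fine for small examples only.
import itertools

def has_sc_cycle(pieces, s_edges, c_edges):
    edges = [(a, b, "S") for a, b in s_edges] + [(a, b, "C") for a, b in c_edges]
    for length in range(2, len(pieces) + 1):
        for cycle in itertools.permutations(pieces, length):
            labels, ok = [], True
            for i in range(length):
                a, b = cycle[i], cycle[(i + 1) % length]
                lab = next((l for x, y, l in edges if {x, y} == {a, b}), None)
                if lab is None:
                    ok = False
                    break
                labels.append(lab)
            if ok and "S" in labels and "C" in labels:
                return True
    return False


if __name__ == "__main__":
    # T1 chopped into T1a and T1b; T2 conflicts with both pieces, so the
    # chopping graph has an SC-cycle and this chopping would not be safe.
    print(has_sc_cycle(
        pieces=["T1a", "T1b", "T2"],
        s_edges=[("T1a", "T1b")],
        c_edges=[("T1a", "T2"), ("T1b", "T2")]))   # True
```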

Proceedings ArticleDOI
03 Feb 1992
TL;DR: The authors examine a priority-driven locking protocol called integrated real-time locking protocol, which is free of deadlock, and in addition, a high-priority transaction is not blocked by uncommitted lower priority transactions.
Abstract: The authors examine a priority-driven locking protocol called the integrated real-time locking protocol. They show that this protocol is free of deadlock and that, in addition, a high-priority transaction is not blocked by uncommitted lower priority transactions. The protocol does not assume any knowledge about the data requirements or the execution time of each transaction. This makes the protocol widely applicable, since in many actual environments such information may not be readily available. Using a database prototyping environment, it was shown that the proposed protocol offers a performance improvement over the two-phase locking protocol.

Proceedings ArticleDOI
04 May 1992
TL;DR: A multiversion timestamping protocol is presented that has several very desirable properties: it is secure, produces multiversion histories that are equivalent to one-serial histories in which transactions are placed in a timestamp order, avoids livelocks, and can be implemented using single-level untrusted schedulers.
Abstract: Two different areas related to the concurrency control in multilevel secure, multiversion databases are considered. First, the issue of correctness criteria that are weaker than one-copy serializability are explored. The requirements for a weaker correctness criterion are that it should preserve database consistency in some meaningful way, and moreover, it should be implementable in a way that does not require the scheduler to be trusted. Three different, increasingly stricter notions of serializability that can serve as substitutes for one-copy serializability are proposed. Second, a multiversion timestamping protocol is presented that has several very desirable properties: it is secure, produces multiversion histories that are equivalent to one-serial histories in which transactions are placed in a timestamp order, avoids livelocks, and can be implemented using single-level untrusted schedulers.
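A minimal sketch of the multiversion timestamping idea in its generic form: each write creates a new version stamped with the writer's timestamp, and a read returns the latest version with a timestamp no greater than the reader's, so transactions appear in timestamp order. The multilevel-security and covert-channel aspects of the paper's protocol are not modeled here.

```python
# Generic multiversion timestamp-ordering sketch: writes create timestamped
# versions; a read returns the newest version no later than the reader's
# timestamp, so the history is equivalent to a serial one in timestamp order.

class MVStore:
    def __init__(self):
        self.versions = {}               # item -> list of (write_ts, value)

    def write(self, ts, item, value):
        self.versions.setdefault(item, []).append((ts, value))

    def read(self, ts, item):
        visible = [(t, v) for t, v in self.versions.get(item, []) if t <= ts]
        if not visible:
            raise LookupError("no version visible at this timestamp")
        return max(visible)[1]           # newest version not after the reader


if __name__ == "__main__":
    db = MVStore()
    db.write(ts=5, item="x", value="v5")
    db.write(ts=9, item="x", value="v9")
    print(db.read(ts=7, item="x"))       # 'v5': latest version with ts <= 7
```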

01 Jan 1992
TL;DR: In this article, the authors describe the problems involved in integrating concurrency control into object-oriented database systems, and propose an adaptation of the locking technique to satisfy them, without unnecessarily restricting concurrency.
Abstract: In this paper, we describe the problems involved in integrating concurrency control into object-oriented database systems. The object-oriented approach places specific constraints on concurrency between transactions, which we discuss. We then propose an adaptation of the locking technique to satisfy them, without unnecessarily restricting concurrency. Finally, we analyse in detail the impacts of both the transaction model and the locking granularity imposed by the underlying system. The solutions proposed are illustrated with the O2 system developed by the Altair GIP.

Proceedings ArticleDOI
17 Mar 1992
TL;DR: Two variants of a multiversion lock-based concurrency control algorithm and two priority-driven preemptive scheduling algorithms, rate-monotone and earliest-deadline-first, are evaluated and results indicate how well data temporal consistency is maintained for different workload characteristics.
Abstract: A model of hard real-time systems where tasks access shared data is presented, and temporal consistency is defined in terms of the age and dispersion of data. Based on this model, two variants of a multiversion lock-based concurrency control algorithm and two priority-driven preemptive scheduling algorithms, rate-monotone and earliest-deadline-first, are evaluated. Simulation results indicate how well data temporal consistency is maintained for different workload characteristics. These results can be used to guide the design of applications; by avoiding undesirable parameters, it is made easier to maintain data temporal consistency.

Journal ArticleDOI
Philip S. Yu1, Daniel M. Dias1
TL;DR: In a high data contention environment where locking is inferior to OCC, analysis shows that performance can be substantially improved by using this hybrid approach; the authors study the tradeoffs of the different hybrid CC schemes.
Abstract: Analytical models are developed to study hybrid CC (concurrency control) schemes which employ a different CC scheme to handle rerun transactions, since their characteristics are different from the first run of transactions. These include switching to static or dynamic locking during rerun (referred to as static and dynamic hybrid OCC (optimistic concurrency control) schemes, respectively), and switching to broadcast OCC during rerun, while doing pure OCC for the first run. In a high data contention environment where locking is inferior to OCC, analysis shows that the performance can be substantially improved by using this hybrid approach and the authors study the tradeoff of the different hybrid CC schemes. The analytic models are based on a decomposition approach and use a mean-value-type analysis. The accuracy of the analysis is validated through simulations.
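A minimal sketch of one hybrid scheme in this family: the first execution runs under pure optimistic concurrency control, and a transaction that fails validation is rerun under locking on the items it touched the first time, exploiting the fact that a rerun tends to access the same data. The validation and locking steps below are simplified placeholders, not the paper's analytical model.

```python
# Illustrative sketch of a hybrid CC scheme: first run under pure OCC; if
# validation fails, rerun the transaction under locking on the items it
# observed during the first run. Validation and locking are simplified.

class HybridCC:
    def __init__(self):
        self.committed_writes = []       # (commit_order, item) history
        self.locks = set()

    def run_optimistic(self, read_set, start_order, commit_order):
        # validation: no committed write to our read set during our lifetime
        for order, item in self.committed_writes:
            if start_order < order <= commit_order and item in read_set:
                return False             # validation failed -> rerun w/ locks
        return True

    def run_locked(self, read_set):
        # rerun: lock the items observed on the first run, then execute
        if self.locks & read_set:
            raise RuntimeError("would block; retry later")
        self.locks |= read_set
        try:
            return True                  # execute and commit under locks
        finally:
            self.locks -= read_set

    def execute(self, read_set, start_order, commit_order):
        if self.run_optimistic(read_set, start_order, commit_order):
            return "committed (OCC first run)"
        return "committed (lock-based rerun)" if self.run_locked(read_set) \
            else "aborted"


if __name__ == "__main__":
    cc = HybridCC()
    cc.committed_writes.append((2, "x"))          # someone updated x at order 2
    print(cc.execute({"x"}, start_order=1, commit_order=3))
    # -> 'committed (lock-based rerun)'
```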

01 Jan 1992
TL;DR: It is shown how such locks can be used for concurrency control without introducing covert channels; locking is a fundamental technique used in many kinds of computer systems besides database systems.
Abstract: The concurrency control lock (e.g. file lock, table lock) has long been used as a canonical example of a covert channel in a database system. Locking is a fundamental concurrency control technique used in many kinds of computer systems besides database systems. Locking is generally considered to be interfering and hence unsuitable for multilevel systems. In this paper we show how such locks can be used for concurrency control, without introducing covert channels.

Proceedings ArticleDOI
01 Apr 1992
TL;DR: Two new real-time concurrency control protocols are proposed and performance of the protocols is discussed based on simulations comparing them to some previously proposed real- Time Concurrency Control Protocols.
Abstract: Efficient concurrency control protocols are required in order for it to be possible to schedule real-time database transactions so as to satisfy both real-time constraints and data consistency requirements. In this paper, two new real-time concurrency control protocols are proposed. Performance of the protocols is discussed based on simulations comparing them to some previously proposed real-time concurrency control protocols.

Journal ArticleDOI
TL;DR: A system structure and protocols for improving the performance of a distributed transaction processing system when there is some regional locality of data reference are presented and it is found that substantial performance improvement can be obtained.
Abstract: A system structure and protocols for improving the performance of a distributed transaction processing system when there is some regional locality of data reference are presented. A distributed computer system is maintained at each region, and a central computer system with a replication of all databases at the distributed sites is introduced. It provides the advantage of distributed systems principally for local transactions, and has the advantage of centralized systems for transactions accessing nonlocal data. Specialized protocols keep the copies at the distributed and centralized systems consistent without incurring the overhead and delay of generalized protocols for fully replicated databases. The advantages achievable through this system structure and the tradeoffs between protocols for concurrency and coherency control of the duplicate copies of the databases are studied. An approximate analytic model is used to estimate the system performance. It is found that the performance is sensitive to the protocol and that substantial performance improvement can be obtained as compared with distributed systems. >