
Showing papers on "Concurrency control published in 1995"


Journal ArticleDOI
TL;DR: The syntax and semantics of the subset of the Rapide language that is designed to satisfy general requirements for architecture definition languages are described, and the use of event pattern mappings to define the relationship between two architectures at different levels of abstraction is illustrated.
Abstract: This paper discusses general requirements for architecture definition languages, and describes the syntax and semantics of the subset of the Rapide language that is designed to satisfy these requirements. Rapide is a concurrent event-based simulation language for defining and simulating the behavior of system architectures. Rapide is intended for modelling the architectures of concurrent and distributed systems, both hardware and software, in order to represent the behavior of distributed systems in as much detail as possible. Rapide is designed to make the greatest possible use of event-based modelling by producing causal event simulations. When a Rapide model is executed it produces a simulation that shows not only the events that make up the model's behavior, and their timestamps, but also which events caused other events, and which events happened independently. The architecture definition features of Rapide are described: event patterns, interfaces, architectures and event pattern mappings. The use of these features to build causal event models of both static and dynamic architectures is illustrated by a series of simple examples from both software and hardware. We also give a detailed example of the use of event pattern mappings to define the relationship between two architectures at different levels of abstraction. Finally, we discuss briefly how Rapide is related to other event-based languages.

514 citations


Proceedings ArticleDOI
22 May 1995
TL;DR: An efficient optimistic concurrency control scheme for use in distributed database systems in which objects are cached and manipulated at client machines while persistent storage and transactional support are provided by servers, which outperforms adaptive callback locking for low to moderate contention workloads, and scales better with the number of clients.
Abstract: This paper describes an efficient optimistic concurrency control scheme for use in distributed database systems in which objects are cached and manipulated at client machines while persistent storage and transactional support are provided by servers. The scheme provides both serializability and external consistency for committed transactions; it uses loosely synchronized clocks to achieve global serialization. It stores only a single version of each object, and avoids maintaining any concurrency control information on a per-object basis; instead, it tracks recent invalidations on a per-client basis, an approach that has low in-memory space overhead and no per-object disk overhead. In addition to its low space overheads, the scheme also performs well. The paper presents a simulation study that compares the scheme to adaptive callback locking, the best concurrency control scheme for client-server object-oriented database systems studied to date. The study shows that our scheme outperforms adaptive callback locking for low to moderate contention workloads, and scales better with the number of clients. For high contention workloads, optimism can result in a high abort rate; the scheme presented here is a first step toward a hybrid scheme that we expect to perform well across the full range of workloads.

213 citations
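The per-client invalidation idea in this scheme can be sketched in a few lines of Python. This is a minimal illustration with made-up names (not the paper's API), and it omits the loosely synchronized clocks and external-consistency machinery the paper also relies on:

```python
# Minimal sketch (illustrative names, not the paper's API): the server
# keeps one invalidation set per client instead of per-object state.
# Loosely synchronized clocks and external consistency are omitted.

class Server:
    def __init__(self):
        self.invalid = {}  # client id -> object ids invalidated in its cache

    def register(self, client):
        self.invalid.setdefault(client, set())

    def commit(self, client, read_set, write_set):
        # Backward validation: abort if the client read an object that a
        # committed transaction has since overwritten (stale cached copy).
        stale = self.invalid.get(client, set())
        if stale & read_set:
            self.invalid[client] = stale - read_set  # client will refetch
            return False
        # Install the writes; record an invalidation for every other client.
        for other in self.invalid:
            if other != client:
                self.invalid[other] |= set(write_set)
        return True
```

Tracking staleness per client rather than per object is what keeps the in-memory overhead low and removes any per-object disk overhead.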


Proceedings ArticleDOI
01 May 1995
TL;DR: Simulations show that the Disha scheme results in superior performance and is extremely simple, ensuring quick recovery from deadlocks and enabling the design of fast routers.
Abstract: This paper presents a simple, efficient and cost effective routing strategy that considers deadlock recovery as opposed to prevention. Performance is optimized in the absence of deadlocks by allowing maximum flexibility in routing. Disha supports true fully adaptive routing where all virtual channels at each node are available to packets without regard for deadlocks. Deadlock cycles, upon forming, are efficiently broken by progressively routing one of the blocked packets through a deadlock-free lane. This lane is implemented using a central "floating" deadlock buffer resource in routers which is accessible to all neighboring routers along the path. Simulations show that the Disha scheme results in superior performance and is extremely simple, ensuring quick recovery from deadlocks and enabling the design of fast routers.

165 citations


Proceedings ArticleDOI
27 Jun 1995
TL;DR: A conceptual framework for fault tolerance is established based on a general object concurrency model that is supported by most concurrent object-oriented languages and systems and integrates two complementary concepts: conversations and transactions.
Abstract: Presents a scheme for coordinated error recovery between multiple interacting objects in a concurrent object-oriented system. A conceptual framework for fault tolerance is established based on a general object concurrency model that is supported by most concurrent object-oriented languages and systems. This framework integrates two complementary concepts: conversations and transactions. Conversations (associated with cooperative exception handling) are used to provide coordinated error recovery between concurrent interacting activities whilst transactions are used to maintain the consistency of shared resources in the presence of concurrent access and possible failures. The serialisability property of transactions is exploited in order to help prevent unexpected information smuggling. The proposed framework is illustrated by means of a case study, and various linguistic and implementation issues are discussed.

158 citations


Journal ArticleDOI
TL;DR: Besides obtaining more intertransaction concurrency, chopping transactions in this way can enhance intratransaction parallelism and permit users to obtain more concurrency while preserving correctness.
Abstract: Chopping transactions into pieces is good for performance but may lead to nonserializable executions. Many researchers have reacted to this fact by either inventing new concurrency-control mechanisms, weakening serializability, or both. We adopt a different approach. We assume a user who (1) has access only to user-level tools such as choosing isolation degrees 1–4, executing a portion of a transaction using multiversion read consistency, and reordering the instructions in transaction programs; and (2) knows the set of transactions that may run during a certain interval (users are likely to have such knowledge for on-line or real-time transactional applications). Given this information, our algorithm finds the finest chopping of a set of transactions TranSet with the following property: if the pieces of the chopping execute serializably, then TranSet executes serializably. This permits users to obtain more concurrency while preserving correctness. Besides obtaining more intertransaction concurrency, chopping transactions in this way can enhance intratransaction parallelism. The algorithm is inexpensive, running in O(n×(e+m)) time once conflicts are identified, using a naive implementation, where n is the number of concurrent transactions in the interval, e is the number of edges in the conflict graph among the transactions, and m is the maximum number of accesses of any transaction. This makes it feasible to add as a tuning knob to real systems.

150 citations
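The correctness condition behind chopping can be sketched as a small checker. This is a hedged illustration in Python, not the paper's implementation: a chopping is safe iff the graph of pieces, with sibling (S) edges inside each transaction and conflict (C) edges across transactions, has no cycle containing both an S-edge and a C-edge; equivalently, for every transaction T, no two pieces of T may be connected once T's own S-edges are removed.

```python
# Sketch of the SC-cycle test behind transaction chopping (illustrative
# names and data structures, not the paper's). For each transaction t,
# keep all conflict (C) edges plus the sibling (S) edges of every OTHER
# transaction; if two pieces of t end up connected, an SC-cycle exists
# and the chopping is not safe.

def chopping_is_correct(transactions, c_edges):
    """transactions: dict name -> list of piece ids (globally unique).
       c_edges: iterable of (piece, piece) conflict pairs across transactions."""
    def find(parent, x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    pieces = [p for ps in transactions.values() for p in ps]
    for t in transactions:
        parent = {p: p for p in pieces}
        def union(a, b):
            parent[find(parent, a)] = find(parent, b)
        for a, b in c_edges:                 # keep all conflict edges
            union(a, b)
        for other, ps in transactions.items():
            if other != t:                   # keep S-edges of other transactions
                for a, b in zip(ps, ps[1:]):
                    union(a, b)
        roots = [find(parent, p) for p in transactions[t]]
        if len(set(roots)) < len(roots):     # two pieces of t connected
            return False
    return True
```

Running this once per transaction over the conflict graph mirrors the O(n×(e+m)) bound quoted in the abstract.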


Book ChapterDOI
21 Mar 1995
TL;DR: An integrated system that provides database operations with real-time constraints is generally called a real-time database system (RTDBS); this chapter motivates the merging of database and real-time technology.
Abstract: Traditionally, real-time systems manage their data (e.g. chamber temperature, aircraft locations) in application dependent structures. As real-time systems evolve, their applications become more complex and require access to more data. It thus becomes necessary to manage the data in a systematic and organized fashion. Database management systems provide tools for such organization, so in recent years there has been interest in “merging” database and real-time technology. The resulting integrated system, which provides database operations with real-time constraints, is generally called a real-time database system (RTDBS) [1].

134 citations


Journal ArticleDOI
TL;DR: This work has evaluated the performance of two well known classes of concurrency control algorithms that handle multiversion data: the two phase locking and the optimistic algorithms, as well as the rate monotonic and earliest deadline first scheduling algorithms.
Abstract: We study the performance of concurrency control algorithms in maintaining temporal consistency of shared data in hard real time systems. In our model, a hard real time system consists of periodic tasks which are either write only, read only or update transactions. Transactions may share data. Data objects are temporally inconsistent when their ages and dispersions are greater than the absolute and relative thresholds allowed by the application. Real time transactions must read temporally consistent data in order to deliver correct results. Based on this model, we have evaluated the performance of two well known classes of concurrency control algorithms that handle multiversion data: the two phase locking and the optimistic algorithms, as well as the rate monotonic and earliest deadline first scheduling algorithms. The effects of using the priority inheritance and stack based protocols with lock based concurrency control are also studied.

114 citations
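The age and dispersion conditions in this model amount to a simple predicate over the timestamps of the versions a transaction reads. A minimal sketch, with illustrative parameter names (the paper attaches thresholds per object and per transaction):

```python
# Sketch of the temporal-consistency test (illustrative names): a read
# set is usable only if every object read is fresh enough (absolute
# consistency, bounded age) and the versions read are close enough to
# one another in time (relative consistency, bounded dispersion).

def temporally_consistent(read_set, now, max_age, max_dispersion):
    """read_set: dict object -> timestamp of the version read."""
    stamps = list(read_set.values())
    fresh = all(now - ts <= max_age for ts in stamps)        # absolute
    coherent = max(stamps) - min(stamps) <= max_dispersion   # relative
    return fresh and coherent
```

A transaction whose read set fails this predicate cannot deliver correct results even if the schedule is serializable, which is why the paper treats temporal consistency as a separate requirement.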


Journal ArticleDOI
TL;DR: A comprehensive mathematical modeling approach for distributed database design that considers network communication, local processing, and data storage costs is developed and a genetic algorithm is developed to solve this mathematical formulation.
Abstract: The allocation of data and operations to nodes in a computer communications network is a critical issue in distributed database design. An efficient distributed database design must trade off performance and cost among retrieval and update activities at the various nodes. It must consider the concurrency control mechanism used as well as capacity constraints at nodes and on links in the network. It must determine where data will be allocated, the degree of data replication, which copy of the data will be used for each retrieval activity, and where operations such as select, project, join, and union will be performed. We develop a comprehensive mathematical modeling approach for this problem. The approach first generates units of data (file fragments) to be allocated from a logical data model representation and a characterization of retrieval and update activities. Retrieval and update activities are then decomposed into relational operations on these fragments. Both fragments and operations on them are then allocated to nodes using a mathematical modeling approach. The mathematical model considers network communication, local processing, and data storage costs. A genetic algorithm is developed to solve this mathematical formulation.

99 citations


Journal ArticleDOI
TL;DR: The underlying data model and the functionality of GRAS, a database system which has been designed according to the requirements of software engineering, CAD, or office automation, are described.

96 citations


Journal ArticleDOI
Bernardo A. Huberman1, Tad Hogg1
TL;DR: As computer networks grow and blanket the planet, they become a community of concurrent processes, which, in their interactions, strategies, and lack of perfect knowledge, become analogous to human market economies.
Abstract: As computer networks grow and blanket the planet, they become a community of concurrent processes, which, in their interactions, strategies, and lack of perfect knowledge, become analogous to human market economies. Economics may thus offer new ways of designing and understanding the behavior of distributed computer systems.

88 citations


Patent
12 Sep 1995
TL;DR: A lock holder table is proposed for concurrency control in an object-oriented database management system with a plurality of users accessing the system at the same time, allowing editing of the database while other users are concurrently searching the database.
Abstract: The present invention provides a method and apparatus for concurrency control in an object oriented database management system having a plurality of users accessing the system at the same time, and allowing editing of the database while other users are concurrently searching the database. The present invention may be advantageously used in a client/server architecture comprising a knowledge base client and a knowledge base server. In a preferred embodiment, the knowledge base server may include an object oriented lock manager, a dynamic class manager, a connection manager, a query manager, a handle manager, a units manager, a database manager, and a file manager. The concurrency control mechanism includes a lock holder table. The present invention provides a method and apparatus for providing concurrency control in an object oriented database management system using only three types of lock modes: a class share lock, a tree update lock, and a tree exclusive lock. In a preferred embodiment, a fourth type of lock mode may be provided: a tree share lock. The present invention provides a particularly advantageous concurrency control mechanism for an object oriented database management system that is read oriented.
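To illustrate how a lock holder table drives grant decisions, here is a sketch with an assumed compatibility matrix for the three named modes. The matrix is invented for illustration only; the patent defines the actual semantics of class share (CS), tree update (TU), and tree exclusive (TX) locks:

```python
# Illustrative sketch only: a plausible (assumed) compatibility matrix
# for the three lock modes named above. The patent defines the real
# semantics; this just shows a lock-holder table deciding grants.

COMPAT = {
    ("CS", "CS"): True,  ("CS", "TU"): True,  ("CS", "TX"): False,
    ("TU", "CS"): True,  ("TU", "TU"): False, ("TU", "TX"): False,
    ("TX", "CS"): False, ("TX", "TU"): False, ("TX", "TX"): False,
}

class LockHolderTable:
    def __init__(self):
        self.holders = {}  # resource -> list of (owner, mode)

    def request(self, resource, owner, mode):
        held = self.holders.setdefault(resource, [])
        # Grant only if the requested mode is compatible with every lock
        # currently held by a different owner.
        if all(COMPAT[(m, mode)] for o, m in held if o != owner):
            held.append((owner, mode))
            return True
        return False
```

With a matrix like this, readers (CS) never block one another, which matches the patent's emphasis on read-oriented workloads.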

Journal ArticleDOI
TL;DR: A taxonomy of monitors is presented that encompasses all the extant monitors and suggests others not found in the literature or in existing programming languages.
Abstract: One of the most natural, elegant, and efficient mechanisms for synchronization and communication, especially for systems with shared memory, is the monitor. Over the past twenty years many kinds of monitors have been proposed and implemented, and many modern programming languages provide some form of monitor for concurrency control. This paper presents a taxonomy of monitors that encompasses all the extant monitors and suggests others not found in the literature or in existing programming languages. It discusses the semantics and performance of the various kinds of monitors suggested by the taxonomy, and it discusses programming techniques suitable to each.
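As a concrete point in the design space this taxonomy maps, a monitor can be sketched as mutual exclusion plus condition synchronization. The bounded buffer below uses Python's `threading.Condition`, which gives signal-and-continue semantics (one of the monitor kinds the taxonomy classifies):

```python
# A minimal monitor: one lock for mutual exclusion, two condition
# variables for synchronization, shown as a classic bounded buffer.
# Python's Condition is signal-and-continue, hence the while-loops.

import threading

class BoundedBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.not_full:                       # enter the monitor
            while len(self.items) >= self.capacity:
                self.not_full.wait()              # release lock and block
            self.items.append(item)
            self.not_empty.notify()

    def get(self):
        with self.not_empty:
            while not self.items:
                self.not_empty.wait()
            item = self.items.pop(0)
            self.not_full.notify()
            return item
```

Because signaled threads merely become runnable rather than running immediately, each wait sits inside a while-loop that rechecks its condition; the taxonomy's other monitor kinds (e.g. signal-and-urgent-wait) can drop that recheck.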

Proceedings ArticleDOI
30 May 1995
TL;DR: A new lock management scheme is presented which allows a read unlock for an item to be executed at any copy site of that item; the site may be different from the copy site on which the read lock is set.
Abstract: We present a new lock management scheme which allows a read unlock for an item to be executed at any copy site of that item; the site may be different from the copy site on which the read lock is set. The scheme utilizes the replicated copies of data items to reduce the message costs incurred by the mobility of the transaction host. We demonstrate this idea in an optimistic locking algorithm called O2PL-MT (Optimistic Two Phase Locking for Mobile Transactions). Like its counterpart algorithm O2PL (Optimistic Two Phase Locking), O2PL-MT grants read locks immediately on demand and defers write locks until the commitment time. However, O2PL-MT requires the transmission of fewer messages than O2PL in a mobile environment in which data items are replicated.

Journal ArticleDOI
TL;DR: In this paper, a basic understanding of the issues in real-time database systems is provided and the research efforts in this area are introduced.

Proceedings Article
11 Sep 1995
TL;DR: SCC-kS, a Speculative Concurrency Control algorithm that allows a DBMS to use efficiently the extra computing resources available in the system to increase the likelihood of timely commitment of transactions, and SCC-DC, a value-cognizant SCC protocol that utilizes deadline and criticalness information to improve timeliness through the controlled deferment of transaction commitments.
Abstract: We describe SCC-kS, a Speculative Concurrency Control (SCC) algorithm that allows a DBMS to use efficiently the extra computing resources available in the system to increase the likelihood of timely commitment of transactions. Using SCC-kS, up to k shadow transactions execute speculatively on behalf of a given uncommitted transaction so as to protect against the hazards of blockages and restarts. SCC-kS allows the system to scale the level of speculation that each transaction is allowed to perform, thus providing a straightforward mechanism of trading resources for timeliness. Also, we describe SCC-DC, a value-cognizant SCC protocol that utilizes deadline and criticalness information to improve timeliness through the controlled deferment of transaction commitments. We present simulation results that quantify the performance gains of our protocols compared to other widely used concurrency control protocols for real-time databases.


Book
01 Jun 1995
TL;DR: A two-phase approach to predictably scheduling real-time transactions, by Pat O'Neil, Krithi Ramamritham and Calton Pu, and an analytic model of transaction interference.
Abstract: Foreword, Jim Gray. 1. Transactions and Database Processing, Vijay Kumar and Meichun Hsu. 2. Serializability-based Correctness Criteria, Panos Chrysanthis. 3. Concurrency Control Mechanisms and Their Taxonomy, Vijay Kumar. 4. An Analytic Model of Transaction Interference, Andreas Reuter. 5. Concurrency Control Performance Modeling: Alternatives and Implications, Rakesh Agrawal, Michael J. Carey, and Miron Livny. 6. Modeling and Analysis of Concurrency Control Schemes, Philip S. Yu. 7. Modeling Performance Impact of Hot Spot, Bin Zhang and Meichun Hsu. 8. Two-Phase Locking Performance and Its Thrashing Behavior, Alexander Thomasian. 9. Database Concurrency Control Using Data Flow Graphs, Margaret H. Eich and David L. Wells. 10. Concurrency Control and Recovery Methods for B+-Tree Indexes: ARIES/KVL and ARIES/IM, C. Mohan. 11. Commit_LSN: A Novel and Simple Method for Reducing Locking and Latching in Transaction Processing Systems, C. Mohan. 12. Synchronizing Long-Lived Computations, Friedemann Schwenkreis and Andreas Reuter. 13. Implementation Considerations and Performance Evaluation of Object-Based Concurrency Control Protocols, Shehan Xavier and Krithi Ramamritham. 14. Reduction in Transaction Conflicts Using Semantics-Based Concurrency Control, Sushil Jajodia and Ravi Mukkamala. 15. The Design and Performance Evaluation of a Lock Manager for a Memory-Resident Database System, Toby Lehman and Vibby Gottemukkala. 16. Performance of Concurrency Control Algorithms for Real-Time Database Systems, Juhnyoung Lee and Sang H. Son. 17. Firm Real-Time Concurrency Control, Jayant Haritsa. 18. A Two-Phase Approach to Predictably Scheduling Real-Time Transactions, Pat O'Neil, Krithi Ramamritham and Calton Pu. 19. Conflict Detection Tradeoffs for Replicated Data, Michael J. Carey and Miron Livny. 20. On Mixing Queries and Transactions via Multiversion Locking, Paul M. Bober and Michael J. Carey. 21. Extensibility and Asynchrony in the Brown-Object Storage System, David E. Langworthy and Stanley B. Zdonik. Selected Biographies. Index.

Proceedings ArticleDOI
13 Sep 1995
TL;DR: An automated iterative improvement technique for performing concurrency optimization and hardware-software tradeoffs simultaneously for multiple-process systems; experimental results demonstrate that addressing these two issues together identifies a number of interesting cost/performance points that would not have been found otherwise.
Abstract: Systems composed of microprocessors interacting with ASICs are necessarily multiple-process systems, since the controller in the microprocessor is separate from any controllers on the ASIC. For this reason, the design of such systems offers an opportunity to exploit not only hardware-software tradeoffs, but concurrency tradeoffs as well. The paper describes an automated iterative improvement technique for performing concurrency optimization and hardware-software tradeoffs simultaneously. Experimental results illustrate that addressing these two issues simultaneously enables us to identify a number of interesting cost/performance points that would not have been found otherwise.

Proceedings Article
11 Sep 1995
TL;DR: A framework for explaining redo recovery after a system crash is defined and pragmatic methods that constrain the logged operations to reading and writing single pages are explained, and a new class of logged operations having a recovery method with practical advantages over current methods are introduced.
Abstract: This paper defines a framework for explaining redo recovery after a system crash. In this framework, an installation graph explains the order in which operations must be installed into the stable database if it is to remain recoverable. This installation graph is a significantly weaker ordering on operations than the conflict graph from concurrency control. We use the installation graph to devise (i) a cache management algorithm for writing data from the volatile cache to the stable database, (ii) the specification of a REDO test used to choose the operations on the log to replay during recovery, and (iii) an idempotent recovery algorithm based on this test; and we prove that these cache management and recovery algorithms are correct. Most pragmatic recovery methods depend on constraining the kinds of operations that can appear in the log, but our framework allows arbitrary logged operations. We use our framework to explain pragmatic methods that constrain the logged operations to reading and writing single pages, and then use this new understanding to relax these constraints. The result is a new class of logged operations having a recovery method with practical advantages over current methods.
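For the common page-oriented special case the paper uses to explain pragmatic methods, a REDO test reduces to comparing log sequence numbers. A minimal, assumed sketch (the paper's framework is far more general than this):

```python
# Minimal page-oriented sketch of a REDO test (illustrative only; the
# paper's installation-graph framework allows arbitrary logged
# operations). An operation is replayed only if its effect is not yet
# installed in the stable page, which makes recovery idempotent.

def recover(pages, log):
    """pages: dict page_id -> (lsn, value) stable state.
       log: list of (lsn, page_id, value) redo records in log order."""
    for lsn, pid, value in log:
        installed_lsn, _ = pages.get(pid, (0, None))
        if installed_lsn < lsn:      # REDO test: effect not installed
            pages[pid] = (lsn, value)
    return pages
```

Because already-installed effects are skipped, running recovery a second time (e.g. after a crash during recovery) leaves the database unchanged.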

Journal ArticleDOI
01 Jan 1995
TL;DR: This work uses a new relationship between locks called ordered sharing to eliminate blocking that arises in the traditional locking protocols, and indicates that the proposed protocols significantly reduce the percentages of missed deadlines in the system for a variety of workloads.
Abstract: We propose locking protocols for real-time databases. Our approach has two main motivations: First, locking protocols are widely accepted and used in most database systems. Second, in real-time databases it has been shown that the blocking behavior of transactions in locking protocols results in performance degradation. We use a new relationship between locks called ordered sharing to eliminate blocking that arises in the traditional locking protocols. Ordered sharing eliminates blocking of read and write operations but may result in delayed termination. Since timeliness and not response time is the crucial factor in real-time databases, our protocols exploit this delay to allow transactions to execute within the slacks of delayed transactions. We compare the performance of the proposed protocols with the two-phase locking protocol for real-time databases. Our experiments indicate that the proposed protocols significantly reduce the percentage of missed deadlines in the system for a variety of workloads.
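The trade at the heart of ordered sharing, granting conflicting locks immediately but delaying termination, can be sketched as follows. This is a hedged illustration with invented names, not the paper's protocol; in particular it ignores lock modes and slack-time scheduling:

```python
# Sketch of the ordered-sharing idea (illustrative names): a conflicting
# lock is granted immediately instead of blocking, but the grantee
# records the current holders as ordered before it and may not commit
# until they finish. Blocking is traded for delayed termination.

class OrderedSharingManager:
    def __init__(self):
        self.holders = {}    # item -> transactions holding a lock on it
        self.wait_for = {}   # tx -> transactions that must finish first
        self.done = set()

    def lock(self, tx, item):
        before = self.wait_for.setdefault(tx, set())
        before.update(h for h in self.holders.get(item, []) if h != tx)
        self.holders.setdefault(item, []).append(tx)   # never blocks

    def try_commit(self, tx):
        if self.wait_for.get(tx, set()) - self.done:
            return False    # termination delayed, not execution blocked
        self.done.add(tx)
        return True
```

In a real-time setting this delay is acceptable because, as the abstract notes, timeliness rather than response time is the crucial metric, and the slack of a delayed transaction can be used to run others.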

Proceedings ArticleDOI
08 Dec 1995
TL;DR: A hybrid execution model which dynamically adapts to runtime data layout, providing both sequential efficiency and low overhead parallel execution, and is expressed entirely in C, and therefore is easily portable to many systems.
Abstract: While fine-grained concurrent languages can naturally capture concurrency in many irregular and dynamic problems, their flexibility has generally resulted in poor execution efficiency. In such languages the computation consists of many small threads which are created dynamically and synchronized implicitly. In order to minimize the overhead of these operations, we propose a hybrid execution model which dynamically adapts to runtime data layout, providing both sequential efficiency and low overhead parallel execution. This model uses separately optimized sequential and parallel versions of code. Sequential efficiency is obtained by dynamically coalescing threads via stack-based execution and parallel efficiency through latency hiding and cheap synchronization using heap-allocated activation frames. Novel aspects of the stack mechanism include handling return values for futures and executing forwarded messages (the responsibility to reply is passed along, like call/cc in Scheme) on the stack. In addition, the hybrid execution model is expressed entirely in C, and therefore is easily portable to many systems. Experiments with function-call intensive programs show that this model achieves sequential efficiency comparable to C programs. Experiments with regular and irregular application kernels on the CM5 and T3D demonstrate that it can yield 1.5 to 3 times better performance than code optimized for parallel execution alone.

Journal ArticleDOI
TL;DR: This paper shows how the object-oriented facilities of C++ are powerful enough to encapsulate concurrency creation and control, and describes how it can provide, with a standard compiler, almost all of the functionality offered by a new or extended language.
Abstract: Many attempts have been made to add concurrency to C++, often by extensive compiler extensions, but much of the work has not exploited the power of C++. This paper shows how the object-oriented facilities of C++ are powerful enough to encapsulate concurrency creation and control. We have developed a concurrent C++-based prototype system (ABC++) and describe how we can provide, with a standard compiler, almost all of the functionality offered by a new or extended language. Active objects, object distribution, selective method acceptance, and synchronous and asynchronous object interaction are supported. Concurrency control and synchronization are encapsulated at the active object level. The goal of ABC++ is to allow users to write concurrent programs without dealing with explicit synchronization and mutual exclusion constructs, with as few restrictions on the use of C++ as possible. ABC++ can be implemented on either a shared memory multiprocessor or a cluster of homogeneous workstations. It is presently implemented on a network of RISC System/6000® processors and on the IBM Scalable POWERparallel™ System 1 (SP1™).

Proceedings Article
01 Jan 1995
TL;DR: The W3Object model is described, and it is shown, through a prototype implementation, how the model is used to address the problems of referential integrity and transparent object (resource) migration.
Abstract: In this paper we discuss some of the problems of the current Web and show how the introduction of object-orientation provides flexible and extensible solutions. Web resources become encapsulated as objects, with well-defined interfaces through which all interactions occur. The interfaces and their implementations can be inherited by builders of objects, and methods (operations) can be redefined to better suit the object. New characteristics, such as concurrency control and persistence, can be obtained by inheriting from suitable base classes, without necessarily requiring any changes to users of these resources. We describe the W3Object model which we have developed based upon these ideas, and show, through a prototype implementation, how we have used the model to address the problems of referential integrity and transparent object (resource) migration. We also give indications of future work.

Journal ArticleDOI
01 Sep 1995
TL;DR: An agent-based framework for accessing mobile heterogeneous databases is defined and concurrency control and recovery issues are investigated and possible solutions are outlined.
Abstract: Information applications are increasingly required to be distributed among numerous remote sites through both wireless and wired links. Traditional models of distributed computing are inadequate to overcome the communication barrier this generates and to support the development of complex applications. In this paper, we advocate an approach based on agents. Agents are software modules that encapsulate data and code, cooperate to solve complicated tasks, and run at remote sites with minimum interaction with the user. We define an agent-based framework for accessing mobile heterogeneous databases. We then investigate concurrency control and recovery issues and outline possible solutions. Agent-based computing advances database transaction and control flow management concepts and remote programming techniques.

Book ChapterDOI
01 Jan 1995
TL;DR: There is a vastly uniform methodology for putting these aspects to work, which is described and illustrated by examples ranging from simple reads and writes to semantically rich operations.
Abstract: The transaction concept provides a central paradigm for correctly synchronizing concurrent activities and for achieving reliability in database systems. In transaction modeling and processing, theory and practice influence each other a lot, and over the years the transaction concept has undergone a considerable evolution from a pure implementation vehicle to a powerful abstraction concept. This survey deals with conceptual issues in designing transaction models, and with approaches to specify correctness of concurrent transaction executions (schedules). As will be described, there is a vastly uniform methodology for putting these aspects to work, which is illustrated by examples ranging from simple reads and writes to semantically rich operations. In addition, the survey covers novel transaction models, whose goal is to adequately support a variety of requirements arising in modern database applications.
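The baseline correctness criterion such surveys build on is conflict serializability: a schedule of read/write steps is correct iff its serialization (conflict) graph is acyclic. A minimal checker, as a sketch for the simple read/write case before semantically rich operations enter the picture:

```python
# Conflict-serializability check for the plain read/write model: build
# the conflict graph over transactions (edges follow schedule order for
# conflicting steps on the same item) and test it for cycles.

def conflict_serializable(schedule):
    """schedule: list of (tx, op, item) with op in {'r', 'w'}."""
    edges = {}
    for i, (t1, op1, x1) in enumerate(schedule):
        for t2, op2, x2 in schedule[i + 1:]:
            if t1 != t2 and x1 == x2 and 'w' in (op1, op2):
                edges.setdefault(t1, set()).add(t2)

    state = {}  # tx -> 'visiting' | 'done'
    def cyclic(t):
        if state.get(t) == 'done':
            return False
        if state.get(t) == 'visiting':
            return True
        state[t] = 'visiting'
        if any(cyclic(n) for n in edges.get(t, ())):
            return True
        state[t] = 'done'
        return False

    return not any(cyclic(t) for t in list(edges))
```

Semantically rich operations generalize exactly this construction: the conflict relation between steps is derived from operation semantics (e.g. two increments commute) rather than from the read/write rule hardwired above.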

Journal ArticleDOI
TL;DR: A new replication paradigm, the location-based paradigm, is presented, which provides availability similar to quorum-based replication protocols but with transaction-execution delays similar to one-copy systems.
Abstract: Replication techniques for transaction-based distributed systems generally achieve increased availability but with a significant performance penalty. We present a new replication paradigm, the location-based paradigm, which addresses availability and other performance issues. It provides availability similar to quorum-based replication protocols but with transaction-execution delays similar to one-copy systems. The paradigm further exploits replication to improve performance in two instances. First, it takes advantage of local or nearby replicas to further improve the response time of transactions, achieving smaller execution delays than one-copy systems. Second, it takes advantage of replication to facilitate the independent crash recovery of replica sites, a goal that is unattainable in one-copy systems. In addition to the above, the location-based paradigm avoids bottlenecks, facilitates load balancing, and minimizes the disruption of service when failures and recoveries occur. In this paper we present the paradigm, a formal proof of correctness, and a detailed simulation study comparing our paradigm to one-copy systems and to other approaches to replication control.

Book
01 Jun 1995
TL;DR: It is shown that under a soft deadline system, the results of the relative performance of locking and optimistic approaches depend heavily on resource availability in the system as in conventional database systems, and an optimistic protocol outperforms a locking-based protocol under a wide range of resource availability and system workload level.
Abstract: In this paper, we investigate the key components of a reasonable model of real-time database systems (RTDBSs), including the policy for dealing with tardy transactions, the availability of resources in the system, and the use of pre-knowledge about transaction processing requirements. We employ a fairly complete model of an RTDBS for studying the relative performance of locking and optimistic concurrency control protocols under a variety of operating conditions. In addition, we examine the issues on the implementation of concurrency control algorithms, which may have a significant impact on performance. We show that under a soft deadline system, the results of the relative performance of locking and optimistic approaches depend heavily on resource availability in the system as in conventional database systems. In the context of firm deadline systems, it is shown that an optimistic protocol outperforms a locking-based protocol under a wide range of resource availability and system workload levels. Based on these results, we reconfirm the results from previous performance studies on concurrency control for RTDBSs.

Proceedings ArticleDOI
08 May 1995
TL;DR: In this article, a secure two-phase locking protocol is described and a scheme is proposed to allow partial violations of security for improved timeliness in real-time database systems.
Abstract: Database systems for real-time applications must satisfy timing constraints associated with transactions, in addition to maintaining data consistency. In addition to real-time requirements, security is usually required in many applications. Multilevel security requirements introduce a new dimension to transaction processing in real-time database systems. We argue that due to the conflicting goals of each requirement, trade-offs need to be made between security and timeliness. We first define capacity, a measure of the degree to which security is being satisfied by a system. A secure two-phase locking protocol is then described and a scheme is proposed to allow partial violations of security for improved timeliness. The capacity of the resultant covert channel is derived and a feedback control scheme is proposed that does not allow the capacity to exceed a specified upper bound.

Book ChapterDOI
11 Jan 1995
TL;DR: This paper shows that even exclusive locks can be released immediately after the commit request has arrived, without sacrificing any important recovery properties.
Abstract: Two-phase locking is a standard method for managing concurrent transactions in database systems. In order to guarantee good recovery properties, two-phase locking should be strict, meaning that locks can be released only after the transaction's commit or abort. In this paper we show that even exclusive locks can be released immediately after the commit request has arrived, without sacrificing any important recovery properties. This optimization is especially useful if the commit operation takes much time compared with the other actions, as for main-memory databases, or if the commits are performed in batches.

Proceedings ArticleDOI
13 Aug 1995
TL;DR: This work shows how to combine earlier mechanisms for single client, multiple server computing with a new mechanism called ESP (Event Sense Protocol) for multiple client, multiple server computing to enable a more powerful form of collaboration.
Abstract: Most people think that collaboration implies that several people are sharing work on a single application with shared displays. In fact, collaboration is more. It includes the concurrent control of multiple applications by a collaborative group. To enable this more powerful form of collaboration, we show how to combine earlier mechanisms for single client, multiple server computing with a new mechanism called ESP (Event Sense Protocol) for multiple client, multiple server computing. We describe two extended examples — a working prototype of a multi-user, heterogeneous, distributed debugger and a commercial banking application.