
Showing papers on "Concurrency control published in 1999"


Journal ArticleDOI
01 May 1999
TL;DR: A petri net model, called colored resource-oriented Petri net (CROPN), is developed and necessary and sufficient conditions and an efficient control law are presented for deadlock-free operation in FMS.
Abstract: Concurrent competition for finite resources by multiple parts in flexible manufacturing systems (FMS) results in deadlock. This is an important issue to be addressed in the operation of such systems. A Petri net model, called the colored resource-oriented Petri net (CROPN), is developed in this paper. The concurrent resource contention and the important characteristics of the production processes necessary for deadlock control are well modeled by this model. Based on the developed model, necessary and sufficient conditions and an efficient control law are presented for deadlock-free operation in FMS. This control law is a policy of dynamic resource allocation: it determines when, and to which job, a resource can be allocated so that deadlock is avoided. The control law allows as many active parts as possible in the system while deadlock is totally avoided. It is easy to implement and can be embedded into a real-time scheduler. A simple example is used to illustrate the application of the approach.

203 citations
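
The dynamic resource-allocation control law above grants a resource only if doing so cannot lead the system into deadlock. As a rough illustration of that idea (a banker's-style safety test, not the paper's CROPN formulation; all names are invented):

```python
def is_safe(available, allocation, need):
    """Check whether a safe completion order exists (Banker's-style test).

    available:  dict resource -> free units
    allocation: dict job -> dict resource -> units held
    need:       dict job -> dict resource -> units still required
    """
    free = dict(available)
    unfinished = set(allocation)
    progress = True
    while unfinished and progress:
        progress = False
        for job in list(unfinished):
            if all(need[job].get(r, 0) <= free.get(r, 0) for r in need[job]):
                # Job can run to completion; it then releases all it holds.
                for r, units in allocation[job].items():
                    free[r] = free.get(r, 0) + units
                unfinished.discard(job)
                progress = True
    return not unfinished


def grant(resource, job, available, allocation, need):
    """Grant one unit of `resource` to `job` only if the state stays safe."""
    if available.get(resource, 0) < 1:
        return False
    # Tentatively allocate, then test safety.
    available[resource] -= 1
    allocation[job][resource] = allocation[job].get(resource, 0) + 1
    need[job][resource] = need[job].get(resource, 0) - 1
    if is_safe(available, allocation, need):
        return True
    # Unsafe: roll back the tentative grant and make the job wait.
    available[resource] += 1
    allocation[job][resource] -= 1
    need[job][resource] += 1
    return False
```

As in the paper's control law, the test is run at allocation time, so as many parts as the safety condition permits stay active in the system.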


Proceedings ArticleDOI
23 Mar 1999
TL;DR: This paper proposes a way to develop a truly scalable trigger system with a trigger cache to use the main memory effectively, and a memory-conserving selection predicate index based on the use of unique expression formats called expression signatures.
Abstract: Current database trigger systems have extremely limited scalability. This paper proposes a way to develop a truly scalable trigger system. Scalability to large numbers of triggers is achieved with a trigger cache to use the main memory effectively, and a memory-conserving selection predicate index based on the use of unique expression formats called expression signatures. A key observation is that if a very large number of triggers are created, many will have the same structure, except for the appearance of different constant values. When a trigger is created, tuples are added to special relations created for expression signatures to hold the trigger's constants. These tables can be augmented with a database index or main-memory index structure to serve as a predicate index. The design presented also uses a number of types of concurrency to achieve scalability, including token (tuple)-level, condition-level, rule action-level and data-level concurrency.

141 citations
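
The expression-signature idea — many triggers share one structure and differ only in their constants — can be sketched as a dictionary keyed by signature, with a per-signature constant table acting as the predicate index (illustrative names, not the paper's implementation):

```python
from collections import defaultdict


class SignaturePredicateIndex:
    """Group triggers by expression signature; each signature's constants
    live in an indexed table, so an update probes one lookup instead of
    evaluating every trigger's predicate individually."""

    def __init__(self):
        # signature -> {constant -> [trigger ids]}
        self.tables = defaultdict(lambda: defaultdict(list))

    def create_trigger(self, trigger_id, column, constant):
        # The signature abstracts the constant away, e.g. "price = ?".
        signature = f"{column} = ?"
        self.tables[signature][constant].append(trigger_id)

    def matching_triggers(self, column, value):
        # Probe the constant table of the matching signature, if any.
        return self.tables.get(f"{column} = ?", {}).get(value, [])
```

A million triggers of the form `price = <constant>` thus cost one signature entry plus one table row each, which is the memory-conserving property the paper aims for.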


Journal ArticleDOI
01 Jun 1999
TL;DR: This work proposes the use of a weaker correctness criterion called update consistency and outlines mechanisms based on this criterion that ensure (1) the mutual consistency of data maintained by the server and read by clients, and (2) the currency of data read by clients.
Abstract: A crucial consideration in environments where data is broadcast to clients is the low bandwidth available for clients to communicate with servers. Advanced applications in such environments do need to read data that is mutually consistent as well as current. However, given the asymmetric communication capabilities and the needs of clients in mobile environments, traditional serializability-based approaches are too restrictive, unnecessary, and impractical. We thus propose the use of a weaker correctness criterion called update consistency and outline mechanisms based on this criterion that ensure (1) the mutual consistency of data maintained by the server and read by clients, and (2) the currency of data read by clients. Using these mechanisms, clients can obtain data that is current and mutually consistent “off the air”, i.e., without contacting the server to, say, obtain locks. Experimental results show a substantial reduction in response times as compared to existing (serializability-based) approaches. A further attractive feature of the approach is that if caching is possible at a client, weaker forms of currency can be obtained while still satisfying the mutual consistency of data.

135 citations


Journal ArticleDOI
TL;DR: A replication scheme tailored for mobile computing, in which communication is most often intermittent, low-bandwidth, or expensive, thus providing only weak connectivity, is presented, along with transaction-oriented correctness criteria for the proposed schemes.
Abstract: Mobile computing introduces a new form of distributed computation in which communication is most often intermittent, low-bandwidth, or expensive, thus providing only weak connectivity. We present a replication scheme tailored for such environments. Bounded inconsistency is defined by allowing controlled deviation among copies located at weakly connected sites. A dual database interface is proposed that in addition to read and write operations with the usual semantics supports weak read and write operations. In contrast to the usual read and write operations that read consistent values and perform permanent updates, weak operations access only local and potentially inconsistent copies and perform updates that are only conditionally committed. Exploiting weak operations supports disconnected operation since mobile clients can employ them to continue to operate even while disconnected. The extended database interface coupled with bounded inconsistency offers a flexible mechanism for adapting replica consistency to the networking conditions by appropriately balancing the use of weak and normal operations. Adjusting the degree of divergence among copies provides additional support for adaptivity. We present transaction-oriented correctness criteria for the proposed schemes, introduce corresponding serializability-based methods, and outline protocols for their implementation. Then, some practical examples of their applicability are provided. The performance of the scheme is evaluated for a range of networking conditions and varying percentages of weak transactions by using an analytical model developed for this purpose.

124 citations


Journal ArticleDOI
TL;DR: Several useful ideas for increasing concurrency have been summarized and include flexible transactions, adaptability, prewrites, multidimensional time stamps, and relaxation of two-phase locking.
Abstract: Ideas that are used in the design, development, and performance of concurrency control mechanisms are summarized. The locking, time-stamp, and optimistic mechanisms are included. The ideas of validation in the optimistic approach are presented in some detail. The degree of concurrency and classes of serializability for various algorithms are presented. Questions that relate the arrival rate of transactions to the degree of concurrency and performance are briefly discussed. Finally, several useful ideas for increasing concurrency are summarized. They include flexible transactions, adaptability, prewrites, multidimensional time stamps, and relaxation of two-phase locking.

83 citations
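
The validation step of the optimistic approach that the survey discusses can be illustrated with a minimal backward-validation check: a committing transaction aborts if its read set intersects the write set of any transaction that committed during its execution (a textbook sketch, not taken from the survey):

```python
def backward_validate(read_set, overlapping_write_sets):
    """Backward validation for optimistic concurrency control: the
    committing transaction fails validation if any item it read was
    written by a transaction that committed while it was running."""
    for write_set in overlapping_write_sets:
        if read_set & write_set:
            return False  # conflict detected: restart the transaction
    return True
```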


Proceedings ArticleDOI
18 Oct 1999
TL;DR: The Eternal system achieves consistent object replication by imposing a single logical thread of control on every replicated multithreaded CORBA client or server object, and by providing deterministic scheduling of threads and operations across the replicas of each object.
Abstract: In CORBA-based applications that depend on object replication for fault tolerance, inconsistencies in the states of the replicas of an object can arise when concurrent threads within those replicas perform updates in different orders. By imposing a single logical thread of control on every replicated multithreaded CORBA client or server object, and by providing deterministic scheduling of threads and operations across the replicas of each object, the Eternal system achieves consistent object replication. The Eternal system does this transparently, with no modification to the application, the ORB, or the concurrency model employed by the ORB.

67 citations


Proceedings ArticleDOI
01 May 1999
TL;DR: The unified theory of concurrency control and recovery is extended by applying it to generalized process structures, i.e., arbitrary partially ordered sequences of transaction invocations, to provide a more flexible handling of concurrent processes while allowing as much parallelism as possible.
Abstract: The unified theory of concurrency control and recovery integrates atomicity and isolation within a common framework, thereby avoiding many of the shortcomings resulting from treating them as orthogonal problems. This theory can be applied to the traditional read/write model as well as to semantically rich operations. In this paper, we extend the unified theory by applying it to generalized process structures, i.e., arbitrary partially ordered sequences of transaction invocations. Using the extended unified theory, our goal is to provide a more flexible handling of concurrent processes while allowing as much parallelism as possible. Unlike in the original unified theory, we take into account that not all activities of a process might be compensatable and the fact that these process structures require transactional properties more general than in traditional ACID transactions. We provide a correctness criterion for transactional processes and identify the key points in which the more flexible structure of transactional processes implies differences from traditional transactions.

62 citations


Patent
14 May 1999
TL;DR: Lock types are defined for maintenance of a materialized view defined as a join on a plurality of base tables, such that a process attempting to update a different base table simultaneously is blocked until the update on the first base table is committed.
Abstract: Concurrency control for maintenance of a materialized view defined as a join on a plurality of base tables is provided by obtaining different types of locks. The base table being updated is locked with one type of lock, and the other base tables of the materialized view are locked with a different type of lock. These lock types are defined so that another process attempting to update a different base table simultaneously is blocked until the update on the first base table is committed. On the other hand, another process attempting to update the same base table is allowed to perform that update concurrently.

58 citations
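
The two lock types can be illustrated with a small compatibility table. The mode names "U" (on the table being updated) and "V" (on the view's other base tables) are invented here; the sketch captures only the blocking rules the patent describes:

```python
# U/U is compatible (same-table updates may run concurrently);
# U/V conflicts (updates to different base tables of the view block
# each other until the first update commits).
COMPATIBLE = {("U", "U"): True, ("U", "V"): False,
              ("V", "U"): False, ("V", "V"): True}


def request_locks(update_table, view_tables, held):
    """held: dict table -> list of lock modes already granted.
    Returns the lock set for this updater, or None if it must wait."""
    wanted = {t: ("U" if t == update_table else "V") for t in view_tables}
    for table, mode in wanted.items():
        if any(not COMPATIBLE[(mode, h)] for h in held.get(table, [])):
            return None  # blocked until the conflicting update commits
    return wanted
```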


Patent
23 Nov 1999
TL;DR: In this paper, a system and method for synchronizing threads of execution within a distributed computing environment is described, in which local threads that are each part of the same logical thread will all have access to the shared resource when the lock is assigned to the logical thread.
Abstract: A system and method is disclosed for synchronizing threads of execution within a distributed computing environment. Threads of execution within a computer spawn additional threads of execution on separate computers within the distributed computing environment. Each thread may compete for shared resources within the computing environment, thereby creating a need to avoid deadlocks among the local threads. Whereas local threads exist within a single computing platform, logical threads are created to relate local threads to each other and thereby span the platforms on which the local threads reside. Distributed monitors are created to control access to shared resources by local threads based on logical thread affiliations. Locks within the distributed monitors are assigned to logical threads instead of local threads. Local threads that are each part of the same logical thread will all have access to the shared resource when the lock is assigned to the logical thread.

53 citations
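
The core idea — assigning locks to logical threads rather than local threads — can be sketched as a monitor that tracks ownership by logical-thread id (an illustrative reentrant-lock analogue, not the patented protocol):

```python
class DistributedMonitor:
    """Lock ownership is tracked by logical-thread id, so local threads
    that belong to the same distributed logical thread enter freely,
    while other logical threads are kept out."""

    def __init__(self):
        self.owner = None   # logical thread currently holding the lock
        self.depth = 0      # how many acquisitions are outstanding

    def try_acquire(self, logical_thread_id):
        if self.owner is None or self.owner == logical_thread_id:
            self.owner = logical_thread_id
            self.depth += 1
            return True
        return False        # a different logical thread holds the lock

    def release(self, logical_thread_id):
        assert self.owner == logical_thread_id
        self.depth -= 1
        if self.depth == 0:
            self.owner = None
```

Because a spawned remote thread carries its parent's logical-thread id, it re-enters the monitor instead of deadlocking against its own parent.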


Journal ArticleDOI
01 Oct 1999
TL;DR: The extensive experiments carried out indicate that the newly proposed deadlock detection algorithm outperforms the other algorithms in the vast majority of configurations and workloads and, in contrast to all other algorithms, is very robust with respect to differing load and access profiles.
Abstract: This paper attempts a comprehensive study of deadlock detection in distributed database systems. First, the two predominant deadlock models in these systems and the four different distributed deadlock detection approaches are discussed. Afterwards, a new deadlock detection algorithm is presented. The algorithm is based on dynamically creating deadlock detection agents (DDAs), each being responsible for detecting deadlocks in one connected component of the global wait-for-graph (WFG). The DDA scheme is a “self-tuning” system: after an initial warm-up phase, dedicated DDAs will be formed for “centers of locality”, i.e., parts of the system where many conflicts occur. A dynamic shift in locality of the distributed system will be responded to by automatically creating new DDAs while the obsolete ones terminate. In this paper, we also compare the most competitive representative of each class of algorithms suitable for distributed database systems based on a simulation model, and point out their relative strengths and weaknesses. The extensive experiments we carried out indicate that our newly proposed deadlock detection algorithm outperforms the other algorithms in the vast majority of configurations and workloads and, in contrast to all other algorithms, is very robust with respect to differing load and access profiles.

50 citations
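
Each deadlock detection agent ultimately looks for cycles in its connected component of the wait-for graph. A minimal sketch of that check (plain depth-first search; the paper's DDA scheme adds the distributed, self-tuning machinery on top):

```python
def find_deadlock(wfg):
    """Detect a cycle in a wait-for graph given as {txn: set of txns
    it waits for}. Returns True if some transactions are deadlocked."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {}

    def visit(t):
        color[t] = GRAY
        for u in wfg.get(t, ()):
            c = color.get(u, WHITE)
            if c == GRAY:
                return True               # back edge: a wait cycle exists
            if c == WHITE and visit(u):
                return True
        color[t] = BLACK
        return False

    return any(color.get(t, WHITE) == WHITE and visit(t) for t in wfg)
```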


Book ChapterDOI
01 Jan 1999
TL;DR: Some performance results from a proof-of-concept platform that runs a number of small, but real, distributed applications on Unix and Windows NT confirm that the PerDiS abstraction is well adapted to the targeted application area and that the overall performance is promising compared to alternative approaches.
Abstract: The PerDiS (Persistent Distributed Store) project addresses the issue of providing support for distributed collaborative engineering applications. We describe the design and implementation of the PerDiS platform, and its support for such applications. Collaborative engineering raises system issues related to the sharing of large volumes of fine-grain, complex objects across wide-area networks and administrative boundaries. PerDiS manages all these aspects in a well defined, integrated, and automatic way. Distributed application programming is simplified because it uses the same memory abstraction as in the centralized case. Porting an existing centralized program written in C or C++ is usually a matter of a few, well-isolated changes. We present some performance results from a proof-of-concept platform that runs a number of small, but real, distributed applications on Unix and Windows NT. These confirm that the PerDiS abstraction is well adapted to the targeted application area and that the overall performance is promising compared to alternative approaches.

Proceedings ArticleDOI
01 Nov 1999
TL;DR: This paper addresses the concurrency control problem of how to preserve the intentions of concurrently generated operations whose effects conflict, by proposing an object replication strategy to preserve the intentions of all operations.
Abstract: Real-time collaborative editing systems are groupware systems that allow multiple users to edit the same document at the same time from multiple sites. A specific type of collaborative editing system is the object-based collaborative graphics editing system. One of the major challenges in building such systems is to solve the concurrency control problems. This paper addresses the concurrency control problem of how to preserve the intentions of concurrently generated operations whose effects are conflicting. An object replication strategy is proposed to preserve the intentions of all operations. The effects of conflicting operations are applied to different replicas of the same object, while non-conflicting operations are applied to the same object. An object identification scheme is proposed to uniquely and consistently identify non-replicated and replicated objects. Lastly, an object replication algorithm is proposed to produce consistent replication effects at all sites.

Journal ArticleDOI
01 Jun 1999
TL;DR: This paper presents a dynamic granular locking approach to phantom protection in Generalized Search Trees (GiSTs), an index structure supporting an extensible set of queries and data types, and provides the first such solution based on granular locking.
Abstract: The importance of multidimensional index structures to numerous emerging database applications is well established. However, before these index structures can be supported as access methods (AMs) in a “commercial-strength” database management system (DBMS), efficient techniques to provide transactional access to data via the index structure must be developed. Concurrent accesses to data via index structures introduce the problem of protecting ranges specified in the retrieval from phantom insertions and deletions (the phantom problem). This paper presents a dynamic granular locking approach to phantom protection in Generalized Search Trees (GiSTs), an index structure supporting an extensible set of queries and data types. The granular locking technique offers a high degree of concurrency and has a low lock overhead. Our experiments show that the granular locking technique (1) scales well under various system loads and (2) similar to the B-tree case, provides a significantly more efficient implementation compared to predicate locking for multidimensional AMs as well. Since a wide variety of multidimensional index structures can be implemented using GiST, the developed algorithms provide a general solution to concurrency control in multidimensional AMs. To the best of our knowledge, this paper provides the first such solution based on granular locking.
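
The granular-locking idea behind phantom protection can be illustrated in one dimension: a range scan locks every granule it overlaps, so a subsequent insert into that range is blocked. This is a simplified sketch with invented names; in GiSTs the granules are tree-node regions, not fixed-size intervals:

```python
class GranularRangeLocks:
    """Phantom protection by locking granules (coarse partitions of the
    key space) rather than predicates: a range scan locks every granule
    it overlaps, so a later insert into that range finds a conflict."""

    def __init__(self, granule_size):
        self.granule_size = granule_size
        self.locks = {}   # granule index -> owning txn id

    def _granules(self, lo, hi):
        return range(lo // self.granule_size, hi // self.granule_size + 1)

    def lock_range(self, txn, lo, hi):
        granules = list(self._granules(lo, hi))
        if any(self.locks.get(g, txn) != txn for g in granules):
            return False            # another txn holds an overlapping granule
        for g in granules:
            self.locks[g] = txn
        return True

    def lock_insert(self, txn, key):
        # An insert locks the single granule its key falls into.
        return self.lock_range(txn, key, key)
```

The trade-off the paper measures is exactly this: granules make lock checks cheap dictionary probes instead of predicate evaluations, at the cost of some false conflicts within a granule.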

Proceedings Article
07 Sep 1999
TL;DR: A new approach to extensible indexing implemented in Informix Dynamic Server with Universal Data Option (IDS/UDO) based on the generalized search tree, or GiST, which is a template index structure for abstract data types that supports an extensible set of queries.
Abstract: Today’s object-relational DBMSs (ORDBMSs) are designed to support novel application domains by providing an extensible architecture, supplemented by domain-specific database extensions supplied by external vendors. An important aspect of ORDBMSs is support for extensible indexing, which allows the core database server to be extended with external access methods (AMs). This paper describes a new approach to extensible indexing implemented in Informix Dynamic Server with Universal Data Option (IDS/UDO). The approach is based on the generalized search tree, or GiST, which is a template index structure for abstract data types that supports an extensible set of queries. GiST encapsulates core database indexing functionality including search, update, concurrency control and recovery, and thereby relieves the external access method (AM) of the burden of dealing with these issues. The IDS/UDO implementation employs a newly designed GiST API that reduces the number of user-defined function calls, which are typically expensive to execute, and at the same time makes GiST a more flexible data structure. Experiments show that GiST-based AM extensibility can offer substantially better performance than built-in AMs when indexing user-defined data types.

Proceedings ArticleDOI
13 Mar 1999
TL;DR: This paper describes how concurrent actions are coordinated in a multi-user, large-scale 3-D layout system CIAO, and presents the multi-user interfaces of CIAO that provide some sense of isolation as well as rich awareness.
Abstract: This paper is concerned with the concurrency control for collaborative virtual environments. In particular, we describe how concurrent actions are coordinated in a multi-user, large-scale 3-D layout system CIAO. In contrast to many existing systems that sacrifice responsiveness in order to maintain consistency, CIAO achieves optimal response and notification time without compromising consistency. The optimal responsiveness is achieved by a new multicast-based, optimistic concurrency control mechanism. Even operations on a group of related objects do not entail any latency for concurrency control. We also present the multi-user interfaces of CIAO that provide some sense of isolation as well as rich awareness.

Book ChapterDOI
TL;DR: The paper derives criteria and practical protocols for guaranteeing global snapshot isolation at the federation level and generalizes the well-known ticket method to cope with combinations of isolation levels in a federated system.
Abstract: Federated transaction management (also known as multidatabase transaction management in the literature) is needed to ensure the consistency of data that is distributed across multiple, largely autonomous, and possibly heterogeneous component databases and accessed by both global and local transactions. While the global atomicity of such transactions can be enforced by using a standardized commit protocol like XA or its CORBA counterpart OTS, global serializability is not self-guaranteed as the underlying component systems may use a variety of potentially incompatible local concurrency control protocols. The problem of how to achieve global serializability, by either constraining the component systems or implementing additional global protocols at the federation level, has been intensively studied in the literature, but did not have much impact on the practical side. A major deficiency of the prior work has been that it focused on the idealized correctness criterion of serializability and disregarded the subtle but important variations of SQL isolation levels supported by most commercial database systems. This paper reconsiders the problem of federated transaction management, more specifically its concurrency control issues, with particular focus on isolation levels used in practice, especially the popular snapshot isolation provided by Oracle. As pointed out in a SIGMOD 1995 paper by Berenson et al., a rigorous foundation for reasoning about such concurrency control features of commercial systems is sorely missing. The current paper aims to close this gap by developing a formal framework that allows us to reason about local and global transaction executions where some (or all) transactions are running under snapshot isolation. The paper derives criteria and practical protocols for guaranteeing global snapshot isolation at the federation level. 
It further generalizes the well-known ticket method to cope with combinations of isolation levels in a federated system.
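
The snapshot-isolation rule this analysis builds on is commonly stated as "first committer wins": a transaction reads from a snapshot and may commit only if no concurrent, already-committed transaction wrote an overlapping item. A minimal sketch of that check (illustrative, not the paper's federated protocol):

```python
def can_commit_si(txn_start, txn_write_set, commit_log):
    """First-committer-wins rule of snapshot isolation: a transaction
    may commit only if no transaction that committed after this one's
    snapshot was taken wrote an overlapping item.

    commit_log: list of (commit_time, write_set) pairs."""
    return not any(commit_time > txn_start and (write_set & txn_write_set)
                   for commit_time, write_set in commit_log)
```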

Journal ArticleDOI
TL;DR: Main components of a workflow system that are relevant to the correctness in the presence of concurrency are formalized based on set theory and graph theory to form the theoretical basis of the correctness criterion.
Abstract: In this paper, the main components of a workflow system that are relevant to correctness in the presence of concurrency are formalized based on set theory and graph theory. The formalization, which constitutes the theoretical basis of the correctness criterion provided, can be summarized as follows:
- Activities of a workflow are represented through a notation based on set theory to make it possible to formalize the conceptual grouping of activities.
- Control-flow is represented as a special graph based on this set definition, and it includes serial composition, parallel composition, conditional branching, and nesting of individual activities and conceptual activities themselves.
- Data-flow is represented as a directed acyclic graph in conformance with the control-flow graph.
The formalization of correctness of concurrently executing workflow instances is based on this framework by defining two categories of constraints on the workflow environment with which the workflow instances and their activities interact:
- Basic constraints that specify the correct states of a workflow environment.
- Inter-activity constraints that define the semantic dependencies among activities, such as an activity requiring the validity of a constraint that is set or verified by a preceding activity.
Basic constraints graphs and inter-activity constraints graphs, which are in conformance with the control-flow and data-flow graphs, are then defined to represent these constraints. These graphs are used in formalizing the intervals among activities where an inter-activity constraint should be maintained and the intervals where a basic constraint remains invalid. A correctness criterion is defined for an interleaved execution of workflow instances using the constraint graphs. A concurrency control mechanism, namely the Constraint Based Concurrency Control technique, is developed based on the correctness criterion.
The performance analysis shows the superiority of the proposed technique. Other possible approaches to the problem are also presented.

Proceedings ArticleDOI
23 Mar 1999
TL;DR: A new algorithm is introduced that combines multiversion concurrency control schemes on a server with reconciliation of updates from disconnected clients to solve the reconciliation problem of serializing potentially conflicting updates performed by local transactions on disconnected clients on all copies of the database.
Abstract: As mobile computing devices become more and more popular, mobile databases have started gaining popularity. An important feature of these database systems is their ability to allow optimistic replication of data by permitting disconnected mobile devices to perform local updates on replicated data. The fundamental problem in this approach is the reconciliation problem, i.e. the problem of serializing potentially conflicting updates performed by local transactions on disconnected clients on all copies of the database. We introduce a new algorithm that combines multiversion concurrency control schemes on a server with reconciliation of updates from disconnected clients. The scheme generalizes to multiversion systems the single-version optimistic method of reconciliation, in which client transactions are allowed to commit on the server iff data items in their read sets are not updated on the server after replication.
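
The single-version reconciliation rule that the paper generalizes — a client transaction commits on the server iff nothing in its read set changed after replication — can be sketched with version numbers (names are illustrative):

```python
def reconcile(client_txns, server_versions, replica_snapshot):
    """Accept a disconnected client's transaction iff nothing it read
    was updated on the server after the replica was taken.

    client_txns:      list of (txn_id, read_set) pairs
    server_versions:  dict item -> latest version number on the server
    replica_snapshot: dict item -> version the client replicated"""
    accepted, rejected = [], []
    for txn_id, read_set in client_txns:
        stale = any(server_versions.get(i, 0) != replica_snapshot.get(i, 0)
                    for i in read_set)
        (rejected if stale else accepted).append(txn_id)
    return accepted, rejected
```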

Journal ArticleDOI
TL;DR: An efficient distributed algorithm to detect generalized deadlocks in replicated databases that use quorum-consensus algorithms to perform majority voting and is shown to perform significantly better in both time and message complexity than the best known existing algorithms.
Abstract: Replicated databases that use quorum-consensus algorithms to perform majority voting are prone to deadlocks. Due to the P-out-of-Q nature of quorum requests, deadlocks that arise are generalized deadlocks and are hard to detect. We present an efficient distributed algorithm to detect generalized deadlocks in replicated databases. The algorithm performs reduction of a distributed wait-for-graph (WFG) to determine the existence of a deadlock. If sufficient information to decide the reducibility of a node is not available at that node, the algorithm attempts reduction later in a lazy manner. We prove the correctness of the algorithm. The algorithm has a message complexity of 2e messages and a worst-case time complexity of 2d+2 hops, where e is the number of edges and d is the diameter of the WFG. The algorithm is shown to perform significantly better in both time and message complexity than the best known existing algorithms. We conjecture that this is an optimal algorithm, in time and message complexity, to detect generalized deadlocks if no transaction has complete knowledge of the topology of the WFG or the system and the deadlock detection is to be carried out in a distributed manner.

Proceedings ArticleDOI
08 Apr 1999
TL;DR: A new CC protocol called MIRROR (Managing Isolation in Replicated Real-time Object Repositories), specifically designed for firm-deadline applications operating on replicated real-time databases, which augments the optimistic two-phase locking algorithm developed for non-real- time databases with a novel and easily implementable state-based conflict resolution mechanism to fine-tune the real- time performance.
Abstract: Data replication can help database systems meet the stringent temporal constraints of current time-critical applications, especially Internet-based services. A prerequisite, however, is the development of high-performance concurrency control mechanisms. We present here a new CC protocol called MIRROR (Managing Isolation in Replicated Real-time Object Repositories), specifically designed for firm-deadline applications operating on replicated real-time databases. MIRROR augments the optimistic two-phase locking (O2PL) algorithm developed for non-real-time databases with a novel and easily implementable state-based conflict resolution mechanism to fine-tune the real-time performance. A simulation-based study of MIRROR's performance against the real-time versions of a representative set of classical protocols shows that (a) the relative performance characteristics of replica concurrency control algorithms in the real-time environment could be significantly different from their performance in a traditional (non-real-time) database system, (b) compared to locking based protocols, MIRROR provides the best performance in replicated environments for real-time applications, and (c) MIRROR's conflict resolution mechanism works almost as well as more sophisticated (and difficult to implement) strategies.

Journal ArticleDOI
TL;DR: Simulation results show that the DDCR can significantly improve the system performance under different workload and workload distributions, and its performance is consistently better than the base protocol and the Opt protocols in both main-memory resident and disk-resident DRTDBS.
Abstract: In a distributed real-time database system (DRTDBS), a commit protocol is required to ensure transaction failure atomicity. If data conflicts occur between executing and committing transactions, the performance of the system may be greatly affected. In this paper, we propose a new protocol, called deadline-driven conflict resolution (DDCR), which integrates concurrency control and transaction commitment management for resolving executing and committing data conflicts amongst firm real-time transactions. With the DDCR, a higher degree of concurrency can be achieved, as many data conflicts of such kind can be alleviated, and executing transactions can access data items which are being held by committing transactions in conflicting modes. Also, the impact of temporary failures which occurred during the commitment of a transaction on other transactions, and the dependencies created due to sharing of data items is much reduced by reversing the dependencies between the transactions. A simulation model has been developed and extensive simulation experiments have been performed to compare the performance of the DDCR with other protocols such as the Opt [1], the Healthy-Opt [2], and the base protocol, which use priority inheritance and blocking to resolve the data conflicts. The simulation results show that the DDCR can significantly improve the system performance under different workload and workload distributions. Its performance is consistently better than the base protocol and the Opt protocols in both main-memory resident and disk-resident DRTDBS.

Proceedings ArticleDOI
13 Dec 1999
TL;DR: Applying this timing-based approach to the asynchronous data sharing problem has the advantage that sequential data sharing mechanisms may be adapted to help remove the priority inversion and blocking incurred by the commonly used lock-based synchronization mechanism.
Abstract: The paper presents a timing-based approach to implementing fully asynchronous reader/writer mechanisms which addresses the problems of priority inversion and blocking among tasks within multiprocessor real-time systems. The approach associates a sequential circular-buffer data sharing algorithm, which, although lock-free and loop-free, is vulnerable to some timing subtlety, with necessary feasibility conditions and a configuring mechanism. Both the feasibility conditions and the configuring mechanism are constructed by analyzing the timing properties of the relevant tasks. The feasibility conditions are employed to verify the safety property of a given implementation of the data sharing algorithm, while the configuring mechanism helps configure the buffer holding the shared data objects. Applying this timing-based approach to the asynchronous data sharing problem has the advantage that sequential data sharing mechanisms may be adapted to help remove the priority inversion and blocking incurred by the commonly used lock-based synchronization mechanism. It therefore demonstrates an effective alternative to the traditional algorithm-based approaches.
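
Asynchronous reader/writer sharing without locks is commonly achieved with a sequence-counter (seqlock-style) protocol, which shares the non-blocking flavor of the paper's circular-buffer algorithm but is not that algorithm; this sketch also elides the buffer sizing that the paper's feasibility conditions govern:

```python
class SeqlockSlot:
    """Lock-free single-writer data sharing via a sequence counter:
    the writer never blocks, and a reader retries if it observed a
    concurrent write (sequence odd, or changed during the read)."""

    def __init__(self, value=None):
        self.seq = 0        # even: stable; odd: write in progress
        self.value = value

    def write(self, value):
        self.seq += 1       # mark write in progress (counter goes odd)
        self.value = value
        self.seq += 1       # publish (counter even again)

    def read(self):
        while True:
            before = self.seq
            snapshot = self.value
            if before % 2 == 0 and self.seq == before:
                return snapshot   # the snapshot was consistent
```

Note the reader may loop, unlike the loop-free algorithm in the paper; removing that loop is precisely what the paper's timing analysis buys.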

Proceedings ArticleDOI
05 Jan 1999
TL;DR: It is demonstrated mathematically that all deadlock structures can be classified into these patterns and that deadlock occurs in business process workflow models which have one or more of these patterns.
Abstract: To support automated deadlock detection, five patterns which cause deadlock are defined. When deadlock structures are expressed by control-node combinations alone, it is difficult to capture their connection properties, since these are determined by many complex combinations of control nodes. Therefore, two new concepts, "reachability" and "absolute transferability", which express the connection properties between two control nodes, are introduced. With these concepts, the five deadlock patterns are expressed simply. It is demonstrated mathematically that all deadlock structures can be classified into these patterns and that deadlock occurs in business process workflow models which contain one or more of these patterns.
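Checks of this kind reduce to questions about the workflow's control graph. The sketch below is an illustrative reachability test over an adjacency-dict graph (names invented); the paper's "reachability" and "absolute transferability" relations between control nodes are richer, but rest on the same style of traversal.

```python
def reachable(graph, src, dst):
    """Return True if control can flow from node src to node dst.

    Iterative depth-first search over a workflow control graph given
    as a dict mapping each node to its successor nodes.
    """
    seen, stack = set(), [src]
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, ()))
    return False
```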

Proceedings ArticleDOI
25 Feb 1999
TL;DR: This work proposes a new algorithm, called Update-First with Order (UFO), for concurrency control among the mobile transactions and update transactions, which has been shown to be an efficient method for data dissemination in mobile computing systems.
Abstract: We study the inconsistency problem in data broadcast. While data items in a mobile computing system are being broadcast, update transactions may install new values for the data items. If the executions of update transactions and the broadcast of data items are interleaved without any control, it is possible that the mobile transactions, which are generated by mobile clients, may observe inconsistent data values. We propose a new algorithm, called Update-First with Order (UFO), for concurrency control among the mobile transactions and update transactions. The mobile transactions are assumed to be read-only. In the UFO algorithm, all the schedules among them are serializable. Two important properties of the UFO algorithm are that: (1) the mobile transactions do not need to set any lock before they read the data items from the "air"; and (2) its impact on the adopted broadcast algorithm, which has been shown to be an efficient method for data dissemination in mobile computing systems, is minimal.
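The client-side invariant at stake can be sketched as follows. This is not the UFO algorithm itself — UFO works on the server side by ordering update transactions relative to the broadcast — and all names are invented. The sketch only shows what is being protected: a read-only mobile transaction reads from the "air" without setting any lock, tagging each read with the broadcast cycle in which the value was installed, and its result is usable only if the reads form one consistent snapshot.

```python
class MobileReader:
    """Illustrative lock-free consistency check for broadcast reads."""
    def __init__(self):
        self.reads = []

    def read(self, item, value, cycle):   # no lock is set before reading
        self.reads.append((item, value, cycle))
        return value

    def consistent(self):
        # A single installation cycle across all reads means the
        # transaction observed one consistent database state.
        return len({cycle for _, _, cycle in self.reads}) <= 1
```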

Journal ArticleDOI
TL;DR: An algorithm free of false deadlock resolutions is provided, a simple specification for a safe deadlock resolution algorithm is introduced, and the new distributed solution is developed in a hierarchical fashion from its abstract specification.
Abstract: Previous proposals for distributed deadlock detection/resolution algorithms for the AND model have the main disadvantage of resolving false deadlocks, that is, nonexistent deadlocks or deadlocks currently being resolved. This paper provides an algorithm free of false deadlock resolutions. A simple specification for a safe deadlock resolution algorithm is introduced, and the new distributed solution is developed in a hierarchical fashion from its abstract specification. The algorithm is probe-based, uses node priorities, and coordinates the actions of resolvers so that false deadlocks are not resolved. The solution is formally proven correct using the Input/Output Automata model. Finally, a study of the liveness of the algorithm is provided.
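A bare-bones probe traversal with node priorities might look like the sketch below. It is deliberately simplified to one outgoing wait edge per node; the paper treats the general AND model and, crucially, adds the coordination between concurrent resolvers that prevents false deadlock resolution, which is omitted here. All names are invented.

```python
def probe_for_victim(waits_for, priority, initiator):
    """Follow wait-for edges from the initiator; if the probe returns
    to the initiator a deadlock cycle exists, and the lowest-priority
    node on the cycle is chosen as the victim."""
    path = [initiator]
    node = waits_for.get(initiator)
    while node is not None:
        if node == initiator:                   # probe came back: deadlock
            return min(path, key=lambda n: priority[n])
        if node in path:                        # cycle not through initiator
            return None
        path.append(node)
        node = waits_for.get(node)
    return None                                 # no deadlock seen from initiator
```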

Journal ArticleDOI
TL;DR: The paper defines external and interobject temporal consistency, the notion of phase variance, and builds a computation model that ensures such consistencies for replicated data deterministically where the underlying communication mechanism provides deterministic message delivery semantics and probabilistically where no such support is available.
Abstract: This paper presents a real-time primary-backup replication scheme to support fault-tolerant data access in a real-time environment. The main features of the system are fast response to client requests, bounded inconsistency between primary and backup, temporal consistency guarantees for replicated data, and quick recovery from failures. The paper defines external and interobject temporal consistency and the notion of phase variance, and builds a computation model that ensures such consistencies for replicated data: deterministically where the underlying communication mechanism provides deterministic message-delivery semantics, and probabilistically where no such support is available. It also presents an optimization of the system and an analysis of the failover process, which includes failover consistency and failure recovery time. An implementation of the proposed scheme is built within the x-kernel architecture on the MK 7.2 microkernel from the Open Group. The results of a detailed performance evaluation of this implementation are also discussed.

Proceedings ArticleDOI
07 Dec 1999
TL;DR: The paper proposes a test-case generation method with the EIAG (Event InterActions Graph) and the ISTC (Interaction Sequences Testing Criteria); the generated copaths can detect dead (unreachable) statements which concern interactions, and they can find some communication errors and deadlocks in testing.
Abstract: Test-cases play an important role in high-quality software testing. Inadequate test-cases may leave bugs undetected after testing, while overlapping test-cases increase testing costs. The paper proposes a test-case generation method based on the EIAG (Event InterActions Graph) and the ISTC (Interaction Sequences Testing Criteria). The EIAG represents the behavior of concurrent programs. It consists of Event Graphs and Interactions: an Event Graph is a control-flow graph of a program unit in a concurrent program, and the Interactions represent interactions (synchronizations, communications and waits) between the program units. The proposed ISTC are based on sequences of Interactions. Cooperated paths (copaths) on the EIAG are generated as test-cases satisfying the ISTC. The generated copaths can detect dead (unreachable) statements which concern interactions, and they can find some communication errors and deadlocks in testing. It is, however, necessary to validate the feasibility of the generated copaths.

Proceedings Article
01 Jan 1999
TL;DR: In this paper, partial redundancy elimination of pointer-based access expressions is proposed to reduce the overhead of read and write barriers, evaluated on the traversal portions of the OO7 benchmark suite.
Abstract: Persistent programming languages manage volatile memory as a cache for stable storage, imposing a read barrier on operations that access the cache, and a write barrier on updates to the cache. The read barrier checks the cache residency of the target object while the write barrier marks the target as dirty in the cache to support a write-back policy that defers updates to stable storage until eviction or stabilization. These barriers may also subsume additional functionality, such as negotiation of locks on shared objects to support concurrency control. Compilers for persistent programming languages generate barrier code to protect all accesses to possibly persistent objects. Orthogonal persistence imposes this cost on every object access, since all objects are potentially persistent, at significant overhead to execution. We have designed a new suite of compiler optimizations, focusing on partial redundancy elimination of pointer-based access expressions, that significantly reduce this impact. These are implemented in an analysis and optimization framework for Java bytecodes, in support of orthogonal persistence for Java. In experiments with the traversal portions of the OO7 benchmark suite, our optimizations reduce the number of read and write barriers executed by an average of 83% and 25%, respectively.
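The two barriers described above can be sketched as follows. The class and store layout are invented for illustration; the point is to show the per-access checks whose redundant executions the paper's optimizations eliminate.

```python
class PersistentRef:
    """Illustrative read and write barriers for a cached persistent object."""
    def __init__(self, stable_store, oid):
        self.store, self.oid = stable_store, oid
        self.resident = False            # is the object in the volatile cache?
        self.dirty = False               # does the cache hold unwritten updates?
        self.value = None

    def read_barrier(self):
        if not self.resident:            # residency check on every access...
            self.value = self.store[self.oid]   # ...faults the object in
            self.resident = True
        return self.value

    def write_barrier(self, value):
        self.read_barrier()              # target must be resident first
        self.value = value
        self.dirty = True                # mark dirty; write-back is deferred

    def stabilize(self):                 # eviction or stabilization point
        if self.dirty:
            self.store[self.oid] = self.value
            self.dirty = False
```

Since orthogonal persistence forces these checks onto every object access, removing provably redundant ones (e.g. a second read barrier on an object already known resident) is what yields the reported savings.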

Journal ArticleDOI
01 May 1999
TL;DR: A quadratic-time algorithm which decomposes each flow graph into a nested family of regions, such that all allocated resources are released before the control leaves a region, has the potential to achieve better resource utilization.
Abstract: We describe a natural extension of the banker's algorithm (E.W. Dijkstra, 1968) for deadlock avoidance in operating systems. Representing the control flow of each process as a rooted tree of nodes corresponding to resource requests and releases, we propose a quadratic-time algorithm which decomposes each flow graph into a nested family of regions, such that all allocated resources are released before control leaves a region. Also, information on the maximum resource claims for each of the regions can be extracted prior to process execution. By inserting operating system calls when a process enters a new region at runtime, and applying the original banker's algorithm for deadlock avoidance, this method has the potential to achieve better resource utilization because information on the "localized approximate maximum claims" is used for testing system safety.
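The underlying safety test is the classic banker's check, sketched below for a single resource type (an illustrative simplification; the function name and arguments are invented). The paper's contribution is to feed this same test smaller, per-region "localized approximate maximum claims" instead of one whole-process maximum claim, so more states pass as safe.

```python
def is_safe(free, allocation, max_claim):
    """Banker's safety test: True if some completion order exists in
    which every process can obtain its remaining claim and finish."""
    need = [m - a for m, a in zip(max_claim, allocation)]
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and need[i] <= free:
                free += allocation[i]    # process i can run to completion
                finished[i] = True       # and then releases everything it holds
                progress = True
    return all(finished)
```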

Journal ArticleDOI
TL;DR: A new concurrency control scheme is described that guarantees that only acceptable interleavings occur, together with a new correctness criterion that is weaker than serializability yet guarantees that the specifications of all transactions are met.