
Showing papers on "Concurrency control" published in 1989


Journal ArticleDOI
01 Jun 1989
TL;DR: An algorithm for concurrency control in real-time groupware systems is presented and its advantages are its simplicity of use and its responsiveness: users can operate directly on the data without obtaining locks.
Abstract: Groupware systems are computer-based systems that support two or more users engaged in a common task, and that provide an interface to a shared environment. These systems frequently require fine-granularity sharing of data and fast response times. This paper distinguishes real-time groupware systems from other multi-user systems and discusses their concurrency control requirements. An algorithm for concurrency control in real-time groupware systems is then presented. The advantages of this algorithm are its simplicity of use and its responsiveness: users can operate directly on the data without obtaining locks. The algorithm must know some semantics of the operations. However the algorithm's overall structure is independent of the semantic information, allowing the algorithm to be adapted to many situations. An example application of the algorithm to group text editing is given, along with a sketch of its proof of correctness in this particular case. We note that the behavior desired in many of these systems is non-serializable.

1,047 citations
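
This is the paper that popularized operation transformation for groupware. As a minimal sketch of the lock-free idea (Python; the function names and the single-character insert encoding are illustrative, not the paper's dOPT algorithm): each site applies its own operation immediately, and transforms concurrent remote operations so that all sites converge.

```python
# Sketch of operation transformation for group text editing (hypothetical
# names; illustrates the core idea, not the paper's dOPT algorithm).

def transform_insert(op, against):
    """Shift op's position so it still applies after 'against' has run."""
    pos, ch = op
    a_pos, _ = against
    if a_pos <= pos:
        return (pos + 1, ch)   # concurrent insert landed before us: shift right
    return op

def apply_insert(text, op):
    pos, ch = op
    return text[:pos] + ch + text[pos:]

# Two sites start from "abc" and issue concurrent inserts without locking.
local, remote = (1, "X"), (2, "Y")
site1 = apply_insert(apply_insert("abc", local), transform_insert(remote, local))
site2 = apply_insert(apply_insert("abc", remote), transform_insert(local, remote))
assert site1 == site2 == "aXbYc"   # both sites converge without locks
```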


Journal ArticleDOI
TL;DR: An up-to-date and comprehensive survey of deadlock detection algorithms is presented, their merits and drawbacks are discussed, and their performances are compared.
Abstract: The author describes a series of deadlock detection techniques based on centralized, hierarchical, and distributed control organizations. The point of view is that of practical implications. An up-to-date and comprehensive survey of deadlock detection algorithms is presented, their merits and drawbacks are discussed, and their performances (delays as well as message complexity) are compared. Related issues that require further research, such as correctness of the algorithms, performance of the algorithms, and deadlock resolution, are examined.

296 citations


Proceedings Article
01 Jul 1989
TL;DR: This paper introduces Quasi Serializability, a correctness criterion for concurrency control in heterogeneous distributed database environments and uses quasi serializability theory to give a correctness proof for an altruistic locking algorithm.
Abstract: In this paper, we introduce Quasi Serializability, a correctness criterion for concurrency control in heterogeneous distributed database environments. A global history is quasi serializable if it is (conflict) equivalent to a quasi serial history in which global transactions are submitted serially. Quasi serializability theory is an extension of serializability. We study the relationships between serializability and quasi serializability and the reasons quasi serializability can be used as a correctness criterion in heterogeneous distributed database environments. We also use quasi serializability theory to give a correctness proof for an altruistic locking algorithm.

218 citations
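
To make the criterion concrete, here is a hedged toy check (Python; the history encoding and helper names are invented for illustration): quasi serializability requires the global transactions to be serializable among themselves, so the sketch builds a precedence graph restricted to global transactions and looks for a serial order.

```python
# Toy illustration of quasi serializability (hypothetical encoding, not the
# paper's formalism): check that conflicts induce an acyclic precedence
# relation over the *global* transactions only.

from collections import defaultdict

def quasi_serial_order(conflicts, global_txns):
    """conflicts: list of (t1, t2) pairs meaning t1 precedes t2.
    Returns a serial order of global transactions, or None if cyclic."""
    edges = defaultdict(set)
    indeg = {t: 0 for t in global_txns}
    for t1, t2 in conflicts:
        if t1 in indeg and t2 in indeg and t2 not in edges[t1]:
            edges[t1].add(t2)
            indeg[t2] += 1
    order, ready = [], [t for t, d in indeg.items() if d == 0]
    while ready:
        t = ready.pop()
        order.append(t)
        for u in edges[t]:
            indeg[u] -= 1
            if indeg[u] == 0:
                ready.append(u)
    return order if len(order) == len(global_txns) else None

# G1 precedes G2 at two local sites: a quasi serial order exists.
print(quasi_serial_order([("G1", "G2"), ("G1", "G2")], ["G1", "G2"]))  # ['G1', 'G2']
```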


Journal ArticleDOI
TL;DR: In this article, the authors describe a replica control protocol that allows the accessing of data in spite of site failures and network partitioning, which provides the database designer with a large degree of flexibility in deciding the degree of data availability and the cost of accessing data.
Abstract: In a replicated database, a data item may have copies residing on several sites. A replica control protocol is necessary to ensure that data items with several copies behave as if they consist of a single copy, as far as users can tell. We describe a new replica control protocol that allows the accessing of data in spite of site failures and network partitioning. This protocol provides the database designer with a large degree of flexibility in deciding the degree of data availability, as well as the cost of accessing data.

209 citations
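
The availability/cost trade-off such protocols expose is usually parameterized by read and write quorums. A minimal sketch of the standard quorum intersection rules that quorum-style replica control rests on (not this paper's specific protocol):

```python
# Quorum intersection sanity check (standard rules that quorum-based replica
# control builds on; a sketch, not this paper's protocol).

def valid_quorums(n, r, w):
    """n copies; r = read quorum size, w = write quorum size."""
    reads_see_last_write = r + w > n   # every read quorum meets every write quorum
    writes_are_ordered   = 2 * w > n   # any two write quorums intersect
    return reads_see_last_write and writes_are_ordered

# Trading read cost against write cost for n = 5 copies:
print(valid_quorums(5, 1, 5))  # True: cheap reads, writes need all copies
print(valid_quorums(5, 3, 3))  # True: balanced majority quorums
print(valid_quorums(5, 2, 3))  # False: a read quorum can miss the last write
```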


Journal ArticleDOI
TL;DR: This paper introduces several local constraints on individual objects that suffice to ensure global atomicity of actions and presents three local atomicity properties, each of which is optimal.
Abstract: Atomic actions (or transactions) are useful for coping with concurrency and failures. One way of ensuring atomicity of actions is to implement applications in terms of atomic data types: abstract data types whose objects ensure serializability and recoverability of actions using them. Many atomic types can be implemented to provide high levels of concurrency by taking advantage of algebraic properties of the type's operations, for example, that certain operations commute. In this paper we analyze the level of concurrency permitted by an atomic type. We introduce several local constraints on individual objects that suffice to ensure global atomicity of actions; we call these constraints local atomicity properties. We present three local atomicity properties, each of which is optimal: no strictly weaker local constraint on objects suffices to ensure global atomicity for actions. Thus, the local atomicity properties define precise limits on the amount of concurrency that can be permitted by an atomic type.

181 citations
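
As a concrete flavor of "taking advantage of algebraic properties", consider a hedged sketch of a state-dependent commutativity test for a bank-account type (Python; names and encoding are illustrative, and the test is deliberately conservative rather than the paper's formal properties):

```python
# Sketch: commutativity-based conflict test for an atomic account object.
# Conservative: it may reject some pairs that do in fact commute.

def commute(op1, op2, balance):
    """Do op1 and op2 yield the same final state and return values in
    either order, starting from this balance?"""
    kind1, amt1 = op1
    kind2, amt2 = op2
    if kind1 == kind2 == "deposit":
        return True                      # deposits always commute
    if "deposit" in (kind1, kind2):
        w = amt1 if kind1 == "withdraw" else amt2
        return w <= balance              # withdraw succeeds in both orders
    return amt1 + amt2 <= balance        # two withdraws: both succeed either way

# A scheduler could grant concurrent access whenever commute(...) holds:
print(commute(("deposit", 10), ("deposit", 20), balance=0))    # True
print(commute(("withdraw", 30), ("deposit", 50), balance=20))  # False: order matters
```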


Proceedings ArticleDOI
30 Apr 1989
TL;DR: Performance data indicate that the CPU scheduling algorithm is the most significant of all the algorithms in improving the performance of real-time transactions, and that conflict-resolution protocols which directly address deadlines and criticality can have a substantial impact on performance compared to protocols that ignore such information.
Abstract: Results are presented of empirical evaluations carried out on the RT-CARAT testbed. This testbed was used to evaluate a set of integrated protocols that support real-time transactions. A basic locking scheme for concurrency control was used to develop and evaluate several algorithms for handling CPU scheduling, data-conflict resolution, deadlock resolution, transaction wakeup, and transaction restart. The performance data indicate that the CPU scheduling algorithm is the most significant of all the algorithms in improving the performance of real-time transactions; that conflict-resolution protocols which directly address deadlines and criticality can have a substantial impact on performance compared to protocols that ignore such information; that both criticality and deadline distributions strongly affect transaction performance; and that overheads such as locking and message communication are nonnegligible and cannot be ignored in real-time transaction analysis. It is believed that these empirical results represent the first experimental results for real-time transactions on a testbed system.

177 citations
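
One plausible shape of a deadline- and criticality-aware conflict-resolution protocol of the kind evaluated here (Python; a sketch of one policy, not the exact RT-CARAT protocols):

```python
# Sketch of deadline/criticality-aware conflict resolution on top of locking
# (hypothetical names; one plausible policy, not the paper's exact protocols).

def resolve_conflict(holder, requester):
    """Each transaction: dict with 'deadline' and 'criticality'.
    Returns the action taken when the requester's lock request conflicts."""
    # Prefer the more critical transaction; break ties by earlier deadline.
    h = (-holder["criticality"], holder["deadline"])
    r = (-requester["criticality"], requester["deadline"])
    if r < h:
        return "abort_holder"    # requester is more urgent: preempt the lock
    return "block_requester"     # holder is more urgent: requester waits

t_hold = {"deadline": 100, "criticality": 1}
t_req  = {"deadline": 40,  "criticality": 1}
print(resolve_conflict(t_hold, t_req))   # abort_holder: earlier deadline wins
```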


Proceedings Article
01 Jul 1989
TL;DR: A new group of algorithms is described for scheduling real-time transactions that produce serializable schedules, suitable for managing transactions with deadlines on a single-processor, disk-resident database system.
Abstract: Managing transactions with real-time requirements and disk-resident data presents many new problems. In this paper we address several: How can we schedule transactions with deadlines? How do the real-time constraints affect concurrency control? How does the scheduling of I/O requests affect the timeliness of transactions? How should exclusive and shared locking be handled? We describe a new group of algorithms for scheduling real-time transactions which produce serializable schedules. We present a model for scheduling transactions with deadlines on a single-processor, disk-resident database system, and evaluate the scheduling algorithms through detailed simulation.

167 citations
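
The deadline-driven dispatching underlying such schedulers can be sketched briefly (Python; hypothetical encoding, and only one of the policies a real system would combine with I/O scheduling and locking):

```python
# Earliest-deadline-first dispatch for ready transactions (a sketch of one
# common real-time policy, not necessarily the paper's exact algorithms).

import heapq

class EDFScheduler:
    def __init__(self):
        self._ready = []                    # min-heap ordered by deadline

    def submit(self, deadline, txn_id):
        heapq.heappush(self._ready, (deadline, txn_id))

    def next_transaction(self, now):
        while self._ready:
            deadline, txn_id = heapq.heappop(self._ready)
            if deadline >= now:
                return txn_id               # run the most urgent feasible txn
            # deadline already missed: abort or demote rather than run it
        return None

sched = EDFScheduler()
sched.submit(50, "T1"); sched.submit(20, "T2")
print(sched.next_transaction(now=10))  # T2: earliest deadline first
```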


Proceedings ArticleDOI
06 Feb 1989
TL;DR: A framework is presented for analysis of time-critical scheduling algorithms, and the main classes of scheduling algorithms are identified according to the availability of information about resource requirements and execution times.
Abstract: A framework is presented for analysis of time-critical scheduling algorithms. The main assumptions behind real-time scheduling and concurrency control algorithms are analyzed, and a unified approach is proposed. Two main classes of schedulers are identified according to the availability of information about resource requirements and execution times: conflict-resolving schedulers resolve conflicts at run-time, and hence can only produce a sequence of operations satisfying task priorities and resource constraints; and conflict-avoiding schedulers determine resource requirements and expected execution times through offline transaction-class preanalysis and produce a complete time-critical schedule satisfying both timing and resource constraints. For the latter case, the resolution of overload is essential. Examples are given to illustrate the framework and the main classes of scheduling algorithms.

117 citations


Journal ArticleDOI
TL;DR: The authors develop a model and define performance measures for a replicated data system that makes use of a quorum-consensus algorithm to maintain consistency and derive optimal read and write quorums which maximize the proportion of successful transactions.
Abstract: The authors develop a model and define performance measures for a replicated data system that makes use of a quorum-consensus algorithm to maintain consistency. They consider two measures: the proportion of successfully completed transactions in systems where a transaction aborts if data is not available, and the mean response time in systems where a transaction waits until data becomes available. Based on the model, the authors show that for some quorum assignments there is an optimal degree of replication beyond which performance degrades. There exist other quorum assignments which have no optimal degree of replication. The authors also derive optimal read and write quorums which maximize the proportion of successful transactions.

100 citations
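
For intuition about why an optimal quorum assignment exists, a sketch of the underlying computation (Python; the objective function is illustrative, not the paper's exact model): if each of n replicas is independently up with probability p, a quorum of size q succeeds with a binomial tail probability, and one can search assignments with r + w = n + 1 for a good read/write mix.

```python
# Probability that at least q of n independent replicas (each up with
# probability p) are available: the building block for such models.

from math import comb

def quorum_availability(n, q, p):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(q, n + 1))

def best_quorums(n, p, read_frac=0.5):
    """Search (r, w) with r + w = n + 1, maximizing a weighted mix of read
    and write success (illustrative objective, not the paper's)."""
    best = max(range(1, n + 1),
               key=lambda r: read_frac * quorum_availability(n, r, p)
                           + (1 - read_frac) * quorum_availability(n, n + 1 - r, p))
    return best, n + 1 - best

print(quorum_availability(5, 3, 0.9))        # majority quorum of 5 copies: ~0.991
print(best_quorums(5, 0.9, read_frac=0.95))  # (2, 4): very read-heavy load
```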


Proceedings Article
01 Jul 1989
TL;DR: A simple and efficient recovery method for nested transactions, called ARIES/NT (Algorithm for Recovery and Isolation Exploiting Semantics for Nested Transactions), that uses write-ahead logging and supports semantically-rich modes of locking and operation logging is presented.
Abstract: A simple and efficient recovery method for nested transactions, called ARIES/NT (Algorithm for Recovery and Isolation Exploiting Semantics for Nested Transactions), that uses write-ahead logging and supports semantically-rich modes of locking and operation logging is presented. This method applies to a very general model of nested transactions, which includes partial rollbacks of subtransactions, upward and downward inheritance of locks, and concurrent execution of ancestor and descendant subtransactions. The adopted system architecture also encompasses aspects of distributed database management. ARIES/NT is an extension of the ARIES recovery and concurrency control method developed recently for the single-level transaction model by Mohan et al. in the IBM Research Report RJ6649.

96 citations
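
A compact sketch of the write-ahead logging discipline that ARIES-family methods rely on (Python; the structures are hypothetical, and this is the generic WAL rule, not ARIES/NT itself):

```python
# Write-ahead logging discipline in miniature (hypothetical structures; a
# sketch of the WAL rule ARIES-family methods rely on, not ARIES/NT itself).

log = []          # stands in for the stable log
pages = {}        # stands in for the disk

def update(page_id, old, new, txn_id):
    lsn = len(log)
    log.append({"lsn": lsn, "txn": txn_id, "page": page_id,
                "undo": old, "redo": new})      # log the change first...
    return lsn

def flush_page(page_id, value, page_lsn, flushed_lsn):
    # WAL rule: a page may reach disk only after its log record has.
    assert page_lsn <= flushed_lsn, "would violate write-ahead logging"
    pages[page_id] = value

lsn = update("P1", old="a", new="b", txn_id="T1")
flush_page("P1", "b", page_lsn=lsn, flushed_lsn=lsn)   # ok: record is on the log
```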


Journal ArticleDOI
TL;DR: In this article, the authors propose a model of CAD transactions which allows a group of cooperating designers to arrive at a complex design without being forced to wait over a long duration, and also allows the group of designers to collaborate on a design with another group by assigning subtasks.

Proceedings ArticleDOI
29 Mar 1989
TL;DR: This paper considers two general recovery methods for abstract data types, update-in-place and deferred-update, and gives a precise characterization of the conflict relations that work with each recovery method, and shows that each permits conflict Relations that the other does not.
Abstract: It is widely recognized by practitioners that concurrency control and recovery for transaction systems interact in subtle ways. In most theoretical work, however, concurrency control and recovery are treated as separate, largely independent problems. In this paper we investigate the interactions between concurrency control and recovery. We consider two general recovery methods for abstract data types, update-in-place and deferred-update. While each requires operations to conflict if they do not “commute,” the two recovery methods require subtly different notions of commutativity. We give a precise characterization of the conflict relations that work with each recovery method, and show that each permits conflict relations that the other does not. Thus, the two recovery methods place incomparable constraints on concurrency control. Our analysis applies to arbitrary abstract data types, including those with operations that may be partial or non-deterministic.
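
A hedged illustration of why the notions differ in practice (Python; invented encoding): forward commutativity can be tested empirically by running two operations in both orders from the same state and comparing states and results. The subtlety the paper formalizes is that update-in-place recovery needs a backward variant of this test, which accepts and rejects different operation pairs.

```python
# Empirical forward-commutativity test for a toy set object (hypothetical
# encoding; illustrates the flavor of the definitions, not their exact form).

def run(state, op):
    """Apply op to a frozenset state; return (new_state, result)."""
    kind, x = op
    if kind == "insert":
        return state | {x}, "ok"
    return state, x in state              # "member" query

def forward_commute(state, op1, op2):
    sA1, rA1 = run(state, op1)            # order A: op1 then op2
    sA2, rA2 = run(sA1, op2)
    sB1, rB1 = run(state, op2)            # order B: op2 then op1
    sB2, rB2 = run(sB1, op1)
    return sA2 == sB2 and rA1 == rB2 and rA2 == rB1

s = frozenset()
print(forward_commute(s, ("insert", 1), ("insert", 1)))  # True: inserts commute
print(forward_commute(s, ("insert", 1), ("member", 1)))  # False: order is visible
```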

Journal ArticleDOI
TL;DR: A modified, priority-based probe algorithm for deadlock detection and resolution in distributed database systems is presented and appears to be error-free.
Abstract: A modified, priority-based probe algorithm for deadlock detection and resolution in distributed database systems is presented. Various examples are used to show that the original priority-based algorithm, presented by M.K. Sinha and N. Natarajan (1985), either fails to detect deadlocks or reports deadlocks that do not exist in many situations. A modified algorithm that eliminates these problems is proposed. The algorithm has been tested through simulation and appears to be error-free. The performance of the modified algorithm is briefly discussed.
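
The probe (edge-chasing) idea behind such algorithms, reduced to a sketch (Python; hypothetical structures, ignoring the priority filtering that the modified algorithm adds):

```python
# Edge-chasing probe sketch (hypothetical structures): probes travel along
# wait-for edges; a probe arriving back at its initiator signals a cycle.
# Priority-based variants additionally filter probes by transaction priority.

def detect_deadlock(waits_for, initiator):
    """waits_for: dict mapping a txn to the txn it is blocked on (or None)."""
    seen = set()
    t = waits_for.get(initiator)
    while t is not None and t not in seen:
        if t == initiator:
            return True          # probe came home: deadlock
        seen.add(t)
        t = waits_for.get(t)
    return False

wf = {"T1": "T2", "T2": "T3", "T3": "T1"}
print(detect_deadlock(wf, "T1"))   # True: T1 -> T2 -> T3 -> T1
```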

Journal ArticleDOI
01 Jun 1989
TL;DR: A version control mechanism is proposed that enhances the modularity and extensibility of multiversion concurrency control algorithms, and simplifies the task of proving the correctness of these protocols.
Abstract: In this paper we propose a version control mechanism that enhances the modularity and extensibility of multiversion concurrency control algorithms. We decouple the multiversion algorithms into two components: version control and concurrency control. This permits modular development of multiversion protocols, and simplifies the task of proving the correctness of these protocols. An interesting feature of our framework is that the execution of read-only transactions becomes completely independent of the underlying concurrency control implementation. Also, algorithms with the version control mechanism have several advantages over most other multiversion algorithms.
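
The read-only independence can be seen in a minimal multiversion store (Python; hypothetical structures): writers append timestamped versions, and a read-only transaction reads as of its start timestamp without touching any locks.

```python
# Minimal multiversion store (hypothetical structures): writers append
# versions; read-only transactions read as of their start timestamp and
# never interact with the concurrency control for update transactions.

import bisect

class MVStore:
    def __init__(self):
        self.versions = {}                 # key -> sorted [(ts, value), ...]

    def write(self, key, value, ts):
        bisect.insort(self.versions.setdefault(key, []), (ts, value))

    def read_asof(self, key, snapshot_ts):
        vs = self.versions.get(key, [])
        i = bisect.bisect_right([ts for ts, _ in vs], snapshot_ts)
        return vs[i - 1][1] if i else None   # latest version <= snapshot

db = MVStore()
db.write("x", "v1", ts=5)
db.write("x", "v2", ts=9)
print(db.read_asof("x", snapshot_ts=7))   # v1: writes after ts 7 are invisible
```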

Journal ArticleDOI
TL;DR: Although using intermediate memory as a buffering device produces a moderate performance benefit, the analysis shows that more substantial gains can be realized when this technique is combined with the use of an integrated concurrency-coherency control protocol.
Abstract: The authors propose an integrated control mechanism and analyze the performance gain due to its use. An extension to the data sharing system structure is examined in which a shared intermediate memory is used for buffering and for early commit processing. Read-write-synchronization and write-serialization problems arise. The authors show how the integrated concurrency protocol can be used to overcome both problems. A queueing model is used to quantify the performance improvement. Although using intermediate memory as a buffering device produces a moderate performance benefit, the analysis shows that more substantial gains can be realized when this technique is combined with the use of an integrated concurrency-coherency control protocol.

Book
01 Jan 1989
TL;DR: In this book, the authors present concurrency control methods for the serializability, multiversion serializability, and multilevel atomicity models, and discuss performance issues in the concurrent access to data.
Abstract: Basic Definitions and Models. Concurrency Control Methods for the Serializability Model. Concurrency Control Methods for the Multiversion Serializability Model. Concurrency Control Methods for the Multilevel Atomicity Model. Performance Issues in the Concurrent Access to Data. Bibliography.

Journal ArticleDOI
TL;DR: Raid, a robust and adaptable distributed database system for transaction processing, is described, with server processes on each site, and a software tool for the evaluation of transaction processing algorithms in an operating system kernel is proposed.
Abstract: Raid, a robust and adaptable distributed database system for transaction processing, is described. Raid is a message-passing system, with server processes on each site. The servers manage concurrent processing, consistent replicated copies during site failures and atomic distributed commitment. A high-level, layered communications package provides a clean, location-independent interface between servers. The latest design of the communications package delivers messages via shared memory in a high-performance configuration in which several servers are linked into a single process. Raid provides the infrastructure to experimentally investigate various methods for supporting reliable distributed transaction processing. Measurements on transaction processing time and server CPU time are presented. Data and conclusions of experiments in three categories are also presented: communications software, consistent replicated copy control during site failures, and concurrent distributed checkpointing. A software tool for the evaluation of transaction processing algorithms in an operating system kernel is proposed.

Book
01 Jan 1989
TL;DR: This book features information on a wide range of key subjects, including vector processors and multi-processors, concurrency control, hypercube systems and key applications, parallel architectures for implementing AI systems, and a comparison of the Cray X-MP-4, the Fujitsu VP-200, and the Hitachi S-810/20.
Abstract: This book features information on a wide range of key subjects, including vector processors and multi-processors, concurrency control, hypercube systems and key applications, parallel architectures for implementing AI systems, and a comparison of the Cray X-MP-4, the Fujitsu VP-200, and the Hitachi S-810/20. Also explained are such topics as the parallel programming environment and software support, applying AI techniques to program optimization for parallel computers, data flow computations for AI, VLSI array processors for signal/image processing, and optical computing and artificial neural technologies.

Journal ArticleDOI
01 Jun 1989
TL;DR: This paper examines the interplay between parallelism and transaction performance in a distributed database machine context, including response time, throughput, and speedup, over a fairly wide range of system loads.
Abstract: While several distributed (or 'shared nothing') database machines exist in the form of prototypes or commercial products, and a number of distributed concurrency control algorithms are available, the effect of parallelism on concurrency control performance has received little attention. This paper examines the interplay between parallelism and transaction performance in a distributed database machine context. Four alternative concurrency control algorithms are considered, including two-phase locking, wound-wait, basic timestamp ordering, and optimistic concurrency control. Issues addressed include how performance scales as a function of machine size and the degree to which partitioning the database for intra-transaction parallelism improves performance for the different algorithms. We examine performance from several perspectives, including response time, throughput, and speedup, and we do so over a fairly wide range of system loads. We also examine the performance impact of certain important overhead factors (e.g., communication and process initiation costs) on the four alternative concurrency control algorithms.
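
Of the four algorithms, wound-wait is the easiest to state compactly; a sketch of the rule (Python; the standard formulation, independent of this paper's experiments):

```python
# The wound-wait rule (a standard deadlock-prevention policy; sketch only):
# an older requester "wounds" (restarts) a younger lock holder, while a
# younger requester simply waits for an older holder.

def wound_wait(requester_ts, holder_ts):
    if requester_ts < holder_ts:     # requester is older (smaller timestamp)
        return "wound_holder"        # abort/restart the younger holder
    return "wait"                    # younger requester waits: no cycle possible

print(wound_wait(requester_ts=3, holder_ts=8))   # wound_holder
print(wound_wait(requester_ts=9, holder_ts=8))   # wait
```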

Journal ArticleDOI
TL;DR: A distributed algorithm for detection and resolution of resource deadlocks in object-oriented distributed systems that can be used in conjunction with concurrency control algorithms which are based on the semantic lock model is proposed and proved.
Abstract: The authors propose and prove a distributed algorithm for detection and resolution of resource deadlocks in object-oriented distributed systems. In particular, the algorithm can be used in conjunction with concurrency control algorithms which are based on the semantic lock model. The algorithm greatly reduces message traffic by properly identifying and eliminating redundant messages. It is shown that both its worst and average time complexities are O(n*e), where n is the number of nodes and e is the number of edges in the waits-for graph. After deadlock resolution, the algorithm leaves information in the system concerning dependence relations of currently running transactions. This information will preclude the wasteful retransmission of messages and reduce the delay in detecting future deadlocks.

Journal ArticleDOI
TL;DR: Algorithmic adaptability, which supports techniques for switching between classes of schedulers in distributed transaction systems, is modeled and an experimental system implemented to support experimentation in adaptability is discussed.
Abstract: Adaptability is an essential tool for managing escalating software costs and for building high-reliability, high-performance systems. Algorithmic adaptability, which supports techniques for switching between classes of schedulers in distributed transaction systems, is modeled. RAID, an experimental system implemented to support experimentation in adaptability, is discussed. Adaptability features in RAID, including algorithmic adaptability, fault tolerance, and implementation techniques for an adaptable server-based design, are modeled.

Proceedings ArticleDOI
10 Oct 1989
TL;DR: The new commit/abort protocols will be used in the RelaX project (reliable distributed applications support on Unix), which carries on work done at GMD in the PROFEMO project on distributed transaction mechanisms.
Abstract: Transactions are especially valuable in distributed systems, since they isolate the programmer from the effects of both concurrency and failures. In implementing transactions at the system level, flexibility has to be introduced into the transaction concept. Specifically, the premature release of data objects has to be addressed. To assure recoverability, resulting dependencies between transactions are stored by the system in a distributed data structure called a recovery graph. The storing of redundant information in the recovery graphs at the different sites reduces the complexity of the commit protocol, and a chase protocol used to abort transactions can be derived which excludes infinite chasing. The redundant information can be distributed almost for free because it can be piggybacked on messages. The new commit/abort protocols will be used in the RelaX project (reliable distributed applications support on Unix), which carries on work done at GMD in the PROFEMO project on distributed transaction mechanisms.

Journal ArticleDOI
TL;DR: A method is proposed that ensures that orphans created by crashes and by aborts are detected and eliminated in a timely manner and prevented from observing inconsistent states; it is applicable to any concurrency control method that preserves atomicity.
Abstract: An orphan in a distributed transaction system is an activity executing on behalf of an aborted transaction. A method is proposed for managing orphans created by crashes and by aborts that ensures that orphans are detected and eliminated in a timely manner, and also prevents them from observing inconsistent states. The method uses timestamps generated at each site. Transactions are assigned timeouts at different sites. These timeouts are related by a global invariant, and they may be adjusted by simple two-phase protocols. The principal advantage of this method is simplicity: it is easy to understand and to implement, and it can be proved correct. An 'eager' version of this method uses approximately synchronized real-time clocks to ensure that orphans are eliminated within a fixed duration, and a 'lazy' version uses logical clocks to ensure that orphans are eventually eliminated as information propagates through the system. The method is fail-safe: unsynchronized clocks and lost messages may affect performance, but they cannot produce inconsistencies or protect orphans from eventual elimination. Although the method is informally described in terms of two-phase locking, the formal argument shows it is applicable to any concurrency control method that preserves atomicity.
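
A sketch of the timeout discipline (Python; hypothetical names, omitting the two-phase timeout adjustment): each site performs work for a transaction only while its local clock is below the transaction's timeout, so an orphan, which never gets its timeout extended, is eventually refused everywhere.

```python
# Orphan elimination by timeout (hypothetical names; a sketch of the idea,
# not the paper's full protocol with its two-phase timeout adjustment).

def may_proceed(txn_timeout, local_clock):
    """A site performs work for a transaction only before its timeout."""
    return local_clock < txn_timeout

# A live coordinator periodically extends its transactions' timeouts; an
# orphan never gets an extension, so every site eventually refuses it.
print(may_proceed(txn_timeout=100, local_clock=60))    # True: still live
print(may_proceed(txn_timeout=100, local_clock=130))   # False: treated as orphan
```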

Journal ArticleDOI
TL;DR: This paper presents a database model and its associated architecture, which is based on the principles of data-driven computation, and allows the model to be mapped onto a computer architecture consisting of large numbers of independent disk units and processing elements.
Abstract: In recent years, a number of database machines consisting of large numbers of parallel processing elements have been proposed. Unfortunately, there are two main limitations in database processing that prevent a high degree of parallelism; these are the available I/O bandwidth of the underlying storage devices and the concurrency control mechanisms necessary to guarantee data integrity. The main problem with conventional approaches is the lack of a computational model capable of utilizing the potential of any significant number of processing elements and storage devices and, at the same time, preserving the integrity of the database. This paper presents a database model and its associated architecture, which is based on the principles of data-driven computation. According to this model, the database is represented as a network in which each node is conceptually an independent, asynchronous processing element, capable of communicating with other nodes by exchanging messages along the network arcs. To answer a query, one or more such messages, called tokens, are created and injected into the network. These then propagate asynchronously through the network in search of results satisfying the given query. The asynchronous nature of processing permits the model to be mapped onto a computer architecture consisting of large numbers of independent disk units and processing elements. This increases both the available I/O bandwidth and the processing potential of the machine. At the same time, new concurrency control and error recovery mechanisms are necessary to cope with the resulting parallelism.
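
A toy, sequential rendering of the token mechanism (Python; hypothetical structures standing in for what the architecture does asynchronously and in parallel):

```python
# Token propagation through a database network (hypothetical structures;
# a sequential toy of what the machine does asynchronously in parallel).

from collections import deque

def propagate(graph, values, start, predicate):
    """graph: node -> neighbor list; a token fans out from 'start' and
    reports every reachable node whose value satisfies the query."""
    results, seen = [], {start}
    tokens = deque([start])
    while tokens:
        node = tokens.popleft()
        if predicate(values[node]):
            results.append(node)
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                tokens.append(nxt)       # in the machine, these run in parallel
    return results

g = {"a": ["b", "c"], "b": ["c"], "c": []}
print(propagate(g, {"a": 1, "b": 7, "c": 9}, "a", lambda v: v > 5))  # ['b', 'c']
```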

Proceedings ArticleDOI
05 Jun 1989
TL;DR: An object/thread based paradigm is presented that links data consistency with object/thread semantics and can be used to achieve a wide range of consistency semantics from strict atomic transactions to standard process semantics.
Abstract: An object/thread based paradigm is presented that links data consistency with object/thread semantics. The paradigm can be used to achieve a wide range of consistency semantics from strict atomic transactions to standard process semantics. The paradigm supports three types of data consistency. Object programmers indicate the type of consistency desired on a per-operation basis, and the system performs automatic concurrency control and recovery management to ensure that those consistency requirements are met. This allows programmers to customize consistency and recovery on a per-application basis without having to supply complicated, custom recovery management schemes. The paradigm allows robust and nonrobust computation to operate concurrently on the same data in a well-defined manner. The operating system need support only one vehicle of computation: the thread.

Book ChapterDOI
21 Jun 1989
TL;DR: This paper presents a new scheme based on two-phase locking that minimizes the overhead associated with concurrency control without overly limiting opportunities for concurrently executing transactions and avoids the expense of setting locks at multiple levels of a granularity hierarchy.
Abstract: Recent trends in memory sizes, combined with a demand for high-performance data management facilities, have led to the emergence of database support for managing memory-resident data as a topic of interest. In this paper we address the concurrency control problem for main memory database systems. Because such systems differ significantly from traditional database systems in terms of their cost characteristics, existing solutions to the problem are inappropriate; we present a new scheme based on two-phase locking that minimizes the overhead associated with concurrency control without overly limiting opportunities for concurrently executing transactions. We accomplish this by allowing the granularity of locking to vary dynamically in response to changes in the level of inter-transaction conflicts. Unlike hierarchical locking schemes, however, we avoid the expense of setting locks at multiple levels of a granularity hierarchy. We present a simple empirical analysis, based on instruction counts, to validate our claims.
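
A hedged sketch of the dynamic-granularity idea (Python; thresholds and names are invented, and the paper's actual scheme differs in detail): lock coarsely while conflicts are rare, and switch to fine-grained locks when the conflict rate rises, rather than holding locks at multiple levels of a hierarchy.

```python
# Dynamic lock granularity (hypothetical names; a sketch of adapting
# granularity to the conflict level, not the paper's exact scheme).

class AdaptiveLockManager:
    ESCALATE_BELOW = 0.05      # conflict-rate thresholds (illustrative)
    DEESCALATE_ABOVE = 0.20

    def __init__(self):
        self.coarse = True             # start with one lock per relation
        self.requests = self.conflicts = 0

    def record(self, conflicted):
        self.requests += 1
        self.conflicts += conflicted
        rate = self.conflicts / self.requests
        if self.coarse and rate > self.DEESCALATE_ABOVE:
            self.coarse = False        # hot relation: switch to per-tuple locks
        elif not self.coarse and rate < self.ESCALATE_BELOW:
            self.coarse = True         # contention gone: cheap coarse locks again

    def lock_target(self, relation, tuple_id):
        return relation if self.coarse else (relation, tuple_id)

mgr = AdaptiveLockManager()
for _ in range(10):
    mgr.record(conflicted=True)             # sustained conflicts...
print(mgr.lock_target("accounts", 42))      # ('accounts', 42): now per-tuple
```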

Journal ArticleDOI
TL;DR: It is shown that the OSN method provides more concurrency than basic timestamp ordering and two-phase locking methods and handles successfully some logs which cannot be handled by any of the past methods.
Abstract: A method for concurrency control in distributed database management systems that increases the level of concurrent execution of transactions, called ordering by serialization numbers (OSN), is proposed. The OSN method works in the certifier model and uses time-interval techniques in conjunction with short-term locks to provide serializability and prevent deadlocks. The scheduler is distributed, and the standard transaction execution policy is assumed, that is, the read and write operations are issued continuously during transaction execution. However, the write operations are copied into the database only when the transaction commits. The amount of concurrency provided by the OSN method is demonstrated by log classification. It is shown that the OSN method provides more concurrency than basic timestamp ordering and two-phase locking methods and handles successfully some logs which cannot be handled by any of the past methods. The complexity analysis of the algorithm indicates that the method works in a reasonable amount of time.
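
The flavor of the time-interval techniques that OSN builds on can be sketched as follows (Python; a generic interval-shrinking sketch, not the OSN method's actual certification rules):

```python
# Flavor of time-interval ordering (hypothetical names; a generic sketch of
# interval shrinking, not the OSN method's exact certification rules).

class TxnInterval:
    def __init__(self, lo=0, hi=float("inf")):
        self.lo, self.hi = lo, hi          # open serialization-number interval

    def must_follow(self, other_sn):
        """A conflict forces this txn after serialization number other_sn."""
        self.lo = max(self.lo, other_sn + 1)
        return self._ok()

    def must_precede(self, other_sn):
        """A conflict forces this txn before serialization number other_sn."""
        self.hi = min(self.hi, other_sn - 1)
        return self._ok()

    def _ok(self):
        return self.lo <= self.hi          # empty interval => abort the txn

t = TxnInterval()
print(t.must_follow(10))   # True: interval now [11, inf)
print(t.must_precede(5))   # False: [11, 4] is empty, certification fails
```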

Journal ArticleDOI
Alexander Thomasian, In Kyung Ryu
TL;DR: An analytic solution method based on recursion is developed to compute performance measures of interest for static locking systems and gain insight into the behavior of static locking systems.
Abstract: According to the static locking concurrency control policy, transactions which have predeclared their locks can only be activated when they have acquired all requested locks. Lock contention results in transaction blocking, such that only a subset of the V transactions available for processing can be active. We develop an analytic solution method based on recursion to compute performance measures of interest for static locking systems. The probability of lock conflict is computed approximately by noting that the mean number of active transactions encountered by a blocked transaction is that of a system with V - 1 transactions. Transaction delay before activation is determined by the mean number of times it is checked unsuccessfully for activation (c(V)) and the mean elapsed time between these instants. It is shown that the mean number of active transactions is V/(1 + ηc(V)), where η is the ratio of the mean residual residence time of a transaction in the system to its mean residence time. It follows that the variability in transaction residence time, in addition to lock contention, has an adverse effect on performance. Based on the locking requirements of transactions, five database access models are considered: 1) a fixed number of exclusive locks, 2) a variable number of exclusive locks, 3) shared in addition to exclusive lock requests, 4) query and update transactions, and 5) nonuniform database accesses in which transactions access objects in different database domains. Numerical results obtained by the analytical solution are validated against simulation results and shown to be quite accurate. They are also used to gain insight into the behavior of static locking systems; these insights are summarized at the end of the paper.
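
Restated in clean notation (using the abstract's own definitions; the symbol names follow the reconstruction above):

```latex
\bar{v} = \frac{V}{1 + \eta\, c(V)}, \qquad
\eta = \frac{\text{mean residual residence time}}{\text{mean residence time}}
```

Here \(\bar{v}\) is the mean number of active transactions among the V available, and c(V) is the mean number of unsuccessful activation checks a blocked transaction undergoes.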

Journal ArticleDOI
TL;DR: The results show that the OS will have to implement essentially all of the specialized tactics for transaction management that are currently used by a database management system (DBMS) in order to match DBMS performance.
Abstract: Results of a previous comparison study (A. Kumar and M. Stonebraker, 1987) between a conventional transaction manager and an operating system (OS) transaction manager indicated that the OS transaction manager incurs a severe performance penalty and appears to be feasible only in special circumstances. Three approaches for enhancing the performance of an OS transaction manager are considered. The first strategy is to improve performance by reducing the cost of lock acquisition and by compressing the log. The second strategy explores the possibility of still further improvements from additional semantics to be built into an OS transaction system. The last strategy is to use a modified index structure that makes update operations less expensive to perform. The results show that the OS will have to implement essentially all of the specialized tactics for transaction management that are currently used by a database management system (DBMS) in order to match DBMS performance.

Journal ArticleDOI
TL;DR: The authors show that replication of a requested higher-priority process can prevent a distributed deadlock (in a continuous deadlock treatment), and that replication can recover (in periodical deadlock handling) a sequence of processes from an indefinite wait-die scheme.
Abstract: The necessary and sufficient condition for deadlock in a distributed system and an algorithm for detection of a distributed deadlock based on the sufficient condition are formulated. The protocol formulated checks all contiguous wait-for requests in one iteration. A cycle is detected when a query message reaches the initiator. A wait-for cycle is only the necessary condition for the distributed deadlock. A no-deadlock message is expected by the query initiator to infer a deadlock-free situation if at least one wait-for cycle is present. A no-deadlock message is issued by a dependent (query intercessor) that is not waiting-for. The absence of a no-deadlock message implies a deadlock, and processes listed in the received query messages are the processes involved in a distributed deadlock. Properties of the protocol are discussed. The authors show that replication of a requested higher-priority (or older) process can prevent a distributed deadlock (in a continuous deadlock treatment). Replication is shown to recover (in periodical deadlock handling) a sequence of processes from an indefinite wait-die scheme.