
Showing papers on "Concurrency control published in 1994"


Proceedings ArticleDOI
22 Oct 1994
TL;DR: The paper considers both human and technical considerations that designers should ponder before choosing a particular concurrency control method and reviews the work-in-progress designing and implementing a library of concurrency schemes in GROUPKIT, a groupware toolkit.
Abstract: This paper exposes the concurrency control problem in groupware when it is implemented as a distributed system. Traditional concurrency control methods cannot be applied directly to groupware because system interactions include people as well as computers. Methods, such as locking, serialization, and their degree of optimism, are shown to have quite different impacts on the interface and how operations are displayed and perceived by group members. The paper considers both human and technical considerations that designers should ponder before choosing a particular concurrency control method. It also reviews our work-in-progress designing and implementing a library of concurrency schemes in GROUPKIT, a groupware toolkit.
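A minimal sketch of the contrast the paper discusses, assuming a toy shared-text object rather than anything from GROUPKIT: a pessimistic policy makes a second author wait for a lock, while an optimistic policy applies the edit immediately and reports a conflict only if the local copy was stale.

```python
# Illustrative sketch (not GROUPKIT's API): pessimistic vs optimistic
# concurrency control for a shared text object in groupware.

class SharedText:
    def __init__(self):
        self.text = ""
        self.version = 0
        self.lock_holder = None          # pessimistic: at most one writer

    # --- pessimistic policy: acquire a lock before editing ---
    def acquire(self, user):
        if self.lock_holder is None:
            self.lock_holder = user
            return True
        return False                     # other users see the object as busy

    def locked_insert(self, user, pos, s):
        assert self.lock_holder == user, "edit without holding the lock"
        self.text = self.text[:pos] + s + self.text[pos:]
        self.version += 1
        self.lock_holder = None

    # --- optimistic policy: apply immediately, validate against the version ---
    def optimistic_insert(self, base_version, pos, s):
        if base_version != self.version:
            return False                 # conflict: caller must redo or merge
        self.text = self.text[:pos] + s + self.text[pos:]
        self.version += 1
        return True

doc = SharedText()
doc.optimistic_insert(0, 0, "hello")      # succeeds immediately
ok = doc.optimistic_insert(0, 0, "bye")   # stale base version -> rejected
print(doc.text, ok)                       # hello False
```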

453 citations


Journal ArticleDOI
01 Jan 1994
TL;DR: It is shown how generalized rate-monotonic scheduling theory can be applied in practical system development, where special attention must be given to facilitate concurrent development by geographically distributed programming teams and the reuse of existing hardware and software components.
Abstract: Real-time computing systems are used to control telecommunication systems, defense systems, avionics, and modern factories. Generalized rate-monotonic scheduling theory is a recent development that has had a large impact on the development of real-time systems and open standards. In this paper we provide an up-to-date and self-contained review of generalized rate-monotonic scheduling theory. We show how this theory can be applied in practical system development, where special attention must be given to facilitate concurrent development by geographically distributed programming teams and the reuse of existing hardware and software components.
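For readers unfamiliar with the basic theory being generalized, the sketch below shows the two standard schedulability checks of classical single-processor rate-monotonic analysis (not the generalized theory the paper reviews); the task set is made up.

```python
import math

def utilization_bound_test(tasks):
    """Sufficient (not necessary) rate-monotonic test: U <= n(2^(1/n) - 1).
    tasks is a list of (C, T) pairs: worst-case execution time and period."""
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    return u <= n * (2 ** (1.0 / n) - 1)

def response_time_test(tasks):
    """Exact test via response-time analysis; shorter period = higher priority.
    Deadlines are assumed equal to periods."""
    tasks = sorted(tasks, key=lambda ct: ct[1])
    for i, (ci, ti) in enumerate(tasks):
        r = ci
        while True:
            interference = sum(math.ceil(r / tj) * cj for cj, tj in tasks[:i])
            r_next = ci + interference
            if r_next == r:
                break
            if r_next > ti:
                return False
            r = r_next
    return True

example = [(1, 4), (2, 6), (3, 12)]                       # (C, T) pairs
print(utilization_bound_test(example), response_time_test(example))
# False True: the utilization bound is conservative, the exact test accepts the set
```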

251 citations


Journal Article
TL;DR: The paper presents the design and implementation details of Arjuna and takes a retrospective look at the system based on the application building experience of users.
Abstract: Arjuna is an object-oriented programming system implemented entirely in C++, that provides a set of tools for the construction of fault-tolerant distributed applications. Arjuna exploits features found in most object-oriented languages (such as inheritance) and only requires a limited set of system capabilities commonly found in conventional operating systems. Arjuna provides the programmer with classes that implement atomic transactions, object level recovery, concurrency control and persistence. These facilities can be overridden by the programmer as the needs of the application dictate. Distribution of an Arjuna application is handled using stub generation techniques that operate on the original C++ class headers normally used by the standard compiler. The system is portable, modular and flexible. The paper presents the design and implementation details of Arjuna and takes a retrospective look at the system based on the application building experience of users.

162 citations


Journal ArticleDOI
TL;DR: The ability to synchronize over arbitrary topologies, the introduction of an adaptive synchronization delay, the flexibility to maintain multiple synchronization groups, and the use of a modular architecture that permits the application to tailor synchronization calculations to its service requirements are presented.
Abstract: Presents an adaptive flow synchronization protocol that permits synchronized delivery of data to and from geographically distributed sites. Applications include inter-stream synchronization, synchronized delivery of information in a multisite conference, and synchronization for concurrency control in distributed computations. The contributions of this protocol in the area of flow synchronization are the ability to synchronize over arbitrary topologies, the introduction of an adaptive synchronization delay, the flexibility to maintain multiple synchronization groups, and the use of a modular architecture that permits the application to tailor synchronization calculations to its service requirements. The authors take advantage of network protocols capable of maintaining network clock synchronization in the millisecond range.
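As a rough illustration of the adaptive synchronization delay idea (an assumed simplification, not the paper's protocol), a receiver whose clock is synchronized with the sender can hold each message until its send time plus a delay that grows to cover the worst latency observed in the group:

```python
# Minimal sketch: every message is stamped with its send time; all receivers
# deliver it at send_time + sync_delay, and sync_delay adapts to the largest
# network delay recently observed. Names and numbers are illustrative.

class FlowReceiver:
    def __init__(self, initial_delay=0.05, margin=0.01):
        self.sync_delay = initial_delay   # seconds
        self.margin = margin

    def observe(self, send_time, arrival_time):
        """Adapt the synchronization delay to the worst observed latency."""
        latency = arrival_time - send_time
        self.sync_delay = max(self.sync_delay, latency + self.margin)

    def delivery_time(self, send_time):
        """All sites deliver at the same wall-clock instant."""
        return send_time + self.sync_delay

r = FlowReceiver()
r.observe(send_time=10.000, arrival_time=10.080)        # 80 ms observed
print(round(r.sync_delay, 3), round(r.delivery_time(10.000), 3))   # 0.09 10.09
```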

133 citations


Proceedings ArticleDOI
23 May 1994
TL;DR: PIOUS employs data declustering to exploit the combined file I/O and buffer cache capacities of networked computing resources, and transaction-based concurrency control to guarantee access consistency without explicit synchronization; preliminary results from a prototype PIOUS implementation are presented.
Abstract: PIOUS is a parallel file system architecture that provides cost-effective, scalable bandwidth in a network computing environment. PIOUS employs data declustering, to exploit the combined file I/O and buffer cache capacities of networked computing resources, and transaction-based concurrency control, to guarantee access consistency without explicit synchronization. This paper presents preliminary results from a prototype PIOUS implementation.
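A toy sketch of the declustering idea, assuming a simple round-robin block layout (PIOUS's actual layout, caching, and transaction machinery are not shown):

```python
# Logical file blocks are spread across several data servers so that
# independent accesses can proceed in parallel. Parameters are made up.

NUM_SERVERS = 4
BLOCK_SIZE = 4096

def block_location(offset):
    """Map a byte offset to (server index, local block number)."""
    block = offset // BLOCK_SIZE
    return block % NUM_SERVERS, block // NUM_SERVERS

for off in (0, 4096, 8192, 12288, 16384):
    print(off, block_location(off))
# blocks 0..4 land on servers 0,1,2,3,0 -> accesses fan out across servers
```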

129 citations


Proceedings ArticleDOI
22 Oct 1994
TL;DR: DUPLEX proposes a model based on splitting the document into independent parts, maintained individually and replicated by a kernel, providing a safe store and recovery mechanisms against failures or divergence with co-authors.
Abstract: DUPLEX is a distributed collaborative editor for users connected through a large-scale environment such as the Internet. Large-scale implies heterogeneity, unpredictable communication delays and failures, and inefficient implementations of techniques traditionally used for collaborative editing in local area networks. To cope with these unfavorable conditions, DUPLEX proposes a model based on splitting the document into independent parts, maintained individually and replicated by a kernel. Users act on document parts and interact with co-authors using a local environment providing a safe store and recovery mechanisms against failures or divergence with co-authors. Communication is reduced to a minimum, allowing disconnected operation. Atomicity, concurrency, and replica control are confined to a manageable small context.

100 citations


Proceedings Article
12 Sep 1994
TL;DR: The Dali system is a main memory storage manager designed to provide the persistence, availability and safety guarantees one typically expects from a disk-resident database, while at the same time providing very high performance by virtue of being tuned to support in-memory data.
Abstract: Performance needs of many database applications dictate that the entire database be stored in main memory. The Dali system is a main memory storage manager designed to provide the persistence, availability and safety guarantees one typically expects from a disk-resident database, while at the same time providing very high performance by virtue of being tuned to support in-memory data. Dali follows the philosophy of treating all data, including system data, uniformly as database files that can be memory mapped and directly accessed/updated by user processes. Direct access provides high performance; slower, but more secure, access is also provided through the use of a server process. Various features of Dali can be tailored to the needs of an application to achieve high performance - for example, concurrency control and logging can be turned off if not desired, which enables Dali to efficiently support applications that require non-persistent memory resident data to be shared by multiple processes. Both object-oriented and relational databases can be implemented on top of Dali.

96 citations


Journal ArticleDOI
TL;DR: An efficient one-phase algorithm is presented that consists of two concurrent sweeps of messages to detect generalized distributed deadlocks, and the correctness of the algorithm is proved.
Abstract: We present an efficient one-phase algorithm that consists of two concurrent sweeps of messages to detect generalized distributed deadlocks. In the outward sweep, the algorithm records a snapshot of a distributed wait-for-graph (WFG). In the inward sweep, the algorithm performs reduction of the recorded distributed WFG to check for a deadlock. The two sweeps can overlap in time at a process. We prove the correctness of the algorithm. The algorithm has a worst-case message complexity of 4e - 2n + 2l and a time complexity of 2d hops, where e is the number of edges, n is the number of nodes, l is the number of leaf nodes, and d is the diameter of the WFG. This is a notable improvement over the existing algorithms to detect generalized deadlocks.
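The reduction step of the inward sweep can be illustrated on a recorded snapshot. The sketch below assumes a p-out-of-q request model and checks a centralized copy of the WFG; the distributed two-sweep recording itself is omitted.

```python
# Reduction of a wait-for-graph snapshot under a generalized (p-out-of-q)
# request model: node i is blocked until p[i] of its outgoing requests are
# granted (p[i] == 0 means the node is active).

def reduce_wfg(edges, p):
    """edges[i] = nodes that i waits for. Returns the set of nodes that
    remain blocked after reduction, i.e. the deadlocked set (empty if none)."""
    granted = {i: 0 for i in edges}
    runnable = [i for i in edges if p[i] == 0]
    unblocked = set(runnable)
    waiters = {i: [] for i in edges}          # reverse edges: who waits on i
    for i, succs in edges.items():
        for j in succs:
            waiters[j].append(i)
    while runnable:
        j = runnable.pop()
        for i in waiters[j]:                  # j "replies" to everyone waiting on it
            if i in unblocked:
                continue
            granted[i] += 1
            if granted[i] >= p[i]:
                unblocked.add(i)
                runnable.append(i)
    return set(edges) - unblocked

# P1 and P2 wait for each other (AND requests); P3 is active.
edges = {1: [2], 2: [1], 3: []}
p = {1: 1, 2: 1, 3: 0}
print(reduce_wfg(edges, p))   # {1, 2} -> deadlock detected
```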

83 citations


Posted Content
01 Jan 1994
TL;DR: The paper lays bare the shortcomings of the original optimistic approach, presents some major improvements, and describes several techniques that especially support read transactions, with the consequence that the number of backups can be decreased substantially.
Abstract: Several years ago optimistic concurrency control gained much attention in the database community. However, two-phase locking was already well established, especially in the relational database market. Concerning traditional database systems most developers felt that pessimistic concurrency control might not be the best solution for concurrency control, but, a well-known and accepted one. With the work on new generation database systems, however, there has been a revival of optimistic concurrency control (at least a partial one). This paper will reconsider optimistic concurrency control. It will lay bare the shortcomings of the original approach and present some major improvements. Moreover, several techniques will be presented which especially support read transactions with the consequence that the number of backups can be decreased substantially. Finally, a general solution for the starvation problem is presented. The solution is perfectly consistent with the underlying optimistic approach.
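For context, here is a minimal sketch of the classical backward-validation scheme whose shortcomings the paper addresses (names and structure are illustrative; the paper's improved variants are not shown):

```python
# Optimistic concurrency control with backward validation: a transaction reads
# freely, and at commit time its read set is checked against the write sets of
# transactions that committed while it was running.

class Validator:
    def __init__(self):
        self.committed = []          # list of (commit_tn, write_set)
        self.tn = 0                  # transaction-number counter

    def start(self):
        return self.tn               # remember the counter at transaction start

    def try_commit(self, start_tn, read_set, write_set):
        for tn, ws in self.committed:
            if tn > start_tn and ws & read_set:
                return False         # conflict: restart (the "backup")
        self.tn += 1
        self.committed.append((self.tn, set(write_set)))
        return True

v = Validator()
t1 = v.start(); t2 = v.start()
print(v.try_commit(t1, read_set={"x"}, write_set={"x"}))   # True
print(v.try_commit(t2, read_set={"x"}, write_set={"y"}))   # False: x changed meanwhile
```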

68 citations


Journal ArticleDOI
TL;DR: The Unix File System (UFS) has historically offered a shared-memory consistency model, but the lack of concurrency control makes this model susceptible to read/write conflicts, i.e., unexpected read/write sharing between two different processes.
Abstract: The Unix File System (UFS) has historically offered a shared-memory consistency model. The lack of concurrency control makes this model susceptible to read/write conflicts, i.e., unexpected read/write sharing between two different processes. For example, the update of a header file by one user while another user is performing a long-running make can cause inconsistencies in the compilation results. In practice, read/write conflicts are rare for two reasons. First, the window of vulnerability is relatively small because read/write conflicts only occur when the executions of two processes overlap. Second, they are often prevented via explicit user-level coordination.

66 citations


Proceedings ArticleDOI
20 Nov 1994
TL;DR: The first local solutions with globally-optimum performance guarantees are exhibited for deadlock resolution and resource allocation problems occurring in distributed server-client architectures.
Abstract: The work is motivated by deadlock resolution and resource allocation problems occurring in distributed server-client architectures. We consider a very general setting which includes, as special cases, distributed bandwidth management in communication networks, as well as variations of classical problems in distributed computing and communication networking such as deadlock resolution and "dining philosophers". In the current paper, we exhibit the first local solutions with globally-optimum performance guarantees. An application of our method is distributed bandwidth management in communication networks. In this setting, deadlock resolution (and maximum fractional independent set) corresponds to admission control maximizing network throughput. Job scheduling (and minimum fractional coloring) corresponds to route selection that minimizes load.

Proceedings ArticleDOI
Bestavros, Braoudakis
07 Dec 1994
TL;DR: This work proposes a Speculative Concurrency Control (SCC) technique that minimizes the impact of blockages and rollbacks, and presents a number of SCC-based algorithms that differ in the level of speculation they introduce and the amount of system resources they require.
Abstract: Various concurrency control algorithms differ in the time when conflicts are detected, and in the way they are resolved. Pessimistic (PCC) protocols detect conflicts as soon as they occur and resolve them using blocking. Optimistic (OCC) protocols detect conflicts at transaction commit time and resolve them using rollbacks. For real-time databases, blockages and rollbacks are hazards that increase the likelihood of transactions missing their deadlines. We propose a Speculative Concurrency Control (SCC) technique that minimizes the impact of blockages and rollbacks. SCC relies on added system resources to speculate on potential serialization orders, ensuring that if such serialization orders materialize, the hazards of blockages and rollbacks are minimized. We present a number of SCC-based algorithms that differ in the level of speculation they introduce, and the amount of system resources (mainly memory) they require. We show the performance gains (in terms of number of satisfied timing constraints) to be expected when a representative SCC algorithm (SCC-2S) is adopted.
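A toy sketch of the two-shadow intuition (SCC-2S), with made-up names and a deliberately simplified notion of "progress"; it only illustrates why keeping a speculative shadow forked at the first conflict point reduces the cost of abandoning the optimistic execution.

```python
# Alongside the optimistic "primary" execution, a speculative shadow is
# (re)started at the first detected conflict point, so that if the conflicting
# transaction commits first, the shadow is adopted instead of restarting from
# scratch. Illustrative only; not the paper's algorithm.

class Transaction:
    def __init__(self, tid):
        self.tid = tid
        self.primary_progress = 0     # steps executed optimistically
        self.shadow_progress = None   # speculative execution, if any

    def step(self):
        self.primary_progress += 1
        if self.shadow_progress is not None:
            self.shadow_progress += 1

    def conflict_detected(self, at_step):
        # fork a shadow that re-executes from just before the conflict
        if self.shadow_progress is None:
            self.shadow_progress = at_step

    def other_committed_first(self):
        # adopt the shadow: only the work after the conflict point is lost
        self.primary_progress = self.shadow_progress
        self.shadow_progress = None

t = Transaction("T1")
for _ in range(5):
    t.step()
t.conflict_detected(at_step=2)
for _ in range(3):
    t.step()
t.other_committed_first()
print(t.primary_progress)   # 5: the shadow's progress is kept instead of restarting at 0
```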

Proceedings ArticleDOI
06 Nov 1994
TL;DR: A synthesis trajectory is presented that can synthesize the necessary hardware resources, control circuitry, and protocol conversion behaviors for implementing system interface modules.
Abstract: We describe a new high-level compiler called Integral for designing system interface modules. The input is a high-level concurrent algorithmic specification that can model complex concurrent control flow, logical and arithmetic computations, abstract communication, and low-level behavior. For abstract communication between two communicating modules that obey different I/O protocols, the necessary protocol conversion behaviors are automatically synthesized using a Petri net theoretic approach. We present a synthesis trajectory that can synthesize the necessary hardware resources, control circuitry, and protocol conversion behaviors for implementing system interface modules.

Book ChapterDOI
04 Jul 1994
TL;DR: A programming model for concurrent object-oriented applications by which concurrency issues are abstracted and separated from the code, and a solution based on lessons learned with adaptive software is proposed, introducing the concept of synchronization patterns.
Abstract: This paper presents a programming model for concurrent object-oriented applications by which concurrency issues are abstracted and separated from the code. The main goal of the model is to minimize dependency between application-specific functionality and concurrency control. In this way, software reuse can be effective and concurrent programs are more flexible, meaning that changes in the implementation of the operations don't necessarily imply changes in the synchronization scheme (and vice versa). We make an analysis of concurrent computation, review existing systems and their inherent limitations, and discuss the fundamental problems in abstracting concurrency. Then we propose a solution based on lessons learned with adaptive software, introducing the concept of synchronization patterns. The result is a programming model by which data, operations and concurrency control are minimally interdependent.
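A minimal sketch of the kind of separation being argued for, using a hypothetical pattern table and wrapper (not the paper's notation): the functional class contains no synchronization, and the policy can be changed without touching it.

```python
import threading

class Buffer:                         # application code: no locking here
    def __init__(self):
        self.items = []
    def put(self, x):
        self.items.append(x)
    def size(self):
        return len(self.items)

# The "synchronization pattern" lives apart from the functional code.
SYNC_PATTERN = {"put": "writer", "size": "reader"}

def synchronized(cls, pattern):
    """Wrap the named methods with a lock (readers and writers are simplified
    to a single mutex here); the wrapped class itself is never edited."""
    lock = threading.Lock()
    class Wrapped:
        def __init__(self, *a, **kw):
            self._inner = cls(*a, **kw)
        def __getattr__(self, name):
            attr = getattr(self._inner, name)
            if name in pattern:
                def guarded(*a, **kw):
                    with lock:
                        return attr(*a, **kw)
                return guarded
            return attr
    return Wrapped

SafeBuffer = synchronized(Buffer, SYNC_PATTERN)
b = SafeBuffer()
b.put(1)
print(b.size())   # 1 -- the Buffer code itself never changed
```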

Proceedings ArticleDOI
M. Shapiro1
01 Oct 1994
TL;DR: A comprehensive, unified protocol is proposed that is capable of supporting different languages and object models and may be tailored to support various policies in a simple manner.
Abstract: A number of actions, collectively known as binding, prepare a reference for invocation of its target: locating the target, setting up a connection, checking access rights and concurrency control state, type-checking, instantiating a proxy, etc. Existing languages or operating systems support only a single binding policy, that cannot be tailored to object-specific semantics for the management of distribution, replication, or persistence. We propose a general binding protocol covering the above needs; the protocol is simple (a single RPC and one upcall at each end) but recursive; however the recursion can be terminated at any point, trading off simplicity and performance against completeness. This comprehensive, unified protocol is capable of supporting different languages and object models, and may be tailored to support various policies in a simple manner.

Journal ArticleDOI
TL;DR: This work presents a new transaction model which allows correctness criteria more suitable for new database applications which require long-duration transactions, and combines three enhancements to the standard model: nested transactions, explicit predicates, and multiple versions.
Abstract: In the typical database system, an execution is correct if it is equivalent to some serial execution. This criterion, called serializability, is unacceptable for new database applications which require long-duration transactions. We present a new transaction model which allows correctness criteria more suitable for these applications. This model combines three enhancements to the standard model: nested transactions, explicit predicates, and multiple versions. These features yield the name of the new model, nested transactions with predicates and versions, or NT/PV. The modular nature of the NT/PV model allows a straightforward representation of simple systems. It also provides a formal framework for describing complex interactions. The most complex interactions the model allows can be captured by a protocol which exploits all of the semantics available to the NT/PV model. An example of these interactions is shown in a CASE application. The example shows how a system based on the NT/PV model is superior to both standard database techniques and unrestricted systems in both correctness and performance.

Journal ArticleDOI
TL;DR: An analytical model for predicting the performance impact of varying the scope of concurrency detection as a function of available resources, such as number of pipelines in a superscalar architecture is presented.
Abstract: Detecting independent operations is a prime objective for computers that are capable of issuing and executing multiple operations simultaneously. The number of instructions that are simultaneously examined for detecting those that are independent is the scope of concurrency detection. The authors present an analytical model for predicting the performance impact of varying the scope of concurrency detection as a function of available resources, such as number of pipelines in a superscalar architecture. The model developed can show where a performance bottleneck might be: insufficient resources to exploit discovered parallelism, insufficient instruction stream parallelism, or insufficient scope of concurrency detection. The cost associated with speculative execution is examined via a set of probability distributions that characterize the inherent parallelism in the instruction stream. These results were derived using traces from the Multiflow TRACE SCHEDULING compacting FORTRAN 77 and C compilers. The experiments provide misprediction delay estimates for 11 common application-level benchmarks under scope constraints, assuming speculative, out-of-order execution and run-time scheduling. The throughput prediction of the analytical model is shown to be close to the measured static throughput of the compiler output.

02 Jan 1994
TL;DR: A new priority-cognizant conflict resolution scheme is presented that is shown to provide considerable performance improvement over priority-insensitive algorithms, and to outperform the previously proposed priority-based conflict resolution schemes over a wide operating range.
Abstract: In addition to satisfying data consistency requirements as in conventional database systems, concurrency control in real-time database systems must also satisfy timing constraints, such as deadlines associated with transactions. Concurrency control for a real-time database system can be studied from several different perspectives. This largely depends on how the system is specified in terms of data consistency requirements and timing constraints. The objective of this research is to investigate and propose concurrency control algorithms for real-time database systems, that not only satisfy consistency requirements but also meet transaction timing constraints as much as possible, minimizing the percentage and average lateness of deadline-missing transactions. To fulfill the goals of this study, we conduct our research in three phases. First, we develop a model for a real-time database system and study the performance of various concurrency control protocol classes under a variety of operating conditions. Through this study, we understand the characteristics of each protocol and their impact on the performance, and ensure the validity of our real-time database system model by reconfirming the results from previous performance studies on concurrency control for real-time database systems. Second, we choose the optimistic technique as the basic mechanism for our study on concurrency control for real-time database systems, and investigate its behavior in a firm-deadline environment where tardy transactions are discarded. We present a new optimistic concurrency control algorithm that outperforms previous ones over a wide range of operating conditions, and thus provides a promising candidate for the basic concurrency control mechanism for real-time database systems. Finally, we address the problem of incorporating deadline information into optimistic protocols to improve their real-time performance. We present a new priority-cognizant conflict resolution scheme that is shown to provide considerable performance improvement over priority-insensitive algorithms, and to outperform the previously proposed priority-based conflict resolution schemes over a wide operating range. In each step of our research, we report the performance evaluation results by using a detailed simulation model of the real-time database system developed in the first phase. In addition to the three phases, we investigate semantic-based concurrency control techniques for real-time database systems, in which the semantics of operations on data objects are used to increase the concurrency of transactions executing on the data objects and to meet the timing constraints imposed on the transactions. We propose an object-oriented data model for real-time database systems. We present a semantic-based concurrency control mechanism which can be implemented through the use of the concurrency control protocols for real-time database systems studied earlier along with a general-purpose method for determining compatibilities of operations.
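As a rough illustration of a priority-cognizant resolution policy (an assumed simplification, not the dissertation's algorithm), a validating transaction might restart lower-priority conflicting transactions but defer to higher-priority ones:

```python
# Priority-cognizant conflict resolution at optimistic validation time:
# names, structures and the policy itself are illustrative.

def resolve_conflicts(validating, running):
    """validating has 'priority' and 'write_set'; running transactions have
    'tid', 'priority' and 'read_set'. Returns the decision and the victims."""
    conflicting = [t for t in running
                   if validating["write_set"] & t["read_set"]]
    if any(t["priority"] > validating["priority"] for t in conflicting):
        return "restart-validator", []            # urgent work survives
    return "commit", [t["tid"] for t in conflicting]   # restart these instead

T_commit = {"tid": "T1", "priority": 5, "write_set": {"x"}}
actives  = [{"tid": "T2", "priority": 3, "read_set": {"x"}},
            {"tid": "T3", "priority": 9, "read_set": {"y"}}]
print(resolve_conflicts(T_commit, actives))       # ('commit', ['T2'])
```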

Journal ArticleDOI
TL;DR: The notion of generalized multiuser editing is defined and motivated, and some of the issues, approaches, tradeoffs, principles, and requirements related to the design space of collaborative applications are described.
Abstract: The design space of collaborative applications is characterized using the notion of generalized multiuser editing. Generalized multiuser editing allows users to view interactive applications as editors of data structures. It offers several collaboration functions, which allow users to collaboratively edit application data structures. These functions include coupling, concurrency control, access control, and multiuser undo. Coupling allows the users to share editing changes, access control and concurrency control prevent them from making unauthorized and inconsistent changes, respectively, and multiuser undo allows them collaboratively to undo or redo changes. These functions must be performed flexibly to accommodate different applications, users, phases of collaboration, and bandwidths of the communication links. In this paper, we define and motivate the notion of generalized multiuser editing and describe some of the issues, approaches, tradeoffs, principles, and requirements related to the design of the...

Book ChapterDOI
21 Jun 1994
TL;DR: The INDIA project investigates database techniques for dealing with the severe data and service evolution management problems resulting from the IN concept with a concurrency management technique called Atomic Delayed Replication (ADR).
Abstract: The Intelligent Network (IN) is an architectural concept that enables telematic services (freephone, virtual private network, televoting, etc.) to be rapidly deployed and effectively used in the telephone network. The INDIA project investigates database techniques for dealing with the severe data and service evolution management problems resulting from the IN concept. A concurrency management technique called Atomic Delayed Replication (ADR) is presented that takes advantage of the special application semantics of the Service Logic Programs that implement IN. It can address replicated concurrency control for both the service data and the service logic. ADR has been implemented on top of a commercial DBMS as part of an experimental IN environment.

Posted Content
TL;DR: The paper shows how conventional locking can be improved and refined step by step to finally reach the initial goal, namely comprehensive support of synergistic cooperative work through the exploitation of application-specific semantics.
Abstract: Advanced database applications, such as CAD/CAM, CASE, large AI applications or image and voice processing, place demands on transaction management which differ substantially from those of traditional database applications. In particular, there is a need to support enriched data models (which include, for example, complex objects or version and configuration management), synergistic cooperative work, and application- or user-supported consistency. This paper deals with a subset of these problems. It develops a methodology for implementing semantics-based concurrency control on the basis of ordinary locking. More specifically, it will be shown how conventional locking can step by step be improved and refined to finally reach our initial goal, namely a comprehensive support of synergistic cooperative work by the exploitation of application-specific semantics. In addition to the conventional binding of locks to transactions we consider the binding of locks to objects (object-related locks) and subjects (subject-related locks). Object-related locks can define persistent and adaptable access restrictions on objects. This permits, among other things, the modeling of different types of version models (time versions, version graphs) as well as library (standard) objects. Subject-related locks are bound to subjects (user, application, etc.) and can be used, among other things, to supervise or direct the transfer of objects between transactions.

Proceedings ArticleDOI
14 Feb 1994
TL;DR: In modeling FDBMS transaction executions the authors propose a more realistic model than the traditional read/write model; in their model a local database exports high-level operations which are the only operations distributed global transactions can execute to access data in the shared local databases.
Abstract: A federated database management system (FDBMS) is a special type of distributed database system that enables existing local databases, in a heterogeneous environment, to maintain a high degree of autonomy. One of the key problems in this setting is the coexistence of local transactions and global transactions, where the latter access and manipulate data of multiple local databases. In modeling FDBMS transaction executions the authors propose a more realistic model than the traditional read/write model; in their model a local database exports high-level operations which are the only operations distributed global transactions can execute to access data in the shared local databases. Such restrictions are not unusual in practice as, for example, no airline or bank would ever permit foreign users to execute ad hoc queries against their databases for fear of compromising autonomy. The proposed architecture can be elegantly modeled using the multilevel nested transaction model for which a sound theoretical foundation exists to prove concurrent executions correct. A multilevel scheduler that is able to exploit the semantics of exported operations can significantly increase concurrency by ignoring pseudo conflicts. A practical scheduling mechanism for FDBMSs is described that offers the potential for greater performance and more flexibility than previous approaches based on the read/write model.
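A small illustration of why exploiting operation semantics helps: two exported operations need to be treated as conflicting only if they do not commute. The commutativity table below is a made-up bank-account example, not the paper's model.

```python
# Semantic conflict test for exported high-level operations: a multilevel
# scheduler can ignore "pseudo conflicts" between operations that commute,
# even if they touch the same records underneath.

COMMUTES = {
    ("deposit", "deposit"): True,     # order does not matter
    ("deposit", "withdraw"): False,   # withdraw's outcome may depend on order
    ("withdraw", "withdraw"): False,
    ("balance", "balance"): True,
}

def conflicts(op1, op2):
    key = tuple(sorted((op1, op2)))
    return not COMMUTES.get(key, False)

print(conflicts("deposit", "deposit"))    # False: no scheduling conflict needed
print(conflicts("deposit", "withdraw"))   # True
```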

Journal ArticleDOI
01 Jan 1994
TL;DR: This paper provides a constructive correctness criterion that leads to the design of unified protocols that guarantee atomicity and serializability in transaction management in shared databases.
Abstract: Transaction management in shared databases is generally viewed as a combination of two problems, concurrency control and recovery, which have been considered as orthogonal problems. Consequently, the correctness criteria derived for these problems are incomparable. Recently a unified theory of concurrency control and recovery has been introduced that is based on commutativity and performs transaction recovery by submitting inverse operations for operations of aborted transactions. In this paper we provide a constructive correctness criterion that leads to the design of unified protocols that guarantee atomicity and serializability.
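A minimal sketch of the recovery style this theory unifies with concurrency control, using made-up operations: each executed operation logs its inverse, and abort submits the inverses in reverse order.

```python
# Transaction recovery by inverse operations: aborting means undoing every
# executed operation with its inverse, newest first. Illustrative names only.

class Account:
    def __init__(self, balance=0):
        self.balance = balance
        self.undo_log = []            # inverses of executed operations

    def deposit(self, amount):
        self.balance += amount
        self.undo_log.append(("withdraw", amount))

    def withdraw(self, amount):
        self.balance -= amount
        self.undo_log.append(("deposit", amount))

    def abort(self):
        while self.undo_log:
            op, amount = self.undo_log.pop()   # reverse order
            if op == "deposit":
                self.balance += amount
            else:
                self.balance -= amount

a = Account(100)
a.deposit(50); a.withdraw(30)
a.abort()
print(a.balance)    # 100: the inverses restored the initial state
```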

Book ChapterDOI
05 Sep 1994
TL;DR: The paper presents a new protocol which requires significantly less communication than previously proposed protocols in systems which do not provide hardware atomic broadcasting facilities, and demonstrates the correctness of existing protocols using the theory.
Abstract: Recently, distributed shared memory (DSM) systems have received much attention because such an abstraction simplifies programming. It has been shown that many practical applications using DSMs require competing operations. We have aimed at unifying theory and implementations of protocols for sequential consistency, which provides competing operations. By adopting concepts from concurrency control, we developed theory for sequential consistency, called a sequentializability theory. This paper first presents the sequentializability theory, and then demonstrates the correctness of existing protocols using the theory. Finally, the paper presents a new protocol which requires significantly less communication than previously proposed protocols in systems which do not provide hardware atomic broadcasting facilities.

Journal ArticleDOI
TL;DR: A protocol suite called the multiflow conversation protocol (MCP) is proposed for the realization of the necessary coordination and concurrency control semantics in a collaborative application and its prototype implementation in an internetwork of workstations is described.
Abstract: The development of distributed, multimedia, collaborative applications requires the resolution of communication issues such as concurrency control and temporal and causal synchronization of traffic over related data streams. Existing transport and/or session-layer protocols do not include the desired support for multistream, multipoint communication. In this paper, we propose new communication abstractions and mechanisms that facilitate the implementation of the necessary coordination and concurrency control semantics in a collaborative application. We propose a protocol suite called the multiflow conversation protocol (MCP) for the realization of these abstractions and describe its prototype implementation in an internetwork of workstations. The paper also describes our experience with the prototype and results of a performance evaluation.

Journal ArticleDOI
TL;DR: The characterization of distributed deadlock detection in terms of the contents of local memory of distributed nodes/sites provides an insight into the properties of distributed deadlocks, expresses inherent limitations of distributed deadlock detection, and yields new correctness criteria for distributed deadlock detection algorithms.


Journal ArticleDOI
TL;DR: A mathematical model for allocating database fragments takes into account the pattern of usage of the databases, communication costs in the network, delays due to queuing of data requests, costs for maintaining consistency among the various copies of a database, and storage costs.
Abstract: This research investigates the problem of allocating database fragments across a set of computers connected by a communication network. A mathematical model is presented to aid designers in the development of distributed database systems. The model takes into account the pattern of usage of the databases, communication costs in the network, delays due to queuing of data requests, costs for maintaining consistency among the various copies of a database, and storage costs. A solution procedure based on Lagrangian relaxation is proposed to solve the model. Computational results are reported along with several useful observations. The model is applicable to organizations that are considering migration from a centralized to a distributed computing environment.
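A worked toy instance of the kind of cost trade-off such a model captures (the numbers and the simplified cost function are made up; the paper's model and its Lagrangian-relaxation procedure are much richer):

```python
# Total cost of placing copies of one fragment = storage cost per copy plus
# communication cost of routing each site's accesses to its nearest copy.
# (The full model also charges for keeping copies consistent.)

storage_cost = 2.0                       # per fragment copy per site
comm_cost = {("A", "B"): 1.0, ("A", "C"): 3.0, ("B", "C"): 1.5}
access_rate = {"A": 10, "B": 4, "C": 6}  # accesses per site to the fragment

def dist(i, j):
    return 0.0 if i == j else comm_cost[(min(i, j), max(i, j))]

def total_cost(copies):
    storage = storage_cost * len(copies)
    comm = sum(rate * min(dist(site, c) for c in copies)
               for site, rate in access_rate.items())
    return storage + comm

for placement in ({"A"}, {"A", "C"}, {"A", "B", "C"}):
    print(sorted(placement), total_cost(placement))
# more copies cut communication cost but raise storage (and, in the full
# model, consistency-maintenance) cost
```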

Proceedings ArticleDOI
26 Oct 1994
TL;DR: An efficient protocol for token arbitration, which minimizes bottlenecks and hence enhances scalability, and a practical approach to handling deadlocks, race conditions, and recovery issues, which complicate token manager design and implementation are presented.
Abstract: This paper presents the design and implementation of a distributed token manager for a cluster-optimized, distributed Unix file system. In this file system, tokens provide cache consistency and support for single-system Unix semantics. The paper describes the token types used, token arbitration protocol, deadlock-free implementation, fault-tolerance, and recovery. The key contributions of the work reported here are: (1) An efficient protocol for token arbitration, which minimizes bottlenecks and hence enhances scalability; (2) A practical approach to handling deadlocks, race conditions, and recovery issues, which complicate token manager design and implementation.
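A simplified, single-node sketch of the token idea (the paper's manager is distributed, deadlock-free, and fault-tolerant; none of that is modeled here): a write request revokes conflicting tokens so cached copies stay consistent.

```python
# A client must hold a read or write token on a file before caching it;
# granting a write token revokes every conflicting token. Names are illustrative.

class TokenManager:
    def __init__(self):
        self.holders = {}     # file -> {client: "read" | "write"}

    def acquire(self, client, file, mode):
        holders = self.holders.setdefault(file, {})
        conflicting = [c for c, m in holders.items()
                       if c != client and (mode == "write" or m == "write")]
        for c in conflicting:
            self.revoke(c, file)          # in a real system: callback, cache flush
        holders[client] = mode
        return True

    def revoke(self, client, file):
        print(f"revoke {file} from {client}")
        self.holders[file].pop(client, None)

tm = TokenManager()
tm.acquire("node1", "/etc/passwd", "read")
tm.acquire("node2", "/etc/passwd", "read")    # shared read tokens coexist
tm.acquire("node3", "/etc/passwd", "write")   # revokes node1 and node2
print(tm.holders["/etc/passwd"])              # {'node3': 'write'}
```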