
Showing papers on "Distributed algorithm" published in 1994


Book
Gerard Tel1
01 Jan 1994
TL;DR: The author concentrates on algorithms for the point-to-point message passing model, and includes algorithms for the implementation of computer communication networks, as well as the fault tolerance achievable by distributed algorithms.
Abstract: From the Publisher: The second edition of this textbook provides an up-to-date introduction both to the topic, and to the theory behind the algorithms. The clear presentation makes the book suitable for advanced undergraduate or graduate courses, whilst the coverage is sufficiently deep to make it useful for practising engineers and researchers. The author concentrates on algorithms for the point-to-point message passing model, and includes algorithms for the implementation of computer communication networks. Other key areas discussed are algorithms for the control of distributed applications and the fault tolerance achievable by distributed algorithms. The two new chapters on sense of direction and failure detectors are state-of-the-art and will provide an entry to research in these still developing topics.

894 citations


Book ChapterDOI
Debasis Mitra1
01 Jan 1994
TL;DR: An asynchronous adaptive algorithm for power control in cellular radio systems, which relaxes the demands of coordination and synchrony between the various mobiles and base stations and allows different links to update their power at different rates; unpredictable, bounded propagation delays are taken into account.
Abstract: We give an asynchronous adaptive algorithm for power control in cellular radio systems, which relaxes the demands of coordination and synchrony between the various mobiles and base stations. It relaxes the need for strict clock synchronization and also allows different links to update their power at different rates; unpredictable, bounded propagation delays are taken into account. The algorithm uses only local measurements and incorporates receiver noise. The overall objective is to minimize transmitters’ powers in a Pareto sense while giving each link a Carrier-to-Interference ratio which is not below a prefixed target. The condition for the existence and uniqueness of such a power distribution is obtained. Conditions are obtained for the asynchronous adaptation to converge to the optimal solution at a geometric rate. These conditions are surprisingly not burdensome.

320 citations
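A minimal synchronous sketch of this style of target-tracking power update: each link scales its power by (target C/I) / (measured C/I), using only local measurements. The paper's algorithm is asynchronous with bounded delays and per-link update rates; the gain matrix, noise, and target below are invented for illustration.

    import numpy as np

    # G[i][j]: path gain from transmitter j to receiver i (invented values)
    G = np.array([[1.0, 0.1, 0.1],
                  [0.2, 1.0, 0.1],
                  [0.1, 0.1, 1.0]])
    noise = np.full(3, 0.01)   # receiver noise at each link
    gamma = 2.0                # common C/I target (assumed)
    p = np.ones(3)             # arbitrary initial transmit powers

    def cir(p, i):
        # C/I at link i, computable from local measurements only
        interference = sum(G[i, j] * p[j] for j in range(len(p)) if j != i)
        return G[i, i] * p[i] / (interference + noise[i])

    for _ in range(100):
        # each link scales its power by (target / measured C/I); when the
        # target is feasible, this iteration converges at a geometric rate
        p = np.array([p[i] * gamma / cir(p, i) for i in range(len(p))])

    print(p, [round(cir(p, i), 3) for i in range(len(p))])  # C/I -> gamma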


Journal ArticleDOI
TL;DR: A distributed self-stabilizing Depth-First Search (DFS) spanning tree algorithm, whose output is a DFS spanning tree of the communication graph, kept in a distributed fashion.

142 citations


Proceedings ArticleDOI
21 Jun 1994
TL;DR: This paper presents an operational system model for explicitly incorporating the effects of host mobility and proposes a general principle for structuring efficient distributed algorithms in this model, which is used to redesign two classical algorithms for distributed mutual exclusion for the mobile environment.
Abstract: Distributed algorithms have hitherto been designed for networks with static hosts. A mobile host (MH) can connect to the network from different locations at different times. This paper presents an operational system model for explicitly incorporating the effects of host mobility and proposes a general principle for structuring efficient distributed algorithms in this model. This principle is used to redesign two classical algorithms for distributed mutual exclusion for the mobile environment. We then consider a problem introduced solely by host mobility, viz., location management for groups of MHs, and propose the concept of group location as an efficient approach to tackling the problem. Lastly, we present a framework which enables host mobility to be decoupled from the design of a distributed algorithm per se, to varying degrees.

131 citations


Journal ArticleDOI
TL;DR: This work determines the call blocking performance of channel-allocation algorithms where every channel is available for use in every cell and where decisions are made by mobiles/portables based only on local observations and suggests that an aggressive algorithm could provide a substantially reduced blocking probability.
Abstract: We determine the call blocking performance of channel-allocation algorithms where every channel is available for use in every cell and where decisions are made by mobiles/portables based only on local observations. Using a novel Erlang-B approximation method, together with simulation, we demonstrate that even the simplest algorithm, the timid, compares favorably with impractical, centrally administered fixed channel allocation. Our results suggest that an aggressive algorithm, that is, one requiring call reconfigurations, could provide a substantially reduced blocking probability. We also present some algorithms which take major steps toward achieving the excellent blocking performance of the hypothetical aggressive algorithm while retaining the stability of the timid algorithm.

118 citations
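A hypothetical sketch of the "timid" behavior described in the abstract above: a new call takes a channel only when local observation says it is clean, and is blocked otherwise, never forcing existing calls to reconfigure. The cell layout, channel count, and interference test are invented, not the paper's model.

    NUM_CHANNELS = 8
    busy = {ch: set() for ch in range(NUM_CHANNELS)}   # channel -> cells using it

    def interference_ok(cell, ch, neighbors):
        # local observation only: channel must be idle in this cell's neighborhood
        return busy[ch].isdisjoint(neighbors[cell] | {cell})

    def timid_admit(cell, neighbors):
        for ch in range(NUM_CHANNELS):
            if interference_ok(cell, ch, neighbors):
                busy[ch].add(cell)
                return ch
        return None   # call blocked; a timid algorithm never reconfigures others

    cells = {0: {1}, 1: {0, 2}, 2: {1}}   # cell -> neighboring cells (toy layout)
    print(timid_admit(0, cells), timid_admit(1, cells))   # -> 0 1

An aggressive algorithm would differ exactly at the `return None` line: instead of blocking, it would try to reconfigure already-admitted calls onto other channels.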


Proceedings ArticleDOI
24 May 1994
TL;DR: This work proposes a distributed search tree that inherits desirable properties from non-distributed trees, and shows that it does indeed combine a guarantee for good storage space utilization with high query efficiency.
Abstract: Databases are growing steadily, and distributed computer systems are more and more easily available. This provides an opportunity to satisfy increasingly tight efficiency requirements by means of distributed data structures. The design and analysis of these structures under efficiency aspects, however, has not yet been studied sufficiently. To our knowledge, only one scalable, distributed data structure has been proposed so far. It is a distributed variant of linear hashing with uncontrolled splits and, as a consequence, performs efficiently for data distributions that are close to uniform, but not necessarily for others. In addition, it does not support queries that refer to the linear order of keys, such as nearest neighbor or range queries. We propose a distributed search tree that avoids these problems, since it inherits desirable properties from non-distributed trees. Our experiments show that our structure does indeed combine a guarantee for good storage space utilization with high query efficiency. Nevertheless, we feel that further research in the area of scalable, distributed data structures is sorely needed; it should eventually lead to a body of knowledge that is comparable with the non-distributed, classical data structures field.

117 citations
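A toy sketch of the key property claimed above: a tree, unlike hashing, keeps keys ordered across servers, so a range query touches only the servers covering the interval. The "servers" are plain dicts and the split rule is illustrative; this is not the paper's structure.

    import bisect

    class DistributedTree:
        def __init__(self, capacity=4):
            self.capacity = capacity
            self.separators = []        # sorted split keys
            self.servers = [dict()]     # server i holds keys in its key range

        def _server_for(self, key):
            return bisect.bisect_right(self.separators, key)

        def insert(self, key, value):
            i = self._server_for(key)
            self.servers[i][key] = value
            if len(self.servers[i]) > self.capacity:    # controlled split
                keys = sorted(self.servers[i])
                mid = keys[len(keys) // 2]
                left = {k: v for k, v in self.servers[i].items() if k < mid}
                right = {k: v for k, v in self.servers[i].items() if k >= mid}
                self.servers[i] = left
                self.servers.insert(i + 1, right)
                self.separators.insert(i, mid)

        def range_query(self, lo, hi):
            # touches only the servers whose key ranges intersect [lo, hi]
            out = []
            for i in range(self._server_for(lo), self._server_for(hi) + 1):
                out += [(k, v) for k, v in sorted(self.servers[i].items())
                        if lo <= k <= hi]
            return out

    t = DistributedTree()
    for k in [50, 20, 70, 10, 60, 80]:
        t.insert(k, str(k))
    print(t.range_query(15, 65))   # -> [(20, '20'), (50, '50'), (60, '60')]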


Proceedings ArticleDOI
14 Aug 1994
TL;DR: It is shown how counter flushing helps to understand and improve some existing distributed algorithms for tasks such as mutual exclusion and request-response protocols.
Abstract: A useful way to design simple and robust protocols is to make them self-stabilizing. A protocol is said to be self-stabilizing if it begins to exhibit correct behavior even after starting in an arbitrary state. We describe a simple technique for self-stabilization called counter flushing. We show how counter flushing helps us to understand and improve some existing distributed algorithms for tasks such as mutual exclusion and request-response protocols. We also use counter flushing to create new self-stabilizing protocols for propagation of information with feedback and resets. The resulting protocols are simple, require few changes from the nonstabilizing equivalents, and have fast stabilization times.

79 citations
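A toy sketch of the counter-flushing idea on a unidirectional ring, assuming a single leader and invented constants: the leader numbers each wave, followers forward a stamp exactly when it is new to them, and stale values left by a corrupted initial state are flushed within one wave. This illustrates the idea only, not the paper's protocols.

    N = 5                      # ring size; process 0 is the leader
    label = [3, 7, 7, 1, 7]    # arbitrary (possibly corrupted) initial stamps

    def step(sender, stamp):
        """Deliver a stamp from `sender` to the next process on the ring."""
        receiver = (sender + 1) % N
        if receiver == 0:                         # leader
            if stamp == label[0]:                 # own wave returned: new wave
                label[0] = (label[0] + 1) % (2 * N)   # counter space > N
                return 0, label[0]
            return None                           # stale token is discarded
        if stamp != label[receiver]:              # fresh stamp: adopt, forward
            label[receiver] = stamp
            return receiver, stamp
        return None                               # duplicate: flushed

    msg = (0, label[0])
    for _ in range(40):                           # run a few waves
        msg = step(*msg) or (0, label[0])         # leader re-emits if token lost
    print(label)                                  # followers track the leader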


Journal ArticleDOI
TL;DR: This survey presents five techniques that have been widely used in the design of randomized algorithms, illustrated using 12 randomized algorithms that span a wide range of applications, including primality testing, interactive probabilistic proof systems, dining philosophers, and Byzantine agreement.
Abstract: Probabilistic, or randomized, algorithms are fast becoming as commonplace as conventional deterministic algorithms. This survey presents five techniques that have been widely used in the design of randomized algorithms. These techniques are illustrated using 12 randomized algorithms, both sequential and distributed, that span a wide range of applications, including: primality testing (a classical problem in number theory), interactive probabilistic proof systems (a new method of program testing), dining philosophers (a classical problem in distributed computing), and Byzantine agreement (reaching agreement in the presence of malicious processors). Included with each algorithm is a discussion of its correctness and its computational complexity. Several related topics of interest are also addressed, including the theory of probabilistic automata, probabilistic analysis of conventional algorithms, deterministic amplification, and derandomization of randomized algorithms. Finally, a comprehensive annotated bibliography is given.

76 citations


Proceedings ArticleDOI
14 Aug 1994
TL;DR: A method for analyzing time bounds for randomized distributed algorithms is presented, in the context of a new and general framework for describing and reasoning about randomized algorithms. The method consists of proving auxiliary statements of the form $U \xrightarrow[p]{t} U'$, meaning that whenever the algorithm begins in a state in set U, then with probability p it reaches a state in set U' within time t.
Abstract: A method of analyzing time bounds for randomized distributed algorithms is presented, in the context of a new and general framework for describing and reasoning about randomized algorithms. The method consists of proving auxiliary statements of the form $U \xrightarrow[p]{t} U'$, which means that whenever the algorithm begins in a state in set U, then with probability p it will reach a state in set U' within time t. The power of the method is illustrated by its use in proving a constant upper bound on the expected time for some process to reach its critical region, in Lehmann and Rabin's Dining Philosophers algorithm.

73 citations
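The arrow notation above is a reconstruction of the garbled extraction; set in display form, together with the standard composition rule that such time-bound claims obey (the composition rule is an addition consistent with the abstract, not a quotation from the paper):

    \[
      U \xrightarrow[p]{\;t\;} U'
      \quad\Longleftrightarrow\quad
      \Pr\bigl[\text{a state in } U' \text{ is reached within time } t
               \;\big|\; \text{start in a state in } U\bigr] \;\ge\; p ,
    \]
    \[
      U \xrightarrow[p]{\;t\;} U'
      \;\text{ and }\;
      U' \xrightarrow[p']{\;t'\;} U''
      \quad\Longrightarrow\quad
      U \xrightarrow[p\,p']{\;t+t'\;} U'' .
    \]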


Book
01 Jan 1994
Abstract: A book of readings in distributed computing systems.

73 citations


Proceedings ArticleDOI
14 Aug 1994
TL;DR: A method and apparatus for determining the diameter and wall thickness of hollow microspheres or shells, in which the terminal velocities of shells traveling in fluid-filled conduits of differing diameters are measured.
Abstract: Method and apparatus for determining diameter and wall thickness of hollow microspheres or shells wherein terminal velocities of shells traveling in fluid-filled conduits of differing diameters are measured. A wall-effect factor is determined as a ratio of the terminal velocities, and shell outside diameter may then be ascertained as a predetermined empirical function of wall-effect factor. For shells of known outside diameter, wall thickness may then be ascertained as a predetermined empirical function of terminal velocity in either conduit.

Proceedings ArticleDOI
20 Nov 1994
TL;DR: This paper introduces a refined theory of competitive analysis for distributed algorithms and provides the first algorithms that allow processes to cooperate to finish their work in fewer steps; two algorithms with different strengths are presented, each with a competitive analysis.
Abstract: We introduce a theory of competitive analysis for distributed algorithms. The first steps in this direction were made in the seminal papers of Y. Bartal et al. (1992), and of B. Awerbuch et al. (1992), in the context of data management and job scheduling. In these papers, as well as in other subsequent work, the cost of a distributed algorithm is compared to the cost of an optimal global-control algorithm. In this paper we introduce a more refined notion of competitiveness for distributed algorithms, one that reflects the performance of distributed algorithms more accurately. In particular, our theory allows one to compare the cost of a distributed on-line algorithm to the cost of an optimal distributed algorithm. We demonstrate our method by studying the cooperative collect primitive, first abstracted by M. Saks, N. Shavit, and H. Woll (1991). We provide the first algorithms that allow processes to cooperate to finish their work in fewer steps. Specifically, we present two algorithms (with different strengths), and provide a competitive analysis for each one.

Journal ArticleDOI
TL;DR: A new methodological approach to digital image processing applied to the particular case of gray-level image segmentation is introduced, based on a modified and simplified version of classifier systems.

Proceedings ArticleDOI
31 Jan 1994
TL;DR: An object-oriented, distributed, discrete event simulator using Time Warp has been developed, and initial performance measurements completed, enabling simulation runs that require 20 hours on a single workstation to be completed in only 3.5 hours.
Abstract: There has been rapid growth in demand for mobile communications over the past few years, which has led to intensive research and development of complex PCS (personal communication service) networks. Capacity planning and performance modeling are necessary to maintain a high quality of service to the mobile subscriber while minimizing the cost. Simulation is widely used in such studies; however, because these models are extremely time-consuming to execute, only small-scale PCS networks have previously been simulated. In this paper, we examine the use of the Time Warp distributed simulation mechanism in simulating large scale (1024 or more cells) PCS networks. An object-oriented, distributed, discrete event simulator using Time Warp has been developed, and initial performance measurements completed. Speedups in the range of 2.8 to 7.8 using 8 Unix workstations have been obtained, enabling simulation runs that require 20 hours on a single workstation to be completed in only 3.5 hours.


Journal ArticleDOI
TL;DR: The Prospero Resource Manager is a scalable resource allocation system that supports the allocation of processing resources in large networks and on multiprocessor systems.
Abstract: Existing techniques for allocating processors in parallel and distributed systems are not suitable for use in large distributed systems. In such systems, dedicated multiprocessors should exist as an integral component of the distributed system, and idle processors should be available to applications that need them. The Prospero Resource Manager (PRM) is a scalable resource allocation system that supports the allocation of processing resources in large networks and on multiprocessor systems. PRM employs three types of managers, the job manager, the system manager, and the node manager, to manage resources in a distributed system. Multiple independent instances of each type of manager exist, reducing bottlenecks. When making scheduling decisions, each manager utilizes the information most closely associated with the entities for which it is responsible.

Book ChapterDOI
04 Jul 1994
TL;DR: The mechanism proposed in this paper has been implemented in AL-1/D, a distributed reflective programming system, and it is shown that location control using meta-level programming provides reasonable performance for a distributed application.
Abstract: In distributed environments, location control of objects among hosts is a crucial concern. This paper proposes a new mechanism of object location control using meta-level programming which provides the following advantages to programmers. First, the description of location control can be separated from the application program by exploiting the meta-level architecture. This separation makes it easy for programmers to understand application programs and change location control policies. Second, it is possible for programmers to control object location using runtime information provided at the meta-level, such as the number of remote messages. This information enables programmers to control object location more flexibly than in traditional distributed languages. The mechanism proposed in this paper has been implemented in AL-1/D, a distributed reflective programming system. We show that our mechanism of location control using meta-level programming provides reasonable performance for a distributed application.
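A hypothetical Python sketch (not AL-1/D's actual API) of the idea in the abstract above: a metaobject intercepts message receipt, gathers runtime information such as remote-message counts, and applies a relocation policy kept separate from the application code.

    from collections import Counter

    class LocationMetaObject:
        # hypothetical metaobject: intercepts receipt, records remote traffic,
        # and consults a user-supplied relocation policy after each message
        def __init__(self, base, host, policy):
            self.base, self.host, self.policy = base, host, policy
            self.remote_msgs = Counter()          # sending host -> message count

        def receive(self, sender_host, method, *args):
            if sender_host != self.host:
                self.remote_msgs[sender_host] += 1
            result = getattr(self.base, method)(*args)   # base-level dispatch
            new_host = self.policy(self.host, self.remote_msgs)
            if new_host != self.host:
                self.host = new_host                     # "migrate" the object
            return result

    def follow_hot_sender(current, remote_msgs, threshold=10):
        # one possible policy, written apart from the application code:
        # move to the host that dominates the remote traffic
        if remote_msgs:
            host, n = remote_msgs.most_common(1)[0]
            if n >= threshold:
                return host
        return current

    obj = LocationMetaObject({"x": 1}, "h1", follow_hot_sender)
    for _ in range(10):
        obj.receive("h2", "get", "x")   # h2 keeps calling remotely...
    print(obj.host)                     # -> h2: the object followed its caller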

Book ChapterDOI
04 Jul 1994
TL;DR: An automatic data partition algorithm based on the analysis of four distinct factors is described, and is applied to a real non-trivial program on a 32-cell KSR-1, where the performance is comparable to that of hand-coded techniques.
Abstract: This paper proposes a compiler strategy for mapping FORTRAN programs onto distributed memory computers. Once the available parallelism has been identified, the minimisation of different costs will suggest different data and computation partitions. This is further complicated, as the effectiveness of the partition will depend on later compiler optimisations. For this reason, partitioning is the crux of compilation, and this paper describes an automatic data partition algorithm which is based on the analysis of four distinct factors. By determining the relative merit of each form of analysis, a data partitioning decision is made which is part of an overall compilation strategy. The strategy is applied to a real non-trivial program on a 32-cell KSR-1, where the performance is comparable to that of hand-coded techniques.

Proceedings ArticleDOI
14 Aug 1994
TL;DR: This paper proves two general theorems about the solvability of set consensus using objects other than read/write registers, and addresses the question of what kinds of tasks can be solved by N processes using (M, j)-consensus objects, for M < N.
Abstract: In the (N, k)-consensus task, each process in a group starts with a private input value, communicates with the others by applying operations to shared objects, and then halts after choosing a private output value. Each process is required to choose some process's input value, and the set of values chosen should have size at most k. This problem, first proposed by Chaudhuri in 1990, has been extensively studied using asynchronous read/write memory. In this paper, we investigate this problem in a more powerful asynchronous model in which processes may communicate through objects other than read/write memory, such as test&set variables. We prove two general theorems about the solvability of set consensus using objects other than read/write registers. The first theorem addresses the question of what kinds of shared objects are needed to solve (N, k)-consensus, and the second addresses the question of what kinds of tasks can be solved by N processes using (M, j)-consensus objects, for M < N. Our proofs exploit a number of techniques from algebraic topology.

Proceedings ArticleDOI
14 Aug 1994
TL;DR: It is shown that even in the case of worst-case transient faults (i.e., in a self-stabilizing setting), many fundamental network protocols can be achieved using only O(log* n) bits of memory per incident network edge.
Abstract: In this paper we consider the question of fault-tolerant distributed network protocols with extremely small memory requirements per processor. In particular, we show that even in the case of worst-case transient faults (i.e., in a self-stabilizing setting), many fundamental network protocols can be achieved using only O(log* n) bits of memory per incident network edge. At the heart of our construction is a self-stabilizing asynchronous network RESET protocol with the same small memory requirements.

Journal ArticleDOI
TL;DR: In this article, the authors proposed two modifications to the distributed Bellman-Ford algorithm which result in a polynomial message complexity without adversely affecting the response time of the algorithm.
Abstract: Routing algorithms based on the distributed Bellman-Ford algorithm (DBF) suffer from exponential message complexity in some scenarios. We propose two modifications to the algorithm which result in a polynomial message complexity without adversely affecting the response time of the algorithm. However, the new algorithms may not compute the shortest path. Instead, the paths computed can be worse than the shortest path by at most a constant factor.
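For reference, a sketch of the classic distributed Bellman-Ford computation being modified: each node keeps a distance estimate to the destination and re-announces it to its neighbors whenever it improves. The paper's modifications, which bound message complexity at the cost of a constant-factor-longer path, are not reproduced here; the topology below is invented.

    # node -> {neighbor: link cost}; invented topology, destination "D"
    graph = {
        "A": {"B": 1, "C": 5},
        "B": {"A": 1, "C": 2, "D": 4},
        "C": {"A": 5, "B": 2, "D": 1},
        "D": {"B": 4, "C": 1},
    }
    INF = float("inf")
    dist = {v: (0 if v == "D" else INF) for v in graph}

    queue, messages = ["D"], 0
    while queue:
        u = queue.pop()
        for v, w in graph[u].items():     # u announces dist[u] to each neighbor
            messages += 1
            if dist[u] + w < dist[v]:     # v improves its estimate, re-announces
                dist[v] = dist[u] + w
                queue.append(v)

    print(dist, messages)   # shortest distances to D; message count can blow up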


Proceedings ArticleDOI
21 Jun 1994
TL;DR: The proposed mediation concept addresses some of the heterogeneity and flexibility requirements of open service co-operation through a uniform "Service Interface Description Language" (SIDL), facilitating flexible service selection and client/server interaction.
Abstract: The increased availability of global communication infrastructures allows providers and users of various application services to cooperate in nearly unlimited geographic scopes. Problems of heterogeneity and scale have motivated specific standardisation activities for client/server "trading" or service "mediation" components. Motivated by current limitations of the emerging ODP (Open Distributed Processing) trader, this paper argues for a broader concept of general service "mediation" as more appropriate for realistic open distributed environments. The proposed mediation concept addresses some of the heterogeneity and flexibility requirements of open service co-operation through a uniform "Service Interface Description Language" (SIDL). The goal is to support distributed application development for a "Common Open Service Market" (COSM) by facilitating flexible service selection and client/server interaction. The paper also presents basic components of a generalised trading and mediation architecture as well as the status of a prototype implementation.

Proceedings ArticleDOI
28 Sep 1994
TL;DR: An implementation architecture for workflow management systems whose major characteristics are scalability (through transparent parallelism) and transparency with respect to distribution and heterogeneity.
Abstract: A specific task of distributed and parallel information systems is workflow management. In particular, workflow management systems execute business processes that run on top of distributed and parallel information systems. Parallelism is driven by performance requirements and involves data and applications that are spread across a heterogeneous, distributed computing environment. Heterogeneity and distribution of the underlying computing infrastructure should be made transparent in order to ease programming and use. We introduce an implementation architecture for workflow management systems that best meets these requirements. Scalability (through transparent parallelism) and transparency with respect to distribution and heterogeneity are the major characteristics of this architecture. A generic client/server class library in an object-oriented environment demonstrates the feasibility of the approach taken.

Journal ArticleDOI
TL;DR: A lower bound of Ω(log n/log log n) on the competitive ratio of any (deterministic or randomized) distributed algorithm for solving the mobile user problem introduced by Awerbuch and Peleg (1989, 1990) is proved on certain networks of n processors.

Proceedings ArticleDOI
01 May 1994
TL;DR: A new distributed algorithm for power control which operates by adjusting the transmitted powers from the base stations so as to maintain the C/I of every link above the desired threshold is developed.
Abstract: Due to co-channel interference, the carrier to interference ratios (C/I) of some mobiles in a wireless network may drop below a desired quality threshold γ, either upon admission of a new mobile or if channel conditions vary. By using dynamic, local measurement of the power gains between base stations and mobiles, we develop a new distributed algorithm for power control which operates by adjusting the transmitted powers from the base stations so as to maintain the C/I of every link above the desired threshold.

Journal ArticleDOI
TL;DR: The authors efficiently transform bidirectional algorithms to run on unidirectional networks, and in particular solve other problems such as the broadcast and echo in a way that is more efficient than direct transformation.
Abstract: This paper addresses the question of distributively computing over a strongly connected unidirectional data communication network. In unidirectional networks the existence of a communication link from one node to another does not imply the existence of a link in the opposite direction. The strong connectivity means that from every node there is a directed path to any other node. The authors assume an arbitrary topology network in which the strong connectivity is the only restriction. Four models are considered, synchronous and asynchronous, and for each, node space availability which grows as either $O(1)$ bits or $O(\log n)$ bits per incident link, where $n$ is the total number of nodes in the network. First, algorithms for two basic problems in distributed computing in data communication networks, traversal and election, are provided. Each of these basic protocols produces two directed spanning trees rooted at a distinguished node in the network, one called the in-tree, leading to the root, and the other, the out-tree, leading from the root. Given these trees, the authors efficiently transform bidirectional algorithms to run on unidirectional networks, and in particular solve other problems such as the broadcast and echo [E. J. Chang, Decentralized Algorithms in Distributed Systems, Ph.D. thesis, University of Toronto, October 1979] in a way that is more efficient ($O(n^2)$ messages) than direct transformation (which yields an $O(nm)$-message algorithm). The communication cost of the traversal and election algorithms is $O(nm + n^2 \log n)$ bits ($O(nm)$ messages and time), where $m$ is the total number of links in the network. The traversal algorithms for unidirectional networks of finite automata achieve the same cost ($O(nm + n^2 \log n)$ bits) in the asynchronous case, while in the synchronous case the communication cost of the algorithm is $O(mn)$ bits.

Journal ArticleDOI
TL;DR: This paper presents a very general information structure (and the associated generic algorithm) for token- and tree-based mutual exclusion algorithms.
Abstract: In a distributed context, mutual exclusion algorithms can be divided into two families according to their underlying algorithmic principles: those that are permission-based and those that are token-based. Within the latter family, many algorithms use a rooted tree structure to move the requests and the unique token. This paper presents a very general information structure (and the associated generic algorithm) for token- and tree-based mutual exclusion algorithms. This general structure not only covers, as particular cases, several known algorithms, but also allows for the design of new ones that are well suited to various topology requirements.
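A sketch in the style of Raymond's algorithm, one member of the token- and tree-based family the paper generalizes: every node keeps a pointer toward the current token holder, requests travel along these pointers, and edges are reversed as the token moves. Request queues and concurrency are omitted for brevity, and the tree below is invented.

    parent = {"B": "A", "C": "A", "D": "B"}   # edges point toward token holder A
    token_holder = "A"

    def request_token(node):
        """Forward the request toward the holder, then route the token back,
        flipping parent pointers so they again point at the (new) holder."""
        global token_holder
        path = [node]
        while path[-1] != token_holder:       # walk up the tree to the holder
            path.append(parent[path[-1]])
        for child, holder in zip(reversed(path[:-1]), reversed(path)):
            parent[holder] = child            # token moves holder -> child
            token_holder = child
        parent.pop(node, None)                # new holder has no parent

    request_token("D")                        # D enters its critical section
    print(token_holder, parent)               # D holds the token; edges reversed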

Proceedings ArticleDOI
21 Jun 1994
TL;DR: In this paper, the authors propose a general approach to trace checking based on partial order theory for debugging distributed computations, and more generally when testing protocols or distributed applications, where the expected behavior (or suspected errors) by a global property is described by a predicate on process variables, or the set of admissible orderings on observable events.
Abstract: The problem of checking the correctness of distributed computations arises when debugging distributed algorithms, and more generally when testing protocols or distributed applications. For that purpose, one describes the expected behavior (or suspected errors) by a global property: for example, a predicate on process variables, or the set of admissible orderings on observable events. The problem is to check whether this property is satisfied or not during the execution. A relevant model for this study is the partial order of message causality and the associated state graph, called "lattice of consistent cuts". In this paper, we propose a general approach to trace checking, based on partial order theory.
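A small sketch of the underlying model: events carry vector clocks, and a cut is consistent exactly when no included event happens-before an excluded one; a global predicate would then be evaluated only on consistent cuts. The events here are invented for illustration.

    def consistent(cut):
        """cut[i] = vector clock of the last event included from process i.
        The cut is consistent iff cut[i][i] >= cut[j][i] for all i, j, i.e.
        every message received inside the cut was also sent inside it."""
        n = len(cut)
        return all(cut[i][i] >= cut[j][i] for i in range(n) for j in range(n))

    # Two processes: P0's event e01 sends a message to P1 (received at e11).
    e00, e01 = [1, 0], [2, 0]          # events on P0
    e10, e11 = [0, 1], [2, 2]          # e11 has received P0's message

    print(consistent([e01, e11]))      # True: send and receive both included
    print(consistent([e00, e11]))      # False: receive included, send excluded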

Journal ArticleDOI
TL;DR: This paper presents mutual exclusion algorithms which will be self-stabilizing while only requiring each machine in the network to have two states, and introduces the concept of a randomized central demon.
Abstract: A self-stabilizing system is a network of processors, which, when started from an arbitrary (and possibly illegal) initial state, always returns to a legal state in a finite number of steps. This implies that the system can automatically deal with infrequent errors. One issue in designing self-stabilizing algorithms is the number of states required by each machine. This paper presents mutual exclusion algorithms which are self-stabilizing while only requiring each machine in the network to have two states. The concept of a randomized central demon is also introduced in this paper. The first algorithm is a starting point where no randomization is needed (the randomized central demon is not necessary). The other two algorithms require randomization. The second algorithm builds on the first and reduces the number of network connections required. Finally, the number of necessary connections is again reduced, yielding the final two-state, probabilistic algorithm for an asynchronous, unidirectional ring of processes.
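For comparison, a sketch of Herman's two-state randomized token ring, a well-known algorithm in the same spirit as the paper's final two-state probabilistic protocol for a unidirectional ring (this is Herman's algorithm, not the paper's own construction): on an odd-sized ring, a "token" sits wherever a process equals its left neighbor, and from any initial state the ring converges with probability 1 to a single circulating token.

    import random

    def step(x):
        # synchronous round: a process holding a token (equal to its left
        # neighbor) flips a fair coin; otherwise it copies its left neighbor
        return [random.randint(0, 1) if x[i] == x[i - 1] else x[i - 1]
                for i in range(len(x))]

    def tokens(x):
        return sum(x[i] == x[i - 1] for i in range(len(x)))

    x = [random.randint(0, 1) for _ in range(7)]   # arbitrary initial state
    while tokens(x) > 1:                           # stabilize to one token
        x = step(x)
    print(x, tokens(x))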