Showing papers on "Distributed algorithm published in 2002"


Journal ArticleDOI
07 Aug 2002
TL;DR: In this paper, the authors describe decentralized control laws for the coordination of multiple vehicles performing spatially distributed tasks, which are based on a gradient descent scheme applied to a class of decentralized utility functions that encode optimal coverage and sensing policies.
Abstract: This paper describes decentralized control laws for the coordination of multiple vehicles performing spatially distributed tasks. The control laws are based on a gradient descent scheme applied to a class of decentralized utility functions that encode optimal coverage and sensing policies. These utility functions are studied in geographical optimization problems and they arise naturally in vector quantization and in sensor allocation tasks. The approach exploits the computational geometry of spatial structures such as Voronoi diagrams.

2,445 citations
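
To make the gradient-descent-on-Voronoi idea concrete, here is a minimal centralized Python sketch, not the paper's control law: each vehicle moves a step toward the centroid of its Voronoi cell, which is the Lloyd-style descent direction for a quadratic coverage cost under a uniform density. The unit-square workspace, grid resolution, step size, and number of vehicles are illustrative assumptions.

```python
# Lloyd-style coverage sketch: vehicles step toward the centroids of their
# Voronoi cells, approximated on a sampling grid of the workspace.
import numpy as np

def coverage_step(positions, grid, step=0.5):
    """One synchronous gradient-style step toward Voronoi-cell centroids."""
    # Assign every sample point of the workspace to its nearest vehicle.
    d = np.linalg.norm(grid[:, None, :] - positions[None, :, :], axis=2)
    owner = d.argmin(axis=1)
    new_positions = positions.copy()
    for i in range(len(positions)):
        cell = grid[owner == i]
        if len(cell) > 0:
            centroid = cell.mean(axis=0)
            new_positions[i] += step * (centroid - positions[i])
    return new_positions

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xs = np.linspace(0.0, 1.0, 50)
    grid = np.array([(x, y) for x in xs for y in xs])  # discretized unit square
    pos = rng.random((6, 2))                           # six vehicles
    for _ in range(30):
        pos = coverage_step(pos, grid)
    print(np.round(pos, 3))  # positions spread out to cover the square
```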


01 Jan 2002
TL;DR: This work argues that a linearly ordered structure of time is not (always) adequate for distributed systems and proposes a generalized non-standard model of time which consists of vectors of clocks; these clock vectors are partially ordered and form a lattice.
Abstract: A distributed system can be characterized by the fact that the global state is distributed and that a common time base does not exist. However, the notion of time is an important concept in the everyday life of our decentralized "real world" and helps to solve problems like getting a consistent population census or determining the potential causality between events. We argue that a linearly ordered structure of time is not (always) adequate for distributed systems and propose a generalized non-standard model of time which consists of vectors of clocks. These clock vectors are partially ordered and form a lattice. By using timestamps and a simple clock update mechanism, the structure of causality is represented in an isomorphic way. The new model of time has a close analogy to Minkowski's relativistic spacetime and leads, among other things, to an interesting characterization of the global state problem. Finally, we present a new algorithm to compute a consistent global snapshot of a distributed system where messages may be received out of order.

1,450 citations
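
A minimal sketch of the clock-vector mechanism the abstract describes: each process keeps one counter per process, advances its own counter on local events, attaches the vector to outgoing messages, and merges incoming timestamps with a component-wise maximum; the resulting vectors are only partially ordered. The process identifiers and the comparison helper below are illustrative, not the paper's notation.

```python
# Minimal vector-clock sketch of the partially ordered clock vectors.
class VectorClock:
    def __init__(self, n, pid):
        self.v = [0] * n      # one counter per process
        self.pid = pid

    def tick(self):
        """Local event: advance this process's own component."""
        self.v[self.pid] += 1

    def send(self):
        """Timestamp attached to an outgoing message."""
        self.tick()
        return list(self.v)

    def receive(self, ts):
        """Merge rule: component-wise maximum, then count the receive event."""
        self.v = [max(a, b) for a, b in zip(self.v, ts)]
        self.tick()

def happened_before(a, b):
    """a -> b iff a <= b component-wise and a != b (the lattice's partial order)."""
    return all(x <= y for x, y in zip(a, b)) and a != b

# p0 sends to p1; the timestamps reflect the potential causality.
p0, p1 = VectorClock(2, 0), VectorClock(2, 1)
p1.tick()                      # an independent event on p1
m = p0.send()
p1.receive(m)
print(p0.v, p1.v, happened_before(p0.v, p1.v))   # [1, 0] [1, 2] True
```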


Journal ArticleDOI
TL;DR: In this article, an abstract model and a comprehensive taxonomy for describing resource management architectures are developed; the taxonomy is used to identify approaches followed in the implementation of existing resource management systems for very large-scale network computing systems known as Grids.
Abstract: The resource management system is the central component of distributed network computing systems. There have been many projects focused on network computing that have designed and implemented resource management systems with a variety of architectures and services. In this paper, an abstract model and a comprehensive taxonomy for describing resource management architectures are developed. The taxonomy is used to identify approaches followed in the implementation of existing resource management systems for very large-scale network computing systems known as Grids. The taxonomy and the survey results are used to identify architectural approaches and issues that have not been fully explored in the research. Copyright © 2001 John Wiley & Sons, Ltd.

993 citations


Proceedings Article
10 Jun 2002
TL;DR: A distributed algorithm for determining the positions of nodes in an ad-hoc, wireless sensor network is explained in detail; simulations show that, given an average connectivity of at least 12 nodes and 10% anchors, the algorithm performs well even with up to 40% errors in distance measurements.
Abstract: A distributed algorithm for determining the positions of nodes in an ad-hoc, wireless sensor network is explained in detail. Details regarding the implementation of such an algorithm are also discussed. Experimentation is performed on networks containing 400 nodes randomly placed within a square area, and resulting error magnitudes are represented as percentages of each node’s radio range. In scenarios with 5% errors in distance measurements, 5% anchor node population (nodes with known locations), and average connectivity levels between neighbors of 7 nodes, the algorithm is shown to have errors less than 33% on average. It is also shown that, given an average connectivity of at least 12 nodes and 10% anchors, the algorithm performs well with up to 40% errors in distance measurements.

967 citations
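
The refinement phase of such range-based localization schemes can be sketched as follows; this is an illustration of the general idea (gradient descent on squared range residuals to anchor nodes), not the paper's algorithm, and the anchor layout, noise level, and step size are assumptions.

```python
# Position refinement from noisy ranges to anchors via gradient descent.
import numpy as np

def refine(anchors, ranges, guess, iters=200, lr=0.05):
    p = np.array(guess, dtype=float)
    for _ in range(iters):
        diff = p - anchors                       # vectors from anchors to p
        dist = np.linalg.norm(diff, axis=1)
        residual = dist - ranges                 # range errors
        grad = (residual / np.maximum(dist, 1e-9)) @ diff   # gradient of 0.5 * sum(residual**2)
        p -= lr * grad
    return p

rng = np.random.default_rng(1)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) * (1 + 0.05 * rng.standard_normal(4))
print(np.round(refine(anchors, ranges, guess=[5.0, 5.0]), 2))  # close to (3, 7)
```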


Proceedings ArticleDOI
07 Nov 2002
TL;DR: This work presents a distributed algorithm that outperforms the existing algorithms for minimum CDS and establishes an Ω(n log n) lower bound on the message complexity of any distributed algorithm for nontrivial CDS, which makes the new algorithm message-optimal.
Abstract: The connected dominating set (CDS) has been proposed as the virtual backbone or spine of a wireless ad hoc network. Three distributed approximation algorithms have been proposed in the literature for minimum CDS. We first reinvestigate their performances. None of these algorithms has a constant approximation factor; thus they cannot guarantee to generate a CDS of small size. Their message complexities can be as high as O(n²), and their time complexities may also be as large as O(n²) and O(n³). We then present our own distributed algorithm that outperforms the existing algorithms. This algorithm has an approximation factor of at most 8, O(n) time complexity and O(n log n) message complexity. By establishing an Ω(n log n) lower bound on the message complexity of any distributed algorithm for nontrivial CDS, our algorithm is thus message-optimal.

834 citations
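
For readers unfamiliar with connected dominating sets, the following centralized greedy sketch shows what a CDS is and how one can be grown; it is emphatically not the paper's distributed, factor-8, message-optimal algorithm, and the example graph and tie-breaking are arbitrary.

```python
# Centralized greedy sketch of connected-dominating-set construction,
# included only to make the CDS notion concrete.
def greedy_cds(adj):
    """Grow a connected dominating set on a connected graph given as {node: set(neighbors)}."""
    start = max(adj, key=lambda v: len(adj[v]))          # highest-degree seed
    cds, covered = {start}, {start} | adj[start]
    frontier = set(adj[start])                           # candidates adjacent to the CDS
    while len(covered) < len(adj):
        # Add the frontier node that dominates the most still-uncovered nodes.
        best = max(frontier, key=lambda v: len(adj[v] - covered))
        cds.add(best)
        covered |= adj[best] | {best}
        frontier |= adj[best]
        frontier -= cds
    return cds

graph = {                       # a small unit-disk-like topology
    0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3},
    3: {1, 2, 4}, 4: {3, 5}, 5: {4, 6}, 6: {5},
}
print(greedy_cds(graph))        # e.g. {1, 3, 4, 5}: connected and dominating
```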


Journal ArticleDOI
TL;DR: It is proved that fixed-size window control can achieve fair bandwidth sharing according to any of these criteria, provided scheduling at each link is performed in an appropriate manner.
Abstract: This paper concerns the design of distributed algorithms for sharing network bandwidth resources among contending flows. The classical fairness notion is the so-called max-min fairness. The alternative proportional fairness criterion has recently been introduced by F. Kelly (see Eur. Trans. Telecommun., vol.8, p.33-7, 1997); we introduce a third criterion, which is naturally interpreted in terms of the delays experienced by ongoing transfers. We prove that fixed-size window control can achieve fair bandwidth sharing according to any of these criteria, provided scheduling at each link is performed in an appropriate manner. We then consider a distributed random scheme where each traffic source varies its sending rate randomly, based on binary feedback information from the network. We show how to select the source behavior so as to achieve an equilibrium distribution concentrated around the considered fair rate allocations. This stochastic analysis is then used to assess the asymptotic behavior of deterministic rate adaption procedures.

591 citations


Journal ArticleDOI
TL;DR: This thesis proposes a distributed computational economy as an effective metaphor for the management of resources and application scheduling, and proposes an architectural framework that supports resource trading and quality-of-service-based scheduling, enabling the regulation of supply and demand for resources.
Abstract: Computational Grids, emerging as an infrastructure for next generation computing, enable the sharing, selection, and aggregation of geographically distributed resources for solving large-scale problems in science, engineering, and commerce. The resources in the Grid are heterogeneous and geographically distributed, with varying availability, a variety of usage and cost policies for diverse users at different times, and priorities and goals that vary with time. The management of resources and application scheduling in such a large and distributed environment is a complex task. This thesis proposes a distributed computational economy as an effective metaphor for the management of resources and application scheduling. It proposes an architectural framework that supports resource trading and quality-of-service-based scheduling. It enables the regulation of supply and demand for resources, provides an incentive for resource owners to participate in the Grid, and motivates users to trade off between the deadline, budget, and the required level of quality of service. The thesis demonstrates the capability of economic-based systems for peer-to-peer distributed computing by developing scheduling strategies and algorithms driven by users' quality-of-service requirements. It demonstrates their effectiveness by performing scheduling experiments on the World-Wide Grid for solving parameter sweep applications.

579 citations


Journal ArticleDOI
TL;DR: This work defines the simple path vector protocol (SPVP), a distributed algorithm for solving the stable paths problem that is intended to capture the dynamic behavior of BGP at an abstract level, and shows that SPVP will converge to the unique solution of an instance of the stable paths problem if no dispute wheel exists.
Abstract: Dynamic routing protocols such as RIP and OSPF essentially implement distributed algorithms for solving the shortest paths problem. The border gateway protocol (BGP) is currently the only interdomain routing protocol deployed in the Internet. BGP does not solve a shortest paths problem since any interdomain protocol is required to allow policy-based metrics to override distance-based metrics and enable autonomous systems to independently define their routing policies with little or no global coordination. It is then natural to ask if BGP can be viewed as a distributed algorithm for solving some fundamental problem. We introduce the stable paths problem and show that BGP can be viewed as a distributed algorithm for solving this problem. Unlike a shortest path tree, such a solution does not represent a global optimum, but rather an equilibrium point in which each node is assigned its local optimum. We study the stable paths problem using a derived structure called a dispute wheel, representing conflicting routing policies at various nodes. We show that if no dispute wheel can be constructed, then there exists a unique solution for the stable paths problem. We define the simple path vector protocol (SPVP), a distributed algorithm for solving the stable paths problem. SPVP is intended to capture the dynamic behavior of BGP at an abstract level. If SPVP converges, then the resulting state corresponds to a stable paths solution. If there is no solution, then SPVP always diverges. In fact, SPVP can even diverge when a solution exists. We show that SPVP will converge to the unique solution of an instance of the stable paths problem if no dispute wheel exists.

536 citations
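
A toy round-robin simulation of the path-vector dynamics behind SPVP can make the stable paths problem concrete. The instance below (topology, permitted-path rankings, activation order) is an illustrative assumption with no dispute wheel, so the iteration settles on the unique stable assignment; it is a sketch of the idea, not the protocol specification in the paper.

```python
# Toy round-robin simulation of path-vector dynamics on a stable-paths instance.
def spvp(permitted, rounds=20):
    """permitted[u]: ranked list of paths (u, ..., 0), best first. Destination is 0."""
    best = {u: () for u in permitted}      # () = no route
    best[0] = (0,)
    for _ in range(rounds):
        changed = False
        for u in permitted:
            # A path (u, w, ...) is usable only if w currently selects (w, ...).
            usable = [p for p in permitted[u] if best[p[1]] == p[1:]]
            choice = usable[0] if usable else ()
            if choice != best[u]:
                best[u], changed = choice, True
        if not changed:
            return best                    # reached a stable assignment
    return None                            # did not converge within the budget

# An instance with no dispute wheel: nodes 1 and 3 prefer a longer path through
# a neighbor but fall back to their direct path to node 0.
permitted = {
    1: [(1, 2, 0), (1, 0)],
    2: [(2, 0)],
    3: [(3, 1, 0), (3, 0)],
}
print(spvp(permitted))   # {1: (1, 2, 0), 2: (2, 0), 3: (3, 0), 0: (0,)}
```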


Proceedings ArticleDOI
23 Sep 2002
TL;DR: A new heuristic, Embedded Wireless Multicast Advantage, is described; it compares well with other proposals and can be implemented in a distributed fashion. A formal proof that the problem of power-optimal broadcast is NP-complete is also provided.
Abstract: In all-wireless networks a crucial problem is to minimize energy consumption, as in most cases the nodes are battery-operated. We focus on the problem of power-optimal broadcast, for which it is well known that the broadcast nature of the radio transmission can be exploited to optimize energy consumption. Several authors have conjectured that the problem of power-optimal broadcast is NP-complete. We provide here a formal proof, both for the general case and for the geometric one; in the former case, the network topology is represented by a generic graph with arbitrary weights, whereas in the latter a Euclidean distance is considered. We then describe a new heuristic, Embedded Wireless Multicast Advantage. We show that it compares well with other proposals and we explain how it can be distributed.

530 citations
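
The paper's Embedded Wireless Multicast Advantage heuristic is not reproduced here, but the "wireless multicast advantage" it builds on can be illustrated with the classic Broadcast Incremental Power greedy: a node already transmitting at some power reaches an additional neighbor for only the incremental cost. The node layout and path-loss exponent below are assumptions.

```python
# Broadcast Incremental Power (BIP) greedy sketch for power-aware broadcast.
import numpy as np

def bip(points, source=0, alpha=2.0):
    n = len(points)
    cost = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2) ** alpha
    power = np.zeros(n)               # current transmit power of each node
    reached = {source}
    while len(reached) < n:
        best = None
        for u in reached:
            for v in set(range(n)) - reached:
                inc = max(cost[u, v] - power[u], 0.0)   # incremental power needed at u
                if best is None or inc < best[0]:
                    best = (inc, u, v)
        inc, u, v = best
        power[u] = max(power[u], cost[u, v])            # u now also covers v
        reached.add(v)
    return power.sum()

pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.2, 0.1], [0.0, 2.0]])
print(round(bip(pts), 3))   # total broadcast power of the greedy tree
```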


Journal ArticleDOI
TL;DR: This paper aims to present the state‐of‐the‐art of Grid computing and attempts to survey the major international efforts in developing this emerging technology.
Abstract: The last decade has seen a substantial increase in commodity computer and network performance, mainly as a result of faster hardware and more sophisticated software. Nevertheless, there are still problems, in the fields of science, engineering, and business, which cannot be effectively dealt with using the current generation of supercomputers. In fact, due to their size and complexity, these problems are often very numerically and/or data intensive and consequently require a variety of heterogeneous resources that are not available on a single machine. A number of teams have conducted experimental studies on the cooperative use of geographically distributed resources unified to act as a single powerful computer. This new approach is known by several names, such as metacomputing, scalable computing, global computing, Internet computing, and more recently peer-to-peer or Grid computing. The early efforts in Grid computing started as a project to link supercomputing sites, but have now grown far beyond their original intent. In fact, many applications can benefit from the Grid infrastructure, including collaborative engineering, data exploration, high-throughput computing, and of course distributed supercomputing. Moreover, due to the rapid growth of the Internet and Web, there has been a rising interest in Web-based distributed computing, and many projects have been started and aim to exploit the Web as an infrastructure for running coarse-grained distributed and parallel applications. In this context, the Web has the capability to be a platform for parallel and collaborative work as well as a key technology to create a pervasive and ubiquitous Grid-based infrastructure. This paper aims to present the state-of-the-art of Grid computing and attempts to survey the major international efforts in developing this emerging technology.

513 citations


Proceedings ArticleDOI
07 Aug 2002
TL;DR: Efficient distributed algorithms are given to optimally solve the best-coverage problem raised in Meguerdichian et al., and it is shown that the search space of the best-coverage problem can be confined to the relative neighborhood graph, which can be constructed locally.
Abstract: Sensor networks pose a number of challenging conceptual and optimization problems such as location, deployment, and tracking. One of the fundamental problems in sensor networks is the calculation of coverage. In Meguerdichian et al. (2001), it is assumed that the sensor has uniform sensing ability. In this paper, we give efficient distributed algorithms to optimally solve the best-coverage problem raised in Meguerdichian et al. Here, we consider a sensing model in which the sensing ability diminishes as the distance increases. As energy conservation is a major concern in wireless (or sensor) networks, we also consider how to find an optimum best-coverage path with the least energy consumption, and how to find an optimum best-coverage path that travels a small distance. In addition, we justify the correctness of the method proposed in Meguerdichian et al., which uses the Delaunay triangulation to solve the best-coverage problem. Moreover, we show that the search space of the best-coverage problem can be confined to the relative neighborhood graph, which can be constructed locally.
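
The best-coverage path minimizes, over all paths between two points, the worst sensing gap encountered. Assuming the supporting graph (for example the relative neighborhood or Delaunay graph, with each edge weighted by its largest distance to the nearest sensor) has already been built, the search itself is a bottleneck shortest-path computation, sketched below; this is a generic minimax-path routine, not the paper's distributed algorithm.

```python
# Bottleneck (minimax) path search: minimize the maximum edge weight on a path.
import heapq

def minimax_path_cost(adj, s, t):
    """adj[u] = [(v, w), ...]; returns the smallest possible maximum edge weight on an s-t path."""
    best = {s: 0.0}
    heap = [(0.0, s)]
    while heap:
        bottleneck, u = heapq.heappop(heap)
        if u == t:
            return bottleneck
        if bottleneck > best.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            cand = max(bottleneck, w)
            if cand < best.get(v, float("inf")):
                best[v] = cand
                heapq.heappush(heap, (cand, v))
    return float("inf")

adj = {
    "s": [("a", 3.0), ("b", 1.0)],
    "a": [("t", 1.0), ("s", 3.0)],
    "b": [("t", 2.0), ("s", 1.0)],
    "t": [("a", 1.0), ("b", 2.0)],
}
print(minimax_path_cost(adj, "s", "t"))   # 2.0 via s-b-t (worst gap 2.0 beats 3.0 via a)
```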

Proceedings ArticleDOI
07 Jan 2002
TL;DR: This work presents a distributed algorithm that outperforms the existing algorithms for minimum CDS and establishes an Ω(n log n) lower bound on the message complexity of any distributed algorithm for nontrivial CDS, which makes the new algorithm message-optimal.
Abstract: The connected dominating set (CDS) has been proposed as a virtual backbone or spine of wireless ad hoc networks. Three distributed approximation algorithms have been proposed in the literature for minimum CDS. We first reinvestigate their performances. None of these algorithms has a constant approximation factor; thus they cannot guarantee to generate a CDS of small size. Their message complexities can be as high as O(n²), and their time complexities may also be as large as O(n²) and O(n³). We then present our own distributed algorithm that outperforms the existing algorithms. This algorithm has an approximation factor of at most 8, O(n) time complexity and O(n log n) message complexity. By establishing an Ω(n log n) lower bound on the message complexity of any distributed algorithm for nontrivial CDS, our algorithm is thus message-optimal.

Proceedings ArticleDOI
10 Aug 2002
TL;DR: This work presents a new distributed algorithm that can solve the nearest-neighbor problem for these networks and describes its solution in the context of Tapestry, an overlay network infrastructure that employs techniques proposed by Plaxton, Rajaraman, and Richa.
Abstract: Modern networking applications replicate data and services widely, leading to a need for location-independent routing -- the ability to route queries directly to objects using names independent of the objects' physical locations. Two important properties of a routing infrastructure are routing locality and rapid adaptation to arriving and departing nodes. We show how these two properties can be efficiently achieved for certain network topologies. To do this, we present a new distributed algorithm that can solve the nearest-neighbor problem for these networks. We describe our solution in the context of Tapestry, an overlay network infrastructure that employs techniques proposed by Plaxton, Rajaraman, and Richa [14].

Journal ArticleDOI
TL;DR: This approach is the first of its kind to solve the on-line cooperative observation problem and implement it on a physical robot team; the authors also propose the CMOMMT problem as an excellent domain for studying multi-robot learning in inherently cooperative tasks.
Abstract: An important issue that arises in the automation of many security, surveillance, and reconnaissance tasks is that of observing the movements of targets navigating in a bounded area of interest. A key research issue in these problems is that of sensor placement—determining where sensors should be located to maintain the targets in view. In complex applications involving limited-range sensors, the use of multiple sensors dynamically moving over time is required. In this paper, we investigate the use of a cooperative team of autonomous sensor-based robots for the observation of multiple moving targets. In other research, analytical techniques have been developed for solving this problem in complex geometrical environments. However, these previous approaches are very computationally expensive—at least exponential in the number of robots—and cannot be implemented on robots operating in real-time. Thus, this paper reports on our studies of a simpler problem involving uncluttered environments—those with either no obstacles or with randomly distributed simple convex obstacles. We focus primarily on developing the on-line distributed control strategies that allow the robot team to attempt to minimize the total time in which targets escape observation by some robot team member in the area of interest. This paper first formalizes the problem (which we term CMOMMT, for Cooperative Multi-Robot Observation of Multiple Moving Targets) and discusses related work. We then present a distributed heuristic approach (which we call A-CMOMMT) for solving the CMOMMT problem that uses weighted local force vector control. We analyze the effectiveness of the resulting weighted force vector approach by comparing it to three other approaches. We present the results of our experiments in both simulation and on physical robots that demonstrate the superiority of the A-CMOMMT approach for situations in which the ratio of targets to robots is greater than 1/2. Finally, we conclude by proposing that the CMOMMT problem makes an excellent domain for studying multi-robot learning in inherently cooperative tasks.
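
A rough illustration of weighted local force-vector control in the spirit of A-CMOMMT: nearby targets attract a robot and nearby teammates repel it, so the team spreads out over the targets. The weighting profiles, sensing and repulsion ranges, and gains below are made-up assumptions, not the functions used in the paper.

```python
# Illustrative weighted force-vector step for cooperative target observation.
import numpy as np

def force_vector(robot, targets, teammates, sense_range=5.0, repel_range=2.0):
    f = np.zeros(2)
    for t in targets:
        d = np.linalg.norm(t - robot)
        if 0 < d <= sense_range:
            f += (t - robot) / d * (1.0 - d / sense_range)     # attract, fading with range
    for r in teammates:
        d = np.linalg.norm(r - robot)
        if 0 < d <= repel_range:
            f -= (r - robot) / d * (1.0 - d / repel_range)     # repel close teammates
    return f

robot = np.array([0.0, 0.0])
targets = [np.array([2.0, 1.0]), np.array([-1.0, 3.0])]
teammates = [np.array([0.5, 0.0])]
print(np.round(force_vector(robot, targets, teammates), 3))  # net heading for this step
```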

Journal ArticleDOI
TL;DR: Two distributed heuristics with constant performance ratios are proposed; they require only single-hop neighborhood knowledge and a message length of O(1), with time and message complexities of O(n) and O(n log n), respectively.
Abstract: A connected dominating set (CDS) for a graph G(V, E) is a subset V' of V such that each node in V − V' is adjacent to some node in V', and V' induces a connected subgraph. CDSs have been proposed as a virtual backbone for routing in wireless ad hoc networks. However, it is NP-hard to find a minimum connected dominating set (MCDS). An approximation algorithm for MCDS in general graphs has been proposed in the literature with a performance guarantee of 3 + ln Δ, where Δ is the maximal nodal degree [1]. This algorithm has been implemented in a distributed manner in wireless networks [2]–[4]. This distributed implementation suffers from high time and message complexity, and the performance ratio remains 3 + ln Δ. Another distributed algorithm has been developed in [5], with a performance ratio of Θ(n). Both algorithms require two-hop neighborhood knowledge and a message length of Ω(Δ). On the other hand, wireless ad hoc networks have a unique geometric nature, which can be modeled as a unit-disk graph (UDG), and thus admit heuristics with better performance guarantees. In this paper we propose two distributed heuristics with constant performance ratios. The time and message complexities of these algorithms are O(n) and O(n log n), respectively. Both algorithms require only single-hop neighborhood knowledge and a message length of O(1).

Journal ArticleDOI
TL;DR: New randomized distributed algorithms for the dominating set problem are described and analyzed that run in polylogarithmic time, independent of the diameter of the network, and that return a dominating set of size within a logarithmic factor of optimal, with high probability.
Abstract: The dominating set problem asks for a small subset D of nodes in a graph such that every node is either in D or adjacent to a node in D. This problem arises in a number of distributed network applications, where it is important to locate a small number of centers in the network such that every node is nearby at least one center. Finding a dominating set of minimum size is NP-complete, and the best known approximation is logarithmic in the maximum degree of the graph and is provided by the same simple greedy approach that gives the well-known logarithmic approximation result for the closely related set cover problem. We describe and analyze new randomized distributed algorithms for the dominating set problem that run in polylogarithmic time, independent of the diameter of the network, and that return a dominating set of size within a logarithmic factor from optimal, with high probability. In particular, our best algorithm runs in O(log n log Δ) rounds with high probability, where n is the number of nodes, Δ is one plus the maximum degree of any node, and each round involves a constant number of message exchanges among any two neighbors; the size of the dominating set obtained is within O(log Δ) of the optimal in expectation and within O(log n) of the optimal with high probability. We also describe generalizations to the weighted case and the case of multiple covering requirements.

Proceedings ArticleDOI
07 Nov 2002
TL;DR: An energy-efficient routing and scheduling algorithm that coordinates transmissions in ad hoc networks where each node has a single directional antenna is presented; the algorithm achieves all the transmitter/receiver gains possible from using directional antennas.
Abstract: Directional antennas can be useful in significantly increasing node and network lifetime in wireless ad hoc networks. In order to utilize directional antennas, an algorithm is needed that will enable nodes to point their antennas to the right place at the right time. In this paper we present an energy-efficient routing and scheduling algorithm that coordinates transmissions in ad hoc networks where each node has a single directional antenna. Using the topology consisting of all the possible links in the network, we first find shortest-cost paths so as to be energy-efficient. Then, we calculate the amount of traffic that has to go over each link and find the maximum amount of time each link can be up, using end-to-end traffic information to achieve that routing. Finally, we schedule nodes' transmissions, trying to minimize the total time it takes for all possible transmitter-receiver pairs to communicate with each other. We formulate this link scheduling problem as solving a series of maximum-weight matching problems in a graph. Furthermore, we propose a method that enables our scheduling algorithm to work in a distributed and adaptive fashion. We demonstrate that our algorithm achieves all the transmitter/receiver gains possible from using directional antennas. In addition, we illustrate through simulation that our routing scheme achieves up to another 45% improvement in energy cost for routing.
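
The scheduling step can be sketched as repeated maximum-weight matching: with single-beam directional antennas, each node can take part in at most one transmission per slot, so the links active in a slot must form a matching. The sketch below (using networkx) serves an illustrative demand matrix this way; the routing, power, and antenna-pointing parts of the paper are omitted.

```python
# Slot-by-slot link scheduling via maximum-weight matching on remaining demand.
import networkx as nx

def schedule(demand):
    """demand: {(u, v): number of slots the link u-v still needs}. Returns the slot count."""
    remaining = dict(demand)
    slots = 0
    while any(r > 0 for r in remaining.values()):
        g = nx.Graph()
        for (u, v), r in remaining.items():
            if r > 0:
                g.add_edge(u, v, weight=r)
        matching = nx.max_weight_matching(g, weight="weight")
        for u, v in matching:
            key = (u, v) if (u, v) in remaining else (v, u)
            remaining[key] -= 1                 # matched links are served this slot
        slots += 1
    return slots

demand = {("a", "b"): 3, ("b", "c"): 2, ("c", "d"): 2, ("a", "d"): 1}
print(schedule(demand))   # number of slots needed under the matching constraint
```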

Proceedings Article
01 Jan 2002
TL;DR: This paper gives a unifying treatment of max-min fairness that encompasses all existing results in a simplifying framework and extends its applicability to new examples; it also shows that, if the set of feasible allocations has the free-disposal property, then Max-min Programming reduces to a simpler algorithm, called Water Filling, whose complexity is much lower.
Abstract: Max-min fairness is widely used in various areas of networking. In every case where it is used, there is a proof of existence and one or several algorithms for computing the max-min fair allocation; in most, but not all, cases they are based on the notion of bottlenecks. In spite of this wide applicability, there are still examples, arising in the context of mobile or peer-to-peer networks, where the existing theories do not seem to apply directly. In this paper, we give a unifying treatment of max-min fairness, which encompasses all existing results in a simplifying framework, and extends its applicability to new examples. First, we observe that the existence of max-min fairness is actually a geometric property of the set of feasible allocations (uniqueness always holds). There exist sets on which max-min fairness does not exist, and we describe a large class of sets on which a max-min fair allocation does exist. This class contains the compact, convex sets of R^N, but is not limited to them. Second, we give a general-purpose, centralized algorithm, called Max-min Programming, for computing the max-min fair allocation in all cases where it exists (whether the set of feasible allocations is in our class or not). Its complexity is of the order of N linear programming steps in R^N, in the case where the feasible set is defined by linear constraints. We show that, if the set of feasible allocations has the free-disposal property, then Max-min Programming degenerates to a simpler algorithm, called Water Filling, whose complexity is much lower. Free disposal corresponds to the cases where a bottleneck argument can be made, and Water Filling is the general form of all previously known centralized algorithms for such cases. Our derivations are based on the relation between max-min fairness and leximin ordering. All our results apply mutatis mutandis to min-max fairness. Our results apply to weighted, unweighted, and util-max-min and min-max fairness. Distributed algorithms for the computation of max-min fair allocations are left outside the scope of this paper.
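
For the free-disposal case that the paper says reduces to Water Filling, a minimal sketch of the classic algorithm for max-min fair rates under linear link-capacity constraints looks as follows; the topology and capacities are illustrative assumptions, and the routine assumes every flow crosses at least one link.

```python
# Water filling for max-min fair rate allocation under link capacities.
def water_fill(links, flows, capacity, eps=1e-9):
    """links: {link: set of flows using it}; capacity: {link: capacity}. Returns {flow: rate}."""
    rate = {f: 0.0 for f in flows}
    frozen = set()
    cap = dict(capacity)
    while len(frozen) < len(flows):
        # Largest equal increment every unfrozen flow can take before some link saturates.
        increments = [(cap[l] / len([f for f in users if f not in frozen]), l)
                      for l, users in links.items()
                      if any(f not in frozen for f in users)]
        delta, _ = min(increments)
        for f in flows:
            if f not in frozen:
                rate[f] += delta
        # Consume capacity, then freeze the flows on links that just saturated.
        saturated = []
        for l, users in links.items():
            active = [f for f in users if f not in frozen]
            cap[l] -= delta * len(active)
            if active and cap[l] <= eps:
                saturated.append(l)
        for l in saturated:
            frozen.update(links[l])
    return rate

links = {"L1": {"A", "B"}, "L2": {"B", "C"}}          # flow B crosses both links
capacity = {"L1": 1.0, "L2": 2.0}
print(water_fill(links, ["A", "B", "C"], capacity))   # {'A': 0.5, 'B': 0.5, 'C': 1.5}
```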

Journal ArticleDOI
TL;DR: This paper proposes and evaluates a simple but effective feedback-based distributed algorithm for tuning the p parameter to the optimal values, and shows that power saving and throughput maximization can be jointly achieved.
Abstract: Wireless technologies in the LAN environment are becoming increasingly important. The IEEE 802.11 is the most mature technology for wireless local area networks (WLANs). The limited bandwidth and the finite battery power of mobile computers represent one of the greatest limitations of current WLANs. In this paper, we deeply investigate the efficiency and the energy consumption of MAC protocols that can be described with a p-persistent CSMA model. As already shown in the literature, the IEEE 802.11 protocol performance can be studied using a p-persistent CSMA model (Cali et al. 2000). For this class of protocols, in the paper, we define an analytical framework to study the theoretical performance bounds from the throughput and the energy consumption standpoint. Specifically, we derive the p values (i.e., the average size of the contention window in the IEEE 802.11 protocol (Cali et al.)) that maximize the throughput, p_opt^C, and minimize the energy consumption, p_opt^E. By providing analytical closed formulas for the optimal p values, we discuss the trade-off between efficiency and energy consumption. Specifically, we show that power saving and throughput maximization can be jointly achieved. Our analytical formulas indicate that the optimal p values depend on the network configuration, i.e., the number of active stations and the length of the messages transmitted on the channel. As network configurations dynamically change, the optimal p values must be dynamically updated. In this paper, we propose and evaluate a simple but effective feedback-based distributed algorithm for tuning the p parameter to the optimal values, i.e., p_opt^E and p_opt^C. The performance of the p-persistent IEEE 802.11 protocol, enhanced with our algorithm, is extensively investigated by simulation. Our results indicate that the enhanced p-persistent IEEE 802.11 protocol is very close to the theoretical bounds both in steady-state and in transient conditions.

Book ChapterDOI
07 Sep 2002
TL;DR: This paper describes the recently released DREAM (Distributed Resource Evolutionary Algorithm Machine) framework for the automatic distribution of evolutionary algorithm (EA) processing through a virtual machine built from large numbers of individual machines linked by standard Internet protocols.
Abstract: This paper describes the recently released DREAM (Distributed Resource Evolutionary Algorithm Machine) framework for the automatic distribution of evolutionary algorithm (EA) processing through a virtual machine built from large numbers of individual machines linked by standard Internet protocols. The framework allows five different user entry points which depend on the knowledge and requirements of the user. At the highest level, users may specify and run distributed EAs simply by manipulating graphical displays. At the lowest level, the framework becomes a P2P (peer-to-peer) mobile agent system that may be used for the automatic distribution of a class of processes including, but not limited to, EAs.

Proceedings ArticleDOI
10 Dec 2002
TL;DR: This paper presents a distributed fault-tolerant topology control algorithm for minimum energy consumption in these networks; the algorithms preserve the connectivity of a network upon the failure of at most k nodes and simultaneously minimize, to some extent, the transmission power at each node.
Abstract: We can control the topology of a multi-hop wireless network by varying the transmission power at each node. The lifetime of such networks depends on the battery power at each node. This paper presents a distributed fault-tolerant topology control algorithm for minimum energy consumption in these networks. More precisely, we present algorithms which preserve the connectivity of a network upon the failure of at most k nodes (k constant) and simultaneously minimize, to some extent, the transmission power at each node. In addition, we present simulations to support the effectiveness of our algorithm. We also demonstrate some optimizations to further minimize the power at each node. Finally, we show how our algorithms can be extended to three dimensions.
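
The fault-tolerance target itself can be stated compactly: the chosen transmission powers must leave the network connected after any k node failures, i.e. the induced graph must be (k+1)-connected. The centralized sketch below searches for the smallest common transmission radius achieving this (using networkx); it illustrates the objective only and is not the paper's distributed algorithm or its per-node power minimization.

```python
# Smallest common transmission radius whose disk graph is (k+1)-connected.
import itertools
import networkx as nx
import numpy as np

def min_radius_for_k_fault_tolerance(points, k):
    dists = sorted({np.linalg.norm(p - q) for p, q in itertools.combinations(points, 2)})
    for r in dists:                          # candidate radii = pairwise distances
        g = nx.Graph()
        g.add_nodes_from(range(len(points)))
        for (i, p), (j, q) in itertools.combinations(enumerate(points), 2):
            if np.linalg.norm(p - q) <= r:
                g.add_edge(i, j)
        if nx.node_connectivity(g) >= k + 1:
            return r
    return None

rng = np.random.default_rng(2)
pts = rng.random((8, 2))
print(round(min_radius_for_k_fault_tolerance(pts, k=1), 3))   # smallest radius giving 2-connectivity
```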

Proceedings ArticleDOI
07 Nov 2002
TL;DR: A simple distributed algorithm is proposed to obtain an approximation to the social optimal power allocation for multi-class CDMA wireless services in a unified way, and it is inferred that the system utility obtained by partial-cooperative optimal power allocation is quite close to the system utility obtained by social optimal allocation.
Abstract: We use a utility-based power allocation framework in the downlink to treat multi-class CDMA wireless services in a unified way. Our goal is to obtain a power allocation which maximizes the total system utility. Natural utility functions for each mobile are non-concave, so we cannot use existing techniques for convex optimization problems to derive a social optimal solution. We propose a simple distributed algorithm to obtain an approximation to the social optimal power allocation. The algorithm is based on dynamic pricing and allows partial cooperation between mobiles and the base station. The algorithm consists of two stages. At the first stage, the base station selects the mobiles to which power is allocated, considering their partial-cooperative nature. This is called the partial-cooperative optimal selection, since in a partial-cooperative setting and pricing scheme this selection is optimal and satisfies system feasibility. At the next stage, the base station allocates power to the selected mobiles. This power allocation is a social optimal power allocation among the mobiles in the partial-cooperative optimal selection; thus, we call it a partial-cooperative optimal power allocation. We compare the partial-cooperative optimal power allocation with the social optimal power allocation for the single-class case. From these results, we infer that the system utility obtained by partial-cooperative optimal power allocation is quite close to the system utility obtained by social optimal allocation.

Journal ArticleDOI
TL;DR: A distributed optimal location algorithm that requires only small nodal memory capacity and computational power is developed; it simplifies the combination operation used in the design of a dynamic program.

Journal ArticleDOI
TL;DR: This paper focuses on issues concerning the dissemination and retrieval of information and data on Computational Grid platforms, arguing that these issues are particularly critical at this time and pointing to preliminary ideas, work, and results in the Grid and distributed computing communities.
Abstract: Ensembles of distributed, heterogeneous resources, or Computational Grids, have emerged as popular platforms for deploying large-scale and resource-intensive applications. Large collaborative efforts are currently underway to provide the necessary software infrastructure. Grid computing raises challenging issues in many areas of computer science, and especially in the area of distributed computing, as Computational Grids cover increasingly large networks and span many organizations. In this paper we briefly motivate Grid computing and introduce its basic concepts. We then highlight a number of distributed computing research questions, and discuss both the relevance and the shortcomings of previous research results when applied to Grid computing. We choose to focus on issues concerning the dissemination and retrieval of information and data on Computational Grid platforms. We feel that these issues are particularly critical at this time, and we can point to preliminary ideas, work, and results in the Grid community and the distributed computing community. This paper is of interest to distributed computing researchers because Grid computing provides new challenges that need to be addressed, as well as actual platforms for experimentation and research.

Proceedings ArticleDOI
15 Apr 2002
TL;DR: This work proposes a mechanism, using only local knowledge, to improve the overall performance of peer-to-peer networks based on interests, and implements this mechanism in the context of a distributed encyclopedia-style information-sharing application built on top of the Gnutella network.
Abstract: As computing and communication capabilities have continued to increase, more and more activity is taking place at the edges of the network, typically in homes or on workers' desktops. This trend has been demonstrated by the increasing popularity and usability of "peer-to-peer" systems, such as Napster and Gnutella. Unfortunately, this popularity has quickly shown the limitations of these systems, particularly in terms of scale. Because the networks form in an ad-hoc manner, they typically make inefficient use of resources. We propose a mechanism, using only local knowledge, to improve the overall performance of peer-to-peer networks based on interests. Peers monitor which other peers frequently respond successfully to their requests for information. When a peer is discovered to frequently provide good results, the requesting peer attempts to move closer to it in the network by creating a new connection with that peer. This leads to clusters of peers with similar interests, and in turn allows us to limit the depth of searches required to find good results. We have implemented our algorithm in the context of a distributed encyclopedia-style information-sharing application which is built on top of the Gnutella network. In our testing environment, we have shown the ability to greatly reduce the amount of communication resources required to find the desired articles in the encyclopedia.
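
A minimal sketch of the interest-based shortcut mechanism described above: each peer counts which responders answer its queries well and, once a responder looks consistently useful, adds a direct connection to it, evicting its least useful neighbor if the neighbor list is full. The threshold and neighbor cap are illustrative assumptions, not the paper's parameters.

```python
# Interest-based neighbor selection: promote frequently useful responders.
from collections import Counter

class Peer:
    def __init__(self, name, max_neighbors=5, threshold=3):
        self.name = name
        self.neighbors = set()
        self.successes = Counter()          # responder -> good answers seen
        self.max_neighbors = max_neighbors
        self.threshold = threshold

    def record_success(self, responder):
        """Called whenever `responder` returns a useful result for one of our queries."""
        self.successes[responder] += 1
        if responder not in self.neighbors and self.successes[responder] >= self.threshold:
            if len(self.neighbors) >= self.max_neighbors:
                worst = min(self.neighbors, key=lambda n: self.successes[n])
                self.neighbors.discard(worst)
            self.neighbors.add(responder)   # move "closer" to peers sharing our interests

p = Peer("alice", max_neighbors=2)
for responder in ["bob", "bob", "carol", "bob", "carol", "carol", "dave"]:
    p.record_success(responder)
print(p.neighbors)   # bob and carol, the peers that answered best
```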

01 Jan 2002
TL;DR: In this article, the probabilistic asynchronous π-calculus with mixed choice is introduced, which is an extension of the synchronous-asynchronous π -calculus enhanced with a notion of random choice.
Abstract: In this dissertation, we consider a distributed implementation of the π-calculus, more precisely, the version of the π-calculus with mixed choice. To this end, we present the probabilistic asynchronous π-calculus, which is an extension of the asynchronous π-calculus enhanced with a notion of random choice. We define an operational semantics which distinguishes between probabilistic choice, made internally by the process, and nondeterministic choice, made externally by an adversary scheduler. This distinction will allow us to reason about the probabilistic correctness of algorithms under certain schedulers. We show that in this language we can solve the electoral problem, which was proved not possible in the asynchronous π-calculus. We propose a randomized distributed encoding of the π-calculus, using the probabilistic asynchronous π-calculus, and we show that our solution is correct with probability 1 under any proper adversary with respect to a notion of testing semantics. Finally, in order to prove that the probabilistic asynchronous π-calculus is a sensible paradigm for the specification of distributed algorithms, we define a distributed implementation of the synchronization-closed probabilistic asynchronous π-calculus in the Java language.

Journal ArticleDOI
TL;DR: This work proposes a novel scheme for MAC address assignment that exploits spatial address reuse and an encoded representation of the addresses in data packets, and develops a purely distributed assignment algorithm that relies solely on local message exchanges.
Abstract: Sensor networks consist of autonomous wireless sensor nodes that are networked together in an ad hoc fashion. The tiny nodes are equipped with substantial processing capabilities, enabling them to combine and compress their sensor data. The aim is to limit the amount of network traffic, and as such conserve the nodes' limited battery energy. However, due to the small packet payload, the MAC header is a significant, and energy-costly, overhead. To remedy this, we propose a novel scheme for a MAC address assignment. The two key features which make our approach unique are the exploitation of spatial address reuse and an encoded representation of the addresses in data packets. To assign the addresses, we develop a purely distributed algorithm that relies solely on local message exchanges. Other salient features of our approach are the ability to handle unidirectional links and the excellent scalability of both the assignment algorithm and address representation. In typical scenarios, the MAC overhead is reduced by a factor of three compared to existing approaches.
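
The spatial-reuse idea can be illustrated with a simple distance-2 assignment: a node takes the smallest address not already used within two hops, so short addresses are reused by nodes far enough apart not to confuse any common neighbor. The centralized sketch below shows only this reuse principle; it is not the paper's distributed assignment algorithm or its encoded address representation.

```python
# Distance-2 address reuse: smallest address unused within two hops.
def assign_addresses(adj):
    """adj: {node: set(neighbors)}. Returns {node: small integer address}."""
    addr = {}
    for node in sorted(adj):                       # deterministic visiting order
        two_hop = set(adj[node])
        for nb in adj[node]:
            two_hop |= adj[nb]
        two_hop.discard(node)
        used = {addr[v] for v in two_hop if v in addr}
        a = 0
        while a in used:                           # smallest locally free address
            a += 1
        addr[node] = a
    return addr

chain = {i: {i - 1, i + 1} & set(range(6)) for i in range(6)}   # path 0-1-2-3-4-5
print(assign_addresses(chain))   # {0: 0, 1: 1, 2: 2, 3: 0, 4: 1, 5: 2}: reuse after two hops
```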

Journal ArticleDOI
TL;DR: This work addresses the rate control problem for multirate multicast sessions, with the objective of maximizing the total receiver utility, and proposes an algorithm for this problem that converges to the optimal rates.
Abstract: In multirate multicasting, different users (receivers) within the same multicast group can receive service at different rates, depending on the user requirements and the network congestion level. Compared with unirate multicasting, this provides more flexibility to the user and allows more efficient usage of the network resources. We address the rate control problem for multirate multicast sessions, with the objective of maximizing the total receiver utility. This aggregate utility maximization problem not only takes into account the heterogeneity in user requirements, but also provides a unified framework for diverse fairness objectives. We propose an algorithm for this problem and show, through analysis and simulation, that it converges to the optimal rates. In spite of the nonseparability of the problem, the solution that we develop is completely decentralized, scalable and does not require the network to know the receiver utilities. The algorithm requires very simple computations both for the user and the network, and also has a very low overhead of network congestion feedback.

Journal ArticleDOI
TL;DR: The proposed MMRS has several parameters which can be suitably modified to control the end-to-end delay and packet loss in a topology-specific manner and can be adjusted to offer limited priorities to some desired sessions.
Abstract: We propose a new multicast routing and scheduling algorithm called multipurpose multicast routing and scheduling algorithm (MMRS). The routing policy load balances among various possible routes between the source and the destinations, basing its decisions on the message queue lengths at the source node. The scheduling is such that the flow of a session depends on the congestion of the next hop links. MMRS is throughput optimal. In addition, it has several other attractive features. It is computationally simple and can be implemented in a distributed, asynchronous manner. It has several parameters which can be suitably modified to control the end-to-end delay and packet loss in a topology-specific manner. These parameters can be adjusted to offer limited priorities to some desired sessions. MMRS is expected to play a significant role in end-to-end congestion control in the multicast scenario.