
Showing papers on "Network topology published in 2002"


Journal ArticleDOI
TL;DR: For applications with loose delay constraints, such that the topology changes over the time-scale of packet delivery, the per-session throughput can increase dramatically when nodes are mobile rather than fixed; the improvement is achieved by exploiting a form of multiuser diversity via packet relaying.
Abstract: The capacity of ad hoc wireless networks is constrained by the mutual interference of concurrent transmissions between nodes. We study a model of an ad hoc network where n nodes communicate in random source-destination pairs. These nodes are assumed to be mobile. We examine the per-session throughput for applications with loose delay constraints, such that the topology changes over the time-scale of packet delivery. Under this assumption, the per-user throughput can increase dramatically when nodes are mobile rather than fixed. This improvement can be achieved by exploiting a form of multiuser diversity via packet relaying.

2,736 citations
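
For context, the scaling contrast with fixed networks can be stated precisely. A hedged summary (the fixed-node bound is Gupta and Kumar's earlier result, not from this paper; W denotes per-channel bandwidth, and constants are model-dependent):

```latex
% Per-session throughput with n random source--destination pairs,
% W = per-channel bandwidth (constants and log factors are model-dependent):
\lambda_{\text{fixed}}(n) = \Theta\!\left(\frac{W}{\sqrt{n \log n}}\right)
\qquad \text{vs.} \qquad
\lambda_{\text{mobile}}(n) = \Theta(W)
% Mobility allows two-hop relaying (source -> relay -> destination),
% so per-session throughput need not vanish as n grows.
```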


01 Mar 2002
TL;DR: The results indicate that the co-authorship network of scientists is scale-free, and that the network evolution is governed by preferential attachment, affecting both internal and external links, and a simple model is proposed that captures the network's time evolution.
Abstract: The co-authorship network of scientists represents a prototype of complex evolving networks. In addition, it offers one of the most extensive databases to date on social networks. By mapping the electronic database containing all relevant journals in mathematics and neuro-science for an 8-year period (1991–98), we infer the dynamic and the structural mechanisms that govern the evolution and topology of this complex system. Three complementary approaches allow us to obtain a detailed characterization. First, empirical measurements allow us to uncover the topological measures that characterize the network at a given moment, as well as the time evolution of these quantities. The results indicate that the network is scale-free, and that the network evolution is governed by preferential attachment, affecting both internal and external links. However, in contrast with most model predictions, the average degree increases in time, and the node separation decreases. Second, we propose a simple model that captures the network's time evolution. In some limits the model can be solved analytically, predicting a two-regime scaling in agreement with the measurements. Third, numerical simulations are used to uncover the behavior of quantities that could not be predicted analytically. The combined numerical and analytical results underline the important role internal links play in determining the observed scaling behavior and network topology. The results and methodologies developed in the context of the co-authorship network could be useful for a systematic study of other complex evolving networks as well, such as the world wide web, Internet, or other social networks.

2,277 citations
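
As an illustration of the kind of measurement the abstract describes, here is a minimal sketch (not the authors' code) that estimates a power-law degree exponent by a least-squares fit to the log-log complementary CDF of a degree sequence; the synthetic data and the fitting choice are assumptions, and a maximum-likelihood estimator would be preferred in practice:

```python
import numpy as np

def powerlaw_exponent(degrees):
    """Estimate gamma in P(k) ~ k^-gamma from the empirical CCDF.

    Fits log CCDF(k) ~ -(gamma - 1) log k by least squares; crude,
    but it illustrates the measurement (MLE fitting is preferable).
    """
    ks = np.sort(np.asarray(degrees))
    ccdf = 1.0 - np.arange(len(ks)) / len(ks)   # P(K >= k) at each sorted k
    mask = ks > 0
    slope, _ = np.polyfit(np.log(ks[mask]), np.log(ccdf[mask]), 1)
    return 1.0 - slope                          # gamma = 1 - slope

# Synthetic scale-free-like degree sequence (assumption, demo only):
rng = np.random.default_rng(0)
demo = np.round(rng.pareto(1.5, 10_000) + 1).astype(int)
print(f"estimated exponent: {powerlaw_exponent(demo):.2f}")  # ~2.5
```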


Journal ArticleDOI
TL;DR: In this paper, the authors analyzed the evolution of the co-authorship network of scientists and found that the network is scale-free and that the network evolution is governed by preferential attachment, affecting both internal and external links.
Abstract: The co-authorship network of scientists represents a prototype of complex evolving networks. In addition, it offers one of the most extensive databases to date on social networks. By mapping the electronic database containing all relevant journals in mathematics and neuro-science for an 8-year period (1991–98), we infer the dynamic and the structural mechanisms that govern the evolution and topology of this complex system. Three complementary approaches allow us to obtain a detailed characterization. First, empirical measurements allow us to uncover the topological measures that characterize the network at a given moment, as well as the time evolution of these quantities. The results indicate that the network is scale-free, and that the network evolution is governed by preferential attachment, affecting both internal and external links. However, in contrast with most model predictions, the average degree increases in time, and the node separation decreases. Second, we propose a simple model that captures the network's time evolution. In some limits the model can be solved analytically, predicting a two-regime scaling in agreement with the measurements. Third, numerical simulations are used to uncover the behavior of quantities that could not be predicted analytically. The combined numerical and analytical results underline the important role internal links play in determining the observed scaling behavior and network topology. The results and methodologies developed in the context of the co-authorship network could be useful for a systematic study of other complex evolving networks as well, such as the world wide web, Internet, or other social networks.

2,193 citations


Proceedings ArticleDOI
22 Jun 2002
TL;DR: This paper proposes a query algorithm based on multiple random walks that resolves queries almost as quickly as Gnutella's flooding method while reducing the network traffic by two orders of magnitude in many cases.
Abstract: Decentralized and unstructured peer-to-peer networks such as Gnutella are attractive for certain applications because they require no centralized directories and no precise control over network topology or data placement. However, the flooding-based query algorithm used in Gnutella does not scale; each query generates a large amount of traffic and large systems quickly become overwhelmed by the query-induced load. This paper explores, through simulation, various alternatives to Gnutella's query algorithm, data replication strategy, and network topology. We propose a query algorithm based on multiple random walks that resolves queries almost as quickly as Gnutella's flooding method while reducing the network traffic by two orders of magnitude in many cases. We also present simulation results on a distributed replication strategy proposed in [8]. Finally, we find that among the various network topologies we consider, uniform random graphs yield the best performance.

1,709 citations
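
A toy comparison in the spirit of the paper's simulations (my sketch, not the authors' simulator): TTL-limited flooding versus k parallel random walkers on a sparse random graph, counting protocol messages until the target is reached. Graph size, degree, walker count, and TTL are all assumed values:

```python
import random

def random_graph(n, avg_deg, seed=1):
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for _ in range(n * avg_deg // 2):
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def flood(adj, src, target, ttl=5):
    """Gnutella-style flooding: every node forwards to all neighbors."""
    msgs, frontier, seen = 0, {src}, {src}
    for _ in range(ttl):
        nxt = set()
        for u in frontier:
            for v in adj[u]:
                msgs += 1
                if v not in seen:
                    seen.add(v)
                    nxt.add(v)
        if target in seen:
            return msgs
        frontier = nxt
    return msgs

def k_walkers(adj, src, target, k=16, max_steps=100_000, seed=2):
    """k independent random walkers; each hop costs one message."""
    rng, walkers, msgs = random.Random(seed), [src] * k, 0
    while msgs < max_steps:
        for i, u in enumerate(walkers):
            if adj[u]:
                walkers[i] = rng.choice(sorted(adj[u]))
            msgs += 1
            if walkers[i] == target:
                return msgs
    return msgs

adj = random_graph(10_000, 6)
print("flooding messages :", flood(adj, 0, 4321))
print("16-walker messages:", k_walkers(adj, 0, 4321))
```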


Journal ArticleDOI
TL;DR: An on-demand distributed clustering algorithm for multi-hop packet radio networks that takes into consideration the ideal degree, transmission power, mobility, and battery power of mobile nodes, and is aimed at reducing the computation and communication costs.
Abstract: In this paper, we propose an on-demand distributed clustering algorithm for multi-hop packet radio networks. These types of networks, also known as ad hoc networks, are dynamic in nature due to the mobility of nodes. The association and dissociation of nodes to and from clusters perturb the stability of the network topology, and hence a reconfiguration of the system is often unavoidable. However, it is vital to keep the topology stable as long as possible. The clusterheads, which form a dominant set in the network, determine the topology and its stability. The proposed weight-based distributed clustering algorithm takes into consideration the ideal degree, transmission power, mobility, and battery power of mobile nodes. The time required to identify the clusterheads depends on the diameter of the underlying graph. We try to keep the number of nodes in a cluster around a pre-defined threshold to facilitate the optimal operation of the medium access control (MAC) protocol. The non-periodic procedure for clusterhead election is invoked on demand, and is aimed at reducing the computation and communication costs. The clusterheads, operating in "dual" power mode, connect the clusters, which helps in routing messages from a node to any other node. We observe a trade-off between the uniformity of the load handled by the clusterheads and the connectivity of the network. Simulation experiments are conducted to evaluate the performance of our algorithm in terms of the number of clusterheads, reaffiliation frequency, and dominant set updates. Results show that our algorithm performs better than existing ones and is also tunable to different kinds of network conditions.

1,419 citations
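
The election criterion can be sketched compactly. A minimal, hypothetical rendering of the weighted metric described in the abstract (the four factors follow the text; the weight values, ideal degree, and the local-minimum election rule are my assumptions):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    id: int
    degree: int          # number of 1-hop neighbors
    sum_dist: float      # total distance to neighbors (tx-power proxy)
    avg_speed: float     # running-average mobility
    time_as_head: float  # cumulative clusterhead time (battery proxy)

def combined_weight(v, w1=0.7, w2=0.2, w3=0.05, w4=0.05, ideal_degree=8):
    """Smaller is better; the weighting factors here are illustrative."""
    return (w1 * abs(v.degree - ideal_degree)  # ideal-degree term
            + w2 * v.sum_dist                  # transmission-power term
            + w3 * v.avg_speed                 # mobility term
            + w4 * v.time_as_head)             # battery-power term

def elect_heads(nodes, neighbors):
    """A node becomes clusterhead if its weight is the local minimum
    in its 1-hop neighborhood (ties broken by node id)."""
    return {v.id for v in nodes
            if all((combined_weight(v), v.id) < (combined_weight(u), u.id)
                   for u in neighbors[v.id])}

a = Node(1, degree=7, sum_dist=3.1, avg_speed=0.2, time_as_head=0.0)
b = Node(2, degree=2, sum_dist=9.0, avg_speed=2.5, time_as_head=4.0)
print(elect_heads([a, b], {1: [b], 2: [a]}))   # {1}: a is the better head
```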


Proceedings ArticleDOI
09 Jun 2002
TL;DR: This paper classifies existing broadcasting schemes into categories and simulates a subset of each, supplying a condensed but comprehensive side-by-side comparison; it also proposes and implements protocol extensions using adaptive responses to network conditions for one scheme that performs well in the comparative study.
Abstract: Network-wide broadcasting in Mobile Ad Hoc Networks provides important control and route establishment functionality for a number of unicast and multicast protocols. Considering its wide use as a building block for other network layer protocols, the MANET community needs to standardize a single methodology that efficiently delivers a packet from one node to all other network nodes. Despite a considerable number of proposed broadcasting schemes, no comprehensive comparative analysis has been previously done. This paper provides such analysis by classifying existing broadcasting schemes into categories and simulating a subset of each, thus supplying a condensed but comprehensive side-by-side comparison. The simulations are designed to pinpoint, in each, specific failures to network conditions that are relevant to MANETs, e.g., bandwidth congestion and dynamic topologies. In addition, protocol extensions using adaptive responses to network conditions are proposed, implemented and analyzed for one broadcasting scheme that performs well in the comparative study.

1,417 citations


Journal ArticleDOI
TL;DR: In this article, the synchronization phenomenon in scale-free dynamical networks is investigated and it is shown that if the coupling strength of a scale free dynamical network is greater than a positive threshold, then the network will synchronize no matter how large it is.
Abstract: Recently, it has been demonstrated that many large complex networks display a scale-free feature, that is, their connectivity distributions are in the power-law form. In this paper, we investigate the synchronization phenomenon in scale-free dynamical networks. We show that if the coupling strength of a scale-free dynamical network is greater than a positive threshold, then the network will synchronize no matter how large it is. We show that the synchronizability of a scale-free dynamical network is robust against random removal of nodes, but is fragile to specific removal of the most highly connected nodes.

1,089 citations
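
In diffusively coupled network models of this kind, synchronizability improves with the magnitude of the Laplacian's second-smallest eigenvalue, so the robust/fragile claim can be probed numerically. A sketch (mine, not the paper's experiment; the Barabási-Albert stand-in graph and the 5% removal fraction are assumptions):

```python
import networkx as nx
import numpy as np

def lambda2(G):
    """Second-smallest Laplacian eigenvalue (algebraic connectivity);
    it drops to 0 when the graph disconnects."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    return np.sort(np.linalg.eigvalsh(L))[1]

G = nx.barabasi_albert_graph(500, 3, seed=0)
k = int(0.05 * G.number_of_nodes())           # remove 5% of nodes (assumed)

hubs = [n for n, _ in sorted(G.degree, key=lambda kv: -kv[1])[:k]]
Gt = G.copy(); Gt.remove_nodes_from(hubs)     # targeted: highest degree first

rng = np.random.default_rng(0)
Gr = G.copy(); Gr.remove_nodes_from(rng.choice(list(G), size=k, replace=False))

print(f"lambda2 intact  : {lambda2(G):.4f}")
print(f"lambda2 random  : {lambda2(Gr):.4f}")   # modest drop
print(f"lambda2 targeted: {lambda2(Gt):.4f}")   # collapses toward 0
```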


Journal ArticleDOI
TL;DR: The resulting network exhibits a scale-free link distribution and pronounced small-world behavior, as observed in other social networks, implying that the spreading of e-mail viruses is greatly facilitated in real e-mail networks compared to random architectures.
Abstract: We study the topology of e-mail networks with e-mail addresses as nodes and e-mails as links using data from server log files. The resulting network exhibits a scale-free link distribution and pronounced small-world behavior, as observed in other social networks. These observations imply that the spreading of e-mail viruses is greatly facilitated in real e-mail networks compared to random architectures.

954 citations


Proceedings ArticleDOI
07 Nov 2002
TL;DR: In this paper, the authors present a binning scheme whereby nodes partition themselves into bins such that nodes that fall within a given bin are relatively close to one another in terms of network latency.
Abstract: A number of large-scale distributed Internet applications could potentially benefit from some level of knowledge about the relative proximity between its participating host nodes. For example, the performance of large overlay networks could be improved if the application-level connectivity between the nodes in these networks is congruent with the underlying IP-level topology. Similarly, in the case of replicated Web content, client nodes could use topological information in selecting one of multiple available servers. For such applications, one need not find the optimal solution in order to achieve significant practical benefits. Thus, these applications, and presumably others like them, do not require exact topological information and can instead use sufficiently informative hints about the relative positions of Internet hosts. In this paper, we present a binning scheme whereby nodes partition themselves into bins such that nodes that fall within a given bin are relatively close to one another in terms of network latency. Our binning strategy is simple (requiring minimal support from any measurement infrastructure), scalable (requiring no form of global knowledge, each node only needs knowledge of a small number of well-known landmark nodes) and completely distributed (requiring no communication or cooperation between the nodes being binned). We apply this binning strategy to the two applications mentioned above: overlay network construction and server selection. We test our binning strategy and its application using simulation and Internet measurement traces. Our results indicate that the performance of these applications can be significantly improved by even the rather coarse-grained knowledge of topology offered by our binning scheme.

876 citations
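
A minimal sketch of the binning rule as the abstract describes it (my code; the RTT numbers and the level boundaries are assumed): a node's bin is its ordering of the well-known landmarks by measured RTT, optionally refined by quantizing each RTT into a coarse latency level:

```python
LEVELS = (50, 150)  # ms boundaries -> levels 0, 1, 2 (assumed values)

def latency_level(rtt_ms):
    return sum(rtt_ms > b for b in LEVELS)

def bin_id(rtts_ms):
    """rtts_ms: {landmark_name: measured RTT in ms}. Nodes with equal
    bin ids are heuristically close; no coordination between the nodes
    being binned is needed, matching the scheme's design goals."""
    order = sorted(rtts_ms, key=rtts_ms.get)            # landmark ordering
    levels = tuple(latency_level(rtts_ms[l]) for l in order)
    return tuple(order), levels

# Two nodes that measure similar RTT vectors land in the same bin:
print(bin_id({"L1": 20, "L2": 90, "L3": 300}))
# (('L1', 'L2', 'L3'), (0, 1, 2))
```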


Journal ArticleDOI
01 Jan 2002
TL;DR: The ASCENT algorithm is motivated and described, and it is shown that the system achieves a linear increase in energy savings as a function of density while still providing adequate connectivity; the convergence time required in case of node failures is also characterized.
Abstract: Advances in microsensor and radio technology enable small but smart sensors to be deployed for a wide range of environmental monitoring applications. The low per-node cost allows these wireless networks of sensors and actuators to be densely distributed. The nodes in these dense networks coordinate to perform the distributed sensing and actuation tasks. Moreover, as described in this paper, the nodes can also coordinate to exploit the redundancy provided by high density so as to extend overall system lifetime. The large number of nodes deployed in these systems precludes manual configuration, and the environmental dynamics preclude design-time preconfiguration. Therefore, nodes have to self-configure to establish a topology that provides communication under stringent energy constraints. ASCENT builds on the notion that, as density increases, only a subset of the nodes is necessary to establish a routing/forwarding backbone. In ASCENT, each node assesses its connectivity and adapts its participation in the multihop network topology based on the measured operating region. This paper motivates and describes the ASCENT algorithm and presents analysis, simulation, and experimental measurements. We show that the system achieves a linear increase in energy savings as a function of density while still providing adequate connectivity, and we characterize the convergence time required in case of node failures.

851 citations
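
A rough rendering of the adaptive participation rule (my simplification from the abstract; the neighbor threshold NT, loss threshold LT, and state names are modeled on the ASCENT description, but the exact transition logic here is an assumption):

```python
def ascent_decision(state, active_neighbors, loss_rate,
                    NT=4, LT=0.2, help_received=False):
    """One step of a simplified ASCENT-style participation rule.

    states: 'sleep' (radio mostly off), 'passive' (listening only),
    'active' (forwarding). NT = neighbor threshold, LT = loss threshold.
    """
    if state == "passive":
        # Join the topology only when connectivity is genuinely poor:
        if active_neighbors < NT and (loss_rate > LT or help_received):
            return "active"
        return "sleep"          # enough active neighbors: save energy
    if state == "active":
        return "active"         # active nodes keep forwarding
    return "passive"            # sleepers wake periodically to reassess

print(ascent_decision("passive", active_neighbors=2, loss_rate=0.3))  # active
print(ascent_decision("passive", active_neighbors=6, loss_rate=0.0))  # sleep
```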


Journal ArticleDOI
14 Nov 2002 - Nature
TL;DR: A theoretical method for simultaneously predicting key aspects of network functionality, robustness and gene regulation from network structure alone is devised by determining and analysing the non-decomposable pathways able to operate coherently at steady state (elementary flux modes).
Abstract: The relationship between structure, function and regulation in complex cellular networks is still a largely open question. Systems biology aims to explain this relationship by combining experimental and theoretical approaches. Current theories have various strengths and shortcomings in providing an integrated, predictive description of cellular networks. Specifically, dynamic mathematical modelling of large-scale networks meets difficulties because the necessary mechanistic detail and kinetic parameters are rarely available. In contrast, structure-oriented analyses only require network topology, which is well known in many cases. Previous approaches of this type focus on network robustness or metabolic phenotype, but do not give predictions on cellular regulation. Here, we devise a theoretical method for simultaneously predicting key aspects of network functionality, robustness and gene regulation from network structure alone. This is achieved by determining and analysing the non-decomposable pathways able to operate coherently at steady state (elementary flux modes). We use the example of Escherichia coli central metabolism to illustrate the method.

Journal ArticleDOI
TL;DR: This work studied the topology and protocols of the public Gnutella network to evaluate costs and benefits of the peer-to-peer (P2P) approach and to investigate possible improvements that would allow better scaling and increased reliability in Gnutella and similar networks.
Abstract: We studied the topology and protocols of the public Gnutella network. Its substantial user base and open architecture make it a good large-scale, if uncontrolled, testbed. We captured the network's topology, generated traffic, and dynamic behavior to determine its connectivity structure and how well (if at all) Gnutella's overlay network topology maps to the physical Internet infrastructure. Our analysis of the network allowed us to evaluate costs and benefits of the peer-to-peer (P2P) approach and to investigate possible improvements that would allow better scaling and increased reliability in Gnutella and similar networks. A mismatch between Gnutella's overlay network topology and the Internet infrastructure has critical performance implications.

Journal ArticleDOI
TL;DR: The node architecture for a WDM mesh network with traffic-grooming capability is studied, using optical add/drop multiplexers (OADMs) to perform optical bypass at intermediate nodes and improve the network throughput.
Abstract: In wavelength-division multiplexing (WDM) optical networks, the bandwidth request of a traffic stream can be much lower than the capacity of a lightpath. Efficiently grooming low-speed connections onto high-capacity lightpaths will improve the network throughput and reduce the network cost. In WDM/SONET ring networks, it has been shown in the optical network literature that by carefully grooming the low-speed connections and using optical add/drop multiplexers (OADMs) to perform optical bypass at intermediate nodes, electronic ADMs can be saved and network cost will be reduced. In this study, we investigate the traffic-grooming problem in a WDM-based optical mesh topology network. Our objective is to improve the network throughput. We study the node architecture for a WDM mesh network with traffic-grooming capability. A mathematical formulation of the traffic-grooming problem is presented in this study and several fast heuristics are also proposed and evaluated.

Journal ArticleDOI
TL;DR: This work proposes a new technique, called sparse topology and energy management (STEM), which efficiently wakes up nodes from a deep sleep state without the need for an ultra low-power radio, and shows that this scheme results in energy savings of over two orders of magnitude compared to sensor networks without topology management.
Abstract: In wireless sensor networks, energy efficiency is crucial to achieving satisfactory network lifetime. To reduce the energy consumption significantly, a node should turn off its radio most of the time, except when it has to participate in data forwarding. We propose a new technique, called sparse topology and energy management (STEM), which efficiently wakes up nodes from a deep sleep state without the need for an ultra low-power radio. The designer can trade the energy efficiency of this sleep state for the latency associated with waking up the node. In addition, we integrate STEM with approaches that also leverage excess network density. We show that our hybrid wakeup scheme results in energy savings of over two orders of magnitude compared to sensor networks without topology management. Furthermore, the network designer is offered full flexibility in exploiting the energy-latency-density design space by selecting the appropriate parameter settings of our protocol.
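
The energy-latency trade-off can be illustrated with simple duty-cycle arithmetic (a sketch under assumed radio power numbers, which are illustrative and not parameters from the paper):

```python
def stem_tradeoff(period_s, listen_s, p_rx_mw=15.0, p_sleep_mw=0.02):
    """Duty-cycled wakeup: listen for listen_s every period_s seconds.

    Returns (average radio power in mW, worst-case wakeup latency in s).
    Longer periods save energy but delay multi-hop path setup.
    """
    duty = listen_s / period_s
    avg_power = duty * p_rx_mw + (1 - duty) * p_sleep_mw
    return avg_power, period_s        # an initiator may wait a full period

for period in (0.1, 1.0, 10.0):
    p, lat = stem_tradeoff(period, listen_s=0.01)
    print(f"period {period:5.1f}s -> {p:7.4f} mW avg, <= {lat:.1f}s latency")
# The 0.1s -> 10s sweep spans roughly two orders of magnitude in energy,
# mirroring the savings-versus-latency trade-off the abstract describes.
```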

Posted Content
TL;DR: MPICH-G2 as discussed by the authors is a Grid-enabled implementation of the Message Passing Interface (MPI) that allows a user to run MPI programs across multiple computers, at the same or different sites, using the same commands that would be used on a parallel computer.
Abstract: Application development for distributed computing "Grids" can benefit from tools that variously hide or enable application-level management of critical aspects of the heterogeneous environment. As part of an investigation of these issues, we have developed MPICH-G2, a Grid-enabled implementation of the Message Passing Interface (MPI) that allows a user to run MPI programs across multiple computers, at the same or different sites, using the same commands that would be used on a parallel computer. This library extends the Argonne MPICH implementation of MPI to use services provided by the Globus Toolkit for authentication, authorization, resource allocation, executable staging, and I/O, as well as for process creation, monitoring, and control. Various performance-critical operations, including startup and collective operations, are configured to exploit network topology information. The library also exploits MPI constructs for performance management; for example, the MPI communicator construct is used for application-level discovery of, and adaptation to, both network topology and network quality-of-service mechanisms. We describe the MPICH-G2 design and implementation, present performance results, and review application experiences, including record-setting distributed simulations.

Proceedings ArticleDOI
07 Nov 2002
TL;DR: The ASCENT algorithm, which assesses its connectivity and adapts its participation in the multi-hop network topology based on the measured operating region, aims to establish a topology that provides communication and sensing coverage under stringent energy constraints.
Abstract: Advances in micro-sensor and radio technology will enable small but smart sensors to be deployed for a wide range of environmental monitoring applications. The low per-node cost will allow these wireless networks of sensors and actuators to be densely distributed. The nodes in these dense networks will coordinate to perform the distributed sensing tasks. Moreover, as described in this paper, the nodes can also coordinate to exploit the redundancy provided by high density, so as to extend overall system lifetime. The large number of nodes deployed in these systems will preclude manual configuration, and the environmental dynamics will preclude design-time pre-configuration. Therefore, nodes will have to self-configure to establish a topology that provides communication and sensing coverage under stringent energy constraints. In ASCENT, each node assesses its connectivity and adapts its participation in the multi-hop network topology based on the measured operating region. This paper motivates and describes the ASCENT algorithm and presents simulation and experimental measurements.

Proceedings ArticleDOI
07 Nov 2002
TL;DR: This work proposes a variation of the recent incremental topology generator of R. Albert and A. Barabasi that is more successful at matching the power law exponent and the clustering behavior of the Internet.
Abstract: Recent work has shown that the node degree in the WWW induced graph and the autonomous system (AS) level Internet topology exhibit power laws. Since then, several algorithms have been proposed to generate such power law graphs. We evaluate the effectiveness of these generators to generate representative AS-level topologies. Our conclusions are mixed. Although they (mostly) do a reasonable job at capturing the power law exponent, they do less well in capturing the clustering phenomena exhibited by the Internet topology. Based on these results, we propose a variation of the recent incremental topology generator of R. Albert and A. Barabasi (see Phys. Rev. Letters, vol.85, p.5234-7, 2000) that is more successful at matching the power law exponent and the clustering behavior of the Internet. Last, we comment on the small world behavior of the Internet topology.
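
For reference, a compact sketch of the baseline incremental (preferential attachment) generator that the proposed variant modifies; this is the standard Barabási-Albert process, not the authors' variant, and n, m are arbitrary demo values:

```python
import random

def barabasi_albert(n, m, seed=0):
    """Grow a graph by preferential attachment: each new node attaches
    to m existing nodes picked with probability proportional to degree."""
    rng = random.Random(seed)
    edges = []
    degree_urn = list(range(m))   # node ids, repeated once per unit degree
    for v in range(m, n):
        chosen = set()
        while len(chosen) < m:
            # Sampling from the urn implements degree-proportional choice.
            chosen.add(rng.choice(degree_urn))
        for u in chosen:
            edges.append((v, u))
            degree_urn += [v, u]  # both endpoints gain one degree
    return edges

print(len(barabasi_albert(10_000, 2)), "edges")  # = m * (n - m)
```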

Proceedings ArticleDOI
23 Sep 2002
TL;DR: A new heuristic, Embedded Wireless Multicast Advantage, is described that compares well with other proposals, and it is explained how it can be distributed; a formal proof that the problem of power-optimal broadcast is NP-complete is also provided.
Abstract: In all-wireless networks a crucial problem is to minimize energy consumption, as in most cases the nodes are battery-operated. We focus on the problem of power-optimal broadcast, for which it is well known that the broadcast nature of the radio transmission can be exploited to optimize energy consumption. Several authors have conjectured that the problem of power-optimal broadcast is NP-complete. We provide here a formal proof, both for the general case and for the geometric one; in the former case, the network topology is represented by a generic graph with arbitrary weights, whereas in the latter a Euclidean distance is considered. We then describe a new heuristic, Embedded Wireless Multicast Advantage. We show that it compares well with other proposals and we explain how it can be distributed.
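
To make the exploited "broadcast nature of the radio transmission" concrete: raising one node's transmission power reaches every node within the enlarged radius at no extra per-link cost. The sketch below is a BIP-style greedy in that spirit (BIP is Wieselthier et al.'s heuristic, not this paper's EWMA; the point set and path-loss exponent alpha=2 are assumptions):

```python
import math

def greedy_broadcast_power(points, src=0, alpha=2.0):
    """Grow a broadcast tree from src, always paying the smallest
    *incremental* power: raising a tree node's range to reach one more
    node also covers everything inside the new radius for free (the
    wireless multicast advantage)."""
    def cost(i, j):
        return math.dist(points[i], points[j]) ** alpha
    power, reached = {src: 0.0}, {src}
    while len(reached) < len(points):
        inc, i, j = min((max(0.0, cost(i, j) - power[i]), i, j)
                        for i in reached
                        for j in range(len(points)) if j not in reached)
        power[i] = max(power[i], cost(i, j))   # raise i's radius to cover j
        power[j] = 0.0
        reached.add(j)
    return sum(power.values())

pts = [(0, 0), (1, 0), (1.1, 0.2), (0, 2), (2.5, 2.5)]
print(f"total broadcast power: {greedy_broadcast_power(pts):.2f}")
```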

Journal ArticleDOI
TL;DR: It is argued that traditional shortest path routing protocols are surprisingly effective for engineering the flow of traffic in large IP networks.
Abstract: Traffic engineering involves adapting the routing of traffic to network conditions, with the joint goals of good user performance and efficient use of network resources. We describe an approach to intradomain traffic engineering that works within the existing deployed base of interior gateway protocols, such as Open Shortest Path First and Intermediate System-Intermediate System. We explain how to adapt the configuration of link weights, based on a networkwide view of the traffic and topology within a domain. In addition, we summarize the results of several studies of techniques for optimizing OSPF/IS-IS weights to the prevailing traffic. The article argues that traditional shortest path routing protocols are surprisingly effective for engineering the flow of traffic in large IP networks.
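
The weight-tuning loop the article summarizes can be caricatured in a few lines (my toy sketch: a real optimizer uses the measured traffic matrix, ECMP splitting, and a smarter search; the demo graph, demands, and weight range are assumptions):

```python
import random
import networkx as nx

def max_utilization(G, demands):
    """Route each (src, dst, volume) on its current shortest path and
    return the worst link utilization (no ECMP splitting: simplified)."""
    load = dict.fromkeys(G.edges, 0.0)
    for s, t, vol in demands:
        path = nx.shortest_path(G, s, t, weight="w")
        for u, v in zip(path, path[1:]):
            load[(u, v) if (u, v) in load else (v, u)] += vol
    return max(load[e] / G.edges[e]["cap"] for e in load)

def weight_search(G, demands, iters=300, seed=0):
    """Keep a random single-weight change whenever it lowers congestion."""
    rng, best = random.Random(seed), max_utilization(G, demands)
    for _ in range(iters):
        u, v = rng.choice(list(G.edges))
        old = G.edges[u, v]["w"]
        G.edges[u, v]["w"] = rng.randint(1, 20)   # candidate weight
        cand = max_utilization(G, demands)
        if cand < best:
            best = cand                           # keep the improvement
        else:
            G.edges[u, v]["w"] = old              # revert
    return best

G = nx.cycle_graph(6)
nx.set_edge_attributes(G, 1, "w")
nx.set_edge_attributes(G, 10.0, "cap")
demands = [(0, 3, 6.0), (1, 4, 6.0), (2, 5, 6.0)]
print("before:", max_utilization(G, demands), "after:", weight_search(G, demands))
```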

Proceedings ArticleDOI
19 May 2002
TL;DR: It is shown that under weak hypotheses on the class of allowable edge latency functions, the worst-case ratio between the total latency of a Nash equilibrium and of a minimum-latency routing for any multicommodity flow network is achieved by a single-commodity instance on a set of parallel links.
Abstract: We study the degradation in network performance caused by the selfish behavior of noncooperative network users. We consider a directed network in which each edge possesses a latency function describing the common latency incurred by all traffic on the edge as a function of the edge congestion. Given a rate of traffic between each pair of nodes in the network, we aspire toward an assignment of traffic to paths in which the sum of all travel times (the total latency) is minimized; however, in many settings network users are free to route their traffic in a selfish manner, without regard to the total latency. We therefore assume that each network user routes its traffic on the minimum-latency path available to it, given the network congestion caused by the other users. In general such a "selfishly motivated" assignment of traffic to paths (a Nash equilibrium) will not minimize the total latency; hence, selfish behavior carries the cost of decreased network performance. We quantify this degradation in network performance via the price of anarchy, defined as the worst possible ratio between the total latency of a Nash equilibrium and of a minimum-latency routing of the traffic. In this paper, we show that the underlying network topology plays no role in the determination of the price of anarchy. Specifically, we show that under weak hypotheses on the class of allowable edge latency functions, the worst-case ratio between the total latency of a Nash equilibrium and of a minimum-latency routing for any multicommodity flow network is achieved by a single-commodity instance on a set of parallel links. In the special case where the class of allowable latency functions includes all of the constant functions, we prove that a network with only two parallel links suffices to achieve the worst-possible ratio. Informally, these results imply that the inefficiency inherent in a flow at Nash equilibrium stems from the inability of selfish users to discern which of two competing routes is superior, and not from the topological complexity arising from the diverse intersections of many paths belonging to different commodities. Our proof techniques also give powerful methods for computing the price of anarchy with respect to an arbitrary class of latency functions. We apply these methods to function classes that have been well studied in the literature (such as degree-bounded polynomials and functions of the form $\ell(x) = (u-x)^{-1}$ that are used to model edges with capacity u), thereby achieving the first tight analyses of the price of anarchy for significant classes of latency functions outside the class of linear functions.
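
The two-parallel-link worst case can be checked by hand on the standard Pigou example for linear latencies (a classical calculation consistent with the abstract, not taken from this paper's text):

```latex
% Pigou's example: one unit of traffic, two parallel links with
% \ell_1(x) = 1 (constant latency) and \ell_2(x) = x (congestible link).
%
% Nash: link 2 is never worse than link 1, so all traffic uses it:
C_{\mathrm{Nash}} = 1 \cdot \ell_2(1) = 1.
% Optimum: split the flow evenly between the two links:
C_{\mathrm{OPT}} = \tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot\tfrac{1}{2} = \tfrac{3}{4}.
% Price of anarchy: 1 / (3/4) = 4/3, the tight bound for linear latencies.
```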

Proceedings ArticleDOI
19 Aug 2002
TL;DR: It is found that network generators based on the degree distribution more accurately capture the large-scale structure of measured topologies, and an explanation is sought by examining the nature of hierarchy in the Internet more closely.
Abstract: Following the long-held belief that the Internet is hierarchical, the network topology generators most widely used by the Internet research community, Transit-Stub and Tiers, create networks with a deliberately hierarchical structure. However, in 1999 a seminal paper by Faloutsos et al. revealed that the Internet's degree distribution is a power-law. Because the degree distributions produced by the Transit-Stub and Tiers generators are not power-laws, the research community has largely dismissed them as inadequate and proposed new network generators that attempt to generate graphs with power-law degree distributions. Contrary to much of the current literature on network topology generators, this paper starts with the assumption that it is more important for network generators to accurately model the large-scale structure of the Internet (such as its hierarchical structure) than to faithfully imitate its local properties (such as the degree distribution). The purpose of this paper is to determine, using various topology metrics, which network generators better represent this large-scale structure. We find, much to our surprise, that network generators based on the degree distribution more accurately capture the large-scale structure of measured topologies. We then seek an explanation for this result by examining the nature of hierarchy in the Internet more closely; we find that degree-based generators produce a form of hierarchy that closely resembles the loosely hierarchical nature of the Internet.

01 Jan 2002
TL;DR: In this article, the authors present version 3.0 of Inet, an Autonomous system (AS) level Internet topology generator, which improves upon Inet-2.2 by creating topologies with more accurate degree distributions and minimum vertex covers.
Abstract: In this report we present version 3.0 of Inet, an Autonomous System (AS) level Internet topology generator. Our understanding of the Internet topology is quickly evolving, and thus, our understanding of how synthetic topologies should be generated is changing too. We document our analysis of Inet-2.2, which highlighted two shortcomings in its topologies. Inet-3.0 improves upon Inet-2.2's two main weaknesses by creating topologies with more accurate degree distributions and minimum vertex covers as compared to Internet topologies. We also examine numerous other metrics to show that Inet-3.0 better approximates the actual Internet AS topology than does Inet-2.2. Inet-3.0's topologies still do not well represent the Internet in terms of maximum clique size and clustering coefficient. These related problems stress a need for a better understanding of Internet connectivity and will be addressed in future work.

Proceedings ArticleDOI
07 Nov 2002
TL;DR: Results show that a distortion reduction of about 20 to 40% can be realized even when the underlying CDN is not designed with MDC streaming in mind, and for certain topologies, MDC requires about 50% fewer CDN servers than conventional streaming techniques to achieve the same distortion at the clients.
Abstract: We propose a system that improves the performance of streaming media CDN by exploiting the path diversity provided by existing CDN infrastructure. Path diversity is provided by the different network paths that exist between a client and its nearby edge servers; and multiple description (MD) coding is coupled with this path diversity to provide resilience to losses. In our system, MD coding is used to code a media stream into multiple complementary descriptions, which are distributed across the edge servers in the CDN. When a client requests a media stream, it is directed to multiple nearby servers which host complementary descriptions. These servers simultaneously stream these complementary descriptions to the client over different network paths. This paper provides distortion models for MDC video and conventional video. We use these models to select the optimal pair of servers with complementary descriptions for each client while accounting for path lengths and path jointness and disjointness. We also use these models to evaluate the performance of MD streaming over CDN in a number of real and generated network topologies. Our results show that a distortion reduction of about 20 to 40% can be realized even when the underlying CDN is not designed with MDC streaming in mind. Also, for certain topologies, MDC requires about 50% fewer CDN servers than conventional streaming techniques to achieve the same distortion at the clients.

Proceedings ArticleDOI
28 Oct 2002
TL;DR: In this article, the authors present a tree browser that adds dynamic rescaling of branches of the tree to best fit the available screen space, optimized camera movement, and the use of preview icons summarizing the topology of the branches that cannot be expanded.
Abstract: We present a novel tree browser that builds on the conventional node-link tree diagrams. It adds dynamic rescaling of branches of the tree to best fit the available screen space, optimized camera movement, and the use of preview icons summarizing the topology of the branches that cannot be expanded. In addition, it includes integrated search and filter functions. This paper reflects on the evolution of the design and highlights the principles that emerged from it. A controlled experiment showed benefits for navigation to previously visited nodes and estimation of overall tree topology.

Proceedings ArticleDOI
09 Mar 2002
TL;DR: This work proposes a new technique, called sparse topology and energy management (STEM), that dramatically improves the network lifetime by exploiting the fact that most of the time, the network is only sensing its environment waiting for an event to happen.
Abstract: In wireless sensor networks, where energy efficiency is the key design challenge, the energy consumption is typically dominated by the node's communication subsystem. It can only be reduced significantly by transitioning the embedded radio to a sleep state, at which point the node essentially retracts from the network topology. Existing topology management schemes have focused on cleverly selecting which nodes can turn off their radio, without sacrificing the capacity of the network. We propose a new technique, called sparse topology and energy management (STEM), that dramatically improves the network lifetime by exploiting the fact that most of the time, the network is only sensing its environment waiting for an event to happen. By alleviating the restriction of network capacity preservation, we can trade off extensive energy savings for an increased latency to set up a multi-hop path. We will also show how STEM integrates efficiently with existing topology management techniques.

Proceedings ArticleDOI
07 Nov 2002
TL;DR: This work presents a novel localized networking protocol that constructs a planar 2.5-spanner of UDG, called the localized Delaunay triangulation, as the network topology, and shows that the delivery rates of existing localized routing protocols are increased when the localized Delaunay triangulation is used instead of several previously proposed topologies.
Abstract: Several localized routing protocols (see Bose, P. and Morin, P., Proc. 10th Annual Int. Symp. on Algorithms and Computation ISAAC, 1999) guarantee the delivery of packets when the underlying network topology is the Delaunay triangulation of all wireless nodes. However, it is expensive to construct the Delaunay triangulation in a distributed manner. Given a set of wireless nodes, we more accurately model the network as a unit-disk graph, UDG, in which a link between two nodes exists only if the distance between them is at most the maximum transmission range. Given a graph H, a spanning subgraph G of H is a t-spanner if the length of the shortest path connecting any two points in G is no more than t times the length of the shortest path connecting the two points in H. We present a novel localized networking protocol that constructs a planar 2.5-spanner of UDG, called the localized Delaunay triangulation, as the network topology. It contains all edges that are in both the UDG and the Delaunay triangulation of all wireless nodes. Our experiments show that the delivery rates of existing localized routing protocols are increased when the localized Delaunay triangulation is used instead of several previously proposed topologies. The total communication cost of our networking protocol is O(n log n) bits. Moreover, the computation cost of each node u is O(d_u log d_u), where d_u is the number of 1-hop neighbors of u in UDG.
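
A centralized sketch of the target topology (my code; the paper's contribution is computing this *locally* with O(n log n) bits of communication, which this stand-in ignores, and the node placement and unit radius are assumptions): keep exactly the Delaunay edges that are also unit-disk edges:

```python
import numpy as np
from scipy.spatial import Delaunay

def ldel_edges(points, radius=1.0):
    """Edges in both the Delaunay triangulation and the unit-disk graph.
    Centralized stand-in for the paper's localized construction."""
    pts = np.asarray(points)
    tri = Delaunay(pts)
    edges = set()
    for simplex in tri.simplices:            # each triangle's three edges
        for i in range(3):
            u, v = sorted((simplex[i], simplex[(i + 1) % 3]))
            if np.linalg.norm(pts[u] - pts[v]) <= radius:
                edges.add((u, v))
    return edges

rng = np.random.default_rng(0)
nodes = rng.uniform(0, 3, size=(50, 2))
print(len(ldel_edges(nodes, radius=1.0)), "edges kept")
```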

Proceedings ArticleDOI
10 Aug 2002
TL;DR: This work presents a new distributed algorithm that can solve the nearest-neighbor problem for these networks and describes its solution in the context of Tapestry, an overlay network infrastructure that employs techniques proposed by Plaxton, Rajaraman, and Richa.
Abstract: Modern networking applications replicate data and services widely, leading to a need for location-independent routing -- the ability to route queries directly to objects using names independent of the objects' physical locations. Two important properties of a routing infrastructure are routing locality and rapid adaptation to arriving and departing nodes. We show how these two properties can be efficiently achieved for certain network topologies. To do this, we present a new distributed algorithm that can solve the nearest-neighbor problem for these networks. We describe our solution in the context of Tapestry, an overlay network infrastructure that employs techniques proposed by Plaxton, Rajaraman, and Richa [14].

Proceedings ArticleDOI
16 Nov 2002
TL;DR: This work introduces a framework for solving online problems that aim to minimize the congestion in general topology networks and achieves a competitive ratio of O(log^3 n) with respect to the congestion of the network links.
Abstract: A principal task in parallel and distributed systems is to reduce the communication load in the interconnection network, as this is usually the major bottleneck for the performance of distributed applications. We introduce a framework for solving online problems that aim to minimize the congestion (i.e. the maximum load of a network link) in general topology networks. We apply this framework to the problem of online routing of virtual circuits and to a dynamic data management problem. For both scenarios we achieve a competitive ratio of O(log^3 n) with respect to the congestion of the network links. Our online algorithm for the routing problem has the remarkable property that it is oblivious, i.e., the path chosen for a virtual circuit is independent of the current network load. Oblivious routing strategies can easily be implemented in distributed environments and have therefore been intensively studied for certain network topologies such as meshes, tori and hypercubic networks. This is the first oblivious path selection algorithm that achieves a polylogarithmic competitive ratio in general networks.

Patent
02 Jan 2002
TL;DR: In this paper, a system and method using inheritance for the configuration, management, and/or monitoring of computer applications and devices via a computer network are disclosed. The method generally comprises determining a hierarchical tree structure based upon locations of devices in a network topology, each device being a node in the hierarchical tree, determining policies for each node in hierarchical tree to be enforced by an agent corresponding to each node, the agent being in communication with the device and the resources corresponding to the device, and communicating the policy to the corresponding agent.
Abstract: A system and method using inheritance for the configuration, management, and/or monitoring of computer applications and devices via a computer network are disclosed. The method generally comprises determining a hierarchical tree structure based upon locations of devices in a network topology, each device being a node in the hierarchical tree structure, determining policies for each node in the hierarchical tree structure to be enforced by an agent corresponding to each node, the agent being in communication with the device and the resources corresponding to the device, and communicating the policy to the corresponding agent, wherein the policies corresponding to the resources of each device are selectively inherited along the hierarchical tree structure of the network directory.
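
The selective inheritance the claims describe maps naturally onto a parent-chain lookup; a minimal, hypothetical sketch (names and policies invented for illustration, not from the patent):

```python
class TopologyNode:
    """Device node in the hierarchy; policies inherit from ancestors
    unless overridden locally (the closest definition wins)."""
    def __init__(self, name, parent=None, **policies):
        self.name, self.parent, self.policies = name, parent, policies

    def effective_policy(self, key, default=None):
        node = self
        while node is not None:
            if key in node.policies:
                return node.policies[key]
            node = node.parent           # walk up toward the root
        return default

root   = TopologyNode("datacenter", log_level="warn", fw="strict")
rack   = TopologyNode("rack-7", parent=root, log_level="debug")
server = TopologyNode("web-01", parent=rack)

print(server.effective_policy("log_level"))  # "debug"  (from rack-7)
print(server.effective_policy("fw"))         # "strict" (from datacenter)
```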

Journal ArticleDOI
01 Jul 2002
TL;DR: The current ModelNet prototype is able to accurately subject thousands of instances of a distributed application to Internet-like conditions with gigabits of bisection bandwidth, including novel techniques to balance emulation accuracy against scalability.
Abstract: This paper presents ModelNet, a scalable Internet emulation environment that enables researchers to deploy unmodified software prototypes in a configurable Internet-like environment and subject them to faults and varying network conditions. Edge nodes running user-specified OS and application software are configured to route their packets through a set of ModelNet core nodes, which cooperate to subject the traffic to the bandwidth, congestion constraints, latency, and loss profile of a target network topology. This paper describes and evaluates the ModelNet architecture and its implementation, including novel techniques to balance emulation accuracy against scalability. The current ModelNet prototype is able to accurately subject thousands of instances of a distributed application to Internet-like conditions with gigabits of bisection bandwidth. Experiments with several large-scale distributed services demonstrate the generality and effectiveness of the infrastructure.