
Showing papers on "Network topology published in 2000"


Proceedings ArticleDOI
26 Mar 2000
TL;DR: This work considers the problem of adjusting the transmit powers of nodes in a multihop wireless network as a constrained optimization problem with two constraints (connectivity and biconnectivity) and one optimization objective (maximum power used).
Abstract: We consider the problem of adjusting the transmit powers of nodes in a multihop wireless network (also called an ad hoc network) to create a desired topology. We formulate it as a constrained optimization problem with two constraints (connectivity and biconnectivity) and one optimization objective (maximum power used). We present two centralized algorithms for use in static networks, and prove their optimality. For mobile networks, we present two distributed heuristics that adaptively adjust node transmit powers in response to topological changes and attempt to maintain a connected topology using minimum power. We analyze the throughput, delay, and power consumption of our algorithms using a prototype software implementation, an emulation of a power-controllable radio, and a detailed channel model. Our results show that the performance of multihop wireless networks in practice can be substantially increased with topology control.
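The connectivity-only variant of this optimization has a well-known structure that can be sketched briefly (a sketch under assumptions, not necessarily either of the paper's two centralized algorithms): assuming a symmetric per-link power requirement, the smallest "maximum transmit power" that keeps the network connected is the bottleneck of a minimum spanning tree, found by adding links in order of required power until one component remains.

```python
# Sketch: minimum max-power for connectivity via sorted links + union-find.
# Assumes symmetric link power costs; the biconnectivity constraint from the
# paper is not handled here.

class DSU:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.parent[ra] = rb
        return True

def min_max_power(n, links):
    """links: list of (power_needed, u, v). Returns the smallest max power
    such that the graph restricted to affordable links is connected."""
    dsu, components = DSU(n), n
    for power, u, v in sorted(links):
        if dsu.union(u, v):
            components -= 1
            if components == 1:
                return power
    return None  # graph cannot be connected at any power

links = [(1.0, 0, 1), (2.5, 1, 2), (4.0, 0, 2), (3.0, 2, 3)]
print(min_max_power(4, links))  # -> 3.0
```

The sorted sweep makes the result optimal for the connectivity constraint: no link cheaper than the returned value could have merged the last two components.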

1,728 citations


Proceedings ArticleDOI
26 Mar 2000
TL;DR: It is demonstrated that even though DSR and AODV share a similar on-demand behavior the differences in the protocol mechanics can lead to significant performance differentials.
Abstract: Ad hoc networks are characterized by multi-hop wireless connectivity, frequently changing network topology and the need for efficient dynamic routing protocols. We compare the performance of two prominent on-demand routing protocols for mobile ad hoc networks - dynamic source routing (DSR) and ad hoc on-demand distance vector routing (AODV). A detailed simulation model with MAC and physical layer models is used to study inter-layer interactions and their performance implications. We demonstrate that even though DSR and AODV share a similar on-demand behavior the differences in the protocol mechanics can lead to significant performance differentials. The performance differentials are analyzed using varying network load, mobility and network size. Based on the observations, we make recommendations about how the performance of either protocol can be improved.

1,629 citations


Journal ArticleDOI
TL;DR: The Virtual Inter Network Testbed (VINT) project as discussed by the authors has enhanced its network simulator and related software to provide several practical innovations that broaden the conditions under which researchers can evaluate network protocols.
Abstract: Network researchers must test Internet protocols under varied conditions to determine whether they are robust and reliable. The paper discusses the Virtual Inter Network Testbed (VINT) project which has enhanced its network simulator and related software to provide several practical innovations that broaden the conditions under which researchers can evaluate network protocols.

784 citations


Proceedings ArticleDOI
07 Dec 2000
TL;DR: A new multi-channel MAC protocol is proposed, which follows an "on-demand" style to assign channels to mobile hosts, flexibly adapts to host mobility, and exchanges only a few control messages to achieve channel assignment and medium access.
Abstract: The wireless mobile ad hoc network (MANET) architecture has received a lot of attention recently. This paper considers the access of multiple channels in a MANET with multi-hop communication behavior. We point out several interesting issues when using multiple channels. We then propose a new multi-channel MAC protocol, which is characterized by the following features: (i) it follows an "on-demand" style to assign channels to mobile hosts, (ii) the number of channels required is independent of the network topology and degree, (iii) it flexibly adapts to host mobility and exchanges only a few control messages to achieve channel assignment and medium access, and (iv) no clock synchronization is required. Compared to existing protocols, some assign channels to hosts statically (thus a host will occupy a channel even when it has no intention to transmit), some require a number of channels which is a function of the maximum connectivity, and some necessitate clock synchronization among all hosts in the MANET. Extensive simulations are conducted to evaluate the proposed protocol.

776 citations


Proceedings ArticleDOI
23 Sep 2000
TL;DR: This work proposes a scheme to improve existing on-demand routing protocols by creating a mesh and providing multiple alternate routes; the scheme is applied to the Ad-hoc On-Demand Distance Vector (AODV) protocol and the performance improvements are evaluated by simulation.
Abstract: Nodes in mobile ad hoc networks communicate with one another via packet radios on wireless multihop links. Because of node mobility and power limitations, the network topology changes frequently. Routing protocols therefore play an important role in mobile multihop network communications. A trend in ad hoc network routing is the reactive on-demand philosophy where routes are established only when required. Most of the protocols in this category, however, use a single route and do not utilize multiple alternate paths. We propose a scheme to improve existing on-demand routing protocols by creating a mesh and providing multiple alternate routes. Our algorithm establishes the mesh and multipaths without transmitting any extra control message. We apply our scheme to the Ad-hoc On-Demand Distance Vector (AODV) protocol and evaluate the performance improvements by simulation.

711 citations


Proceedings ArticleDOI
20 Nov 2000
TL;DR: In this work, an efficient approach to reducing broadcast redundancy is proposed: local topology information and statistics about duplicate broadcasts are used to avoid unnecessary rebroadcasts.
Abstract: Flooding in mobile ad hoc networks has poor scalability as it leads to serious redundancy, contention and collision. In this paper, we propose an efficient approach to reduce the broadcast redundancy. In our approach, local topology information and the statistical information about the duplicate broadcasts are utilized to avoid unnecessary rebroadcasts. Simulation is conducted to compare the performance of our approach and flooding. The simulation results demonstrate the advantages of our approach. It can greatly reduce the redundant messages, thus saving much network bandwidth and energy. It can also enhance the reliability of broadcasting. It can be used in static or mobile wireless networks to implement scalable broadcast or multicast communications.
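One simple duplicate-statistics rule of the kind the abstract describes is counter-based suppression: a node rebroadcasts only if it has heard fewer than a threshold number of copies before its turn. This toy comparison is an illustrative assumption, not the paper's exact scheme, which also uses local topology information.

```python
# Sketch of counter-based rebroadcast suppression (an assumption; the paper
# combines local topology knowledge with duplicate statistics).
from collections import defaultdict

def broadcast(adj, source, threshold=None):
    """Return (transmissions, nodes reached). threshold=None = blind flooding."""
    heard = defaultdict(int)              # copies of the message heard so far
    received, frontier, tx = {source}, [source], 0
    while frontier:
        nxt = []
        for node in frontier:
            if threshold is not None and heard[node] >= threshold:
                continue                  # heard enough duplicates: suppress
            tx += 1
            for nb in adj[node]:
                heard[nb] += 1
                if nb not in received:
                    received.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return tx, len(received)

# Fully-connected 5-node neighborhood: flooding sends 5 copies, the
# counter-based rule sends 2 while still reaching every node.
K5 = {i: [j for j in range(5) if j != i] for i in range(5)}
print(broadcast(K5, 0))                   # -> (5, 5)
print(broadcast(K5, 0, threshold=2))      # -> (2, 5)
```

The denser the neighborhood, the more duplicates each node hears and the larger the savings, which matches the abstract's claim that redundancy is worst exactly where flooding scales poorly.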

663 citations


Proceedings ArticleDOI
18 Jun 2000
TL;DR: FSR introduces the notion of multi-level fisheye scope to reduce routing update overhead in large networks and is presented as a simple, efficient and scalable routing solution in a mobile, ad hoc environment.
Abstract: This paper presents a novel routing protocol for wireless ad hoc networks-fisheye state routing (FSR). FSR introduces the notion of multi-level fisheye scope to reduce routing update overhead in large networks. Nodes exchange link state entries with their neighbors with a frequency which depends on distance to destination. From link state entries, nodes construct the topology map of the entire network and compute optimal routes. Simulation experiments show that FSR is a simple, efficient and scalable routing solution in a mobile, ad hoc environment.
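The fisheye scoping idea, where update frequency falls off with distance, can be sketched with a tiny schedule (the scope radii and the exponential refresh rule here are illustrative assumptions, not FSR's published parameters):

```python
# Sketch of multi-level fisheye scoping: link-state entries for nearby
# destinations go out every cycle, farther ones exponentially less often.

SCOPE_RADII = [1, 2, 4]        # assumed hop boundaries of the fisheye levels

def scope_level(hops):
    for level, radius in enumerate(SCOPE_RADII):
        if hops <= radius:
            return level
    return len(SCOPE_RADII)    # everything beyond the outermost scope

def due_for_update(hops, tick):
    """Send the entry this tick iff its scope level is scheduled;
    level k is refreshed every 2**k ticks."""
    return tick % (2 ** scope_level(hops)) == 0

# At tick 2, 1-hop entries (level 0, every tick) and 2-hop entries
# (level 1, every 2 ticks) go out; 4-hop entries (level 2) wait for tick 4.
print([due_for_update(h, 2) for h in (1, 2, 4)])   # -> [True, True, False]
```

This is what keeps the per-node update overhead sub-linear in network size: distant entries are stale between refreshes, but by the time a packet gets near its destination the local, frequently refreshed entries take over.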

654 citations


Journal ArticleDOI
28 Aug 2000
TL;DR: This paper presents a two-year study of Internet routing convergence through the experimental instrumentation of key portions of the Internet infrastructure, including both passive data collection and fault-injection machines at major Internet exchange points, and describes several unexpected properties of convergence.
Abstract: This paper examines the latency in Internet path failure, failover and repair due to the convergence properties of inter-domain routing. Unlike switches in the public telephony network which exhibit failover on the order of milliseconds, our experimental measurements show that inter-domain routers in the packet switched Internet may take tens of minutes to reach a consistent view of the network topology after a fault. These delays stem from temporary routing table oscillations formed during the operation of the BGP path selection process on Internet backbone routers. During these periods of delayed convergence, we show that end-to-end Internet paths will experience intermittent loss of connectivity, as well as increased packet loss and latency. We present a two-year study of Internet routing convergence through the experimental instrumentation of key portions of the Internet infrastructure, including both passive data collection and fault-injection machines at major Internet exchange points. Based on data from the injection and measurement of several hundred thousand inter-domain routing faults, we describe several unexpected properties of convergence and show that the measured upper bound on Internet inter-domain routing convergence delay is an order of magnitude slower than previously thought. Our analysis also shows that the upper theoretic computational bound on the number of router states and control messages exchanged during the process of BGP convergence is factorial with respect to the number of autonomous systems in the Internet. Finally, we demonstrate that much of the observed convergence delay stems from specific router vendor implementation decisions and ambiguity in the BGP specification.

542 citations


Journal ArticleDOI
TL;DR: It is demonstrated that the entire optical network design problem can be considerably simplified and made computationally tractable, and that terminating the optimization within the first few iterations of the branch-and-bound method provides high-quality solutions.
Abstract: We present algorithms for the design of optimal virtual topologies embedded on wide-area wavelength-routed optical networks. The physical network architecture employs wavelength-conversion-enabled wavelength-routing switches (WRS) at the routing nodes, which allow the establishment of circuit-switched all-optical wavelength-division multiplexed (WDM) channels, called lightpaths. We assume packet-based traffic in the network, such that a packet travelling from its source to its destination may have to multihop through one or more such lightpaths. We present an exact integer linear programming (ILP) formulation for the complete virtual topology design, including choice of the constituent lightpaths, routes for these lightpaths, and intensity of packet flows through these lightpaths. By minimizing the average packet hop distance in our objective function and by relaxing the wavelength-continuity constraints, we demonstrate that the entire optical network design problem can be considerably simplified and made computationally tractable. Although an ILP may take an exponential amount of time to obtain an exact optimal solution, we demonstrate that terminating the optimization within the first few iterations of the branch-and-bound method provides high-quality solutions. We ran experiments using the CPLEX optimization package on the NSFNET topology, a subset of the PACBELL network topology, as well as a third random topology to substantiate this conjecture. Minimizing the average packet hop distance is equivalent to maximizing the total network throughput under balanced flows through the lightpaths. The problem formulation can be used to design a balanced network, such that the utilizations of both transceivers and wavelengths in the network are maximized, thus reducing the cost of the network equipment. 
We analyze the trade-offs in budgeting of resources (transceivers and switch sizes) in the optical network, and demonstrate how an improperly designed network may have low utilization of any one of these resources. We also use the problem formulation to provide a reconfiguration methodology in order to adapt the virtual topology to changing traffic conditions.

486 citations


Journal ArticleDOI
TL;DR: The design, implementation, and evaluation of INSIGNIA, an IP-based quality-of-service framework that supports adaptive services in mobile ad hoc networks, with particular attention to the performance of the in-band signaling system, which helps counter time-varying network dynamics in support of the delivery of adaptive services.

460 citations


Journal ArticleDOI
01 Apr 2000
TL;DR: Four factors in the formation of Internet topologies are examined and it is observed that some generated topologies may not obey power laws P1 and P2, and the value of α in P3 and P4 can be used as a litmus test for the representativeness of a generated topology.
Abstract: Recent empirical studies [6] have shown that Internet topologies exhibit power laws of the form y = x^α for the following relationships: (P1) outdegree of node (domain or router) versus rank; (P2) number of nodes versus outdegree; (P3) number of node pairs within a neighborhood versus neighborhood size (in hops); and (P4) eigenvalues of the adjacency matrix versus rank. However, causes for the appearance of such power laws have not been convincingly given. In this paper, we examine four factors in the formation of Internet topologies. These factors are (F1) preferential connectivity of a new node to existing nodes; (F2) incremental growth of the network; (F3) distribution of nodes in space; and (F4) locality of edge connections. In synthetically generated network topologies, we study the relevance of each factor in causing the aforementioned power laws as well as other properties, namely diameter, average path length and clustering coefficient. Different kinds of network topologies are generated: (T1) topologies generated using our parametrized generator, which we call BRITE; (T2) random topologies generated using the well-known Waxman model [12]; (T3) Transit-Stub topologies generated using the GT-ITM tool [3]; and (T4) regular grid topologies. We observe that some generated topologies may not obey power laws P1 and P2. Thus, the existence of these power laws can be used to validate the accuracy of a given tool in generating representative Internet topologies. Power laws P3 and P4 were observed in nearly all considered topologies, but different topologies showed different values of the power exponent α. Thus, while the presence of power laws P3 and P4 does not give strong evidence for the representativeness of a generated topology, the value of α in P3 and P4 can be used as a litmus test for the representativeness of a generated topology.
We also find that factors F1 and F2 are the key contributors in our study which provide the resemblance of our generated topologies to that of the Internet.
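The exponent α used as a litmus test above is conventionally estimated by ordinary least squares on log-log data; a minimal sketch:

```python
# Sketch: estimate the power-law exponent alpha in y = x**alpha by ordinary
# least squares on log-transformed data, as is commonly done for laws P1-P4.
import math

def fit_alpha(xs, ys):
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den                      # slope on the log-log plot

# Synthetic data that follows y = x**-1.5 exactly recovers alpha = -1.5.
xs = [1, 2, 4, 8, 16]
ys = [x ** -1.5 for x in xs]
print(round(fit_alpha(xs, ys), 6))        # -> -1.5
```

On real rank-degree or eigenvalue data the fit is noisy, so comparing the fitted α of a generated topology against the empirically observed value is exactly the kind of litmus test the abstract proposes.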

Proceedings ArticleDOI
22 Oct 2000
TL;DR: Various enhancements to unicast and multicast routing protocols using mobility prediction are presented; the proposed scheme utilizes GPS location information, and the effectiveness of mobility prediction is evaluated by simulation.
Abstract: Wireless networks allow a more flexible communication model than traditional networks since the user is not limited to a fixed physical location. Unlike cellular wireless networks, ad hoc wireless networks do not have any fixed communication infrastructure. In ad hoc networks, routes are mostly multihop and network hosts communicate via packet radios. Each host moves in an arbitrary manner and thus routes are subject to frequent disconnections. In typical mobile networks, nodes exhibit some degree of regularity in the mobility pattern. By exploiting a mobile user's non-random traveling pattern, we can predict the future state of network topology and thus provide a transparent network access during the period of topology changes. In this paper we present various enhancements to unicast and multicast routing protocols using mobility prediction. The proposed scheme utilizes GPS location information. By simulation, we evaluate the effectiveness of mobility prediction.
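GPS-based mobility prediction of this kind is often built on a closed-form link expiration time: if two nodes move at constant velocity, the moment their distance exceeds the transmission range r has an exact solution. The formula below is the standard closed form; whether the paper's scheme uses exactly this expression is an assumption.

```python
# Sketch of GPS-based link lifetime prediction: time until two nodes moving
# at constant velocity drift beyond transmission range r.
import math

def link_expiration_time(p1, v1, p2, v2, r):
    a, c = v1[0] - v2[0], v1[1] - v2[1]   # relative velocity components
    b, d = p1[0] - p2[0], p1[1] - p2[1]   # relative position components
    if a == 0 and c == 0:
        return math.inf                    # identical velocity: never expires
    ac2 = a * a + c * c
    disc = ac2 * r * r - (a * d - b * c) ** 2
    if disc < 0:
        return 0.0                         # trajectories never come in range
    return (-(a * b + c * d) + math.sqrt(disc)) / ac2

# Node 2 starts 10 m away and recedes along x at 1 m/s; range 100 m,
# so the link survives another 90 s.
print(link_expiration_time((0, 0), (0, 0), (10, 0), (1, 0), 100))  # -> 90.0
```

A routing protocol can then prefer the route whose minimum per-link expiration time is largest, or trigger a handoff shortly before the predicted expiry, which is the "transparent access during topology changes" the abstract aims for.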

Journal ArticleDOI
TL;DR: An algorithmic framework is established that allows for a variety of dynamic SPT algorithms including dynamic versions of the well-known Dijkstra, Bellman-Ford, D'Esopo-Pape algorithms, and to establish proofs of correctness for these algorithms in a unified way.
Abstract: The open shortest path first (OSPF) and IS-IS routing protocols widely used in today's Internet compute a shortest path tree (SPT) from each router to other routers in a routing area. Many existing commercial routers recompute an SPT from scratch following changes in the link states of the network. Such recomputation of an entire SPT is inefficient and may consume a considerable amount of CPU time. Moreover, as there may coexist multiple SPTs in a network with a set of given link states, recomputation from scratch causes frequent unnecessary changes in the topology of an existing SPT and may lead to routing instability. We present new dynamic SPT algorithms that make use of the structure of the previously computed SPT. Besides efficiency, our algorithm design objective is to achieve routing stability by making minimum changes to the topology of an existing SPT (while maintaining the shortest path property) when some link states in the network have changed. We establish an algorithmic framework that allows us to characterize a variety of dynamic SPT algorithms, including dynamic versions of the well-known Dijkstra, Bellman-Ford, and D'Esopo-Pape algorithms, and to establish proofs of correctness for these algorithms in a unified way. The theoretical asymptotic complexity of our new dynamic algorithms matches the best known results in the literature.
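The static baseline the dynamic algorithms improve on can be sketched as plain Dijkstra with explicit parent pointers (an illustrative sketch, not the paper's incremental algorithms; a dynamic variant would start from this tree and re-attach only the subtree affected by a link-state change):

```python
# Baseline sketch: compute an SPT with Dijkstra, keeping parent pointers so
# the tree topology is explicit and available for later incremental repair.
import heapq

def spt(adj, root):
    dist, parent = {root: 0}, {root: None}
    heap = [(0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, parent

adj = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 2}, "C": {"A": 4, "B": 2}}
dist, parent = spt(adj, "A")
print(dist["C"], parent["C"])   # -> 3 B
```

Recomputing this from scratch on every link-state change is what the abstract calls inefficient; the parent map is the state a dynamic algorithm preserves to keep topology changes minimal.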

Proceedings ArticleDOI
27 Nov 2000
TL;DR: A weighted clustering algorithm (WCA) which takes into consideration the ideal degree, transmission power, mobility and battery power of a mobile node to maintain the stability of the network, thus lowering the computation and communication costs associated with it.
Abstract: We consider a multi-cluster, multi-hop packet radio network architecture for wireless systems which can dynamically adapt itself with the changing network configurations. Due to the dynamic nature of the mobile nodes, their association and dissociation to and from clusters perturb the stability of the system, and hence a reconfiguration of the system is unavoidable. At the same time it is vital to keep the topology stable as long as possible. The clusterheads, which form a dominant set in the network, decide the topology and are responsible for its stability. In this paper, we propose a weighted clustering algorithm (WCA) which takes into consideration the ideal degree, transmission power, mobility and battery power of a mobile node. We try to keep the number of nodes in a cluster around a pre-defined threshold to facilitate the optimal operation of the medium access control (MAC) protocol. Our clusterhead election procedure is not periodic as in earlier research, but adapts based on the dynamism of the nodes. This on-demand execution of WCA aims to maintain the stability of the network, thus lowering the computation and communication costs associated with it. Simulation experiments are conducted to evaluate the performance of WCA in terms of the number of clusterheads, reaffiliation frequency and dominant set updates. Results show that the WCA performs better than the existing algorithms and is also tunable to different types of ad hoc networks.
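A WCA-style election can be sketched as a combined weight per node followed by a greedy min-weight pick (the weighting factors and the exact definitions of the four components here are illustrative assumptions, not the paper's calibrated values):

```python
# Sketch of a WCA-style weighted clusterhead election. Each node scores its
# degree deviation from an ideal, distance to neighbors, mobility, and time
# already spent as clusterhead; lower combined weight wins.

IDEAL_DEGREE = 3
W1, W2, W3, W4 = 0.7, 0.2, 0.05, 0.05    # assumed weighting factors

def combined_weight(degree, dist_sum, mobility, ch_time):
    return (W1 * abs(degree - IDEAL_DEGREE)
            + W2 * dist_sum + W3 * mobility + W4 * ch_time)

def elect_clusterheads(weights, neighbors):
    """weights: {node: combined weight}. Greedily pick min-weight nodes so
    every node is a clusterhead or adjacent to one (a dominating set)."""
    heads, covered = [], set()
    for node in sorted(weights, key=weights.get):
        if node not in covered:
            heads.append(node)
            covered.add(node)
            covered.update(neighbors[node])
    return heads

weights = {0: 1.0, 1: 0.5, 2: 2.0, 3: 0.9}
neighbors = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(elect_clusterheads(weights, neighbors))   # -> [1, 3]
```

Because the election only reruns when a node drifts out of every clusterhead's range, the dominant set stays stable between topology changes, which is the on-demand behavior the abstract emphasizes.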

Journal ArticleDOI
TL;DR: How this principle could generate self-organization in natural complex systems is discussed for two examples: neural networks and regulatory networks in the genome.
Abstract: We evolve network topology of an asymmetrically connected threshold network by a simple local rewiring rule: quiet nodes grow links, active nodes lose links. This leads to convergence of the average connectivity of the network towards the critical value K_c = 2 in the limit of large system size N. How this principle could generate self-organization in natural complex systems is discussed for two examples: neural networks and regulatory networks in the genome.
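A scaled-down toy of the rewiring rule can be written directly from the abstract (system size, run lengths, and the activity test are assumptions chosen for brevity; the published model runs much larger systems to attractors):

```python
# Toy version of "quiet nodes grow links, active nodes lose links" on an
# asymmetric threshold network: run the dynamics, then rewire one node
# according to whether its state changed recently.
import random

def step(states, links):
    return {i: 1 if sum(w * states[j] for j, w in links[i].items()) > 0 else -1
            for i in states}

def rewire_once(states, links, n, horizon=20):
    history = []
    for _ in range(horizon):
        states = step(states, links)
        history.append(dict(states))
    node = random.randrange(n)
    # "Active" = state still changing near the end of the run (assumption).
    active = any(h[node] != history[-1][node] for h in history[-5:])
    if active and links[node]:
        del links[node][random.choice(list(links[node]))]   # lose a link
    elif not active:
        free = [j for j in range(n) if j != node and j not in links[node]]
        if free:
            links[node][random.choice(free)] = random.choice([-1, 1])
    return states

random.seed(1)
n = 12
states = {i: random.choice([-1, 1]) for i in range(n)}
links = {i: {} for i in range(n)}            # start with no links: all quiet
for _ in range(200):
    states = rewire_once(states, links, n)
avg_k = sum(len(l) for l in links.values()) / n
print(round(avg_k, 2))   # average in-degree after self-organized rewiring
```

Starting from an empty network, quiet nodes accumulate links until the dynamics become active enough that link loss balances link growth, the self-organization toward a critical connectivity that the abstract describes.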

Journal ArticleDOI
Anja Feldmann, Albert Greenberg, Carsten Lund, Nicholas Reingold, Jennifer Rexford
TL;DR: The AT&T Labs NetScope tool is described, a unified set of software tools for managing the performance of IP backbone networks to generate global views of the network on the basis of configuration and usage data associated with the individual network elements.
Abstract: Managing large IP networks requires an understanding of the current traffic flows, routing policies, and network configuration. However, the state of the art for managing IP networks involves manual configuration of each IP router, and traffic engineering based on limited measurements. The networking industry is sorely lacking in software systems that a large Internet service provider can use to support traffic measurement and network modeling, the underpinnings of effective traffic engineering. This article describes the AT&T Labs NetScope, a unified set of software tools for managing the performance of IP backbone networks. The key idea behind NetScope is to generate global views of the network on the basis of configuration and usage data associated with the individual network elements. Having created an appropriate global view, we are able to infer and visualize the networkwide implications of local changes in traffic, configuration, and control. Using NetScope, a network provider can experiment with changes in network configuration in a simulated environment rather than the operational network. In addition, the tool provides a sound framework for additional modules for network optimization and performance debugging. We demonstrate the capabilities of the tool through an example traffic engineering exercise of locating a heavily loaded link, identifying which traffic demands flow on the link, and changing the configuration of intradomain routing to reduce the congestion.

Journal ArticleDOI
TL;DR: Two constructive learning algorithms, MPyramid-real and MTiling-real, are presented that extend the pyramid and tiling algorithms, respectively, for learning real to M-ary mappings; the convergence of these algorithms is proved and their applicability to practical pattern classification problems is demonstrated empirically.
Abstract: Constructive learning algorithms offer an attractive approach for the incremental construction of near-minimal neural-network architectures for pattern classification. They help overcome the need for ad hoc and often inappropriate choices of network topology in algorithms that search for suitable weights in a priori fixed network architectures. Several such algorithms are proposed in the literature and shown to converge to zero classification errors (under certain assumptions) on tasks that involve learning a binary to binary mapping (i.e., classification problems involving binary-valued input attributes and two output categories). We present two constructive learning algorithms, MPyramid-real and MTiling-real, that extend the pyramid and tiling algorithms, respectively, for learning real to M-ary mappings (i.e., classification problems involving real-valued input attributes and multiple output classes). We prove the convergence of these algorithms and empirically demonstrate their applicability to practical pattern classification problems. Additionally, we show how the incorporation of a local pruning step can eliminate several redundant neurons from MTiling-real networks.

Proceedings ArticleDOI
27 Nov 2000
TL;DR: It is shown that current topology generators do not obey all of the power laws, and two new topology generators that do are presented.
Abstract: Recent studies have shown that Internet graphs and other network systems follow power-laws. Are these laws obeyed by the artificial network topologies used in network simulations? Does it matter? In this paper we show that current topology generators do not obey all of the power-laws, and we present two new topology generators that do. We also re-evaluate a multicast study to show the impact of using power-law topologies.
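A generator that does reproduce the degree power law can be sketched with incremental growth plus preferential connectivity, the two factors the BRITE study above found essential (parameters here are illustrative, not either paper's actual defaults):

```python
# Sketch of an incremental-growth + preferential-connectivity generator:
# each new node attaches m links to existing nodes with probability
# proportional to their current degree.
import random

def generate(n, m=2, seed=0):
    rng = random.Random(seed)
    edges = [(0, 1)]                    # two-node seed graph
    targets = [0, 1]                    # node listed once per incident edge
    for new in range(2, n):
        chosen = set()
        while len(chosen) < min(m, new):
            chosen.add(rng.choice(targets))   # degree-proportional choice
        for t in chosen:
            edges.append((new, t))
            targets += [new, t]
    return edges

edges = generate(200)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
degrees = sorted(degree.values())
# Heavy tail: the best-connected node far exceeds the median degree.
print(degrees[len(degrees) // 2], max(degrees))
```

Re-running a simulation study on such a topology versus a random Waxman graph is exactly the kind of re-evaluation the abstract reports for multicast: heavy-tailed degree distributions concentrate traffic on hubs and change the measured protocol behavior.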

Journal ArticleDOI
TL;DR: In this article, a nonparametric structural damage detection methodology based on nonlinear system identification approaches is presented for the health monitoring of structure-unknown systems, which relies on the use of vibration measurements from a healthy system to train a neural network for identification purposes.
Abstract: A nonparametric structural damage detection methodology based on nonlinear system identification approaches is presented for the health monitoring of structure-unknown systems. In its general form, the method requires no information about the topology or the nature of the physical system being monitored. The approach relies on the use of vibration measurements from a "healthy" system to train a neural network for identification purposes. Subsequently, the trained network is fed comparable vibration measurements from the same structure under different episodes of response in order to monitor the health of the structure and thereby provide a relatively sensitive indicator of changes (damage) in the underlying structure. For systems with certain topologies, the method can also furnish information about the region within which structural changes have occurred. The approach is applied to an intricate mechanical system that incorporates significant nonlinear behavior typically encountered in applied mechanics.

Proceedings ArticleDOI
23 Sep 2000
TL;DR: This paper presents on-demand routing scalability improvements achieved using a "passive" clustering protocol scheme which is mostly supported/maintained by user data packets instead of explicit control packets, consistent with the on-demand routing philosophy.
Abstract: This paper presents on-demand routing scalability improvements achieved using a "passive" clustering scheme. Any on-demand routing typically requires some form of flooding. Clustering can dramatically reduce transmission overhead during flooding. In fact, by using clustering, we restrict the set of forwarding nodes during flood search and thus reduce the energy cost and traffic overhead of routing in dynamic traffic and topology environments. However, existing "active" clustering mechanisms require periodic refresh of neighborhood information and tend to introduce quite a large amount of communication maintenance overhead. We introduce a passive clustering protocol scheme which is mostly supported/maintained by user data packets instead of explicit control packets. The passive scheme is consistent with the on-demand routing philosophy. Simulation results show significant performance improvements when passive clustering is used.

Proceedings ArticleDOI
20 Nov 2000
TL;DR: This paper borrows the idea of fair queueing from wireline networks and defines the "fairness index" for ad-hoc network to quantify the fairness, so that the goal of achieving fairness becomes equivalent to minimizing the fairness index.
Abstract: The Medium Access Control (MAC) protocol through which mobile stations can share a common broadcast channel is essential in an ad-hoc network. Due to the existence of hidden terminal problem, partially-connected network topology and lack of central administration, existing popular MAC protocols like IEEE 802.11 Distributed Foundation Wireless Medium Access Control (DFWMAC) [1] may lead to "capture" effects which means that some stations grab the shared channel and other stations suffer from starvation. This is also known as the "fairness problem". This paper reviews some related work in the literature and proposes a general approach to address the problem. This paper borrows the idea of fair queueing from wireline networks and defines the "fairness index" for ad-hoc network to quantify the fairness, so that the goal of achieving fairness becomes equivalent to minimizing the fairness index. Then this paper proposes a different backoff scheme for IEEE 802.11 DFWMAC, instead of the original binary exponential backoff scheme. Simulation results show that the new backoff scheme can achieve far better fairness without loss of simplicity.
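A fairness index of the general kind described can be illustrated briefly (an illustrative assumption: the paper defines its own index; here we use the max/min ratio of weighted per-station throughput, which is 1 when perfectly fair and grows as capture worsens, so fairness means minimizing it):

```python
# Illustrative fairness index for a shared channel: ratio of the largest to
# the smallest weighted per-station throughput share. Assumes every station
# achieved nonzero throughput over the measurement window.

def fairness_index(throughputs, weights=None):
    weights = weights or [1.0] * len(throughputs)
    shares = [t / w for t, w in zip(throughputs, weights)]
    return max(shares) / min(shares)

print(fairness_index([5.0, 5.0, 5.0]))    # -> 1.0 (perfectly fair)
print(fairness_index([9.0, 0.5, 0.5]))    # -> 18.0 (one station captures)
```

Framing fairness as a single scalar is what lets the backoff scheme be evaluated as an optimization: a modified backoff that drives the index toward 1 is fairer than binary exponential backoff under capture.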

Proceedings ArticleDOI
26 Mar 2000
TL;DR: A distributed database coverage heuristic (DDCH) is introduced, which is equivalent to the centralized greedy algorithm for virtual backbone generation, but only requires local information exchange and local computation.
Abstract: In this paper, we present the implementation issues of a virtual backbone that supports the operations of the uniform quorum system (UQS) and the randomized database group (RDG) mobility management schemes in an ad hoc network. The virtual backbone comprises nodes that are dynamically selected to contain databases that store the location information of the network nodes. Together with the UQS and RDG schemes, the virtual backbone allows both dynamic database residence and dynamic database access, which provide high degree of location data availability and reliability. We introduce a distributed database coverage heuristic (DDCH), which is equivalent to the centralized greedy algorithm for virtual backbone generation, but only requires local information exchange and local computation. We show how DDCH can be employed to dynamically maintain the structure of the virtual backbone, along with database merging, as the network topology changes. We also provide a means to maintain connectivity among the virtual backbone nodes. We discuss optimization issues of DDCH through simulations. Simulation results suggest that the cost of ad hoc mobility management with a virtual backbone can be far below that of the conventional link-state routing.
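The centralized greedy algorithm that DDCH is stated to be equivalent to can be sketched as the classic greedy dominating-set heuristic (a sketch of that baseline, not of DDCH's distributed message exchange):

```python
# Sketch of the centralized greedy baseline: repeatedly pick the node that
# covers the most not-yet-dominated nodes until every node either hosts a
# database or neighbors a node that does.

def greedy_dominating_set(adj):
    uncovered = set(adj)
    backbone = []
    while uncovered:
        best = max(adj, key=lambda v: len(({v} | adj[v]) & uncovered))
        backbone.append(best)
        uncovered -= {best} | adj[best]
    return backbone

adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1, 4}, 4: {3}}
print(greedy_dominating_set(adj))   # -> [1, 3]
```

The distributed heuristic's contribution is achieving the same cover quality using only local information exchange, and maintaining it (with database merging) as the topology changes.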

Journal ArticleDOI
TL;DR: This paper proposes a methodology for performing automatic protection switching (APS) in optical networks with arbitrary mesh topologies in order to protect the network from fiber link failures.
Abstract: A fault recovery system that is fast and reliable is essential to today's networks, as it can be used to minimize the impact of the fault on the operation of the network and the services it provides. This paper proposes a methodology for performing automatic protection switching (APS) in optical networks with arbitrary mesh topologies in order to protect the network from fiber link failures. All fiber links interconnecting the optical switches are assumed to be bidirectional. In the scenario considered, the layout of the protection fibers and the setup of the protection switches is implemented in nonreal time, during the setup of the network. When a fiber link fails, the connections that use that link are automatically restored and their signals are routed to their original destination using the protection fibers and protection switches. The protection process proposed is fast, distributed, and autonomous. It restores the network in real time, without relying on a central manager or a centralized database. It is also independent of the topology and the connection state of the network at the time of the failure.

Journal ArticleDOI
G. A. Hamoud
TL;DR: In this paper, a method for determining the available transfer capability (ATC) between any two locations in a transmission system (single-area or multi-area) under a given set of system operating conditions is described.
Abstract: The available transfer capability (ATC) of a transmission system is a measure of unutilized capability of the system at a given time and depends on a number of factors such as the system generation dispatch, system load level, load distribution in the network, power transfers between areas, network topology, and the limits imposed on the transmission network due to thermal, voltage and stability considerations. This paper describes a method for determining the ATC between any two locations in a transmission system (single-area or multiarea) under a given set of system operating conditions. The method also provides ATCs for selected transmission paths between the two locations in the system and identifies the most limiting facilities in determining the network's ATC. In addition, the method can be used to compute multiple ATCs between more than one pair of locations. The proposed method is illustrated using the IEEE reliability test system (RTS).

Proceedings ArticleDOI
18 Jun 2000
TL;DR: A routing protocol wherein the route selection is done on the basis of an intelligent residual lifetime assessment of the candidate routes, backed by simulations that show excellent adaptation to increasing network mobility is proposed.
Abstract: Owing to the absence of any static support structure, ad-hoc networks are prone to link failures. The 'shortest path seeking' routing protocols may not lead to stable routes. The consequent route failures that ensue lead to the degradation of system throughput. This paper suggests a routing protocol wherein the route selection is done on the basis of an intelligent residual lifetime assessment of the candidate routes. Schemes for performance enhancement with TCP and non-TCP traffic in ad-hoc networks are proposed. The protocol is backed by simulations in the ns simulator that show excellent adaptation to increasing network mobility. We have also introduced new route cache management and power-aware data transmission schemes.
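Selecting routes by residual lifetime rather than hop count can be sketched as a max-min criterion: a route lives only as long as its weakest link, so pick the candidate whose weakest link is predicted to last longest. The lifetime table below is an assumed input (the paper's actual assessment heuristic is not detailed in the abstract):

```python
def select_route(candidate_routes, link_lifetime):
    """Pick the candidate route whose weakest link has the longest
    predicted residual lifetime (max-min), instead of the shortest path.
    link_lifetime maps a directed link (u, v) to an estimated remaining
    lifetime, e.g. derived from node mobility or signal strength."""
    def route_lifetime(route):
        return min(link_lifetime[(u, v)] for u, v in zip(route, route[1:]))
    return max(candidate_routes, key=route_lifetime)
```

A shortest-path protocol would treat the two routes in the test below as equivalent; the lifetime criterion prefers the one less likely to break under mobility.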

Proceedings ArticleDOI
26 Mar 2000
TL;DR: Novel algorithms for discovering physical topology in heterogeneous (i.e., multi-vendor) IP networks are presented, which rely on standard SNMP MIB information that is widely supported by modern IP network elements and require no modifications to the operating system software running on elements or hosts.
Abstract: Knowledge of the up-to-date physical topology of an IP network is crucial to a number of critical network management tasks, including reactive and proactive resource management, event correlation, and root-cause analysis. Given the dynamic nature of today's IP networks, keeping track of topology information manually is a daunting (if not impossible) task. Thus, effective algorithms for automatically discovering physical network topology are necessary. Earlier work has typically concentrated on either: (a) discovering logical (i.e., layer-3) topology, which implies that the connectivity of all layer-2 elements (e.g., switches and bridges) is ignored; or (b) proprietary solutions targeting specific product families. In this paper, we present novel algorithms for discovering physical topology in heterogeneous (i.e., multi-vendor) IP networks. Our algorithms rely on standard SNMP MIB information that is widely supported by modern IP network elements and require no modifications to the operating system software running on elements or hosts. We have implemented the algorithms presented in this paper in the context of a topology discovery tool that has been tested on Lucent's own research network. The experimental results clearly validate our approach, demonstrating that our tool can consistently discover the accurate physical network topology in time that is roughly quadratic in the number of network elements.
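One classic inference of this kind, which SNMP-based discovery tools build on, uses the bridge address forwarding tables (AFTs): with complete AFTs, two switch ports are directly connected exactly when the MAC addresses they have learned are disjoint and together cover every host. The sketch below shows that condition in isolation; it is an illustration of the general technique, not the paper's specific algorithm:

```python
def directly_connected(aft_a, aft_b, all_hosts):
    """Complete-AFT condition for a direct layer-2 link between two ports:
    the address sets learned on the two facing ports must partition the
    host population (disjoint, and jointly covering every host).
    aft_a, aft_b : MAC addresses learned on each port (from the bridge MIB)
    all_hosts    : all host addresses in the subnet."""
    aft_a, aft_b = set(aft_a), set(aft_b)
    return aft_a.isdisjoint(aft_b) and aft_a | aft_b == set(all_hosts)
```

Checking this condition over all port pairs is what gives topology discovery its roughly quadratic cost in the number of network elements.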

Proceedings ArticleDOI
26 Mar 2000
TL;DR: This paper studies constrained multicast routing in WDM networks with sparse light splitting, i.e., where some switches are incapable of splitting light due to evolutionary and/or economic reasons.
Abstract: As WDM technology matures and multicast applications become increasingly popular, supporting multicast at the WDM layer becomes an important and yet challenging topic. In this paper, we study constrained multicast routing in WDM networks with sparse light splitting, i.e., where some switches are incapable of splitting light (or copying data in the optical domain). Specifically, we propose four WDM multicast routing algorithms, namely, Re-route-to-Source, Re-route-to-Any, Member-First, and Member-Only. Given the network topology, multicast membership information, and light-splitting capability of the switches, these algorithms construct a source-based multicast light-forest (consisting of one or more multicast trees) for each multicast session. The performance of these algorithms is compared in terms of the average number of wavelengths used per forest (or multicast session), the average number of branches involved (bandwidth) per forest, as well as the average number of hops encountered (delay) from a multicast source to a multicast member.
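A greedy tree construction in the spirit of Member-Only can be sketched as follows: grow a light-tree from the source, repeatedly attaching the member reachable by the shortest path from any node that may branch (the source, a splitter, or a member that can tap-and-continue). This is a simplified single-tree sketch under those assumptions, not the authors' exact algorithm:

```python
from collections import deque

def bfs_path(adj, src, dst):
    """Shortest path by hop count in an adjacency-list graph."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

def member_only_tree(adj, source, members, splitters):
    """Greedy light-tree growth: intermediate nodes that cannot split
    light never become attachment points, so branching only occurs at
    the source, splitting-capable switches, and member nodes."""
    tree_edges = set()
    attach_points = {source}
    remaining = set(members)
    while remaining:
        best = None
        for m in remaining:
            for a in attach_points:
                p = bfs_path(adj, a, m)
                if p and (best is None or len(p) < len(best)):
                    best = p
        if best is None:
            break  # remaining members need a second tree (a light-forest)
        tree_edges.update(zip(best, best[1:]))
        remaining.discard(best[-1])
        attach_points.add(best[-1])
        attach_points |= {n for n in best if n in splitters}
    return tree_edges
```

When some members cannot be reached on one tree because of the splitting constraint, the construction is restarted for them, which is what yields a light-forest rather than a single tree.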

Journal ArticleDOI
TL;DR: A novel methodology, solidly grounded on statistical estimation theory, that can be used to characterize the internal loss and delay behavior of a network based on end-to-end multicast measurements is presented.
Abstract: We present a novel methodology for identifying internal network performance characteristics based on end-to-end multicast measurements. The methodology, solidly grounded on statistical estimation theory, can be used to characterize the internal loss and delay behavior of a network. Measurements on the MBone have been used to validate the approach in the case of losses. Extensive simulation experiments provide further validation of the approach, not only for losses, but also for delays. We also describe our strategy for deploying the methodology on the Internet. This includes the continued development of the National Internet Measurement Infrastructure to support RTP-based end-to-end multicast measurements and the development of software tools to analyze the traces. Once complete, this combined software/hardware infrastructure will provide a service for understanding and forecasting the performance of the Internet.
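The simplest instance of this estimation idea is the two-receiver multicast tree: each probe yields a pair (x1, x2) recording whether each receiver saw it, and with independent link losses E[x_k] = a*b_k and E[x1*x2] = a*b1*b2, where a is the pass probability of the shared link and b_k of each leaf link. Solving gives a = g1*g2/g12. The sketch below shows only this textbook special case, not the full methodology:

```python
def infer_two_leaf_tree(obs):
    """Moment/ML estimator for the two-receiver multicast tree.
    obs: list of (x1, x2) probe outcomes, each 0 or 1.
    Returns estimated pass probabilities (a, b1, b2) for the shared
    link and the two leaf links."""
    n = len(obs)
    g1 = sum(x1 for x1, _ in obs) / n        # P(receiver 1 sees probe)
    g2 = sum(x2 for _, x2 in obs) / n        # P(receiver 2 sees probe)
    g12 = sum(x1 * x2 for x1, x2 in obs) / n # P(both see it)
    a = g1 * g2 / g12
    return a, g1 / a, g2 / a
```

The key property exploited is correlation: both receivers share the fate of the probe on the common link, so internal loss can be identified purely from end-to-end observations.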

Proceedings ArticleDOI
01 Nov 2000
TL;DR: An approach in which the collective communications are tuned for a given system by conducting a series of experiments on the system and a dynamic topology method that uses the tuned static topology shape, but re-orders the logical addresses to compensate for changing run time variations are discussed.
Abstract: The performance of the MPI's collective communications is critical in most MPI-based applications. A general algorithm for a given collective communication operation may not give good performance on all systems due to the differences in architectures, network parameters and the storage capacity of the underlying MPI implementation. In this paper we discuss an approach in which the collective communications are tuned for a given system by conducting a series of experiments on the system. We also discuss a dynamic topology method that uses the tuned static topology shape, but re-orders the logical addresses to compensate for changing run time variations. A series of experiments were conducted comparing our tuned collective communication operations to various native vendor MPI implementations. The use of the tuned collective communications resulted in about 30 percent to 650 percent improvement in performance over the native MPI implementations.
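The experiment-driven tuning idea, stripped to its essentials, is to time each candidate implementation of a collective operation on the target system and keep the fastest. The harness below is a generic sketch of that approach (candidate names and the callable interface are illustrative, not the paper's framework):

```python
import time

def tune_collective(candidates, payload, repeats=5):
    """Time each candidate implementation of a collective operation on
    this system and return the fastest, as a stand-in for the paper's
    per-system tuning experiments.
    candidates: maps an algorithm name to a callable taking the payload."""
    best_name, best_time = None, float("inf")
    for name, fn in candidates.items():
        start = time.perf_counter()
        for _ in range(repeats):
            fn(payload)
        elapsed = (time.perf_counter() - start) / repeats
        if elapsed < best_time:
            best_name, best_time = name, elapsed
    return best_name, best_time
```

In practice the candidates would differ in tree shape and segment size per message length, and the dynamic method in the paper keeps the tuned shape while reordering logical ranks at run time.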

Journal ArticleDOI
TL;DR: The article describes a versatile heuristic based on simulated annealing that may be adopted to optimize the concurrent use of IP restoration and WDM protection schemes in the same (mesh) network, taking into account topology constraints and network cost minimization.
Abstract: The exponentially growing number of Internet users armed with emerging multimedia Internet applications is continuously thirsty for more network capacity. Wavelength-division multiplexing networks that directly support IP-the so-called IP over WDM architecture-have the appropriate characteristics to quench this bandwidth thirst. As everyday life increasingly relies on telecommunication services, users become more and more demanding, and connection reliability is currently as critical as high capacity. Both IP and WDM layers can fulfil this need by providing various resilient schemes to protect users' traffic from disruptions due to network faults. This article first reviews the most common restoration and protection schemes available at the IP and WDM layers. These schemes may be present concurrently in the IP over WDM architecture, with the resilient mechanism of each connection specifically chosen as a function of the overall cost, application requirements, and management complexity. The article describes a versatile heuristic based on simulated annealing that may be adopted to optimize the concurrent use of IP restoration and WDM protection schemes in the same (mesh) network. The proposed heuristic allows varying the percentage of traffic protected by the WDM layer and that of traffic relying on IP restoration, taking into account topology constraints and network cost minimization. An additional feature of the proposed heuristic is the potential to trade solution optimality for computational time, thus yielding fast solutions in support of interactive design.