
Showing papers in "IEEE ACM Transactions on Networking in 1999"


Journal ArticleDOI
TL;DR: An optimization approach to flow control in which the objective is to maximize the aggregate source utility over the sources' transmission rates; network links and sources act as processors of a distributed computation that solves the dual problem using a gradient projection algorithm.
Abstract: We propose an optimization approach to flow control where the objective is to maximize the aggregate source utility over their transmission rates. We view network links and sources as processors of a distributed computation system to solve the dual problem using a gradient projection algorithm. In this system, sources select transmission rates that maximize their own benefits, utility minus bandwidth cost, and network links adjust bandwidth prices to coordinate the sources' decisions. We allow feedback delays to be different, substantial, and time varying, and links and sources to update at different times and with different frequencies. We provide asynchronous distributed algorithms and prove their convergence in a static environment. We present measurements obtained from a preliminary prototype to illustrate the convergence of the algorithm in a slowly time-varying environment. We discuss its fairness property.

2,101 citations
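
The dual decomposition described in this abstract can be sketched in a few lines: each link updates its price by gradient projection on the dual, and each source picks the rate that maximizes its utility minus the bandwidth cost along its path. The topology, log utility, step size, and iteration count below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Toy topology (assumed): 2 links, 3 sources; routes[s][l] = 1 if source s uses link l.
routes = np.array([[1, 0],
                   [0, 1],
                   [1, 1]], dtype=float)
capacity = np.array([1.0, 2.0])
prices = np.ones(2)      # initial link prices (illustrative)
gamma = 0.1              # gradient-projection step size (illustrative)

def source_rate(path_price, w=1.0):
    # With log utility U(x) = w*log(x), the maximizer of U(x) - x*price is w/price.
    return w / max(path_price, 1e-6)

for _ in range(500):
    path_prices = routes @ prices
    rates = np.array([source_rate(p) for p in path_prices])
    link_loads = routes.T @ rates
    # Price update: gradient projection on the dual (prices stay nonnegative).
    prices = np.maximum(0.0, prices + gamma * (link_loads - capacity))

print("rates:", rates, "prices:", prices)
```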


Journal ArticleDOI
TL;DR: It is argued that router mechanisms are needed to identify and restrict the bandwidth of selected high-bandwidth best-effort flows in times of congestion, and several general approaches are discussed for identifying those flows suitable for bandwidth regulation.
Abstract: This paper considers the potentially negative impacts of an increasing deployment of non-congestion-controlled best-effort traffic on the Internet. These negative impacts range from extreme unfairness against competing TCP traffic to the potential for congestion collapse. To promote the inclusion of end-to-end congestion control in the design of future protocols using best-effort traffic, we argue that router mechanisms are needed to identify and restrict the bandwidth of selected high-bandwidth best-effort flows in times of congestion. The paper discusses several general approaches for identifying those flows suitable for bandwidth regulation. These approaches are to identify a high-bandwidth flow in times of congestion as unresponsive, "not TCP-friendly", or simply using disproportionate bandwidth. A flow that is not "TCP-friendly" is one whose long-term arrival rate exceeds that of any conformant TCP in the same circumstances. An unresponsive flow is one failing to reduce its offered load at a router in response to an increased packet drop rate, and a disproportionate-bandwidth flow is one that uses considerably more bandwidth than other flows in a time of congestion.

1,787 citations
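
As a rough illustration of the "TCP-friendly" test described above, a router can compare a flow's measured arrival rate against the familiar TCP throughput approximation of about 1.22 * MTU / (RTT * sqrt(p)). The constant is an approximation and the sample numbers are invented; this is a sketch of the test's flavor, not the paper's exact mechanism.

```python
from math import sqrt

def tcp_friendly_limit(mtu_bytes, rtt_s, loss_rate):
    """Approximate upper bound on a conformant TCP's long-term rate (bytes/s).

    Uses the common 1.22 * MTU / (RTT * sqrt(p)) approximation; treat the
    constant as illustrative rather than a definitive derivation.
    """
    return 1.22 * mtu_bytes / (rtt_s * sqrt(loss_rate))

def is_tcp_friendly(measured_rate, mtu_bytes, rtt_s, loss_rate):
    return measured_rate <= tcp_friendly_limit(mtu_bytes, rtt_s, loss_rate)

# Assumed example: 1500-byte MTU, 100 ms RTT, 1% drop rate.
print(tcp_friendly_limit(1500, 0.1, 0.01))    # ~183,000 bytes/s
print(is_tcp_friendly(5e5, 1500, 0.1, 0.01))  # False: this flow exceeds the bound
```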


Journal ArticleDOI
Nick McKeown1
TL;DR: This paper presents a scheduling algorithm called iSLIP, an iterative, round-robin algorithm that can achieve 100% throughput for uniform traffic, yet is simple to implement in hardware, and describes the implementation complexity of the algorithm.
Abstract: An increasing number of high performance internetworking protocol routers, LAN and asynchronous transfer mode (ATM) switches use a switched backplane based on a crossbar switch. Most often, these systems use input queues to hold packets waiting to traverse the switching fabric. It is well known that if simple first in first out (FIFO) input queues are used to hold packets then, even under benign conditions, head-of-line (HOL) blocking limits the achievable bandwidth to approximately 58.6% of the maximum. HOL blocking can be overcome by the use of virtual output queueing, which is described in this paper. A scheduling algorithm is used to configure the crossbar switch, deciding the order in which packets will be served. Previous results have shown that with a suitable scheduling algorithm, 100% throughput can be achieved. In this paper, we present a scheduling algorithm called iSLIP. An iterative, round-robin algorithm, iSLIP can achieve 100% throughput for uniform traffic, yet is simple to implement in hardware. Iterative and noniterative versions of the algorithms are presented, along with modified versions for prioritized traffic. Simulation results are presented to indicate the performance of iSLIP under benign and bursty traffic conditions. Prototype and commercial implementations of iSLIP exist in systems with aggregate bandwidths ranging from 50 to 500 Gb/s. When the traffic is nonuniform, iSLIP quickly adapts to a fair scheduling policy that is guaranteed never to starve an input queue. Finally, we describe the implementation complexity of iSLIP. Based on a two-dimensional (2-D) array of priority encoders, single-chip schedulers have been built supporting up to 32 ports, and making approximately 100 million scheduling decisions per second.

1,277 citations
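
A single iSLIP iteration follows a request-grant-accept pattern with round-robin pointers at the outputs (grant) and inputs (accept), advanced only when a grant is accepted. The sketch below shows one iteration over a small virtual-output-queue occupancy matrix; priorities, multiple iterations, and the hardware priority encoders are omitted.

```python
def islip_iteration(voq, grant_ptr, accept_ptr):
    """One request-grant-accept iteration of iSLIP (simplified sketch).

    voq[i][j] > 0 means input i has cells queued for output j.
    grant_ptr[j]: round-robin pointer at output j; accept_ptr[i]: at input i.
    Returns a list of (input, output) matches.
    """
    n = len(voq)
    # Request: every input requests all outputs for which it has a queued cell.
    requests = [[voq[i][j] > 0 for j in range(n)] for i in range(n)]

    # Grant: each output grants the requesting input nearest its pointer.
    grants = {}
    for j in range(n):
        for k in range(n):
            i = (grant_ptr[j] + k) % n
            if requests[i][j]:
                grants[j] = i
                break

    # Accept: each input accepts the granting output nearest its pointer.
    matches = []
    for i in range(n):
        granting = [j for j, gi in grants.items() if gi == i]
        for k in range(n):
            j = (accept_ptr[i] + k) % n
            if j in granting:
                matches.append((i, j))
                # Pointers move one beyond the match only when the grant is accepted,
                # which is what keeps the round-robin schedule from starving inputs.
                accept_ptr[i] = (j + 1) % n
                grant_ptr[j] = (i + 1) % n
                break
    return matches
```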


Journal ArticleDOI
TL;DR: The prevalence of unusual network events such as out-of-order delivery and packet replication is characterized, and a robust receiver-based algorithm for estimating "bottleneck bandwidth" is discussed that addresses deficiencies discovered in techniques based on "packet pair".
Abstract: We discuss findings from a large-scale study of Internet packet dynamics conducted by tracing 20000 TCP bulk transfers between 35 Internet sites. Because we traced each 100-kbyte transfer at both the sender and the receiver, the measurements allow us to distinguish between the end-to-end behavior due to the different directions of the Internet paths, which often exhibit asymmetries. We: (1) characterize the prevalence of unusual network events such as out-of-order delivery and packet replication; (2) discuss a robust receiver-based algorithm for estimating "bottleneck bandwidth" that addresses deficiencies discovered in techniques based on "packet pair;" (3) investigate patterns of packet loss, finding that loss events are not well modeled as independent and, furthermore, that the distribution of the duration of loss events exhibits infinite variance; and (4) analyze variations in packet transit delays as indicators of congestion periods, finding that congestion periods also span a wide range of time scales.

913 citations
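
The basic packet-pair relation that the paper's receiver-based estimator builds on is simple: two back-to-back packets leave the bottleneck spaced by their transmission time there, so the receiver-side gap reveals the bottleneck rate. The paper's robust algorithm clusters many such samples to cope with noise and multi-channel links; the sketch below shows only the underlying relation, with invented numbers.

```python
def packet_pair_estimate(size_bytes, recv_gap_s):
    """Basic packet-pair relation: bottleneck bandwidth ~ packet size / receiver gap.
    A robust estimator must filter many samples; this is just the core formula."""
    return size_bytes / recv_gap_s

# Assumed example: 512-byte packets arriving 4 ms apart -> ~128,000 bytes/s bottleneck.
print(packet_pair_estimate(512, 0.004))
```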


Journal ArticleDOI
TL;DR: An ideal wireless fair-scheduling algorithm which provides a packetized implementation of the fluid model, while assuming full knowledge of the current channel conditions, is described, and the worst-case throughput and delay bounds are derived.
Abstract: Fair scheduling of delay and rate-sensitive packet flows over a wireless channel is not addressed effectively by most contemporary wireline fair-scheduling algorithms because of two unique characteristics of wireless media: (1) bursty channel errors and (2) location-dependent channel capacity and errors. Besides, in packet cellular networks, the base station typically performs the task of packet scheduling for both downlink and uplink flows in a cell; however, a base station has only a limited knowledge of the arrival processes of uplink flows. We propose a new model for wireless fair-scheduling based on an adaptation of fluid fair queueing (FFQ) to handle location-dependent error bursts. We describe an ideal wireless fair-scheduling algorithm which provides a packetized implementation of the fluid model, while assuming full knowledge of the current channel conditions. For this algorithm, we derive the worst-case throughput and delay bounds. Finally, we describe a practical wireless scheduling algorithm which approximates the ideal algorithm. Through simulations, we show that the algorithm achieves the desirable properties identified in the wireless FFQ model.

796 citations


Journal ArticleDOI
TL;DR: It is found that in the presence of massive packet reordering, transmission control protocol (TCP) performance can be profoundly affected and that large scale and largely random reordering on the part of the network can lead to self-reinforcingly poor performance from TCP.
Abstract: It is a widely held belief that packet reordering in the Internet is a pathological behavior, or more precisely, that it is an uncommon behavior caused by incorrect or malfunctioning network components. Some studies of Internet traffic have reported seeing occasional packet reordering events and ascribed these events to "route fluttering", router "pauses" or simply to broken equipment. We have found, however, that parallelism in Internet components and links is causing packet reordering under normal operation and that the incidence of packet reordering appears to be substantially higher than previously reported. More importantly, we observe that in the presence of massive packet reordering, transmission control protocol (TCP) performance can be profoundly affected. Perhaps the most disturbing observation about TCP's behavior is that large scale and largely random reordering on the part of the network can lead to self-reinforcingly poor performance from TCP.

434 citations


Journal ArticleDOI
TL;DR: The impact of uncertainty is minimal for flows with only bandwidth requirements, but it makes path selection intractable when end-to-end delay requirements are considered, and efficient solutions for special cases of interest and useful heuristics are provided.
Abstract: This paper investigates the problem of routing flows with quality-of-service (QoS) requirements through one or more networks, when the information available for making such routing decisions is inaccurate. Inaccuracy in the information used in computing QoS routes, e.g., network state such as link and node metrics, arises naturally in a number of different environments that are reviewed in the paper. The goal is to determine the impact of such inaccuracy on the ability of the path-selection process to successfully identify paths with adequate available resources. In particular, we focus on devising algorithms capable of selecting path(s) that are most likely to successfully accommodate the desired QoS, in the presence of uncertain network state information. For the purpose of the analysis, we assume that this uncertainty is expressed through probabilistic models, and we briefly discuss sample cases that can give rise to such models. We establish that the impact of uncertainty is minimal for flows with only bandwidth requirements, but that it makes path selection intractable when end-to-end delay requirements are considered. For this latter case, we provide efficient solutions for special cases of interest and develop useful heuristics.

399 citations
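
For bandwidth-only requests, finding the path most likely to have enough resources reduces to a standard shortest-path computation once each link's success probability is turned into the additive weight -log p. The graph and probabilities below are invented, and this is a sketch of that general reduction rather than the paper's specific algorithms.

```python
import heapq
from math import log

def most_likely_path(graph, src, dst):
    """graph[u] = list of (v, p), p = Prob(link u->v can support the flow).
    Maximizing the product of link probabilities = shortest path under -log p."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, p in graph.get(u, []):
            nd = d - log(p)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

# Assumed 4-node example: A-B-D succeeds with prob 0.72, A-C-D with 0.594.
g = {"A": [("B", 0.9), ("C", 0.6)], "B": [("D", 0.8)], "C": [("D", 0.99)]}
print(most_likely_path(g, "A", "D"))   # ['A', 'B', 'D']
```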


Journal ArticleDOI
TL;DR: This paper argues that much previous modeling work has failed to consider the impact of two important parameters, namely the finite range of time scales of interest in performance evaluation and prediction problems, and the first-order statistics such as the marginal distribution of the process.
Abstract: There is much experimental evidence that network traffic processes exhibit ubiquitous properties of self-similarity and long-range dependence, i.e., of correlations over a wide range of time scales. However, there is still considerable debate about how to model such processes and about their impact on network and application performance. In this paper, we argue that much previous modeling work has failed to consider the impact of two important parameters, namely the finite range of time scales of interest in performance evaluation and prediction problems, and the first-order statistics such as the marginal distribution of the process. We introduce and evaluate a model in which these parameters can be controlled. Specifically, our model is a modulated fluid traffic model in which the correlation function of the fluid rate matches that of an asymptotically second-order self-similar process with given Hurst parameter up to an arbitrary cutoff time lag, then drops to zero. We develop a very efficient numerical procedure to evaluate the performance of a single-server queue fed with the above fluid input process. We use this procedure to examine the fluid loss rate for a wide range of marginal distributions, Hurst (1950) parameters, cutoff lags, and buffer sizes. Our main results are as follows. First, we find that the amount of correlation that needs to be taken into account for performance evaluation depends not only on the correlation structure of the source traffic, but also on time scales specific to the system under study. For example, the time scale associated with a queueing system is a function of the maximum buffer size. Thus, for finite buffer queues, we find that the impact on loss of the correlation in the arrival process becomes nil beyond a time scale we refer to as the correlation horizon. This means, in particular, that for performance-modeling purposes, we may choose any model among the panoply of available models (including Markovian and self-similar models) as long as the chosen model captures the correlation structure of the source traffic up to the correlation horizon. Second, we find that loss can depend in a crucial way on the marginal distribution of the fluid rate process. Third, our results suggest that reducing loss by buffering is hard for traffic with correlation over many time scales. We advocate the use of source traffic control and statistical multiplexing instead.

357 citations


Journal ArticleDOI
TL;DR: This paper shows how binary search can be adapted for solving the best-matching prefix problem, and how to improve the performance of any best- Matching prefix scheme using an initial array indexed by the first X bits of the address.
Abstract: IP address lookup is becoming critical because of increasing routing table sizes, speed, and traffic in the Internet. Given a set S of prefixes and an IP address D, the IP address lookup problem is to find the longest matching prefix of D in set S. This paper shows how binary search can be adapted for solving the best-matching prefix problem. Next, we show how to improve the performance of any best-matching prefix scheme using an initial array indexed by the first X bits of the address. We then describe how to take advantage of cache line size to do a multiway search with six-way branching. Finally, we show how to extend the binary search solution and the multiway search solution for IPv6. For a database of N prefixes with address length W, naive binary search would take O(W*log N); we show how to reduce this to O(W+log N) using multiple-column binary search. Measurements using a practical (Mae-East) database of 38000 entries yield a worst-case lookup time of 490 ns, five times faster than the Patricia trie scheme used in BSD UNIX. Our scheme is attractive for IPv6 because of its small storage requirement (2N nodes) and speed (estimated worst case of 7 cache line reads per lookup).

348 citations
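
The "initial array indexed by the first X bits" idea is easy to illustrate: precompute, for every possible value of the first 16 bits, the best prefix of length at most 16, and keep longer prefixes in per-bucket lists searched longest-first. The sketch below uses a linear per-bucket scan rather than the paper's multiway binary search, and all names and the toy table are assumptions.

```python
def build_table(prefixes, first_bits=16, width=32):
    """prefixes: list of (value, length) pairs, value holding the prefix bits."""
    base = [None] * (1 << first_bits)        # best prefix of length <= first_bits
    long_buckets = {}                        # first-16-bit index -> longer prefixes
    for val, length in prefixes:
        if length <= first_bits:
            lo = val << (first_bits - length)
            for idx in range(lo, lo + (1 << (first_bits - length))):
                if base[idx] is None or base[idx][1] < length:
                    base[idx] = (val, length)
        else:
            idx = val >> (length - first_bits)
            long_buckets.setdefault(idx, []).append((length, val))
    for bucket in long_buckets.values():
        bucket.sort(reverse=True)            # longest prefixes first
    return base, long_buckets

def lookup(addr, base, long_buckets, first_bits=16, width=32):
    idx = addr >> (width - first_bits)
    for length, val in long_buckets.get(idx, []):
        if (addr >> (width - length)) == val:
            return (val, length)             # longest matching long prefix
    return base[idx]                         # else best short prefix (or None)

# Assumed toy table: 10.0.0.0/8, 10.1.0.0/16, 10.1.1.0/24.
base, buckets = build_table([(0x0A, 8), (0x0A01, 16), (0x0A0101, 24)])
print(lookup(0x0A010203, base, buckets))     # (0x0A01, 16)  -> 10.1.0.0/16
print(lookup(0x0A010107, base, buckets))     # (0x0A0101, 24) -> 10.1.1.0/24
```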


Journal ArticleDOI
TL;DR: To further improve the procedures, several extensions to the Feige-Fiat-Shamir (1987) digital signature scheme are proposed to substantially speed up both the signing and verification operations, as well as to allow "adjustable and incremental" verification.
Abstract: We present chaining techniques for signing/verifying multiple packets using a single signing/verification operation. We then present flow signing and verification procedures based upon a tree-chaining technique. Since a single signing/verification operation is amortized over many packets, these procedures improve signing and verification rates by one to two orders of magnitude, compared to the approach of signing/verifying packets individually. Our procedures do not depend upon reliable delivery of packets. They also provide delay-bounded signing, and are thus suitable for delay-sensitive flows and multicast applications. To further improve our procedures, we propose several extensions to the Feige-Fiat-Shamir (1987) digital signature scheme to substantially speed up both the signing and verification operations, as well as to allow "adjustable and incremental" verification. The extended scheme, called eFFS, is compared to four other digital signature schemes (RSA, DSA, ElGamal (1985), and Rabin). We compare their signing and verification times, as well as key and signature sizes. We observe that: (1) eFFS is the fastest in signing (by a large margin over any of the other four schemes) and as fast as RSA in verification (tie for a close second behind Rabin (1979)); (2) eFFS allows a tradeoff between memory and signing/verification time; and (3) eFFS allows adjustable and incremental verification by receivers.

332 citations
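
Tree chaining amortizes one signature over a block of packets in the spirit of a Merkle hash tree: the sender hashes the packets pairwise up to a root, signs the root once, and sends each packet with the sibling hashes needed to recompute the root. The sketch below uses SHA-256 and omits the actual signature scheme (eFFS, RSA, etc.); it illustrates the chaining idea, not the paper's exact packet format.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(packets):
    """Return the list of tree levels, leaves first; the root is levels[-1][0]."""
    level = [H(p) for p in packets]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                    # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def auth_path(levels, index):
    """Sibling hashes that let a receiver recompute the (signed) root."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append(level[index ^ 1])
        index //= 2
    return path

def verify(packet, index, path, signed_root):
    node = H(packet)
    for sibling in path:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node == signed_root               # plus one signature check on the root

packets = [b"pkt%d" % i for i in range(8)]
levels = build_tree(packets)
root = levels[-1][0]                          # sign this once with any signature scheme
assert verify(packets[3], 3, auth_path(levels, 3), root)
```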


Journal ArticleDOI
TL;DR: A new algorithm is presented which creates redundant trees on arbitrary node-redundant or link-redundant networks such that any node is connected to the common root of the trees by at least one of the trees in case of node or link failure.
Abstract: We present a new algorithm which creates redundant trees on arbitrary node-redundant or link-redundant networks. These trees are such that any node is connected to the common root of the trees by at least one of the trees in case of node or link failure. Our scheme provides rapid preplanned recovery of communications with great flexibility in the topology design. Unlike previous algorithms, our algorithm can establish two redundant trees in the case of a node failing in the network. In the case of failure of a communications link, our algorithm provides a superset of the previously known trees.

Journal ArticleDOI
TL;DR: In this paper, the authors propose an algorithm called R×W that provides good performance across all of these criteria and can be tuned to trade off average and worst-case waiting time.
Abstract: Broadcast is becoming an increasingly attractive data-dissemination method for large client populations. In order to effectively utilize a broadcast medium for such a service, it is necessary to have efficient on-line scheduling algorithms that can balance individual and overall performance and can scale in terms of data set sizes, client populations, and broadcast bandwidth. We propose an algorithm, called R×W, that provides good performance across all of these criteria and can be tuned to trade off average and worst-case waiting time. Unlike previous work on low overhead scheduling, the algorithm does not use estimates of the access probabilities of items, but rather, it makes scheduling decisions based on the current queue state, allowing it to easily adapt to changes in the intensity and distribution of the workload. We demonstrate the performance advantages of the algorithm under a range of scenarios using a simulation model and present analytical results that describe the intrinsic behavior of the algorithm.
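
The R×W rule itself is easy to state concretely: for every item with pending requests, multiply the number of outstanding requests (R) by the waiting time of the oldest one (W) and broadcast the item with the largest product. The sketch below assumes a simple request-queue representation; the paper's low-overhead pruned variant is not shown.

```python
import time
from collections import defaultdict

class RxWScheduler:
    """Broadcast scheduling by R (pending requests) x W (wait of the oldest request)."""

    def __init__(self):
        self.first_request = {}            # item -> arrival time of its oldest pending request
        self.count = defaultdict(int)      # item -> number of pending requests

    def request(self, item, now=None):
        now = time.time() if now is None else now
        if item not in self.first_request:
            self.first_request[item] = now
        self.count[item] += 1

    def next_broadcast(self, now=None):
        now = time.time() if now is None else now
        if not self.first_request:
            return None
        item = max(self.first_request,
                   key=lambda it: self.count[it] * (now - self.first_request[it]))
        del self.first_request[item]       # one broadcast satisfies all waiting clients
        del self.count[item]
        return item

sched = RxWScheduler()
sched.request("a", now=0); sched.request("a", now=1); sched.request("b", now=0)
print(sched.next_broadcast(now=10))        # "a": R*W = 2*10 beats "b": 1*10
```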

Journal ArticleDOI
TL;DR: The average cost, due to call loss and location updates using such systems, is analyzed in the presence of database disconnections and the tradeoff between the system reliability and the cost of location updates in the UQS scheme is investigated.
Abstract: A distributed mobility management scheme using a class of uniform quorum systems (UQS) is proposed for ad hoc networks. In the proposed scheme, location databases are stored in the network nodes themselves, which form a self-organizing virtual backbone within the flat network structure. The databases are dynamically organized into quorums, every two of which intersect at a constant number of databases. Upon location update or call arrival, a mobile's location information is written to or read from all the databases of a quorum, chosen in a nondeterministic manner. Compared with a conventional scheme [such as the use of home location register (HLR)] with fixed associations, this scheme is more suitable for ad hoc networks, where the connectivity of the nodes with the rest of the network can be intermittent and sporadic and the databases are relatively unstable. We introduce UQS, where the size of the quorum intersection is a design parameter that can be tuned to adapt to the traffic and mobility patterns of the network nodes. We propose the construction of UQS through the balanced incomplete block designs. The average cost, due to call loss and location updates using such systems, is analyzed in the presence of database disconnections. Based on the average cost, we investigate the tradeoff between the system reliability and the cost of location updates in the UQS scheme. The problem of optimizing the quorum size under different network traffic and mobility patterns is treated numerically. A dynamic and distributed HLR scheme, as a limiting case of the UQS, is also analyzed and shown to be suboptimal in general. It is also shown that partitioning of the network is sometimes necessary to reduce the cost of mobility management.

Journal ArticleDOI
TL;DR: A new heuristic algorithm based on the minimum cost route concept is developed for the design of large self-healing ATM networks using path restoration, and results illustrate that the heuristic algorithm is efficient and gives near-optimal solutions for the spare capacity allocation and flow assignment.
Abstract: This paper studies the capacity and flow assignment problem arising in the design of self-healing asynchronous transfer mode (ATM) networks using the virtual path concept. The problem is formulated here as a linear programming problem which is solved using standard methods. The objective is to minimize the spare capacity cost for the given restoration requirement. The spare cost depends on the restoration strategies used in the network. We compare several restoration strategies quantitatively in terms of spare cost, notably: global versus failure-oriented reconfiguration, path versus link restoration, and state-dependent versus state-independent restoration. The advantages and disadvantages of various restoration strategies are also highlighted. Such comparisons provide useful guidance for real network design. Further, a new heuristic algorithm based on the minimum cost route concept is developed for the design of large self-healing ATM networks using path restoration. Numerical results illustrate that the heuristic algorithm is efficient and gives near-optimal solutions for the spare capacity allocation and flow assignment for tested examples.

Journal ArticleDOI
TL;DR: This paper analyzes the performance of multi-path routing algorithms and compares them to single-path reservation that might be persistent, i.e., retry after a failure, and shows that the connection-establishment time for multi-path reservation is significantly lower, which makes it an attractive alternative for interactive applications such as World Wide Web browsing.
Abstract: In connection-oriented networks, resource reservations must be made before data can be sent along a route. For short or bursty connections, a selected route must have the required resources to ensure appropriate communication with regard to desired quality-of-service (QoS). For example, in ATM networks, the route setup process considers only links with sufficient resources and reserves these resources while it advances toward the destination. The same concern for QoS routing appears in datagram networks such as the Internet, when applications with QoS requirements need to reserve resources along pinned routes. In this paper, we analyze the performance of multi-path routing algorithms and compare them to single-path reservation that might be persistent, i.e., retry after a failure. The analysis assumes that the routing process reserves resources while it advances toward the destination, thus there is a penalty associated with a reservation that cannot be used. Our analysis shows that while multi-path reservation algorithms perform comparably to single-path reservation algorithms, either persistent or not, the connection-establishment time for multi-path reservation is significantly lower. Thus, multi-path reservation becomes an attractive alternative for interactive applications such as World Wide Web browsing.

Journal ArticleDOI
TL;DR: It is shown by using both analysis and simulation methods that FPLC routing with the first-fit wavelength-assignment method performs much better than the alternate routing method in mesh-torus networks and in the NSFnet T1 backbone network (irregular topology).
Abstract: We present two dynamic routing algorithms based on path and neighborhood link congestion in all-optical networks. In such networks, a connection request encounters higher blocking probability than in circuit-switched networks because of the wavelength-continuity constraint. Much research has focused on the shortest-path routing and alternate shortest-path routing. We consider fixed-paths least-congestion (FPLC) routing in which the shortest path may not be preferred to use. We then extend the algorithm to develop a new routing method: dynamic routing using neighborhood information. It is shown by using both analysis and simulation methods that FPLC routing with the first-fit wavelength-assignment method performs much better than the alternate routing method in mesh-torus networks (regular topology) and in the NSFnet T1 backbone network (irregular topology). Routing using neighborhood information also achieves good performance when compared to alternate shortest-path routing.
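
The fixed-paths least-congestion idea with first-fit wavelength assignment can be sketched directly: among the fixed candidate paths, pick the one with the most wavelengths free on every link (the wavelength-continuity constraint), then assign the lowest-indexed such wavelength. The data structures and the toy example are assumptions; neighborhood-information routing is not shown.

```python
def free_wavelengths(path, link_state):
    """Wavelengths usable end-to-end on `path` under wavelength continuity.
    link_state[(u, v)] is the set of wavelengths currently free on that link."""
    avail = None
    for u, v in zip(path, path[1:]):
        avail = link_state[(u, v)] if avail is None else avail & link_state[(u, v)]
    return avail or set()

def fplc_first_fit(candidate_paths, link_state):
    """Fixed-paths least-congestion: choose the candidate path with the most
    end-to-end free wavelengths, then assign the lowest-indexed one (first fit)."""
    best = max(candidate_paths, key=lambda p: len(free_wavelengths(p, link_state)))
    avail = free_wavelengths(best, link_state)
    if not avail:
        return None                         # connection request is blocked
    wl = min(avail)
    for u, v in zip(best, best[1:]):
        link_state[(u, v)].discard(wl)      # reserve the wavelength on every hop
    return best, wl

# Assumed example with two fixed candidate paths A-B-C and A-D-C.
state = {("A", "B"): {0, 1, 2}, ("B", "C"): {1, 2}, ("A", "D"): {0, 3}, ("D", "C"): {3}}
print(fplc_first_fit([["A", "B", "C"], ["A", "D", "C"]], state))  # (['A', 'B', 'C'], 1)
```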

Journal ArticleDOI
TL;DR: This work studies the impact of measurement uncertainty, of flow arrival and departure dynamics, and of estimation memory on the performance of a generic MBAC system in a common analytical framework, and shows that a certainty equivalence assumption can grossly compromise the target performance of the system.
Abstract: Measurement-based admission control (MBAC) is an attractive mechanism to concurrently offer quality of service (QoS) to users, without requiring a priori traffic specification and on-line policing. However, several aspects of such a system need to be clearly understood in order to devise robust MBAC schemes, i.e., schemes that can match a given QoS target despite the inherent measurement uncertainty, and without the tuning of external system parameters. We study the impact of measurement uncertainty, of flow arrival and departure dynamics, and of estimation memory on the performance of a generic MBAC system in a common analytical framework. We show that a certainty equivalence assumption, i.e., assuming that the measured parameters are the real ones, can grossly compromise the target performance of the system. We quantify the improvement in performance as a function of the length of the estimation window and an adjustment of the target QoS. We demonstrate the existence of a critical time scale over which the impact of admission decisions persists. Our results yield new insights into the performance of MBAC schemes, and represent quantitative and qualitative guidelines for the design of robust schemes.

Journal ArticleDOI
TL;DR: A novel medium access control (MAC) protocol called wireless multimedia access control protocol with BER scheduling (in short form, WISPER) for CDMA-based systems is proposed, which utilizes the novel idea of scheduling the transmission of multimedia packets according to their BER requirements.
Abstract: In future wireless multimedia networks, there will be a mixture of different traffic classes which have their own maximum tolerable bit error rate (BER) requirements. In this paper, a novel medium access control (MAC) protocol called wireless multimedia access control protocol with BER scheduling (in short form, WISPER) for CDMA-based systems is proposed. WISPER utilizes the novel idea of scheduling the transmission of multimedia packets according to their BER requirements. The scheduler assigns priorities to the packets, and performs an iterative procedure to determine a good accommodation of the highest-priority packets in the slots of a frame so that packets with equal or similar BER requirements are transmitted in the same slots. The proposed WISPER protocol has been validated using a software emulator on the cellular environment. Performance evaluation results based on the implementation are also included.

Journal ArticleDOI
TL;DR: A simple conceptual framework for analyzing the flow of data in integrated services networks is discussed, which allows us to easily model and analyze the behavior of open loop, rate based flow control protocols, as well as closed loop, window based flow Control protocols.
Abstract: We discuss a simple conceptual framework for analyzing the flow of data in integrated services networks. The framework allows us to easily model and analyze the behavior of open loop, rate based flow control protocols, as well as closed loop, window based flow control protocols. Central to the framework is the concept of a service curve element, whose departure process is bounded between the convolution of the arrival process with a minimum service curve and the convolution of the arrival process with a maximum service curve. Service curve elements can model links, propagation delays, schedulers, regulators, and window based throttles. The mathematical properties of convolution allow us to easily analyze complex configurations of service curve elements to obtain bounds on the end-to-end performance. We demonstrate this by examples, and investigate tradeoffs between buffering requirements, throughput, and delay, for different flow control strategies.
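
The bound stated in this abstract is the (min,+) convolution of network calculus: (A ⊗ S)(t) = min over 0 ≤ s ≤ t of A(s) + S(t − s). A short discrete-time sketch follows, with an assumed token-bucket arrival envelope and rate-latency service curve rather than curves taken from the paper.

```python
def min_plus_convolve(f, g):
    """(f (x) g)(t) = min over 0 <= s <= t of f(s) + g(t - s), on discrete time."""
    T = min(len(f), len(g))
    return [min(f[s] + g[t - s] for s in range(t + 1)) for t in range(T)]

# Illustrative curves on t = 0..19 (assumed): token-bucket arrivals A(t) = 5 + 2t
# for t > 0 (A(0) = 0), and a rate-latency minimum service curve S(t) = 4*(t - 3)+.
A = [0] + [5 + 2 * t for t in range(1, 20)]
S = [max(0, 4 * (t - 3)) for t in range(20)]

# Departures are lower-bounded by the (min,+) convolution of the arrivals with
# the minimum service curve; an analogous convolution with the maximum service
# curve gives the upper bound mentioned in the abstract.
print(min_plus_convolve(A, S)[:12])
```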

Journal ArticleDOI
TL;DR: By exploiting the structure of such topologies, an ε-optimal algorithm for the constrained shortest-path problem is obtained, which offers a substantial improvement in terms of scalability.
Abstract: We consider routing schemes for connections with end-to-end delay requirements, and investigate several fundamental problems. First, we focus on networks which employ rate-based schedulers and, hence, map delay guarantees into nodal rate guarantees, as done with the guaranteed service class proposed for the Internet. We consider first the basic problem of identifying a feasible route for the connection, for which a straightforward yet computationally costly solution exists. Accordingly, we establish several approximation schemes that offer substantially lower computational complexity. We then consider the more general problem of optimizing the route choice in terms of balancing loads and accommodating multiple connections, for which we formulate and validate several optimal algorithms. We discuss the implementation of such schemes in the context of link-state and distance-vector protocols. Next, we consider the fundamental problem of constrained path optimization. This problem, typical of quality of service routing, is NP-hard. While standard approximation methods exist, their complexity may often be prohibitive in terms of scalability. Such approximations do not make use of the particular properties of large-scale networks, such as the fact that the path selection process is typically presented with a hierarchical, aggregated topology. By exploiting the structure of such topologies, we obtain an ε-optimal algorithm for the constrained shortest-path problem, which offers a substantial improvement in terms of scalability.
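
ε-optimal schemes for the constrained shortest-path problem are typically built by scaling and rounding costs or delays so that the pseudo-polynomial dynamic program for the restricted shortest-path problem runs in polynomial time at a bounded optimality loss. The sketch below shows only that underlying DP with small integer delays; names, gains, and the toy graph are assumptions, and the paper's hierarchy-exploiting algorithm is not reproduced.

```python
def restricted_shortest_path(nodes, edges, src, dst, max_delay):
    """Min-cost path subject to an integer end-to-end delay bound.

    edges: list of (u, v, cost, delay) with small integer delays.
    dp[v][d] = least cost to reach v with total delay at most d.
    """
    INF = float("inf")
    dp = {v: [INF] * (max_delay + 1) for v in nodes}
    dp[src] = [0] * (max_delay + 1)
    for _ in range(len(nodes) - 1):              # Bellman-Ford-style relaxation
        for u, v, cost, delay in edges:
            for d in range(delay, max_delay + 1):
                if dp[u][d - delay] + cost < dp[v][d]:
                    dp[v][d] = dp[u][d - delay] + cost
    return dp[dst][max_delay]

# Assumed example: cheap-but-slow versus fast-but-costly hops, delay bound 4.
nodes = ["A", "B", "C"]
edges = [("A", "B", 1, 3), ("B", "C", 1, 3), ("A", "C", 5, 2),
         ("A", "B", 2, 1), ("B", "C", 2, 1)]
print(restricted_shortest_path(nodes, edges, "A", "C", 4))   # 3: A-B (1,3) + B-C (2,1)
```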

Journal ArticleDOI
TL;DR: An approximate analysis is presented, validated by computer simulations, for TCP performance over wireless links and shows that a simple solution, that of using an appropriately designed link-layer error-recovery scheme, prevents excessive deterioration of TCP throughput on wireless links.
Abstract: This paper considers the problem of supporting TCP, the Internet data transport protocol, over a lossy wireless link whose quality varies over time. In order to prevent throughput degradation, it is necessary to "hide" the losses and the time variations of the wireless link from TCP. A number of solutions to this problem have been proposed in previous studies, but their performance was studied on a purely experimental basis. This paper presents an approximate analysis, validated by computer simulations, for TCP performance over wireless links. The analysis provides the basis for a systematic approach to supporting TCP over wireless links. The specific case of a Rayleigh-faded wireless link and automatic repeat request-based link-layer recovery is considered for the purpose of illustration. The numerical results presented for this case show that a simple solution, that of using an appropriately designed link-layer error-recovery scheme, prevents excessive deterioration of TCP throughput on wireless links.

Journal ArticleDOI
TL;DR: This work shows that feedback implosion is avoided while feedback latency is low, and proposes a new method of probabilistic feedback based on exponentially distributed timers that achieves lower negative acknowledgment (NAK) latency for the same performance in terms of NAK suppression.
Abstract: We investigate the scalability of feedback in multicast communication and propose a new method of probabilistic feedback based on exponentially distributed timers. By analysis and simulation for up to 10^6 receivers, we show that feedback implosion is avoided while feedback latency is low. The mechanism is robust against the loss of feedback messages and works well in case of homogeneous and heterogeneous delays. We apply the feedback mechanism to reliable multicast and compare it to existing timer-based feedback schemes. Our mechanism achieves lower negative acknowledgment (NAK) latency for the same performance in terms of NAK suppression. No topological information of the network is used, and data delivery is the only support required from the network. The mechanism adapts to a dynamic number of receivers and leads to a stable performance for implosion avoidance and feedback latency.
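
A timer with an exponentially shaped density over [0, T) can be drawn by inverting a truncated-exponential CDF, t = (T/λ) ln(1 + u(e^λ − 1)) with u uniform in [0, 1): most receivers then pick values near T, so only a few fire early and the rest are suppressed when the first NAK is multicast. The constants and the simplified suppression model below are illustrative, not the paper's recommended settings.

```python
import math
import random

def exponential_timer(T, lam):
    """Draw a timer in [0, T) whose density grows exponentially toward T,
    so with high probability only a handful of receivers get the earliest timers."""
    u = random.random()
    return (T / lam) * math.log(1.0 + u * (math.exp(lam) - 1.0))

def simulate_round(n_receivers, T=0.5, lam=10.0, one_way_delay=0.05):
    """Count NAKs actually sent: a receiver is suppressed if the earliest NAK
    reaches it (earliest timer + delay) before its own timer fires (simplified)."""
    timers = sorted(exponential_timer(T, lam) for _ in range(n_receivers))
    earliest = timers[0]
    return sum(1 for t in timers if t < earliest + one_way_delay)

random.seed(1)
print(simulate_round(10**4))   # typically a handful of NAKs instead of 10,000
```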

Journal ArticleDOI
TL;DR: An admission control algorithm is proposed in which a call is admitted if an approximate interrupt probability is below a threshold and can be better than alternative schemes that do not allow interruption, such as a strict partitioning of resources.
Abstract: In order to provide an adequate quality of service to large-bandwidth calls, such as video conference calls, service providers of integrated services networks may want to allow some customers to book their calls ahead, i.e., make advance reservations. We propose a scheme for sharing resources among book-ahead (BA) calls (that announce their call holding times as well as their call initiation times upon arrival) and non-BA calls (that do not announce their holding times). It is possible to share resources without allowing any calls in progress to be interrupted, but in order to achieve a more efficient use of resources, we think that it may be desirable to occasionally allow a call in progress to be interrupted. (In practice, it may be possible to substitute service degradation, such as bit dropping or coarser encoding of video, for interruption.) Thus, we propose an admission control algorithm in which a call is admitted if an approximate interrupt probability (computed in real time) is below a threshold. Simulation experiments show that the proposed admission control algorithm can be better (i.e., yield higher total utilization or higher revenue) than alternative schemes that do not allow interruption, such as a strict partitioning of resources.

Journal ArticleDOI
TL;DR: This paper examines the use of adaptive priority marking for providing soft bandwidth guarantees in a differentiated-services Internet and describes the control mechanisms and evaluates their behavior in various network environments.
Abstract: This paper examines the use of adaptive priority marking for providing soft bandwidth guarantees in a differentiated-services Internet. In contrast to other proposals for achieving the same objective, the proposed scheme does not require resource reservation for individual connections and can be supported with minimal changes to the network infrastructure. It uses modest support from the network in the form of priority handling for appropriately marked packets, and relies on intelligent transmission control mechanisms at the edges of the network to achieve the desired throughput levels. This paper describes the control mechanisms and evaluates their behavior in various network environments. These mechanisms are shown to have several salient features which make them suitable for deployment in an evolving Internet.

Journal ArticleDOI
TL;DR: A control-theoretic approach to the design of closed-loop rate-based flow control in high-speed networks using a dual proportional-plus-derivative controller that demonstrates the excellent transient and steady-state performance of the controller.
Abstract: We present a control-theoretic approach to the design of closed-loop rate-based flow control in high-speed networks. The proposed control uses a dual proportional-plus-derivative controller, where the control parameters can be designed to ensure stability for the given traffic patterns and propagation delays. We show how the control mechanism can be used to design a controller to support ABR service based on feedback of explicit rates (ERs). We demonstrate the excellent transient and steady-state performance of the controller through a number of examples. We also show experimental results that have been obtained from our asynchronous transfer mode (ATM) testbed, which consists of two interconnected ATM LANs, one located in Princeton, NJ, and the other in Berlin, Germany, with an all-software ER-controller implementation.
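
As a rough sketch of proportional-plus-derivative rate control, and not the paper's exact dual-PD design, a switch can adjust the advertised explicit rate from the queue-length error and its rate of change; all gains, units, and limits below are assumptions.

```python
def pd_explicit_rate(queue_len, prev_queue_len, current_rate,
                     target_queue=100, kp=0.02, kd=0.05,
                     min_rate=1.0, max_rate=150.0):
    """One PD update of an ABR explicit rate (illustrative gains and units).

    error > 0 means the queue is below target, so the rate may be raised;
    the derivative term damps oscillations caused by feedback delay.
    """
    error = target_queue - queue_len
    d_error = prev_queue_len - queue_len      # = error - previous error
    rate = current_rate + kp * error + kd * d_error
    return min(max_rate, max(min_rate, rate))

# Example: the queue grew from 80 to 120 cells against a 100-cell target,
# so the advertised rate is reduced slightly (100.0 -> 97.6 in these units).
print(pd_explicit_rate(queue_len=120, prev_queue_len=80, current_rate=100.0))
```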

Journal ArticleDOI
TL;DR: It is proved that uniform spacing of converters is optimal for the end-to-end performance when link loads are uniform and independent and it is shown that significant gains are achievable with optimal placement compared to random placement.
Abstract: Wavelength converters increase the traffic-carrying capacity of circuit-switched optical networks by relaxing the wavelength continuity constraints. We consider the problem of optimally placing a given number of wavelength converters on a path to minimize the call-blocking probability. Using a simple performance model, we first prove that uniform spacing of converters is optimal for the end-to-end performance when link loads are uniform and independent. We then show that significant gains are achievable with optimal placement compared to random placement. For nonuniform link loads, we provide a dynamic programming algorithm for the optimal placement and compare the performance with random and uniform placement. Optimal solutions for bus and ring topologies are also presented. Finally, we discuss the effect of the traffic model on the placement decision.

Journal ArticleDOI
TL;DR: The deterministic broadcast protocols introduced in this paper overcome the above limitations by using a novel mobility-transparent schedule, thus providing a delivery (time) guarantee without the need to recompute the schedules when topology changes.
Abstract: Broadcast (distributing a message from a source node to all other nodes) is a fundamental problem in distributed computing. Several solutions for solving this problem in mobile wireless networks are available, in which mobility is dealt with either by the use of randomized retransmissions or, in the case of deterministic delivery protocols, by using conflict-free transmission schedules. Randomized solutions can be used only when unbounded delays can be tolerated. Deterministic conflict-free solutions require schedule recomputation when topology changes, thus becoming unstable when the topology rate of change exceeds the schedule recomputation rate. The deterministic broadcast protocols we introduce in this paper overcome the above limitations by using a novel mobility-transparent schedule, thus providing a delivery (time) guarantee without the need to recompute the schedules when topology changes. We show that the proposed protocol is simple and easy to implement, and that it is optimal in networks in which assumptions on the maximum number of the neighbors of a node can be made.

Journal ArticleDOI
TL;DR: This work introduces a new scheduling policy which provides guaranteed service for a session based on a flexible service specification called the service curve, and shows that the SCED policy has a greater capability to support end-to-end delay-bound requirements than other known scheduling policies.
Abstract: We introduce a new scheduling policy which provides guaranteed service for a session based on a flexible service specification called the service curve. This policy, referred to as the service curve based earliest deadline first policy (SCED), is a generalized policy to which well-known policies such as virtual clock and the earliest deadline first (EDF) can be mapped as special cases, by appropriate specification of the service curves. Rather than characterizing service by a single number, such as minimum bandwidth or maximum delay, service curves provide a wide spectrum of service characterization by specifying the service using a function. The flexibility in service specification allows a user, or the network, to specify a service that best matches the quality-of-service required by the user, preventing an over-allocation of network resources to the user. For a single server, we show that the SCED policy is optimal in the sense of supporting the largest possible schedulability region, given a set of delay-bound requirements and traffic burstiness specifications. For the case of a network of servers, we show that the SCED policy has a greater capability to support end-to-end delay-bound requirements than other known scheduling policies. The key to this capability is the ability of SCED to allocate and guarantee service curves with arbitrary shapes.

Journal ArticleDOI
TL;DR: A set of algorithms for allocating FWCs in all-optical networks that can significantly reduce the overall blocking probability and the maximum of the blocking probabilities experienced at all the source nodes and are widely applicable.
Abstract: In an all-optical wide area network, some network nodes may handle heavier volumes of traffic. It is desirable to allocate more full-range wavelength converters (FWCs) to these nodes, so that the FWCs can be fully utilized to resolve wavelength conflict. We propose a set of algorithms for allocating FWCs in all-optical networks. We adopt the simulation-based optimization approach, in which we collect utilization statistics of FWCs from computer simulations and then perform optimization to allocate the FWCs. Therefore, our algorithms are widely applicable and they are not restricted to any particular model or assumption. We have conducted extensive computer simulations on regular and irregular networks under both uniform and nonuniform traffic. Compared with the best existing allocation, the results show that our algorithms can significantly reduce: (1) the overall blocking probability (i.e., better mean quality of service) and (2) the maximum of the blocking probabilities experienced at all the source nodes (i.e., better fairness). Equivalently, for a given performance requirement on blocking probability, our algorithms can significantly reduce the number of FWCs required.

Journal ArticleDOI
TL;DR: This paper deals with designing communications protocols under an energy constraint, in which the number of time slots in which tags need to be in the active state is minimized, while the access delay meets the application's constraints.
Abstract: A myriad of applications are emerging, in which energy conservation is a critical system parameter for communications. Radio frequency identification device (RFID) networks, smart cards, and even mobile computing devices, in general, need to conserve energy. In RFID systems, nodes are small battery-operated inexpensive devices with radio receiving/transmitting and processing capabilities, integrated into the size of an ID card or smaller. These identification devices are designed for extremely low-cost large-scale applications, such that the replacement of batteries is not feasible. This imposes a critical energy constraint on the communications (access) protocols used in these systems, so that the total time a node needs to be active for transmitting or receiving information should be minimized. Among existing protocols, classical random access protocols are not energy conserving, while deterministic protocols lead to unacceptable delays. This paper deals with designing communications protocols under an energy constraint, in which the number of time slots in which tags need to be in the active state is minimized, while the access delay meets the application's constraints. We propose three classes of protocols which combine the fairness of random access protocols with low energy requirements.