
Showing papers in "IEEE ACM Transactions on Networking in 1998"


Journal ArticleDOI
TL;DR: This paper focuses on algorithms for essential components of the "allocated-capacity" framework: a differential dropping algorithm for network routers and a tagging algorithm for profile meters at the edge of the network for bulk-data transfers.
Abstract: This paper presents the "allocated-capacity" framework for providing different levels of best-effort service in times of network congestion. The framework, a set of extensions to the Internet protocols and algorithms, can allocate bandwidth to different users in a controlled and predictable way during network congestion. The framework supports two complementary ways of controlling the bandwidth allocation: sender-based and receiver-based. In today's heterogeneous and commercial Internet, the framework can serve as a basis for charging for usage and for utilizing network resources more efficiently. We focus on algorithms for essential components of the framework: a differential dropping algorithm for network routers and a tagging algorithm for profile meters at the edge of the network for bulk-data transfers. We present simulation results to illustrate the effectiveness of the combined algorithms in controlling transmission control protocol (TCP) traffic to achieve certain targeted sending rates.
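
As an illustration of the kind of profile meter described above (a minimal sketch in Python, assuming a single token-bucket profile and IN/OUT tagging; the paper's tagging algorithm for bulk-data TCP transfers is more elaborate):

# Illustrative sketch, not the paper's exact algorithm: a token-bucket profile
# meter that tags packets as in- or out-of-profile at the network edge.
import time

class ProfileMeter:
    def __init__(self, rate_bps, bucket_bytes):
        self.rate = rate_bps / 8.0        # token fill rate, bytes per second
        self.depth = bucket_bytes         # bucket depth, bytes
        self.tokens = bucket_bytes
        self.last = time.monotonic()

    def tag(self, pkt_len):
        """Return 'IN' if the packet fits the profile, else 'OUT'."""
        now = time.monotonic()
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_len <= self.tokens:
            self.tokens -= pkt_len
            return "IN"          # protected by the router's differential dropper
        return "OUT"             # dropped preferentially during congestion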

1,015 citations


Journal ArticleDOI
TL;DR: The analysis in this paper is based on data collected from border gateway protocol (BGP) routing messages generated by border routers at five of the Internet core's public exchange points during a nine-month period, and reveals several unexpected trends and ill-behaved systematic properties in Internet routing.
Abstract: This paper examines the network interdomain routing information exchanged between backbone service providers at the major US public Internet exchange points. Internet routing instability, or the rapid fluctuation of network reachability information, is an important problem currently facing the Internet engineering community. High levels of network instability can lead to packet loss, increased network latency, and increased time to convergence. At the extreme, high levels of routing instability have led to the loss of internal connectivity in wide-area, national networks. We describe several unexpected trends in routing instability, and examine a number of anomalies and pathologies observed in the exchange of inter-domain routing information. The analysis in this paper is based on data collected from border gateway protocol (BGP) routing messages generated by border routers at five of the Internet core's public exchange points during a nine-month period. We show that the volume of these routing updates is several orders of magnitude larger than expected and that the majority of this routing information is redundant or pathological. Furthermore, our analysis reveals several unexpected trends and ill-behaved systematic properties in Internet routing. Finally, we posit a number of explanations for these anomalies and evaluate their potential impact on the Internet infrastructure.

576 citations


Journal ArticleDOI
TL;DR: The theory of LR servers enables computation of tight upper bounds on end-to-end delay and buffer requirements in a heterogeneous network, where individual servers may support different scheduling architectures and operate under different traffic models.
Abstract: We develop a general model, called latency-rate servers (LR servers), for the analysis of traffic scheduling algorithms in broadband packet networks. The behavior of an LR server is determined by two parameters: the latency and the allocated rate. Several well-known scheduling algorithms, such as weighted fair queueing, VirtualClock, self-clocked fair queueing, weighted round robin, and deficit round robin, belong to the class of LR servers. We derive tight upper bounds on the end-to-end delay, internal burstiness, and buffer requirements of individual sessions in an arbitrary network of LR servers in terms of the latencies of the individual schedulers in the network, when the session traffic is shaped by a token bucket. The theory of LR servers enables computation of tight upper bounds on end-to-end delay and buffer requirements in a heterogeneous network, where individual servers may support different scheduling architectures and operate under different traffic models.
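
If session i is shaped by a token bucket with burst size σ_i and rate ρ_i, and is allocated rate ρ_i at each of K LR servers along its path, the end-to-end delay bound takes the form below (a sketch of the standard LR-server result; each per-server latency Θ_i^(j) folds in packet-size and link-speed terms):

$D_i \;\le\; \frac{\sigma_i}{\rho_i} \;+\; \sum_{j=1}^{K} \Theta_i^{(j)}$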

574 citations


Journal ArticleDOI
TL;DR: This work adopts a more general approach in which all paths between a source-destination pair are considered and network state information is incorporated into the routing decision; routing and wavelength assignment are performed jointly and adaptively, outperforming fixed routing techniques.
Abstract: We consider routing and wavelength assignment in wavelength-routed all-optical networks (WAN) with circuit switching. The conventional approaches to address this issue consider the two aspects of the problem disjointly by first finding a route from a predetermined set of candidate paths and then searching for an appropriate wavelength assignment. We adopt a more general approach in which we consider all paths between a source-destination (s-d) pair and incorporate network state information into the routing decision. This approach performs routing and wavelength assignment jointly and adaptively, and outperforms fixed routing techniques. We present adaptive routing and wavelength assignment algorithms and evaluate their blocking performance. We obtain an analytical technique to compute approximate blocking probabilities for networks employing fixed and alternate routing. The analysis can also accommodate networks with multiple fibers per link. The blocking performance of the proposed adaptive routing algorithms is compared, along with their computational complexity.

543 citations


Journal ArticleDOI
TL;DR: This paper explores how the client buffer space can be used most effectively toward reducing the variability of the transmitted bit rate, and shows how to achieve the greatest possible reduction in rate variability when sending stored video to a client with given buffer size.
Abstract: Variable-bit-rate (VBR) compressed video can exhibit significant multiple-time-scale bit-rate variability. In this paper we consider the transmission of stored video from a server to a client across a network, and explore how the client buffer space can be used most effectively toward reducing the variability of the transmitted bit rate. Two basic results are presented. First, we show how to achieve the greatest possible reduction in rate variability when sending stored video to a client with given buffer size. We formally establish the optimality of our approach and illustrate its performance over a set of long MPEG-1 encoded video traces. Second, we evaluate the impact of optimal smoothing on the network resources needed for video transport, under two network service models: deterministic guaranteed service (Chang 1994; Wrege et al. 1996) and renegotiated constant-bit-rate (RCBR) service (Grossglauser et al. 1997). Under both models, the impact of optimal smoothing is dramatic.

431 citations


Journal ArticleDOI
TL;DR: The results show that introducing FEC as a transparent layer below ARQ can improve multicast transmission efficiency and scalability, however, there are substantial additional improvements when FEC and ARQ are integrated.
Abstract: We investigate how forward error correction (FEC) can be combined with automatic repeat request (ARQ) to achieve scalable reliable multicast transmission. We consider the two scenarios where FEC is introduced as a transparent layer underneath a reliable multicast layer that uses ARQ, and where FEC and ARQ are both integrated into a single layer that uses the retransmission of parity data to recover from the loss of original data packets. To evaluate the performance improvements due to FEC, we consider different loss rates and different types of loss behavior (spatially or temporally correlated loss, homogeneous or heterogeneous loss) for up to 10^6 receivers. Our results show that introducing FEC as a transparent layer below ARQ can improve multicast transmission efficiency and scalability. However, there are substantial additional improvements when FEC and ARQ are integrated.
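
In a parity-based scheme of this kind, a sender protecting a block of k original packets with h parity packets from a maximum-distance-separable erasure code lets any receiver reconstruct the block from any k of the k+h transmitted packets. As a hedged illustration (assuming independent packet loss with probability p, whereas the paper also treats correlated and heterogeneous loss), the probability that a receiver still needs retransmissions after one round is

$P_{\mathrm{fail}} \;=\; \sum_{j=h+1}^{k+h} \binom{k+h}{j}\, p^{\,j}\, (1-p)^{\,k+h-j}$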

397 citations


Journal ArticleDOI
TL;DR: This paper describes a router, nearly completed, that is more than fast enough to keep up with the latest transmission technologies and can forward tens of millions of packets per second.
Abstract: Aggressive research on gigabit-per-second networks has led to dramatic improvements in network transmission speeds. One result of these improvements has been to put pressure on router technology to keep pace. This paper describes a router, nearly completed, which is more than fast enough to keep up with the latest transmission technologies. The router has a backplane speed of 50 Gb/s and can forward tens of millions of packets per second.

334 citations


Journal ArticleDOI
Anurag Kumar
TL;DR: A stochastic model is used to study the throughput performance of various transmission control protocol (TCP) versions in the presence of random losses on a wireless link in a local network, showing that, for large packet-loss probabilities, TCP-Reno performs no better, and sometimes worse, than TCP-Tahoe.
Abstract: We use a stochastic model to study the throughput performance of various transmission control protocol (TCP) versions (Tahoe (including its older version, which we call OldTahoe), Reno, and NewReno) in the presence of random losses on a wireless link in a local network. We model the cyclic evolution of TCP, each cycle starting at the epoch at which recovery starts from the losses in the previous cycle. TCP throughput is computed as the reward rate in a certain Markov renewal-reward process. Our model allows us to study the performance implications of various protocol features, such as fast retransmit and fast recovery. We show the impact of coarse timeouts. In the local network environment, the key issue is to avoid a coarse timeout after a loss occurs. We show the effect of reducing the number of duplicate acknowledgements (ACKs) required for triggering a fast retransmit. A large coarse timeout granularity seriously affects the performance of TCP, and the various protocol versions differ in their ability to avoid a coarse timeout when random loss occurs; we quantify these differences. We show that, for large packet-loss probabilities, TCP-Reno performs no better, and sometimes worse, than TCP-Tahoe. TCP-NewReno is a considerable improvement over TCP-Tahoe, and reducing the fast-retransmit threshold from three to one yields a large gain in throughput; this is similar to one of the modifications in the TCP-Vegas proposal. We explain some of these observations in terms of the variation of fast-recovery probabilities with packet-loss probability. The results of our analysis compare well with a simulation that uses actual TCP code.
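
In a Markov renewal-reward formulation of this kind, the long-run throughput is the ratio of the expected reward (data successfully transferred) per cycle to the expected cycle length; writing V for the data transferred in a cycle and T for the cycle duration,

$\gamma \;=\; \frac{E[V]}{E[T]}$

where the expectations are taken with respect to the stationary distribution of the embedded chain. (This is the generic renewal-reward identity, not the paper's full derivation.)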

322 citations


Journal ArticleDOI
TL;DR: This paper compares the proposed transmission scheduling algorithm with that of Chlamtac and Farago and with the TDMA algorithm, and finds that it gives better performance in terms of minimum throughput and minimum and maximum delay times.
Abstract: Many transmission scheduling algorithms have been proposed to maximize the spatial reuse and minimize the time-division multiple-access (TDMA) frame length in multihop packet radio networks. Almost all existing algorithms assume exact network topology information and do not adapt to different traffic requirements. Chlamtac and Farago (1994) proposed a topology-transparent algorithm. Following their approach, but with a different design strategy, we propose another algorithm which is optimal in that it maximizes the minimum throughput. We compare our algorithm with that of Chlamtac and Farago and with the TDMA algorithm, and find that it gives better performance in terms of minimum throughput and minimum and maximum delay times. Our algorithm requires estimated values of the number of nodes and the maximum nodal degree in the network. However, we show that the performance of our algorithm is insensitive to these design parameters.

275 citations


Journal ArticleDOI
TL;DR: Two novel scheduling algorithms that have O(1) complexity for timestamp computations and provide the same bounds on end-to-end delay and buffer requirements as those of WFQ are presented.
Abstract: Although weighted fair queueing (WFQ) has been regarded as an ideal scheduling algorithm in terms of its combined delay bound and proportional fairness properties, its asymptotic time complexity increases linearly with the number of sessions serviced by the scheduler, thus limiting its use in high-speed networks. An algorithm that combines the delay and fairness bounds of WFQ with O(1) timestamp computations has so far remained elusive. In this paper we present two novel scheduling algorithms that have O(1) complexity for timestamp computations and provide the same bounds on end-to-end delay and buffer requirements as those of WFQ. The first algorithm, frame-based fair queueing (FFQ), uses a framing mechanism to periodically recalibrate a global variable tracking the progress of work in the system, limiting any short-term unfairness to within a frame period. The second algorithm, starting potential based fair queueing (SPFQ), performs the recalibration at packet boundaries, resulting in improved fairness while still maintaining the O(1) timestamp computations. Both algorithms are based on the general framework of rate-proportional servers (RPSs) introduced by Stiliadis and Varma (see ibid., vol.6, no.2, p.164-74, 1998). The algorithms may be used both in general packet networks with variable packet sizes and in asynchronous transfer mode (ATM) networks.
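
In schedulers of this family, the timestamp of the k-th packet of session i is typically computed from a global system potential P(t) (a sketch of the common rule, not necessarily the exact FFQ/SPFQ formulas):

$TS_i^{k} \;=\; \max\!\big(TS_i^{k-1},\, P(a_i^{k})\big) \;+\; \frac{L_i^{k}}{\rho_i}$

where a_i^k is the packet's arrival time, L_i^k its length, and ρ_i the session's allocated rate. The algorithms differ mainly in how P is maintained: FFQ recalibrates it once per frame and SPFQ at packet boundaries, which is what keeps each timestamp computation O(1).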

267 citations


Journal ArticleDOI
TL;DR: A method for capacity optimization of path restorable networks is presented which is applicable to both synchronous transfer mode (STM) and asynchronous transfer mode (ATM) virtual path (VP)-based restoration, and which jointly optimizes working path routing and spare capacity placement.
Abstract: The total transmission capacity required by a transport network to satisfy demand and protect it from failures contributes significantly to its cost, especially in long-haul networks. Previously, the spare capacity of a network with a given set of working span sizes has been optimized to facilitate span restoration. Path restorable networks can, however, be even more efficient by defining the restoration problem from an end-to-end rerouting viewpoint. We provide a method for capacity optimization of path restorable networks which is applicable to both synchronous transfer mode (STM) and asynchronous transfer mode (ATM) virtual path (VP)-based restoration. Lower bounds on spare capacity requirements in span and path restorable networks are first compared, followed by an integer program formulation based on flow constraints which solves the spare and/or working capacity placement problem in either span or path restorable networks. The benefits of path and span restoration, and of jointly optimizing working path routing and spare capacity placement, are then analyzed.

Journal ArticleDOI
TL;DR: This work presents an analysis of HRW and validates it with simulation results showing that it gives faster service times than traditional request allocation schemes such as round-robin or least-loaded, and adapts well to changes in the set of servers.
Abstract: Clusters of identical intermediate servers are often created to improve availability and robustness in many domains. The use of proxy servers for the World Wide Web (WWW) and of rendezvous points in multicast routing are two such situations. However, this approach can be inefficient if identical requests are received and processed by multiple servers. We present an analysis of this problem, and develop a method called the highest random weight (HRW) mapping that eliminates these difficulties. Given an object name and a set of servers, HRW maps a request to a server using the object name, rather than any a priori knowledge of server states. Since HRW always maps a given object name to the same server within a given cluster, it may be used locally at client sites to achieve consensus on object-server mappings. We present an analysis of HRW and validate it with simulation results showing that it gives faster service times than traditional request allocation schemes such as round-robin or least-loaded, and adapts well to changes in the set of servers. HRW is particularly applicable to domains in which there are a large number of requestable objects, there is a significant probability that a requested object will be requested again, and the CPU load due to any single object can be handled by a single server. HRW has now been adopted by the multicast routing protocols PIMv2 and CBTv2 as the mechanism for routers to identify rendezvous points/cores.
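
A minimal sketch of the HRW idea in Python (the hash function and server names here are illustrative assumptions, not the weight function specified in the paper):

# Highest-random-weight (rendezvous) mapping: every client that evaluates the
# same object name over the same server set picks the same server.
import hashlib

def hrw_pick(object_name, servers):
    """Map an object name to the server with the highest pseudo-random weight."""
    def weight(server):
        digest = hashlib.sha1(f"{object_name}/{server}".encode()).digest()
        return int.from_bytes(digest[:8], "big")
    return max(servers, key=weight)

cluster = ["proxy-a", "proxy-b", "proxy-c"]
print(hrw_pick("page.html", cluster))

Because every client computes the weights from the object name alone, the cluster reaches consensus on the object-to-server mapping without exchanging state, and removing one server only remaps the objects that were assigned to it.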

Journal ArticleDOI
TL;DR: This work considers the problem of routing connections with quality of service (QoS) requirements across networks when the information available for making routing decisions is inaccurate and shows that by decomposing the end-to-end constraint into local delay constraints, efficient and tractable solutions can be established.
Abstract: We consider the problem of routing connections with quality of service (QoS) requirements across networks when the information available for making routing decisions is inaccurate. Such uncertainty about the actual state of a network component arises naturally in a number of different environments. The goal of the route selection process is then to identify a path that is most likely to satisfy the QoS requirements. For end-to-end delay guarantees, this problem is intractable. However, we show that by decomposing the end-to-end constraint into local delay constraints, efficient and tractable solutions can be established. Moreover, we argue that such decomposition better reflects the interoperability between the routing and reservation phases. We first consider the simpler problem of decomposing the end-to-end constraint into local constraints for a given path. We show that, for general distributions, this problem is also intractable. Nonetheless, by defining a certain class of probability distributions, which includes typical distributions, and restricting ourselves to that class, we are able to establish efficient and exact solutions. We then consider the general problem of combined path optimization and delay decomposition and present efficient solutions. Our findings are applicable also to a broader problem of finding a path that meets QoS requirements at minimal cost, where the cost of each link is some general increasing function of the QoS requirements from the link.

Journal ArticleDOI
TL;DR: Significant improvements in traffic-carrying capacity can be obtained in WDM networks by providing very limited wavelength conversion capability within the network; the results are extended to tree networks and networks with arbitrary topologies.
Abstract: This paper proposes optical wavelength division multiplexed (WDM) networks with limited wavelength conversion that can efficiently support lightpaths (connections) between nodes. Each lightpath follows a route in a network and must be assigned a channel on each link along the route. The load λ_max of a set of lightpaths is the maximum over all links of the number of lightpaths that use the link. At least λ_max wavelengths will be needed to assign channels to the lightpaths. If the network has full wavelength conversion capabilities, then λ_max wavelengths are sufficient to perform the channel assignment. Ring networks with fixed wavelength conversion capability within the nodes are proposed that can support all lightpath sets with load λ_max at most W-1, where W is the number of wavelengths on each link. Ring networks with a small additional amount of wavelength conversion capability within the nodes are also proposed that allow the support of any set of lightpaths with load λ_max at most W. A star network is also proposed with fixed wavelength conversion capability at its hub node that can support all lightpath sets with load λ_max at most W. These results are extended to tree networks and networks with arbitrary topologies. This provides evidence that significant improvements in traffic-carrying capacity can be obtained in WDM networks by providing very limited wavelength conversion capability within the network.

Journal ArticleDOI
TL;DR: This work proposes a novel scheme called dynamic threshold (DT) that combines the simplicity of ST and the adaptivity of PO, and uses computer simulation to compare the loss performance of DT, ST, and PO.
Abstract: In shared-memory packet switches, buffer management schemes can improve overall loss performance, as well as fairness, by regulating the sharing of memory among the different output port queues. Of the conventional schemes, static threshold (ST) is simple but does not adapt to changing traffic conditions, while pushout (PO) is highly adaptive but difficult to implement. We propose a novel scheme called dynamic threshold (DT) that combines the simplicity of ST and the adaptivity of PO. The key idea is that the maximum permissible length, for any individual queue at any instant of time, is proportional to the unused buffering in the switch. A queue whose length equals or exceeds the current threshold value may accept no more arrivals. An analysis of the DT algorithm shows that a small amount of buffer space is (intentionally) left unallocated, and that the remaining buffer space becomes equally distributed among the active output queues. We use computer simulation to compare the loss performance of DT, ST, and PO. DT control is shown to be more robust to uncertainties and changes in traffic conditions than ST control.
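
A minimal sketch of the dynamic-threshold rule in Python, assuming a single proportionality factor alpha and per-port cell counters (the paper's exact accounting and recommended alpha values may differ):

# Dynamic threshold: a queue may grow only while it is shorter than a multiple
# of the currently unused buffer space, so thresholds adapt to the load.
class SharedBuffer:
    def __init__(self, total_cells, alpha=1.0):
        self.total = total_cells
        self.alpha = alpha
        self.qlen = {}                                 # per-output-port queue lengths

    def admit(self, port):
        used = sum(self.qlen.values())
        threshold = self.alpha * (self.total - used)   # dynamic threshold
        q = self.qlen.get(port, 0)
        if q >= threshold or used >= self.total:
            return False                               # drop: queue at or over threshold
        self.qlen[port] = q + 1
        return True

    def depart(self, port):
        if self.qlen.get(port, 0) > 0:
            self.qlen[port] -= 1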

Journal ArticleDOI
TL;DR: The bounded shortest multicast algorithm (BSMA) is presented for constructing minimum-cost multicast trees with delay constraints and simulation results are provided showing that BSMA can achieve near-optimal cost reduction with fast execution.
Abstract: The bounded shortest multicast algorithm (BSMA) is presented for constructing minimum-cost multicast trees with delay constraints. The BSMA can handle asymmetric link characteristics and variable delay bounds on destinations, specified as real values, and minimizes the total cost of a multicast routing tree. Instead of the single-pass tree construction approach used in most previous heuristics, the new algorithm is based on a feasible-search optimization strategy that starts with the minimum-delay multicast tree and monotonically decreases the cost by iterative improvement of the delay-bounded multicast tree. The BSMA's expected time complexity is analyzed, and simulation results are provided showing that BSMA can achieve near-optimal cost reduction with fast execution.

Journal ArticleDOI
TL;DR: This work presents a methodology for the design of scheduling algorithms that provide the same end-to-end delay bound as that of WFQ and bounded unfairness without the complexity of GPS emulation, and produces a class of algorithms, called rate-proportional servers (RPSs).
Abstract: Generalized processor sharing (GPS) has been considered as an ideal scheduling discipline based on its end-to-end delay bounds and fairness properties. Until recently, emulation of GPS in a packet server has been regarded as the ideal means of designing a packet-level scheduling algorithm to obtain low delay bounds and bounded unfairness. Strict emulation of GPS, as required in the weighted fair queueing (WFQ) scheduler, however, incurs a time-complexity of O(N) where N is the number of sessions sharing the link. Efforts in the past to simplify the implementation of WFQ, such as self-clocked fair queueing (SCFQ), have resulted in degrading its isolation properties, thus affecting the delay bound. We present a methodology for the design of scheduling algorithms that provide the same end-to-end delay bound as that of WFQ and bounded unfairness without the complexity of GPS emulation. The resulting class of algorithms, called rate-proportional servers (RPSs), are based on isolating scheduler properties that give rise to ideal delay and fairness behavior. Network designers can use this methodology to construct efficient fair-queueing algorithms, balancing their fairness with implementation complexity.

Journal ArticleDOI
TL;DR: A dynamic bandwidth allocation strategy to support variable bit rate (VBR) video traffic is proposed, and analyses indicate that prediction errors for the bandwidth required for the next frame and group of pictures (GOP) are almost white noise or short memory.
Abstract: A dynamic bandwidth allocation strategy to support variable bit rate (VBR) video traffic is proposed. This strategy predicts the bandwidth requirements for future frames using adaptive linear prediction that minimizes the mean square error. The adaptive technique does not require any prior knowledge of the traffic statistics nor assume stationarity. Analyses using six one-half-hour video traces indicate that prediction errors for the bandwidth required for the next frame and group of pictures (GOP) are almost white noise or short memory. The performance of the strategy is studied using the renegotiated constant bit rate (RCBR) network service model, and methods that control the tradeoff between the number of renegotiations and network utilization are proposed. Simulation results using MPEG-1 video traces for predicting GOP rates show that the queue size is reduced by a factor of 15-160 and the network utilization is increased by 190%-300% as compared to a fixed service rate. Results also show that even when renegotiations occur, on average, only once in tens of seconds, the queue size is reduced by a factor of 16-30.
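
A sketch of an adaptive linear predictor of the kind described, written as a normalized-LMS filter in Python; the order, step size, and normalization here are illustrative assumptions rather than the paper's exact adaptation rule:

# One-step-ahead prediction of a per-frame or per-GOP bandwidth series with a
# normalized-LMS adaptive linear filter (no prior traffic statistics required).
import numpy as np

def nlms_predict(samples, order=3, mu=0.5, eps=1e-9):
    """Return one-step-ahead bandwidth predictions for the given series."""
    w = np.zeros(order)
    preds = []
    for n in range(order, len(samples)):
        x = np.asarray(samples[n - order:n][::-1], dtype=float)  # recent samples, newest first
        y_hat = float(w @ x)                     # predicted bandwidth for step n
        e = float(samples[n]) - y_hat            # prediction error
        w += mu * e * x / (eps + float(x @ x))   # normalized-LMS weight update
        preds.append(y_hat)
    return preds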

Journal ArticleDOI
Peter Newman, Greg Minshall, T. Lyon
TL;DR: This work discards the end-to-end ATM connection and integrates fast ATM hardware directly with IP, preserving the connectionless nature of IP, and uses the soft-state in the ATM hardware to cache the IP forwarding decision.
Abstract: Internet protocol (IP) traffic on the Internet and private enterprise networks has been growing exponentially for some time. This growth is beginning to stress the traditional processor-based design of current-day routers. Switching technology offers much higher aggregate bandwidth, but presently only offers a layer-2 bridging solution. Various proposals are under way to support IP routing over an asynchronous transfer mode (ATM) network. However, these proposals hide the real network topology from the IP layer by treating the data-link layer as a large opaque network cloud. We argue that this leads to complexity, inefficiency, and duplication of functionality in the resulting network. We propose an alternative in which we discard the end-to-end ATM connection and integrate fast ATM hardware directly with IP, preserving the connectionless nature of IP. We use the soft-state in the ATM hardware to cache the IP forwarding decision. This enables further traffic on the same IP flow to be switched by the ATM hardware rather than forwarded by IP software. We claim that this approach combines the simplicity, scalability, and robustness of IP, with the speed, capacity, and multiservice traffic capabilities of ATM.
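
A rough sketch of the flow-caching decision in Python; the 5-tuple key, packet-count trigger, and label allocation are illustrative stand-ins for the soft-state kept in the ATM hardware:

# First packets of a flow are forwarded in software; once a flow looks
# long-lived it is bound to a cached label (standing in for an ATM VC) and
# later packets bypass IP forwarding.
flow_cache = {}                # 5-tuple -> assumed hardware shortcut label
packet_counts = {}
TRIGGER = 10                   # packets observed before a shortcut is created (illustrative)

def handle_packet(five_tuple):
    """Return 'switched' if the flow has a cached shortcut, else 'routed'."""
    if five_tuple in flow_cache:
        return "switched"                              # cut-through in hardware
    packet_counts[five_tuple] = packet_counts.get(five_tuple, 0) + 1
    if packet_counts[five_tuple] >= TRIGGER:
        flow_cache[five_tuple] = len(flow_cache) + 32  # allocate a label for the flow
    return "routed"                                    # normal software IP forwarding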

Journal ArticleDOI
TL;DR: This paper presents a distributed heuristic algorithm which generates routing trees having a suboptimal network cost under the delay bound constraint, which is fully distributed, efficient in terms of the number of messages and convergence time, and flexible in dynamic membership changes.
Abstract: Multicast routing is to find a tree which is rooted at a source node and contains all multicast destinations. There are two requirements of multicast routing in many multimedia applications: optimal network cost and bounded delay. The network cost of a tree is defined as the sum of the cost of all links in the tree. The bounded delay of a routing tree refers to the feature that the accumulated delay from the source to any destination along the tree shall not exceed a prespecified bound. This paper presents a distributed heuristic algorithm which generates routing trees having a suboptimal network cost under the delay bound constraint. The proposed algorithm is fully distributed, efficient in terms of the number of messages and convergence time, and flexible in dynamic membership changes. A large number of simulations have been performed, showing that the network cost of the routing trees generated by our algorithm is similar to, or even better than, that of other existing algorithms.

Journal ArticleDOI
TL;DR: This paper studies P(Q>x), the tail of the steady-state queue length distribution at a high-speed multiplexer, and provides two asymptotic upper bounds for the tail probability and an asymptotic result that emphasizes the importance of the dominant time scale and the maximum variance.
Abstract: In this paper, we study P(Q>x), the tail of the steady-state queue length distribution at a high-speed multiplexer. In particular, we focus on the case when the aggregate traffic to the multiplexer can be characterized by a stationary Gaussian process. We provide two asymptotic upper bounds for the tail probability and an asymptotic result that emphasizes the importance of the dominant time scale and the maximum variance. One of our bounds is in a single-exponential form and can be used to calculate an upper bound to the asymptotic constant. However, we show that this bound, being of a single-exponential form, may not accurately capture the tail probability. Our asymptotic result on the importance of the maximum variance and our extensive numerical study on a known lower bound motivate the development of our second asymptotic upper bound. This bound is expressed in terms of the maximum variance of a Gaussian process, and enables the accurate estimation of the tail probability over a wide range of queue lengths. We apply our results to Gaussian as well as multiplexed non-Gaussian input sources, and validate their performance via simulations. Wherever possible, we have conducted our simulation study using importance sampling in order to improve its reliability and to effectively capture rare events. Our analytical study is based on extreme value theory, and therefore different from the approaches using traditional Markovian and large deviations techniques.
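
For a stationary Gaussian arrival process with mean rate λ, link capacity C, and variance function Var A(0,t), bounds driven by the dominant time scale and maximum variance are commonly written in the form below; this is a sketch of the general shape of such results, not the paper's exact bound or asymptotic constant:

$P(Q > x) \;\lesssim\; \exp\!\left(-\min_{t \ge 0} \frac{\big(x + (C-\lambda)\,t\big)^{2}}{2\,\mathrm{Var}\,A(0,t)}\right)$

The minimizing t is the dominant time scale over which an overflow of level x is most likely to build up.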

Journal ArticleDOI
TL;DR: This work evaluates the processor and switch overheads for transferring HTTP server traffic through a flow-switched network and focuses on the full probability distributions of flow sizes and cost-performance metrics to highlight the subtle influence of the HTTP protocol and user behavior on the performance of flow switching.
Abstract: To efficiently transfer diverse traffic over high-speed links, modern integrated networks require more efficient packet-switching techniques that can capitalize on the advances in switch hardware. Several promising approaches attempt to improve the performance by creating dedicated "shortcut" connections for long-lived traffic flows, at the expense of the network overhead for establishing and maintaining these shortcuts. The network can balance these cost-performance tradeoffs through three tunable parameters: the granularity of flow end-point addresses, the timeout for grouping related packets into flows, and the trigger for migrating a long-lived flow to a shortcut connection. Drawing on a continuous one-week trace of Internet traffic, we evaluate the processor and switch overheads for transferring HTTP server traffic through a flow-switched network. In contrast to previous work, we focus on the full probability distributions of flow sizes and cost-performance metrics to highlight the subtle influence of the HTTP protocol and user behavior on the performance of flow switching. We find that moderate levels of aggregation and triggering yield significant reductions in overhead with a negligible reduction in performance. The traffic characterization results further suggest schemes for limiting shortcut overhead by temporarily delaying the creation of shortcuts during peak load and by aggregating related packets that share a portion of their routes through the network.

Journal ArticleDOI
TL;DR: This paper describes the analysis, implementation, and performance of a new algorithm engineered to discipline a computer clock to a source of standard time, such as a GPS receiver or another computer synchronized to such a source.
Abstract: This paper describes the analysis, implementation, and performance of a new algorithm engineered to discipline a computer clock to a source of standard time, such as a GPS receiver or another computer synchronized to such a source. The algorithm is intended for the network time protocol (NTP), which is in widespread use to synchronize computer clocks in the global Internet, or with another functionally equivalent protocol such as DTSS or PCS. It controls the computer clock time and frequency using an adaptive-parameter hybrid phase/frequency lock feedback loop. Compared with the current NTP Version 3 algorithm, the new algorithm developed for NTP Version 4 provides improved accuracy and reduced network overhead, especially when per-packet or per-call charges are involved. The algorithm has been implemented in a special-purpose NTP simulator, which also includes the entire suite of NTP algorithms. The performance has been verified using this simulator and both synthetic data and real data from Internet time servers in Europe, Asia, and the Americas.

Journal ArticleDOI
TL;DR: This paper considers the case of network nodes that use a priority-service discipline to support multiple classes of service to determine an appropriate notion of effective bandwidths, and uses large-buffer asymptotics (large deviations principles) for workload tail probabilities as a theoretical basis.
Abstract: The notion of effective bandwidths has provided a useful practical framework for connection admission control and capacity planning in high-speed communication networks. The associated admissible set with a single linear boundary makes it possible to apply stochastic-loss-network (generalized-Erlang) models for capacity planning. We consider the case of network nodes that use a priority-service discipline to support multiple classes of service, and we wish to determine an appropriate notion of effective bandwidths. Just as was done previously for the first-in first-out (FIFO) discipline, we use large-buffer asymptotics (large deviations principles) for workload tail probabilities as a theoretical basis. We let each priority class have its own buffer and its own constraint on the probability of buffer overflow. Unfortunately, however, this leads to a constraint for each priority class. Moreover, the large-buffer asymptotic theory with priority classes does not produce an admissible set with linear boundaries, but we show that it nearly does and that a natural bound on the admissible set does have this property. We propose it as an approximation for priority classes; then there is one linear constraint for each priority class. This linear-admissible-set structure implies a new notion of effective bandwidths, where a given connection is associated with multiple effective bandwidths: one for the priority level of the given connection and one for each lower priority level. This structure can be used regardless of whether the individual effective bandwidths are determined by large-buffer asymptotics or by some other method.
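
As a reference point (standard large-deviations usage rather than a quotation of this paper's notation), the effective bandwidth of a source with cumulative arrivals A(0,t) is often defined as

$\alpha(s) \;=\; \lim_{t \to \infty} \frac{1}{st}\,\log E\!\left[e^{\,s A(0,t)}\right],$

and a FIFO buffer-overflow target of the form P(Q > B) ≤ e^{-δB} is then met, to large-buffer accuracy, when the multiplexed sources satisfy Σ_j α_j(δ) ≤ C. The priority-class extension discussed above assigns each connection one such effective bandwidth for its own priority level and one for each lower priority level.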

Journal ArticleDOI
TL;DR: Contrary to a wide belief in the economic advantage of the end-to-end restoration scheme, this study reveals that the attainable gain could be marginal for a well-connected and/or unbalanced network.
Abstract: This paper addresses an optimal link capacity design problem for self-healing asynchronous transfer mode (ATM) networks based on two different restoration schemes: line restoration and end-to-end restoration. Given a projected traffic demand, capacity and flow assignment is jointly optimized to find an optimal capacity placement. The problem can be formulated as a large-scale linear programming. The basis matrix can be readily factorized into an LU form by taking advantage of its special structure, which results in a substantial reduction on the computation time of the revised simplex method. A row generation and deletion mechanism is developed to cope with the explosive number of constraints for the end-to-end restoration-based networks. In self-healing networks, end-to-end restoration schemes have been considered more advantageous than line restoration schemes because of a possible reduction of the redundant capacity to construct a fully restorable network. A comparative analysis is presented to clarify the benefit of end-to-end restoration schemes quantitatively in terms of the minimum resource installation cost. Several networks with diverse topological characteristics as well as multiple projected traffic demand patterns are employed in the experiments to see the effect of various network parameters. The results indicate that the network topology has a significant impact on the required resource installation cost for each restoration scheme. Contrary to a wide belief in the economic advantage of the end-to-end restoration scheme, this study reveals that the attainable gain could be marginal for a well-connected and/or unbalanced network.

Journal ArticleDOI
TL;DR: It is shown that, for fragmentation-and-reassembly error models, the checksum contribution of each fragment is, in effect, colored by the fragment's offset in the splice, which explains the performance of Fletcher's sum on nonuniform data.
Abstract: Checksum and cyclic redundancy check (CRC) algorithms have historically been studied under the assumption that the data fed to the algorithms was uniformly distributed. This paper examines the behavior of checksums and CRCs over real data from various UNIX file systems. We show that, when given real data in small to modest pieces (e.g., 48 bytes), all the checksum algorithms have skewed distributions. These results have implications for CRCs and checksums when applied to real data. They also can cause a spectacular failure rate for both the TCP and ones-complement Fletcher (1983) checksums when trying to detect certain types of packet splices. When measured over several large file systems, the 16-bit TCP checksum performed about as well as a 10-bit CRC. We show that for fragmentation-and-reassembly error models, the checksum contribution of each fragment is, in effect, colored by the fragment's offset in the splice. This coloring explains the performance of Fletcher's sum on nonuniform data, and shows that placing checksum fields in a packet trailer is theoretically no worse than a header checksum field. In practice, the TCP trailer sums outperform even Fletcher header sums.
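
For reference, the 16-bit ones-complement Internet (TCP) checksum studied here can be computed as follows (a straightforward Python rendering of the standard algorithm, not code from the paper):

# 16-bit ones-complement checksum over a byte string, folding carries back in.
def internet_checksum(data):
    if len(data) % 2:
        data += b"\x00"                               # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]         # add the next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)      # end-around carry fold
    return (~total) & 0xFFFF                          # ones-complement of the sum

print(hex(internet_checksum(b"hello world")))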

Journal ArticleDOI
TL;DR: A robust and flexible real-time scheme for determining appropriate parameter values for usage parameter control (UPC) of an arbitrary source in an asynchronous transfer mode network via dynamic renegotiation of the UPC parameters is presented.
Abstract: This paper presents a robust and flexible real-time scheme for determining appropriate parameter values for usage parameter control (UPC) of an arbitrary source in an asynchronous transfer mode network. In our approach, the UPC parameters are chosen as a function of the statistical characteristics of the observed cell stream, the user's tolerance for traffic shaping, and a measure of the network cost. For this purpose, we develop an approximate statistical characterization for an arbitrary cell stream. The statistical characterization is mapped to a UPC descriptor that can be negotiated with the network. The selected UPC descriptor is optimal in the sense of minimizing a network cost function, subject to meeting user-specified constraints on shaping delay. The UPC estimation scheme is extended to adapt to slow time-scale changes in traffic characteristics via dynamic renegotiation of the UPC parameters. We illustrate the effectiveness of our methodologies with examples taken from MPEG video sequences.
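
To make the negotiated descriptor concrete, the sketch below shows the standard generic cell rate algorithm (virtual-scheduling form) that a UPC pair such as (rate, tolerance) is ultimately policed with; it illustrates what the parameters mean and is not the paper's parameter-estimation procedure:

# GCRA virtual-scheduling test: a cell conforms unless it arrives earlier than
# the theoretical arrival time minus the burst tolerance.
class GCRA:
    def __init__(self, increment, limit):
        self.T = increment      # 1 / sustainable cell rate (seconds per cell)
        self.tau = limit        # burst tolerance (seconds)
        self.tat = 0.0          # theoretical arrival time

    def conforming(self, arrival_time):
        if arrival_time < self.tat - self.tau:
            return False                              # too early: tag or drop the cell
        self.tat = max(arrival_time, self.tat) + self.T
        return True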

Journal ArticleDOI
TL;DR: This paper develops a simple and accurate analytical technique to determine the loss probability at an access node to an asynchronous transfer mode (ATM) network and shows that capacity allocation based on the popular effective-bandwidth scheme can lead to considerable under-utilization of the network.
Abstract: In this paper we develop a simple and accurate analytical technique to determine the loss probability at an access node to an asynchronous transfer mode (ATM) network. This is an important problem from the point of view of admission control and network design. The arrival processes we analyze are the Markov-modulated Poisson process (MMPP) and the Markov-modulated fluid (MMF) process. These arrival processes have been shown to model various traffic types, such as voice, video, and still images, that are expected to be transmitted by ATM networks. Our hybrid analytical technique combines results from large buffer theories and quasi-stationary approaches to analyze the loss probability of a finite-buffer queue being fed by Markov-modulated sources such as the MMPP and MMF. Our technique is shown to be valid for both heterogeneous and homogeneous sources. We also show that capacity allocation based on the popular effective-bandwidth scheme can lead to considerable under-utilization of the network and that allocating bandwidth based on our model can improve the utilization significantly. We provide numerical results for different types of traffic and validate our model via simulations.

Journal ArticleDOI
TL;DR: This work proposes a smoothing and rate adaptation algorithm for compressed video, called SAVE, that is used in conjunction with explicit rate-based control in the network to help achieve good multiplexing gains.
Abstract: Supporting compressed video efficiently on networks is a challenge because of its burstiness. Although a large number of applications using compressed video allow adaptive rates, it is also important to preserve quality as much as possible. We propose a smoothing and rate adaptation algorithm for compressed video, called SAVE, that is used in conjunction with explicit rate-based control in the network. SAVE smooths the demand from the source to the network, thus helping achieve good multiplexing gains. SAVE maintains the quality of the video and ensures that the delay at the source buffer does not exceed a bound. We show that SAVE is effective by demonstrating its performance across 28 different traces (entertainment and teleconferencing videos) that use different compression algorithms.

Journal ArticleDOI
TL;DR: This work analyzes the fairness properties of CORR, shows that it achieves near-perfect fairness, and compares it with packet-by-packet generalized processor sharing and stop-and-go systems.
Abstract: We propose a simple mechanism named carry-over round robin (CORR) for scheduling cells in asynchronous transfer mode networks. We quantify the operational complexity of CORR scheduling and show that it is comparable to that of a simple round-robin scheduler. We then show that, despite its simplicity, CORR is very competitive with much more sophisticated and significantly more complex scheduling disciplines in terms of performance. We evaluate the performance of CORR using both analysis and simulation. We derive analytical bounds on the worst-case end-to-end delay achieved by a CORR scheduler for different traffic arrival patterns. Using traffic traces from MPEG video streams, we compare the delay performance of CORR with that of packet-by-packet generalized processor sharing (PGPS) and stop-and-go (SG). Our results show that, in terms of delay performance, CORR compares favorably with both PGPS and SG. We also analyze the fairness properties of CORR and show that it achieves near-perfect fairness.
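
A sketch in the spirit of carry-over round robin, written in Python as a credit-carrying round-robin scheduler; the accounting follows the well-known deficit-round-robin pattern and is only an approximation of CORR's exact rules:

# Each flow receives a per-round allocation; unused credit carries over to the
# next round so a flow's short-term shortfall is repaid later.
from collections import deque

def carry_over_rr(queues, quanta, rounds):
    """queues: flow -> deque of cell sizes; quanta: flow -> per-round allocation."""
    credit = {f: 0.0 for f in queues}
    schedule = []
    for _ in range(rounds):
        for f, q in queues.items():
            credit[f] += quanta[f]                  # new allocation plus carried-over credit
            while q and q[0] <= credit[f]:
                credit[f] -= q[0]
                schedule.append((f, q.popleft()))   # transmit head-of-line cell
            if not q:
                credit[f] = 0.0                     # idle flows do not hoard credit
    return schedule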