
Showing papers in "IEEE/ACM Transactions on Networking" in 2000


Journal ArticleDOI
TL;DR: This paper demonstrates the benefits of cache sharing, measures the overhead of the existing protocols, and proposes a new protocol called "summary cache", which reduces the number of intercache protocol messages, reduces the bandwidth consumption, and eliminates 30% to 95% of the protocol CPU overhead, all while maintaining almost the same cache hit ratios as ICP.
Abstract: The sharing of caches among Web proxies is an important technique to reduce Web traffic and alleviate network bottlenecks. Nevertheless it is not widely deployed due to the overhead of existing protocols. In this paper we demonstrate the benefits of cache sharing, measure the overhead of the existing protocols, and propose a new protocol called "summary cache". In this new protocol, each proxy keeps a summary of the cache directory of each participating proxy, and checks these summaries for potential hits before sending any queries. Two factors contribute to our protocol's low overhead: the summaries are updated only periodically, and the directory representations are very economical, as low as 8 bits per entry. Using trace-driven simulations and a prototype implementation, we show that, compared to existing protocols such as the Internet cache protocol (ICP), summary cache reduces the number of intercache protocol messages by a factor of 25 to 60, reduces the bandwidth consumption by over 50%, eliminates 30% to 95% of the protocol CPU overhead, all while maintaining almost the same cache hit ratios as ICP. Hence summary cache scales to a large number of proxies. (This paper is a revision of Fan et al. 1998; we add more data and analysis in this version.).
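
Summary cache's economical directory representation is essentially a Bloom filter. A minimal sketch of the idea in Python (the hash choices and sizes here are illustrative, not the paper's implementation):

import hashlib

class BloomSummary:
    """Compact summary of a cache directory (Bloom filter sketch)."""
    def __init__(self, num_bits, num_hashes=4):
        self.num_bits, self.num_hashes = num_bits, num_hashes
        self.bits = bytearray((num_bits + 7) // 8)

    def _positions(self, url):
        # Derive the hash positions from one digest (illustrative choice).
        digest = hashlib.sha1(url.encode()).digest()
        for i in range(self.num_hashes):
            yield int.from_bytes(digest[4*i:4*i+4], "big") % self.num_bits

    def add(self, url):
        for pos in self._positions(url):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, url):
        # May give false positives (a wasted query to a peer) but no
        # false negatives within one summary update period.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(url))

summary = BloomSummary(num_bits=8 * 1024)
summary.add("http://example.com/page")
assert summary.might_contain("http://example.com/page")

A false positive only costs an unnecessary intercache query, which is what lets the summaries be both tiny and only periodically updated.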

2,174 citations


Journal ArticleDOI
TL;DR: The existence of fair end-to-end window-based congestion control protocols for packet-switched networks with first come-first served routers is demonstrated using a Lyapunov function.
Abstract: In this paper, we demonstrate the existence of fair end-to-end window-based congestion control protocols for packet-switched networks with first come-first served routers. Our definition of fairness generalizes proportional fairness and includes arbitrarily close approximations of max-min fairness. The protocols use only information that is available to end hosts and are designed to converge reasonably fast. Our study is based on a multiclass fluid model of the network. The convergence of the protocols is proved using a Lyapunov function. The technical challenge is in the practical implementation of the protocols.

2,161 citations


Journal ArticleDOI
TL;DR: A distributed algorithm is proposed that enables each station to tune its backoff algorithm at run-time; results indicate that the capacity of the enhanced protocol is very close to the theoretical upper bound in all the configurations analyzed.
Abstract: In wireless LANs (WLANs), the medium access control (MAC) protocol is the main element that determines the efficiency in sharing the limited communication bandwidth of the wireless channel. In this paper we focus on the efficiency of the IEEE 802.11 standard for WLANs. Specifically, we analytically derive the average size of the contention window that maximizes the throughput, hereafter called the theoretical throughput limit, and we show that: 1) depending on the network configuration, the standard can operate very far from the theoretical throughput limit; and 2) an appropriate tuning of the backoff algorithm can drive the IEEE 802.11 protocol close to the theoretical throughput limit. Hence we propose a distributed algorithm that enables each station to tune its backoff algorithm at run-time. The performance of the IEEE 802.11 protocol, enhanced with our algorithm, is extensively investigated by simulation. Specifically, we investigate the sensitivity of our algorithm to some network configuration parameters (number of active stations, presence of hidden terminals). Our results indicate that the capacity of the enhanced protocol is very close to the theoretical upper bound in all the configurations analyzed.
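
For a rough sense of why the window must track contention, one commonly cited approximation from the 802.11 modeling literature (an assumption here, not this paper's exact expression) puts the throughput-optimal contention window near n*sqrt(2*Tc), where n is the number of contending stations and Tc the collision duration in slot units. A toy sketch with hypothetical parameter values:

def approx_optimal_cw(n_stations, collision_slots):
    # CW_opt ~ n * sqrt(2 * Tc): a commonly cited approximation (assumption;
    # the paper derives the optimal average window analytically).
    return max(16, round(n_stations * (2 * collision_slots) ** 0.5))

for n in (5, 10, 20, 50):   # hypothetical contention levels
    print(n, approx_optimal_cw(n, collision_slots=50))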

1,436 citations


Journal ArticleDOI
TL;DR: It is shown that the group key management service, using any of the three rekeying strategies, is scalable to large groups with frequent joins and leaves, and the average measured processing time per join/leave increases linearly with the logarithm of group size.
Abstract: Many emerging network applications are based upon a group communications model. As a result, securing group communications, i.e., providing confidentiality, authenticity, and integrity of messages delivered between group members, will become a critical networking issue. We present, in this paper, a novel solution to the scalability problem of group/multicast key management. We formalize the notion of a secure group as a triple (U,K,R) where U denotes a set of users, K a set of keys held by the users, and R a user-key relation. We then introduce key graphs to specify secure groups. For a special class of key graphs, we present three strategies for securely distributing rekey messages after a join/leave and specify protocols for joining and leaving a secure group. The rekeying strategies and join/leave protocols are implemented in a prototype key server we have built. We present measurement results from experiments and discuss performance comparisons. We show that our group key management service, using any of the three rekeying strategies, is scalable to large groups with frequent joins and leaves. In particular, the average measured processing time per join/leave increases linearly with the logarithm of group size.
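
The logarithmic scaling comes from organizing keys hierarchically: in a binary key tree, a departing member invalidates only the O(log n) keys on the path from its leaf to the root. A minimal sketch of that idea (illustrative only; the paper's key graphs and rekeying strategies are more general):

import os

class KeyTree:
    """Complete binary key tree: leaves are members' individual keys,
    internal nodes are auxiliary keys, the root is the group key."""
    def __init__(self, n_members):
        self.n = n_members
        self.keys = [os.urandom(16) for _ in range(2 * n_members)]  # 1-indexed heap

    def rekey_after_leave(self, member_index):
        # Every key on the path from the member's leaf to the root is
        # compromised and must be replaced: O(log n) keys. Each new key
        # would be multicast, encrypted under that node's children keys.
        node = (self.n + member_index) // 2   # start at the leaf's parent
        replaced = []
        while node >= 1:
            self.keys[node] = os.urandom(16)
            replaced.append(node)
            node //= 2
        return replaced

tree = KeyTree(1024)
print(len(tree.rekey_after_leave(5)))  # ~log2(1024) = 10 keys replaced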

1,376 citations


Journal ArticleDOI
TL;DR: A simple analytic characterization of the steady-state send rate as a function of loss rate and round trip time for a bulk transfer TCP flow is developed; it predicts TCP send rate more accurately than previous models and is accurate over a wider range of loss rates.
Abstract: The steady-state performance of a bulk transfer TCP flow (i.e., a flow with a large amount of data to send, such as FTP transfers) may be characterized by the send rate, which is the amount of data sent by the sender in unit time. In this paper we develop a simple analytic characterization of the steady-state send rate as a function of loss rate and round trip time (RTT) for a bulk transfer TCP flow. Unlike the models of Lakshman and Madhow (see IEEE/ACM Trans. Networking, vol.5, p.336-50, 1997), Mahdavi and Floyd (1997), Mathis, Semke, Mahdavi and Ott (see Comput. Commun. Rev., vol.27, no.3, 1997) and of Ott et al., our model captures not only the behavior of the fast retransmit mechanism but also the effect of the time-out mechanism. Our measurements suggest that this latter behavior is important from a modeling perspective, as almost all of our TCP traces contained more time-out events than fast retransmit events. Our measurements demonstrate that our model is able to more accurately predict TCP send rate and is accurate over a wider range of loss rates. We also present a simple extension of our model to compute the throughput of a bulk transfer TCP flow, which is defined as the amount of data received by the receiver in unit time.
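
For reference, the closed-form approximation this model is usually quoted as, with B the send rate in packets per second, p the loss rate, b the number of packets acknowledged per ACK, T_0 the retransmission timeout, and W_max the maximum congestion window:

B(p) \approx \min\left( \frac{W_{\max}}{RTT},\; \frac{1}{RTT\sqrt{2bp/3} \,+\, T_0 \min\left(1,\, 3\sqrt{3bp/8}\right) p \left(1 + 32p^2\right)} \right)

The first denominator term reflects the fast retransmit regime; the second is the time-out correction that the earlier models above omit.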

1,192 citations


Journal ArticleDOI
TL;DR: A game theoretic framework for bandwidth allocation for elastic services in high-speed networks, based on the Nash bargaining solution from cooperative game theory, which can be used to characterize a rate allocation and a pricing policy that takes users' budgets into account in a fair way.
Abstract: In this paper, we present a game theoretic framework for bandwidth allocation for elastic services in high-speed networks. The framework is based on the idea of the Nash bargaining solution from cooperative game theory, which provides rate settings for users that are not only Pareto optimal from the point of view of the whole system, but also consistent with the fairness axioms of game theory. We first consider the centralized problem and then show that this procedure can be decentralized so that greedy optimization by users yields the system optimal bandwidth allocations. We propose a distributed algorithm for implementing the optimal and fair bandwidth allocation and provide conditions for its convergence. The paper concludes with the pricing of elastic connections based on users' bandwidth requirements and users' budget. We show that the above bargaining framework can be used to characterize a rate allocation and a pricing policy that takes users' budgets into account in a fair way while maximizing the total network revenue.
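
In generic form, the Nash bargaining solution behind this framework selects, among feasible rate vectors, the one maximizing the product of user gains above their minimum requirements; the equivalent logarithmic form is separable across users, which is what makes the decentralized implementation possible. (Here MR_i denotes user i's minimum required rate and C_l link l's capacity; the notation is assumed, and the paper's exact constraint set may differ.)

\max_{x} \prod_i (x_i - MR_i) \;\Longleftrightarrow\; \max_{x} \sum_i \ln(x_i - MR_i), \quad \text{s.t. } \sum_{i:\, l \in \mathrm{route}(i)} x_i \le C_l \;\; \forall l, \qquad x_i \ge MR_i \;\; \forall i.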

728 citations


Journal ArticleDOI
TL;DR: It is demonstrated that the entire optical network design problem can be considerably simplified and made computationally tractable, and that terminating the optimization within the first few iterations of the branch-and-bound method provides high-quality solutions.
Abstract: We present algorithms for the design of optimal virtual topologies embedded on wide-area wavelength-routed optical networks. The physical network architecture employs wavelength-conversion-enabled wavelength-routing switches (WRS) at the routing nodes, which allow the establishment of circuit-switched all-optical wavelength-division multiplexed (WDM) channels, called lightpaths. We assume packet-based traffic in the network, such that a packet travelling from its source to its destination may have to multihop through one or more such lightpaths. We present an exact integer linear programming (ILP) formulation for the complete virtual topology design, including choice of the constituent lightpaths, routes for these lightpaths, and intensity of packet flows through these lightpaths. By minimizing the average packet hop distance in our objective function and by relaxing the wavelength-continuity constraints, we demonstrate that the entire optical network design problem can be considerably simplified and made computationally tractable. Although an ILP may take an exponential amount of time to obtain an exact optimal solution, we demonstrate that terminating the optimization within the first few iterations of the branch-and-bound method provides high-quality solutions. We ran experiments using the CPLEX optimization package on the NSFNET topology, a subset of the PACBELL network topology, as well as a third random topology to substantiate this conjecture. Minimizing the average packet hop distance is equivalent to maximizing the total network throughput under balanced flows through the lightpaths. The problem formulation can be used to design a balanced network, such that the utilizations of both transceivers and wavelengths in the network are maximized, thus reducing the cost of the network equipment. We analyze the trade-offs in budgeting of resources (transceivers and switch sizes) in the optical network, and demonstrate how an improperly designed network may have low utilization of any one of these resources. We also use the problem formulation to provide a reconfiguration methodology in order to adapt the virtual topology to changing traffic conditions.

486 citations


Journal ArticleDOI
TL;DR: There is a surprising consistency over time in the relative amount of web traffic from the server along a path, lending stability to the TERC location solution; these techniques can be used by network providers to reduce the traffic load in their networks.
Abstract: This paper studies the problem of where to place network caches. Emphasis is given to caches that are transparent to the clients since they are easier to manage and they require no cooperation from the clients. Our goal is to minimize the overall flow or the average delay by placing a given number of caches in the network. We formulate these location problems both for general caches and for transparent en-route caches (TERCs), and identify that, in general, they are intractable. We give optimal algorithms for line and ring networks, and present closed form formulae for some special cases. We also present a computationally efficient dynamic programming algorithm for the single server case. This last case is of particular practical interest. It models a network that wishes to minimize the average access delay for a single web server. We experimentally study the effects of our algorithm using real web server data. We observe that a small number of TERCs are sufficient to reduce the network traffic significantly. Furthermore, there is a surprising consistency over time in the relative amount of web traffic from the server along a path, lending stability to our TERC location solution. Our techniques can be used by network providers to reduce traffic load in their network.
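
As a toy illustration of the objective (not the paper's polynomial-time algorithms), a brute-force search for transparent en-route cache positions on a line network, assuming every request hits the first cache on its path to the server:

from itertools import combinations

def total_flow(demands, cache_set):
    # Line network: node 0 is the server; node i's requests travel toward
    # the server and are absorbed by the first en-route cache they meet.
    # Simplifying assumption: every request hits (hit ratio of one).
    flow = 0
    for i, d in enumerate(demands):
        stop = max((c for c in cache_set if c <= i), default=0)
        flow += d * (i - stop)   # hops traversed before absorption
    return flow

def best_placement(demands, k):
    nodes = range(1, len(demands))
    return min(combinations(nodes, k), key=lambda s: total_flow(demands, s))

demands = [0, 3, 1, 4, 1, 5, 9, 2]   # hypothetical per-node request rates
print(best_placement(demands, 2))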

400 citations


Journal ArticleDOI
TL;DR: It is shown, analytically and computationally, that the performance of an optimal pricing strategy is closely matched by a suitably chosen static price that does not depend on instantaneous congestion, indicating that the easily implementable time-of-day pricing will often suffice.
Abstract: We consider a service provider (SP) who provides access to a communication network or some other form of on-line services. Users initiate calls that belong to a set of diverse service classes, differing in resource requirements, demand pattern, and call duration. The SP charges a fee per call, which can depend on the current congestion level, and which affects users' demand for calls. We provide a dynamic programming formulation of the problems of revenue and welfare maximization, and derive some qualitative properties of the optimal solution. We also provide a number of approximate approaches, together with an analysis that indicates that near-optimality is obtained for the case of many, relatively small, users. In particular, we show analytically as well as computationally, that the performance of an optimal pricing strategy is closely matched by a suitably chosen static price, which does not depend on instantaneous congestion. This indicates that the easily implementable time-of-day pricing will often suffice. Throughout, we compare the alternative formulations involving revenue or welfare maximization, respectively, and draw some qualitative conclusions.

379 citations


Journal ArticleDOI
TL;DR: A distributed power-control algorithm with active link protection (DPC/ALP) that maintains the quality of service of operational links above given thresholds at all times (link quality protection) is studied.
Abstract: A distributed power-control algorithm with active link protection (DPC/ALP) is studied in this paper. It maintains the quality of service of operational (active) links above given thresholds at all times (link quality protection). As network congestion builds up, established links sustain their quality, while incoming ones may be blocked and rejected. A suite of admission control algorithms, based on the DPC/ALP one, is also studied. They are distributed/autonomous and operate using local interference measurements. A primarily networking approach to power control is taken here, based on the concept of active link protection, which naturally supports the implementation of admission control. Extensive simulation experiments are used to explore the network dynamics and investigate basic operational effects/tradeoffs related to system performance.
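
A hedged sketch of the update rule, assuming the standard distributed power-control iteration augmented with a protection margin delta > 1 (names and values are ours; the paper's algorithm includes further machinery such as admission control):

def dpc_alp_step(power, sir, sir_target, delta=1.1):
    """One synchronous DPC/ALP-style update (sketch). Links currently at
    or above the target SIR are treated as active and protected."""
    nxt = []
    for p, g in zip(power, sir):
        if g >= sir_target:
            nxt.append(delta * (sir_target / g) * p)  # protected update
        else:
            nxt.append(delta * p)                     # new link: gradual power-up
    return nxt

# Example: three links, the third still powering up.
print(dpc_alp_step([1.0, 0.8, 0.1], sir=[12.0, 10.5, 4.0], sir_target=10.0))

The margin delta gives active links headroom, so an admitted link's quality is never dragged below threshold by newcomers that are still ramping up.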

361 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel replacement policy, called LRV, which selects for replacement the document with the lowest relative value among those in cache, and shows how LRV outperforms least recently used (LRU) and other policies and can significantly improve the performance of the cache, especially for a small one.
Abstract: In this paper, we analyze access traces to a Web proxy, looking at statistical parameters to be used in the design of a replacement policy for documents held in the cache. In the first part of this paper, we present a number of properties of the lifetime and statistics of access to documents, derived from two large trace sets coming from very different proxies and spanning over time intervals of up to five months. In the second part, we propose a novel replacement policy, called LRV, which selects for replacement the document with the lowest relative value among those in cache. In LRV, the value of a document is computed adaptively based on information readily available to the proxy server. The algorithm has no hardwired constants, and the computations associated with the replacement policy require only a small constant time. We show how LRV outperforms least recently used (LRU) and other policies and can significantly improve the performance of the cache, especially for a small one.
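
A minimal sketch of lowest-relative-value eviction; the value function below is a placeholder, whereas LRV computes a document's value adaptively from the access statistics the proxy already collects:

def evict_lowest_relative_value(cache, now):
    """cache maps url -> (size_bytes, last_access, access_count).
    Evicts the document with the lowest relative value."""
    def value(size, last_access, access_count):
        # Placeholder: re-access likelihood proxied by count and recency,
        # divided by size so small documents are worth keeping longer.
        # LRV instead estimates this adaptively from trace statistics.
        return access_count / ((1.0 + now - last_access) * size)
    victim = min(cache, key=lambda url: value(*cache[url]))
    del cache[victim]
    return victim

cache = {"/a": (10_000, 90.0, 4), "/b": (500, 10.0, 1), "/c": (2_000, 95.0, 2)}
print(evict_lowest_relative_value(cache, now=100.0))  # evicts "/b"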

Journal ArticleDOI
TL;DR: An algorithmic framework is established that characterizes a variety of dynamic SPT algorithms, including dynamic versions of the well-known Dijkstra, Bellman-Ford, and D'Esopo-Pape algorithms, and establishes proofs of correctness for these algorithms in a unified way.
Abstract: The open shortest path first (OSPF) and IS-IS routing protocols widely used in today's Internet compute a shortest path tree (SPT) from each router to other routers in a routing area. Many existing commercial routers recompute an SPT from scratch following changes in the link states of the network. Such recomputation of an entire SPT is inefficient and may consume a considerable amount of CPU time. Moreover, as there may coexist multiple SPTs in a network with a set of given link states, recomputation from scratch causes frequent unnecessary changes in the topology of an existing SPT and may lead to routing instability. We present new dynamic SPT algorithms that make use of the structure of the previously computed SPT. Besides efficiency, our algorithm design objective is to achieve routing stability by making minimum changes to the topology of an existing SPT (while maintaining the shortest path property) when some link states in the network have changed. We establish an algorithmic framework that allows us to characterize a variety of dynamic SPT algorithms, including dynamic versions of the well-known Dijkstra, Bellman-Ford, and D'Esopo-Pape algorithms, and to establish proofs of correctness for these algorithms in a unified way. The theoretical asymptotic complexity of our new dynamic algorithms matches the best known results in the literature.
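
A minimal sketch of the dynamic idea for the easy case of a link weight decrease: instead of recomputing from scratch, only the affected part of the tree is repaired (an assumed illustration; the paper's framework also handles weight increases, and does so while minimizing topology changes):

import heapq

def on_weight_decrease(graph, dist, parent, u, v, w_new):
    """graph: dict node -> list of (neighbor, weight), assumed already
    updated with the new weight of (u, v). dist/parent encode the
    current shortest path tree; only improved nodes are reprocessed."""
    if dist[u] + w_new >= dist[v]:
        return                          # the tree is unaffected
    dist[v], parent[v] = dist[u] + w_new, u
    heap = [(dist[v], v)]
    while heap:
        d, x = heapq.heappop(heap)
        if d > dist[x]:
            continue                    # stale heap entry
        for y, w in graph[x]:
            if dist[x] + w < dist[y]:   # propagate the improvement
                dist[y], parent[y] = dist[x] + w, x
                heapq.heappush(heap, (dist[y], y))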

Journal ArticleDOI
TL;DR: This paper examines the definition and support of the anycasting paradigm at the application layer, providing a service that uses an anycast resolver to map an anycast domain name and selection criteria into an IP address, and shows that selecting a server using the architecture and estimation technique can improve the client response time by a factor of two over nearest server selection and by a factor of four over random server selection.
Abstract: Server replication improves the ability of a service to handle a large number of clients. One of the important factors in the efficient utilization of replicated servers is the ability to direct client requests to the "best" server, according to some optimality criteria. In the anycasting communication paradigm, a sender communicates with a receiver chosen from an anycast group of equivalent receivers. As such, anycasting is well suited to the problem of directing clients to replicated servers. This paper examines the definition and support of the anycasting paradigm at the application layer, providing a service that uses an anycast resolver to map an anycast domain name and selection criteria into an IP address. By realizing anycasting in the application layer, we achieve flexibility in the optimization criteria and ease the deployment of the service. As a case study, we examine the performance of our system for a key service: replicated Web servers. To this end, we develop an approach for estimating the response time that a client will experience when accessing given servers. Such information is maintained in the anycast resolver that clients query to obtain the identity of the server with the best estimated response time. Our performance collection technique combines server push with resolver probes to estimate the expected response time without undue overhead. Our experiments show that selecting a server using our architecture and estimation technique can improve the client response time by a factor of two over nearest server selection and by a factor of four over random server selection.

Journal ArticleDOI
TL;DR: Two OADM ring networks are given that have similar performance but are less expensive, and two others are considered that are nonblocking: one has a wide-sense nonblocking property and the other a rearrangeably nonblocking property.
Abstract: We provide network designs for optical add-drop wavelength-division-multiplexed (OADM) rings that minimize overall network cost, rather than just the number of wavelengths needed. The network cost includes the cost of the transceivers required at the nodes as well as the number of wavelengths. The transceiver cost includes the cost of terminating equipment as well as higher-layer electronic processing equipment, which in practice can dominate over the cost of the number of wavelengths in the network. The networks support dynamic (i.e., time-varying) traffic streams that are at lower rates (e.g., OC-3, 155 Mb/s) than the lightpath capacities (e.g., OC-48, 2.5 Gb/s). A simple OADM ring is the point-to-point ring, where traffic is transported on WDM links optically, but switched through nodes electronically. Although the network is efficient in using link bandwidth, it has high electronic and opto-electronic processing costs. Two OADM ring networks are given that have similar performance but are less expensive. Two other OADM ring networks are considered that are nonblocking, where one has a wide-sense nonblocking property and the other has a rearrangeably nonblocking property. All the networks are compared using the cost criteria of number of wavelengths and number of transceivers.

Journal ArticleDOI
TL;DR: This paper describes the "explicit rate indication for congestion avoidance" (ERICA) scheme for rate-based feedback from asynchronous transfer mode (ATM) switches; the scheme is designed to achieve high link utilization with low delays and fast transient response, and is fair and robust to measurement errors caused by variations in ABR demand and capacity.
Abstract: This paper describes the "explicit rate indication for congestion avoidance" (ERICA) scheme for rate-based feedback from asynchronous transfer mode (ATM) switches. In ERICA, the switches monitor their load on each link and determine a load factor, the available capacity, and the number of currently active virtual channels. This information is used to advise the sources about the rates at which they should transmit. The algorithm is designed to achieve high link utilization with low delays and fast transient response. It is also fair and robust to measurement errors caused by the variations in ABR demand and capacity. We present a performance analysis of the scheme using both analytical arguments and simulation results. The scheme is being considered for implementation by several ATM switch manufacturers.
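
A sketch of the per-interval explicit rate computation as ERICA is commonly described (parameter names and the utilization target value are assumptions):

def erica_explicit_rate(input_rate, link_capacity, n_active_vcs,
                        vc_current_rate, utilization_target=0.9):
    """One switch's per-measurement-interval computation (sketch)."""
    target = utilization_target * link_capacity   # capacity offered to ABR
    load_factor = input_rate / target             # > 1 means overload
    fair_share = target / n_active_vcs
    vc_share = vc_current_rate / load_factor      # scales each VC toward load 1
    return min(max(fair_share, vc_share), target)

The computed rate would then be stamped into backward resource management cells, taking the minimum with the rate already recorded there by other switches on the path.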

Journal ArticleDOI
TL;DR: This study shows that using the proposed algorithms, lower bounds on the number of wavelengths and S-ADMs required for a given traffic pattern can be closely approached in most cases or even achieved in some cases.
Abstract: In high-speed SONET rings with point-to-point WDM links, the cost of SONET add-drop multiplexers (S-ADMs) can be dominantly high. However, by grooming traffic (i.e., multiplexing lower-rate streams) appropriately and using wavelength ADMs (WADMs), the number of S-ADMs can be dramatically reduced. In this paper, we propose optimal or near-optimal algorithms for traffic grooming and wavelength assignment to reduce both the number of wavelengths and the number of S-ADMs. The algorithms proposed are generic in that they can be applied to both unidirectional and bidirectional rings having an arbitrary number of nodes under both uniform and nonuniform (i.e., arbitrary) traffic with an arbitrary grooming factor. Some lower bounds on the number of wavelengths and S-ADMs required for a given traffic pattern are derived, and used to determine the optimality of the proposed algorithms. Our study shows that using the proposed algorithms, these lower bounds can be closely approached in most cases or even achieved in some cases. In addition, even when using a minimum number of wavelengths, the savings in S-ADMs due to traffic grooming (and the use of WADMs) are significant, especially for large networks.

Journal ArticleDOI
TL;DR: A high-performance, modular, extended services router software architecture in the NetBSD operating system kernel that allows code modules, called plugins, to be dynamically added and configured at run time and can forward packets up to three times faster than the best effort kernel.
Abstract: Present-day Internet protocol routers typically employ monolithic operating systems that are not easily upgradable and extensible. With the rapid rate of protocol development, it is becoming increasingly important to dynamically upgrade router software in an incremental fashion. We have designed and implemented a high-performance, modular, extended services router software architecture in the NetBSD operating system kernel. This architecture allows code modules, called plugins, to be dynamically added and configured at run time. One of the novel features of our design is the ability to bind different plugins to individual flows; this allows for distinct plugin implementations to seamlessly coexist in the same runtime environment. We achieve high performance through a carefully designed modular architecture, an innovative packet classification algorithm that is highly efficient, and by caching that exploits the flow-like characteristics of Internet traffic. Compared to a monolithic best effort kernel, our implementation requires an average increase in packet processing overhead of only 8%, or 600 cycles per packet when running on an Intel Pentium Pro at 233 MHz. By shortcutting the forwarding loop based on the per-flow state we establish, we can forward packets up to three times faster than the best effort kernel.

Journal ArticleDOI
TL;DR: An optimization technique, called connection splicing, is introduced that can be applied to a TCP forwarder and improves TCP forwarding performance by a factor of two to four, making it competitive with IP router performance on the same hardware.
Abstract: A TCP forwarder is a network node that establishes and forwards data between a pair of TCP connections. An example of a TCP forwarder is a firewall that places a proxy between a TCP connection to an external host and a TCP connection to an internal host, controlling access to a resource on the internal host. Once the proxy approves the access, it simply forwards data from one connection to the other. We use the term TCP forwarding to describe indirect TCP communication via a proxy in general. This paper characterizes the behavior of TCP forwarding, and illustrates the role TCP forwarding plays in common network services like firewalls and HTTP proxies. We then introduce an optimization technique, called connection splicing, that can be applied to a TCP forwarder, and report the results of a performance study designed to evaluate its impact. Connection splicing improves TCP forwarding performance by a factor of two to four, making it competitive with IP router performance on the same hardware.
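
A user-space TCP forwarder, the baseline that connection splicing optimizes away, looks essentially like this sketch (illustrative only; splicing itself happens below the socket layer, inside the kernel):

import socket, threading

def pipe(src, dst):
    # Copy bytes in one direction until EOF, then half-close the peer.
    while (data := src.recv(4096)):
        dst.sendall(data)
    dst.shutdown(socket.SHUT_WR)

def tcp_forward(listen_port, server_host, server_port):
    ls = socket.socket()
    ls.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    ls.bind(("", listen_port))
    ls.listen()
    while True:
        client, _ = ls.accept()
        upstream = socket.create_connection((server_host, server_port))
        # A firewall or HTTP proxy would apply its access checks here,
        # before blindly relaying bytes in both directions.
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

Every relayed byte here crosses the kernel boundary twice; splicing removes those crossings by joining the two connections at the IP layer once the proxy has approved the access.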

Journal ArticleDOI
TL;DR: It is proved that the total cost function is a convex function of the threshold, and an efficient algorithm is proposed to find the optimal threshold directly.
Abstract: We study a dynamic mobility management scheme: the movement-based location update scheme. An analytical model is applied to formulate the costs of location update and paging in the movement-based location update scheme. The problem of minimizing the total cost is formulated as an optimization problem that finds the optimal threshold in the movement-based location update scheme. We prove that the total cost function is a convex function of the threshold. Based on the structure of the optimal solution, an efficient algorithm is proposed to find the optimal threshold directly. Furthermore, the proposed algorithm is applied to study the effects of changing important parameters of mobility and calling patterns numerically.
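
Because the total cost is convex in the movement threshold, a simple scan that stops at the first increase finds the optimum; a toy sketch with placeholder cost shapes (the paper derives the actual cost functions analytically and exploits the structure more directly):

def optimal_threshold(update_cost, paging_cost, d_max=256):
    # Convexity: the total cost falls and then rises, so we may stop at
    # the first threshold whose cost exceeds its predecessor's.
    best_d, best_c = 1, update_cost(1) + paging_cost(1)
    for d in range(2, d_max + 1):
        c = update_cost(d) + paging_cost(d)
        if c > best_c:
            break
        best_d, best_c = d, c
    return best_d

# Placeholder shapes: updates get cheaper and paging dearer as the
# threshold d grows (hypothetical functions, for illustration only).
print(optimal_threshold(lambda d: 100.0 / d, lambda d: 2.0 * d * d))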

Journal ArticleDOI
TL;DR: This work proves DCUR's correctness by showing that it is always capable of constructing a loop-free delay-constrained path within finite time, if such a path exists.
Abstract: We study the NP-hard delay-constrained least cost (DCLC) path problem. A solution to this problem is needed to provide real-time communication service to connection-oriented applications, such as video and voice. We propose a simple, distributed heuristic solution, called the delay-constrained unicast routing (DCUR) algorithm. DCUR requires limited network state information to be kept at each node: a cost vector and a delay vector. We prove DCUR's correctness by showing that it is always capable of constructing a loop-free delay-constrained path within finite time, if such a path exists. The worst case message complexity of DCUR is O(|V|^2) messages, where |V| is the number of nodes. However, simulation results show that, on average, DCUR requires far fewer messages. Therefore, DCUR scales well to large networks. We also use simulation to compare DCUR to the optimal algorithm, and to the least delay path algorithm. Our results show that DCUR's path costs are within 10% of those of the optimal solution.

Journal ArticleDOI
TL;DR: The results demonstrate that the proposed proxy-server-based approach provides an effective and scalable solution to the problem of end-to-end video delivery over WANs.
Abstract: Real-time distribution of stored video over wide-area networks (WANs) is a crucial component of many emerging distributed multimedia applications. The heterogeneity in the underlying network environments is an important factor that must be taken into consideration when designing an end-to-end video delivery system. We present a novel approach to the problem of end-to-end video delivery over WANs using proxy servers situated between local-area networks (LANs) and a backbone WAN. A major objective of our approach is to reduce the backbone WAN bandwidth requirement. Toward this end, we develop an effective video delivery technique called video staging via intelligent utilization of the disk bandwidth and storage space available at proxy servers. Using this video staging technique, only part of a video stream is retrieved directly from the central video server across the backbone WAN whereas the rest of the video stream is delivered to users locally from proxy servers attached to the LANs. In this manner, the WAN bandwidth requirement can be significantly reduced, particularly when a large number of users from the same LAN access the video data. We design several video staging methods and evaluate their effectiveness in trading the disk bandwidth of a proxy server for the backbone WAN bandwidth. We also develop two heuristic algorithms to solve the problem of designing a multiple video staging scheme for a proxy server with a given video access profile of a LAN. Our results demonstrate that the proposed proxy-server-based approach provides an effective and scalable solution to the problem of end-to-end video delivery over WANs.

Journal ArticleDOI
TL;DR: Using simulations that incorporate multilayered video codecs, it is demonstrated that SAMM algorithms can exhibit better scalability and responsiveness to congestion than algorithms that are not source-adaptive.
Abstract: Layered transmission of data is often recommended as a solution to the problem of varying bandwidth constraints in multicast video applications. Multilayered encoding, however, is not sufficient to provide high video quality and high network utilization, since bandwidth constraints frequently change over time. Adaptive techniques capable of adjusting the rates of video layers are required to maximize video quality and network utilization. We define a class of algorithms known as source-adaptive multilayered multicast (SAMM) algorithms. In SAMM algorithms, the source uses congestion feedback to adjust the number of generated layers and the bit rate of each layer. We contrast two specific SAMM algorithms: an end-to-end algorithm, in which only end systems monitor available bandwidth and report the amount of available bandwidth to the source, and a network-based algorithm, in which intermediate nodes also monitor and report available bandwidth. Using simulations that incorporate multilayered video codecs, we demonstrate that SAMM algorithms can exhibit better scalability and responsiveness to congestion than algorithms that are not source-adaptive. We also study the performance trade-offs between end-to-end and network-based SAMM algorithms.

Journal ArticleDOI
TL;DR: This study aims at showing that having a global common time reference, together with time-driven priority (TDP) and VBR MPEG video encoding, provides adequate end-to-end delay, which is below 10 ms, independent of the instantaneous network load, and independent of the connection rate.
Abstract: Videoconferencing is an important global application: it enables people around the globe to interact when distance separates them. In order for the participants in a videoconference call to interact naturally, the end-to-end delay should be below human perception; even though an objective and unique figure cannot be set, 100 ms is widely recognized as the desired one-way delay requirement for interaction. Since the global propagation delay can be about 100 ms, the actual end-to-end delay budget available to the system designer (excluding propagation delay) can be no more than 10 ms. We identify the components of the end-to-end delay in various configurations with the objective of understanding how it can be kept below the desired 10-ms bound. We analyze these components step-by-step through six system configurations obtained by combining three generic network architectures with two video encoding schemes. We study the transmission of raw video and variable bit rate (VBR) MPEG video encoding over (1) circuit switching; (2) synchronous packet switching; and (3) asynchronous packet switching. In addition, we show that constant bit rate (CBR) MPEG encoding delivers unacceptable delay, on the order of the group of pictures (GOP) time interval, when maximizing the quality for static scenes. This study aims at showing that having a global common time reference, together with time-driven priority (TDP) and VBR MPEG video encoding, provides adequate end-to-end delay, which is (1) below 10 ms; (2) independent of the network's instantaneous load; and (3) independent of the connection rate. The resulting end-to-end delay (excluding propagation delay) can be smaller than the video frame period, which is better than what can be obtained with circuit switching.

Journal ArticleDOI
TL;DR: A novel scheduling algorithm called hierarchical fair service curve (H-FSC) is proposed that approximates the model closely and efficiently; it ensures real-time and priority services while trying to minimize the discrepancy between the actual service provided to interior classes and the service defined for them by the FSC link-sharing model.
Abstract: We study hierarchical resource management models and algorithms that support both link-sharing and guaranteed real-time services with priority (decoupled delay and bandwidth allocation). We extend the service curve based quality of service (QoS) model, which defines both delay and bandwidth requirements of a class in a hierarchy, to include fairness, which is important for the integration of real-time and hierarchical link-sharing services. The resulting fair service curve (FSC) link-sharing model formalizes the goals of link-sharing, real-time and priority services and exposes the fundamental trade-offs between these goals. In particular, with decoupled delay and bandwidth allocation, it is impossible to simultaneously provide guaranteed real-time service and achieve perfect link-sharing. We propose a novel scheduling algorithm called hierarchical fair service curve (H-FSC) that approximates the model closely and efficiently. The algorithm always guarantees the service curves of leaf classes, thus ensuring real-time and priority services, while trying to minimize the discrepancy between the actual services provided to the interior classes and the services defined for them by the FSC link-sharing model. We have implemented the H-FSC scheduler in NetBSD. By performing analyses, simulations, and measurement experiments, we evaluate the link-sharing and real-time performance of H-FSC, and determine the computation overhead.

Journal ArticleDOI
TL;DR: This work considers the cases of single and multiplexed traffic streams, derives the exact packet-loss rate (PLR) due to buffer overflow at the sender side of the wireless link, and obtains a good approximation using the Chernoff-dominant eigenvalue (CDE) approach.
Abstract: Providing quality-of-service (QoS) guarantees over wireless packet networks poses a host of technical challenges that are not present in wireline networks. One of the key issues is how to account for the characteristics of the time-varying wireless channel and for the impact of link-layer error control in the provisioning of packet-level QoS. We accommodate both aspects in analyzing the packet-loss performance over a wireless link. We consider the cases of single and multiplexed traffic streams. The link capacity fluctuates according to a fluid version of the Gilbert-Elliott channel model. Traffic sources are modeled as on-off fluid processes. For the single-stream case, we derive the exact packet-loss rate (PLR) due to buffer overflow at the sender side of the wireless link. We also obtain a closed-form approximation for the corresponding wireless effective bandwidth. In the case of multiplexed streams, we obtain a good approximation for the PLR using the Chernoff-dominant eigenvalue (CDE) approach. Our analysis is then used to study the optimal forward error correction code rate that guarantees a given PLR while minimizing the allocated bandwidth. Numerical results and simulations are used to verify the adequacy of our analysis and to study the impact of error control on the allocation of bandwidth for guaranteed packet-loss performance.
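
A discrete-time caricature of the setup, useful only to make the model concrete: an on-off source feeds a finite buffer drained at a two-state (Gilbert-Elliott) channel rate. All parameter values are hypothetical, and the paper's results come from fluid analysis, not simulation:

import random

def simulate_plr(steps=200_000, buf=50.0, c_good=1.0, c_bad=0.2,
                 p_gb=0.01, p_bg=0.10, peak_rate=0.9,
                 p_on_off=0.05, p_off_on=0.05, seed=1):
    rng = random.Random(seed)
    q = lost = offered = 0.0
    good = on = True
    for _ in range(steps):
        # Two-state channel and on-off source evolve independently.
        good = (rng.random() >= p_gb) if good else (rng.random() < p_bg)
        on = (rng.random() >= p_on_off) if on else (rng.random() < p_off_on)
        arrivals = peak_rate if on else 0.0
        offered += arrivals
        q = max(0.0, q + arrivals - (c_good if good else c_bad))
        if q > buf:                  # overflow at the sender-side buffer
            lost += q - buf
            q = buf
    return lost / offered            # fluid loss rate

print(simulate_plr())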

Journal ArticleDOI
TL;DR: An iterative path decomposition algorithm is presented to evaluate accurately and efficiently the blocking performance of such networks with and without wavelength converters, representing a simple and computationally efficient solution to the difficult problem of computing call-blocking probabilities in wavelength-routing networks.
Abstract: We study a class of circuit-switched wavelength-routing networks with fixed or alternate routing and with random wavelength allocation. We present an iterative path decomposition algorithm to evaluate accurately and efficiently the blocking performance of such networks with and without wavelength converters. Our iterative algorithm analyzes the original network by decomposing it into single-path subsystems. These subsystems are analyzed in isolation, and the individual results are appropriately combined to obtain a solution for the overall network. To analyze individual subsystems, we first construct an exact Markov process that captures the behavior of a path in terms of wavelength use. We also obtain an approximate Markov process which has a closed-form solution that can be computed efficiently for short paths. We then develop an iterative algorithm to analyze approximately arbitrarily long paths. The path decomposition approach naturally captures the correlation of both link loads and link blocking events. Our algorithm represents a simple and computationally efficient solution to the difficult problem of computing call-blocking probabilities in wavelength-routing networks. We also demonstrate how our analytical techniques can be applied to gain insight into the problem of converter placement in wavelength-routing networks.

Journal ArticleDOI
TL;DR: The model is compared with simulations, the accuracy of the asymptotic approximations is examined, the increase in bandwidth needed to satisfy the tail-probability performance objective as compared with the mean objective is quantified, and regimes where statistical gain can and cannot be realized are shown.
Abstract: Simple and robust engineering rules for dimensioning bandwidth for elastic data traffic are derived for a single bottleneck link via normal approximations for a closed-queueing network (CQN) model in heavy traffic. Elastic data applications adapt to available bandwidth via a feedback control such as the transmission control protocol (TCP) or the available bit rate transfer capability in asynchronous transfer mode. The dimensioning rules satisfy a performance objective based on the mean or tail probability of the per-flow bandwidth. For the mean objective, we obtain a simple expression for the effective bandwidth of an elastic source. We provide a new derivation of the normal approximation in CQNs using more accurate asymptotic expansions and give an explicit estimate of the error in the normal approximation. A CQN model was chosen to obtain the desirable property that the results depend on the distribution of the file sizes only via the mean, and not the heavy-tail characteristics. We view the exogenous "load" in terms of the file sizes and consider the resulting flow of packets as dependent on the presence of other flows and the closed-loop controls. We compare the model with simulations, examine the accuracy of the asymptotic approximations, quantify the increase in bandwidth needed to satisfy the tail-probability performance objective as compared with the mean objective, and show regimes where statistical gain can and cannot be realized.

Journal ArticleDOI
TL;DR: In this paper, the problem of how to communicate securely with a set of users (the target set) over an insecure broadcast channel is addressed, and the problem is solved by f-redundant establishment key allocations, which guarantee that the total number of recipients is no more than f times the number of intended recipients.
Abstract: The problem we address is how to communicate securely with a set of users (the target set) over an insecure broadcast channel. This problem occurs in two application domains: satellite/cable pay TV and the Internet MBone. In these systems, the parameters of major concern are the number of key transmissions and the number of keys held by each receiver. In the Internet domain, previous schemes suggest building a separate key tree for each multicast program, thus incurring a setup cost of at least k log k per program for target sets of size k. In the pay TV domain, a single key structure is used for all programs, but known theoretical bounds show that either very long transmissions are required, or that each receiver needs to keep prohibitively many keys. Our approach is targeted at both domains. Our schemes maintain a single key structure that requires each receiver to keep only a logarithmic number of establishment keys for its entire lifetime. At the same time our schemes admit low numbers of transmissions. In order to achieve these goals, and to break away from the theoretical bounds, we allow a controlled number of users outside the target set to occasionally receive the multicast. This relaxation is appropriate for many scenarios in which the encryption is used to force consumers to pay for a service, rather than to withhold sensitive information. For this purpose, we introduce f-redundant establishment key allocations, which guarantee that the total number of recipients is no more than f times the number of intended recipients. We measure the performance of such schemes by the number of key transmissions they require, by their redundancy f, and by the probability that a user outside the target set (a free-rider) will be able to decrypt the multicast. We prove a new lower bound, present several new establishment key allocations, and evaluate our schemes' performance by extensive simulation.

Journal ArticleDOI
TL;DR: An on-line version of the Abry-Veitch wavelet-based estimator of the Hurst parameter has very low memory and computational requirements and scales naturally to arbitrarily high data rates, enabling its use in real-time applications such as admission control, and avoiding the need to store huge data sets for off-line analysis.
Abstract: An on-line version of the Abry-Veitch (see IEEE GLOBECOM'98, Sydney, Australia, p.3716-21, 1998) wavelet-based estimator of the Hurst parameter is presented. It has very low memory and computational requirements and scales naturally to arbitrarily high data rates, enabling its use in real-time applications such as admission control, and avoiding the need to store huge data sets for off-line analysis. The performance of the estimator as a function of the length of data processed is demonstrated using simulated data. An implementation for 10-Mb/s Ethernet based on standard hardware supporting sampling rates of 1 data point per millisecond is described, and results of its operation are presented, as is an implementation for 155-Mb/s asynchronous transfer mode networks. Finally, we illustrate the power of on-line measurements by collecting measurements over a period of five months, and using them to look for diurnal trends in scaling properties of the data.
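
The estimator's core, shown here in its off-line form: for long-range-dependent traffic the energy of wavelet detail coefficients at octave j scales as 2^(j(2H-1)), so H falls out of a regression on a log-log plot. A minimal Haar-wavelet sketch (unweighted fit for brevity; Abry-Veitch weight the regression and correct its bias, and the on-line version updates the per-octave sums incrementally rather than storing the data):

import numpy as np

def hurst_wavelet(x, octaves=8):
    x = np.asarray(x, dtype=float)
    js, log_energy = [], []
    for j in range(1, octaves + 1):
        n = len(x) // 2
        if n < 2:
            break
        detail = (x[0:2*n:2] - x[1:2*n:2]) / np.sqrt(2)  # Haar details
        x = (x[0:2*n:2] + x[1:2*n:2]) / np.sqrt(2)       # Haar approximation
        js.append(j)
        log_energy.append(np.log2(np.mean(detail ** 2)))
    slope = np.polyfit(js, log_energy, 1)[0]  # log2 E[d_j^2] ~ j(2H-1)
    return (slope + 1) / 2

rng = np.random.default_rng(0)
print(hurst_wavelet(rng.standard_normal(1 << 14)))  # ~0.5 for white noise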

Journal ArticleDOI
TL;DR: This work analyzes three types of failure propagation, called "bottleneck," "connectivity," and "multiple groups," and presents a solution based on the definition of appropriate requirements at network design time and a WDM channel placement algorithm, protection interoperability for WDM (PIW).
Abstract: The failure of a single optical link or node in a wavelength division multiplexing (WDM) network may cause the simultaneous failure of several optical channels. In some cases, this simultaneity may make it impossible for the higher level (SONET or IP) to restore service. This occurs when the higher level is not aware of the internal details of network design at the WDM level. We call this phenomenon "failure propagation." We analyze three types of failure propagation, called "bottleneck," "connectivity," and "multiple groups." Then we present a solution based on the definition of appropriate requirements at network design time and a WDM channel placement algorithm, protection interoperability for WDM (PIW). Our method does not require the higher level to be aware of WDM internals, but still avoids the three types of failure propagation mentioned above. Finally, we show results for various network examples.