
Showing papers in "IEEE Network in 2000"


Journal Article•DOI•
Christophe Diot1, Brian Neil Levine, Bryan Lyles, H. Kassem, D. Balensiefen •
TL;DR: This work examines the issues that have limited the commercial deployment of IP multicast from the viewpoint of carriers, analyzes where the model fails and what it does not offer, and discusses requirements for successful deployment of multicast services.
Abstract: IP multicast offers the scalable point-to-multipoint delivery necessary for using group communication applications on the Internet. However, the IP multicast service has seen slow commercial deployment by ISPs and carriers. The original service model was designed without a clear understanding of commercial requirements or a robust implementation strategy. The very limited number of applications and the complexity of the architectural design-which we believe is a consequence of the open service model-have deterred widespread deployment as well. We examine the issues that have limited the commercial deployment of IP multicast from the viewpoint of carriers. We analyze where the model fails and what it does not offer, and we discuss requirements for successful deployment of multicast services.

861 citations


Journal Article•DOI•
Martin Arlitt, Tai Jin1•
TL;DR: It is found that improvements in the caching architecture of the World Wide Web are changing the workloads of Web servers, but major improvements to that architecture are still necessary.
Abstract: This article presents a detailed workload characterization study of the 1998 World Cup Web site. Measurements from this site were collected over a three-month period. During this time the site received 1.35 billion requests, making this the largest Web workload analyzed to date. By examining this extremely busy site and through comparison with existing characterization studies, we are able to determine how Web server workloads are evolving. We find that improvements in the caching architecture of the World Wide Web are changing the workloads of Web servers, but major improvements to that architecture are still necessary. In particular, we uncover evidence that a better consistency mechanism is required for World Wide Web caches.

743 citations


Journal Article•DOI•
TL;DR: This article discusses traffic engineering with multiprotocol label switching (MPLS) in an Internet service provider's network, and discusses how to provide QoS in a network with MPLS.
Abstract: This article discusses traffic engineering with multiprotocol label switching (MPLS) in an Internet service provider's network. We first review MPLS, constraint-based routing, and enhanced link state interior gateway protocols to provide a background for traffic engineering. We then discuss the general issues of designing an MPLS system for traffic engineering. The design of GlobalCenter's MPLS system is presented. Based on our experiences, a generic procedure for deploying an MPLS system is proposed. We also discuss how to provide QoS in a network with MPLS. Putting these together, we present to readers the practical issues of traffic engineering and a working solution for traffic engineering with MPLS in the Internet.

451 citations


Journal Article•DOI•
TL;DR: This article takes a look at the techniques used to achieve survivability in traditional optical networks, and how those techniques are evolving to make next-generation WDM networks survivable.
Abstract: Survivability, the ability of a network to withstand and recover from failures, is one of the most important requirements of networks. Its importance is magnified in fiber optic networks with throughputs on the order of gigabits and terabits per second. This article takes a look at the techniques used to achieve survivability in traditional optical networks, and how those techniques are evolving to make next-generation WDM networks survivable.

445 citations


Journal Article•DOI•
TL;DR: This work presents a tutorial-cum-survey of the various multicast routing algorithms and their relationship with multicast routing protocols for packet-switched wide-area networks.
Abstract: Multicasting is the ability of a communication network to accept a single message from an application and to deliver copies of the message to multiple recipients at different locations. There has been an explosion of research literature on multicast communication. This work presents a tutorial-cum-survey of the various multicast routing algorithms and their relationship with multicast routing protocols for packet-switched wide-area networks. Our contribution should be of particular benefit to the generic networking audience (and, to a lesser extent, to the expert on this subject).

295 citations


Journal Article•DOI•
Bin Wang1, J.C. Hou•
TL;DR: This article classifies multicast routing problems according to their optimization functions and performance constraints, presents basic routing algorithms in each problem class, and discusses their strengths and weaknesses.
Abstract: Multicast services have been increasingly used by various continuous media applications. The QoS requirements of these continuous media applications prompt the necessity for QoS-driven, constraint-based multicast routing. This article provides a comprehensive overview of existing multicast routing algorithms, protocols, and their QoS extension. In particular, we classify multicast routing problems according to their optimization functions and performance constraints, present basic routing algorithms in each problem class, and discuss their strengths and weaknesses. We also categorize existing multicast routing protocols, outline the issues and challenges in providing QoS in multicast routing, and point out possible future research directions.

284 citations


Journal Article•DOI•
TL;DR: The past, present, and future of multicast are described; how the emphasis has been on developing and refining intradomain multicast routing protocols; and how multicast is being deployed in both Internet2 networks and the commodity Internet.
Abstract: Multicast communication-the one-to-many or many-to-many delivery of data-is a hot topic. It is of interest in the research community, among standards groups, and to network service providers. For all the attention multicast has received, there are still issues that have not been completely resolved. One result is that protocols are still evolving, and some standards are not yet finished. From a deployment perspective, the lack of standards has slowed progress, but efforts to deploy multicast as an experimental service are in fact gaining momentum. The question now is how long it will be before multicast becomes a true Internet service. The goal of this article is to describe the past, present, and future of multicast. Starting with the Multicast Backbone (MBone), we describe how the emphasis has been on developing and refining intradomain multicast routing protocols. Starting in the middle to late 1990s, particular emphasis has been placed on developing interdomain multicast routing protocols. We provide a functional overview of the currently deployed solution. The future of multicast may hinge on several research efforts that are working to make the provision of multicast less complex by fundamentally changing the multicast model. We survey these efforts. Finally, attempts are being made to deploy native multicast routing in both Internet2 networks and the commodity Internet. We examine how multicast is being deployed in these networks.

279 citations


Journal Article•DOI•
TL;DR: A survey on location management algorithms for next-generation personal communications networks is presented, along with a number of open problems that need to be addressed for the deployment of next-generation PCNs.
Abstract: This article presents a survey on location management algorithms for next-generation personal communications networks. We first describe different static and dynamic location update algorithms. Then we discuss various selective paging strategies. We also present various modeling techniques that have been used for the performance analysis of location update and terminal paging. We conclude by stating a number of open problems that need to be addressed for the deployment of next-generation PCNs.

278 citations
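As a concrete illustration of one strategy in the survey's taxonomy, a distance-based dynamic location update can be sketched in a few lines; the grid-cell model, coordinates, and threshold below are illustrative assumptions, not the paper's notation:

```python
# Hypothetical sketch of a distance-based dynamic location update rule:
# the terminal reports its location only after moving more than
# `threshold` cells away from the last reported cell.

def should_update(last_reported, current, threshold):
    """Return True if the cell distance exceeds the update threshold."""
    dx = current[0] - last_reported[0]
    dy = current[1] - last_reported[1]
    # Chebyshev distance approximates hop count on a square cell grid.
    return max(abs(dx), abs(dy)) > threshold

path = [(0, 0), (1, 0), (2, 1), (3, 3)]   # invented movement trace
reported = path[0]
updates = []
for cell in path[1:]:
    if should_update(reported, cell, threshold=2):
        reported = cell
        updates.append(cell)
```

Under this rule only the last move triggers an update; the paging cost then depends on searching the cells within the threshold radius, which is the trade-off the surveyed algorithms tune.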


Journal Article•DOI•
TL;DR: A shaping scheme is described for setting the offset, an important system parameter for OBS, between the successive data bursts of a given data stream and their associated control packets; the scheme results in robust operation of the network and also facilitates traffic engineering.
Abstract: Wavelength-division multiplexing has emerged as an important physical layer technology. Optical transmission provides a physical layer capable of carrying bits at speeds on the order of a gigabit per second. Optical burst switching is proposed to overcome the shortcomings of conventional WDM deployment, such as the lack of fine bandwidth granularity in wavelength routing and electronic speed bottlenecks in SONET/SDH. We describe an architecture for an IP network over an OBS WDM transmission core. The use of an MPLS-type technique for forwarding data bursts and the inclusion of a medium access control layer between the optical (WDM) and IP layers are the key ingredients of the proposed architecture. In particular, the architecture is based on provisioning MPLS paths, also called label switched paths, of desired quality of service through the OBS WDM transmission core. The MAC layer performs various OBS-specific functions, such as burst assembly, burst scheduling, and offset setting/traffic shaping. While burst assembly and burst scheduling are relatively straightforward, we point out that the offset setting strategy has a significant impact on the performance of an IP network operating over an OBS WDM core. We describe a shaping scheme to set the offset, an important system parameter for OBS, between the successive data bursts of a given data stream (label switched path) and their associated control packets. This scheme results in robust operation of the network and also facilitates traffic engineering. Guidelines are provided for implementing various IP QoS mechanisms in the optical backbone using OBS.

270 citations
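The offset-setting idea above can be made concrete with a small sketch: in basic OBS the control packet must lead its data burst by at least the total control-processing delay along the path, and any extra offset acts as a QoS lever by widening a class's reservation horizon. The function name and parameter values below are assumptions for illustration:

```python
def base_offset(hop_count, per_hop_processing, qos_extra=0.0):
    """Offset between control packet and data burst, in seconds.

    The control packet must be processed at every intermediate node
    before the burst arrives, so the minimum offset grows linearly
    with hop count; `qos_extra` models giving a burst class a larger
    reservation horizon, a common QoS lever in OBS.
    """
    return hop_count * per_hop_processing + qos_extra

# e.g., a 5-hop path with 10 microseconds of control processing per hop
offset = base_offset(5, 10e-6)
```

A shaping scheme such as the one in the article additionally keeps this offset stable across successive bursts of a stream, rather than recomputing it per burst.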


Journal Article•DOI•
Anja Feldmann1, Albert Greenberg2, Carsten Lund2, Nicholas Reingold2, Jennifer Rexford2 •
TL;DR: The AT&T Labs NetScope tool is described, a unified set of software tools for managing the performance of IP backbone networks to generate global views of the network on the basis of configuration and usage data associated with the individual network elements.
Abstract: Managing large IP networks requires an understanding of the current traffic flows, routing policies, and network configuration. However, the state of the art for managing IP networks involves manual configuration of each IP router, and traffic engineering based on limited measurements. The networking industry is sorely lacking in software systems that a large Internet service provider can use to support traffic measurement and network modeling, the underpinnings of effective traffic engineering. This article describes the AT&T Labs NetScope, a unified set of software tools for managing the performance of IP backbone networks. The key idea behind NetScope is to generate global views of the network on the basis of configuration and usage data associated with the individual network elements. Having created an appropriate global view, we are able to infer and visualize the networkwide implications of local changes in traffic, configuration, and control. Using NetScope, a network provider can experiment with changes in network configuration in a simulated environment rather than the operational network. In addition, the tool provides a sound framework for additional modules for network optimization and performance debugging. We demonstrate the capabilities of the tool through an example traffic engineering exercise of locating a heavily loaded link, identifying which traffic demands flow on the link, and changing the configuration of intradomain routing to reduce the congestion.

269 citations


Journal Article•DOI•
TL;DR: This work examines the seminal work, early products, and a sample of contemporary commercial offerings in the field of transparent Web server clustering, and broadly classifies transparent server clustering into three categories.
Abstract: The exponential growth of the Internet, coupled with the increasing popularity of dynamically generated content on the World Wide Web, has created the need for more and faster Web servers capable of serving the over 100 million Internet users. Server clustering has emerged as a promising technique to build scalable Web servers. We examine the seminal work, early products, and a sample of contemporary commercial offerings in the field of transparent Web server clustering. We broadly classify transparent server clustering into three categories.

Journal Article•DOI•
TL;DR: A classification framework for the different load-balancing methods is presented and their performance compared; one class of methods is evaluated in detail using a prototype implementation with instruction-level analysis of processing overhead.
Abstract: Scalable Web servers can be built using a network of workstations, where server capacity can be extended by adding new workstations as the workload increases. The topic of our article is a comparison of different methods for load balancing HTTP traffic to scalable Web servers. We present a classification framework for the different load-balancing methods and compare their performance. In addition, we evaluate one class of methods in detail using a prototype implementation with instruction-level analysis of processing overhead. The comparison is based on trace-driven simulation of traces from a large ISP (Internet Service Provider) in Norway. The simulation model is used to analyze different load-balancing schemes based on redirection of requests in the network and redirection in the mapping between a canonical name (CNAME) and an IP address. The latter is vulnerable to spatial and temporal locality, although for the set of traces used the impact of locality is limited. The best performance is obtained with redirection in the network.
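The DNS-based redirection scheme compared above, and its vulnerability to locality, can be sketched as follows; the class name, addresses, and client labels are made up for illustration, and the cache here stands in for client-side resolver caching:

```python
from itertools import cycle

# Sketch of DNS-based redirection: the name server hands out server IPs
# round-robin for the cluster's canonical name. Cached answers bypass
# the balancer entirely, which is why the scheme is vulnerable to
# spatial and temporal locality in the client population.
class RoundRobinResolver:
    def __init__(self, addresses):
        self._next = cycle(addresses)
        self.cache = {}  # client -> cached answer (models resolver caching)

    def resolve(self, client):
        if client not in self.cache:
            self.cache[client] = next(self._next)
        return self.cache[client]

resolver = RoundRobinResolver(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
answers = [resolver.resolve(c) for c in ["a", "b", "a", "c", "a"]]
```

Note how client "a" keeps landing on the same server once its answer is cached; network-level redirection avoids this skew by making a fresh decision per request, matching the article's finding that it performs best.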

Journal Article•DOI•
TL;DR: Optical CDMA is shown to be competitive with other networking technologies such as WDMA and TDMA, but has the benefit of more flexibility, simpler protocols, and no need for centralized network control.
Abstract: Asynchronous, high-speed multiple-access is proposed as a natural solution to achieving asynchronous, high-speed connectivity in a local area network environment. Optical CDMA is shown to be competitive with other networking technologies such as WDMA and TDMA, but has the benefit of more flexibility, simpler protocols, and no need for centralized network control. The limitations of one-dimensional optical orthogonal codes for CDMA have motivated the idea of spectral spreading in both the temporal and wavelength domains. If the constraints on constant weight in these two-dimensional codes are relaxed, differentiated levels of service at the physical layer become possible. Areas for further research are suggested which may allow quality of service levels to be guaranteed at the physical layer.

Journal Article•DOI•
TL;DR: The article describes a versatile heuristic based on simulated annealing that may be adopted to optimize the concurrent use of IP restoration and WDM protection schemes in the same (mesh) network, taking into account topology constraints and network cost minimization.
Abstract: The exponentially growing number of Internet users armed with emerging multimedia Internet applications is continuously thirsty for more network capacity. Wavelength-division multiplexing networks that directly support IP-the so-called IP over WDM architecture-have the appropriate characteristics to quench this bandwidth thirst. As everyday life increasingly relies on telecommunication services, users become more and more demanding, and connection reliability is currently as critical as high capacity. Both IP and WDM layers can fulfil this need by providing various resilient schemes to protect users' traffic from disruptions due to network faults. This article first reviews the most common restoration and protection schemes available at the IP and WDM layers. These schemes may be present concurrently in the IP over WDM architecture, with the resilient mechanism of each connection specifically chosen as a function of the overall cost, application requirements, and management complexity. The article describes a versatile heuristic based on simulated annealing that may be adopted to optimize the concurrent use of IP restoration and WDM protection schemes in the same (mesh) network. The proposed heuristic allows varying the percentage of traffic protected by the WDM layer and that of traffic relying on IP restoration, taking into account topology constraints and network cost minimization. An additional feature of the proposed heuristic is the potential to trade solution optimality for computational time, thus yielding fast solutions in support of interactive design.
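A generic simulated annealing skeleton (not the authors' exact heuristic, whose neighborhood and cost model are specific to IP/WDM resilience design) shows the quality-versus-time trade-off the abstract mentions:

```python
import math
import random

def anneal(initial, neighbor, cost, t0=1.0, cooling=0.95, steps=500, seed=0):
    """Generic simulated annealing skeleton.

    Worse candidates are accepted with probability exp(-delta/T), letting
    the search escape local minima; T decays geometrically, so `steps`
    and `cooling` trade solution optimality for computational time, the
    same knob the article's heuristic exposes.
    """
    rng = random.Random(seed)
    state = best = initial
    t = t0
    for _ in range(steps):
        cand = neighbor(state, rng)
        delta = cost(cand) - cost(state)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            state = cand
            if cost(state) < cost(best):
                best = state
        t *= cooling
    return best

# Toy objective standing in for network cost: minimize (x - 3)^2 over integers.
best = anneal(10, lambda x, r: x + r.choice([-1, 1]), lambda x: (x - 3) ** 2)
```

In the article's setting the state would encode which connections use WDM protection versus IP restoration, and the cost would combine network cost with topology constraints.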

Journal Article•DOI•
TL;DR: Web proxy workloads from different levels of a caching hierarchy are used to understand how workload characteristics change across those levels, with the aim of improving the performance and scalability of the Web.
Abstract: Understanding Web traffic characteristics is key to improving the performance and scalability of the Web. In this article Web proxy workloads from different levels of a caching hierarchy are used to understand how the workload characteristics change across different levels of a caching hierarchy. The main observations of this study are that HTML and image documents account for 95 percent of the documents seen in the workload; the distribution of transfer sizes of documents is heavy-tailed, with the tails becoming heavier as one moves up the caching hierarchy; the popularity profile of documents does not precisely follow the Zipf distribution; one-timers account for approximately 70 percent of the documents referenced; concentration of references is less at proxy caches than at servers, and concentration of references diminishes as one moves up the caching hierarchy; and the modification rate is higher at higher-level proxies.
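The Zipf-like popularity and one-timer statistics reported above can be reproduced qualitatively on synthetic data; all parameters below are arbitrary assumptions:

```python
import random
from collections import Counter

# Illustrative check of two statistics the study reports: a Zipf-like
# popularity profile and the fraction of "one-timers" (documents
# referenced exactly once in the trace).
def zipf_stream(n_docs, n_refs, alpha=1.0, seed=0):
    """Draw a reference stream whose popularity follows rank^(-alpha)."""
    rng = random.Random(seed)
    weights = [1.0 / (rank ** alpha) for rank in range(1, n_docs + 1)]
    return rng.choices(range(n_docs), weights=weights, k=n_refs)

refs = zipf_stream(n_docs=10_000, n_refs=50_000)
counts = Counter(refs)
one_timer_fraction = sum(1 for c in counts.values() if c == 1) / len(counts)
```

With a pure Zipf profile the head of the ranking absorbs most references while the long tail produces the one-timers; the study's observation that real popularity does not precisely follow Zipf would show up as deviations from this synthetic baseline.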

Journal Article•DOI•
TL;DR: This article discusses providing tolerance capability to the optical layer in WDM-based transport networks with a survey on restoration schemes available in the literature, explains the operation of these schemes, and discusses their performance.
Abstract: Optical networks employing wavelength-division multiplexing and wavelength routing are potential candidates for future wide area networks. Because these networks are prone to component failures and carry a large volume of traffic, maintaining a high level of service availability is an important issue. This article discusses providing tolerance capability to the optical layer in WDM-based transport networks. It presents a survey on restoration schemes available in the literature, explains the operation of these schemes, and discusses their performance.

Journal Article•DOI•
TL;DR: ODMRP is a mesh-based, rather than conventional tree-based, multicast scheme that uses a forwarding group concept (only a subset of nodes forwards multicast packets, via scoped flooding) and on-demand procedures to dynamically build routes and maintain multicast group membership.
Abstract: Multicasting has emerged as one of the most focused areas in the field of networking. As the technology and popularity of the Internet grow, applications such as video conferencing that require the multicast feature are becoming more widespread. Another interesting development has been the emergence of dynamically reconfigurable wireless ad hoc networks to interconnect mobile users for applications ranging from disaster recovery to distributed collaborative computing. In this article we describe the on-demand multicast routing protocol for mobile ad hoc networks. ODMRP is a mesh-based, rather than conventional tree-based, multicast scheme and uses a forwarding group concept (only a subset of nodes forwards the multicast packets via scoped flooding). It applies on-demand procedures to dynamically build routes and maintain multicast group membership. We also describe our implementation of the protocol in a real laptop testbed.

Journal Article•DOI•
TL;DR: A reliable multicast architecture that invokes active services at strategic locations inside the network to comprehensively address challenges such as feedback implosion, retransmission scoping, distributed loss recovery, and congestion control is presented.
Abstract: Scalability is of paramount importance in the design of reliable multicast transport protocols, and requires careful consideration of a number of problems such as feedback implosion, retransmission scoping, distributed loss recovery, and congestion control. In this article, we present a reliable multicast architecture that invokes active services at strategic locations inside the network to comprehensively address these challenges. Active services provide the ability to quickly and efficiently recover from loss at the point of loss. They also exploit the physical hierarchy for feedback aggregation and effective retransmission scoping with minimal router support. We present two protocols, one for packet loss recovery and another for congestion control, and describe an experimental testbed where these have been implemented. Analytical and experimental results are used to demonstrate that the active services architecture improves resource usage, reduces latency for loss recovery, and provides TCP-friendly congestion control.

Journal Article•DOI•
P. Aukia1, Murali Kodialam, P.V.N. Koppol, T. V. Lakshman, H. Sarin, B. Suter •
TL;DR: The path selection for LSPs is based on a new minimum-interference routing algorithm aimed at making the best use of network infrastructure in an online environment where LSP requests arrive one by one with no a priori information about future requests.
Abstract: It has been suggested that one of the most significant reasons for multiprotocol label switching (MPLS) network deployment is network traffic engineering. The goal of traffic engineering is to make the best use of the network infrastructure, and this is facilitated by the explicit routing feature of MPLS, which allows many of the shortcomings associated with current IP routing schemes to be addressed. This article describes a software system called Routing and Traffic Engineering Server (RATES) developed for MPLS traffic engineering. It also describes some new routing ideas incorporated in RATES for MPLS explicit path selection. The RATES implementation consists of a policy and flow database, a browser-based interface for policy definition and entering resource provisioning requests, and a Common Open Policy Service protocol server-client implementation for communicating paths and resource information to edge routers. RATES also uses the OSPF topology database for dynamically obtaining link state information. RATES can set up bandwidth-guaranteed label-switched paths (LSPs) between specified ingress-egress pairs. The path selection for LSPs is based on a new minimum-interference routing algorithm aimed at making the best use of network infrastructure in an online environment where LSP requests arrive one by one with no a priori information about future requests. Although developed for an MPLS application, the RATES implementation has many similarities in components to an intradomain differentiated services bandwidth broker.
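A heavily simplified stand-in for the online LSP path selection described above: prune links whose residual bandwidth is below the demand, then take a shortest path over what remains. Minimum-interference routing additionally penalizes links that are critical for future ingress-egress pairs; that step is omitted here, and the topology is invented:

```python
import heapq

def feasible_shortest_path(links, src, dst, demand):
    """links: {(u, v): residual_bandwidth}; returns a node list or None."""
    graph = {}
    for (u, v), bw in links.items():
        if bw >= demand:  # prune links that cannot carry the LSP
            graph.setdefault(u, []).append(v)
            graph.setdefault(v, []).append(u)
    # Dijkstra with unit link weights over the pruned topology.
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in graph.get(u, []):
            if d + 1 < dist.get(v, float("inf")):
                dist[v], prev[v] = d + 1, u
                heapq.heappush(heap, (d + 1, v))
    if dst not in dist:
        return None  # no feasible path for this demand
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Invented 4-node topology: the direct B-C link lacks capacity.
links = {("A", "B"): 10, ("B", "C"): 2, ("A", "D"): 10, ("D", "C"): 10}
path = feasible_shortest_path(links, "A", "C", demand=5)
```

Because requests arrive online with no knowledge of future demands, each such decision permanently consumes residual bandwidth, which is exactly why RATES weighs how much a candidate path would interfere with other ingress-egress pairs.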

Journal Article•DOI•
TL;DR: It is shown that the overheads due to SSL can make Web servers slower by a couple of orders of magnitude, and the reason for this deficiency is investigated by instrumenting the SSL protocol stack with a detailed profiling of the protocol processing components.
Abstract: The last couple of years have seen a growing momentum toward using the Internet for conducting business. Web-based electronic commerce applications are one of the fastest growing segments of the Internet today. A key enabler for e-commerce applications is the ability to set up secure private channels over a public network. The Secure Sockets Layer protocol provides this capability and is the most widely used security protocol in the Internet. We take a close look at the working principles behind SSL with an eye on performance. We benchmark two of the popular Web servers in wide use in a number of large e-commerce sites. Our results show that the overheads due to SSL can make Web servers slower by a couple of orders of magnitude. We investigate the reason for this deficiency by instrumenting the SSL protocol stack with a detailed profiling of the protocol processing components. In light of our observations, we outline architectural guidelines for large e-commerce sites.

Journal Article•DOI•
TL;DR: RMTP-II builds on a rich field of existing work, and adds to it the following novel contributions: it differentiates the roles of the nodes in the protocol, provides algorithms for smoothing and control of the return (TRACK) traffic, and provides explicit support for highly asymmetrical networks.
Abstract: This document provides an overview of the reliable multicast transport protocol II, RMTP-II. RMTP-II is a reliable multicast protocol, designed to reliably and efficiently send data from a few senders to large groups of simultaneous recipients. It works over both symmetric networks and asymmetrical network topologies such as those provided by satellite, cable modem, or ADSL carriers. Before sending, each sender must connect with a trusted top node to receive permission and control parameters for its data stream. The top node provides network managers with a single point of control for the senders, allowing them to monitor and control the traffic being sent. RMTP-II builds on a rich field of existing work, and adds to it the following novel contributions. It differentiates the roles of the nodes in the protocol, provides algorithms for smoothing and control of the return (TRACK) traffic, and provides explicit support for highly asymmetrical networks. It provides explicit network management controls through a centralized point of control, a fully distributed membership protocol that enables positive confirmation of data delivery, and fault recovery algorithms which are integrated with the reliability semantics of the protocol. It includes a novel reliability level called time-bounded reliability, and offers a unique combination of TRACKs, NACKs, and FEC for increased scalability and real-time performance. Finally, it integrates distributed algorithms for RTT calculation to each receiver, and provides automatic configuration of receiver nodes.

Journal Article•DOI•
M. Molina1, P. Castelli, G. Foddis•
TL;DR: This article presents a measurement analysis methodology that, starting from packet-level traces, identifies some statistics useful to characterize Web traffic, such as page size, page request frequency, and the user's think time between the download of pages.
Abstract: This article presents a measurement analysis methodology that, starting from packet-level traces, identifies statistics useful for characterizing Web traffic, such as page size, page request frequency, and the user's think time between page downloads. The methodology was implemented in a tool named HTML-REDUCE and applied to traces collected in CSELT's corporate network. For each identified statistic, the analytic expression that best approximates its empirical distribution is sought. A simplified model for bandwidth dimensioning in the server-to-client direction is then presented and validated through comparison with full protocol stack simulations. The model requires only the page request frequency of an aggregate of clients and the first two moments of the page size distribution.

Journal Article•DOI•
TL;DR: Some of the new types of application and their requirements, and the need to support applications that have strict QoS requirements, the so-called critical applications are identified, and two proposals for enhancing the Internet service architecture are reviewed.
Abstract: The provision and support of new distributed multimedia services are of prime concern for telecommunications operators and suppliers. Clearly, the potential of the latest Internet protocols to contribute communications components is of considerable interest to them. In this article we first review some of the new types of application and their requirements, and identify the need to support applications that have strict QoS requirements, the so-called critical applications. We review two proposals for enhancing the Internet service architecture. In addition to the integrated services work of the IETF, we look at the proposals for differentiated services in the Internet. We then individually review protocol developments proposed to improve the Internet, and to support real-time and multimedia communications. These are IPv6 (the new version of the Internet Protocol), Resource reSerVation Protocol, and Multiprotocol Label Switching, respectively. In each case, we attempt to provide critical reviews in order to assess their suitability for this purpose. Finally, we indicate what the basis of the future infrastructure might be in order to support the full variety of application requirements.

Journal Article•DOI•
TL;DR: Several techniques are surveyed, the results of trace-based studies of a proposal based on automatic recognition of duplicated content are reported, and a variety of more complex ways in which HTTP caches can exploit locality in real reference streams are proposed.
Abstract: Computer system designers often use caches to solve performance problems. Caching in the World Wide Web has been both the subject of extensive research and the basis of a large and growing industry. Traditional Web caches store HTTP responses, in anticipation of a subsequent reference to the URL of a cached response. Unfortunately, experience with real Web users shows that there are limits to the performance of this simple caching model, because many responses are useful only once. Researchers have proposed a variety of more complex ways in which HTTP caches can exploit locality in real reference streams. This article surveys several techniques, and reports the results of trace-based studies of a proposal based on automatic recognition of duplicated content.
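The duplicated-content proposal can be sketched as a cache indexed by a digest of the response body, so an identical payload fetched under a new URL still hits; the class name, URLs, and payloads below are made up:

```python
import hashlib

# Sketch of duplicate-content recognition: index cached responses by a
# digest of the payload, so a request for a new URL whose body matches
# an already cached body can be satisfied without a transfer. This is
# one way around the "many responses are useful only once" limit of
# plain URL-keyed caching.
class DigestCache:
    def __init__(self):
        self.by_digest = {}   # sha1(body) -> body
        self.url_digest = {}  # url -> sha1(body)

    def store(self, url, body):
        digest = hashlib.sha1(body).hexdigest()
        self.url_digest[url] = digest
        self.by_digest.setdefault(digest, body)

    def hit_by_content(self, body):
        """True if an identical payload is cached under any URL."""
        return hashlib.sha1(body).hexdigest() in self.by_digest

cache = DigestCache()
cache.store("http://a.example/logo.gif", b"GIF89a...")
aliased_hit = cache.hit_by_content(b"GIF89a...")  # same bytes, new URL
```

In a real protocol the client cannot hash a body it has not fetched, so deployed schemes have the server or proxy advertise the digest first; the sketch only shows the aliasing effect the trace studies measure.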

Journal Article•DOI•
TL;DR: A simulation model of the proposed IntServ architecture was developed, which includes models of the GPRS cellular infrastructure, network traffic, and user movement, and results show that the proposedIntServ architecture demonstrated good scalability, even for large user populations.
Abstract: The General Packet Radio Service is the current enhancement in the GSM infrastructure, capable of handling Internet protocol traffic for mobile computing and communications. A major deficiency of the current GPRS specification is the lack of adequate IP quality of service support. Two schemes for enhancing the GPRS architecture with the existing IP QoS support architectures, IntServ and DiffServ, are proposed. Solutions are proposed to the problem of establishing QoS reservations across the GPRS core network, and the required signaling enhancements and modifications in the components of the GPRS architecture are identified. Of the two proposed schemes the IntServ one requires frequent refreshing of state information and extra signaling. To quantify the effect that signaling overhead has on GPRS operation and performance, a simulation model of the proposed IntServ architecture was developed, which includes models of the GPRS cellular infrastructure, network traffic, and user movement. The obtained simulation results show that the proposed IntServ architecture demonstrated good scalability, even for large user populations.

Journal Article•DOI•
Jim Gemmell1, Jim Gray, Eve M. Schooler•
TL;DR: Fcast contributes new caching methods that improve disk throughput, and new optimizations for small file transfers, and like other FEC schemes, it uses bandwidth very efficiently.
Abstract: Reliable data multicast is problematic. ACK/NACK schemes do not scale to large audiences, and simple data replication wastes network bandwidth. Fcast, "file multicasting", combines multicast with forward error correction to address both these problems. Like classic multicast, Fcast scales to large audiences, and like other FEC schemes, it uses bandwidth very efficiently. Some of the benefits of this combination were known previously, but Fcast contributes new caching methods that improve disk throughput, and new optimizations for small file transfers. This article describes Fcast's design, implementation, and API.
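The forward-error-correction idea Fcast builds on can be illustrated with the simplest erasure code: one XOR parity block over a group of equal-length data blocks lets any single lost block be reconstructed without a retransmission request. This is a minimal sketch of the FEC principle, not Fcast's actual (more powerful) code:

```python
def xor_parity(blocks):
    """Compute an XOR parity block over equal-length data blocks."""
    parity = bytes(len(blocks[0]))
    for b in blocks:
        parity = bytes(x ^ y for x, y in zip(parity, b))
    return parity

def recover(received, parity):
    """Rebuild the single missing block (marked None) from the rest plus parity."""
    missing = received.index(None)
    acc = parity
    for i, b in enumerate(received):
        if i != missing:
            acc = bytes(x ^ y for x, y in zip(acc, b))
    return acc

blocks = [b"abcd", b"efgh", b"ijkl"]
p = xor_parity(blocks)
print(recover([blocks[0], None, blocks[2]], p))  # b'efgh'
```

Because any one erasure is repairable from the parity, receivers never need to NACK a single loss, which is what lets FEC-based multicast avoid the feedback implosion of ACK/NACK schemes.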

Journal Article•DOI•
TL;DR: This article addresses the problem of designing capacity management and routing mechanisms to support telephony over an IP network by proposing two distinct architectural models and evaluating the performance of these two architectural models via simulations using configuration and usage data derived from operational networks.
Abstract: This article addresses the problem of designing capacity management and routing mechanisms to support telephony over an IP network. For this service, we propose two distinct architectural models. The first relies on enhancements to the basic IP infrastructure to support integrated service transport and QoS routing. The second assumes that the IP network can support an overlay virtual private network with dedicated capacity for the VoIP service, thereby allowing standard capacity management and routing mechanisms from circuit-switched networks to be reused. We evaluate the performance of these two architectural models and their associated policies via simulations using configuration and usage data derived from operational networks.
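For the second model, where circuit-switched capacity management is reused over a dedicated-capacity overlay, the classical sizing tool is the Erlang B formula, which gives call blocking probability for a given offered load and trunk count. The sketch below uses the standard recursive form to size a hypothetical trunk group; the 40-erlang load and 1% blocking target are illustrative numbers, not figures from the paper:

```python
def erlang_b(offered_load, trunks):
    """Erlang B blocking probability via the standard recursion
    B(A, n) = A*B(A, n-1) / (n + A*B(A, n-1)), with B(A, 0) = 1."""
    b = 1.0
    for n in range(1, trunks + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# Size an overlay trunk group for at most 1% blocking at 40 erlangs of voice load.
load = 40.0
trunks = 1
while erlang_b(load, trunks) > 0.01:
    trunks += 1
print(trunks, erlang_b(load, trunks))
```

This is exactly the kind of dimensioning that becomes applicable again once the VoIP service is given its own dedicated capacity, as the overlay model assumes.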

Journal Article•DOI•
J. Dilley1•
TL;DR: It is concluded that improving cache consistency will reduce response time and allow a cache to serve more user requests.
Abstract: This report analyzes the impact of cache consistency on the response time of client requests. The analysis divides cache responses into classes according to whether or not the cache communicated with a remote server and whether or not object data was served from the cache. Analysis of traces from deployed proxy cache servers demonstrates that a round-trip to a remote server is the dominant factor for response time. This study concludes that improving cache consistency will reduce response time and allow a cache to serve more user requests.
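The study's two classification axes (did the cache contact a remote server, and was the object served from the cache?) can be sketched as a small decision function. The argument names and return labels are illustrative, not the report's terminology:

```python
def classify(found_in_cache, needed_validation, changed_on_server):
    """Classify a proxy cache response by (remote contact?, data source)."""
    if not found_in_cache:
        return ("remote", "origin")   # miss: full fetch from the origin server
    if not needed_validation:
        return ("local", "cache")     # fresh hit: no round-trip at all
    if changed_on_server:
        return ("remote", "origin")   # stale and changed: revalidation refetches the object
    return ("remote", "cache")        # stale but unchanged (304): cached data, yet a round-trip was paid

print(classify(True, True, False))
```

The last class is the one a better consistency mechanism shrinks: the round-trip returns no new data, so avoiding it converts those responses into pure local hits, which is why the report ties improved consistency directly to response time.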

Journal Article•DOI•
Roch Glitho1•
TL;DR: This article scrutinizes the ITU-T and IETF advanced services architectures for Internet telephony and provides a discussion of two potential alternatives: IN-based architecture and mobile-agent-based architectures.
Abstract: Advanced services are differentiating factors and crucial to service providers' survival and success. Examples are credit card calling, call forwarding, and toll-free calling. In classical telephony's early days, their implementation was embedded in switching software, which hindered fast deployment. A more modern architecture known as the intelligent network (IN) was born in the 1980s, allowing implementation in separate nodes and resulting in faster deployment of new services. Two tracks are emerging for Internet telephony: one from the ITU-T and the other from the IETF. As far as advanced services are concerned, the ITU-T track offers a rather archaic architecture, reminiscent of the early days of classical telephony. The IETF architecture, although more modern, does have a few pitfalls. There is plenty of room for improvement to both. This article scrutinizes the ITU-T and IETF advanced services architectures for Internet telephony. Salient features are reviewed and weaknesses pinpointed. Although these architectures are constantly evolving, alternatives may emerge. We provide a discussion of two potential alternatives: IN-based architectures and mobile-agent-based architectures.

Journal Article•DOI•
TL;DR: An FC-AL topology andFC-AL protocols for storage networks are described, in particular, channel arbitration, signaling and transmission protocols, and the fibre channel mapping protocol for SCSI.
Abstract: Fibre channel arbitrated loops offer a new approach to realizing high-speed storage interconnection networks. We describe an FC-AL topology and FC-AL protocols for storage networks; in particular, we cover channel arbitration, signaling and transmission protocols, and the fibre channel mapping protocol for SCSI. A simulation model of FC-AL storage networks is described, and performance results derived from the model are used to investigate FC-AL storage network performance.
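The channel arbitration this abstract mentions resolves contention by port address priority: in FC-AL, when several ports arbitrate at once, the port with the numerically lowest AL_PA (arbitrated loop physical address) wins. A minimal sketch of that winner selection, assuming each contender is represented simply by its AL_PA value:

```python
def arbitration_winner(contending_al_pas):
    """FC-AL arbitration sketch: among simultaneously arbitrating ports,
    the numerically lowest AL_PA has the highest priority and wins the loop."""
    return min(contending_al_pas)

# Three ports arbitrate at once; the port with AL_PA 0x01 wins.
print(arbitration_winner([0xEF, 0x01, 0x73]))  # 1
```

Real FC-AL arbitration also involves ARB primitive signals circulating on the loop and fairness windows to keep low-priority ports from starving; the sketch captures only the priority rule itself.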