Institution

AT&T Labs

Company
About: AT&T Labs is a company. It is known for research contributions in the topics of Network packet and The Internet. The organization has 1879 authors who have published 5595 publications, receiving 483151 citations.


Papers
Proceedings ArticleDOI
01 Jun 2004
TL;DR: A BGP emulator is described based on an algorithm that computes the outcome of the BGP route selection process for each router in a single AS, given only a static snapshot of the network state, without simulating the complex details of BGP message passing.
Abstract: The performance of IP networks depends on a wide variety of dynamic conditions. Traffic shifts, equipment failures, planned maintenance, and topology changes in other parts of the Internet can all degrade performance. To maintain good performance, network operators must continually reconfigure the routing protocols. Operators configure BGP to control how traffic flows to neighboring Autonomous Systems (ASes), as well as how traffic traverses their networks. However, because BGP route selection is distributed, indirectly controlled by configurable policies, and influenced by complex interactions with intradomain routing protocols, operators cannot predict how a particular BGP configuration would behave in practice. To avoid inadvertently degrading network performance, operators need to evaluate the effects of configuration changes before deploying them on a live network. We propose an algorithm that computes the outcome of the BGP route selection process for each router in a single AS, given only a static snapshot of the network state, without simulating the complex details of BGP message passing. We describe a BGP emulator based on this algorithm; the emulator exploits the unique characteristics of routing data to reduce computational overhead. Using data from a large ISP, we show that the emulator correctly computes BGP routing decisions and has a running time that is acceptable for many tasks, such as traffic engineering and capacity planning.
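The heart of such an emulator is applying the BGP decision process per router over a static set of candidate routes. Below is a minimal sketch of that idea, not the paper's actual code: the Route fields and ranking steps are a simplified subset of the real decision process, and each router's set of visible candidates is assumed given (in practice it depends on route-reflector and session configuration).

```python
from dataclasses import dataclass

@dataclass
class Route:
    prefix: str
    local_pref: int
    as_path_len: int
    med: int
    ebgp: bool          # learned over eBGP (preferred over iBGP)
    igp_cost: int       # router's IGP distance to the route's egress point

def best_route(candidates):
    """Rank candidate routes for one prefix at one router using a
    simplified subset of the standard BGP decision steps."""
    return min(
        candidates,
        key=lambda r: (
            -r.local_pref,   # higher local-pref wins
            r.as_path_len,   # shorter AS path wins
            r.med,           # lower MED wins
            not r.ebgp,      # eBGP-learned beats iBGP-learned
            r.igp_cost,      # closest egress wins ("hot potato")
        ),
    )

def emulate(snapshot):
    """One prediction pass: for every router, pick the best route among
    the routes visible to it -- no BGP message passing is simulated."""
    return {router: best_route(routes) for router, routes in snapshot.items()}

snapshot = {
    "nyc": [Route("10.0.0.0/8", 100, 3, 0, True, 5),
            Route("10.0.0.0/8", 100, 2, 0, False, 20)],
}
print(emulate(snapshot)["nyc"].as_path_len)  # -> 2 (shorter AS path wins)
```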

118 citations

Proceedings Article
08 Dec 1997
TL;DR: Results with macro-encoding of query response resources from local CGI scripts and two popular search engines indicate that the approach promises a substantial reduction of network traffic, server load, and access latency for dynamic documents.
Abstract: A number of techniques are available for reducing latency and bandwidth requirements for resources on the World Wide Web, including caching, compression, and delta-encoding [12]. These approaches are limited: much data on the Web is dynamic, for which traditional caching is of limited use, and delta-encoding requires both a common version base against which to apply a delta and the complete generation of the resource prior to encoding it. In contrast to these approaches, we take an application-specific view, in which we separate the static and dynamic portions of a resource. The static portions (called the template) can then be cached, with the (presumably small) dynamic portions obtained on each access. Our HTML extension, which we refer to as HPP (for HTML Pre-Processing), supports resources that contain a variable number of static and dynamic elements, such as query responses. Results with macro-encoding of query response resources from local CGI scripts and two popular search engines indicate that our approach promises a substantial reduction of network traffic, server load, and access latency for dynamic documents. The sizes of network transfers using HPP are comparable to delta-encoding (factors of 2-8 smaller than the original resource), while the data generated by content providers is simpler and the load on the end-servers is slightly lower.
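A minimal sketch of the template/dynamic split in Python; the {{name}} placeholder syntax and the expand helper are hypothetical stand-ins for the actual HPP notation, which the paper defines as an HTML extension.

```python
import re

# Cached once: the static "template" of a query-response page, with named
# slots for the dynamic parts (placeholder syntax is invented here, not
# the actual HPP wire format).
TEMPLATE = """<html><body>
<h1>Results for: {{query}}</h1>
<ul>{{hits}}</ul>
</body></html>"""

def expand(template, bindings):
    """Fill each {{name}} slot from the small per-request payload."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: bindings[m.group(1)], template)

# Per-request transfer is just the bindings, not the whole page.
bindings = {"query": "delta encoding",
            "hits": "<li>Paper A</li><li>Paper B</li>"}
print(expand(TEMPLATE, bindings))
```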

118 citations

Proceedings ArticleDOI
26 Mar 2000
TL;DR: This paper shows how end-to-end delay measurements of multicast traffic can be used to estimate packet delay variance on each link of a logical multicast tree and establishes desirable statistical properties of the estimator, namely consistency and asymptotic normality.
Abstract: End-to-end measurement is a common tool for network performance diagnosis, primarily because it can reflect user experience and typically requires minimal support from intervening network elements. Challenges in this approach are: (i) to identify the locale of performance degradation; and (ii) to perform measurements in a scalable manner for large and complex networks. In this paper we show how end-to-end delay measurements of multicast traffic can be used to estimate packet delay variance on each link of a logical multicast tree. The method does not depend on cooperation from intervening network elements; multicast probing is bandwidth-efficient. We establish desirable statistical properties of the estimator, namely consistency and asymptotic normality. We evaluate the approach through model-based and network simulations. The approach extends to the estimation of higher order moments of the link delay distribution.
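The core statistical trick can be illustrated on a two-receiver tree: if link delays are independent, the covariance of the two end-to-end delays equals the delay variance of the shared link. Here is a simulation sketch under those assumptions; the topology and delay distributions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical two-receiver tree: root --X0--> branch point, then X1 and
# X2 to the two leaves. Link delays are independent.
x0 = rng.exponential(2.0, n)   # shared link, true variance = 4.0
x1 = rng.exponential(1.0, n)   # leaf link 1, true variance = 1.0
x2 = rng.exponential(3.0, n)   # leaf link 2, true variance = 9.0
d1, d2 = x0 + x1, x0 + x2      # what end-to-end multicast probes observe

# Independence gives Cov(D1, D2) = Var(X0): the sample covariance of the
# two end-to-end delays estimates the shared link's delay variance, with
# no cooperation from routers on the path.
var_shared = np.cov(d1, d2)[0, 1]
var_leaf1 = np.var(d1) - var_shared   # Var(X1) = Var(D1) - Var(X0)
print(f"shared link variance: {var_shared:.2f} (true 4.0)")
print(f"leaf-1 link variance: {var_leaf1:.2f} (true 1.0)")
```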

118 citations

Proceedings ArticleDOI
06 Jun 1999
TL;DR: This work summarizes two promising techniques for improving the statistics of the PAP of an OFDM signal and presents suboptimal strategies for combining partial transmit sequences that achieve similar performance but with reduced complexity.
Abstract: Orthogonal frequency division multiplexing (OFDM) is an attractive technique for achieving high-bit-rate wireless data transmission. However, the potentially large peak-to-average power ratio (PAP) of a multicarrier signal has limited its application. Two promising techniques for improving the statistics of the PAP of an OFDM signal have previously been proposed: the selective mapping and partial transmit sequence approaches. Here, we summarize these techniques and present suboptimal strategies for combining partial transmit sequences that achieve similar performance but with reduced complexity.
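As a sketch of one plausible suboptimal strategy of the kind the paper describes: flip the sign of each partial transmit sequence once, in order, keeping only flips that lower the PAPR, so V combinations are tested instead of all 2^V. The block count, interleaved partition, and QPSK parameters below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def pts_iterative(symbols, n_blocks=4):
    """Suboptimal PTS: partition subcarriers into blocks, IFFT each,
    then flip each block's sign once, keeping flips that lower PAPR."""
    n = len(symbols)
    blocks = []
    for v in range(n_blocks):
        part = np.zeros(n, complex)
        part[v::n_blocks] = symbols[v::n_blocks]  # interleaved partition
        blocks.append(np.fft.ifft(part))
    phases = np.ones(n_blocks)
    best = papr_db(sum(ph * blk for ph, blk in zip(phases, blocks)))
    for v in range(n_blocks):
        phases[v] = -1
        trial = papr_db(sum(ph * blk for ph, blk in zip(phases, blocks)))
        if trial < best:
            best = trial
        else:
            phases[v] = 1  # revert the flip
    return best, phases

rng = np.random.default_rng(1)
qpsk = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], 256) / np.sqrt(2)
plain = papr_db(np.fft.ifft(qpsk))
reduced, _ = pts_iterative(qpsk)
print(f"PAPR: {plain:.2f} dB -> {reduced:.2f} dB")
```

Note that the receiver needs the chosen phase factors as side information to undo the rotation before demodulation.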

118 citations

Proceedings Article
06 Jan 1997
TL;DR: Performance measurements of the optimistic delta system demonstrate that deltas significantly reduce latency when both sides cache the old version, and optimistic deltas can reduce latency, to a lesser degree, when content-provider service times are in the range of seconds or longer.
Abstract: When a machine is connected to the Internet via a slow network, such as a 28.8 Kbps modem, the cumulative latency to communicate over the Internet to World Wide Web servers and then transfer documents over the slow network can be significant. We have built a system that optimistically transfers data that may be out of date, then sends either a subsequent confirmation that the data is current or a delta to change the older version to the current one. In addition, if both sides of the slow link already store the same older version, just the delta need be transferred to update it. Our mechanism is optimistic because it assumes that much of the time there will be sufficient idle time to transfer most or all of the older version before the newer version is available, and because it assumes that the changes between the two versions will be small relative to the actual document. Timings of retrievals of random URLs in the Internet support the former assumption, while experiments using a version repository of Web documents bear out the latter one. Performance measurements of the optimistic delta system demonstrate that deltas significantly reduce latency when both sides cache the old version, and optimistic deltas can reduce latency, to a lesser degree, when content-provider service times are in the range of seconds or longer.
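To make the delta mechanism concrete, here is a toy sketch using Python's difflib as a stand-in for whatever delta encoder the system actually used; once both sides hold the old version, only the changed lines need to cross the slow link.

```python
import difflib

# Two versions of a page; the client holds `old` cached (or received it
# optimistically while the server was still generating `new`).
old = ["<html><body>\n", "<h1>Front page</h1>\n",
       "<p>Weather: sunny today</p>\n", "</body></html>\n"]
new = ["<html><body>\n", "<h1>Front page</h1>\n",
       "<p>Weather: rain tonight</p>\n", "</body></html>\n"]

# The server sends only a delta between the versions.
delta = list(difflib.ndiff(old, new))

# The client patches its stale copy into the current version.
rebuilt = list(difflib.restore(delta, 2))   # 2 = reconstruct the new side
assert rebuilt == new

changed = [d for d in delta if d.startswith(("- ", "+ "))]
print(f"{len(changed)} changed lines sent instead of {len(new)} total")
```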

117 citations


Authors

Showing all 1881 results

Name                  H-index  Papers  Citations
Yoshua Bengio             202    1033     420313
Scott Shenker             150     454     118017
Paul Shala Henry          137     318      35971
Peter Stone               130    1229      79713
Yann LeCun                121     369     171211
Louis E. Brus             113     347      63052
Jennifer Rexford          102     394      45277
Andreas F. Molisch         96     777      47530
Vern Paxson                93     267      48382
Lorrie Faith Cranor        92     326      28728
Ward Whitt                 89     424      29938
Lawrence R. Rabiner        88     378      70445
Thomas E. Graedel          86     348      27860
William W. Cohen           85     384      31495
Michael K. Reiter          84     380      30267
Network Information

Related Institutions (5)
Microsoft: 86.9K papers, 4.1M citations (94% related)
Google: 39.8K papers, 2.1M citations (91% related)
Hewlett-Packard: 59.8K papers, 1.4M citations (89% related)
Bell Labs: 59.8K papers, 3.1M citations (88% related)

Performance Metrics

No. of papers from the Institution in previous years

Year  Papers
2022       5
2021      33
2020      69
2019      71
2018     100
2017      91