SciSpace (formerly Typeset)
Institution

AT&T Labs

Company
About: AT&T Labs is a company known for research contributions in the topics Network packet and The Internet. The organization has 1879 authors who have published 5595 publications, receiving 483151 citations.


Papers
Book ChapterDOI
31 Mar 2005
TL;DR: Analyzing seven months of data from eight vantage points in a large Internet Service Provider (ISP) network, it is shown that routing changes are responsible for the majority of the large traffic variations.
Abstract: A traffic matrix represents the load from each ingress point to each egress point in an IP network. Although networks are engineered to tolerate some variation in the traffic matrix, large changes can lead to congested links and poor performance. The variations in the traffic matrix are caused by statistical fluctuations in the traffic entering the network and shifts in where the traffic leaves the network. For an accurate view of how the traffic matrix evolves over time, we combine fine-grained traffic measurements with a continuous view of routing, including changes in the egress points. Our approach is in sharp contrast to previous work that either inferred the traffic matrix from link-load statistics or computed it using periodic snapshots of routing tables. Analyzing seven months of data from eight vantage points in a large Internet Service Provider (ISP) network, we show that routing changes are responsible for the majority of the large traffic variations. In addition, we identify the shifts caused by internal routing changes and show that these events are responsible for the largest traffic shifts. We discuss the implications of our findings on the accuracy of previous work on traffic matrix estimation and analysis.
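The bookkeeping the abstract describes, aggregating measured traffic into an ingress-to-egress load matrix, can be sketched in a few lines. The city names and byte counts below are invented for illustration; in the paper's approach the egress point comes from a continuous routing feed rather than the flow record itself.

```python
from collections import defaultdict

def traffic_matrix(flows):
    """Aggregate per-flow byte counts into an ingress->egress load matrix.

    `flows` is an iterable of (ingress, egress, nbytes) tuples.
    """
    tm = defaultdict(int)
    for ingress, egress, nbytes in flows:
        tm[(ingress, egress)] += nbytes
    return dict(tm)

# Before an internal routing change: most NYC traffic exits at DC.
before = traffic_matrix([("nyc", "dc", 800), ("nyc", "chi", 200)])

# After the change the same ingress traffic exits at CHI instead:
# the load shifts between matrix entries with no change at the ingress,
# which is exactly the kind of variation link-load inference misses.
after = traffic_matrix([("nyc", "chi", 1000)])
```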

108 citations

Proceedings ArticleDOI
26 Mar 2000
TL;DR: Proposes simple techniques that address the main sources of user-perceived Web latency: pre-resolving host-names (pre-performing the DNS lookup), pre-connecting (prefetching TCP connections prior to issuing the HTTP request), and pre-warming (sending a "dummy" HTTP HEAD request to the server), along with scalable deployment solutions to control the potential overhead to proxies and particularly to Web servers.
Abstract: User-perceived latency is recognized as the central performance problem in the Web. We systematically measure factors contributing to this latency, across several locations. Our study reveals that DNS query times, TCP connection establishment, and start-of-session delays at HTTP servers, more so than transmission time, are major causes of long waits. Wait due to these factors also afflicts high-bandwidth users and has a detrimental effect on perceived performance. We propose simple techniques that address these factors: (i) pre-resolving host-names (pre-performing the DNS lookup); (ii) pre-connecting (prefetching TCP connections prior to issuance of the HTTP request); and (iii) pre-warming (sending a "dummy" HTTP HEAD request to Web servers). Trace-based simulations demonstrate a potential to reduce perceived latency dramatically. Our techniques surpass document prefetching in performance improvement per bandwidth used and can be used with non-prefetchable URLs. Deployment of these techniques at Web browsers or proxies does not require protocol modifications or the cooperation of other entities. Applicable servers can be identified, for example, by analyzing hyperlinks. Bandwidth overhead is minimal, and so is processing overhead at the user's browser. We propose scalable deployment solutions to control the potential overhead to proxies and particularly to Web servers.
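The three techniques can be sketched with nothing but the standard socket API. The function names are ours, not the paper's, and the sketch speaks raw HTTP/1.0 for brevity:

```python
import socket

def pre_resolve(hostname):
    """Pre-perform the DNS lookup so a later request skips the query delay."""
    return socket.gethostbyname(hostname)

def pre_connect(host, port=80, timeout=5.0):
    """Prefetch a TCP connection (the three-way handshake) before the
    HTTP request is actually issued."""
    return socket.create_connection((host, port), timeout=timeout)

def pre_warm(conn, path="/"):
    """Send a 'dummy' HEAD request over an established connection so the
    server absorbs its start-of-session costs ahead of the real request."""
    conn.sendall(b"HEAD " + path.encode() + b" HTTP/1.0\r\n\r\n")
```

A browser or proxy would call these for hyperlinks the user is likely to follow; since HEAD returns no body, the bandwidth overhead is minimal, matching the paper's argument.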

108 citations

Book ChapterDOI
David McAllester1
22 Sep 1999
TL;DR: The main technical contribution consists of two meta-complexity theorems which allow, in many cases, the asymptotic running time of a bottom-up logic program to be determined by inspection.
Abstract: This paper investigates bottom-up logic programming as a formalism for expressing static analyses. The main technical contribution consists of two meta-complexity theorems which allow, in many cases, the asymptotic running time of a bottom-up logic program to be determined by inspection. It is well known that a datalog program runs in O(n^k) time where k is the largest number of free variables in any single rule. The theorems given here are significantly more refined. A variety of algorithms given as bottom-up logic programs are analyzed as examples.
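A minimal sketch of bottom-up (fixpoint) evaluation, using transitive closure as the running example. The `bottom_up` helper and the tuple encoding of facts are ours, not the paper's:

```python
def bottom_up(facts, rules):
    """Naive bottom-up evaluation: apply every rule to the current fact
    set until no new facts are derived (a fixpoint is reached)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for fact in rule(derived):
                if fact not in derived:
                    derived.add(fact)
                    changed = True
    return derived

# Transitive closure:
#   path(X, Y) :- edge(X, Y).
#   path(X, Z) :- edge(X, Y), path(Y, Z).
# The second rule has three free variables (X, Y, Z), so the O(n^k)
# bound predicts roughly cubic time for this naive evaluation.
edges = {("edge", a, b) for a, b in [(1, 2), (2, 3), (3, 4)]}

def path_base(db):
    return {("path", x, y) for (p, x, y) in db if p == "edge"}

def path_step(db):
    return {("path", x, z)
            for (p1, x, y) in db if p1 == "edge"
            for (p2, y2, z) in db if p2 == "path" and y2 == y}

closure = bottom_up(edges, [path_base, path_step])
```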

108 citations

Journal Article
TL;DR: In this paper, the notion of resource-fair protocols is introduced, which states that if one party learns the output of the protocol, then so can all other parties, as long as they expend roughly the same amount of resources.
Abstract: We introduce the notion of resource-fair protocols. Informally, this property states that if one party learns the output of the protocol, then so can all other parties, as long as they expend roughly the same amount of resources. As opposed to similar previously proposed definitions, our definition follows the standard simulation paradigm and enjoys strong composability properties. In particular, our definition is similar to the security definition in the universal composability (UC) framework, but works in a model that allows any party to request additional resources from the environment to deal with dishonest parties that may prematurely abort. In this model we specify the ideally fair functionality as allowing parties to invest resources in return for outputs, but in such an event offering all other parties a fair deal. (The formulation of fair dealings is kept independent of any particular functionality, by defining it using a wrapper.) Thus, by relaxing the notion of fairness, we avoid a well-known impossibility result for fair multi-party computation with corrupted majority; in particular, our definition admits constructions that tolerate an arbitrary number of corruptions. We also show that, as in the UC framework, protocols in our framework may be arbitrarily and concurrently composed. Turning to constructions, we define a commit-prove-fair-open functionality and design an efficient resource-fair protocol that securely realizes it, using a new variant of a cryptographic primitive known as time-lines. With (the fairly wrapped version of) this functionality we show that some of the existing secure multi-party computation protocols can be easily transformed into resource-fair protocols while preserving their security.

108 citations

Book ChapterDOI
Steven J. Phillips1
04 Jan 2002
TL;DR: Describes two simple modifications of K-means and related algorithms for clustering that improve the running time without changing the output; the two resulting algorithms are called Compare-means and Sort-means.
Abstract: This paper describes two simple modifications of K-means and related algorithms for clustering that improve the running time without changing the output. The two resulting algorithms are called Compare-means and Sort-means. The time for an iteration of K-means is reduced from O(ndk), where n is the number of data points, k the number of clusters and d the dimension, to O(ndγ + k^2 d + k^2 log k) for Sort-means. Here γ ≤ k is the average over all points p of the number of means that are no more than twice as far as p is from the mean p was assigned to in the previous iteration. Compare-means performs a similar number of distance calculations as Sort-means, and is faster when the number of means is very large. Both modifications are extremely simple, and could easily be added to existing clustering implementations. We investigate the empirical performance of the algorithms on three datasets drawn from practical applications. As a primary test case, we use the Isodata variant of K-means on a sample of 2.3 million 6-dimensional points drawn from a Landsat-7 satellite image. For this dataset, γ quickly drops to less than log2 k, and the running time decreases accordingly. For example, a run with k = 100 drops from an hour and a half to sixteen minutes for Compare-means and six and a half minutes for Sort-means. Further experiments show similar improvements on datasets derived from a forestry application and from the analysis of BGP updates in an IP network.
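The pruning behind Compare-means follows from the triangle inequality: if d(m_a, m_j) ≥ 2·d(p, m_a), then mean m_j cannot be closer to p than its currently assigned mean m_a, so the distance from p to m_j need not be computed. A minimal sketch of one assignment pass, with function names of our own choosing:

```python
import math

def compare_means_step(points, means, assign):
    """One assignment pass of Compare-means: skip any mean m_j with
    dist(m_a, m_j) >= 2 * dist(p, m_a), which the triangle inequality
    guarantees cannot beat the currently assigned mean m_a."""
    k = len(means)
    # Precompute pairwise inter-mean distances: the O(k^2 d) term.
    inter = [[math.dist(means[i], means[j]) for j in range(k)]
             for i in range(k)]
    new_assign = []
    for p, a in zip(points, assign):
        d_a = math.dist(p, means[a])
        best, best_d = a, d_a
        for j in range(k):
            if j == a or inter[a][j] >= 2 * d_a:
                continue  # pruned: mean j cannot be closer than mean a
            d_j = math.dist(p, means[j])
            if d_j < best_d:
                best, best_d = j, d_j
        new_assign.append(best)
    return new_assign
```

On average each point computes distances to only the γ means within twice its current assignment distance, which is where the O(ndγ) term comes from.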

108 citations


Authors

Showing all 1881 results

Name                    H-index   Papers   Citations
Yoshua Bengio           202       1033     420313
Scott Shenker           150       454      118017
Paul Shala Henry        137       318      35971
Peter Stone             130       1229     79713
Yann LeCun              121       369      171211
Louis E. Brus           113       347      63052
Jennifer Rexford        102       394      45277
Andreas F. Molisch      96        777      47530
Vern Paxson             93        267      48382
Lorrie Faith Cranor     92        326      28728
Ward Whitt              89        424      29938
Lawrence R. Rabiner     88        378      70445
Thomas E. Graedel       86        348      27860
William W. Cohen        85        384      31495
Michael K. Reiter       84        380      30267
Network Information
Related Institutions (5)
Microsoft
86.9K papers, 4.1M citations

94% related

Google
39.8K papers, 2.1M citations

91% related

Hewlett-Packard
59.8K papers, 1.4M citations

89% related

Bell Labs
59.8K papers, 3.1M citations

88% related

Performance Metrics
No. of papers from the Institution in previous years
Year   Papers
2022   5
2021   33
2020   69
2019   71
2018   100
2017   91