Institution

AT&T Labs

Company
About: AT&T Labs is a company known for its research contributions in the topics Network packet & The Internet. The organization has 1879 authors who have published 5595 publications receiving 483151 citations.


Papers
Proceedings ArticleDOI
01 Oct 1998
TL;DR: Offers an end-to-end approach to improving Web performance that collectively examines the Web components --- clients, proxies, servers, and the network --- to reduce user-perceived latency and the number of TCP connections, improve cache coherency and cache replacement, and enable prefetching of resources likely to be accessed in the near future.
Abstract: The rapid growth of the World Wide Web has caused serious performance degradation on the Internet. This paper offers an end-to-end approach to improving Web performance by collectively examining the Web components --- clients, proxies, servers, and the network. Our goal is to reduce user-perceived latency and the number of TCP connections, improve cache coherency and cache replacement, and enable prefetching of resources that are likely to be accessed in the near future. In our scheme, server response messages include piggybacked information customized to the requesting proxy. Our enhancement to the existing request-response protocol requires no per-proxy state at a server, only a very small amount of transient per-server state at the proxy, and can be implemented without changes to HTTP 1.1. The server groups related resources into volumes (based on access patterns and the file system's directory structure) and applies a proxy-generated filter (indicating the type of information of interest to the proxy) to tailor the piggyback information. We present efficient data structures for constructing server volumes and applying proxy filters, and a transparent way to perform volume maintenance and piggyback generation at a router along the path between the proxy and the server. We demonstrate the effectiveness of our end-to-end approach by evaluating various volume construction and filtering techniques across a collection of large client and server logs.

174 citations
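
The volume-and-filter mechanism in the abstract above can be illustrated in a few lines. The sketch below is hypothetical (the names, data structures, and the assumption that the filter travels with each request are illustrative choices, not the paper's implementation); it shows one way a server can piggyback cache-coherency hints for a requested resource's volume while holding no per-proxy state.

import os

# Hypothetical volumes, grouped by directory structure as the abstract describes.
volumes = {
    "/docs/": ["/docs/a.html", "/docs/b.html", "/docs/c.html"],
    "/img/":  ["/img/logo.gif", "/img/banner.gif"],
}
last_modified = {
    "/docs/a.html": 1001, "/docs/b.html": 1005, "/docs/c.html": 998,
    "/img/logo.gif": 950, "/img/banner.gif": 1020,
}

def respond(url, proxy_filter):
    """Serve url and piggyback coherency hints for its volume.

    The proxy-generated filter is assumed to arrive with the request,
    which is one way to keep the server free of per-proxy state: the
    server just applies the filter to the volume containing url.
    """
    volume = next((members for prefix, members in volumes.items()
                   if url.startswith(prefix)), [])
    piggyback = [(r, last_modified[r]) for r in volume
                 if r != url and proxy_filter(r)]
    return {"body": "<contents of %s>" % url, "piggyback": piggyback}

# A proxy interested only in HTML resources it might validate or prefetch:
reply = respond("/docs/a.html", proxy_filter=lambda r: r.endswith(".html"))
print(reply["piggyback"])   # hints for b.html and c.html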

Journal ArticleDOI
TL;DR: Introduces threshold sampling, a sampling scheme that optimally controls the expected volume of samples and the variance of estimators over any classification of flows in a large stream.
Abstract: This paper deals with sampling objects from a large stream. Each object possesses a size, and the aim is to be able to estimate the total size of an arbitrary subset of objects whose composition is not known at the time of sampling. This problem is motivated from network measurements in which the objects are flow records exported by routers and the sizes are the number of packets or bytes reported in the record. Subsets of interest could be flows from a certain customer or flows from a worm attack. This paper introduces threshold sampling as a sampling scheme that optimally controls the expected volume of samples and the variance of estimators over any classification of flows. It provides algorithms for dynamic control of sample volumes and evaluates them on flow data gathered from a commercial Internet Protocol (IP) network. The algorithms are simple to implement and robust to variation in network conditions. The work reported here has been applied in the measurement infrastructure of the commercial IP network. Not employing sampling would have entailed an order of magnitude greater capital expenditure to accommodate the measurement traffic and its processing.

173 citations
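
Threshold sampling as described above has a compact core: a flow of size x is sampled with probability p_z(x) = min(1, x/z) for a threshold z, and each sample is credited max(x, z), which equals the Horvitz-Thompson estimate x / p_z(x). Summing credits over any subset of the samples then gives an unbiased estimate of that subset's true total. A minimal sketch (illustrative, not the paper's implementation):

import random

def threshold_sample(flow_sizes, z):
    """Sample each flow with p(x) = min(1, x/z); credit max(x, z).

    If x >= z the flow is always kept and credited x; otherwise it is
    kept with probability x/z and credited z = x / p(x). Raising z
    lowers the expected sample volume, sum over x of min(1, x/z).
    """
    samples = []
    for x in flow_sizes:
        if random.random() < min(1.0, x / z):
            samples.append((x, max(x, z)))
    return samples

# Example: estimate the total bytes of a stream from its samples.
flows = [40, 1500, 90000, 64, 12000, 300]
sampled = threshold_sample(flows, z=1000)
print(len(sampled), "samples; estimated total:",
      sum(credit for _, credit in sampled))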

Journal ArticleDOI
TL;DR: Presents PARADISE (PARAdigm for DIalogue System Evaluation), a general framework for evaluating and comparing the performance of spoken dialogue agents. The framework can be used both to make predictions about future versions of an agent and as feedback to the agent, so that it can learn to optimize its behaviour based on its experiences with users over time.

173 citations
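
In the published PARADISE formulation, performance is modeled as a weighted difference between a normalized task-success measure (the kappa statistic) and normalized dialogue costs, with the weights fit by regressing user-satisfaction ratings on those measures. A minimal sketch of that objective follows; the weights and data here are illustrative only, not values from the paper.

from statistics import mean, stdev

def znorm(x, population):
    # Normalize a measure to zero mean, unit variance across a corpus.
    return (x - mean(population)) / stdev(population)

def performance(kappa, costs, alpha, weights, kappa_pop, cost_pops):
    """Performance = alpha * N(kappa) - sum_i w_i * N(cost_i).

    alpha and weights are placeholders here; PARADISE derives them by
    regressing user-satisfaction scores on the normalized measures.
    """
    penalty = sum(w * znorm(c, pop)
                  for w, c, pop in zip(weights, costs, cost_pops))
    return alpha * znorm(kappa, kappa_pop) - penalty

# Toy example: one dialogue scored against a small corpus of dialogues.
kappa_pop = [0.6, 0.8, 0.9, 0.7]      # task success across the corpus
turns_pop = [12, 8, 15, 10]           # one cost measure: number of turns
print(performance(kappa=0.85, costs=[9], alpha=0.5, weights=[0.5],
                  kappa_pop=kappa_pop, cost_pops=[turns_pop]))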

Journal ArticleDOI
Andrew Odlyzko
01 Mar 2000
TL;DR: A brief survey of the current state of the art in discrete logs is presented.
Abstract: The first practical public key cryptosystem to be published, the Diffie–Hellman key exchange algorithm, was based on the assumption that discrete logarithms are hard to compute. This intractability hypothesis is also the foundation for the presumed security of a variety of other public key schemes. While there have been substantial advances in discrete log algorithms in the last two decades, in general the discrete log still appears to be hard, especially for some groups, such as those from elliptic curves. Unfortunately no proofs of hardness are available in this area, so it is necessary to rely on experience and intuition in judging what parameters to use for cryptosystems. This paper presents a brief survey of the current state of the art in discrete logs.

173 citations
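
For context, the Diffie-Hellman exchange the abstract refers to is only a few lines of arithmetic: each party raises a public generator g to a private exponent modulo a prime p, and both arrive at the same shared secret g^(ab) mod p. An eavesdropper who sees only g^a and g^b faces exactly the discrete logarithm problem the survey covers. A toy sketch (the Mersenne prime 2^127 - 1 is far too small for real use, where 2048-bit groups or elliptic curves are standard):

import secrets

p = 2**127 - 1        # a Mersenne prime; toy-sized, for illustration only
g = 3                 # public base

a = secrets.randbelow(p - 2) + 1    # Alice's private exponent
b = secrets.randbelow(p - 2) + 1    # Bob's private exponent

A = pow(g, a, p)      # Alice transmits g^a mod p in the clear
B = pow(g, b, p)      # Bob transmits g^b mod p in the clear

# Both sides compute the same shared secret; recovering a from A
# (or b from B) is the discrete logarithm problem.
assert pow(B, a, p) == pow(A, b, p)
print("shared secret established")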

Journal ArticleDOI
Julia Hirschberg
TL;DR: Suggests some areas of speech research in which progress has been made, and some in which more might be made, with particular emphasis on text-to-speech synthesis and spoken dialogue systems.

173 citations


Authors


Name                  H-index  Papers  Citations
Yoshua Bengio             202    1033     420313
Scott Shenker             150     454     118017
Paul Shala Henry          137     318      35971
Peter Stone               130    1229      79713
Yann LeCun                121     369     171211
Louis E. Brus             113     347      63052
Jennifer Rexford          102     394      45277
Andreas F. Molisch         96     777      47530
Vern Paxson                93     267      48382
Lorrie Faith Cranor        92     326      28728
Ward Whitt                 89     424      29938
Lawrence R. Rabiner        88     378      70445
Thomas E. Graedel          86     348      27860
William W. Cohen           85     384      31495
Michael K. Reiter          84     380      30267
Network Information
Related Institutions (5)
Microsoft: 86.9K papers, 4.1M citations (94% related)
Google: 39.8K papers, 2.1M citations (91% related)
Hewlett-Packard: 59.8K papers, 1.4M citations (89% related)
Bell Labs: 59.8K papers, 3.1M citations (88% related)

Performance Metrics
No. of papers from the Institution in previous years

Year  Papers
2022       5
2021      33
2020      69
2019      71
2018     100
2017      91