Institution

AT&T Labs

Company
About: AT&T Labs is a company known for its research contributions in the topics of Network packet and The Internet. The organization has 1879 authors who have published 5595 publications receiving 483151 citations.


Papers
Proceedings Article
Mihai Patrascu
05 Jun 2010
TL;DR: This work describes a carefully-chosen dynamic version of set disjointness (the "multiphase problem") and conjectures that it requires n^Ω(1) time per operation; it also gives the first nonalgebraic reduction from 3SUM, which allows 3SUM-hardness results for combinatorial problems.
Abstract: We consider a number of dynamic problems with no known poly-logarithmic upper bounds, and show that they require n^Ω(1) time per operation, unless 3SUM has strongly subquadratic algorithms. Our result is modular: (1) We describe a carefully-chosen dynamic version of set disjointness (the "multiphase problem"), and conjecture that it requires n^Ω(1) time per operation. All our lower bounds follow by easy reduction. (2) We reduce 3SUM to the multiphase problem. Ours is the first nonalgebraic reduction from 3SUM, and allows 3SUM-hardness results for combinatorial problems. For instance, it implies hardness of reporting all triangles in a graph. (3) It is plausible that an unconditional lower bound for the multiphase problem can be established via a number-on-forehead communication game.

286 citations
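
For readers unfamiliar with the conjecture the paper above relies on: 3SUM asks whether some three numbers in a set of n sum to zero, and the standard algorithm for it takes roughly quadratic time. The sketch below only illustrates that baseline (the function name and test inputs are made up for this example); the paper's contribution is the reduction from 3SUM to dynamic problems, not this algorithm.

```python
def has_3sum(nums):
    """Return True if some three entries of nums (distinct positions) sum to zero.

    Classic O(n^2) approach: sort, then for each element run a two-pointer
    scan over the remaining suffix.  The 3SUM conjecture says no algorithm
    runs in O(n^(2-eps)) time for any constant eps > 0.
    """
    nums = sorted(nums)
    n = len(nums)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = nums[i] + nums[lo] + nums[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1
            else:
                hi -= 1
    return False

print(has_3sum([-4, -1, 0, 2, 5]))  # True: -4 + (-1) + 5 == 0
print(has_3sum([1, 2, 3, 7]))       # False: all positive
```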

Journal Article
TL;DR: This paper provides methods that use flow statistics formed from a sampled packet stream to infer the frequencies of the number of packets per flow in the unsampled stream, partly by exploiting protocol-level detail reported in flow records.
Abstract: Passive traffic measurement increasingly employs sampling at the packet level. Many high-end routers form flow statistics from a sampled substream of packets. Sampling controls the consumption of resources by the measurement operations. However, knowledge of the statistics of flows in the unsampled stream remains useful, for understanding both characteristics of source traffic, and consumption of resources in the network. This paper provides methods that use flow statistics formed from a sampled packet stream to infer the frequencies of the number of packets per flow in the unsampled stream. A key task is to infer the properties of flows of original traffic that evaded sampling altogether. We achieve this through statistical inference, and by exploiting protocol level detail reported in flow records. We investigate the impact on our results of different versions of packet sampling.

285 citations
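
To make the sampling setup above concrete, the toy sketch below simulates independent packet sampling with probability p and shows why inference is needed: small flows often leave no sampled packets at all, and a naive scale-up recovers only aggregate totals, not the per-flow size distribution. The traffic mix, function names, and parameters here are invented for illustration and are not from the paper.

```python
import random

def sample_packets(flow_sizes, p, rng):
    """Independently keep each packet with probability p (binomial thinning).

    Returns per-flow sampled packet counts, silently dropping flows whose
    packets all evaded sampling -- exactly the flows the paper's statistical
    inference has to account for.
    """
    sampled = {}
    for flow, k in flow_sizes.items():
        kept = sum(1 for _ in range(k) if rng.random() < p)
        if kept > 0:
            sampled[flow] = kept
    return sampled

rng = random.Random(7)
# Hypothetical original traffic: many 1-packet flows plus a few 500-packet flows.
original = {f"flow{i}": (1 if i % 10 else 500) for i in range(1000)}
p = 0.01
sampled = sample_packets(original, p, rng)

total_sampled = sum(sampled.values())
print("flows seen after sampling:", len(sampled), "of", len(original))
print("naive packet total estimate:", total_sampled / p,
      "(true:", sum(original.values()), ")")
```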

Proceedings Article
01 May 1999
TL;DR: This paper presents a join signature scheme based on tug-of-war signatures that provides guarantees on join size estimation as a function of the self-join sizes of the joining relations; this scheme can significantly improve upon the sampling scheme.
Abstract: Query optimizers rely on fast, high-quality estimates of result sizes in order to select between various join plans. Self-join sizes of relations provide bounds on the join size of any pairs of such relations. The self-join size also indicates the degree of skew in the data, and has been advocated for several estimation procedures. Exact computation of the self-join size requires storage proportional to the number of distinct attribute values, which may be prohibitively large. In this paper, we study algorithms for tracking (approximate) self-join sizes in limited storage in the presence of insertions and deletions to the relations. Such algorithms detect changes in the degree of skew without an expensive recomputation from the base data. We show that an algorithm based on a tug-of-war approach provides a more accurate estimation than one based on a sample-and-count approach, which is in turn more accurate than a sampling-only approach. Next, we study algorithms for tracking (approximate) join sizes in limited storage; the goal is to maintain a small signature of each relation such that join sizes can be accurately estimated between any pairs of relations. We show that taking random samples for join signatures can lead to inaccurate estimation unless the sample size is quite large; moreover, by a lower bound we show, no other signature scheme can significantly improve upon sampling without further assumptions. These negative results are shown to hold even in the presence of sanity bounds. On the other hand, we present a join signature scheme based on tug-of-war signatures that provides guarantees on join size estimation as a function of the self-join sizes of the joining relations; this scheme can significantly improve upon the sampling scheme.

284 citations
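
The "tug-of-war" signatures referred to above are AMS-style sketches of the second frequency moment, which equals the self-join size. The sketch below is a simplified illustration of that idea only: it assumes a salted hash as a stand-in for 4-wise independent sign functions and averages counters instead of taking a median of means, so it is not the paper's exact construction.

```python
import hashlib
import statistics

class TugOfWarSketch:
    """Track an estimate of the self-join size F2 = sum_v f(v)^2 in small space,
    under both insertions and deletions.

    Each counter keeps Z = sum_v f(v) * s(v) for a random +/-1 sign s(v);
    Z^2 is an unbiased estimator of F2, and averaging counters reduces variance.
    """

    def __init__(self, num_counters=64):
        self.counters = [0] * num_counters

    def _sign(self, value, i):
        # Illustrative stand-in for a 4-wise independent sign function.
        h = hashlib.sha256(f"{i}:{value}".encode()).digest()
        return 1 if h[0] & 1 else -1

    def update(self, value, delta=1):
        """delta = +1 for an insertion, -1 for a deletion."""
        for i in range(len(self.counters)):
            self.counters[i] += delta * self._sign(value, i)

    def estimate_self_join(self):
        return statistics.mean(z * z for z in self.counters)

sketch = TugOfWarSketch()
true_freq = {"a": 50, "b": 10, "c": 3}
for v, f in true_freq.items():
    for _ in range(f):
        sketch.update(v, +1)
print("true F2:", sum(f * f for f in true_freq.values()))
print("estimated F2:", round(sketch.estimate_self_join(), 1))
```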

Book Chapter
23 Feb 1998
TL;DR: The PolicyMaker trust management system, a general tool for addressing the trust management problem of emerging electronic commerce services that use public-key cryptography on a mass-market scale, is described.
Abstract: Emerging electronic commerce services that use public-key cryptography on a mass-market scale require sophisticated mechanisms for managing trust. For example, any service that receives a signed request for action is forced to answer the central question “Is the key used to sign this request authorized to take this action?” In some services, this question reduces to “Does this key belong to this person?” In others, the authorization question is more complicated, and resolving it requires techniques for formulating security policies and security credentials, determining whether particular sets of credentials satisfy the relevant policies, and deferring trust to third parties. Blaze, Feigenbaum, and Lacy [1] identified this trust management problem as a distinct and important component of network services and described a general tool for addressing it, the PolicyMaker trust management system.

284 citations
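
The toy sketch below only illustrates the central question quoted in the abstract above ("Is the key used to sign this request authorized to take this action?") by chaining delegations from locally trusted keys to the signing key. It is not PolicyMaker's policy language or compliance-checking engine; all keys, actions, and credential formats here are invented for the example.

```python
# Local policy: keys this service trusts directly, per action.
policy = {"debit_account": {"key_bank_root"}}

# Credentials: (issuer_key, subject_key, action), meaning the issuer delegates
# authority for that action to the subject.
credentials = [
    ("key_bank_root", "key_branch_7", "debit_account"),
    ("key_branch_7", "key_teller_42", "debit_account"),
]

def is_authorized(signing_key, action):
    """Follow delegation chains from directly trusted keys to the signer."""
    trusted = set(policy.get(action, set()))
    changed = True
    while changed:
        changed = False
        for issuer, subject, a in credentials:
            if a == action and issuer in trusted and subject not in trusted:
                trusted.add(subject)
                changed = True
    return signing_key in trusted

print(is_authorized("key_teller_42", "debit_account"))  # True via delegation
print(is_authorized("key_mallory", "debit_account"))    # False
```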

Proceedings Article
04 May 1997
TL;DR: It is shown how to transform algorithms that assume that all experts are always awake to algorithms that do not require this assumption, and how to derive corresponding loss bounds.
Abstract: We study online learning algorithms that predict by combining the predictions of several subordinate prediction algorithms, sometimes called "experts." These simple algorithms belong to the multiplicative weights family of algorithms. The performance of these algorithms degrades only logarithmically with the number of experts, making them particularly useful in applications where the number of experts is very large. However, in applications such as text categorization, it is often natural for some of the experts to abstain from making predictions on some of the instances. We show how to transform algorithms that assume that all experts are always awake to algorithms that do not require this assumption. We also show how to derive corresponding loss bounds. Our method is very general, and can be applied to a large family of online learning algorithms. We also give applications to various prediction models including decision graphs and "switching" experts.

284 citations
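
A minimal sketch of the kind of transformation described in the abstract above: in a multiplicative-weights round, only the "awake" experts are penalized, and their weights are then rescaled so the awake set's total weight is preserved, leaving sleeping experts untouched. The renormalization choice, names, and parameters here are assumptions for illustration, not the paper's exact algorithm or loss bounds.

```python
import math

def specialists_update(weights, awake, losses, eta=0.5):
    """One multiplicative-weights round with abstaining ('asleep') experts.

    Penalize only awake experts, then rescale them so the awake set's total
    weight is unchanged; sleeping experts keep their weights.
    """
    awake_total = sum(weights[i] for i in awake)
    for i in awake:
        weights[i] *= math.exp(-eta * losses[i])
    scale = awake_total / sum(weights[i] for i in awake)
    for i in awake:
        weights[i] *= scale
    return weights

def predict(weights, awake, predictions):
    """Weighted-average prediction over the awake experts only."""
    total = sum(weights[i] for i in awake)
    return sum(weights[i] * predictions[i] for i in awake) / total

weights = [1.0, 1.0, 1.0]
awake = [0, 2]                      # expert 1 abstains this round
preds = {0: 0.9, 2: 0.2}
print("prediction:", predict(weights, awake, preds))
weights = specialists_update(weights, awake, {0: 1.0, 2: 0.0})
print("weights after update:", [round(w, 3) for w in weights])
```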


Authors


Name                   H-index   Papers   Citations
Yoshua Bengio          202       1033     420313
Scott Shenker          150       454      118017
Paul Shala Henry       137       318      35971
Peter Stone            130       1229     79713
Yann LeCun             121       369      171211
Louis E. Brus          113       347      63052
Jennifer Rexford       102       394      45277
Andreas F. Molisch     96        777      47530
Vern Paxson            93        267      48382
Lorrie Faith Cranor    92        326      28728
Ward Whitt             89        424      29938
Lawrence R. Rabiner    88        378      70445
Thomas E. Graedel      86        348      27860
William W. Cohen       85        384      31495
Michael K. Reiter      84        380      30267
Network Information
Related Institutions (5)
Microsoft
86.9K papers, 4.1M citations

94% related

Google
39.8K papers, 2.1M citations

91% related

Hewlett-Packard
59.8K papers, 1.4M citations

89% related

Bell Labs
59.8K papers, 3.1M citations

88% related

Performance Metrics
No. of papers from the Institution in previous years
Year   Papers
2022   5
2021   33
2020   69
2019   71
2018   100
2017   91