Institution

AT&T Labs

Company
About: AT&T Labs is a company known for its research contributions in the topics Network packet & The Internet. The organization has 1879 authors who have published 5595 publications, receiving 483151 citations.


Papers
Book ChapterDOI
01 Aug 1997
TL;DR: A simple decomposition of the expected distortion shows that K-means (and its extension for inferring general parametric densities from unlabeled sample data) must implicitly manage a trade-off between how similar the data assigned to each cluster are and how the data are balanced among the clusters.
Abstract: Assignment methods are at the heart of many algorithms for unsupervised learning and clustering -- in particular, the well-known K-means and Expectation-Maximization (EM) algorithms. In this work, we study several different methods of assignment, including the "hard" assignments used by K-means and the "soft" assignments used by EM. While it is known that K-means minimizes the distortion on the data and EM maximizes the likelihood, little is known about the systematic differences of behavior between the two algorithms. Here we shed light on these differences via an information-theoretic analysis. The cornerstone of our results is a simple decomposition of the expected distortion, showing that K-means (and its extension for inferring general parametric densities from unlabeled sample data) must implicitly manage a trade-off between how similar the data assigned to each cluster are, and how the data are balanced among the clusters. How well the data are balanced is measured by the entropy of the partition defined by the hard assignments. In addition to letting us predict and verify systematic differences between K-means and EM on specific examples, the decomposition allows us to give a rather general argument showing that K-means will consistently find densities with less "overlap" than EM. We also study a third natural assignment method that we call posterior assignment, that is close in spirit to the soft assignments of EM, but leads to a surprisingly different algorithm.
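As a minimal sketch (not the paper's code; the data, initialization, and variable names are illustrative), the hard/soft assignment distinction and the partition-entropy "balance" term can be seen on one-dimensional data as follows:

```python
# Illustrative sketch: contrast K-means' hard assignments with EM's soft
# assignments on 1-D data drawn from two overlapping Gaussians, and compute
# the entropy of the hard partition (the "balance" term in the decomposition).
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-1.0, 1.0, 200), rng.normal(1.0, 1.0, 200)])

mu_hard = np.array([-0.5, 0.5])   # K-means centers (illustrative init)
mu_soft = np.array([-0.5, 0.5])   # EM means; unit variance, equal weights

for _ in range(50):
    # Hard (K-means) assignment: each point belongs entirely to its nearest center.
    hard = np.argmin((x[:, None] - mu_hard[None, :]) ** 2, axis=1)
    mu_hard = np.array([x[hard == k].mean() for k in range(2)])

    # Soft (EM) assignment: posterior responsibilities under unit-variance Gaussians.
    logp = -0.5 * (x[:, None] - mu_soft[None, :]) ** 2
    resp = np.exp(logp - logp.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)
    mu_soft = (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)

# Entropy of the hard partition: how evenly the data are balanced across
# clusters (about log 2 = 0.69 nats when perfectly balanced).
p = np.bincount(hard, minlength=2) / len(x)
print("K-means centers:", mu_hard)
print("EM means:      ", mu_soft)
print("partition entropy (nats):", -np.sum(p * np.log(p + 1e-12)))
```

On data like this, the K-means centers typically end up farther apart than the EM means, in line with the paper's argument that K-means finds densities with less "overlap" than EM.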

142 citations

Proceedings ArticleDOI
01 Oct 2001
TL;DR: The challenge of implementing a wide-area, in-building AR system is approached in two different ways; the first combines sparse position measurements from the Bat system with more frequent rotational information from an inertial tracker to render annotations and virtual objects that relate to or coexist with the real world.
Abstract: Augmented reality (AR) both exposes and supplements the user's view of the real world. Previous AR work has focussed on the close registration of real and virtual objects, which requires very accurate real-time estimates of head position and orientation. Most of these systems have been tethered and restricted to small volumes. In contrast, we have chosen to concentrate on allowing the AR user to roam freely within an entire building. At AT&T Laboratories Cambridge we provide personnel with AR services using data from an ultrasonic tracking system, called the Bat system, which has been deployed building-wide. We have approached the challenge of implementing a wide-area, in-building AR system in two different ways. The first uses a head-mounted display connected to a laptop, which combines sparse position measurements from the Bat system with more frequent rotational information from an inertial tracker to render annotations and virtual objects that relate to or coexist with the real world. The second uses a PDA to provide a convenient portal with which the user can quickly view the augmented world. These systems can be used to annotate the world in a more-or-less seamless way, allowing a richer interaction with both real and virtual objects.
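As a rough, hypothetical sketch of the data flow the first approach describes (not the Bat system's actual code; the class, method names, and update rates are assumptions for illustration): a pose estimator accepts sparse position fixes and more frequent orientation readings, and exposes the latest combined pose for rendering.

```python
# Illustrative sketch: fuse sparse position fixes from a building-wide tracker
# with frequent orientation readings from an inertial sensor into a head pose.
from dataclasses import dataclass

@dataclass
class PoseEstimator:
    position: tuple = (0.0, 0.0, 0.0)          # metres, world frame
    orientation: tuple = (1.0, 0.0, 0.0, 0.0)  # unit quaternion (w, x, y, z)
    position_time: float = 0.0
    orientation_time: float = 0.0

    def on_position_fix(self, t, xyz):
        # Low-rate callback: the ultrasonic tracker reports a new location.
        self.position, self.position_time = tuple(xyz), t

    def on_orientation(self, t, quat):
        # High-rate callback: the inertial tracker reports head orientation.
        self.orientation, self.orientation_time = tuple(quat), t

    def pose(self):
        # Latest position paired with the most recent orientation; a renderer
        # would use this to place annotations registered to the real world.
        return self.position, self.orientation

est = PoseEstimator()
est.on_position_fix(0.00, (12.3, 4.5, 1.7))       # a few fixes per second
est.on_orientation(0.01, (1.0, 0.0, 0.0, 0.0))    # many readings per second
print(est.pose())
```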

142 citations

Journal ArticleDOI
TL;DR: This work analyzes the performance of top-down algorithms for decision tree learning and proves that some popular and empirically successful heuristics that are based on first principles meet the criteria of an independently motivated theoretical model.
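As a toy illustration of the kind of top-down splitting heuristic such analyses address (an entropy-based impurity criterion; the data and function names here are invented, not taken from the paper):

```python
# Toy top-down splitting step: choose the threshold on one feature that most
# reduces an entropy-based impurity over the binary labels.
import numpy as np

def entropy(y):
    p = np.bincount(y, minlength=2) / max(len(y), 1)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def best_split(x, y):
    base = entropy(y)
    best = (None, 0.0)                  # (threshold, impurity reduction)
    for t in np.unique(x):
        left, right = y[x <= t], y[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        w = len(left) / len(y)
        gain = base - (w * entropy(left) + (1 - w) * entropy(right))
        if gain > best[1]:
            best = (t, gain)
    return best

x = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.95])
y = np.array([0, 0, 0, 1, 1, 1])
print(best_split(x, y))   # splits at 0.4 with a gain of 1 bit
```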

142 citations

Book ChapterDOI
K. G. Coffman, Andrew Odlyzko
01 Jan 2002
TL;DR: Internet traffic is approximately doubling each year, a growth rate similar to "Moore's Law" in semiconductors but slower than the frequently heard claims of traffic doubling every three or four months.
Abstract: Internet traffic is approximately doubling each year. This growth rate applies not only to the entire Internet, but to a large range of individual institutions. For a few places we have records going back several years that exhibit this regular rate of growth. Even when there are no obvious bottlenecks, traffic tends not to grow much faster. This reflects complicated interactions of technology, economics, and sociology, similar to, but more delicate than those that have produced "Moore's Law" in semiconductors. A doubling of traffic each year represents extremely fast growth, much faster than the increases in other communication services. If it continues, data traffic will surpass voice traffic around the year 2002. However, this rate of growth is slower than the frequently heard claims of a doubling of traffic every three or four months. Such spectacular growth rates apparently did prevail over a two-year period 1995-6. Ever since, though, growth appears to have reverted to the Internet's historical pattern of a single doubling each year. Progress in transmission technology appears sufficient to double network capacity each year for about the next decade. However, traffic growth faster than a tripling each year could probably not be sustained for more than a few years. Since computing and storage capacities will also be growing, as predicted by the versions of "Moore's Law" appropriate for those technologies, we can expect demand for data transmission to continue increasing. A doubling in Internet traffic each year appears a likely outcome. If Internet traffic continues to double each year, we will have yet another form of "Moore's Law." Such a growth rate would have several important implications. In the intermediate run, there would be neither a clear "bandwidth glut" nor a "bandwidth scarcity," but a more balanced situation, with supply and demand growing at comparable rates. Also, computer and network architectures would be strongly affected, since most data would stay local. Programs such as Napster would play an increasingly important role. Transmission would likely continue to be dominated by file transfers, not by real time streaming media.
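As a quick check of the growth rates being compared (a simple calculation, not from the paper), annual doubling and doubling every three or four months compound very differently over a year:

```python
# Compare the annualized growth factor implied by each doubling period.
for label, period_months in [("doubling yearly", 12),
                             ("doubling every 4 months", 4),
                             ("doubling every 3 months", 3)]:
    annual_factor = 2 ** (12 / period_months)
    print(f"{label}: x{annual_factor:.0f} per year")
# doubling yearly: x2 per year
# doubling every 4 months: x8 per year
# doubling every 3 months: x16 per year
```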

141 citations


Authors

Showing the top 15 of 1881 authors, sorted by H-index

Name                 H-index   Papers   Citations
Yoshua Bengio        202       1033     420313
Scott Shenker        150       454      118017
Paul Shala Henry     137       318      35971
Peter Stone          130       1229     79713
Yann LeCun           121       369      171211
Louis E. Brus        113       347      63052
Jennifer Rexford     102       394      45277
Andreas F. Molisch   96        777      47530
Vern Paxson          93        267      48382
Lorrie Faith Cranor  92        326      28728
Ward Whitt           89        424      29938
Lawrence R. Rabiner  88        378      70445
Thomas E. Graedel    86        348      27860
William W. Cohen     85        384      31495
Michael K. Reiter    84        380      30267
Network Information
Related Institutions (5)
Microsoft
86.9K papers, 4.1M citations

94% related

Google
39.8K papers, 2.1M citations

91% related

Hewlett-Packard
59.8K papers, 1.4M citations

89% related

Bell Labs
59.8K papers, 3.1M citations

88% related

Performance Metrics

No. of papers from the Institution in previous years

Year   Papers
2022   5
2021   33
2020   69
2019   71
2018   100
2017   91