Institution

AT&T Labs

Company
About: AT&T Labs is the research and development organization of AT&T. It is known for research contributions in the topics of Network packet & The Internet. The organization has 1879 authors who have published 5595 publications receiving 483151 citations.


Papers
Journal ArticleDOI
TL;DR: A three-stage framework for NR QE is described that encompasses the range of potential use scenarios for the NR QE and allows knowledge of the human visual system to be incorporated throughout; the measurement stage of the framework is surveyed, considering methods that rely on bitstream, pixels, or both.
Abstract: This paper reviews the basic background knowledge necessary to design effective no-reference (NR) quality estimators (QEs) for images and video. We describe a three-stage framework for NR QE that encompasses the range of potential use scenarios for the NR QE and allows knowledge of the human visual system to be incorporated throughout. We survey the measurement stage of the framework, considering methods that rely on bitstream, pixels, or both. By exploring both the accuracy requirements of potential uses as well as evaluation criteria to stress-test a QE, we set the stage for our community to make substantial future improvements to the challenging problem of NR quality estimation.

166 citations
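
The measurement stage described above can operate directly on decoded pixels when no bitstream is available. As a toy illustration of a pixel-domain no-reference measurement (not the framework from the paper; the function and its name below are our own), image sharpness can be estimated from the variance of a Laplacian response:

import numpy as np

def laplacian_variance(gray_image: np.ndarray) -> float:
    """Toy no-reference sharpness score: variance of a 3x3 Laplacian response.
    Lower values suggest a blurrier image; pixel-domain only, no reference needed."""
    img = gray_image.astype(np.float64)
    # 4-neighbour Laplacian computed with shifted copies (interior pixels only).
    lap = (img[1:-1, :-2] + img[1:-1, 2:] + img[:-2, 1:-1] + img[2:, 1:-1]
           - 4.0 * img[1:-1, 1:-1])
    return float(lap.var())

# Example: blurring a noisy test pattern lowers the score.
rng = np.random.default_rng(0)
sharp = rng.uniform(0, 255, (64, 64))
blurred = (sharp[:-1, :-1] + sharp[1:, :-1] + sharp[:-1, 1:] + sharp[1:, 1:]) / 4.0
print(laplacian_variance(sharp) > laplacian_variance(blurred))  # True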

Proceedings Article
15 Jun 2009
TL;DR: This work argues that cloud computing has a great potential to change how enterprises run and manage their IT systems, but that to achieve this, more comprehensive control over network resources and security need to be provided for users.
Abstract: Cloud computing platforms such as Amazon EC2 provide customers with flexible, on demand resources at low cost. However, while existing offerings are useful for providing basic computation and storage resources, they fail to provide the security and network controls that many customers would like. In this work we argue that cloud computing has a great potential to change how enterprises run and manage their IT systems, but that to achieve this, more comprehensive control over network resources and security need to be provided for users. Towards this goal, we propose CloudNet, a cloud platform architecture which utilizes virtual private networks to securely and seamlessly link cloud and enterprise sites.

165 citations
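
The architectural idea is that the provider exposes not just machines but an isolated virtual network per customer, bridged to the enterprise LAN over a VPN so that rented resources appear inside the enterprise's own address space. A minimal data-model sketch of that idea (hypothetical names throughout, not CloudNet's actual interface):

from dataclasses import dataclass, field

@dataclass
class EnterpriseSite:
    name: str
    address_space: str        # e.g. "10.1.0.0/16"
    vpn_gateway: str          # public endpoint of the site's VPN gateway

@dataclass
class VirtualPrivateCloud:
    customer: str
    cloud_subnet: str
    tunnels: list = field(default_factory=list)

    def link_site(self, site: EnterpriseSite) -> dict:
        """Bridge the customer's cloud subnet to an enterprise site over a VPN tunnel,
        so traffic between the two address spaces stays inside the tunnel."""
        tunnel = {"peer": site.vpn_gateway,
                  "routes": [site.address_space, self.cloud_subnet]}
        self.tunnels.append(tunnel)
        return tunnel

# Example: the enterprise LAN and the rented cloud subnet form one private network.
hq = EnterpriseSite("hq", "10.1.0.0/16", "198.51.100.7")
vpc = VirtualPrivateCloud("acme", "10.2.0.0/16")
print(vpc.link_site(hq))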

Proceedings ArticleDOI
09 Jun 2003
TL;DR: This paper provides the first explicit construction of extractors which are simultaneously optimal up to constant factors in both seed length and output length, and introduces new condensers that have constant seed length (and retain a constant fraction of the min-entropy in the random source).
Abstract: This paper provides the first explicit construction of extractors which are simultaneously optimal up to constant factors in both seed length and output length. More precisely, for every n,k, our extractor uses a random seed of length O(log n) to transform any random source on n bits with (min-)entropy k into a distribution on (1-α)k bits that is ε-close to uniform. Here α and ε can be taken to be any positive constants. (In fact, ε can be almost polynomially small.) Our improvements are obtained via three new techniques, each of which may be of independent interest. The first is a general construction of mergers [22] from locally decodable error-correcting codes. The second introduces new condensers that have constant seed length (and retain a constant fraction of the min-entropy in the random source). The third is a way to augment the "win-win repeated condensing" paradigm of [17] with error reduction techniques like [15] so that our constant seed-length condensers can be used without error accumulation.

165 citations
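
For reference, the guarantee stated in the abstract can be written in standard notation: the extractor is a function
\[
\mathrm{Ext}\colon \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m,
\qquad d = O(\log n),\qquad m = (1-\alpha)k,
\]
such that for every source \(X\) on \(n\) bits with min-entropy \(H_\infty(X) \ge k\),
\[
\bigl\| \mathrm{Ext}(X, U_d) - U_m \bigr\|_{\mathrm{TV}} \le \varepsilon,
\]
where \(U_d\) and \(U_m\) denote uniform distributions and \(\alpha, \varepsilon > 0\) are arbitrary constants.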

Journal ArticleDOI
TL;DR: The concept of tangent vectors, which compactly represent the essence of these transformation invariances, is introduced, along with two classes of algorithms, tangent distance and tangent propagation, which make use of these invariances to improve performance.
Abstract: In pattern recognition, statistical modeling, or regression, the amount of data is a critical factor affecting the performance. If the amount of data and computational resources are unlimited, even trivial algorithms will converge to the optimal solution. However, in the practical case, given limited data and other resources, satisfactory performance requires sophisticated methods to regularize the problem by introducing a priori knowledge. Invariance of the output with respect to certain transformations of the input is a typical example of such a priori knowledge. We introduce the concept of tangent vectors, which compactly represent the essence of these transformation invariances, and two classes of algorithms, tangent distance and tangent propagation, which make use of these invariances to improve performance. © 2001 John Wiley & Sons, Inc. Int J Imaging Syst Technol 11, 181–197, 2000

165 citations
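
A minimal sketch of the one-sided tangent-distance idea (our own simplification, not the paper's full two-sided algorithm): the set of slightly transformed versions of a pattern x is approximated by the affine subspace x + T·a, where the columns of T are tangent vectors, and the distance to another pattern y is minimised over the coefficients a by least squares.

import numpy as np

def tangent_distance(x: np.ndarray, y: np.ndarray, tangents: np.ndarray) -> float:
    """One-sided tangent distance: min over a of ||x + tangents @ a - y||,
    where each column of `tangents` is one linearised transformation of x."""
    a, *_ = np.linalg.lstsq(tangents, y - x, rcond=None)
    return float(np.linalg.norm(x + tangents @ a - y))

# Example: a 1-D bump and a slightly shifted copy; the tangent vector for
# translation is the spatial derivative of x, so the tangent distance is
# much smaller than the plain Euclidean distance.
t = np.linspace(0.0, 1.0, 200)
x = np.exp(-((t - 0.50) ** 2) / 0.01)
y = np.exp(-((t - 0.52) ** 2) / 0.01)
T = np.gradient(x, t).reshape(-1, 1)
print(np.linalg.norm(x - y), tangent_distance(x, y, T))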

Journal ArticleDOI
TL;DR: An overview of MONA and a selection of implementation "secrets" that have been discovered and tested over the years, including formula reductions, DAGification, guided tree automata, three-valued logic, eager minimization, BDD-based automata representations, and cache-conscious data structures are presented.
Abstract: The MONA tool provides an implementation of automaton-based decision procedures for the logics WS1S and WS2S. It has been used for numerous applications, and it is remarkably efficient in practice, even though it faces a theoretically non-elementary worst-case complexity. The implementation has matured over a period of six years. Compared to the first naive version, the present tool is faster by several orders of magnitude. This speedup is obtained from many different contributions working on all levels of the compilation and execution of formulas. We present an overview of MONA and a selection of implementation "secrets" that have been discovered and tested over the years, including formula reductions, DAGification, guided tree automata, three-valued logic, eager minimization, BDD-based automata representations, and cache-conscious data structures. We describe these techniques and quantify their respective effects by experimenting with separate versions of the MONA tool that in turn omit each of them.

165 citations
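
Among the techniques listed, DAGification and the BDD-based automaton representation both rest on node sharing: structurally identical subterms are built exactly once via a unique table (hash-consing). A toy sketch of that mechanism (our own illustration, not MONA's code):

class BDD:
    """Toy reduced-BDD builder: a unique table guarantees each distinct node exists once."""
    def __init__(self):
        self._unique = {}            # (var, low, high) -> node id
        self._nodes = [None, None]   # ids 0 and 1 are reserved for the terminals false/true

    def node(self, var: int, low: int, high: int) -> int:
        if low == high:              # both branches agree: the test on `var` is redundant
            return low
        key = (var, low, high)
        if key not in self._unique:  # build each structurally distinct node only once
            self._unique[key] = len(self._nodes)
            self._nodes.append(key)
        return self._unique[key]

bdd = BDD()
a = bdd.node(0, 0, 1)                # decision node for variable x0
b = bdd.node(0, 0, 1)                # identical structure is shared, not duplicated
print(a == b, len(bdd._nodes))       # True 3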


Authors


Name                   H-index  Papers  Citations
Yoshua Bengio          202      1033    420313
Scott Shenker          150      454     118017
Paul Shala Henry       137      318     35971
Peter Stone            130      1229    79713
Yann LeCun             121      369     171211
Louis E. Brus          113      347     63052
Jennifer Rexford       102      394     45277
Andreas F. Molisch     96       777     47530
Vern Paxson            93       267     48382
Lorrie Faith Cranor    92       326     28728
Ward Whitt             89       424     29938
Lawrence R. Rabiner    88       378     70445
Thomas E. Graedel      86       348     27860
William W. Cohen       85       384     31495
Michael K. Reiter      84       380     30267
Network Information
Related Institutions (5)
Microsoft
86.9K papers, 4.1M citations

94% related

Google
39.8K papers, 2.1M citations

91% related

Hewlett-Packard
59.8K papers, 1.4M citations

89% related

Bell Labs
59.8K papers, 3.1M citations

88% related

Performance Metrics
No. of papers from the Institution in previous years
Year  Papers
2022  5
2021  33
2020  69
2019  71
2018  100
2017  91