Institution

AT&T Labs

Company
About: AT&T Labs is a company known for its research contributions in the topics of Network packet and The Internet. The organization has 1879 authors who have published 5595 publications receiving 483151 citations.


Papers
Journal ArticleDOI
TL;DR: For the Internet, an improved understanding of its physical infrastructure is possible by viewing the physical connectivity as an annotated graph that delivers raw connectivity and bandwidth to the upper layers in the TCP/IP protocol stack, subject to practical constraints and economic considerations.
Abstract: Building on a recent effort that combines a first-principles approach to modeling router-level connectivity with a more pragmatic use of statistics and graph theory, we show in this paper that for the Internet, an improved understanding of its physical infrastructure is possible by viewing the physical connectivity as an annotated graph that delivers raw connectivity and bandwidth to the upper layers in the TCP/IP protocol stack, subject to practical constraints (e.g., router technology) and economic considerations (e.g., link costs). More importantly, by relying on data from Abilene, a Tier-1 ISP, and the Rocketfuel project, we provide empirical evidence in support of the proposed approach and its consistency with networking reality. To illustrate its utility, we: 1) show that our approach provides insight into the origin of high variability in measured or inferred router-level maps; 2) demonstrate that it easily accommodates the incorporation of additional objectives of network design (e.g., robustness to router failure); and 3) discuss how it complements ongoing community efforts to reverse-engineer the Internet.
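
To make the abstract's annotated-graph view concrete, here is a minimal sketch; the router names, per-link bandwidths, and capacity figure are invented for illustration and are not data from the paper:

```python
# Physical connectivity as an annotated graph: nodes are routers, edges carry
# bandwidth annotations, and router technology caps the total bandwidth any
# one router can terminate. All names and numbers below are hypothetical.
import networkx as nx

g = nx.Graph()
g.add_edge("access1", "edge1", bandwidth=1)    # Gb/s
g.add_edge("access2", "edge1", bandwidth=1)
g.add_edge("edge1", "core1", bandwidth=10)
g.add_edge("core1", "core2", bandwidth=40)

ROUTER_CAPACITY = 60  # Gb/s; stand-in for the degree/bandwidth trade-off

for router in g.nodes:
    total = sum(attrs["bandwidth"] for _, _, attrs in g.edges(router, data=True))
    assert total <= ROUTER_CAPACITY, f"{router} exceeds its switching capacity"
    print(router, g.degree[router], total)
```

The assert mirrors the paper's point that router technology, not graph statistics alone, bounds which degree/bandwidth combinations are feasible at each node.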

190 citations

Proceedings ArticleDOI
20 May 2003
TL;DR: This paper adopts the widely used and established cosine similarity metric from the information retrieval field in order to identify potential string matches across web sources and implements the join inside an RDBMS, using SQL queries, for scalability and robustness reasons.
Abstract: The integration of data produced and collected across autonomous, heterogeneous web services is an increasingly important and challenging problem. Due to the lack of global identifiers, the same entity (e.g., a product) might have different textual representations across databases. Textual data is also often noisy because of transcription errors, incomplete information, and lack of standard formats. A fundamental task during data integration is matching of strings that refer to the same entity. In this paper, we adopt the widely used and established cosine similarity metric from the information retrieval field in order to identify potential string matches across web sources. We then use this similarity metric to characterize this key aspect of data integration as a join between relations on textual attributes, where the similarity of matches exceeds a specified threshold. Computing an exact answer to the text join can be expensive. For query processing efficiency, we propose a sampling-based join approximation strategy for execution in a standard, unmodified relational database management system (RDBMS), since more and more web sites are powered by RDBMSs with a web-based front end. We implement the join inside an RDBMS, using SQL queries, for scalability and robustness reasons. Finally, we present a detailed performance evaluation of an implementation of our algorithm within a commercial RDBMS, using real-life data sets. Our experimental results demonstrate the efficiency and accuracy of our techniques.
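
A minimal in-memory sketch of the tf-idf cosine-similarity match the abstract builds on; the strings, the token-level weighting, and the 0.15 threshold are illustrative assumptions, and the paper's actual contribution, a sampling-based approximation of this join inside an unmodified RDBMS via SQL, is not reproduced here:

```python
# Threshold join on cosine similarity of tf-idf vectors over word tokens.
import math
from collections import Counter

def tfidf_vectors(strings):
    """Token-level tf-idf vectors for a small corpus of strings."""
    docs = [Counter(s.lower().split()) for s in strings]
    n = len(docs)
    df = Counter(tok for doc in docs for tok in doc)   # document frequency
    idf = {tok: math.log(n / df[tok]) for tok in df}
    return [{tok: tf * idf[tok] for tok, tf in doc.items()} for doc in docs]

def cosine(u, v):
    dot = sum(w * v.get(tok, 0.0) for tok, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

r1 = ["ATT Labs Research", "Microsoft Corp"]          # hypothetical source 1
r2 = ["AT&T Labs Research", "Microsoft Corporation"]  # hypothetical source 2
vecs = tfidf_vectors(r1 + r2)
matches = [(a, b) for i, a in enumerate(r1) for j, b in enumerate(r2)
           if cosine(vecs[i], vecs[len(r1) + j]) > 0.15]
print(matches)  # both cross-source pairs match; unrelated pairs score 0
```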

190 citations

Book ChapterDOI
17 Aug 1997
TL;DR: Efficient techniques for three (or more) parties to jointly generate an RSA key, where each party holds a share of the private exponent that enables threshold decryption.
Abstract: We describe efficient techniques for three (or more) parties to jointly generate an RSA key. At the end of the protocol an RSA modulus N = pq is publicly known. None of the parties know the factorization of N. In addition a public encryption exponent is publicly known and each party holds a share of the private exponent that enables threshold decryption. Our protocols are efficient in computation and communication.
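
A toy sketch of the threshold-decryption property the abstract describes, with the private exponent additively shared among three parties. The key here is generated centrally with insecurely small parameters; the paper's protocols instead generate the key jointly, so no party ever learns the factorization of N:

```python
# Threshold RSA decryption with an additively shared private exponent
# d = d1 + d2 + d3. Toy parameters only; a real modulus is >= 2048 bits.
import random

p, q = 61, 53
N = p * q                    # public modulus
phi = (p - 1) * (q - 1)
e = 17                       # public encryption exponent
d = pow(e, -1, phi)          # private exponent (pow(e, -1, m): Python 3.8+)

# Additive sharing: each party holds one share of d.
d1 = random.randrange(d)
d2 = random.randrange(d - d1)
d3 = d - d1 - d2

m = 1234
c = pow(m, e, N)             # encryption under the public key

# Each party computes a partial decryption with only its own share;
# multiplying the partials yields c^(d1+d2+d3) = c^d = m (mod N).
partials = [pow(c, di, N) for di in (d1, d2, d3)]
assert (partials[0] * partials[1] * partials[2]) % N == m
```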

189 citations

Journal ArticleDOI
TL;DR: In this article, single-carrier-based multi-level and multi-dimensional coding (ML-MDC) technologies have been demonstrated for spectrally efficient 100-Gb/s transmission.
Abstract: We review and study several single carrier based multi-level and multi-dimensional coding (ML-MDC) technologies recently demonstrated for spectrally-efficient 100-Gb/s transmission. These include 16-ary PDM-QPSK, 64-ary PDM-8PSK, 64-ary PDM-8QAM as well as 256-ary PDM-16 QAM. We show that high-speed QPSK, 8PSK, 8QAM, and 16QAM can all be generated using commercially available optical modulators using only binary electrical drive signals through novel synthesis methods, and that all of these modulation formats can be detected using a universal receiver front-end and digital coherent detection. We show that the constant modulus algorithm (CMA), which is highly effective for blind polarization recovery of PDM-QPSK and PDM-8PSK signals, is much less effective for PDM-8QAM and PDM-16 QAM. We then present a recently proposed, cascaded multi-modulus algorithm for these cases. In addition to the DSP algorithms used for constellation recovery, we also describe a DSP algorithm to improve the performance of a coherent receiver using single-ended photo-detection. The system impact of ASE noise, laser phase noise, narrowband optical filtering and fiber nonlinear effects has been investigated. For high-level modulation formats using full receiver-side digital compensation, it is shown that the requirement on LO phase noise is more stringent than the signal laser. We also show that RZ pulse shaping significantly improves filter- and fiber-nonlinear tolerance. Finally we present three high-spectral-efficiency and high-speed DWDM transmission experiments implementing these ML-MDC technologies.
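
A minimal single-tap sketch of the constant modulus algorithm (CMA) the abstract refers to, assuming QPSK and a scalar complex channel; a real coherent receiver runs a multi-tap 2x2 butterfly equalizer across polarizations, and the paper's cascaded multi-modulus variant for 8QAM/16QAM is not shown:

```python
# Blind CMA tap adaptation: drive |y|^2 toward the constant modulus R2.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
tx = np.exp(1j * (rng.integers(0, 4, n) * 2 + 1) * np.pi / 4)  # unit-modulus QPSK
noise = 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
rx = 0.7 * np.exp(1j * 0.3) * tx + noise                       # scalar channel

R2 = 1.0            # E[|a|^4]/E[|a|^2] = 1 for unit-modulus QPSK
mu = 1e-2           # step size
w = 1.0 + 0j        # single equalizer tap

for x in rx:
    y = w * x
    # Godard/CMA stochastic-gradient step on J = E[(|y|^2 - R2)^2].
    w -= mu * (abs(y) ** 2 - R2) * y * np.conj(x)

# CMA restores the modulus but leaves a phase ambiguity, which is why it
# must be followed by carrier-phase recovery.
print(abs(w) * 0.7)   # ~1.0: channel gain inverted
```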

189 citations

Proceedings Article
29 Nov 1999
TL;DR: A new method for multivariate density estimation is developed based on the Support Vector Method (SVM) solution of inverse ill-posed problems that compared favorably to both Parzen's method and the Gaussian Mixture Model method.
Abstract: A new method for multivariate density estimation is developed based on the Support Vector Method (SVM) solution of inverse ill-posed problems. The solution has the form of a mixture of densities. This method with Gaussian kernels compared favorably to both Parzen's method and the Gaussian Mixture Model method. For synthetic data we achieve more accurate estimates for densities of 2, 6, 12, and 40 dimensions.
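
For reference, a minimal sketch of the Parzen-window baseline the abstract compares against, using Gaussian kernels with an assumed fixed bandwidth h; the SVM-based estimator itself solves a regularized inverse problem and is not reproduced here:

```python
# Parzen-window density estimate: the average of Gaussian kernels
# centered at each training sample.
import numpy as np

def parzen_density(x_query, samples, h):
    d = samples.shape[1]
    sq = np.sum((x_query[None, :] - samples) ** 2, axis=1) / (2 * h ** 2)
    return np.mean(np.exp(-sq)) / (2 * np.pi * h ** 2) ** (d / 2)

rng = np.random.default_rng(1)
samples = rng.standard_normal((500, 2))            # 2-D standard normal data
# Estimate at the origin; the true density there is 1/(2*pi) ~= 0.159,
# and the kernel width smooths the estimate toward ~0.13.
print(parzen_density(np.zeros(2), samples, h=0.5))
```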

188 citations


Authors

Showing all 1881 results

Name | H-index | Papers | Citations
Yoshua Bengio | 202 | 1033 | 420313
Scott Shenker | 150 | 454 | 118017
Paul Shala Henry | 137 | 318 | 35971
Peter Stone | 130 | 1229 | 79713
Yann LeCun | 121 | 369 | 171211
Louis E. Brus | 113 | 347 | 63052
Jennifer Rexford | 102 | 394 | 45277
Andreas F. Molisch | 96 | 777 | 47530
Vern Paxson | 93 | 267 | 48382
Lorrie Faith Cranor | 92 | 326 | 28728
Ward Whitt | 89 | 424 | 29938
Lawrence R. Rabiner | 88 | 378 | 70445
Thomas E. Graedel | 86 | 348 | 27860
William W. Cohen | 85 | 384 | 31495
Michael K. Reiter | 84 | 380 | 30267
Network Information
Related Institutions (5)
Microsoft: 86.9K papers, 4.1M citations (94% related)
Google: 39.8K papers, 2.1M citations (91% related)
Hewlett-Packard: 59.8K papers, 1.4M citations (89% related)
Bell Labs: 59.8K papers, 3.1M citations (88% related)
Performance Metrics

No. of papers from the Institution in previous years

Year | Papers
2022 | 5
2021 | 33
2020 | 69
2019 | 71
2018 | 100
2017 | 91