Institution

AT&T Labs

Company
About: AT&T Labs is a company known for its research contributions in the topics of Network packet & The Internet. The organization has 1879 authors who have published 5595 publications receiving 483151 citations.


Papers
Journal ArticleDOI
31 Oct 2000
TL;DR: In this paper, the authors present mmdump, a tool that parses messages from RTSP, H.323 and similar multimedia session control protocols to set up and tear down packet filters as needed to gather traces of multimedia sessions.
Abstract: Internet multimedia traffic is increasing as applications like streaming media and packet telephony grow in popularity. It is important to monitor the volume and characteristics of this traffic, particularly because its behavior in the face of network congestion differs from that of the currently dominant TCP traffic. To monitor traffic on a high-speed link for extended periods, it is not practical to blindly capture all packets that traverse the link. We present mmdump, a tool that parses messages from RTSP, H.323 and similar multimedia session control protocols to set up and tear down packet filters as needed to gather traces of multimedia sessions. Unlike tcpdump, dynamic packet filters are necessary because these protocols dynamically negotiate TCP and UDP port numbers to carry the media content. Our tool captures only packets of interest for optional storage and further analysis, thus greatly reducing resource requirements. This paper presents the design and implementation of mmdump and demonstrates its utility in monitoring live RTSP and H.323 traffic on a commercial IP network. The preliminary results obtained from these measurements are presented.
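To make the dynamic packet-filter idea concrete, here is a minimal Python sketch in the spirit of mmdump (not the actual tool): it scans RTSP control messages for the negotiated client_port range and maintains the set of UDP ports whose packets should be captured. The helper names, message text, and ports are invented for illustration; only the client_port parameter of RTSP's Transport header is a real protocol detail.

```python
# Toy illustration of the dynamic packet-filter idea (not the actual mmdump code).
import re

active_filter_ports = set()   # UDP ports whose packets we currently want to capture

def on_rtsp_message(msg: str) -> None:
    """Update the filter set from one RTSP control message."""
    # A SETUP response negotiates media ports, e.g.
    # "Transport: RTP/AVP;unicast;client_port=3456-3457"
    m = re.search(r"client_port=(\d+)-(\d+)", msg)
    if m:
        lo, hi = int(m.group(1)), int(m.group(2))
        active_filter_ports.update(range(lo, hi + 1))
    if msg.startswith("TEARDOWN"):
        # Simplification: a real monitor would remove only this session's ports.
        active_filter_ports.clear()

def should_capture(udp_port: int) -> bool:
    """Capture only packets on ports negotiated by a monitored session."""
    return udp_port in active_filter_ports

on_rtsp_message("RTSP/1.0 200 OK\r\nTransport: RTP/AVP;unicast;client_port=3456-3457\r\n")
print(should_capture(3456), should_capture(5004))  # -> True False
```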

95 citations

Proceedings ArticleDOI
26 Mar 2012
TL;DR: This work proposes a general framework that releases a compact summary of the noisy data, computed directly from the input data without materializing the vast noisy data, and shows that this is a highly practical solution.
Abstract: Differential privacy is fast becoming the method of choice for releasing data under strong privacy guarantees. A standard mechanism is to add noise to the counts in contingency tables derived from the dataset. However, when the dataset is sparse in its underlying domain, this vastly increases the size of the published data, to the point of making the mechanism infeasible. We propose a general framework to overcome this problem. Our approach releases a compact summary of the noisy data with the same privacy guarantee and with similar utility. Our main result is an efficient method for computing the summary directly from the input data, without materializing the vast noisy data. We instantiate this general framework for several summarization methods. Our experiments show that this is a highly practical solution: The summaries are up to 1000 times smaller, and can be computed in less than 1% of the time compared to standard methods. Finally, our framework works with various data transformations, such as wavelets or sketches.
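As a concrete illustration of the baseline this work starts from, the sketch below applies the standard mechanism (Laplace noise on contingency-table counts) and then keeps only the noisy counts above a threshold as a compact released summary. It materializes the tiny example domain explicitly, which is exactly what the paper's method avoids for large sparse domains; the data, epsilon, and threshold are made up.

```python
# Toy version of the standard mechanism: Laplace noise on the counts of a
# contingency table, then a compact summary (noisy counts above a threshold).
import random
from collections import Counter

def laplace_noise(scale: float) -> float:
    """Laplace(0, scale) sample, as the difference of two exponentials of mean `scale`."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def noisy_summary(records, domain, epsilon=1.0, threshold=2.0):
    """Per-cell count + Laplace(1/epsilon) noise (a count has sensitivity 1),
    releasing only cells whose noisy count exceeds the threshold."""
    counts = Counter(records)
    scale = 1.0 / epsilon
    noisy = {cell: counts[cell] + laplace_noise(scale) for cell in domain}
    return {cell: v for cell, v in noisy.items() if v > threshold}

# A sparse domain of (age band, zip) cells, most of them empty.
domain = [(a, z) for a in ("18-25", "26-40", "41-65") for z in ("07901", "07920")]
records = [("26-40", "07901")] * 5 + [("18-25", "07920")] * 3
print(noisy_summary(records, domain))
```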

95 citations

Journal ArticleDOI
26 Jun 2006
TL;DR: This work designs novel inference techniques that, by statistically correlating SNMP link loads and sampled NetFlow records, allow for much more accurate estimation of traffic matrices than obtainable from either information source alone, even when sampled NetFlow records are available at only a subset of ingress.
Abstract: Estimation of traffic matrices, which provide critical input for network capacity planning and traffic engineering, has recently been recognized as an important research problem. Most of the previous approaches infer the traffic matrix from either SNMP link loads or sampled NetFlow records. In this work, we design novel inference techniques that, by statistically correlating SNMP link loads and sampled NetFlow records, allow for much more accurate estimation of traffic matrices than obtainable from either information source alone, even when sampled NetFlow records are available at only a subset of ingress. Our techniques are practically important and useful since both SNMP and NetFlow are now widely supported by vendors and deployed in most of the operational IP networks. More importantly, this research leads us to a new insight that SNMP link loads and sampled NetFlow records can serve as "error correction codes" to each other. This insight helps us to solve a challenging open problem in traffic matrix estimation: how to deal with dirty data (SNMP and NetFlow measurement errors due to hardware/software/transmission problems)? We design techniques that, by comparing notes between the above two information sources, identify and remove dirty data, and therefore allow for accurate estimation of the traffic matrices with the cleaned data. We conducted experiments on real measurement data obtained from a large tier-1 ISP backbone network. We show that, when full deployment of NetFlow is not available, our algorithm can improve estimation accuracy significantly even with a small fraction of NetFlow data. More importantly, we show that dirty data can contaminate a traffic matrix, and that identifying and removing them can reduce errors in traffic matrix estimation by up to an order of magnitude. Routing changes are another key factor that affects estimation accuracy. We show that, by using them as a priori information, the traffic matrices can be estimated much more accurately than when routing changes are omitted. To the best of our knowledge, this work is the first to offer a comprehensive solution that fully takes advantage of multiple readily available data sources. Our results provide valuable insights on the effectiveness of combining flow measurement and link load measurement.
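A toy illustration of how the two data sources can be combined (not the paper's actual algorithm): treat SNMP link loads as linear constraints over origin-destination (OD) flows and sampled NetFlow as direct, noisy observations of some of those flows, then solve a single stacked least-squares problem. The routing matrix, loads, and flow values below are invented.

```python
# Toy combination of SNMP link loads and sampled NetFlow via stacked least squares.
import numpy as np

# 2 links, 3 OD flows: SNMP view is link load = routing matrix @ OD traffic.
A = np.array([[1.0, 1.0, 0.0],    # link 1 carries OD flows 0 and 1
              [0.0, 1.0, 1.0]])   # link 2 carries OD flows 1 and 2
y = np.array([70.0, 90.0])        # observed SNMP link loads (Mbps)

# Sampled NetFlow gives direct (noisy, scaled-up) estimates for OD flows 0 and 1 only.
netflow_idx = [0, 1]
netflow_est = np.array([18.0, 52.0])
w = 1.0                           # relative weight given to the NetFlow observations

# Stack both information sources into a single linear system and solve jointly.
S = np.zeros((len(netflow_idx), A.shape[1]))
S[np.arange(len(netflow_idx)), netflow_idx] = 1.0
M = np.vstack([A, w * S])
b = np.concatenate([y, w * netflow_est])
x, *_ = np.linalg.lstsq(M, b, rcond=None)
x = np.clip(x, 0.0, None)         # traffic volumes cannot be negative
print("estimated OD flows:", x)
```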

95 citations

Proceedings ArticleDOI
06 Jun 2010
TL;DR: This paper presents communication-efficient protocols for sampling (both with and without replacement) from k distributed streams, and shows that they use minimal or near minimal time to process each new item, and space to operate.
Abstract: A fundamental problem in data management is to draw a sample of a large data set, for approximate query answering, selectivity estimation, and query planning. With large, streaming data sets, this problem becomes particularly difficult when the data is shared across multiple distributed sites. The challenge is to ensure that a sample is drawn uniformly across the union of the data while minimizing the communication needed to run the protocol and track parameters of the evolving data. At the same time, it is also necessary to make the protocol lightweight, by keeping the space and time costs low for each participant. In this paper, we present communication-efficient protocols for sampling (both with and without replacement) from k distributed streams. These apply to the case when we want a sample from the full streams, and to the sliding window cases of only the W most recent items, or arrivals within the last w time units. We show that our protocols are optimal, not just in terms of the communication used, but also that they use minimal or near minimal (up to logarithmic factors) time to process each new item, and space to operate.
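The sketch below shows one standard way such a protocol can be made communication-efficient (a generic min-priority scheme, not necessarily the exact protocol of this paper): each item receives an independent random priority, the coordinator keeps the s smallest-priority items, which form a uniform sample without replacement, and each site only sends items whose priority beats the last threshold it saw. The stream contents are invented.

```python
# Generic min-priority scheme for distributed sampling without replacement.
import heapq
import random

class Coordinator:
    def __init__(self, sample_size: int):
        self.s = sample_size
        self.heap = []                         # stores (-priority, item); top = largest kept priority

    def threshold(self) -> float:
        """Priority an item must beat to enter the current sample."""
        return -self.heap[0][0] if len(self.heap) >= self.s else 1.0

    def offer(self, priority: float, item) -> float:
        """A site offers an item; returns the (possibly updated) threshold."""
        heapq.heappush(self.heap, (-priority, item))
        if len(self.heap) > self.s:
            heapq.heappop(self.heap)           # evict the item with the largest priority
        return self.threshold()

def run_sites(coord, streams):
    messages = 0
    cached = {site: 1.0 for site in streams}   # each site caches the last threshold it saw
    for site, stream in streams.items():
        for item in stream:
            p = random.random()
            if p < cached[site]:               # only communicate items that could be kept
                cached[site] = coord.offer(p, item)
                messages += 1
    return messages

coord = Coordinator(sample_size=5)
streams = {f"site{i}": [f"site{i}-item{j}" for j in range(1000)] for i in range(3)}
msgs = run_sites(coord, streams)
print("sample:", sorted(item for _, item in coord.heap))
print("messages sent:", msgs, "of", 3 * 1000, "items")
```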

95 citations

Journal ArticleDOI
TL;DR: This work considers the setting of a network providing differentiated services and analyzes and compares different queue policies for this problem using the competitive analysis approach, where the benefit of the online policy is compared to the benefit of an optimal offline policy.
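As a generic example of the kind of online buffer-management policy studied in such competitive analyses (not the specific policies of this paper), the sketch below implements a greedy policy for a bounded buffer of valued packets and reports its benefit next to a trivial upper bound on the offline optimum. The arrival pattern and packet values are made up.

```python
# Generic greedy buffer-management policy: bounded buffer of valued packets,
# evict the cheapest packet when full, transmit one packet per time slot.
import random

def greedy_policy(arrivals, buffer_size):
    """arrivals: per-slot lists of packet values; returns total transmitted value (benefit)."""
    buffer, benefit = [], 0.0
    for slot_packets in arrivals:
        for value in slot_packets:
            if len(buffer) < buffer_size:
                buffer.append(value)
            elif value > min(buffer):
                buffer.remove(min(buffer))     # evict the cheapest packet to make room
                buffer.append(value)
        if buffer:
            best = max(buffer)                 # transmit the most valuable packet this slot
            buffer.remove(best)
            benefit += best
    return benefit + sum(buffer)               # remaining packets drain after the last slot

random.seed(0)
arrivals = [[random.choice([1.0, 4.0]) for _ in range(random.randint(0, 3))]
            for _ in range(20)]
online = greedy_policy(arrivals, buffer_size=2)
offline_upper_bound = sum(v for slot in arrivals for v in slot)   # no policy can exceed this
print("online benefit:", online, " trivial offline upper bound:", offline_upper_bound)
```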

95 citations


Authors

Showing all 1881 results

Name                 H-index  Papers  Citations
Yoshua Bengio            202    1033     420313
Scott Shenker            150     454     118017
Paul Shala Henry         137     318      35971
Peter Stone              130    1229      79713
Yann LeCun               121     369     171211
Louis E. Brus            113     347      63052
Jennifer Rexford         102     394      45277
Andreas F. Molisch        96     777      47530
Vern Paxson               93     267      48382
Lorrie Faith Cranor       92     326      28728
Ward Whitt                89     424      29938
Lawrence R. Rabiner       88     378      70445
Thomas E. Graedel         86     348      27860
William W. Cohen          85     384      31495
Michael K. Reiter         84     380      30267
Network Information
Related Institutions (5)
Microsoft
86.9K papers, 4.1M citations

94% related

Google
39.8K papers, 2.1M citations

91% related

Hewlett-Packard
59.8K papers, 1.4M citations

89% related

Bell Labs
59.8K papers, 3.1M citations

88% related

Performance Metrics
No. of papers from the Institution in previous years
Year  Papers
2022       5
2021      33
2020      69
2019      71
2018     100
2017      91