Institution

AT&T Labs

Company
About: AT&T Labs is a company known for research contributions in the topics Network packet and The Internet. The organization has 1879 authors who have published 5595 publications receiving 483,151 citations.


Papers
Proceedings ArticleDOI
06 Nov 2002
TL;DR: A case study of the characteristics and dynamics of LSA traffic in a large enterprise network consisting of several hundred routers, distributed across tens of OSPF areas and connected by LANs and private lines.
Abstract: Open Shortest Path First (OSPF) is widely deployed in IP networks to manage intra-domain routing. OSPF is a link-state protocol, in which routers reliably flood "Link State Advertisements" (LSAs), enabling each router to build a consistent, global view of the routing topology. Reliable performance hinges on routing stability, yet the behavior of large operational OSPF networks is not well understood. In this paper, we provide a case study on the characteristics and dynamics of LSA traffic for a large enterprise network. This network consists of several hundred routers, distributed in tens of OSPF areas, and connected by LANs and private lines. For this network, we focus on LSA traffic and analyze: (a) the class of LSAs triggered by OSPF's soft-state refresh, (b) the class of LSAs triggered by events that change the status of the network, and (c) a class of "duplicate" LSAs received due to redundancy in OSPF's reliable LSA flooding mechanism. We derive the baseline rate of refresh-triggered LSAs automatically from network configuration information. We also investigate finer time scale statistical properties of this traffic, including burstiness, periodicity, and synchronization. We discuss root causes of event-triggered and duplicate LSA traffic, as well as steps identified to reduce this traffic (e.g., localizing a failing router or changing the OSPF configuration).

99 citations
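The refresh baseline described in the abstract can be derived from configuration alone: each LSA is re-originated once per refresh interval (1800 s by default in OSPF), so an area's expected refresh-LSA rate is simply its LSA count divided by that interval. A minimal sketch, with illustrative area names and LSA counts (not figures from the paper):

```python
# Baseline rate of refresh-triggered LSAs, derived from configuration.
# Each LSA is re-originated once per LSRefreshTime (1800 s by default),
# so an area holding N LSAs sees roughly N / 1800 refresh LSAs per second.

LS_REFRESH_TIME = 1800  # seconds, the OSPF default refresh interval

def baseline_refresh_rate(lsa_counts_per_area):
    """Map each OSPF area to its expected refresh-LSA rate (LSAs/sec)."""
    return {area: n / LS_REFRESH_TIME for area, n in lsa_counts_per_area.items()}

# Illustrative inputs: area IDs and per-area LSA counts are made up.
rates = baseline_refresh_rate({"0.0.0.0": 3600, "0.0.0.1": 900})
```

Event-triggered and duplicate LSAs then show up as traffic above this configuration-derived baseline.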

Journal ArticleDOI
01 Jan 2004
TL;DR: This work argues that migrating an existing network to automated configuration is a rich and challenging research problem rooted in data analysis and in the modeling of network protocols and operational practices, and proposes a novel, bottom-up approach that proceeds in three phases.
Abstract: Human error in configuring routers undermines attempts to provide reliable, predictable end-to-end performance on IP networks. Manual configuration, while expensive and error-prone, is the dominant mode of operation, especially for large enterprise networks. These networks often lack the basic building blocks---an accurate equipment inventory, a debugged initial configuration, and a specification of local configuration policies---to support the holy grail of automation. We argue that migrating an existing network to automated configuration is a rich and challenging research problem rooted in data analysis and in the modeling of network protocols and operational practices. We propose a novel, bottom-up approach that proceeds in three phases: (i) analysis of configuration data to summarize the existing network state and uncover configuration problems; (ii) data mining to identify the network's local configuration policies and violations of these policies; and ultimately (iii) bootstrapping of a database to drive future configuration changes. The first stage reduces the number of errors, the second normalizes the local policies, and the third prevents new errors and reduces the manpower needed to configure the network. We describe the architecture of our EDGE tool for steps (i) and (ii), and present some examples from our experiences applying the tool to several large enterprise networks.

99 citations
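The policy-mining idea of phase (ii) can be sketched as a majority-vote check: infer the dominant value of a configuration attribute across devices and flag the deviants as likely policy violations. This is a toy illustration, not the EDGE tool itself; the function name and config data are invented:

```python
from collections import Counter

def find_policy_violations(configs, attribute):
    """Infer the majority value of an attribute across devices and flag
    devices that deviate from it -- a toy version of mining local
    configuration policies and their violations."""
    values = Counter(cfg[attribute] for cfg in configs.values() if attribute in cfg)
    if not values:
        return None, []
    policy, _ = values.most_common(1)[0]
    violators = [dev for dev, cfg in configs.items() if cfg.get(attribute) != policy]
    return policy, violators

# Illustrative router configs: three agree on the MTU, one deviates.
configs = {
    "r1": {"mtu": 1500}, "r2": {"mtu": 1500},
    "r3": {"mtu": 9000}, "r4": {"mtu": 1500},
}
policy, bad = find_policy_violations(configs, "mtu")
```

A real tool must of course distinguish intentional exceptions from errors, which is why the paper treats this as a data-analysis problem rather than a simple vote.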

Proceedings ArticleDOI
04 Nov 2009
TL;DR: An in-depth study on several intrinsic characteristics of IPTV user activities by analyzing the real data collected from an operational nation-wide IPTV system, and a series of models for capturing both the probability distribution and time-dynamics of user activities are developed.
Abstract: Internet Protocol Television (IPTV) has emerged as a new delivery method for TV. In contrast with native broadcast in traditional cable and satellite TV systems, video streams in IPTV are encoded in IP packets and distributed using IP unicast and multicast. This new architecture has been strategically embraced by ISPs across the globe, recognizing the opportunity for new services and its potential toward a more interactive style of TV watching experience in the future. Since user activities such as channel switches in IPTV impose workload beyond the local TV or set-top box (different from broadcast TV systems), it becomes essential to characterize and model the aggregate user activities in an IPTV network to support various system design and performance evaluation functions such as network capacity planning. In this work, we perform an in-depth study on several intrinsic characteristics of IPTV user activities by analyzing real data collected from an operational nation-wide IPTV system. We further generalize the findings and develop a series of models capturing both the probability distribution and time-dynamics of user activities. We then combine these models to design an IPTV user activity workload generation tool called SIMULWATCH, which takes a small number of input parameters and generates synthetic workload traces that mimic a set of real users watching IPTV. We validate all the models and the prototype of SIMULWATCH using the real traces. In particular, we show that SIMULWATCH can estimate the unicast and multicast traffic accurately, proving itself a useful tool for driving performance studies of IPTV systems.

99 citations
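A workload generator in this spirit can be sketched by pairing a heavy-tailed channel-popularity law with exponential dwell times between switches. The Zipf exponent and mean dwell time below are illustrative placeholders, not parameters fitted from the paper's traces:

```python
import random

def generate_watch_trace(n_events, n_channels, zipf_s=1.2, mean_dwell=120.0, seed=7):
    """Generate (time, channel) channel-switch events: channel choice follows
    a Zipf-like popularity law, and dwell times between switches are
    exponentially distributed. All parameter values are illustrative."""
    rng = random.Random(seed)
    # Zipf-like weights: rank r gets weight 1 / r^s.
    weights = [1.0 / (rank ** zipf_s) for rank in range(1, n_channels + 1)]
    t, trace = 0.0, []
    for _ in range(n_events):
        ch = rng.choices(range(n_channels), weights=weights)[0]
        trace.append((t, ch))
        t += rng.expovariate(1.0 / mean_dwell)  # seconds until next switch
    return trace

trace = generate_watch_trace(1000, 50)
```

Aggregating such per-user traces yields the synthetic multicast-join and unicast load that capacity planning needs; the paper's models additionally capture time-of-day dynamics that this sketch omits.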

Journal ArticleDOI
TL;DR: Evaluating a normalized cumulative sum of squares test statistic, using critical levels for the null hypothesis of white noise, yields approximately the same null-hypothesis rejection rates when applied to the discrete wavelet transform of samples from a fractionally differenced process.
Abstract: We consider the problem of testing for homogeneity of variance in a time series with long memory structure. We demonstrate that a test whose null hypothesis is designed to be white noise can, in fact, be applied, on a scale-by-scale basis, to the discrete wavelet transform of long memory processes. In particular, we show that evaluating a normalized cumulative sum of squares test statistic using critical levels for the null hypothesis of white noise yields approximately the same null hypothesis rejection rates when applied to the discrete wavelet transform of samples from a fractionally differenced process. The point at which the test statistic, using a nondecimated version of the discrete wavelet transform, achieves its maximum value can be used to estimate the time of the unknown variance change. We apply our proposed test statistic to five time series derived from the historical record of Nile River yearly minimum water levels covering 622–1922 A.D., each series exhibiting various degrees of serial correlation including long memory. In the longest subseries, spanning 622–1284 A.D., the test confirms an inhomogeneity of variance at short time scales and identifies the change point around 720 A.D., which coincides closely with the construction of a new device around 715 A.D. for measuring the Nile River. The test also detects a change in variance for a record of only 36 years.

99 citations
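The core statistic is a normalized cumulative sum of squares, D = max_k |P_k − k/N|, where P_k is the fraction of the series' total energy in the first k values; the maximizing index k estimates the change point. A minimal sketch, applied here to a toy series rather than to wavelet coefficients:

```python
def cusum_sq_stat(x):
    """Normalized cumulative sum of squares: D = max_k |P_k - k/N|, where
    P_k is the fraction of total energy in the first k values. Returns
    (D, k_hat); k_hat estimates the index of the variance change."""
    sq = [v * v for v in x]
    total = sum(sq)
    n = len(x)
    best_d, best_k, acc = 0.0, 0, 0.0
    for k, s in enumerate(sq, start=1):
        acc += s
        d = abs(acc / total - k / n)
        if d > best_d:
            best_d, best_k = d, k
    return best_d, best_k

# Toy series whose variance jumps at the midpoint: the statistic
# should peak at index 50.
series = [0.1 * ((-1) ** i) for i in range(50)] + [2.0 * ((-1) ** i) for i in range(50)]
D, k_hat = cusum_sq_stat(series)
```

In the paper the statistic is evaluated scale by scale on (nondecimated) wavelet coefficients, which is what lets the white-noise critical levels carry over to long memory processes.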

Book ChapterDOI
20 Aug 2002
TL;DR: This paper presents efficient algorithms to approximately answer these RNNA queries over data streams with error guarantees, provides analytical proofs of constant factor approximations for many RNNA queries, and complements the analyses with experimental evidence of the accuracy of the techniques.
Abstract: Reverse Nearest Neighbor (RNN) queries have been studied for finite, stored data sets and are of interest for decision support. However, in many applications such as fixed wireless telephony access and sensor-based highway traffic monitoring, the data arrives in a stream and cannot be stored. Exploratory analysis on this data stream can be formalized naturally using the notion of RNN aggregates (RNNAs), which involve the computation of some aggregate (such as COUNT or MAX DISTANCE) over the set of reverse nearest neighbor "clients" associated with each "server". In this paper, we introduce and investigate the problem of computing three types of RNNA queries over data streams of "client" locations: (i) Max-RNNA: given K servers, return the maximum RNNA over all clients to their closest servers; (ii) List-RNNA: given K servers, return a list of RNNAs over all clients to each of the K servers; and (iii) Opt-RNNA: find a subset of at most K servers for which their RNNAs are below a given threshold. While exact computation of these queries is not possible in the data stream model, we present efficient algorithms to approximately answer these RNNA queries over data streams with error guarantees. We provide analytical proofs of constant factor approximations for many RNNA queries, and complement our analyses with experimental evidence of the accuracy of our techniques.

99 citations
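For small, stored data the COUNT-type RNNA can be computed exactly by assigning each client to its nearest server; the paper's streaming algorithms approximate exactly this quantity under space constraints. A sketch with illustrative 1-D locations:

```python
def rnna_counts(servers, clients):
    """Exact COUNT reverse-nearest-neighbor aggregate: for each server,
    count the clients whose nearest server it is. Uses 1-D points for
    simplicity; the same definition applies in higher dimensions."""
    counts = [0] * len(servers)
    for c in clients:
        nearest = min(range(len(servers)), key=lambda i: abs(servers[i] - c))
        counts[nearest] += 1
    return counts

# Illustrative locations: clients 1, 2, 3 are closest to server 0;
# clients 9 and 11 are closest to server 1.
servers = [0.0, 10.0]
clients = [1.0, 2.0, 3.0, 9.0, 11.0]
counts = rnna_counts(servers, clients)
max_rnna = max(counts)  # the Max-RNNA under the COUNT aggregate
```

List-RNNA is the `counts` vector itself, and Max-RNNA is its maximum; in the streaming setting neither can be maintained exactly, which motivates the approximation algorithms.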


Authors


Name | H-index | Papers | Citations
Yoshua Bengio | 202 | 1033 | 420313
Scott Shenker | 150 | 454 | 118017
Paul Shala Henry | 137 | 318 | 35971
Peter Stone | 130 | 1229 | 79713
Yann LeCun | 121 | 369 | 171211
Louis E. Brus | 113 | 347 | 63052
Jennifer Rexford | 102 | 394 | 45277
Andreas F. Molisch | 96 | 777 | 47530
Vern Paxson | 93 | 267 | 48382
Lorrie Faith Cranor | 92 | 326 | 28728
Ward Whitt | 89 | 424 | 29938
Lawrence R. Rabiner | 88 | 378 | 70445
Thomas E. Graedel | 86 | 348 | 27860
William W. Cohen | 85 | 384 | 31495
Michael K. Reiter | 84 | 380 | 30267

Network Information
Network Information
Related Institutions (5)
Microsoft
86.9K papers, 4.1M citations

94% related

Google
39.8K papers, 2.1M citations

91% related

Hewlett-Packard
59.8K papers, 1.4M citations

89% related

Bell Labs
59.8K papers, 3.1M citations

88% related

Performance Metrics
No. of papers from the Institution in previous years
Year | Papers
2022 | 5
2021 | 33
2020 | 69
2019 | 71
2018 | 100
2017 | 91