Institution

AT&T Labs

Company
About: AT&T Labs is a company research organization based in the United States. It is known for research contributions in the topics: Network packet & The Internet. The organization has 1879 authors who have published 5595 publications receiving 483151 citations.


Papers
Proceedings ArticleDOI
24 May 2007
TL;DR: The Match operator is heuristic, making use of both static and behavioural properties of the models to improve the accuracy of matching, and the Merge operator preserves the hierarchical structure of the input models, and handles differences in behaviour through parameterization.
Abstract: Model Management addresses the problem of managing an evolving collection of models, by capturing the relationships between models and providing well-defined operators to manipulate them. In this paper, we describe two such operators for manipulating hierarchical Statecharts: Match, for finding correspondences between models, and Merge, for combining models with respect to known correspondences between them. Our Match operator is heuristic, making use of both static and behavioural properties of the models to improve the accuracy of matching. Our Merge operator preserves the hierarchical structure of the input models, and handles differences in behaviour through parameterization. In this way, we automatically construct merges that preserve the semantics of Statecharts models. We illustrate and evaluate our work by applying our operators to AT&T telecommunication features.
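The abstract above describes Match as a heuristic that weighs both static and behavioural properties of the models. As a rough illustration of that idea only (not the paper's actual operator), the following Python sketch scores candidate state correspondences by combining name similarity with overlap of outgoing transition labels; the state representation, weights, and threshold are assumptions made for this example.

```python
# Hypothetical sketch of heuristic state matching in the spirit of the abstract:
# combine a static score (name similarity) with a behavioural score (overlap of
# outgoing transition labels). Representation and weights are illustrative
# assumptions, not the paper's actual Match operator.
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def behaviour_similarity(out_a: set, out_b: set) -> float:
    if not out_a and not out_b:
        return 1.0
    return len(out_a & out_b) / len(out_a | out_b)  # Jaccard overlap of transition labels

def match_states(model_a, model_b, w_static=0.5, threshold=0.6):
    """Return candidate correspondences (state_a, state_b, score) above a threshold."""
    pairs = []
    for sa, out_a in model_a.items():
        for sb, out_b in model_b.items():
            score = (w_static * name_similarity(sa, sb)
                     + (1 - w_static) * behaviour_similarity(out_a, out_b))
            if score >= threshold:
                pairs.append((sa, sb, score))
    return sorted(pairs, key=lambda p: -p[2])

# Toy example: two call-handling features with slightly different state names.
feature_1 = {"Idle": {"dial"}, "Connecting": {"answer", "busy"}, "Talking": {"hangup"}}
feature_2 = {"idle": {"dial"}, "Ringing": {"answer", "busy"}, "InCall": {"hangup"}}
print(match_states(feature_1, feature_2))
```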

304 citations

Journal ArticleDOI
Jack Harriman Winters
TL;DR: The results show that transmit diversity with M transmit antennas provides a diversity gain within 0.1 dB of that with M receive antennas for any number of antennas, and that the same diversity benefit can be obtained at the remotes and base stations using multiple base-station antennas only.
Abstract: In this paper, we study the ability of transmit diversity to provide diversity benefit to a receiver in a Rayleigh fading environment. With transmit diversity, multiple antennas transmit delayed versions of a signal to create frequency-selective fading at a single antenna at the receiver, which uses equalization to obtain diversity gain against fading. We use Monte Carlo simulation to study transmit diversity for the case of independent Rayleigh fading from each transmit antenna to the receive antenna and maximum likelihood sequence estimation for equalization at the receiver. Our results show that transmit diversity with M transmit antennas provides a diversity gain within 0.1 dB of that with M receive antennas for any number of antennas. Thus, we can obtain the same diversity benefit at the remotes and base stations using multiple base-station antennas only.
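As a simplified illustration of the diversity benefit being compared (not the paper's delay-diversity transmitter or MLSE equalizer), the sketch below runs a small Monte Carlo of BPSK over i.i.d. Rayleigh fading with M-branch receive diversity and maximal-ratio combining; the antenna counts, SNR, and bit counts are arbitrary assumptions.

```python
# Simplified Monte Carlo sketch: BPSK bit-error rate with M-branch receive
# diversity (maximal-ratio combining) in i.i.d. Rayleigh fading. This models
# only the receive-diversity reference case the abstract compares against, not
# the paper's delay-diversity transmission with MLSE equalization.
import numpy as np

def mrc_ber(m_antennas, snr_db, n_bits=200_000, rng=None):
    rng = rng or np.random.default_rng(0)
    snr = 10 ** (snr_db / 10)
    bits = rng.integers(0, 2, n_bits)
    symbols = 1 - 2 * bits                                   # BPSK: 0 -> +1, 1 -> -1
    h = (rng.standard_normal((m_antennas, n_bits)) +
         1j * rng.standard_normal((m_antennas, n_bits))) / np.sqrt(2)      # Rayleigh taps
    noise = (rng.standard_normal((m_antennas, n_bits)) +
             1j * rng.standard_normal((m_antennas, n_bits))) / np.sqrt(2 * snr)
    r = h * symbols + noise
    combined = np.sum(np.conj(h) * r, axis=0).real            # maximal-ratio combining
    return np.mean((combined < 0) != (bits == 1))              # decision errors

for m in (1, 2, 4):
    print(f"M={m} antennas, 10 dB SNR: BER ~ {mrc_ber(m, 10.0):.4f}")
```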

303 citations

Proceedings ArticleDOI
01 Jul 2004
TL;DR: A negative binomial regression model using information from previous releases has been developed and used to predict the numbers of faults for a large industrial inventory system; the predictions were extremely accurate.
Abstract: The ability to predict which files in a large software system are most likely to contain the largest numbers of faults in the next release can be a very valuable asset. To accomplish this, a negative binomial regression model using information from previous releases has been developed and used to predict the numbers of faults for a large industrial inventory system. The files of each release were sorted in descending order based on the predicted number of faults and then the first 20% of the files were selected. This was done for each of fifteen consecutive releases, representing more than four years of field usage. The predictions were extremely accurate, correctly selecting files that contained between 71% and 92% of the faults, with the overall average being 83%. In addition, the same model was used on data for the same system's releases, but with all fault data prior to integration testing removed. The prediction was again very accurate, ranging from 71% to 93%, with the average being 84%. Predictions were made for a second system, and again the first 20% of files accounted for 83% of the identified faults. Finally, a highly simplified predictor was considered which correctly predicted 73% and 74% of the faults for the two systems.
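A hedged sketch of the general workflow the abstract describes, using synthetic data and assumed predictor names rather than the paper's data or exact model specification: fit a negative binomial regression on per-file measures from earlier releases, rank files by predicted fault count, take the top 20%, and measure the share of faults they contain.

```python
# Illustrative sketch only: negative binomial regression for per-file fault
# prediction, then selection of the top 20% of files by predicted faults.
# Predictor names and the synthetic data are assumptions, not the paper's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n_files = 200
train = pd.DataFrame({
    "loc_kloc": rng.gamma(2.0, 1.5, n_files),      # file size in KLOC (assumed predictor)
    "prior_faults": rng.poisson(1.0, n_files),     # faults observed in the previous release
    "is_new": rng.integers(0, 2, n_files),         # new vs. changed file flag
})
true_rate = np.exp(0.3 * train["loc_kloc"] + 0.5 * train["prior_faults"] - 1.0)
train["faults"] = rng.poisson(true_rate)

X = sm.add_constant(train[["loc_kloc", "prior_faults", "is_new"]])
model = sm.GLM(train["faults"], X, family=sm.families.NegativeBinomial()).fit()

# "Next release": reuses the same files here purely for illustration.
train["predicted"] = model.predict(X)
top20 = train.nlargest(int(0.2 * n_files), "predicted")
print("share of faults in top 20% of files:",
      top20["faults"].sum() / max(train["faults"].sum(), 1))
```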

303 citations

Journal ArticleDOI
TL;DR: The thesis is that Web users should have the ability to limit what information is revealed about them and to whom it is revealed.
Abstract: Web server log files are riddled with information about the users who visit them. Obviously, a server can record the content that each visitor accesses. In addition, however, the server can record the user's IP address (and thus often the user's Internet domain name, workplace, and/or approximate location), the type of computing platform she is using, the Web page that referred her to this site and, with some effort, the server that she visits next [7]. Even when the user's IP address changes between browsing sessions (for example, the IP address is assigned dynamically using DHCP [4]), a Web server can link multiple sessions by the same user by planting a unique cookie in the user's browser during the first browsing session, and retrieving that cookie in subsequent sessions. Moreover, virtually the same monitoring capabilities are available to other parties, for example, the user's ISP or local gateway administrator, who can observe all communication in which the user participates. The user profiling made possible by such monitoring capabilities is viewed as a useful tool by many businesses and consumers; it makes it possible for a Web server to personalize its content for its users, and for businesses to monitor employee activities. However, the negative ramifications for user privacy are considerable. While a lack of privacy has, in principle, always characterized Internet communication, never before has a type of Internet communication been logged so universally and revealed so much about the personal tastes of its users. Thus, our thesis is that Web users should have the ability to limit what information is revealed about them and to whom it is revealed.
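To make the session-linking mechanism concrete, here is a minimal, hypothetical sketch of a server that plants a unique cookie on the first visit and reads it back on later visits, letting it tie requests together even when the client's IP address changes; the handler, cookie name, and port are illustrative assumptions, not part of the article.

```python
# Toy illustration of cookie-based session linking as described in the abstract:
# mint a unique identifier on first visit, set it as a long-lived cookie, and
# log it alongside the IP address, referrer, and user agent on every request.
import uuid
from http import cookies
from http.server import BaseHTTPRequestHandler, HTTPServer

class TrackingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        jar = cookies.SimpleCookie(self.headers.get("Cookie", ""))
        visitor = jar["visitor_id"].value if "visitor_id" in jar else None
        if visitor is None:
            visitor = uuid.uuid4().hex          # first visit: mint a new identifier
        # The server can now log IP, referrer, user agent, and the stable ID together.
        print(self.client_address[0], self.headers.get("Referer"),
              self.headers.get("User-Agent"), visitor)
        self.send_response(200)
        self.send_header("Set-Cookie", f"visitor_id={visitor}; Max-Age=31536000")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello\n")

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), TrackingHandler).serve_forever()
```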

301 citations

Book ChapterDOI
01 Jan 2002
TL;DR: This paper is an annotated bibliography of the GRASP literature from 1989 to 2001, covering applications that range from scheduling and routing to drawing and turbine balancing.
Abstract: A greedy randomized adaptive search procedure (GRASP) is a metaheuristic for combinatorial optimization. It is a multi-start or iterative process, in which each GRASP iteration consists of two phases, a construction phase, in which a feasible solution is produced, and a local search phase, in which a local optimum in the neighborhood of the constructed solution is sought. Since 1989, numerous papers on the basic aspects of GRASP, as well as enhancements to the basic metaheuristic have appeared in the literature. GRASP has been applied to a wide range of combinatorial optimization problems, ranging from scheduling and routing to drawing and turbine balancing. This paper is an annotated bibliography of the GRASP literature from 1989 to 2001.
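To illustrate the two phases named in the abstract, the sketch below applies GRASP to a toy unweighted max-cut instance: a greedy randomized construction drawing from a restricted candidate list (RCL), followed by a flip-based local search. The problem choice, RCL rule, and parameters are assumptions for illustration, not any particular GRASP from the bibliography.

```python
# Hedged GRASP sketch for a toy problem (unweighted max-cut): each iteration
# runs a greedy randomized construction (phase 1) and a local search (phase 2),
# and the best solution found over all iterations is returned.
import random
from collections import defaultdict

def grasp_maxcut(edges, n_iters=50, alpha=0.3, seed=0):
    rng = random.Random(seed)
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    nodes = list(adj)

    def cut_value(side):
        return sum(1 for u, v in edges if side[u] != side[v])

    def construct():
        side = {}
        for v in rng.sample(nodes, len(nodes)):        # consider vertices in random order
            gains = [sum(1 for u in adj[v] if u in side and side[u] != s) for s in (0, 1)]
            best, worst = max(gains), min(gains)
            rcl = [s for s in (0, 1) if gains[s] >= best - alpha * (best - worst)]
            side[v] = rng.choice(rcl)                  # randomized greedy choice from the RCL
        return side

    def local_search(side):
        improved = True
        while improved:
            improved = False
            for v in nodes:
                same = sum(1 for u in adj[v] if side[u] == side[v])
                if same > len(adj[v]) - same:          # flipping v moves more edges into the cut
                    side[v] = 1 - side[v]
                    improved = True
        return side

    best_side, best_val = None, -1
    for _ in range(n_iters):
        side = local_search(construct())
        val = cut_value(side)
        if val > best_val:
            best_side, best_val = side, val
    return best_side, best_val

# Toy example: a 5-cycle plus a chord.
print(grasp_maxcut([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]))
```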

300 citations


Authors


Name | H-index | Papers | Citations
Yoshua Bengio | 202 | 1033 | 420313
Scott Shenker | 150 | 454 | 118017
Paul Shala Henry | 137 | 318 | 35971
Peter Stone | 130 | 1229 | 79713
Yann LeCun | 121 | 369 | 171211
Louis E. Brus | 113 | 347 | 63052
Jennifer Rexford | 102 | 394 | 45277
Andreas F. Molisch | 96 | 777 | 47530
Vern Paxson | 93 | 267 | 48382
Lorrie Faith Cranor | 92 | 326 | 28728
Ward Whitt | 89 | 424 | 29938
Lawrence R. Rabiner | 88 | 378 | 70445
Thomas E. Graedel | 86 | 348 | 27860
William W. Cohen | 85 | 384 | 31495
Michael K. Reiter | 84 | 380 | 30267
Network Information
Related Institutions (5)
Microsoft: 86.9K papers, 4.1M citations (94% related)
Google: 39.8K papers, 2.1M citations (91% related)
Hewlett-Packard: 59.8K papers, 1.4M citations (89% related)
Bell Labs: 59.8K papers, 3.1M citations (88% related)

Performance Metrics

No. of papers from the Institution in previous years:

Year | Papers
2022 | 5
2021 | 33
2020 | 69
2019 | 71
2018 | 100
2017 | 91