AT&T Labs

Company
About: AT&T Labs is a company known for research contributions in the topics of Network packet & The Internet. The organization has 1879 authors who have published 5595 publications receiving 483151 citations.


Papers
Journal ArticleDOI
TL;DR: In this article, it was shown that for any diagram D representing the unknot, there is a sequence of at most 2^(c1 n) Reidemeister moves that will convert it to a trivial knot diagram, where n is the number of crossings in D.
Abstract: There is a positive constant c1 such that for any diagram D representing the unknot, there is a sequence of at most 2^(c1 n) Reidemeister moves that will convert it to a trivial knot diagram, where n is the number of crossings in D. A similar result holds for elementary moves on a polygonal knot K embedded in the 1-skeleton of the interior of a compact, orientable, triangulated PL 3-manifold M. There is a positive constant c2 such that for each t ≥ 1, if M consists of t tetrahedra, and K is unknotted, then there is a sequence of at most 2^(c2 t) elementary moves in M which transforms K to a triangle contained inside one tetrahedron of M. We obtain explicit values for c1 and c2.
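Written out, the two upper bounds stated in the abstract are (using moves(·) as an informal shorthand for the minimum number of moves required, not notation from the paper):

\[
\operatorname{moves}(D) \le 2^{c_1 n}, \qquad \operatorname{moves}(K, M) \le 2^{c_2 t} \quad (t \ge 1),
\]

where n is the crossing number of the diagram D and t is the number of tetrahedra in the triangulation of M.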

97 citations

Journal ArticleDOI
R. Rajan, Dinesh C. Verma, Sanjay Damodar Kamat, E. Felstaine, S. Herzog
TL;DR: The article provides an overview of requirements for QoS policies, alternative policy architectures that can be deployed in a network, different protocols that could be used to exchange policy information, and exchange of policy information among different administrative domains.
Abstract: We examine the issues that arise in the definition, deployment, and management of policies related to QoS in an IP network. The article provides an overview of requirements for QoS policies, alternative policy architectures that can be deployed in a network, different protocols that can be used to exchange policy information, and exchange of policy information among different administrative domains. We discuss current issues being examined in IETF and other standards bodies, as well as issues explored in ongoing policy-related research at different universities and research laboratories.

97 citations

Book ChapterDOI
20 Aug 2002
TL;DR: This paper devise efficient algorithms that optimally determine when the recursive check can be eliminated, and when it can be simplified to just a local check on the element's attributes, without violating the access control policy.
Abstract: The rapid emergence of XML as a standard for data exchange over the Web has led to considerable interest in the problem of securing XML documents. In this context, query evaluation engines need to ensure that user queries only use and return XML data the user is allowed to access. These added access control checks can considerably increase query evaluation time. In this paper, we consider the problem of optimizing the secure evaluation of XML twig queries. We focus on the simple, but useful, multi-level access control model, where a security level can be either specified at an XML element, or inherited from its parent. For this model, secure query evaluation is possible by rewriting the query to use a recursive function that computes an element's security level. Based on security information in the DTD, we devise efficient algorithms that optimally determine when the recursive check can be eliminated, and when it can be simplified to just a local check on the element's attributes, without violating the access control policy. Finally, we experimentally evaluate the performance benefits of our techniques using a variety of XML data and queries.
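The inheritance scheme described above can be sketched as a recursive walk up the element tree: an element either carries an explicit security level or inherits its parent's. This is a minimal illustration only; the element names, integer levels, and helper functions are hypothetical and not taken from the paper, which works over DTDs and twig queries rather than an in-memory tree.

```python
# Minimal sketch of a multi-level access control check on an XML-like tree.
# An element's security level is either specified explicitly or inherited
# from its parent (the recursive check the paper's optimizations eliminate
# or reduce to a local attribute check).
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class Element:
    tag: str
    level: Optional[int] = None          # explicit security level, or None to inherit
    parent: Optional["Element"] = None
    children: List["Element"] = field(default_factory=list)

    def add(self, child: "Element") -> "Element":
        child.parent = self
        self.children.append(child)
        return child

def security_level(e: Element) -> int:
    """Recursive check: walk up the tree until an explicit level is found."""
    if e.level is not None:
        return e.level
    assert e.parent is not None, "the root must carry an explicit level"
    return security_level(e.parent)

def accessible(e: Element, user_level: int) -> bool:
    # A user may access an element whose level does not exceed their own.
    return security_level(e) <= user_level

# Usage: root labelled level 1, a child inheriting it, a restricted grandchild.
root = Element("doc", level=1)
sec = root.add(Element("section"))            # inherits level 1 from root
salary = sec.add(Element("salary", level=3))  # explicitly restricted
print(accessible(sec, 1), accessible(salary, 1))  # True False
```

The paper's contribution is deciding, from the DTD's security annotations, when this recursive ancestor walk is unnecessary (the level is statically known) or reducible to inspecting the element's own attributes.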

97 citations

Proceedings ArticleDOI
28 Apr 2010
TL;DR: Maranello is the first partial packet recovery design to be implemented in commonly available firmware; the authors compare it to alternative recovery protocols using a trace-driven simulation, and to 802.11 using a live implementation under various channel conditions.
Abstract: Partial packet recovery protocols attempt to repair corrupted packets instead of retransmitting them in their entirety. Recent approaches have used physical layer confidence estimates or additional error detection codes embedded in each transmission to identify corrupt bits, or have applied forward error correction to repair without such explicit knowledge. In contrast to these approaches, our goal is a practical design that simultaneously: (a) requires no extra bits in correct packets, (b) reduces recovery latency, except in rare instances, (c) remains compatible with existing 802.11 devices by obeying timing and backoff standards, and (d) can be incrementally deployed on widely available access points and wireless cards. In this paper, we design, implement, and evaluate Maranello, a novel partial packet recovery mechanism for 802.11. In Maranello, the receiver computes checksums over blocks in corrupt packets and bundles these checksums into a negative acknowledgment sent when the sender expects to receive an acknowledgment. The sender then retransmits only those blocks for which the checksum is incorrect, and repeats this partial retransmission until it receives an acknowledgment. Successful transmissions are not burdened by additional bits and the receiver need not infer which bits were corrupted. We implemented Maranello using OpenFWWF (open source firmware for Broadcom wireless cards) and deployed it in a small testbed. We compare Maranello to alternative recovery protocols using a trace-driven simulation and to 802.11 using a live implementation under various channel conditions. To our knowledge, Maranello is the first partial packet recovery design to be implemented in commonly available firmware.
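The block-checksum negotiation at the heart of the scheme can be sketched in a few lines. The block size and the use of CRC32 here are illustrative assumptions, not Maranello's actual parameters; the real mechanism runs in firmware against 802.11 timing.

```python
# Toy sketch of block-level partial packet recovery: the receiver checksums
# fixed-size blocks of a corrupt packet and sends them in a NACK; the sender
# resends only the blocks whose checksums disagree with the original payload.
import zlib

BLOCK = 64  # bytes per checksummed block (an assumed value)

def block_checksums(payload: bytes) -> list:
    return [zlib.crc32(payload[i:i + BLOCK]) for i in range(0, len(payload), BLOCK)]

def blocks_to_retransmit(sent: bytes, nack_checksums: list) -> list:
    """Sender side: compare the receiver's checksums (carried in the NACK)
    against checksums of the originally sent payload."""
    good = block_checksums(sent)
    return [i for i, (a, b) in enumerate(zip(good, nack_checksums)) if a != b]

# Usage: flip one byte landing in block 1; only that block needs resending.
sent = bytes(range(256)) * 2           # 512-byte packet -> 8 blocks
received = bytearray(sent)
received[70] ^= 0xFF                   # bit errors fall within block 1 (bytes 64-127)
nack = block_checksums(bytes(received))
print(blocks_to_retransmit(sent, nack))  # [1]
```

This illustrates why correct packets carry no overhead: checksums are only ever computed and transmitted for packets that fail the normal frame check.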

97 citations

Journal ArticleDOI
01 Jun 2000
TL;DR: The results show that the HTTP/1.1 protocol, particularly with pipelining, is indeed an improvement over existing practice, but that servers serving a small number of objects or closing a persistent connection without explicit notification can reduce or eliminate any performance improvement.
Abstract: Web performance impacts the popularity of a particular Web site or service as well as the load on the network, but there have been no publicly available end-to-end measurements that have focused on a large number of popular Web servers examining the components of delay or the effectiveness of the recent changes to the HTTP protocol. In this paper we report on an extensive study carried out from many client sites geographically distributed around the world to a collection of over 700 servers to which a majority of Web traffic is directed. Our results show that the HTTP/1.1 protocol, particularly with pipelining, is indeed an improvement over existing practice, but that servers serving a small number of objects or closing a persistent connection without explicit notification can reduce or eliminate any performance improvement. Similarly, use of caching and multi-server content distribution can also improve performance if done effectively.
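The protocol variants compared in the study differ mainly in how many round trips a page fetch costs. A back-of-the-envelope model makes the gap concrete; this is purely illustrative arithmetic (it ignores TCP slow start, parallel connections, and server think time) and is not the paper's measurement methodology.

```python
# Rough RTT count for fetching a page plus n embedded objects under the
# three HTTP configurations discussed above.
def fetch_rtts(n_objects: int, persistent: bool, pipelined: bool) -> int:
    if not persistent:
        # each fetch pays 1 RTT for the TCP handshake + 1 RTT for request/response
        return 2 * (1 + n_objects)
    if not pipelined:
        # one handshake, then one RTT per sequential request/response
        return 1 + (1 + n_objects)
    # one handshake, one RTT for the page, one batched RTT for all objects
    return 1 + 1 + 1

# Usage: a page with 10 embedded objects.
for persistent, pipelined in [(False, False), (True, False), (True, True)]:
    print(persistent, pipelined, fetch_rtts(10, persistent, pipelined))
```

The model also hints at the paper's caveat: if a server closes a persistent connection without notification, the client silently falls back toward the non-persistent cost, eroding the improvement.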

97 citations


Authors

Showing all 1881 results

Name                  H-index  Papers  Citations
Yoshua Bengio         202      1033    420313
Scott Shenker         150      454     118017
Paul Shala Henry      137      318     35971
Peter Stone           130      1229    79713
Yann LeCun            121      369     171211
Louis E. Brus         113      347     63052
Jennifer Rexford      102      394     45277
Andreas F. Molisch    96       777     47530
Vern Paxson           93       267     48382
Lorrie Faith Cranor   92       326     28728
Ward Whitt            89       424     29938
Lawrence R. Rabiner   88       378     70445
Thomas E. Graedel     86       348     27860
William W. Cohen      85       384     31495
Michael K. Reiter     84       380     30267
Network Information
Related Institutions (5)
Microsoft
86.9K papers, 4.1M citations

94% related

Google
39.8K papers, 2.1M citations

91% related

Hewlett-Packard
59.8K papers, 1.4M citations

89% related

Bell Labs
59.8K papers, 3.1M citations

88% related

Performance
Metrics
No. of papers from the Institution in previous years
Year  Papers
2022  5
2021  33
2020  69
2019  71
2018  100
2017  91