Papers
TL;DR: For any diagram D representing the unknot, there is a sequence of at most 2^(c1 n) Reidemeister moves that converts it to a trivial knot diagram, where n is the number of crossings in D.
Abstract: There is a positive constant c1 such that for any diagram D representing the unknot, there is a sequence of at most 2^(c1 n) Reidemeister moves that will convert it to a trivial knot diagram, where n is the number of crossings in D. A similar result holds for elementary moves on a polygonal knot K embedded in the 1-skeleton of the interior of a compact, orientable, triangulated PL 3-manifold M. There is a positive constant c2 such that for each t ≥ 1, if M consists of t tetrahedra, and K is unknotted, then there is a sequence of at most 2^(c2 t) elementary moves in M which transforms K to a triangle contained inside one tetrahedron of M. We obtain explicit values for c1 and c2.
97 citations
TL;DR: The article provides an overview of requirements for QoS policies, alternative policy architectures that can be deployed in a network, different protocols that could be used to exchange policy information, and exchange of policy information among different administrative domains.
Abstract: We examine the issues that arise in the definition, deployment, and management of policies related to QoS in an IP network. The article provides an overview of requirements for QoS policies, alternative policy architectures that can be deployed in a network, different protocols that can be used to exchange policy information, and exchange of policy information among different administrative domains. We discuss current issues being examined in IETF and other standards bodies, as well as issues explored in ongoing policy-related research at different universities and research laboratories.
97 citations
20 Aug 2002
TL;DR: This paper devises efficient algorithms that optimally determine when the recursive check can be eliminated, and when it can be simplified to just a local check on the element's attributes, without violating the access control policy.
Abstract: The rapid emergence of XML as a standard for data exchange over the Web has led to considerable interest in the problem of securing XML documents. In this context, query evaluation engines need to ensure that user queries only use and return XML data the user is allowed to access. These added access control checks can considerably increase query evaluation time. In this paper, we consider the problem of optimizing the secure evaluation of XML twig queries.
We focus on the simple, but useful, multi-level access control model, where a security level can be either specified at an XML element, or inherited from its parent. For this model, secure query evaluation is possible by rewriting the query to use a recursive function that computes an element's security level. Based on security information in the DTD, we devise efficient algorithms that optimally determine when the recursive check can be eliminated, and when it can be simplified to just a local check on the element's attributes, without violating the access control policy. Finally, we experimentally evaluate the performance benefits of our techniques using a variety of XML data and queries.
97 citations
28 Apr 2010
TL;DR: Maranello is the first partial packet recovery design to be implemented in commonly available firmware; the authors compare it to alternative recovery protocols using trace-driven simulation, and to 802.11 using a live implementation under various channel conditions.
Abstract: Partial packet recovery protocols attempt to repair corrupted packets instead of retransmitting them in their entirety. Recent approaches have used physical layer confidence estimates or additional error detection codes embedded in each transmission to identify corrupt bits, or have applied forward error correction to repair without such explicit knowledge. In contrast to these approaches, our goal is a practical design that simultaneously: (a) requires no extra bits in correct packets, (b) reduces recovery latency, except in rare instances, (c) remains compatible with existing 802.11 devices by obeying timing and backoff standards, and (d) can be incrementally deployed on widely available access points and wireless cards. In this paper, we design, implement, and evaluate Maranello, a novel partial packet recovery mechanism for 802.11. In Maranello, the receiver computes checksums over blocks in corrupt packets and bundles these checksums into a negative acknowledgment sent when the sender expects to receive an acknowledgment. The sender then retransmits only those blocks for which the checksum is incorrect, and repeats this partial retransmission until it receives an acknowledgment. Successful transmissions are not burdened by additional bits, and the receiver need not infer which bits were corrupted. We implemented Maranello using OpenFWWF (open source firmware for Broadcom wireless cards) and deployed it in a small testbed. We compare Maranello to alternative recovery protocols using a trace-driven simulation and to 802.11 using a live implementation under various channel conditions. To our knowledge, Maranello is the first partial packet recovery design to be implemented in commonly available firmware.
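The block-checksum repair idea in the abstract can be sketched in a few lines: the receiver checksums fixed-size blocks of the corrupt packet, and the sender resends only the blocks whose checksums disagree. This is a minimal sketch assuming CRC32 over 64-byte blocks; Maranello's actual block size, checksum function, and firmware-level NACK timing differ:

```python
import zlib

BLOCK = 64  # bytes per checksum block (illustrative choice, not Maranello's)

def block_checksums(data: bytes, block: int = BLOCK) -> list[int]:
    """Receiver side: checksum each fixed-size block of a (possibly corrupt) packet."""
    return [zlib.crc32(data[i:i + block]) for i in range(0, len(data), block)]

def repair(received: bytes, nack_checksums: list[int], original: bytes,
           block: int = BLOCK) -> bytes:
    """Sender side: retransmit only the blocks whose receiver checksum mismatches."""
    good = block_checksums(original, block)
    out = bytearray(received)
    for i, (sent, got) in enumerate(zip(good, nack_checksums)):
        if sent != got:  # corrupt block -> resend just this block
            out[i * block:(i + 1) * block] = original[i * block:(i + 1) * block]
    return bytes(out)

# Corrupt one byte; a single partial retransmission repairs the packet.
original = bytes(range(256))
corrupt = bytearray(original)
corrupt[70] ^= 0xFF
nack = block_checksums(bytes(corrupt))
repaired = repair(bytes(corrupt), nack, original)
```

Only one of the four blocks is resent here, which is the source of the latency and airtime savings the paper measures.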
97 citations
01 Jun 2000
TL;DR: The results show that the HTTP/1.1 protocol, particularly with pipelining, is indeed an improvement over existing practice, but that servers serving a small number of objects or closing a persistent connection without explicit notification can reduce or eliminate any performance improvement.
Abstract: Web performance impacts the popularity of a particular Web site or service as well as the load on the network, but there have been no publicly available end-to-end measurements that have focused on a large number of popular Web servers examining the components of delay or the effectiveness of the recent changes to the HTTP protocol. In this paper we report on an extensive study carried out from many client sites geographically distributed around the world to a collection of over 700 servers to which a majority of Web traffic is directed. Our results show that the HTTP/1.1 protocol, particularly with pipelining, is indeed an improvement over existing practice, but that servers serving a small number of objects or closing a persistent connection without explicit notification can reduce or eliminate any performance improvement. Similarly, use of caching and multi-server content distribution can also improve performance if done effectively.
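The persistent-connection behavior the study measures can be illustrated with Python's standard library: under HTTP/1.1, several objects are fetched over one TCP connection instead of reconnecting per request. This is a toy local demonstration, not the study's methodology:

```python
import http.client
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"       # enables persistent (keep-alive) connections
    def do_GET(self):
        body = f"hello from {self.path}".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))  # required for keep-alive
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):       # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)   # port 0 -> OS-assigned port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Both requests reuse the same TCP connection rather than reconnecting per object.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
bodies = []
for path in ("/a", "/b"):
    conn.request("GET", path)
    resp = conn.getresponse()
    bodies.append(resp.read().decode())
conn.close()
server.shutdown()
```

A server that closed the connection after the first object without notice would force the client back into a costly reconnect, which is exactly the failure mode the abstract flags.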
97 citations
Authors
Showing all 1881 results
Name | H-index | Papers | Citations |
---|---|---|---|
Yoshua Bengio | 202 | 1033 | 420313 |
Scott Shenker | 150 | 454 | 118017 |
Paul Shala Henry | 137 | 318 | 35971 |
Peter Stone | 130 | 1229 | 79713 |
Yann LeCun | 121 | 369 | 171211 |
Louis E. Brus | 113 | 347 | 63052 |
Jennifer Rexford | 102 | 394 | 45277 |
Andreas F. Molisch | 96 | 777 | 47530 |
Vern Paxson | 93 | 267 | 48382 |
Lorrie Faith Cranor | 92 | 326 | 28728 |
Ward Whitt | 89 | 424 | 29938 |
Lawrence R. Rabiner | 88 | 378 | 70445 |
Thomas E. Graedel | 86 | 348 | 27860 |
William W. Cohen | 85 | 384 | 31495 |
Michael K. Reiter | 84 | 380 | 30267 |