
Showing papers by "Thomas Anderson" published in 1999


Proceedings ArticleDOI
30 Aug 1999
TL;DR: A measurement-based study comparing the performance seen using the "default" path taken in the Internet with the potential performance available using some alternate path, finding that in 30-80% of the cases, there is an alternate path with significantly superior quality.
Abstract: The path taken by a packet traveling across the Internet depends on a large number of factors, including routing protocols and per-network routing policies. The impact of these factors on the end-to-end performance experienced by users is poorly understood. In this paper, we conduct a measurement-based study comparing the performance seen using the "default" path taken in the Internet with the potential performance available using some alternate path. Our study uses five distinct datasets containing measurements of "path quality", such as round-trip time, loss rate, and bandwidth, taken between pairs of geographically diverse Internet hosts. We construct the set of potential alternate paths by composing these measurements to form new synthetic paths. We find that in 30-80% of the cases, there is an alternate path with significantly superior quality. We argue that the overall result is robust and we explore two hypotheses for explaining it.
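
To make the composition step concrete, here is a minimal sketch (our illustration, not the paper's code) of how two measured segments A→B and B→C can be combined into a synthetic alternate path: round-trip times add, independent loss rates compound, and the bottleneck bandwidth is the minimum over the segments.

```python
# Sketch of composing two measured path segments into a synthetic
# alternate path, as described in the abstract. The helper name and
# example numbers are illustrative, not the paper's data.

def compose(seg_ab, seg_bc):
    """Compose per-segment metrics (rtt in ms, loss as a fraction,
    bandwidth in Mb/s) into metrics for the synthetic path A->B->C."""
    return {
        "rtt": seg_ab["rtt"] + seg_bc["rtt"],                     # latencies add
        "loss": 1 - (1 - seg_ab["loss"]) * (1 - seg_bc["loss"]),  # independent losses compound
        "bw": min(seg_ab["bw"], seg_bc["bw"]),                    # bottleneck bandwidth
    }

default_path = {"rtt": 120.0, "loss": 0.05, "bw": 1.5}
alternate = compose({"rtt": 30.0, "loss": 0.01, "bw": 10.0},
                    {"rtt": 40.0, "loss": 0.01, "bw": 5.0})

# Here the alternate wins on RTT (70 ms vs 120 ms) and loss (~2% vs 5%).
print(alternate)
```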

535 citations


Journal ArticleDOI
05 Oct 1999
TL;DR: This paper demonstrates that there are simple attacks that allow a misbehaving receiver to drive a standard TCP sender arbitrarily fast, without losing end-to-end reliability, and shows that it is possible to modify TCP to eliminate this undesirable behavior entirely.
Abstract: In this paper, we explore the operation of TCP congestion control when the receiver can misbehave, as might occur with a greedy Web client. We first demonstrate that there are simple attacks that allow a misbehaving receiver to drive a standard TCP sender arbitrarily fast, without losing end-to-end reliability. These attacks are widely applicable because they stem from the sender behavior specified in RFC 2581 rather than implementation bugs. We then show that it is possible to modify TCP to eliminate this undesirable behavior entirely, without requiring assumptions of any kind about receiver behavior. This is a strong result: with our solution a receiver can only reduce the data transfer rate by misbehaving, thereby eliminating the incentive to do so.
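
One concrete instance of such an attack is ACK division: because an RFC 2581 sender grows its congestion window by one full MSS for every ACK it receives during slow start, a receiver that acknowledges each segment in many small pieces can multiply the window's growth rate. The toy simulation below is our sketch of the effect, not the paper's code.

```python
# Illustrative simulation of the "ACK division" effect: RFC 2581 slow
# start grows cwnd by one MSS per ACK received, so a receiver that
# splits each segment's acknowledgment into many partial ACKs inflates
# the sender's window far faster than intended.

MSS = 1460  # bytes

def slow_start_cwnd(round_trips, acks_per_segment):
    cwnd = MSS  # start with one segment
    for _ in range(round_trips):
        segments = cwnd // MSS
        # each ACK, however few bytes it covers, bumps cwnd by one MSS
        cwnd += segments * acks_per_segment * MSS
    return cwnd

print(slow_start_cwnd(4, acks_per_segment=1))   # well-behaved: 16 MSS
print(slow_start_cwnd(4, acks_per_segment=10))  # misbehaving: 14641 MSS
```

A fix in the spirit the abstract describes is to grow the window in proportion to the bytes actually acknowledged rather than per ACK received, so dividing an acknowledgment gains the receiver nothing.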

254 citations


Journal ArticleDOI
TL;DR: The inefficiencies in routing and transport protocols in the modern Internet are described, and a prototype called Detour is being constructed: a virtual Internet in which routers tunnel packets over the commodity Internet instead of using dedicated links.
Abstract: Despite its obvious success, the Internet suffers from end-to-end performance and availability problems. We believe that intelligent routers at key access and interchange points could improve Internet behavior by actively managing traffic. We describe the inefficiencies in routing and transport protocols in the modern Internet. We are constructing a prototype, called Detour, a virtual Internet, in which routers tunnel packets over the commodity Internet instead of using dedicated links.
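
The tunneling idea can be sketched in a few lines: an overlay router encapsulates a packet inside an ordinary UDP datagram addressed to the next overlay hop. The framing below is hypothetical, invented only to illustrate encapsulation; it is not the Detour wire format.

```python
# Minimal sketch of the overlay idea behind Detour: routers forward
# ("tunnel") packets to each other over ordinary Internet paths by
# wrapping them inside UDP datagrams.

import socket
import struct

def encapsulate(inner_packet: bytes, flow_id: int) -> bytes:
    # prepend a tiny overlay header: 4-byte flow id + 2-byte length
    return struct.pack("!IH", flow_id, len(inner_packet)) + inner_packet

def tunnel_send(sock, next_hop, inner: bytes, flow_id: int):
    sock.sendto(encapsulate(inner, flow_id), next_hop)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tunnel_send(sock, ("192.0.2.1", 5000), b"original IP packet bytes", flow_id=42)
```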

251 citations


Proceedings ArticleDOI
01 Aug 1999
TL;DR: In this paper, the authors outline a vision of a future of consumer-oriented, task-specific devices and identify research problems that will require attention in the areas of user interfaces, distributed services, and networking infrastructure.
Abstract: Computing and telecommunications are maturing, and the next century promises a shift away from technology-driven general-purpose devices. Instead, we will focus on the needs of consumers: easy-to-use, low-maintenance, portable, ubiquitous, and ultra-reliable task-specific devices. Such devices, although not as limited by computational speed or communication bandwidth, will instead be constrained by new limits on size, form-factor, and power consumption. Data that they generate will need to be injected into the Internet and find its way to the services to which the user has subscribed. This is not simply a problem of ad-hoc networking, but one that requires re-thinking our basic assumptions regarding network transactions and challenges us to develop entirely new models for distributed services. Network topologies will be intermittent and services will have to be discovered independently of user guidance. In fact, data transfers from user interfaces to services and back will need to become invisible to the user and guided by the task rather than by explicit commands. This paper outlines a vision of this future and identifies research problems that will require our attention in the areas of user interfaces, distributed services, and networking infrastructure.

142 citations


Proceedings ArticleDOI
22 Feb 1999
TL;DR: This paper presents the design of a variation of a log-structured file system based on the concept of a virtual log, which supports fast small transactional writes without extra hardware support, and shows that random synchronous updates on an unmodified UFS execute up to an order of magnitude faster on a virtual log than on a conventional disk.
Abstract: In this paper, we study how to minimize the latency of small transactional writes to disk. The basic approach is to write to free sectors that are near the current disk head location by leveraging the embedded processor core inside the disk. We develop a number of analytical models to demonstrate the performance potential of this approach. We then present the design of a variation of a log-structured file system based on the concept of a virtual log, which supports fast small transactional writes without extra hardware support. We compare our approach against traditional update-in-place and logging systems by modifying the Solaris kernel to serve as a simulation engine. Our evaluations show that random synchronous updates on an unmodified UFS execute up to an order of magnitude faster on a virtual log than on a conventional disk. The virtual log can also significantly improve LFS in cases where delaying small writes is not an option or on-line cleaning would degrade performance. If the current trends of disk technology continue, we expect the performance advantage of this approach to become even more pronounced in the future.
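
A back-of-the-envelope version of the latency argument (our numbers and assumptions, not the paper's analytical models): an update-in-place write pays a seek plus half a rotation on average, while an eager write to the nearest free sector near the head pays only the short rotational gap to that sector.

```python
# Toy latency model contrasting update-in-place with "eager" writes to
# a free sector near the current head position, the idea behind the
# virtual log. All parameters below are illustrative assumptions.

ROTATION_MS = 6.0       # one full revolution (10,000 RPM disk)
AVG_SEEK_MS = 5.0       # average seek for update-in-place
SECTORS_PER_TRACK = 400
FREE_FRACTION = 0.2     # fraction of sectors currently free

def update_in_place_latency():
    # seek to the block's home location, then half a rotation on average
    return AVG_SEEK_MS + ROTATION_MS / 2

def eager_write_latency():
    # no seek: write the nearest free sector on the current track.
    # with free sectors spread uniformly, the head waits on average
    # half the gap between consecutive free sectors.
    free_sectors = SECTORS_PER_TRACK * FREE_FRACTION
    return (ROTATION_MS / free_sectors) / 2

print(f"update-in-place: {update_in_place_latency():.2f} ms")   # ~8 ms
print(f"eager write:     {eager_write_latency():.3f} ms")       # ~0.04 ms
```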

140 citations


Proceedings Article
11 Oct 1999
TL;DR: A set of algorithms to control how mobile Active Name programs are mapped onto available wide-area resources to optimize performance and availability are developed.
Abstract: In this paper, we explore flexible name resolution as a way of supporting extensibility for wide-area distributed services. Our approach, called Active Names, maps names to a chain of mobile programs that can customize how a service is located and how its results are transformed and transported back to the client. To illustrate the properties of our system, we implement prototypes of server selection based on end-to-end performance measurements, location-independent data transformation, and caching of composable active objects and demonstrate up to a five-fold performance improvement to end users. We show how these new services are developed, composed, and secured in our framework. Finally, we develop a set of algorithms to control how mobile Active Name programs are mapped onto available wide-area resources to optimize performance and availability.
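
The chain-of-programs idea can be illustrated with a small control-flow sketch. The API below is invented for illustration (the real system runs mobile programs in a managed wide-area environment; this shows only how stages compose): each stage receives the name plus a continuation for the rest of the chain, and may rewrite the name or transform the result.

```python
# Conceptual sketch of Active Names: a name resolves to a chain of
# programs that can customize service location and result transport.
# Function names and signatures here are hypothetical.

from typing import Callable, List

Resolver = Callable[[str, Callable[[str], bytes]], bytes]

def run_chain(name: str, chain: List[Resolver],
              fetch: Callable[[str], bytes]) -> bytes:
    # each stage wraps the remainder of the chain as its continuation
    def make_next(i: int) -> Callable[[str], bytes]:
        if i == len(chain):
            return fetch
        return lambda n: chain[i](n, make_next(i + 1))
    return make_next(0)(name)

def pick_fastest_replica(name: str, nxt) -> bytes:
    # server-selection stage: rewrite the name to the chosen replica
    return nxt(name + "@replica-with-best-measured-rtt")

def cache(name: str, nxt) -> bytes:
    # caching stage: a real implementation would consult a store here
    return nxt(name)

result = run_chain("service/report", [pick_fastest_replica, cache],
                   fetch=lambda n: f"payload for {n}".encode())
print(result)
```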

106 citations



Proceedings ArticleDOI
28 Mar 1999
TL;DR: This work proposes an incremental approach to the problem of congestion control, in which congestion information is shared among many co-located hosts and transport protocols make informed congestion control decisions, and argues that the resulting system can potentially improve the performance experienced by each network user as well as the overall efficiency of the network.
Abstract: Wide-area distributed applications are frequently limited by the performance of Internet data transfers. We argue that the principal cause of this effect is the poor interaction between host-centric congestion control algorithms and the realities of today's Internet traffic and infrastructure. In particular, when the duration of a network flow is short, using end-to-end feedback to determine network conditions is extremely inefficient. We propose an incremental approach to the problem, in which congestion information is shared among many co-located hosts and transport protocols make informed congestion control decisions. We argue that the resulting system can potentially improve the performance experienced by each network user as well as the overall efficiency of the network.
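
A minimal sketch of the sharing idea (the data structure and smoothing policy are our assumptions, not the paper's design): co-located hosts pool what completed flows learned about each destination network, and a new flow consults the pool for its starting congestion window instead of probing from a single segment.

```python
# Sketch of shared congestion state among co-located hosts: flows
# deposit what they observed per destination network, and new flows
# start from an informed estimate. Policy details are illustrative.

from collections import defaultdict

class SharedCongestionState:
    def __init__(self):
        # destination network prefix -> congestion window estimate (segments)
        self.cwnd_estimate = defaultdict(lambda: 1)

    def report(self, dst_net: str, observed_cwnd: int):
        # a finishing flow deposits what it learned about the path,
        # smoothed so one flow cannot swing the estimate too far
        old = self.cwnd_estimate[dst_net]
        self.cwnd_estimate[dst_net] = max(1, (old + observed_cwnd) // 2)

    def initial_cwnd(self, dst_net: str) -> int:
        # a new (possibly short) flow starts from the shared estimate
        return self.cwnd_estimate[dst_net]

shared = SharedCongestionState()
shared.report("198.51.100.0/24", observed_cwnd=20)
print(shared.initial_cwnd("198.51.100.0/24"))  # -> 10, not 1
```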

36 citations


Proceedings Article
12 Jul 1999
TL;DR: It is shown that much of the latency experienced during application startup can be avoided by packing application code pages more efficiently, and that combining demand paging with code reordering can improve application startup latency by more than 58%.
Abstract: Application startup latency has become a performance problem for both desktop applications and web applications. In this paper, we show that much of the latency experienced during application startup can be avoided by more efficiently packing application code pages. To take advantage of more efficient packing, we describe the implementation of demand paging for web applications. Finally, we show that combining demand paging with code reordering can improve application startup latency by more than 58%.
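
A toy model (our construction, not the paper's measurement tool) of why packing matters: if the functions executed at startup are scattered across the binary, each can fault in its own page, whereas placing them contiguously lets a few pages cover all of them.

```python
# Count how many 4 KB pages a startup sequence touches under two
# layouts of the same functions. Sizes and names are illustrative.

PAGE = 4096

def pages_touched(layout, startup_funcs, sizes):
    """Count distinct pages covering the startup functions, given an
    ordered layout of function names and per-function byte sizes."""
    offset, extent = 0, {}
    for f in layout:
        extent[f] = (offset, offset + sizes[f])
        offset += sizes[f]
    pages = set()
    for f in startup_funcs:
        start, end = extent[f]
        pages.update(range(start // PAGE, (end - 1) // PAGE + 1))
    return len(pages)

sizes = {f"f{i}": 1024 for i in range(16)}      # sixteen 1 KB functions
startup = [f"f{i}" for i in range(0, 16, 4)]    # every 4th runs at startup

original = [f"f{i}" for i in range(16)]         # startup code scattered
reordered = startup + [f for f in original if f not in startup]

print(pages_touched(original, startup, sizes))   # 4 pages faulted in
print(pages_touched(reordered, startup, sizes))  # 1 page faulted in
```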

17 citations


01 Jan 1999
TL;DR: In this paper, the authors present modifications to the log-structured file system that allow it to provide robust write performance in a wide range of environments and also present a dynamic reorganization algorithm that makes disk layout responsive to read patterns.
Abstract: My thesis is that the systematic application of simple adaptive methods to file system design can produce systems that are significantly more robust to changing hardware and diverse workloads than existing systems. I present modifications to the log-structured file system that allow it to provide robust write performance in a wide range of environments. I also present a dynamic reorganization algorithm that makes disk layout responsive to read patterns. I evaluate these adaptive algorithms with trace driven simulation on a combination of synthetic and measured traces. I find that simple adaptive algorithms can dramatically improve worst case performance and can allow average case performance to scale with improvements in disk technology.
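
To give a flavor of read-driven reorganization (an illustrative greedy sketch, not the dissertation's algorithm): count how often blocks are read consecutively in the observed trace, then lay blocks out so that frequent successors sit adjacent on disk.

```python
# Greedy sketch of making disk layout responsive to read patterns:
# blocks frequently read one after another get co-located.

from collections import Counter

def reorganize(block_ids, read_trace):
    """Order blocks greedily by how often each pair appears adjacent
    in the observed read trace."""
    adjacency = Counter(zip(read_trace, read_trace[1:]))
    remaining = set(block_ids)
    layout = [read_trace[0]]
    remaining.discard(read_trace[0])
    while remaining:
        last = layout[-1]
        # pick the block most often read right after the last one placed
        # (sorted() just makes ties deterministic)
        nxt = max(sorted(remaining), key=lambda b: adjacency[(last, b)])
        layout.append(nxt)
        remaining.discard(nxt)
    return layout

trace = [3, 7, 1, 3, 7, 1, 3, 7, 2, 5]
print(reorganize([1, 2, 3, 5, 7], trace))  # 3 and 7 end up adjacent
```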

11 citations


ReportDOI
01 Oct 1999
TL;DR: The goal of the NOW project was to explore and demonstrate a fundamental change in approach to the design and construction of large scale computing systems, motivated by the desire to deploy powerful systems very rapidly and to scale them incrementally, as required to fully utilize commercial technologies that are advancing at a high rate.
Abstract: The goal of the NOW project was to explore and demonstrate a fundamental change in approach to the design and construction of large-scale computing systems. This was motivated by the desire to deploy powerful systems very rapidly and to scale them incrementally, as is required to fully utilize commercial technologies that are advancing at a high rate, to meet new service demands that are increasing on Internet time, and to address emergency or military situations. The key enabling technology for the project was the emergence of scalable, low-latency, high-bandwidth VLSI switches, pioneered in massively parallel processors and transferred into system area network (SAN) configurations. With SAN technology, it became feasible to construct powerful, integrated systems by literally plugging together many state-of-the-art commercial workstations or PCs to form a high-performance cluster. The project demonstrated the design approach, the solutions to core challenges, and novel design opportunities by building and utilizing a cluster of over one hundred UltraSPARC workstations interconnected by a multi-gigabit Myricom network.