
Showing papers by "Thomas Anderson" published in 2008


Journal Article•DOI•
31 Mar 2008
TL;DR: This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day, based on an Ethernet switch, with an internal flow-table, and a standardized interface to add and remove flow entries.
Abstract: This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day. OpenFlow is based on an Ethernet switch, with an internal flow-table, and a standardized interface to add and remove flow entries. Our goal is to encourage networking vendors to add OpenFlow to their switch products for deployment in college campus backbones and wiring closets. We believe that OpenFlow is a pragmatic compromise: on one hand, it allows researchers to run experiments on heterogeneous switches in a uniform way at line-rate and with high port-density; while on the other hand, vendors do not need to expose the internal workings of their switches. In addition to allowing researchers to evaluate their ideas in real-world traffic settings, OpenFlow could serve as a useful campus component in proposed large-scale testbeds like GENI. Two buildings at Stanford University will soon run OpenFlow networks, using commercial Ethernet switches and routers. We will work to encourage deployment at other schools, and we encourage you to consider deploying OpenFlow in your university network too.
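
The flow-table abstraction at the heart of this proposal is small enough to sketch. Below is a minimal, hypothetical flow table in Python; the class, field names, and actions are illustrative assumptions, not the OpenFlow specification's actual interface.

```python
# Hypothetical sketch of an OpenFlow-style flow table: entries match on
# packet header fields and map to an action; misses are punted to the
# controller. Names here are illustrative, not the OpenFlow spec.

class FlowTable:
    def __init__(self):
        self.entries = []  # (match, action) pairs in priority order

    def add_flow(self, match, action):
        """Install an entry; `match` maps header fields to required values."""
        self.entries.append((match, action))

    def remove_flow(self, match):
        """Remove any entry whose match fields are exactly `match`."""
        self.entries = [(m, a) for (m, a) in self.entries if m != match]

    def lookup(self, packet):
        """Return the action of the first matching entry, else punt."""
        for match, action in self.entries:
            if all(packet.get(field) == value for field, value in match.items()):
                return action
        return "send_to_controller"  # table miss

table = FlowTable()
table.add_flow({"eth_type": 0x0800, "ip_dst": "10.0.0.2"}, "output:2")
print(table.lookup({"eth_type": 0x0800, "ip_dst": "10.0.0.2"}))  # output:2
print(table.lookup({"eth_type": 0x0806}))  # send_to_controller
```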

9,138 citations


Proceedings Article•DOI•
14 Sep 2008
TL;DR: In this setting, the notion of interference cancellation for unmanaged networks - the ability for a single receiver to disambiguate and successfully receive simultaneous overlapping transmissions from multiple unsynchronized sources - is explored, and it is found that these techniques can reduce the packet loss rate and substantially increase spatial reuse.
Abstract: A fundamental problem with unmanaged wireless networks is high packet loss rates and poor spatial reuse, especially with bursty traffic typical of normal use. To address these limitations, we explore the notion of interference cancellation for unmanaged networks - the ability for a single receiver to disambiguate and successfully receive simultaneous overlapping transmissions from multiple unsynchronized sources. We describe a practical algorithm for interference cancellation, and implement it for ZigBee using software radios. In this setting, we find that our techniques can reduce packet loss rate and substantially increase spatial reuse. With carrier sense set to prevent concurrent sends, our approach reduces the packet loss rate during collisions from 14% to 8% due to improved handling of hidden terminals. Conversely, disabling carrier sense reduces performance for only 7% of all pairs of links and increases the delivery rate for the median pair of links in our testbed by a factor of 1.8 due to improved spatial reuse.
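
To make the idea concrete, here is a toy successive-interference-cancellation demo in Python with NumPy: decode the stronger of two overlapping BPSK signals, subtract its reconstruction, then decode the weaker one. This is a schematic sketch of the general technique under assumed amplitudes and noise, not the paper's ZigBee decoder.

```python
# Toy successive interference cancellation (SIC) for two overlapping BPSK
# senders: decode the stronger signal, re-synthesize and subtract it, then
# decode the weaker one from the residual. Illustrative parameters only.
import numpy as np

rng = np.random.default_rng(0)
bits_a = rng.integers(0, 2, 32)                # stronger sender's bits
bits_b = rng.integers(0, 2, 32)                # weaker sender's bits
sig_a = 1.0 * (2 * bits_a - 1)                 # BPSK at amplitude 1.0
sig_b = 0.4 * (2 * bits_b - 1)                 # BPSK at amplitude 0.4
rx = sig_a + sig_b + rng.normal(0, 0.05, 32)   # overlapping reception + noise

est_a = (rx > 0).astype(int)                   # 1) decode stronger signal
residual = rx - 1.0 * (2 * est_a - 1)          # 2) subtract its reconstruction
est_b = (residual > 0).astype(int)             # 3) decode weaker signal

print("A bit errors:", int((est_a != bits_a).sum()))  # 0 in this toy setup
print("B bit errors:", int((est_b != bits_b).sum()))  # 0 in this toy setup
```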

329 citations


Proceedings Article•
16 Apr 2008
TL;DR: Month-long measurements of BitTorrent motivate the design and implementation of a new, one-hop reputation protocol for P2P networks, and limited propagation is found to improve performance and incentives relative to BitTorrent.
Abstract: An emerging paradigm in peer-to-peer (P2P) networks is to explicitly consider incentives as part of the protocol design in order to promote good (or discourage bad) behavior. However, effective incentives are hampered by the challenges of a P2P environment, e.g., transient users and no central authority. In this paper, we quantify these challenges, reporting the results of a month-long measurement of millions of users of the BitTorrent file sharing system. Surprisingly, given BitTorrent's popularity, we identify widespread performance and availability problems. These measurements motivate the design and implementation of a new, one-hop reputation protocol for P2P networks. Unlike digital currency systems, where contribution information is globally visible, or tit-for-tat, where no propagation occurs, one-hop reputations limit propagation to at most one intermediary. Through trace-driven analysis and measurements of a deployment on PlanetLab, we find that limited propagation improves performance and incentives relative to BitTorrent.
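
The propagation limit is the key design point, and a few lines of Python can illustrate it. The function below is a hypothetical sketch, not the paper's protocol: it consults direct history first and otherwise accepts reports from trusted intermediaries' own direct observations, never chaining further.

```python
# Hedged sketch of one-hop reputation: use direct experience when it exists;
# otherwise accept reports from at most one trusted intermediary. No report
# is ever propagated through a chain of peers. Illustrative names only.

def reputation(me, peer, history, intermediaries):
    """history[(x, y)] = bytes that y has uploaded to x."""
    if (me, peer) in history:
        return history[(me, peer)]          # direct experience wins
    reports = [history[(i, peer)] for i in intermediaries
               if (i, peer) in history]     # exactly one hop of indirection
    return max(reports) if reports else 0

history = {("bob", "carol"): 5_000_000}
print(reputation("alice", "carol", history, ["bob"]))  # 5000000, via bob
print(reputation("alice", "dave", history, ["bob"]))   # 0, no one knows dave
```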

154 citations


Proceedings Article•
16 Apr 2008
TL;DR: The extent of reachability problems, both in number and duration, is much greater than expected, with problems persisting for hours and even days, and many of the problems do not correlate with BGP updates.
Abstract: We present Hubble, a system that operates continuously to find Internet reachability problems in which routes exist to a destination but packets are unable to reach it. Hubble monitors, at a 15-minute granularity, the data path to prefixes that cover 89% of the Internet's edge address space. Key enabling techniques include a hybrid passive/active monitoring approach and the synthesis of multiple information sources that include historical data. With these techniques, we estimate that Hubble discovers 85% of the reachability problems that would be found with a pervasive probing approach, while issuing only 5.5% as many probes. We also present the results of a three-week study conducted with Hubble. We find that the extent of reachability problems, both in number and duration, is much greater than we expected, with problems persisting for hours and even days, and many of the problems do not correlate with BGP updates. In many cases, a multi-homed AS is reachable through one provider, but probes through another terminate; using spoofed packets, we isolated the direction of failure in 84% of the cases we analyzed and found all problems to be exclusively on the forward path from the provider to the destination. A snapshot of the problems Hubble is currently monitoring can be found at http://hubble.cs.washington.edu.
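
The monitoring loop can be caricatured in a few lines. The sketch below (hypothetical names, a stand-in probe function in place of real pings/traceroutes) flags prefixes for which a route exists but repeated active probes fail, which is the class of problem Hubble targets.

```python
# Schematic of Hubble-style monitoring: for prefixes known (passively) to
# have routes, actively probe the data path and flag prefixes where a route
# exists but packets do not get through. probe() is a toy stand-in.

BROKEN = {"203.0.113.0/24"}                 # toy ground truth for the demo

def probe(prefix):
    """Stand-in for an active ping/traceroute toward the prefix."""
    return prefix not in BROKEN

def find_reachability_problems(routed_prefixes, attempts=3):
    problems = []
    for prefix in routed_prefixes:          # a BGP route exists to each
        if not any(probe(prefix) for _ in range(attempts)):
            problems.append(prefix)         # route exists, data path fails
    return problems

prefixes = ["192.0.2.0/24", "198.51.100.0/24", "203.0.113.0/24"]
print(find_reachability_problems(prefixes))  # ['203.0.113.0/24']
```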

151 citations


Journal Article•DOI•
TL;DR: The capability approach to network denial-of-service (DoS) attacks is motivated, and the Traffic Validation Architecture (TVA), which builds on capabilities, is evaluated using simulations and a Click router implementation to measure its computational costs.
Abstract: We motivate the capability approach to network denial-of-service (DoS) attacks, and evaluate the Traffic Validation Architecture (TVA), which builds on capabilities. With our approach, rather than send packets to any destination at any time, senders must first obtain "permission to send" from the receiver, which provides the permission in the form of capabilities to those senders whose traffic it agrees to accept. The senders then include these capabilities in packets. This enables verification points distributed around the network to check that traffic has been authorized by the receiver and the path in between, and hence to cleanly discard unauthorized traffic. To evaluate this approach, and to understand the detailed operation of capabilities, we developed a network architecture called TVA. TVA addresses a wide range of possible attacks against communication between pairs of hosts, including spoofed packet floods, network and host bottlenecks, and router state exhaustion. We use simulations to show the effectiveness of TVA at limiting DoS floods, and an implementation on the Click router to evaluate the computational costs of TVA. We also discuss how to incrementally deploy TVA into practice.
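
The capability mechanism lends itself to a short sketch. The Python below is a minimal illustration of the general idea, assuming a shared secret between receiver and verification points; the token format and key distribution are invented for the example and are not TVA's wire format.

```python
# Minimal sketch of the capability idea: the receiver grants a sender an
# unforgeable, expiring token; the sender stamps it into packets; and
# verification points drop packets whose token fails to validate.
# Token format and shared-secret setup are assumptions, not TVA's design.
import hashlib
import hmac
import time

SECRET = b"receiver-and-verifier-shared-secret"   # assumed key distribution

def grant_capability(src, dst, ttl=10):
    """Receiver-side: issue permission-to-send for `ttl` seconds."""
    expiry = int(time.time()) + ttl
    msg = f"{src}|{dst}|{expiry}".encode()
    tag = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:16]
    return {"src": src, "dst": dst, "expiry": expiry, "tag": tag}

def verify(packet):
    """Verification-point side: check the stamped capability."""
    cap = packet["cap"]
    msg = f"{cap['src']}|{cap['dst']}|{cap['expiry']}".encode()
    good = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:16]
    return cap["expiry"] >= time.time() and hmac.compare_digest(cap["tag"], good)

cap = grant_capability("1.2.3.4", "5.6.7.8")
print(verify({"cap": cap}))          # True: authorized traffic passes
cap["tag"] = "0" * 16
print(verify({"cap": cap}))          # False: unauthorized traffic is discarded
```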

148 citations


Proceedings Article•
16 Apr 2008
TL;DR: Consensus routing is presented, a consistency-first approach that cleanly separates safety and liveness using two logically distinct modes of packet delivery, and experiments on the Internet's AS-level topology show that it eliminates nearly all transient disconnectivity in BGP.
Abstract: Internet routing protocols (BGP, OSPF, RIP) have traditionally favored responsiveness over consistency. A router applies a received update immediately to its forwarding table before propagating the update to other routers, including those that potentially depend upon the outcome of the update. Responsiveness comes at the cost of routing loops and blackholes--a router A thinks its route to a destination is via B but B disagrees. By favoring responsiveness (a liveness property) over consistency (a safety property), Internet routing has lost both. Our position is that consistent state in a distributed system makes its behavior more predictable and securable. To this end, we present consensus routing, a consistency-first approach that cleanly separates safety and liveness using two logically distinct modes of packet delivery: a stable mode where a route is adopted only after all dependent routers have agreed upon it, and a transient mode that heuristically forwards the small fraction of packets that encounter failed links. Somewhat surprisingly, we find that consensus routing improves overall availability when used in conjunction with existing transient mode heuristics such as backup paths, deflections, or detouring. Experiments on the Internet's AS-level topology show that consensus routing eliminates nearly all transient disconnectivity in BGP.
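
A toy rendering of the stable-mode rule may help: forwarding state changes only after every dependent router has agreed to the update, so no two routers act on inconsistent views. The code below is a schematic under that assumption, not the paper's consensus protocol.

```python
# Schematic of consensus routing's stable mode: an update is installed in
# any forwarding table only once all dependent routers have agreed to it,
# so tables flip together and loops/blackholes from partial application
# cannot form. Illustrative only; the real system runs a consensus protocol.

forwarding = {"A": {}, "B": {}}

def consensus_apply(update, routers):
    votes = {r: True for r in routers}       # stand-in for a consensus round
    if all(votes.values()):                  # unanimous: adopt everywhere
        for r in routers:
            forwarding[r][update["dst"]] = update["next_hop"][r]
    # otherwise keep the old state; transient mode carries packets meanwhile

consensus_apply({"dst": "10.0.0.0/8", "next_hop": {"A": "B", "B": "exit"}},
                routers=["A", "B"])
print(forwarding)  # A and B update atomically, so they never disagree
```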

136 citations


Proceedings Article•
16 Apr 2008
TL;DR: Phalanx, a system that leverages the power of swarms to combat DoS, is presented, with the goal of defining a system that could be deployed in the next few years to address the danger from present-day massive botnets.
Abstract: Large-scale distributed denial of service (DoS) attacks are an unfortunate everyday reality on the Internet. They are simple to execute and, with the growing prevalence and size of botnets, more effective than ever. Although much progress has been made in developing techniques to address DoS attacks, no existing solution is unilaterally deployable, works with the Internet model of open access and dynamic routes, and copes with the large numbers of attackers typical of today's botnets. In this paper, we present a novel DoS prevention scheme to address these issues. Our goal is to define a system that could be deployed in the next few years to address the danger from present-day massive botnets. The system, called Phalanx, leverages the power of swarms to combat DoS. Phalanx makes only the modest assumption that the aggregate capacity of the swarm exceeds that of the botnet. A client communicating with a destination bounces its packets through a random sequence of end-host mailboxes; because an attacker doesn't know the sequence, it can disrupt at most a fraction of the traffic, even for end-hosts with low-bandwidth access links. We use PlanetLab to show that this approach can be both efficient and capable of withstanding attack. We further explore scalability with a simulator running experiments on top of measured Internet topologies.
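
The mailbox-sequence trick is easy to sketch. In the hypothetical Python below, sender and receiver derive the same pseudorandom mailbox sequence from a shared secret, so an attacker who cannot guess the sequence can flood only a fraction of the mailboxes; the names and derivation are illustrative assumptions, not Phalanx's actual mechanism.

```python
# Sketch of the Phalanx mailbox idea: both endpoints derive an identical
# pseudorandom sequence of end-host mailboxes from a shared secret, and
# packets bounce through that sequence. An attacker who cannot predict the
# sequence can disrupt at most a fraction of the traffic. Illustrative only.
import hashlib

MAILBOXES = [f"mailbox-{i}" for i in range(1000)]   # swarm of end-hosts

def mailbox_for(shared_secret: bytes, seq: int) -> str:
    """Deterministically map packet number `seq` to a mailbox."""
    digest = hashlib.sha256(shared_secret + seq.to_bytes(8, "big")).digest()
    return MAILBOXES[int.from_bytes(digest[:4], "big") % len(MAILBOXES)]

secret = b"negotiated-out-of-band"
path = [mailbox_for(secret, seq) for seq in range(5)]
print(path)  # receiver computes the same sequence and polls these mailboxes
```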

85 citations


01 Jan 2008
TL;DR: This dissertation designs, builds, and evaluates an information plane for the Internet, called iPlane, that enables distributed applications to discover information about Internet paths without explicit measurement, and uses information from iPlane to drive path selection in three representative distributed applications.
Abstract: Over the last few years, several new applications have emerged on the Internet that are distributed at a scale previously unseen. Examples include peer-to-peer filesharing, content distribution networks, and voice-over-IP. This new class of distributed applications can make intelligent choices among the several paths available to them to optimize their performance. However, as a result of the best-effort packet forwarding interface exported by the Internet, applications need to explicitly measure the network to discover information about any path. Not only does measurement at the time of communication impose a significant overhead on most applications, but it is also redundant to have every application reimplement an Internet measurement component. In this dissertation, I design, build, and evaluate an information plane for the Internet, called iPlane, that enables distributed applications to discover information about Internet paths without explicit measurement. iPlane efficiently performs measurements from end-hosts under its control to predict path properties on the Internet between arbitrary end-hosts. I pursue a structural approach in issuing and synthesizing measurements—instead of using only end-to-end measurements and thus treating the Internet as a black box, I discover the Internet's routing topology and then compose measurements of links and path segments. This structural approach enables iPlane to predict multiple path properties, such as latency and loss rate. My evaluation of iPlane shows that iPlane's predictions of paths and path properties are accurate, significantly better than previous approaches for some of the sub-problems that iPlane tackles. Also, I used information from iPlane to drive path selection in three representative distributed applications—content distribution, peer-to-peer filesharing, and voice-over-IP. In each case, the use of iPlane helped significantly improve application performance.
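
The structural approach reduces to a simple composition rule: measure segments once, then combine segment measurements to predict arbitrary paths. The Python below is a toy version of that rule with an invented three-segment topology and numbers; it is not iPlane's code.

```python
# Toy version of iPlane's structural approach: per-segment measurements are
# composed to predict end-to-end properties. Latency composes additively;
# loss composes multiplicatively via delivery probabilities. The topology
# and numbers are illustrative assumptions.

latency_ms = {("src", "A"): 12.0, ("A", "B"): 8.0, ("B", "dst"): 20.0}
loss_rate = {("src", "A"): 0.01, ("A", "B"): 0.00, ("B", "dst"): 0.02}

def segments(path):
    """Break a router-level path into consecutive segments."""
    return list(zip(path, path[1:]))

def predict_latency(path):
    return sum(latency_ms[s] for s in segments(path))

def predict_loss(path):
    delivered = 1.0
    for s in segments(path):
        delivered *= 1.0 - loss_rate[s]
    return 1.0 - delivered

route = ["src", "A", "B", "dst"]
print(predict_latency(route))         # 40.0 ms predicted end to end
print(round(predict_loss(route), 4))  # 0.0298 predicted loss rate
```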

10 citations


Journal Article•DOI•
01 Jul 2008
TL;DR: The Workshop on Organizing Workshops, Conferences, and Symposia for Computer Systems (WOWCS) was organized to "bring together conference organizers and other interested people to discuss the issues they confront."
Abstract: The Workshop on Organizing Workshops, Conferences, and Symposia for Computer Systems (WOWCS) was organized to "bring together conference organizers (past, present, and future) and other interested people to discuss the issues they confront." In conjunction with WOWCS, we survey some previous publications that discuss open issues related to organizing computer systems conferences, especially concerning conduct and management of the review process. We also list some topics about which we wish WOWCS had received submissions, but did not; these could be good topics for future articles.

9 citations


Proceedings Article•
15 Apr 2008
TL;DR: A model of computer systems research is developed as a way of helping explain to prospective authors the often obscure workings of conference program committees, and is used to motivate several recent changes in conference design.
Abstract: This paper develops a model of computer systems research as a way of helping explain to prospective authors the often obscure workings of conference program committees. While our goal is primarily descriptive, we use the model to motivate several recent changes in conference design and to suggest some further potential improvements.

Proceedings Article•
15 Apr 2008
TL;DR: The Workshop on Organizing Workshops, Conferences, and Symposia for Computer Systems was organized to "bring together conference organizers (past, present, and future) and other interested people to discuss the issues they confront."
Abstract: The Workshop on Organizing Workshops, Conferences, and Symposia for Computer Systems (WOWCS) was organized to "bring together conference organizers (past, present, and future) and other interested people to discuss the issues they confront." In addition to the position papers submitted to the workshop, the WOWCS program committee has collected a bibliography of previous publications in this area. We also list some topics about which we wish we had received submissions, but did not; these could be good topics for future articles.

01 Jan 2008
TL;DR: A practical DoS-prevention solution is presented that handles noncacheable traffic, works with the Internet model of open access and dynamic routes, and copes with the large numbers of attackers typical of today's botnets.
Abstract: Large-scale distributed denial-of-service (DoS) attacks are an unfortunate everyday reality on the Internet. They are simple to execute and, with the growing size of botnets, more effective than ever. Although much progress has been made in developing techniques to address DoS attacks, no existing solution handles noncacheable traffic, works with the Internet model of open access and dynamic routes, and copes with the large numbers of attackers typical of today's botnets. We believe we have created a practical solution.