Author

Randy Bush

Bio: Randy Bush is an academic researcher from Internet Initiative Japan. The author has contributed to research in the topics of the Internet and the Border Gateway Protocol. The author has an h-index of 25 and has co-authored 67 publications receiving 2,318 citations.


Papers
01 Jul 1997
TL;DR: This document considers some areas that have been identified as problems with the specification of the Domain Name System, and proposes remedies for the defects identified.
Abstract: This document considers some areas that have been identified as problems with the specification of the Domain Name System, and proposes remedies for the defects identified. Eight separate issues are considered.

219 citations

Proceedings ArticleDOI
06 Nov 2002
TL;DR: An examination of BGP's behavior during the Code Red/Nimda attack of September 18, 2001 concludes that BGP exhibited no significant abnormality, and that over 40% of the observed updates can be attributed to monitoring artifacts in current BGP measurement settings.
Abstract: Despite BGP's critical importance as the de facto Internet inter-domain routing protocol, there is little understanding of how BGP actually performs under stressful conditions, when dependable routing is most needed. In this paper, we examine BGP's behavior during one stressful period, the Code Red/Nimda attack on September 18, 2001. The attack was correlated with a 30-fold increase in BGP update messages at a monitoring point which peers with a number of Internet service providers. Our examination of BGP's behavior during the event concludes that BGP exhibited no significant abnormality, and that over 40% of the observed updates can be attributed to monitoring artifacts in current BGP measurement settings. Our analysis, however, does reveal several weak points in both the protocol and its implementation, such as BGP's sensitivity to transport-session reliability, its inability to avoid the global propagation of small local changes, and certain implementation features whose otherwise benign effects are amplified under stressful conditions. We also identify areas for improvement in the current network measurement and monitoring effort.
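As an illustration of the kind of update-stream analysis the abstract describes, the following is a minimal sketch that counts duplicate BGP announcements in a pre-parsed update log. The (timestamp, prefix, AS-path) record format and the sample data are assumptions made for illustration; this is not the paper's actual methodology or data.

```python
# Minimal sketch: count duplicate BGP announcements in a parsed update log.
# "updates" is a hypothetical list of (timestamp, prefix, as_path) tuples;
# the classification below is illustrative, not the paper's methodology.

def count_duplicate_announcements(updates):
    last_path = {}           # prefix -> most recently announced AS path
    duplicates = 0
    for _ts, prefix, as_path in updates:
        if last_path.get(prefix) == as_path:
            duplicates += 1  # re-announcement carrying no new routing information
        last_path[prefix] = as_path
    return duplicates, len(updates)

updates = [
    (0, "192.0.2.0/24", (64500, 64501)),
    (5, "192.0.2.0/24", (64500, 64501)),   # duplicate, e.g. after a monitoring-session reset
    (9, "198.51.100.0/24", (64500, 64502)),
]
dup, total = count_duplicate_announcements(updates)
print(f"{dup}/{total} updates are duplicate announcements")
```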

179 citations

Journal ArticleDOI
TL;DR: By presenting a BGP-focused, state-of-the-art treatment of the aspects that are critical for a rigorous study of this inter-domain topology, this paper demystifies many "controversial" observations reported in the existing literature and illustrates the benefits and richness of new scientific approaches to measuring, modeling, and analyzing the inter-domain topology.
Abstract: Formally, the Internet inter-domain routing system is a collection of networks, their policies, peering relationships and organizational affiliations, and the addresses they advertise. It also includes components like Internet exchange points. By its very definition, each and every aspect of this system is impacted by BGP, the de facto standard inter-domain routing protocol. The element of this inter-domain routing system that has attracted the most attention within the research community has been the "inter-domain topology". Unfortunately, almost from the get-go, the vast majority of studies of this topology, from definition, to measurement, to modeling and analysis, have ignored the central role of BGP in this problem. The legacy is a set of specious findings, unsubstantiated claims, and ill-conceived ideas about the Internet as a whole. By presenting a BGP-focused, state-of-the-art treatment of the aspects that are critical for a rigorous study of this inter-domain topology, we demystify in this paper many "controversial" observations reported in the existing literature. At the same time, we illustrate the benefits and richness of new scientific approaches to measuring, modeling, and analyzing the inter-domain topology that are faithful to the BGP-specific nature of this problem domain.
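To make the BGP-centric view concrete, here is a minimal sketch of how an AS-level adjacency graph is commonly inferred from observed AS paths. The sample paths are invented, and, as the paper stresses, a graph built this way from a handful of vantage points is necessarily an incomplete view of the inter-domain topology.

```python
# Minimal sketch: infer AS-level adjacencies from observed BGP AS paths.
# "as_paths" is a hypothetical sample; a real study would use many vantage points,
# and (as the paper argues) the resulting graph is still an incomplete view.

def as_edges(as_paths):
    edges = set()
    for path in as_paths:
        # collapse AS-path prepending (consecutive repeats of the same AS)
        deduped = [asn for i, asn in enumerate(path) if i == 0 or asn != path[i - 1]]
        for a, b in zip(deduped, deduped[1:]):
            edges.add(tuple(sorted((a, b))))
    return edges

as_paths = [
    (64500, 64501, 64502),
    (64500, 64500, 64503),        # prepending collapses to a single hop
    (64504, 64501, 64502),
]
print(sorted(as_edges(as_paths)))
```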

171 citations

Journal ArticleDOI
11 Aug 2006
TL;DR: This work conducts an extensive measurement study that involves both controlled routing updates through two tier-1 ISPs and active probes of a diverse set of end-to-end paths on the Internet, and finds that routing changes contribute significantly to end-to-end packet loss.
Abstract: Extensive measurement studies have shown that end-to-end Internet path performance degradation is correlated with routing dynamics. However, the root cause of the correlation between routing dynamics and such performance degradation is poorly understood. In particular, how do routing changes result in degraded end-to-end path performance in the first place? How do factors such as topological properties, routing policies, and iBGP configurations affect the extent to which such routing events can cause performance degradation? Answers to these questions are critical for improving network performance. In this paper, we conduct an extensive measurement study that involves both controlled routing updates through two tier-1 ISPs and active probes of a diverse set of end-to-end paths on the Internet. We find that routing changes contribute significantly to end-to-end packet loss. Specifically, we study failover events, in which a link failure leads to a routing change, and recovery events, in which a link repair causes a routing change. In both cases, it is possible to experience data-plane performance degradation in terms of increased long loss bursts as well as forwarding loops. Furthermore, we find that common routing policies and iBGP configurations of ISPs can directly affect end-to-end path performance during routing changes. Our work provides new insights into potential measures that network operators can undertake to enhance network performance.
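As a rough illustration of correlating control-plane events with data-plane degradation, the sketch below flags loss bursts that begin shortly after a routing change. The timestamps and the 30-second correlation window are illustrative assumptions, not parameters taken from the paper.

```python
# Minimal sketch: flag loss bursts that begin shortly after a routing change.
# The timestamps and the 30-second window are illustrative assumptions.

WINDOW = 30.0  # seconds

def bursts_near_routing_changes(route_change_times, loss_burst_times, window=WINDOW):
    correlated = []
    for burst in loss_burst_times:
        # a burst is "correlated" if it starts within `window` seconds after a change
        if any(0 <= burst - change <= window for change in route_change_times):
            correlated.append(burst)
    return correlated

route_changes = [100.0, 480.0]          # e.g. a failover event and a recovery event
loss_bursts = [104.2, 250.0, 481.5]     # start times of observed long loss bursts
print(bursts_near_routing_changes(route_changes, loss_bursts))  # [104.2, 481.5]
```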

166 citations

01 Dec 2002
TL;DR: The Simplicity Principle is described, which states that complexity is the primary mechanism that impedes efficient scaling, and its implications on the architecture, design and engineering issues found in large scale Internet backbones.
Abstract: This document extends RFC 1958 by outlining some of the philosophical guidelines to which architects and designers of Internet backbone networks should adhere. We describe the Simplicity Principle, which states that complexity is the primary mechanism that impedes efficient scaling, and discuss its implications on the architecture, design and engineering issues found in large scale Internet backbones.

154 citations


Cited by
ReportDOI
01 Mar 2012
TL;DR: This document specifies the IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL), which provides a mechanism whereby multipoint-to-point traffic from devices inside the LLN towards a central control point, as well as point-to-multipoint traffic from the central control point to the devices inside the LLN, are supported.
Abstract: Low-Power and Lossy Networks (LLNs) are a class of network in which both the routers and their interconnect are constrained. LLN routers typically operate with constraints on processing power, memory, and energy (battery power). Their interconnects are characterized by high loss rates, low data rates, and instability. LLNs are comprised of anything from a few dozen to thousands of routers. Supported traffic flows include point-to-point (between devices inside the LLN), point-to-multipoint (from a central control point to a subset of devices inside the LLN), and multipoint-to-point (from devices inside the LLN towards a central control point). This document specifies the IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL), which provides a mechanism whereby multipoint-to-point traffic from devices inside the LLN towards a central control point, as well as point-to-multipoint traffic from the central control point to the devices inside the LLN, are supported. Support for point-to-point traffic is also available. [STANDARDS-TRACK]
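For a flavor of how RPL builds its upward (multipoint-to-point) routes, the following is a highly simplified sketch of DODAG formation in which each node adopts as preferred parent the neighbor that minimizes its rank. Real RPL (RFC 6550) additionally uses objective functions, link metrics, and Trickle-timed DIO messages, none of which are modeled here, and the topology is invented.

```python
# Minimal sketch of RPL-style upward route formation: each node joins the DODAG
# by choosing the neighbor that minimizes its rank (distance from the root).
# This is a simplification of RFC 6550; objective functions, link metrics, and
# Trickle-timed DIO messages are not modeled, and the topology is hypothetical.

ROOT = "root"

def build_dodag(neighbors, rank_increase=1):
    """neighbors: dict node -> set of neighboring nodes (symmetric links assumed)."""
    rank = {ROOT: 0}
    parent = {}
    frontier = [ROOT]
    while frontier:
        node = frontier.pop(0)
        for nbr in sorted(neighbors.get(node, ())):
            candidate = rank[node] + rank_increase
            if nbr not in rank or candidate < rank[nbr]:
                rank[nbr] = candidate
                parent[nbr] = node       # preferred parent toward the root
                frontier.append(nbr)
    return rank, parent

topology = {
    ROOT: {"a", "b"},
    "a": {ROOT, "c"},
    "b": {ROOT, "c"},
    "c": {"a", "b"},
}
print(build_dodag(topology))
```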

2,551 citations

01 Oct 2000
TL;DR: This document describes the Stream Control Transmission Protocol (SCTP), which is designed to transport PSTN signaling messages over IP networks, but is capable of broader applications.
Abstract: This document describes the Stream Control Transmission Protocol (SCTP). SCTP is designed to transport PSTN signaling messages over IP networks, but is capable of broader applications.

2,270 citations

Book
05 Mar 2012
TL;DR: Computer Networking: A Top-Down Approach Featuring the Internet explains the engineering problems that are inherent in communicating digital information from point to point, presents the mathematics that determine the best path, shows some code that implements those algorithms, and illustrates the logic using excellent conceptual diagrams.
Abstract: Certain data-communication protocols hog the spotlight, but all of them have a lot in common. Computer Networking: A Top-Down Approach Featuring the Internet explains the engineering problems that are inherent in communicating digital information from point to point. The top-down approach mentioned in the subtitle means that the book starts at the top of the protocol stack--at the application layer--and works its way down through the other layers, until it reaches bare wire. The authors, for the most part, shun the well-known seven-layer Open Systems Interconnection (OSI) protocol stack in favor of their own five-layer (application, transport, network, link, and physical) model. It's an effective approach that helps clear away some of the hand-waving traditionally associated with the more obtuse layers in the OSI model. The approach is definitely theoretical--don't look here for instructions on configuring Windows 2000 or a Cisco router--but it's relevant to reality, and should help anyone who needs to understand networking as a programmer, system architect, or even administration guru. The treatment of the network layer, at which routing takes place, is typical of the overall style. In discussing routing, authors James Kurose and Keith Ross explain (by way of lots of clear, definition-packed text) what routing protocols need to do: find the best route to a destination. Then they present the mathematics that determine the best path, show some code that implements those algorithms, and illustrate the logic by using excellent conceptual diagrams. Real-life implementations of the algorithms--including Internet Protocol (both IPv4 and IPv6) and several popular IP routing protocols--help you to make the transition from pure theory to networking technologies. --David Wall. Topics covered: The theory behind data networks, with thorough discussion of the problems that are posed at each level (the application layer gets plenty of attention). For each layer, there's academic coverage of networking problems and solutions, followed by discussion of real technologies. Special sections deal with network security and transmission of digital multimedia.
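The kind of best-path computation the review refers to can be sketched as a link-state shortest-path calculation (Dijkstra's algorithm). The toy graph and its costs below are invented for illustration and are not taken from the book.

```python
# Minimal sketch of link-state style best-path computation (Dijkstra's algorithm),
# the kind of routing calculation the book walks through. The toy graph and its
# costs are invented for illustration.

import heapq

def dijkstra(graph, source):
    """graph: dict node -> {neighbor: cost}. Returns (least-cost, predecessor) maps."""
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, prev

graph = {
    "u": {"v": 2, "w": 5, "x": 1},
    "v": {"u": 2, "w": 3, "x": 2},
    "w": {"v": 3, "x": 3, "y": 1},
    "x": {"u": 1, "v": 2, "w": 3, "y": 1},
    "y": {"w": 1, "x": 1},
}
print(dijkstra(graph, "u")[0])   # least-cost distance from u to every other node
```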

1,079 citations

Proceedings ArticleDOI
18 Nov 2002
TL;DR: This paper provides a careful analysis of Code Red propagation by accounting for two factors: one is the dynamic countermeasures taken by ISPs and users; the other is the slowed-down worm infection rate caused by the congestion and router problems that Code Red's rampant propagation created.
Abstract: The Code Red worm incident of July 2001 has stimulated activities to model and analyze Internet worm propagation. In this paper we provide a careful analysis of Code Red propagation by accounting for two factors: one is the dynamic countermeasures taken by ISPs and users; the other is the slowed-down worm infection rate that resulted because Code Red's rampant propagation caused congestion and trouble for some routers. Based on the classical Kermack-McKendrick epidemic model, we derive a general Internet worm model called the two-factor worm model. Simulations and numerical solutions of the two-factor worm model match the observed data of the Code Red worm better than previous models do. This model leads to a better understanding and prediction of the scale and speed of Internet worm spreading.
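For reference, here is a minimal sketch of the classical Kermack-McKendrick (SIR) dynamics that the two-factor worm model extends, integrated with forward Euler. The population size, infection rate, and removal rate are illustrative choices, not values fitted to Code Red data, and the two extra factors the paper introduces are not modeled.

```python
# Minimal sketch of the classical Kermack-McKendrick (SIR) epidemic model that the
# two-factor worm model builds on, integrated with forward Euler:
#   dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I
# Parameters below are illustrative, not values fitted to Code Red data.

def simulate_sir(beta, gamma, s0, i0, steps, dt=0.01):
    s, i, r = s0, i0, 0.0
    for _ in range(steps):
        new_infections = beta * s * i * dt
        removals = gamma * i * dt
        s -= new_infections
        i += new_infections - removals
        r += removals
    return s, i, r

# Example: 360,000 susceptible hosts, one initial infective host.
print(simulate_sir(beta=8e-7, gamma=0.05, s0=360_000, i0=1, steps=10_000))
```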

829 citations

Book
01 Jan 2005
TL;DR: It is shown that, because of the large number of flows multiplexed together on a single backbone link, a link with n flows requires no more than B = (RTT × C)/√n of buffering, for either long-lived or short-lived TCP flows.
Abstract: All Internet routers contain buffers to hold packets during times of congestion. Today, the size of the buffers is determined by the dynamics of TCP's congestion control algorithm. In particular, the goal is to make sure that when a link is congested, it is busy 100% of the time; which is equivalent to making sure its buffer never goes empty. A widely used rule-of-thumb states that each link needs a buffer of size B = RTT × C, where RTT is the average round-trip time of a flow passing across the link, and C is the data rate of the link. For example, a 10 Gb/s router linecard needs approximately 250 ms × 10 Gb/s = 2.5 Gbits of buffers; and the amount of buffering grows linearly with the line rate. Such large buffers are challenging for router manufacturers, who must use large, slow, off-chip DRAMs. And queueing delays can be long, have high variance, and may destabilize the congestion control algorithms. In this paper we argue that the rule-of-thumb (B = RTT × C) is now outdated and incorrect for backbone routers. This is because of the large number of flows (TCP connections) multiplexed together on a single backbone link. Using theory, simulation and experiments on a network of real routers, we show that a link with n flows requires no more than B = (RTT × C)/√n, for long-lived or short-lived TCP flows. The consequences for router design are enormous: a 2.5 Gb/s link carrying 10,000 flows could reduce its buffers by 99% with negligible difference in throughput; and a 10 Gb/s link carrying 50,000 flows requires only 10 Mbits of buffering, which can easily be implemented using fast, on-chip SRAM.
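A quick re-computation of the buffer sizes quoted in the abstract, under the classical rule of thumb B = RTT × C and the proposed small-buffer result B = (RTT × C)/√n; the abstract rounds the second figure to 10 Mbits.

```python
# Worked check of the buffer-sizing numbers quoted above: the classical rule of
# thumb B = RTT * C versus the small-buffer result B = RTT * C / sqrt(n).

from math import sqrt

def rule_of_thumb_bits(rtt_s, capacity_bps):
    return rtt_s * capacity_bps

def small_buffer_bits(rtt_s, capacity_bps, n_flows):
    return rtt_s * capacity_bps / sqrt(n_flows)

rtt = 0.250                      # 250 ms average round-trip time
c = 10e9                         # 10 Gb/s link
print(rule_of_thumb_bits(rtt, c) / 1e9, "Gbits")           # 2.5 Gbits
print(small_buffer_bits(rtt, c, 50_000) / 1e6, "Mbits")    # ~11 Mbits, rounded to 10 Mbits in the abstract
```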

801 citations