Author

Keith Sklower

Bio: Keith Sklower is an academic researcher from the University of California, Berkeley. He has contributed to research topics including testbeds and wireless networks, has an h-index of 9, and has co-authored 14 publications receiving 857 citations.

Papers
01 Jan 1991
TL;DR: This data structure is general enough to encompass protocol-to-link-layer address translation, such as the Address Resolution Protocol (ARP) and the End System to Intermediate System Protocol (ES-IS), and should apply to any hierarchical routing scheme, such as source and quality-of-service routing, or choosing between multiple Datakits on a single system.
Abstract: Packet forwarding for OSI poses strong challenges for routing lookups: the algorithm must efficiently accommodate variable-length, and potentially very long, addresses. The 4.3 Reno release of Berkeley UNIX uses a reduced radix tree to make packet-forwarding decisions. This data structure is general enough to encompass protocol-to-link-layer address translation, such as the Address Resolution Protocol (ARP) and the End System to Intermediate System Protocol (ES-IS), and should apply to any hierarchical routing scheme, such as source and quality-of-service routing, or choosing between multiple Datakits on a single system. The system uses a message-oriented mechanism for communication between the kernel and user processes, both to maintain the routing database and to inform user processes of spontaneous events such as redirects, routing lookup failures, and suspected timeouts through gateways.
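The lookup problem the paper addresses can be illustrated with a much simpler structure than the reduced radix (PATRICIA) tree used in 4.3 Reno: a plain binary trie that walks an address bit by bit and remembers the deepest prefix seen. This is only a sketch of longest-prefix matching, with made-up route names; the actual kernel structure compresses one-child paths and handles variable-length keys.

```python
# Illustrative longest-prefix-match lookup with a binary trie.
# Simplified sketch, not the 4.3 Reno reduced radix tree: a real
# PATRICIA trie skips bits rather than storing one node per bit.

class TrieNode:
    def __init__(self):
        self.children = {}    # bit (0/1) -> child TrieNode
        self.next_hop = None  # set when a route's prefix ends here

def insert(root, prefix, plen, next_hop):
    """Insert a route: prefix is a 32-bit int, plen its length in bits."""
    node = root
    for i in range(plen):
        bit = (prefix >> (31 - i)) & 1
        node = node.children.setdefault(bit, TrieNode())
    node.next_hop = next_hop

def lookup(root, addr):
    """Return the next hop of the longest matching prefix, or None."""
    node, best = root, root.next_hop
    for i in range(32):
        node = node.children.get((addr >> (31 - i)) & 1)
        if node is None:
            break
        if node.next_hop is not None:
            best = node.next_hop  # remember the deepest match so far
    return best

def ip(s):
    a, b, c, d = (int(x) for x in s.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

root = TrieNode()
insert(root, ip("10.0.0.0"), 8, "gw-A")
insert(root, ip("10.1.0.0"), 16, "gw-B")
print(lookup(root, ip("10.1.2.3")))   # the more specific /16 wins: gw-B
print(lookup(root, ip("10.9.9.9")))   # falls back to the /8: gw-A
```

The key property, shared with the radix tree, is that a single walk down the tree finds the longest matching prefix without a separate pass per prefix length.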

178 citations

Proceedings ArticleDOI
05 Jul 2006
TL;DR: The DETER testbed provides unique resources and a focus of activity for an open community of academic, industry, and government researchers working toward better defenses against malicious attacks on the authors' networking infrastructure, especially critical infrastructure.
Abstract: The DETER testbed is shared infrastructure designed for medium-scale repeatable experiments in computer security, especially those experiments that involve malicious code. The testbed provides unique resources and a focus of activity for an open community of academic, industry, and government researchers working toward better defenses against malicious attacks on our networking infrastructure, especially critical infrastructure. This paper presents our experience with the deployment and operation of the testbed, highlights some of the research conducted on the testbed, and discusses our plans for continued development, expansion, and replication of the testbed facility.

166 citations

01 Nov 1994
TL;DR: This work was originally motivated by the desire to exploit multiple bearer channels in ISDN, but is equally applicable to any situation in which multiple PPP links connect two systems, including async links.
Abstract: This document proposes a method for splitting, recombining and sequencing datagrams across multiple logical data links. This work was originally motivated by the desire to exploit multiple bearer channels in ISDN, but is equally applicable to any situation in which multiple PPP links connect two systems, including async links. This is accomplished by means of new PPP [2] options and protocols.
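The splitting-and-sequencing idea can be sketched in a few lines: tag each fragment with a sequence number plus begin/end flags, spread the fragments across the links, and reassemble by sequence number at the far end. The field names and framing below are illustrative, not the actual PPP multilink wire format.

```python
# Sketch of multilink-style fragmentation: each fragment carries a
# sequence number plus begin/end flags, as in the PPP multilink scheme.
# Field names and sizes are illustrative, not the real wire format.

def fragment(datagram: bytes, nlinks: int, seq0: int = 0):
    """Split a datagram into up to nlinks fragments tagged for reassembly."""
    size = -(-len(datagram) // nlinks)  # ceiling division
    chunks = [datagram[i:i + size] for i in range(0, len(datagram), size)]
    return [
        {
            "seq": seq0 + i,            # monotonically increasing sequence
            "begin": i == 0,            # first fragment of the datagram
            "end": i == len(chunks) - 1,  # last fragment of the datagram
            "data": chunk,
        }
        for i, chunk in enumerate(chunks)
    ]

def reassemble(frags):
    """Reorder by sequence number and rejoin; return None on loss."""
    frags = sorted(frags, key=lambda f: f["seq"])
    if not frags or not frags[0]["begin"] or not frags[-1]["end"]:
        return None
    seqs = [f["seq"] for f in frags]
    if seqs != list(range(seqs[0], seqs[0] + len(seqs))):
        return None  # gap in the sequence: a fragment was lost on some link
    return b"".join(f["data"] for f in frags)

frags = fragment(b"hello multilink world", nlinks=3)
shuffled = [frags[2], frags[0], frags[1]]  # fragments arrive out of order
print(reassemble(shuffled))  # b'hello multilink world'
```

Because each link can reorder or lose data independently, the sequence number, rather than arrival order, drives reassembly; that is the core of the proposal.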

163 citations

Book ChapterDOI
26 Aug 2002
TL;DR: This paper presents a layered reference model for composition based on a classification of different kinds of composition, and discusses the different overarching mechanisms necessary for the successful deployment of such an architecture through a variety of case studies involving composition.
Abstract: Services are capabilities that enable applications and are of crucial importance to pervasive computing in next-generation networks. Service composition is the construction of complex services from primitive ones, enabling rapid and flexible creation of new services. The presence of multiple independent service providers poses new and significant challenges: managing trust across providers and verifying the performance of the components in a composition become essential issues, and adapting the composed service to network and user dynamics by choosing service providers and instances is yet another challenge. In SAHARA, we are developing a comprehensive architecture for the creation, placement, and management of services for composition across independent providers. In this paper, we present a layered reference model for composition based on a classification of different kinds of composition. We then discuss the different overarching mechanisms necessary for the successful deployment of such an architecture through a variety of case studies involving composition.

121 citations

Journal ArticleDOI
01 Jul 2000
TL;DR: A new timer is proposed, named the Eifel retransmission timer, that eliminates four major problems of TCP-Lite's retransmission timer; it is developed through model-based analysis and validated by measurements in a real network that yield the same results.
Abstract: We analyze two alternative retransmission timers for the Transmission Control Protocol (TCP). We first study the retransmission timer of TCP-Lite, which is considered the current de facto standard for TCP implementations. After revealing four major problems of TCP-Lite's retransmission timer, we propose a new timer, named the Eifel retransmission timer, that eliminates them. The strength of our work lies in its hybrid analysis methodology. We develop models of both retransmission timers for the class of network-limited TCP bulk data transfers in steady state. Using those models, we predict the problems of TCP-Lite's retransmission timer and develop the Eifel retransmission timer. We then validate our model-based analysis through measurements in a real network that yield the same results.
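For context, the style of timer under analysis can be sketched as the classic SRTT/RTTVAR estimator (Jacobson-style smoothing, later codified in RFC 6298). This is the conventional baseline, not the Eifel timer itself; the gains and minimum RTO below are the standard values.

```python
# Baseline TCP retransmission timeout (RTO) estimator of the kind
# analyzed in the paper: exponentially smoothed RTT plus a multiple
# of the smoothed deviation (RFC 6298 style). The Eifel timer is a
# redesign of this scheme; this sketch shows only the conventional one.

class RtoEstimator:
    ALPHA, BETA = 1 / 8, 1 / 4  # standard smoothing gains
    MIN_RTO = 1.0               # conventional 1-second floor

    def __init__(self):
        self.srtt = None    # smoothed round-trip time
        self.rttvar = None  # smoothed RTT deviation

    def sample(self, rtt: float) -> float:
        """Feed one RTT measurement; return the updated RTO in seconds."""
        if self.srtt is None:
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = (1 - self.BETA) * self.rttvar \
                + self.BETA * abs(self.srtt - rtt)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt
        return max(self.MIN_RTO, self.srtt + 4 * self.rttvar)

est = RtoEstimator()
for rtt in (0.20, 0.25, 0.22, 0.80):  # an RTT spike inflates the variance
    rto = est.sample(rtt)
print(round(rto, 3))
```

The variance term reacts sharply to the final spike, pushing the RTO above the 1-second floor; how this estimator over- and under-shoots in practice is exactly what the paper's models quantify.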

82 citations


Cited by
Journal ArticleDOI
TL;DR: This paper presents a middleware platform which addresses the issue of selecting Web services for the purpose of their composition in a way that maximizes user satisfaction expressed as utility functions over QoS attributes, while satisfying the constraints set by the user and by the structure of the composite service.
Abstract: The paradigmatic shift from a Web of manual interactions to a Web of programmatic interactions driven by Web services is creating unprecedented opportunities for the formation of online business-to-business (B2B) collaborations. In particular, the creation of value-added services by composition of existing ones is gaining a significant momentum. Since many available Web services provide overlapping or identical functionality, albeit with different quality of service (QoS), a choice needs to be made to determine which services are to participate in a given composite service. This paper presents a middleware platform which addresses the issue of selecting Web services for the purpose of their composition in a way that maximizes user satisfaction expressed as utility functions over QoS attributes, while satisfying the constraints set by the user and by the structure of the composite service. Two selection approaches are described and compared: one based on local (task-level) selection of services and the other based on global allocation of tasks to services using integer programming.
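The local (task-level) approach can be sketched as a per-task greedy choice: score each candidate with a weighted utility over normalized QoS attributes and keep the best feasible one. The attribute names, weights, and services below are invented for illustration; the paper's global approach instead solves an integer program across all tasks of the composite service.

```python
# Sketch of local (task-level) QoS-aware service selection: for one
# task, choose the candidate maximizing a weighted utility over
# normalized QoS attributes, subject to a per-task constraint.
# Attributes, weights, and service names here are illustrative.

def utility(svc, weights):
    # Lower price/latency is better, higher reliability is better;
    # all attributes are assumed pre-normalized to [0, 1].
    return (weights["price"] * (1 - svc["price"])
            + weights["latency"] * (1 - svc["latency"])
            + weights["reliability"] * svc["reliability"])

def select_local(candidates, weights, max_price):
    """Greedy per-task choice. A global optimum over all tasks would
    need e.g. integer programming, as in the paper's second approach."""
    feasible = [s for s in candidates if s["price"] <= max_price]
    return max(feasible, key=lambda s: utility(s, weights)) if feasible else None

candidates = [
    {"name": "svc-A", "price": 0.9, "latency": 0.1, "reliability": 0.99},
    {"name": "svc-B", "price": 0.3, "latency": 0.4, "reliability": 0.95},
    {"name": "svc-C", "price": 0.2, "latency": 0.8, "reliability": 0.80},
]
weights = {"price": 0.4, "latency": 0.3, "reliability": 0.3}
best = select_local(candidates, weights, max_price=0.5)
print(best["name"])  # svc-A is infeasible; svc-B beats svc-C on utility
```

Local selection is fast but can violate end-to-end constraints that span tasks (e.g. a total latency budget), which is the motivation for the global formulation.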

2,872 citations

Journal ArticleDOI
P. Bender, Peter J. Black, M. Grob, Roberto Padovani, N. Sindhushyana, S. Viterbi
TL;DR: The network architecture, based on Internet protocols adapted to the mobile environment, is described, followed by a discussion of economic considerations in comparison to cable and DSL services.
Abstract: This article presents an approach to providing very high-data-rate downstream Internet access by nomadic users within the current CDMA physical layer architecture. A means for considerably increasing the throughput by optimizing packet data protocols and by other network and coding techniques are presented and supported by simulations and laboratory measurements. The network architecture, based on Internet protocols adapted to the mobile environment, is described, followed by a discussion of economic considerations in comparison to cable and DSL services.

1,385 citations

01 Jan 1997
TL;DR: This document describes a protocol for carrying authentication, authorization, and configuration information between a Network Access Server which desires to authenticate its links and a shared Authentication Server.
Abstract: This document describes a protocol for carrying authentication, authorization, and configuration information between a Network Access Server which desires to authenticate its links and a shared Authentication Server.

1,289 citations

Proceedings Article
14 Aug 2013
TL;DR: ZMap is introduced, a modular, open-source network scanner specifically architected to perform Internet-wide scans and capable of surveying the entire IPv4 address space in under 45 minutes from user space on a single machine, approaching the theoretical maximum speed of gigabit Ethernet.
Abstract: Internet-wide network scanning has numerous security applications, including exposing new vulnerabilities and tracking the adoption of defensive mechanisms, but probing the entire public address space with existing tools is both difficult and slow. We introduce ZMap, a modular, open-source network scanner specifically architected to perform Internet-wide scans and capable of surveying the entire IPv4 address space in under 45 minutes from user space on a single machine, approaching the theoretical maximum speed of gigabit Ethernet. We present the scanner architecture, experimentally characterize its performance and accuracy, and explore the security implications of high speed Internet-scale network surveys, both offensive and defensive. We also discuss best practices for good Internet citizenship when performing Internet-wide surveys, informed by our own experiences conducting a long-term research survey over the past year.
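One key ZMap technique, visiting addresses in a pseudorandom order without keeping per-address state, can be sketched by iterating a cyclic multiplicative group modulo a prime slightly larger than the address space. The toy below uses a 16-address space with p = 17; the real scanner works modulo a prime just above 2^32 and skips the handful of padding values beyond the IPv4 space.

```python
# Sketch of ZMap-style pseudorandom address ordering: repeatedly
# multiplying by a primitive root g modulo a prime p visits every
# element of 1..p-1 exactly once, with no per-address bookkeeping.
# Toy parameters: a 16-"address" space and p = 17.

SPACE = 16  # size of the toy address space

def scan_order(p, g, start):
    """Yield 1..p-1 in the order generated by g (a primitive root
    mod p), skipping any padding values beyond the address space."""
    x = start
    while True:
        if x <= SPACE:      # skip values outside the address space
            yield x
        x = (x * g) % p
        if x == start:      # the cycle has closed: every value visited
            return

# 17 is prime and 3 is a primitive root mod 17, so powers of 3
# sweep all of 1..16 exactly once, in a scrambled order.
order = list(scan_order(p=17, g=3, start=1))
print(order)  # each of 1..16 appears exactly once
```

Because the whole permutation is determined by (p, g, start), a scan can be paused, resumed, or sharded across machines by sharing three integers instead of a visited-set.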

677 citations

Journal ArticleDOI
TL;DR: A survey of state-of-the-art IP address lookup algorithms is presented and their performance in terms of lookup speed, scalability, and update overhead is compared.
Abstract: Due to the rapid growth of traffic in the Internet, backbone links of several gigabits per second are commonly deployed. To handle gigabit-per-second traffic rates, the backbone routers must be able to forward millions of packets per second on each of their ports. Fast IP address lookup in the routers, which uses the packet's destination address to determine for each packet the next hop, is therefore crucial to achieve the packet forwarding rates required. IP address lookup is difficult because it requires a longest matching prefix search. In the last couple of years, various algorithms for high-performance IP address lookup have been proposed. We present a survey of state-of-the-art IP address lookup algorithms and compare their performance in terms of lookup speed, scalability, and update overhead.

577 citations