Journal ArticleDOI

The click modular router

01 Aug 2000 - ACM Transactions on Computer Systems (ACM) - Vol. 18, Iss. 3, pp. 263-297
TL;DR: On conventional PC hardware, the Click IP router achieves a maximum loss-free forwarding rate of 333,000 64-byte packets per second, demonstrating that Click's modular and flexible architecture is compatible with good performance.
Abstract: Click is a new software architecture for building flexible and configurable routers. A Click router is assembled from packet processing modules called elements. Individual elements implement simple router functions like packet classification, queuing, scheduling, and interfacing with network devices. A router configuration is a directed graph with elements at the vertices; packets flow along the edges of the graph. Several features make individual elements more powerful and complex configurations easier to write, including pull connections, which model packet flow driven by transmitting hardware devices, and flow-based router context, which helps an element locate other interesting elements. Click configurations are modular and easy to extend. A standards-compliant Click IP router has 16 elements on its forwarding path; some of its elements are also useful in Ethernet switches and IP tunnelling configurations. Extending the IP router to support dropping policies, fairness among flows, or Differentiated Services simply requires adding a couple of elements at the right place. On conventional PC hardware, the Click IP router achieves a maximum loss-free forwarding rate of 333,000 64-byte packets per second, demonstrating that Click's modular and flexible architecture is compatible with good performance.
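
The element/connection model is easy to picture with a toy sketch. The snippet below is not Click itself (Click elements are C++ classes composed with Click's own configuration language, as in the quoted fragments further down this page); it is a minimal Python model, with invented class names, of how a push path hands packets into a Queue and how a pull path, driven by the transmitting device, drains it.

# Toy model of Click's processing graph: push connections move packets
# downstream toward a Queue; pull connections let a downstream element
# (e.g. transmit hardware) drain the Queue when it is ready. Hypothetical
# names; real Click elements are C++ classes wired by Click's config language.
from collections import deque

class Counter:
    """Push element: counts packets, then pushes them to its output."""
    def __init__(self, output):
        self.count = 0
        self.output = output
    def push(self, packet):
        self.count += 1
        self.output.push(packet)

class Queue:
    """Push input, pull output: the boundary between push and pull paths."""
    def __init__(self, capacity=1000):
        self.packets = deque()
        self.capacity = capacity
    def push(self, packet):
        if len(self.packets) < self.capacity:
            self.packets.append(packet)      # else: tail drop
    def pull(self):
        return self.packets.popleft() if self.packets else None

class ToDevice:
    """Pull element: models transmit hardware that asks for packets."""
    def __init__(self, upstream):
        self.upstream = upstream
        self.sent = []
    def run(self):
        pkt = self.upstream.pull()           # pull only when ready to send
        if pkt is not None:
            self.sent.append(pkt)

# Wire a tiny graph: Counter -> Queue -> ToDevice
q = Queue()
counter = Counter(output=q)
dev = ToDevice(upstream=q)
counter.push(b"packet-1")                    # push path (e.g. from a FromDevice)
dev.run()                                    # pull path (device is ready)
print(counter.count, dev.sent)               # -> 1 [b'packet-1']
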
Citations
Journal ArticleDOI
31 Mar 2008
TL;DR: This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day, based on an Ethernet switch, with an internal flow-table, and a standardized interface to add and remove flow entries.
Abstract: This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day. OpenFlow is based on an Ethernet switch, with an internal flow-table, and a standardized interface to add and remove flow entries. Our goal is to encourage networking vendors to add OpenFlow to their switch products for deployment in college campus backbones and wiring closets. We believe that OpenFlow is a pragmatic compromise: on one hand, it allows researchers to run experiments on heterogeneous switches in a uniform way at line-rate and with high port-density; while on the other hand, vendors do not need to expose the internal workings of their switches. In addition to allowing researchers to evaluate their ideas in real-world traffic settings, OpenFlow could serve as a useful campus component in proposed large-scale testbeds like GENI. Two buildings at Stanford University will soon run OpenFlow networks, using commercial Ethernet switches and routers. We will work to encourage deployment at other schools, and we encourage you to consider deploying OpenFlow in your university network too.
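
To make the flow-table idea concrete, here is a minimal sketch of an OpenFlow-style table: each entry pairs a header match with an action and counters, and entries are added or removed through a small interface. The structure and field names are simplified illustrations, not the actual OpenFlow protocol messages.

# Simplified model of an OpenFlow-style flow table: match -> action, with
# per-entry counters. Field names are illustrative, not OpenFlow's wire format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlowEntry:
    match: dict                 # e.g. {"in_port": 1, "ip_dst": "10.0.0.2"}
    action: str                 # e.g. "output:2" or "drop"
    priority: int = 0
    packets: int = 0            # counter updated on each hit

class FlowTable:
    def __init__(self):
        self.entries: list = []
    def add(self, entry: FlowEntry):          # controller adds a flow entry
        self.entries.append(entry)
        self.entries.sort(key=lambda e: -e.priority)
    def remove(self, match: dict):            # controller removes entries
        self.entries = [e for e in self.entries if e.match != match]
    def lookup(self, packet: dict) -> Optional[FlowEntry]:
        for e in self.entries:                # highest priority first
            if all(packet.get(k) == v for k, v in e.match.items()):
                e.packets += 1
                return e
        return None                           # miss: would go to the controller

table = FlowTable()
table.add(FlowEntry(match={"ip_dst": "10.0.0.2"}, action="output:2", priority=10))
hit = table.lookup({"in_port": 1, "ip_dst": "10.0.0.2"})
print(hit.action if hit else "send to controller")
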

9,138 citations

Proceedings ArticleDOI
14 Sep 2003
TL;DR: Measurements taken from a 29-node 802.11b test-bed demonstrate the poor performance of minimum hop-count, illustrate the causes of that poor performance, and confirm that ETX improves performance.
Abstract: This paper presents the expected transmission count metric (ETX), which finds high-throughput paths on multi-hop wireless networks. ETX minimizes the expected total number of packet transmissions (including retransmissions) required to successfully deliver a packet to the ultimate destination. The ETX metric incorporates the effects of link loss ratios, asymmetry in the loss ratios between the two directions of each link, and interference among the successive links of a path. In contrast, the minimum hop-count metric chooses arbitrarily among the different paths of the same minimum length, regardless of the often large differences in throughput among those paths, and ignoring the possibility that a longer path might offer higher throughput. This paper describes the design and implementation of ETX as a metric for the DSDV and DSR routing protocols, as well as modifications to DSDV and DSR which allow them to use ETX. Measurements taken from a 29-node 802.11b test-bed demonstrate the poor performance of minimum hop-count, illustrate the causes of that poor performance, and confirm that ETX improves performance. For long paths the throughput improvement is often a factor of two or more, suggesting that ETX will become more useful as networks grow larger and paths become longer.
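
The metric itself is compact: the paper defines a link's ETX as 1/(df x dr), where df and dr are the forward and reverse delivery ratios measured with periodic broadcast probes, and a path's ETX as the sum of its links' values. A minimal sketch (helper names are illustrative):

# Sketch of the ETX computation: link ETX = 1 / (df * dr); path ETX is the
# sum of the per-link values. df and dr are forward/reverse delivery ratios.

def link_etx(df: float, dr: float) -> float:
    """Expected transmissions (incl. retransmissions) to deliver one packet."""
    if df <= 0 or dr <= 0:
        return float("inf")        # link unusable in one direction
    return 1.0 / (df * dr)

def path_etx(links) -> float:
    """ETX of a multi-hop path = sum of per-link ETX values."""
    return sum(link_etx(df, dr) for df, dr in links)

# A 3-hop path with mildly lossy links beats a 2-hop path with one bad link:
print(path_etx([(0.9, 0.9), (0.9, 0.9), (0.9, 0.9)]))   # ~3.70
print(path_etx([(1.0, 1.0), (0.4, 0.5)]))               # ~6.00
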

3,656 citations


Cites methods from "The click modular router"

  • ...The routing system in which ETX is implemented has four main parts: the Click toolkit [19], and Click-based implementations of DSDV, DSR, and the ETX link measurement algorithms....

Journal ArticleDOI
17 Aug 2008
TL;DR: This paper shows how to leverage largely commodity Ethernet switches to support the full aggregate bandwidth of clusters consisting of tens of thousands of elements and argues that appropriately architected and interconnected commodity switches may deliver more performance at less cost than available from today's higher-end solutions.
Abstract: Today's data centers may contain tens of thousands of computers with significant aggregate bandwidth requirements. The network architecture typically consists of a tree of routing and switching elements with progressively more specialized and expensive equipment moving up the network hierarchy. Unfortunately, even when deploying the highest-end IP switches/routers, resulting topologies may only support 50% of the aggregate bandwidth available at the edge of the network, while still incurring tremendous cost. Non-uniform bandwidth among data center nodes complicates application design and limits overall system performance. In this paper, we show how to leverage largely commodity Ethernet switches to support the full aggregate bandwidth of clusters consisting of tens of thousands of elements. Similar to how clusters of commodity computers have largely replaced more specialized SMPs and MPPs, we argue that appropriately architected and interconnected commodity switches may deliver more performance at less cost than available from today's higher-end solutions. Our approach requires no modifications to the end host network interface, operating system, or applications; critically, it is fully backward compatible with Ethernet, IP, and TCP.
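
For scale, the paper's three-stage fat-tree topology built from identical k-port switches uses k pods (each with k/2 edge and k/2 aggregation switches) plus (k/2)^2 core switches, and supports k^3/4 hosts at full bisection bandwidth. A quick sketch of that arithmetic (a summary of the paper's construction, not code from it):

# Capacity arithmetic for a k-ary fat tree built from identical k-port
# switches: k pods of k switches (k/2 edge + k/2 aggregation), (k/2)^2 core
# switches, and k^3/4 hosts.

def fat_tree_size(k: int) -> dict:
    assert k % 2 == 0, "port count k must be even"
    edge = agg = k * (k // 2)          # per-layer switch counts across all pods
    core = (k // 2) ** 2
    hosts = (k ** 3) // 4              # k pods * k/2 edge switches * k/2 hosts
    return {"hosts": hosts, "switches": edge + agg + core}

print(fat_tree_size(48))   # {'hosts': 27648, 'switches': 2880}
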

3,549 citations

Proceedings ArticleDOI
Michael Isard, Mihai Budiu, Yuan Yu, Andrew Birrell, Dennis Fetterly
21 Mar 2007
TL;DR: The Dryad execution engine handles all the difficult problems of creating a large distributed, concurrent application: scheduling the use of computers and their CPUs, recovering from communication or computer failures, and transporting data between vertices.
Abstract: Dryad is a general-purpose distributed execution engine for coarse-grain data-parallel applications. A Dryad application combines computational "vertices" with communication "channels" to form a dataflow graph. Dryad runs the application by executing the vertices of this graph on a set of available computers, communicating as appropriate through files, TCP pipes, and shared-memory FIFOs. The vertices provided by the application developer are quite simple and are usually written as sequential programs with no thread creation or locking. Concurrency arises from Dryad scheduling vertices to run simultaneously on multiple computers, or on multiple CPU cores within a computer. The application can discover the size and placement of data at run time, and modify the graph as the computation progresses to make efficient use of the available resources. Dryad is designed to scale from powerful multi-core single computers, through small clusters of computers, to data centers with thousands of computers. The Dryad execution engine handles all the difficult problems of creating a large distributed, concurrent application: scheduling the use of computers and their CPUs, recovering from communication or computer failures, and transporting data between vertices.
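
As an illustration of the dataflow model described above (vertices as simple sequential programs, channels carrying data between them), here is a toy single-process sketch; the class and function names are invented for illustration and are not Dryad's actual API.

# Toy model of a Dryad-style dataflow graph: vertices are plain sequential
# functions, channels carry records between them, and a tiny "engine" runs
# the vertices in topological order. Names are illustrative only.
from collections import deque

class Channel:
    """A communication edge between two vertices (here just a buffer)."""
    def __init__(self):
        self.buf = deque()
    def write(self, item):
        self.buf.append(item)
    def read_all(self):
        items = list(self.buf)
        self.buf.clear()
        return items

def source(ins, outs):
    for x in range(5):
        outs[0].write(x)

def square(ins, outs):
    for x in ins[0]:
        outs[0].write(x * x)

results = []
def sink(ins, outs):
    results.extend(ins[0])

a, b = Channel(), Channel()
# vertex name -> (function, input channels, output channels)
graph = {"source": (source, [], [a]),
         "square": (square, [a], [b]),
         "sink":   (sink, [b], [])}

def run_graph(graph, order):
    """A real engine would schedule independent vertices concurrently and
    re-run failed ones; here we simply execute them in topological order."""
    for name in order:
        fn, ins, outs = graph[name]
        fn([c.read_all() for c in ins], outs)

run_graph(graph, ["source", "square", "sink"])
print(results)   # [0, 1, 4, 9, 16]
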

2,867 citations

Journal ArticleDOI
28 Jul 2014
TL;DR: This paper proposes P4 as a strawman proposal for how OpenFlow should evolve in the future, and describes how to use P4 to configure a switch to add a new hierarchical label.
Abstract: P4 is a high-level language for programming protocol-independent packet processors. P4 works in conjunction with SDN control protocols like OpenFlow. In its current form, OpenFlow explicitly specifies protocol headers on which it operates. This set has grown from 12 to 41 fields in a few years, increasing the complexity of the specification while still not providing the flexibility to add new headers. In this paper we propose P4 as a strawman proposal for how OpenFlow should evolve in the future. We have three goals: (1) Reconfigurability in the field: Programmers should be able to change the way switches process packets once they are deployed. (2) Protocol independence: Switches should not be tied to any specific network protocols. (3) Target independence: Programmers should be able to describe packet-processing functionality independently of the specifics of the underlying hardware. As an example, we describe how to use P4 to configure a switch to add a new hierarchical label.
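
One way to see the protocol-independence goal: the programmer declares header fields and the parse structure abstractly, and match-action tables are compiled against those declared fields rather than a fixed header set. The sketch below models that idea in Python; it is not P4 syntax, and the names are illustrative only.

# Toy model of the protocol-independent idea behind P4: header fields are
# declared (name -> bit offset/width) rather than fixed by the switch, so a
# new header can be matched on without hardware changes. Not P4 syntax.

def declare_header(fields):
    """fields: list of (name, width_bits) in wire order -> name -> (offset, width)."""
    layout, off = {}, 0
    for name, width in fields:
        layout[name] = (off, width)
        off += width
    return layout

def extract(layout, packet_bits: int, total_bits: int, field: str) -> int:
    """Pull one declared field out of a packed bit string."""
    off, width = layout[field]
    shift = total_bits - off - width
    return (packet_bits >> shift) & ((1 << width) - 1)

# Declare a new two-field hierarchical label (illustrative; the paper's
# example label carries more fields) and parse it from 16 packed bits.
my_label = declare_header([("up", 8), ("down", 8)])
pkt = 0x0A02                      # up=10, down=2
print(extract(my_label, pkt, 16, "up"), extract(my_label, pkt, 16, "down"))  # 10 2
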

2,214 citations


Cites background or methods from "The click modular router"

  • ...The Click modular router [10] supports flexible packet processing in software, but does not map programs to a variety of target hardware switches....

  • ...We could use a language such as Click [10], which builds switches from modules composed of arbitrary C++....

References
Book
01 Jan 1986
TL;DR: This book discusses the design of a Code Generator, the role of the Lexical Analyzer, and other topics related to code generation and optimization.
Abstract:
1 Introduction 1.1 Language Processors 1.2 The Structure of a Compiler 1.3 The Evolution of Programming Languages 1.4 The Science of Building a Compiler 1.5 Applications of Compiler Technology 1.6 Programming Language Basics 1.7 Summary of Chapter 1 1.8 References for Chapter 1
2 A Simple Syntax-Directed Translator 2.1 Introduction 2.2 Syntax Definition 2.3 Syntax-Directed Translation 2.4 Parsing 2.5 A Translator for Simple Expressions 2.6 Lexical Analysis 2.7 Symbol Tables 2.8 Intermediate Code Generation 2.9 Summary of Chapter 2
3 Lexical Analysis 3.1 The Role of the Lexical Analyzer 3.2 Input Buffering 3.3 Specification of Tokens 3.4 Recognition of Tokens 3.5 The Lexical-Analyzer Generator Lex 3.6 Finite Automata 3.7 From Regular Expressions to Automata 3.8 Design of a Lexical-Analyzer Generator 3.9 Optimization of DFA-Based Pattern Matchers 3.10 Summary of Chapter 3 3.11 References for Chapter 3
4 Syntax Analysis 4.1 Introduction 4.2 Context-Free Grammars 4.3 Writing a Grammar 4.4 Top-Down Parsing 4.5 Bottom-Up Parsing 4.6 Introduction to LR Parsing: Simple LR 4.7 More Powerful LR Parsers 4.8 Using Ambiguous Grammars 4.9 Parser Generators 4.10 Summary of Chapter 4 4.11 References for Chapter 4
5 Syntax-Directed Translation 5.1 Syntax-Directed Definitions 5.2 Evaluation Orders for SDD's 5.3 Applications of Syntax-Directed Translation 5.4 Syntax-Directed Translation Schemes 5.5 Implementing L-Attributed SDD's 5.6 Summary of Chapter 5 5.7 References for Chapter 5
6 Intermediate-Code Generation 6.1 Variants of Syntax Trees 6.2 Three-Address Code 6.3 Types and Declarations 6.4 Translation of Expressions 6.5 Type Checking 6.6 Control Flow 6.7 Backpatching 6.8 Switch-Statements 6.9 Intermediate Code for Procedures 6.10 Summary of Chapter 6 6.11 References for Chapter 6
7 Run-Time Environments 7.1 Storage Organization 7.2 Stack Allocation of Space 7.3 Access to Nonlocal Data on the Stack 7.4 Heap Management 7.5 Introduction to Garbage Collection 7.6 Introduction to Trace-Based Collection 7.7 Short-Pause Garbage Collection 7.8 Advanced Topics in Garbage Collection 7.9 Summary of Chapter 7 7.10 References for Chapter 7
8 Code Generation 8.1 Issues in the Design of a Code Generator 8.2 The Target Language 8.3 Addresses in the Target Code 8.4 Basic Blocks and Flow Graphs 8.5 Optimization of Basic Blocks 8.6 A Simple Code Generator 8.7 Peephole Optimization 8.8 Register Allocation and Assignment 8.9 Instruction Selection by Tree Rewriting 8.10 Optimal Code Generation for Expressions 8.11 Dynamic Programming Code-Generation 8.12 Summary of Chapter 8 8.13 References for Chapter 8
9 Machine-Independent Optimizations 9.1 The Principal Sources of Optimization 9.2 Introduction to Data-Flow Analysis 9.3 Foundations of Data-Flow Analysis 9.4 Constant Propagation 9.5 Partial-Redundancy Elimination 9.6 Loops in Flow Graphs 9.7 Region-Based Analysis 9.8 Symbolic Analysis 9.9 Summary of Chapter 9 9.10 References for Chapter 9
10 Instruction-Level Parallelism 10.1 Processor Architectures 10.2 Code-Scheduling Constraints 10.3 Basic-Block Scheduling 10.4 Global Code Scheduling 10.5 Software Pipelining 10.6 Summary of Chapter 10 10.7 References for Chapter 10
11 Optimizing for Parallelism and Locality 11.1 Basic Concepts 11.2 Matrix Multiply: An In-Depth Example 11.3 Iteration Spaces 11.4 Affine Array Indexes 11.5 Data Reuse 11.6 Array Data-Dependence Analysis 11.7 Finding Synchronization-Free Parallelism 11.8 Synchronization Between Parallel Loops 11.9 Pipelining 11.10 Locality Optimizations 11.11 Other Uses of Affine Transforms 11.12 Summary of Chapter 11 11.13 References for Chapter 11
12 Interprocedural Analysis 12.1 Basic Concepts 12.2 Why Interprocedural Analysis? 12.3 A Logical Representation of Data Flow 12.4 A Simple Pointer-Analysis Algorithm 12.5 Context-Insensitive Interprocedural Analysis 12.6 Context-Sensitive Pointer Analysis 12.7 Datalog Implementation by BDD's 12.8 Summary of Chapter 12 12.9 References for Chapter 12
A A Complete Front End A.1 The Source Language A.2 Main A.3 Lexical Analyzer A.4 Symbol Tables and Types A.5 Intermediate Code for Expressions A.6 Jumping Code for Boolean Expressions A.7 Intermediate Code for Statements A.8 Parser A.9 Creating the Front End
B Finding Linearly Independent Solutions
Index

8,437 citations


"The click modular router" refers methods in this paper

  • ...elementclass MaybeChecksum { $checksum_p |
        input -> sw :: StaticSwitch($checksum_p);
        sw[0] -> CheckIPHeader2 -> output;
        sw[1] -> CheckIPHeader -> output;
      };
      c1 :: MaybeChecksum(0); // uses CheckIPHeader2, skips checksum
      c2 :: MaybeChecksum(1); // uses CheckIPHeader, checks checksum...

  • ...Click-align calculates alignments using a data flow analysis resembling availability analysis [1]....

  • ...);
      rr :: RoundRobinSched;
      input -> hash;
      hash[0] -> Queue -> [0]rr;
      hash[1] -> Queue -> [1]rr;
      rr -> output;
    } HashSwitch(....

  • ...elementclass VerboseCheckIPHeader {
        input -> c :: CheckIPHeader -> output;
        c[1] -> Print(CheckIPHeader) -> Discard;
      ||
        input -> c :: CheckIPHeader -> output;
        c[1] -> Print(CheckIPHeader) -> [1]output;
      }...

  • ...);
      rr :: RoundRobinSched;
      input -> hash;
      hash[0] -> Queue -> [0]rr;
      hash[1] -> Queue -> [1]rr;
      rr -> output;
    }

01 Dec 1998
TL;DR: An architecture for implementing scalable service differentiation in the Internet achieves scalability by aggregating traffic classification state which is conveyed by means of IP-layer packet marking using the DS field [DSFIELD].
Abstract: This document defines an architecture for implementing scalable service differentiation in the Internet. This architecture achieves scalability by aggregating traffic classification state which is conveyed by means of IP-layer packet marking using the DS field [DSFIELD]. Packets are classified and marked to receive a particular per-hop forwarding behavior on nodes along their path. Sophisticated classification, marking, policing, and shaping operations need only be implemented at network boundaries or hosts. Network resources are allocated to traffic streams by service provisioning policies which govern how traffic is marked and conditioned upon entry to a differentiated services-capable network, and how that traffic is forwarded within that network. A wide variety of services can be implemented on top of these building blocks.
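
Concretely, the DS field re-uses the IPv4 TOS octet (and the IPv6 Traffic Class octet): the six high-order bits carry the codepoint (DSCP) that selects a per-hop behavior, and the low two bits are not used by Differentiated Services. A small sketch of marking the DS octet (constants and helper names are illustrative):

# Sketch of DS-field marking: the DiffServ codepoint (DSCP) occupies the six
# high-order bits of the old IPv4 TOS / IPv6 Traffic Class octet; the low two
# bits are left alone by DiffServ.

EF = 46        # Expedited Forwarding codepoint
AF11 = 10      # Assured Forwarding class 1, low drop precedence
BEST_EFFORT = 0

def mark_ds_byte(dscp: int) -> int:
    """Return the DS octet for a given 6-bit codepoint."""
    assert 0 <= dscp < 64
    return dscp << 2               # low 2 bits left as zero

def dscp_of(ds_byte: int) -> int:
    return ds_byte >> 2

print(hex(mark_ds_byte(EF)))       # 0xb8, the classic EF marking
print(dscp_of(0xb8))               # 46
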

6,226 citations


"The click modular router" refers background in this paper

  • ...Extending the IP router to support dropping policies, fairness among flows, or Differentiated Services simply requires adding a couple of elements at the right place....

  • ...Definition of the Differentiated Services field (DS field) in the IPv4 and IPv6 headers....

  • ...…fundamental properties like packet dropping policies are still under active research [Floyd and Jacobson 1993; Lakshman et al. 1996; Cisco Corporation 1999], and initiatives like Differentiated Services [Blake et al. 1998] bring the need for flexibility close to the core of the Internet....

  • ...IP router extensions for scheduling and dropping policies, complex queuing, and Differentiated Services simply require adding and rearranging a couple of elements....

  • ...6.5 Differentiated Services Evaluation Section 4.4 showed that Click can conveniently model Differentiated Services configurations; this section shows that Click can enforce diffserv policies using only packet scheduling elements like PrioSched....

Journal ArticleDOI
TL;DR: RED gateways are designed to accompany a transport-layer congestion control protocol such as TCP; they have no bias against bursty traffic and avoid the global synchronization of many connections decreasing their windows at the same time.
Abstract: The authors present random early detection (RED) gateways for congestion avoidance in packet-switched networks. The gateway detects incipient congestion by computing the average queue size. The gateway could notify connections of congestion either by dropping packets arriving at the gateway or by setting a bit in packet headers. When the average queue size exceeds a preset threshold, the gateway drops or marks each arriving packet with a certain probability, where the exact probability is a function of the average queue size. RED gateways keep the average queue size low while allowing occasional bursts of packets in the queue. During congestion, the probability that the gateway notifies a particular connection to reduce its window is roughly proportional to that connection's share of the bandwidth through the gateway. RED gateways are designed to accompany a transport-layer congestion control protocol such as TCP. The RED gateway has no bias against bursty traffic and avoids the global synchronization of many connections decreasing their window at the same time. Simulations of a TCP/IP network are used to illustrate the performance of RED gateways.
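
The core mechanism is short enough to sketch: keep an exponentially weighted moving average of the queue size and, between the two thresholds, drop or mark with a probability that rises linearly with that average. The sketch below is simplified; the paper's full algorithm also scales the probability by the count of packets since the last drop and handles idle periods.

# Simplified RED drop decision: EWMA of queue size; never drop below min_th,
# always drop above max_th, and in between drop with probability that grows
# linearly with the average. Count-based adjustment and idle handling omitted.
import random

class RedQueueSketch:
    def __init__(self, min_th=5, max_th=15, max_p=0.02, weight=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.weight = max_p, weight
        self.avg = 0.0

    def on_arrival(self, queue_len: int) -> bool:
        """Return True if this arriving packet should be dropped or marked."""
        # exponentially weighted moving average of the instantaneous queue length
        self.avg = (1 - self.weight) * self.avg + self.weight * queue_len
        if self.avg < self.min_th:
            return False
        if self.avg >= self.max_th:
            return True
        # linear ramp of drop probability between the two thresholds
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p

red = RedQueueSketch()
drops = sum(red.on_arrival(queue_len=20) for _ in range(10000))
print("dropped/marked:", drops)
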

6,198 citations

Journal ArticleDOI
TL;DR: A new algorithm is introduced that attains efficiency by inferentially eliminating successor nodes in the brute-force tree search used to determine subgraph isomorphism; a parallel asynchronous logic-in-memory implementation of a vital part of the algorithm is also described.
Abstract: Subgraph isomorphism can be determined by means of a brute-force tree-search enumeration procedure. In this paper a new algorithm is introduced that attains efficiency by inferentially eliminating successor nodes in the tree search. To assess the time actually taken by the new algorithm, subgraph isomorphism, clique detection, graph isomorphism, and directed graph isomorphism experiments have been carried out with random and with various nonrandom graphs. A parallel asynchronous logic-in-memory implementation of a vital part of the algorithm is also described, although this hardware has not actually been built. The hardware implementation would allow very rapid determination of isomorphism.
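
For intuition, the underlying tree search can be sketched as a backtracking assignment of pattern vertices to host vertices, rejecting assignments that violate adjacency; Ullmann's algorithm adds a refinement procedure that prunes the candidate sets much more aggressively at each node of this tree, which is not reproduced in the simplified sketch below.

# Simplified backtracking search for subgraph isomorphism: map each pattern
# vertex to a distinct host vertex so that every pattern edge maps to a host
# edge. Only basic degree/adjacency pruning is shown; Ullmann's refinement
# step is omitted.

def subgraph_isomorphism(pattern_adj, host_adj):
    """Adjacency as dict: vertex -> set of neighbours. Returns a mapping
    pattern vertex -> host vertex, or None if no embedding exists."""
    p_vertices = list(pattern_adj)

    def extend(mapping):
        if len(mapping) == len(p_vertices):
            return dict(mapping)
        u = p_vertices[len(mapping)]
        for v in host_adj:
            if v in mapping.values():
                continue                      # mapping must be injective
            if len(host_adj[v]) < len(pattern_adj[u]):
                continue                      # degree-based pruning
            # every already-mapped neighbour of u must map to a neighbour of v
            if all(mapping[w] in host_adj[v]
                   for w in pattern_adj[u] if w in mapping):
                mapping[u] = v
                result = extend(mapping)
                if result is not None:
                    return result
                del mapping[u]
        return None

    return extend({})

# Find a triangle inside a 4-cycle with one chord:
pattern = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
host = {1: {2, 4}, 2: {1, 3, 4}, 3: {2, 4}, 4: {1, 2, 3}}
print(subgraph_isomorphism(pattern, host))    # e.g. {'a': 1, 'b': 2, 'c': 4}
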

2,319 citations


"The click modular router" refers background in this paper

  • ...Luckily, the patterns and router configurations seen in practice are well-served by Ullmann’s algorithm for subgraph isomorphism [49], and click-xform’s observed performance is good, taking about one minute to make hundreds of replacements on router graphs with thousands of elements (and much less on normal-sized routers)....

01 Sep 1981
TL;DR: Along with TCP, IP represents the heart of the Internet protocols and has two primary responsibilities: providing connectionless, best-effort delivery of datagrams through an internetwork; and providing fragmentation and reassembly of datagrams to support data links with different maximum transmission unit (MTU) sizes.
Abstract: IP is a network layer (Layer 3) protocol that contains addressing information and some control information that enables packets to be routed. IP is documented in RFC 791 and is the primary network layer protocol in the Internet protocol suite. Along with TCP, IP represents the heart of the Internet protocols. IP has two primary responsibilities: providing connectionless, best-effort delivery of datagrams through an internetwork; and providing fragmentation and reassembly of datagrams to support data links with different maximum transmission unit (MTU) sizes.
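
The fragmentation arithmetic the abstract refers to is mechanical: every fragment except the last carries a payload that is a multiple of 8 bytes (the fragment-offset field counts 8-byte units), offsets are stored in those units, and the more-fragments flag is set on all but the last fragment. A small sketch, assuming a 20-byte header with no options:

# Sketch of IPv4 fragmentation arithmetic: fragments (except the last) carry a
# multiple of 8 payload bytes, offsets are stored in 8-byte units, and the
# more-fragments (MF) flag is set on all but the last fragment.
# Assumes a 20-byte header with no options.

def fragment(payload_len: int, mtu: int, header_len: int = 20):
    """Return a list of (offset_in_8byte_units, fragment_payload_len, mf_flag)."""
    max_frag = ((mtu - header_len) // 8) * 8   # round down to an 8-byte multiple
    frags, offset = [], 0
    while payload_len > 0:
        take = min(max_frag, payload_len)
        more = payload_len > take
        frags.append((offset // 8, take, more))
        offset += take
        payload_len -= take
    return frags

# A 4000-byte datagram payload over a 1500-byte MTU link:
for off, length, mf in fragment(4000, 1500):
    print(f"offset={off:4d} (x8 bytes)  len={length:4d}  MF={int(mf)}")
# offset=   0 (x8 bytes)  len=1480  MF=1
# offset= 185 (x8 bytes)  len=1480  MF=1
# offset= 370 (x8 bytes)  len=1040  MF=0
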

1,967 citations


"The click modular router" refers background in this paper

  • ...AN IP ROUTER This section presents a real Click router configuration: an IP router that forwards unicast packets in nearly full compliance with the standards [Baker 1995; Postel 1981a; 1981b]....

  • ...This section presents a real Click router configuration: an IP router that forwards unicast packets in nearly full compliance with the standards [Baker 1995; Postel 1981a; Postel 1981b]....
