Author

Mihai Dobrescu

Bio: Mihai Dobrescu is an academic researcher from École Polytechnique Fédérale de Lausanne. The author has contributed to research in the topics of Multi-core processor and Router. The author has an h-index of 7 and has co-authored 10 publications receiving 912 citations.

Papers
Proceedings ArticleDOI
11 Oct 2009
TL;DR: This work proposes a software router architecture that parallelizes router functionality both across multiple servers and across multiple cores within a single server, and demonstrates a 35Gbps parallel router prototype.
Abstract: We revisit the problem of scaling software routers, motivated by recent advances in server technology that enable high-speed parallel processing--a feature router workloads appear ideally suited to exploit. We propose a software router architecture that parallelizes router functionality both across multiple servers and across multiple cores within a single server. By carefully exploiting parallelism at every opportunity, we demonstrate a 35Gbps parallel router prototype; this router capacity can be linearly scaled through the use of additional servers. Our prototype router is fully programmable using the familiar Click/Linux environment and is built entirely from off-the-shelf, general-purpose server hardware.
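To make the intra-server parallelism concrete, here is a minimal Python sketch of the general idea of spreading packets across cores by hashing each flow to a per-core queue, so that all packets of a flow stay on one core. It is only an illustration under that assumption; the actual prototype is built with Click/Linux and NIC hardware queues, and the field names and forward() helper below are hypothetical.

import hashlib
from collections import deque

NUM_CORES = 4
queues = [deque() for _ in range(NUM_CORES)]

def dispatch(packet):
    # Hash the flow identifier so every packet of a flow lands on the same core.
    flow = (packet["src_ip"], packet["dst_ip"], packet["src_port"],
            packet["dst_port"], packet["proto"])
    h = int(hashlib.sha256(repr(flow).encode()).hexdigest(), 16)
    queues[h % NUM_CORES].append(packet)

def forward(packet):
    pass  # placeholder for per-packet work (lookup, rewrite, transmit)

def worker(core_id):
    # Each core drains only its own queue, so cores share no per-packet state.
    while queues[core_id]:
        forward(queues[core_id].popleft())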

619 citations

Proceedings Article
25 Apr 2012
TL;DR: This work presents a general-purpose packet-processing system that combines ease of programmability with predictable performance, while running a diverse set of applications and serving multiple clients with different needs, and constitutes the first evidence that, when designing software network equipment, flexibility and predictability are not mutually exclusive goals.
Abstract: To become a credible alternative to specialized hardware, general-purpose networking needs to offer not only flexibility, but also predictable performance. Recent projects have demonstrated that general-purpose multicore hardware is capable of high-performance packet processing, but under a crucial simplifying assumption of uniformity: all processing cores see the same type/amount of traffic and run identical code, while all packets incur the same type of conventional processing (e.g., IP forwarding). Instead, we present a general-purpose packet-processing system that combines ease of programmability with predictable performance, while running a diverse set of applications and serving multiple clients with different needs. Offering predictability in this context is considered a hard problem because software processes contend for shared hardware resources--caches, memory controllers, buses--in unpredictable ways. Still, we show that, in our system, (a) the way in which resource contention affects performance is predictable and (b) the overall performance depends little on how different processes are scheduled on different cores. To the best of our knowledge, our results constitute the first evidence that, when designing software network equipment, flexibility and predictability are not mutually exclusive goals.
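As a rough illustration of why contention can still be predictable, the sketch below scales an application's solo throughput by a slowdown term derived from its co-runners' memory traffic. The linear model, the sensitivity constant, and the example numbers are assumptions made for this sketch only; they are not the paper's methodology or measurements.

def predicted_throughput(solo_throughput, corunner_misses_per_sec, sensitivity=1e-9):
    # Assume slowdown grows linearly with the cache misses generated by co-runners.
    slowdown = 1.0 + sensitivity * corunner_misses_per_sec
    return solo_throughput / slowdown

# Hypothetical example: a process forwarding 5 Mpps when run alone, sharing the
# memory system with co-runners that generate 200M misses/sec in total.
print(predicted_throughput(5_000_000, 200_000_000))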

113 citations

Journal ArticleDOI
TL;DR: In this article, the authors present a domain-specific verification tool for packet-processing software and apply it to Click pipelines, verifying that a network device behaves and performs as expected, for example, does not crash or enter an infinite loop.
Abstract: The industry is in the mood for programmable networks, where an operator can dynamically deploy network functions on network devices, akin to how one deploys virtual machines on physical machines in a cloud environment. Such flexibility brings along the threat of unpredictable behavior and performance. What are the minimum restrictions that we need to impose on network functionality such that we are able to verify that a network device behaves and performs as expected, for example, does not crash or enter an infinite loop? We present the result of working iteratively on two tasks: designing a domain-specific verification tool for packet-processing software, while trying to identify a minimal set of restrictions that packet-processing software must satisfy in order to be verification-friendly. Our main insight is that packet-processing software is a good candidate for domain-specific verification, for example, because it typically consists of distinct pieces of code that share limited mutable state; we can leverage this and other properties to sidestep fundamental verification challenges. We apply our ideas on Click packet-processing software; we perform complete and sound verification of an IP router and two simple middleboxes within tens of minutes, whereas a state-of-the-art general-purpose tool fails to complete the same task within several hours.
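The sketch below illustrates, in Python rather than Click's C++, the structural property the article relies on: a pipeline of small, independent elements that touch only the packet and a little immutable configuration, with per-packet work bounded by the pipeline length. The element names and fields are hypothetical; this is not the authors' tool or code.

class DecTTL:
    def process(self, pkt):
        pkt["ttl"] -= 1
        return pkt if pkt["ttl"] > 0 else None    # drop expired packets

class PortFilter:
    def __init__(self, allowed_ports):
        self.allowed = frozenset(allowed_ports)   # immutable configuration only
    def process(self, pkt):
        return pkt if pkt["dst_port"] in self.allowed else None

def run_pipeline(elements, pkt):
    # Per-packet work is bounded by the fixed pipeline length, so it terminates.
    for element in elements:
        pkt = element.process(pkt)
        if pkt is None:
            return None                           # packet dropped
    return pkt

print(run_pipeline([DecTTL(), PortFilter({80, 443})],
                   {"ttl": 64, "dst_port": 80}))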

80 citations

Proceedings ArticleDOI
02 Apr 2014
TL;DR: The main insight is that packet-processing software is a good candidate for domain-specific verification, for example, because it typically consists of distinct pieces of code that share limited mutable state; the verification tool can leverage this and other properties to sidestep fundamental verification challenges.
Abstract: Software dataplanes are emerging as an alternative to traditional hardware switches and routers, promising programmability and short time to market. These advantages are set against the risk of disrupting the network with bugs, unpredictable performance, or security vulnerabilities. We explore the feasibility of verifying software dataplanes to ensure smooth network operation. For general programs, verifiability and performance are competing goals; we argue that software dataplanes are different--we can write them in a way that enables verification and preserves performance. We present a verification tool that takes as input a software dataplane, written in a way that meets a given set of conditions, and (dis)proves that the dataplane satisfies crash-freedom, bounded-execution, and filtering properties. We evaluate our tool on stateless and simple stateful Click pipelines; we perform complete and sound verification of these pipelines within tens of minutes, whereas a state-of-the-art general-purpose tool fails to complete the same task within several hours.
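To show what the three target properties mean on a toy element, the sketch below brute-forces a tiny header space and checks crash-freedom (no exception) and a filtering property (packets with TTL 0 never come out); bounded execution is trivially given by the finite loop. The enumeration is only a stand-in for the tool's symbolic reasoning, and the element is hypothetical.

from itertools import product

class DecTTL:
    def process(self, pkt):
        pkt["ttl"] -= 1
        return pkt if pkt["ttl"] > 0 else None

def check_properties(element, ttls=range(0, 8), ports=range(0, 16)):
    for ttl, port in product(ttls, ports):
        pkt = {"ttl": ttl, "dst_port": port}
        try:
            out = element.process(dict(pkt))       # crash-freedom: no exception
        except Exception as exc:
            return f"crash on {pkt}: {exc}"
        if ttl == 0 and out is not None:           # filtering: expired packets dropped
            return f"filtering violated on {pkt}"
    return "properties hold on all enumerated inputs"

print(check_properties(DecTTL()))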

54 citations

Proceedings ArticleDOI
30 Nov 2010
TL;DR: This work identifies the main factors that affect packet-processing performance on a modern multicore general-purpose server and formulates an optimization problem that determines how to parallelize the router's functionality across the available cores so as to maximize its throughput.
Abstract: Software routers promise to enable the fast deployment of new, sophisticated kinds of packet processing without the need to buy and deploy expensive new equipment. The challenge is offering such programmability while at the same time achieving a competitive level of performance. Recent work has demonstrated that software routers are capable of high performance, but only for conventional, simple workloads (like packet forwarding and IP routing), and even then only after careful manual calibration. In contrast, we are interested in achieving high performance in the context of a software router running multiple sophisticated packet-processing applications. In particular: first, we identify the main factors that affect packet-processing performance on a modern multicore general-purpose server---cache misses, cache contention, load-balancing across processing cores; then, we formulate an optimization problem that takes as input a particular server architecture and a packet-processing flow, and determines how to parallelize the router's functionality across the available cores so as to maximize its throughput.
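The sketch below conveys the flavor of such an optimization: enumerate assignments of processing elements to cores, score each with a simple bottleneck cost model, and keep the assignment with the highest predicted throughput. The per-element cycle costs, the two-core setup, and the cost model itself are assumptions for illustration; the paper's actual formulation and inputs differ.

from itertools import product

ELEMENT_CYCLES = {"parse": 120, "lookup": 300, "classify": 250, "tx": 100}  # cycles/packet (made up)
CORES = (0, 1)
CLOCK_HZ = 2.4e9

def predicted_throughput(assignment):
    # Throughput is limited by the most heavily loaded core (the pipeline bottleneck).
    load = {core: 0 for core in CORES}
    for element, core in assignment.items():
        load[core] += ELEMENT_CYCLES[element]
    busiest = max(load.values())
    return CLOCK_HZ / busiest if busiest else float("inf")

best = max(
    (dict(zip(ELEMENT_CYCLES, combo)) for combo in product(CORES, repeat=len(ELEMENT_CYCLES))),
    key=predicted_throughput,
)
print(best, predicted_throughput(best))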

27 citations


Cited by
Proceedings ArticleDOI
04 Oct 2010
TL;DR: Onix provides a general API for control plane implementations, while allowing them to make their own trade-offs among consistency, durability, and scalability.
Abstract: Computer networks lack a general control paradigm, as traditional networks do not provide any network-wide management abstractions. As a result, each new function (such as routing) must provide its own state distribution, element discovery, and failure recovery mechanisms. We believe this lack of a common control platform has significantly hindered the development of flexible, reliable and feature-rich network control planes.To address this, we present Onix, a platform on top of which a network control plane can be implemented as a distributed system. Control planes written within Onix operate on a global view of the network, and use basic state distribution primitives provided by the platform. Thus Onix provides a general API for control plane implementations, while allowing them to make their own trade-offs among consistency, durability, and scalability.
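A minimal sketch of what "writing a control plane against a global view" can look like, using a toy in-memory graph and a breadth-first path computation. The class and method names are hypothetical and are not the Onix API; in Onix the view is a distributed network information base whose consistency and durability trade-offs the application chooses.

from collections import deque

class NetworkView:
    # Toy stand-in for a replicated, platform-managed view of switches and links.
    def __init__(self):
        self.adjacency = {}                       # switch id -> set of neighbor ids
    def add_link(self, a, b):
        self.adjacency.setdefault(a, set()).add(b)
        self.adjacency.setdefault(b, set()).add(a)

def shortest_path(view, src, dst):
    # A routing application computes paths over the global view, not per-switch state.
    frontier, seen = deque([[src]]), {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for nxt in view.adjacency.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

view = NetworkView()
view.add_link("s1", "s2")
view.add_link("s2", "s3")
print(shortest_path(view, "s1", "s3"))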

1,463 citations

Journal ArticleDOI
TL;DR: The existing technologies and a wide array of past and state-of-the-art projects on network virtualization are surveyed, followed by a discussion of major challenges in this area.

1,235 citations

Proceedings ArticleDOI
15 Aug 2011
TL;DR: DevoFlow is designed and evaluated, a modification of the OpenFlow model which gently breaks the coupling between control and global visibility, in a way that maintains a useful amount of visibility without imposing unnecessary costs.
Abstract: OpenFlow is a great concept, but its original design imposes excessive overheads. It can simplify network and traffic management in enterprise and data center environments, because it enables flow-level control over Ethernet switching and provides global visibility of the flows in the network. However, such fine-grained control and visibility come with costs: the switch-implementation costs of involving the switch's control-plane too often and the distributed-system costs of involving the OpenFlow controller too frequently, both on flow setups and especially for statistics-gathering. In this paper, we analyze these overheads, and show that OpenFlow's current design cannot meet the needs of high-performance networks. We design and evaluate DevoFlow, a modification of the OpenFlow model which gently breaks the coupling between control and global visibility, in a way that maintains a useful amount of visibility without imposing unnecessary costs. We evaluate DevoFlow through simulations, and find that it can load-balance data center traffic as well as fine-grained solutions, without as much overhead: DevoFlow uses 10--53 times fewer flow table entries at an average switch, and uses 10--42 times fewer control messages.
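The sketch below illustrates one devolved-control idea in miniature: the switch keeps per-flow counters locally and contacts the controller only when a flow becomes "significant", instead of on every flow setup. The threshold, the counter structure, and the report_elephant() callback are hypothetical; DevoFlow's actual mechanisms (rule cloning, local actions, efficient statistics collection) are richer than this.

SIGNIFICANT_BYTES = 1_000_000                     # made-up elephant-flow threshold
flow_table = {}                                   # flow key -> local counters

def on_packet(flow_key, nbytes, controller):
    entry = flow_table.setdefault(flow_key, {"bytes": 0, "reported": False})
    entry["bytes"] += nbytes
    # Involve the controller only for flows that cross the significance threshold.
    if entry["bytes"] > SIGNIFICANT_BYTES and not entry["reported"]:
        controller.report_elephant(flow_key, entry["bytes"])
        entry["reported"] = True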

1,132 citations

Proceedings ArticleDOI
27 Aug 2013
TL;DR: The RMT (reconfigurable match tables) model, a new RISC-inspired pipelined architecture for switching chips, is proposed, and the essential minimal set of action primitives needed to specify how headers are processed in hardware is identified.
Abstract: In Software Defined Networking (SDN) the control plane is physically separate from the forwarding plane. Control software programs the forwarding plane (e.g., switches and routers) using an open interface, such as OpenFlow. This paper aims to overcome two limitations in current switching chips and the OpenFlow protocol: i) current hardware switches are quite rigid, allowing "Match-Action" processing on only a fixed set of fields, and ii) the OpenFlow specification only defines a limited repertoire of packet processing actions. We propose the RMT (reconfigurable match tables) model, a new RISC-inspired pipelined architecture for switching chips, and we identify the essential minimal set of action primitives to specify how headers are processed in hardware. RMT allows the forwarding plane to be changed in the field without modifying hardware. As in OpenFlow, the programmer can specify multiple match tables of arbitrary width and depth, subject only to an overall resource limit, with each table configurable for matching on arbitrary fields. However, RMT allows the programmer to modify all header fields much more comprehensively than in OpenFlow. Our paper describes the design of a 64 port by 10 Gb/s switch chip implementing the RMT model. Our concrete design demonstrates, contrary to concerns within the community, that flexible OpenFlow hardware switch implementations are feasible at almost no additional cost or power.
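The sketch below models the match-action abstraction in software only, as a pipeline of tables that each match on configurable header fields and apply primitive actions. It is meant to illustrate the programming model, not the hardware design, and all field names and entries are made up.

class MatchTable:
    def __init__(self, match_fields):
        self.match_fields = match_fields          # which header fields this table matches on
        self.entries = {}                         # match key -> list of primitive actions
    def add_entry(self, key, actions):
        self.entries[key] = actions
    def apply(self, headers):
        key = tuple(headers.get(f) for f in self.match_fields)
        for action in self.entries.get(key, []):
            action(headers)                       # each action rewrites a header field

pipeline = [MatchTable(["dst_ip"]), MatchTable(["vlan"])]
pipeline[0].add_entry(("10.0.0.1",), [lambda h: h.update(egress_port=3)])
pipeline[1].add_entry((100,), [lambda h: h.update(vlan=200)])
headers = {"dst_ip": "10.0.0.1", "vlan": 100}
for table in pipeline:
    table.apply(headers)
print(headers)   # {'dst_ip': '10.0.0.1', 'vlan': 200, 'egress_port': 3}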

929 citations

Proceedings ArticleDOI
11 Oct 2009
TL;DR: This work investigates a new OS structure, the multikernel, that treats the machine as a network of independent cores, assumes no inter-core sharing at the lowest level, and moves traditional OS functionality to a distributed system of processes that communicate via message-passing.
Abstract: Commodity computer systems contain more and more processor cores and exhibit increasingly diverse architectural tradeoffs, including memory hierarchies, interconnects, instruction sets and variants, and IO configurations. Previous high-performance computing systems have scaled in specific cases, but the dynamic nature of modern client and server workloads, coupled with the impossibility of statically optimizing an OS for all workloads and hardware variants, poses serious challenges for operating system structures. We argue that the challenge of future multicore hardware is best met by embracing the networked nature of the machine, rethinking OS architecture using ideas from distributed systems. We investigate a new OS structure, the multikernel, that treats the machine as a network of independent cores, assumes no inter-core sharing at the lowest level, and moves traditional OS functionality to a distributed system of processes that communicate via message-passing. We have implemented a multikernel OS to show that the approach is promising, and we describe how traditional scalability problems for operating systems (such as memory management) can be effectively recast using messages and can exploit insights from distributed systems and networking. An evaluation of our prototype on multicore systems shows that, even on present-day machines, the performance of a multikernel is comparable with a conventional OS, and can scale better to support future hardware.
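The sketch below mimics the multikernel structure with threads standing in for cores: each "kernel" keeps strictly private state and learns about changes only through explicit messages, never through shared data structures. This is an illustration of the idea, not Barrelfish code; the message format and state are invented for the example.

import queue
import threading

class CoreKernel(threading.Thread):
    def __init__(self, core_id, inbox):
        super().__init__(daemon=True)
        self.core_id = core_id
        self.inbox = inbox
        self.local_state = {}                     # private replica, never shared

    def run(self):
        while True:
            msg = self.inbox.get()
            if msg is None:                       # shutdown message
                break
            key, value = msg
            self.local_state[key] = value         # state is replicated via messages

inboxes = [queue.Queue() for _ in range(4)]
cores = [CoreKernel(i, inboxes[i]) for i in range(4)]
for core in cores:
    core.start()
for box in inboxes:                               # broadcast an update to every core
    box.put(("free_pages", 1024))
for box in inboxes:
    box.put(None)
for core in cores:
    core.join()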

926 citations