
Journal ArticleDOI

Abstractions for network update

13 Aug 2012 - Computer Communication Review (ACM, New York, NY, USA)

TL;DR: Configuration changes are a common source of instability in networks, leading to outages, performance disruptions, and security vulnerabilities; even when the initial and final configurations are correct, the update process itself often steps through intermediate configurations that exhibit incorrect behaviors. This paper introduces consistent network updates, at per-packet and per-flow granularity, that are guaranteed to preserve well-defined behaviors when transitioning between configurations.


Summary (3 min read)

1. INTRODUCTION

  • “Nothing endures but change.” —Heraclitus
  • Moreover, to ensure that the network behaves correctly during the transition, they must worry about the properties of every possible intermediate state during the update, and the effects on any packets already in flight through the network.
  • To analyze their abstractions and mechanisms, the authors develop a simple, formal model that captures the essential features of OpenFlow networks.
  • This paper makes the following contributions: update abstractions, update mechanisms, a theoretical model, verification tools, an implementation, and experiments.
  • The authors propose per-packet and per-flow consistency as canonical, general abstractions for specifying network updates (§2, §7), along with OpenFlow-compatible update mechanisms based on two-phase update (§5, §8).

2. EXAMPLE

  • To illustrate the challenges surrounding network updates, consider an example network with one ingress switch I and three “filtering” switches F1, F2, and F3, each sitting between I and the rest of the Internet, as shown on the left side of Figure 1.
  • Switch F1 monitors (and denies) SSH packets and allows all other packets to pass through, while F2 and F3 simply let all packets pass through.
  • The policy pol itself is expressed in a common specification language called CTL and is verified with the help of a model checker.
  • Nevertheless, one can always achieve a per-packet consistent update using a two-phase update supported by configuration versioning.
  • It also shows that all of these complexities can be hidden from the programmer, leaving only the simplest of interfaces for global network update.

3. THE NETWORK MODEL

  • This section presents a simple mathematical model of the essential features of SDNs.
  • Dropping packets occurs by explicitly forwarding a single packet to the Drop port.
  • The formal definition of the network semantics is given by the relations defined in Figure 2(b), which describe how the network transitions from one state to the next one.
  • In a packet-processing transition, a packet is retrieved from the queue for some port, processed using the switch function S and topology function T , and the newly generated packets are enqueued onto the appropriate port queues.

5. PER-PACKET MECHANISMS

  • Depending on the network topology and the specifics of the configurations involved, there may be several ways to implement a per-packet consistent update.
  • They may be combined with other per-packet consistent updates to great effect using the following theorem.
  • It then updates the ingress ports one-by-one to stamp packets with the new version number.
  • Per-packet consistency requires that the active paths in the network come from either the old or the new configuration, never a mixture of the two.

6. CHECKING PROPERTY INVARIANCE

  • As per-packet consistent updates preserve all trace properties, programmers can turn any trace property checker that verifies individual, static network configurations into a verification engine that verifies the invariance of trace properties as configurations evolve over time.
  • Temporal logic, which describes temporal paths through a space, is a natural fit for the specification of trace properties.
  • On all paths (AF) or on some path (EF) from the current position, φ holds on some future position.
  • Read aloud, this formula says, “On all paths, and at all future positions on those paths, the current port is never Drop” (the formula itself is sketched after this list).
  • As in the traditional software development cycle, the judicious use of static analyses in network programming can pinpoint bugs before deployment, reducing time spent diagnosing performance problems and patching security vulnerabilities.
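The no-blackhole property read aloud in the bullet above corresponds to a CTL formula along the following lines; the concrete syntax varies between checkers such as NuSMV, so this rendering is only illustrative:

    \mathbf{AG}\; \neg(\mathit{port} = \mathit{Drop})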

7. PER-FLOW CONSISTENCY

  • Per-packet consistency, while simple and powerful, is not always enough.
  • Per-flow consistency guarantees that all packets in the same flow are handled by the same version of the configuration.
  • The authors' system implements the first of the three; the latter two, while promising, depend upon technology that is not yet available in OpenFlow.
  • Then, on ingress switches, the controller sets soft timeouts on the rules for the old configuration and installs the new configuration at lower priority.
  • To ensure rules expire in a timely fashion, the controller can refine the old rules to cover a progressively smaller portion of the flow space.
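A rough sketch of how this ingress mechanism might look, assuming a controller interface with install(switch, match, actions, priority, idle_timeout); the helper names are hypothetical and the paper's actual system may differ:

    def per_flow_ingress_update(ctrl, switch, old_rules, new_rules, idle=10):
        # New configuration sits underneath at lower priority.
        for r in new_rules:
            ctrl.install(switch, match=r.match, actions=r.actions, priority=1)
        # Old rules are re-installed above it with soft (idle) timeouts, so each
        # expires once the flows it covers go idle; the controller can later
        # re-install refinements covering a smaller slice of the flow space.
        for r in old_rules:
            ctrl.install(switch, match=r.match, actions=r.actions,
                         priority=10, idle_timeout=idle)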

8. IMPLEMENTATION AND EVALUATION

  • The authors have built a system called Kinetic that implements the update abstractions introduced in this paper, and evaluated its performance on small but canonical example applications.
  • The authors measure the number of OpenFlow operations required for the deployment of the new configuration, as well as the overhead of installing extra rules to ensure per-packet consistency.
  • Two-phase update requires approximately 100% overhead, because it leaves the old configuration on the switch as it installs the new one.
  • The first routing scenario, where hosts are added or removed, demonstrates the potential of their optimizations.
  • Because the rules for the new routes do not affect traffic between existing hosts, they can be installed without modifying or reinstalling the existing rules.


Abstractions for Network Update
Mark Reitblatt
Cornell
Nate Foster
Cornell
Jennifer Rexford
Princeton
Cole Schlesinger
Princeton
David Walker
Princeton
ABSTRACT
Configuration changes are a common source of instability in net-
works, leading to outages, performance disruptions, and security
vulnerabilities. Even when the initial and final configurations are
correct, the update process itself often steps through intermediate
configurations that exhibit incorrect behaviors. This paper intro-
duces the notion of consistent network updates—updates that are
guaranteed to preserve well-defined behaviors when transitioning
between configurations. We identify two distinct consistency lev-
els, per-packet and per-flow, and we present general mechanisms
for implementing them in Software-Defined Networks using switch
APIs like OpenFlow. We develop a formal model of OpenFlow net-
works, and prove that consistent updates preserve a large class of
properties. We describe our prototype implementation, including
several optimizations that reduce the overhead required to perform
consistent updates. We present a verification tool that leverages
consistent updates to significantly reduce the complexity of check-
ing the correctness of network control software. Finally, we de-
scribe the results of some simple experiments demonstrating the
effectiveness of these optimizations on example applications.
Categories and Subject Descriptors
C.2.1 [Computer-Communication Networks]: Distributed Sys-
tems—Network Operating Systems
General Terms
Design, Languages, Theory
Keywords
Consistency, planned change, software-defined networking, Open-
Flow, network programming languages, Frenetic.
1. INTRODUCTION
“Nothing endures but change.” —Heraclitus
Networks exist in a constant state of flux. Operators frequently
modify routing tables, adjust link weights, and change access con-
trol lists to perform tasks from planned maintenance, to traffic en-
gineering, to patching security vulnerabilities, to migrating virtual
machines in a datacenter. But even when updates are planned well
in advance, they are difficult to implement correctly, and can result
in disruptions such as transient outages, lost server connections, un-
expected security vulnerabilities, hiccups in VoIP calls, or the death
of a player’s favorite character in an online game.
To address these problems, researchers have proposed a number
of extensions to protocols and operational practices that aim to pre-
vent transient anomalies [8, 2, 9, 3, 5]. However, each of these so-
lutions is limited to a specific protocol (e.g., OSPF and BGP) and a
specific set of properties (e.g., freedom from loops and blackholes)
and increases the complexity of the system considerably. Hence, in
practice, network operators have little help when designing a new
protocol or trying to ensure an additional property not covered by
existing techniques. A list of example applications and their prop-
erties is summarized in Table 1.
We believe that, instead of relying on point solutions for network
updates, the networking community needs foundational principles
for designing solutions that are applicable to a wide range of pro-
tocols and properties. These solutions should come with two parts:
(1) an abstract interface that offers strong, precise, and intuitive
semantic guarantees, and (2) concrete mechanisms that faithfully
implement the semantics specified in the abstract interface. Pro-
grammers can use the interface to build robust applications on top
of a reliable foundation. The mechanisms, while possibly complex,
would be implemented once by experts, tuned and optimized, and
used over and over, much like register allocation or garbage collec-
tion in a high-level programming language.
Software-defined networks. The emergence of Software De-
fined Networks (SDN) presents a tremendous opportunity for de-
veloping general abstractions for managing network updates. In an
SDN, a program running on a logically-centralized controller man-
ages the network directly by configuring the packet-handling mech-
anisms in the underlying switches. For example, the OpenFlow API
allows a controller to install rules that each specify a pattern that
matches on bits in the packet header, actions performed on match-
ing packets (such as drop, forward, or divert to the controller), a pri-
ority (to disambiguate between overlapping patterns), and timeouts
(to allow the switch to remove stale rules) [10]. Hence, whereas
today network operators have (at best) indirect control over the dis-
tributed implementations of routing, access control, and load bal-
ancing, SDN platforms like OpenFlow provide programmers with
direct control over the processing of packets in the network.
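As a concrete illustration of the rule structure just described (pattern, actions, priority, timeout), here is a minimal sketch in Python; the field and action names are illustrative and do not follow the exact OpenFlow encoding:

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class Rule:
        pattern: Dict[str, str]      # header fields to match, e.g. {"tp_dst": "22"}
        actions: List[str]           # e.g. ["drop"], ["forward:2"], ["controller"]
        priority: int = 0            # disambiguates overlapping patterns
        idle_timeout: int = 0        # seconds; 0 means the rule never expires

    # Example: drop SSH traffic arriving on port 1, at high priority.
    block_ssh = Rule(pattern={"in_port": "1", "tp_dst": "22"},
                     actions=["drop"], priority=100)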
However, despite the conceptual appeal of centralized control,

Example Application | Policy Change | Desired Property | Practical Implications
Stateless firewall | Changing access control list | No security holes | Admitting malicious traffic
Planned maintenance [1, 2, 3] | Shut down a node/link | No loops/blackholes | Packet/bandwidth loss
Traffic engineering [1, 3] | Changing a link weight | No loops/blackholes | Packet/bandwidth loss
VM migration [4] | Move server to new location | No loops/blackholes | Packet/bandwidth loss
IGP migration [5] | Adding route summarization | No loops/blackholes | Packet/bandwidth loss
Traffic monitoring | Changing traffic partitions | Consistent counts | Inaccurate measurements
Server load balancing [6, 7] | Changing load distribution | Connection affinity | Broken connections
NAT or stateful firewall | Adding/replacing equipment | Connection affinity | Outages, broken connections
Table 1: Example changes to network configuration, and the desired update properties.
an OpenFlow network is still a distributed system, with inevitable
delays between the switches and the controller. To implement a
transition from one configuration to another, programmers must is-
sue a painstaking sequence of low-level install and uninstall com-
mands that work rule by rule and switch by switch. Moreover, to
ensure that the network behaves correctly during the transition, they
must worry about the properties of every possible intermediate state
during the update, and the effects on any packets already in flight
through the network. This often results in a combinatorial explo-
sion of possible behaviors—too many for a programmer to manage
by hand, even in a small network. A recent study on testing Open-
Flow applications shows that programmers often introduce subtle
bugs when handling network updates [11].
Our approach. This paper describes a different alternative. In-
stead of requiring SDN programmers to implement configuration
changes using today’s low-level interfaces, our high-level, abstract
operations allow the programmer to update the configuration of the
entire network in one fell swoop. The libraries implementing these
abstractions provide strong semantic guarantees about the observ-
able effects of the global updates, and handle all of the details of
transitioning between old and new configurations efficiently.
Our central abstraction is per-packet consistency, the guaran-
tee that every packet traversing the network is processed by ex-
actly one consistent global network configuration. When a net-
work update occurs, this guarantee persists: each packet is pro-
cessed either using the configuration in place prior to the update,
or the configuration in place after the update, but never a mixture
of the two. Note that this consistency abstraction is more powerful
than an “atomic” update mechanism that simultaneously updates all
switches in the network. Such a simultaneous update could easily
catch many packets in flight in the middle of the network, and such
packets may wind up traversing a mixture of configurations, caus-
ing them to be dropped or sent to the wrong destination. We also
introduce per-flow consistency, a generalization of per-packet con-
sistency that guarantees all packets in the same flow are processed
with the same configuration. This stronger guarantee is needed in
applications such as HTTP load balancers, which need to ensure
that all packets in the same TCP connection reach the same server
replica to avoid breaking connections.
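To make the per-packet guarantee concrete, here is a minimal sketch of the consistency check it implies on traces; old_traces and new_traces stand for the sets of traces the old and new static configurations can generate, and the helper is illustrative rather than the paper's implementation:

    def per_packet_consistent(observed_traces, old_traces, new_traces):
        """Every packet's trace comes entirely from one configuration or the
        other, never from a mixture of the two."""
        return all(t in old_traces or t in new_traces for t in observed_traces)

    # A trace that starts on the old paths and finishes on the new ones is in
    # neither set, so the update is flagged as inconsistent.
    old = {(("I", "pkt"), ("F1", "pkt"))}
    new = {(("I", "pkt"), ("F3", "pkt"))}
    print(per_packet_consistent({(("I", "pkt"), ("F2", "pkt"))}, old, new))  # False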
To support these abstractions, we develop several update mecha-
nisms that use features commonly available on OpenFlow switches.
Our most general mechanism, which enables transition between
any two configurations, performs a two-phase update of the rules in
the new configuration onto the switches. The other mechanisms are
optimizations that achieve better performance under circumstances
that arise often in practice. These optimizations transition to new
configurations in less time, update fewer switches, or modify fewer
rules.
To analyze our abstractions and mechanisms, we develop a sim-
ple, formal model that captures the essential features of OpenFlow
networks. This model allows us to define a class of network prop-
erties, called trace properties, that characterize the paths individual
packets take through the network. The model also allows us to
prove a remarkable result: if any trace property P holds of a net-
work configuration prior to a per-packet consistent update as well
as after the update, then P also holds continuously throughout the
update process. This illustrates the true power of our abstractions:
programmers do not need to specify which trace properties our sys-
tem must maintain during an update, because a per-packet consis-
tent update preserves all of them! For example, if the old and new
configurations are free from forwarding loops, then the network
will be loop-free before, during, and after the update. In addition to
the proof sketch included in this paper, this result has been formally
verified in the Coq proof assistant [12].
An important and useful corollary of these observations is that
it is possible to take any verification tool that checks trace prop-
erties of static network configurations and transform it into a tool
that checks invariance of trace properties as the network configu-
rations evolve dynamically—it suffices to check the static policies
before and after the update. We illustrate the utility of this idea
concretely by deploying the NuSMV model checker [13] to ver-
ify invariance of a variety of important trace properties that arise
in practice. However, other tools, such as the recently-proposed
header space analysis tool [14], also benefit from our approach.
Contributions. This paper makes the following contributions:
Update abstractions: We propose per-packet and per-flow
consistency as canonical, general abstractions for specifying
network updates (§2,§7).
Update mechanisms: We describe OpenFlow-compatible
implementation mechanisms, based on two-phase update, and
several optimizations tailored to common scenarios (§5,§8).
Theoretical model: We develop a simple mathematical model
that captures the essential behavior of SDNs, and we prove
that the mechanisms correctly implement the abstractions (§3).
We have formalized the model and proved the main theorems
in the Coq proof assistant.
Verification tools: We show how to exploit the power of
our abstractions by building a tool for verifying properties of
network control programs (§6).
Implementation: We describe a prototype implementation
on top of the OpenFlow/NOX platform (§8).
Experiments: We present results from experiments run on
small, but canonical applications that compare the total num-
ber of control messages and rule overhead needed to imple-
ment updates in each of these applications (§8).

Figure 1: Access control example. Ingress switch I connects to the
Internet through three filtering switches F1, F2, and F3. The two
configurations are:

  Configuration I:
    I:  U, G -> Forward F1;  S -> Forward F2;  F -> Forward F3
    F1: SSH -> Monitor; all other traffic -> Allow
    F2: Allow
    F3: Allow

  Configuration II:
    I:  U -> Forward F1;  G -> Forward F2;  S, F -> Forward F3
    F1: SSH -> Monitor; all other traffic -> Allow
    F2: SSH -> Monitor; all other traffic -> Allow
    F3: Allow
2. EXAMPLE
To illustrate the challenges surrounding network updates, con-
sider an example network with one ingress switch I and three “fil-
tering” switches F1, F2, and F3, each sitting between I and the
rest of the Internet, as shown on the left side of Figure 1. Several
classes of traffic are connected to I: untrustworthy packets from
Unknown and Guest hosts, and trustworthy packets from Student
and F aculty hosts. At all times, the network should enforce a se-
curity policy that denies SSH traffic from untrustworthy hosts, but
allows all other traffic to pass through the network unmodified. We
assume that any of the filtering switches have the capability to per-
form the requisite monitoring, blocking, and forwarding.
There are several ways to implement this policy, and depending
on the traffic load, one may be better than another. Suppose that
initially we configure the switches as shown in the leftmost table
in Figure 1: switch I sends traffic from U and G hosts to F1, from
S hosts to F2, and from F hosts to F3. Switch F1 monitors (and
denies) SSH packets and allows all other packets to pass through,
while F2 and F3 simply let all packets pass through.
Now, suppose the load shifts, and we need more resources to
monitor the untrustworthy traffic. We might reconfigure the net-
work as shown in the table on the right of Figure 1, where the task
of monitoring traffic from untrustworthy hosts is divided between
F1 and F2, and all traffic from trustworthy hosts is forwarded to F3.
Because we cannot update the network all at once, the individual
switches need to be reconfigured one-by-one. However, if we are
not careful, making incremental updates to the individual switches
can lead to intermediate configurations that violate the intended se-
curity policy. For instance, if we start by updating F2 to deny SSH
traffic, we interfere with traffic sent by trustworthy hosts. If, on the
other hand, we start by updating switch I to forward traffic accord-
ing to the new configuration (sending U traffic to F1, G traffic to
F2, and S and F traffic to F3), then SSH packets from untrustwor-
thy hosts will incorrectly be allowed to pass through the network.
There is one valid transition plan (sketched as controller calls after the steps below):
1. Update I to forward S traffic to F3, while continuing to forward U and G traffic to F1 and F traffic to F3.
2. Wait until in-flight packets have been processed by F2.
3. Update F2 to deny SSH packets.
4. Update I to forward G traffic to F2, while continuing to forward U traffic to F1 and S and F traffic to F3.
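Written as controller calls, the plan above might look like the following sketch; the controller object and its install/wait_for_quiescence helpers are hypothetical stand-ins for the low-level OpenFlow commands a programmer would otherwise issue rule by rule:

    def manual_transition(ctrl):
        ctrl.install("I",  match={"src": "S"},     action="forward:F3")  # step 1
        ctrl.wait_for_quiescence("F2")             # step 2: let in-flight packets drain
        ctrl.install("F2", match={"tp_dst": "22"}, action="deny")        # step 3: deny SSH
        ctrl.install("I",  match={"src": "G"},     action="forward:F2")  # step 4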
But finding this ordering and verifying that it behaves correctly
requires performing intricate reasoning about a sequence of inter-
mediate configurations—something that is tedious and error-prone,
even for this simple example. Even worse, in some examples it is
impossible to find an ordering that implements the transition sim-
ply by adding one part of the new configuration at a time (e.g., if we
swap the roles of F
1
and F
3
while enforcing the intended security
policy). In general, more powerful update mechanisms are needed.
We believe that any energy the programmer devotes to navigat-
ing this space would be better spent in other ways. The tedious job
of finding a safe sequence of commands that implement an update
should be factored out, optimized, and reused across many applica-
tions. This is the main achievement of this paper. To implement the
update using our abstractions, the programmer would simply write:
per_packet_update(config2)
Here config2 represents the new global network configuration.
The per-packet update library analyzes the configuration and topol-
ogy and selects a suitable mechanism to implement the update.
Note that the programmer does not write any tricky code, does not
need to consider how to synchronize switch update commands, and
does not need to consider the packets in flight across the network.
The per_packet_update library handles all of the low-level de-
tails, and even attempts to select a mechanism that minimizes the
cost of implementing the update.
Further, suppose the programmer knows that the security pol-
icy holds initially but wants to be sure it is enforced continuously
through the update process and also afterwards when config2 is
in force. In this case, the programmer can execute an additional
command:
ok = verify(config2, topo, pol)
If the boolean ok is true, then the security policy represented by pol
holds continuously. If not, the programmer has made a mistake and
can work on debugging it. The policy pol itself is expressed in
a common specification language called CTL and is verified with
the help of a model checker. We supply a library of common net-
work properties such as loop-freeness for use with our system and
programmers can write their own custom properties.
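A sketch of how the two calls above might be combined in practice; verify, per_packet_update, config2, and topo are the names used in the text, while the concrete CTL syntax for pol is assumed here purely for illustration:

    pol = "AG !(untrusted & ssh & port = World)"  # untrusted SSH never leaves the network
    ok = verify(config2, topo, pol)
    if ok:
        per_packet_update(config2)
    else:
        print("config2 violates the security policy; fix it before updating")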
To implement the update, the library could use the safe, switch-
update ordering described above. However, in general, it is not
always possible to find such an ordering. Nevertheless, one can al-
ways achieve a per-packet consistent update using a two-phase up-
date supported by configuration versioning. Intuitively, this univer-
sal update mechanism works by stamping every incoming packet
with a version number (e.g., stored in a VLAN tag) and modify-
ing every configuration so that it only processes packets with a set
version number. To change from one configuration to the next, it
first populates the switches in the middle of the network with new
configurations guarded by the next version number. Once that is
complete, it enables the new configurations by installing rules at
the perimeter of the network that stamp packets with that next ver-
sion number. Though this general mechanism is somewhat heavy-
weight, our libraries identify and apply lightweight optimizations.
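The following sketch shows the shape of this two-phase mechanism, assuming version numbers carried in a VLAN tag; the controller object, its install and barrier methods, and the configuration accessors are hypothetical, not the paper's API:

    def two_phase_update(ctrl, new_config, new_version):
        # Phase 1: populate internal switches with the new rules, guarded so
        # they only match packets already stamped with new_version.
        for switch, rule in new_config.internal_rules():
            ctrl.install(switch,
                         match=dict(rule.match, vlan=new_version),
                         actions=rule.actions)
        ctrl.barrier()   # wait until every internal install is acknowledged
        # Phase 2: update ingress rules, one by one, to stamp new_version; from
        # this point on, newly arriving packets see only the new configuration.
        for switch, rule in new_config.ingress_rules():
            ctrl.install(switch,
                         match=rule.match,
                         actions=[("set_vlan", new_version)] + list(rule.actions))
        # Old-version rules can be removed once packets stamped with the old
        # version have drained from the network.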
This short example illustrates some of the challenges that arise
when implementing a network update with strong semantic guar-
antees. However, it also shows that all of these complexities can
be hidden from the programmer, leaving only the simplest of in-
terfaces for global network update. We believe this simplicity will
lead to a more reliable and secure network infrastructure. The fol-
lowing sections describe our approach in more detail.
3. THE NETWORK MODEL
This section presents a simple mathematical model of the essen-
tial features of SDNs. This model is defined by a relation that de-
scribes the fine-grained, step-by-step execution of a network.

(a) Syntax

    Bit              b   ::=  0 | 1
    Packet           pk  ::=  [b_1, ..., b_k]
    Port             p   ::=  1 | ... | k | Drop | World
    Located Packet   lp  ::=  (p, pk)
    Trace            t   ::=  [lp_1, ..., lp_n]
    Update           u   ∈    LocatedPkt ⇀ LocatedPkt list
    Switch Func.     S   ∈    LocatedPkt → LocatedPkt list
    Topology Func.   T   ∈    Port → Port
    Port Queue       Q   ∈    Port → (Packet × Trace) list
    Configuration    C   ::=  (S, T)
    Network State    N   ::=  (Q, C)

(b) Semantics

    T-PROCESS
      if   p is any port                                                  (1)
      and  Q(p) = [(pk_1, t_1), (pk_2, t_2), ..., (pk_j, t_j)]            (2)
      and  C = (S, T)                                                     (3)
      and  S(p, pk_1) = [(p'_1, pk'_1), ..., (p'_k, pk'_k)]               (4)
      and  T(p'_i) = p''_i, for i from 1 to k                             (5)
      and  t'_1 = t_1 ++ [(p, pk_1)]                                      (6)
      and  Q'_0 = override(Q, p |-> [(pk_2, t_2), ..., (pk_j, t_j)])      (7)
      and  Q'_1 = override(Q'_0, p''_1 |-> Q(p''_1) ++ [(pk'_1, t'_1)])
           ...
      and  Q'_k = override(Q'_{k-1}, p''_k |-> Q(p''_k) ++ [(pk'_k, t'_1)])
      then (Q, C) -> (Q'_k, C)                                            (8)

    T-UPDATE
      if   S' = override(S, u)                                            (9)
      then (Q, (S, T)) -u-> (Q, (S', T))                                  (10)

Figure 2: The network model: (a) syntax and (b) semantics.

We write the relation using the notation N -us->* N', where N is the
network at the beginning of an execution, N' is the network after
some number of steps of execution, and us is a list of "observations"
that are made during the execution. (When a network takes a series
of steps and there are no observations, i.e., no updates happen, we
omit the list above the arrow, writing N ->* N' instead.)
Intuitively, an obser-
vation should be thought of as a message between the controller
and the network. In this paper, we are interested in a single kind
of message—a message u that directs a particular switch in the
network to update its forwarding table with some new rules. The
formal system could easily be augmented with other kinds of ob-
servations, such as topology changes or failures. For the sake of
brevity, we elide these features in this paper.
The main purpose of the model is to compute the traces, or paths,
that a packet takes through a network that is configured in a partic-
ular way. These traces in turn define the properties, be they access
control or connectivity or others, that a network configuration sat-
isfies. Our end goal is to use this model and the traces it generates
to prove that, when we update a network, the properties satisfied by
the initial and final configurations are preserved. The rest of this
section will make these ideas precise.
Notation. We use standard notation for types. In particular, the
type T_1 → T_2 denotes the set of total functions that take arguments
of type T_1 and produce results of type T_2, while T_1 ⇀ T_2 denotes
the set of partial functions from T_1 to T_2; the type T_1 × T_2 denotes
the set of pairs with components of type T_1 and T_2; and T list
denotes the set of lists with elements of type T.
We also use standard notation to construct tuples: (x_1, x_2) is a
pair of items x_1 and x_2. For lists, we use the notation [x_1, ..., x_n]
for the list of n elements x_1 through x_n, [ ] for the empty list, and
xs_1 ++ xs_2 for the concatenation of the two lists xs_1 and xs_2.
Notice that if x is some sort of object, we will typically use xs as
the variable for a list of such objects. For example, we use u to
represent a single update and us to represent a list of updates.
Basic Structures. Figure 2(a) defines the syntax of the elements
of the network model. A packet pk is a sequence of bits, where a
bit b is either 0 or 1. A port p represents a location in the net-
work where packets may be waiting to be processed. We distin-
guish two kinds of ports: ordinary ports numbered uniquely from
1 to k, which correspond to the physical input and output ports
on switches, and two special ports, Drop and World . Intuitively,
packets queued at the Drop port represent packets that have been
dropped, while packets queued at the World port represent packets
that have been forwarded beyond the confines of the network. Each
ordinary port will be located on some switch in the network. How-
ever, we will leave the mapping from ports to switches unspecified,
as it is not needed for our primary analyses.
Switch and Topology Functions. A network is a packet pro-
cessor that forwards packets and optionally modifies the contents
of those packets on each hop. Following Kazemian et al. [14], we
model packet processing as the composition of two simpler behav-
iors: (1) forwarding a packet across a switch and (2) moving pack-
ets from one end of a link to the other end. The switch function
S takes a located packet lp (a pair of a packet and a port) as input
and returns a list of located packets as a result. In many applica-
tions, a switch function only produces a single located packet, but
in applications such as multicast, it may produce several. To drop a
packet, a switch function maps the packet to the special Drop port.
The topology function T maps one port to another if the two ports
are connected by a link in the network. Given a topology func-
tion T , we define an ordinary port p to be an ingress port if for all
other ordinary ports p' we have T(p') ≠ p. Similarly, we define an
ordinary port p to be an internal port if it is not an ingress port.
To ensure that switch and topology functions are reasonable, we
impose the following conditions:
(1) For all packets pk, S(Drop, pk) = [(Drop, pk)] and
S(World , pk) = [(World , pk)];
(2) T (Drop) = Drop and T (World) = World; and
(3) For all ports p and packets pk,
if S(p, pk) = [(p_1, pk_1), ..., (p_k, pk_k)] we have k ≥ 1.
Taken together, the first and second conditions state that once a
packet is dropped or forwarded beyond the perimeter of the net-
work, it must stay dropped or beyond the perimeter of the network

and never return. If our network forwards a packet out to another
network and that other network forwards the packet back to us, we
treat the return packet as a “fresh” packet—i.e., we do not explicitly
model inter-domain forwarding. The third condition states that ap-
plying the forwarding function to a port and a packet must produce
at least one packet. This third condition means that the network
cannot drop a packet simply by not forwarding it anywhere. Drop-
ping packets occurs by explicitly forwarding a single packet to the
Drop port. This feature makes it possible to state network proper-
ties that require packets either be dropped or not.
Configurations and Network States. A trace t is a list of
located packets that keeps track of the hops that a packet takes as it
traverses the network. A port queue Q is a total function from ports
to lists of packet-trace pairs. These port queues record the packets
waiting to be processed at each port in the network, along with the
full processing history of that packet. Several of our definitions
require modifying the state of a port queue. We do this by building
a new function that overrides the old queue with a new mapping for
one of its ports: override(Q, p |-> l) produces a new port queue
Q' that maps p to l and behaves like Q otherwise:

    override(Q, p |-> l) = Q'  where  Q'(p') = l      if p = p'
                                      Q'(p') = Q(p')  otherwise
A configuration C comprises a switch function S and a topology
function T . A network state N is a pair (Q, C) containing a port
queue Q and configuration C.
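For readers who prefer code, the basic structures above can be rendered as a small Python sketch (illustrative only, not the paper's formalization):

    from typing import Dict, List, Tuple, Union

    Packet = Tuple[int, ...]                 # a packet is a sequence of bits
    Port = Union[int, str]                   # ordinary ports are ints; "Drop"/"World" are special
    LocatedPacket = Tuple[Port, Packet]
    Trace = List[LocatedPacket]
    PortQueue = Dict[Port, List[Tuple[Packet, Trace]]]

    def override_queue(q: PortQueue, port: Port, items) -> PortQueue:
        """override(Q, p |-> l): maps port to items and behaves like q otherwise."""
        q2 = dict(q)
        q2[port] = list(items)
        return q2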
Transitions. The formal definition of the network semantics is
given by the relations defined in Figure 2(b), which describe how
the network transitions from one state to the next one. The sys-
tem has two kinds of transitions: packet-processing transitions and
update transitions. In a packet-processing transition, a packet is
retrieved from the queue for some port, processed using the switch
function S and topology function T , and the newly generated pack-
ets are enqueued onto the appropriate port queues. More formally,
packet-processing transitions are defined by the T-PROCESS case
in Figure 2(b). Lines 1-8 may be read roughly as follows:
(1) If p is any port,
(2) a list of packets is waiting on p,
(3) the configuration C is a pair of a switch function S and topol-
ogy function T ,
(4) the switch function S forwards the chosen packet to a sin-
gle output port, or several ports in the case of multicast, and
possibly modifies the packet
(5) the topology function T connects each of the output ports to
input ports across a link,
(6) a new trace t'_1, which extends the old trace and records the
current hop, is generated,
(7) a new set of queues Q'_k is generated by moving packets across
links as specified in steps (4), (5) and (6),
(8) then (Q, C) can step to (Q'_k, C).
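The same reading, expressed as an executable sketch over the structures from the previous code fragment; queues are dicts from ports to lists of (packet, trace) pairs, and enqueueing into the partially updated queue is a simplification of ours:

    def process_step(q, switch_fn, topo_fn, port):
        """Process the first packet waiting at `port` under configuration
        (switch_fn, topo_fn) and return the new port queues."""
        if not q.get(port):
            return q                                    # no packet waiting here
        (pk, trace), rest = q[port][0], q[port][1:]
        q2 = dict(q)
        q2[port] = rest                                 # step (7): dequeue the packet
        new_trace = trace + [(port, pk)]                # step (6): record the current hop
        for out_port, out_pk in switch_fn(port, pk):    # step (4): forward across the switch
            dst = topo_fn(out_port)                     # step (5): cross the link
            q2[dst] = q2.get(dst, []) + [(out_pk, new_trace)]
        return q2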
In an update transition, the switch forwarding function is up-
dated with new behavior. We represent an update u as a partial
function from located packets to lists of located packets (i.e., an
update is just a “part” of a global (distributed) switch function).
To apply an update to a switch function, we overwrite the function
using all of the mappings contained in the update. More formally,
override(S, u) produces a new function S' that behaves like u on
located packets in the domain of u (i.e., the set of located packets
on which u is defined), and like S otherwise:

    override(S, u) = S'  where  S'(p, pk) = u(p, pk)  if (p, pk) ∈ dom(u)
                                S'(p, pk) = S(p, pk)  otherwise

Update transitions are defined formally by the T-UPDATE case in
Figure 2(b). Lines 9-10 may be read as follows: if S' is obtained
by applying update u to a switch in the network then network state
(Q, (S, T)) can step to new network state (Q, (S', T)).
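And an executable sketch of override(S, u) and the T-UPDATE step, encoding an update as a dict from located packets to lists of located packets (an illustrative encoding, not the paper's):

    def override_switch(switch_fn, update):
        def patched(port, pk):
            if (port, pk) in update:       # behave like u on dom(u)
                return update[(port, pk)]
            return switch_fn(port, pk)     # and like S otherwise
        return patched

    def update_step(state, update):
        """T-UPDATE: (Q, (S, T)) -u-> (Q, (S', T)); the queues are untouched."""
        q, (switch_fn, topo_fn) = state
        return (q, (override_switch(switch_fn, update), topo_fn))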
Network Semantics. The overall semantics of a network in our
model is defined by allowing the system to take an arbitrary number
of steps starting from an initial state in which the queues of all in-
ternal ports as well as World and Drop are empty, and the queues
of external ports are filled with pairs of packets and the empty trace.
The reflexive and transitive closure of the single-step transition re-
lation, written N -us->* N', is defined in the usual way, where the
sequence of updates recorded in the label above the arrow is obtained
by concatenating all of the updates in the underlying transitions in
order (the semantics is defined from the perspective of an omniscient
observer, so there is an order in which the steps occur).
A network generates a trace t if and only if there exists an initial
state Q such that (Q, C) ->* (Q', C) and t appears in Q'. Note
that no updates may occur when generating a trace.
Properties. In general, there are many properties a network might
satisfy—e.g., access control, connectivity, in-order delivery, qual-
ity of service, fault tolerance, to name a few. In this paper, we will
primarily be interested in trace properties, which are prefix-closed
sets of traces. Trace properties characterize the paths (and the state
of the packet at each hop) that an individual packet is allowed to
take through the network. Many network properties, including ac-
cess control, connectivity, routing correctness, loop-freedom, cor-
rect VLAN tagging, and waypointing can be expressed using trace
properties. For example, topological loop-freedom can be speci-
fied using a set that contain all traces except those in which some
ordinary port p appears twice. In contrast, network timing prop-
erties and relations between multiple packets including quality of
service, congestion control, in-order delivery, or flow affinity are
not trace properties.
We say that a port queue Q satisfies a trace property P if all of
the traces that appear in Q also appear in the set P . We say that a
network configuration C satisfies a trace property P if for all initial
port queues Q and all (update-free) executions (Q, C) ->* (Q', C),
it is the case that Q' satisfies P.
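As a small worked example of a trace property, topological loop-freedom and the satisfaction check can be sketched as follows (illustrative):

    def loop_free(trace):
        """A trace is loop-free if no ordinary port appears in it twice."""
        ordinary = [p for (p, _) in trace if p not in ("Drop", "World")]
        return len(ordinary) == len(set(ordinary))

    def queue_satisfies(q, prop=loop_free):
        """A port queue satisfies a trace property if every stored trace does."""
        return all(prop(trace) for pairs in q.values() for (_, trace) in pairs)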
4. PER-PACKET ABSTRACTION
One reason that network updates are difficult to get right is that
they are a form of concurrent programming. Concurrent program-
ming is hard because programmers must consider the interleaving
of every operation in every thread and this leads to a combinato-
rial explosion of possible outcomes—too many outcomes for most
programmers to manage. Likewise, when performing a network
update, a programmer must consider the interleaving of switch up-
date operations with every kind of packet that might be traversing
their network. Again, the number of possibilities explodes.
Per-packet consistent updates reduce the number of scenarios a
programmer must consider to just two: for every packet, it is as if


References

• Journal ArticleDOI, 31 Mar 2008. Proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day, based on an Ethernet switch with an internal flow-table and a standardized interface to add and remove flow entries. 8,411 citations.

• Journal ArticleDOI. Gives an efficient, linear-time procedure for verifying that a finite-state concurrent system meets a specification expressed in propositional, branching-time temporal logic, argued to be a practical alternative to manual proof construction or a mechanical theorem prover. 3,260 citations.

• Proceedings ArticleDOI, 20 Oct 2010. Presents Mininet, a system for rapidly prototyping large networks on the constrained resources of a single laptop, aimed at enabling self-contained SDN prototypes that anyone with a PC can download, run, evaluate, and build upon. 1,663 citations.

• Journal ArticleDOI, 01 Jul 2008. Argues for a network operating system that provides a uniform, centralized programmatic interface to the entire network, and asks whether one can be built at significant scale. 1,591 citations.

• Proceedings ArticleDOI, 14 Nov 2011. Proposes two simple, canonical, and effective update abstractions for Software-Defined Networks, provided by a runtime system that shields the programmer from the details of transitioning between configurations. 216 citations.


Frequently Asked Questions (2)
Q1. What contributions have the authors mentioned in the paper "Abstractions for network update"?

Reitblatt et al. propose an abstract interface that offers strong, precise, and intuitive semantic guarantees, together with concrete mechanisms that faithfully implement the semantics specified in the abstract interface.

The authors also plan to extend their formal model to capture the per-flow consistent update abstraction, and prove the correctness of the per-flow update mechanisms. In addition, the authors will make their update library available to the community, to enable future OpenFlow applications to leverage these update abstractions. The authors wish to thank Hussam AbuLibdeh, Robert Escriva, Mike Freedman, Tim Griffin, Mike Hicks, Eric Keller, Srinivas Narayana, Alan Shieh, the anonymous reviewers, and their shepherd Ramana Kompella for many helpful suggestions.