Proceedings ArticleDOI

Distributed Consistent Network Updates in SDNs: Local Verification for Global Guarantees

01 Sep 2019 - pp. 1-4
TL;DR: This paper initiates the study of a more distributed approach, in which consistent network updates are implemented by the switches and routers directly in the data plane; the approach leverages concepts from local proof labeling systems, which allow the data plane elements to locally check network properties, and the paper shows that this is sufficient to obtain global network guarantees.
Abstract: While SDNs enable more flexible and adaptive network operations, (logically) centralized reconfigurations introduce overheads and delays, which can limit network reactivity. This paper initiates the study of a more distributed approach, in which the consistent network updates are implemented by the switches and routers directly in the data plane. In particular, our approach leverages concepts from local proof labeling systems, which allows the data plane elements to locally check network properties, and we show that this is sufficient to obtain global network guarantees. We demonstrate our approach considering three fundamental use cases, and analyze its benefits in terms of performance and fault-tolerance.
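The abstract's core idea, switches checking a property locally so that a global guarantee follows, can be illustrated with a small sketch. The following Python example is only a rough illustration under assumed names (locally_verify, distance labels), not the paper's concrete scheme: each switch labels itself with its hop distance to the destination, and a purely local comparison against the successor's label implies global loop freedom.

# A minimal sketch (not the paper's exact scheme): each switch stores a label
# equal to its hop distance to the destination along the current forwarding
# path. A switch locally accepts its rule only if its successor's label is
# exactly one smaller; if every switch accepts, the forwarding graph toward
# the destination is provably loop-free. All identifiers are illustrative.

def locally_verify(switch, next_hop, labels):
    """Local check run by one switch against its successor's label."""
    if next_hop is None:                      # the destination needs no successor
        return labels[switch] == 0
    return labels[switch] == labels[next_hop] + 1

def network_accepts(forwarding, labels):
    """The global guarantee follows if every local check passes."""
    return all(locally_verify(s, nh, labels) for s, nh in forwarding.items())

# Example: a path s3 -> s2 -> s1 -> d, with destination d labeled 0.
forwarding = {"d": None, "s1": "d", "s2": "s1", "s3": "s2"}
labels = {"d": 0, "s1": 1, "s2": 2, "s3": 3}
assert network_accepts(forwarding, labels)

# An update that closes a loop (s1 -> s3) is rejected by s1's local check,
# since no labelling can be consistent on a cycle.
forwarding["s1"] = "s3"
assert not network_accepts(forwarding, labels)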
Citations
Journal ArticleDOI
TL;DR: The batch dynamic CONGEST model is defined, in which a bandwidth-limited communication network and a dynamic edge labelling define the problem input; this lays the foundations for the theory of input-dynamic distributed network algorithms.
Abstract: Consider a distributed system, where the topology of the communication network remains fixed, but local inputs given to nodes may change over time. In this work, we explore the following question: if some of the local inputs change, can an existing solution be updated efficiently, in a dynamic and distributed manner? To address this question, we define the batch dynamic CONGEST model, where the communication network $G = (V,E)$ remains fixed and a dynamic edge labelling defines the problem input. The task is to maintain a solution to a graph problem on the labeled graph under batch changes. We investigate, when a batch of $\alpha$ edge label changes arrive, -- how much time as a function of $\alpha$ we need to update an existing solution, and -- how much information the nodes have to keep in local memory between batches in order to update the solution quickly. We give a general picture of the complexity landscape in this model, including a general framework for lower bounds. In particular, we prove non-trivial upper bounds for two selected, contrasting problems: maintaining a minimum spanning tree and detecting cliques.
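As a toy illustration of this input-dynamic setting (not the paper's algorithms or bounds), the Python sketch below keeps the topology fixed, applies a batch of edge label changes, and repairs a deliberately simple local "solution" (each node's minimum incident edge label) in time proportional to the node's degree. All names are invented for illustration.

# Toy sketch of the batch-dynamic setting: the topology is fixed, edge labels
# change in batches, and a node keeps just enough local state to patch its
# part of the solution without recomputing from scratch.

class BatchDynamicNode:
    def __init__(self, node, incident_labels):
        self.node = node
        self.labels = dict(incident_labels)        # edge -> current label
        self.solution = min(self.labels.values())  # locally maintained output

    def apply_batch(self, changes):
        """Apply a batch of label changes touching this node's incident edges."""
        for edge, new_label in changes:
            if edge in self.labels:
                self.labels[edge] = new_label
        self.solution = min(self.labels.values())  # O(degree) local repair

# Example: node v with three incident edges; one batch changes two labels.
v = BatchDynamicNode("v", {("v", "a"): 5, ("v", "b"): 3, ("v", "c"): 7})
assert v.solution == 3
v.apply_batch([(("v", "b"), 9), (("v", "c"), 2)])
assert v.solution == 2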

4 citations


Cites background from "Distributed Consistent Network Upda..."

  • ...A recent line of work has investigated how to efficiently fix solutions to graph problems under various distributed settings [4, 5, 10, 15, 17, 26, 29, 30, 37, 45]....

    [...]

Journal ArticleDOI
TL;DR: In this article, the authors define the batch dynamic CONGEST model, in which a bandwidth-limited communication network is given and a dynamic edge labelling defines the problem input; they investigate, when a batch of edge label changes arrives, how much time as a function of the batch size is needed to update an existing solution, and how much information the nodes have to keep in local memory between batches in order to update the solution quickly.
Abstract: Consider a distributed task where the communication network is fixed but the local inputs given to the nodes of the distributed system may change over time. In this work, we explore the following question: if some of the local inputs change, can an existing solution be updated efficiently, in a dynamic and distributed manner? To address this question, we define the batch dynamic CONGEST model in which we are given a bandwidth-limited communication network and a dynamic edge labelling defines the problem input. The task is to maintain a solution to a graph problem on the labeled graph under batch changes. We investigate, when a batch of $\alpha$ edge label changes arrive, -- how much time as a function of $\alpha$ we need to update an existing solution, and -- how much information the nodes have to keep in local memory between batches in order to update the solution quickly. Our work lays the foundations for the theory of input-dynamic distributed network algorithms. We give a general picture of the complexity landscape in this model, design both universal algorithms and algorithms for concrete problems, and present a general framework for lower bounds. In particular, we derive non-trivial upper bounds for two selected, contrasting problems: maintaining a minimum spanning tree and detecting cliques.

1 citation

DOI
02 Dec 2021
TL;DR: P4Update as mentioned in this paper proposes to shift the consistency control and most of the routing update logic out of the overloaded and slow control plane by mainly scheduling and offloading the update process to the data plane.
Abstract: Programmable networks come with the promise of logically centralized control, in order to optimize the network's routing behavior. However, until now, controllers are heavily involved in network operations to prevent inconsistencies such as blackholes, loops, and congestion. In this paper, we propose the P4Update framework, based on the network programming language P4, to shift the consistency control and most of the routing update logic out of the overloaded and slow control plane. As such, P4Update avoids high and unnecessary control plane delays by mainly scheduling and offloading the update process to the data plane. P4Update returns to operating networks in a partially centralized and distributed manner --- taking the best of both centralized and distributed worlds. The main idea is to flip the problem setting and see asynchrony as an opportunity: switches inform their local neighborhood of resolved update dependencies. What's more, our mechanisms are also provably resilient against inconsistent, reordered, or conflicting concurrent updates. Unlike prior systems, P4Update enables switches to locally verify and reject inconsistent updates, and is also the first system to resolve inter-flow update dependencies purely in the data plane, significantly reducing control plane preparation time and improving its scalability. Beyond verification, we implement P4Update in a P4 software-switch-based environment. Measurements show that P4Update outperforms existing systems with respect to update speed by 28.6% to 39.1% on average.
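The update ordering behavior described above, where a switch installs an update only once a neighbor has resolved its dependency and rejects stale or inconsistent updates, can be sketched in Python; P4Update itself is implemented in P4 in the data plane, so this is only an illustration of the control logic, with hypothetical field and method names.

# Rough sketch of in-data-plane update ordering in the spirit of the abstract;
# the exact message format and verification logic of P4Update differ.

class Switch:
    def __init__(self, name):
        self.name = name
        self.version = {}   # flow -> highest consistently installed version
        self.pending = {}   # flow -> updates buffered until a dependency resolves

    def on_update(self, flow, new_version, dep_resolved):
        """Install an update only if its dependency is resolved and the version
        advances monotonically; otherwise buffer or reject it."""
        current = self.version.get(flow, 0)
        if new_version <= current:
            return "rejected: stale or duplicate update"
        if not dep_resolved:
            self.pending.setdefault(flow, []).append(new_version)
            return "buffered: waiting for a neighbor to resolve the dependency"
        self.version[flow] = new_version
        return "installed"

    def on_neighbor_resolved(self, flow):
        """A neighbor announced that its part of the update is done; drain buffer."""
        return [self.on_update(flow, v, dep_resolved=True)
                for v in sorted(self.pending.pop(flow, []))]

s1 = Switch("s1")
print(s1.on_update("F", 2, dep_resolved=False))  # buffered
print(s1.on_update("F", 1, dep_resolved=True))   # installed (version 1)
print(s1.on_neighbor_resolved("F"))              # ['installed'] for version 2
print(s1.on_update("F", 2, dep_resolved=True))   # rejected: stale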

1 citation

Journal ArticleDOI
TL;DR: This work proposes an end-to-end multi-task training network for semi-supervised Re-ID, named Multiple Branch Network (MBN), which is optimized by mining the object classification loss, exclusive loss, and PS loss simultaneously.
Abstract: We focus on the one-example person re-identification (Re-ID) task, where each identity has only one labeled example along with many unlabeled examples. Since each identity has only one labeled example, the number of initially labeled examples is small, and the body parts of a person are not aligned due to changes in pose and camera angle. Therefore, learning discriminative information from the labeled and unlabeled examples is challenging. To overcome these problems, we propose an end-to-end multi-task training network for semi-supervised Re-ID. First, we impose a part segmentation (PS) constraint on feature maps, forcing a module to predict part labels from the feature maps and enhance alignment. Second, we carefully design a network named Multiple Branch Network (MBN). MBN is a multi-branch deep network architecture consisting of one branch for global feature representation and two branches for local feature representation, covering horizontal stripes representation and PS representation, respectively. Finally, loss function fusion is designed to learn discriminative features for semi-supervised Re-ID. Specifically, the MBN model is optimized by mining the object classification loss, exclusive loss, and PS loss simultaneously. We validate the effectiveness of our approach by demonstrating its superiority over the state-of-the-art methods on the standard benchmark datasets, including Market-1501 and DukeMTMC-reID. Notably, the rank-1 accuracy of our method outperforms the state-of-the-art method by 15.9 points (absolute, i.e., 71.7% vs. 55.8%) on Market-1501 and 8.9 points on DukeMTMC-reID.
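A hedged sketch of the multi-branch architecture described in the abstract is given below in PyTorch-style Python; the backbone, layer sizes, number of parts, and loss composition are illustrative assumptions rather than the authors' exact MBN configuration.

# Illustrative sketch only: a shared backbone feeding a global branch, a
# per-stripe local branch, and a part-segmentation head, mirroring the
# described loss fusion at a high level.
import torch
import torch.nn as nn

class MultiBranchNet(nn.Module):
    def __init__(self, num_ids, feat_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(                 # shared feature extractor
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((6, 2)))
        self.global_branch = nn.Linear(64 * 6 * 2, feat_dim)  # whole-body feature
        self.stripe_branch = nn.Linear(64 * 2, feat_dim)      # per-horizontal-stripe
        self.part_head = nn.Conv2d(64, 5, 1)                  # part-segmentation logits
        self.classifier = nn.Linear(feat_dim, num_ids)

    def forward(self, x):
        f = self.backbone(x)                                   # (B, 64, 6, 2)
        g = self.global_branch(f.flatten(1))                   # global feature
        stripes = [self.stripe_branch(f[:, :, i, :].flatten(1))
                   for i in range(f.shape[2])]                 # six stripe features
        parts = self.part_head(f)                              # part labels per cell
        return self.classifier(g), g, stripes, parts

# Training would sum an identity classification loss on the logits, an
# exclusive loss over unlabeled examples' features, and a part-segmentation
# loss on `parts`, as the abstract's loss fusion describes.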
References
Proceedings ArticleDOI
13 Aug 2012
TL;DR: This paper introduces the notion of consistent network updates---updates that are guaranteed to preserve well-defined behaviors when transitioning between configurations, and identifies two distinct consistency levels, per-packet and per-flow.
Abstract: Configuration changes are a common source of instability in networks, leading to outages, performance disruptions, and security vulnerabilities. Even when the initial and final configurations are correct, the update process itself often steps through intermediate configurations that exhibit incorrect behaviors. This paper introduces the notion of consistent network updates---updates that are guaranteed to preserve well-defined behaviors when transitioning between configurations. We identify two distinct consistency levels, per-packet and per-flow, and we present general mechanisms for implementing them in Software-Defined Networks using switch APIs like OpenFlow. We develop a formal model of OpenFlow networks, and prove that consistent updates preserve a large class of properties. We describe our prototype implementation, including several optimizations that reduce the overhead required to perform consistent updates. We present a verification tool that leverages consistent updates to significantly reduce the complexity of checking the correctness of network control software. Finally, we describe the results of some simple experiments demonstrating the effectiveness of these optimizations on example applications.
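The per-packet consistent (two-phase) update mechanism described in the abstract can be sketched as follows; the data structures and names are illustrative, not the paper's OpenFlow implementation.

# Minimal sketch: internal switches keep rules for both configuration versions
# keyed by a version tag, ingress switches stamp each packet with exactly one
# version, and old rules are removed only after all old-tagged packets drain.

class VersionedSwitch:
    def __init__(self):
        self.rules = {}    # (version, match) -> action

    def install(self, version, match, action):
        self.rules[(version, match)] = action

    def forward(self, packet):
        # A packet is handled entirely by the version it was stamped with,
        # so it sees either the old or the new configuration, never a mix.
        return self.rules[(packet["version"], packet["dst"])]

    def retire(self, version):
        self.rules = {k: v for k, v in self.rules.items() if k[0] != version}

sw = VersionedSwitch()
sw.install(1, "10.0.0.0/8", "port-1")      # old configuration
sw.install(2, "10.0.0.0/8", "port-2")      # new configuration, pre-installed

old_pkt = {"version": 1, "dst": "10.0.0.0/8"}
new_pkt = {"version": 2, "dst": "10.0.0.0/8"}
assert sw.forward(old_pkt) == "port-1"     # in-flight packets keep the old path
assert sw.forward(new_pkt) == "port-2"     # newly stamped packets take the new path
sw.retire(1)                               # phase two: drop old rules once drained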

656 citations


"Distributed Consistent Network Upda..." refers background in this paper

  • ...[11]1: the routing path for flow F is updated in reverse, where the destination informs its predecessor on the path to update its rules for F ′, which in turn informs its predecessor, and so...

    [...]

  • ...The 2-phase commit scheme in [11] updates the forwarding for a flow F to F ′ as follows: The new flow rules for F ′ are distributed in the network, and once ack’ed to the controller, the controller informs the packet source to from now on tag all flow packets with F ′, instead of the previous tag of F ....

    [...]

  • ...Observe that in the previous section, our approach moreover guaranteed so-called per-packet consistency [11], where a packet will either take the old F or the new F ′ path, but never a mix of both....

    [...]

Proceedings ArticleDOI
16 Aug 2013
TL;DR: This paper establishes a connection to the field of local algorithms and distributed computing, and shows that existing local algorithms can be used to develop efficient coordination protocols in which each controller only needs to respond to events that take place in its local neighborhood.
Abstract: Large SDN networks will be partitioned in multiple controller domains; each controller is responsible for one domain, and the controllers of adjacent domains may need to communicate to enforce global policies. This paper studies the implications of the local network view of the controllers. In particular, we establish a connection to the field of local algorithms and distributed computing, and discuss lessons for the design of a distributed control plane. We show that existing local algorithms can be used to develop efficient coordination protocols in which each controller only needs to respond to events that take place in its local neighborhood. However, while existing algorithms can be used, SDN networks also suggest a new approach to the study of locality in distributed computing. We introduce the so-called supported locality model of distributed computing. The new model is more expressive than the classical models that are commonly used in the design and analysis of distributed algorithms, and it is a better match with the features of SDN networks.

223 citations


"Distributed Consistent Network Upda..." refers background or methods in this paper

  • ...We follow standard assumptions [5], [6], [7] in our work, both regarding the network and the local verification model....

    [...]

  • ...We will also leverage a connection [5]...

    [...]

  • ...We propose and investigate the use of distributed mechanisms based on local proof labeling systems [5], to propagate and...

    [...]

  • ...The idea to leverage proof labeling schemes for verification purposes in SDNs was first investigated in [5], joined with consistent updates for destination-based routing in [6]....

    [...]

  • ...Related in this context is also the idea of local fixing [14] or preprocessing in distributed control planes in general [5], [19], [20]....

    [...]

Journal ArticleDOI
TL;DR: The approach separates the configuration design from the verification, which allows for a more modular design of algorithms, and has the potential to aid in verifying properties even when the original design of the structures for maintaining them was done without verification in mind.
Abstract: This paper addresses the problem of locally verifying global properties. Several natural questions are studied, such as “how expensive is local verification?” and more specifically, “how expensive is local verification compared to computation?” A suitable model is introduced in which these questions are studied in terms of the number of bits a vertex needs to communicate. The model includes the definition of a proof labeling scheme (a pair of algorithms: one to assign the labels, and one to use them to verify that the global property holds). In addition, approaches are presented for the efficient construction of schemes, and upper and lower bounds are established on the bit complexity of schemes for multiple basic problems. The paper also studies the role and cost of unique identities in terms of impossibility and complexity, in the context of proof labeling schemes. Previous studies on related questions deal with distributed algorithms that simultaneously compute a configuration and verify that this configuration has a certain desired property. It turns out that this combined approach enables the verification to be less costly sometimes, since the configuration is typically generated so as to be easily verifiable. In contrast, our approach separates the configuration design from the verification. That is, it first generates the desired configuration without bothering with the need to verify it, and then handles the task of constructing a suitable verification scheme. Our approach thus allows for a more modular design of algorithms, and has the potential to aid in verifying properties even when the original design of the structures for maintaining them was done without verification in mind.
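A textbook-style sketch of a proof labeling scheme, in the spirit of the prover/verifier pair described above, is shown below for verifying a rooted spanning tree; the construction and names are illustrative and are not taken from this paper.

# Prover: label every node with (root id, hop distance to the root).
# Verifier: each node checks only its own label against its neighbors' labels;
# if the claimed tree is broken, at least one node's local check fails.

def assign_labels(parent, root):
    """Prover side: compute (root, distance) labels along parent pointers."""
    labels = {root: (root, 0)}
    def dist(v):
        if v not in labels:
            labels[v] = (root, dist(parent[v])[1] + 1)
        return labels[v]
    for v in parent:
        dist(v)
    return labels

def verify_at(v, parent, neighbors, labels):
    """Verifier side: purely local checks over v's own neighborhood."""
    root_v, d_v = labels[v]
    if not all(labels[u][0] == root_v for u in neighbors[v]):  # agree on the root
        return False
    if parent.get(v) is None:
        return d_v == 0                                        # the root itself
    return parent[v] in neighbors[v] and labels[parent[v]][1] == d_v - 1

# Example: a path r - a - b, rooted at r; every local check passes.
parent = {"a": "r", "b": "a"}
neighbors = {"r": ["a"], "a": ["r", "b"], "b": ["a"]}
labels = assign_labels(parent, "r")
assert all(verify_at(v, parent, neighbors, labels) for v in neighbors)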

217 citations

Proceedings ArticleDOI
21 Nov 2013
TL;DR: This work argues for the development of efficient methods to update the data plane state of an SDN, while maintaining desired consistency properties, and develops an update algorithm that has provably minimal dependency structure.
Abstract: We argue for the development of efficient methods to update the data plane state of an SDN, while maintaining desired consistency properties (e.g., no packet should be dropped). We highlight the inherent trade-off between the strength of the consistency property and the dependencies it imposes among rules at different switches; these dependencies fundamentally limit how quickly the data plane can be updated. For one basic consistency property---no packet should loop---we develop an update algorithm that has provably minimal dependency structure. We also sketch a general architecture for consistent updates that separates the twin concerns of consistency and efficiency.
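A hedged sketch of dependency-respecting scheduling is given below; it uses the simple sufficient rule of updating switches in order of increasing distance to the destination on the new path, which avoids transient loops for destination-based routing, but it is not the paper's provably dependency-minimal algorithm.

# Update switches closest to the destination first: a switch is re-pointed
# only after everything downstream on its new path already uses updated or
# unchanged rules, so no intermediate mix of rules can form a loop.

def downstream_first_schedule(new_next_hop, destination):
    """Return update rounds; switches with smaller new distance go first."""
    depth = {}
    def d(v):
        if v == destination:
            return 0
        if v not in depth:
            depth[v] = d(new_next_hop[v]) + 1
        return depth[v]
    for v in new_next_hop:
        d(v)
    rounds = {}
    for v, k in depth.items():
        rounds.setdefault(k, []).append(v)
    return [sorted(rounds[k]) for k in sorted(rounds)]

# Example: new path s3 -> s2 -> s1 -> d; s1 is updated before s2, s2 before s3.
print(downstream_first_schedule({"s1": "d", "s2": "s1", "s3": "s2"}, "d"))
# [['s1'], ['s2'], ['s3']]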

184 citations


"Distributed Consistent Network Upda..." refers background in this paper

  • ...freedom, requires many interactions with the SDN controller in the worst case [2], [3], unless one resorts to packet header rewriting....

    [...]

Proceedings ArticleDOI
10 Apr 2016
TL;DR: It is proved that deciding what flows need to be removed is an NP-hard optimization problem with no PTAS possible unless P = NP, and the maximum increase can be approximated arbitrarily well in polynomial time.
Abstract: We study consistent migration of flows, with special focus on software defined networks. Given a current and a desired network flow configuration, we give the first polynomial-time algorithm to decide if a congestion-free migration is possible. However, if all flows must be integer or are unsplittable, this is NP-hard to decide. A similar problem is providing increased bandwidth to an application, while keeping all other flows in the network, but possibly migrating them consistently to other paths. We show that the maximum increase can be approximated arbitrarily well in polynomial time. Current methods as RSVP-TE consider unsplittable flows and remove flows of lesser importance in order to increase bandwidth for an application: We prove that deciding what flows need to be removed is an NP-hard optimization problem with no PTAS possible unless P = NP.
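As a toy illustration of the migration question (not the paper's algorithms), the Python sketch below brute-forces whether some order of moving unsplittable flows keeps every edge within capacity after each step; the exhaustive search is deliberate, since the abstract notes the unsplittable variant is NP-hard, whereas the splittable case admits a polynomial-time decision. All names and the example instance are invented.

# Each flow is moved atomically from its old path to its new path; we test
# every migration order and accept one whose intermediate states never
# exceed any edge capacity.
from itertools import permutations

def feasible_order(flows, capacity):
    """flows: name -> (old_edges, new_edges, demand); capacity: edge -> cap."""
    def within_capacity(assignment):
        use = {e: 0.0 for e in capacity}
        for old, new, demand in assignment:
            for e in (new if new is not None else old):
                use[e] += demand
        return all(use[e] <= capacity[e] + 1e-9 for e in capacity)

    names = list(flows)
    for order in permutations(names):
        state = {n: None for n in names}          # None = still on the old path
        ok = True
        for n in order:
            state[n] = flows[n][1]                # migrate flow n to its new path
            snapshot = [(flows[m][0], state[m], flows[m][2]) for m in names]
            if not within_capacity(snapshot):
                ok = False
                break
        if ok:
            return order
    return None

# Two unit flows that would transiently share the capacity-1 edge "m":
flows = {"F1": (["a"], ["m"], 1.0), "F2": (["m"], ["b"], 1.0)}
capacity = {"a": 1.0, "b": 1.0, "m": 1.0}
print(feasible_order(flows, capacity))   # ('F2', 'F1'): move F2 off "m" first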

96 citations