Proceedings ArticleDOI

On the Resiliency of Randomized Routing Against Multiple Edge Failures

TL;DR: In this article, the authors study the Static-Routing-Resiliency problem, motivated by routing on the Internet, and propose a randomized routing algorithm that has expected number of hops O(|V|k) if at most k-1 edges fail, which reduces to O(|V|) if only a fraction t of the links fail.
Abstract: We study the Static-Routing-Resiliency problem, motivated by routing on the Internet: Given a graph G = (V,E), a unique destination vertex d, and an integer constant c > 0, does there exist a static and destination-based routing scheme such that the correct delivery of packets from any source s to the destination d is guaranteed so long as (1) no more than c edges fail and (2) there exists a physical path from s to d? We begin by relating the edge-connectivity of a graph, i.e., the minimum number of edges whose deletion partitions G, to its resiliency. Following the success of randomized routing algorithms on a variety of problems (e.g., Valiant load balancing in the network design problem), we study randomized routing algorithms for the Static-Routing-Resiliency problem. For any k-connected graph, we show a surprisingly simple randomized algorithm that has expected number of hops O(|V|k) if at most k-1 edges fail, which reduces to O(|V|) if only a fraction t of the links fail (where t < 1 is a constant). Furthermore, our algorithm is deterministic if the routing does not encounter any failed link.
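The abstract's scheme boils down to letting a packet fall back to a random walk over live incident links until it reaches d. The sketch below is a toy model of my own (graph and failure set invented for illustration), not the paper's exact algorithm:

```python
import random

def random_failover_walk(adj, failed, src, dst, max_hops=100000):
    """Toy model: at each node, forward along a uniformly random live
    incident edge. Returns the hop count, or None if the packet is stuck
    or the hop budget runs out."""
    v, hops = src, 0
    while v != dst and hops < max_hops:
        live = [u for u in adj[v] if frozenset((v, u)) not in failed]
        if not live:
            return None  # no live incident edge at v
        v = random.choice(live)
        hops += 1
    return hops if v == dst else None

# 4-connected example: complete graph K5, destination 4, three failed edges.
adj = {v: [u for u in range(5) if u != v] for v in range(5)}
failed = {frozenset(e) for e in [(0, 1), (0, 2), (1, 2)]}
random.seed(7)
print(random_failover_walk(adj, failed, src=0, dst=4))
```

Since the destination remains a live neighbor of every vertex here, the walk terminates quickly in expectation, in the flavor of the O(|V|k) bound.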
Citations
Journal ArticleDOI
TL;DR: This paper presents a systematic algorithmic study of the resiliency of forwarding tables in a variety of models (i.e., deterministic/probabilistic routing, with packet-header-rewriting, with packet-duplication), and shows that resiliency to four simultaneous link failures, with limited path stretch, can be achieved without any packet modification/duplication or randomization.
Abstract: Fast reroute and other forms of immediate failover have long been used to recover from certain classes of failures without invoking the network control plane. While the set of such techniques is growing, the level of resiliency to failures that this approach can provide is not adequately understood. In this paper, we embark upon a systematic algorithmic study of the resiliency of forwarding tables in a variety of models (i.e., deterministic/probabilistic routing, with packet-header-rewriting, with packet-duplication). Our results show that the resiliency of a routing scheme depends on the "connectivity" k of a network, i.e., the minimum number of link deletions that partition a network. We complement our theoretical results with extensive simulations. We show that resiliency to four simultaneous link failures, with limited path stretch, can be achieved without any packet modification/duplication or randomization. Furthermore, our routing schemes provide resiliency against k-1 failures, with limited path stretch, by storing log(k) bits in the packet header, with limited packet duplication, or with a randomized forwarding technique.

53 citations

Proceedings ArticleDOI
01 Jan 2017
TL;DR: Genesis is a datacenter network management system which allows policies to be specified in a declarative manner without explicitly programming the network data plane, and uses the formal foundations of constraint solving in combination with fast off-the-shelf SMT solvers.
Abstract: Operators in multi-tenant cloud datacenters require support for diverse and complex end-to-end policies, such as reachability, middlebox traversals, isolation, traffic engineering, and network resource management. We present Genesis, a datacenter network management system which allows policies to be specified in a declarative manner without explicitly programming the network data plane. Genesis tackles the problem of enforcing policies by synthesizing switch forwarding tables. It uses the formal foundations of constraint solving in combination with fast off-the-shelf SMT solvers. To improve synthesis performance, Genesis incorporates a novel search strategy that uses regular expressions to specify properties that leverage the structure of datacenter networks, and a divide-and-conquer synthesis procedure which exploits the structure of policy relationships. We have prototyped Genesis, and conducted experiments with a variety of workloads on real-world topologies to demonstrate its performance.
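Genesis's actual synthesis uses SMT solvers; as a hedged illustration of the same "forwarding tables as solutions to constraints" framing, the brute-force toy below (topology, switch names, and policy are all invented for the example) enumerates per-switch next hops until a table satisfies a reachability-plus-waypoint policy:

```python
from itertools import product

# Hypothetical 4-switch topology; names are illustrative, not from Genesis.
links = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def trace(next_hop, src, dst):
    """Follow the synthesized forwarding table from src; return the path."""
    path, v = [src], src
    while v != dst and v in next_hop and len(path) <= len(links):
        v = next_hop[v]
        path.append(v)
    return path

def synthesize(src, dst, waypoint):
    """Enumerate forwarding tables (one next hop per switch) and keep the
    first whose path reaches dst via the required waypoint."""
    switches = [s for s in links if links[s]]
    for choice in product(*(links[s] for s in switches)):
        table = dict(zip(switches, choice))
        path = trace(table, src, dst)
        if path[-1] == dst and waypoint in path:
            return table, path
    return None

table, path = synthesize("A", "D", waypoint="C")
print(path)  # a path from A to D through the waypoint C
```

Genesis replaces this exponential enumeration with constraint solving plus the search strategies the abstract names, but the specification-versus-mechanism split is the same.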

46 citations

Journal ArticleDOI
TL;DR: This survey presents a systematic, tutorial-like overview of packet-based fast-recovery mechanisms in the data plane, focusing on concepts but structured around different networking technologies, from traditional link-layer and IP-based mechanisms, through BGP and MPLS, to emerging software-defined networks and programmable data planes.
Abstract: In order to meet their stringent dependability requirements, most modern packet-switched communication networks support fast-recovery mechanisms in the data plane. While reactions to failures in the data plane can be significantly faster compared to control plane mechanisms, implementing fast recovery in the data plane is challenging, and has recently received much attention in the literature. This survey presents a systematic, tutorial-like overview of packet-based fast-recovery mechanisms in the data plane, focusing on concepts but structured around different networking technologies, from traditional link-layer and IP-based mechanisms, through BGP and MPLS, to emerging software-defined networks and programmable data planes. We examine the evolution of fast-recovery standards and mechanisms over time, and identify and discuss the fundamental principles and algorithms underlying different mechanisms. We then present a taxonomy of the state of the art, summarize the main lessons learned, and propose a few concrete future directions.

42 citations

Journal ArticleDOI
TL;DR: In this paper, a broad overview of the basic theoretical background pertaining to digital quantum simulations is presented, with a focus on the hardware-dependent mapping of spin-type Hamiltonians into the corresponding quantum circuit model.
Abstract: The past few years have witnessed the concrete and fast spreading of quantum technologies for practical computation and simulation. In particular, quantum computing platforms based on either trapped ions or superconducting qubits have become available for simulations and benchmarking, with up to a few tens of qubits that can be reliably initialized, controlled, and measured. The present review aims at giving a comprehensive outlook on the state-of-the-art capabilities offered by these near-term noisy devices as universal quantum simulators, i.e. programmable quantum computers potentially able to digitally simulate the time evolution of many physical models. First, we give a broad overview of the basic theoretical background pertaining to digital quantum simulations, with a focus on the hardware-dependent mapping of spin-type Hamiltonians into the corresponding quantum circuit model. Then, we review the main experimental achievements obtained in the last decade, mostly employing the two leading technological platforms. We compare their performances and outline future challenges, also in terms of prospective hybrid technologies, towards the ultimate goal of reaching the long-sought quantum advantage from the simulation of complex many-body models in the physical sciences.

36 citations

Journal ArticleDOI
TL;DR: In this paper, the authors show that m/n = 1 is a sharp satisfiability threshold for constrained k-XORSAT for every k ≥ 3, extend this to unconstrained k-XORSAT via the 2-core of a random k-uniform hypergraph, and narrow the phase-transition window for the constrained model.
Abstract: We consider "unconstrained" random $k$-XORSAT, which is a uniformly random system of $m$ linear non-homogeneous equations in $\mathbb{F}_2$ over $n$ variables, each equation containing $k \geq 3$ variables, and also consider a "constrained" model where every variable appears in at least two equations. Dubois and Mandler proved that $m/n=1$ is a sharp threshold for satisfiability of constrained 3-XORSAT, and analyzed the 2-core of a random 3-uniform hypergraph to extend this result to find the threshold for unconstrained 3-XORSAT. We show that $m/n=1$ remains a sharp threshold for satisfiability of constrained $k$-XORSAT for every $k\ge 3$, and we use standard results on the 2-core of a random $k$-uniform hypergraph to extend this result to find the threshold for unconstrained $k$-XORSAT. For constrained $k$-XORSAT we narrow the phase transition window, showing that $m-n \to -\infty$ implies almost-sure satisfiability, while $m-n \to +\infty$ implies almost-sure unsatisfiability.
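Satisfiability of a k-XORSAT instance is just consistency of a linear system over GF(2), so it can be checked by Gaussian elimination; a sketch (instance sizes are arbitrary choices for illustration):

```python
import random

def gf2_solvable(eqs):
    """Each equation is (mask, b): mask is a bitmask of the variables
    XORed together, b the right-hand side. Returns True iff the linear
    system over GF(2) is consistent."""
    basis = {}  # pivot bit -> reduced (mask, b)
    for mask, b in eqs:
        while mask:
            p = mask & -mask          # lowest set bit as pivot candidate
            if p not in basis:
                basis[p] = (mask, b)  # new pivot row
                break
            pm, pb = basis[p]
            mask, b = mask ^ pm, b ^ pb
        else:
            if b:                     # equation reduced to 0 = 1
                return False
    return True

def random_kxorsat(n, m, k, rng):
    """Uniformly random k-XORSAT instance: m equations over n variables."""
    return [(sum(1 << v for v in rng.sample(range(n), k)), rng.randrange(2))
            for _ in range(m)]

rng = random.Random(0)
# Below the m/n = 1 threshold, instances are satisfiable w.h.p.
print(gf2_solvable(random_kxorsat(n=40, m=20, k=3, rng=rng)))
```

Repeating the experiment with m well above n makes unsatisfiable instances the norm, matching the m/n = 1 threshold in the abstract.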

31 citations

References
Journal ArticleDOI
TL;DR: There is a distributed randomized algorithm that can route every packet to its destination without two packets passing down the same wire at any one time, and that finishes within time O(log N) with overwhelming probability for all such routing requests.
Abstract: Consider $N = 2^n $ nodes connected by wires to make an n-dimensional binary cube. Suppose that initially the nodes contain one packet each addressed to distinct nodes of the cube. We show that the...

675 citations

01 May 2005
TL;DR: This document defines RSVP-TE extensions to establish backup label-switched path (LSP) tunnels for local repair of LSP tunnels, allowing nodes to implement either method or both and to interoperate in a mixed network.
Abstract: This document defines RSVP-TE extensions to establish backup label-switched path (LSP) tunnels for local repair of LSP tunnels. These mechanisms enable the re-direction of traffic onto backup LSP tunnels in tens of milliseconds, in the event of a failure. Two methods are defined here. The one-to-one backup method creates detour LSPs for each protected LSP at each potential point of local repair. The facility backup method creates a bypass tunnel to protect a potential failure point; by taking advantage of MPLS label stacking, this bypass tunnel can protect a set of LSPs that have similar backup constraints. Both methods can be used to protect links and nodes during network failure. The described behavior and extensions to RSVP allow nodes to implement either method or both and to interoperate in a mixed network. [STANDARDS-TRACK]

649 citations

Journal ArticleDOI
TL;DR: It is argued that flooding schemes have significant drawbacks for such networks, and a general class of distributed algorithms for establishing new loop-free routes to the station for any node left without a route due to changes in the network topology is proposed.
Abstract: We consider the problem of maintaining communication between the nodes of a data network and a central station in the presence of frequent topological changes as, for example, in mobile packet radio networks. We argue that flooding schemes have significant drawbacks for such networks, and propose a general class of distributed algorithms for establishing new loop-free routes to the station for any node left without a route due to changes in the network topology. By virtue of built-in redundancy, the algorithms are typically activated very infrequently and, even when they are, they do not involve any communication within the portion of the network that has not been materially affected by a topological change.

386 citations

Proceedings ArticleDOI
27 Aug 2007
TL;DR: This work proposes Failure-Carrying Packets (FCP), a technique that allows data packets to autonomously discover a working path without requiring completely up-to-date state in routers, and shows that it provides better routing guarantees under failures while maintaining less state at the routers.
Abstract: Current distributed routing paradigms (such as link-state, distance-vector, and path-vector) involve a convergence process consisting of an iterative exploration of intermediate routes triggered by certain events such as link failures. The convergence process increases router load, introduces outages and transient loops, and slows reaction to failures. We propose a new routing paradigm where the goal is not to reduce the convergence times but rather to eliminate the convergence process completely. To this end, we propose a technique called Failure-Carrying Packets (FCP) that allows data packets to autonomously discover a working path without requiring completely up-to-date state in routers. Our simulations, performed using real-world failure traces and Rocketfuel topologies, show that: (a) the overhead of FCP is very low, (b) unlike traditional link-state routing (such as OSPF), FCP can provide both low loss-rate as well as low control overhead, and (c) compared to prior work in backup path pre-computations, FCP provides better routing guarantees under failures despite maintaining less state at the routers.
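The core FCP idea, simplified here to its essentials (this is my own condensation, not the protocol's full packet format or state machine): the packet accumulates the failed links it encounters, and each router forwards along the shortest path in the graph minus that carried set:

```python
import heapq

def shortest_next_hop(adj, src, dst, failed):
    """Dijkstra on the graph minus the carried failed links; returns the
    first hop of a shortest src->dst path, or None if dst is unreachable."""
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist.get(v, float("inf")):
            continue
        for u, w in adj[v]:
            if frozenset((v, u)) in failed:
                continue
            if d + w < dist.get(u, float("inf")):
                dist[u], prev[u] = d + w, v
                heapq.heappush(pq, (d + w, u))
    if dst not in dist:
        return None
    v = dst
    while prev[v] != src:
        v = prev[v]
    return v

def fcp_forward(adj, src, dst, down):
    """Deliver a packet that accumulates the failed links it runs into."""
    carried, path, v = set(), [src], src
    while v != dst:
        nxt = shortest_next_hop(adj, v, dst, carried)
        if nxt is None:
            return None          # no path even with the known failures
        link = frozenset((v, nxt))
        if link in down:         # the chosen link is actually dead:
            carried.add(link)    # record it in the packet and recompute
            continue
        v = nxt
        path.append(v)
    return path

# Toy topology: unit-weight ring 0-1-2-3-0 with link 0-1 down.
adj = {0: [(1, 1), (3, 1)], 1: [(0, 1), (2, 1)],
       2: [(1, 1), (3, 1)], 3: [(2, 1), (0, 1)]}
print(fcp_forward(adj, 0, 2, down={frozenset((0, 1))}))  # [0, 3, 2]
```

Because the failure set travels with the packet, no convergence round is needed: routers with stale state still forward consistently with respect to the failures the packet has seen.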

183 citations

Journal ArticleDOI
TL;DR: For x and y vertices of a connected graph G, let TG(x, y) denote the expected time before a random walk starting from x reaches y; the authors determine, for each n > 0, the n-vertex graph G and vertices x and y for which TG(x, y) is maximized.
Abstract: For x and y vertices of a connected graph G, let TG(x, y) denote the expected time before a random walk starting from x reaches y. We determine, for each n > 0, the n-vertex graph G and vertices x and y for which TG(x, y) is maximized. The extremal graph consists of a clique on ⌊(2n + 1)/3⌋ (or ⌈(2n − 2)/3⌉) vertices, including x, to which a path on the remaining vertices, ending in y, has been attached; the expected time TG(x, y) to reach y from x in this graph is approximately 4n3/27. © 1990 Wiley Periodicals, Inc.
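Expected hitting times satisfy the linear system T(dst) = 0 and T(v) = 1 + (1/deg(v)) Σ_{u~v} T(u), so the extremal behavior can be checked numerically. The sketch below (graph sizes are arbitrary choices) solves this system for a plain path and for a clique-plus-path "lollipop" of the kind the abstract describes:

```python
def hitting_time(adj, src, dst):
    """Expected steps for a random walk from src to reach dst, by solving
    T(v) - (1/deg v) * sum_{u~v, u != dst} T(u) = 1 with Gauss-Jordan."""
    verts = [v for v in adj if v != dst]
    idx = {v: i for i, v in enumerate(verts)}
    n = len(verts)
    A = [[0.0] * (n + 1) for _ in range(n)]  # augmented system
    for v in verts:
        i = idx[v]
        A[i][i] = 1.0
        A[i][n] = 1.0
        for u in adj[v]:
            if u != dst:
                A[i][idx[u]] -= 1.0 / len(adj[v])
    for col in range(n):  # Gauss-Jordan with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col and A[r][col] != 0.0:
                f = A[r][col] / A[col][col]
                for c in range(col, n + 1):
                    A[r][c] -= f * A[col][c]
    return A[idx[src]][n] / A[idx[src]][idx[src]]

def path_graph(n):
    return {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}

def lollipop(n, clique):
    """Clique on {0..clique-1} plus a pendant path ending at n-1."""
    adj = {v: [u for u in range(clique) if u != v] for v in range(clique)}
    for v in range(clique, n):
        adj[v] = []
    for v in range(clique - 1, n - 1):
        adj[v].append(v + 1)
        adj[v + 1].append(v)
    return adj

n = 15
print(hitting_time(path_graph(n), 0, n - 1))    # (n-1)^2 = 196, up to round-off
print(hitting_time(lollipop(n, 10), 0, n - 1))  # much larger: Theta(n^3) growth
```

For the path the answer is exactly (n-1)^2, a classical result, while the lollipop shape already exhibits the cubic growth behind the ~4n^3/27 maximum.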

129 citations