
Showing papers by Marco Chiesa published in 2021


Journal ArticleDOI
TL;DR: This survey presents a systematic, tutorial-like overview of packet-based fast-recovery mechanisms in the data plane, focusing on concepts but structured around different networking technologies, from traditional link-layer and IP-based mechanisms, over BGP and MPLS to emerging software-defined networks and programmable data planes.
Abstract: In order to meet their stringent dependability requirements, most modern packet-switched communication networks support fast-recovery mechanisms in the data plane. While reactions to failures in the data plane can be significantly faster compared to control plane mechanisms, implementing fast recovery in the data plane is challenging, and has recently received much attention in the literature. This survey presents a systematic, tutorial-like overview of packet-based fast-recovery mechanisms in the data plane, focusing on concepts but structured around different networking technologies, from traditional link-layer and IP-based mechanisms, over BGP and MPLS to emerging software-defined networks and programmable data planes. We examine the evolution of fast-recovery standards and mechanisms over time, and identify and discuss the fundamental principles and algorithms underlying different mechanisms. We then present a taxonomy of the state of the art, summarize the main lessons learned, and propose a few concrete future directions.
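
To make the survey's core idea concrete, here is a minimal Python sketch of the data-plane fast-failover pattern it covers: backup next-hops are pre-installed alongside the primary, so a failed port can be skipped at forwarding time without involving the control plane. All ports, prefixes, and values are illustrative assumptions, not taken from any specific mechanism in the survey.

    # Hypothetical per-port link status, maintained by the data plane itself.
    link_up = {"port1": False, "port2": True, "port3": True}  # port1 has just failed

    # Pre-computed forwarding entries: primary next-hop first, then backups.
    fib = {
        "10.0.0.0/24": ["port1", "port2", "port3"],
        "10.0.1.0/24": ["port2", "port1"],
    }

    def forward(prefix: str) -> str:
        """Return the first usable next-hop, falling back in the pre-installed order."""
        for port in fib[prefix]:
            if link_up[port]:
                return port
        raise RuntimeError("no usable next-hop: the packet would be dropped")

    print(forward("10.0.0.0/24"))  # -> port2: failover without recomputing routes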

42 citations


Book ChapterDOI
29 Mar 2021
TL;DR: In this article, the authors analyze and discuss the performance benefits and limitations of offloading a networking application to a network interface card (NIC).
Abstract: Network interface cards (NICs) are fundamental components of modern high-speed networked systems, supporting multi-100 Gbps speeds and increasing programmability. Offloading computation from a server’s CPU to a NIC frees a substantial amount of the server’s CPU resources, making NICs key to offer competitive cloud services. Therefore, understanding the performance benefits and limitations of offloading a networking application to a NIC is of paramount importance.

20 citations


Journal ArticleDOI
TL;DR: In this article, a Fast Re-Route (FRR) primitive for programmable data planes, PURR, is proposed, which provides low failover latency and high switch throughput by avoiding packet recirculation.
Abstract: Highly dependable communication networks usually rely on some kind of Fast Re-Route (FRR) mechanism which allows traffic to be quickly re-routed upon failures, entirely in the data plane. This paper studies the design of FRR mechanisms for emerging reconfigurable switches. Our main contribution is an FRR primitive for programmable data planes, PURR, which provides low failover latency and high switch throughput by avoiding packet recirculation. PURR tolerates multiple concurrent failures and comes with minimal memory requirements, ensuring compact forwarding tables, by unveiling an intriguing connection to classic “string theory” (i.e., stringology), and in particular, the shortest common supersequence problem. PURR is well-suited for high-speed match-action forwarding architectures (e.g., PISA) and supports the implementation of a broad variety of FRR mechanisms. Our simulations and prototype implementation (on an FPGA and a Tofino switch) show that PURR improves TCAM memory occupancy by a factor of $1.5\times$–$10.8\times$ compared to a naive encoding when implementing state-of-the-art FRR mechanisms. PURR also improves the latency and throughput of datacenter traffic by up to $2.8\times$–$5.5\times$ and $1.2\times$–$2\times$, respectively, compared to approaches based on recirculating packets.
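
The connection to stringology can be pictured with the shortest common supersequence itself: if each destination's failover behaviour is an ordered list of candidate ports, one supersequence that contains every list as a subsequence can serve all destinations from a single compact table. The Python sketch below computes an SCS for two hypothetical port sequences with the textbook dynamic program; it only illustrates the underlying problem (NP-hard for many sequences) and is not PURR's actual encoding.

    def scs(a, b):
        """Shortest common supersequence of two port sequences (classic DP)."""
        m, n = len(a), len(b)
        dp = [[0] * (n + 1) for _ in range(m + 1)]  # dp[i][j] = SCS length of a[:i], b[:j]
        for i in range(m + 1):
            for j in range(n + 1):
                if i == 0:
                    dp[i][j] = j
                elif j == 0:
                    dp[i][j] = i
                elif a[i - 1] == b[j - 1]:
                    dp[i][j] = dp[i - 1][j - 1] + 1
                else:
                    dp[i][j] = 1 + min(dp[i - 1][j], dp[i][j - 1])
        # Reconstruct one SCS by walking the table backwards.
        i, j, out = m, n, []
        while i > 0 and j > 0:
            if a[i - 1] == b[j - 1]:
                out.append(a[i - 1]); i -= 1; j -= 1
            elif dp[i - 1][j] <= dp[i][j - 1]:
                out.append(a[i - 1]); i -= 1
            else:
                out.append(b[j - 1]); j -= 1
        out.extend(reversed(a[:i])); out.extend(reversed(b[:j]))
        return list(reversed(out))

    # Two hypothetical failover sequences (preferred port first):
    print(scs([1, 3, 2], [3, 2, 4]))  # -> [1, 3, 2, 4]; both orders appear as subsequences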

12 citations


Proceedings ArticleDOI
26 Apr 2021
TL;DR: In this paper, predictive methods based on Holt-Winters exponential smoothing (HW) and Long Short-Term Memory (LSTM) were used to decrease CPU slack by over 40%.
Abstract: Private and public clouds require users to specify requests for resources such as CPU and memory (RAM) to be provisioned for their applications. The values of these requests do not necessarily relate to the application's run-time requirements, but only help the cloud infrastructure resource manager map requested resources to physical resources. If an application exceeds these values, it might be throttled or even terminated. As a consequence, requested values are often overestimated, resulting in poor resource utilization in the cloud infrastructure. Autoscaling is a technique used to overcome these problems. We observed that the Kubernetes Vertical Pod Autoscaler (VPA) may use an autoscaling strategy that performs poorly on workloads that change periodically. Our experimental results show that, compared to VPA, predictive methods based on Holt-Winters exponential smoothing (HW) and Long Short-Term Memory (LSTM) can decrease CPU slack by over 40% while avoiding CPU insufficiency for various CPU workloads. Furthermore, LSTM generated more stable predictions than HW, which allowed for more robust scaling decisions.
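
As a rough illustration of the predictive approach, the following self-contained Python sketch fits an additive Holt-Winters model to a periodic CPU-usage trace and sets the next CPU request to the forecast plus a safety margin. The smoothing parameters, the 12-sample period, and the 10% headroom are assumptions made for this example, not values from the paper.

    def holt_winters_forecast(series, season_len, alpha=0.5, beta=0.1, gamma=0.3, horizon=1):
        """Additive Holt-Winters: forecast `horizon` steps past the end of `series`."""
        # Initialise level, trend, and one season of seasonal components.
        level = sum(series[:season_len]) / season_len
        trend = (sum(series[season_len:2 * season_len]) - sum(series[:season_len])) / season_len ** 2
        seasonal = [series[i] - level for i in range(season_len)]
        for t in range(season_len, len(series)):
            s = seasonal[t % season_len]
            last_level = level
            level = alpha * (series[t] - s) + (1 - alpha) * (level + trend)
            trend = beta * (level - last_level) + (1 - beta) * trend
            seasonal[t % season_len] = gamma * (series[t] - level) + (1 - gamma) * s
        return level + horizon * trend + seasonal[(len(series) + horizon - 1) % season_len]

    # Hypothetical per-minute CPU usage (millicores) with a 12-sample period.
    usage = [200, 220, 400, 800, 900, 850, 600, 400, 300, 250, 220, 210] * 4
    forecast = holt_winters_forecast(usage, season_len=12)
    request = int(forecast * 1.1)  # assumed 10% headroom to avoid CPU insufficiency
    print(f"forecast={forecast:.0f}m, next CPU request={request}m")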

11 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose DeSI, a decentralized SDN-enabled IXP architecture that tackles the privacy, separation-of-responsibilities, and scalability concerns of centralizing Internet routing at IXPs.
Abstract: Applications of Software-Defined Networking (SDN) to Internet routing hold great promise for supporting the ever-growing performance requirements of Internet applications. The inherent centralization of these SDN approaches to Internet routing comes with the following concerns: 1) privacy: operators are reluctant to share private routing information; 2) separation of responsibilities: the Internet eXchange Point (IXP) running the centralized controller is involved in routing and forwarding at too many levels; and 3) scalability: the growing number of IP prefixes routed on the Internet (i.e., hundreds of thousands) poses extremely high requirements on both the control and data planes, e.g., several minutes for policy compilation and a large number of forwarding rules in SDN. In this paper, we propose DeSI to apply SDN at IXPs while addressing the above concerns. We break this centralization by devising an SDN-enabled IXP architecture in which each member connects to an SDN-enabled IXP through its own SDN controller and SDN switches, thus tackling the privacy, scalability, and separation-of-concerns issues. To spur adoption, we introduce an expressive, yet simple, language to configure the routing policies of the members. Our evaluation shows that DeSI needs n times fewer forwarding table entries for an IXP, where n is the number of IXP members. DeSI also makes it possible to migrate gradually to SDN-enabled IXPs.
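
The "n times fewer forwarding table entries" claim can be pictured with back-of-the-envelope arithmetic. The Python sketch below uses a deliberately simplified rule model and hypothetical member and prefix counts; it is not taken from the paper's evaluation and only shows why splitting state across members shrinks the per-device table by roughly the number of members.

    # All numbers below are hypothetical, for illustration only.
    n_members = 200             # number of IXP members
    prefixes_per_member = 4000  # prefixes each member applies policies to

    # Fully centralised SDN at the IXP: the shared fabric ends up holding entries
    # for every (member, prefix) combination.
    centralised_entries = n_members * prefixes_per_member

    # DeSI-style split: each member's own controller and switches keep only the
    # entries for their own prefixes.
    per_member_entries = prefixes_per_member

    ratio = centralised_entries // per_member_entries
    print(centralised_entries, per_member_entries, ratio)  # -> 800000 4000 200 (= n_members)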

5 citations


Proceedings ArticleDOI
07 Jun 2021
TL;DR: In this article, the authors investigate and compare the performance obtainable by different implementations of connection tracking using high-speed real traffic traces, and show that connection tracking is an expensive operation, achieving at most 24 Gbps on a single core.
Abstract: The rise of commodity servers equipped with high-speed network interface cards poses increasing demands on the efficient implementation of connection tracking, i.e., the task of associating the connection identifier of an incoming packet to the state stored for that connection. In this work, we thoroughly investigate and compare the performance obtainable by different implementations of connection tracking using high-speed real traffic traces. Based on a load balancer use case, our results show that connection tracking is an expensive operation, achieving at most 24 Gbps on a single core. Core-sharding and lock-free hash tables emerge as the only suitable multi-thread approaches for enabling 100 Gbps packet processing. In contrast to recent beliefs, we observe that newly proposed techniques to "lazily" delete connection states are not more effective than properly tuned traditional deletion techniques based on timer wheels.
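
To ground the terminology, here is a minimal Python sketch of core-sharded connection tracking with timer-wheel-based deletion: the connection identifier is hashed to a shard so each core works lock-free on its own table, and idle entries are expired by visiting one wheel slot at a time. The field names, timeout, and data structures are assumptions for illustration; the implementations compared in the paper are native, high-performance ones.

    import time

    NUM_SHARDS = 4     # one shard per core: no locks needed within a shard
    WHEEL_SLOTS = 64   # coarse timer wheel, one slot per second
    TIMEOUT_S = 30     # assumed idle timeout for a connection

    shards = [dict() for _ in range(NUM_SHARDS)]  # 5-tuple -> (state, last_seen)
    wheels = [[set() for _ in range(WHEEL_SLOTS)] for _ in range(NUM_SHARDS)]

    def shard_of(conn_id):
        """RSS-style sharding: mimic the hash the NIC uses to steer packets to cores."""
        return hash(conn_id) % NUM_SHARDS

    def track(conn_id, state, now=None):
        """Insert or refresh a connection and (re)arm its slot in the timer wheel."""
        now = time.monotonic() if now is None else now
        s = shard_of(conn_id)
        shards[s][conn_id] = (state, now)
        wheels[s][int(now + TIMEOUT_S) % WHEEL_SLOTS].add(conn_id)

    def expire(shard, now=None):
        """Walk the current wheel slot and delete only entries that are truly idle."""
        now = time.monotonic() if now is None else now
        slot = wheels[shard][int(now) % WHEEL_SLOTS]
        for conn_id in list(slot):
            entry = shards[shard].get(conn_id)
            if entry and now - entry[1] >= TIMEOUT_S:
                del shards[shard][conn_id]
                slot.discard(conn_id)

    # Example: a load balancer remembering which backend serves each connection.
    conn = ("10.0.0.1", 40001, "192.0.2.7", 443, "TCP")
    track(conn, state={"backend": "srv-3"})
    print(shards[shard_of(conn)][conn][0])  # -> {'backend': 'srv-3'}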

3 citations