Open Access Proceedings Article

Queues don't matter when you can JUMP them!

TLDR
It is shown that QJUMP achieves bounded latency and reduces in-network interference by up to 300×, outperforming Ethernet Flow Control (802.3x), ECN (WRED), and DCTCP, while achieving flow completion times close to or better than DCTCP and pFabric.
Abstract
QJUMP is a simple and immediately deployable approach to controlling network interference in datacenter networks. Network interference occurs when congestion from throughput-intensive applications causes queueing that delays traffic from latency-sensitive applications. To mitigate network interference, QJUMP applies Internet QoS-inspired techniques to datacenter applications. Each application is assigned to a latency sensitivity level (or class). Packets from higher levels are rate-limited in the end host, but once allowed into the network can "jump-the-queue" over packets from lower levels. In settings with known node counts and link speeds, QJUMP can support service levels ranging from strictly bounded latency (but with low rate) through to line-rate throughput (but with high latency variance). We have implemented QJUMP as a Linux Traffic Control module. We show that QJUMP achieves bounded latency and reduces in-network interference by up to 300×, outperforming Ethernet Flow Control (802.3x), ECN (WRED) and DCTCP. We also show that QJUMP improves average flow completion times, performing close to or better than DCTCP and pFabric.
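The end-host mechanism described in the abstract (rate-limit each latency class at the host, then let its packets jump queues in the network) can be sketched as a per-class token-bucket gate paired with a priority tag. This is a minimal illustration under assumed parameters, not the paper's exact epoch-based rate calculation or its Linux Traffic Control implementation; the class name, rates, and burst sizes are assumptions.

```python
import time

class QjumpClass:
    """Illustrative per-level sender gate in the spirit of QJUMP:
    higher-priority levels are throttled harder at the end host but
    their packets overtake lower levels inside the network.
    (Token-bucket parameters here are assumptions for illustration.)"""

    def __init__(self, priority, rate_bytes_per_s, burst_bytes):
        self.priority = priority        # higher value "jumps the queue"
        self.rate = rate_bytes_per_s    # sustained rate allowed into the fabric
        self.burst = burst_bytes        # bucket capacity; starts full
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def try_send(self, pkt_len, now=None):
        """Return True if a pkt_len-byte packet may enter the network now."""
        now = time.monotonic() if now is None else now
        # Refill the bucket for the elapsed interval, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_len:
            self.tokens -= pkt_len
            return True
        return False

# Latency-sensitive level: strictly limited rate, highest priority.
latency_level = QjumpClass(priority=7, rate_bytes_per_s=10_000, burst_bytes=1_500)
# Throughput-intensive level: line rate, lowest priority (assumed 10 Gb/s link).
bulk_level = QjumpClass(priority=0, rate_bytes_per_s=1_250_000_000, burst_bytes=64_000)
```

In the paper itself this split is enforced by a Linux Traffic Control module and the priority values map onto in-network priority queues; the sketch only captures the rate-limit/priority trade-off between service levels.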



Citations
Proceedings Article

Presto: Edge-based Load Balancing for Fast Datacenter Networks

TL;DR: Presto, a soft-edge load balancing scheme, is designed and implemented; its performance closely tracks that of a single, non-blocking switch over many workloads, and it adapts to failures and topology asymmetry.
Proceedings Article

Firmament: fast, centralized cluster scheduling at scale

TL;DR: Firmament is described, a centralized scheduler that scales to over ten thousand machines at sub-second placement latency even though it continuously reschedules all tasks via a min-cost max-flow (MCMF) optimization, and exceeds the placement quality of four widely-used centralized and distributed schedulers on a real-world cluster.
Journal Article

A Survey on Data Center Networking (DCN): Infrastructure and Operations

TL;DR: A systematic taxonomy and survey of recent research efforts on the DCN is presented, which proposes to classify these research efforts into two areas: 1) DCN infrastructure and 2)DCN operations.
Proceedings Article

Copa: Practical Delay-Based Congestion Control for the Internet

TL;DR: Copa, a practical delay-based congestion control algorithm for the Internet, introduces "TCP-mode switching": Copa normally maintains low delays, but switches to competing aggressively when a buffer-filling flow shares the bottleneck, avoiding the low throughput that delay-sensitive schemes typically suffer in that setting.
Proceedings Article

Homa: a receiver-driven low-latency transport protocol using network priorities

TL;DR: Homa uses in-network priority queues to ensure low latency for short messages; priority allocation is managed dynamically by each receiver and integrated with a receiver-driven flow control mechanism.
References

An Architecture for Differentiated Service

TL;DR: An architecture for implementing scalable service differentiation in the Internet achieves scalability by aggregating traffic classification state which is conveyed by means of IP-layer packet marking using the DS field [DSFIELD].
Journal Article

A generalized processor sharing approach to flow control in integrated services networks: the multiple node case

TL;DR: Worst-case bounds on delay and backlog are derived for leaky bucket constrained sessions in arbitrary topology networks of generalized processor sharing (GPS) servers and the effectiveness of PGPS in guaranteeing worst-case session delay is demonstrated under certain assignments.
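As a single-node illustration of the kind of bound this work derives (the cited paper treats the multiple-node case): for a session $i$ constrained by a leaky bucket with burst $\sigma_i$ and token rate $\rho_i$, served by a GPS server that guarantees it a rate $g_i \ge \rho_i$, the classic worst-case delay bound is

```
D_i^{*} \le \frac{\sigma_i}{g_i}, \qquad g_i \ge \rho_i
```

i.e., the maximum backlog a conforming session can accumulate (its burst) drains at no less than its guaranteed rate.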
Proceedings Article

VL2: a scalable and flexible data center network

TL;DR: VL2 is a practical network architecture that scales to support huge data centers with uniform high capacity between servers, performance isolation between services, and Ethernet layer-2 semantics, and is built on a working prototype.
Proceedings Article

Network traffic characteristics of data centers in the wild

TL;DR: An empirical study of the network traffic in 10 data centers belonging to three different categories, including university, enterprise campus, and cloud data centers, which includes not only data centers employed by large online service providers offering Internet-facing applications but also data centers used to host data-intensive (MapReduce style) applications.
Proceedings Article

Data center TCP (DCTCP)

TL;DR: DCTCP enables applications to handle 10× the current background traffic without impacting foreground traffic, largely eliminating incast problems, and delivers the same or better throughput than TCP while using 90% less buffer space.