Proceedings ArticleDOI

A server-to-server view of the internet

TLDR
This paper exploits the distributed platform of a large content delivery network, composed of thousands of servers around the globe, to assess the performance characteristics of the Internet's core. The results indicate that significant daily oscillations in end-to-end RTTs of server-to-server paths are not the norm, but do occur, and, in most cases, contribute about a 20 ms increase in server-to-server path latencies.
Abstract
While the performance characteristics of access networks and end-user-to-server paths are well-studied, measuring the performance of the Internet's core remains, largely, uncharted territory. With more content being moved closer to the end-user, server-to-server paths have increased in length and play a significant role in dictating the quality of services offered by content and service providers. In this paper, we present a large-scale study of the effects of routing changes and congestion on the end-to-end latencies of server-to-server paths in the core of the Internet. We exploit the distributed platform of a large content delivery network, composed of thousands of servers around the globe, to assess the performance characteristics of the Internet's core. We conduct measurement campaigns between thousands of server pairs, in both forward and reverse directions, and analyze the performance characteristics of server-to-server paths over both long durations (months) and short durations (hours). Our analyses show that there is a large variation in the frequency of routing changes. While routing changes typically have marginal or no impact on end-to-end round-trip times (RTTs), 20% of them impact IPv4 (IPv6) paths by at least 26 ms (31 ms). We highlight how dual-stack servers can be utilized to reduce server-to-server latencies by up to 50 ms. Our results indicate that significant daily oscillations in end-to-end RTTs of server-to-server paths are not the norm, but do occur, and, in most cases, contribute about a 20 ms increase in server-to-server path latencies.
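The dual-stack optimization the abstract highlights can be illustrated with a minimal sketch: given RTT samples collected between a dual-stack server pair over both IPv4 and IPv6, compare per-family medians and prefer the lower-latency family. This is not the paper's actual pipeline; the server names and RTT values below are purely illustrative assumptions.

```python
# Hedged sketch (not the paper's measurement system): choose the
# address family with the lower median RTT for each dual-stack pair
# and report the potential latency saving. All data is illustrative.
from statistics import median

# rtt_ms[(src, dst)] = {"ipv4": [...], "ipv6": [...]}  -- sample RTTs in ms
rtt_ms = {
    ("fra-01", "syd-02"): {"ipv4": [288, 291, 290, 289], "ipv6": [246, 244, 245, 247]},
    ("lhr-03", "nrt-01"): {"ipv4": [212, 213, 211, 215], "ipv6": [212, 214, 213, 212]},
}

def preferred_family(samples):
    """Return (family, saving_ms): the lower-median family and the median gap."""
    m4, m6 = median(samples["ipv4"]), median(samples["ipv6"])
    family = "ipv6" if m6 < m4 else "ipv4"
    return family, abs(m4 - m6)

for pair, samples in rtt_ms.items():
    fam, saving = preferred_family(samples)
    print(pair, "->", fam, f"saves {saving:.1f} ms")
```

A real deployment would feed this decision from continuous bidirectional probing, since the paper shows that routing changes can shift path RTTs by tens of milliseconds.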
Citations
Proceedings ArticleDOI

Inferring persistent interdomain congestion

TL;DR: A system and method, based on the Time Series Latency Probes (TSLP) technique, is developed to measure congestion on thousands of interdomain links without direct access to them, and it is shown that congestion inferred using the lightweight TSLP method correlates with other metrics of interconnection performance impairment.
Proceedings ArticleDOI

bdrmap: Inference of Borders Between IP Networks

TL;DR: A method that uses targeted traceroutes, knowledge of traceroute idiosyncrasies, and codification of topological constraints in a structured set of heuristics, to correctly identify interdomain links at the granularity of individual border routers is developed.
Proceedings ArticleDOI

Pinpointing delay and forwarding anomalies using large-scale traceroute measurements

TL;DR: The diversity of RIPE Atlas traceroute measurements is leveraged to solve the classic problem of monitoring in-network delays and to obtain credible delay change estimations for monitoring network conditions in the wild.
Posted Content

Pinpointing Delay and Forwarding Anomalies Using Large-Scale Traceroute Measurements

TL;DR: In this article, the authors leverage the RIPE Atlas measurement platform to monitor and analyze network conditions and propose a set of complementary methods to detect network disruptions from traceroute measurements.
Journal ArticleDOI

On Mapping the Interconnections in Today’s Internet

TL;DR: A new approach is presented that infers the existence of interconnections from localized traceroutes and uses the Belief Propagation algorithm on a specially defined Markov Random Field graphical model to geolocate them to a target colocation facility.
References
Proceedings ArticleDOI

B4: experience with a globally-deployed software defined wan

TL;DR: This work presents the design, implementation, and evaluation of B4, a private WAN connecting Google's data centers across the planet, using OpenFlow to control relatively simple switches built from merchant silicon.
Proceedings ArticleDOI

CloudCmp: comparing public cloud providers

TL;DR: When CloudCmp is applied to four cloud providers that together account for most of the cloud customers today, their offered services are found to vary widely in performance and costs, underscoring the need for thoughtful provider selection.
Proceedings ArticleDOI

Making middleboxes someone else's problem: network processing as a cloud service

TL;DR: APLOMB solves real problems faced by network administrators, can outsource over 90% of middlebox hardware in a typical large enterprise network, and, in a case study of a real enterprise, imposes an average latency penalty of 1.1 ms and a median bandwidth inflation of 3.8%.
Journal ArticleDOI

End-to-end routing behavior in the Internet

TL;DR: It is found that Internet paths are heavily dominated by a single prevalent route, but that the time periods over which routes persist show wide variation, ranging from seconds up to days.
Journal ArticleDOI

The Akamai network: a platform for high-performance internet applications

TL;DR: An overview of the components and capabilities of the Akamai platform is given, and some insight into its architecture, design principles, operation, and management is offered.