

Mapping peering interconnections to a facility

01 Dec 2015-pp 37

TL;DR: This study develops a methodology, called constrained facility search, to infer the physical interconnection facility where an interconnection occurs among all possible candidates, which outperforms heuristics based on naming schemes and IP geolocation.

Abstract: Annotating Internet interconnections with robust physical coordinates at the level of a building facilitates network management including interdomain troubleshooting, but also has practical value for helping to locate points of attacks, congestion, or instability on the Internet. But, like most other aspects of Internet interconnection, its geophysical locus is generally not public; the facility used for a given link must be inferred to construct a macroscopic map of peering. We develop a methodology, called constrained facility search, to infer the physical interconnection facility where an interconnection occurs among all possible candidates. We rely on publicly available data about the presence of networks at different facilities, and execute traceroute measurements from more than 8,500 available measurement servers scattered around the world to identify the technical approach used to establish an interconnection. A key insight of our method is that inference of the technical approach for an interconnection sufficiently constrains the number of candidate facilities such that it is often possible to identify the specific facility where a given interconnection occurs. Validation via private communication with operators confirms the accuracy of our method, which outperforms heuristics based on naming schemes and IP geolocation. Our study also reveals the multiple roles that routers play at interconnection facilities; in many cases the same router implements both private interconnections and public peerings, in some cases via multiple Internet exchange points. Our study also sheds light on peering engineering strategies used by different types of networks around the globe.


Summary

1. INTRODUCTION

  • Measuring and modeling the Internet topology at the logical layer of network interconnection, i.e., autonomous system (AS) peering, has been an active area for nearly two decades.
  • There are good reasons for the dearth of this information: evolving complexity and scale of networking infrastructure, information hiding properties of the routing system (BGP), security and commercial sensitivities, and lack of incentives to gather or share data.
  • Knowledge of geophysical locations of interconnections also enables assessment of the resilience of interconnections in the event of natural disasters [53, 20], facility or router outages [6], peering disputes [46], and denial of service attacks [22, 60].
  • The authors first create and update a detailed map of interconnection facilities and the networks present at them.
  • The contributions of this work are as follows.
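At its core, the paper's constrained facility search can be thought of as iterative set intersection: each piece of evidence (a network's published facility list, the facilities partnered with an observed IXP, constraints from alias-resolved routers) narrows the candidate set until, ideally, a single facility remains. A minimal sketch under that interpretation, with hypothetical facility names that are not from the paper's dataset:

```python
# Sketch of constrained facility search (CFS) as iterative constraint
# intersection. Facility names below are hypothetical illustrations.

def constrained_facility_search(candidates, constraints):
    """Intersect candidate facility sets until one (or none) remains."""
    result = set(candidates)
    for constraint in constraints:
        narrowed = result & set(constraint)
        if not narrowed:        # contradictory data: keep previous estimate
            return result
        result = narrowed
        if len(result) == 1:    # uniquely identified facility
            break
    return result

# AS A's published facility list, facilities partnered with the IXP seen
# in traceroute, and facilities consistent with the alias-resolved router.
presence = {"Telehouse-North", "Telecity-LON1", "Equinix-LD8"}
constraints = [
    {"Telehouse-North", "Equinix-LD8"},  # IXP partner facilities
    {"Telehouse-North"},                 # router-level constraint
]
print(constrained_facility_search(presence, constraints))
# → {'Telehouse-North'}
```

The actual algorithm (section 4.2 of the paper) interleaves these constraints with targeted measurements; this sketch only shows the narrowing structure.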

2. BACKGROUND AND TERMINOLOGY

  • Interconnection is a collection of business practices and technical mechanisms that allows individually managed networks (ASes) to exchange traffic [11].
  • An Internet Exchange Point (IXP) is a physical infrastructure composed of layer-2 Ethernet switches where participating networks can interconnect their routers using the switch fabric.
  • These switches connect via high bandwidth connections to the core switches.
  • Cross-connects can be established between members that host their network equipment in different facilities of the same interconnection facility operator, if these facilities are interconnected.
  • Public peering, also referred to as public interconnect, is the establishment of peering connections between two members of an IXP via the IXP’s switch fabric.

3. DATASETS AND MEASUREMENTS

  • To infer details of a given interconnection, the authors need information about the prefixes of the two networks and physical facilities where they are present.
  • This section describes the publicly available data that the authors collected and analyzed for this study, and the publicly available measurement servers (vantage points) they utilized.

3.1 Data Sources

  • Although this information must be known to network operators, and in some cases is even required to be public (e.g., for facilities that partner with IXPs in Europe), it is not available in one consolidated form.
  • To remove such discrepancies, the authors convert country and city names to standard ISO and UN names.
  • The authors use various publicly available sources to get an up-to-date list of IXPs, their prefixes, and associated interconnection facilities.
  • PeeringDB was not missing the records of the facilities, only their association with the IXPs.
  • ASes tend to connect to more interconnection facilities than IXPs, with 54% of the ASes in their dataset connected to more than one IXP and 66% of the ASes connected at more than one interconnection facility.

3.2 Vantage Points and Measurements

  • To perform targeted traceroute campaigns the authors used publicly available traceroute servers, RIPE Atlas, and looking glasses.
  • The authors also used existing public measurements gathered previously by RIPE Atlas nodes (e. g., periodic traceroute queries to Google from all Atlas nodes).
  • After filtering out inactive or otherwise unavailable looking glasses, the authors ended up with 1,877 looking glasses in 438 ASes and 472 cities, including many hosted by members of IXPs and 21 offered by IXPs themselves.
  • These types of looking glasses allow us to list the BGP sessions established with the router running the looking glass, and indicate the ASN and IP address of the peering router, as well as showing metainformation about the interconnection, e. g., via BGP communities [31].
  • The authors analyzed one dataset collected when they performed the traceroute campaigns with RIPE Atlas and the looking glasses.

4.1 Preparation of traceroute data

  • Interconnections occur at the network layer when two networks agree to peer and exchange traffic.
  • To capture these interconnections, the authors performed a campaign of traceroute measurements from RIPE Atlas and looking glass vantage points, targeting a set of various networks that include major content providers and Tier-1 networks (see section 5).
  • Such errors can reduce the accuracy of their methodology since they can lead to inference of incorrect candidate facilities for an IP interface.
  • The authors used the MIDAR alias resolution system [40] to infer which aliases belong to the same router.
  • Alias resolution helped us improve the accuracy of their IP-to-ASN mappings, but more importantly it provided additional constraints for mapping interfaces to facilities.
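Alias resolution constrains the search because all interfaces of one router must sit in the same facility, so the router's candidate set is the intersection of its interfaces' candidate sets. A toy sketch of this constraint (interface addresses and facility names are hypothetical):

```python
from functools import reduce

# Interfaces grouped onto one router by alias resolution (e.g. MIDAR)
# must share a facility, so intersect the per-interface candidate sets.
# All addresses and facility names below are hypothetical.
iface_candidates = {
    "198.51.100.1": {"FacA", "FacB", "FacC"},
    "198.51.100.9": {"FacB", "FacC"},
    "203.0.113.5":  {"FacB"},
}

def router_candidates(aliases, candidates):
    """Facilities consistent with every interface of an alias-resolved router."""
    return reduce(set.intersection, [candidates[ip] for ip in aliases])

router = ["198.51.100.1", "198.51.100.9", "203.0.113.5"]
print(router_candidates(router, iface_candidates))  # → {'FacB'}
```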

4.3 Facility search in the reverse direction

  • So far the authors have located the peering interconnections from the side of the peer AS that appears first in the outgoing direction of the traceroute probes.
  • In remote peering, tethering and public peering at IXPs where the second peer is connected at multiple facilities, the two peers can be located at different facilities.
  • In Figure 5 the CFS algorithm will infer the facility of A.1’s router but not the facility of IX.1’s router.
  • This outcome arises because traceroute replies typically return from the ingress (black) interface of a router and therefore do not reveal the router’s egress (white) interfaces.
  • For many cases, but not all, this reverse search is possible, because the authors use a diverse set of vantage points.

4.4 Proximity Heuristic

  • As a fallback method to pinpoint the facility of the far end interface, the authors use knowledge of common IXP practices with respect to the location and hierarchy of switches.
  • For a public peering link (IPA, IPIXP,B, IPB) for which the authors have already inferred the facility of IPA, and for which IPB has more than one candidate IXP facility, they require that IPB is located in the facility that is proximate to IPA.
  • The authors executed an additional traceroute campaign from 50 AMS-IX members who are each connected to a single facility of AMS-IX, targeting a different set of 50 AMS-IX members who are each connected to two facilities.
  • The authors found that in 77% of the cases the switch proximity heuristic finds the exact facility for each IXP interface.
  • When it fails, the actual facility is in close proximity to the inferred one (e.g., both facilities are in the same building block), because (per the AMS-IX website) the access switches are connected to the same backhaul switch.
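Under stated assumptions (facility coordinates are available, and "proximate" is taken as smallest great-circle distance), the proximity heuristic can be sketched as picking, among the far end's candidate facilities, the one closest to the already-inferred facility. The coordinates below are hypothetical illustrations:

```python
import math

def haversine_miles(a, b):
    """Great-circle distance between two (lat, lon) points in miles."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3958.8 * math.asin(math.sqrt(h))

def proximate_facility(fac_a, candidates, coords):
    """Pick the candidate facility for IP_B closest to IP_A's facility."""
    return min(candidates, key=lambda f: haversine_miles(coords[fac_a], coords[f]))

# Hypothetical Amsterdam-area facility coordinates, purely illustrative.
coords = {
    "AMS-FacX": (52.3030, 4.9390),
    "AMS-FacY": (52.3560, 4.9530),
    "AMS-FacZ": (52.3040, 4.9400),
}
print(proximate_facility("AMS-FacX", ["AMS-FacY", "AMS-FacZ"], coords))
# → AMS-FacZ
```

The paper's heuristic is driven by IXP switch topology rather than raw distance; this sketch captures only the "pick the nearest candidate" fallback idea.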

5. RESULTS

  • To evaluate the feasibility of their methodology, the authors first launched an IPv4 traceroute campaign from different measurement platforms targeting a number of important groups of interconnections, and tried to infer their locations.
  • From the 13,889 peering interfaces in their traceroute data, 29% have no associated DNS record, while 55% of the remaining 9,861 interfaces do not encode any geolocation information in their hostname.
  • As shown in Figure 8, removing 850 facilities (~50% of the total facilities in their dataset) causes on average 30% of the previously resolved interfaces to become unresolved, while removing 1,400 facilities (80%) causes 60% of the resolved interfaces to become unresolved. (Each iteration of the CFS algorithm repeats steps 2–4, as explained in section 4.2.)
  • The authors also find that 11.9% of the observed routers used to implement public peering are used to establish links over two or three IXPs.
  • RIPE Atlas probes have a significantly larger footprint in Europe than in Asia; thus, it is expected that one can infer more interfaces in Europe.

6. VALIDATION

  • Due to its low-level nature, ground-truth data on interconnection to facility mapping is scarce.
  • For example the hostname x.y.rtr.thn.lon.z denotes that a router is located in the facility Telehouse-North in London.
  • The higher accuracy rate for this validation subset is explained by the fact that the authors collected complete facility lists for the IXPs and their members through the IXP websites.
  • Importantly, when their inferences disagreed with the validation data the actual facility was located in the same city as the inferred one (e. g., Telecity Amsterdam 1 instead of Telecity Amsterdam 2).
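Hostname-based validation amounts to looking up facility and city codes embedded in router DNS names. A sketch built around the paper's x.y.rtr.thn.lon.z example; the code tables are hypothetical illustrations (only the thn/lon codes come from the paper):

```python
# Map hostname tokens to facility and city hints. Only the thn/lon
# codes come from the paper's example; the tables are illustrative.
FACILITY_CODES = {"thn": "Telehouse-North"}
CITY_CODES = {"lon": "London"}

def facility_from_hostname(hostname):
    """Return (facility, city) hints encoded in a router hostname, if any."""
    tokens = hostname.lower().split(".")
    fac = next((FACILITY_CODES[t] for t in tokens if t in FACILITY_CODES), None)
    city = next((CITY_CODES[t] for t in tokens if t in CITY_CODES), None)
    return fac, city

print(facility_from_hostname("x.y.rtr.thn.lon.z"))
# → ('Telehouse-North', 'London')
```

In practice such naming schemes vary per operator and are often stale, which is why the paper treats them as a validation signal rather than ground truth.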

8. CONCLUSION

  • The increasing complexity of interconnection hinders the ability to answer questions about the physical location and engineering approach of interconnections.
  • Eventually the multiple sources of constraints led to a small enough set of possible peering locations that in many cases, it became feasible to identify a single location that satisfied all known constraints.
  • The accuracy of their method (>90%) outperforms heuristics based on naming schemes and IP geolocation.
  • Nevertheless, by utilizing results for individual interconnections and others inferred in the process, it is possible to incrementally construct a more detailed map of interconnections.
  • The authors make their data available at http://www.caida.org/publications/paper/2015/constrained_facility_search/supplemental.


Mapping Peering Interconnections to a Facility
Vasileios Giotsas
CAIDA / UC San Diego
vgiotsas@caida.org
Georgios Smaragdakis
MIT / TU Berlin
gsmaragd@csail.mit.edu
Bradley Huffaker
CAIDA / UC San Diego
bhuffake@caida.org
Matthew Luckie
University of Waikato
mjl@wand.net.nz
kc claffy
CAIDA / UC San Diego
kc@caida.org
ABSTRACT
Annotating Internet interconnections with robust physical coordinates at the level of a building facilitates network management including interdomain troubleshooting, but also has practical value for helping to locate points of attacks, congestion, or instability on the Internet. But, like most other aspects of Internet interconnection, its geophysical locus is generally not public; the facility used for a given link must be inferred to construct a macroscopic map of peering. We develop a methodology, called constrained facility search, to infer the physical interconnection facility where an interconnection occurs among all possible candidates. We rely on publicly available data about the presence of networks at different facilities, and execute traceroute measurements from more than 8,500 available measurement servers scattered around the world to identify the technical approach used to establish an interconnection. A key insight of our method is that inference of the technical approach for an interconnection sufficiently constrains the number of candidate facilities such that it is often possible to identify the specific facility where a given interconnection occurs. Validation via private communication with operators confirms the accuracy of our method, which outperforms heuristics based on naming schemes and IP geolocation. Our study also reveals the multiple roles that routers play at interconnection facilities; in many cases the same router implements both private interconnections and public peerings, in some cases via multiple Internet exchange points. Our study also sheds light on peering engineering strategies used by different types of networks around the globe.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
CoNEXT ’15, December 01-04, 2015, Heidelberg, Germany
© 2015 ACM. ISBN 978-1-4503-3412-9/15/12 . . . $15.00
DOI: 10.1145/2716281.2836122
CCS Concepts
• Networks → Network measurement; Physical topologies;
Keywords
Interconnections; peering facilities; Internet mapping
1. INTRODUCTION
Measuring and modeling the Internet topology at the logical layer of network interconnection, i.e., autonomous system (AS) peering, has been an active area for nearly two decades. While AS-level mapping has been an important step to understanding the uncoordinated formation and resulting structure of the Internet, it abstracts a much richer Internet connectivity map. For example, two networks may interconnect at multiple physical locations around the globe, or even at multiple locations in the same city [55, 49].

There is currently no comprehensive mapping of interconnections to physical locations where they occur [61]. There are good reasons for the dearth of this information: evolving complexity and scale of networking infrastructure, information hiding properties of the routing system (BGP), security and commercial sensitivities, and lack of incentives to gather or share data. But this opacity of the Internet infrastructure hinders research and development efforts as well as network management. For example, annotating peering interconnections with robust physical coordinates at the level of a building facilitates network troubleshooting and diagnosing attacks [43] and congestion [46]. Knowledge of geophysical locations of interconnections also enables assessment of the resilience of interconnections in the event of natural disasters [53, 20], facility or router outages [6], peering disputes [46], and denial of service attacks [22, 60]. This information can also elucidate the role of emerging entities, e.g., colocation facilities, carrier hotels, and Internet exchange points (IXPs), that enable today’s richly interconnected ecosystem [44, 7, 18, 19, 15]. It also increases traffic flow transparency, e.g., to identify unwanted transit paths through specific countries, and informs peering decisions in a competitive interconnection market.
In this paper we describe a measurement and inference methodology to map a given interconnection to a physical facility. We first create and update a detailed map of interconnection facilities and the networks present at them. This map requires manual assembly, but fortunately the information has become increasingly publicly available in recent years, partly because many networks require it to be available in order to establish peering [4], and many IXPs publish information about which networks are present at which of their facilities in order to attract participating networks [18]. Interconnection facilities also increasingly make the list of participant members available on their websites or in PeeringDB [45]. While it is a substantial investment of time to keep such a list current, we find it is feasible.

However, a well-maintained mapping of networks to facilities does not guarantee the ability to accurately map all interconnections involving two ASes to specific physical facilities, since many networks peer at multiple locations even within a city. Mapping a single interconnection to a facility is a search problem with a potentially large solution space; however, additional constraints can narrow the search. The contributions of this work are as follows:
  • We introduce and apply a measurement methodology, called constrained facility search, which infers the physical facilities where two ASes interconnect from among all (sometimes dozens of) possible candidates, and also infers the interconnection method, e.g., public peering at an IXP, private peering via cross-connect, point-to-point connection tunnelled over an IXP, or remote peering.
  • We validate the accuracy of our methodology using direct feedback, BGP communities, DNS records, and IXP websites, and find our algorithm achieves at least 90% accuracy for each type of interconnection and outperforms heuristics based on naming schemes and IP geolocation.
  • We demonstrate our methodology using case studies of a diverse set of interconnections involving content providers (Google, Akamai, Yahoo, Limelight, and Cloudflare) as well as transit providers (Level3, Cogent, Deutsche Telekom, Telia, and NTT). Our study reveals the multiple roles that routers play at interconnection facilities; frequently the same router implements both public and private peering, in some cases via multiple facilities.
2. BACKGROUND AND TERMINOLOGY
Interconnection is a collection of business practices and technical mechanisms that allows individually managed networks (ASes) to exchange traffic [11]. The two primary forms of interconnection are transit, when one AS sells another ISP access to the global Internet, and peering, when two ISPs interconnect to exchange customer traffic, although complicated relationships exist [28, 31]. Whether and how to interconnect requires careful consideration, and depends on traffic volume exchanged between the networks, their customer demographics, peering and security policies, and the cost to maintain the interconnection [50].

Interconnection Facility. An interconnection facility is a physical location (a building or part of one) that supports interconnection of networks. These facilities lease customers secure space to locate and operate network equipment. They also provide power, cooling, fire protection, dedicated cabling to support different types of network connection, and in many cases administrative support. Large companies such as Equinix [27], Telehouse [59], and Interxion [36] operate such facilities around the globe. Smaller companies operate interconnection facilities in a geographic region or a city. Most interconnection facilities are carrier-neutral, although some are operated by carriers, e.g., Level3. In large communication hubs, such as in large cities, an interconnection facility operator may operate multiple facilities in the same city, and connect them, so that networks participating at one facility can access networks at another facility in the same city.

Internet Exchange Point. An Internet Exchange Point (IXP) is a physical infrastructure composed of layer-2 Ethernet switches where participating networks can interconnect their routers using the switch fabric. At every IXP there is one or more (for redundancy) high-end switches called core switches (the center switch in Figure 1). IXPs partner with interconnection facilities in the city they operate and install access switches there (switches at facilities 1 and 2 in Figure 1). These switches connect via high bandwidth connections to the core switches. In order to scale, some IXPs connect multiple access switches to back-haul switches. The back-haul switch then connects to the core switch. All IXP members connected to the same access switch or back-haul switch exchange traffic locally if they peer; the rest exchange traffic via the core switch. Thus, routers owned by members of IXPs may be located at different facilities associated with the same IXP [18].

Popular peering engineering options today are:
Private Peering with Cross-connect. A cross-connect is a piece of circuit-switched network equipment that physically connects the interfaces of two networks at the interconnection facility. It can be either copper or fiber with data speeds up to tens of Gbps. Cross-connects can be established between members that host their network equipment in different facilities of the same interconnection facility operator, if these facilities are interconnected. The downside of a cross-connect is operational overhead: it is largely a manual process to establish, update, or replace one.

Some large facilities have thousands of cross-connects, e.g., Equinix reported 161.7K cross-connects across all its colocation facilities, with more than half in the Americas (Q2 2015) [3]. Cross-connects are installed by the interconnection facilities, but members of IXPs can order cross-connects via the IXP for partnered interconnection facilities, in some cases with a discount. For example, DE-CIX in Frankfurt has facilitated more than 900 cross-connects as of February 2015 [2].

Figure 1: Interconnection facilities host routers of many different networks and partner with IXPs to support different types of interconnection, including cross-connects (private peering with dedicated medium), public peering (peering established over shared switching fabric), tethering (private peering using VLAN on shared switching fabric), and remote peering (transport to IXP provided by reseller).
Public Peering. Public peering, also referred to as public interconnect, is the establishment of peering connections between two members of an IXP via the IXP’s switch fabric. IXPs are allocated IP prefix(es) and often an AS number by a Regional Internet Registry. The IXP assigns an IP from this range to the IXP-facing router interfaces of its IXP members to enable peering over its switch fabric [10]. One way to establish connectivity between two ASes is to establish a direct BGP session between two of their respective border routers. Thus, if two IXP member ASes want to exchange traffic via the IXP’s switching fabric, they establish a bi-lateral BGP peering session at the IXP. An increasing number of IXPs offer their members the use of a route server to establish multi-lateral peering to simplify public peering [32, 54]. With multi-lateral peering an IXP member establishes a single BGP session to the IXP’s route server and receives routes from other participants using the route server. The advantage of public peering is that by leasing one IXP port it is possible to exchange traffic with potentially a large fraction of the IXP members [57].
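Because each IXP is allocated its own prefix(es) for the peering LAN, a traceroute hop can be attributed to an IXP's switch fabric by checking whether the hop address falls inside one of those prefixes. A minimal sketch; the prefix and IXP name below are documentation placeholders, not real assignments:

```python
import ipaddress

# Map IXP peering-LAN prefixes to IXP names. The prefix shown is a
# documentation range used purely for illustration.
IXP_PREFIXES = {ipaddress.ip_network("192.0.2.0/24"): "EXAMPLE-IX"}

def ixp_for_hop(hop_ip):
    """Return the IXP whose peering LAN contains this traceroute hop, if any."""
    addr = ipaddress.ip_address(hop_ip)
    for prefix, name in IXP_PREFIXES.items():
        if addr in prefix:
            return name
    return None

print(ixp_for_hop("192.0.2.41"))   # → EXAMPLE-IX
print(ixp_for_hop("198.51.100.7")) # → None
```

This per-hop classification is what lets the methodology distinguish public peering (an IXP address appears between the two peers) from private interconnects, where no IXP hop is visible.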
Private Interconnects over IXP. An increasing number of IXPs offer private interconnects over their public switch fabric. This type of private peering is also called tethering or IXP metro VLAN. With tethering, a point-to-point virtual private line is established via the already leased port to reach other members of the IXP via a virtual local area network (VLAN), e.g., IEEE 802.1Q. Typically there is a setup cost. In some cases this type of private interconnect enables members of an IXP to privately reach networks located in other facilities where those members are not present, e.g., transit providers or customers, or to privately connect their infrastructure across many facilities.
Remote Peering. Primarily larger IXPs, but also some smaller ones, have agreements with partners, e.g., transport networks, to allow remote peering [14]. In this case, the router of the remote peer can be located anywhere in the world and connects to the IXP via an Ethernet-over-MPLS connection. An advantage of remote peering is that it does not require maintaining network equipment at the remote interconnection facilities. Approximately 20% (and growing) of AMS-IX participants were connected this way [18] in 2013. Remote peering is also possible between a remote router at the PoP of an ISP and a router present at an interconnection facility.
3. DATASETS AND MEASUREMENTS
To infer details of a given interconnection, we need information about the prefixes of the two networks and physical facilities where they are present. This section describes the publicly available data that we collected and analyzed for this study, and the publicly available measurement servers (vantage points) we utilized.
3.1 Data Sources
3.1.1 Facility Information
For a given network we developed (and continue to maintain to keep current) a list of the interconnection facilities where it is present. Despite the fact that facilities for commercial usage must be known to the network operators to facilitate the establishment of new peering links and to attract new customers, and in some cases the information is even required to be public (e.g., for facilities that partner with IXPs in Europe), the information is not available in one consolidated form.
We started by compiling an AS-to-facilities mapping using the list of interconnection facilities and associated networks (ASNs) available in PeeringDB [45]. Although this list is maintained on a volunteer basis (operators contribute information for their own networks), and may not be regularly updated for some networks, it is the most widely used source of peering information among operators, and it allows us to bootstrap our algorithms. Due to its manual compilation process, there are cases where different naming schemes are used for the same city or country. To remove such discrepancies, we convert country and city names to standard ISO and UN names. If the distance between two cities is less than 5 miles, we map them to the same metropolitan area. We calculate the distance by translating the postcodes of the facilities to geographical coordinates. For example, we group Jersey City and New York City into the NYC metropolitan area.

Figure 2: Number of interconnection facilities for 152 ASes extracted from their official website, and the associated fraction of facilities that appear in PeeringDB.
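Once city coordinates are derived from facility postcodes, the metro-area grouping described above reduces to merging any two cities less than 5 miles apart, which can be done with a union-find pass over pairwise distances. A sketch around the paper's Jersey City/New York City example; the distance values are illustrative, not measured:

```python
# Group cities into metropolitan areas: merge any pair closer than
# 5 miles. Distances (miles) are illustrative, not measured values.
pairwise_miles = {
    ("Jersey City", "New York City"): 3.1,
    ("New York City", "Newark"): 9.0,
}
THRESHOLD = 5.0

def metro_groups(pairs):
    """Union-find over city pairs; merge pairs under the distance threshold."""
    parent = {}
    def find(c):
        parent.setdefault(c, c)
        while parent[c] != c:
            parent[c] = parent[parent[c]]   # path compression
            c = parent[c]
        return c
    for (a, b), miles in pairs.items():
        find(a); find(b)                    # register both cities
        if miles < THRESHOLD:
            parent[find(a)] = find(b)
    groups = {}
    for city in parent:
        groups.setdefault(find(city), set()).add(city)
    return sorted(map(sorted, groups.values()))

print(metro_groups(pairwise_miles))
# → [['Jersey City', 'New York City'], ['Newark']]
```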
To augment the list of collected facilities, we extracted colocation information from web pages of Network Operating Centers (NOCs), where AS operators often document their peering interconnection facilities. Extracting information from these individual websites is a tedious process, so we did so only for the subset of networks that we encountered in our traceroutes and for which a network’s PeeringDB data did not seem to reflect the geographic scope reported on the network’s own web site. For example, we investigated cases where a NOC web site identified a given AS as global but PeeringDB listed facilities for that AS only in a single country.

Figure 2 summarizes the additional information obtained from NOC websites. The gray bars show the fraction of facilities found in PeeringDB. We checked 152 ASes with PeeringDB records, and found that PeeringDB misses 1,424 AS-to-facility links for 61 ASes; for 4 of these ASes PeeringDB did not list any facility. Interestingly, the ASes with missing PeeringDB information provided detailed data on their NOC websites, meaning that they were not intending to hide their presence.
3.1.2 IXP Information
Figure 3: Metropolitan areas with at least 10 interconnection facilities (including London, New York, Paris, Frankfurt, Amsterdam, San Jose, Moscow, and others).
We use various publicly available sources to get an up-to-date list of IXPs, their prefixes, and associated interconnection facilities. This information is largely available from IXP websites. We also use lists from PeeringDB and Packet Clearing House (PCH). Useful lists are provided by regional consortia of IXPs such as Euro-IX (which also lists IXPs in North America), Af-IX, LAC-IX, and APIX, which maintain databases for the affiliated IXPs and their members. Some IXPs may be inactive; PCH regularly updates their list and annotates inactive IXPs. To further filter out inactive IXPs, for our study we consider only IXPs for which (i) we were able to confirm the IXP IP address blocks from at least three of these data sources, and (ii) we could associate at least one active member from at least two of the above data sources. We ended up with 368 IXPs in 263 cities in 87 countries. IXPs belonging to the same operators in different cities may be different entries, e.g., DE-CIX Frankfurt and DE-CIX Munich.
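The two-part filter, keeping only IXPs whose address blocks are confirmed by at least three sources and which have at least one member confirmed by at least two sources, can be sketched as a vote count across sources. The source names and IXP records below are hypothetical illustrations:

```python
from collections import Counter

def active_ixps(records):
    """records: source -> ixp -> {"prefix": str, "members": set of ASNs}.

    Keep an IXP only if (i) >= 3 sources agree on its address block and
    (ii) >= 2 sources list at least one common active member.
    """
    prefix_votes = Counter()
    member_votes = Counter()
    for source in records.values():
        for ixp, info in source.items():
            prefix_votes[(ixp, info["prefix"])] += 1
            for member in info["members"]:
                member_votes[(ixp, member)] += 1
    confirmed_prefix = {ixp for (ixp, _), n in prefix_votes.items() if n >= 3}
    confirmed_member = {ixp for (ixp, _), n in member_votes.items() if n >= 2}
    return confirmed_prefix & confirmed_member

# Hypothetical records; the prefix is a documentation range.
records = {
    "peeringdb": {"IX-1": {"prefix": "192.0.2.0/24", "members": {64500}}},
    "pch":       {"IX-1": {"prefix": "192.0.2.0/24", "members": {64500}}},
    "euro-ix":   {"IX-1": {"prefix": "192.0.2.0/24", "members": {64501}}},
}
print(active_ixps(records))  # → {'IX-1'}
```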
We then create a list of IXPs where a network is a member, and annotate which facilities partner with these exchange points. For private facilities, we use PeeringDB data augmented with information available at the IXP websites and databases of the IXP consortia. We again encountered cases where missing IXP information from PeeringDB was found on IXP websites. For example, the PeeringDB record of the JPNAP Tokyo I exchange does not list any partner colocation facilities, while the JPNAP website lists two facilities [5]. Overall, we extracted additional data from IXP websites for 20 IXPs that we encountered in traces but for which PeeringDB did not list any interconnection facilities. PeeringDB was not missing the records of the facilities, only their association with the IXPs.
By combining all the information we collected for facilities, we compiled a list of 1,694 facilities in 95 countries and 684 cities for April 2015. The regional distribution of the facilities is as follows: 503 in North America, 860 in Europe, 143 in Asia, 84 in Oceania, 73 in South America, and 31 in Africa. Notice that these facilities can be operated by colocation operators or by carriers. Figure 3 shows the cities with at least 10 colocation facilities. It is evident that for large metropolitan areas the problem of pinpointing a router’s PoP at the granularity of interconnection facility is considerably more challenging than determining PoP locations at a city-level granularity.
On average, a metropolitan area has about three times more interconnection facilities than IXPs. This is because the infrastructure of an IXP is often distributed among several facilities in a city, or even across neighboring cities, for redundancy and expanded geographical coverage. For example, the topology of DE-CIX in Frankfurt spans 18 interconnection facilities. ASes tend to connect to more interconnection facilities than IXPs: 54% of the ASes in our dataset connect to more than one IXP, while 66% connect to more than one interconnection facility. This is intuitive, since connectivity to an IXP requires presence in at least one interconnection facility that partners with the IXP. However, we observe the opposite behavior for a relatively small number of ASes that use fewer than 10 interconnection facilities. This behavior is consistent with two aspects of the peering ecosystem: (i) an interconnection facility may partner with multiple IXPs, so presence at one facility can allow connectivity to multiple IXPs, and (ii) remote peering allows connectivity to an IXP through an IXP port reseller, in which case presence at an IXP does not necessarily require physical presence at one of its partner facilities. For instance, about 20% of all AMS-IX participants connect remotely [18].
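The multi-presence statistics above (54% of ASes at more than one IXP, 66% at more than one facility) reduce to a simple count over the membership tables. A toy sketch, with fabricated AS numbers and site labels standing in for the real PeeringDB-derived data:

```python
# Toy membership tables; the real ones come from PeeringDB and IXP
# websites as described in the text. ASNs and labels are fabricated.
as_to_ixps = {65001: {"IXP-A"}, 65002: {"IXP-A", "IXP-B"}, 65003: {"IXP-B"}}
as_to_facs = {65001: {"F1", "F2"}, 65002: {"F1", "F2", "F3"}, 65003: {"F3"}}

def share_multi(memberships):
    """Fraction of ASes present at more than one IXP (or facility)."""
    multi = sum(1 for sites in memberships.values() if len(sites) > 1)
    return multi / len(memberships)

print(share_multi(as_to_ixps))  # 1/3 of the toy ASes sit at >1 IXP
print(share_multi(as_to_facs))  # 2/3 sit at >1 facility
```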
3.2 Vantage Points and Measurements

To perform targeted traceroute campaigns we used publicly available traceroute servers, namely RIPE Atlas nodes and looking glasses. We augmented our study with existing daily measurements from the iPlane and CAIDA Archipelago infrastructures, which in some cases had already traversed interconnections we considered. Table 1 summarizes the characteristics of our vantage points.
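Note that the "Total unique" column of Table 1 is a set union across platforms, not a sum, since the same AS or country can host vantage points on several platforms. A sketch with made-up ASN sets:

```python
# Made-up ASN sets for two platforms, to show why the unique total
# in Table 1 is smaller than the per-platform sum.
atlas_asns = {65001, 65002, 65003}
lg_asns = {65002, 65004}

print(len(atlas_asns) + len(lg_asns))  # naive sum: 5
print(len(atlas_asns | lg_asns))       # unique ASNs: 4
```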
RIPE Atlas. RIPE Atlas is an open distributed Internet measurement platform that relies on measurement devices connected directly to home routers, and a smaller set of more powerful measurement collectors (anchors) used for heavy measurements and for synchronization of the distributed measurement infrastructure. The end-host devices can be scheduled to perform traceroute, ping, and DNS resolution. We employed ICMP Paris traceroute (supported by RIPE Atlas) to mitigate traceroute artifacts caused by load balancing [9]. We also used existing public measurements gathered previously by RIPE Atlas nodes (e.g., periodic traceroute queries to Google from all Atlas nodes).
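Atlas delivers traceroute results as JSON, with one entry per hop and one reply record per probe. A minimal sketch of extracting the per-hop responding addresses follows; the sample result is abridged and uses documentation prefixes, not real hops, and the real format carries many more fields.

```python
import json

# An abridged RIPE Atlas traceroute result; addresses are from
# documentation prefixes (RFC 5737), not real routers.
raw = """
{"result": [
  {"hop": 1, "result": [{"from": "192.0.2.1", "rtt": 1.2}]},
  {"hop": 2, "result": [{"from": "198.51.100.7", "rtt": 8.9}]},
  {"hop": 3, "result": [{"x": "*"}, {"from": "203.0.113.5", "rtt": 14.0}]}
]}
"""

def hop_addresses(measurement):
    """IP address seen at each hop; None where all probes timed out."""
    hops = []
    for hop in measurement["result"]:
        replies = [r["from"] for r in hop["result"] if "from" in r]
        hops.append(replies[0] if replies else None)
    return hops

print(hop_addresses(json.loads(raw)))
```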
             RIPE Atlas    LGs   iPlane   Ark   Total unique
Vantage Pts.       6385   1877      147   107           8517
ASNs               2410    438      117    71           2638
Countries           160     79       35    41            170

Table 1: Characteristics of the four traceroute measurement platforms we utilized.

Looking Glasses. A looking glass provides a web-based or telnet interface to a router and allows the execution of non-privileged debugging commands. In many cases a looking glass provides access to routers in different cities, as well as to multiple sites in the same city. Many looking glasses are also colocated with IXPs. Looking glass operators often enforce probing limitations through mandatory timeouts or by blocking users who exceed the operator-supported probing rate. Looking glasses are therefore appropriate only for targeted queries and not for scanning a large range of addresses. To conform to the probing rate limits, we used a timeout of 60 seconds between consecutive queries to the same looking glass.
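The 60-second pacing between queries to the same looking glass can be enforced with a small per-target rate limiter. This is an illustrative sketch, not the authors' tooling; the clock and sleep functions are injectable so the pacing logic can be exercised without actually waiting.

```python
import time

class PacedClient:
    """Enforce a minimum gap between queries to the same looking glass.

    now/sleep are injectable (defaulting to the real clock) so the
    pacing logic can be tested with a fake clock.
    """

    def __init__(self, min_gap=60.0, now=time.monotonic, sleep=time.sleep):
        self.min_gap, self.now, self.sleep = min_gap, now, sleep
        self.last = {}  # looking-glass id -> time of last query

    def wait_turn(self, lg_id):
        """Block until at least min_gap seconds since the last query to lg_id."""
        t = self.now()
        prev = self.last.get(lg_id)
        if prev is not None and t - prev < self.min_gap:
            self.sleep(self.min_gap - (t - prev))
        self.last[lg_id] = self.now()
```

A traceroute campaign would call `wait_turn(lg_id)` before each query; queries to distinct looking glasses are never delayed by each other.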
We extracted publicly available and traceroute-capable looking glasses from PeeringDB, traceroute.org [39], and previous studies [41]. After filtering out inactive or otherwise unavailable looking glasses, we ended up with 1,877 looking glasses in 438 ASes and 472 cities, including many operated by IXP member networks and 21 offered by IXPs.
An increasing number of networks run public looking glass servers capable of issuing BGP queries [32], e.g., "show ip bgp summary", "prefix info", and "neighbor info". We identified 168 that support such queries and used them to augment our measurements. These looking glasses allow us to list the BGP sessions established with the router running the looking glass; the output indicates the ASN and IP address of each peering router, as well as meta-information about the interconnection, e.g., conveyed via BGP communities [31].
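Extracting the (peer IP, peer ASN) pairs from a "show ip bgp summary" response is a matter of parsing the neighbor table. A minimal sketch for Cisco-style output is below; the sample text is fabricated, and real looking glasses vary in formatting, so a production parser would need more cases.

```python
# Fabricated Cisco-style "show ip bgp summary" neighbor table;
# real looking-glass output varies in layout.
sample = """\
Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
203.0.113.9     4 64512    1021    1034        7    0    0 1d02h          42
198.51.100.3    4 65010     880     901        7    0    0 3w1d          118
"""

def bgp_neighbors(text):
    """Return (peer IP, peer ASN) pairs from a BGP summary table."""
    peers = []
    for line in text.splitlines():
        fields = line.split()
        # Neighbor rows start with an IPv4 address; the ASN is the
        # third column in this layout.
        if fields and fields[0].count(".") == 3 and fields[0][0].isdigit():
            peers.append((fields[0], int(fields[2])))
    return peers

print(bgp_neighbors(sample))
```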
iPlane. The iPlane project [47] performs daily IPv4 traceroute campaigns from around 300 PlanetLab nodes. iPlane employs Paris traceroute to target other PlanetLab nodes and a random fraction of the advertised address space. We used two daily archives of traceroute measurements, collected a week apart, from all the nodes active at the time of our measurements.
CAIDA Archipelago (Ark). CAIDA maintains Ark, a globally distributed measurement platform with 107 nodes deployed in 92 cities (as of May 2015, when we gathered the data). These monitors are divided into three teams, each of which performs Paris traceroutes to a randomly selected IP address in every announced /24 network in the advertised address space, covering the full space in about 2-3 days. We analyzed one dataset collected while we performed the traceroute campaigns with RIPE Atlas and the looking glasses.
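Ark-style target selection, one random address inside every announced /24, can be sketched with the standard `ipaddress` module. This is a loose illustration of the sampling idea, not CAIDA's implementation, and it assumes every input prefix is /24 or shorter.

```python
import ipaddress
import random

def random_targets(prefixes, seed=None):
    """One random host address in every /24 of each announced prefix,
    loosely mimicking Ark's per-/24 target selection. Assumes each
    prefix is /24 or shorter."""
    rng = random.Random(seed)
    targets = []
    for p in prefixes:
        net = ipaddress.ip_network(p)
        for sub in net.subnets(new_prefix=24):
            # Pick a random last octet in 1..255 (skip the .0 address).
            targets.append(sub.network_address + rng.randrange(1, 256))
    return targets

ts = random_targets(["198.51.100.0/23"], seed=1)
print(len(ts))  # a /23 contains two /24s -> two targets
```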
Targeted traceroutes. A full traceroute campaign using more than 95% of all active RIPE Atlas nodes takes about 5 minutes for one target. The time required by each looking glass to complete a traceroute measurement to a single target depends on the number of locations provided by that looking glass. The largest

References

Proceedings ArticleDOI, 19 Aug 2002
Abstract: To date, realistic ISP topologies have not been accessible to the research community, leaving work that depends on topology on an uncertain footing. In this paper, we present new Internet mapping techniques that have enabled us to directly measure router-level ISP topologies. Our techniques reduce the number of required traces compared to a brute-force, all-to-all approach by three orders of magnitude without a significant loss in accuracy. They include the use of BGP routing tables to focus the measurements, exploiting properties of IP routing to eliminate redundant measurements, better alias resolution, and the use of DNS to divide each map into POPs and backbone. We collect maps from ten diverse ISPs using our techniques, and find that our maps are substantially more complete than those of earlier Internet mapping efforts. We also report on properties of these maps, including the size of POPs, distribution of router outdegree, and the inter-domain peering structure. As part of this work, we release our maps to the community.

Journal ArticleDOI
Abstract: The study of network topology has attracted a great deal of attention in the last decade, but has been hampered by a lack of accurate data. Existing methods for measuring topology have flaws, and arguments about the importance of these have overshadowed the more interesting questions about network structure. The Internet Topology Zoo is a store of network data created from the information that network operators make public. As such it is the most accurate large-scale collection of network topologies available, and includes meta-data that couldn't have been measured. With this data we can answer questions about network structure with more certainty than ever before - we illustrate its power through a preliminary analysis of the PoP-level topology of over 140 networks. We find a wide range of network designs not conforming as a whole to any obvious model.

Proceedings ArticleDOI, 30 Aug 2010
Abstract: In this paper, we examine changes in Internet inter-domain traffic demands and interconnection policies. We analyze more than 200 Exabytes of commercial Internet traffic over a two year period through the instrumentation of 110 large and geographically diverse cable operators, international transit backbones, regional networks and content providers. Our analysis shows significant changes in inter-AS traffic patterns and an evolution of provider peering strategies. Specifically, we find the majority of inter-domain traffic by volume now flows directly between large content providers, data center / CDNs and consumer networks. We also show significant changes in Internet application usage, including a global decline of P2P and a significant rise in video traffic. We conclude with estimates of the current size of the Internet by inter-domain traffic volume and rate of annualized inter-domain traffic growth.

Proceedings ArticleDOI, 06 Nov 2006
Abstract: In this paper, we present the design, implementation, and evaluation of iPlane, a scalable service providing accurate predictions of Internet path performance for emerging overlay services. Unlike the more common black box latency prediction techniques in use today, iPlane adopts a structural approach and predicts end-to-end path performance by composing the performance of measured segments of Internet paths. For the paths we observed, this method allows us to accurately and efficiently predict latency, bandwidth, capacity and loss rates between arbitrary Internet hosts. We demonstrate the feasibility and utility of the iPlane service by applying it to several representative overlay services in use today: content distribution, swarming peer-to-peer filesharing, and voice-over-IP. In each case, using iPlane's predictions leads to improved overlay performance.

Proceedings Article, 14 Aug 2013
Abstract: Internet-wide network scanning has numerous security applications, including exposing new vulnerabilities and tracking the adoption of defensive mechanisms, but probing the entire public address space with existing tools is both difficult and slow. We introduce ZMap, a modular, open-source network scanner specifically architected to perform Internet-wide scans and capable of surveying the entire IPv4 address space in under 45 minutes from user space on a single machine, approaching the theoretical maximum speed of gigabit Ethernet. We present the scanner architecture, experimentally characterize its performance and accuracy, and explore the security implications of high speed Internet-scale network surveys, both offensive and defensive. We also discuss best practices for good Internet citizenship when performing Internet-wide surveys, informed by our own experiences conducting a long-term research survey over the past year.