Proceedings ArticleDOI

Networking named content

01 Dec 2009-pp 1-12
TL;DR: Content-Centric Networking (CCN) is presented, which treats content as a primitive - decoupling location from identity, security and access, and retrieving content by name, using new approaches to routing named content.
Abstract: Network use has evolved to be dominated by content distribution and retrieval, while networking technology still speaks only of connections between hosts. Accessing content and services requires mapping from the what that users care about to the network's where. We present Content-Centric Networking (CCN) which treats content as a primitive - decoupling location from identity, security and access, and retrieving content by name. Using new approaches to routing named content, derived heavily from IP, we can simultaneously achieve scalability, security and performance. We implemented our architecture's basic features and demonstrate resilience and performance with secure file downloads and VoIP calls.
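The retrieve-by-name model described in the abstract can be sketched in a few lines of Python. This is a toy in-memory content store, not the actual CCN implementation: the class, method names, and hierarchical name strings are illustrative stand-ins for the real Interest/Data exchange and wire format.

```python
# Minimal sketch of name-based retrieval: consumers ask for content by
# name; any node holding a copy can satisfy the request.

class ContentStore:
    def __init__(self):
        self._store = {}  # hierarchical name -> data bytes

    def publish(self, name, data):
        self._store[name] = data

    def satisfy_interest(self, name):
        # The consumer names *what* it wants, never *where* it lives;
        # a cache hit anywhere on the path can answer the Interest.
        return self._store.get(name)

cs = ContentStore()
cs.publish("/parc/videos/widget.mpg/v3/s0", b"segment-0-bytes")
assert cs.satisfy_interest("/parc/videos/widget.mpg/v3/s0") == b"segment-0-bytes"
assert cs.satisfy_interest("/parc/videos/widget.mpg/v3/s1") is None
```

Because the request names the content rather than a host, location is decoupled from identity: the same name resolves to the same data no matter which node answers.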


Citations
Proceedings ArticleDOI
14 Nov 2011
TL;DR: The existing commonalities and important differences among data-oriented and content-centric network architectures are identified, and some remaining research issues are discussed; the authors emerge skeptical (but open-minded) about the value of this approach to networking.
Abstract: There have been many recent papers on data-oriented or content-centric network architectures. Despite the voluminous literature, surprisingly little clarity is emerging as most papers focus on what differentiates them from other proposals. We begin this paper by identifying the existing commonalities and important differences in these designs, and then discuss some remaining research issues. After our review, we emerge skeptical (but open-minded) about the value of this approach to networking.

501 citations

Proceedings ArticleDOI
12 Aug 2013
TL;DR: NLSR's main design choices are discussed, including a hierarchical naming scheme for routers, keys, and routing updates, a hierarchical trust model for routing within a single administrative domain, a hop-by-hop synchronization protocol to replace the traditional network-wide flooding for routing update dissemination, and a simple way to rank multiple forwarding options.
Abstract: This paper presents the design of the Named-data Link State Routing protocol (NLSR), a routing protocol for Named Data Networking (NDN). Since NDN uses names to identify and retrieve data, NLSR propagates reachability to name prefixes instead of IP prefixes. Moreover, NLSR differs from IP-based link-state routing protocols in two fundamental ways. First, NLSR uses Interest/Data packets to disseminate routing updates, directly benefiting from NDN's data authenticity. Second, NLSR produces a list of ranked forwarding options for each name prefix to facilitate NDN's adaptive forwarding strategies. In this paper we discuss NLSR's main design choices on (1) a hierarchical naming scheme for routers, keys, and routing updates, (2) a hierarchical trust model for routing within a single administrative domain, (3) a hop-by-hop synchronization protocol to replace the traditional network-wide flooding for routing update dissemination, and (4) a simple way to rank multiple forwarding options. Compared with IP-based link state routing, NLSR offers more efficient update dissemination, built-in update authentication, and native support of multipath forwarding.
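The ranked-forwarding-options idea in item (4) can be illustrated with a toy ranking function. The face names and the single cost metric here are hypothetical, not NLSR's actual data structures; the point is only that a name prefix maps to an ordered list of next hops rather than a single best route.

```python
def rank_next_hops(routes):
    # routes: {face_name: path_cost} for one name prefix.
    # Lower cost ranks first; the forwarding strategy can then try
    # faces in order and fall over to the next on failure.
    return [face for face, cost in sorted(routes.items(), key=lambda kv: kv[1])]

ranked = rank_next_hops({"faceA": 20, "faceB": 10, "faceC": 30})
assert ranked == ["faceB", "faceA", "faceC"]
```

Keeping all options ranked, instead of installing only the best one, is what lets NDN's adaptive forwarding retry alternative paths without waiting for routing to reconverge.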

451 citations

Journal ArticleDOI
26 Jun 2012
TL;DR: The design of NDN's adaptive forwarding is outlined, its potential benefits are articulated, and open research issues are identified.
Abstract: In Named Data Networking (NDN) architecture, packets carry data names rather than source or destination addresses. This change of paradigm leads to a new data plane: data consumers send out Interest packets, routers forward them and maintain the state of pending Interests, which is used to guide Data packets back to the consumers. NDN routers' forwarding process is able to detect network problems by observing the two-way traffic of Interest and Data packets, and explore multiple alternative paths without loops. This is in sharp contrast to today's IP forwarding process which follows a single path chosen by the routing process, with no adaptability of its own. In this paper we outline the design of NDN's adaptive forwarding, articulate its potential benefits, and identify open research issues.
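The pending-Interest state described above can be sketched as a small Python class. This is a simplified model, not NDN's forwarding daemon: real PIT entries also carry timeouts, nonces, and per-face statistics.

```python
class NdnRouter:
    def __init__(self):
        self.pit = {}  # name -> set of faces with pending Interests

    def on_interest(self, name, in_face):
        # Duplicate Interests for the same name are aggregated:
        # only the first one is forwarded upstream.
        is_first = name not in self.pit
        self.pit.setdefault(name, set()).add(in_face)
        return is_first  # True -> forward upstream

    def on_data(self, name):
        # Data consumes the pending-Interest state and flows back
        # toward every face that asked; unsolicited Data matches nothing.
        return self.pit.pop(name, set())

r = NdnRouter()
assert r.on_interest("/a/b", 1) is True
assert r.on_interest("/a/b", 2) is False   # aggregated, not re-forwarded
assert r.on_data("/a/b") == {1, 2}
assert r.on_data("/a/b") == set()          # state already consumed
```

The symmetry is the key property: every Data packet retraces an Interest, so a router can measure two-way traffic per face and detect problems locally.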

449 citations

Journal ArticleDOI
TL;DR: This survey analyzes, compares, and contrasts the naming and routing mechanisms proposed by some of the most prominent ICN research projects.
Abstract: The concept of information-centric networking (ICN) defines a new communication model that focuses on what is being exchanged rather than which network entities are exchanging information. From the ICN perspective, contents are first class network citizens instead of hosts. ICN's primary objective is to shift the current host-oriented communication model toward a content-centric model for effective distribution of content over the network. In recent years this paradigm shift has generated much interest in the research community and sprung several research projects around the globe to investigate and advance this stream of thought. Content naming and content-based routing are core research challenges in this research community. In this survey, we analyze, compare, and contrast the naming and routing mechanisms proposed by some of the most prominent ICN research projects.

433 citations

Journal ArticleDOI
01 Mar 2017
TL;DR: The evolutionary stages, i.e., generations, that have characterized the development of IoT are presented, along with the motivations that triggered them; the role IoT can play in addressing the main societal challenges, and the set of features expected from the relevant solutions, are also analyzed.
Abstract: The high penetration rate of new technologies in all the activities of everyday life is fostering the belief that for any new societal challenge there is always an ICT solution able to successfully deal with it. Recently, the solution proposed almost every time is the “Internet of Things” (IoT). This apparent panacea of the ICT world takes on different aspects and, in practice, is identified with different (often very different) technological solutions. As a result, many think that IoT is just RFIDs, others think that it is sensor networks, and yet others that it is machine-to-machine communications. In the meanwhile, industrial players are taking advantage of the popularity of IoT to use it as a very trendy brand for technology solutions oriented to the consumer market. The scientific literature sometimes does not help much in clarifying matters, as it is rich in definitions of IoT that are often discordant with one another. The objective of this paper is to present the evolutionary stages, i.e., generations, that have characterized the development of IoT, along with the motivations that triggered them. It also analyzes the role that IoT can play in addressing the main societal challenges and the set of features expected from the relevant solutions. The final objective is to give a modern definition of the phenomenon, which de facto shows a strongly pervasive nature and which, if not well understood in its theories, technologies, methodologies, and real potential, runs the risk of being regarded with suspicion and, thus, rejected by users.

431 citations

References
Proceedings ArticleDOI
27 Aug 2001
TL;DR: Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Abstract: A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
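Chord's one operation, mapping a key onto a node, can be sketched with consistent hashing on an identifier circle. This is a stand-alone illustration, not the Chord protocol itself: it omits finger tables, joins, and failures, and uses a small 16-bit ring for readability (real Chord uses full SHA-1 identifiers).

```python
import hashlib
from bisect import bisect_left

def chord_id(key, m=16):
    # Hash onto a 2^m identifier circle (SHA-1, as in Chord).
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (2 ** m)

def successor(node_ids, key_id):
    # A key is stored on the first node whose identifier is >= key_id,
    # wrapping around the circle -- the key's "successor" node.
    ids = sorted(node_ids)
    i = bisect_left(ids, key_id)
    return ids[i % len(ids)]

nodes = [chord_id(f"node-{n}") for n in range(8)]
owner = successor(nodes, chord_id("some-data-item"))
assert owner in nodes
# Lookups are deterministic: the same key always maps to the same node.
assert owner == successor(nodes, chord_id("some-data-item"))
```

The O(log N) cost cited in the abstract comes from the finger-table routing that this sketch omits; here the successor is found directly for clarity.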

10,286 citations


Additional excerpts

  • ...A circular namespace is created to ensure correct routing (as in Chord [36]), but additional pointers are added to shorten routes....


Journal ArticleDOI
01 Aug 1988
TL;DR: Seven new congestion-control algorithms added to 4BSD TCP are described; measurements and the reports of beta testers suggest that the final product is fairly good at dealing with congested conditions on the Internet.
Abstract: In October of '86, the Internet had the first of what became a series of 'congestion collapses'. During this period, the data throughput from LBL to UC Berkeley (sites separated by 400 yards and three IMP hops) dropped from 32 Kbps to 40 bps. Mike Karels and I were fascinated by this sudden factor-of-thousand drop in bandwidth and embarked on an investigation of why things had gotten so bad. We wondered, in particular, if the 4.3BSD (Berkeley UNIX) TCP was mis-behaving or if it could be tuned to work better under abysmal network conditions. The answer to both of these questions was "yes".

Since that time, we have put seven new algorithms into the 4BSD TCP:
(i) round-trip-time variance estimation
(ii) exponential retransmit timer backoff
(iii) slow-start
(iv) more aggressive receiver ack policy
(v) dynamic window sizing on congestion
(vi) Karn's clamped retransmit backoff
(vii) fast retransmit

Our measurements and the reports of beta testers suggest that the final product is fairly good at dealing with congested conditions on the Internet. This paper is a brief description of (i)-(v) and the rationale behind them. (vi) is an algorithm recently developed by Phil Karn of Bell Communications Research, described in [KP87]. (vii) is described in a soon-to-be-published RFC.

Algorithms (i)-(v) spring from one observation: the flow on a TCP connection (or ISO TP-4 or Xerox NS SPP connection) should obey a 'conservation of packets' principle. And, if this principle were obeyed, congestion collapse would become the exception rather than the rule. Thus congestion control involves finding places that violate conservation and fixing them.

By 'conservation of packets' I mean that for a connection 'in equilibrium', i.e., running stably with a full window of data in transit, the packet flow is what a physicist would call 'conservative': a new packet isn't put into the network until an old packet leaves. The physics of flow predicts that systems with this property should be robust in the face of congestion. Observation of the Internet suggests that it was not particularly robust. Why the discrepancy?

There are only three ways for packet conservation to fail:
1. The connection doesn't get to equilibrium, or
2. A sender injects a new packet before an old packet has exited, or
3. The equilibrium can't be reached because of resource limits along the path.

In the following sections, we treat each of these in turn.
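The interplay of slow-start (iii) and dynamic window sizing on congestion (v) can be shown with a toy window trace. This is a deliberately simplified model, not 4BSD TCP: units are whole segments, the growth and backoff rules are the textbook idealization, and the event stream replaces real acks and timeouts.

```python
def tcp_cwnd_trace(events, ssthresh=8):
    # Toy model of slow-start + congestion avoidance + backoff on loss.
    cwnd, trace = 1, []
    for ev in events:
        if ev == "ack":
            if cwnd < ssthresh:
                cwnd *= 2            # slow-start: exponential growth
            else:
                cwnd += 1            # congestion avoidance: linear growth
        elif ev == "loss":
            ssthresh = max(cwnd // 2, 1)
            cwnd = 1                 # back off, re-enter slow-start
        trace.append(cwnd)
    return trace

trace = tcp_cwnd_trace(["ack"] * 4 + ["loss"] + ["ack"] * 3)
assert trace[:4] == [2, 4, 8, 9]   # exponential, then linear past ssthresh
assert trace[4] == 1               # loss: window collapses
```

The window is the sender's estimate of how many packets can be "in flight" while still conserving packets: grow it gently while acks confirm equilibrium, shrink it sharply when a loss signals that conservation failed.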

5,620 citations


"Networking named content" refers background in this paper

  • ...A similar flow balance between data and ack packets is what gives TCP its scalability and adaptability [8] but, unlike TCP, CCN’s model works for many-to-many multipoint delivery (see Section 3....


  • ...The TCP solution is for endpoints to dynamically adjust their window sizes to keep the aggregate traffic volume below the level where congestion occurs [8]....


Journal ArticleDOI
12 Nov 2000
TL;DR: OceanStore monitoring of usage patterns allows adaptation to regional outages and denial of service attacks; monitoring also enhances performance through pro-active movement of data.
Abstract: OceanStore is a utility infrastructure designed to span the globe and provide continuous access to persistent information. Since this infrastructure is comprised of untrusted servers, data is protected through redundancy and cryptographic techniques. To improve performance, data is allowed to be cached anywhere, anytime. Additionally, monitoring of usage patterns allows adaptation to regional outages and denial of service attacks; monitoring also enhances performance through pro-active movement of data. A prototype implementation is currently under development.

3,376 citations


"Networking named content" refers background in this paper

  • ...ing effectively a self-certifying name [23, 25, 11, 8]), or by the identity (key) of its publisher [8, 29, 22, 21])....


Book ChapterDOI
01 Jan 2001
TL;DR: Freenet as discussed by the authors is an adaptive peer-to-peer network application that permits the publication, replication, and retrieval of data while protecting the anonymity of both authors and readers, but it does not provide any centralized location index.
Abstract: We describe Freenet, an adaptive peer-to-peer network application that permits the publication, replication, and retrieval of data while protecting the anonymity of both authors and readers. Freenet operates as a network of identical nodes that collectively pool their storage space to store data files and cooperate to route requests to the most likely physical location of data. No broadcast search or centralized location index is employed. Files are referred to in a location-independent manner, and are dynamically replicated in locations near requestors and deleted from locations where there is no interest. It is infeasible to discover the true origin or destination of a file passing through the network, and difficult for a node operator to determine or be held responsible for the actual physical contents of her own node.

1,899 citations

Proceedings ArticleDOI
27 Aug 2007
TL;DR: The Data-Oriented Network Architecture (DONA) is proposed, which involves a clean-slate redesign of Internet naming and name resolution to adapt to changes in Internet usage.
Abstract: The Internet has evolved greatly from its original incarnation. For instance, the vast majority of current Internet usage is data retrieval and service access, whereas the architecture was designed around host-to-host applications such as telnet and ftp. Moreover, the original Internet was a purely transparent carrier of packets, but now the various network stakeholders use middleboxes to improve security and accelerate applications. To adapt to these changes, we propose the Data-Oriented Network Architecture (DONA), which involves a clean-slate redesign of Internet naming and name resolution.
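DONA's flat, self-certifying names have the form P:L, where P is derived from the principal's public key and L is a label. The sketch below shows only the naming idea; the key bytes are a stand-in (not real key material) and the 16-hex-digit truncation is an illustrative choice, not DONA's format.

```python
import hashlib

def self_certifying_name(principal_pubkey: bytes, label: str) -> str:
    # P:L name, where P is the hash of the principal's public key.
    p = hashlib.sha256(principal_pubkey).hexdigest()[:16]
    return f"{p}:{label}"

def verify(name: str, principal_pubkey: bytes) -> bool:
    # Anyone holding the public key can check that the name was derived
    # from it -- the binding needs no external naming authority.
    _, _, label = name.partition(":")
    return name == self_certifying_name(principal_pubkey, label)

key = b"fake-public-key-bytes"
name = self_certifying_name(key, "paper.pdf")
assert verify(name, key)
assert not verify(name, b"some-other-key")
```

This is the sense in which the name certifies itself: validity is checkable from the name and the key alone, which is what makes a name-based anycast primitive safe to resolve at untrusted resolution handlers.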

1,643 citations


"Networking named content" refers background or methods in this paper

  • ...The Data-Oriented Network Architecture [21] replaces DNS names with flat, self-certifying names and a namebased anycast primitive above the IP layer....


  • ...ing effectively a self-certifying name [23, 25, 11, 8]), or by the identity (key) of its publisher [8, 29, 22, 21])....
