Proceedings ArticleDOI

Networking named content

01 Dec 2009 - pp 1-12
TL;DR: Content-Centric Networking (CCN) is presented, which treats content as a primitive - decoupling location from identity, security and access, and retrieving content by name, using new approaches to routing named content.
Abstract: Network use has evolved to be dominated by content distribution and retrieval, while networking technology still speaks only of connections between hosts. Accessing content and services requires mapping from the what that users care about to the network's where. We present Content-Centric Networking (CCN) which treats content as a primitive - decoupling location from identity, security and access, and retrieving content by name. Using new approaches to routing named content, derived heavily from IP, we can simultaneously achieve scalability, security and performance. We implemented our architecture's basic features and demonstrate resilience and performance with secure file downloads and VoIP calls.
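As a concrete illustration of the exchange the abstract describes, here is a minimal sketch of a CCN-style node built around the paper's three core structures: a Content Store (cache), a Pending Interest Table (PIT), and a FIB matched by name prefix. The names, classes, and in-process "faces" are illustrative stand-ins, not the paper's implementation:

```python
class CCNNode:
    """Illustrative CCN-style node: Content Store + PIT + prefix FIB."""

    def __init__(self):
        self.content_store = {}   # name -> data (opportunistic cache)
        self.pit = {}             # name -> set of faces awaiting the data
        self.fib = {}             # name prefix -> upstream node

    def publish(self, name, data):
        self.content_store[name] = data

    def on_interest(self, name, face):
        if name in self.content_store:       # satisfied from the cache
            face.on_data(name, self.content_store[name])
        elif name in self.pit:               # aggregate duplicate Interests
            self.pit[name].add(face)
        else:
            self.pit[name] = {face}
            upstream = self._longest_prefix_match(name)
            if upstream is not None:
                upstream.on_interest(name, self)

    def on_data(self, name, data):
        self.content_store[name] = data      # cache on the way downstream
        for face in self.pit.pop(name, ()):  # one Data satisfies all waiters
            face.on_data(name, data)

    def _longest_prefix_match(self, name):
        best = max((p for p in self.fib if name.startswith(p)),
                   key=len, default=None)
        return self.fib.get(best)


class Consumer:
    def on_data(self, name, data):
        print(f"got {name!r}: {data!r}")


producer, router = CCNNode(), CCNNode()
producer.publish("/parc/videos/demo/s0", b"segment-0")
router.fib["/parc/"] = producer
router.on_interest("/parc/videos/demo/s0", Consumer())  # miss: forwarded up
router.on_interest("/parc/videos/demo/s0", Consumer())  # hit: served locally
```

Note how the second Interest is answered from the router's Content Store without touching the producer: caching falls out of the forwarding model rather than being a separate overlay.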


Citations
Journal ArticleDOI
TL;DR: It becomes critically important to study how current approaches to standardization in this area can be improved, and to better understand the opportunities for the research community to contribute to the IoT field.
Abstract: Technologies to support the Internet of Things are becoming more important as the need to better understand our environments and make them smart increases. As a result, it is predicted that intelligent devices and networks, such as WSNs, will not remain isolated, but will be connected and integrated into larger computer networks. The IP-based Internet is the largest network in the world, so great strides are being made to connect WSNs to it. To this end, the IETF has developed a suite of protocols and open standards for accessing applications and services in wireless resource-constrained networks. However, many open challenges remain, mostly due to the complex deployment characteristics of such systems and the stringent requirements imposed by the various services wishing to make use of them. It therefore becomes critically important to study how current approaches to standardization in this area can be improved, and at the same time to better understand the opportunities for the research community to contribute to the IoT field. To this end, this article presents an overview of current standards and research activities in both industry and academia.

744 citations

Proceedings ArticleDOI
17 Aug 2012
TL;DR: The results show reduction of up to 20% in server hits, and up to 10% in the number of hops required to hit cached contents, but, most importantly, reduction of cache-evictions by an order of magnitude in comparison to universal caching.
Abstract: In-network caching necessitates the transformation of centralised operations of traditional, overlay caching techniques to a decentralised and uncoordinated environment. Given that caching capacity in routers is relatively small in comparison to the amount of forwarded content, a key aspect is balanced distribution of content among the available caches. In this paper, we are concerned with decentralised, real-time distribution of content in router caches. Our goal is to reduce caching redundancy and, in turn, make more efficient utilisation of available cache resources along a delivery path. Our in-network caching scheme, called ProbCache, approximates the caching capability of a path and caches contents probabilistically in order to: i) leave caching space for other flows sharing (part of) the same path, and ii) fairly multiplex contents of different flows among caches of a shared path. We compare our algorithm against universal caching and against schemes proposed in the past for Web caching architectures, such as Leave Copy Down (LCD). Our results show reduction of up to 20% in server hits, and up to 10% in the number of hops required to hit cached contents, but, most importantly, reduction of cache evictions by an order of magnitude in comparison to universal caching.
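The path-aware caching decision can be sketched as follows. This is an illustrative approximation of ProbCache's idea (cache more aggressively near the requester, weighted by a node's share of the path's cache capacity), not the paper's exact formula:

```python
import random

def cache_probability(hops_traveled, path_length, node_capacity, path_capacity):
    """Hypothetical approximation of path-aware probabilistic caching:
    weight by position on the path (closer to the requester -> higher)
    and by this node's share of the path's total cache capacity.
    NOT ProbCache's exact formula; for illustration only."""
    position_weight = hops_traveled / path_length
    capacity_weight = node_capacity / path_capacity
    return min(1.0, position_weight * capacity_weight * path_length)

def maybe_cache(cache, name, data, probability):
    """Each en-route node flips a coin instead of caching unconditionally."""
    if random.random() < probability:
        cache[name] = data

# e.g. the 3rd hop of a 4-hop path, holding 1/4 of the path's capacity:
p = cache_probability(hops_traveled=3, path_length=4,
                      node_capacity=100, path_capacity=400)
print(f"cache with probability {p:.2f}")   # 0.75
```

The point of any such scheme is that the expected number of copies left along a path stays near one instead of one per router, which is what drives the order-of-magnitude drop in evictions the abstract reports.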

615 citations

Proceedings ArticleDOI
06 Mar 2014
TL;DR: The article discusses the evolution from the Intelligent Vehicle Grid to autonomous, Internet-connected vehicles and the Vehicular Cloud, the equivalent of the Internet cloud for vehicles, providing all the services required by autonomous vehicles.
Abstract: Traditionally, the vehicle has been an extension of its driver's ambulatory system, docile to the driver's commands. Recent advances in communications, controls and embedded systems have changed this model, paving the way to the Intelligent Vehicle Grid. The car is now a formidable sensor platform, absorbing information from the environment (and from other cars) and feeding it to drivers and infrastructure to assist in safe navigation, pollution control and traffic management. The next step in this evolution is just around the corner: the Internet of Autonomous Vehicles. Pioneered by the Google car, the Internet of Vehicles will be a distributed transport fabric capable of making its own decisions about driving customers to their destinations. Like other important instantiations of the Internet of Things (e.g., the smart building), the Internet of Vehicles will have communications, storage, intelligence, and learning capabilities to anticipate the customers' intentions. The concept that will help the transition to the Internet of Vehicles is the Vehicular Cloud, the equivalent of the Internet cloud for vehicles, providing all the services required by autonomous vehicles. In this article, we discuss the evolution from the Intelligent Vehicle Grid to autonomous, Internet-connected vehicles and the Vehicular Cloud.

610 citations

Journal ArticleDOI
TL;DR: An initial design of NDN's forwarding plane is described and the results show that this stateful forwarding plane can successfully circumvent prefix hijackers, avoid failed links, and utilize multiple paths to mitigate congestion.
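The "stateful forwarding plane" in the TL;DR refers to the per-Interest state (the PIT) that lets a router observe whether a path is working and retry elsewhere. A minimal sketch of such an adaptive strategy, with an invented `send` callback and a simplified ranking policy, might look like:

```python
class AdaptiveStrategy:
    """Sketch of per-prefix adaptive forwarding: try the best-ranked face;
    on failure (as reported by the caller-supplied send callback), demote
    that face and retry alternatives. Ranking policy is illustrative."""

    def __init__(self, faces):
        self.ranked = list(faces)            # best-first candidate next hops

    def forward(self, interest, send):
        for face in list(self.ranked):       # iterate over a snapshot
            if send(face, interest):         # send() returns True on success
                return face
            # Demote the failing face to the back of the ranking.
            self.ranked.append(self.ranked.pop(self.ranked.index(face)))
        return None                          # every known path failed


strategy = AdaptiveStrategy(["eth0", "eth1"])
used = strategy.forward("/a/b", lambda face, i: face == "eth1")  # eth0 fails
assert used == "eth1" and strategy.ranked == ["eth1", "eth0"]
```

This per-Interest feedback loop is what lets the forwarding plane route around failed links or hijacked prefixes without waiting for the routing protocol to converge.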

581 citations

Proceedings ArticleDOI
27 Aug 2013
TL;DR: A proof-of-concept design of an incrementally deployable ICN architecture is presented, and it is found that pervasive caching and nearest-replica routing are not fundamentally necessary: most of the performance benefits can be achieved with simpler caching architectures.
Abstract: Information-Centric Networking (ICN) has seen a significant resurgence in recent years. ICN promises benefits to users and service providers along several dimensions (e.g., performance, security, and mobility). These benefits, however, come at a non-trivial cost as many ICN proposals envision adding significant complexity to the network by having routers serve as content caches and support nearest-replica routing. This paper is driven by the simple question of whether this additional complexity is justified and if we can achieve these benefits in an incrementally deployable fashion. To this end, we use trace-driven simulations to analyze the quantitative benefits attributed to ICN (e.g., lower latency and congestion). Somewhat surprisingly, we find that pervasive caching and nearest-replica routing are not fundamentally necessary---most of the performance benefits can be achieved with simpler caching architectures. We also discuss how the qualitative benefits of ICN (e.g., security, mobility) can be achieved without any changes to the network. Building on these insights, we present a proof-of-concept design of an incrementally deployable ICN architecture.
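A toy version of the paper's trace-driven methodology, assuming a synthetic Zipf-like request trace and a single LRU cache at the network edge (catalogue size, skew, and cache size below are made up):

```python
import random
from collections import OrderedDict

def zipf_trace(n_objects=10_000, n_requests=100_000, alpha=0.8, seed=1):
    """Synthetic request trace with Zipf-like popularity skew."""
    rng = random.Random(seed)
    weights = [1.0 / (rank ** alpha) for rank in range(1, n_objects + 1)]
    return rng.choices(range(n_objects), weights=weights, k=n_requests)

def lru_hit_ratio(trace, cache_size):
    """Replay the trace against a single LRU cache and report hit ratio."""
    cache, hits = OrderedDict(), 0
    for obj in trace:
        if obj in cache:
            hits += 1
            cache.move_to_end(obj)           # refresh recency
        else:
            cache[obj] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)    # evict least recently used
    return hits / len(trace)

print(f"edge-cache hit ratio: {lru_hit_ratio(zipf_trace(), 500):.2%}")
```

Comparing this edge-only baseline against a per-router caching variant on the same trace is, in spirit, how one reaches the paper's conclusion that pervasive in-network caching buys relatively little.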

506 citations

References
Proceedings ArticleDOI
27 Aug 2001
TL;DR: Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Abstract: A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
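To make the logarithmic-hop claim concrete, here is a minimal sketch of Chord's lookup over a static ring with precomputed finger tables; the identifier width, node population, and key are illustrative:

```python
import hashlib

M = 16                      # identifier bits; the ring has 2**M positions
RING = 2 ** M

def chord_id(key):
    return int.from_bytes(hashlib.sha1(key.encode()).digest(), "big") % RING

def in_half_open(x, a, b):  # x in (a, b] on the circular id space
    return (a < x <= b) if a < b else (x > a or x <= b)

def in_open(x, a, b):       # x in (a, b) on the circular id space
    return (a < x < b) if a < b else (x > a or x < b)

class Node:
    def __init__(self, ident):
        self.id, self.finger = ident, []

def build_ring(ids):
    ids = sorted(set(ids))
    nodes = {i: Node(i) for i in ids}
    def successor(x):       # first node at or clockwise after x
        return nodes[next((i for i in ids if i >= x), ids[0])]
    for n in nodes.values():
        n.finger = [successor((n.id + 2 ** k) % RING) for k in range(M)]
    return nodes

def find_successor(node, key_id, hops=0):
    if in_half_open(key_id, node.id, node.finger[0].id):
        return node.finger[0], hops + 1
    for f in reversed(node.finger):          # closest preceding finger
        if in_open(f.id, node.id, key_id):
            return find_successor(f, key_id, hops + 1)
    return node.finger[0], hops + 1          # fall back to the successor

ring = build_ring(chord_id(f"node-{i}") for i in range(50))
start = next(iter(ring.values()))
owner, hops = find_successor(start, chord_id("/some/data/item"))
print(f"key maps to node {owner.id} after {hops} hops")
```

Each hop jumps to the farthest finger that does not overshoot the key, roughly halving the remaining ring distance, which is where the O(log N) lookup cost comes from.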

10,286 citations


Additional excerpts

  • ...A circular namespace is created to ensure correct routing (as in Chord [36]), but additional pointers are added to shorten routes....


Journal ArticleDOI
01 Aug 1988
TL;DR: Seven new congestion-control algorithms were added to 4BSD TCP; the measurements and the reports of beta testers suggest that the final product is fairly good at dealing with congested conditions on the Internet.
Abstract: In October of '86, the Internet had the first of what became a series of 'congestion collapses'. During this period, the data throughput from LBL to UC Berkeley (sites separated by 400 yards and three IMP hops) dropped from 32 Kbps to 40 bps. Mike Karels and I were fascinated by this sudden factor-of-thousand drop in bandwidth and embarked on an investigation of why things had gotten so bad. We wondered, in particular, if the 4.3BSD (Berkeley UNIX) TCP was mis-behaving or if it could be tuned to work better under abysmal network conditions. The answer to both of these questions was "yes".

Since that time, we have put seven new algorithms into the 4BSD TCP: (i) round-trip-time variance estimation; (ii) exponential retransmit timer backoff; (iii) slow-start; (iv) more aggressive receiver ack policy; (v) dynamic window sizing on congestion; (vi) Karn's clamped retransmit backoff; (vii) fast retransmit. Our measurements and the reports of beta testers suggest that the final product is fairly good at dealing with congested conditions on the Internet.

This paper is a brief description of (i)-(v) and the rationale behind them. (vi) is an algorithm recently developed by Phil Karn of Bell Communications Research, described in [KP87]. (vii) is described in a soon-to-be-published RFC.

Algorithms (i)-(v) spring from one observation: the flow on a TCP connection (or ISO TP-4 or Xerox NS SPP connection) should obey a 'conservation of packets' principle. And, if this principle were obeyed, congestion collapse would become the exception rather than the rule. Thus congestion control involves finding places that violate conservation and fixing them.

By 'conservation of packets' I mean that for a connection 'in equilibrium', i.e., running stably with a full window of data in transit, the packet flow is what a physicist would call 'conservative': a new packet isn't put into the network until an old packet leaves. The physics of flow predicts that systems with this property should be robust in the face of congestion. Observation of the Internet suggests that it was not particularly robust. Why the discrepancy?

There are only three ways for packet conservation to fail: (1) the connection doesn't get to equilibrium, or (2) a sender injects a new packet before an old packet has exited, or (3) the equilibrium can't be reached because of resource limits along the path. In the following sections, we treat each of these in turn.
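A toy round-by-round simulation of the window dynamics described above (slow start, congestion avoidance, multiplicative decrease on loss); loss is modelled as a coin flip per round purely for illustration:

```python
import random

def simulate_cwnd(rounds=40, loss_prob=0.05, seed=7):
    """Toy Tahoe-style dynamics: exponential growth in slow start up to
    ssthresh, additive increase afterwards, and on a loss signal set
    ssthresh to half the window and restart slow start."""
    rng = random.Random(seed)
    cwnd, ssthresh, history = 1.0, 64.0, []      # windows in segments
    for _ in range(rounds):
        if rng.random() < loss_prob:             # congestion signal
            ssthresh = max(cwnd / 2, 2.0)        # multiplicative decrease
            cwnd = 1.0                           # restart in slow start
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)       # slow start: double per RTT
        else:
            cwnd += 1.0                          # congestion avoidance: +1/RTT
        history.append(cwnd)
    return history

print(" ".join(f"{w:g}" for w in simulate_cwnd()))
```

The resulting sawtooth is the visible signature of the 'conservation of packets' principle: the window probes for capacity additively and backs off multiplicatively when conservation is violated.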

5,620 citations


"Networking named content" refers background in this paper

  • ...A similar flow balance between data and ack packets is what gives TCP its scalability and adaptability [8] but, unlike TCP, CCN’s model works for many-to-many multipoint delivery (see Section 3....


  • ...The TCP solution is for endpoints to dynamically adjust their window sizes to keep the aggregate traffic volume below the level where congestion occurs [8]....


Journal ArticleDOI
12 Nov 2000
TL;DR: OceanStore's monitoring of usage patterns allows adaptation to regional outages and denial-of-service attacks; monitoring also enhances performance through pro-active movement of data.
Abstract: OceanStore is a utility infrastructure designed to span the globe and provide continuous access to persistent information. Since this infrastructure is comprised of untrusted servers, data is protected through redundancy and cryptographic techniques. To improve performance, data is allowed to be cached anywhere, anytime. Additionally, monitoring of usage patterns allows adaptation to regional outages and denial of service attacks; monitoring also enhances performance through pro-active movement of data. A prototype implementation is currently under development.
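The core idea that untrusted servers can be used safely via redundancy and cryptography can be sketched as follows; the dict-based "servers" and naming are stand-ins, not OceanStore's actual protocol:

```python
import hashlib

def store(servers, name, data, replicas=3):
    """Replicate an object on several servers; return the digest the
    reader uses (obtained out of band) as its integrity anchor."""
    digest = hashlib.sha256(data).hexdigest()
    for s in servers[:replicas]:
        s[name] = data                       # any server may later tamper
    return digest

def fetch(servers, name, digest):
    """Accept the first replica whose content matches the digest."""
    for s in servers:
        data = s.get(name)
        if data is not None and hashlib.sha256(data).hexdigest() == digest:
            return data                      # first verifiable replica wins
    raise LookupError("no untampered replica found")

servers = [{}, {}, {}, {}]
d = store(servers, "report.txt", b"quarterly numbers")
servers[0]["report.txt"] = b"tampered"       # a malicious server rewrites it
assert fetch(servers, "report.txt", d) == b"quarterly numbers"
```

Because integrity is checked end-to-end against the digest, replicas can be cached "anywhere, anytime" without extending trust to the machines that hold them.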

3,376 citations


"Networking named content" refers background in this paper

  • ...ing effectively a self-certifying name [23, 25, 11, 8]), or by the identity (key) of its publisher [8, 29, 22, 21])....


Book ChapterDOI
01 Jan 2001
TL;DR: Freenet, as discussed by the authors, is an adaptive peer-to-peer network application that permits the publication, replication, and retrieval of data while protecting the anonymity of both authors and readers; it employs no broadcast search or centralized location index.
Abstract: We describe Freenet, an adaptive peer-to-peer network application that permits the publication, replication, and retrieval of data while protecting the anonymity of both authors and readers. Freenet operates as a network of identical nodes that collectively pool their storage space to store data files and cooperate to route requests to the most likely physical location of data. No broadcast search or centralized location index is employed. Files are referred to in a location-independent manner, and are dynamically replicated in locations near requestors and deleted from locations where there is no interest. It is infeasible to discover the true origin or destination of a file passing through the network, and difficult for a node operator to determine or be held responsible for the actual physical contents of her own node.
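A simplified sketch of Freenet-style routing: forward a request greedily toward the neighbour believed closest to the key, and replicate successful results along the return path. The closeness metric and topology here are stand-ins for the paper's actual mechanisms:

```python
import hashlib

def key_id(key):
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:4], "big")

class FreenetNode:
    """Illustrative node: greedy key-closeness routing with a hop limit,
    replicating successful results along the return path."""

    def __init__(self, location):
        self.location, self.neighbours, self.store = location, [], {}

    def get(self, key, hops_left=10, visited=None):
        visited = set() if visited is None else visited
        visited.add(self)
        if key in self.store:
            return self.store[key]
        if hops_left == 0:
            return None
        # Try neighbours in order of (simplified) closeness to the key.
        for nb in sorted(self.neighbours,
                         key=lambda n: abs(n.location - key_id(key))):
            if nb in visited:
                continue
            data = nb.get(key, hops_left - 1, visited)
            if data is not None:
                self.store[key] = data       # replicate on the return path
                return data
        return None


a, b = FreenetNode(10), FreenetNode(key_id("song.mp3"))
a.neighbours, b.neighbours = [b], [a]
b.store["song.mp3"] = b"bits"
assert a.get("song.mp3") == b"bits" and "song.mp3" in a.store
```

Return-path replication is what makes popular data drift toward requestors and disappear from regions with no interest, exactly the dynamic the abstract describes.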

1,899 citations

Proceedings ArticleDOI
27 Aug 2007
TL;DR: The Data-Oriented Network Architecture (DONA) is proposed, which involves a clean-slate redesign of Internet naming and name resolution to adapt to changes in Internet usage.
Abstract: The Internet has evolved greatly from its original incarnation. For instance, the vast majority of current Internet usage is data retrieval and service access, whereas the architecture was designed around host-to-host applications such as telnet and ftp. Moreover, the original Internet was a purely transparent carrier of packets, but now the various network stakeholders use middleboxes to improve security and accelerate applications. To adapt to these changes, we propose the Data-Oriented Network Architecture (DONA), which involves a clean-slate redesign of Internet naming and name resolution.
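DONA's flat, self-certifying names have the form P:L, where P is a cryptographic hash of the principal's public key and L a label the principal chooses, so a consumer can verify the name/key binding with no DNS or PKI in the path. A minimal sketch of that binding check (real signature verification over the data is elided; the key bytes are fabricated):

```python
import hashlib

def make_name(pubkey: bytes, label: str) -> str:
    """Build a P:L name; P is the hash of the principal's public key
    (truncated here purely for display)."""
    principal = hashlib.sha256(pubkey).hexdigest()[:16]
    return f"{principal}:{label}"

def verify_binding(name: str, pubkey: bytes) -> bool:
    """Check that the public key shipped with the data hashes to the
    principal part of the name the consumer asked for."""
    principal, _, _label = name.partition(":")
    return hashlib.sha256(pubkey).hexdigest()[:16] == principal

pubkey = b"-----illustrative public key bytes-----"
name = make_name(pubkey, "videos/intro.mp4")
assert verify_binding(name, pubkey)        # consumer-side self-certification
```

The trade-off the excerpts below point at: such names are securely bound to their publisher but are flat and opaque, so resolution needs a name-based anycast layer rather than hierarchical routing on the names themselves.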

1,643 citations


"Networking named content" refers background or methods in this paper

  • ...The Data-Oriented Network Architecture [21] replaces DNS names with flat, self-certifying names and a namebased anycast primitive above the IP layer....


  • ...ing effectively a self-certifying name [23, 25, 11, 8]), or by the identity (key) of its publisher [8, 29, 22, 21])....
