Proceedings ArticleDOI

Object-Oriented Packet Caching for ICN

TL;DR: Object-oriented Packet Caching is proposed, a novel caching scheme that overcomes the SRAM bottleneck by combining object-level indexing in the SRAM with packet-level storage in the DRAM, and can enhance the impact of ICN packet-level caching, reducing both network and server load.
Abstract: One of the most discussed features offered by Information-centric Networking (ICN) architectures is the ability to support packet-level caching at every node in the network. By individually naming each packet, ICN allows routers to turn their queueing buffers into packet caches, thus exploiting the network's existing storage resources. However, the performance of packet caching at commodity routers is restricted by the small capacity of their SRAM, which holds the index for the packets stored in the slower DRAM. We therefore propose Object-oriented Packet Caching (OPC), a novel caching scheme that overcomes the SRAM bottleneck by combining object-level indexing in the SRAM with packet-level storage in the DRAM. We implemented OPC and experimentally evaluated it over various cache placement policies, showing that it can enhance the impact of ICN packet-level caching, reducing both network and server load.
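The core mechanism the abstract describes can be sketched as follows: a small fast index keeps one entry per object rather than per packet, while individual packets live in a larger, slower store. This is a minimal illustrative sketch, assuming simple dict-based structures and whole-object eviction; it is not the authors' router implementation, and all names here are invented for illustration.

```python
# Illustrative sketch of the OPC idea: the fast (SRAM-like) index holds one
# entry per object, while the slow (DRAM-like) store holds individual packets.
class OPCCache:
    def __init__(self, index_capacity):
        self.index_capacity = index_capacity  # max object entries (SRAM budget)
        self.index = {}   # object name -> set of cached packet numbers (SRAM)
        self.store = {}   # (object name, packet number) -> payload (DRAM)

    def insert(self, obj, pkt_no, payload):
        if obj not in self.index:
            if len(self.index) >= self.index_capacity:
                self._evict_object()
            self.index[obj] = set()
        self.index[obj].add(pkt_no)
        self.store[(obj, pkt_no)] = payload

    def lookup(self, obj, pkt_no):
        # A single object-level index probe decides whether the packet is cached.
        if obj in self.index and pkt_no in self.index[obj]:
            return self.store[(obj, pkt_no)]
        return None

    def _evict_object(self):
        # Evict an entire object, freeing all of its packets at once.
        victim, pkts = self.index.popitem()
        for p in pkts:
            del self.store[(victim, p)]
```

The point of the object-level index is that its size grows with the number of cached objects, not packets, which is what relieves the SRAM pressure described in the abstract.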
Citations
Journal ArticleDOI
Zhuo Li, Xu Yaping, Beichuan Zhang, Liu Yan, Kaihua Liu
TL;DR: This survey gives the complete requirements and compares all the schemes proposed for NDN forwarding plane based on the data structure utilized, and discusses some issues, challenges, and directions in future research.
Abstract: Named Data Networking (NDN) is the most promising paradigm recently conceived for future Internet architectures, where communications are driven by content instead of host addresses. To realize this novel paradigm, three novel tables, namely the Content Store, Pending Interest Table, and Forwarding Information Base, are utilized in the NDN forwarding plane. Designing and evaluating a forwarding plane that is both fast enough and of high capacity is a major challenge within the overall NDN research area. Since NDN was proposed in 2010, many efforts have focused on this challenge and a rich literature has developed. Unfortunately, there has been no comprehensive sketch of the requirements of the NDN forwarding plane, nor a systematic study of the various schemes proposed. To address this gap, this survey gives the complete requirements and compares all the schemes proposed for the NDN forwarding plane based on the data structure they utilize. In addition, the survey discusses open issues, challenges, and directions for future research. Designing a novel data structure that meets all requirements of the forwarding plane and studying better Content Store structures are considered key directions, while examining the necessity of a unified index, integrating with other strands of NDN research, and implementing a unified benchmark are also required in this domain.
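The three tables named in the abstract cooperate on every Interest packet: the Content Store is checked first, then the Pending Interest Table, then the Forwarding Information Base. A minimal sketch of that lookup order, assuming exact-name matching (real NDN FIBs use longest-prefix matching on hierarchical names) and illustrative dict-based tables:

```python
# Minimal sketch of NDN Interest processing order: CS -> PIT -> FIB.
def process_interest(name, in_face, cs, pit, fib):
    if name in cs:                      # CS hit: reply with cached Data
        return ("data", cs[name])
    if name in pit:                     # PIT hit: aggregate the Interest
        pit[name].add(in_face)
        return ("aggregated", None)
    next_hop = fib.get(name)            # FIB: forward toward a producer
    if next_hop is not None:
        pit[name] = {in_face}           # record where to send Data back
        return ("forwarded", next_hop)
    return ("dropped", None)
```

The survey's point is that each of these three lookups stresses a different data structure, which is why forwarding-plane design is treated as a data-structure problem.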

75 citations


Cites background from "Object-Oriented Packet Caching for ..."

  • ...Packet Caching (OPC) [102] to solve this issue, which combines the object-level indexing with packet-level storage....


  • ...But some schemes can not fully support the cache replacement policy, such as OPC [102]....


Journal ArticleDOI
08 Feb 2019 - Sensors
TL;DR: By simulation, it is shown that PPCS, utilizing edge-computing for the joint optimization of caching decision and replacement policies, considerably outperforms relevant existing ICN caching strategies in terms of latency, cache redundancy, and content availability.
Abstract: This article proposes a novel chunk-based caching scheme known as the Progressive Popularity-Aware Caching Scheme (PPCS) to improve content availability and eliminate the cache redundancy issue of Information-Centric Networking (ICN). Particularly, the proposal considers both entire-object caching and partial-progressive caching for popular and non-popular content objects, respectively. In the case that the content is not popular enough, PPCS first caches initial chunks of the content at the edge node and then progressively continues caching subsequent chunks at upstream Content Nodes (CNs) along the delivery path over time, according to the content popularity and each CN position. Therefore, PPCS efficiently avoids wasting cache space for storing on-path content duplicates and improves cache diversity by allowing no more than one replica of a specified content to be cached. To enable a complete ICN caching solution for communication networks, we also propose an autonomous replacement policy to optimize the cache utilization by maximizing the utility of each CN from caching content items. By simulation, we show that PPCS, utilizing edge-computing for the joint optimization of caching decision and replacement policies, considerably outperforms relevant existing ICN caching strategies in terms of latency (number of hops), cache redundancy, and content availability (hit rate), especially when the CN's cache size is small.
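The placement idea described in the abstract (whole-object caching at the edge for popular content, progressive per-chunk placement along the delivery path otherwise) can be sketched roughly as follows. The assignment rule below, chunk i placed at the i-th node on the path, is an illustrative assumption, not the paper's exact popularity- and position-driven scheme.

```python
# Illustrative sketch of PPCS-style placement: at most one on-path replica
# of each chunk; non-popular content is spread progressively upstream.
def place_chunks(chunks, path, popular):
    """path[0] is the edge node; returns {node: [chunk indices]}."""
    placement = {node: [] for node in path}
    if popular:
        # Popular object: cache it whole at the edge node.
        placement[path[0]] = list(range(len(chunks)))
    else:
        # Non-popular object: chunk i goes to the i-th node along the path,
        # with overflow chunks accumulating at the last (upstream) node.
        for i in range(len(chunks)):
            node = path[min(i, len(path) - 1)]
            placement[node].append(i)
    return placement
```

Because each chunk lands on exactly one on-path node, no delivery path holds duplicate copies, which is the redundancy-elimination property the abstract claims.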

28 citations


Cites background from "Object-Oriented Packet Caching for ..."

  • ...proposed Object-Oriented Packet Caching (OPC) to study and improve the impact of packet-caching in ICN [14]....


Journal ArticleDOI
TL;DR: A hardware implementation of the pending interest table (PIT) for named data networking (NDN) is presented that employs an on-chip Bloom Filter and an off-chip linear-chained hash table in its design and incorporates a name ID table to store all distinct name IDs in the PIT.

17 citations

Proceedings ArticleDOI
26 Sep 2017
TL;DR: The paper designs a reference forwarding engine by selecting well-established high-speed techniques, analyzes a state-of-the-art prototype implementation to identify its performance bottleneck, and designs two prefetch-friendly packet processing techniques to hide DRAM access latency.
Abstract: The goal of the paper is to present what an ideal NDN forwarding engine on a commercial off-the-shelf (COTS) computer is supposed to be. The paper designs a reference forwarding engine by selecting well-established high-speed techniques and then analyzes a state-of-the-art prototype implementation to identify its performance bottleneck. The microarchitectural analysis at the level of CPU pipelines and instructions reveals that dynamic random access memory (DRAM) access latency is one of the bottlenecks for high-speed forwarding engines. Finally, the paper designs two prefetch-friendly packet processing techniques to hide DRAM access latency. A prototype based on these techniques achieves more than 40 million packets per second of packet forwarding on a COTS computer.

17 citations


Cites background from "Object-Oriented Packet Caching for ..."

  • ...Note that most of the existing approaches to accelerate the computing speed of per-packet caching, such as object-oriented caching [23] and cache admission algorithms [19], are applicable to a hash table-based CS....


Journal ArticleDOI
TL;DR: This paper proposes a novel split architecture to cope with the speed mismatch between high-speed packet forwarding and low-speed block I/O operation on POF switches, together with an efficient and scalable design.
Abstract: Research has proven that in-network caching is an effective way of eliminating redundant network traffic. For a larger cache that scales up to terabytes, a network element must utilize block storage devices. Nevertheless, accessing block devices in packet forwarding paths could be a major performance bottleneck because storage devices tend to be much slower than memory devices in terms of bandwidth and latency. Software-defined networking (SDN) has entered into all aspects of network architecture by separating the control and forwarding planes to make the network more programmable and application-aware. Protocol-oblivious forwarding (POF), an enhancement to the current OpenFlow-based SDN forwarding architecture, extends this programmability further. In this paper, we propose a novel split architecture to cope with the speed mismatch between high-speed packet forwarding and low-speed block I/O operation on POF switches. The issues raised by this split architecture, which can be summarized as packet dependency and protocol conversion, are explored first. We then focus on solving these two problems and propose an efficient and scalable design. Finally, we conduct extensive experiments to evaluate the split architecture along with the proposed approaches for packet dependency and protocol conversion.

8 citations


Cites background or methods from "Object-Oriented Packet Caching for ..."

  • ...For instance, authors have implemented a hierarchical packet cache design, i.e., a small but fast DRAM combined with multiple large but slow SSDs in [20] and [21]....


  • ...If the packet is stored in the DRAM, the corresponding data packet is prepared and an SSCP reply packet, denoted by SSCPreply, interest, is sent to switch end....


  • ...packets that are stored at slower DRAM [15], [17]....


  • ...Thomas et al. [15] proposed a two-layer ICN router that overcomes the SRAM bottleneck, by combining object-level indexing in the SRAM with packet-level storage in the DRAM....


  • ...If not, it will be cached on the first (DRAM) level caching....


References
Journal ArticleDOI
15 Oct 1999-Science
TL;DR: A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems.
Abstract: Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems.
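The two mechanisms named in the abstract, growth and preferential attachment, can be sketched in a few lines. This toy generator picks degree-weighted targets by sampling from a list that contains each vertex once per incident edge; the seed-graph choice is an assumption, and real experiments (such as the topologies used in the OPC paper) would typically use a library implementation like networkx's `barabasi_albert_graph`.

```python
# Toy Barabasi-Albert-style generator: networks grow one vertex at a time,
# and new vertices attach preferentially to already well-connected vertices.
import random

def barabasi_albert(n, m, seed=None):
    rng = random.Random(seed)
    # Assumed seed graph: a small clique on the first m vertices.
    edges = [(i, j) for i in range(m) for j in range(i)]
    # Each vertex appears once per incident edge, so uniform sampling from
    # this pool is sampling proportional to degree (preferential attachment).
    pool = [v for e in edges for v in e]
    for new in range(m, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(pool) if pool else rng.randrange(new))
        for t in targets:
            edges.append((new, t))
            pool.extend([new, t])
    return edges
```

Sampling from the degree-weighted pool is what produces the scale-free power-law degree distribution the abstract describes.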

33,771 citations


"Object-Oriented Packet Caching for ..." refers methods in this paper

  • ...We examined 10 scale-free topologies of 50 nodes, created via the Barabási-Albert algorithm [26], as in the experiments in [19]....


Journal ArticleDOI
TL;DR: A survey of the core functionalities of Information-Centric Networking (ICN) architectures to identify the key weaknesses of ICN proposals and to outline the main unresolved research challenges in this area of networking research.
Abstract: The current Internet architecture was founded upon a host-centric communication model, which was appropriate for coping with the needs of the early Internet users. Internet usage has evolved however, with most users mainly interested in accessing (vast amounts of) information, irrespective of its physical location. This paradigm shift in the usage model of the Internet, along with the pressing needs for, among others, better security and mobility support, has led researchers into considering a radical change to the Internet architecture. In this direction, we have witnessed many research efforts investigating Information-Centric Networking (ICN) as a foundation upon which the Future Internet can be built. Our main aims in this survey are: (a) to identify the core functionalities of ICN architectures, (b) to describe the key ICN proposals in a tutorial manner, highlighting the similarities and differences among them with respect to those core functionalities, and (c) to identify the key weaknesses of ICN proposals and to outline the main unresolved research challenges in this area of networking research.

1,408 citations


"Object-Oriented Packet Caching for ..." refers background in this paper

  • ...The distinguishing feature of ICN is the placement of information in the center of network operations, in contrast to endpoint-oriented IP networks [7]....


  • ...Most such weaknesses can potentially be addressed by Information-Centric Networking (ICN) [7]....


Proceedings ArticleDOI
17 Aug 2012
TL;DR: The results show reduction of up to 20% in server hits, and up to 10% in the number of hops required to hit cached contents, but, most importantly, reduction of cache-evictions by an order of magnitude in comparison to universal caching.
Abstract: In-network caching necessitates the transformation of centralised operations of traditional, overlay caching techniques to a decentralised and uncoordinated environment. Given that caching capacity in routers is relatively small in comparison to the amount of forwarded content, a key aspect is balanced distribution of content among the available caches. In this paper, we are concerned with decentralised, real-time distribution of content in router caches. Our goal is to reduce caching redundancy and, in turn, make more efficient utilisation of available cache resources along a delivery path. Our in-network caching scheme, called ProbCache, approximates the caching capability of a path and caches contents probabilistically in order to: i) leave caching space for other flows sharing (part of) the same path, and ii) fairly multiplex contents of different flows among caches of a shared path. We compare our algorithm against universal caching and against schemes proposed in the past for Web-Caching architectures, such as Leave Copy Down (LCD). Our results show reduction of up to 20% in server hits, and up to 10% in the number of hops required to hit cached contents, but, most importantly, reduction of cache-evictions by an order of magnitude in comparison to universal caching.
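The path-aware probabilistic idea can be sketched as: each node caches a passing packet with a probability that grows toward the user side of the delivery path, so copies are spread along the path instead of duplicated everywhere. The linear weighting below is a deliberate simplification for illustration, not ProbCache's actual TimesIn/CacheWeight formula.

```python
# Simplified path-position-weighted caching decision in the spirit of
# ProbCache: nodes nearer the requesting user cache with higher probability.
import random

def should_cache(hops_from_server, path_length, rng=random):
    # Assumed linear weighting: probability rises from near 0 at the server
    # side to 1.0 at the user-side edge of the path.
    p = hops_from_server / path_length
    return rng.random() < p
```

Because upstream nodes rarely cache, they keep space free for the many flows that share them, which is the fairness property the abstract argues for.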

615 citations


"Object-Oriented Packet Caching for ..." refers background in this paper

  • ...Content placement – cache selection policy (macro) • Where (in the network) to store a packet • Everywhere (universal) • Betweenness Centrality [7], Probabilistic caching [8], ....


Journal ArticleDOI
11 Aug 2006
TL;DR: This paper introduces a new representation for regular expressions, called the Delayed Input DFA (D2FA), which substantially reduces space equirements as compared to a DFA, and describes an efficient architecture that can perform deep packet inspection at multi-gigabit rates.
Abstract: There is a growing demand for network devices capable of examining the content of data packets in order to improve network security and provide application-specific services. Most high performance systems that perform deep packet inspection implement simple string matching algorithms to match packets against a large, but finite set of strings. However, there is growing interest in the use of regular expression-based pattern matching, since regular expressions offer superior expressive power and flexibility. Deterministic finite automata (DFA) representations are typically used to implement regular expressions. However, DFA representations of regular expression sets arising in network applications require large amounts of memory, limiting their practical application. In this paper, we introduce a new representation for regular expressions, called the Delayed Input DFA (D2FA), which substantially reduces space requirements as compared to a DFA. A D2FA is constructed by transforming a DFA via incrementally replacing several transitions of the automaton with a single default transition. Our approach dramatically reduces the number of distinct transitions between states. For a collection of regular expressions drawn from current commercial and academic systems, a D2FA representation reduces transitions by more than 95%. Given the substantially reduced space requirements, we describe an efficient architecture that can perform deep packet inspection at multi-gigabit rates. Our architecture uses multiple on-chip memories in such a way that each remains uniformly occupied and accessed over a short duration, thus effectively distributing the load and enabling high throughput. Our architecture can provide cost-effective packet content scanning at OC-192 rates with memory requirements that are consistent with current ASIC technology.
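The default-transition idea can be illustrated with a tiny lookup routine: a state stores only the transitions that differ from a chosen fallback state, plus one default edge to that fallback, and lookup follows default edges (consuming no input) until a labeled transition matches. This is a conceptual sketch of the representation, not the paper's D2FA construction algorithm; it assumes the chain of defaults always reaches a state that defines the symbol.

```python
# Toy D2FA-style lookup: shared transitions are factored out behind a
# single default edge, shrinking the transition table.
def d2fa_lookup(state, symbol, labeled, default):
    """labeled: {state: {symbol: next_state}}, default: {state: fallback}."""
    while symbol not in labeled.get(state, {}):
        state = default[state]        # follow default edges, consuming no input
    return labeled[state][symbol]
```

In the full DFA, state 1 below would need its own copy of the "b" transition; the D2FA stores it once on state 0 and reaches it via the default edge.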

553 citations

Proceedings ArticleDOI
27 Aug 2013
TL;DR: A proof-of-concept design of an incrementally deployable ICN architecture is presented and it is found that pervasive caching and nearest-replica routing are not fundamentally necessary and most of the performance benefits can be achieved with simpler caching architectures.
Abstract: Information-Centric Networking (ICN) has seen a significant resurgence in recent years. ICN promises benefits to users and service providers along several dimensions (e.g., performance, security, and mobility). These benefits, however, come at a non-trivial cost as many ICN proposals envision adding significant complexity to the network by having routers serve as content caches and support nearest-replica routing. This paper is driven by the simple question of whether this additional complexity is justified and if we can achieve these benefits in an incrementally deployable fashion. To this end, we use trace-driven simulations to analyze the quantitative benefits attributed to ICN (e.g., lower latency and congestion). Somewhat surprisingly, we find that pervasive caching and nearest-replica routing are not fundamentally necessary---most of the performance benefits can be achieved with simpler caching architectures. We also discuss how the qualitative benefits of ICN (e.g., security, mobility) can be achieved without any changes to the network. Building on these insights, we present a proof-of-concept design of an incrementally deployable ICN architecture.

506 citations


"Object-Oriented Packet Caching for ..." refers background or result in this paper

  • ...Furthermore, we assume 40 byte LRU entries and 1500 byte chunks, similarly to most previous work [9, 19, 20]....


  • ...Most other research simply assumes a Least Recently Used (LRU) replacement policy [16, 17, 20, 19, 20, 22] or novel policies for the proper distribution of the cached content along the path [15, 18, 23], without evaluating whether router-cache performance is limited by the size of its fast memory....


  • ...In addition, some authors advocate caching content only at a subset of network nodes that satisfy certain centrality requirements [19], while others argue that an “edge” caching deployment provides roughly the same gains with a universal caching architecture [20]....

