Journal ArticleDOI

Optimal Content Caching in Content-Centric Networks

TL;DR: This work proposes an optimization-based in-network caching policy, named opt-Cache, which makes more efficient use of available cache resources to reduce overall network utilization, bandwidth consumption, and latency.
Abstract: Content-Centric Networking (CCN) is a novel architecture that shifts host-centric communication to a content-centric infrastructure. In recent years, in-network caching in CCNs has received significant attention from the research community. To improve the cache hit ratio, most existing schemes store content at the maximum number of routers along the download path from the source. While this increases cache hits and reduces delay and server load, the unnecessary caching significantly increases network cost, bandwidth utilization, and storage consumption. To address these limitations, we propose an optimization-based in-network caching policy, named opt-Cache, which makes more efficient use of available cache resources in order to reduce overall network utilization and latency. Unlike existing schemes, which mostly focus on a single factor to improve cache performance, we optimize the caching process by simultaneously considering several factors, e.g., content popularity, bandwidth, and latency, under a given set of constraints, e.g., available cache space, content availability, and careful eviction of existing contents in the cache. Our scheme determines the optimized set of contents to be cached at each node towards the edge, based on content popularity and the content's distance from the source. Contents that are requested less frequently have their popularity decayed over time. The optimal placement of contents across the CCN routers reduces overall bandwidth consumption and latency. Compared with existing schemes, the proposed scheme shows better performance in terms of bandwidth consumption and latency while using fewer network resources.
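The placement idea in the abstract — popularity decayed over time, weighted by the content's distance from the source, under a cache-capacity constraint — can be sketched roughly as follows. The exponential decay model, the popularity-times-distance score, and the greedy selection are illustrative assumptions, not the paper's actual optimization formulation:

```python
import math

def popularity(requests, elapsed, decay=0.1):
    # hypothetical popularity model: request count decayed
    # exponentially with time since the requests were observed
    return requests * math.exp(-decay * elapsed)

def select_contents(catalog, cache_slots):
    # catalog: list of (name, requests, elapsed, hops_to_source)
    # score each content by decayed popularity times the distance
    # (hops) saved by caching it here, then greedily fill the cache
    scored = sorted(catalog,
                    key=lambda c: popularity(c[1], c[2]) * c[3],
                    reverse=True)
    return [c[0] for c in scored[:cache_slots]]

catalog = [("a", 100, 1.0, 3), ("b", 40, 0.5, 5), ("c", 80, 4.0, 2)]
print(select_contents(catalog, 2))  # ['a', 'b']
```

A real formulation would solve this jointly across routers with eviction constraints; the sketch only shows the per-node scoring intuition.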


Citations
Journal ArticleDOI
TL;DR: A comprehensive taxonomy of machine learning techniques for in-network caching in edge networks is formulated and a comparative analysis of the state-of-the-art literature is presented with respect to the parameters identified in the taxonomy.

71 citations

Journal ArticleDOI
TL;DR: In this analysis, the Compound Popular Content Caching Strategy (CPCCS) performed best at enhancing overall NDN-based caching performance, and it is suggested that CPCCS will also achieve enhanced performance in emerging environments such as the Internet of Things, Fog computing, Edge computing, 5G, and Software-Defined Networking (SDN).
Abstract: Data communication in the present Internet paradigm depends on fixed locations that disseminate the same data several times. As a result, a number of problems arise, of which location dependency is the most crucial for communication. Named Data Networking (NDN) is a new network architecture that revolutionizes the handling of the gigantic amounts of data generated from diverse locations. NDN offers an in-network cache, its most beneficial feature for reducing the difficulties of location-based Internet paradigms. Moreover, it mitigates network congestion and provides a shorter stretch path during the data downloading procedure. The current study presents a new comparative analysis of popularity-based cache management strategies for NDN to find the optimal caching scheme and enhance overall network performance. The content popularity-based caching strategies are comparatively and extensively studied in an NDN-based simulation environment in terms of the most significant metrics, such as hit ratio, content diversity ratio, content redundancy, and stretch ratio. In this analysis, the Compound Popular Content Caching Strategy (CPCCS) performed best at enhancing overall NDN-based caching performance. Therefore, it is suggested that CPCCS will also achieve enhanced performance in emerging environments such as the Internet of Things (IoT), Fog computing, Edge computing, 5G, and Software-Defined Networking (SDN).

35 citations

Journal ArticleDOI
TL;DR: A new content caching mechanism named the compound popular content caching strategy (CPCCS) is proposed for efficient content dissemination and its performance is measured in terms of cache hit ratio, content diversity, and stretch.
Abstract: The aim of named data networking (NDN) is to develop an efficient data dissemination approach by implementing a cache module within the network. Caching is one of the most prominent modules of NDN and significantly enhances the Internet architecture. The NDN cache can reduce the expected flood of global data traffic by providing cache storage at intermediate nodes for transmitted contents, making data broadcasting more efficient. It also reduces content delivery time by caching popular content close to consumers. In this study, a new content caching mechanism named the compound popular content caching strategy (CPCCS) is proposed for efficient content dissemination, and its performance is measured in terms of cache hit ratio, content diversity, and stretch. The CPCCS is extensively and comparatively studied against other NDN-based caching strategies, such as max-gain in-network caching (MAGIC), the WAVE popularity-based caching strategy, hop-based probabilistic caching (HPC), LeafPopDown, most popular cache (MPC), cache capacity aware caching (CCAC), and ProbCache, through simulations. The results show that the CPCCS performs better in terms of cache hit ratio, content diversity ratio, and stretch ratio than all other strategies.

32 citations

Journal ArticleDOI
TL;DR: A novel content caching scheme called DPWCS (Dynamic Popularity Window-based Caching Scheme) has been proposed that efficiently places contents in the network cache and is suitable for Industry 4.0 and futuristic Internet architectures with 5G.

30 citations

Journal ArticleDOI
TL;DR: The results show that the proposed EHCP strategy delivers more heterogeneous contents, improving the content diversity ratio along with a higher cache-hit ratio compared to existing state-of-the-art strategies.
Abstract: In-network caching in Named Data Networking (NDN) enables effective data dissemination throughout the globe without requiring an end-to-end communication infrastructure. The goal of this study is to develop an efficient content placement strategy for the NDN caching module to enhance network performance. In this work, the problems in NDN-based caching strategies are formulated as optimizations, and to solve them a new caching strategy named Efficient Hybrid Content Placement (EHCP) is designed. The aim of the EHCP strategy is to reduce multiple replications of homogeneous contents at numerous locations during data dissemination and to increase the number of diverse contents along the data delivery path. In addition, the effect on cache-hit ratio, hop-decrement, and content redundancy is shown by comparing the proposed caching strategy with other state-of-the-art caching strategies in a simulation environment. The results show that the proposed strategy delivers more heterogeneous contents, improving the content diversity ratio along with a higher cache-hit ratio. Moreover, across different cache sizes, EHCP performs better in terms of content redundancy and hop-decrement compared to existing state-of-the-art strategies.

22 citations


Cites background from "Optimal Content Caching in Content-..."

  • ...The reason is that the LPD and CCAC strategies create similar content replications along the data routing path....

    [...]

  • ...It is clear from these results that the EHCP strategy performed better than LPD, CCAC, and ProbCache with respect to all selected parameters across all cache sizes (1 GB to 10 GB)....

    [...]

  • ...Leaf Popular Down (LPD) [29] is a popularity-based content placement strategy, which caches the popular content at the downstream node and edge node....

    [...]

  • ...LPD performs better to some extent as it caches content at a downstream node as a first caching operation that increases the chance to accommodate more contents along the data delivery path....

    [...]

  • ...As compared to ProbCache, CCAC, and LPD, EHCP performs better in terms of redundant caching operations because of its heterogeneous nature, executing limited caching operations along the data routing path....

    [...]

References
Journal ArticleDOI
TL;DR: Content-Centric Networking (CCN) is presented which uses content chunks as a primitive---decoupling location from identity, security and access, and retrieving chunks of content by name, and simultaneously achieves scalability, security, and performance.
Abstract: Current network use is dominated by content distribution and retrieval yet current networking protocols are designed for conversations between hosts. Accessing content and services requires mapping from the what that users care about to the network's where. We present Content-Centric Networking (CCN) which uses content chunks as a primitive---decoupling location from identity, security and access, and retrieving chunks of content by name. Using new approaches to routing named content, derived from IP, CCN simultaneously achieves scalability, security, and performance. We describe our implementation of the architecture's basic features and demonstrate its performance and resilience with secure file downloads and VoIP calls.
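The central CCN mechanism above — retrieving chunks of content by name rather than by host location — can be illustrated with a toy content store. This is a deliberately simplified sketch: real CCN/NDN names are hierarchical and matched with longest-prefix rules, and routers also maintain PIT and FIB tables, none of which is modeled here:

```python
class ContentStore:
    """Toy CCN router cache: Interests are satisfied by chunk name,
    independent of which host originally published the data."""

    def __init__(self):
        self.store = {}  # chunk name -> chunk payload

    def insert(self, name, chunk):
        # cache a passing data chunk under its name
        self.store[name] = chunk

    def lookup(self, interest_name):
        # exact-match lookup (real CCN uses hierarchical,
        # longest-prefix name matching); None means a cache miss,
        # so the Interest would be forwarded upstream
        return self.store.get(interest_name)

cs = ContentStore()
cs.insert("/video/intro/seg1", b"...")
print(cs.lookup("/video/intro/seg1") is not None)  # True: served from cache
print(cs.lookup("/video/intro/seg2") is not None)  # False: forwarded upstream
```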

3,122 citations


"Optimal Content Caching in Content-..." refers background in this paper

  • ...All these issues affect the performance of in-network caching [7, 8]....

    [...]

Proceedings ArticleDOI
17 Aug 2012
TL;DR: The results show reduction of up to 20% in server hits, and up to 10% in the number of hops required to hit cached contents, but, most importantly, reduction of cache-evictions by an order of magnitude in comparison to universal caching.
Abstract: In-network caching necessitates the transformation of centralised operations of traditional, overlay caching techniques to a decentralised and uncoordinated environment. Given that caching capacity in routers is relatively small in comparison to the amount of forwarded content, a key aspect is balanced distribution of content among the available caches. In this paper, we are concerned with decentralised, real-time distribution of content in router caches. Our goal is to reduce caching redundancy and in turn, make more efficient utilisation of available cache resources along a delivery path.Our in-network caching scheme, called ProbCache, approximates the caching capability of a path and caches contents probabilistically in order to: i) leave caching space for other flows sharing (part of) the same path, and ii) fairly multiplex contents of different flows among caches of a shared path.We compare our algorithm against universal caching and against schemes proposed in the past for Web-Caching architectures, such as Leave Copy Down (LCD). Our results show reduction of up to 20% in server hits, and up to 10% in the number of hops required to hit cached contents, but, most importantly, reduction of cache-evictions by an order of magnitude in comparison to universal caching.
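The core of ProbCache is a per-node caching probability that grows as content approaches the consumer and is scaled by the path's approximate caching capacity. The helper below is a simplified stand-in for the paper's TimesIn × CacheWeight product, which in the original also accounts for actual router cache sizes and a target replication window:

```python
def cache_probability(hops_travelled, path_length, capacity_factor=1.0):
    """Simplified ProbCache-style decision probability.

    hops_travelled: hops the data chunk has covered from the source
    path_length:    total hops on the source-to-consumer path
    capacity_factor: assumed scaling for the path's caching capacity
    """
    # probability rises toward the consumer, leaving upstream cache
    # space free for other flows sharing (part of) the same path
    return min(1.0, capacity_factor * hops_travelled / path_length)

# a chunk 1 hop from the source on a 4-hop path is cached rarely;
# at the last hop it is cached (almost) surely
print(cache_probability(1, 4))  # 0.25
print(cache_probability(4, 4))  # 1.0
```

Each router would draw a uniform random number and cache the chunk when the draw falls below this probability.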

615 citations

Posted Content
TL;DR: In this article, the authors proposed a coded caching scheme that exploits both local and global caching gains, leading to a multiplicative improvement in the peak rate compared to previously known schemes, in particular the improvement can be on the order of the number of users in the network.
Abstract: Caching is a technique to reduce peak traffic rates by prefetching popular content into memories at the end users. Conventionally, these memories are used to deliver requested content in part from a locally cached copy rather than through the network. The gain offered by this approach, which we term local caching gain, depends on the local cache size (i.e, the memory available at each individual user). In this paper, we introduce and exploit a second, global, caching gain not utilized by conventional caching schemes. This gain depends on the aggregate global cache size (i.e., the cumulative memory available at all users), even though there is no cooperation among the users. To evaluate and isolate these two gains, we introduce an information-theoretic formulation of the caching problem focusing on its basic structure. For this setting, we propose a novel coded caching scheme that exploits both local and global caching gains, leading to a multiplicative improvement in the peak rate compared to previously known schemes. In particular, the improvement can be on the order of the number of users in the network. Moreover, we argue that the performance of the proposed scheme is within a constant factor of the information-theoretic optimum for all values of the problem parameters.
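The local versus global caching gains described above can be made concrete with the achievable peak rate from this line of work, for K users, N files, and per-user cache size M (in files, with file size normalized to 1). The formulas below restate the abstract's claim as a sketch for intuition:

```python
def uncoded_rate(K, M, N):
    # conventional caching: only the local gain applies, so each of
    # the K users still fetches the uncached (1 - M/N) fraction
    return K * (1 - M / N)

def coded_rate(K, M, N):
    # coded caching adds a global gain factor 1 / (1 + K*M/N),
    # which depends on the aggregate cache size K*M across all users
    return K * (1 - M / N) / (1 + K * M / N)

# 10 users, 10 files, each cache holding 1 file:
print(uncoded_rate(10, 1, 10))  # 9.0 file transmissions at peak
print(coded_rate(10, 1, 10))    # 4.5 -- halved by the global gain
```

As K·M/N grows, the multiplicative improvement over uncoded caching approaches the order of the number of users, matching the abstract's claim.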

581 citations

Book ChapterDOI
21 May 2012
TL;DR: A centrality-based caching algorithm is proposed by exploiting the concept of (ego network) betweenness centrality to improve the caching gain and eliminate the uncertainty in the performance of the simplistic random caching strategy.
Abstract: Ubiquitous in-network caching is one of the key aspects of information-centric networking (ICN) which has recently received widespread research interest. In one of the key relevant proposals known as Networking Named Content (NNC), the premise is that leveraging in-network caching to store content in every node it traverses along the delivery path can enhance content delivery. We question such indiscriminate universal caching strategy and investigate whether caching less can actually achieve more. Specifically, we investigate if caching only in a subset of node(s) along the content delivery path can achieve better performance in terms of cache and server hit rates. In this paper, we first study the behavior of NNC's ubiquitous caching and observe that even naive random caching at one intermediate node within the delivery path can achieve similar and, under certain conditions, even better caching gain. We propose a centrality-based caching algorithm by exploiting the concept of (ego network) betweenness centrality to improve the caching gain and eliminate the uncertainty in the performance of the simplistic random caching strategy. Our results suggest that our solution can consistently achieve better gain across both synthetic and real network topologies that have different structural properties.
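At decision time, the "cache less for more" idea above reduces to caching only at the on-path node with the highest (ego-network) betweenness centrality. A minimal sketch, assuming the centrality values have been precomputed offline (node names and values here are hypothetical):

```python
def caching_node(delivery_path, betweenness):
    # Betw-style strategy (sketch): of all routers the content
    # traverses, cache only at the one that sits on the most
    # shortest paths, i.e. the highest-centrality node
    return max(delivery_path, key=lambda node: betweenness[node])

# toy path from server "s" to consumer "c" with illustrative centralities
betweenness = {"s": 0.0, "r1": 0.6, "r2": 0.9, "r3": 0.3, "c": 0.0}
print(caching_node(["s", "r1", "r2", "r3", "c"], betweenness))  # r2
```

The single cached copy at the most central router is then likely to serve requests arriving from many other paths, which is where the gain over universal caching comes from.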

360 citations


"Optimal Content Caching in Content-..." refers background in this paper

  • ...Works proposed by [20, 21] investigate that caching only at a subset of node(s) along the content delivery path can improve in-network caching performance in terms of cache and server hit....

    [...]

Journal ArticleDOI
TL;DR: An approximate analytic model is constructed for the case of LCD interconnection of LRU caches and used to gain better insight into why the LCD interconnection yields improved performance.

334 citations
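For context, Leave Copy Down (LCD) replicates content one level closer to the client on each hit, rather than copying it along the whole path; popular items therefore migrate toward the edge over repeated requests. A minimal sketch over a chain of caches (eviction and the LRU behavior the cited paper analyzes are omitted):

```python
def lcd_on_hit(path_caches, hit_level, content):
    """Leave Copy Down on a cache hit.

    path_caches[0] is nearest the server; the last entry is nearest
    the client. On a hit at hit_level, replicate the content only at
    the next cache toward the client (no replication elsewhere).
    """
    if hit_level + 1 < len(path_caches):
        path_caches[hit_level + 1].add(content)

caches = [{"x"}, set(), set()]  # "x" initially cached server-side only
lcd_on_hit(caches, 0, "x")      # first hit copies "x" one level down
print(caches)                    # [{'x'}, {'x'}, set()]
```

A second hit at the middle cache would then push "x" to the edge cache, which is the gradual filtering effect the analytic model captures.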