The paper shows that the multicast-aware caching problem is NP-hard and develops solutions with performance guarantees using randomized-rounding techniques, demonstrating that in the presence of massive demand for delay-tolerant content, combining caching and multicast can indeed reduce energy costs.
Abstract:
The landscape toward 5G wireless communication is currently unclear, and, despite the efforts of academia and industry in evolving traditional cellular networks, the enabling technology for 5G is still obscure. This paper puts forward a network paradigm toward next-generation cellular networks, aiming to satisfy the explosive demand for mobile data while minimizing energy expenditures. The paradigm builds on two principles: caching and multicast. On the one hand, caching policies disperse popular content files at the wireless edge, e.g., pico-cells and femto-cells, hence shortening the distance between content and requester. On the other hand, owing to the broadcast nature of the wireless medium, requests for identical files occurring at nearby times are aggregated and served through a common multicast stream. To better exploit the available cache space, caching policies are optimized based on multicast transmissions. We show that the multicast-aware caching problem is NP-hard and develop solutions with performance guarantees using randomized-rounding techniques. Trace-driven numerical results show that in the presence of massive demand for delay-tolerant content, combining caching and multicast can indeed reduce energy costs. The gains over existing caching schemes are 19% when users tolerate a delay of three minutes, increasing further with the steepness of the content access pattern.
TL;DR: Caching has been studied for more than 40 years and has recently received renewed attention from industry and academia; the article's goal is to convince the reader that content caching is an exciting research topic for future communication systems and networks.
TL;DR: A systematic survey of the state-of-the-art caching techniques recently developed for cellular networks, including macro-cellular networks, heterogeneous networks, device-to-device networks, cloud-radio access networks, and fog-radio access networks.
TL;DR: In this article, the authors proposed a context-aware proactive caching algorithm, which learns context-specific content popularity online by regularly observing context information of connected users, updating the cache content and observing cache hits subsequently.
TL;DR: In this article, the authors proposed a proactive caching scheme for UAV-enabled content-centric communication systems, where a UAV is dispatched to serve a group of ground nodes (GNs) with random and asynchronous requests for files drawn from a given set.
TL;DR: This paper designs D2D caching strategies using multi-agent reinforcement learning, applies Q-learning to learn how to coordinate the caching decisions, and proposes a modified combinatorial upper confidence bound algorithm to reduce the action space for both independent learning (IL) and joint-action learning (JAL).
TL;DR: A quarterly column providing a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979.
TL;DR: This paper proposes a novel coded caching scheme that exploits both local and global caching gains, leading to a multiplicative improvement in the peak rate compared with previously known schemes, and argues that the performance of the proposed scheme is within a constant factor of the information-theoretic optimum for all values of the problem parameters.
TL;DR: Discusses the need for an alternative strategy in which low-power nodes are overlaid within a macro network, creating what is referred to as a heterogeneous network, and provides a high-level overview of the 3GPP LTE air interface, network nodes, and spectrum allocation options, along with the enabling mechanisms.
Q1. How much data traffic is expected to grow in the next years?
The authors are witnessing an unprecedented worldwide growth of mobile data traffic that is expected to continue at an annual rate of 45% in the coming years, reaching 30.5 exabytes per month by 2020 [2].
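The projection above is simple compound annual growth. A minimal sketch, assuming a hypothetical base of 6.9 exabytes per month (a value chosen for illustration, not taken from the paper) four years before the cited endpoint:

```python
def project_traffic(base_eb: float, annual_growth: float, years: int) -> float:
    """Compound annual growth: traffic after `years` years = base * (1 + g)^years."""
    return base_eb * (1 + annual_growth) ** years

# With a hypothetical base of 6.9 EB/month and 45% annual growth,
# four years of compounding land near the cited 30.5 EB/month figure.
projected = project_traffic(6.9, 0.45, 4)
```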
Q2. How does the paper describe the performance of a multicast-aware caching algorithm?
Using randomized rounding techniques, the authors develop a multicast-aware caching algorithm that achieves performance guarantees under the assumption that the capacity constraints can be violated in a bounded way.
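The rounding idea can be sketched as follows. This is a simplified, hedged illustration: the paper's algorithm jointly rounds caching and multicast variables, whereas here each fractional placement variable is rounded independently, and the names `x_frac` and `mu` are assumptions for illustration. Scaling the rounding probability by 1/(2µ) trades solution cost for a bounded violation of the cache capacities.

```python
import random

def randomized_round(x_frac: dict, mu: float = 16.0, seed: int = 0) -> dict:
    """Round a fractional caching solution to an integral one.

    Simplified sketch of randomized rounding: each placement variable
    x_frac[(n, i)] in [0, 1] (cache n, file i) is set to 1 independently
    with probability x_frac[(n, i)] / (2 * mu).  The 2*mu scaling keeps
    the expected capacity violation bounded, at the price of a larger
    approximation factor on the cost.
    """
    rng = random.Random(seed)
    return {
        key: 1 if rng.random() < frac / (2 * mu) else 0
        for key, frac in x_frac.items()
    }

# Toy fractional solution: two caches (0, 1) and three files (0, 1, 2).
x_frac = {(n, i): 0.5 for n in range(2) for i in range(3)}
x_int = randomized_round(x_frac, mu=16.0)
```

Each rounded variable is 0 or 1, and the expected occupancy of each cache is the fractional occupancy scaled down by 2µ.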
Q3. What is the scenario for leveraging storage for improving network performance?
The idea of leveraging storage for improving network performance is gaining increasing interest with applications in content distribution [25], [26], IPTV [27], social [28] and heterogeneous cellular networks [17]-[20], [41], [42].
Q4. What is the effect of coordination on the energy cost of the backhaul link?
The authors find that coordination can indeed reduce energy cost, but the gains are low (≤ 1% and ≤ 5% for the wired and wireless cases, respectively).
Q5. What is the power consumption of a wired backhaul link?
The power consumption of a wired backhaul link includes the power consumed at the aggregation switches, $(1-\alpha)\,\frac{A_{gswitch}}{A_{gmax}}\,P_{max}$ [34].
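This term can be computed directly. A minimal sketch with hypothetical parameter values (none of the numbers below come from the paper), reading $\alpha$ as the load-independent share of the switch power:

```python
def wired_backhaul_power(alpha: float, ag_switch: float, ag_max: float,
                         p_max: float) -> float:
    """Aggregation-switch term of the wired backhaul power model:
    (1 - alpha) * (Ag_switch / Ag_max) * P_max.
    The interpretation of alpha as a load-independent share is an
    assumption made for this illustration."""
    return (1 - alpha) * (ag_switch / ag_max) * p_max

# Hypothetical values: alpha = 0.5, 10 of 100 aggregation units used,
# P_max = 300 W  ->  0.5 * 0.1 * 300 = 15.0 W.
p = wired_backhaul_power(alpha=0.5, ag_switch=10.0, ag_max=100.0, p_max=300.0)
```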
Q6. How much data is expected to be placed in an SBS cache?
The probability that $x_{ni}$ takes the value 1 is at most
$$\frac{1 - \min_{r \in R:\, n \in r} y^{\dagger}_{ri}}{2\mu} \overset{(9)}{\leq} \frac{x^{\dagger}_{ni}}{2\mu} \qquad (17)$$
Summing over all the files yields that the expected amount of data placed in an SBS cache $n \in N$ is at most
$$\sum_{i \in I} \frac{x^{\dagger}_{ni}}{2\mu} \overset{(12)}{\leq} \frac{S_n}{2\mu} \qquad (18)$$
For example, picking the value $\mu = 16$ results in a solution whose cost is at most three times larger than the optimal, while violating the cache capacities by a factor of less than three.
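The expected-load bound is easy to check numerically. A minimal Monte Carlo sketch, assuming the simplified independent rounding where each file $i$ is placed with probability $x^{\dagger}_{ni}/(2\mu)$ (the fractional values below are hypothetical):

```python
import random

def expected_cache_load(x_frac_n: list, mu: float) -> float:
    """Bound (18): the expected data placed in one cache is at most
    sum_i x_dagger[n][i] / (2 * mu), here with unit-size files."""
    return sum(x_frac_n) / (2 * mu)

def simulate_load(x_frac_n: list, mu: float, trials: int = 20000,
                  seed: int = 0) -> float:
    """Empirical mean cache occupancy under independent rounding,
    placing file i with probability x_frac_n[i] / (2 * mu)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += sum(1 for f in x_frac_n if rng.random() < f / (2 * mu))
    return total / trials

x_frac_n = [0.9, 0.6, 0.3, 0.8]   # hypothetical fractional placements
mu = 16.0
bound = expected_cache_load(x_frac_n, mu)   # (0.9+0.6+0.3+0.8)/32 = 0.08125
emp = simulate_load(x_frac_n, mu)           # empirical mean, close to the bound
```

Under this independent rounding the empirical mean matches the analytical expression, illustrating why the capacity violation stays bounded.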