Exploiting Caching and Multicast for 5G Wireless
Networks
Konstantinos Poularakis, George Iosifidis, Vasilis Sourlas, Member, IEEE, and Leandros Tassiulas, Fellow, IEEE
Abstract—The landscape towards 5G wireless communication is currently unclear, and, despite the efforts of academia and industry in evolving traditional cellular networks, the enabling technology for 5G is still obscure. This paper puts forward a network paradigm towards next-generation cellular networks, targeting to satisfy the explosive demand for mobile data while minimizing energy expenditures. The paradigm builds on two principles, namely caching and multicast. On the one hand, caching policies disperse popular content files at the wireless edge, e.g., pico-cells and femto-cells, hence shortening the distance between content and requester. On the other hand, due to the broadcast nature of the wireless medium, requests for identical files occurring at nearby times are aggregated and served through a common multicast stream. To better exploit the available cache space, caching policies are optimized based on multicast transmissions. We show that the multicast-aware caching problem is NP-Hard and develop solutions with performance guarantees using randomized-rounding techniques. Trace-driven numerical results show that in the presence of massive demand for delay-tolerant content, combining caching and multicast can indeed reduce energy costs. The gains over existing caching schemes are 19% when users tolerate a delay of three minutes, increasing further with the steepness of the content access pattern.
Index Terms—Content Caching, Multicast Delivery, Network
Optimization, 5G Wireless Networks.
I. INTRODUCTION
A. Motivation
We are witnessing an unprecedented worldwide growth of mobile data traffic that is expected to continue at an annual rate of 45% over the next years, reaching 30.5 exabytes per month by 2020 [2]. To handle this “data tsunami”, the emerging fifth-generation (5G) systems need to improve the network performance in terms of energy consumption, throughput and user-experienced delay, and at the same time make better use of network resources such as wireless bandwidth and backhaul link capacity. Two candidate solutions that have been investigated are caching and multicast.
On the first issue, there is increasing interest in in-network caching architectures where operators cache popular content files at the Evolved Packet Core (EPC) or at the Radio Access Network (RAN), e.g., in dedicated boxes or at the cellular base stations. The common denominator is that they distribute storage resources near the end-user (rather than in remote data centers). In the context of heterogeneous cellular networks (HCNs) [3], caches can be installed at small-cell base stations (SBSs), e.g., pico-cells and femto-cells, targeting to offload traffic from the collocated macro-cell base station (MBS) [4]. Measurement studies have revealed up to 66% reduction in network traffic by using caching in 3G [5] and 4G [6] networks. Meanwhile, the wireless industry has begun to commercialize systems that support caching, with examples including Altobridge’s “Data at the Edge” solution [7], Nokia Siemens Networks’ Liquid Applications [8] and Saguna Networks’ Open RAN platform [9].

Part of this work appeared in the proceedings of the IEEE Wireless Communications and Networking Conference (WCNC), pp. 2300-2305, April 2014 [1]. This work was supported partly by the EC through the FP7 project FLEX (no. 612050), the Marie Curie project INTENT (grant no. 628360) and by the National Science Foundation Graduate Research Fellowship Program (grant no. CNS-1527090).
K. Poularakis is with the Dept. of Electrical and Computer Engineering, University of Thessaly, Greece (e-mail: kopoular@uth.gr). G. Iosifidis and L. Tassiulas are with the Electrical Engineering Department & Institute for Network Science, Yale University, USA (e-mail: {georgios.iosifidis, leandros.tassiulas}@yale.edu). V. Sourlas is with the Electronic & Electrical Engineering Department, University College London, UK (e-mail: v.sourlas@ucl.ac.uk).
On the second issue, many operators take advantage of mul-
ticast to efficiently utilize the available bandwidth of their net-
works in delivering the same content to multiple receivers [10].
For example, multicast is often used for delivering spon-
sored content, e.g., mobile advertisements in certain locations,
downloading news, stock market reports, weather and sports
updates [11]. Meanwhile, multicast has been incorporated in the 3GPP specifications, where the proposed technology for LTE is called evolved Multimedia Broadcast Multicast Service (eMBMS) [12]. Commercial examples of eMBMS are the Ericsson and Qualcomm LTE Broadcast solutions [13], [14]. This technology can be used across multiple cells, with synchronized transmissions across them on a common carrier frequency. Hence, multicast consumes a subset of the
radio resources needed by a unicast service. The remaining
resources can be used to support transmissions towards other
users, thus enhancing network capacity.
Current proposals from academia and industry consider caching and multicast independently of each other and for different purposes. On one hand, caching is used to shift
traffic from peak to off-peak hours by exploiting the periodic
pattern of traffic generation. This is realized by filling the
caches with content during off-peak hours (e.g., nighttime),
and serving requests for the stored content by the caches
during peak time (e.g., daytime). On the other hand, multicast
is used to reduce energy and bandwidth consumption by
serving concurrent user requests for the same content via a
single point-to-multipoint transmission instead of many point-
to-point (unicast) transmissions.
Intuitively, caching should be effective when there is enough
content reuse; i.e., many recurring requests for a few content
files appear over time. Multicast should be effective when there
is significant concurrency in accessing information across

users; i.e., many users concurrently generate requests for the
same content file. Such scenarios are more common during crowded events with a large number of co-located people who are interested in the same content, e.g., during sporting games, concerts and public demonstrations, often with tens of thousands of attendees [15], [16]. In next-generation 5G systems, where the demand for mobile data is often massive and a variety of new services such as social networking platforms and news services employ the one-to-many communication paradigm, e.g., updates on Twitter, Facebook, etc., multicast is expected to be applied more often.
Clearly, it is of paramount importance to design caching and
multicast mechanisms for servicing the mobile user requests
with the minimum possible energy expenditures. For a given
anticipated content demand, the caching problem asks for
determining in which caches to store each content file. This
becomes more challenging in HCNs where users are covered
by multiple base stations and hence content can be delivered
to requesters through multiple network paths [17]-[20]. Also,
the caching problem differs when multicast is employed to
serve concurrent requests for the same content file. Compared
to unicast communication, multicast incurs less traffic as the
requested file is transmitted to users only once, rather than with
many point-to-point transmissions. Hence, the caching prob-
lem needs to be revisited to effectively tackle the following
questions: Can caching and multicast be combined to reduce
energy costs of an operator? If so, under which conditions, and where do the gains come from?
B. Methodology and Contributions
In order to answer the above questions, we consider a HCN
model that supports caching and multicast for the service of
the mobile users. Requests for the same content file generated
during a short-time window are aggregated and served through
a single multicast transmission when the corresponding window expires (batching multicast [21]). To ensure that the user-experienced delay remains limited, the duration of this window should be as small as possible. For example, users may tolerate only a very small start-up delay for video streaming applications, whereas a larger delay may be acceptable for downloading news, stock market reports, weather and sports updates. The multicast stream can be delivered either by a SBS that is in communication range of the requesters, provided that the respective file is available in its cache, or by the MBS, which has access to the entire file library through a backhaul link. Clearly, a MBS multicast transmission can satisfy requests generated within the coverage areas of different SBSs that have not cached the requested file. However, it typically induces a higher energy cost than a SBS transmission, since the distance to the receiver is larger and the MBS also needs to fetch the file via its backhaul link.
First, we demonstrate through simple examples how mul-
ticast affects the efficiency of caching policies. Then, we
introduce a general optimization problem (which we name
MACP) for devising the multicast-aware caching policy that
minimizes the overall energy cost. Our model explicitly takes
into consideration: (i) the heterogeneity of the base stations
which may have different cache sizes and transmission cost
parameters (e.g., due to their different energy consumption
profile [22]), and (ii) the variation of request patterns of the
users which may ask for different content files with different
intensity. We formally prove the intractability of the MACP problem via a reduction from the set packing problem, which is NP-Hard [23]. Following that, we develop an algorithm with
performance guarantees under the assumption that the capacity
of the caches can be expanded by a bounded factor. This
algorithm applies linear relaxation and randomized rounding
techniques. Then, we describe a simple heuristic solution
that can achieve significant performance gains over existing
caching schemes.
Using traffic information from a crowded event with over
fifty thousand attendees [15], we investigate numerically the
impact of various system parameters, such as delay tolerance
of user application, SBS cache sizes, base station transmission
costs and demand steepness. We find that the superiority of
multicast-aware caching over traditional caching schemes is
highly pronounced when: (i) the user demand for content is
high and (ii) the user requests for content are delay-tolerant.
The gains are 19% when users tolerate a delay of three minutes, increasing further with the steepness of the content access pattern.
Our main technical contributions are as follows:
Multicast-aware caching problem (MACP). We propose a
novel caching paradigm and an optimization framework
building on the combination of caching and multicast
techniques in HCNs. This is important, as content de-
livery via multicast is part of 3GPP standards and gains
increasing interest.
Complexity Analysis. We prove the intractability of the MACP problem via a reduction from the set packing problem [23]. That is, we show that MACP is NP-Hard even to approximate within a factor of O(√N), where N is
the number of SBSs in a macro-cell. This result reveals
how the consideration of multicast further perplexes the
caching problem.
Solution algorithms. Using randomized rounding tech-
niques, we develop a multicast-aware caching algorithm
that achieves performance guarantees under the assump-
tion that the capacity constraints can be violated in a
bounded way. Also, we describe a simple-to-implement
heuristic algorithm that provides significant performance
gains compared to the existing caching schemes.
Performance Evaluation. Using system parameters derived from real traffic observations in a crowded event, we show the cases where next-generation HCN systems should optimize caching jointly with multicast delivery. The proposed algorithms yield significant energy
savings over existing caching schemes, which are more
pronounced when the demand is massive and the user
requests can be delayed by three minutes or more.
The rest of the paper is organized as follows: Section II
describes the system model and defines the MACP problem
formally. In Section III, we show the intractability of the
problem and present algorithms with performance guarantees
and heuristics. Section IV presents our trace-driven numerical

results, while Section V reviews our contribution compared to
the related works. We conclude our work in Section VI.
II. SYSTEM MODEL AND PROBLEM FORMULATION
In this section we introduce the system model, we provide
a motivating example that highlights how multicast affects the
efficiency of caching policies and, finally, we formally define
the multicast-aware caching optimization problem.
A. System Model
We study the downlink operation of a heterogeneous cellular
network (HCN) like the one depicted in Fig. 1. A set N of
N small-cell base stations (SBSs), e.g., pico-cells and femto-
cells, are deployed within a macro-cell coexisting with the
macro-cell base station (MBS). The MBS can associate to any
user in the macro-cell, while SBSs can associate only to users
lying in their coverage areas. Each SBS n is equipped with a cache of size S_n ≥ 0 bytes which can be filled in with content files fetched from the core network through a backhaul
link. Since the SBS backhaul links are usually of low-capacity,
e.g., often facilitated by the consumers’ home networks such
as Digital Subscriber Line (DSL) [24], they cannot be used
to download content on demand to serve users. Instead, they
are only used to periodically refresh the content stored in the
caches [17]-[20]. In contrast, the backhaul link of the MBS
is of sufficient capacity to download the content requested by
users. Therefore, a user can be served either by the MBS or
by a covering SBS provided that the latter has cached the
requested content file.
The user demand for a set of popular files and within
a certain time period is assumed to be known in advance,
as in [17]-[20], [25]-[28] which is possible using learning
techniques [29], [30]. Let I denote this collection of files, with I = |I|. For notational convenience, we consider all files
to have the same size normalized to 1. This assumption can
be easily removed as, in real systems, files can be divided into
blocks of the same length [17], [27]. The SBS coverage areas
can be overlapping in general, but each user can associate to
only one SBS according to a best-server criterion (e.g., highest
SNR rule). We denote with λ
ni
0 (requests per time unit)
the average demand for file i generated by the users associating
to SBS n. Also, λ
0i
0 denotes the average demand for file
i generated by users who are not in the coverage area of any
of the SBSs
1
.
The operator employs multicast (such as eMBMS) for trans-
mission of the same content to multiple receivers. In this case,
user requests within a short-time window are aggregated and
served through a single multicast stream when the correspond-
ing window expires. We denote with d (time units) the time
duration of this window, also called multicast period. Clearly,
it is important to identify which SBSs receive file requests
within the multicast period. To this end, we denote with p_ni the probability that at least one request for file i is generated by users associating to SBS n (area n)² during a multicast period. Similarly, p_0i indicates the respective probability for the users that are not in the coverage area of any of the SBSs (area n_0). For example, if the number of requests for file i associated to SBS n follows the Poisson probability distribution with rate parameter λ_ni, it becomes:

p_ni = 1 − e^{−λ_ni · d}.   (1)

¹Notice that the current practice of operators is to deploy SBSs to certain areas with high traffic. Hence, other less congested areas may be covered only by the MBS.

Fig. 1. Graphical illustration of the discussed model. The circles represent the coverage areas of the MBS and the SBSs. To ease presentation, the backhaul links of the SBSs are not depicted.
We then define the collection of all subsets of areas excluding the empty set as follows:

R = {r : r ⊆ N ∪ {n_0}, r ≠ ∅}.   (2)
We also define with q_ri the probability that at least one request for file i ∈ I is generated within each one of the areas in r ∈ R (and in none of the remaining areas) during a multicast period. For example, if requests are generated independently among different areas, then the following equation holds:

q_ri = ∏_{n∈r} p_ni · ∏_{n∉r} (1 − p_ni).   (3)

Our model is generic, since it allows for any probability distributions p_ni and q_ri.
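As a concrete illustration of this request model, the short Python sketch below computes p_ni from Poisson demand rates as in equation (1) and the subset probabilities q_ri under the independence assumption of equation (3); the three-area layout and the rate values are assumptions made purely for illustration, not parameters from the paper.

```python
import math
from itertools import combinations

def p_request(lam, d):
    # Eq. (1): probability that at least one Poisson(lam) request arrives in a window of length d.
    return 1.0 - math.exp(-lam * d)

def q_subset(r, areas, p, i):
    # Eq. (3): requests for file i appear in every area of subset r and in no other area,
    # assuming independent request generation across areas.
    prob = 1.0
    for n in areas:
        prob *= p[(n, i)] if n in r else (1.0 - p[(n, i)])
    return prob

# Illustrative instance: area 0 (MBS-only coverage) plus two SBS areas, a single file 'f'.
d = 1.0
lam = {(0, "f"): 0.10, (1, "f"): 0.51, (2, "f"): 0.49}   # assumed demand rates
areas = [0, 1, 2]
p = {key: p_request(rate, d) for key, rate in lam.items()}

# The collection R of eq. (2): all non-empty subsets of the areas.
R = [frozenset(c) for k in range(1, len(areas) + 1) for c in combinations(areas, k)]
q = {r: q_subset(r, areas, p, "f") for r in R}

# Sanity check: together with the "no request anywhere" event, the q_ri sum to 1.
assert abs(sum(q.values()) + q_subset(frozenset(), areas, p, "f") - 1.0) < 1e-9
print(round(p[(1, "f")], 4))            # ~0.3995 for lambda = 0.51, d = 1
print(round(q[frozenset({1, 2})], 4))   # requests in areas 1 and 2 only
```

Representing each subset r as a frozenset keeps q usable as a dictionary key, which mirrors how the collection R indexes the formulation later in Section II-C.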
The power consumption is typically higher for MBS compared to SBSs, while it depends on the channel conditions and the distance between transmitter and receiver. Let P_n (watts) denote the minimum transmission power required by MBS for transmitting a file to a user in area n. According to SINR criteria this is given by [31], [32]:

P_n = P_s − G_n − G_m + L_mn + Ψ_n + 10 log_10 M_n.   (4)

In the above equation P_s is the receiver sensitivity for the specific service, parameter G_n represents the antenna gain of a user in area n and G_m represents the antenna gain of MBS. L_mn is the path loss between MBS and a user in area n, which depends on the channel characteristics and the distance between MBS and user, Ψ_n is the shadow component derived by a lognormal distribution and M_n is the number of resource blocks assigned to a user in area n. A similar definition holds for the transmission power of the SBSs.

²With a slight abuse of notation we use the same index for base stations and their covering areas.
We consider the more general case in which both the MBS and the SBSs employ multicast. Namely, a multicast transmission of SBS n ∈ N satisfies the requests for a cached file generated in area n, while a MBS multicast transmission satisfies the requests generated in different areas (and requests in area n_0) where the associated SBSs have not cached the requested file. Let n* denote the area that requires the highest transmission power in a subset r ∈ R, i.e., n* = argmax_{n∈r} P_n. Then, to multicast a file to all the users in r, the power consumption required by MBS is given by [33]:

c_Wr = P_{n*} = max_{n∈r} P_n.   (5)

Similarly, c_n denotes the power consumption required by SBS n for multicasting a cached file to its local users, where in general c_n ≤ c_Wr, ∀n, r. Finally, we denote with c_B ≥ 0 the power consumed for transferring a file via the backhaul link of the MBS [34].
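To make the power model concrete, the following sketch (illustrative only; all numeric link parameters are placeholder assumptions, not values from [31]-[33]) evaluates the link budget of equation (4) per area and then the multicast cost of equation (5) as the maximum over a subset r.

```python
import math

def mbs_power(P_s, G_n, G_m, L_mn, Psi_n, M_n):
    # Eq. (4): receiver sensitivity minus the antenna gains, plus path loss,
    # lognormal shadowing and a 10*log10(M_n) term for the assigned resource blocks.
    return P_s - G_n - G_m + L_mn + Psi_n + 10.0 * math.log10(M_n)

def multicast_cost(r, P):
    # Eq. (5): multicasting to all users in subset r is dominated by the
    # worst-channel area n* = argmax_{n in r} P_n.
    return max(P[n] for n in r)

# Placeholder per-area link parameters (assumed for illustration).
link = {
    1: dict(P_s=-100.0, G_n=0.0, G_m=15.0, L_mn=120.0, Psi_n=4.0, M_n=10),
    2: dict(P_s=-100.0, G_n=0.0, G_m=15.0, L_mn=135.0, Psi_n=6.0, M_n=10),
}
P = {n: mbs_power(**kw) for n, kw in link.items()}

print(P)                          # per-area minimum transmission power P_n
print(multicast_cost({1, 2}, P))  # c_Wr for r = {1, 2}: set by the weaker-channel area
```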
Before formally introducing the problem, let us provide a simple example that highlights how the consideration of
multicast transmissions perplexes the caching problem.
B. Motivating Example
Let us consider a multicast service system with two SBSs (N = {1, 2}) and three files (I = {1, 2, 3}). Each SBS can cache at most one file because of its limited cache size. We set c_B + c_Wr = 1 ∀r, c_1 = c_2 = 0 and d = 1. We also set the generation of requests to follow a Poisson probability distribution. Finally, we set λ_11 = 0.51, λ_12 = 0.49, λ_13 = 0, λ_21 = 0.51, λ_22 = 0, and λ_23 = 0.49, which imply that p_11 = 0.3995, p_12 = 0.3874, p_13 = 0, p_21 = 0.3995, p_22 = 0 and p_23 = 0.3874 (cf. equation (1)).
In a conventional system, each user request is served via a point-to-point unicast transmission. It is well known that placing the most popular files with respect to the local demand in each cache is optimal (in terms of the overall energy cost) in this setting. Hence, the optimal caching policy places file 1, which is the most popular file, to both SBS caches. By applying the above caching policy to the multicast service system that we consider here, all the requests for file 1 will be satisfied by the accessed SBSs at zero cost. The requests within SBS 1 for file 2 and the requests within SBS 2 for file 3 will be served by the MBS with c_B + c_Wr = 1 cost each (Fig. 2(a)). Assuming independent generation of requests, the total energy cost will be:

(c_B + c_W1) · p_12 · (1 − p_23) + (c_B + c_W2) · (1 − p_12) · p_23 + (c_B + c_W1 + c_B + c_W2) · p_12 · p_23 = 0.7747,

where in the last term the cost is 2 instead of 1 because two different files are requested for download and thus two MBS transmissions are required for serving the requests.
However, if we take into consideration the fact that the user requests are aggregated and served via multicast transmissions every d = 1 time unit, then the optimal caching policy changes; it places file 2 to SBS 1 and file 3 to SBS 2. In this case, all the requests for file 1 will be served by the MBS via a single multicast transmission of cost c_B + c_Wr = 1 (Fig. 2(b)). The requests for the rest of the files will be satisfied by the accessed SBSs at zero cost. Hence, the total energy cost will be:

(c_B + c_W1) · p_11 · (1 − p_21) + (c_B + c_W2) · (1 − p_11) · p_21 + (c_B + c_W12) · p_11 · p_21 = 0.6394 < 0.7747.

(a) Conventional caching. (b) Multicast-aware caching.
Fig. 2. An example with two SBSs and three files when (a) conventional and (b) multicast-aware caching is applied. The labels below the SBSs represent the cached files. The labels on the top represent the files delivered by the MBS.
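The two totals above can be checked mechanically; the following short script (an editorial sketch, not part of the paper) recomputes the request probabilities from equation (1) and both expected costs for the example's parameters.

```python
import math

d = 1.0
lam = {(1, 1): 0.51, (1, 2): 0.49, (1, 3): 0.0,
       (2, 1): 0.51, (2, 2): 0.0,  (2, 3): 0.49}
p = {key: 1.0 - math.exp(-rate * d) for key, rate in lam.items()}   # eq. (1)

c = 1.0   # c_B + c_Wr = 1 for every subset r; c_1 = c_2 = 0

# Conventional caching: file 1 in both caches, so files 2 and 3 must be fetched from the MBS.
conventional = (c * p[(1, 2)] * (1 - p[(2, 3)]) +
                c * (1 - p[(1, 2)]) * p[(2, 3)] +
                2 * c * p[(1, 2)] * p[(2, 3)])        # two distinct files -> two MBS transmissions

# Multicast-aware caching: file 2 at SBS 1, file 3 at SBS 2; only file 1 comes from the MBS,
# and concurrent requests in both areas share a single multicast transmission.
multicast_aware = (c * p[(1, 1)] * (1 - p[(2, 1)]) +
                   c * (1 - p[(1, 1)]) * p[(2, 1)] +
                   c * p[(1, 1)] * p[(2, 1)])

print(round(conventional, 4), round(multicast_aware, 4))  # 0.7747 0.6394
```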
This example demonstrated the inefficiency of conventional
caching schemes that neglect multicast transmissions when
determining the file placement to the caches. Novel schemes
are needed that combine caching with multicast to better
exploit the available cache space.
C. Problem Formulation
Let us introduce the binary optimization variable x_ni that indicates whether file i ∈ I is stored in the cache of SBS n ∈ N (x_ni = 1) or not (x_ni = 0). These variables constitute the caching policy of the operator:

x = (x_ni ∈ {0, 1} : n ∈ N, i ∈ I).   (6)
We recall that the files will be transferred to the SBS caches
through the backhaul links at the beginning of the period
of study. Clearly, this operation consumes power. Power is
also consumed by the caches themselves, with the exact value
depending on the caching hardware technology, e.g., solid state
disk (SSD) or dynamic random access memory (DRAM) [35].
We capture the above cost factors by the term c_S, which denotes the power consumed by storing a file in a SBS cache, amortized over a multicast period.
We also use the binary optimization variable y_ri to indicate whether a MBS multicast transmission will occur when a subset of areas r ∈ R receive requests for a file i ∈ I (y_ri = 1) or not (y_ri = 0). These variables constitute the multicast policy of the operator:

y = (y_ri ∈ {0, 1} : r ∈ R, i ∈ I).   (7)
Clearly, a MBS multicast will occur (y_ri = 1) when at least one requester cannot find i in an SBS cache. This implies that at least one of the following conditions holds: (i) a request for file i is generated within an area that is not in the coverage area of any of the SBSs, i.e., n_0 ∈ r, or (ii) a request for file i is generated by a user associated to an SBS n ∈ r \ {n_0}, but the latter has not stored in its cache the requested file. Hence, y_ri should satisfy the following inequalities:

y_ri ≥ 1_{n_0 ∈ r},   ∀r ∈ R, i ∈ I,   (8)
y_ri ≥ 1 − x_ni,   ∀r ∈ R, i ∈ I, n ∈ r,   (9)

where 1_{·} is the indicator function, i.e., 1_{b} = 1 iff condition b is true; otherwise 1_{b} = 0.
Let us now denote with J_i(y) the energy cost for servicing the requests for file i that are generated within a multicast period, which clearly depends on the multicast policy y of the operator. For each subset of areas r that may generate requests for file i within a time period, a single MBS multicast transmission of cost c_B + c_Wr occurs if a requester cannot find i in an accessed SBS (y_ri = 1). Otherwise (y_ri = 0), all the requests are satisfied by the accessed SBSs, where the requests in area n incur cost c_n. Hence:

J_i(y) = Σ_{r∈R} q_ri · [ y_ri · (c_B + c_Wr) + (1 − y_ri) · Σ_{n∈r} c_n ].   (10)

Table I summarizes the key notation used throughout the paper.
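To illustrate how equations (8)-(10) interact, the sketch below (hypothetical helper code, not from the paper) derives, for a given caching policy x, the cheapest multicast policy y consistent with constraints (8)-(9) and then evaluates J_i(y); whenever y_ri is not forced to 1, the cheaper of the MBS multicast term and the local SBS term in equation (10) is selected.

```python
def cheapest_multicast_policy(x, files, R, n0, c_B, c_W, c_n):
    # Constraints (8)-(9): y[r, i] must be 1 if the out-of-coverage area n0 is in r
    # or some SBS in r has not cached file i; otherwise pick the cheaper option in (10).
    y = {}
    for r in R:                      # each r is a frozenset of areas, as in the collection R
        for i in files:
            forced = (n0 in r) or any(n != n0 and not x[(n, i)] for n in r)
            if forced:
                y[(r, i)] = 1
            else:
                y[(r, i)] = 1 if c_B + c_W[r] < sum(c_n[n] for n in r) else 0
    return y

def expected_service_cost(i, y, q, R, n0, c_B, c_W, c_n):
    # Eq. (10): expected energy cost J_i(y) for servicing file i over one multicast period.
    total = 0.0
    for r in R:
        mbs_cost = c_B + c_W[r]
        sbs_cost = sum(c_n[n] for n in r if n != n0)
        total += q.get((r, i), 0.0) * (y[(r, i)] * mbs_cost + (1 - y[(r, i)]) * sbs_cost)
    return total
```

Applying these helpers to the two-SBS instance of Section II-B (unit MBS cost, zero SBS and caching costs), with file 2 cached at SBS 1 and file 3 at SBS 2, reproduces the 0.6394 total computed there.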
The Multicast-Aware Caching Problem (MACP) determines the caching and multicast policies that minimize the expected energy cost within a multicast period³:

minimize_{x,y}   Σ_{n∈N} Σ_{i∈I} c_S · x_ni + Σ_{i∈I} J_i(y),   (11)
subject to: (8), (9),
            Σ_{i∈I} x_ni ≤ S_n,   ∀n ∈ N,   (12)
            x_ni ∈ {0, 1},   ∀n ∈ N, i ∈ I,   (13)
            y_ri ∈ {0, 1},   ∀r ∈ R, i ∈ I,   (14)
where the first term in the objective function is the caching
cost, and the second is the servicing cost. Inequalities in (12)
ensure that the total amount of data stored in a cache will not
exceed its size. Constraints in (13), (14) indicate the discrete
nature of the optimization variables.
MACP is an integer programming problem, and hence, is in
general hard to solve. Also, its objective function in (11) has
an exponentially long description in the number of SBSs N,
since the summation in J
i
(y) is over all subsets r R. As
we formally prove in the next section, MACP is an NP-Hard
problem.
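Since the number of y variables grows exponentially with N, exact solution is practical only for toy instances. The brute-force sketch below (an editorial illustration under the model of equations (10)-(14); it is not the randomized-rounding algorithm developed in Section III) enumerates every feasible placement x, fixes y to its cheapest value allowed by constraints (8)-(9), and returns the minimizer of objective (11).

```python
import math
from itertools import combinations, product

def solve_macp_bruteforce(sbs, files, S, c_S, c_B, c_W, c_n, q, n0=0):
    """Exhaustive MACP solver for toy instances.
    q maps (frozenset_of_areas, file) -> probability, as in eq. (3);
    c_W maps frozenset_of_areas -> MBS multicast cost."""
    areas = list(sbs) + [n0]
    R = [frozenset(s) for k in range(1, len(areas) + 1)
         for s in combinations(areas, k)]
    # Each SBS independently caches any subset of at most S[n] files (constraint (12)).
    choices = [[frozenset(c) for k in range(S[n] + 1) for c in combinations(files, k)]
               for n in sbs]
    best_cost, best_x = math.inf, None
    for placement in product(*choices):
        x = {(n, i): (i in placement[idx]) for idx, n in enumerate(sbs) for i in files}
        cost = c_S * sum(x.values())                      # caching term of objective (11)
        for i in files:                                   # servicing term: sum of J_i(y), eq. (10)
            for r in R:
                forced = (n0 in r) or any(n != n0 and not x[(n, i)] for n in r)
                mbs_cost = c_B + c_W[r]
                sbs_cost = sum(c_n[n] for n in r if n != n0)
                cost += q.get((r, i), 0.0) * (mbs_cost if forced else min(mbs_cost, sbs_cost))
        if cost < best_cost:
            best_cost, best_x = cost, x
    return best_cost, best_x
```

On the instance of Section II-B (S_n = 1, c_S = 0, c_n = 0, c_B + c_Wr = 1, q_ri from equation (3)), this enumeration returns the multicast-aware placement of file 2 at SBS 1 and file 3 at SBS 2 with expected cost 0.6394, matching the example.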
III. COMPLEXITY AND SOLUTION ALGORITHMS
In this section, we prove the high complexity of the MACP
problem and present solution algorithms with performance
guarantees and heuristics.
A. Complexity
In this subsection, we prove that the MACP problem cannot be approximated within any ratio better than the square root of the number of SBSs. The proof is based on a reduction from the well known NP-Hard set packing problem (SPP) [23]. In other words, we prove that SPP is a special case of MACP. Particularly, the following theorem holds:

³We emphasize that our model is focused on the energy consumed for caching and transmitting data to users. Hence, other factors such as cooling [22] are left outside the scope of our study.

TABLE I
KEY NOTATIONS
Symbol      Physical Meaning
n_0         Area that is out of coverage of all SBSs
n           SBS (area) belonging to the set N
r           Subset of areas belonging to the collection R
i           File belonging to the set I
S_n         Cache capacity of SBS n
c_S         Energy cost for storing a file in a SBS cache
c_B         Energy cost for multicasting a file via MBS backhaul
c_Wr        Energy cost for multicasting a file from MBS to areas r
c_n         Energy cost for multicasting a file from SBS n
λ_ni        Average demand in area n for file i
d           Duration of multicast period
p_ni        Probability that requests for file i appear in area n within d
q_ri        Probability that requests for file i appear in areas r within d
x_ni        Caching decision for file i to SBS n
y_ri        Indicator of MBS multicast for serving file i in areas r
J_i(y)      Energy cost for servicing the requests for file i
Theorem 1. It is NP-Hard to approximate MACP within any ratio better than O(√N).
Theorem 1 is of high importance, since it reveals how
the consideration of multicast transmissions further perplexes
the caching problem. In order to prove Theorem 1 we will
consider the corresponding (and equivalent) decision problem,
called Multicast-Aware Caching Decision Problem (MACDP).
Specifically:
MACDP: Given a set N of SBSs, a set I of files, the cache sizes S_n ∀n ∈ N, the costs c_S, c_B, c_Wr and c_n ∀r ∈ R, n ∈ N, the multicast period d, the probabilities q_ri ∀r ∈ R, i ∈ I, and a real number Q ≥ 0, we ask the following question: do there exist caching and multicast policies x, y, such that the value of the objective function in (11) is less than or equal to Q and constraints (8), (9), (12), (13), (14) are satisfied?
The set packing decision problem is defined as follows:
SPP: Consider a finite set of elements E and a list L
containing subsets of E. We ask: do there exist k subsets in
L that are pairwise disjoint?
Lemma 1. The SPP is polynomial-time reducible to the MACDP.
Proof: Let us consider an arbitrary instance of the SPP decision problem and a specific instance of MACDP with N = |E| SBSs, i.e., N = {1, 2, . . . , |E|}, I = |L| files, i.e., I = {1, 2, . . . , |L|}, unit-sized caches (S_n = 1 ∀n ∈ N), c_S = 0, c_B + c_Wr = 1 and c_n = 0 ∀r ∈ R, n ∈ N. Parameter d is any positive number, and the question is if we can satisfy all the user requests with energy cost Q = 1 − k/|L|, where k is the parameter from the SPP. The important point is that we define the q_ri probabilities as follows:

q_ri = { 1/|L|, if r = L(i);  0, otherwise }   (15)

where L(i) is the i-th component of the list L. Notice that with the previous definitions, L(i) contains a certain subset
