User Preference Aware Caching Deployment for
Device-to-Device Caching Networks
Tiankui Zhang, Senior Member, IEEE, Hongmei Fan, Jonathan Loo, Member, IEEE, and Dantong Liu
Abstract—Content caching in device-to-device (D2D) cellular networks can be utilized to improve the content delivery efficiency and reduce the traffic load of cellular networks. In such cache-enabled D2D communication networks, how to cache the diverse contents in the multiple cache-enabled mobile terminals, namely, the caching deployment, has a substantial impact on network performance, since the cache space in a mobile terminal is relatively small compared with the huge amount of multimedia contents. In this paper, a user preference aware caching deployment algorithm is proposed for D2D caching networks. Firstly, the user preference is defined to measure the interests of users in the data contents, and based on this, the definition of user interest similarity is given. Then a content cache utility of a mobile terminal is defined by taking the communication coverage of this mobile terminal and the user interest similarity of its adjacent mobile terminals into consideration. A general cache utility maximization problem with joint caching deployment and cache space allocation is formulated, into which a special logarithmic utility function is integrated. In doing so, the caching deployment and the cache space allocation can be decoupled by equal cache space allocation. Subsequently, we relax the logarithmic utility maximization problem and obtain a low-complexity near-optimal solution via a dual decomposition method. Compared with existing caching placement methods, the proposed algorithm achieves significant improvement in cache hit ratio, content access delay, and traffic offloading gain.
Index Terms—device-to-device communication, content caching, user preference
I. INTRODUCTION
TODAY'S Internet traffic is dominated by content distribution and retrieval. With the rapid explosion of data volume and content diversity, it becomes challenging to deliver high-quality service to the end user efficiently and securely. In the pioneering work [1], opportunistic multihop transmission was considered to offload network traffic by exploiting the mobile devices' capabilities in cellular networks. Recently, content caching, a widely adopted content delivery technique in the Internet for reducing network traffic load, has been exploited in fifth generation (5G) mobile networks. It has been proven that caching popular contents and pushing them close to consumers can significantly reduce the mobile traffic [2]. The works in [3], [4] have explicitly demonstrated the role of caching technology in enabling 5G networks.
With this, in the cellular networks, there have been many
works utilizing the cache to improve the system performance
This work was supported in part by NSF of China (No. 61461029).
Tiankui Zhang, Hongmei Fan are with the School of Information and Com-
munication Engineering, Beijing University of Posts and Telecommunications,
Beijing, 100876, China (e-mail: {zhangtiankui, fhm}@bupt.edu.cn).
Jonathan Loo is with the School of Computing and Engineering, University
of West London, United Kingdom (e-mail: jonathan.loo@uwl.ac.uk).
Dantong Liu is with the Chief technology and Architecture office, Cisco
Systems Inc., CA 95134, USA (e-mail: datliu@cisco.com).
of the network. The authors in [5]–[7] introduced the idea of femto-caching helpers, which are small base stations (BSs) with a low-bandwidth backhaul link and high storage capabilities. Recent works [8]–[11] have shown that one of the most promising approaches for system performance improvement relies on caching, i.e., storing the video files in the users' local caches and/or in dedicated helper nodes distributed in the network coverage area.
Intuitively, caching provides a way to exploit the inherent content reuse while coping with the asynchronism among requests [12]. In addition, caching is appealing since it leverages the wireless devices' storage capacity, which can improve the network capacity and mitigate video stalling [13], minimize the average content access delay [14], and reduce the energy consumption [15]. Content caching can be made more efficient with the help of big data analysis and estimation techniques [16].
Apart from caching, device-to-device (D2D) communication has been regarded as another driving force behind the evolution toward 5G, considering that D2D communication is able to effectively utilize the air interface resources and offload cellular network traffic. In the conventional cellular network, a mobile terminal (MT) can only rely on a base station (BS) to acquire the desired content. In a cellular network with D2D, the prospects of cellular communication applications can be extended with direct communication capabilities between devices. For example, if neighboring MTs have the same content, the content can be delivered directly from those neighbor devices.
D2D caching networks, which take advantage of both caching and D2D communication technologies, have naturally set the stage for the 5G evolution [8], [9]. In D2D caching networks, the MTs equipped with storage space are used as caching nodes, and the mobile users collaboratively download and cache different parts of the same content simultaneously from the serving BS, and then share them by using D2D communications. Although the cache space on each individual MT is not necessarily large, the cache spaces of multiple devices can form a large virtual unified cache space, which can cache a large amount of multimedia content. By trading storage overhead for transmission efficiency, D2D caching networks can offload cellular traffic, reduce content access delay, and improve the user experience.
In view of forming a large virtual unified cache space, caching deployment, i.e., how the diverse contents are cached in multiple cache-enabled MTs, has a significant impact on the network performance of D2D caching networks. Several works have studied the
caching deployment optimization problem [17]–[24], which
are discussed in the related works section. These works
utilize the D2D cache to achieve their goals by taking the
channel state information, the popularity of the content, the
available bandwidth resources, the data transmission rate, and
the distribution of users into account. However, in the context of multimedia content distribution, especially in wireless social networks, the user's preference for content has a great impact on the cache system performance. In [25], the authors pointed out that each data object would eventually be sent to the interested users. For a user, the closer it is to the storage location of a data object, the less network traffic is consumed to access that object. In selecting the caching locations of content replicas, the users' interest preference provides certain guidance, and the content replicas should be stored at locations closer to the users who are interested in them. Therefore, the caching deployment strategy can be designed based on the user preference, considering that each user in the D2D network is a relatively independent entity.
In this paper, we propose a user preference aware caching
deployment algorithm for the D2D caching networks. We
integrate user preference when formulating the content cache
utility, establish the optimization problem of cache utility, and
implement the near-optimal caching placement algorithm. The
outcomes of this research provide the upper bound of caching
performance of D2D networks, and also the performance
bounds for the follow-up study of distributed and online
caching strategies.
The contributions of this paper are as follows:
1) In order to improve the caching performance, the content cache utility of each MT is defined to measure the caching utilization. Distinguished from the existing research on the caching deployment optimization problem, the proposed utility definition takes both the user preference and the transmission coverage region into consideration. The rationale is that when content replicas are cached at nodes near the nodes generating the content requests, the caching utilization can be improved. As such, an MT should cache the specific contents in which adjacent MTs with similar user preference may be interested, so as to improve the caching efficiency. In addition, the more neighbor MTs in the communication coverage region of an MT, the higher the possibility of sharing the cached content, and the larger the content caching utility of this MT.
2) The content cache utility maximization problem is formulated for the caching performance optimization. The existing works on user preference based caching deployment strategies mainly use heuristic methods, whereas the optimal solution of the optimization problem formulated in this work gives the best caching performance, which can be seen as the upper bound of the caching performance obtainable by caching deployment. Firstly, a general cache utility maximization problem is introduced. Then, a specific logarithmic utility function is adopted, which provides network-wide proportional fairness. With the logarithmic cache utility, the coupled caching deployment and cache space allocation maximization problem can be reduced to a cache utility maximization problem with equal cache space allocation.
3) Finally, for the logarithmic cache utility maximization problem, the single-MT caching constraint is relaxed to multiple-MT caching, which converts the intractable combinatorial problem into a convex optimization problem. Then a near-optimal solution is obtained by a dual decomposition method. This solution provides a feasible, efficient, and low-overhead algorithm for implementation in D2D caching networks. The simulation results show that the proposed algorithm converges to the maximum in a few iterations and achieves significant performance on cache hit ratio, content access delay, and traffic offloading gain.
The rest of the paper is organized as follows. In Section II, we review the related work. Section III presents the system model. Section IV defines the user preference, the user interest similarity, and the content cache utility. Section V formulates the content cache utility optimization problem. Section VI presents the proposed dual decomposition algorithm. Section VII evaluates the performance of the proposed algorithm, and the last section concludes this work. The main symbols and variables used in this paper are summarized in Table I.
II. RELATED WORKS
The previous works on D2D communication mainly focused on how D2D communication can run efficiently as an underlay to cellular networks, with the research concentrated on resource allocation and interference avoidance; see [26] and the references therein.
With a limited amount of storage on each device, the main challenge is how cellular traffic can be maximally offloaded by using D2D communication to satisfy requests for content as well as to share messages between neighboring devices. A carefully designed caching deployment strategy has a great impact on the network performance of D2D caching networks. Existing contributions include the caching deployment optimization and strategy design [17]–[24], the design of D2D cache network structures [10], [11], [27], [28], D2D caching clustering [29], [30], and so on.
In [17], the authors considered the distribution of users' requests when designing the caching strategy to maximize the probability of successful content delivery. With the consideration of content popularity, a cut-off random caching scheme and a segment-based random caching scheme were proposed to improve the cache hit probability [18]. In [19], an optimization problem was formulated to determine the probability of storing the individual content that could minimize the average caching failure rate, and then a low-complexity search algorithm was proposed for solving the optimization problem. In [20], the authors formulated a continuous-time optimization problem to determine the optimal transmission and caching policies that minimize a generic cost function, such as energy, bandwidth, or throughput. In [21], a caching allocation scheme was proposed to enhance storage utilization for D2D networks, and the optimal storage assignment achieved a tradeoff between static caching and on-demand relaying. In [22], the authors studied the problem of maximizing cellular traffic offloading with D2D communication by selectively caching popular content locally and exploring maximal matching for sender-receiver pairs. In [23], the authors optimized the content cache

distribution considering the users' geographical locations in the D2D network to improve the cache hit probability. In [24], the authors combined channel-aware caching and coded multicasting for wireless video delivery.
The above works utilize the D2D cache to achieve their
goals by taking the channel state information, the popularity
of the content, the available bandwidth resources, the data
transmission rate, and the distribution of users into account.
However, in the context of multimedia content distribution, the
user’s preference for content has a great impact on the cache
system performance, especially in wireless social networks.
The user preference is a concept from social networks and recommendation systems [31]. In the context of D2D caching networks, there exist some caching strategies that take into account the users' interest preferences [3], [32], [33]. The authors in [3] proposed a mechanism to proactively cache popular contents at mobile users, in which the files were delivered to some influential mobile users in a social community and then shared within the community by D2D communications. In [32], users were divided into different clusters according to their interest preferences; then, the corresponding cache strategy was obtained by compromising on the average download delay of each group. Considering the differences in users' preferences and the selfish nature of D2D users, a caching incentive scheme based on knapsack theory was proposed in [33].
It is worth mentioning that user preference based caching strategies have also been explored recently in content centric networking (CCN) [34], [35]. Nevertheless, the works in CCN pay more attention to online and on-path caching decision design, which cannot achieve network-wide performance optimization.
Although the works in [32]–[35] laid a good foundation in integrating users' interest preferences into the caching strategy design, the effect of the similarity of users' interest preferences on the caching strategy design is less well understood. In the caching strategy design, if an MT caches specific contents in which the adjacent MTs may be interested, the cache space utilization can be improved. Hence, the content caching of an MT should not only consider the user preference itself, but also take account of the user interest similarity with adjacent MTs. Our work fills this gap by carefully considering the user interest similarity and the D2D transmission coverage region when defining the content caching utility, thereby improving the network performance via caching deployment optimization.
III. SYSTEM MODEL
A. Network model
The system model of this paper is illustrated in Fig. 1. A single macrocell is considered, where a macro BS serves N uniformly distributed D2D users. In this paper, we consider in-band D2D communication, in which the D2D users can access the licensed spectrum in a dedicated mode (also described as an overlay or orthogonal mode in the literature). In the dedicated mode, the transmissions of the cellular users and the D2D users are assigned a non-overlapping orthogonal
TABLE I: Symbols and variable list

Parameter       Description
B_M             System bandwidth of the downlink macrocell
B_D             System bandwidth of D2D communication
p^tx_BS         Maximum transmit power of the BS
p^tx_n'         Transmit power of MT n'
c_nn'           Data rate from user n' to user n
c_nBS           Data rate from the serving BS to user n
sigma^2         Additive white Gaussian noise power
N               Number of users
M               Number of contents
M_n             Number of contents cached at MT n
S               Cache capacity of each user
v               Size of each content
phi_mn          Preference of user n for content m
phi_m(n, n')    Interest similarity of users n and n'
d(n, n')        Distance between users n and n'
x_mn            Caching index of user n for content m
y_mn            Cache space allocated by user n to content m
u_mn            Cache utility per unit cache space of user n for content m
radio resource, so there is no interference between cellular users and D2D users, nor among D2D users [36].
Figure 1. D2D caching networks (macro BS serving cache-enabled, self-serving D2D MTs).
User n can communicate and share a content directly with its neighbor MTs through a D2D communication link if user n has cached that content. For a given content, which MT in an overlapping region of multiple MTs should store the content replica is decided by the caching placement strategy.
The system bandwidth of the downlink macrocell is B_M and the system bandwidth of D2D communication is B_D; we use time-domain Round-Robin scheduling to allocate the radio resource for cellular users and D2D users [36]. When user n communicates with user n', the data rate from user n' to user n is

    c_{nn'} = B_D \log( 1 + g_{n'n} p^{tx}_{n'} / \sigma^2 ),    (1)

and when user n communicates with the BS, the data rate from its serving BS to user n is

    c_{nBS} = B_M \log( 1 + g_{BSn} p^{tx}_{BS} / \sigma^2 ),    (2)

where \sigma^2 is the additive white Gaussian noise power, p^{tx}_{n'} is the maximum transmit power of user n', p^{tx}_{BS} is the maximum transmit power of the BS, g_{n'n} is the channel gain between user n' and user n, and g_{BSn} is the channel gain between the BS and user n [37].
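As a concrete illustration, the Shannon-style rates in (1) and (2) can be evaluated numerically. This is a minimal sketch assuming a base-2 logarithm (the paper writes "log" without stating the base); the bandwidth, power, gain, and noise values are hypothetical, not from the paper:

```python
import math

def d2d_rate(bandwidth_hz, channel_gain, tx_power_w, noise_power_w):
    """Data rate c = B * log2(1 + g * p / sigma^2), as in Eqs. (1)-(2)."""
    snr = channel_gain * tx_power_w / noise_power_w
    return bandwidth_hz * math.log2(1.0 + snr)

# Hypothetical parameters: 10 MHz D2D band, 0.1 W MT transmit power,
# 20 MHz macrocell band, 10 W BS transmit power.
c_d2d = d2d_rate(10e6, 1e-7, 0.1, 1e-13)   # user n' -> user n, Eq. (1)
c_bs  = d2d_rate(20e6, 1e-9, 10.0, 1e-13)  # serving BS -> user n, Eq. (2)
```

The same helper serves both links; only the bandwidth, gain, and transmit power differ.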
B. Caching model
Taking into account the diversity of contents within the D2D network, this paper assumes that each cell can cache M contents. More generally, different cells can cache different sets of M contents, and the value of M can vary across cells. To relax this assumption further, we can consider that the macro BS (or another functional entity with a management function) selects the M contents to be cached according to certain criteria, for instance, the M contents having the highest popularity. In addition, due to the limited storage capacity of an MT in practical application scenarios, the amount of data in the whole content set is much larger than the cache space available at each MT. The contents are assumed to have the same data size, and the data volume of each content is v. Each MT has a cache space that is able to cache up to S contents.
In our caching model, a user can be both a content requester and a content provider. If a complete copy of the requested content m exists in the user's own cache, the request is fulfilled with no delay and without the need to establish a communication link. Otherwise, the user broadcasts a request message for content m to the neighbor MTs within its coverage; if the user can find the requested file in the cache space of an MT within its D2D transmission range, it can establish a D2D communication link and obtain the content. If the user can find the requested content neither in its own cache nor at its proximate users, it needs to download the file from the BS.
The aforementioned procedure is straightforward and can be implemented via existing approaches, such as those for establishing D2D communication and allocating radio resources, or for measuring content popularity and selecting the most popular contents. This paper mainly focuses on the caching deployment problem in D2D networks, that is, how to cache M contents among N MTs.
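The three-tier lookup described above (own cache, then D2D neighbors, then the BS) can be sketched as follows; the data structures and helper names are illustrative, not from the paper:

```python
def fetch_content(user, content_id, caches, neighbors):
    """Return the serving source for a request, following the caching model:
    1) own cache (no delay), 2) a neighbor MT within D2D range, 3) the BS."""
    if content_id in caches.get(user, set()):
        return ("self", user)
    for peer in neighbors.get(user, []):       # MTs within D2D transmission range
        if content_id in caches.get(peer, set()):
            return ("d2d", peer)               # establish a D2D link to this peer
    return ("bs", None)                        # fall back to cellular download

# Toy topology: user 3 caches nothing but can reach users 1 and 2 via D2D.
caches = {1: {"a"}, 2: {"b"}, 3: set()}
neighbors = {3: [1, 2]}
```

With this topology, a request by user 3 for content "a" is served over D2D from user 1, while a content cached nowhere falls back to the BS.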
IV. CACHE UTILITY FUNCTION
A. User preference and user interest similarity
User preference reflects a user's interest in a content, and can also indirectly reflect the probability that the user requests that content. Users' preferences for contents are closely related to the types of the contents.
We assume that there are K themes for the contents in the network, and W = {w_1, w_2, ..., w_K} represents the set of all themes. The property function of content m under topic w_k is Pro(m, w_k): if content m includes theme w_k, the value of Pro(m, w_k) is one; otherwise, the value is zero.

The users have their own preference for each theme, and we let the preference function Pre(n, w_k) represent the preference of user n for theme w_k. In this paper, we assume that the user preference function is represented by mutual information [38], which is defined as

    Pre(n, w_k) = I( X(w_k); V_j ) = \log [ p( X(w_k) | V_j ) / p( X(w_k) ) ],    (3)

where X(w_k) is the set of all items which contain feature w_k, and I(X(w_k); V_j) is the mutual information. p(X(w_k)) is the unconditional feature probability, representing the probability of contents containing feature w_k in the whole content set. p(X(w_k)|V_j) is the conditional feature probability, i.e., the probability of contents including w_k in the history information V_j of user n.
The interest of user n in content m is defined based on cosine similarity, that is,

    \phi_{mn} = \sum_{k=1}^{K} Pro(m, w_k) Pre(n, w_k) / [ \sqrt{ \sum_{k=1}^{K} [Pro(m, w_k)]^2 } \sqrt{ \sum_{k=1}^{K} [Pre(n, w_k)]^2 } ].    (4)

The more similar Pro(m, w_k) and Pre(n, w_k) are, the higher \phi_{mn} is, and 0 <= \phi_{mn} <= 1.
According to the above definition of the user preference, the interest similarity function is further defined to characterize the interest similarity among users. In this paper, we use a simple model to capture the user interest similarity of real social networks [39]. Since \phi_m is within the segment [0, 1], the interest similarity between user n and user n' is defined via the Euclidean distance on the wrapped segment,

    \phi_m(n, n') = \min{ |\phi_{mn} - \phi_{mn'}|, 1 - |\phi_{mn} - \phi_{mn'}| }.    (5)

The smaller the distance between \phi_{mn} and \phi_{mn'} is, the larger the interest similarity of users n and n' on content m is. A larger interest similarity between two users indicates that a content cached at one user is more likely to be requested by the other.
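A minimal numerical sketch of (4) and (5) follows; the theme vectors below are made up for illustration, and the function names are not from the paper:

```python
import math

def interest(pro, pre):
    """Cosine similarity between a content's binary theme vector Pro(m, .)
    and a user's preference vector Pre(n, .), Eq. (4)."""
    dot = sum(a * b for a, b in zip(pro, pre))
    norm = math.sqrt(sum(a * a for a in pro)) * math.sqrt(sum(b * b for b in pre))
    return dot / norm if norm > 0 else 0.0

def interest_similarity(phi_mn, phi_mn2):
    """Distance on the wrapped segment [0, 1], Eq. (5)."""
    d = abs(phi_mn - phi_mn2)
    return min(d, 1.0 - d)

pro_m  = [1, 0, 1]          # content m covers themes w1 and w3
pre_n  = [0.9, 0.1, 0.8]    # hypothetical preference of user n
pre_n2 = [0.2, 0.9, 0.1]    # hypothetical preference of user n'
phi_n  = interest(pro_m, pre_n)
phi_n2 = interest(pro_m, pre_n2)
sim = interest_similarity(phi_n, phi_n2)
```

Note that the wrapped-segment distance is at most 0.5, so `sim` always lies in [0, 0.5].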
B. Cache utility
In this paper, we define the cache utility function of a user considering both the communication coverage of this user and the user interest similarity of its adjacent users.

As described above, \phi_m(n, n') represents the interest similarity between user n and user n'. Besides, we let d(n, n') represent the physical distance between the two users, and let \Phi_n denote the set of neighbors of user n within its communication range. In the D2D communication coverage region of a user, the more neighbor users there are, the higher the possibility of sharing the cached content, and the larger the caching utility of this user. Therefore, the cache utility per unit cache space of user n caching content m is defined as

    u_{mn} = \sum_{n' \in \Phi_n} \phi_m(n, n')^{\alpha} \cdot d(n, n')^{\beta},    (6)

where \alpha and \beta are the weighting factors of the user interest similarity and the user physical distance.

In the cache utility function definition, the D2D transmission coverage region is decided by the physical distance between MTs. We use the physical distance, equivalent to the pathloss, in the cache utility definition for two reasons: i) from the view of caching management, it is easy to collect the pathloss among MTs in a macrocell, so the caching deployment can be implemented within a very short interval; ii) the timescale of content delivery among MTs is larger than that of the channel condition varying with fast fading.
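The per-unit-space utility in (6) can be computed as below. This is a sketch: the neighbor set, similarity values, distances, and the weights alpha and beta are all illustrative, and taking beta negative (so that nearer neighbors contribute more) is an assumption here, not stated by the paper:

```python
def cache_utility(m, n, neighbors, sim, dist, alpha=1.0, beta=-1.0):
    """Cache utility per unit cache space of user n for content m, Eq. (6):
    u_mn = sum over n' in Phi_n of sim_m(n, n')**alpha * dist(n, n')**beta.
    beta < 0 (an assumption) discounts far-away neighbors."""
    return sum(sim[(m, n, n2)] ** alpha * dist[(n, n2)] ** beta
               for n2 in neighbors[n])

neighbors = {0: [1, 2]}                         # Phi_0: users within D2D range
sim = {("m1", 0, 1): 0.9, ("m1", 0, 2): 0.4}    # interest similarity, Eq. (5)
dist = {(0, 1): 10.0, (0, 2): 50.0}             # physical distance in meters
u = cache_utility("m1", 0, neighbors, sim, dist)
```

Here the nearby, similar-interest neighbor dominates the utility, matching the intuition stated above.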
Assumption 1: User n can cache a portion of content m. This assumption is practical and necessary, because a user may have multiple contents of interest to cache, and the cache space of an MT is relatively small compared with the data volume of those multiple contents.

We define a caching index x_{mn} = 1 for user n and content m, indicating that a portion of (or the entire) content m is cached at user n; otherwise, x_{mn} = 0.

Suppose the cache space of each MT is S, and the cache space allocated by MT n for caching parts of content m is y_{mn}. We have \sum_{m=1}^{M} y_{mn} <= S, which means that all the contents cached at MT n cannot exceed the maximum available cache space.
From the view of the network, the revenue obtained by caching content m at MT n is x_{mn} y_{mn} u_{mn}, and the total revenue of caching content m in the D2D network is \sum_{n=1}^{N} x_{mn} y_{mn} u_{mn}. So the cache utility function of content m is u_m = f( \sum_{n=1}^{N} x_{mn} y_{mn} u_{mn} ), where f(.) is a continuously differentiable, monotonically increasing, and strictly concave utility function [40].
In the following section, we will study the problem of
maximizing cache utility for the whole network, so as to find
the optimal caching deployment and cache space allocation.
V. PROBLEM FORMULATION
The goal of this paper is to optimize the cache utility of the whole network so as to obtain the caching deployment algorithm, thereby improving the network performance in terms of backhaul traffic offloading, cache hit ratio, and content access delay to end users.
A. General utility maximization
1) Unique caching case
Firstly, we consider the scenario in which one content can be cached at only one MT. In the case of unique caching, the caching deployment strategy has to be combined with the allocation of cache space, because they are interdependent. We construct an optimization problem as a function of the caching index x_{mn} and the cache space allocation y_{mn}. For a general utility function, the utility maximization problem is

    P1: \max_{x,y} \sum_{m=1}^{M} f( \sum_{n=1}^{N} x_{mn} y_{mn} u_{mn} )
    s.t. C1: \sum_{n=1}^{N} x_{mn} = 1, \forall m \in {1, ..., M}
         C2: x_{mn} \in {0, 1}, \forall m \in {1, ..., M}, \forall n \in {1, ..., N}
         C3: 0 <= y_{mn} <= S, \forall m \in {1, ..., M}, \forall n \in {1, ..., N}
         C4: \sum_{m=1}^{M} y_{mn} <= S, \forall n \in {1, ..., N}.    (7)

In the utility optimization problem P1, x_{mn} can only take 0 or 1, and \sum_{n=1}^{N} x_{mn} = 1 represents that content m can be cached at only a single MT. 0 <= y_{mn} <= S means that if MT n caches content m, the size of the cache space occupied by content m at MT n is no larger than the MT cache space. \sum_{m=1}^{M} y_{mn} <= S indicates that the size of all contents cached at MT n cannot exceed the storage capacity of MT n. The above constraints are linear, but with the binary variables x_{mn} the optimization problem is a challenging 0-1 programming problem, and it can be proved to be NP-hard [41].
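For intuition, a tiny instance of P1 can still be solved by brute force over the binary placement. This sketch is not the paper's algorithm: it assumes the logarithmic utility introduced later, under which, once the placement is fixed, each MT's optimal move is to split its space S equally among its assigned contents (the equal-allocation property used in the paper's decoupling argument); the utility matrix is made up:

```python
import itertools, math

def solve_p1_log(u, S):
    """Brute-force the unique-caching problem P1 with log utility.
    u[m][n] is the per-unit-space utility of caching content m at MT n.
    Each content is placed at exactly one MT (constraint C1); the MT then
    splits S equally among its assigned contents."""
    M, N = len(u), len(u[0])
    best_val, best_x = -math.inf, None
    for placement in itertools.product(range(N), repeat=M):
        load = [placement.count(n) for n in range(N)]  # contents per MT
        val = sum(math.log((S / load[placement[m]]) * u[m][placement[m]])
                  for m in range(M))
        if val > best_val:
            best_val, best_x = val, placement
    return best_x, best_val

u = [[0.9, 0.1],
     [0.8, 0.2],
     [0.1, 0.7]]   # hypothetical utilities: 3 contents, 2 MTs
x, val = solve_p1_log(u, S=2.0)
```

The enumeration over N^M placements illustrates why the 0-1 problem is intractable at scale, motivating the relaxation and dual decomposition pursued in Section VI.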
2) Multiple caching case
Then, we consider the multiple caching case.
Assumption 2: One content can be cached at multiple MTs simultaneously. This assumption may require more overhead to implement, but it is practical in D2D caching networks, since multiple MTs can collaboratively download and cache some large-volume contents.

From (7) we can notice that, under Assumption 2, the constraint \sum_{n=1}^{N} x_{mn} = 1 can be eliminated, and hence there is no need for x_{mn} as an additional caching indicator. The cache space allocation variable y_{mn} itself indicates the caching state, i.e., y_{mn} > 0 means a portion of content m is cached at MT n; otherwise, y_{mn} = 0. In this case, we focus on how the cache space should be allocated to different contents with different u_{mn} so as to maximize the utility of the MTs, instead of considering it in conjunction with the caching deployment.

We formulate the optimization problem of multiple caching as follows,

    P2: \max_{y} \sum_{m=1}^{M} f( \sum_{n=1}^{N} y_{mn} u_{mn} )
    s.t. C3, C4.    (8)

It can be seen that problem P2 is only related to the cache space allocation of the different MTs, without considering the caching deployment. Therefore, the optimization problem is simplified.

In the following sections, we show that with the logarithmic utility function, y_{mn} can be found directly without Assumption 2, and thus there is no need to decouple x_{mn} and y_{mn} in this optimization. However, for general utility maximization, problem P2 provides an ultimate limit on the achievable network performance.
B. Logarithmic utility and cache space allocation
The logarithmic utility function is a very common choice of utility function [40], as it naturally achieves some level of utility fairness among the contents. To this end, we use a logarithmic utility function in the cache utility maximization problem. The resulting utility function is

    f( \sum_{n=1}^{N} x_{mn} y_{mn} u_{mn} ) = \log( \sum_{n=1}^{N} x_{mn} y_{mn} u_{mn} ).    (9)

This logarithmic utility function is concave, and hence has diminishing returns. This property encourages cache space allocation balancing.

In the remainder of this paper, we focus on the caching deployment with the logarithmic utility function.

First, we consider the unique caching case. In doing so, the utility maximization problem P1 in (7) is equal to

    P3: \max_{x,y} \sum_{m=1}^{M} \log( \sum_{n=1}^{N} x_{mn} y_{mn} u_{mn} )
    s.t. C1, C2, C3, C4.    (10)
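The decoupling claimed above (with log utility, equal cache space allocation is optimal once the placement is fixed) can be checked numerically: for one MT caching a fixed set of contents, maximizing the sum of log(y_m * u_m) subject to the space budget is achieved at y_m = S/M regardless of the u_m values, since the sum of log u_m is a constant. A small sketch with hypothetical utilities:

```python
import math, random

def log_utility(y, u):
    """Sum over contents of log(y_m * u_m) for contents cached at one MT."""
    return sum(math.log(ym * um) for ym, um in zip(y, u))

random.seed(0)
S = 4.0
u = [0.9, 0.5, 0.3]            # hypothetical per-unit utilities at this MT
equal = [S / len(u)] * len(u)  # equal split of the cache space

best_random = -math.inf
for _ in range(1000):
    w = [random.random() for _ in u]
    total = sum(w)
    y = [S * wi / total for wi in w]   # random feasible split (sums to S)
    best_random = max(best_random, log_utility(y, u))
# No random split beats the equal split, by concavity of the log.
```

This is only a numerical sanity check on the equal-allocation property, not the paper's dual decomposition algorithm, which handles the coupled placement across MTs.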

References
- Integer Programming (book).
- I Tube, You Tube, Everybody Tubes: Analyzing the World's Largest User Generated Content Video System (conference paper).
- Recommender System Application Developments (journal article).
- Living on the Edge: The Role of Proactive Caching in 5G Wireless Networks (journal article).
- User Association for Load Balancing in Heterogeneous Cellular Networks (journal article).
Frequently Asked Questions (13)
Q1. What are the contributions in "User Preference Aware Caching Deployment for Device-to-Device Caching Networks"?

In this paper, a user preference aware caching deployment algorithm is proposed for D2D caching networks. 

Beyond that, the authors have introduced a caching utility function that aims to maximize the caching utility in order to enhance the possibility of content sharing among the multiple MTs. The proposed centralized algorithm obtains the near-optimal performance of the caching deployment, which can be used as the benchmark for online caching strategy design.

Logarithmic utility function in particular is a very common choice of utility function [40], which naturally achieves some level of utility fairness among the contents. 

In this paper, the performance criteria considered are the average content access delay, cache hit ratio, offloading ratio, and the content caching utility. 

The cache space allocation variable y_mn ∈ [0, 1] indicates the state of caching, i.e., y_mn > 0 means that a portion of content m is cached in MT n; otherwise, y_mn = 0.

In short, the authors demonstrate the effectiveness and efficiency of their proposed PAC by varying the number of users, the size of the cache space, and the number of contents.

In the case of a general utility function expression, the utility maximization problem is
P1: \max_{x,y} \sum_{m=1}^{M} f\left( \sum_{n=1}^{N} x_{mn} y_{mn} u_{mn} \right)
s.t.
C1: \sum_{n=1}^{N} x_{mn} = 1, ∀m ∈ {1, ..., M}
C2: x_{mn} ∈ {0, 1}, ∀m ∈ {1, ..., M} and ∀n ∈ {1, ..., N}
C3: 0 ≤ y_{mn} ≤ S, ∀m ∈ {1, ..., M} and ∀n ∈ {1, ..., N}
C4: \sum_{m=1}^{M} y_{mn} ≤ S, ∀n ∈ {1, ..., N}.
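A minimal sketch of checking a candidate pair (x, y) against constraints C1–C4 above; the dimensions and the helper name `is_feasible` are illustrative assumptions, not from the paper:

```python
import numpy as np

def is_feasible(x, y, S, tol=1e-9):
    """Check a candidate (x, y) against constraints C1-C4 of P1.

    x[m, n]: caching deployment variables (relaxed to [0, 1]),
    y[m, n]: cache space allocated to content m at MT n,
    S: cache space per mobile terminal (MT).
    """
    c1 = np.allclose(x.sum(axis=1), 1.0, atol=tol)   # each content fully deployed
    c2 = np.all((x >= -tol) & (x <= 1 + tol))        # deployment variables in [0, 1]
    c3 = np.all((y >= -tol) & (y <= S + tol))        # per-content space within capacity
    c4 = np.all(y.sum(axis=0) <= S + tol)            # total space per MT within capacity
    return bool(c1 and c2 and c3 and c4)

M, N, S = 4, 3, 2.0
x = np.full((M, N), 1.0 / N)   # spread each content evenly over the MTs
y = np.full((M, N), S / M)     # equal cache space allocation
assert is_feasible(x, y, S)
```

Equal cache space allocation y_mn = S/M, as used in the paper's decoupling step, trivially satisfies C3 and C4.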

Their work fills the gap by carefully considering the user interest similarity and the D2D transmission coverage region when defining the content caching utility, thereby improving the network performance via the caching deployment problem optimization. 

In addition, due to the limited storage capacity of an MT in actual application scenarios, the total data volume of all contents is much larger than the cache space available to each MT.

Iteration: in the t-th iteration of the gradient projection algorithm for content m, the procedure is as follows.
Step 1: the macro BS obtains the MT n* = \arg\max_n (\log(S u_{mn}) − λ_n(t)); it then sets x_{mn*} > 0 and updates M_{n*}(t+1) = \sum_{m=1}^{M} x_{mn*}.
Step 2: the macro BS updates the value of M_n(t+1) according to problem (24); setting its gradient to zero under the constraint M_n ≤ M, i.e., λ_n − 1 − \log M_n = 0, yields M_n = e^{λ_n(t)−1}, so the value of M_n is updated by M_n(t+1) = \min\{ M, e^{λ_n(t)−1} \}.
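The two steps above, together with a projected dual update on λ, could be sketched roughly as follows; the step size, iteration count, and the utility matrix `u` are illustrative assumptions, and the hard per-content assignment in Step 1 is a simplification:

```python
import numpy as np

def dual_gradient_projection(u, S, step=0.05, iters=500):
    """Rough sketch of the dual decomposition iteration described above.

    u[m, n]: cache utility of content m at MT n (assumed positive);
    S: cache space per MT. Returns the deployment x and the
    auxiliary per-MT variables Mn.
    """
    M, N = u.shape
    lam = np.zeros(N)  # dual variables, one per MT
    for _ in range(iters):
        # Step 1: each content picks the MT maximizing log(S*u_mn) - lambda_n.
        x = np.zeros((M, N))
        best = np.argmax(np.log(S * u) - lam, axis=1)
        x[np.arange(M), best] = 1.0
        # Step 2: closed-form update of Mn, capped at the number of contents M.
        Mn = np.minimum(M, np.exp(lam - 1.0))
        # Dual update: move lambda against the gradient Mn - sum_m x_mn,
        # projected onto lambda >= 0.
        lam = np.maximum(0.0, lam - step * (Mn - x.sum(axis=0)))
    return x, Mn

rng = np.random.default_rng(0)
x, Mn = dual_gradient_projection(rng.random((6, 3)) + 0.1, S=4.0)
assert np.allclose(x.sum(axis=1), 1.0)  # each content assigned to exactly one MT
```

The dual variable λ_n acts as a congestion price on MT n: the more contents select MT n, the larger λ_n grows, discouraging further contents from choosing it.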

The derivative of the dual function D(λ) in (22) is given by
∂D/∂λ_n (λ) = M_n(λ) − \sum_m x_{mn}(λ). (30)
In the primal problem, M_n = \sum_{m=1}^{M} x_{mn} ≤ M, where M is the total number of contents.

So the optimization problem of the multiple caching case is
P5: \max_{x} \sum_{m=1}^{M} \sum_{n=1}^{N} x_{mn} \log\left( \frac{S u_{mn}}{\sum_{m=1}^{M} x_{mn}} \right)
s.t. C1, C5: 0 ≤ x_{mn} ≤ 1. (18)
This physical relaxation makes (18) convex and decouples the caching deployment from the cache space allocation.
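A small sketch of evaluating the relaxed objective in (18) for a candidate deployment x; the variable names and the toy utility matrix are my own assumptions:

```python
import numpy as np

def p5_objective(x, u, S, eps=1e-12):
    """Objective of the relaxed problem P5:
    sum_{m,n} x_mn * log( S * u_mn / sum_m x_mn ),
    i.e., each MT's cache space S split equally among its cached contents.
    """
    Mn = np.maximum(x.sum(axis=0), eps)  # contents cached at each MT
    return float(np.sum(x * np.log(np.maximum(S * u / Mn, eps))))

S = 2.0
u = np.array([[1.0, 2.0],
              [2.0, 1.0]])
x_match    = np.array([[0.0, 1.0], [1.0, 0.0]])  # each content at its high-utility MT
x_mismatch = np.eye(2)                           # each content at its low-utility MT
assert p5_objective(x_match, u, S) > p5_objective(x_mismatch, u, S)
```

The `S * u_mn / sum_m x_mn` term reflects the equal cache space allocation: the more contents an MT caches, the less space (and hence utility) each of them receives.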

The popularity of the M contents follows a Zipf-like distribution, as in previous studies [43], and the content size v is set to 1024 bytes.
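Such a Zipf-like popularity profile can be generated as follows; the skew parameter alpha = 0.8 is an illustrative assumption, not the paper's setting:

```python
import numpy as np

def zipf_popularity(M, alpha=0.8):
    """Zipf-like popularity: p_m proportional to 1 / m^alpha, m = 1..M."""
    ranks = np.arange(1, M + 1)
    weights = ranks ** (-alpha)
    return weights / weights.sum()  # normalize to a probability distribution

p = zipf_popularity(1000)
assert np.isclose(p.sum(), 1.0)
assert p[0] > p[-1]  # the most popular content dominates the tail
```

Larger alpha concentrates requests on a few hot contents, which generally makes caching more effective; smaller alpha flattens the distribution.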