Distributed Maintenance of Cache Freshness in
Opportunistic Mobile Networks
Wei Gao and Guohong Cao
Department of Computer Science and Engineering
The Pennsylvania State University
University Park, PA 16802
{weigao,gcao}@cse.psu.edu
Mudhakar Srivatsa and Arun Iyengar
IBM T. J. Watson Research Center
Hawthorne, NY 10532
{msrivats, aruni}@us.ibm.com
Abstract—Opportunistic mobile networks consist of personal
mobile devices which are intermittently connected with each
other. Data access can be provided to these devices via cooperative
caching without support from the cellular network infrastructure,
but only limited research has been done on maintaining the
freshness of cached data which may be refreshed periodically
and is subject to expiration. In this paper, we propose a scheme
to efficiently maintain cache freshness. Our basic idea is to let
each caching node be only responsible for refreshing a specific
set of caching nodes, so as to maintain cache freshness in a
distributed and hierarchical manner. Probabilistic replication
methods are also proposed to analytically ensure that the fresh-
ness requirements of cached data are satisfied. Extensive trace-
driven simulations show that our scheme significantly improves
cache freshness, and hence ensures the validity of data access
provided to mobile users.
I. INTRODUCTION
In recent years, personal hand-held mobile devices such
as smartphones are capable of storing, processing and dis-
playing various types of digital media contents including
news, music, pictures or video clips. It is hence important
to provide efficient data access to mobile users with such
devices. Opportunistic mobile networks, which are also known
as Delay Tolerant Networks (DTNs) [13] or Pocket Switched
Networks (PSNs) [20], are exploited for providing such data
access without support of cellular network infrastructure. In
these networks, it is generally difficult to maintain end-to-
end communication links among mobile users. Mobile users
are only intermittently connected when they opportunistically contact each other, i.e., move into the communication range of the short-range radio (e.g., Bluetooth, WiFi) of their smartphones.
Data access can be provided to mobile users via cooperative
caching. More specifically, data is cached at mobile devices
based on the query history, so that queries for the data in
the future can be satisfied with less delay. Currently, research
efforts have been focusing on determining the appropriate
caching locations [27], [19], [17] or the optimal caching
policies for minimizing the data access delay [28], [22].
However, there is only limited research on maintaining the freshness of cached data in the network, despite the fact that media contents may be refreshed periodically. In practice, the refreshing frequency varies according to the specific content characteristics. For example, the local weather report is usually refreshed daily, but the media news at the websites of CNN or the New York Times may be refreshed hourly. In such cases, the versions of cached data in the network may be out-of-date, or even be completely useless due to expiration.

(This work was supported in part by the US National Science Foundation (NSF) under grant number CNS-0721479, and by Network Science CTA under grant W911NF-09-2-0053.)
The maintenance of cache freshness in opportunistic mo-
bile networks is challenging due to the intermittent network
connectivity and subsequent lack of information about cached
data. First, there may be multiple data copies being cached in
the network, so as to ensure timely response to user queries.
Without persistent network connectivity, it is generally difficult
for the data source to obtain information about the caching
locations or current versions of the cached data. It is therefore
challenging for the data source to determine “where to” and “how to” refresh the cached data. Second, the opportunistic network connectivity increases the uncertainty of data trans-
mission and complicates the estimation of data transmission
delay. It is therefore difficult to determine whether the cached
data can be refreshed on time.
In this paper, we propose a scheme to address these chal-
lenges and to efficiently maintain freshness of the cached data.
Our basic idea is to organize the caching nodes (in the rest of this paper, the terms “devices” and “nodes” are used interchangeably) as a tree structure during data access, and let each caching node be responsible for refreshing the data cached at its children in a distributed and hierarchical manner. The cache freshness is also improved when the caching nodes opportunistically contact each other. To the best of our knowledge, our work is the first which specifically focuses on cache freshness in opportunistic mobile networks.
Our detailed contributions are as follows:
∙ We investigate the refreshing patterns of realistic web
contents. We observe that the distributions of inter-
refreshing time of the RSS feeds from major news
websites exhibit hybrid characteristics of exponential and
power-law, which have been validated by both empirical
and analytical evidence.
∙ Based on the experimental investigation results, we analytically measure the utility of data updates for refreshing the cached data via opportunistic node contacts. These utilities are calculated based on a probabilistic model to measure cache freshness. They are then used to opportunistically replicate data updates and analytically ensure that the freshness requirements of cached data can be satisfied.
The rest of this paper is organized as follows. In Section II we briefly review the existing work. Section III provides an overview of the models and caching scenario we use, and also highlights our basic idea. Section IV presents our experimental investigation results on the refreshing patterns of real websites. Sections V and VI describe the details of our proposed cache refreshing schemes. The results of trace-driven performance evaluations are shown in Section VII, and Section VIII concludes the paper.
II. RELATED WORK
Due to the intermittent network connectivity in opportunistic
mobile networks, data is forwarded in a “carry-and-forward”
manner. Node mobility is exploited to let nodes physically
carry data as relays, and forward data opportunistically when
contacting others. The key problem is hence how to select the
most appropriate nodes as relays, based on the prediction of node contacts in the future. Some forwarding schemes make such predictions based on node mobility patterns [9], [33], [14]. In some other schemes [4], [1], the stochastic node contact process is exploited for better prediction accuracy. Social contact patterns of mobile users, such as centrality and community structures, have also been exploited for relay selection [10], [21], [18].
Based on this opportunistic communication paradigm, data
access can be provided to mobile users in various ways. In
some schemes [23], [16], data is actively disseminated to
specific users based on their interest profiles. Publish/subscribe
systems [32], [24] are also used for data dissemination by ex-
ploiting social community structures to determine the brokers.
Caching is another way to provide data access. Determining
appropriate caching policies in opportunistic mobile networks
is complicated by the lack of global network information.
Some research efforts focus on improving data accessibility
from infrastructure networks such as WiFi [19] or the Internet [27], and some others study peer-to-peer data sharing among
mobile nodes. In [17], data is cached at specific nodes which
can be easily accessed by others. In [28], [22], caching policies
are dynamically determined based on data importance, so that
the aggregate utility of mobile nodes can be maximized.
When the versions of cached data in the network are het-
erogeneous and different from that of the source data, research
efforts have been focusing on maintaining the consistency of
these cache versions [7], [11], [5], [6]. Being different from
existing work, in this paper we focus on ensuring the freshness
of cached data, i.e., the version of any cached data should be
as close to that of the source data as possible. [22] discussed the practical scenario in which data is periodically refreshed, but did not provide specific solutions for maintaining cache freshness. We propose methods to maintain cache freshness in a distributed and hierarchical manner, and analytically ensure that the freshness requirement of cached data can be satisfied.
Fig. 1. Data Access Tree (DAT). Each node in the DAT accesses data when
it contacts its parent node in the DAT.
III. OVERVIEW
A. Models
1) Network Model: Opportunistic contacts among nodes are described by a network contact graph G(V, E), where the contact process between a node pair i, j ∈ V is modeled as an edge e_{ij} ∈ E. The characteristics of an edge e_{ij} ∈ E are determined by the properties of inter-contact time among nodes. Similar to previous work [1], [34], we consider the pairwise node inter-contact time as exponentially distributed. Contacts between nodes i and j then form a Poisson process with contact rate λ_{ij}, which is calculated in real time from the cumulative contacts between nodes i and j.
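The paper does not give a closed form for this online estimate. One plausible reading, shown below purely as a sketch, is the maximum-likelihood estimator for a Poisson contact process: the number of observed inter-contact intervals divided by the observation window. The class and method names are ours, not the paper's.

```python
from collections import defaultdict

class ContactRateEstimator:
    """Online estimate of pairwise contact rates lambda_ij (contacts per hour),
    assuming exponentially distributed inter-contact times (Poisson contacts)."""

    def __init__(self):
        self.contact_counts = defaultdict(int)  # (i, j) -> number of contacts seen
        self.first_seen = {}                    # (i, j) -> time of first contact (hours)
        self.last_seen = {}                     # (i, j) -> time of most recent contact

    def record_contact(self, i, j, t):
        key = tuple(sorted((i, j)))
        self.contact_counts[key] += 1
        self.first_seen.setdefault(key, t)
        self.last_seen[key] = t

    def rate(self, i, j):
        """MLE of lambda_ij: (#inter-contact intervals) / total observed time."""
        key = tuple(sorted((i, j)))
        n = self.contact_counts.get(key, 0)
        if n < 2:
            return 0.0  # not enough contacts to estimate a rate yet
        elapsed = self.last_seen[key] - self.first_seen[key]
        return (n - 1) / elapsed if elapsed > 0 else 0.0

# Usage: est = ContactRateEstimator(); est.record_contact("A", "S", 1.5); est.rate("A", "S")
```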
2) Cache Freshness Model: We focus on ensuring the freshness of cached data, i.e., the version of any cached data should be as close to that of the source data as possible. Letting v_S^t denote the version number of the source data at time t and v_j^t denote that of the data cached at node j, our requirement on cache freshness is probabilistically described as

    P(v_j^t ≥ v_S^{t-Δ}) ≥ p,    (1)

for any time t and any node j. The version number is initialized as 0 when data is first generated and monotonically increased by 1 every time the data is refreshed.
Higher network storage and transmission overhead is generally required for decreasing Δ or increasing p. Hence, our proposed model provides the flexibility to trade off between cache freshness and network maintenance overhead according to the specific data characteristics and applications. For example, news from CNN or the New York Times may be refreshed frequently, and a smaller Δ (e.g., 1 hour) should be applied accordingly. In contrast, the local weather report may be updated daily, and the requirement on Δ can hence be relaxed to avoid unnecessary network cost. The value of p may be flexible based on user interest in the data. However, there are cases where an application might have specific requirements on Δ and p to achieve sufficient levels of data freshness.
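As a concrete reading of requirement (1), the following sketch (ours, not from the paper) estimates the freshness probability empirically from logged version histories of the source and of one caching node.

```python
import bisect

def version_at(history, t):
    """history: sorted list of (time, version); return the version in effect at time t."""
    times = [h[0] for h in history]
    idx = bisect.bisect_right(times, t) - 1
    return history[idx][1] if idx >= 0 else 0

def freshness_probability(source_hist, cache_hist, delta, sample_times):
    """Empirical estimate of P(v_j^t >= v_S^{t - delta}) over the given sample times."""
    hits = sum(
        version_at(cache_hist, t) >= version_at(source_hist, t - delta)
        for t in sample_times
    )
    return hits / len(sample_times)

# Example: source refreshed hourly, cached copy refreshed with some lag.
source = [(0, 0), (1, 1), (2, 2), (3, 3)]
cache = [(0, 0), (1.4, 1), (2.6, 2)]
print(freshness_probability(source, cache, delta=1.0,
                            sample_times=[x / 10 for x in range(0, 40)]))
```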
3) Data Update Model: Whenever data is refreshed, the
data source computes the difference between the current and
previous versions and generates a data update. Cached data is refreshed by such updates instead of the complete data for better storage and transmission efficiency. This technique is called delta encoding, which has been applied in web caching for reducing Internet traffic [26].

Fig. 2. Distributed and hierarchical maintenance of cache freshness: (a) intentional and opportunistic refreshing; (b) temporal sequence of data access and refreshing operations.
Letting u_{ij} denote the update of data from version i to version j, we assume that any caching node is able to refresh the cached data as d_i ⊗ u_{ij} → d_j, where d_i and d_j denote the data with versions i and j, respectively. We also assume that any node is able to compute u_{ij} from d_i and d_j.

When data has been refreshed multiple times, various updates for the same data may co-exist in the network. We assume that any node is able to merge consecutive data updates, i.e., u_{ij} ⊕ u_{jk} → u_{ik}. However, d_j cannot be refreshed to d_k by u_{ik} even if j > i. For example, u_{14}, which is produced by merging u_{13} and u_{34}, cannot be used to refresh d_3 to d_4.
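The operators ⊗ (apply) and ⊕ (merge) above are abstract. The sketch below (ours; the string-based delta is only a stand-in for real delta encoding) illustrates the version bookkeeping they imply, including the rule that u_{14} cannot refresh d_3.

```python
from dataclasses import dataclass

@dataclass
class Data:
    version: int
    content: str

@dataclass
class Update:
    """Delta u_{ij}: refreshes data of version i to version j (i < j)."""
    from_version: int
    to_version: int
    delta: str  # placeholder for the encoded difference

def apply_update(d: Data, u: Update) -> Data:
    """d_i (x) u_{ij} -> d_j: only applicable when the versions match exactly."""
    if d.version != u.from_version:
        raise ValueError(f"update {u.from_version}->{u.to_version} "
                         f"cannot refresh data at version {d.version}")
    return Data(version=u.to_version, content=d.content + u.delta)

def merge_updates(u1: Update, u2: Update) -> Update:
    """u_{ij} (+) u_{jk} -> u_{ik}: only consecutive updates can be merged."""
    if u1.to_version != u2.from_version:
        raise ValueError("updates are not consecutive")
    return Update(u1.from_version, u2.to_version, u1.delta + u2.delta)

# Example: u_14 = merge(u_13, u_34) refreshes d_1 but not d_3.
u13, u34 = Update(1, 3, "+a"), Update(3, 4, "+b")
u14 = merge_updates(u13, u34)
d1 = Data(1, "base")
print(apply_update(d1, u14).version)  # 4
# apply_update(Data(3, "..."), u14) would raise, matching the paper's example.
```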
B. Caching Scenario
Mobile nodes share data generated by themselves or ob-
tained from the Internet. In this paper, we consider a generic
caching scenario which is also used in [22]. The query
generated by a node is satisfied as soon as this node contacts
some other node caching the data. During the mean time,
the query is stored at the requesting node. After the query
is satisfied, the requesting node caches the data locally for
answering possible queries in the future. Each cached data
item is associated with a finite lifetime and is automatically
removed from cache when it expires. The data lifetime may
change each time the cached data is refreshed.
In practice, when multiple data items with varied popularity
compete for the limited buffer of caching nodes, more popular
data is prioritized to ensure that the cumulative data access
delay is minimized. Such prioritization is generally formulated
as a knapsack problem [17] and can be solved in pseudo-
polynomial time using a dynamic programming approach
[25]. Hence, the rest of this paper will focus on ensuring
the freshness of cached copies of a specific data item. The
consideration of multiple data items and limited node buffer
is orthogonal to the major focus of this paper.
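The knapsack formulation mentioned above is cited rather than spelled out. As a hedged sketch of the pseudo-polynomial dynamic program it refers to (treating each data item's buffer footprint as its weight and a popularity-derived benefit as its value; all names are ours), one might write:

```python
def knapsack_cache(values, sizes, capacity):
    """0/1 knapsack DP in O(n * capacity): choose which data items to cache so
    that total value (e.g., expected saved access delay) is maximized within
    the node's buffer capacity. Returns (best value, chosen item indices)."""
    n = len(values)
    best = [0] * (capacity + 1)                       # best[c] = max value with budget c
    keep = [[False] * (capacity + 1) for _ in range(n)]
    for i in range(n):
        for c in range(capacity, sizes[i] - 1, -1):   # iterate downward for 0/1 semantics
            cand = best[c - sizes[i]] + values[i]
            if cand > best[c]:
                best[c] = cand
                keep[i][c] = True
    # Backtrack to recover the chosen items.
    chosen, c = [], capacity
    for i in range(n - 1, -1, -1):
        if keep[i][c]:
            chosen.append(i)
            c -= sizes[i]
    return best[capacity], sorted(chosen)

print(knapsack_cache(values=[10, 7, 5], sizes=[4, 3, 2], capacity=5))  # (12, [1, 2])
```

The per-item values and sizes would come from data popularity and size; that mapping is left open here, as in the paper.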
In the above scenario, data is essentially disseminated
among nodes interested in the data when they contact each
other, and these nodes form a “Data Access Tree (DAT)” as shown in Figure 1. Queries of nodes A and B are satisfied when they contact the data source S. Data cached at A and B are then used for satisfying queries from nodes C, D and E.

Due to intermittent network connectivity, each node in the DAT only has knowledge about data cached at its children. For example, after having its query satisfied by S, A may lose its connection with S due to mobility, and hence A is unaware of the data cached at nodes B, D and E. Similarly, S may only be aware of data cached at nodes A and B. Such limitation makes it challenging to maintain cache freshness, because it is difficult for the data source to determine “where to” and “how to” refresh the cached data.
C. Basic Idea
Our basic idea for maintaining cache freshness is to refresh
the cached data in a distributed and hierarchical manner. As
illustrated in Figure 2, this refreshing process is split into
two parts, i.e., the intentional refreshing and the opportunistic
refreshing, according to whether the refreshing node has the
knowledge about the cached data to be refreshed.
In intentional refreshing, each node is only responsible for refreshing data cached at its children in the DAT. For example, in Figure 2(a) node S is only responsible for refreshing data cached at A and B. Since A and B obtain their cached data from S, S has knowledge about the versions of their cached data and is able to prepare the appropriate data updates accordingly. In the example shown in Figure 2(b), S refreshes data cached at A and B using updates u_{23} and u_{13}, when S contacts A and B at times t_3 and t_4 respectively. In Section V, these updates are also opportunistically replicated to ensure that they can be delivered to A and B on time. Particularly, the topology of the DAT may change due to the expiration of cached data. When A is removed from the DAT due to cache expiration, its child C only re-connects to the DAT and gets updated when C contacts another node in the DAT.
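The paper defers the full protocol to Section V; purely as an illustration of the bookkeeping this implies (class and method names are ours, not the paper's), a caching node might remember which version each child received and prepare a delta on the next contact:

```python
class DATNode:
    """Sketch of intentional refreshing: a node remembers which version it
    handed to each child and hands over a delta on the next contact."""

    def __init__(self, node_id, version=0):
        self.node_id = node_id
        self.version = version     # version of the locally cached data
        self.child_versions = {}   # child id -> version last delivered to it

    def serve_query(self, child_id):
        """A child's query is satisfied: record the version it received."""
        self.child_versions[child_id] = self.version
        return self.version

    def on_contact_child(self, child_id, make_update):
        """On contact with a known child, prepare the update u_{old,new}.
        make_update(i, j) is assumed to build the delta from version i to j."""
        old = self.child_versions.get(child_id)
        if old is None or old >= self.version:
            return None            # nothing to refresh
        update = make_update(old, self.version)
        self.child_versions[child_id] = self.version
        return update

# Example mirroring Fig. 2: S caches version 3 and delivered version 1 to B earlier.
S = DATNode("S", version=3)
S.child_versions["B"] = 1
print(S.on_contact_child("B", make_update=lambda i, j: f"u_{i}{j}"))  # u_13
```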
In opportunistic refreshing, a node refreshes any cached data with older versions whenever possible upon opportunistic contact. For example, in Figure 2(a), when node A contacts node D at time t_6, A updates the data cached at D from d_1 to d_3. Since A does not know the version of the data cached at D, it cannot prepare u_{13} for D in advance (the update u_{13} can only be calculated using d_1 and d_3). Instead, A has to transmit the complete data d_3 to D with higher transmission overhead. In Section VI, we propose to probabilistically determine whether to transmit the complete data according to the chance of satisfying the requirement of cache freshness, so as to optimize the tradeoff between cache freshness and network transmission overhead.

Fig. 3. CCDF of inter-refreshing time of individual RSS feeds: (a) CNN Top Stories, (b) BBC Politics, (c) NYTimes Sports, (d) Business Week Daily.

TABLE I
NEWS UPDATES RETRIEVED FROM WEB RSS FEEDS

No.  RSS feed               Number of updates   Avg. inter-refreshing time (hours)
 1   CNN Top Stories               2051              0.2159
 2   NYTimes US                    4545              0.0954
 3   CNN Politics                   623              0.7166
 4   BBC Politics                   827              0.5429
 5   ESPN Sports                   2379              0.1856
 6   NYTimes Sports                3344              0.1355
 7   Business Week Daily           4783              0.0948
 8   Google News Business          7266              0.061
 9   Weather.com NYC                555              0.8247
10   Google News ShowBiz           5483              0.0808
11   BBC ShowBiz                    531              0.8506
IV. REFRESHING PATTERNS OF WEB CONTENTS
In this section, we investigate the refreshing patterns of real-
istic web contents, as well as their temporal variations during
different time periods in a day. These patterns highlight the
homogeneity of data refreshing behaviors among different data
sources and categories, and suggest appropriate calculation of
utilities of data updates for refreshing cached data.
A. Datasets
We investigate the refreshing patterns of categorized web
news. We dynamically retrieved news updates from news
websites including CNN, New York Times, BBC, Google
News, etc., by subscribing to their public RSS feeds. During the 3-week experiment period between 10/3/2011 and 10/21/2011, we retrieved a total of 32,787 RSS updates from
11 RSS feeds in 7 news categories. The information about
these RSS feeds and retrieved news updates is summarized in
Table I, which shows that the RSS feeds differ in their numbers
of updates and the update frequencies.
B. Distribution of Inter-Refreshing Time
We provide both empirical and analytical evidence of a dichotomy in the Complementary Cumulative Distribution Function (CCDF) of the inter-refreshing time, which is defined as the time interval between two consecutive news updates from the same RSS feed. Our results show that up to a boundary on the order of several minutes, the decay of the CCDF is well approximated as exponential. In contrast, the decay exhibits power-law characteristics beyond this boundary.

Fig. 4. Aggregate CCDF of the inter-refreshing time in log-log scale.
1) Aggregate distribution: Figure 4 shows the aggregate
CCDF of inter-refreshing time for all the RSS feeds, in log-
log scale. The CCDF values exhibit slow decay over the range
spanning from a few seconds to 0.3047 hour. It suggests that
around 90% of inter-refreshing time falls into this range and
follows an exponential distribution. Figure 4 also shows that
the CCDF values of inter-refreshing time within this range is
accurately approximated by the random samples drawn from
an exponential distribution with the average inter-refreshing
time (0.1517 hours) as parameter.
For the remaining 10% of inter-refreshing time with values
larger than the boundary, the CCDF values exhibit linear decay
which suggests a power-law tail. To better examine such tail
characteristics, we also plot the CCDF of a generalized Pareto
distribution with the shape parameter ξ = 0.5, location parameter μ = 0.1517 and scale parameter σ = μ · ξ = 0.0759. As
shown in Figure 4, the Pareto CCDF closely approximates that
of the inter-refreshing time beyond the boundary. Especially
when inter-refreshing time is longer than 1 hour, the two
curves almost overlap with each other.
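To make the comparison above concrete, the sketch below (ours) overlays an empirical CCDF with the exponential and generalized Pareto CCDFs using the parameters reported in this subsection; the synthetic samples only stand in for the RSS traces of Table I.

```python
import numpy as np
from scipy.stats import expon, genpareto
import matplotlib.pyplot as plt

def empirical_ccdf(samples):
    x = np.sort(samples)
    y = 1.0 - np.arange(len(x)) / len(x)  # fraction of samples >= each sorted value
    return x, y

# inter_refresh: inter-refreshing times in hours (synthetic placeholder data here;
# the paper uses the RSS traces summarized in Table I).
rng = np.random.default_rng(0)
inter_refresh = np.concatenate([
    rng.exponential(scale=0.1517, size=9000),            # body (~90% of samples)
    genpareto.rvs(c=0.5, loc=0.1517, scale=0.0759,
                  size=1000, random_state=0),            # tail (~10% of samples)
])

x, y = empirical_ccdf(inter_refresh)
grid = np.logspace(-3, 2, 200)
plt.loglog(x, y, label="empirical CCDF")
plt.loglog(grid, expon.sf(grid, scale=0.1517), "--", label="exponential (mean 0.1517 h)")
plt.loglog(grid, genpareto.sf(grid, c=0.5, loc=0.1517, scale=0.0759),
           ":", label="generalized Pareto (xi = 0.5)")
plt.xlabel("inter-refreshing time (hours)")
plt.ylabel("CCDF")
plt.legend()
plt.show()
```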
2) Distributions of individual RSS feeds: Surprisingly,
we found that the distributions of inter-refreshing time of
individual RSS feeds exhibit similar characteristics with that
of the aggregate distribution. For example, for the two RSS feeds in Figure 3 with different news categories, the CCDF decay of each RSS feed is analogous to that of the aggregate CCDF in Figure 4. Figure 3 shows that the boundaries for different RSS feeds are heterogeneous and mainly determined by the average inter-refreshing time. These boundaries are summarized in Table II.

TABLE II
NUMERICAL RESULTS FOR DISTRIBUTIONS OF INTER-REFRESHING TIME OF INDIVIDUAL RSS FEEDS

RSS feed No.   Boundary (hours)   Exponential: updates (%)   Exponential: α (%)   Generalized Pareto: updates (%)   Generalized Pareto: α (%)
 1             0.2178             91.07                      4.33                  9.93                             5.37
 2             0.3245             84.24                      6.71                 15.76                             3.28
 3             1.9483             88.12                      7.24                 11.88                             3.65
 4             1.6237             86.75                      5.69                 13.25                             4.45
 5             0.2382             93.37                      6.54                  6.63                             4.87
 6             0.2754             92.28                      6.73                  7.72                             2.12
 7             0.3112             87.63                      5.26                 12.37                             3.13
 8             0.2466             89.37                      8.45                 10.63                             2.64
 9             1.7928             90.22                     11.62                  9.78                             8.25
10             0.1928             88.57                      6.75                 11.43                             3.58
11             2.0983             83.32                      7.44                 16.68                             3.23
To quantitatively justify the characteristics of exponential and power-law decay in the CCDF of individual RSS feeds, we perform a Kolmogorov-Smirnov goodness-of-fit test [30] on each of the 11 RSS feeds listed in Table I. For each RSS feed, we collect the inter-refreshing times smaller than its boundary and test whether the null hypothesis “these inter-refreshing times are exponentially distributed” can be accepted. A similar test is performed on the inter-refreshing times with larger values for the generalized Pareto distribution.

The significance levels (α) for these null hypotheses being accepted are listed in Table II. The lower the significance level is, the more confident we are that the corresponding hypothesis is statistically true. As shown in Table II, for all the RSS feeds, the probability for erroneously accepting the null hypotheses is lower than 10%, which is the significance level usually used for statistical hypothesis testing [8]. Particularly, the significance levels for accepting a generalized Pareto distribution are generally better than those for accepting an exponential distribution.
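A rough sketch of this two-part goodness-of-fit procedure with SciPy is given below (ours). The per-feed Pareto parameterization is not stated in the text, so the sketch reuses the aggregate-fit convention (ξ = 0.5, location at the mean inter-refreshing time, σ = μ · ξ) as an assumption.

```python
import numpy as np
from scipy.stats import kstest

def split_fit_test(inter_refresh, boundary, xi=0.5):
    """Split inter-refreshing times at the per-feed boundary and run
    Kolmogorov-Smirnov tests: exponential below the boundary,
    generalized Pareto above it."""
    inter_refresh = np.asarray(inter_refresh)
    body = inter_refresh[inter_refresh < boundary]
    tail = inter_refresh[inter_refresh >= boundary]
    mu = inter_refresh.mean()  # average inter-refreshing time of the feed

    # Exponential fit for the body: the sample mean serves as the scale parameter.
    exp_stat, exp_p = kstest(body, "expon", args=(0, body.mean()))
    # Generalized Pareto fit for the tail, parameterized as in the aggregate fit.
    gp_stat, gp_p = kstest(tail, "genpareto", args=(xi, mu, mu * xi))
    return {"exponential": (exp_stat, exp_p), "genpareto": (gp_stat, gp_p)}

# Example with synthetic data shaped like the observed dichotomy.
rng = np.random.default_rng(1)
samples = np.concatenate([rng.exponential(0.15, 900),
                          0.3 + rng.pareto(2.0, 100) * 0.1])
print(split_fit_test(samples, boundary=0.3))
```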
C. Temporal Variations
We are also interested in the temporal variations of the RSS feeds' updating patterns. Figure 5 shows the temporal distribution of news updates from RSS feeds over different hours in a day. We observe that the characteristics of such temporal variation are heterogeneous across different RSS feeds. For example, the majority of news updates from NYTimes and ESPN are generated during the time period from the afternoon to the evening. Comparatively, the news updates from Google News are evenly distributed among different hours in a day.

Fig. 5. Temporal distribution of news updates during different hours in a day: (a) NYTimes US, (b) CNN Politics, (c) ESPN Sports, (d) Google News Business.

To better quantify the skewness of such temporal variation, we calculate the standard deviation of the numbers of news updates during different hours in a day for each of the 11 RSS feeds listed in Table I, and the calculation results are shown in Figure 6. By comparing Figure 6 with Figure 5, we conclude that the temporal distributions of news updates from most RSS feeds are highly skewed. The transient distribution of inter-refreshing time of an RSS feed during specific time periods hence may differ a lot from its cumulative distribution. Such temporal variation may affect the performance of maintaining cache freshness, and will be evaluated in detail via trace-driven simulations in Section VII.

Fig. 6. Standard deviation of the numbers of news updates during different hours in a day.
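The skewness measure used here reduces to a per-hour-of-day histogram and its standard deviation; a minimal sketch (ours, assuming a list of update timestamps per feed) follows.

```python
from datetime import datetime
import numpy as np

def hourly_update_std(timestamps):
    """Standard deviation of the number of updates per hour of day (0-23),
    used as a simple skewness indicator of a feed's temporal update pattern."""
    counts = np.zeros(24, dtype=int)
    for ts in timestamps:
        counts[ts.hour] += 1
    return counts, counts.std()

# Example: a feed that posts mostly in the afternoon vs. one that posts evenly.
afternoon_feed = [datetime(2011, 10, 3, h) for h in [13, 14, 14, 15, 16, 16, 17]]
even_feed = [datetime(2011, 10, 3, h) for h in range(0, 24, 4)]
print(hourly_update_std(afternoon_feed)[1])  # larger std -> skewed pattern
print(hourly_update_std(even_feed)[1])       # smaller std -> even pattern
```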
V. INTENTIONAL REFRESHING
In this section, we explain how to ensure that data updates
are delivered to the caching nodes on time, so that the
freshness requirements of cached data are satisfied. Based on
investigation results on the distribution of inter-refreshing time
in Section IV, we calculate the utility of each update, which estimates the chance of the freshness requirement being satisfied by this update. Such utility is then used for opportunistic replication
of data updates.



