Lai, K. Y., Tari, Z., & Bertok, P. (2004). Location-aware cache replacement for mobile environments. Globecom '04 IEEE Global Telecommunications Conference, 3441–3447. https://doi.org/10.1109/GLOCOM.2004.1379006
© 2004 IEEE. Repository homepage: https://researchrepository.rmit.edu.au
Location-Aware Cache Replacement for Mobile
Environments
Kwong Yuen Lai Zahir Tari Peter Bertok
School of Computer Science and Information Technology
RMIT University, Melbourne, Australia
Email: {kwonlai, zahirt, pbertok}@cs.rmit.edu.au
Abstract Traditional cache replacement policies rely on the
temporal locality of users’ access pattern to improve cache perfor-
mance. These policies, however, are not ideal in supporting mobile
clients. As mobile clients can move freely from one location to
another, their access pattern not only exhibits temporal locality,
but also exhibits spatial locality. In order to ensure efficient cache
utilisation, it is important to take into consideration the location
and movement direction of mobile clients when performing
cache replacement. In this paper, we propose a mobility-aware
cache replacement policy, called MARS, suitable for wireless
environments. MARS takes into account important factors (e.g.
client access rate, access probability, update probability and
client location) in order to improve the effectiveness of on-
board caching for mobile clients. Test results show that MARS
consistently outperforms existing cache replacement policies and
significantly improves mobile clients’ cache hit ratio.
I. INTRODUCTION
Advances in mobile communication infrastructures and
global positioning technologies have resulted in a new class
of services, referred to as Location Dependent Information
Services (LDIS) [1], becoming available to users. Location
dependent information services provide users with the ability
to access information related to their current location. Ex-
amples of LDIS include providing users with local weather
information, access to news and information about nearby
businesses and facilities, etc.
There are many benefits in providing support for LDIS.
Firstly, with the help of location information, data objects can
be stored at servers nearest to where they are accessed most
frequently. This improves access time and reduces network
traffic. The use of location dependent data also provides con-
venience for mobile users, as mobile devices can automatically
fetch data based on location, ensuring the most relevant
information is provided to the users.
Despite the benefits of LDIS, a number of challenges (e.g.
limited cache space, limited bandwidth, limited client trans-
mission power) [2], [1], [3] must be overcome before these
benefits can be realised. Existing research has shown that
caching is an important technique in combating the limitations
of mobile environments. By caching on mobile devices, data
availability is increased, access speed is improved and the need
for transmission over the wireless channel is reduced. In this
paper, we focus our attention on the issue of cache replacement
for mobile clients and how client-side caching can improve the
performance of mobile devices when utilising LDIS.
Traditional cache replacement policies rely on the temporal
locality of clients’ access pattern to determine which data ob-
jects to replace when a client’s cache becomes full. However,
in mobile networks where clients utilise location dependent
services, the access pattern of mobile clients does not depend
only on the temporal properties of data access, but is also
dependent on the location of data, the location of the clients
and the movement direction of clients [4]. Relying solely on
the temporal properties of data access when making cache
replacement decisions will result in poor cache hit ratio. In
order to improve the performance of mobile clients’ caches,
it is important to consider both temporal and spatial locality (spatial in terms of the geographical location associated with a data object) when making cache replacement decisions.
Many existing cache replacement policies (e.g. [5], [6]) use
cost functions to incorporate different factors including access
frequency, update rate and size of objects; however, very few
of these policies account for the location and movement of mo-
bile clients. Cache replacement policies such as LRU, LFU and
LRU-k [7] only take into account the temporal characteristics
of data access, while policies such as FAR [2] only deal with
the location dependent aspect of cache replacement but neglect
the temporal properties. One policy which does consider both
spatial and temporal properties of data objects is PAID,
proposed in [6]. However, PAID does not take into account
updates to data objects. It also deals with client movement
in a very simplistic way (i.e. only considers client’s current
movement direction).
In this paper, we propose a new gain-based cache replace-
ment policy that takes into account both the temporal and
spatial locality of clients’ access pattern. The proposed strat-
egy, called Mobility-Aware Replacement Scheme (MARS),
ties together various factors that are important when making
cache replacement decisions through a cost function. This
cost function comprises a temporal score and a spatial score.
The temporal score is calculated from access probabilities,
update and query rates, while the spatial score is calculated
based on client location, the location of data objects and
client movement direction. When the cache of a mobile client
becomes full and a new object needs to be cached, the cost
function is used to generate a cost value for each cached object.
The object with the lowest value is evicted from the client’s
cache and replaced by the new object. We show through

simulation that mobile clients using MARS are able to achieve
a significant improvement in cache hit ratio compared to
clients using other existing cache replacement policies.
The rest of this paper is organised as follows. Section II
provides a survey of existing mobility-aware cache replace-
ment policies. Section III describes the location model used in
our work. In Section IV, we propose a new cache replacement
strategy called MARS, suitable for mobile clients using LDIS.
Simulation results are presented in Section V together with an
analysis of the results. This is followed by our conclusion and
a discussion of possible future work in Section VI.
II. RELATED WORK
A. Modelling Location Dependent Data
The issue of modelling location dependent data for mobile
environments was first addressed in [4]. Location dependent
objects can have both temporal replicas and spatial replicas. By
binding queries and spatial replicas to data regions, it becomes
possible for clients to perform location dependent queries.
While the issue of location dependent caching is discussed in
this work, it was not investigated in depth. A more advanced
semantic based location model was proposed in [8], where
attributes of data objects are either location related, or non-
location related. For example, a location related attribute, city,
may take values from a domain containing city names (e.g.
Melbourne, Sydney). While it is possible to define domains
of different granularity, this comes at a higher overhead. It is
also unclear from this work how a user’s location is determined
and mapped to the location-related domains.
B. Cache Replacement Policies
1) Temporal Locality Based Cache Replacement Policies:
There are many existing temporal-based cache replacement
strategies, including LRU, LFU and LRU-K [7]. These re-
placement policies are based on the assumption that clients’
access pattern exhibits temporal locality (i.e. recently accessed
objects are likely to be accessed again in the near future,
objects that were queried frequently in the past will continue
to be queried frequently in the future). While these policies
are suitable for a network with stationary clients, they are
unsuitable for supporting location dependent services. They do
not take into account the location and the movement of mobile
clients and data objects when making cache replacement
decisions. Location dependent data is cached in the same
way as non-location dependent data, resulting in inefficient
cache utilisation for clients who move frequently and access
information based on their location.
FAR (Furthest Away Replacement) [2] is one of the earliest
mobility-aware cache replacement policies. It uses the current
location of mobile clients and the movement direction of
clients to make cache replacement decisions. In FAR, cached
objects are grouped into two sets. Those in the moving
direction of the client are put into the in-direction set, while
those that are not are put into the out-direction set. Data objects
in the out-direction set are always evicted first before those
in the in-direction set. FAR improves the cache hit ratio of
existing temporal based policies as it considers the location
and the future movement of mobile clients. However it only
deals with spatial locality and neglects the temporal properties
of clients’ access pattern. It is also ineffective when mobile
clients change direction frequently, as objects will often be
moved between the in-direction and out-direction sets.
PA (Probability Area) [6] is a cost-based replacement policy,
where each cached object is associated with a cost calculated
with a predefined cost function. When cache replacement takes
place, the data object with the lowest cost is replaced. The cost
function used in PA takes into account the access probabilities
(P_i) of data objects and their valid scope area A(v_i). The cost of an object d_i is defined as $c_i = P_i \cdot A(v_i)$. Although the PA scheme takes into account both the valid scope area of data objects and the temporal property of objects through their access probabilities, it does not consider the location and future movement of clients. This leads to poor cache performance, because objects that are close to the client are often replaced before objects that are far away, simply because their valid scope area is smaller.
PAID (Probability Area Inverse Distance) [6] is an extended
version of PA that deals with the problem of distance. In
PAID, the distance between mobile clients and data objects is included as part of the cost function used for making cache replacement decisions. The cost function is defined as $c_i = \frac{P_i \cdot A(v_i)}{D(v_i)}$, where D(v_i) is the distance from the client to the valid scope v_i. Based on this function, objects that are further away will have their cost reduced by a larger factor compared to those objects nearer to the client. PAID also attempts to cater for client movement direction, by multiplying the cost of objects which are not in the direction of the client's movement by a large factor. Results from [6] show that both PA and PAID perform better than other existing schemes such as LRU and FAR. However, these two schemes suffer a similar problem in that the temporal property of data objects is only represented by one parameter, the access probability. Other
factors such as the cost of retrieving a missing object, the size
of data objects, the combined cost of objects evicted and the
update probabilities have not been accounted for.
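To make the comparison concrete, the following Python sketch computes the PA and PAID cost values from the definitions above; it is an illustration only (the helper names are ours, not from [6]), and the adjustment PAID applies to objects lying away from the client's movement direction is omitted.

```python
import math

def pa_cost(access_prob: float, scope_area: float) -> float:
    """PA: c_i = P_i * A(v_i)."""
    return access_prob * scope_area

def paid_cost(access_prob: float, scope_area: float,
              client_xy, scope_center_xy) -> float:
    """PAID: c_i = P_i * A(v_i) / D(v_i), where D(v_i) is the distance from
    the client to the valid scope (out-of-direction adjustment omitted)."""
    dist = max(math.dist(client_xy, scope_center_xy), 1e-9)  # guard against zero distance
    return access_prob * scope_area / dist
```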
III. THE LOCATION MODEL
We assume that mobile devices are equipped with posi-
tioning capabilities, such as GPS or power-based positioning.
We define a location L as a pair of coordinates, (x, y) where x
and y are the latitude and longitude respectively. There are two
main benefits in representing locations this way. Firstly, since
most current positioning systems are already capable of pro-
viding latitude and longitude coordinates, this location model
can be supported easily without having to upgrade existing
hardware. Secondly, the latitude-longitude model eliminates
ambiguities as any location can be uniquely represented by
a coordinate pair. Our location model only considers two-dimensional space; however, it can easily be extended to cater for three-dimensional space by including a third dimension z. The distance between any two locations is the length of a direct line connecting the two points (i.e. the Euclidean distance).

[Fig. 1. Valid Scope Example: data object d_i with a valid scope centred at (500, 300) and radius 100.]
For example, given a client located at (x1, y1) and a data object located at (x2, y2), the distance between the client and the object is equal to $\sqrt{|x_1 - x_2|^2 + |y_1 - y_2|^2}$.
In a traditional database system, each data object only has one value, even when multiple replicas of an object exist. The value of each object may change over time; however, the value of the replicas is the same at any one time. However, in a wireless environment where location dependent information services are provided, data objects are also bound to their location by their valid scope. The valid scope of an object is the area in which the value of the object is meaningful and correct. For example, a query for the data object local_time will return different results depending on where the query is issued. Another example might be the data object nearest_hotel. We refer to this type of data object as Location Dependent Data (LDD) objects. The value of an LDD object depends on the location from which it is queried.
We will now provide a definition of the valid scope of data objects. The valid scope vs(d_i) of an object d_i is defined as a tuple (Lx_i, Ly_i, range_i), where Lx_i and Ly_i are the centre reference point of vs(d_i) and range_i is the radius defining the maximum Euclidean distance from (Lx_i, Ly_i) within which d_i is valid. This representation of valid scopes is suitable for LDIS in wireless environments because it is small in size, and thus does not introduce a high storage/communication overhead.
As an example, given a data object d_i with a valid scope (500, 300, 100), the value of d_i is valid within a radius of 100 distance units from the point (500, 300). The valid scope of d_i is illustrated in Figure 1.
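As a small illustration of these definitions (the helper and class names below are our own, not from the paper), the Euclidean distance computation and the valid-scope check can be sketched as follows, using the example of Figure 1:

```python
import math
from typing import NamedTuple

class ValidScope(NamedTuple):
    lx: float       # Lx_i, centre reference point x coordinate
    ly: float       # Ly_i, centre reference point y coordinate
    radius: float   # range_i, maximum distance from the centre where d_i is valid

def distance(p1, p2) -> float:
    """Euclidean distance between two (x, y) locations."""
    return math.sqrt((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2)

def is_valid_at(scope: ValidScope, client_xy) -> bool:
    """True if a value with this valid scope is valid at the client's location."""
    return distance((scope.lx, scope.ly), client_xy) <= scope.radius

# Example from Figure 1: data object d_i with valid scope (500, 300, 100).
scope = ValidScope(500, 300, 100)
print(is_valid_at(scope, (550, 330)))  # True: about 58 units from the centre
print(is_valid_at(scope, (700, 300)))  # False: 200 units from the centre
```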
Lastly, a representation of velocity is needed to model the movement of mobile clients. A velocity can be represented by a tuple (Vx, Vy), where Vx is the speed along the x direction and Vy is the speed along the y direction. In our model, we define the x direction as running from west to east, and the y direction as running from south to north.
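Although the paper does not spell this out, a client's expected position after t time units follows directly from this representation; the sketch below is our own illustration under a constant-velocity assumption.

```python
def predicted_position(location, velocity, t: float):
    """Extrapolate a client's (x, y) position after t time units, assuming the
    velocity (Vx, Vy) stays constant; x runs west-to-east, y runs south-to-north."""
    x, y = location
    vx, vy = velocity
    return (x + vx * t, y + vy * t)
```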
IV. A NEW LOCATION-DEPENDENT CACHE REPLACEMENT POLICY - MARS
As mobile clients move from location to location utilising
LDIS, the locality of their access pattern changes dynamically
based on their movement pattern. It is important for cache
replacement policies to dynamically adapt to this change in
access locality to ensure a high cache hit ratio is achieved. In
this section, we propose a new cache replacement policy called
MARS (Mobility-Aware Replacement Scheme), to address the
issue of mobility and caching of location-dependent data.
MARS is a cost based scheme. Associated with each cached
data object is a replacement cost. When a new data object
needs to be cached, and there is insufficient cache space, the
object with the lowest replacement cost is removed until there
is enough space to cache the new object. The calculation of
the replacement cost takes into account both temporal and
spatial properties of information access to cater for the mobile
nature of users in wireless environments. Temporal locality
is accounted for by considering access probability, access
frequency, update probability and update frequency of data
objects. On the other hand, clients’ current location, velocity
and location of data objects are used to account for spatial
locality. Cache size, object size, and retrieval cost are also
included in the cost function to capture the condition of the
client’s cache.
A. System Model
In order to present the MARS policy in detail, we must
first describe the wireless network model used. We consider
a network which consists of both wired and wireless enti-
ties. Wired entities include stationary clients and information
servers. Information servers host copies of data objects that
are of interest to the clients. Also connected to the wired
network are base stations and access points that provide
wireless communication channels needed for mobile clients
to access information on the information servers. The area
covered by each base station or access point is called a cell.
Mobile clients can connect to the network wirelessly while
inside the coverage area of a cell. Although both stationary
and mobile clients exist in our network model, we focus on
mobile users as the goal of our work is on improving cache
performance for these users. Stationary clients can employ any
one of the many existing cache replacement policies designed
for stationary clients.
Mobile clients access data objects at the information servers
by issuing queries at a mean rate of λ queries per time unit.
When answering a query, a client first looks in its own onboard
cache to see if the objects needed are available. If the objects
are cached, the query is resolved immediately without the need
of communicating with the information servers. On the other
hand, if the objects needed are not cached, an uplink request
is sent to the nearest information server to request the objects.
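The query path just described can be sketched as follows; this is a minimal illustration, and the class name and the fetch_from_server callback are our own assumptions.

```python
class MobileClientCache:
    """On-board cache of a mobile client (replacement policy handled elsewhere)."""

    def __init__(self):
        self.cache = {}  # object id -> cached value

    def query(self, obj_id, fetch_from_server):
        if obj_id in self.cache:
            # Cache hit: resolved locally, no wireless transmission needed.
            return self.cache[obj_id]
        # Cache miss: uplink request to the nearest information server.
        value = fetch_from_server(obj_id)
        self.cache[obj_id] = value  # may trigger replacement (Section IV-B)
        return value
```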
We also consider the update pattern of data objects. In our model, we assume objects are only updated on the server side. Updates are generated at a rate of µ updates per time unit. Updates and queries are distributed among data objects following two distributions, Pr(d_i update) and Pr(d_i query). The actual functions used for these distributions are application dependent. For example, in applications where client queries exhibit high locality, Pr(d_i query) may take the form of a Zipf distribution [9].

TABLE I
Cost function parameters

Parameter                      Description
---------                      -----------
d_i                            Data object with ID i
vs(d_i)                        The valid scope of d_i
λ                              Mean query rate
µ                              Mean update rate
L_i = (Lx_i, Ly_i)             Location of d_i
L_m = (Lx_m, Ly_m)             Location of client m
Pr(d_i query)                  Probability that d_i is chosen for a query
Pr(d_i update)                 Probability that d_i is chosen for an update
cacheSize_m                    Size of client m's cache
objSize_i                      Size of d_i
t_current                      Current time
t_{q,i}                        Time d_i was last queried
t_{u,i}                        Time d_i was last updated
c_i                            Cost of retrieving d_i from remote server
V_m = (Vx_m, Vy_m)             Velocity of client m
In applications where updates to data objects are randomly generated, Pr(d_i update) can be represented by a uniform distribution (i.e. Pr(d_i update) = 1/N, where N is the number of data objects on the server).
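As an illustration of these two distributions, the sketch below builds a Zipf-like query distribution and a uniform update distribution over N objects; the Zipf exponent theta is an assumed parameter, not specified in the paper.

```python
def zipf_query_distribution(n_objects: int, theta: float = 0.8) -> list:
    """Pr(d_i query) following a Zipf-like law [9]: probability proportional to 1 / rank^theta."""
    weights = [1.0 / (rank ** theta) for rank in range(1, n_objects + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def uniform_update_distribution(n_objects: int) -> list:
    """Pr(d_i update) = 1/N when updates are generated uniformly at random."""
    return [1.0 / n_objects] * n_objects
```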
In our model, objects are only evicted from a client’s cache
under two conditions. Firstly, invalidation reports are sent to
clients by the server periodically to inform them of objects
that have been updated. Clients use these invalidation reports
to identify out-dated objects and remove them from their cache
to maintain cache consistency. Secondly, when a client’s cache
becomes full and a new object needs to be cached, objects are
removed to make room for the new object.
Table I summarises the various parameters used in our cost
function.
B. The Replacement Cost Function
With the basic parameters defined, we can now describe the replacement cost function used in MARS. The cost of replacing an object d_i in client m's cache is calculated with the following equation:

$$cost(i) = score_{temp}(i) \times score_{spat}(i) \times c_i \qquad (1)$$

where score_temp(i) is the temporal score of the object, score_spat(i) is the spatial score of the object and c_i is the cost of retrieving the object from the remote server. The definitions of score_temp(i) and score_spat(i) are given in Section IV-C and Section IV-D respectively.
The goal of the MARS replacement policy is to determine the set of objects S to evict from the client's cache, such that the cost of the objects evicted is minimised. The problem can be formally defined as follows. Find S, such that

$$\min \sum_{d_i \in S} cost(i) \quad \text{and} \quad \sum_{d_i \in S} objSize_i \geq objSize_{new} \qquad (2)$$

where S ⊆ D and objSize_new is the size of the new object d_new.

This problem can be mapped to the classical 0/1 knapsack optimisation problem in the following form:

$$\max \sum_{d_i \in S} cost(i) \quad \text{and} \quad \sum_{d_i \in S} objSize_i \leq cacheSize_m \qquad (3)$$
The objective of the knapsack problem is to find the set of objects that will maximise $\sum_{d_i \in S} cost(i)$ (i.e. finding the maximum benefit that can be gained) while satisfying the cache size restriction. It is well known that the knapsack problem is NP-complete; however, heuristics and dynamic programming techniques can be used to find a sub-optimal solution in polynomial time. In order to reduce the complexity of MARS and ensure timely cache replacement decisions are made, we have chosen a heuristic algorithm to solve the knapsack problem. When a client needs to insert a new object into its cache and the cache is full, the cached object with the lowest cost is removed until there is enough available space in the cache to store the new object.
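As an illustrative sketch of this heuristic (the class and function names are our own assumptions, and the replacement cost is taken as an already-computed value per Equation 1), the eviction loop might look as follows:

```python
from dataclasses import dataclass

@dataclass
class CachedObject:
    obj_id: int
    size: int     # objSize_i
    cost: float   # cost(i) = score_temp(i) * score_spat(i) * c_i  (Equation 1)

def evict_for_new_object(cache: list, cache_size: int, new_obj_size: int) -> list:
    """Greedy heuristic: remove the lowest-cost cached objects until the
    new object fits, returning the list of evicted objects."""
    evicted = []
    used = sum(obj.size for obj in cache)
    # Consider victims in ascending order of replacement cost.
    for victim in sorted(cache, key=lambda obj: obj.cost):
        if used + new_obj_size <= cache_size:
            break
        cache.remove(victim)
        evicted.append(victim)
        used -= victim.size
    return evicted
```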
C. Temporal Score
The temporal score, score_temp(i), is used in the MARS replacement cost function to capture the temporal locality of data access. We define the temporal score of a data object d_i as:

$$score_{temp}(i) = \frac{t_{current} - t_{u,i}}{t_{current} - t_{q,i}} \times \frac{\lambda_i}{\mu_i} \qquad (4)$$

where $\lambda_i = \lambda \cdot Pr(d_i\ query)$ is the query rate of d_i, and $\mu_i = \mu \cdot Pr(d_i\ update)$ is the update rate of d_i. To avoid division by zero in Equation 4, the first term is set to 1 if the value of $(t_{current} - t_{q,i})$ is 0. Similarly, if µ is 0, then $\lambda_i/\mu_i$ is set to 1.
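A small Python sketch of Equation 4, including the two division-by-zero guards just described (the parameter names are our own):

```python
def temporal_score(t_current: float, t_last_update: float, t_last_query: float,
                   query_rate: float, update_rate: float) -> float:
    """score_temp(i) = ((t_current - t_u,i) / (t_current - t_q,i)) * (lambda_i / mu_i)."""
    # First term falls back to 1 when the object was queried at the current time.
    if t_current == t_last_query:
        recency = 1.0
    else:
        recency = (t_current - t_last_update) / (t_current - t_last_query)
    # Second term falls back to 1 when the update rate is zero.
    frequency = 1.0 if update_rate == 0 else query_rate / update_rate
    return recency * frequency
```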
Two aspects of temporal locality are catered for in the above definition: recency of use and frequency of use. The first term in Equation 4 is a ratio between the time d_i was last updated and the time d_i was last queried. This term deals with the recency of use. It is assumed that objects which were queried recently are likely to be queried again soon, thus $(t_{current} - t_{q,i})$ is used as the denominator to ensure recently queried objects will have a high temporal score. Similarly, $(t_{current} - t_{u,i})$ is placed in the numerator to reduce the temporal score of objects that have been recently updated, as it is likely these objects will be updated again soon and evicted from the client's cache. The second term in the equation caters for the frequency of use. It is a ratio of the query rate and update rate of d_i. Objects that are queried frequently but hardly updated will have a high ratio, while objects that are updated frequently but rarely queried will have a low ratio. This is based on the assumption that clients are likely to continue querying objects that were frequently queried in the past. The update rate has a role in the calculation as it can be viewed as a risk factor. Objects with a high update probability will have a reduced temporal score because they carry more risk of being evicted from the client's cache due to updates from the server than objects with a low update probability.
D. Spatial Score
As mobile clients move within the wireless network and
utilise location dependent information services, their access
pattern exhibits spatial locality. A mobile client is more likely
to access data objects related to its immediate surrounding
and along its path than objects that are far away from the
client’s current location. In order to capture this characteristic,

Citations
- Toward green media delivery: location-aware opportunities and approaches
- A weighted cache replacement policy for location dependent data in mobile environments
- A Predicted Region based Cache Replacement Policy for Location Dependent Data in Mobile Environment
- Energy saving strategies for cooperative cache replacement in mobile ad hoc networks
- Movement prediction based cooperative caching for location dependent information service in mobile ad hoc networks
References
- Web caching and Zipf-like distributions: evidence and implications
- The LRU-K page replacement algorithm for database disk buffering
- Using semantic caching to manage location dependent data in mobile computing
- Cache invalidation and replacement strategies for location-dependent data in mobile environments
- Location dependent query processing
Frequently Asked Questions (14)
Q1. Why does the cache hit ratio increase as the query rate increases?

The cache hit ratio increases as the query rate increases because, when the query rate is high, more queries are executed at each location.

When the percentage of location dependent queries is low (20%), MARS, FAR and PAID achieve a hit ratio of 30% on location dependent queries, compared to 20% achieved by LRU and PA.


In this paper, the authors have presented a mobility-aware cache replacement policy that is efficient in supporting mobile clients using location dependent information services. 

By anticipating clients’ future location when making cache replacement decisions, MARS is able to maintain good performance even for clients travelling at high speed. 

Test results show that MARS provides efficient cache replacement for mobile clients and is able to achieve a 20% improvement in cache hit ratio compared to existing replacement policies.


The probability of a client querying an object with a valid scope centre reference point at L_i is equal to:

$$Pr(i) = \frac{1}{|L_m - L_i|} \times \frac{1}{\sum_{j \in N} \frac{1}{|L_m - L_j|}} \qquad (6)$$

Based on the definition in Equation 6, queries will be distributed among data objects based on their distance from the client's current location.
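A minimal sketch of the distance-based query distribution in Equation 6 (the function name is our own):

```python
import math

def location_dependent_query_probs(client_xy, scope_centers) -> list:
    """Pr(i) = (1 / |L_m - L_i|) * (1 / sum_j 1 / |L_m - L_j|): objects closer to
    the client's current location are queried with higher probability (Equation 6)."""
    inv_dists = [1.0 / max(math.dist(client_xy, c), 1e-9) for c in scope_centers]
    total = sum(inv_dists)
    return [d / total for d in inv_dists]
```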

When the probability of location dependent queries is high, client caches are filled with information relevant to the clients’ current location, resulting in more queries being satisfied by the cache. 

In order to model the utilisation of location dependent services, percentLDQ% of queries performed by clients are location dependent and 1 − percentLDQ% are non-location dependent. 

At high location dependent query probability, LRU performs slightly better than MARS because more cache space is used by MARS to store objects obtained from location-dependent queries, thus reducing the number of objects cached from non-location-dependent queries.


The graph in Figure 5 shows that the mobility-aware cache replacement policies perform significantly better than the temporal-based policy (LRU) when it comes to location dependent queries.

