TL;DR: This article investigates the opportunities of exploiting location awareness to enable green end-to-end media delivery, discussing and proposing approaches for location-based adaptive video quality planning, in-network caching, content prefetching, and long-term radio resource management.
Abstract: Mobile media has undoubtedly become the predominant source of traffic in wireless networks. The result is not only congestion and poor quality of experience, but also an unprecedented energy drain at both the network and user devices. In order to sustain this continued growth, novel disruptive paradigms of media delivery are urgently needed. We envision that two key contemporary advancements can be leveraged to develop greener media delivery platforms: The proliferation of navigation hardware and software in mobile devices has created an era of location awareness, where both the current and future user locations can be predicted; and the rise of context-aware network architectures and self-organizing functionalities is enabling context signaling and in-network adaptation. With these developments in mind, this article investigates the opportunities of exploiting location awareness to enable green end-to-end media delivery. In particular, we discuss and propose approaches for location-based adaptive video quality planning, in-network caching, content prefetching, and long-term radio resource management. To provide insights on the energy savings, we then present a cross-layer framework that jointly optimizes resource allocation and multi-user video quality using location predictions. Finally, we highlight some of the future research directions for location-aware media delivery in the conclusion.
TL;DR: This paper proposes the Weighted Predicted Region based Cache Replacement Policy (WPRRP), a cache replacement policy for location-dependent data in mobile environments that adapts to the client's movement pattern and gives importance to the regions around the client's position.
Abstract: Developing widely useful mobile computing applications presents difficult challenges. On one hand, mobile users demand intuitive user interfaces, fast response times, and deep, relevant content. On the other hand, mobile devices have limited processing, storage, power, display, and communication resources. Caching frequently accessed data items on the mobile client is an effective technique for improving system performance in mobile environments. Because cache size is limited, the choice of a cache replacement technique to find a suitable subset of items for eviction becomes important. In this paper, we propose a new cache replacement policy for location-dependent data in mobile environments. The proposed policy selects a predicted region based on the client's movement and uses it to calculate the weighted data distance of an item. This makes the policy adaptive to the client's movement pattern and gives importance to the regions around the client's position, unlike earlier policies that consider only the directional/non-directional data distance. We call our policy the Weighted Predicted Region based Cache Replacement Policy (WPRRP). Simulation results show that the proposed policy significantly improves system performance in terms of cache hit ratio compared to previous schemes.
24 citations
Cites background from "Location-aware cache replacement fo..."
TL;DR: A new cache replacement policy for location-dependent data in mobile environments that uses a predicted-region-based cost function to select items for eviction and significantly improves system performance in terms of cache hit ratio compared to previous schemes.
Abstract: Caching frequently accessed data items on the mobile client is an effective technique for improving system performance in mobile environments. Proper choice of a cache replacement technique to find a suitable subset of items for eviction is very important because of limited cache size. Available policies do not take into account the movement patterns of the client. In this paper, we propose a new cache replacement policy for location-dependent data in mobile environments. The proposed policy uses a predicted-region-based cost function to select an item for eviction from the cache. The policy selects the predicted region based on the client's movement and uses it to calculate the data distance of an item. This makes the policy adaptive to the client's movement pattern, unlike earlier policies that consider only the directional/non-directional data distance. We call our policy the Prioritized Predicted Region based Cache Replacement Policy (PPRRP). Simulation results show that the proposed policy significantly improves system performance in terms of cache hit ratio compared to previous schemes.
17 citations
Cites background from "Location-aware cache replacement fo..."
TL;DR: This paper formulates the Energy-efficient COordinated cache Replacement Problem (ECORP) as a 0-1 knapsack problem and presents a dynamic programming algorithm (ECORP-DP) and a heuristic (ECORP-Greedy) to solve it; simulations show the proposed policies significantly reduce energy consumption and access latency compared to other replacement policies.
Abstract: Data caching on mobile clients is widely seen as an effective solution to improve system performance. In particular, cooperative caching, based on the idea of sharing and coordinating cache data among multiple users, can be particularly effective for information access in mobile ad hoc networks, where mobile clients move frequently and the network topology changes dynamically. Most existing cache strategies perform replacement independently, and they seldom consider coordinated replacement and energy-saving issues in the context of a mobile ad hoc network. In this paper, we analyse the impact of energy on the design of a cache replacement policy and formulate the Energy-efficient COordinated cache Replacement Problem (ECORP) as a 0-1 knapsack problem. A dynamic programming algorithm called ECORP-DP and a heuristic algorithm called ECORP-Greedy are presented to solve the problem. Simulations using both synthetic and real workload traces show that the proposed policies can significantly reduce energy consumption and access latency compared to other replacement policies.
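As an illustration (not code from the paper), the 0-1 knapsack framing behind ECORP-DP can be sketched with the textbook dynamic program: given candidate objects with sizes and values (e.g., the expected energy saved by keeping each object cached), choose the subset to keep that fits the cache capacity. The sizes and values below are hypothetical.

```python
def knapsack_keep(sizes, values, capacity):
    """0-1 knapsack DP: pick items (cached objects) whose total size
    fits the cache capacity while maximizing total value."""
    n = len(sizes)
    dp = [0] * (capacity + 1)          # dp[c] = best value with capacity c
    keep = [[False] * (capacity + 1) for _ in range(n)]
    for i in range(n):
        # Iterate capacities downwards so each item is used at most once
        for c in range(capacity, sizes[i] - 1, -1):
            if dp[c - sizes[i]] + values[i] > dp[c]:
                dp[c] = dp[c - sizes[i]] + values[i]
                keep[i][c] = True
    # Backtrack to recover the chosen set of objects
    chosen, c = [], capacity
    for i in range(n - 1, -1, -1):
        if keep[i][c]:
            chosen.append(i)
            c -= sizes[i]
    return dp[capacity], chosen

# Example: cache capacity 10, four candidate objects (hypothetical numbers)
best, kept = knapsack_keep([4, 3, 5, 2], [10, 7, 12, 4], 10)
```

Here keeping objects 1, 2, and 3 (total size 10) yields the maximum total value of 23; ECORP's actual value function models energy cost, which this sketch abstracts away.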
TL;DR: This paper proposes SPMC-CRP, which uses sequential pattern mining and clustering to remove random movement data from mobile users' profiles and predict the next location accurately, in order to improve the effectiveness of the previous cache replacement policy.
Abstract: Earlier cache replacement policies used in LDIS lack an accurate next-location prediction method that can be used in the cost computation of data items. To overcome this limitation of previous policies and to ensure efficient cache utilization, SPMC-CRP is proposed. Here, sequential pattern mining and clustering are used to remove random movement data from mobile users' profiles and to predict the next location accurately. The proposed policy uses mobility rules extracted from given client movement trajectories; the mobility rules are derived through sequential pattern mining. Beyond accurate next-location prediction, the policy considers factors such as client access probability, query rate, update rate, and the predicted next client location when estimating cache replacement cost, in order to improve the effectiveness of the previous cache replacement policy.
TL;DR: This paper investigates the page request distribution seen by Web proxy caches using traces from a variety of sources and considers a simple model in which Web accesses are independent and the reference probability of documents follows a Zipf-like distribution, suggesting that the observed properties of hit ratios and temporal locality are inherent to Web accesses observed by proxies.
Abstract: This paper addresses two unresolved issues about Web caching. The first issue is whether Web requests from a fixed user community are distributed according to Zipf's (1929) law. The second issue relates to a number of studies on the characteristics of Web proxy traces, which have shown that the hit-ratios and temporal locality of the traces exhibit certain asymptotic properties that are uniform across the different sets of the traces. In particular, the question is whether these properties are inherent to Web accesses or whether they are simply an artifact of the traces. An answer to these unresolved issues will facilitate both Web cache resource planning and cache hierarchy design. We show that the answers to the two questions are related. We first investigate the page request distribution seen by Web proxy caches using traces from a variety of sources. We find that the distribution does not follow Zipf's law precisely, but instead follows a Zipf-like distribution with the exponent varying from trace to trace. Furthermore, we find that there is only (i) a weak correlation between the access frequency of a Web page and its size and (ii) a weak correlation between access frequency and its rate of change. We then consider a simple model where the Web accesses are independent and the reference probability of the documents follows a Zipf-like distribution. We find that the model yields asymptotic behaviour that is consistent with the experimental observations, suggesting that the various observed properties of hit-ratios and temporal locality are indeed inherent to Web accesses observed by proxies. Finally, we revisit Web cache replacement algorithms and show that the algorithm that is suggested by this simple model performs best on real trace data.
The results indicate that while page requests do indeed reveal short-term correlations and other structures, a simple model for an independent request stream following a Zipf-like distribution is sufficient to capture certain asymptotic properties observed at Web proxies.
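To see why a Zipf-like popularity skew makes caching effective, here is a small, hypothetical simulation (not from the paper): requests are drawn independently with probability proportional to 1/i^α, and we measure the fraction of requests served by a cache that holds only the most popular pages.

```python
import random
from itertools import accumulate

def zipf_like_probs(n, alpha):
    """Popularity of the rank-i page proportional to 1 / i^alpha."""
    weights = [1.0 / (i ** alpha) for i in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def hit_ratio_top_k(n, alpha, cache_frac, num_requests, seed=1):
    """Hit ratio of a cache holding the cache_frac most popular pages,
    under an independent Zipf-like request stream."""
    rng = random.Random(seed)
    probs = zipf_like_probs(n, alpha)
    cum = list(accumulate(probs))      # cumulative weights for sampling
    pages = list(range(n))
    k = int(cache_frac * n)
    hits = 0
    for _ in range(num_requests):
        page = rng.choices(pages, cum_weights=cum)[0]
        if page < k:                   # pages 0..k-1 are the most popular
            hits += 1
    return hits / num_requests

# With an exponent below 1 (as observed in proxy traces), even a cache
# holding 10% of the pages captures roughly half of all requests:
r = hit_ratio_top_k(n=1000, alpha=0.8, cache_frac=0.1, num_requests=20000)
```

The parameters (n = 1000, α = 0.8) are illustrative choices, not figures from the paper's traces.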
3,418 citations
"Location-aware cache replacement fo..." refers background in this paper
TL;DR: The LRU-K algorithm surpasses conventional buffering algorithms in discriminating between frequently and infrequently referenced pages, and adapts in real time to changing patterns of access.
Abstract: This paper introduces a new approach to database disk buffering, called the LRU-K method. The basic idea of LRU-K is to keep track of the times of the last K references to popular database pages, using this information to statistically estimate the interarrival times of references on a page-by-page basis. Although the LRU-K approach performs optimal statistical inference under relatively standard assumptions, it is fairly simple and incurs little bookkeeping overhead. As we demonstrate with simulation experiments, the LRU-K algorithm surpasses conventional buffering algorithms in discriminating between frequently and infrequently referenced pages. In fact, LRU-K can approach the behavior of buffering algorithms in which page sets with known access frequencies are manually assigned to different buffer pools of specifically tuned sizes. Unlike such customized buffering algorithms, however, the LRU-K method is self-tuning, and does not rely on external hints about workload characteristics. Furthermore, the LRU-K algorithm adapts in real time to changing patterns of access.
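A minimal sketch of the LRU-K idea (an illustrative reimplementation, not the paper's code): each page keeps its last K reference times, and the eviction victim is the resident page whose K-th most recent reference is oldest; pages with fewer than K references are evicted first.

```python
from collections import defaultdict

class LRUKCache:
    """Minimal LRU-K sketch: evict the resident page whose K-th most
    recent reference time is oldest (infinite backward K-distance first)."""
    def __init__(self, capacity, k=2):
        self.capacity, self.k = capacity, k
        self.history = defaultdict(list)   # page -> last K reference times
        self.resident = set()
        self.clock = 0

    def access(self, page):
        self.clock += 1
        times = self.history[page]
        times.append(self.clock)
        if len(times) > self.k:
            times.pop(0)                   # keep only the last K references
        hit = page in self.resident
        if not hit:
            if len(self.resident) >= self.capacity:
                self.resident.remove(self._victim())
            self.resident.add(page)
        return hit

    def _victim(self):
        # K-th most recent reference time, or -inf if fewer than K refs
        def kth(p):
            t = self.history[p]
            return t[-self.k] if len(t) >= self.k else float("-inf")
        return min(self.resident, key=kth)

cache = LRUKCache(capacity=2, k=2)
hits = [cache.access(p) for p in ["a", "b", "a", "c", "a", "b"]]
```

Note how "a", referenced twice, survives the access to "c" while the once-referenced "b" is evicted; this is the discrimination between frequently and infrequently referenced pages that the abstract describes.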
968 citations
"Location-aware cache replacement fo..." refers background or methods in this paper
TL;DR: The results show that semantic caching is more flexible and effective for use in LDD applications than page caching, whose performance is quite sensitive to the database physical organization.
Abstract: Location-dependent applications are becoming very popular in mobile environments. To improve system performance and facilitate disconnection, caching is crucial to such applications. In this paper, a semantic caching scheme is used to access location dependent data in mobile computing. We first develop a mobility model to represent the moving behaviors of mobile users and formally define location dependent queries. We then investigate query processing and cache management strategies. The performance of the semantic caching scheme and its replacement strategy FAR is evaluated through a simulation study. Our results show that semantic caching is more flexible and effective for use in LDD applications than page caching, whose performance is quite sensitive to the database physical organization. We also notice that the semantic cache replacement strategy FAR, which utilizes the semantic locality in terms of locations, performs robustly under different kinds of workloads.
TL;DR: This paper introduces a new performance criterion called caching efficiency, proposes a generic method for location-dependent cache invalidation strategies, and presents two cache replacement policies, PA and PAID.
Abstract: Mobile location-dependent information services (LDISs) have become increasingly popular in recent years. However, data caching strategies for LDISs have thus far received little attention. In this paper, we study the issues of cache invalidation and cache replacement for location-dependent data under a geometric location model. We introduce a new performance criterion, called caching efficiency, and propose a generic method for location-dependent cache invalidation strategies. In addition, two cache replacement policies, PA and PAID, are proposed. Unlike the conventional replacement policies, PA and PAID take into consideration the valid scope area of a data value. We conduct a series of simulation experiments to study the performance of the proposed caching schemes. The experimental results show that the proposed location-dependent invalidation scheme is very effective and the PA and PAID policies significantly outperform the conventional replacement policies.
169 citations
"Location-aware cache replacement fo..." refers background in this paper
TL;DR: This paper gives a formalization of location relatedness in queries, distinguishes location dependence from location awareness, and provides thorough examples to support the approach.
Abstract: The advances in wireless and mobile computing allow a mobile user to perform a wide range of applications once limited to non-mobile, hard-wired computing environments. As the geographical position of a mobile user becomes more trackable, users need to pull data related to their location, perhaps seeking information about unfamiliar places or local lifestyle data. In these requests, a location attribute has to be identified in order to provide more efficient access to location-dependent data, whose value is determined by the location to which it is related. Local yellow pages, local events, and weather information are some examples of such data. In this paper, we give a formalization of location relatedness in queries. We differentiate location dependence and location awareness and provide thorough examples to support our approach.
129 citations
"Location-aware cache replacement fo..." refers background in this paper
Q1. Why does the cache hit ratio increase as the query rate increases?
The cache hit ratio increases with the query rate because when the query rate is high, more queries are executed at each location.
Q2. How does MARS achieve a hit ratio of 30% on location-dependent queries?
When the percentage of location-dependent queries is low (20%), MARS, FAR, and PAID achieve a hit ratio of 30% on location-dependent queries, compared to 20% achieved by LRU and PA.
Q3. What is the importance of a dynamically adapting cache replacement policy?
It is important for cache replacement policies to dynamically adapt to changes in access locality to ensure that a high cache hit ratio is achieved.
Q4. What is the purpose of this paper?
In this paper, the authors present a mobility-aware cache replacement policy that efficiently supports mobile clients using location-dependent information services.
Q5. What is the effect of location dependent queries on the performance of MARS?
By anticipating clients’ future location when making cache replacement decisions, MARS is able to maintain good performance even for clients travelling at high speed.
Q6. How does MARS perform for mobile clients?
Test results show that MARS provides efficient cache replacement for mobile clients and achieves a 20% improvement in cache hit ratio compared to existing replacement policies.
Q7. What is the temporal score of a data object?
The temporal score, scoretemp(i), is used in the MARS replacement cost function to capture the temporal locality of data access. The authors define the temporal score of a data object di as scoretemp(i) = ((tcurrent − tu,i) / (tcurrent − tq,i)) × (λi / µi) (Equation 4), where λi = λ.
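Reading the symbols in Equation 4 as tu,i the time of the object's last update, tq,i the time of its last query, λi its query rate, and µi its update rate (these readings, and the numbers below, are assumptions for illustration), the score can be computed as:

```python
def temporal_score(t_current, t_last_update, t_last_query,
                   query_rate, update_rate):
    """Equation 4 sketch: time since the object's last update relative to
    time since its last query, scaled by the query-to-update rate ratio
    (lambda_i / mu_i)."""
    return ((t_current - t_last_update) / (t_current - t_last_query)
            * query_rate / update_rate)

# Hypothetical values: an object queried recently but updated long ago,
# with queries five times more frequent than updates
s = temporal_score(t_current=100, t_last_update=20, t_last_query=90,
                   query_rate=0.5, update_rate=0.1)
```

With these numbers the score is (80 / 10) × (0.5 / 0.1) = 40.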
Q8. What is the probability of a client querying an object with a valid scope center reference point at Li?
The probability of the client querying an object with a valid scope center reference point at Li is Pr(i) = (1 / |Lm − Li|) × (1 / Σj∈N (1 / |Lm − Lj|)) (Equation 6). Based on the definition in Equation 6, queries are distributed among data objects according to their distance from the client's current location.
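A small sketch of Equation 6 (illustrative positions, not from the paper): the probability of querying an object is inversely proportional to its distance from the client, normalised over all objects, so nearer objects are queried more often.

```python
def query_probabilities(client_pos, object_positions):
    """Equation 6 sketch: Pr(i) proportional to 1 / |Lm - Li|,
    normalised so the probabilities sum to one."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    inv = [1.0 / dist(client_pos, p) for p in object_positions]
    total = sum(inv)
    return [w / total for w in inv]

# Hypothetical layout: client at the origin, objects at distances 1, 2, 4
probs = query_probabilities((0, 0), [(1, 0), (0, 2), (4, 0)])
```

Here the nearest object receives probability 4/7, the next 2/7, and the farthest 1/7.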
Q9. What is the effect of location dependent queries on the cache?
When the probability of location dependent queries is high, client caches are filled with information relevant to the clients’ current location, resulting in more queries being satisfied by the cache.
Q10. How many locations are used to query data objects?
In order to model the utilisation of location-dependent services, percentLDQ% of the queries performed by clients are location-dependent and (100 − percentLDQ)% are non-location-dependent.
Q11. What is the difference between MARS and LRU?
At a high location-dependent query probability, LRU performs slightly better than MARS because MARS uses more cache space to store objects obtained from location-dependent queries, reducing the number of objects cached from non-location-dependent queries.
Q12. What is the cost of replacing an object in a client’s cache?
The cost of replacing an object di in client m's cache is calculated with the following equation: cost(i) = scoretemp(i) × scorespat(i) × ci (Equation 1), where scoretemp(i) is the temporal score of the object, scorespat(i) is the spatial score of the object, and ci is the cost of retrieving the object from the remote server.
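Assuming the object with the lowest cost(i) is the one chosen for eviction (an assumption; the answer above defines only the cost function), Equation 1 can be sketched as follows, with hypothetical object names and scores:

```python
def eviction_victim(cache):
    """Equation 1 sketch: cost(i) = temporal score x spatial score x
    retrieval cost c_i; evict the object whose replacement cost is lowest.
    `cache` maps object id -> (score_temp, score_spat, c_i)."""
    def cost(obj_id):
        s_temp, s_spat, c = cache[obj_id]
        return s_temp * s_spat * c
    return min(cache, key=cost)

# Hypothetical cache contents (names and scores are illustrative)
victim = eviction_victim({
    "map_tile_a": (0.9, 0.8, 2.0),   # cost 1.44
    "map_tile_b": (0.2, 0.5, 1.0),   # cost 0.10 -> cheapest to lose
    "weather":    (0.6, 0.3, 3.0),   # cost 0.54
})
```

The multiplicative form means an object scores low, and so becomes an eviction candidate, if any one factor (temporal locality, spatial relevance, or retrieval cost) is small.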
Q13. What does the graph in Figure 5 show?
The graph in Figure 5 shows that the mobility-aware cache replacement policies perform significantly better than the temporal-based policy (LRU) on location-dependent queries.
Q14. What is the distance between the client and the data object?
For example, given a client located at (x1, y1) and a data object located at (x2, y2), the distance between the client and the object is equal to √((x1 − x2)² + (y1 − y2)²).
In order to ensure efficient cache utilisation, it is important to take into consideration the location and movement direction of mobile clients when performing cache replacement.