
Showing papers on "Cache invalidation" published in 2018


Proceedings ArticleDOI
TL;DR: In this paper, the authors defined four important characteristics of a suitable eviction policy for information centric networks (ICN) and proposed a new eviction scheme that is well suited to ICN-style cache networks.
Abstract: Information centric networks (ICN) can be viewed as networks of caches. However, ICN-style cache networks have distinctive features, e.g., content popularity and the usability time of content, that impose diverse requirements on cache eviction policies. In this paper we define four important characteristics of a suitable eviction policy for ICN and analyse well-known eviction policies against these characteristics. Based on this analysis, we propose a new eviction scheme that is well suited to ICN-style cache networks.
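The abstract names content popularity and content usability time as factors an ICN eviction policy must weigh. As a purely illustrative, hypothetical sketch (the paper's actual characteristics and scoring function are not reproduced here), a policy of this flavour might evict the cached object with the lowest combined popularity/remaining-lifetime score:

```python
import time

class IcnCacheEntry:
    def __init__(self, name, size, expiry, hits=0):
        self.name = name        # content name
        self.size = size        # bytes
        self.expiry = expiry    # absolute time after which the content is stale
        self.hits = hits        # observed request count (popularity proxy)

def eviction_score(entry, now):
    """Lower score = better eviction candidate: unpopular content
    that is about to become unusable anyway."""
    remaining_lifetime = max(entry.expiry - now, 0.0)
    return entry.hits * remaining_lifetime

def evict_one(cache):
    """cache: dict mapping content name -> IcnCacheEntry.
    Removes and returns the entry with the lowest score."""
    now = time.time()
    victim = min(cache.values(), key=lambda e: eviction_score(e, now))
    del cache[victim.name]
    return victim
```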

20 citations


Journal ArticleDOI
TL;DR: This paper gives an analytical method to find the miss rate of the L2 cache for various configurations from the RD profile with respect to the L1 cache, and considers all three types of cache inclusion policies, namely (i) Strictly Inclusive, (ii) Mutually Exclusive and (iii) Non-Inclusive Non-Exclusive.
Abstract: Reuse distance is an important metric for analytical estimation of cache miss rate. To find the miss rate of a particular cache, the reuse distance profile has to be measured for that particular level and configuration of the cache. A significant amount of simulation time and overhead can be saved if we can find the miss rate of a higher-level cache such as L2 from the RD profile with respect to a lower-level cache (i.e., the cache that is closer to the processor) such as L1. The objective of this paper is to give an analytical method to find the miss rate of the L2 cache for various configurations from the RD profile with respect to the L1 cache. We consider all three types of cache inclusion policies, namely (i) Strictly Inclusive, (ii) Mutually Exclusive and (iii) Non-Inclusive Non-Exclusive. We first prove some general results relating the RD profile of the L1 cache to that of the L2 cache, using probabilistic analysis for our derivations. We validate our model against simulations, using the multi-core simulator Sniper with the PARSEC and SPLASH benchmark suites.
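The analysis builds on the standard relation between a reuse distance (RD) profile and the miss rate of a fully associative LRU cache: a reference whose reuse distance (in lines) is at least the cache capacity misses. A minimal sketch of that baseline conversion (not the paper's L1-to-L2 derivation) is:

```python
def miss_rate_from_rd(rd_histogram, cache_lines):
    """rd_histogram: dict mapping reuse distance (in cache lines) -> reference count;
    cold/first-touch references can be recorded with distance float('inf').
    Returns the miss rate of a fully associative LRU cache with `cache_lines` lines."""
    total = sum(rd_histogram.values())
    misses = sum(count for dist, count in rd_histogram.items() if dist >= cache_lines)
    return misses / total if total else 0.0

# Hypothetical profile: references with RD >= 512 lines miss in a 512-line cache.
hist = {4: 300, 100: 300, 2000: 250, float('inf'): 150}
print(miss_rate_from_rd(hist, 512))  # -> 0.4
```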

17 citations


Book ChapterDOI
21 May 2018
TL;DR: This work involves the design of new cache replacement policies, indexing, and pre-fetching protocols, compares their performance with existing policies/protocols, and reports future research directions.
Abstract: Location dependent information services can be characterized as applications that combine a mobile phone's location or position with other data to provide enhanced services to the client in the right place and at the right time, from anywhere. Limited battery power and frequent disconnection due to the moving environment make mobile distributed databases a fertile field for mobile database researchers and specialists. New policies/protocols must be designed to efficiently handle the issued nearest-neighbor queries. Our work involves the design of new cache replacement policies, indexing, and pre-fetching protocols, compares their performance with existing policies/protocols, and reports future research directions.

10 citations


Proceedings ArticleDOI
18 Jun 2018
TL;DR: An algorithm, Caching Efficiency with Next Location Prediction Based (CELPB), has been developed that uses a newly developed metric, caching efficiency with next location prediction (CELP), to compute the valid scope in the prediction interval for LDIS.
Abstract: Location dependent information services (LDIS) can be characterized as applications that combine a mobile phone's location or position with other data to provide enhanced services to the client in the right place, at the right time, from anywhere. In this paper, an algorithm called Caching Efficiency with Next Location Prediction Based (CELPB) has been developed that uses a newly developed metric, caching efficiency with next location prediction (CELP), to compute the valid scope in the prediction interval. This metric takes into account the client's future movement behavior with the help of sequential pattern mining and clustering. Mobility rules have also been framed for the prediction of an accurate next location, which can be used to estimate the client's future movement path (edges) once the client reaches the valid scope area of any data item. Simulation results show that the proposed policy achieves up to a 10 percent performance improvement over the earlier cache invalidation policy CEBAB for LDIS.

8 citations


Journal ArticleDOI
TL;DR: This article provides key elements to determine the subset of applications that should share the LLC (while remaining ones only use their smaller private cache), and designs efficient heuristics for Amdahl applications.
Abstract: Cache-partitioned architectures allow subsections of the shared last-level cache (LLC) to be exclusively reserved for some applications. This technique dramatically limits interactions between applications that are concurrently executing on a multi-core machine. Consider n applications that execute concurrently, with the objective to minimize the makespan, defined as the maximum completion time of the n applications. Key scheduling questions are: (i) which proportion of cache and (ii) how many processors should be given to each application? In this paper, we provide answers to (i) and (ii) for Amdahl applications. Even though the problem is shown to be NP-complete, we give key elements to determine the subset of applications that should share the LLC (while remaining ones only use their smaller private cache). Building upon these results, we design efficient heuristics for Amdahl applications. Extensive simulations demonstrate the usefulness of co-scheduling when our efficient cache partitioning strategies are deployed.
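For context, an Amdahl application's completion time under a given processor allocation follows Amdahl's law; a hedged sketch of the scheduling objective (generic notation, not necessarily the paper's exact cost model, with the cache-share term left abstract) is:

```latex
% Application i has sequential fraction s_i, total work w_i, and receives p_i cores.
\[
  T_i(p_i) = w_i\Bigl(s_i + \frac{1 - s_i}{p_i}\Bigr),
  \qquad
  \text{makespan} = \max_{1 \le i \le n} T_i(p_i),
\]
% minimized subject to $\sum_i p_i \le P$ cores and $\sum_i x_i \le 1$ of the shared LLC,
% where in the paper's model the cache share $x_i$ further modulates each $T_i$.
```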

6 citations


Proceedings ArticleDOI
01 Dec 2018
TL;DR: ESFC, a user-space service chain management framework that dynamically reallocates the computing resources of each NF component based on real-time statistics while minimizing performance overhead, is proposed.
Abstract: Enabling elastic scaling is the crucial advantage of Network Function Virtualization (NFV), of which the common usage is the Service Function Chain (SFC). But how to scale an SFC elastically on an NFV platform while ensuring performance and cost-effectiveness remains vague and challenging. We propose ESFC, a user-space service chain management framework that dynamically reallocates the computing resources of each NF component based on real-time statistics while minimizing performance overhead. The ESFC framework monitors load on a service chain at high frequency (1000 Hz) and employs a proactive resource allocation policy to enable elastic scaling. ESFC uses an asynchronous mechanism to notify the to-be-scaled NF, which is then reallocated computing resources at the thread level. Moreover, ESFC adopts a perfect distribution algorithm to reduce cache invalidation and state migration while scaling, both of which are crucial to performance. The results show that ESFC can adjust the devoted resources dynamically and achieve up to 3x higher cost-effectiveness without compromising performance. Compared with a recent popular SFC platform, ESFC achieves up to 1.3x higher performance under the same conditions.
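A hypothetical sketch of the kind of high-frequency monitor-and-scale loop the abstract describes; the function names, thresholds and polling rate here are illustrative assumptions, not ESFC's actual mechanism:

```python
import time

# Assumed thresholds; ESFC's real statistics and policy differ.
SCALE_UP_LOAD, SCALE_DOWN_LOAD = 0.85, 0.30
POLL_INTERVAL = 0.001   # ~1000 Hz monitoring, as mentioned in the abstract

def monitor_and_scale(chain, read_load, scale_up, scale_down):
    """chain: list of NF names; read_load(nf) -> utilization in [0, 1];
    scale_up/scale_down(nf) asynchronously adjust that NF's worker threads.
    Runs indefinitely as a monitoring loop."""
    while True:
        for nf in chain:
            load = read_load(nf)
            if load > SCALE_UP_LOAD:
                scale_up(nf)      # proactive: add resources before queues build up
            elif load < SCALE_DOWN_LOAD:
                scale_down(nf)    # reclaim resources from an underloaded NF
        time.sleep(POLL_INTERVAL)
```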

5 citations


Journal ArticleDOI
TL;DR: This work introduces a new technique for cache replacement in a mobile database that uses genetic programming and takes the impact of invalidation time into consideration to enhance data availability in the mobile environment.

5 citations


Journal ArticleDOI
TL;DR: The simulation results show that the proposed CCSP scheme significantly improves cache effectiveness and network performance, by improving data availability and reducing both the overall network load and the latencies perceived by end users.
Abstract: In wireless mobile Ad Hoc networks, cooperative cache management is considered an efficient technique to increase data availability and improve access latency. This technique is based on coordination and sharing of cached data between nodes belonging to the same area. In this paper, we study cooperative cache management strategies. This has enabled us to propose a collaborative cache management scheme for mobile Ad Hoc networks, based on service cache providers (SCP), called cooperative caching based on service providers (CCSP). The proposed scheme elects some mobile nodes as SCPs, which receive cache summaries from neighboring nodes. Thus, nodes belonging to the same zone can easily locate documents cached in that area. The election mechanism used in this approach is executed periodically to ensure load balancing. We further provide an evaluation of the proposed solution in terms of request hit rate, byte hit rate and time gains. Compared with other cache management schemes, the simulation results show that the proposed CCSP scheme significantly improves cache effectiveness and network performance. This is achieved by improving data availability and reducing both the overall network load and the latencies perceived by end users.
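A hypothetical sketch of the zone-level lookup described above: nodes push summaries of their cache contents to an elected SCP, and a requester consults the SCP index before contacting the origin server. The names and data structures are illustrative assumptions, not CCSP's actual protocol:

```python
class ServiceCacheProvider:
    """Elected node that aggregates cache summaries for one zone."""
    def __init__(self):
        self.index = {}   # document id -> set of node ids caching it

    def register_summary(self, node_id, cached_ids):
        for doc_id in cached_ids:
            self.index.setdefault(doc_id, set()).add(node_id)

    def locate(self, doc_id):
        """Return some node in the zone that caches doc_id, or None."""
        holders = self.index.get(doc_id)
        return next(iter(holders)) if holders else None

def fetch(doc_id, local_cache, scp, fetch_from_node, fetch_from_server):
    if doc_id in local_cache:                  # 1. local cache hit
        return local_cache[doc_id]
    holder = scp.locate(doc_id)                # 2. zone hit via the SCP index
    if holder is not None:
        return fetch_from_node(holder, doc_id)
    return fetch_from_server(doc_id)           # 3. miss: contact the origin server
```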

5 citations


Journal ArticleDOI
TL;DR: PhLock leverages an application's varying runtime characteristics to dynamically select the locked memory contents and optimize cache energy consumption; cache locking is a popular cache optimization that loads and retains/locks selected memory contents from an executing application into the cache to increase the cache's predictability.
Abstract: Caches are commonly used to bridge the processor-memory performance gap in embedded systems. Since embedded systems typically have stringent design constraints imposed by physical size, battery capacity, and real-time deadlines, much research focuses on cache optimizations such as improved performance and/or reduced energy consumption. Cache locking is a popular cache optimization that loads and retains/locks selected memory contents from an executing application into the cache to increase the cache's predictability. Previous work has shown that cache locking also has the potential to improve cache energy consumption. In this paper, we introduce phase-based cache locking, PhLock, which leverages an application's varying runtime characteristics to dynamically select the locked memory contents to optimize cache energy consumption. Using a variety of applications from the SPEC2006 and MiBench benchmark suites, experimental results show that PhLock is promising for reducing both the instruction and data caches' energy consumption. As compared to a non-locking cache, PhLock reduced the instruction and data cache energy consumption by an average of 5% and 39%, respectively, for SPEC2006 applications, and by 75% and 14%, respectively, for MiBench benchmarks.
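A hypothetical sketch of phase-based lock selection in the spirit of the abstract: at each phase boundary, lock the most frequently accessed blocks that fit in the lockable portion of the cache. The profiling data and locking interface are assumptions, not PhLock's actual mechanism:

```python
def select_locked_blocks(phase_profile, lockable_lines):
    """phase_profile: dict mapping block address -> access count in one phase.
    Returns at most `lockable_lines` of the hottest blocks to lock."""
    ranked = sorted(phase_profile, key=phase_profile.get, reverse=True)
    return set(ranked[:lockable_lines])

def run_with_phase_locking(phases, lockable_lines, lock_cache_lines, run_phase):
    """phases: iterable of (phase_id, phase_profile) pairs.
    lock_cache_lines(blocks) programs the cache's lock bits (assumed interface)."""
    for phase_id, profile in phases:
        locked = select_locked_blocks(profile, lockable_lines)
        lock_cache_lines(locked)   # re-lock the cache at each phase boundary
        run_phase(phase_id)
```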

5 citations


Journal ArticleDOI
TL;DR: It is proved that any non-redundant cache placement strategy can be transformed, with no additional cost, to a strategy in which at every node, each file is either cached completely or not cached at all.
Abstract: Considering cache-enabled networks, optimal content placement minimizing the total cost of communication in such networks is studied, leading to a surprising fundamental 0–1 law for non-redundant cache placement strategies, where the total cache size associated with each file does not exceed the file size. In other words, for such strategies, we prove that any non-redundant cache placement strategy can be transformed, with no additional cost, to a strategy in which at every node, each file is either cached completely or not cached at all. Moreover, we obtain a sufficient condition under which the optimal cache placement strategy is in fact non-redundant. This result, together with the 0–1 law, reveals that situations exist where optimal content placement is achieved just by uncoded placement of whole files in caches.
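A hedged restatement in symbols (the notation is assumed here for illustration, and per-node capacity constraints are omitted): let $x_{v,f} \in [0,1]$ be the fraction of file $f$ placed at node $v$ and $C(x)$ the total communication cost. Non-redundancy bounds the total cached amount of each file by the file size, and the 0–1 law says an optimal non-redundant placement can be chosen integral:

```latex
\[
  \min_{x}\; C(x)
  \quad \text{s.t.} \quad
  \sum_{v} x_{v,f} \le 1 \;\;\forall f \ (\text{non-redundancy}),
  \qquad x_{v,f} \in [0,1],
\]
\[
  \text{0--1 law:}\quad \exists\, x^{\star} \text{ optimal with } x^{\star}_{v,f} \in \{0,1\}
  \text{ for all } v, f.
\]
```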

4 citations


Journal ArticleDOI
TL;DR: A replacement-policy-adaptable miss curve estimation (RME) is proposed that estimates dynamic workload patterns under any arbitrary replacement policy and for the given applications with low overhead; the experimental results support the efficiency of RME and show that RME-based cache partitioning, combined with high-performance replacement policies, can successfully minimize both inter- and intra-application interference.
Abstract: Cache replacement policies and cache partitioning are well-known cache management techniques which aim to eliminate inter- and intra-application contention caused by co-running applications, respectively. Since replacement policies can change applications' behavior on a shared last-level cache, they have a massive impact on cache partitioning. Furthermore, cache partitioning determines the capacity allocated to each application, affecting the incorporated replacement policy. However, their interoperability has not been thoroughly explored. Since existing cache partitioning methods are tailored to specific replacement policies to reduce the overhead of characterizing applications' behavior, they may lead to suboptimal partitioning results when incorporated with up-to-date replacement policies. In cache partitioning, miss curve estimation is a key component to relax this restriction, since it can reflect the dependency between a replacement policy and cache partitioning in the partitioning decision. To tackle this issue, we propose a replacement-policy-adaptable miss curve estimation (RME) which estimates dynamic workload patterns according to any arbitrary replacement policy and the given applications with low overhead. In addition, RME considers the asymmetry of miss latency by miss type, so the impact of the miss curve on cache partitioning can be reflected more accurately. The experimental results support the efficiency of RME and show that RME-based cache partitioning combined with high-performance replacement policies can successfully minimize both inter- and intra-application interference.
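Miss curves are typically consumed by a partitioner that hands cache ways to the application with the greatest marginal miss reduction. A generic greedy sketch of that consumer step (a simplification in the spirit of utility-based partitioning, not RME itself, and not optimal for non-convex curves) is:

```python
def partition_ways(miss_curves, total_ways):
    """miss_curves: dict app -> list where miss_curves[app][w] is the predicted
    miss count of `app` when given w ways (index 0 = no ways allocated).
    Greedily assigns one way at a time to the app with the largest miss reduction."""
    alloc = {app: 0 for app in miss_curves}
    for _ in range(total_ways):
        def gain(app):
            curve, w = miss_curves[app], alloc[app]
            if w + 1 >= len(curve):
                return 0
            return curve[w] - curve[w + 1]   # misses saved by one more way
        best = max(alloc, key=gain)
        alloc[best] += 1
    return alloc

# Example with two hypothetical miss curves over 0..4 ways:
curves = {"A": [100, 60, 40, 35, 34], "B": [80, 70, 45, 20, 18]}
print(partition_ways(curves, 4))  # -> {'A': 2, 'B': 2}
```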

Journal ArticleDOI
01 Nov 2018
TL;DR: The aim of this work is to propose a new technique for cache replacement in a mobile database that uses genetic programming and takes the impact of invalidation time into consideration to enhance data availability in the mobile environment.
Abstract: In the mobile environment, the movement of users, disconnected modes, frequent data updates, battery power consumption, limited cache size, and limited bandwidth impose significant challenges on information access. Caching is considered one of the most important concepts for dealing with these challenges. There are two general topics related to the client cache policy: the cache invalidation method, which keeps data in the cache up to date, and the cache replacement method, which chooses the cached item(s) to be deleted from the cache when the cache is full. The aim of this work is to propose a new technique for cache replacement in a mobile database that takes into consideration the impact of invalidation time, enhancing data availability in the mobile environment by using genetic programming. Each client collects information for every cached item, such as access probability, cached document size, and validation time, and uses these factors in a fitness function to determine which cached items will be removed from the cache. The experiments were performed using Network Simulator 2 to evaluate the effectiveness of the proposed approach, and the results are compared with existing cache replacement algorithms. It is concluded that the proposed approach performs significantly better than other approaches.
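The abstract names access probability, document size, and validation time as the per-item features fed to the fitness function. A hypothetical hand-written fitness of that shape (genetic programming would evolve the expression rather than fix it, and the combination used here is an assumption) might look like:

```python
def fitness(item, now):
    """item: dict with 'access_prob', 'size' and 'valid_until' (absolute time).
    Lower fitness = better eviction candidate: rarely accessed, large,
    or close to (or past) its invalidation time."""
    remaining_validity = max(item["valid_until"] - now, 0.0)
    return item["access_prob"] * remaining_validity / item["size"]

def choose_victims(cache, now, bytes_needed):
    """cache: dict key -> item. Evicts lowest-fitness items until enough space is freed."""
    victims, freed = [], 0
    for key in sorted(cache, key=lambda k: fitness(cache[k], now)):
        victims.append(key)
        freed += cache[key]["size"]
        if freed >= bytes_needed:
            break
    return victims
```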

Journal ArticleDOI
TL;DR: This work proposes new adaptive multi-level exclusive caching policies that can dynamically adjust replacement and placement decisions in response to changing access patterns, achieving multi-level exclusive caching with significant cache performance improvement.

Journal ArticleDOI
TL;DR: The results suggest that a variable cache line size can result in better performance and can also conserve power; the work also presents runtime cache utilization, as well as conventional performance metrics, to give a holistic understanding of cache behavior.
Abstract: Caches have long been used to minimize the latency of main memory accesses by storing frequently used data near the processor, and processor performance depends on the underlying cache performance. Therefore, significant research has been done to identify the most crucial metrics of cache performance. Although the majority of research focuses on measuring cache hit rates and data movement as the primary cache performance metrics, cache utilization is also significantly important. We investigate an application's locality using cache utilization metrics. Furthermore, we present cache utilization and traditional cache performance metrics as the program progresses, providing detailed insights into dynamic application behavior on parallel applications from four benchmark suites running on multiple cores. We explore cache utilization for the APEX, Mantevo, NAS, and PARSEC benchmark suites, which are mostly scientific. Our results indicate that 40% of the data bytes in a cache line are accessed at least once before line eviction, and that on average a byte is accessed two times before the cache line is evicted for these applications. Moreover, we present runtime cache utilization, as well as conventional performance metrics, to illustrate a holistic understanding of cache behavior. To facilitate this research, we build a memory simulator incorporated into the Structural Simulation Toolkit (Rodrigues et al. in SIGMETRICS Perform Eval Rev 38(4):37–42, 2011). Our results suggest that a variable cache line size can result in better performance and can also conserve power.
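A small sketch of the utilization metric reported above (the fraction of a cache line's bytes touched at least once before eviction), computed from per-line access sets; the bookkeeping structures are assumptions for illustration:

```python
class LineUtilizationTracker:
    """Tracks which bytes of each resident cache line are touched and
    reports utilization (touched bytes / line size) when the line is evicted."""
    def __init__(self, line_size=64):
        self.line_size = line_size
        self.touched = {}     # line address -> set of touched byte offsets
        self.samples = []     # utilization of each evicted line

    def access(self, addr, size=1):
        # assumes the access does not cross a cache line boundary
        line = addr - addr % self.line_size
        offsets = self.touched.setdefault(line, set())
        offsets.update(range(addr - line, addr - line + size))

    def evict(self, line):
        offsets = self.touched.pop(line, set())
        self.samples.append(len(offsets) / self.line_size)

    def mean_utilization(self):
        return sum(self.samples) / len(self.samples) if self.samples else 0.0
```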

Book ChapterDOI
01 Sep 2018
TL;DR: This effort presents an enhanced invalidation policy that cooperates with a new cache replacement technique, using genetic programming to select the items to be removed from the cache, to improve data access in the wireless environment.
Abstract: Communication between mobile clients and database servers in a wireless environment suffers from user movement, disconnected modes, frequent data updates, low battery power, limited cache size, and limited bandwidth. Caching is used in the wireless environment to overcome these challenges. The aim of this effort is to present an enhanced invalidation policy that cooperates with a new cache replacement technique, using genetic programming to select the items to be removed from the cache, to improve data access in the wireless environment. Servers and mobile clients cooperate to enhance data availability. Each mobile client collects data such as access probability, size, and next validation time, and uses these parameters in a genetic programming method to select the cached items to be removed when the cache is full. The experiments were carried out using the NS2 simulator to evaluate the efficiency of the suggested policy, and the results are compared with existing cache policy algorithms. The experiments show that the proposed policy outperforms LRU by 24% in byte hit ratio and 11% in cache hit ratio. It is concluded that the presented policy performs better than other policies.

Journal ArticleDOI
01 Jul 2018
TL;DR: A new cache replacement algorithm, built on SACCS and dependent upon a rule-based Least Profit Value, is recommended.
Abstract: The focus of health care data research has shifted towards configuring and handling e-health information from heterogeneous e-health administration entities in a Content Distribution Network (CDN) to gain e-health benefits, which can be a challenging task. In recent trends, a CDN is typically used to cache e-health content such as images captured in real-time sequences and ongoing videos. In mobile cloud computing, because patients move, healthcare professionals need quick access to patient e-health data in order to make effective choices and select medication. Caching and its invalidation mechanism provide solutions for effective e-health data availability. Many caching methodologies have been proposed, such as the Scalable Asynchronous Cache Consistency Scheme (SACCS), which has demonstrated more versatility than others. In this article, the authors recommend a new cache replacement algorithm, built on SACCS, that is dependent upon a rule-based Least Profit Value.

Patent
31 May 2018
TL;DR: In this patent, the matchline signal is latched in a latch controlled by a function of a single-bit mismatch clock, wherein a rising edge of the single-bit mismatch clock is based on the delay for determining a single-bit mismatch between the search word and the entry of the tag array.
Abstract: Systems and methods for cache invalidation, with support for different modes of cache invalidation include receiving a matchline signal, wherein the matchline signal indicates whether there is a match between a search word and an entry of a tag array of the cache. The matchline signal is latched in a latch controlled by a function of a single bit mismatch clock, wherein a rising edge of the single bit mismatch clock is based on delay for determining a single bit mismatch between the search word and the entry of the tag array. An invalidate signal for invalidating a cacheline corresponding to the entry of the tag array is generated at an output of the latch. Circuit complexity is reduced by gating a search word with a search-invalidate signal, such that the gated search word corresponds to the search word for a search-invalidate and to zero for a Flash-invalidate.

Patent
Fukuyama Tomohisa1
09 Aug 2018
TL;DR: In this article, the Release side processor issues a Store Fence instruction to request for a guarantee of completion of invalidation of the cache of the Acquire side processor when both the counters have come to indicate 0.
Abstract: On receiving a Store instruction from a Release side processor, a shared memory transmits a cache invalidation request to an Acquire side processor, increases the value of an execution counter, and transmits the count value to the Release side processor asynchronously with the receiving of the Store instruction. The Release side processor has: a store counter which increases its value when the Store instruction is issued and, when the count value of the execution counter is received, decreases its value by the count value; and a wait counter which, when the store counter has come to indicate 0, sets a value indicating a predetermined time and decreases its value every unit time. The Release side processor issues a Store Fence instruction to request for a guarantee of completion of invalidation of the cache of the Acquire side processor when both the counters have come to indicate 0.

Dissertation
01 Jan 2018
TL;DR: Cachematic provides a simple programming model, allowing developers to explicitly mark a function as cacheable; the result of a cacheable function is transparently cached without the developer having to worry about cache management.
Abstract: Caching is a common method for improving the performance of modern web applications. Due to the varying architecture of web applications, and the lack of a standardized approach to cache management, ad-hoc solutions are common. These solutions tend to be hard to maintain as a code base grows, and are a common source of bugs. In this thesis we present Cachematic, a general-purpose application-level caching system with an automatic cache management strategy. Cachematic provides a simple programming model, allowing developers to explicitly mark a function as cacheable. The result of a cacheable function is transparently cached without the developer having to worry about cache management. The core component of the system is a dependency graph containing relations between database entries and cached content. The dependency graph is constructed by having the system listen to queries executed in a database. When a SELECT query is detected within the scope of a cacheable function, the query is parsed and used to derive the dependency graph. When inserts, updates and deletes are detected, the dependency graph is used to determine which cached entries are affected by the modification. To evaluate Cachematic, a reference implementation was developed in the Python programming language. Our experiments showed that deploying Cachematic decreased the response time for read requests compared to a manual cache management strategy. We also found that, compared to the manual strategy, the cache hit rate was increased by a factor of around 1.64x. However, a significant increase in response time for write requests was observed in the experiments.
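A much-simplified sketch of the idea described above: a cacheable decorator records which tables a read function depends on, and write operations invalidate the dependent cache entries. The real system derives the dependencies by parsing the SQL it observes; here the table names are passed explicitly, and all names are assumptions for illustration:

```python
import functools

cache = {}        # (function name, args) -> cached result
dependents = {}   # table name -> set of cache keys derived from it

def cacheable(tables):
    """Mark a read function as cacheable; `tables` lists the tables its queries
    read (Cachematic itself derives this automatically from observed SELECTs)."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args):
            key = (fn.__name__, args)
            if key not in cache:
                cache[key] = fn(*args)
                for table in tables:
                    dependents.setdefault(table, set()).add(key)
            return cache[key]
        return wrapper
    return decorator

def invalidate(table):
    """Call on INSERT/UPDATE/DELETE against `table`."""
    for key in dependents.pop(table, set()):
        cache.pop(key, None)

@cacheable(tables=["users"])
def get_user(user_id):
    return {"id": user_id}    # stand-in for a real SELECT against `users`

get_user(1)             # miss: executes the function and caches the result
get_user(1)             # hit: served from the cache
invalidate("users")     # a write to `users` evicts the dependent entries
```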

Patent
06 Jul 2018
TL;DR: In this patent, a processor and a method for command cache invalidation are proposed, where a plurality of threads and a command obtaining unit are configured to execute at least one command obtaining process on a first thread.
Abstract: The invention provides a processor and a method for command cache invalidation. The processor comprises a plurality of threads and a command obtaining unit, wherein the command obtaining unit is configured to execute at least one command obtaining process on a first thread of the plurality of threads, and the command obtaining process comprises a plurality of steps. Before an operation for execution of command cache invalidation, the current step of the command obtaining process being executed on the first thread is stopped so that the first thread enters a dormant state, wherein the dormant state refers to a restartable state after the thread stops working and the command cache invalidation operation is completed. Therefore, when executing a command cache invalidation, the time delay caused by waiting for an appropriate execution window on one or more threads can be reduced.