Topic: Cache invalidation
About: Cache invalidation is a research topic. Over its lifetime, 10,539 publications have appeared on this topic, receiving 245,409 citations.
Papers
TL;DR: This study reports how the data access rate and the data update distribution affect the cache performance in a wireless terminal and investigates the least-recently used replacement policy and two strongly consistent data access algorithms called poll-each-read and callback.
Abstract: In wireless data transmission, the capacity of wireless links is typically limited. Since many applications exhibit temporal locality in their data accesses, a cache can be built into a wireless terminal to effectively reduce data access time. This paper studies the cache performance of a wireless terminal by considering a business-card application. We investigate the least-recently-used replacement policy and two strongly consistent data access algorithms, poll-each-read and callback. An analytic model is proposed to derive the effective hit ratio of data access, and it is validated against simulation experiments. Our study reports how the data access rate and the data update distribution affect the cache performance of a wireless terminal.
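As a rough sketch of the two consistency schemes compared here, the Python below layers poll-each-read and callback invalidation over a small LRU cache. The Server/Terminal classes, the per-key version counter, and the single-server setup are illustrative assumptions, not details from the paper.

```python
from collections import OrderedDict

class Server:
    """Data source holding the master copy; versions model updates."""
    def __init__(self):
        self.data, self.version, self.subscribers = {}, {}, []

    def update(self, key, value):
        self.data[key] = value
        self.version[key] = self.version.get(key, 0) + 1
        for terminal in self.subscribers:       # callback: push invalidations
            terminal.invalidate(key)

class Terminal:
    """Wireless terminal with an LRU cache and a consistency scheme."""
    def __init__(self, server, capacity, scheme="callback"):
        self.server, self.capacity, self.scheme = server, capacity, scheme
        self.cache = OrderedDict()              # key -> (value, version)
        if scheme == "callback":
            server.subscribers.append(self)

    def invalidate(self, key):
        self.cache.pop(key, None)

    def read(self, key):
        if key in self.cache:
            value, version = self.cache[key]
            # poll-each-read: ask the server whether the copy is current
            if self.scheme != "poll" or version == self.server.version.get(key, 0):
                self.cache.move_to_end(key)     # cache hit: refresh LRU order
                return value
        value = self.server.data[key]           # miss: fetch over the wireless link
        self.cache[key] = (value, self.server.version.get(key, 0))
        self.cache.move_to_end(key)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)      # evict the least-recently used entry
        return value
```

In this toy form, poll-each-read pays a version check on every hit while callback shifts that cost to the update path, which is the kind of trade-off the paper's analytic model quantifies.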
77 citations
21 Jan 1998
TL;DR: A multi-level cache and a method for its operation are presented for processing multiple cache system accesses simultaneously and handling the interactions between the queues of the cache levels, with controller logic governing the interactions among the miss, victim, and write queues.
Abstract: A multi-level cache and a method for its operation are presented for processing multiple cache system accesses simultaneously and handling the interactions between the queues of the cache levels. The cache unit includes a non-blocking cache receiving data access requests from a functional unit in a processor, and a miss queue storing entries corresponding to data access requests not serviced by the non-blocking cache. A victim queue stores entries that have been evicted from the non-blocking cache, while a write queue buffers write requests into the non-blocking cache. Controller logic controls the interactions between the miss queue and the victim queue, between the miss queue and the write queue, and between the victim queue and the miss queue when processing cache misses.
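A hedged sketch of how these queues might interact, assuming a simple address-to-data model; the class and method names are illustrative, not taken from the patent.

```python
from collections import deque

class NonBlockingCache:
    """Toy non-blocking cache with the three queues from the abstract."""
    def __init__(self, capacity=4):
        self.lines, self.capacity = {}, capacity
        self.miss_queue = deque()       # outstanding misses sent to the next level
        self.victim_queue = deque()     # evicted lines awaiting write-back
        self.write_queue = deque()      # buffered writes into the cache

    def access(self, addr):
        if addr in self.lines:
            return self.lines[addr]     # hit: serve without blocking other requests
        # controller interaction: a miss may still be satisfied by data
        # sitting in the victim queue or the write queue
        for a, data in list(self.victim_queue) + list(self.write_queue):
            if a == addr:
                return data
        if addr not in self.miss_queue: # avoid duplicate outstanding misses
            self.miss_queue.append(addr)
        return None                     # non-blocking: caller retries after the fill

    def fill(self, addr, data):
        """The next cache level answers an outstanding miss."""
        self.miss_queue.remove(addr)
        if len(self.lines) >= self.capacity:
            victim = next(iter(self.lines))     # evict into the victim queue
            self.victim_queue.append((victim, self.lines.pop(victim)))
        self.lines[addr] = data

    def write(self, addr, data):
        self.write_queue.append((addr, data))   # buffered until the cache port is free
```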
77 citations
20 Sep 2012
TL;DR: Experimental results show that a novel cache system built on top of the Hadoop Distributed File System can store files across a wide range of sizes and delivers millisecond-level access performance in highly concurrent environments.
Abstract: Improving file access performance is a great challenge in real-time cloud services. In this paper, we analyze the preconditions for addressing this problem, considering requirements, hardware, software, and network environments in the cloud. We then describe the design and implementation of a novel distributed layered cache system built on top of the Hadoop Distributed File System, named the HDFS-based Distributed Cache System (HDCache). The cache system consists of a client library and multiple cache services. The cache services are designed with three access layers: an in-memory cache, a snapshot of the local disk, and the actual disk view as provided by HDFS. Files loaded from HDFS are cached in shared memory, which the client library can access directly. Multiple applications integrated with the client library can access a cache service simultaneously. Cache services are organized in a peer-to-peer style using a distributed hash table, and every cached file has three replicas on different cache service nodes to improve robustness and alleviate the workload. Experimental results show that the novel cache system can store files across a wide range of sizes and delivers millisecond-level access performance in highly concurrent environments.
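The placement and layering described above can be sketched as follows. The ring construction, the replica count of three, and the memory-snapshot-HDFS fallback order follow the abstract, while all identifiers and the use of MD5 for hashing are assumptions for illustration.

```python
import bisect
import hashlib

def h(key):
    """Hash a string onto the DHT ring (MD5 chosen only for the example)."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class CacheRing:
    """Cache services organized peer-to-peer on a distributed hash table."""
    def __init__(self, nodes, replicas=3):
        self.replicas = replicas
        self.ring = sorted((h(n), n) for n in nodes)

    def nodes_for(self, path):
        """Pick three successive ring nodes to hold a file's replicas."""
        i = bisect.bisect(self.ring, (h(path), ""))
        return [self.ring[(i + k) % len(self.ring)][1] for k in range(self.replicas)]

class CacheService:
    """One cache service with the three access layers from the abstract."""
    def __init__(self):
        self.memory = {}        # layer 1: shared in-memory cache
        self.snapshot = {}      # layer 2: snapshot on the local disk

    def read(self, path, hdfs):
        if path in self.memory:                 # fastest layer first
            return self.memory[path]
        if path in self.snapshot:               # promote from the local snapshot
            self.memory[path] = self.snapshot[path]
            return self.memory[path]
        data = hdfs[path]                       # layer 3: fall back to HDFS itself
        self.memory[path] = self.snapshot[path] = data
        return data
```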
77 citations
03 Dec 2003
TL;DR: This paper contributes a new technique to obtain predictability in preemptive multitasking systems in the presence of data caches by combining static cache analysis and cache locking mechanisms to ensure that all intra-task conflicts, and consequently, memory access times, are exactly predictable.
Abstract: Data caches are essential in modern processors, bridging the widening gap between main memory and processor speeds. However, they yield very complex performance models, which make it hard to bound execution times tightly. This paper contributes a new technique to obtain predictability in preemptive multitasking systems in the presence of data caches. We explore the use of cache partitioning, dynamic cache locking, and static cache analysis to provide worst-case performance estimates in a safe and tight way. Cache partitioning divides the cache among tasks to eliminate inter-task cache interferences. We combine static cache analysis and cache locking mechanisms to ensure that all intra-task conflicts, and consequently, memory access times, are exactly predictable. To minimize the performance degradation due to cache partitioning and locking, two strategies are employed. First, the cache is loaded with data likely to be accessed, so that cache utilization is maximized. Second, compiler optimizations such as tiling and padding are applied to reduce cache replacement misses. Experimental results show that this scheme is fully predictable without compromising the performance of the transformed programs. Our method outperforms static cache locking for all analyzed task sets under various cache architectures, with a CPU utilization reduction ranging between 3.8 and 20.0 times for a high-performance system.
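A minimal sketch of the combined partitioning-and-locking idea, assuming fixed hit and miss latencies and one locked address per cache set; none of the names or numbers below come from the paper.

```python
class PartitionedLockedCache:
    """Per-task cache partitions whose contents are locked before execution."""
    HIT_CYCLES, MISS_CYCLES = 1, 100        # assumed latencies for the example

    def __init__(self, num_sets, tasks):
        per_task = num_sets // len(tasks)   # partitioning removes inter-task conflicts
        self.partition = {t: set() for t in tasks}
        self.budget = {t: per_task for t in tasks}

    def lock(self, task, addresses):
        """Statically load and lock the data most likely to be accessed."""
        for addr in addresses[: self.budget[task]]:
            self.partition[task].add(addr)  # locked lines are never replaced

    def access_cost(self, task, addr):
        # Exactly predictable: a locked line always hits, anything else
        # always goes to memory, so WCET analysis needs no replacement model.
        if addr in self.partition[task]:
            return self.HIT_CYCLES
        return self.MISS_CYCLES
```

The design point this illustrates is that predictability comes from removing replacement decisions entirely: locking fixes which accesses hit, and partitioning stops a preempting task from disturbing them.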
77 citations
18 Sep 2009
TL;DR: A plurality of mid-tier databases form a single, consistent cache grid for data in one or more backend data sources, such as a database system, with consistency in the grid maintained by ownership locks.
Abstract: A plurality of mid-tier databases form a single, consistent cache grid for data in one or more backend data sources, such as a database system. The mid-tier databases may be standard relational databases. Cache agents at each mid-tier database swap in data from the backend database as needed. Consistency in the cache grid is maintained by ownership locks. Cache agents prevent database operations that would modify cached data in a mid-tier database unless and until ownership of the cached data can be acquired for that mid-tier database. Cache groups define what backend data may be cached, as well as the general structure in which the backend data is to be cached. Metadata for cache groups is shared to ensure that data is cached in the same form throughout the entire grid. Ownership of cached data can then be tracked through a mapping of cached instances of data to particular mid-tier databases.
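A minimal sketch of the ownership protocol the abstract outlines, assuming a single in-process grid object in place of real distributed locking; all names are illustrative.

```python
class CacheGrid:
    """Shared state for the grid: ownership map plus a stand-in backend."""
    def __init__(self):
        self.owner = {}                     # cached instance key -> owning node
        self.backend = {}                   # stand-in for the backend database

class MidTierDatabase:
    """One mid-tier database with a local cache and a cache agent's checks."""
    def __init__(self, name, grid):
        self.name, self.grid = name, grid
        self.cache = {}

    def read(self, key):
        if key not in self.cache:           # cache agent swaps data in on demand
            self.cache[key] = self.grid.backend.get(key)
        return self.cache[key]

    def write(self, key, value):
        owner = self.grid.owner.get(key)
        if owner not in (None, self.name):  # blocked until ownership can move here
            raise RuntimeError(f"{key} is owned by {owner}; acquire ownership first")
        self.grid.owner[key] = self.name    # track ownership: instance -> this node
        self.cache[key] = value
        self.grid.backend[key] = value      # propagate to the backend data source
```

In this toy form, two nodes reading the same key can share it freely, but a write from one node first records that node as owner, so a conflicting write from another node fails until ownership is transferred.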
77 citations