
Cache invalidation

About: Cache invalidation is a research topic. Over its lifetime, 10,539 publications have been published on this topic, receiving 245,409 citations.


Papers
Proceedings ArticleDOI
12 Oct 2009
TL;DR: A scheduling strategy for real-time tasks with both timing and cache space constraints is presented, which allows each task to use a fixed number of cache partitions, and makes sure that at any time a cache partition is occupied by at most one running task.
Abstract: The major obstacle to using multicores for real-time applications is that we cannot predict or guarantee the real-time properties of embedded software on such platforms; the way of handling on-chip shared resources such as the L2 cache may have a significant impact on timing predictability. In this paper, we propose to use cache space isolation techniques to avoid cache contention for hard real-time tasks running on multicores with shared caches. We present a scheduling strategy for real-time tasks with both timing and cache space constraints, which allows each task to use a fixed number of cache partitions and ensures that at any time a cache partition is occupied by at most one running task. In this way, the cache spaces of tasks are isolated at run-time. As technical contributions, we have developed a sufficient schedulability test for non-preemptive fixed-priority scheduling for multicores with a shared L2 cache, encoded as a linear programming problem. To improve the scalability of the test, we then present a second schedulability test of quadratic complexity, which is an over-approximation of the first test. To evaluate the performance and scalability of our techniques, we use randomly generated task sets. Our experiments show that the first test, which employs an LP solver, can easily handle task sets with thousands of tasks in minutes on a desktop computer. The second test is comparable to the first in precision but scales much better due to its low complexity, making it a good candidate for efficient schedulability tests in the design loop for embedded systems or as an on-line test for admission control.

131 citations
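The paper's core run-time invariant — each running task holds a fixed number of cache partitions, and no partition is shared — can be sketched as a simple admission check. This is a minimal illustration, not the paper's schedulability test; the task fields, partition count, and function names are assumptions for the example.

```python
# Sketch of the cache-space-isolation invariant: a task may start only if
# enough unused partitions remain, so at any time each cache partition is
# occupied by at most one running task. Names and sizes are illustrative.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    partitions: int  # cache partitions the task needs while running

TOTAL_PARTITIONS = 16

def can_start(task, running):
    """Admission check for non-preemptive start of `task`."""
    used = sum(t.partitions for t in running)
    return used + task.partitions <= TOTAL_PARTITIONS

running = [Task("t1", 6), Task("t2", 4)]
print(can_start(Task("t3", 4), running))  # True: 10 + 4 <= 16
print(can_start(Task("t4", 8), running))  # False: 10 + 8 > 16
```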

Proceedings ArticleDOI
14 Jun 2015
TL;DR: A new information theoretic lower bound on the fundamental cache storage vs. transmission rate tradeoff is developed, which strictly improves upon the best known existing bounds.
Abstract: Caching is a viable solution for alleviating the severe capacity crunch in modern content-centric wireless networks. Parts of popular files are pre-stored in users' cache memories such that at times of heavy demand, users can be served locally from their cache content, thereby reducing the peak network load. In this work, we consider a central-server-assisted caching network where files are jointly delivered to users through multicast transmissions. For such a network, we develop a new information theoretic lower bound on the fundamental cache storage vs. transmission rate tradeoff, which strictly improves upon the best known existing bounds. The new bounds are used to establish the approximate storage vs. rate tradeoff of centralized caching to within a constant multiplicative factor of 8.

131 citations
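The storage vs. rate tradeoff the abstract refers to can be illustrated numerically. A hedged sketch: the achievable rate below is the well-known centralized coded-caching expression, and the bound shown is the simple cut-set bound, not the tighter bound this paper derives; the parameter values and function names are illustrative.

```python
# Storage (M) vs. multicast rate (R) tradeoff for N files and K users.
def achievable_rate(M, N, K):
    """Classic centralized coded-caching rate when each user caches M of N
    files (exact at memory points where K*M/N is an integer)."""
    return K * (1 - M / N) / (1 + K * M / N)

def cutset_lower_bound(M, N, K):
    """Simple cut-set lower bound on the optimal rate R*(M)."""
    best = max(s - s * M / (N // s) for s in range(1, min(N, K) + 1))
    return max(0.0, best)

N, K = 20, 10
for M in (0, 2, 10, 20):
    print(f"M={M}: rate <= {achievable_rate(M, N, K):.2f}, "
          f"rate >= {cutset_lower_bound(M, N, K):.2f}")
```

Larger caches trade storage for rate: at M=0 every user must be served over the shared link (rate K), while at M=N the rate is zero.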

Patent
08 May 2001
TL;DR: In this patent, an application caching system and method are provided in which one or more applications may be cached throughout a distributed computer network; the system may include a central cache directory server, one or more distributed master application servers, and one or more distributed application cache servers.
Abstract: An application caching system and method are provided wherein one or more applications may be cached throughout a distributed computer network (24). The system may include a central cache directory server (30), one or more distributed master application servers (28) and one or more distributed application cache servers (26). The system may permit a service, such as a search, to be provided to the user more quickly.

131 citations
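The architecture the patent describes — a central directory that maps applications to the cache servers holding them — can be sketched as a lookup with fallback to the master server. The dict-based "directory", all host names, and the first-server selection policy are assumptions for illustration, not details from the patent.

```python
# Minimal sketch of the patent's lookup flow: consult the central cache
# directory for servers caching the application; fall back to the master
# application server when no cached copy exists. Names are illustrative.
directory = {
    "search-app": ["cache-east.example", "cache-west.example"],
}

def resolve(app, master="master.example"):
    """Prefer a distributed application cache server over the master."""
    servers = directory.get(app)
    return servers[0] if servers else master

print(resolve("search-app"))  # cache-east.example
print(resolve("report-app"))  # master.example (not cached anywhere)
```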

Journal ArticleDOI
TL;DR: This paper develops an analytical model for cache-reload transients and compares the model to observations based on several address traces and shows that the size of the transient is related to the normal distribution function.
Abstract: This paper develops an analytical model for cache-reload transients and compares the model to observations based on several address traces. The cache-reload transient is the set of cache misses that occur when a process is reinitiated after being suspended temporarily. For example, an interrupt program that runs periodically experiences a reload transient at each initiation. The reload transient depends on the cache size and on the sizes of the footprints in the cache of the competing programs, where a program footprint is defined to be the set of lines in the cache in active use by the program. The model shows that the size of the transient is related to the normal distribution function. A simulation based on program-address traces shows excellent agreement between the model and the observations.

131 citations
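The reload-transient idea above can be demonstrated with a toy Monte Carlo simulation: process A's footprint occupies some cache lines, a competing process B runs while A is suspended, and A's extra misses on reinitiation are the lines B displaced. The uniformly random placement model, trial count, and names are simplifying assumptions, not the paper's analytical model.

```python
# Toy simulation of a cache-reload transient for two competing footprints.
import random

def reload_transient(cache_lines, footprint_a, footprint_b, trials=2000):
    """Average number of A's cached lines displaced by B, i.e. the extra
    misses A takes when reinitiated after suspension."""
    misses = 0
    for _ in range(trials):
        slots_a = set(random.sample(range(cache_lines), footprint_a))
        slots_b = set(random.sample(range(cache_lines), footprint_b))
        misses += len(slots_a & slots_b)  # A's lines evicted by B
    return misses / trials

random.seed(1)
print(reload_transient(1024, 200, 300))  # close to 200*300/1024 ≈ 58.6
```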

Patent
25 Jul 2003
TL;DR: In this article, a disk drive is described whose cache control system generates scan results that permit responding to a host command using existing cached data whose logical block address (LBA) range overlaps the host command's LBA range.
Abstract: The present invention relates to a disk drive having a cache control system that generates scan results that permit response to a host command using existing cached data having a logical block address (LBA) range that overlaps the host command LBA range. The cache control system forms variable-length segments of memory clusters in a cache memory for caching disk data in contiguous LBA ranges. The cached LBA ranges are scanned for segments having LBA ranges overlapping with the LBA range of a host command. The cache control system is effective in exploiting any existing overlapping cache data.

131 citations
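The scan the patent describes amounts to an interval-overlap test between the host command's LBA range and each cached segment's range. A minimal sketch, assuming a simple (start, length) segment representation; the layout and names are illustrative, not the patent's data structures.

```python
# Scan cached segments for LBA ranges overlapping a host command's range,
# so existing cached data can service part of the request.
def overlapping_segments(segments, cmd_start, cmd_len):
    """Return cached (start, length) segments whose LBA range intersects
    the host command's [cmd_start, cmd_start + cmd_len) range."""
    cmd_end = cmd_start + cmd_len  # exclusive
    hits = []
    for seg_start, seg_len in segments:
        # Two half-open ranges overlap iff each starts before the other ends.
        if seg_start < cmd_end and cmd_start < seg_start + seg_len:
            hits.append((seg_start, seg_len))
    return hits

cached = [(0, 64), (100, 32), (200, 16)]
print(overlapping_segments(cached, 90, 50))   # [(100, 32)]
print(overlapping_segments(cached, 300, 10))  # []
```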


Network Information
Related Topics (5)
Cache
59.1K papers, 976.6K citations
93% related
Scalability
50.9K papers, 931.6K citations
88% related
Server
79.5K papers, 1.4M citations
88% related
Network packet
159.7K papers, 2.2M citations
83% related
Dynamic Source Routing
32.2K papers, 695.7K citations
83% related
Performance Metrics
No. of papers in the topic in previous years:

Year	Papers
2023	44
2022	117
2021	4
2020	8
2019	7
2018	20