scispace - formally typeset

Smart Cache

About: Smart Cache is a research topic. Over its lifetime, 7,680 publications have been published on this topic, receiving 180,618 citations.


Papers
Proceedings ArticleDOI
12 Oct 2009
TL;DR: A scheduling strategy for real-time tasks with both timing and cache space constraints is presented, which allows each task to use a fixed number of cache partitions, and makes sure that at any time a cache partition is occupied by at most one running task.
Abstract: The major obstacle to using multicores for real-time applications is that we cannot predict or provide guarantees on the real-time properties of embedded software on such platforms; the way on-chip shared resources such as the L2 cache are handled may have a significant impact on timing predictability. In this paper, we propose to use cache space isolation techniques to avoid cache contention for hard real-time tasks running on multicores with shared caches. We present a scheduling strategy for real-time tasks with both timing and cache space constraints, which allows each task to use a fixed number of cache partitions and ensures that at any time a cache partition is occupied by at most one running task. In this way, the cache spaces of tasks are isolated at run-time. As technical contributions, we have developed a sufficient schedulability test for non-preemptive fixed-priority scheduling on multicores with a shared L2 cache, encoded as a linear programming problem. To improve the scalability of the test, we then present a second schedulability test of quadratic complexity, which is an over-approximation of the first test. To evaluate the performance and scalability of our techniques, we use randomly generated task sets. Our experiments show that the first test, which employs an LP solver, can easily handle task sets with thousands of tasks in minutes on a desktop computer. The second test is comparable with the first in terms of precision but scales much better due to its low complexity, and is therefore a good candidate for efficient schedulability tests in the design loop for embedded systems or as an on-line test for admission control.

131 citations
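
The start condition in the scheduling strategy above can be illustrated with a small sketch: a ready task may begin only when a core and enough free cache partitions are available, so that each partition is held by at most one running task at a time. The names (`Task`, `can_start`, `schedule_step`) are illustrative, not the paper's API, and the greedy step stands in for the full non-preemptive fixed-priority scheduler.

```python
# Simplified sketch of cache-space-isolation admission, assuming tasks in
# `ready` are already ordered by priority (highest first).
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    partitions_needed: int  # fixed number of cache partitions the task uses

def can_start(task, free_cores, free_partitions):
    """Non-preemptive start condition: one free core and enough partitions."""
    return free_cores >= 1 and free_partitions >= task.partitions_needed

def schedule_step(ready, free_cores, total_partitions, running):
    """Greedily start ready tasks under both the core and partition limits."""
    used = sum(t.partitions_needed for t in running)
    started = []
    for task in ready:
        if can_start(task, free_cores, total_partitions - used):
            running.append(task)
            started.append(task)
            free_cores -= 1
            used += task.partitions_needed
    return started

tasks = [Task("t1", 4), Task("t2", 3), Task("t3", 2)]
started = schedule_step(tasks, free_cores=2, total_partitions=6, running=[])
print([t.name for t in started])  # ['t1', 't3']: t2's 3 partitions would
                                  # exceed the 2 left free after t1 starts
```

The schedulability tests in the paper go further, bounding worst-case behavior over all release patterns; the sketch only captures the run-time isolation invariant.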

Proceedings ArticleDOI
14 Jun 2015
TL;DR: A new information theoretic lower bound on the fundamental cache storage vs. transmission rate tradeoff is developed, which strictly improves upon the best known existing bounds.
Abstract: Caching is a viable solution for alleviating the severe capacity crunch in modern content-centric wireless networks. Parts of popular files are pre-stored in users' cache memories such that at times of heavy demand, users can be served locally from their cache content, thereby reducing the peak network load. In this work, we consider a central-server-assisted caching network where files are jointly delivered to users through multicast transmissions. For such a network, we develop a new information-theoretic lower bound on the fundamental cache storage vs. transmission rate tradeoff, which strictly improves upon the best known existing bounds. The new bounds are used to establish the approximate storage vs. rate tradeoff of centralized caching to within a constant multiplicative factor of 8.

131 citations
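
For context on the tradeoff the abstract refers to, the sketch below computes the standard centralized coded-caching achievable rate and the classic cut-set lower bound from the earlier literature (with N files, K users, and cache size M in units of files). The paper's improved information-theoretic bound is more involved and is not reproduced here.

```python
# Standard baselines for the storage-vs-rate tradeoff (not the paper's new
# bound): achievable rate K(1 - M/N) / (1 + KM/N) and the cut-set bound
# max over s of s - s*M/floor(N/s).
from math import floor

def achievable_rate(N, K, M):
    """Centralized coded-caching delivery rate (Maddah-Ali/Niesen)."""
    return K * (1 - M / N) / (1 + K * M / N)

def cutset_lower_bound(N, K, M):
    """Cut-set lower bound on the optimal rate for cache size M."""
    return max(s - s * M / floor(N / s) for s in range(1, min(N, K) + 1))

N, K, M = 10, 10, 2
print(achievable_rate(N, K, M))     # 8/3, about 2.67
print(cutset_lower_bound(N, K, M))  # 1.2
```

The gap between the two curves is what the paper tightens: its new lower bound narrows the multiplicative factor between the best known achievable rate and the converse to 8.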

Patent
08 May 2001
TL;DR: An application caching system and method are provided wherein one or more applications may be cached throughout a distributed computer network (24); the system may include a central cache directory server, one or more distributed master application servers, and one or more distributed application cache servers.
Abstract: An application caching system and method are provided wherein one or more applications may be cached throughout a distributed computer network (24). The system may include a central cache directory server (30), one or more distributed master application servers (28) and one or more distributed application cache servers (26). The system may permit a service, such as a search, to be provided to the user more quickly.

131 citations

Proceedings ArticleDOI
17 Jun 2001
TL;DR: In this paper, an analytical cache model for time-shared systems is presented, which estimates the overall cache miss-rate of a multiprocessing system with any cache size and time quanta.
Abstract: An accurate, tractable, analytic cache model for time-shared systems is presented, which estimates the overall cache miss-rate of a multiprocessing system with any cache size and time quanta. The input to the model consists of the isolated miss-rate curves for each process, the time quanta for each of the executing processes, and the total cache size. The output is the overall miss-rate. Trace-driven simulations demonstrate that the estimated miss-rate is very accurate. Since the model provides a fast and accurate way to estimate the effect of context switching, it is useful for both understanding the effect of context switching on caches and optimizing cache performance for time-shared systems. A cache partitioning mechanism is also presented and is shown to improve the cache miss-rate up to 25% over the normal LRU replacement policy.

130 citations
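
The model's interface, as described in the abstract, can be sketched as a function from isolated miss-rate curves, time quanta, and total cache size to a single overall miss rate. The weighting below (each process sees a cache share proportional to its quantum and contributes in proportion to its quantum) is a deliberate simplification for illustration, not the paper's derivation.

```python
# Hedged sketch of the analytic cache model's inputs and output only.
def overall_miss_rate(miss_curves, quanta, cache_size):
    """miss_curves: per-process functions mapping a cache share to the
    isolated miss rate; quanta: per-process time quanta (reference counts).
    Returns a single overall miss-rate estimate."""
    total = sum(quanta)
    rate = 0.0
    for curve, q in zip(miss_curves, quanta):
        share = cache_size * q / total      # simplistic cache-share split
        rate += (q / total) * curve(share)  # quantum-weighted contribution
    return rate

# Toy isolated miss-rate curves: miss rate falls as the cache share grows.
curves = [lambda c: 1.0 / (1.0 + c), lambda c: 0.5 / (1.0 + 0.5 * c)]
print(round(overall_miss_rate(curves, quanta=[100, 300], cache_size=16), 4))
```

In the paper, the curves come from trace-driven measurement of each process in isolation, and the model accounts for how context switching displaces cached data between quanta, which this sketch ignores.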

Journal ArticleDOI
TL;DR: An adaptive algorithm for managing fully associative cache memories shared by several identifiable processes is presented and it is shown that such an increase in hit-ratio in a system with a heavy throughput of I/O requests can provide a significant decrease in disk response time.
Abstract: An adaptive algorithm for managing fully associative cache memories shared by several identifiable processes is presented. The on-line algorithm extends an earlier model due to H.S. Stone et al. (1989) and partitions the cache storage in disjoint blocks whose sizes are determined by the locality of the processes accessing the cache. Simulation results of traces for 32-MB disk caches show a relative improvement in the overall and read hit-ratios in the range of 1% to 2% over those generated by a conventional least recently used replacement algorithm. The analysis of a queuing network model shows that such an increase in hit-ratio in a system with a heavy throughput of I/O requests can provide a significant decrease in disk response time.

130 citations
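
The partitioning idea above can be sketched minimally: each process gets a disjoint block budget and runs its own LRU within it, instead of all processes competing in one global LRU. In this sketch the partition sizes are fixed by the caller; the paper's contribution is adapting them on-line to each process's locality, which is not reproduced here.

```python
# Minimal per-process partitioned LRU cache (illustrative names throughout).
from collections import OrderedDict

class PartitionedLRU:
    def __init__(self, partition_sizes):
        self.sizes = dict(partition_sizes)             # process -> block budget
        self.parts = {p: OrderedDict() for p in self.sizes}

    def access(self, process, block):
        """Return True on hit; on miss, insert and evict LRU if over budget."""
        part = self.parts[process]
        if block in part:
            part.move_to_end(block)                    # mark most recently used
            return True
        part[block] = None
        if len(part) > self.sizes[process]:
            part.popitem(last=False)                   # evict least recent
        return False

cache = PartitionedLRU({"A": 2, "B": 1})
hits = [cache.access("A", b) for b in (1, 2, 1, 3, 2)]
print(hits)  # [False, False, True, False, False]
```

Because partitions are disjoint, a process with poor locality cannot evict another process's working set, which is the mechanism behind the hit-ratio gains the abstract reports.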


Network Information

Related Topics (5)
- Cache: 59.1K papers, 976.6K citations (92% related)
- Server: 79.5K papers, 1.4M citations (88% related)
- Scalability: 50.9K papers, 931.6K citations (88% related)
- Network packet: 159.7K papers, 2.2M citations (85% related)
- Quality of service: 77.1K papers, 996.6K citations (84% related)
Performance Metrics

No. of papers in the topic in previous years:

Year  Papers
2023  50
2022  114
2021  5
2020  1
2019  8
2018  18