Topic

Smart Cache

About: Smart Cache is a research topic. Over the lifetime, 7680 publications have been published within this topic receiving 180618 citations.


Papers
Patent
Gregory Tad Kishi
05 Sep 2003
TL;DR: An apparatus, system, and method are provided for flushing data from a cache to secondary storage, identifying predefined high-priority cache structures and predefined low-priority cache structures.
Abstract: An apparatus, system, and method is provided for flushing data from a cache to secondary storage. The apparatus, system, and method identifies predefined high priority cache structures and predefined low priority cache structures. The apparatus, system, and method selectively flushes low priority cache structures according to a first scheme when the cache is under a demand load and according to a second scheme when the cache is under substantially no demand load. The first scheme is defined to flush low priority cache structures as efficiently as possible and the second scheme is defined to flush low priority cache structures in a less efficient manner.
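The two flushing schemes described above can be sketched in Python. This is a minimal illustration of the idea, not the patented implementation: the class name, batch sizes, and the use of deques for cache structures are all assumptions.

```python
from collections import deque

class PriorityFlushCache:
    """Sketch of a cache holding high- and low-priority structures.

    Low-priority entries are flushed aggressively (in one batch) when
    the cache is under demand load, and trickled out one at a time when
    it is under substantially no load.
    """

    def __init__(self):
        self.high_priority = deque()
        self.low_priority = deque()
        self.flushed = []           # stands in for secondary storage

    def _flush_batch(self, n):
        for _ in range(min(n, len(self.low_priority))):
            self.flushed.append(self.low_priority.popleft())

    def flush_low_priority(self, under_demand_load):
        if under_demand_load:
            # First scheme: flush as efficiently as possible (whole queue).
            self._flush_batch(len(self.low_priority))
        else:
            # Second scheme: flush lazily, one entry per call.
            self._flush_batch(1)
```

Under load, the whole low-priority queue is drained at once; idle flushing proceeds incrementally so the cache stays responsive to new demand.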

50 citations

Patent
15 Feb 1996
TL;DR: By expanding the cache address invalidation queue into bit slices holding odd invalidation addresses and even invalidation addresses, and by providing a more efficient series of transition cycles to accomplish cache invalidations during both cache hit and cache miss cycles, this architecture permits a faster cycle of address invalidations when required, and also permits a higher frequency of processor access to the cache without the processor being completely locked out during heavy traffic and high levels of cache invalidations.
Abstract: By expanding the cache address invalidation queue into bit slices for holding odd invalidation addresses and even invalidation addresses, and also by providing a more efficient series of transition cycles to accomplish cache address invalidations during both cache hit and cache miss cycles, the present architecture and methodology permit a faster cycle of cache address invalidations when required and also permit a higher frequency of processor access to cache without the processor being completely locked out from cache memory access during heavy traffic and high levels of cache invalidations.
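The odd/even bit-slice idea can be sketched as two parallel queues that are drained together, so two invalidation addresses retire per transition cycle. The class name and the one-per-slice drain rate are illustrative assumptions, not details from the patent.

```python
from collections import deque

class SlicedInvalidationQueue:
    """Sketch of an invalidation queue split into odd/even address
    slices, allowing two addresses to be retired per transition cycle
    instead of one."""

    def __init__(self):
        self.even = deque()
        self.odd = deque()

    def enqueue(self, addr):
        # Route each invalidation address to its parity slice.
        (self.even if addr % 2 == 0 else self.odd).append(addr)

    def drain_cycle(self):
        """One transition cycle: retire up to one address per slice."""
        retired = []
        if self.even:
            retired.append(self.even.popleft())
        if self.odd:
            retired.append(self.odd.popleft())
        return retired
```

Because each cycle services both slices, a backlog of invalidations clears in roughly half the cycles of a single queue, leaving more cycles free for processor access.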

50 citations

Proceedings ArticleDOI
19 Jun 2014
TL;DR: This work presents a distributed proactive caching approach that exploits user mobility information to decide where to proactively cache data to support seamless mobility, while efficiently utilizing cache storage using a congestion pricing scheme.
Abstract: We present a distributed proactive caching approach that exploits user mobility information to decide where to proactively cache data to support seamless mobility, while efficiently utilizing cache storage using a congestion pricing scheme. The proposed approach is applicable to the case where objects have different sizes and to a two-level cache hierarchy, for both of which the proactive caching problem is hard. Our evaluation results show how various system parameters influence the delay gains of the proposed approach, which achieves robust and good performance relative to an oracle and an optimal scheme for a flat cache structure.
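The congestion-pricing placement decision can be sketched as follows. The linear utilization-based price function, the field names, and the gain-versus-price acceptance rule are assumptions for illustration; the paper's exact pricing scheme may differ.

```python
def choose_cache(candidate_caches, obj_size, gain):
    """Sketch of congestion-priced proactive placement: put an object
    at the candidate cache whose congestion price (which rises with
    utilization) is lowest, and only if the mobility-derived delay
    gain exceeds that price."""
    best, best_price = None, float("inf")
    for cache in candidate_caches:
        util = cache["used"] / cache["capacity"]
        # Price grows without bound as the cache approaches capacity.
        price = obj_size * util / (1.0 - util) if util < 1.0 else float("inf")
        if price < best_price:
            best, best_price = cache, price
    if best is not None and gain > best_price:
        best["used"] += obj_size
        return best["name"]
    return None
```

The effect is that nearly full caches price themselves out of contention, so proactive copies concentrate where storage is cheap and the expected mobility gain justifies the space.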

50 citations

Proceedings ArticleDOI
18 Nov 2013
TL;DR: The reverse magnetic tunnel junction (MTJ) is introduced into the MLC cell design, which offers a more balanced device and design tradeoff and enables 2x the storage density of SLC.
Abstract: In this paper, we study the use of multi-level cell (MLC) spin-transfer torque RAM (STT-RAM) in cache design of embedded systems and microprocessors. Compared to the single level cell (SLC) design, a MLC STT-RAM cache is expected to offer higher density and faster system performance. However, the cell design constraints, such as the switching current requirement and asymmetry in write operations, severely limit the density benefit of the conventional MLC STT-RAM. The two-step read/write accesses and inflexible data mapping strategy in the existing MLC STT-RAM cache architecture may even result in system performance degradation. To unleash the real potential of MLC STT-RAM cache, we propose a cross-layer solution. First, we introduce the reverse magnetic tunnel junction (MTJ) into MLC cell design, which offers a more balanced device and design tradeoff and enables 2x the storage density of SLC. At the architectural level, we propose a cell split mapping method to divide cache lines into fast and slow regions and data migration policies to allocate the frequently-used data to fast regions. Furthermore, an application-aware speed enhancement mode is utilized to adaptively trade off cache capacity and speed, satisfying different requirements of various applications. Simulation results show that the proposed techniques can improve the system performance by 10.3% and reduce the energy consumption on cache by 26.0% compared with conventional MLC STT-RAM.
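The cell-split mapping and migration policy can be sketched at a very high level: ways of a set are split into a fast region and a slow region, and repeatedly accessed data is swapped into the fast region. The latencies, one-way region sizes, and the 2-hit migration threshold here are illustrative assumptions, not the paper's parameters.

```python
FAST_LAT, SLOW_LAT = 1, 3   # illustrative access latencies, in cycles

class SplitMappedSet:
    """Sketch of cell-split mapping: a set's ways are divided into a
    fast region and a slow region; hot data migrates from slow to fast
    by swapping with the current fast occupant."""

    def __init__(self):
        self.fast = ["A"]        # one fast way and one slow way, for brevity
        self.slow = ["B"]
        self.hits = {"A": 0, "B": 0}

    def access(self, tag):
        self.hits[tag] += 1
        if tag in self.fast:
            return FAST_LAT
        lat = SLOW_LAT
        if self.hits[tag] >= 2:  # hot: swap into the fast region
            i = self.slow.index(tag)
            self.slow[i], self.fast[0] = self.fast[0], tag
        return lat
```

After a tag becomes hot and migrates, subsequent accesses are served at the fast-region latency, while the displaced tag pays the slow-region latency until it re-earns a fast slot.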

50 citations

Proceedings ArticleDOI
02 Jul 2008
TL;DR: A combination of positioning and the WCET-driven Procedure Cloning optimization proposed in [14] is presented, improving the WCET analysis by 36% on average.
Abstract: Procedure Positioning is a well known compiler optimization aiming at the improvement of the instruction cache behavior. A contiguous mapping of procedures calling each other frequently in the memory avoids overlapping of cache lines and thus decreases the number of cache conflict misses. In standard literature, these positioning techniques are guided by execution profile data and focus on an improved average-case performance. We present two novel positioning optimizations driven by worst-case execution time (WCET) information to effectively minimize the program's worst-case behavior. WCET reductions of 10% on average are achieved. Moreover, a combination of positioning and the WCET-driven Procedure Cloning optimization proposed in [14] is presented, improving the WCET analysis by 36% on average.
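The core positioning step can be sketched as a greedy chain merge: the most heavily interacting procedure pairs (weighted here by worst-case call counts) are laid out contiguously first. This greedy scheme and the function name are simplifying assumptions; the paper's actual WCET-driven algorithms are more involved.

```python
def position_procedures(wcet_call_weights):
    """Sketch of WCET-driven procedure positioning: greedily merge
    procedures into contiguous memory chains, heaviest worst-case
    caller/callee pairs first, so their cache lines do not conflict."""
    chains = []   # each chain is a list of procedures laid out contiguously

    def find(p):
        for c in chains:
            if p in c:
                return c
        c = [p]
        chains.append(c)
        return c

    # Process caller/callee pairs from heaviest WCET weight down.
    for (a, b), _ in sorted(wcet_call_weights.items(),
                            key=lambda kv: -kv[1]):
        ca, cb = find(a), find(b)
        if ca is not cb:          # merge the two chains end to end
            ca.extend(cb)
            chains.remove(cb)

    # Final memory order: concatenate the remaining chains.
    return [p for c in chains for p in c]
```

Procedures on the program's worst-case path end up adjacent in memory, which is exactly the placement that avoids the conflict misses the abstract describes.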

50 citations


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations (92% related)
Server: 79.5K papers, 1.4M citations (88% related)
Scalability: 50.9K papers, 931.6K citations (88% related)
Network packet: 159.7K papers, 2.2M citations (85% related)
Quality of service: 77.1K papers, 996.6K citations (84% related)
Performance Metrics
No. of papers in the topic in previous years:
2023: 50
2022: 114
2021: 5
2020: 1
2019: 8
2018: 18