Topic
Smart Cache
About: Smart Cache is a research topic. Over its lifetime, 7680 publications have been published within this topic, receiving 180618 citations.
Papers
04 Mar 2002
TL;DR: Introduces a decode filter cache that provides a decoded instruction stream, classifying instructions as cacheable or uncacheable depending on their decoded width; hits in the decode filter cache yield power savings in both instruction fetch and instruction decode.
Abstract: In embedded processors, instruction fetch and decode can consume more than 40% of processor power. An instruction filter cache can be placed between the CPU core and the instruction cache to service the instruction stream. Power savings in instruction fetch result from accesses to a small cache. In this paper, we introduce a decode filter cache to provide a decoded instruction stream. On a hit in the decode filter cache, fetching from the instruction cache and the subsequent decoding are eliminated, which results in power savings in both instruction fetch and instruction decode. We propose to classify instructions as cacheable or uncacheable depending on their decoded width. A sectored cache design is then used in the decode filter cache so that cacheable and uncacheable instructions can coexist in a decode filter cache sector. Finally, a prediction mechanism is presented to reduce the decode filter cache miss penalty. Experimental results show an average processor power reduction of 34% with less than 1% performance degradation.
57 citations
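The filter-cache idea in the abstract above can be sketched in code. This is an illustrative model only: the class name, cache geometry, and the 32-bit width threshold are assumptions for the sketch, not the paper's actual design parameters.

```python
# Sketch of a decode filter cache: instructions whose decoded form fits a
# fixed slot width are "cacheable" and stored already decoded; a hit skips
# both the instruction-cache fetch and the decode stage. Width threshold
# and cache geometry below are illustrative assumptions.

MAX_DECODED_BITS = 32  # assumed slot width for a cacheable decoded instruction

class DecodeFilterCache:
    def __init__(self, num_lines=16):
        self.num_lines = num_lines
        self.lines = {}  # index -> (tag, decoded instruction)

    def _index_tag(self, pc):
        return pc % self.num_lines, pc // self.num_lines

    def lookup(self, pc):
        """Return the decoded instruction on a hit, else None."""
        idx, tag = self._index_tag(pc)
        entry = self.lines.get(idx)
        if entry and entry[0] == tag:
            return entry[1]  # hit: fetch and decode are both skipped
        return None

    def fill(self, pc, decoded, decoded_bits):
        # Only narrow ("cacheable") instructions are stored; wide ones stay
        # uncacheable and always take the normal fetch/decode path.
        if decoded_bits <= MAX_DECODED_BITS:
            idx, tag = self._index_tag(pc)
            self.lines[idx] = (tag, decoded)

dfc = DecodeFilterCache()
dfc.fill(0x40, "add r1,r2,r3", decoded_bits=28)  # narrow: stored
dfc.fill(0x44, "wide-bundle", decoded_bits=96)   # too wide: not stored
assert dfc.lookup(0x40) == "add r1,r2,r3"        # hit, decode skipped
assert dfc.lookup(0x44) is None                  # miss, normal path
```

The paper's sectored design, which lets cacheable and uncacheable instructions share a sector, is not modeled here.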
TL;DR: The CLCE replication scheme reduces the redundant caching of contents and hence improves cache space utilization; LFRU approximates a least-frequently-used scheme coupled with a least-recently-used scheme and is practically implementable for rapidly changing cache networks like ICNs.
Abstract: To cope with the ongoing changing demands of the Internet, ‘in-network caching’ has been presented as an application solution for two decades. With the advent of information-centric network (ICN) architecture, ‘in-network caching’ becomes a network-level solution. Some unique features of ICNs, e.g., rapidly changing cache states, higher request arrival rates, smaller cache sizes, and other factors, impose diverse requirements on content eviction policies. In particular, eviction policies should be fast and lightweight. In this paper, we propose cache replication and eviction schemes, conditional leave copy everywhere (CLCE) and least frequent recently used (LFRU), which are well suited for the ICN type of cache networks (CNs). The CLCE replication scheme reduces the redundant caching of contents and hence improves cache space utilization. LFRU approximates the least frequently used scheme coupled with the least recently used scheme and is practically implementable for rapidly changing cache networks like ICNs.
57 citations
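The LFRU idea of coupling frequency with recency can be sketched as a two-partition cache: a small recency-managed (LRU) partition backed by a partition that evicts the least frequently accessed entry. This is a minimal illustration of the general idea, not the paper's exact partitioning or eviction rules; the class name, partition sizes, and promotion policy are assumptions.

```python
from collections import OrderedDict, defaultdict

class LFRUCache:
    """Illustrative sketch inspired by the LFRU description: a 'privileged'
    LRU partition backed by an 'unprivileged' partition that evicts the
    least frequently accessed entry (approximated LFU)."""

    def __init__(self, privileged_size=2, unprivileged_size=4):
        self.privileged = OrderedDict()  # key -> value, LRU order
        self.unprivileged = {}           # key -> value, demoted entries
        self.freq = defaultdict(int)     # access counts drive LFU eviction
        self.p_size = privileged_size
        self.u_size = unprivileged_size

    def get(self, key):
        if key in self.privileged:
            self.freq[key] += 1
            self.privileged.move_to_end(key)     # refresh recency
            return self.privileged[key]
        if key in self.unprivileged:
            self.freq[key] += 1
            value = self.unprivileged.pop(key)
            self._insert_privileged(key, value)  # promote on a hit
            return value
        return None

    def put(self, key, value):
        self.freq[key] += 1
        self._insert_privileged(key, value)

    def _insert_privileged(self, key, value):
        self.privileged[key] = value
        self.privileged.move_to_end(key)
        if len(self.privileged) > self.p_size:
            old_key, old_value = self.privileged.popitem(last=False)
            self._demote(old_key, old_value)     # LRU victim falls through

    def _demote(self, key, value):
        if len(self.unprivileged) >= self.u_size:
            # approximate LFU: evict the least frequently accessed entry
            victim = min(self.unprivileged, key=lambda k: self.freq[k])
            del self.unprivileged[victim]
        self.unprivileged[key] = value
```

Frequently accessed content thus survives demotion and gets promoted back, while one-hit-wonders are the first to leave, matching the fast, lightweight behavior the abstract calls for.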
TL;DR: Wang et al. propose a QoS-adaptive proxy-caching scheme for multimedia streaming over the Internet, which improves the cache hit ratio of mixed media, including continuous and noncontinuous media.
Abstract: This paper proposes a quality-of-service (QoS)-adaptive proxy-caching scheme for multimedia streaming over the Internet. Considering the heterogeneous network conditions and media characteristics, we present an end-to-end caching architecture for multimedia streaming. First, a media-characteristic-weighted replacement policy is proposed to improve the cache hit ratio of mixed media, including continuous and noncontinuous media. Secondly, a network-condition- and media-quality-adaptive resource-management mechanism is introduced to dynamically re-allocate cache resources for different types of media according to their request patterns. Thirdly, a pre-fetching scheme is described based on the estimated network bandwidth, and a miss strategy is presented to decide what to request from the server on a cache miss based on real-time network conditions. Lastly, request and send-back scheduling algorithms, integrated with unequal loss protection (ULP), are proposed to dynamically allocate network resources among different types of media. Simulation results demonstrate the effectiveness of the proposed schemes.
57 citations
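A media-characteristic-weighted replacement policy, as named in the abstract, can be sketched by scoring each cached object with a per-media-type weight combined with its popularity and size, and evicting the lowest-scoring object first. The weights, the scoring formula, and all names below are assumptions for illustration, not the paper's actual policy.

```python
# Sketch of a media-weighted replacement policy: each object's utility
# combines an assumed per-media-type weight with its hit count and size;
# the lowest-utility object is evicted when space is needed.

MEDIA_WEIGHT = {"video": 3.0, "audio": 2.0, "image": 1.0, "text": 0.5}

class WeightedProxyCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.objects = {}  # url -> {"size": ..., "media": ..., "hits": ...}

    def _utility(self, url):
        o = self.objects[url]
        # higher weight and popularity, smaller size -> keep longer
        return MEDIA_WEIGHT[o["media"]] * o["hits"] / o["size"]

    def get(self, url):
        o = self.objects.get(url)
        if o:
            o["hits"] += 1
            return True
        return False

    def put(self, url, size, media):
        while self.used + size > self.capacity and self.objects:
            victim = min(self.objects, key=self._utility)  # lowest utility out
            self.used -= self.objects.pop(victim)["size"]
        if self.used + size <= self.capacity:
            self.objects[url] = {"size": size, "media": media, "hits": 1}
            self.used += size
```

With such a score, continuous media (video, audio) that is expensive to re-stream is weighted to survive longer than easily re-fetched static objects, which is the intuition behind improving the mixed-media hit ratio.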
26 Aug 1980
TL;DR: In this article, a cache memory organization using a miss information collection and manipulation system is presented to ensure the transparency of cache misses, exploiting the fact that the cache memory operates at a faster rate than central memory.
Abstract: A cache memory organization is shown using a miss information collection and manipulation system to ensure the transparency of cache misses. This system makes use of the fact that the cache memory operates at a faster rate than central memory. The cache memory consists of a set-associative cache section (tag arrays and control with a cache buffer), a central memory interface block (a memory requester and memory receiver), and a miss information holding register section (a miss comparator and a status collection device). The miss information holding register section allows an almost continual stream of new requests for data to be supplied to the cache memory at the cache-hit-rate throughput.
56 citations
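The miss-information holding registers described above (today usually called MSHRs) can be sketched as a small file of outstanding-miss entries: a miss either allocates a free register, merges with an in-flight miss to the same block, or stalls when all registers are busy, so the cache keeps servicing hits in the meantime. The structure names, register count, and return values below are illustrative assumptions, not the patent's design.

```python
# Sketch of a miss-information holding register (MSHR) file: misses are
# recorded so the cache need not stall; later misses to the same memory
# block merge into the existing in-flight entry.

class MSHRFile:
    def __init__(self, num_registers=4, line_size=64):
        self.num_registers = num_registers
        self.line_size = line_size
        self.entries = {}  # block address -> list of waiting requesters

    def handle_miss(self, addr, requester):
        block = addr // self.line_size * self.line_size
        if block in self.entries:
            self.entries[block].append(requester)  # merge with in-flight miss
            return "merged"
        if len(self.entries) >= self.num_registers:
            return "stall"                         # all registers busy
        self.entries[block] = [requester]          # allocate a new register
        return "allocated"

    def fill(self, block):
        """Central memory returned the block: free the register and
        return every requester that was waiting on it."""
        return self.entries.pop(block, [])
```

Because only same-block misses merge and distinct blocks take separate registers, the cache front end keeps accepting new requests at hit-rate throughput until the register file itself fills, which is the transparency property the abstract claims.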
17 Mar 2014
TL;DR: In this article, a hybrid storage system is described having a mixture of different types of storage devices, comprising rotational drives, flash devices, SDRAM, and SRAM, in which the rotational drives serve as the main storage, providing the lowest cost per unit of storage.
Abstract: A hybrid storage system is described having a mixture of different types of storage devices comprising rotational drives, flash devices, SDRAM, and SRAM. The rotational drives are used as the main storage, providing the lowest cost per unit of storage. Flash memory is used as a higher-level cache for the rotational drives. Methods for managing multiple levels of cache for this storage system are provided, with a very fast Level 1 cache consisting of volatile memory (SRAM or SDRAM) and a non-volatile Level 2 cache using an array of flash devices. A method of distributing the data across the rotational drives is described to make caching more efficient. Efficient techniques are also described for flushing data from the L1 and L2 caches to the rotational drives, taking advantage of concurrent flash-device operations and concurrent rotational-drive operations, and maximizing sequential accesses in the rotational drives rather than the relatively slower random accesses. The methods provided here may be extended to systems with more than two cache levels.
56 citations
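One technique the abstract names, turning cache flushes into sequential drive accesses, can be sketched as a simple flush planner: group dirty L2 blocks by target drive (so drives can be written concurrently) and sort each drive's blocks by logical block address (so each drive sees a mostly sequential write stream). The function name and data layout are assumptions for the sketch.

```python
# Sketch of flush planning for the flash (L2) cache: group dirty blocks by
# destination rotational drive and sort by LBA, so each drive is written
# sequentially while different drives are flushed concurrently.

def plan_flush(dirty_blocks):
    """dirty_blocks: list of (drive_id, lba) pairs for dirty L2 blocks.
    Returns {drive_id: [lba, ...]} with LBAs in ascending order."""
    per_drive = {}
    for drive, lba in dirty_blocks:
        per_drive.setdefault(drive, []).append(lba)
    return {drive: sorted(lbas) for drive, lbas in per_drive.items()}

plan = plan_flush([(0, 900), (1, 10), (0, 100), (0, 500), (1, 5)])
assert plan == {0: [100, 500, 900], 1: [5, 10]}
```

Sorting by LBA converts what would be random seeks into an elevator-style sequential pass per drive, which is why the abstract emphasizes sequential over random accesses on the rotational drives.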