scispace - formally typeset
Topic

Smart Cache

About: Smart Cache is a research topic. Over its lifetime, 7,680 publications have been published within this topic, receiving 180,618 citations.


Papers
Patent
03 Mar 1982
TL;DR: In this article, a command queue is maintained for storing commands waiting to be executed, and each command is assigned a priority level for execution vis-a-vis other commands in the queue.
Abstract: In a system having a cache memory and a bulk memory, and wherein a command queue is maintained for storing commands waiting to be executed, each command is assigned a priority level for execution vis-a-vis other commands in the queue. Commands are generated for transferring from the cache memory to the bulk memory segments of data which have been written to while resident in the cache memory. Each generated command may be assigned a generated priority level which is dependent upon the number of segments in the cache memory which have been written to but not yet copied into the bulk memory. A second priority level may be generated which is dependent on the time which has elapsed since the first write into any of the cache memory segments, and the priority level assigned to any given generated command is the higher of the two generated priority levels.

50 citations
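The two-level priority scheme from the patent above can be sketched as follows. The command's priority is the higher of a count-based level (number of dirty cache segments awaiting copy to bulk memory) and an age-based level (time since the first write to any dirty segment). The thresholds and the number of levels are assumptions for illustration; the patent does not fix concrete values.

```python
# Hypothetical sketch of the patent's two-part priority scheme.
# All thresholds below are illustrative assumptions.

def count_priority(dirty_segments: int) -> int:
    # Priority rises with the number of written-but-uncopied segments.
    if dirty_segments >= 32:
        return 3
    if dirty_segments >= 16:
        return 2
    if dirty_segments >= 8:
        return 1
    return 0

def age_priority(seconds_since_first_write: float) -> int:
    # Priority rises with the time since the first write to any dirty segment.
    if seconds_since_first_write >= 10.0:
        return 3
    if seconds_since_first_write >= 5.0:
        return 2
    if seconds_since_first_write >= 1.0:
        return 1
    return 0

def command_priority(dirty_segments: int, age_s: float) -> int:
    # The assigned level is the higher of the two generated levels.
    return max(count_priority(dirty_segments), age_priority(age_s))
```

This captures the key property: a write-back command is promoted either when dirty data accumulates or when it simply sits too long, whichever pressure is greater.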

Proceedings ArticleDOI
01 Sep 2001
TL;DR: Experimental results show that the next fetch prediction reduces performance penalty by more than 91% and is more energy efficient than a conventional filter cache.
Abstract: Filter cache has been proposed as an energy saving architectural feature. A filter cache is placed between the CPU and the instruction cache (I-cache) to provide the instruction stream. Energy savings result from accesses to a small cache. There is however loss of performance when instructions are not found in the filter cache. The majority of the energy savings from the filter cache are due to the temporal reuse of instructions in small loops. We examine subsequent fetch addresses to predict whether the next fetch address is in the filter cache dynamically. In case a miss is predicted, we reduce miss penalty by accessing the I-cache directly. Experimental results show that our next fetch prediction reduces performance penalty by more than 91% and is more energy efficient than a conventional filter cache. Average I-cache energy savings of 31% can be achieved by our filter cache design with around 1% performance degradation.

49 citations
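The bypass idea in this abstract can be illustrated with a toy model: a tiny direct-mapped filter cache sits in front of the I-cache, and a fetch that will miss the filter cache goes straight to the I-cache instead of paying the probe-then-miss penalty. The line size, cache size, and resident-tag check below are assumptions; the paper's actual mechanism predicts the outcome from fetch addresses rather than probing tags.

```python
# Toy direct-mapped filter cache in front of an I-cache (sizes are assumptions).
LINE_SIZE = 16      # bytes per filter-cache line
NUM_LINES = 8       # filter caches are deliberately small

class FilterCache:
    def __init__(self):
        self.tags = [None] * NUM_LINES
        self.filter_hits = 0       # fetches served by the small, low-energy cache
        self.icache_fetches = 0    # fetches sent directly to the I-cache

    def _index_tag(self, addr):
        line = addr // LINE_SIZE
        return line % NUM_LINES, line

    def resident(self, addr) -> bool:
        idx, tag = self._index_tag(addr)
        return self.tags[idx] == tag

    def fetch(self, addr):
        idx, tag = self._index_tag(addr)
        if self.resident(addr):
            self.filter_hits += 1      # temporal reuse, e.g. a small loop body
        else:
            self.icache_fetches += 1   # bypass straight to the I-cache
            self.tags[idx] = tag       # fill the filter-cache line
```

Running a small loop body through this model twice shows the effect the abstract relies on: the second iteration is served almost entirely from the filter cache.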

Proceedings ArticleDOI
10 Jun 2014
TL;DR: This paper proposes a novel caching scheme (CRCache) that utilizes a cross-layer design to cache contents in a few selected routers based on the correlation of content popularity and the network topology and aims to improve the cache hit rate and reduce the overall network traffic.
Abstract: Information-centric networking (ICN) is designed to decouple contents from hosts at the network layer, using in-network caching as a key feature to improve overall performance. However, the en-route caching strategy used in many ICN implementations generally yields redundancy in the cached contents across different routers. Some recent works focus on cache optimization by exploiting either the application layer or the network layer alone, which we argue is not sufficient to increase the cache hit rate and reduce traffic. In this paper, we propose a novel caching scheme (CRCache) that utilizes a cross-layer design to cache contents in a few selected routers based on the correlation between content popularity and the network topology. Specifically, by exploiting information available at both the application and network layers, CRCache aims to improve the cache hit rate and reduce overall network traffic. We conduct a large-scale simulation driven by real traces over a real Internet topology in China, and show that with CRCache the overall cache hit rate is increased by 62.5% and network traffic is reduced by at least 42% compared with recent single-layer schemes.

49 citations
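A minimal sketch of the router-selection idea: instead of caching en route at every router, pick a few routers whose network position correlates with the popularity of the content they forward. Here degree centrality stands in for the topology signal and a simple product gives the score; both are assumptions, as the paper's actual correlation model is more involved.

```python
# Hypothetical sketch of cross-layer router selection (not CRCache's exact model).

def select_cache_routers(adjacency, popularity_seen, k=2):
    """adjacency: {router: set of neighbor routers} (network-layer topology);
    popularity_seen: {router: total popularity of contents it forwards}
    (application-layer signal). Returns the k routers with the highest
    centrality * popularity score."""
    scores = {
        r: len(adjacency[r]) * popularity_seen.get(r, 0)
        for r in adjacency
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

On a small star-like topology, the hub router that both sits centrally and forwards popular content is selected first, which is the intuition behind caching at "a few selected routers".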

Patent
07 Dec 1994
TL;DR: In this paper, a master-slave cache system has a large master cache and smaller slave caches, including a slave data cache for supplying operands to an execution pipeline of a processor.
Abstract: A master-slave cache system has a large master cache and smaller slave caches, including a slave data cache for supplying operands to an execution pipeline of a processor. The master cache performs all cache coherency operations, freeing the slaves to supply the processor's pipelines at their maximum bandwidth. A store queue is shared between the master cache and the slave data cache. Store data from the processor's execute pipeline is written from the store queue directly into both the master cache and the slave data cache, eliminating the need for the slave data cache to write data back to the master cache. Additionally, fill data from the master cache to the slave data cache is first written to the store queue. This fill data is available for use while in the store queue because the store queue acts as an extension to the slave data cache. Cache operations, diagnostic stores and TLB entries are also loaded into the store queue. A new store or line fill can be merged into an existing store queue entry. Each entry has valid bits for the master cache, the slave data cache, and the slave's tag. Separate byte enables are provided for the master and slave caches, but a single physical address field in each store queue entry is used.

49 citations
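The store-queue entry layout described above can be sketched as follows: each entry has a single physical address field, per-destination valid bits (master cache, slave data cache, slave tag), and separate byte enables for master and slave, and a new store to the same line merges into an existing entry. The line width, field names, and byte-granularity interface are assumptions for illustration.

```python
# Hypothetical sketch of the shared store-queue entry and store merging.
LINE_BYTES = 8  # assumed line width for the byte-enable masks

class StoreQueueEntry:
    def __init__(self, addr):
        self.addr = addr                  # single physical address field
        self.data = bytearray(LINE_BYTES)
        self.master_enables = 0           # bitmask: bytes destined for the master
        self.slave_enables = 0            # bitmask: bytes destined for the slave
        self.valid_master = True          # valid bit for the master cache
        self.valid_slave = True           # valid bit for the slave data cache
        self.valid_slave_tag = True       # valid bit for the slave's tag

class StoreQueue:
    def __init__(self):
        self.entries = {}  # line address -> StoreQueueEntry

    def store(self, addr, value):
        line = addr - (addr % LINE_BYTES)
        offset = addr % LINE_BYTES
        entry = self.entries.get(line)
        if entry is None:
            entry = self.entries[line] = StoreQueueEntry(line)
        # Merge the new store into the existing entry instead of allocating
        # a second one, as the abstract describes for stores and line fills.
        entry.data[offset] = value
        entry.master_enables |= 1 << offset
        entry.slave_enables |= 1 << offset
```

Because both caches are written from the same entry, the slave data cache never has to write data back to the master, which is the bandwidth point the abstract makes.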

Patent
03 Jun 2002
TL;DR: In this paper, a data communications system including a local beacon is characterized in that the local beacon provides smart-cache and/or processing functionality for the data to be communicated wirelessly.
Abstract: A data communications system includes a local beacon (10; 9, 9′) which, on one side, is programmable and/or communicates with controlling or information-communicating infrastructure means (16), for example a central service provider or the Internet, and/or with one or more further local beacons. On the other side, the beacon contains a transceiver combination (or, in special cases, a transmitter only) for wireless communication with one or more end devices (17) located in its vicinity, and is located in, or in place of, electric lighting equipment (1). The system is characterized in accordance with the invention in that the local beacon is provided with a smart cache and/or processing functionality for the data to be communicated wirelessly. The data communications system in accordance with the invention can be put to use for both communication and navigation by end-device users.

49 citations


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations (92% related)
Server: 79.5K papers, 1.4M citations (88% related)
Scalability: 50.9K papers, 931.6K citations (88% related)
Network packet: 159.7K papers, 2.2M citations (85% related)
Quality of service: 77.1K papers, 996.6K citations (84% related)
Performance Metrics
No. of papers in the topic in previous years
Year    Papers
2023    50
2022    114
2021    5
2020    1
2019    8
2018    18