Topic

Smart Cache

About: Smart Cache is a research topic. Over its lifetime, 7,680 publications have been published within this topic, receiving 180,618 citations.


Papers
Proceedings ArticleDOI
20 Oct 2006
TL;DR: This paper presents a software system that allows all or part of an SRAM or scratchpad memory to be automatically managed as a cache, which provides the programming convenience of a cache for processors that lack dedicated caching hardware.
Abstract: While hardware instruction caches are present in virtually all general-purpose and high-performance microprocessors today, many embedded processors use SRAM or scratchpad memories instead. These are simple array memory structures that are directly addressed and explicitly managed by software. Compared to hardware caches of the same data capacity, they are smaller, have shorter access times, and consume less energy per access. Access times are also easier to predict with simple memories since there is no possibility of a "miss." On the other hand, they are more difficult for the programmer to use since they are not automatically managed. In this paper, we present a software system that allows all or part of an SRAM or scratchpad memory to be automatically managed as a cache. This system provides the programming convenience of a cache for processors that lack dedicated caching hardware. It has been implemented for an actual processor and runs on real hardware. Our results show that a software-based instruction cache can be built that provides performance within 10% of a traditional hardware cache on many benchmarks while using a cheaper, simpler SRAM memory. On these same benchmarks, energy consumption is up to 3% lower than it would be using a hardware cache.

55 citations
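
The software-cache scheme described above can be pictured as a tag check done in software before every fetch from the scratchpad. The C sketch below shows only that general idea under simplifying assumptions (a direct-mapped layout, illustrative sizes, and invented names such as icache_lookup); it is not the paper's actual runtime system, which also handles control flow and other details.

#include <stdint.h>
#include <string.h>

#define LINE_BYTES 64                          /* bytes per software cache line      */
#define NUM_LINES  256                         /* lines held in the SRAM/scratchpad  */

/* Illustrative state: a tag array plus the SRAM line storage itself. */
static uintptr_t tag[NUM_LINES];               /* address tag of each resident line  */
static uint8_t   sram[NUM_LINES][LINE_BYTES];  /* scratchpad backing store           */

/* Return a pointer into SRAM for the instruction at 'addr', filling the
 * line from main memory on a software-detected miss. Direct-mapped for
 * simplicity; a real system would avoid this check on most accesses. */
static const uint8_t *icache_lookup(uintptr_t addr)
{
    uintptr_t line_addr = addr & ~(uintptr_t)(LINE_BYTES - 1);
    size_t    index     = (line_addr / LINE_BYTES) % NUM_LINES;

    if (tag[index] != line_addr) {             /* miss: copy the line into SRAM */
        memcpy(sram[index], (const void *)line_addr, LINE_BYTES);
        tag[index] = line_addr;
    }
    return &sram[index][addr - line_addr];     /* hit path: plain SRAM access   */
}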

Proceedings ArticleDOI
Huan Liu1
15 Oct 2001
TL;DR: This paper is the first to evaluate the effectiveness of caching routing prefixes and proposes an on-chip routing prefix cache design for the network processor that performs much better than an IP address cache, even after factoring in the extra complexity involved.
Abstract: Caching has been time-proven to be a very effective technique for improving memory access speed. It is based on the assumption that enough locality exists in memory access patterns, i.e., there is a high probability that an entry will be accessed again shortly after it was last accessed. However, it is questionable whether Internet traffic has enough locality, especially in the high-speed backbone, to justify the use of a cache for routing table lookup. We believe there is enough locality if routing prefixes are cached instead of individual IP addresses. This paper is the first to evaluate the effectiveness of caching routing prefixes. We propose an on-chip routing prefix cache design for the network processor. The control-path processor is used in our design to fulfill missed cache requests. In order to guarantee a correct lookup result, it is important that the control-path processor only returns cacheable routing prefixes. We present three implementations that all guarantee the correct lookup result. Simulation results show that our cache design performs much better than an IP address cache, even after factoring in the extra complexity involved.

55 citations
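
As an illustration of the prefix-caching idea above, the C sketch below keeps a small table of cached prefixes and performs longest-prefix match over just that table; a miss would be forwarded to the control-path processor. The structure and names (prefix_entry, prefix_cache_lookup) are invented for the example, and the paper's cacheability checks and its three implementations are not modeled.

#include <stdint.h>
#include <stdbool.h>

#define CACHE_SLOTS 64

/* One cached routing prefix: its bits, its length, and the next hop. */
struct prefix_entry {
    uint32_t prefix;     /* IPv4 prefix bits                  */
    uint8_t  len;        /* prefix length, 0..32              */
    uint8_t  next_hop;
    bool     valid;
};

static struct prefix_entry cache[CACHE_SLOTS];

/* Longest-prefix match against the small on-chip cache only.
 * Returns false on a miss, which would be sent to the control path. */
static bool prefix_cache_lookup(uint32_t dst, uint8_t *next_hop)
{
    int best = -1;
    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (!cache[i].valid)
            continue;
        uint32_t mask = cache[i].len ? ~0u << (32 - cache[i].len) : 0;
        if ((dst & mask) == cache[i].prefix &&
            (best < 0 || cache[i].len > cache[best].len))
            best = i;
    }
    if (best < 0)
        return false;                     /* miss: ask the control-path processor */
    *next_hop = cache[best].next_hop;
    return true;
}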

Patent
Sanjeev N. Trika1
19 Feb 2014
TL;DR: In this patent, a method and system are proposed to allow power fail-safe write-back or write-through caching of data held in a persistent storage device into one or more cache lines of a caching device.
Abstract: A method and system to allow power fail-safe write-back or write-through caching of data in a persistent storage device into one or more cache lines of a caching device. No metadata associated with any of the cache lines is written atomically into the caching device when the data in the storage device is cached. As such, specialized cache hardware to allow atomic writing of metadata during the caching of data is not required.

55 citations
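
One common way caching schemes avoid atomic metadata updates is to make each metadata record self-validating, so a torn or partial write is simply discarded at recovery. The C sketch below illustrates only that general pattern; the record layout, checksum, and names are invented for the example, and this is not a reproduction of the patent's actual mechanism.

#include <stdint.h>
#include <stdbool.h>

/* Per-write metadata record appended to a log region on the caching
 * device after the cached data itself has been written. Nothing here
 * requires an atomic multi-sector update: a torn record fails the
 * checksum and is ignored at recovery, which is safe as long as the
 * host write is only acknowledged after its record is durable. */
struct cache_md_record {
    uint64_t backing_lba;   /* where the data lives on the slow storage device */
    uint64_t cache_lba;     /* where the cached copy lives on the caching device */
    uint32_t seq;           /* monotonically increasing sequence number        */
    uint32_t checksum;      /* covers the fields above                         */
};

static uint32_t md_checksum(const struct cache_md_record *r)
{
    /* Toy checksum for illustration only. */
    return (uint32_t)(r->backing_lba ^ r->cache_lba ^ r->seq) * 2654435761u;
}

/* During crash recovery, accept a record only if it is intact. */
static bool md_record_valid(const struct cache_md_record *r)
{
    return md_checksum(r) == r->checksum;
}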

Journal ArticleDOI
TL;DR: This paper uses a cover-set approach to solve the rule dependency problem and cache important rules in TCAM, and proposes a rule cache replacement algorithm that considers temporal and spatial traffic localities.
Abstract: In software-defined networking, flow tables of OpenFlow switches are implemented with ternary content addressable memory (TCAM). Although TCAM can process input packets at high speed, it is a scarce and expensive resource, providing only a few thousand rule entries on a network switch. Rule caching is a technique for addressing the TCAM capacity problem. However, the rule dependency problem is a challenging issue for wildcard rule caching, where packets can be matched by the wrong rule. In this paper, we use a cover-set approach to solve the rule dependency problem and cache important rules in TCAM. We also propose a rule cache replacement algorithm that considers temporal and spatial traffic localities. Simulation results show that our algorithms achieve a higher cache hit ratio than previous work.

55 citations
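
A cover-set approach can be sketched as follows: when a wildcard rule is cached in TCAM, also install higher-priority "cover" entries that punt their traffic to the slow path, so the cached rule can never be matched by mistake. The C fragment below shows that idea under heavy simplification (a single wildcard match field, invented names such as build_cover_set); it is a sketch of the technique, not the paper's algorithm or its replacement policy.

#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* A wildcard rule over one header field, kept simple for illustration:
 * a packet matches when (pkt & mask) == value. Lower 'prio' means higher priority. */
struct rule {
    uint32_t value;
    uint32_t mask;
    int      prio;
    int      action;           /* forwarding action, or ACTION_PUNT for cover entries */
};

#define ACTION_PUNT (-1)       /* send matching packets to the software slow path */

/* Two wildcard matches overlap iff they agree on every bit both care about. */
static bool overlaps(const struct rule *a, const struct rule *b)
{
    uint32_t common = a->mask & b->mask;
    return (a->value & common) == (b->value & common);
}

/* When caching 'target' in TCAM, also emit a punt entry for every
 * higher-priority rule that overlaps it, so packets belonging to those
 * rules are never mis-matched by the cached rule. Returns the number of
 * cover entries written into 'out' (capacity 'cap'). */
static size_t build_cover_set(const struct rule *table, size_t n,
                              const struct rule *target,
                              struct rule *out, size_t cap)
{
    size_t k = 0;
    for (size_t i = 0; i < n && k < cap; i++) {
        if (table[i].prio < target->prio && overlaps(&table[i], target)) {
            out[k] = table[i];
            out[k].action = ACTION_PUNT;   /* cover entry, not the original action */
            k++;
        }
    }
    return k;
}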

Patent
14 Apr 1997
TL;DR: In this patent, a method of allocating a cache used by a processor of a computer system between instructions and data is disclosed: program instructions loaded in the processor monitor relative usage of the cache by each value class and select a desired ratio of cache usage from among a plurality of available ratios, and cache blocks are evicted using a cache-replacement mechanism that respects that ratio.
Abstract: A method of allocating a cache used by a processor of a computer system between instructions and data is disclosed. Program instructions are loaded in the processor for monitoring relative usage of the cache by each value class and selecting a desired ratio of cache usage by the classes from among a plurality of available ratios, and cache blocks within the cache are evicted using a cache-replacement mechanism which restricts replacement of an evicted cache block to a particular one of the classes of values (instruction or data) based on the desired ratio of cache usage. A multi-bit facility may be provided to indicate how to confine a selected victim to certain cache blocks, and the program instructions select the desired ratio of cache usage by setting the multi-bit facility. The cache-replacement mechanism can be a modified least-recently-used replacement mechanism. Different instruction/data ratios may thereby be provided, such as 1:1, 1:2, and 2:1.

54 citations
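
The ratio-driven replacement described above can be approximated by confining victim selection to whichever class currently exceeds its target share of a set, then applying LRU within that class. The C sketch below illustrates that behaviour with invented structures and names (cache_block, pick_victim); the patent's multi-bit confinement facility and exact mechanism are not reproduced.

#include <stdint.h>

#define WAYS 8

enum block_class { CLASS_INSTR, CLASS_DATA };

struct cache_block {
    enum block_class cls;       /* instruction or data block            */
    uint32_t         lru_age;   /* larger value = older (less recent)   */
};

/* Pick a victim way in one set of a WAYS-way set-associative cache.
 * 'instr_share' and 'data_share' encode the desired ratio (e.g. 1:2).
 * Eviction is confined to the class that currently exceeds its share,
 * falling back to plain LRU if no block of that class is present. */
static int pick_victim(const struct cache_block set[WAYS],
                       int instr_share, int data_share)
{
    int instr_count = 0;
    for (int w = 0; w < WAYS; w++)
        if (set[w].cls == CLASS_INSTR)
            instr_count++;

    /* The class whose occupancy exceeds the target ratio is evicted from. */
    enum block_class confine =
        (instr_count * (instr_share + data_share) > WAYS * instr_share)
            ? CLASS_INSTR : CLASS_DATA;

    int victim = -1;
    for (int w = 0; w < WAYS; w++) {
        if (set[w].cls != confine)
            continue;
        if (victim < 0 || set[w].lru_age > set[victim].lru_age)
            victim = w;
    }
    if (victim < 0) {                      /* no block of that class in the set */
        for (int w = 0; w < WAYS; w++)
            if (victim < 0 || set[w].lru_age > set[victim].lru_age)
                victim = w;
    }
    return victim;
}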


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations (92% related)
Server: 79.5K papers, 1.4M citations (88% related)
Scalability: 50.9K papers, 931.6K citations (88% related)
Network packet: 159.7K papers, 2.2M citations (85% related)
Quality of service: 77.1K papers, 996.6K citations (84% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    50
2022    114
2021    5
2020    1
2019    8
2018    18