Patent

Method and apparatus for curious and column caching

TL;DR: Curious caching as mentioned in this paper improves upon cache snooping by allowing a snooping cache to insert data from snooped bus operations that is not currently in the cache, independent of any prior accesses to the associated memory location.
Abstract
Curious caching improves upon cache snooping by allowing a snooping cache to insert data from snooped bus operations, even when that data is not currently in the cache and independent of any prior accesses to the associated memory location. In addition, curious caching allows software to specify which data-producing bus operations, e.g., reads and writes, result in data being inserted into the cache. This is implemented by specifying “memory regions of curiosity” along with insertion and replacement policy actions for those regions. In column caching, the replacement of data can be restricted to particular regions of the cache. By also making the replacement address-dependent, column caching allows different regions of memory to be mapped to different regions of the cache. In a set-associative cache, a replacement policy specifies the particular column(s) of the set-associative cache in which a page of data can be stored. The column specification is made in page table entries in a TLB that translates between virtual and physical addresses. Each TLB entry includes a bit vector, one bit per column, which indicates the columns of the cache that are available for replacement.
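The column-caching mechanism described above can be illustrated with a minimal sketch. The class and field names below are assumptions for illustration, not the patent's actual interfaces: each page carries a bit vector (one bit per cache column/way, as held in a TLB entry), and an insertion may only replace data in columns whose bit is set.

```python
import random

NUM_SETS = 4      # sets in the set-associative cache (assumed size)
NUM_COLUMNS = 4   # ways/columns per set (assumed size)

class ColumnCache:
    def __init__(self):
        # cache[set][column] holds the tag stored there, or None if empty
        self.cache = [[None] * NUM_COLUMNS for _ in range(NUM_SETS)]

    def insert(self, set_index, tag, column_mask):
        """Insert `tag` into `set_index`, restricted to columns whose
        bit is set in `column_mask` (the per-page TLB bit vector)."""
        allowed = [c for c in range(NUM_COLUMNS) if column_mask & (1 << c)]
        # Prefer an empty allowed column; otherwise evict a random allowed one.
        for c in allowed:
            if self.cache[set_index][c] is None:
                self.cache[set_index][c] = tag
                return c
        victim = random.choice(allowed)
        self.cache[set_index][victim] = tag
        return victim

cache = ColumnCache()
# A page mapped only to columns 0 and 1 (mask 0b0011) can never evict
# data living in columns 2 and 3.
col = cache.insert(0, tag=0xABC, column_mask=0b0011)
assert col in (0, 1)
```

Because the mask partitions the ways, two memory regions given disjoint masks can never evict each other's data, which is the isolation property column caching provides.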


Citations
Patent

Method and apparatus for network packet capture distributed storage system

TL;DR: The Infinite Network Packet Capture System (INPCS) as mentioned in this paper is a high-performance data capture recorder capable of capturing and archiving all network traffic present on a single network or multiple networks.
Patent

Method and apparatus for local and distributed data memory access ('dma') control

TL;DR: In this article, a processor unit stalls if a write to the staging register occurs while the plurality of data memory access designator holders contain a data memory access designator, and ceases the stall when one of the plurality of holders ceases to contain a data memory access designator.
Patent

Distributed consistent grid of in-memory database caches

TL;DR: In this article, a plurality of mid-tier databases form a single, consistent cache grid for data in one or more backend data sources, such as a database system, and consistency in the cache grid is maintained by ownership locks.
Patent

Snoop filter line replacement for reduction of back invalidates in multi-node architectures

TL;DR: In this paper, a snoop filter selects for replacement the entry representing the cache line or lines owned by the fewest nodes, as determined from a presence vector in each of the entries.
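The replacement choice summarized above can be sketched in a few lines. The entry format and names are assumptions for illustration: each snoop filter entry holds a presence bit vector (one bit per node), and the victim is the entry with the fewest set bits, so its eviction triggers the fewest back-invalidates.

```python
def pick_victim(entries):
    """entries: dict mapping cache-line address -> presence bit vector
    (one bit per node). Return the address owned by the fewest nodes."""
    return min(entries, key=lambda addr: bin(entries[addr]).count("1"))

filter_entries = {
    0x1000: 0b0111,  # owned by three nodes
    0x2000: 0b0010,  # owned by one node -> cheapest to evict
    0x3000: 0b1100,  # owned by two nodes
}
assert pick_victim(filter_entries) == 0x2000
```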
Patent

Optimization of cache evictions through software hints

TL;DR: In this paper, the authors propose a technique that uses software hints to identify data and evict it from a cache ahead of other eviction candidates that are likely to be used during further program execution, thus making better use of cache memory.
References
Proceedings ArticleDOI

Improving direct-mapped cache performance by the addition of a small fully-associative cache and prefetch buffers

TL;DR: In this article, a hardware technique to improve the performance of caches is presented, where a small fully-associative cache between a cache and its refill path is used to hold prefetched data rather than placing it directly in the cache.
Proceedings ArticleDOI

Evaluating stream buffers as a secondary cache replacement

TL;DR: The results show that, for the majority of the benchmarks, stream buffers can attain hit rates comparable to typical hit rates of secondary caches; as the data-set size of the scientific workload increases, the performance of stream buffers typically improves relative to secondary-cache performance, showing that stream buffers are more scalable to large data-set sizes.
Proceedings ArticleDOI

OS-controlled cache predictability for real-time systems

TL;DR: An OS-controlled, application-transparent cache-partitioning technique is presented in which cache partitions can be transparently assigned to tasks for their exclusive use, and their interaction is analysed with regard to cache-induced worst-case penalties.
Proceedings ArticleDOI

A modified approach to data cache management

TL;DR: The bare minimum amount of local memory that programs require to run without delay is measured by using the Value Reuse Profile, which contains the dynamic value-reuse information of a program's execution, and by assuming the existence of efficient memory systems.