Patent

System and method for indicating that a processor has prefetched data into a primary cache and not into a secondary cache

TLDR
In this patent, prefetching of cache lines is performed progressively in a data processing system implementing L1 and L2 caches and stream filters and buffers; in the least aggressive mode, no data is prefetched at all.
Abstract
Within a data processing system implementing L1 and L2 caches and stream filters and buffers, prefetching of cache lines is performed in a progressive manner. In one mode, data may not be prefetched. In a second mode, two cache lines are prefetched wherein one line is prefetched into the L1 cache and the next line is prefetched into a stream buffer. In a third mode, more than two cache lines are prefetched at a time. In the third mode cache lines may be prefetched to the L1 cache and not the L2 cache, resulting in no inclusion between the L1 and L2 caches. A directory field entry provides an indication of whether or not a particular cache line in the L1 cache is also included in the L2 cache.
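The abstract's central idea can be sketched in a few lines of code: a mode register selects how aggressively to prefetch, and each L1 directory entry carries a bit recording whether the line is also held in L2 (since mode three breaks inclusion). This is a minimal illustrative model, not the patented implementation; all class and field names are hypothetical.

```python
class L1DirectoryEntry:
    """Hypothetical L1 directory entry with the inclusion-indicator field."""
    def __init__(self, tag, in_l2):
        self.tag = tag        # cache-line tag
        self.in_l2 = in_l2    # directory field: is this line also in L2?

class PrefetchUnit:
    """Progressive prefetch modes as described in the abstract (sketch)."""
    def __init__(self):
        self.mode = 1  # 1 = no prefetch, 2 = two lines, 3 = more than two

    def prefetch(self, miss_addr):
        """Return (lines_for_L1, lines_for_stream_buffer) on a miss."""
        if self.mode == 1:
            return [], []                                  # no prefetching
        if self.mode == 2:
            # next line into L1, the line after it into a stream buffer
            return [miss_addr + 1], [miss_addr + 2]
        # mode 3: several lines fetched into L1 (possibly bypassing L2,
        # so the entries' in_l2 bits may be False)
        return [miss_addr + i for i in range(1, 4)], [miss_addr + 4]
```

In mode three the inclusion bit is what lets the L2 controller answer snoops correctly: a line present in L1 but absent from L2 can still be located because its directory entry records the non-inclusion.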


Citations
Patent

Methods and systems of cache memory management and snapshot operations

TL;DR: This patent presents a cache memory management system for snapshot applications, comprising a cache directory that includes a hash table, hash table elements, cache line descriptors, and cache line functional pointers.
Patent

Demand-based larx-reserve protocol for SMP system buses

TL;DR: This patent proposes a method of accessing values in an SMP computer's memory: the value is loaded from the memory device into all of the SMP caches, a reserve bus operation is sent from a higher-level cache to the next lower-level cache only when the value is to be cast out of the higher cache, and the value is thereafter cast out of the lower cache after the reserve bus operation has been sent.
Patent

Software prefetch system and method for predetermining amount of streamed data

TL;DR: This patent describes a software instruction that accelerates the prefetch process by overriding the normal functionality of the hardware prefetch engine and limiting the amount of data to be prefetched.
Patent

Method and apparatus for identifying candidate virtual addresses in a content-aware prefetcher

TL;DR: This patent describes comparing the upper bits of an address-sized word in a cache line with the upper bits of the effective address; on a match, the address-sized word is identified as a candidate virtual address.
Patent

System and method for prefetching data to multiple levels of cache including selectively using a software hint to override a hardware prefetch mechanism

TL;DR: This patent describes a data processing system and method for prefetching data in a multi-level cache subsystem, where the system includes a processor having a first-level cache and a prefetch engine.
References
Journal Article

Effective hardware-based data prefetching for high-performance processors

TL;DR: The results show that the three hardware prefetching schemes all yield significant reductions in the data access penalty compared with regular caches, that the benefits are greater when the hardware assist augments small on-chip caches, and that the lookahead scheme is preferred in terms of cost-performance.
Patent

Method and apparatus for searching for information in a network and for controlling the display of searchable information on display devices in the network

TL;DR: This patent presents a method and apparatus for maintaining information in a network of computer systems and for controlling the display of searchable information. The apparatus includes a first processor with a first display device, coupled both to an information storage device holding at least one information source and to the network.
Proceedings Article

Evaluating stream buffers as a secondary cache replacement

TL;DR: The results show that, for the majority of the benchmarks, stream buffers can attain hit rates comparable to typical secondary-cache hit rates; as the data-set size of the scientific workload increases, the performance of streams typically improves relative to secondary-cache performance, showing that streams scale better to large data-set sizes.
Patent

Multi-node cluster computer system incorporating an external coherency unit at each node to insure integrity of information stored in a shared, distributed memory

TL;DR: This patent describes a computer cluster architecture including a plurality of CPUs at each of a plurality of nodes, where each CPU maintains coherency and includes a primary cache.
Patent

Method and apparatus for achieving multilevel inclusion in multilevel cache hierarchies

TL;DR: In this patent, the caches align on a "way" basis: their respective cache controllers communicate to each other which blocks of data they are replacing and which of their cache ways are being filled with data.