
Showing papers on "Cache coloring published in 1978"


Journal ArticleDOI
TL;DR: It is shown that prefetching all memory references in very fast computers can increase the effective CPU speed by 10 to 25 percent.
Abstract: Memory transfers due to a cache miss are costly. Prefetching all memory references in very fast computers can increase the effective CPU speed by 10 to 25 percent.
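
The paper's figure is easy to reproduce in spirit: on a sequential reference stream, fetching line i+1 whenever line i is touched converts most demand misses into hits. Below is a minimal sketch of that idea, not the paper's model; the line size, cache capacity, trace, and the crude eviction policy are all illustrative assumptions.

LINE = 4          # words per cache line (assumed)
CACHE_LINES = 64  # cache capacity in lines (assumed)

def misses(trace, prefetch):
    cache, miss = set(), 0
    for addr in trace:
        line = addr // LINE
        if line not in cache:
            miss += 1
            cache.add(line)
        if prefetch:
            cache.add(line + 1)          # fetch the next line ahead of use
        while len(cache) > CACHE_LINES:  # crude, order-agnostic eviction
            cache.pop()
    return miss

trace = list(range(0, 1024))             # a purely sequential reference stream
print(misses(trace, prefetch=False))     # one miss per line
print(misses(trace, prefetch=True))      # prefetching hides most line fetches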

315 citations


Patent
Chang Shih-Jeh1, Toy Wing Noom1
08 Jun 1978
TL;DR: In this paper, a data processing system includes a memory arrangement comprising a main memory, and a cache memory including a validity bit per storage location to indicate the validity of data stored therein.
Abstract: A data processing system includes a memory arrangement comprising a main memory and a cache memory that includes a validity bit per storage location to indicate the validity of the data stored therein. Cache performance is improved by a special read operation that eliminates the storage of data which would otherwise be purged by a replacement scheme: a special read removes cache data after it is read and does not write data read from main memory into the cache. Additional operations include a normal read, where data is read from the cache memory if available, or else read from main memory and written into the cache; a normal write, where data is written into main memory and the cache is interrogated, and in the event of a hit the data is either updated or effectively removed from the cache by invalidating its associated validity bit; and a special write, where data is written into both main memory and the cache.
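
The four operations lend themselves to a compact model. The sketch below is an assumed software rendering of the patent's behavior, not its circuitry; the dict-based memories and the choice to update (rather than invalidate) on a normal-write hit are assumptions.

main = {}            # main memory: address -> data
cache = {}           # cache: address -> (data, valid_bit)

def normal_read(addr):
    if addr in cache and cache[addr][1]:      # hit on a valid entry
        return cache[addr][0]
    data = main.get(addr)                     # miss: fetch from main memory
    cache[addr] = (data, True)                # ...and allocate in the cache
    return data

def special_read(addr):
    # Read once and discard: do not keep data that the replacement
    # scheme would otherwise soon purge.
    if addr in cache and cache[addr][1]:
        data = cache[addr][0]
        cache[addr] = (data, False)           # remove after reading
        return data
    return main.get(addr)                     # bypass the cache on a miss

def normal_write(addr, data):
    main[addr] = data
    if addr in cache and cache[addr][1]:      # interrogate the cache
        cache[addr] = (data, True)            # update on a hit (invalidating
                                              # instead is the other option)

def special_write(addr, data):
    main[addr] = data
    cache[addr] = (data, True)                # write into both memories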

65 citations


Patent
02 Oct 1978
TL;DR: In this paper, the authors propose an approach for avoiding ambiguous data in a multi-requestor computing system of the type where each of the requestors has its own dedicated cache memory.
Abstract: Apparatus for avoiding ambiguous data in a multi-requestor computing system of the type wherein each requestor has its own dedicated cache memory. Each requestor has access to its own dedicated cache memory for ascertaining whether a particular data word is present in that cache memory and for obtaining the data word directly from it without the necessity of referencing main memory. Each requestor also has access to all the other dedicated cache memories for invalidating a particular data word contained therein whenever that requestor has written the same data word into its own dedicated cache memory. Requestors and addresses in a particular cache memory are time multiplexed so that a particular dedicated cache memory can service invalidate requests from other requestors without sacrificing the speed of reference or cycle time with which it services read requests from its own requestor.
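
In modern terms this is write-invalidate coherence across per-requestor caches. The following sketch is an assumed rendering of the data-ambiguity rule only; it ignores the time multiplexing of invalidate and read traffic that the patent actually claims, and the four-requestor setup is illustrative.

main = {}
caches = [dict() for _ in range(4)]   # one dedicated cache per requestor

def read(req, addr):
    c = caches[req]
    if addr not in c:                 # each requestor probes only its own cache
        c[addr] = main.get(addr)      # miss: fill from main memory
    return c[addr]

def write(req, addr, data):
    main[addr] = data
    caches[req][addr] = data
    for other, c in enumerate(caches):  # invalidate stale copies so no
        if other != req:                # requestor can read ambiguous data
            c.pop(addr, None)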

49 citations


Patent
07 Mar 1978
TL;DR: In this article, the cache is accessible to the processor during one of the cache timing cycles and to the main storage during the other cache timing cycle, but no alternately accessible modules, buffering, delay, or interruption is provided for main storage line transfers to the cache.
Abstract: The disclosure enables concurrent access to a cache by main storage and a processor by means of a cache control which provides two cache access timing cycles during each processor storage request cycle. The cache is accessible to the processor during one of the cache timing cycles and is accessible to main storage during the other cache timing cycle. No alternately accessible modules, buffering, delay, or interruption is provided for main storage line transfers to the cache.
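
The scheme amounts to static time-slot arbitration: every processor storage request cycle contains two cache timing cycles, one reserved for the processor and one for main storage. The toy loop below illustrates the slot assignment only; the queue contents and the a/b slot labels are assumptions, not the patent's signal names.

processor_queue = ["P-read 0x10", "P-read 0x14", "P-write 0x18"]
storage_queue   = ["MS-xfer word0", "MS-xfer word1", "MS-xfer word2"]

for cycle in range(3):
    # first cache timing cycle: owned by the processor
    if processor_queue:
        print(f"cycle {cycle}a:", processor_queue.pop(0))
    # second cache timing cycle: owned by main storage, so line
    # transfers need no buffering and never interrupt the processor
    if storage_queue:
        print(f"cycle {cycle}b:", storage_queue.pop(0))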

33 citations


Patent
11 Dec 1978
TL;DR: A cache unit includes a cache store organized into a number of levels to provide fast access to instructions and data words; it further includes detection apparatus which, upon detecting a conflict condition that would result in an improper assignment, advances the replacement circuits forward to assign the next sequential group of locations or level, inhibiting them from making their normal location assignment.
Abstract: A cache unit includes a cache store organized into a number of levels to provide fast access to instructions and data words. Directory circuits, associated with the cache store, contain address information identifying those instructions and data words stored in the cache store. The cache unit has at least one instruction register for storing the address and level signals that specify the location of the next instruction to be fetched and transferred to the processing unit. Replacement circuits are included which, during normal operation, assign cache locations sequentially for replacing old information with new information. The cache unit further includes detection apparatus for detecting a conflict condition that would result in an improper assignment. The detection apparatus, upon detecting such a condition, advances the replacement circuits forward to assign the next sequential group of locations or level, inhibiting them from making their normal location assignment. It also inhibits the directory circuits from writing the information required for making the location assignment, and it prevents the information which produced the conflict from being written into the cache store when it is received from memory.
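
The replacement scheme is essentially a round-robin pointer with a conflict bypass. The sketch below is an assumed rendering of that behavior for a single set of levels; the four-level geometry and the conflict_detected flag are illustrative stand-ins for the patent's detection apparatus.

LEVELS = 4
pointer = 0
directory = [None] * LEVELS          # address tags per level (one set shown)

def assign(addr, conflict_detected=False):
    global pointer
    level = pointer
    pointer = (pointer + 1) % LEVELS # advance sequentially, conflict or not
    if conflict_detected:
        # improper assignment detected: skip this level, write nothing
        # into the directory, and drop the conflicting line when it
        # arrives from memory
        return None
    directory[level] = addr          # record the new occupant
    return level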

32 citations


Patent
16 Mar 1978
TL;DR: In this article, in the case of a cache miss, the successive fetch requests by the I-unit for sublines (e.g. doublewords) of a variable length field operand are satisfied by the first through the highest-address sublines of a line as it is accessed from main storage via a cache bypass.
Abstract: In the case of a cache miss, the successive fetch requests by the I-unit for sublines (e.g. doublewords) of a variable length field operand are satisfied by the first through the highest-address sublines of a line as it is accessed from main storage via a cache bypass. This avoids the time the I-unit would otherwise spend waiting for the complete line to be transferred into the cache before all required sublines in the line become obtainable from the cache. Address operand pairs (AOPs), consisting of request and buffer registers, are provided in the I-unit to handle the fetched sublines as fast as the cache bypass can deliver them from main storage. If there is a cache hit, the sublines are accessed from the cache.
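
The bypass can be pictured as streaming doublewords to the I-unit while the line fill is still in flight. The generator below is an assumed sketch of that flow; the line size of four doublewords and the stand-in doubleword values are illustrative, and the AOP request/buffer registers are abstracted away.

LINE_DW = 4                          # doublewords per line (assumption)
cache = {}                           # line address -> tuple of doublewords

def fetch_sublines(line_addr, first_dw):
    if line_addr in cache:           # cache hit: sublines come from the cache
        yield from cache[line_addr][first_dw:]
        return
    arriving = []
    for i in range(LINE_DW):         # miss: the line streams in from main storage
        dw = ("MS", line_addr, i)    # stand-in for a doubleword from memory
        arriving.append(dw)
        if i >= first_dw:
            yield dw                 # bypass: hand it to the I-unit at once,
                                     # without waiting for the full line
    cache[line_addr] = tuple(arriving)   # the line fill completes afterwards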

31 citations



Book ChapterDOI
01 Jan 1978
TL;DR: This chapter focuses on cache memories for PDP-11 family computers, a small, fast, associative memory located between the central processor Pc and the primary memory Mp.
Abstract: This chapter focuses on cache memories for PDP-11 family computers. One of the most important concepts in computer systems is that of a memory hierarchy: a memory system built of two (or more) memory technologies. A cache memory is a small, fast, associative memory located between the central processor Pc and the primary memory Mp. Typically, the cache is implemented in bipolar technology, while Mp is implemented in MOS or magnetic core technology. The cache stores address-data (AD) pairs, each consisting of an Mp address and a copy of the contents of the Mp location corresponding to that address. The most common form of cache organization is fully associative, with the data portion of the AD pair corresponding to the basic addressable unit of memory; in a fully associative cache, any AD pair can be stored in any cache location. A set associative cache consists of a number of sets, which are accessed by indexing rather than by association; each set contains one or more AD pairs. The performance goals of the PDP-11/70 computer system require the typical miss ratio to be 0.1 or less.
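
A set associative lookup of the kind the chapter describes indexes into a set, then searches the AD pairs within that set by association. The sketch below uses an assumed geometry (256 sets, 2 ways, FIFO replacement) purely for illustration; it is not the PDP-11/70's actual organization, whose stated target is simply a miss ratio (misses / total references) of 0.1 or less.

SETS, WAYS = 256, 2                  # assumed geometry, not the PDP-11/70's
sets = [[] for _ in range(SETS)]     # each set holds up to WAYS AD pairs

hits = misses = 0

def access(addr):
    global hits, misses
    index, tag = addr % SETS, addr // SETS   # set chosen by indexing
    s = sets[index]
    for stored_tag, data in s:               # associative search within the set
        if stored_tag == tag:
            hits += 1
            return data
    misses += 1
    if len(s) == WAYS:
        s.pop(0)                             # FIFO replacement within the set
    s.append((tag, f"Mp[{addr}]"))           # store the new AD pair
    return s[-1][1]

# miss ratio = misses / (hits + misses)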

1 citation