
Showing papers on "Cache invalidation published in 1978"


Patent
Chang Shih-Jeh, Toy Wing Noom
08 Jun 1978
TL;DR: In this paper, a data processing system includes a memory arrangement comprising a main memory, and a cache memory including a validity bit per storage location to indicate the validity of data stored therein.
Abstract: A data processing system includes a memory arrangement comprising a main memory, and a cache memory including a validity bit per storage location to indicate the validity of data stored therein. Cache performance is improved by a special read operation to eliminate storage of data otherwise purged by a replacement scheme. A special read removes cache data after it is read and does not write data read from the main memory into the cache. Additional operations include: normal read, where data is read from the cache memory if available, or else from main memory and written into the cache; normal write, where data is written into main memory while the cache is interrogated and, in the event of a hit, the data is either updated or effectively removed from the cache by invalidating its associated validity bit; and special write, where data is written both into main memory and the cache.
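
As a rough illustration of these four access types, here is a minimal Python sketch of a direct-mapped cache with one validity bit per storage location. It is a behavioral model under simplifying assumptions (single-word locations, full addresses as tags), not the patent's circuitry, and all names are invented for the example.

```python
class ValidityBitCache:
    """Behavioral model: one validity bit per cache storage location."""

    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.valid = [False] * num_lines   # validity bit per location
        self.tag = [None] * num_lines
        self.data = [None] * num_lines

    def _index(self, addr):
        return addr % self.num_lines

    def normal_read(self, addr, memory):
        # Read from the cache if available; otherwise read main memory
        # and write the data into the cache.
        i = self._index(addr)
        if self.valid[i] and self.tag[i] == addr:
            return self.data[i]
        word = memory[addr]
        self.valid[i], self.tag[i], self.data[i] = True, addr, word
        return word

    def special_read(self, addr, memory):
        # Remove cache data after it is read; on a miss, serve the word
        # from main memory without writing it into the cache.
        i = self._index(addr)
        if self.valid[i] and self.tag[i] == addr:
            self.valid[i] = False          # invalidate after the read
            return self.data[i]
        return memory[addr]                # no allocation on a miss

    def normal_write(self, addr, word, memory, update=True):
        # Write into main memory and interrogate the cache; on a hit,
        # either update the entry or invalidate its validity bit.
        memory[addr] = word
        i = self._index(addr)
        if self.valid[i] and self.tag[i] == addr:
            if update:
                self.data[i] = word
            else:
                self.valid[i] = False      # effectively removed from cache

    def special_write(self, addr, word, memory):
        # Write the data into both main memory and the cache.
        memory[addr] = word
        i = self._index(addr)
        self.valid[i], self.tag[i], self.data[i] = True, addr, word
```

The point of the special read is visible here: it both returns the word and frees its location, so single-use data never has to be purged by the replacement scheme.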

65 citations


Patent
07 Mar 1978
TL;DR: In this article, the cache is accessible to the processor during one of the cache timing cycles and to main storage during the other, and no alternately accessible modules, buffering, delay, or interruption is provided for main storage line transfers to the cache.
Abstract: The disclosure enables concurrent access to a cache by main storage and a processor by means of a cache control which provides two cache access timing cycles during each processor storage request cycle. The cache is accessible to the processor during one of the cache timing cycles and is accessible to main storage during the other cache timing cycle. No alternately accessible modules, buffering, delay, or interruption is provided for main storage line transfers to the cache.
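
As a toy rendering of this timing idea, the Python sketch below splits each processor storage-request cycle into two cache timing cycles; the simulator structure and all names are assumptions for illustration, not the patent's hardware.

```python
from collections import deque

def run(cache, processor_accesses, line_transfer):
    """Each processor request cycle provides two cache timing cycles:
    one slot for the processor, the other for a main-storage line transfer.

    processor_accesses: one callable (or None) per processor cycle.
    line_transfer: iterable of (address, word) pairs from main storage.
    """
    pending = deque(line_transfer)
    for access in processor_accesses:
        # First cache timing cycle: the processor accesses the cache.
        if access is not None:
            access(cache)
        # Second cache timing cycle: main storage writes the next word of
        # the line into the cache. The slot is always free for it, so the
        # transfer needs no alternately accessible modules, buffering,
        # delay, or interruption.
        if pending:
            addr, word = pending.popleft()
            cache[addr] = word
```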

33 citations


Patent
11 Dec 1978
TL;DR: A cache unit includes a cache store organized into a number of levels to provide fast access to instructions and data words as mentioned in this paper; the unit further includes detection apparatus which, upon detecting a conflict condition that would result in an improper assignment, advances the replacement circuits forward to assign the next sequential group of locations or level, inhibiting the normal location assignment.
Abstract: A cache unit includes a cache store organized into a number of levels to provide a fast access to instructions and data words. Directory circuits, associated with the cache store, contain address information identifying those instructions and data words stored in the cache store. The cache unit has at least one instruction register for storing address and level signals for specifying the location of the next instruction to be fetched and transferred to the processing unit. Replacement circuits are included which, during normal operation, assign cache locations sequentially for replacing old information with new information. The cache unit further includes detection apparatus for detecting a conflict condition resulting in an improper assignment. The detection apparatus, upon detecting such a condition, advances the replacement circuits forward for assigning the next sequential group of locations or level, inhibiting it from making its normal location assignment. It also inhibits the directory circuits from writing the necessary information therein required for making the location assignment and prevents the information which produced the conflict from being written into cache store when received from memory.
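
The replacement behavior reads as round-robin assignment with a conflict check; the Python sketch below is one interpretation of the abstract, with `conflicts` and the other names invented for the example.

```python
class SequentialReplacer:
    """Assigns cache levels sequentially, skipping ahead on a conflict."""

    def __init__(self, num_levels):
        self.num_levels = num_levels
        self.next_level = 0                 # normal sequential assignment

    def assign(self, conflicts):
        """Return the level to replace, or None when assignment is inhibited.

        conflicts(level) flags a condition that would make the assignment
        improper for that level.
        """
        level = self.next_level
        # Advance forward to the next sequential group of locations / level.
        self.next_level = (self.next_level + 1) % self.num_levels
        if conflicts(level):
            # Inhibit the normal location assignment: the caller performs
            # no directory write, and the conflicting data received from
            # memory is not written into the cache store.
            return None
        return level
```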

32 citations


Patent
16 Mar 1978
TL;DR: In this article, the successive fetch requests by the I-unit for sublines (e.g. doublewords) of a variable length field operand are satisfied by the first through the highest-address fetched sublines in a line being accessed from main storage via a cache bypass.
Abstract: In the case of a cache miss, the successive fetch requests by the I-unit for sublines (e.g. doublewords) of a variable length field operand are satisfied by the first through the highest-address fetched sublines in a line being accessed from main storage via a cache bypass. This avoids the time delay for the I-unit caused by waiting until the complete line has been transferred to the cache before all required sublines in the line are obtainable from the cache. Address operand pairs (AOPs), consisting of request and buffer registers, are provided in the I-unit to handle the fetched sublines as fast as the cache bypass can provide them from main storage. If there is a cache hit, the sublines are accessed from the cache.
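
A small Python sketch of the bypass path, assuming the line arrives from main storage one subline at a time; `deliver` stands in for the I-unit's request and buffer registers and, like the other names, is invented for the example.

```python
def miss_with_bypass(first_subline, line_from_memory, cache_line, deliver):
    """On a cache miss, fill the cache line as sublines arrive, while also
    forwarding every subline from the first one requested through the
    highest-address subline straight to the I-unit, so it need not wait
    for the complete line transfer."""
    for offset, word in enumerate(line_from_memory):
        cache_line[offset] = word          # normal line fill into the cache
        if offset >= first_subline:
            deliver(offset, word)          # cache bypass to the I-unit

# Example: an 8-doubleword line where the I-unit first asked for subline 3.
line = [f"dw{i}" for i in range(8)]
cache_line = [None] * 8
miss_with_bypass(3, line, cache_line,
                 lambda off, w: print("I-unit receives subline", off, w))
```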

31 citations


Patent
11 Dec 1978
TL;DR: A cache system includes a high speed storage unit organized into a plurality of levels, each including a number of multiword blocks, together with at least one multiposition address selection switch and address register, as discussed by the authors.
Abstract: A cache system includes a high speed storage unit organized into a plurality of levels, each including a number of multiword blocks, and at least one multiposition address selection switch and address register. The address switch is connected to receive address signals from a plurality of address sources. The system further includes a directory organized into a plurality of levels for storing address information required for accessing blocks from the cache storage unit, and timing circuits for defining first and second halves of a cache cycle of operation. Control circuits coupled to the timing circuits generate control signals for controlling the operation of the address selection switch. During the previous cycle, the control circuits condition the address selection switch to select an address, which is loaded into the address register during the previous half cycle. This enables either the accessing of instructions from cache or the writing of data into cache during the first half of the next cache cycle. During the first half of the cycle, the address selected by the address switch in response to control signals from the control circuits is clocked into the address register. This permits processor operations, such as the accessing of operand data or the writing of data into cache, to be performed during the second half of the same cycle.
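
One way to picture the two half-cycles is the Python sketch below, which treats each half as a discrete step; this is a schematic reading of the abstract, and every name in it is illustrative.

```python
def cache_cycle(cache, address_register, sources, select):
    """One cache cycle of operation, split into two halves.

    The address latched during the previous cycle is serviced in the first
    half while the switch-selected address is clocked into the register, so
    a second operation can be serviced in the second half of the same cycle.

    cache: callable performing an access at a given address.
    sources: inputs to the multiposition address selection switch.
    select: the source the control circuits have conditioned the switch to pick.
    """
    # First half: instruction fetch or data write at the latched address,
    # while the newly selected address is clocked into the address register.
    first_result = cache(address_register["addr"])
    address_register["addr"] = sources[select]
    # Second half: processor operation (operand access or data write)
    # at the newly latched address.
    second_result = cache(address_register["addr"])
    return first_result, second_result
```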

30 citations


Journal ArticleDOI
01 Nov 1978
TL;DR: The results indicate the viability of systems utilising cache memories and pipelined binary switches, which exhibit performance comparable to systems with crosspoint switches, and suggest that m.i.m.d. systems with pipelined binary switches can be implemented at a lower cost than those with crosspoint switches.
Abstract: Simulation results of a multiple-instruction multiple-data-stream (m.i.m.d.) organisation are presented. The results deal with the behaviour of throughput performance with respect to variations in cache-memory parameters, number of processors, and processing time of an m.i.m.d. system in which a pipelined binary switch is used as the interconnection network. The results indicate the viability of systems utilising cache memories and pipelined switches, which exhibit performance comparable to systems with crosspoint switches. This aspect is attractive, since it is likely that m.i.m.d. systems with pipelined binary switches can be implemented at a lower cost than those with crosspoint switches.

1 citation