
Showing papers on "Cache invalidation published in 1977"


Patent
28 Nov 1977
TL;DR: In this patent, a filter memory is provided with each buffer invalidation address stack (BIAS) in a multiprocessor (MP) system to reduce unnecessary interrogations of the cache directories of processors.
Abstract: The disclosed embodiments filter out many unnecessary interrogations of the cache directories of processors in a multiprocessor (MP) system, thereby reducing the required size of the buffer invalidation address stack (BIAS) with each associated processor, and increasing the efficiency of each processor by allowing it to access its cache during the machine cycles which in prior MP's had been required for invalidation interrogation. Invalidation interrogation of each remote processor cache directory may be done when each channel or processor generates a store request to a shared main storage. A filter memory is provided with each BIAS in the MP. The filter memory records the cache block address in each invalidation request transferred to its associated BIAS. The filter memory deletes an address when it is deleted from the cache directory and retains the most recent cache access requests. The filter memory may have one or more registers, or be an array. Invalidation interrogation addresses from each remote processor and from local and/or remote channels are received and compared against each valid address recorded in the filter memory. If they compare unequal, the received address is recorded in the filter memory as a valid address, and it is gated into BIAS to perform a cache interrogation. If equal, the inputted address is prevented from entering the filter memory or the BIAS, so that it cannot cause any cache interrogation. Deletion from the filter memory is done when the associated processor fetches a block of data into its cache. Deletion may be of all entries in the filter memory, or of only a valid entry having an address equal to the block fetch address in a fetch address register (FAR). Deletion may be done by resetting a valid bit with each entry.
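The filtering logic described above can be sketched in software. This is a minimal model, not the patented hardware: the filter memory is modeled as a set of valid block addresses, and the BIAS as a queue; all names are illustrative.

```python
# Sketch of the patent's filter-memory idea: a small record of recently
# queued invalidation addresses suppresses duplicate cache interrogations.
from collections import deque

class FilteredBIAS:
    """Filters duplicate invalidation requests before they reach the
    buffer invalidation address stack (BIAS)."""

    def __init__(self):
        self.filter = set()   # valid block addresses already queued
        self.bias = deque()   # pending cache-directory interrogations

    def invalidation_request(self, block_addr):
        """Called when a remote processor or channel stores to shared storage."""
        if block_addr in self.filter:
            return False              # compare equal: suppress the interrogation
        self.filter.add(block_addr)   # record as a valid address
        self.bias.append(block_addr)  # gate into BIAS for a cache interrogation
        return True

    def block_fetch(self, block_addr):
        """The local processor fetched a block into its cache: delete the
        matching filter entry (the per-entry deletion variant)."""
        self.filter.discard(block_addr)
```

The payoff is that a burst of stores to the same block costs the local processor only one cache-directory interrogation instead of one per store.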

59 citations


Patent
Thomas F. Joyce1
22 Dec 1977
TL;DR: A first in-first out buffer memory coupled to a system bus receives all information transferred over the bus; associated logic tests whether the information is intended to update main memory or is a response to a cache request.
Abstract: A first in-first out buffer memory coupled to a system bus receives all information transferred over the bus. Logic associated with the buffer memory tests if the information received is intended to update main memory or is in response to a cache request. The information is written into cache if the main memory address location is stored in a cache directory. The information received in response to a cache request is stored in a cache data buffer. Other information is discarded.
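The classification step above can be sketched as a small drain loop over the FIFO. This is an illustrative software model under assumed data structures (a directory dict, a pending-request set), not the patent's circuitry.

```python
# Sketch: drain the snoop FIFO, routing each captured bus transfer.
from collections import deque

def drain_snoop_fifo(bus_fifo, directory, cache_data, data_buffer, pending):
    """bus_fifo holds (addr, word, kind) tuples captured from the bus.
    Memory updates hit the cache only if the address is in the directory;
    replies to cache requests go to the cache data buffer; the rest is
    discarded."""
    while bus_fifo:
        addr, word, kind = bus_fifo.popleft()
        if kind == "memory_update" and addr in directory:
            cache_data[directory[addr]] = word   # keep the cached copy current
        elif kind == "reply" and addr in pending:
            data_buffer.append((addr, word))     # response to a cache request
        # any other transfer is discarded
```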

40 citations


Patent
22 Dec 1977
TL;DR: In this patent, the cache store monitors each communication between system units to determine whether it is a communication from a system unit to main memory that will update a word location in main memory.
Abstract: A data processing system includes a plurality of system units all connected in common to a system bus. Included are a main memory system and a high speed buffer or cache store. System units communicate with each other over the system bus. Apparatus in the cache store monitors each communication between system units to determine if it is a communication from a system unit to main memory which will update a word location in main memory. If that word location is also stored in cache then the word location in cache will be updated in addition to the word location in main memory.
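The monitoring rule reduces to a single condition: a bus transfer destined for main memory whose address is also resident in the cache updates both copies. A hedged sketch, with the cache modeled as an address-to-word dict:

```python
# Sketch of the write-update monitoring rule from the abstract.
def monitor_transfer(cache, transfer):
    """transfer is (source_unit, destination, addr, word). When a unit
    writes a word to main memory and that address is also cached, update
    the cached copy too, keeping cache and memory consistent."""
    source, destination, addr, word = transfer
    if destination == "main_memory" and addr in cache:
        cache[addr] = word
```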

35 citations


Patent
22 Dec 1977
TL;DR: In this patent, the cache system is word oriented and comprises a directory, a data buffer and associated control logic; the CPU requests data words by sending the main memory address of the requested word to the cache.
Abstract: A data processing system includes a plurality of system units all connected in common to a system bus. The system units include a central processor (CPU), a memory system and a high speed buffer or cache system. The cache system is word oriented and comprises a directory, a data buffer and associated control logic. The CPU requests data words by sending a main memory address of the requested data word to the cache system. If the cache does not have the information, apparatus in the cache requests the information from main memory, and in addition, the apparatus requests additional information from consecutively higher addresses. If main memory is busy, the cache has apparatus to request fewer words.
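The miss-handling policy can be sketched as a function that builds the list of addresses to request: the missed word plus additional consecutively higher addresses, throttled when main memory is busy. The prefetch counts are illustrative assumptions, not figures from the patent.

```python
# Sketch of the miss policy: prefetch consecutive words, fewer when busy.
def addresses_to_request(miss_addr, memory_busy, max_prefetch=3):
    """Return the missed address plus extra consecutively higher
    addresses; request fewer words when main memory reports busy."""
    extra = 1 if memory_busy else max_prefetch
    return [miss_addr + i for i in range(extra + 1)]
```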

31 citations


Patent
22 Dec 1977
TL;DR: A data processing system comprises a central processor unit, a main memory and a cache, all coupled in common to a system bus; the central processor unit is also separately coupled to the cache.
Abstract: A Data Processing System comprises a central processor unit, a main memory and a cache, all coupled in common to a system bus. The central processor unit is also separately coupled to the cache. Apparatus in cache is responsive to signals received from the central processor unit to initiate a test and verification mode of operation in cache. This mode enables the cache to exercise various logic areas of cache and to indicate to the central processor unit hardware faults.

26 citations


Patent
22 Dec 1977
TL;DR: In this patent, a word-oriented data processing system includes a plurality of system units all connected in common to a system bus, including a central processor unit (CPU), a memory system and a high speed buffer or cache system.
Abstract: A word oriented data processing system includes a plurality of system units all connected in common to a system bus. Included are a central processor unit (CPU), a memory system and a high speed buffer or cache system. The cache system is also coupled to the CPU. The cache includes an address directory and a data store with each address location of directory addressing its respective word in data store. The CPU requests a word of cache by sending a memory request to cache which includes a memory address location. If the requested word is stored in the data store, then it is sent to the CPU. If the word is not stored in cache, the cache requests the word of memory. When the cache receives the word from memory, the word is sent to the CPU and also stored in the data store.
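The read path described above is the classic look-aside flow: consult the directory, serve a hit from the data store, and on a miss fetch from memory, forward the word to the CPU, and retain a copy. A minimal sketch, with memory modeled as a dict:

```python
# Sketch of the directory/data-store read path from the abstract.
class WordCache:
    def __init__(self, memory):
        self.memory = memory   # backing main memory: addr -> word
        self.directory = {}    # addr -> slot index in the data store
        self.data_store = []   # cached words

    def read(self, addr):
        """Return the word at addr, filling the cache on a miss."""
        if addr in self.directory:                 # hit: already cached
            return self.data_store[self.directory[addr]]
        word = self.memory[addr]                   # miss: request from memory
        self.directory[addr] = len(self.data_store)
        self.data_store.append(word)               # store it for next time
        return word                                # ...and send it to the CPU
```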

19 citations


Patent
22 Dec 1977
TL;DR: In this patent, a central processor subsystem, a main memory subsystem and a cache subsystem are all coupled in common to a system bus; during initialization, information is transferred from main memory into the cache starting at the lowest-order address locations and continuing through successive addresses until the cache is fully loaded.
Abstract: A data processing system includes a central processor subsystem, a main memory subsystem and a cache subsystem, all coupled in common to a system bus. During the overall system initialization process, apparatus in the cache subsystem effects the transfer of information from the main memory subsystem to the cache subsystem to load all address locations of the cache subsystem. The transfer of information from the main memory subsystem to the cache subsystem starts from the lowest order address locations in main memory and continues from successive address locations until the cache subsystem is fully loaded. This assures that the cache subsystem contains valid information during normal data processing.
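The initialization transfer is a straight sequential fill: copy from memory address 0 upward until every cache location holds a valid word. A hedged sketch of that loop:

```python
# Sketch of the initialization preload: fill the cache from the lowest
# main-memory addresses so it holds only valid data before normal use.
def preload_cache(cache_size, memory):
    """Return a (directory, data_store) pair loaded from memory
    addresses 0 .. cache_size-1 in ascending order."""
    directory, data_store = [], []
    for addr in range(cache_size):
        directory.append(addr)            # record which address each slot holds
        data_store.append(memory[addr])   # copy the word from main memory
    return directory, data_store
```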

17 citations


Patent
23 Dec 1977

10 citations


Book ChapterDOI
31 Mar 1977
TL;DR: It is concluded that very simple models can quite accurately predict such performance improvements if properly abstracted from the actual or proposed system architecture and can do so with a small expenditure of computer time and human effort.
Abstract: It is reasonable to attempt to improve performance of an existing computer system by incorporation of a cache or buffer memory. Furthermore, it is also reasonable to attempt to predict the effect of that inclusion by system models. This paper reports on such an effort. We begin by describing the system, devising a methodology to use a processor-dedicated cache in the multiprocessor system, and conclude by examining a series of modeling efforts germane to predicting the performance effects of the cache. We conclude that very simple models can quite accurately predict such performance improvements, provided they are properly abstracted from the actual or proposed system architecture, and that they can do so with a small expenditure of computer time and human effort.
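The paper's abstract does not give its models, but the simplest example of the kind of abstraction it describes is the standard hit-ratio model of average access time; the sketch below is that textbook formula, offered only as an illustration of how little a "very simple model" needs.

```python
# Textbook one-parameter cache model (illustrative, not the paper's model):
# average access time as a hit-ratio-weighted mix of the two latencies.
def effective_access_time(hit_ratio, t_cache, t_main):
    """Expected access time given a cache hit ratio and the cache and
    main-memory access times (same time units for both)."""
    return hit_ratio * t_cache + (1 - hit_ratio) * t_main
```

For example, a 90% hit ratio with a 1-cycle cache and 10-cycle memory yields an average of 1.9 cycles per access.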