
Showing papers on "Cache algorithms published in 1977"


Patent
28 Nov 1977
TL;DR: In this article, a filter memory is provided with each buffer invalidation address stack (BIAS) in a multiprocessor (MP) system to reduce unnecessary interrogations of the cache directories of processors.
Abstract: The disclosed embodiments filter out many unnecessary interrogations of the cache directories of processors in a multiprocessor (MP) system, thereby reducing the required size of the buffer invalidation address stack (BIAS) with each associated processor, and increasing the efficiency of each processor by allowing it to access its cache during the machine cycles which in prior MP's had been required for invalidation interrogation. Invalidation interrogation of each remote processor cache directory may be done when each channel or processor generates a store request to a shared main storage. A filter memory is provided with each BIAS in the MP. The filter memory records the cache block address in each invalidation request transferred to its associated BIAS. The filter memory deletes an address when it is deleted from the cache directory and retains the most recent cache access requests. The filter memory may have one or more registers, or be an array. Invalidation interrogation addresses from each remote processor and from local and/or remote channels are received and compared against each valid address recorded in the filter memory. If they compare unequal, the received address is recorded in the filter memory as a valid address, and it is gated into BIAS to perform a cache interrogation. If equal, the inputted address is prevented from entering the filter memory or the BIAS, so that it cannot cause any cache interrogation. Deletion from the filter memory is done when the associated processor fetches a block of data into its cache. Deletion may be of all entries in the filter memory, or of only a valid entry having an address equal to the block fetch address in a fetch address register (FAR). Deletion may be done by resetting a valid bit with each entry.
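The filtering scheme above can be sketched in software. This is a minimal sketch, not the patented hardware: the names `FilterMemory` and `bias_queue` are illustrative, and it implements the variant that deletes only the entry matching the fetch address register (FAR), not the delete-all variant.

```python
from collections import deque

class FilterMemory:
    """Records block addresses already queued for cache invalidation,
    so duplicate interrogation requests can be suppressed."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = deque()          # most recent invalidation addresses
        self.bias_queue = []            # stands in for the BIAS hardware stack

    def invalidation_request(self, block_addr):
        """Return True if the request is forwarded to BIAS, False if filtered."""
        if block_addr in self.entries:
            return False                # equal compare: suppress the interrogation
        if len(self.entries) == self.capacity:
            self.entries.popleft()      # retain only the most recent addresses
        self.entries.append(block_addr)
        self.bias_queue.append(block_addr)
        return True

    def block_fetched(self, block_addr):
        """On a local block fetch, delete the matching filter entry (FAR match)."""
        if block_addr in self.entries:
            self.entries.remove(block_addr)
```

A repeated invalidation for the same block is filtered out until the local processor fetches that block again, at which point the entry is deleted and the next remote store to the block once more triggers an interrogation.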

59 citations


Patent
22 Dec 1977
TL;DR: In this paper, a data processing system is described having a system bus and a plurality of system units, including a main memory, a cache memory, a central processing unit (CPU), and a communications controller, all connected in parallel to the system bus.
Abstract: A data processing system having a system bus; a plurality of system units including a main memory, a cache memory, a central processing unit (CPU) and a communications controller all connected in parallel to the system bus. The controller operates to supervise interconnection between the units via the system bus to transfer data therebetween, and the CPU includes a memory request device for generating data requests in response to the CPU. The cache memory includes a private interface connecting the CPU to the cache memory for permitting direct transmission of data requests from the CPU to the cache memory and direct transmission of requested data from the cache memory to the CPU; a cache directory and data buffer for evaluating the data requests to determine when the requested data is not present in the cache memory; and a system bus interface connecting the cache memory to the system bus for obtaining CPU requested data not found in the cache memory from the main memory via the system bus in response to the cache directory and data buffer. The cache memory may also include replacement and update apparatus for determining when the system bus is transmitting data to be written into a specific address in main memory and for replacing the data in a corresponding specific address in the cache memory with the data then on the system bus.

53 citations


Patent
18 Feb 1977
TL;DR: In this article, the cache store includes parity generation circuits which generate check bits for the addresses to be written into an associated directory, and parity check circuits for detecting errors in the addresses and information read from the cache store during a read cycle of operation.
Abstract: A memory system includes a cache store and a backing store. The cache store provides fast access to blocks of information previously fetched from the backing store in response to commands. The backing store includes error detection and correction apparatus for detecting and correcting errors in the information read from the backing store during a backing store cycle of operation. The cache store includes parity generation circuits which generate check bits for the addresses to be written into a directory associated therewith. Additionally, the cache store includes parity check circuits for detecting errors in the addresses and information read from the cache store during a read cycle of operation. The memory system further includes control apparatus for enabling the backing store and cache store for operation in response to the commands. The control apparatus includes circuits which couple to the parity check circuits. Upon detecting an error in either an address or information read from the cache store, these circuits simulate a condition in which the requested information was not stored in the cache store. This causes the control apparatus to initiate a backing store cycle of operation to read out a correct version of the requested information, thereby eliminating the need to include more complex detection and correction circuits in the cache store.
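The error-as-miss behavior can be illustrated with a short sketch. All names here (`CacheWithParityFallback`, `parity`) are hypothetical; the point is only that a parity mismatch on a cached entry is handled exactly like an absent entry, forcing a backing-store cycle that returns ECC-corrected data.

```python
def parity(word):
    """Even-parity bit over an integer word."""
    return bin(word).count("1") % 2

class CacheWithParityFallback:
    def __init__(self, backing_store):
        self.backing = backing_store        # dict: addr -> word (ECC-protected)
        self.lines = {}                     # addr -> (word, stored parity bit)

    def read(self, addr):
        entry = self.lines.get(addr)
        if entry is not None:
            word, stored_p = entry
            if parity(word) == stored_p:
                return word                 # clean cache hit
            # parity error: behave exactly as if the line were absent
        word = self.backing[addr]           # backing-store cycle (corrected data)
        self.lines[addr] = (word, parity(word))
        return word
```

The cache thus never needs its own correction logic: a single parity bit detects the error, and the backing store's ECC supplies the repair.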

47 citations


Patent
22 Dec 1977
TL;DR: In this article, the cache store monitors each communication between system units to determine if it is a communication from a system unit to main memory which will update a word location in main memory.
Abstract: A data processing system includes a plurality of system units all connected in common to a system bus. Included are a main memory system and a high speed buffer or cache store. System units communicate with each other over the system bus. Apparatus in the cache store monitors each communication between system units to determine if it is a communication from a system unit to main memory which will update a word location in main memory. If that word location is also stored in cache then the word location in cache will be updated in addition to the word location in main memory.
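The monitoring scheme above is an early form of what is now called bus snooping. A minimal sketch, with hypothetical names, of the update-on-match rule:

```python
class SnoopingCache:
    def __init__(self):
        self.words = {}                 # addr -> cached word

    def snoop(self, bus_cycle):
        """Observe one bus communication; on a write to main memory whose
        address is also cached, update the cached copy as well."""
        kind, addr, data = bus_cycle
        if kind == "mem_write" and addr in self.words:
            self.words[addr] = data     # keep cache consistent with main memory
```

Writes to addresses the cache does not hold are ignored, so snooping never allocates new lines; it only keeps already-cached words current.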

35 citations


Patent
22 Dec 1977
TL;DR: In this paper, the cache system is word oriented and comprises a directory, a data buffer and associated control logic, and the CPU requests data words by sending a main memory address of the requested data word to the cache.
Abstract: A data processing system includes a plurality of system units all connected in common to a system bus. The system units include a central processor (CPU), a memory system and a high speed buffer or cache system. The cache system is word oriented and comprises a directory, a data buffer and associated control logic. The CPU requests data words by sending a main memory address of the requested data word to the cache system. If the cache does not have the information, apparatus in the cache requests the information from main memory, and in addition, the apparatus requests additional information from consecutively higher addresses. If main memory is busy, the cache has apparatus to request fewer words.
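The prefetch policy described above can be sketched as a small address generator. The burst sizes are illustrative assumptions, not figures from the patent:

```python
def build_fetch_request(miss_addr, memory_busy, full_burst=4, busy_burst=1):
    """On a cache miss, return the list of word addresses to request from
    main memory: the missing word plus words at consecutively higher
    addresses, trimmed to fewer words when main memory is busy."""
    count = busy_burst if memory_busy else full_burst
    return [miss_addr + i for i in range(count)]
```

When memory is idle the cache exploits spatial locality by fetching a run of consecutive words; when memory is busy it falls back to fetching only what the CPU actually asked for.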

31 citations


Patent
22 Dec 1977
TL;DR: A data processing system comprises a central processor unit, a main memory and a cache, all coupled in common to a system bus, as discussed by the authors; the central processor unit is also separately coupled to the cache.
Abstract: A Data Processing System comprises a central processor unit, a main memory and a cache, all coupled in common to a system bus. The central processor unit is also separately coupled to the cache. Apparatus in cache is responsive to signals received from the central processor unit to initiate a test and verification mode of operation in cache. This mode enables the cache to exercise various logic areas of cache and to indicate to the central processor unit hardware faults.

26 citations


Patent
22 Dec 1977
TL;DR: In this article, a word oriented data processing system includes a plurality of system units all connected in common to a system bus, including a central processor unit (CPU), a memory system and a high speed buffer or cache system.
Abstract: A word oriented data processing system includes a plurality of system units all connected in common to a system bus. Included are a central processor unit (CPU), a memory system and a high speed buffer or cache system. The cache system is also coupled to the CPU. The cache includes an address directory and a data store, with each address location of the directory addressing its respective word in the data store. The CPU requests a word from the cache by sending a memory request which includes a memory address location. If the requested word is stored in the data store, it is sent to the CPU. If the word is not stored in cache, the cache requests the word from memory. When the cache receives the word from memory, the word is sent to the CPU and also stored in the data store.

19 citations


Patent
22 Dec 1977
TL;DR: In this article, a central processor subsystem, a main memory subsystem and a cache subsystem are all coupled in common to a system bus; the transfer of information from main memory to the cache starts from the lowest order address locations in main memory and continues through successive address locations until the cache is fully loaded.
Abstract: A data processing system includes a central processor subsystem, a main memory subsystem and a cache subsystem, all coupled in common to a system bus. During the overall system initialization process, apparatus in the cache subsystem effects the transfer of information from the main memory subsystem to the cache subsystem to load all address locations of the cache subsystem. The transfer of information from the main memory subsystem to the cache subsystem starts from the lowest order address locations in main memory and continues from successive address locations until the cache subsystem is fully loaded. This assures that the cache subsystem contains valid information during normal data processing.
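The initialization-time preload can be sketched in a few lines. The function name and the dict-as-cache representation are illustrative assumptions:

```python
def preload_cache(main_memory, cache_size):
    """During system initialization, copy words from main memory into the
    cache starting at the lowest address and continuing through successive
    addresses until every cache location is loaded."""
    cache = {}
    for addr in range(cache_size):
        cache[addr] = main_memory[addr]     # successive address locations
    return cache                            # every entry now holds valid data
```

Because every location is filled with real memory contents before normal processing begins, the cache never needs per-line valid bits to distinguish initialized from uninitialized entries.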

17 citations


Patent
23 Dec 1977

10 citations


Proceedings ArticleDOI
13 Jun 1977
TL;DR: By appropriate cache system design, adequate memory system speed can be achieved to keep the processors busy, and smaller cache memories are required for dedicated processors than for standard processors.
Abstract: The performance of two types of multiprocessor systems with cache memories dedicated to each processor is analyzed. It is demonstrated that by appropriate cache system design, adequate memory system speed can be achieved to keep the processors busy. A write-through algorithm is used for each cache to minimize directory searching, and several main memory modules are used to provide interleaved writes. For large memories, a cost-performance analysis shows that with an increase in per-bit costs of 5 to 20 percent, the memory throughput can be enhanced by a factor of 10, and by a factor of 3 or more over simple interleaving of the modules, for random memory requests. Experimental evidence indicates that smaller cache memories are required for dedicated processors than for standard processors. All memories and buses can be of modest speed.
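The write-through policy the paper relies on can be sketched alongside module interleaving. Names and the modulo module-selection rule are illustrative assumptions, not details from the paper:

```python
class WriteThroughCache:
    def __init__(self, modules):
        self.modules = modules          # list of dicts, one per memory module
        self.lines = {}                 # addr -> cached word

    def store(self, addr, word):
        if addr in self.lines:
            self.lines[addr] = word     # update the cached copy; no dirty bit
        module = self.modules[addr % len(self.modules)]
        module[addr] = word             # write-through to the interleaved module

    def load(self, addr):
        if addr not in self.lines:
            module = self.modules[addr % len(self.modules)]
            self.lines[addr] = module[addr]     # miss: fetch and allocate
        return self.lines[addr]
```

Because every store reaches main memory immediately, main memory is always current and the directory never has to be searched for dirty lines to write back; interleaving across modules spreads the resulting write traffic.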

9 citations


Book ChapterDOI
31 Mar 1977
TL;DR: It is concluded that very simple models can quite accurately predict such performance improvements if properly abstracted from the actual or proposed system architecture and can do so with a small expenditure of computer time and human effort.
Abstract: It is reasonable to attempt to improve the performance of an existing computer system by incorporating a cache or buffer memory, and likewise to attempt to predict the effect of that inclusion with system models. This paper reports on such an effort. We begin by describing the system and devising a methodology for using a processor-dedicated cache in the multiprocessor system, and conclude by examining a series of modeling efforts germane to predicting the performance effects of the cache. We conclude that very simple models, if properly abstracted from the actual or proposed system architecture, can quite accurately predict such performance improvements, and can do so with a small expenditure of computer time and human effort.