
Showing papers on "Cache invalidation published in 1980"


Patent
24 Jan 1980
TL;DR: In this paper, an apparatus is disclosed for providing faster memory access for a CPU by utilizing a least recently used scheme for selecting a storage location in which to store data retrieved from main memory upon a cache miss.
Abstract: An apparatus is disclosed herein for providing faster memory access for a CPU by utilizing a least recently used scheme for selecting a storage location in which to store data retrieved from main memory upon a cache miss. A duplicate directory arrangement is also disclosed for selective clearing of the cache in multiprocessor systems where data in a cache becomes obsolete by virtue of a change made to the corresponding data in main memory by another processor. The advantage of higher overall speed for CPU operations is achieved because of the higher hit ratio provided by the disclosed arrangement. In the preferred embodiment, the cache utilizes: a cache store for storing data; primary and duplicate directories for identifying the data stored in the cache; a full/empty array to mark the status of the storage locations; a least recently used array to indicate where incoming data should be stored; and a control means to orchestrate all these elements.

110 citations
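The arrangement described above — a full/empty array plus a least-recently-used array to choose where incoming data goes, with a duplicate directory driving selective clears — can be sketched in a few lines. This is a minimal Python model, not the patent's circuitry; the class and method names are illustrative.

```python
class CacheSet:
    """Toy model of one cache set: full/empty bits plus LRU ordering pick
    the slot to fill on a miss, and entries can be selectively cleared."""

    def __init__(self, ways):
        self.tags = [None] * ways          # primary directory
        self.full = [False] * ways         # full/empty array
        self.lru = list(range(ways))       # index 0 = least recently used

    def _touch(self, way):
        self.lru.remove(way)
        self.lru.append(way)               # most recently used at the end

    def access(self, tag):
        """Return (hit, way). On a miss, prefer an empty slot,
        otherwise evict the least recently used way."""
        for way, t in enumerate(self.tags):
            if self.full[way] and t == tag:
                self._touch(way)
                return True, way
        for way in range(len(self.tags)):  # miss: look for an empty slot
            if not self.full[way]:
                victim = way
                break
        else:
            victim = self.lru[0]           # none empty: take the LRU way
        self.tags[victim] = tag
        self.full[victim] = True
        self._touch(victim)
        return False, victim

    def invalidate(self, tag):
        """Selective clearing, as driven by the duplicate directory when
        another processor changes the corresponding main-memory data."""
        for way, t in enumerate(self.tags):
            if self.full[way] and t == tag:
                self.full[way] = False
```

Keeping empty slots ahead of LRU eviction is what raises the hit ratio early on: no valid line is displaced while free space remains.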


Patent
14 Nov 1980
TL;DR: A cache/disk subsystem includes a storage control unit, a relatively low capacity high speed cache store, and a higher capacity slower memory such as a plurality of disk drive devices.
Abstract: A cache/disk subsystem includes a storage control unit, a relatively low capacity high speed cache store, and a higher capacity slower memory such as a plurality of disk drive devices. The storage control unit controls the subsystem to transfer from the cache store to the disk drive devices segments of data which have been modified while resident in the cache store and insures that space will be available in the cache store for new data if a particular operation requires that new data be transferred from a disk to the cache store for use. The subsystem may include plural storage control units and any of them may control the transfers of segments of data to the disks.

73 citations


Patent
26 Aug 1980
TL;DR: In this article, a cache memory organization using a miss information collection and manipulation system is presented to ensure the transparency of cache misses; it makes use of the fact that the cache memory operates at a faster rate than central memory.
Abstract: A cache memory organization is shown using a miss information collection and manipulation system to ensure the transparency of cache misses. This system makes use of the fact that the cache memory operates at a faster rate than central memory. The cache memory comprises a set-associative cache section (tag arrays and control with a cache buffer), a central memory interface block (a memory requester and a memory receiver), and a miss information holding register section (a miss comparator and a status collection device). The miss information holding register section allows an almost continual stream of new requests for data to be supplied to the cache memory at the cache hit rate throughput.

56 citations
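The miss information holding registers described here are what later literature calls miss-status holding registers (MSHRs): a small file of outstanding misses that lets the cache keep accepting requests while slower central memory responds. A minimal sketch, with an assumed `MissFile` class standing in for the register file:

```python
class MissFile:
    """Minimal model of miss-information holding registers: outstanding
    misses are recorded so the cache keeps accepting new requests while
    central memory services earlier misses."""

    def __init__(self, entries):
        self.entries = entries
        self.pending = {}                  # block address -> waiting requesters

    def record_miss(self, block, requester):
        """Return True if the request was absorbed (primary or merged miss);
        False means the miss file is full and the cache must stall."""
        if block in self.pending:          # secondary miss: merge, no new fetch
            self.pending[block].append(requester)
            return True
        if len(self.pending) == self.entries:
            return False                   # no free holding register
        self.pending[block] = [requester]  # primary miss: allocate an entry
        return True

    def fill(self, block):
        """Central memory returned the block: wake every merged requester."""
        return self.pending.pop(block, [])
```

Merging secondary misses to the same block is the key to the "almost continual stream of new requests": only one fetch is outstanding per block, and the cache stalls only when every holding register is occupied.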


Patent
28 Jan 1980
TL;DR: In this paper, a cached multiprocessor system operates in an ordered pipeline timing sequence in which the time slot for use of the cache is made long enough to permit only one cache access.
Abstract: A cached multiprocessor system operates in an ordered pipeline timing sequence in which the time slot for use of the cache is made long enough to permit only one cache access. Further, the time slot for data transfers to and from the processors succeeds the time slot for accessing the cache. The sequence is optimized for transactions that require only one cache access, e.g., read operations that hit the cache. Write operations and read operations that miss the cache generally require a second cache access in order to update or allocate the cache. These transactions are entered into a queue with order preserved prior to permitting a second access to the cache. Also, a duplicate tag store is associated with the queue and maintained as a copy of the tag store in the cache. Whenever a cache tag is to be changed, a duplicate tag in the duplicate tag store is changed prior to changing the cache tag. The duplicate tag store thus always provides an accurate indication of the contents of the cache. The duplicate tag store is used to determine whether a second access to the cache for an update is necessary.

55 citations
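The ordering rule above — always change the duplicate tag before the cache tag — is what keeps the duplicate store trustworthy for queued transactions. A small sketch of the invariant (illustrative names; the patent describes hardware, not a data structure):

```python
class TagPair:
    """Sketch of the update-order rule: the duplicate tag store is written
    before the cache tag store, so consulting the duplicate store from the
    queue never misses a change that is in flight toward the cache."""

    def __init__(self):
        self.duplicate = set()             # duplicate tag store (by the queue)
        self.cache = set()                 # tag store in the cache itself

    def install(self, tag):
        self.duplicate.add(tag)            # step 1: duplicate first
        self.cache.add(tag)                # step 2: then the cache tag

    def replace(self, old, new):
        self.duplicate.discard(old)        # duplicate store leads again
        self.duplicate.add(new)
        self.cache.discard(old)
        self.cache.add(new)

    def needs_second_access(self, tag):
        """Queued transactions consult the duplicate store to decide whether
        a second cache access for an update is necessary."""
        return tag in self.duplicate
```

Because the duplicate store always runs ahead of the cache store, a negative answer from it is safe: the tag is definitely not (and will not be) in the cache.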


Patent
14 Nov 1980
TL;DR: In this paper, a segment descriptor table is maintained and the entries in the table are linked by forward and backward age links, and the segment descriptors for these additional segments are linked into the age chain at an age which is intermediate the most recently used and the least recently used.
Abstract: When a processor issues a read or write command to read one or more words from a disk, a cache store is checked to see if a copy of the segment(s) containing the word(s) is present therein. If a copy of the segment is not present in the cache store then it is moved from disk to the cache store and sent to the processor. A segment descriptor table is maintained and the entries in the table are linked by forward and backward age links. When a segment is brought into the cache store from a disk because it contains the word or words specified by a command, its segment descriptor is linked in the age chain as the most recently used. Provision is made for reading into the cache store one or more segments in addition to the segment(s) containing the word(s) specified by a command, on speculation that the additional segment(s) contain words most likely to be accessed soon by the processor. The segment descriptors for these additional segments are linked into the age chain at an age which is intermediate the most recently used and the least recently used.

43 citations
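Linking speculative read-ahead segments at an intermediate age, rather than at the MRU end, can be sketched with a simple ordered list. This is a Python sketch; inserting at the midpoint is one plausible reading of "intermediate", not a detail the abstract specifies.

```python
from collections import deque

class AgeChain:
    """Age chain of segment descriptors: index 0 is least recently used,
    the end is most recently used. Demanded segments go to the MRU end;
    speculative read-ahead segments are linked at an intermediate age
    (here the midpoint, as one plausible reading of the patent)."""

    def __init__(self):
        self.chain = deque()

    def link_demand(self, seg):
        if seg in self.chain:
            self.chain.remove(seg)
        self.chain.append(seg)                        # most recently used

    def link_speculative(self, seg):
        if seg in self.chain:
            self.chain.remove(seg)
        self.chain.insert(len(self.chain) // 2, seg)  # intermediate age

    def evict(self):
        return self.chain.popleft()                   # least recently used
```

The intermediate placement means a speculation that never pays off ages out sooner than demanded data, but still gets a window in which a nearby access can promote it.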


Patent
28 Jan 1980
TL;DR: In this article, the authors propose an ordered pipeline timing sequence in which the time slot for use of the cache is made long enough to permit only one cache access, and the time slot for data transfers to and from the processors succeeds the time slot for accessing the cache.
Abstract: A cached multiprocessor system operates in an ordered pipeline timing sequence in which the time slot for use of the cache is made long enough to permit only one cache access. Further, the time slot for data transfers to and from the processors succeeds the time slot for accessing the cache. The sequence is optimized for transactions that require only one cache access, e.g., read operations that hit the cache. Transactions that require two cache accesses must complete the second cache access during a later available pipeline sequence. A processor indexed random access memory specifies when any given processor has a write operation outstanding for a location in the cache. This prevents the processor from reading the location before the write operation is completed.

42 citations
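The processor-indexed memory that holds off reads while a write is outstanding can be modelled directly. A minimal sketch, assuming one set of outstanding locations per processor (names are illustrative):

```python
class WriteOutstanding:
    """Processor-indexed record of cache locations with a write outstanding,
    so a read by that processor is held off until its write completes."""

    def __init__(self, n_procs):
        self.outstanding = [set() for _ in range(n_procs)]

    def write_issued(self, proc, location):
        self.outstanding[proc].add(location)

    def write_completed(self, proc, location):
        self.outstanding[proc].discard(location)

    def read_allowed(self, proc, location):
        """A processor may read a location only if it has no write of its
        own still pending for that location."""
        return location not in self.outstanding[proc]
```

Note the check is per processor: only the writer itself is stalled, so the pipeline keeps flowing for the other processors.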


Patent
14 Nov 1980
TL;DR: In this paper, the storage control unit does not write a copy of the data into the cache store if the length of a data transfer exceeds a first threshold length, or if it exceeds a second threshold length and the transfer begins on a segment boundary and comprises an integral number of segments.
Abstract: In a data processing system having a host processor, a cache store for storing segments of data, a bulk memory and a storage control unit for controlling transfers between the processor, cache store and bulk memory, the storage control unit normally responds to a read or write command from the host processor to control the transfer of data. If a copy of the data transferred is not resident in the cache store then a copy is written therein by the storage control unit. If the length of a data transfer exceeds a first threshold length then the storage control unit does not write a copy of the data into the cache store. If the length of a data transfer exceeds a second threshold length, and the transfer begins on a segment boundary and comprises an integral number of segments, then the storage control unit does not write a copy of the data into the cache store. The writing into the cache store is transparent to the host processor. The use of a transfer threshold prevents data from being entered into the cache store when it is not likely to be used again soon. Two thresholds are provided because the data transferred in long transfers of an integral number of segments is even less likely to be used again soon than the data transferred in long transfers that do not begin on a segment boundary or comprise an integral number of segments.

40 citations
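The two-threshold bypass rule reduces to a short predicate. A sketch under the abstract's implication that the second (segment-aligned) threshold is the lower of the two; the function name and parameters are illustrative:

```python
def should_cache(length, start, seg_size, t1, t2):
    """Decide whether the storage control unit writes a copy of a transfer
    into the cache store. Bypass when the transfer exceeds the first
    threshold t1, or exceeds the lower threshold t2 while beginning on a
    segment boundary and comprising an integral number of segments
    (t2 < t1, as the abstract's reasoning implies)."""
    aligned = start % seg_size == 0 and length % seg_size == 0
    if aligned and length > t2:
        return False          # long, segment-aligned: least likely to be reused
    if length > t1:
        return False          # long transfer of any shape
    return True
```

The lower threshold for aligned whole-segment transfers encodes the abstract's judgment that such transfers (e.g. bulk sequential I/O) are the least likely to be referenced again soon.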


Patent
15 Apr 1980
TL;DR: In this article, a cache store (10) is provided in conjunction with a main store (2); each location of the cache (10) has a changed bit (CH), and a list (4) is maintained of the addresses of those cache locations which have been modified.
Abstract: A cache store (10) is provided in conjunction with a main store (2). Each location of the cache (10) has a changed bit (CH). A list (4) is maintained of the addresses of those locations in the cache store which have been modified. When the list (4) becomes full, the entry at the head of the list is read out and the corresponding data item is transferred from the cache (10) to the main store (2). Thus, there are never more than a limited number of items in the cache awaiting transfer. To clear the cache of all modified items, each entry in the list (4) is read out in turn and the corresponding items are transferred to the main store (2). The system reduces the amount of traffic between the cache (10) and the main store (2) and hence improves efficiency.

30 citations
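The bounded list of modified addresses, with write-back of the head entry when the list fills, is easy to sketch. A minimal Python model; the `written_back` list stands in for the main store (2):

```python
from collections import deque

class DirtyList:
    """Bounded list of modified cache addresses: when the list fills, the
    entry at its head is written back, so at most `limit` items ever
    await transfer from the cache to the main store."""

    def __init__(self, limit):
        self.limit = limit
        self.dirty = deque()
        self.written_back = []             # stands in for the main store

    def mark_modified(self, addr):
        if addr in self.dirty:
            return                         # already awaiting transfer
        if len(self.dirty) == self.limit:
            self.written_back.append(self.dirty.popleft())  # head of list
        self.dirty.append(addr)

    def flush_all(self):
        """Clear the cache of all modified items by draining the list."""
        while self.dirty:
            self.written_back.append(self.dirty.popleft())
```

Bounding the list is what makes a full flush cheap and predictable: clearing the cache of modified items never takes more than `limit` transfers plus whatever was marked during the flush.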


Patent
Anthony J. Capozzi1
15 Jul 1980
TL;DR: In this paper, a two-level storage hierarchy for a data processing system is directly addressable by the main processor; it combines a small, high-speed cache with a relatively slower, much larger main memory.
Abstract: A two-level storage hierarchy for a data processing system is directly addressable by the main processor. The two-level storage includes a high speed, small cache with a relatively slower, much larger main memory. The processor requests data from the cache and when the requested data is not resident in the cache, it is transferred from the main memory to the cache and then to the processor. The data transfer organization maximizes the system throughput and uses a single integrated control to accomplish data transfers.

26 citations


Patent
31 Dec 1980
TL;DR: In this paper, a group of diagnostic control registers supply signals for controlling the testing of the cache within the cache memory to determine the operability of the individual elements included in the cache memory.
Abstract: A cache memory wherein data words identified by odd address numbers are stored separately from data words identified by even address numbers. A group of diagnostic control registers supply signals for controlling the testing of the cache within the cache memory to determine the operability of the individual elements included in the cache memory.

20 citations


Patent
14 Nov 1980
TL;DR: In this article, a cache store for storing a segment descriptor table is described, where the table has an entry corresponding to each segment in the cache store and an entry in the table is formed or reformed each time data is written into its corresponding segment in cache store.
Abstract: A system including a host processor, a disk drive device and a cache store for storing segments of data most recently used or most likely to be used, is provided with a separate store for storing a segment descriptor table. The table has an entry therein corresponding to each segment in the cache store and an entry in the table is formed or reformed each time data is written into its corresponding segment in the cache store. A copy of the segment descriptor table entry thus formed is stored in the cache store at the end of its corresponding data segment. The segment descriptor table entries are normally utilized to manage their corresponding segments. If the store for the table should fail, the copies of the segment descriptor table entries in the cache store may be utilized to control the transfer of data segments in the cache store to their proper locations in disk storage.
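The recovery scheme — a copy of each descriptor stored at the end of its data segment, usable to rebuild the table if the table's store fails — can be sketched briefly. An illustrative Python model; the dictionaries stand in for the separate descriptor store and the cache store:

```python
class SegmentStore:
    """Sketch of the recovery idea: each cached segment carries a copy of
    its descriptor, so the descriptor table can be rebuilt from those
    copies if the separate table store fails."""

    def __init__(self):
        self.table = {}                    # separate descriptor-table store
        self.cache = {}                    # seg id -> (data, descriptor copy)

    def write_segment(self, seg_id, data, disk_addr):
        desc = {"seg": seg_id, "disk": disk_addr}
        self.table[seg_id] = desc                  # normal management path
        self.cache[seg_id] = (data, dict(desc))    # copy kept with the data

    def rebuild_table(self):
        """After a table-store failure, recover every entry from the
        per-segment copies so segments can still reach their disk homes."""
        self.table = {sid: dict(desc)
                      for sid, (_, desc) in self.cache.items()}
```

Re-forming the copy on every write keeps the two representations consistent without any extra synchronization step.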

Patent
03 Apr 1980
TL;DR: In this article, a data processing system includes a main memory shared by a plurality of central processor units (CPUs) which are also coupled in cascade in a closed circular path; each CPU has a cache buffer memory and two sets of transfer registers for receiving and transmitting cancel request signals, which identify cache data that is no longer valid.
Abstract: This data processing system includes a main memory which is shared by a plurality of central processor units (CPUs) which are also coupled in cascade in a closed circular path. Each CPU has a cache buffer memory and two sets of transfer registers for receiving and transmitting cancel request signals which identify cache data which is no longer valid. Each CPU's receiving register subsystem includes circuitry for invalidating cache buffer data which has been updated or rewritten in main memory by another CPU in the loop. Each CPU's transmitting register subsystem includes circuitry for inhibiting the transmittal of a cancel request signal if the next CPU in the circle is the same one which originated the particular cache invalidation signal. Circuitry is also provided for propagating a cancel request signal around the loop in opposite directions simultaneously.
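The forwarding rule for cancel requests — pass the signal on around the circle unless the next CPU is the one that originated it — can be sketched as follows. A single-direction Python sketch (the patent also propagates in both directions simultaneously); function names are illustrative:

```python
def forward_cancel(origin, current, n_cpus):
    """Ring rule: a CPU forwards a cancel (invalidation) request to the next
    CPU in the circle unless that neighbour originated the request, in
    which case forwarding is inhibited (returns None)."""
    nxt = (current + 1) % n_cpus
    return None if nxt == origin else nxt

def propagate(origin, n_cpus):
    """CPUs visited by a cancel request travelling one way round the loop."""
    visited = []
    cur = origin
    while True:
        nxt = forward_cancel(origin, cur, n_cpus)
        if nxt is None:
            return visited
        visited.append(nxt)
        cur = nxt
```

Inhibiting the hand-off to the originator is what terminates the broadcast: every other CPU in the loop sees the cancel exactly once, and the signal never laps the ring.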