
Showing papers on "Cache algorithms" published in 1980


Patent
24 Jan 1980
TL;DR: In this paper, an apparatus is presented that provides faster memory access for a CPU by using a least recently used scheme to select a storage location in which to store data retrieved from main memory upon a cache miss.
Abstract: An apparatus is disclosed herein for providing faster memory access for a CPU by utilizing a least recently used scheme for selecting a storage location in which to store data retrieved from main memory upon a cache miss. A duplicate directory arrangement is also disclosed for selective clearing of the cache in multiprocessor systems where data in a cache becomes obsolete by virtue of a change made to the corresponding data in main memory by another processor. The advantage of higher overall speed for CPU operations is achieved because of the higher hit ratio provided by the disclosed arrangement. In the preferred embodiment, the cache utilizes: a cache store for storing data; primary and duplicate directories for identifying the data stored in the cache; a full/empty array to mark the status of the storage locations; a least recently used array to indicate where incoming data should be stored; and a control means to orchestrate all these elements.
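
The combination of a full/empty array and a least recently used array maps naturally onto a small victim-selection routine. A minimal Python sketch of that idea follows; the class and field names are invented, and the patent's hardware arrays are modelled as plain lists:

    class LruStore:
        # Software analogue (illustrative only) of the patent's full/empty
        # array and least recently used array.
        def __init__(self, num_slots):
            self.full = [False] * num_slots           # full/empty array
            self.lru_order = list(range(num_slots))   # least recently used first
            self.data = {}                            # slot -> (tag, value)

        def slot_for_fill(self):
            # Prefer an empty slot; otherwise evict the least recently used.
            for slot in self.lru_order:
                if not self.full[slot]:
                    return slot
            return self.lru_order[0]

        def touch(self, slot):
            # Mark a slot as the most recently used.
            self.lru_order.remove(slot)
            self.lru_order.append(slot)

        def fill(self, slot, tag, value):
            self.full[slot] = True
            self.data[slot] = (tag, value)
            self.touch(slot)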

110 citations


Patent
14 Nov 1980
TL;DR: In this paper, a command queue is maintained for each disk device to store commands waiting to be executed by that device; a priority value and a sequence number are assigned to each command as it is added to a queue, so that the highest priority queued command with the lowest sequence number is executed when the disk device corresponding to the queue becomes idle.
Abstract: One or more host processors issue commands to one or more storage control units which control data transfers between the host processors, a cache store and a plurality of disk devices. A command queue is maintained for each disk device to store commands waiting to be executed by the disk device. The cache store stores segments of data which have been read from, or are to be written to, disk space. In response to a command from a host processor, a corresponding command is added to one of the command queues. If the disk device is not busy and has no previously queued commands waiting to be executed, the storage control unit issues a seek command to the disk device. If there are previously queued commands waiting to be executed, or if the disk device is busy, the cache store is checked to determine if it contains a copy of the data from the disk space specified by the host processor command. If a copy of the data from the specified disk space is resident in the cache store, then a data transfer is initiated between the host processor and the cache store. A priority value and a sequence number are assigned to each command as it is added to a queue so that the highest priority queued command with the lowest sequence number is executed when the disk device corresponding to the queue becomes idle. The storage control unit may also create commands and place them on command queues for execution, these commands serving to transfer the least recently used segments of data from the cache store to the disks if those segments have been written to while in the cache store.
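
The scheduling rule (highest priority first, ties broken by the oldest sequence number) can be rendered with a heap. This is a hypothetical software sketch, not the storage control unit's actual mechanism; all names are invented:

    import heapq

    class DiskCommandQueue:
        # One queue per disk device. heapq is a min-heap, so priorities are
        # negated to pop the highest priority first; the monotonically
        # increasing sequence number breaks ties in favour of older commands.
        def __init__(self):
            self.heap = []
            self.next_seq = 0

        def add(self, priority, command):
            heapq.heappush(self.heap, (-priority, self.next_seq, command))
            self.next_seq += 1

        def next_command(self):
            # Called when the disk device corresponding to this queue goes idle.
            if self.heap:
                return heapq.heappop(self.heap)[2]
            return None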

100 citations


Journal ArticleDOI
Lee, Ghani, Heron
TL;DR: In this article, the authors describe a recovery cache that has been built for the PDP-11 family of machines, which is designed to be an "add-on" unit which requires no hardware alterations to the host CPU but intersects the bus between the CPU and the memory modules.
Abstract: Backward error recovery is an integral part of the recovery block scheme that has been advanced as a method for providing tolerance against faults in software; the recovery cache has been proposed as a mechanism for providing this error recovery capability. This correspondence describes a recovery cache that has been built for the PDP-11 family of machines. This recovery cache has been designed to be an "add-on" unit which requires no hardware alterations to the host CPU but which intersects the bus between the CPU and the memory modules. Specially designed hardware enables concurrent operation of the recovery cache and the host system, and aims to minimize the overheads imposed on the host.
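
In software terms, a recovery cache records the prior value of each location the first time it is written inside a recovery block, so that backward error recovery can restore the saved state. A minimal sketch of that idea, assuming a dictionary-backed memory (the real unit does this by intercepting the CPU-memory bus, which is not modelled here):

    class RecoveryCache:
        # Illustrative sketch: save each location's prior value on the first
        # write within a recovery block; restore all saved values on rollback.
        def __init__(self, memory):
            self.memory = memory   # dict: address -> value
            self.saved = {}        # prior values for the current block

        def begin_block(self):
            self.saved = {}

        def write(self, address, value):
            if address not in self.saved:              # first write only
                self.saved[address] = self.memory.get(address)
            self.memory[address] = value

        def rollback(self):
            for address, old_value in self.saved.items():
                self.memory[address] = old_value
            self.saved = {}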

66 citations


Journal ArticleDOI
TL;DR: Using a design for a 2.5-µm technology, 4 × 1K-bit cache chip with a nominal access time of about 500 ps as a basis, this paper shows for the first time how these components are structured and interfaced.
Abstract: Design work on components for nondestructive-readout cache memories in Josephson computer technology has been published previously. In this paper, taking as a basis a design for a 2.5-µm technology, 4 × 1K-bit cache chip with a nominal access time of about 500 ps, we show for the first time how these components are structured and interfaced. The cell, drivers, decoder, and sense bus are based on designs which were experimentally verified in a 5-µm technology, for which excellent agreement was found between computer simulations and measurements.

64 citations


Patent
26 Aug 1980
TL;DR: In this article, a cache memory organization using a miss information collection and manipulation system is presented to ensure the transparency of cache misses; it exploits the fact that the cache memory operates at a faster rate than central memory.
Abstract: A cache memory organization is shown using a miss information collection and manipulation system to ensure the transparency of cache misses. This system makes use of the fact that the cache memory has a faster rate of operation than central memory. The cache memory consists of a set-associative cache section (tag arrays and control, with a cache buffer), a central memory interface block (a memory requester and a memory receiver), and a miss information holding register section (a miss comparator and a status collection device). The miss information holding register section allows an almost continual stream of new data requests to be supplied to the cache memory at cache-hit-rate throughput.
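
These miss information holding registers are what later literature calls MSHRs: they record outstanding misses so the cache can keep servicing requests while central memory fetches are in flight. A toy sketch under that reading, with invented names:

    class MissInfoRegisters:
        # Illustrative sketch: track outstanding misses so the cache can keep
        # accepting requests while central memory is busy.
        def __init__(self, num_registers):
            self.num_registers = num_registers
            self.pending = {}   # block address -> requests awaiting that block

        def record_miss(self, block_addr, request):
            if block_addr in self.pending:
                self.pending[block_addr].append(request)
                return "merged"            # fetch already in flight
            if len(self.pending) == self.num_registers:
                return "stall"             # all holding registers in use
            self.pending[block_addr] = [request]
            return "fetch_issued"          # start a central memory read

        def fill_arrived(self, block_addr):
            # Central memory returned the block: complete all waiting requests.
            return self.pending.pop(block_addr, [])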

56 citations


Patent
28 Jan 1980
TL;DR: In this paper, a cached multiprocessor system is operated in an ordered pipeline timing sequence in which the time slot for use of the cache is made long enough to permit only one cache access.
Abstract: A cached multiprocessor system is operated in an ordered pipeline timing sequence in which the time slot for use of the cache is made long enough to permit only one cache access. Further, the time slot for data transfers to and from the processors succeeds the time slot for accessing the cache. The sequence is optimized for transactions that require only one cache access, e.g., read operations that hit the cache. Other transactions generally require a second cache access in order to update or allocate the cache; these transactions are entered into a queue, with order preserved, prior to permitting a second access to the cache. Also, a duplicate tag store is associated with the queue and maintained as a copy of the tag store in the cache. Whenever a cache tag is to be changed, the duplicate tag in the duplicate tag store is changed prior to changing the cache tag. The duplicate tag store thus always provides an accurate indication of the contents of the cache, and it is used to determine whether a second access to the cache for an update is necessary.
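
The tag-update discipline (duplicate tag changed first, cache tag second) is what keeps the duplicate store safe to consult when deciding whether a second cache access is needed. A hypothetical software rendering, with invented names and a direct-mapped layout assumed for brevity:

    class DualTagCache:
        def __init__(self, num_sets):
            self.tags = [None] * num_sets        # the cache's own tag store
            self.dup_tags = [None] * num_sets    # duplicate tag store

        def change_tag(self, set_index, new_tag):
            self.dup_tags[set_index] = new_tag   # duplicate changed first...
            self.tags[set_index] = new_tag       # ...then the cache tag

        def needs_second_access(self, set_index, tag):
            # Consult the duplicate tags without taking a cache time slot.
            return self.dup_tags[set_index] == tag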

55 citations


Patent
14 Nov 1980
TL;DR: In this paper, a segment descriptor table is maintained whose entries are linked by forward and backward age links; the descriptors for speculatively read segments are linked into the age chain at an age intermediate between the most recently used and the least recently used.
Abstract: When a processor issues a read or write command to read one or more words from a disk, a cache store is checked to see if a copy of the segment(s) containing the word(s) is present therein. If a copy of a segment is not present in the cache store, then it is moved from disk to the cache store and sent to the processor. A segment descriptor table is maintained, and the entries in the table are linked by forward and backward age links. When a segment is brought into the cache store from a disk because it contains the word or words specified by a command, its segment descriptor is linked into the age chain as the most recently used. Provision is made for reading into the cache store one or more segments in addition to the segment(s) containing the word(s) specified by a command, on speculation that the additional segment(s) contain words most likely to be accessed soon by the processor. The segment descriptors for these additional segments are linked into the age chain at an age intermediate between the most recently used and the least recently used.
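
The age chain is effectively an LRU list with one twist: speculatively read segments enter partway down rather than at the most recently used end. A minimal sketch, using a plain Python list in place of the linked descriptor table and picking the midpoint as the intermediate age (the exact age is a design choice, not specified here):

    class SegmentAgeChain:
        # Illustrative sketch: segment ids ordered by age, index 0 being the
        # most recently used, the last index the least recently used.
        def __init__(self):
            self.chain = []

        def insert_demand(self, segment):
            self.chain.insert(0, segment)                    # most recently used

        def insert_prefetch(self, segment):
            self.chain.insert(len(self.chain) // 2, segment) # intermediate age

        def evict_lru(self):
            return self.chain.pop() if self.chain else None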

43 citations


Patent
28 Jan 1980
TL;DR: In this article, the authors propose an ordered pipeline timing sequence in which the time slot for use of the cache is made long enough to permit only one cache access, and the time slot for data transfers to and from the processors succeeds the time slot for accessing the cache.
Abstract: A cached multiprocessor system operates in an ordered pipeline timing sequence in which the time slot for use of the cache is made long enough to permit only one cache access. Further, the time slot for data transfers to and from the processors succeeds the time slot for accessing the cache. The sequence is optimized for transactions that require only one cache access, e.g., read operations that hit the cache. Transactions that require two cache accesses must complete the second cache access during a later available pipeline sequence. A processor-indexed random access memory specifies when any given processor has a write operation outstanding for a location in the cache. This prevents the processor from reading the location before the write operation is completed.
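
The processor-indexed memory amounts to a per-processor interlock: a record of an outstanding write that blocks that processor from reading the same cache location until the write completes. A hypothetical sketch:

    class WriteInterlock:
        # Illustrative sketch: one outstanding-write record per processor.
        def __init__(self, num_processors):
            self.outstanding = [None] * num_processors   # processor -> location

        def note_write_queued(self, processor, location):
            self.outstanding[processor] = location

        def note_write_done(self, processor):
            self.outstanding[processor] = None

        def may_read(self, processor, location):
            # A processor may not read a location it has a pending write to.
            return self.outstanding[processor] != location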

42 citations


Patent
15 Apr 1980
TL;DR: In this article, a cache store (10) is provided in conjunction with a main store (2); each location of the cache has a changed bit (CH), and a list (4) is maintained of the addresses of those locations in the cache store which have been modified.
Abstract: A cache store (10) is provided in conjunction with a main store (2). Each location of the cache (10) has a changed bit (CH). A list (4) is maintained of the addresses of those locations in the cache store which have been modified. When the list (4) becomes full, the entry at the head of the list is read out and the corresponding data item is transferred from the cache (10) to the main store (2). Thus, there are never more than a limited number of items in the cache awaiting transfer. To clear the cache of all modified items, each entry in the list (4) is read out in turn and the corresponding items are transferred to the main store (2). The system reduces the amount of traffic between the cache (10) and the main store (2) and hence improves efficiency.
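
The effect is a write-back cache whose population of dirty items is bounded by the length of the list. A minimal sketch of the list discipline, assuming dictionary-backed cache and main stores (names invented):

    from collections import deque

    class BoundedDirtyList:
        # Illustrative sketch of the patent's list (4) of modified addresses.
        def __init__(self, limit, cache, main_store):
            self.limit = limit
            self.dirty = deque()        # addresses awaiting transfer
            self.cache = cache          # dict: address -> value
            self.main_store = main_store

        def mark_changed(self, address):
            if address not in self.dirty:
                if len(self.dirty) == self.limit:
                    # List full: write back the entry at the head of the list.
                    self.write_back(self.dirty.popleft())
                self.dirty.append(address)

        def write_back(self, address):
            self.main_store[address] = self.cache[address]

        def flush_all(self):
            # Clear the cache of all modified items.
            while self.dirty:
                self.write_back(self.dirty.popleft())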

30 citations


Patent
Anthony J. Capozzi
15 Jul 1980
TL;DR: In this paper, a two-level storage hierarchy for a data processing system, directly addressable by the main processor, is described; it combines a small, high-speed cache with a relatively slower, much larger main memory.
Abstract: A two-level storage hierarchy for a data processing system is directly addressable by the main processor. The two-level storage includes a high speed, small cache with a relatively slower, much larger main memory. The processor requests data from the cache and when the requested data is not resident in the cache, it is transferred from the main memory to the cache and then to the processor. The data transfer organization maximizes the system throughput and uses a single integrated control to accomplish data transfers.
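
The demand-fetch read path described above is compact enough to sketch directly; the following is an illustrative software model, not the patent's integrated control:

    class TwoLevelStore:
        # Illustrative sketch: a hit returns from the cache; a miss pulls the
        # data from main memory into the cache and then to the processor.
        def __init__(self, main_memory):
            self.main_memory = main_memory   # dict: address -> value
            self.cache = {}

        def read(self, address):
            if address in self.cache:
                return self.cache[address]        # cache hit
            value = self.main_memory[address]     # miss: go to main memory
            self.cache[address] = value           # fill the cache first
            return value                          # then deliver to the processor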

26 citations


Patent
31 Dec 1980
TL;DR: In this paper, a group of diagnostic control registers supplies signals for controlling tests within the cache memory that determine the operability of its individual elements.
Abstract: A cache memory wherein data words identified by odd address numbers are stored separately from data words identified by even address numbers. A group of diagnostic control registers supply signals for controlling the testing of the cache within the cache memory to determine the operability of the individual elements included in the cache memory.
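
The odd/even separation means the low-order address bit selects one of two independent stores, which is what lets the diagnostic logic exercise them separately. A toy sketch of that selection (structure invented for illustration):

    class SplitCache:
        # Illustrative sketch: even-addressed and odd-addressed words are kept
        # in separate banks, selected by the low-order address bit.
        def __init__(self):
            self.banks = ({}, {})    # banks[0]: even words, banks[1]: odd words

        def write(self, address, value):
            self.banks[address & 1][address] = value

        def read(self, address):
            return self.banks[address & 1].get(address)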

Journal ArticleDOI
28 May 1980
TL;DR: The applicability of this modelling methodology to the design of new storage systems, as well as to the improvement or comparison of existing systems, is described through an investigation of the efficiency of small cache memories.
Abstract: This paper proposes a modelling methodology combining simulation and analysis for computer performance evaluation and prediction. The methodology is based on a special workload model that is suitable for the generation and description of dynamic program behaviour. A description of this workload model is given in section 2. The applicability of this concept to the design of new storage systems, as well as to the improvement or comparison of existing systems, is described in section 3 through an investigation of the efficiency of small cache memories.
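
In the spirit of the section 3 investigation, a toy experiment can measure the hit ratio of a small, fully associative LRU cache under a synthetic reference string. The workload and parameters below are invented for illustration and are not taken from the paper:

    from collections import OrderedDict
    import random

    def hit_ratio(references, cache_size):
        # Fully associative LRU cache simulated over a reference string.
        cache, hits = OrderedDict(), 0
        for addr in references:
            if addr in cache:
                hits += 1
                cache.move_to_end(addr)          # mark most recently used
            else:
                if len(cache) == cache_size:
                    cache.popitem(last=False)    # evict least recently used
                cache[addr] = True
        return hits / len(references)

    # Synthetic workload with some locality: 80% of references hit a small
    # hot set, the rest are scattered (purely illustrative numbers).
    random.seed(1)
    refs = [random.randint(0, 9) if random.random() < 0.8
            else random.randint(0, 999) for _ in range(10000)]
    for size in (4, 16, 64):
        print(size, round(hit_ratio(refs, size), 3))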