
Showing papers on "Cache coloring published in 1980"


Patent
24 Jan 1980
TL;DR: In this paper, an apparatus is disclosed for providing faster memory access for a CPU by utilizing a least recently used scheme for selecting a storage location in which to store data retrieved from main memory upon a cache miss.
Abstract: An apparatus is disclosed herein for providing faster memory access for a CPU by utilizing a least recently used scheme for selecting a storage location in which to store data retrieved from main memory upon a cache miss. A duplicate directory arrangement is also disclosed for selective clearing of the cache in multiprocessor systems where data in a cache becomes obsolete by virtue of a change made to the corresponding data in main memory by another processor. The advantage of higher overall speed for CPU operations is achieved because of the higher hit ratio provided by the disclosed arrangement. In the preferred embodiment, the cache utilizes: a cache store for storing data; primary and duplicate directories for identifying the data stored in the cache; a full/empty array to mark the status of the storage locations; a least recently used array to indicate where incoming data should be stored; and a control means to orchestrate all these elements.
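The interplay of the full/empty array and the least recently used array can be pictured with a minimal sketch. This is not the patent's circuitry; it is a software model in Python with invented names, assuming a single cache set where an empty slot is preferred over evicting the LRU occupant, and omitting the duplicate directory used for multiprocessor clearing.

```python
class LruCacheSet:
    def __init__(self, num_slots):
        self.tags = [None] * num_slots     # primary directory entries
        self.full = [False] * num_slots    # full/empty array
        self.lru = list(range(num_slots))  # least recently used first

    def _touch(self, slot):
        self.lru.remove(slot)
        self.lru.append(slot)              # most recently used last

    def lookup(self, tag):
        for slot, t in enumerate(self.tags):
            if self.full[slot] and t == tag:
                self._touch(slot)
                return slot                # hit
        return None                        # miss

    def fill(self, tag):
        # Prefer an empty slot; otherwise evict the least recently used.
        slot = next((s for s in self.lru if not self.full[s]), self.lru[0])
        self.tags[slot], self.full[slot] = tag, True
        self._touch(slot)
        return slot
```

On a miss, fill() picks the slot exactly as the full/empty and LRU arrays dictate, which is what raises the hit ratio relative to arbitrary replacement.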

110 citations


Patent
05 May 1980
TL;DR: The addressable cache memory feature overcomes the latency delay that inherently occurs in seeking the beginning of a region to be accessed on the disk drive mass storage in a multiprocessor system, as discussed by the authors.
Abstract: In a multiprocessor system, a controllable cache store interface to a shared disk memory employs a plurality of storage partitions whose access is interleaved in a time domain multiplexed manner on a common bus with the shared disk to enable high speed sharing of the disk storage by all of the processors. The communication between each processor and its corresponding cache memory partition can be overlapped with each other and with accesses between the cache memory and the commonly shared disk memory. The addressable cache memory feature overcomes the latency delay which inherently occurs in seeking the beginning of a region to be accessed on the disk drive mass storage.
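A rough way to picture the time-domain multiplexed sharing is a fixed rotation of bus slots among the cache partitions. The sketch below is an assumption-laden Python model (the partition queues, slot counts, and the run_time_slots helper are all invented), not the disclosed controller.

```python
from itertools import cycle

def run_time_slots(partitions, num_slots):
    """Grant the shared disk bus to one cache partition per time slot."""
    turn = cycle(range(len(partitions)))
    for slot in range(num_slots):
        owner = next(turn)
        if partitions[owner]:                 # queued disk request?
            request = partitions[owner].pop(0)
            print(f"slot {slot}: partition {owner} transfers {request!r}")

# Three processors, each with its own partition's request queue.
run_time_slots([["read A"], ["read B", "write C"], []], num_slots=6)
```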

95 citations


Patent
14 Nov 1980
TL;DR: A cache/disk subsystem includes a storage control unit, a relatively low capacity high speed cache store, and a higher capacity slower memory such as a plurality of disk drive devices.
Abstract: A cache/disk subsystem includes a storage control unit, a relatively low capacity high speed cache store, and a higher capacity slower memory such as a plurality of disk drive devices. The storage control unit controls the subsystem to transfer from the cache store to the disk drive devices segments of data which have been modified while resident in the cache store, and ensures that space will be available in the cache store for new data if a particular operation requires that new data be transferred from a disk to the cache store for use. The subsystem may include plural storage control units and any of them may control the transfers of segments of data to the disks.
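The space-assurance step can be sketched as follows, assuming the cache is modeled as an ordered map from segment address to (data, modified) pairs, oldest first; make_room and the plain-dict disk are illustrative inventions, not the patent's mechanism.

```python
from collections import OrderedDict

def make_room(cache, disk, capacity):
    """cache: OrderedDict addr -> (data, modified), oldest entries first.
    Destage modified segments, then evict, until a slot is free."""
    while len(cache) >= capacity:
        addr, (data, modified) = cache.popitem(last=False)  # oldest
        if modified:
            disk[addr] = data    # write modified segment back to disk

disk = {}
cache = OrderedDict({10: ("old", True), 11: ("x", False)})
make_room(cache, disk, capacity=2)   # frees a slot; disk now holds addr 10
```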

73 citations


Journal ArticleDOI
TL;DR: This paper presents a design for a 2.5-µm technology, 4 × 1K-bit cache chip with a nominal access time of about 500 ps and shows for the first time how these components are structured and interfaced.
Abstract: Design work on components for nondestructive read out cache memories in Josephson computer technology has been published previously. In this paper, taking a design for a 2.5-µm technology, 4 × 1K-bit cache chip with a nominal access time of about 500 ps as a basis, we show for the first time how these components are structured and interfaced. The cell, drivers, decoder, and sense bus are based on designs which were experimentally verified in a 5-µm technology, for which excellent agreement was found between computer simulations and measurements.

64 citations


Patent
17 Nov 1980
TL;DR: In this article, an information processing unit and storage system is described, comprising at least one low speed, high capacity main memory having a relatively long access time and including a plurality of data pages stored therein, and at least one high speed, low capacity cache memory having a relatively short access time and adapted to store a predetermined plurality of subsets of the information stored in said main memory data pages.
Abstract: An information processing unit and storage system comprising at least one low speed, high capacity main memory having a relatively long access time and including a plurality of data pages stored therein, and at least one high speed, low capacity Cache memory means having a relatively short access time and adapted to store a predetermined plurality of subsets of the information stored in said main memory data pages. Instruction decoding means are located in the communication channel between the main Memory and the Cache and are operative to at least partially decode instructions being transferred from main Memory to Cache. The at least partial decoding comprises expanding the instruction format from that utilized in the main Memory storage to one more readily executable by the processor prior to storing said instructions in the Cache. Said decoding means includes a logic circuit means for determining whether a given instruction is susceptible of partial decoding and means for determining that a particular instruction has already been partially decoded (i.e., after a first accessing of said instruction by the processor from Cache). In the preferred embodiment the assumption is made that the system utilizes separate Cache storage means for data and instructions respectively, whereby only instructions being transferred from main Memory to Cache pass through said decoding means.
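A minimal sketch of the predecode path might look like the following, assuming instructions are modeled as dictionaries and that PREDECODABLE, to_cache, and expand are invented stand-ins for the patent's logic circuit means and format expansion.

```python
PREDECODABLE = {"ADD", "LOAD"}   # hypothetical opcodes that qualify

def expand(instr):
    # Stand-in for format expansion into a form the processor executes
    # more readily (e.g., pre-extracted register and immediate fields).
    return {"rd": instr["raw"] >> 4, "imm": instr["raw"] & 0xF}

def to_cache(instr):
    """Called on the main-memory-to-cache path for instructions only."""
    if instr.get("decoded"):             # already expanded on a prior pass
        return instr
    if instr["op"] in PREDECODABLE:      # eligibility test
        instr = dict(instr, fields=expand(instr), decoded=True)
    return instr

line = to_cache({"op": "ADD", "raw": 0x5A})   # expanded once, flagged
line = to_cache(line)                          # second pass is a no-op
```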

58 citations


Patent
26 Aug 1980
TL;DR: In this article, a cache memory organization is presented that uses a miss information collection and manipulation system to ensure the transparency of cache misses, exploiting the fact that the cache memory operates at a faster rate than central memory.
Abstract: A cache memory organization is shown using a miss information collection and manipulation system to ensure the transparency of cache misses. This system makes use of the fact that the cache memory has a faster rate of operation than central memory. The cache memory consists of a set-associative cache section comprising tag arrays and control with a cache buffer; a central memory interface block comprising a memory requester and memory receiver; and a miss information holding register section comprising a miss comparator and status collection device. The miss information holding register section allows an almost continual stream of new requests for data to be supplied to the cache memory at the cache hit rate throughput.
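The miss information holding registers can be sketched as a small table keyed by block address: a primary miss allocates a register and issues a fetch, while a secondary miss to the same block merges with the fetch already in flight, so the cache keeps accepting requests instead of stalling. All names below are illustrative assumptions, not the patent's structure.

```python
class MissHoldingRegisters:
    def __init__(self, num_registers):
        self.capacity = num_registers
        self.pending = {}                 # block address -> waiting requests

    def on_miss(self, block, request):
        if block in self.pending:         # miss comparator: secondary miss
            self.pending[block].append(request)   # merge with in-flight fetch
            return "merged"
        if len(self.pending) == self.capacity:
            return "stall"                # all holding registers busy
        self.pending[block] = [request]   # primary miss: start a fetch
        return "fetch_issued"

    def on_fill(self, block):
        return self.pending.pop(block, [])  # requests to replay on return

mshr = MissHoldingRegisters(num_registers=4)
mshr.on_miss(0x40, "load r1")     # primary miss -> fetch issued
mshr.on_miss(0x40, "load r2")     # secondary miss -> merged, no stall
```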

56 citations


Patent
28 Jan 1980
TL;DR: In this paper, a cached multiprocessor system operated in an ordered pipeline timing sequence in which the time slot for use of the cache is made long enough to permit only one cache access.
Abstract: A cached multiprocessor system operated in an ordered pipeline timing sequence in which the time slot for use of the cache is made long enough to permit only one cache access. Further, the time slot for data transfers to and from the processors succeeds the time slot for accessing the cache. The sequence is optimized for transactions that require only one cache access, e.g., read operations that hit the cacgenerally require a second cache access in order to update or allocate the cache. These transactions are entered into a queue with order preserved prior to permitting a second access to the cache. Also, a duplicate tag store is associated with the queue and maintained as a copy of the tag store in the cache. Whenever a cache tag is to be changed, a duplicate tag in the duplicate tag store is changed prior to changing the cache tag. The duplicate tag store thus always provides an accurate indication of the contents of the cache. The duplicate tag store is used to determine whether a second access to the cache for an update is necessary.
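The tag-update ordering is the crux: the duplicate tag is always written before the cache tag. A hedged sketch with invented names and list-based tag stores; the queue-side check is an assumed form of how the duplicate store could decide whether a second cache pass is needed.

```python
def change_tag(duplicate_tags, cache_tags, index, new_tag):
    duplicate_tags[index] = new_tag   # step 1: duplicate store first
    cache_tags[index] = new_tag       # step 2: then the cache tag itself

def needs_update_access(duplicate_tags, index, tag):
    """Queue-side check (assumed form): consult the duplicate store to
    decide whether a second cache access for update/allocate is needed."""
    return duplicate_tags[index] != tag

dup_tags, cache_tags = [None] * 4, [None] * 4
change_tag(dup_tags, cache_tags, index=2, new_tag=0x7A)
```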

55 citations


Patent
10 Dec 1980
TL;DR: In this article, a memory control apparatus includes a pair of equal capacity cache memories, one for storing a portion of the instructions located in the main memory and the other for storing a portion of the operand data located in the main memory.
Abstract: A memory control apparatus includes a pair of equal capacity cache memories, one for storing a portion of the instructions located in the main memory and one for storing a portion of the operand data located in the main memory. Each cache memory has an address array and a data array which operate in response to an instruction address from an instruction fetch unit or an operand data address from an operand fetch unit or an operand execution unit. If the instruction or data called for is not in the cache memories, it is taken from the main memory and a copy is placed in an available storage location in the cache memory.
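A toy model of the split arrangement, assuming dictionary-backed caches and a routing function; access, icache, and dcache are invented names for the sketch, which omits capacity limits and replacement.

```python
def access(kind, addr, icache, dcache, main_memory):
    cache = icache if kind == "instruction" else dcache
    if addr not in cache:                # miss: copy up from main memory
        cache[addr] = main_memory[addr]
    return cache[addr]

main_memory = {0: "ADD", 100: 42}
icache, dcache = {}, {}
access("instruction", 0, icache, dcache, main_memory)   # fills icache only
access("operand", 100, icache, dcache, main_memory)     # fills dcache only
```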

47 citations


Patent
14 Nov 1980
TL;DR: In this paper, a segment descriptor table is maintained whose entries are linked by forward and backward age links; the descriptors of speculatively read-ahead segments are linked into the age chain at an age intermediate between the most recently used and the least recently used.
Abstract: When a processor issues a read or write command to read one or more words from a disk, a cache store is checked to see if a copy of the segment(s) containing the word(s) is present therein. If a copy of a segment is not present in the cache store, it is moved from disk to the cache store and sent to the processor. A segment descriptor table is maintained and the entries in the table are linked by forward and backward age links. When a segment is brought into the cache store from a disk because it contains the word or words specified by a command, its segment descriptor is linked into the age chain as the most recently used. Provision is made for reading into the cache store one or more segments in addition to the segment(s) containing the word(s) specified by a command, on speculation that the additional segment(s) contain words likely to be accessed soon by the processor. The segment descriptors for these additional segments are linked into the age chain at an age intermediate between the most recently used and the least recently used.
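The age-chain placement can be sketched with a double-ended queue: demanded segments enter at the most-recently-used end, while speculative read-ahead segments are linked in partway along the chain. The midpoint used below is an assumption; the patent specifies only an intermediate age.

```python
from collections import deque

age_chain = deque()          # left = least recently used, right = most

def link_demand(segment):
    age_chain.append(segment)                        # enters as MRU

def link_speculative(segment):
    age_chain.insert(len(age_chain) // 2, segment)   # intermediate age

link_demand("seg-A")
link_demand("seg-B")
link_speculative("seg-C")    # evicted sooner than seg-B if never touched
```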

43 citations


Patent
28 Jan 1980
TL;DR: In this article, the authors propose an ordered pipeline timing sequence in which the time slot for use of the cache is made long enough to permit only one cache access, and the time slots for data transfers to and from the processors succeed the time slots for accessing the cache.
Abstract: A cached multiprocessor system operates in an ordered pipeline timing sequence in which the time slot for use of the cache is made long enough to permit only one cache access. Further, the time slot for data transfers to and from the processors succeeds the time slot for accessing the cache. The sequence is optimized for transactions that require only one cache access, e.g., read operations that hit the cache. Transactions that require two cache accesses must complete the second cache access during a later available pipeline sequence. A processor indexed random access memory specifies when any given processor has a write operation outstanding for a location in the cache. This prevents the processor from reading the location before the write operation is completed.
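The read-before-write interlock reduces to a membership test on a table of outstanding writes. A minimal sketch under that assumption, with all names invented:

```python
outstanding = {}                 # (processor, cache_index) -> True

def start_write(proc, index):
    outstanding[(proc, index)] = True

def complete_write(proc, index):
    outstanding.pop((proc, index), None)

def may_read(proc, index):
    return (proc, index) not in outstanding   # else the read must wait

start_write(0, 5)
assert not may_read(0, 5)        # blocked until complete_write(0, 5)
complete_write(0, 5)
assert may_read(0, 5)
```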

42 citations


Patent
14 Nov 1980
TL;DR: In this paper, the storage control unit does not write a copy of transferred data into the cache store if the length of the transfer exceeds a first threshold length, or if it exceeds a second threshold length and begins on a segment boundary and comprises an integral number of segments.
Abstract: In a data processing system having a host processor, a cache store for storing segments of data, a bulk memory and a storage control unit for controlling transfers between the processor, cache store and bulk memory, the storage control unit normally responds to a read or write command from the host processor to control the transfer of data. If a copy of the data transferred is not resident in the cache store then a copy is written therein by the storage control unit. If the length of a data transfer exceeds a first threshold length then the storage control unit does not write a copy of the data into the cache store. If the length of a data transfer exceeds a second threshold length, and the transfer begins on a segment boundary and comprises an integral number of segments, then the storage control unit does not write a copy of the data into the cache store. The writing into the cache store is transparent to the host processor. The use of a transfer threshold prevents data from being entered into the cache store when it is not likely to be used again soon. Two thresholds are provided because the data transferred in long transfers of an integral number of segments is even less likely to be used again soon than the data transferred in long transfers that do not begin on a segment boundary or comprise an integral number of segments.
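The two-threshold bypass rule can be written as a small predicate. The segment size and threshold values below are invented for illustration; only the structure of the test follows the abstract, with the lower threshold applying to segment-aligned, whole-segment transfers (the least likely to be re-referenced soon).

```python
SEGMENT = 1024
FIRST_THRESHOLD = 16 * SEGMENT    # any transfer longer than this bypasses
SECOND_THRESHOLD = 8 * SEGMENT    # lower bar for aligned whole segments

def cache_this_transfer(start, length):
    """True if a copy of the transferred data should enter the cache."""
    aligned = start % SEGMENT == 0 and length % SEGMENT == 0
    if length > FIRST_THRESHOLD:
        return False
    if aligned and length > SECOND_THRESHOLD:
        return False
    return True

cache_this_transfer(0, 4 * SEGMENT)        # True: short transfer
cache_this_transfer(0, 10 * SEGMENT)       # False: aligned and long
cache_this_transfer(100, 10 * SEGMENT)     # True: long but unaligned
```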

Patent
31 Dec 1980
TL;DR: A cache memory for use in a data processing system wherein data words identified by even address numbers are stored separately from data words identified by odd address numbers, to enable the simultaneous transfer of two successively addressed data words to or from the cache memory by the transferring of a data word associated with an odd address number and a data word associated with an even address number.
Abstract: A cache memory for use in a data processing system wherein data words identified by even address numbers are stored separately from data words associated with odd address numbers to enable the simultaneous transfer of two successively addressed data words to or from the cache memory by the transferring of a data word associated with an odd address number and a data word associated with an even address number.
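A toy model of the odd/even split, assuming two dictionary-backed banks and word-granular addresses; write_pair and read_pair are invented helpers showing why an aligned pair can move in one cycle.

```python
even_bank, odd_bank = {}, {}

def write_pair(addr, word0, word1):
    """addr must be even; stores addr and addr+1 together."""
    assert addr % 2 == 0
    even_bank[addr] = word0       # the two stores target different banks,
    odd_bank[addr + 1] = word1    # so hardware can perform them in parallel

def read_pair(addr):
    assert addr % 2 == 0
    return even_bank[addr], odd_bank[addr + 1]

write_pair(8, "lo", "hi")
read_pair(8)                      # -> ("lo", "hi") in a single access
```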

Patent
Wing N. Toy1
25 Sep 1980
TL;DR: In this paper, separate address translation buffers (ATBs) are used for the interrupt stack memory and the cache memory to perform the virtual address to real address translations required to access the common data memory.
Abstract: The processor's interrupt stack memory and cache memory share a common data memory and are accessed using virtual addresses. A separate address translation buffer (ATB) is used for both the interrupt stack memory and cache memory to perform the virtual address to real address translations which are required to access the common data memory. The cache ATB and a cache controller provide the addressing to access cache data words in the common memory; whereas the interrupt stack ATB alone provides the addressing necessary to access the interrupt stack data words in the common memory.
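The dual-ATB arrangement amounts to selecting a translation table by access type before indexing the common memory. A hedged sketch with invented page mappings and an assumed 4 KB page size, which the patent does not specify:

```python
cache_atb = {0x1000: 0x8000}        # virtual page -> real page
stack_atb = {0x2000: 0x9000}

def translate(kind, vaddr):
    """Pick the ATB by access type, then form the real address."""
    atb = cache_atb if kind == "cache" else stack_atb
    page, offset = vaddr & ~0xFFF, vaddr & 0xFFF
    return atb[page] | offset        # real address into the common memory

translate("cache", 0x1234)           # -> 0x8234 via the cache ATB
translate("stack", 0x2010)           # -> 0x9010 via the interrupt stack ATB
```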

Patent
15 Apr 1980
TL;DR: In this article, a cache store (10) is provided in conjunction with a main store (2); each location of the cache store has a changed bit (CH), and a list (4) is maintained of the addresses of those locations in the cache store which have been modified.
Abstract: A cache store (10) is provided in conjunction with a main store (2). Each location of the cache (10) has a changed bit (CH). A list (4) is maintained of the addresses of those locations in the cache store which have been modified. When the list (4) becomes full, the entry at the head of the list is read out and the corresponding data item is transferred from the cache (10) to the main store (2). Thus, there are never more than a limited number of items in the cache awaiting transfer. To clear the cache of all modified items, each entry in the list (4) is read out in turn and the corresponding items are transferred to the main store (2). The system reduces the amount of traffic between the cache (10) and the main store (2) and hence improves efficiency.
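A minimal sketch of the bounded list, assuming a deque of modified addresses and dictionary-backed cache and main stores; LIST_CAPACITY and the helper names are invented.

```python
from collections import deque

LIST_CAPACITY = 4
dirty_list = deque()                  # addresses of modified cache items

def note_modified(addr, cache, main_store):
    """Record a modified location; if the list is full, transfer the
    item at the head of the list to the main store first."""
    if addr in dirty_list:
        return
    if len(dirty_list) == LIST_CAPACITY:
        oldest = dirty_list.popleft()        # entry at the head
        main_store[oldest] = cache[oldest]   # transfer that item now
    dirty_list.append(addr)

def flush_all(cache, main_store):
    """Clear the cache of all modified items, one list entry at a time."""
    while dirty_list:
        addr = dirty_list.popleft()
        main_store[addr] = cache[addr]
```

Because at most LIST_CAPACITY items ever await transfer, write-back traffic between cache and main store stays bounded, which is the efficiency claim above.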

Patent
Anthony J. Capozzi1
15 Jul 1980
TL;DR: In this paper, a two-level storage hierarchy for a data processing system, directly addressable by the main processor, combines a high speed, small cache with a relatively slower, much larger main memory.
Abstract: A two-level storage hierarchy for a data processing system is directly addressable by the main processor. The two-level storage includes a high speed, small cache with a relatively slower, much larger main memory. The processor requests data from the cache and when the requested data is not resident in the cache, it is transferred from the main memory to the cache and then to the processor. The data transfer organization maximizes the system throughput and uses a single integrated control to accomplish data transfers.

Patent
31 Dec 1980
TL;DR: In this paper, a group of diagnostic control registers supply signals for controlling the testing of the cache memory to determine the operability of its individual elements.
Abstract: A cache memory wherein data words identified by odd address numbers are stored separately from data words identified by even address numbers. A group of diagnostic control registers supply signals for controlling the testing of the cache memory to determine the operability of the individual elements included in the cache memory.

Patent
03 Apr 1980
TL;DR: In this article, a data processing system includes a main memory shared by a plurality of central processor units (CPUs) which are also coupled in cascade in a closed circular path, each CPU has a cache buffer memory and two sets of transfer registers for receiving and transmitting cancel request signals which identify cache data which is no longer valid.
Abstract: This data processing system includes a main memory which is shared by a plurality of central processor units (CPUs) which are also coupled in cascade in a closed circular path. Each CPU has a cache buffer memory and two sets of transfer registers for receiving and transmitting cancel request signals which identify cache data which is no longer valid. Each CPU's receiving register subsystem includes circuitry for invalidating cache buffer data which has been updated or rewritten in main memory by another CPU in the loop. Each CPU's transmitting register subsystem includes circuitry for inhibiting the transmittal of a cancel request signal if the next CPU in the circle is the same one which originated the particular cache invalidation signal. Circuitry is also provided for propagating a cancel request signal around the loop in opposite directions simultaneously.
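One direction of the ring propagation can be sketched as below; the simultaneous opposite-direction propagation is omitted. The list-of-dicts model and propagate_cancel are illustrative assumptions, not the patent's register subsystems.

```python
def propagate_cancel(cpus, origin, addr):
    """cpus: per-CPU cache contents as dicts, arranged in a ring.
    Forward the cancel request hop by hop; the transmit inhibit stops
    it before it returns to the CPU that originated it."""
    n = len(cpus)
    cpu = (origin + 1) % n
    while cpu != origin:                  # inhibit: never re-enter origin
        cpus[cpu].pop(addr, None)         # invalidate stale copy, if any
        cpu = (cpu + 1) % n

cpus = [{0x10: "old"}, {0x10: "old"}, {0x10: "new"}]
propagate_cancel(cpus, origin=2, addr=0x10)   # CPUs 0 and 1 drop 0x10
```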

Patent
10 Dec 1980
TL;DR: In this article, the hit ratio for data fetches is increased by a cache memory (26') incorporating a data buffer (34') that stores blocks of data of varying size, together with a set associative memory serving as an index (32') that stores the main memory (14) block addresses associated with the data blocks in the buffer (34').
Abstract: Increasing the speed and hit ratio for data fetches in data processing systems is essential. Increased data fetch speed normally results from using a fast cache memory with a slower main memory. The hit ratio for such fetches can be increased by using cache memory (26') which incorporates data buffer (34') that stores blocks of data that are varied in size and a set associative memory as index (32') which stores block addresses for main memory (14), which addresses are associated with the data blocks stored in buffer (34'). The block sizes are varied by selectively inhibiting address bits provided to an input (40') of the index (32') by address inhibit circuit (54) in response to information stored in block size register (54). Such block size information is also provided to a fetch generate counter (60') and a fetch return counter (62') for controlling the number of words transferred as a block from main memory (14) to cache memory (26').
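The bit-inhibiting idea can be approximated by masking low-order address bits before they reach the index, so one index entry spans a larger block. The word width and shift arithmetic below are invented; only the inhibit-bits-to-grow-blocks relationship follows the abstract.

```python
WORD_BITS = 2                      # assume 4 words per minimum block

def block_address(addr, block_size_words):
    """Drop (inhibit) extra low-order bits as the block size grows."""
    inhibit = (block_size_words // 4).bit_length() - 1   # extra bits masked
    return addr >> (WORD_BITS + inhibit)

block_address(0x1F, 4)     # -> 0x7: small blocks, more index bits used
block_address(0x1F, 16)    # -> 0x1: low bits inhibited, one tag per 16 words
```

The number of words fetched per block would then track the same block-size setting, as the abstract's fetch generate and fetch return counters do.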