
Showing papers on "Cache published in 1979"


Patent
19 Dec 1979
TL;DR: In this article, the index is a set associative memory and bits provided to an address input of the index are selectively inhibited by an address inhibit circuit when the size of the data blocks in the data buffer is to be varied.
Abstract: A cache memory has a data buffer for storing blocks of data from a main memory and an index for storing main memory addresses associated with the data blocks in the data buffer. The size of the blocks of data stored in the data buffer can be varied in order to increase the "hit ratio" of the cache memory. The index is a set associative memory and bits provided to an address input of the index are selectively inhibited by an address inhibit circuit when the size of the data blocks in the data buffer is to be varied. A block size register stores block size information that is provided to the address inhibit circuit. The block size information is also provided to a fetch generate counter and a fetch return counter that control the number of words transferred as a block from the main memory to the cache memory.
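A minimal Python sketch of the idea, in which addressing the index with fewer bits as blocks grow stands in for the hardware address inhibit circuit; the sizes and names are assumptions, not taken from the patent:

INDEX_ENTRIES = 256          # number of index (set) entries (assumption)

def index_address(word_addr: int, log2_block_words: int) -> int:
    """Map a main-memory word address to an index-array address.

    Inhibiting low-order index bits for larger blocks is modeled by
    shifting more word-offset bits out of the address first.
    """
    return (word_addr >> log2_block_words) % INDEX_ENTRIES

# With 4-word blocks, addresses 0..7 span two index entries; with
# 8-word blocks the same addresses share one entry, so nearby words
# hit in the same block more often.
print([index_address(a, 2) for a in range(8)])   # [0, 0, 0, 0, 1, 1, 1, 1]
print([index_address(a, 3) for a in range(8)])   # [0, 0, 0, 0, 0, 0, 0, 0]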

75 citations


01 Jan 1979
TL;DR: This correspondence describes a recovery cache that has been built for the PDP-11 family of machines, designed to be an "add-on" unit which requires no hardware alterations to the host CPU but which intersects the bus between the CPU and the memory modules.

66 citations


Patent
Anthony Joseph Capozzi
26 Jan 1979
TL;DR: In this paper, a data processing system including at least one channel and a multilevel store has the capability of performing a channel write to main memory where the data to be written into main memory crosses a double word boundary in a partial write store.
Abstract: A data processing system including at least one channel and a multilevel store has the capability of performing a channel write to main memory where the data to be written into main memory crosses a double word boundary in a partial write store. The partial write store is accomplished by a merge operation which takes place in the memory system in a manner such that the main processor, channel and a cache store are freed up for further operation prior to the completion of the write to main memory.
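A rough Python sketch of such a merge, assuming 8-byte double words and byte-granular channel data; the names and layout are illustrative, not the patented mechanism:

def merge_partial_write(old_dwords: bytes, new_data: bytes,
                        start_byte: int) -> bytes:
    """Merge new_data into a pair of adjacent 8-byte double words."""
    assert len(old_dwords) == 16              # two adjacent double words
    merged = bytearray(old_dwords)
    # Channel bytes overwrite only the selected byte positions; the rest
    # of the double words read from memory are preserved.
    merged[start_byte:start_byte + len(new_data)] = new_data
    return bytes(merged)

# A 6-byte transfer starting at byte 5 crosses the double-word boundary
# at byte 8, so the merge spans both double words.
print(merge_partial_write(bytes(16), b"\xff" * 6, 5).hex())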

58 citations


Book Chapter
01 Jan 1979
TL;DR: A spectrum of ways to exploit more registers in an architecture is discussed, ranging from programmer-managed cache (large numbers of explicitly-addressed registers, as in the Cray-1) to better schemes for automatically-managed cache.
Abstract: The advent of VLSI technology will allow the fabrication of complete computers plus memory on one chip. There will be an architectural challenge in the very near future to adjust to this trend by designing balanced architectures using hundreds or thousands of registers or other small blocks of memory. As the relative price of memory (vs. random logic) drops even further, the need for register-heavy architectures will become even more pronounced. In this paper, we discuss a spectrum of ways to exploit more registers in an architecture, ranging from programmer-managed cache (large numbers of explicitly-addressed registers, as in the Cray-1) to better schemes for automatically-managed cache. A combination of compiler and hardware techniques will be needed to maximize effective register use while minimizing transmission bandwidth between various memories. Discussed techniques include merging activation records at compile time, predictive cache loading, and "dribble-back" cache unloading.

45 citations


Patent
22 Jan 1979
TL;DR: A cache unit includes a cache store organized into a number of levels to provide fast access to instructions and data words, as discussed by the authors, and replacement circuits are included which, during normal operation, assign cache locations sequentially for replacing old information with new information.
Abstract: A cache unit includes a cache store organized into a number of levels to provide fast access to instructions and data words. Directory circuits, associated with the cache store, contain address information identifying those instructions and data words stored in the cache store. The cache unit has at least one instruction register for storing address and level signals for specifying the location of the next instruction to be fetched and transferred to the processing unit. Replacement circuits are included which, during normal operation, assign cache locations sequentially for replacing old information with new information. The cache unit further includes apparatus operative in response to a first predetermined type of command specifying the fetching of data words to set an indicator flag to a predetermined state. The apparatus conditions the replacement circuits in response to each subsequent command of that type to bypass storage of the subsequently fetched data words while the indicator flag is in the predetermined state, preventing the replacement of extensive numbers of data and instruction words already stored in cache during the execution of the instruction.
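A minimal Python sketch of the bypass behavior, assuming a simple round-robin replacement pointer; the names are hypothetical and the patent's command encodings are not reproduced:

class SequentialReplacer:
    """Round-robin cache-slot assignment with a bypass indicator flag."""

    def __init__(self, n_locations: int):
        self.next_slot = 0
        self.n = n_locations
        self.bypass_flag = False

    def set_bypass(self) -> None:
        # Set in response to the first command of the predetermined type.
        self.bypass_flag = True

    def clear_bypass(self) -> None:
        # Cleared when the instruction completes (assumption).
        self.bypass_flag = False

    def allocate(self) -> int | None:
        # Subsequent fetches bypass cache allocation while the flag is
        # set, so resident instructions and data words are not displaced.
        if self.bypass_flag:
            return None
        slot = self.next_slot
        self.next_slot = (self.next_slot + 1) % self.n
        return slot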

44 citations


Patent
24 Sep 1979
TL;DR: In this article, the main memory is divided into sets, and a high speed cache or buffer memory is provided which holds four blocks of words from each set of main memory. When a new block is to be written into the cache, one of the old blocks must be displaced to make way for it; this is done in accordance with an algorithm determining which of the blocks was least recently used.
Abstract: The invention relates to computer memory systems in which the main memory is divided into sets, and a high speed cache or buffer memory is provided which holds four blocks of words from each set of the main memory. When a new block is to be written into the cache memory, one of the old blocks must be displaced to make way for it, and this is done in accordance with an algorithm determining which of the blocks was least recently used. In the prior art this algorithm has required the storage in the high speed cache memory of two digits per block, that is to say, eight digits for the four blocks of each set, and this takes up a substantial part of the expensive high speed storage. The present invention reduces this requirement to only three digits for each set of four blocks. The update algorithm is shown in Figure 2. The blocks are regarded as being in two pairs, AB and CD; a first digit, digit 2, is set to 0 or 1 according to the pair in which the accessed block lies; the second digit, digit 1, is set to 0 or 1 if the block lies in one pair, CD, to indicate which block of that pair was accessed; and similarly the third digit, digit 0, is set to 0 or 1 when a block of the other pair, AB, is accessed. A degrade memory can be provided in which a digit is set to indicate a faulty block in the buffer. This digit is supplied to the comparator and to the age algorithm circuits of the buffer memory to prevent the faulty block being accessed and to force the age digits to a value such that the block will never be displaced into, or rewritten from, main memory.
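The three-digit scheme amounts to a tree pseudo-LRU. A minimal Python sketch, following the digit assignments quoted above (digit 2 for the pair, digit 1 within CD, digit 0 within AB); the class and the 0/1 encoding are illustrative assumptions:

class PseudoLRUSet:
    """One set of four blocks A, B, C, D (0..3) tracked with three digits."""

    def __init__(self):
        self.bits = [0, 0, 0]  # [digit 0: within AB, digit 1: within CD, digit 2: pair]

    def touch(self, block: int) -> None:
        if block < 2:                 # pair AB accessed
            self.bits[2] = 0          # assumption: 0 means "AB used last"
            self.bits[0] = block      # which of A/B was accessed
        else:                         # pair CD accessed
            self.bits[2] = 1
            self.bits[1] = block - 2  # which of C/D was accessed

    def victim(self) -> int:
        # Displace from the pair not used last, and within that pair the
        # block not used last.
        if self.bits[2] == 0:         # AB used last -> displace from CD
            return 2 + (1 - self.bits[1])
        return 1 - self.bits[0]       # CD used last -> displace from AB

s = PseudoLRUSet()
for b in (0, 2, 1):   # access A, then C, then B
    s.touch(b)
print(s.victim())     # 3 (D): pair CD used less recently, and D less recently than C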

14 citations


Patent
28 Sep 1979
TL;DR: In this article, data are transferred between the main memory and the cache memories, and each requestor has access to its own cache memory as well as to the cache memories of the other requestors.
Abstract: Computer system with a main memory and a plurality of requestors (A, B), each of which has its own dedicated high speed cache memory. Data are transferred between the main memory and the cache memories. Each requestor has access to its own cache memory as well as to the cache memories of the other requestors. Each cache memory comprises a data buffer (104) for storing data and a tag buffer (100) for storing addresses of locations in the data buffer (104). If the same data are held in two or more cache memories, then upon writing into its own dedicated cache memory the requestor is connected to the other cache memories containing the corresponding data for purposes of invalidating those data. The data buffer (104) has a substantially longer cycle time than the tag buffer (100); after the tag buffer has been addressed by its own dedicated requestor (A), a selector (96) is switched so that another requestor (B) which has written data can address the tag buffer (100) and invalidate any corresponding entry which may be present.
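A rough Python sketch of the cross-invalidation, with dictionaries standing in for the tag buffer (100) and data buffer (104); the structure is an assumption, not the patented circuit:

class DedicatedCache:
    def __init__(self):
        self.tags = {}    # address -> data-buffer location (tag buffer 100)
        self.data = {}    # location -> word (data buffer 104)

def write(owner: DedicatedCache, others: list[DedicatedCache],
          addr: int, word: int) -> None:
    # Write into the requestor's own dedicated cache ...
    loc = owner.tags.setdefault(addr, len(owner.data))
    owner.data[loc] = word
    # ... then invalidate any corresponding entry held by the other
    # requestors' caches, so no stale copy survives.
    for cache in others:
        cache.tags.pop(addr, None)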

6 citations


DOI
01 Jan 1979
TL;DR: A Liquefaction Potential Map for Cache Valley, Utah is presented in this article.
Abstract: A Liquefaction Potential Map for Cache Valley, Utah

6 citations


Proceedings ArticleDOI
A. Hotta, K. Ogiue, K. Mitsusada, M. Hinai, K. Yamaguchi, M. Inadachi
01 Jan 1979
Abstract: This paper will assess a 3072b RAM with 470 gates developed for dynamic address translation and cache control, and a standard 1K × 1b RAM with an access time of 5.5ns used for buffer storage.

4 citations


Patent
19 Jul 1979
TL;DR: In this paper, memory control is attained with a simple configuration by adding hardware that makes the data in a cache memory of a multiprocessor system agree with the data in the main memory unit.
Abstract: PURPOSE: To attain memory control with a simple configuration by adding hardware that makes the data in the cache memories of a multiprocessor system agree with the data in the main memory units. CONSTITUTION: The system is equipped with main memory units 7 and 8, which store data in block units, and CPUs 1 and 2, which possess a cache memory section 11 storing block-sized copies of that data. It is further provided with input-output transfer controllers 3 and 4, which transfer data from an input device to the other units (including CPUs 1 and 2) and from any other unit to an output device, and with main memory controllers 5 and 6, which transfer the write data, write address, and requesting-device identification as one group to main memory units 7 and 8 in response to a write request from either of CPUs 1 and 2 or units 3 and 4. The block in cache memory section 11 indicated by the write address supplied from main memory units 7 and 8 is then invalidated.

4 citations


Proceedings ArticleDOI
23 Apr 1979
TL;DR: An LSI bit-slice chip set is described which should reduce both the cost and the complexity of the cache controller and enable a memory designer to construct a wide variety of cache structures with a minimum number of components and interconnections.
Abstract: Cache storage is a proven memory speedup technique in large mainframe computers. Two of the main difficulties associated with the use of this concept in small machines are the high relative cost and complexity of the cache controller. An LSI bit-slice chip set is described which should reduce both controller cost and complexity. The set will enable a memory designer to construct a wide variety of cache structures with a minimum number of components and interconnections. Design parameters are based on the results of extensive simulation. Particular emphasis is placed on the need for design flexibility. The chip set consists of three devices: an address bit-slice, a data bit-slice, and a central control unit. Circuit design has been completed to a gate level based on an ECL/EFL implementation. The proposed structure will accommodate cache sizes up to 2K words with access times as short as 25 ns.



01 Jun 1979
TL;DR: This thesis applies state-of-the-art techniques for the methodical design of secure operating systems to a distributed, multi-microprocessor environment and designs a family of distributed operating systems that can provide the power of yesterday's large computer in a microprocessor environment.
Abstract: This thesis applies state-of-the-art techniques for the methodical design of secure operating systems to a distributed, multi-microprocessor environment. Explicit process structure and utilization of virtual environments are the fundamental concepts that form a basis for the design presented. The primary design techniques utilized in the design are segmentation, distributed operating system, security kernel, multiprocessing, 'cache' memory strategy, and multiprogramming. The resulting design is for a family of distributed operating systems that can provide the power of yesterday's large computer in a microprocessor environment. Security, configuration independence, and a loop-free structure are the primary characteristics of the design. The design, although hardware independent, was formulated with the Zilog Z8000 or similar microprocessor in mind. (Author)

01 Jan 1979
TL;DR: Greenwood et al. as discussed by the authors developed a liquefaction opportunity map for Cache Valley, Utah, based on a procedure developed by Youd and Perkins (1977); it is proposed to be combined with a map delineating liquefaction-susceptible soils to produce a liquefaction potential map.
Abstract: Development of a Liquefaction Opportunity Map for Cache Valley, Utah, by Richard J. Greenwood, Master of Science, Utah State University, 1979. Major Professor: Dr. Loren Runar Anderson. Department: Civil and Environmental Engineering. A liquefaction opportunity map was developed for Cache Valley, Utah. The study was the initial phase to determine the potential for liquefaction in Cache Valley. The method used in this study to develop the liquefaction opportunity map was based on a procedure developed by Youd and Perkins (1977). This opportunity map is proposed to be combined with a map delineating liquefaction-susceptible soils to produce a liquefaction potential map. The liquefaction susceptibility map is being developed in a companion study. The liquefaction potential map will assist in the evaluation of earthquake response in general and microzonation in particular. It may also be used by contractors, consultants, governmental organizations, etc., for preliminary planning and decision making to determine the suitability of a given site.



Patent
23 Oct 1979
TL;DR: In this article, the authors propose to speed up processing by allowing a unit without a CM to be assembled into the system on the same footing as a unit with a CM, by keeping the data in the cache memories (CM) of a plurality of CPUs in agreement with the main memory unit (MM).
Abstract: PURPOSE: To speed up processing by allowing a unit without a CM to be assembled into the system on the same footing as a unit with a CM, through agreement of the data in the cache memories (CM) of a plurality of CPUs with the data in the main memory unit (MM). CONSTITUTION: The system is constituted of CPUs 1 and 2, which have CM sections 3 and 4, an input-output transfer control unit IOC 7, and a main memory unit MM 8. CM sections 3 and 4 hold the data in blocks and, for each block, an address array storing the address information of the held data and a valid-indication bit. MM 8 has address arrays 9 and 10 of the same constitution as the CM address arrays in CPUs 1 and 2, and it is accessed commonly by CPUs 1 and 2 and IOC 7. When MM 8 receives a write request from a CPU or the IOC, it searches the address arrays corresponding to the CPUs other than the requesting unit, sends information to any CPU whose block data have been rewritten, and rewrites the corresponding CM block so that the contents of the CPU caches and MM agree.


Patent
31 Jul 1979
TL;DR: In this article, a data processing system includes a central processor subsystem, a main memory subsystem, and a cache subsystem, all coupled in common to a system bus; during initialization, the transfer of information from the main memory subsystem to the cache subsystem starts from the lowest-order address locations in main memory and continues through successive address locations until the cache subsystem is fully loaded.
Abstract: A data processing system includes a central processor subsystem, a main memory subsystem and a cache subsystem, all coupled in common to a system bus. During the overall system initialization process, apparatus in the cache subsystem effects the transfer of information from the main memory subsystem to the cache subsystem to load all address locations of the cache subsystem. The transfer of information from the main memory subsystem to the cache subsystem starts from the lowest order address locations in main memory and continues from successive address locations until the cache subsystem is fully loaded. This assures that the cache subsystem contains valid information during normal data processing.
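A minimal Python sketch of the preload, assuming main memory is exposed as a word-read function; the names are illustrative:

def preload_cache(cache_size_words: int, read_main_memory) -> list:
    """Fill every cache location from main memory address 0 upward."""
    return [read_main_memory(addr) for addr in range(cache_size_words)]

# After this runs, every cache location holds valid main-memory data, so
# normal data processing never encounters an unloaded cache entry.
cache = preload_cache(1024, lambda addr: addr * 2)  # stand-in memory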

Proceedings ArticleDOI
09 Apr 1979
TL;DR: The cache is a general buffer for addressable main memory; direct mapping, fully associative, and set associative are described using a single notation, and two caches are examined in detail.
Abstract: The cache is a general buffer for addressable main memory. The incentives for using a cache are discussed along with the objectives of the cache designer. Three cache organizations (direct mapping, fully associative, and set associative) are described using a single notation, and two caches are examined in detail. Interactions between the replacement algorithm and the degree of associativity are discussed as well as other possible approaches to cache design.
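A short Python sketch of the single-notation view, under the usual convention that a direct-mapped cache is a set-associative cache with one way and a fully associative cache has a single set; the sizes below are arbitrary assumptions:

def split_address(addr: int, words_per_block: int, n_sets: int):
    """Return (tag, set index, word offset) for a word address."""
    offset = addr % words_per_block
    index = (addr // words_per_block) % n_sets
    tag = addr // (words_per_block * n_sets)
    return tag, index, offset

# Set associative: many sets, several ways per set.
# Direct mapped: many sets, 1 way.  Fully associative: 1 set, many ways.
print(split_address(0x1234, words_per_block=4, n_sets=64))  # (18, 13, 0)
print(split_address(0x1234, words_per_block=4, n_sets=1))   # fully associative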


Proceedings ArticleDOI
09 Apr 1979
TL;DR: This paper investigates algorithms for the management of intelligent cache systems that are more sophisticated than the very simplistic paging algorithms, such as LRU, currently implemented in hardware.
Abstract: Cache memory is now widely accepted as a cost effective way of improving system performance. Significant reductions in the average data access time have been achieved using very simplistic paging algorithms, such as LRU, implemented in hardware. In this paper we wish to investigate more sophisticated algorithms for the management of intelligent cache systems.
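For reference, the LRU baseline is compact to model. A small Python sketch using an ordered dictionary as the recency stack; this is illustrative only, not the paper's more sophisticated algorithms:

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.lines = OrderedDict()           # address -> data, oldest first

    def access(self, addr: int, data=None):
        if addr in self.lines:               # hit: move to most-recent end
            self.lines.move_to_end(addr)
            return self.lines[addr]
        if len(self.lines) >= self.capacity: # miss: evict least recent
            self.lines.popitem(last=False)
        self.lines[addr] = data
        return data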