
Showing papers on "Cache coloring published in 1979"


Book Chapter
01 Jan 1979
TL;DR: A spectrum of ways to exploit more registers in an architecture is discussed, ranging from programmer-managed cache (large numbers of explicitly-addressed registers, as in the Cray-1) to better schemes for automatically-managed cache.
Abstract: The advent of VLSI technology will allow the fabrication of complete computers plus memory on one chip. There will be an architectural challenge in the very near future to adjust to this trend by designing balanced architectures using hundreds or thousands of registers or other small blocks of memory. As the relative price of memory (vs. random logic) drops even further, the need for register-heavy architectures will become even more pronounced. In this paper, we discuss a spectrum of ways to exploit more registers in an architecture, ranging from programmer-managed cache (large numbers of explicitly-addressed registers, as in the Cray-1) to better schemes for automatically-managed cache. A combination of compiler and hardware techniques will be needed to maximize effective register use while minimizing transmission bandwidth between various memories. Discussed techniques include merging activation records at compile time, predictive cache loading, and "dribble-back" cache unloading.

45 citations
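
The "dribble-back" unloading the abstract mentions suggests writing dirty registers back to memory gradually rather than in one bulk store. Below is a minimal Python sketch of that idea under our own assumptions; the class name, the one-writeback-per-cycle rate, and the flat memory model are illustrative, not the paper's design.

```python
# Hypothetical sketch of "dribble-back" cache unloading: dirty registers
# trickle back to memory a few per cycle, spreading writeback bandwidth
# over time instead of stalling for a bulk store on a context switch.

class DribbleBackRegisterFile:
    def __init__(self, num_regs, writes_per_cycle=1):
        self.regs = [0] * num_regs
        self.dirty = set()                  # registers not yet written back
        self.writes_per_cycle = writes_per_cycle

    def write_reg(self, i, value):
        self.regs[i] = value
        self.dirty.add(i)

    def cycle(self, memory):
        # Background unloading: a bounded number of writebacks per cycle.
        for _ in range(min(self.writes_per_cycle, len(self.dirty))):
            i = self.dirty.pop()
            memory[i] = self.regs[i]        # memory[i]: register's home slot
```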


Patent
22 Jan 1979
TL;DR: A cache unit includes a cache store organized into a number of levels to provide fast access to instructions and data words, as discussed by the authors; replacement circuits are included which, during normal operation, assign cache locations sequentially for replacing old information with new information.
Abstract: A cache unit includes a cache store organized into a number of levels to provide fast access to instructions and data words. Directory circuits, associated with the cache store, contain address information identifying those instructions and data words stored in the cache store. The cache unit has at least one instruction register for storing address and level signals for specifying the location of the next instruction to be fetched and transferred to the processing unit. Replacement circuits are included which, during normal operation, assign cache locations sequentially for replacing old information with new information. The cache unit further includes apparatus operative in response to a first predetermined type of command specifying the fetching of data words to set an indicator flag to a predetermined state. The apparatus conditions the replacement circuits in response to each subsequent predetermined type of command to bypass storage of the subsequently fetched data words when the indicator flag is in the predetermined state, thereby preventing the replacement of extensive numbers of data and instruction words already stored in cache during the execution of the instruction.

44 citations
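
The bypass mechanism described above amounts to a non-allocating fetch mode: a flag, set by a special command, stops subsequently fetched data words from displacing cache contents. The Python sketch below is our reading of that behavior, not the patent's circuitry; the FIFO-style sequential replacement and the method names are assumptions.

```python
# Sketch of a cache whose replacement is suppressed while a bypass flag is
# set, so a long data transfer cannot evict resident instructions and data.

class BypassingCache:
    def __init__(self, size):
        self.size = size
        self.store = {}             # address -> word, insertion-ordered
        self.bypass = False         # the indicator flag

    def fetch(self, addr, memory):
        if addr in self.store:
            return self.store[addr]             # hit
        word = memory[addr]
        if not self.bypass:
            # Normal operation: replace locations sequentially.
            if len(self.store) >= self.size:
                self.store.pop(next(iter(self.store)))
            self.store[addr] = word
        # With the flag set, the word is returned without being cached.
        return word
```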


Patent
28 Sep 1979
TL;DR: In this article, data are transferred between the main memory and a set of dedicated cache memories, and each requestor has access to its own cache memory as well as to the cache memories of the other requestors.
Abstract: A computer system with a main memory and a plurality of requestors (A, B), each of which has its own dedicated high-speed cache memory. Data are transferred between the main memory and the cache memories. Each requestor has access to its own cache memory as well as to the cache memories of the other requestors. Each cache memory comprises a data buffer (104) for storing data and a tag buffer (100) for storing addresses of locations in the data buffer (104). If the same data are held in two or more cache memories, then upon writing into its own dedicated cache memory the requestor is connected to the other cache memories containing the corresponding data for purposes of invalidating these data. The data buffer (104) has a substantially longer cycle time than the tag buffer (100); after the tag buffer has been addressed by its own dedicated requestor (A), a selector (96) is switched so that another requestor (B) which has written data can address the tag buffer (100) and invalidate any corresponding entry which may be present.

6 citations
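
The invalidation scheme in this patent is an early form of write-invalidate coherence. As a minimal sketch, under simplifications of our own (no tag/data buffer timing, dictionary caches, synchronous invalidation), the protocol looks like this:

```python
# Write-invalidate sketch: a requestor's write removes stale copies of the
# written location from the other requestors' caches.

class CoherentCache:
    def __init__(self):
        self.lines = {}             # address -> data
        self.peers = []             # the other requestors' caches

    def read(self, addr, memory):
        if addr not in self.lines:
            self.lines[addr] = memory[addr]     # miss: fill from main memory
        return self.lines[addr]

    def write(self, addr, value, memory):
        self.lines[addr] = value
        memory[addr] = value
        for peer in self.peers:                 # invalidate stale copies
            peer.lines.pop(addr, None)

memory = {0: 1}
a, b = CoherentCache(), CoherentCache()
a.peers, b.peers = [b], [a]
b.read(0, memory)           # requestor B caches address 0
a.write(0, 42, memory)      # A's write invalidates B's copy
assert 0 not in b.lines
```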



Proceedings ArticleDOI
23 Apr 1979
TL;DR: An LSI bit-slice chip set is described which should reduce both the cost and the complexity of the cache controller, and which will enable a memory designer to construct a wide variety of cache structures with a minimum number of components and interconnections.
Abstract: Cache storage is a proven memory speedup technique in large mainframe computers. Two of the main difficulties associated with the use of this concept in small machines are the high relative cost and complexity of the cache controller. An LSI bit-slice chip set is described which should reduce both controller cost and complexity. The set will enable a memory designer to construct a wide variety of cache structures with a minimum number of components and interconnections. Design parameters are based on the results of extensive simulation. Particular emphasis is placed on the need for design flexibility. The chip set consists of three devices -- an address bit-slice, a data bit-slice and a central control unit. Circuit design has been completed to a gate level based on an ECL/EFL implementation. The proposed structure will accommodate cache sizes up to 2K words with access times as short as 25 ns.

3 citations
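
The address bit-slice in such a controller essentially splits an address into tag, index, and word-offset fields and compares the stored tag. The sketch below shows that split for a direct-mapped organization sized at the stated 2K-word maximum; the line size and field widths are our assumptions, not figures from the paper.

```python
# Direct-mapped lookup: split the address, index a line, compare the tag.

LINE_WORDS = 4         # assumed words per cache line
NUM_LINES = 512        # 512 lines x 4 words = 2K words, the stated maximum

def lookup(cache, address):
    offset = address % LINE_WORDS
    index = (address // LINE_WORDS) % NUM_LINES
    tag = address // (LINE_WORDS * NUM_LINES)
    valid, stored_tag, words = cache[index]
    if valid and stored_tag == tag:
        return True, words[offset]      # hit
    return False, None                  # miss: fetch from main memory
```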


Proceedings ArticleDOI
09 Apr 1979
TL;DR: A view of efficiency is proposed which tries to account for how much resource is used in the actual problem solution and how much in the control of the instruction stream.
Abstract: A view of efficiency is proposed which tries to account for how much resource is used in the actual problem solution and how much in the control of the instruction stream. Analyses are performed to determine the effects of two architectural modifications -- cache memory and memory mapped registers -- on the efficiency of a simple list merging process.
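
To make that notion of efficiency concrete, one can instrument a merge and count operations spent on the problem itself against operations spent steering the instruction stream. The split below is an illustration of the idea in Python, not the authors' model; which operations count as "work" versus "control" is our choice.

```python
# Merge two sorted lists while tallying problem-solving operations
# (comparisons, moves) separately from loop-control operations.

def merge(a, b):
    work = control = 0
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        control += 1                    # loop test: instruction-stream control
        work += 2                       # one comparison plus one move
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:]); out.extend(b[j:])
    return out, work, control

merged, work, control = merge([1, 3, 5], [2, 4, 6])
print(merged, work / (work + control))  # fraction of effort on the problem
```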

Proceedings ArticleDOI
09 Apr 1979
TL;DR: The cache is a general buffer for addressable main memory; three organizations -- direct mapping, fully associative, and set associative -- are described using a single notation, and two caches are examined in detail.
Abstract: The cache is a general buffer for addressable main memory. The incentives for using a cache are discussed along with the objectives of the cache designer. Three cache organizations -- direct mapping, fully associative, and set associative -- are described using a single notation, and two caches are examined in detail. Interactions between the replacement algorithm and the degree of associativity are discussed as well as other possible approaches to cache design.
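
The single notation the paper refers to can be rendered as an S-set, W-way cache: W = 1 gives direct mapping, S = 1 gives full associativity, and anything between is set associative. The sketch below is one such rendering with LRU replacement; the exact notation, and details such as write handling, are our simplifications rather than the paper's.

```python
# S sets x W ways covers all three organizations in one structure.

class SetAssociativeCache:
    def __init__(self, num_sets, ways):
        self.num_sets = num_sets
        self.ways = ways
        self.sets = [[] for _ in range(num_sets)]  # tags per set, MRU first

    def access(self, block_addr):
        index = block_addr % self.num_sets
        tag = block_addr // self.num_sets
        tags = self.sets[index]
        if tag in tags:
            tags.remove(tag)
            tags.insert(0, tag)         # refresh LRU order
            return True                 # hit
        if len(tags) >= self.ways:
            tags.pop()                  # evict least recently used
        tags.insert(0, tag)
        return False                    # miss
```

With `SetAssociativeCache(num_sets=1, ways=N)` the same code behaves as a fully associative cache, which is one way to study the interaction between replacement algorithm and degree of associativity that the paper discusses.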

Proceedings ArticleDOI
09 Apr 1979
TL;DR: This paper investigates algorithms for the management of intelligent cache systems that are more sophisticated than the simplistic paging algorithms, such as LRU, so far implemented in hardware.
Abstract: Cache memory is now widely accepted as a cost effective way of improving system performance. Significant reductions in the average data access time have been achieved using very simplistic paging algorithms, such as LRU, implemented in hardware. In this paper we wish to investigate more sophisticated algorithms for the management of intelligent cache systems.
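
The LRU baseline the paper starts from can be stated compactly in software (hardware keeps an ordering per set; this models a single fully associative set). The sketch is ours, for reference against the more sophisticated policies the paper pursues:

```python
# LRU replacement over a fixed number of lines, using an ordered mapping.

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()          # address -> data, oldest first

    def access(self, addr, fetch):
        if addr in self.lines:
            self.lines.move_to_end(addr)    # mark most recently used
            return self.lines[addr]
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)  # evict least recently used
        self.lines[addr] = fetch(addr)      # fetch: callback to main memory
        return self.lines[addr]
```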