
Showing papers on "Cache invalidation" published in 1979


Patent
22 Jan 1979
TL;DR: A cache unit includes a cache store organized into a number of levels to provide fast access to instructions and data words; replacement circuits, during normal operation, assign cache locations sequentially for replacing old information with new information.
Abstract: A cache unit includes a cache store organized into a number of levels to provide fast access to instructions and data words. Directory circuits, associated with the cache store, contain address information identifying those instructions and data words stored in the cache store. The cache unit has at least one instruction register for storing address and level signals that specify the location of the next instruction to be fetched and transferred to the processing unit. Replacement circuits are included which, during normal operation, assign cache locations sequentially for replacing old information with new information. The cache unit further includes apparatus operative in response to a first predetermined type of command specifying the fetching of data words to set an indicator flag to a predetermined state. The apparatus conditions the replacement circuits, in response to each subsequent command of that type, to bypass storage of the subsequently fetched data words while the indicator flag is in the predetermined state, preventing the replacement of large numbers of data and instruction words already stored in the cache during execution of the instruction.

44 citations
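To make the bypass mechanism concrete, here is a minimal C sketch of a cache whose line allocation is skipped while an indicator flag is set, so data fetched under the special command is returned to the processor without displacing resident lines. The direct-mapped organization, the sizes, and names such as bypass_flag, cache_fetch, and memory_read_word are illustrative assumptions for this sketch, not taken from the patent, which describes a multi-level store with sequential replacement circuits.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Minimal model of a cache whose replacement logic can be bypassed.
 * When bypass_flag is set (by a "predetermined type of command"),
 * fetched data words are returned to the processor but not allocated
 * into the cache, so resident instructions and data are not displaced. */

#define NUM_LINES  256
#define LINE_SHIFT 2                    /* 4 words per line */

struct cache_line {
    bool     valid;
    uint32_t tag;
    uint32_t words[1 << LINE_SHIFT];
};

struct cache {
    struct cache_line lines[NUM_LINES];
    bool bypass_flag;                   /* the patent's indicator flag */
};

/* Hypothetical backing store: stand-in for main memory in this sketch. */
static uint32_t memory_read_word(uint32_t addr)
{
    return addr ^ 0xDEADBEEFu;          /* arbitrary deterministic pattern */
}

static uint32_t cache_fetch(struct cache *c, uint32_t addr)
{
    uint32_t line_addr = addr >> LINE_SHIFT;
    uint32_t index     = line_addr % NUM_LINES;
    uint32_t tag       = line_addr / NUM_LINES;
    uint32_t offset    = addr & ((1u << LINE_SHIFT) - 1);
    struct cache_line *l = &c->lines[index];

    if (l->valid && l->tag == tag)      /* hit: serve from the cache store */
        return l->words[offset];

    if (c->bypass_flag)                 /* miss with bypass set: do not */
        return memory_read_word(addr);  /* replace the resident line    */

    /* normal miss: allocate the line, replacing the old contents */
    l->valid = true;
    l->tag   = tag;
    for (uint32_t i = 0; i < (1u << LINE_SHIFT); i++)
        l->words[i] = memory_read_word((line_addr << LINE_SHIFT) + i);
    return l->words[offset];
}

int main(void)
{
    static struct cache c = { 0 };
    printf("%08x\n", cache_fetch(&c, 0x100));   /* normal miss: line allocated    */
    c.bypass_flag = true;                       /* "predetermined command" seen   */
    printf("%08x\n", cache_fetch(&c, 0x200));   /* miss served without allocation */
    return 0;
}
```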



Proceedings ArticleDOI
23 Apr 1979
TL;DR: An LSI bit-slice chip set is described which should reduce both the cost and complexity of the cache controller and enable a memory designer to construct a wide variety of cache structures with a minimum number of components and interconnections.
Abstract: Cache storage is a proven memory speedup technique in large mainframe computers. Two of the main difficulties associated with the use of this concept in small machines are the high relative cost and complexity of the cache controller. An LSI bit-slice chip set is described which should reduce both controller cost and complexity. The set will enable a memory designer to construct a wide variety of cache structures with a minimum number of components and interconnections. Design parameters are based on the results of extensive simulation. Particular emphasis is placed on the need for design flexibility. The chip set consists of three devices: an address bit-slice, a data bit-slice and a central control unit. Circuit design has been completed to gate level based on an ECL/EFL implementation. The proposed structure will accommodate cache sizes up to 2K words with access times as short as 25 ns.

3 citations
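As an illustration of the bit-slice idea only (not the actual chip set's interfaces), the following C sketch models an address path built from cascaded slices: each slice stores and compares a few bits of the tag, and the central control unit combines the per-slice match signals into a single hit. The slice width, slice count, and all names here are assumptions made for the example.

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of how cascaded address bit-slices could form a full tag compare.
 * Each slice holds and compares only SLICE_BITS of the tag; the central
 * control unit ANDs the per-slice match outputs into a hit signal. */

#define SLICE_BITS 4
#define NUM_SLICES 4                         /* 4 x 4 = a 16-bit tag, for example */
#define SLICE_MASK ((1u << SLICE_BITS) - 1)

struct address_slice {
    uint16_t stored_tag_bits;                /* this slice's portion of the tag */
};

bool slice_compare(const struct address_slice *s, uint16_t addr_bits)
{
    return s->stored_tag_bits == (addr_bits & SLICE_MASK);
}

/* Central control: a hit requires every slice to report a match. */
bool controller_hit(const struct address_slice slices[NUM_SLICES], uint32_t tag)
{
    bool hit = true;
    for (int i = 0; i < NUM_SLICES; i++)
        hit = hit && slice_compare(&slices[i],
                                   (uint16_t)((tag >> (i * SLICE_BITS)) & SLICE_MASK));
    return hit;
}
```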


Proceedings ArticleDOI
09 Apr 1979
TL;DR: The cache is a general buffer for addressable main memory; three cache organizations (direct mapping, fully associative, and set associative) are described using a single notation, and two caches are examined in detail.
Abstract: The cache is a general buffer for addressable main memory. The incentives for using a cache are discussed along with the objectives of the cache designer. Three cache organizations (direct mapping, fully associative, and set associative) are described using a single notation, and two caches are examined in detail. Interactions between the replacement algorithm and the degree of associativity are discussed, as well as other possible approaches to cache design.
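A single notation of the kind the paper refers to can be expressed as a cache of S sets with E ways per set: direct mapping is E = 1, a fully associative cache is S = 1, and set associativity covers everything in between. The C sketch below decodes an address into tag, set index, and block offset under that parameterization; the field names and lookup routine are illustrative assumptions, not the paper's own notation.

```c
#include <stdbool.h>
#include <stdint.h>

/* One parameterization for all three organizations: S sets of E lines each.
 * Direct mapped:     E = 1
 * Fully associative: S = 1
 * Set associative:   anything in between                                  */

struct line { bool valid; uint64_t tag; };

struct cache {
    unsigned s_bits;        /* log2(S): number of set-index bits    */
    unsigned b_bits;        /* log2(B): number of block-offset bits */
    unsigned ways;          /* E: lines per set                     */
    struct line *lines;     /* storage for S * E lines              */
};

bool lookup(const struct cache *c, uint64_t addr)
{
    uint64_t set = (addr >> c->b_bits) & ((1ull << c->s_bits) - 1);
    uint64_t tag = addr >> (c->b_bits + c->s_bits);
    const struct line *set_base = &c->lines[set * c->ways];

    for (unsigned w = 0; w < c->ways; w++)      /* search every way in the set */
        if (set_base[w].valid && set_base[w].tag == tag)
            return true;                        /* hit */
    return false;                               /* miss */
}
```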

Proceedings ArticleDOI
09 Apr 1979
TL;DR: Significant reductions in average data access time have been achieved with very simplistic paging algorithms implemented in hardware; this paper investigates more sophisticated algorithms for the management of intelligent cache systems.
Abstract: Cache memory is now widely accepted as a cost-effective way of improving system performance. Significant reductions in the average data access time have been achieved using very simplistic paging algorithms, such as LRU, implemented in hardware. In this paper we wish to investigate more sophisticated algorithms for the management of intelligent cache systems.
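For reference, the "simplistic" hardware policy the abstract mentions, LRU within a set, can be sketched in C as follows; the age-counter scheme, set size, and names are assumptions made for illustration, not the paper's proposal.

```c
#include <stdbool.h>
#include <stdint.h>

/* Per-set LRU with age counters: a hit resets the accessed line's age and
 * ages the lines that were more recent; a miss evicts the oldest line. */

#define WAYS 4

struct way { bool valid; uint64_t tag; unsigned age; };

/* Returns the way index that now holds `tag`, evicting the LRU way on a miss. */
unsigned access_set(struct way set[WAYS], uint64_t tag)
{
    /* Hit check: reset the hit line's age, age the lines that were newer. */
    for (unsigned w = 0; w < WAYS; w++) {
        if (set[w].valid && set[w].tag == tag) {
            for (unsigned v = 0; v < WAYS; v++)
                if (set[v].valid && set[v].age < set[w].age)
                    set[v].age++;
            set[w].age = 0;
            return w;
        }
    }

    /* Miss: prefer an invalid way, otherwise take the oldest valid one. */
    unsigned victim = 0;
    for (unsigned w = 0; w < WAYS; w++) {
        if (!set[w].valid) { victim = w; break; }
        if (set[w].age > set[victim].age)
            victim = w;
    }
    for (unsigned v = 0; v < WAYS; v++)
        if (set[v].valid)
            set[v].age++;
    set[victim] = (struct way){ .valid = true, .tag = tag, .age = 0 };
    return victim;
}
```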