
Showing papers on "Cache pollution published in 1974"


Patent
17 Jan 1974
TL;DR: In this article, the cache store is operated in parallel to the request for data information from the main memory store, and a successful retrieval from the cache store aborts the retrieval from the main memory.
Abstract: A cache store located in the processor provides a fast-access look-aside store for blocks of data information previously fetched from the main memory store. The request to the cache store is operated in parallel to the request for data information from the main memory store. A successful retrieval from the cache store aborts the retrieval from the main memory. Block loading of the cache store is performed autonomously from the processor operations. The cache store is cleared on cycles such as interrupts which require the processor to shift program execution. In the store-aside configuration of the processor, the backing-store cycle of a store-operand cycle is overlapped with the cache store checking operations, causing the cycles to be performed simultaneously.
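The look-aside behavior described in this abstract can be sketched as follows. This is an illustrative model, not the patent's hardware: class and method names are assumptions, and the parallel memory fetch is represented by a status string rather than an actual concurrent operation.

```python
# Sketch of a look-aside cache store: the cache lookup and the main-memory
# fetch are issued together, and a cache hit aborts the slower retrieval.
class LookAsideCache:
    def __init__(self, block_size=4):
        self.block_size = block_size
        self.blocks = {}          # block base address -> list of words

    def clear(self):
        # Cleared on cycles such as interrupts that shift program execution.
        self.blocks.clear()

    def read(self, addr, main_memory):
        block_addr = addr - (addr % self.block_size)
        block = self.blocks.get(block_addr)
        if block is not None:
            # Cache hit: the parallel main-memory retrieval is aborted.
            return block[addr - block_addr], "memory_fetch_aborted"
        # Miss: the memory fetch completes; the whole block is loaded
        # into the cache autonomously from processor operations.
        block = [main_memory[block_addr + i] for i in range(self.block_size)]
        self.blocks[block_addr] = block
        return block[addr - block_addr], "memory_fetch_completed"
```

A first read of an address misses and block-loads the cache; a second read to the same block hits and aborts the memory fetch; clearing the cache (as on an interrupt) forces the next read back to memory.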

77 citations


Journal ArticleDOI
TL;DR: An investigation of the various cache schemes that are practical for a minicomputer has been found to provide considerable insight into cache organization.
Abstract: An investigation of the various cache schemes that are practical for a minicomputer has been found to provide considerable insight into cache organization. Simulations are used to obtain data on the performance and sensitivity of organizational parameters of various writeback and lookahead schemes. Hardware considerations in the construction of the actual cache-minicomputer are also noted and a simple cost/performance analysis is presented.

63 citations


Patent
Reaman Paul Niguette
01 Apr 1974
TL;DR: In this paper, random access storage facilities for the CPU of a computer are cascaded in that a facility of relatively fast access speed holds a subset of the information held in a facility of lower speed.
Abstract: Random access storage facilities for the CPU of a computer are cascaded in that a facility of relatively fast access speed holds a subset of the information held in a facility of lower speed. Memory read requests are applied sequentially to these storage facilities, beginning always with the one of highest access speed, while requests satisfied by a lower speed facility lead to storage of that information in all facilities of higher access speed. Write requests are made only to a facility of lower speed, with algorithmic updating of the facility of lowest speed, while the ones of higher speed are updated only on subsequent read requests for the same location. The several facilities which can be updated make storage space available on the basis of usage. The specific example explaining the invention has a conventional random access memory, a CPU-associated cache, and a buffer interposed between cache and memory, with an access speed ratio of 9:3:1, and cache and buffer sizes leading respectively to less than 50% cache access misses and less than 10% buffer misses; the average access time is better than half the access time of the memory alone.
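The abstract's claim can be checked with back-of-envelope arithmetic under an assumed sequential-probe model (probe the cache, fall through to the buffer on a miss, then to main memory on a buffer miss). The latencies and miss rates are the figures quoted above; the cost model itself is an assumption, not taken from the patent.

```python
# Average access time for the cascaded hierarchy: every access pays the
# cache probe; misses fall through to the buffer, and buffer misses fall
# through to main memory.
def average_access_time(cache_t, buffer_t, memory_t, cache_miss, buffer_miss):
    return cache_t + cache_miss * (buffer_t + buffer_miss * memory_t)

# Quoted figures: speed ratio 9:3:1 (memory:buffer:cache), under 50%
# cache misses, under 10% buffer misses.
avg = average_access_time(cache_t=1, buffer_t=3, memory_t=9,
                          cache_miss=0.50, buffer_miss=0.10)
# avg = 1 + 0.5 * (3 + 0.1 * 9) = 2.95, comfortably better than half
# the 9-unit access time of the memory alone (4.5).
```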

62 citations


Patent
10 Apr 1974
TL;DR: In this paper, the cache store is selectively cleared of the information from the page whose data information is no longer needed by addressing each level of an associative tag directory to the cache store.
Abstract: In a data processing system that uses segmentation and paging to access data information, such as a virtual memory machine, the cache store need not be entirely cleared each time an I/O operation is performed or each time the data in the cache has a possibility of being incorrect. With segmentation and paging, only a portion of the cache store need be cleared when a new page is obtained from the virtual memory. The entire cache store is cleared only when a new segment is indicated by the instruction. The cache store is selectively cleared of the information from the page whose data information is no longer needed by addressing each level of an associative tag directory to the cache store. Each level of the tag directory is addressed in turn; the columns of each level are compared to the page address, and when a comparison is signaled, that column of the addressed level is cleared by resetting the flag that indicates its full status.
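The selective-clearing scheme above can be sketched as a tag directory whose entries carry a "full" flag that is reset only for columns whose tag matches the discarded page. The structure and names below are illustrative assumptions, not the patent's exact hardware.

```python
# Tag directory with per-entry full flags: clearing a page resets only
# the matching columns; clearing a segment resets everything.
class TagDirectory:
    def __init__(self, levels, columns):
        # Each entry is (page_tag, full_flag).
        self.entries = [[(None, False)] * columns for _ in range(levels)]

    def load(self, level, column, page_tag):
        self.entries[level][column] = (page_tag, True)

    def clear_page(self, page_tag):
        # Address each level in turn; compare every column's tag to the
        # page address and reset the full flag on a match.
        for level in self.entries:
            for col, (tag, full) in enumerate(level):
                if full and tag == page_tag:
                    level[col] = (tag, False)

    def clear_all(self):
        # Used only when a new segment is indicated by the instruction.
        for level in self.entries:
            for col in range(len(level)):
                level[col] = (None, False)
```

Clearing one page leaves entries for other pages valid, which is the point of the scheme: an I/O operation or page replacement invalidates only the stale portion of the cache.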

51 citations


Journal ArticleDOI
TL;DR: The method is independent of cache memory techniques, although aimed at the same problem, and could be combined with use of a cache memory to obtain still more speedup of processor execution.
Abstract: This paper discusses potential techniques for the dynamic generation of instructions in the instruction fetch unit of a processor. The chief advantage is the increased effective bandwidth in transfer of compressed program information from memory to the processor unit, which allows higher processor speed for a given memory access rate. The method is independent of cache memory techniques, although aimed at the same problem, and could be combined with use of a cache memory to obtain still more speedup of processor execution. The paper is largely at a conceptual level; work is planned to obtain data to facilitate design and simulation of a prototype machine.

2 citations