Patent

Cache memory organization utilizing miss information holding registers to prevent lockup from cache misses

Kroft David
TLDR
In this patent, a cache memory organization using a miss information collection and manipulation system is presented to ensure the transparency of cache misses; the design exploits the fact that the cache memory operates at a faster rate than central memory.
Abstract
A cache memory organization is shown using a miss information collection and manipulation system to ensure the transparency of cache misses. This system makes use of the fact that the cache memory has a faster rate of operation than central memory. The cache memory consists of a set-associative cache section (tag arrays and control with a cache buffer), a central memory interface block (a memory requester and a memory receiver), and a miss information holding register section (a miss comparator and a status collection device). The miss information holding register section allows an almost continual stream of new data requests to be supplied to the cache memory at the cache-hit-rate throughput.
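
The miss-handling idea in the abstract can be made concrete with a small sketch. This is a minimal illustration, not the patent's implementation: the register count, field names (mshr_t, handle_miss, issue_memory_request), and target-list size are assumptions chosen only to show how a miss either merges into a register already tracking its block or allocates a free one, so the cache keeps accepting requests while fills are outstanding.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_MSHRS         4   /* illustrative sizes, not taken from the patent */
#define TARGETS_PER_MSHR  4

typedef struct {
    bool     valid;                            /* tracking an outstanding miss        */
    uint32_t block_addr;                       /* address of the missing cache block  */
    int      n_targets;                        /* requests waiting on this block      */
    uint32_t target_offset[TARGETS_PER_MSHR];  /* where to deliver each word on fill  */
} mshr_t;

static mshr_t mshr[NUM_MSHRS];

/* Handle a cache miss without locking up the cache.
 * Returns true if the request was accepted, false if the cache must stall
 * (no miss information holding register can take it). */
bool handle_miss(uint32_t block_addr, uint32_t offset)
{
    int free_slot = -1;

    for (int i = 0; i < NUM_MSHRS; i++) {
        if (mshr[i].valid && mshr[i].block_addr == block_addr) {
            /* Secondary miss: the block is already being fetched; just record
             * where this request's data should go when the fill arrives. */
            if (mshr[i].n_targets == TARGETS_PER_MSHR)
                return false;                  /* target list full: stall this request */
            mshr[i].target_offset[mshr[i].n_targets++] = offset;
            return true;
        }
        if (!mshr[i].valid && free_slot < 0)
            free_slot = i;
    }

    if (free_slot < 0)
        return false;                          /* all registers busy: stall */

    /* Primary miss: allocate a register and ask central memory for the block. */
    mshr[free_slot].valid = true;
    mshr[free_slot].block_addr = block_addr;
    mshr[free_slot].n_targets = 1;
    mshr[free_slot].target_offset[0] = offset;
    /* issue_memory_request(block_addr);  -- hypothetical memory-requester hook */
    return true;
}
```
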


Citations
Patent

System and method for network caching

TL;DR: In this patent, the authors propose a system and method for caching network resources in an intermediary server topologically located between a client and a server in a network, where the intermediary server includes a cache and methods for loading content into the cache according to rules specified by a site owner.
Patent

Scheme for insuring data consistency between a plurality of cache memories and the main memory in a multi-processor system

TL;DR: In this patent, a method for maintaining data consistency between a plurality of individual processor cache memories and the main memory in a multi-processor computer system is provided which is capable of detecting when one of a set of predefined data inconsistency states occurs as a data transaction request is being processed, and of correcting that inconsistency state so that the operation may be executed in a correct and consistent manner.
Patent

Methods and apparatus for fairly scheduling queued packets using a ram-based search engine

TL;DR: In this patent, a hierarchical addressing technique is used to find the first memory location of a calendar queue whose validity bit is "1" (that is, the slot with the lowest time stamp), so that a scheduler based on the invention can schedule large numbers of flows onto a high-speed data link (i.e., one with a small time slot).
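
A rough sketch of the search described above, under stated assumptions: the calendar size, the two-level bitmap (a per-word validity array plus a one-bit-per-word summary), and the names enqueue_slot, first_valid_slot, and dequeue_slot are all illustrative, and the bit scan relies on the GCC/Clang __builtin_ctz intrinsic. It only shows how a hierarchical lookup finds the lowest-time-stamp slot whose validity bit is set.

```c
#include <stdint.h>

#define SLOTS     256                    /* illustrative calendar size              */
#define WORD_BITS 32
#define WORDS     (SLOTS / WORD_BITS)

static uint32_t valid_word[WORDS];       /* one validity bit per calendar slot      */
static uint32_t valid_summary;           /* one bit per word: "any slot valid here" */

/* Mark the slot for a given time stamp as occupied. */
void enqueue_slot(unsigned ts)
{
    unsigned slot = ts % SLOTS;
    valid_word[slot / WORD_BITS] |= 1u << (slot % WORD_BITS);
    valid_summary               |= 1u << (slot / WORD_BITS);
}

/* Hierarchical search: the summary word picks the first non-empty group,
 * then the group word picks the first valid slot.  Returns -1 if empty. */
int first_valid_slot(void)
{
    if (valid_summary == 0)
        return -1;
    unsigned w = (unsigned)__builtin_ctz(valid_summary);   /* lowest non-empty group */
    unsigned b = (unsigned)__builtin_ctz(valid_word[w]);   /* lowest valid slot bit  */
    return (int)(w * WORD_BITS + b);
}

/* Clear a slot once its packet has been placed on the link. */
void dequeue_slot(unsigned slot)
{
    valid_word[slot / WORD_BITS] &= ~(1u << (slot % WORD_BITS));
    if (valid_word[slot / WORD_BITS] == 0)
        valid_summary &= ~(1u << (slot / WORD_BITS));
}
```
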
Patent

Method and apparatus for rapidly switching processes in a computer system

TL;DR: In this patent, an apparatus and method are presented for switching the context of state elements of a very fast processor within a clock cycle when a cache miss occurs, which is particularly useful for minimizing the average instruction cycle time of a processor whose main memory access time exceeds 15 processor clock cycles.
Patent

Set associative sector cache

TL;DR: In this patent, the authors describe circuits for writing into the cache and for adapting the cache to a multi-cache arrangement, where, for a hit, each tag word read out must compare equal (28) with the high-order sector bits (A18-A31) of the address and the accompanying validity bit (Vi) must be set for the accessed block location in its group.
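
A brief sketch of the hit test just described, with assumed names and sizes (WAYS, BLOCKS_PER_SECTOR, and sector_lookup are illustrative, not from the patent): a way hits only when its tag equals the high-order sector bits of the address and the validity bit of the accessed block within that sector is set.

```c
#include <stdbool.h>
#include <stdint.h>

#define WAYS              4   /* illustrative associativity */
#define BLOCKS_PER_SECTOR 8   /* blocks held per sector     */

typedef struct {
    uint32_t tag;                              /* high-order sector bits of the address    */
    bool     block_valid[BLOCKS_PER_SECTOR];   /* one validity bit per block in the sector */
} sector_entry_t;

/* Hit test for one cache set: the tag must compare equal with the sector
 * bits of the address AND the accessed block's validity bit must be set.
 * Returns the hitting way, or -1 on a miss. */
int sector_lookup(const sector_entry_t set[WAYS],
                  uint32_t sector_bits, unsigned block_in_sector)
{
    for (int way = 0; way < WAYS; way++) {
        if (set[way].tag == sector_bits && set[way].block_valid[block_in_sector])
            return way;
    }
    return -1;
}
```
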
References
Patent

Cache memory store in a processor of a data processing system

TL;DR: In this patent, the cache store is operated in parallel with the request for data from the main memory store, and a successful retrieval from the cache store aborts the retrieval from the main memory store.
Patent

Memory access technique

TL;DR: In this patent, a digital computer system has a main memory operable at a first speed, a high-speed buffer operating at a second speed for temporarily storing selected portions of the main memory, and an associative memory for storing selected main memory addresses and comparing the stored addresses with a newly received address in a read/write operation to generate comparison data.
Patent

Pipeline data processing apparatus with high speed slave store

TL;DR: In this patent, a pipeline data processor has a store associated with one of its stages; when that stage needs to write data into a specified address within the store but the data to be written is not yet available, an indication of the address is kept in a special reserved register, permitting subsequent accesses to the store without waiting for that data to become available.
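
The reserved-register idea can be sketched as follows; the single-register structure and the names reserve_write, access_must_wait, complete_write, and store_write are assumptions made only to illustrate the mechanism of parking the address of a not-yet-available write so that unrelated accesses need not wait.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative "reserved register": holds the address of a write whose data
 * has not yet been produced by the pipeline. */
typedef struct {
    bool     pending;
    uint32_t addr;
} reserved_reg_t;

static reserved_reg_t reserved;

/* Record a write whose data is not yet available. */
void reserve_write(uint32_t addr)
{
    reserved.pending = true;
    reserved.addr = addr;
}

/* A later access may proceed immediately unless it touches the reserved
 * address, in which case it must wait for the data to arrive. */
bool access_must_wait(uint32_t addr)
{
    return reserved.pending && reserved.addr == addr;
}

/* When the data finally arrives, it is written to the store and the
 * reservation is released. */
void complete_write(uint32_t addr, uint32_t data)
{
    /* store_write(addr, data);  -- hypothetical slave-store write hook */
    (void)data;
    if (reserved.pending && reserved.addr == addr)
        reserved.pending = false;
}
```
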
Patent

Status indicator apparatus for tag directory in associative stores

TL;DR: In this patent, a four-level directory is used to retain the addresses of data stored in a cache store, and a three-bit storage unit holds a full/empty status indication for each level of the tag directory together with an indication of the level, or tag, of the area to be loaded next in the cache store.
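
A small sketch of how such a directory entry might be used; the structure, the round-robin fallback, and the names tag_dir_set_t, choose_level, and load_level are assumptions, since the summary only says that full/empty status and a next-to-load indication are kept for the directory.

```c
#include <stdbool.h>
#include <stdint.h>

#define LEVELS 4   /* four-level tag directory, as in the summary above */

typedef struct {
    uint32_t tag[LEVELS];    /* address of the data held at each level */
    bool     full[LEVELS];   /* full/empty status indication per level */
    unsigned next_level;     /* level of the area to be loaded next    */
} tag_dir_set_t;

/* Pick the level to load for an incoming block: prefer an empty level,
 * otherwise fall back to the recorded next-to-load level (round-robin
 * rotation here is an assumption for illustration). */
unsigned choose_level(tag_dir_set_t *set)
{
    for (unsigned lvl = 0; lvl < LEVELS; lvl++)
        if (!set->full[lvl])
            return lvl;
    unsigned victim = set->next_level;
    set->next_level = (set->next_level + 1) % LEVELS;
    return victim;
}

/* Record the newly loaded tag and mark its level full. */
void load_level(tag_dir_set_t *set, unsigned lvl, uint32_t tag)
{
    set->tag[lvl]  = tag;
    set->full[lvl] = true;
}
```
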
Patent

Instruction fetch apparatus with combined look-ahead and look-behind capability

TL;DR: In this patent, the look-behind apparatus comprises a multi-word buffer and its associated data register which, in addition to serving as part of the look-behind apparatus, also provide an additional level of look-ahead.
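
As a loose illustration of the combined look-ahead/look-behind buffering (the buffer size, word-granular addressing, and the names ifetch_buf_t and in_buffer are assumptions, not details of the patent): a requested instruction can be supplied from the buffer whenever its address falls inside the window of words already held, whether that word lies behind the current fetch point or ahead of it.

```c
#include <stdbool.h>
#include <stdint.h>

#define BUF_WORDS 8   /* illustrative multi-word buffer size */

typedef struct {
    uint32_t base_addr;          /* word address of the oldest word retained (look-behind) */
    uint32_t word[BUF_WORDS];    /* contiguous instruction words held in the buffer        */
    unsigned count;              /* number of words currently valid                        */
} ifetch_buf_t;

/* Supply an instruction from the buffer when its (word) address falls inside
 * the held window; otherwise the word must be fetched from memory and the
 * buffer refilled around the new fetch point. */
bool in_buffer(const ifetch_buf_t *b, uint32_t addr, uint32_t *out)
{
    if (addr >= b->base_addr && addr < b->base_addr + b->count) {
        *out = b->word[addr - b->base_addr];
        return true;
    }
    return false;
}
```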