Topic
Cache pollution
About: Cache pollution is a research topic. Over its lifetime, 11,353 publications have appeared on this topic, receiving 262,139 citations.
Papers published on a yearly basis
Papers
•
27 Mar 1996
TL;DR: A network cache system includes a shared cache server and a conventional file server connected to a computer network which serves a plurality of client workstation computers. Each client computer may additionally include a local non-volatile cache storage unit for caching data transferred to the client from a network server or from the shared cache, as mentioned in this paper.
Abstract: A network cache system includes a shared cache server and a conventional file server connected to a computer network which serves a plurality of client workstation computers. Each client computer may additionally include a local non-volatile cache storage unit for caching data transferred to the client from a network server or from the shared cache server. Each client computer further includes a resident redirector program which intercepts file manipulation requests from executing application programs and redirects these requests to either the shared network cache or the local non-volatile cache when appropriate.
70 citations
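The redirector behaviour described in the abstract above can be sketched in a few lines. This is a hypothetical model, not the patented implementation: the class and method names are invented, and the three tiers (local non-volatile cache, shared cache server, file server) are modelled as plain dictionaries.

```python
# Hypothetical sketch of the client-side redirector described above:
# file requests are intercepted and served from the local non-volatile
# cache, the shared cache server, or the conventional file server.
class Redirector:
    def __init__(self, local_cache, shared_cache, file_server):
        self.local_cache = local_cache    # per-client non-volatile cache
        self.shared_cache = shared_cache  # shared network cache server
        self.file_server = file_server    # conventional file server

    def read(self, path):
        # Prefer the local cache, then the shared cache, then the server.
        if path in self.local_cache:
            return self.local_cache[path]
        if path in self.shared_cache:
            data = self.shared_cache[path]
        else:
            data = self.file_server[path]
            self.shared_cache[path] = data  # populate the shared cache
        self.local_cache[path] = data       # populate the local cache
        return data
```

On a miss at every level, the data is fetched from the file server and populated into both caches on the way back, so a subsequent read by the same client hits locally.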
•
IBM
TL;DR: In this article, the authors propose a method for direct access storage device (DASD) cache management that reduces the volume of data transferred between DASD and cache while avoiding the complexity of managing variable-length records in the cache.
Abstract: A method for direct access storage device (DASD) cache management that reduces the volume of data transferred between DASD and cache while avoiding the complexity of managing variable-length records in the cache. This is achieved by always choosing the starting point for staging a record to be at the start of the missing record and, at the same time, allocating and managing cache space in fixed-length blocks. The method steps require staging records, starting with the requested record and continuing until either the cache block is full, the end of the track is reached, or a record already in the cache is encountered.
70 citations
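The staging rule in the abstract above has three stop conditions, which can be sketched directly. This is a minimal illustration, assuming records are simple (key, size, data) tuples and the cache is a dictionary; none of these names come from the patent.

```python
# Hypothetical sketch of the DASD staging rule described above: on a
# miss, stage records from the requested record onward, stopping when
# the fixed-length cache block is full, the end of the track is
# reached, or a record already resident in the cache is encountered.
from collections import namedtuple

Record = namedtuple("Record", ["key", "size", "data"])

def stage_records(track, start_index, cache, block_capacity):
    staged = []
    used = 0
    for record in track[start_index:]:           # start at the missing record
        if record.key in cache:                  # already resident: stop
            break
        if used + record.size > block_capacity:  # fixed-length block full: stop
            break
        cache[record.key] = record
        staged.append(record)
        used += record.size
    return staged  # the loop also ends naturally at end of track
```

Starting at the missing record (rather than at the start of the track) is what reduces the DASD-to-cache transfer volume, while the fixed block capacity keeps space management simple.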
•
28 Feb 1992
TL;DR: In this paper, the cache control logic (60) provides an external transfer code signal which allows a user to know when a cache transaction is performed, and processes address and control information to provide the external signals necessary to execute each of the cache control instructions.
Abstract: A circuit for allowing greater user control over a cache memory is implemented in a data processor (20). Cache control instructions have been implemented to perform touch load, flush, and allocate operations in data cache (54) of data cache unit (24). The control instructions are decoded by both instruction cache unit (26) and sequencer (34) to provide necessary control and address information to load/store unit (28). Load/store unit (28) sequences execution of each of the instructions, and provides necessary control and address information to data cache unit (24) at an appropriate point in time. Cache control logic (60) subsequently processes both the address and control information to provide external signals which are necessary to execute each of the cache control instructions. Additionally, cache control logic (60) provides an external transfer code signal which allows a user to know when a cache transaction is performed.
70 citations
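The three cache-control operations named above (touch load, flush, allocate) are hardware instructions, but their effect on the data cache can be modelled in software. The sketch below is an assumption-laden illustration: the class, its dictionary-backed lines, and the backing `memory` mapping are all invented for clarity.

```python
# Hypothetical software model of the three cache-control operations
# described above; a real implementation is in hardware.
class DataCache:
    def __init__(self, memory):
        self.memory = memory  # backing store: addr -> value
        self.lines = {}       # cached lines: addr -> value

    def touch_load(self, addr):
        # Prefetch a line into the cache without returning its data.
        if addr not in self.lines:
            self.lines[addr] = self.memory.get(addr, 0)

    def flush(self, addr):
        # Write a line back to memory and drop it from the cache.
        if addr in self.lines:
            self.memory[addr] = self.lines.pop(addr)

    def allocate(self, addr):
        # Claim a line without reading memory (zero-filled), useful
        # when the data will be fully overwritten anyway.
        self.lines[addr] = 0
```

Touch load warms the cache ahead of use, flush forces write-back, and allocate avoids a wasted memory read for data about to be overwritten, which is exactly the kind of user-level control the abstract describes.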
•
01 Apr 1988
TL;DR: In this paper, the authors present a memory system that determines which blocks of a set of associative blocks in cache memory are unavailable for replacement by maintaining a duplicate set of tags which track block ownership for this cache pursuant to a "snoopy" protocol.
Abstract: This invention is directed to a memory system that determines which blocks of a set of associative blocks in cache memory are unavailable for replacement. This is accomplished by operating the memory system to maintain a duplicate set of tags which track block ownership for this cache pursuant to a "snoopy" protocol. In addition, the cache system maintains a bit associated with each memory address to indicate whether any data blocks resident in it have been locked. The interlock status of the data blocks in the cache is not communicated to the memory system. Once a block is locked, it cannot be allocated for replacement until it is unlocked. When the cache system encounters a locked block, it skips over that block and allocates the next block of the associative blocks. From this, the memory system infers, by means of a replacement algorithm, that block is being locked and, therefore, cannot be replaced. This enables the memory system to implement an irregular replacement policy for this cache when the block to be replaced is owned and locked.
70 citations
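The skip-over-locked-blocks behaviour described above can be sketched as a victim-selection routine. This is a simplified model under assumed names: each block in the associative set carries a `locked` flag, and `next_index` stands in for whatever block the base replacement policy would normally choose.

```python
# Hypothetical sketch of the replacement behaviour described above:
# when choosing a victim within an associative set, locked blocks are
# skipped and the next unlocked block is allocated instead.
def choose_victim(blocks, next_index):
    """blocks: the associative set, each a dict with a 'locked' flag;
    next_index: the block the base policy would normally replace."""
    n = len(blocks)
    for offset in range(n):
        i = (next_index + offset) % n
        if not blocks[i]["locked"]:
            return i  # first unlocked block at or after next_index
    return None       # every block in the set is locked
```

Because the memory system only observes which block was actually allocated, it can infer from the skipped positions which blocks are locked, matching the abstract's point that lock status is never communicated explicitly.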
•
05 Apr 1993
TL;DR: In this paper, a dual-purpose memory system consisting of multiple cache sets is described, where each cache set can be individually configured as either a cache set or as a static random access memory (SRAM) bank.
Abstract: A data processing system (10) having a dual purpose memory (14) comprising multiple cache sets. Each cache set can be individually configured as either a cache set or as a static random access memory (SRAM) bank. Based upon the configuration of the set, the tag store array (58) is used for storage of actual data, in the SRAM mode, or for storage of a set of tag entries in the cache mode. A module configuration register (40) specifies the mode of each set/bank. A set of base address registers (41-44) define the upper bits of a base address of SRAM banks. In SRAM mode, comparison logic (66) compares a tag field of the requested address (50) to the base address to determine an access hit. The least significant bit of the address, tag field is used to select either the tag store array (58) or the line array (60) for the requested address data read or write.
70 citations
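The SRAM-mode address check in the abstract above (compare the tag field against the bank's base address, then use the tag field's least significant bit to pick the tag-store array or the line array) can be sketched as follows. The bit widths, the `tag_shift` split, and all names are assumptions for illustration, not figures from the patent.

```python
# Hypothetical sketch of the SRAM-mode access described above: a hit
# occurs when the tag field (above its LSB) matches the bank's base
# address, and the LSB of the tag field selects between the tag-store
# array and the line array.
def sram_access(addr, base_tag, tag_store, line_array, tag_shift=16):
    tag = addr >> tag_shift
    if tag >> 1 != base_tag:   # miss: address is outside this SRAM bank
        return None
    # LSB of the tag field selects which physical array to read.
    array = line_array if tag & 1 else tag_store
    return array[addr & ((1 << tag_shift) - 1)]
```

Reusing the tag-store array as plain data storage in SRAM mode is the trick that makes the memory dual-purpose: the same RAM holds tag entries in cache mode and user data in SRAM mode.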