
Showing papers on "Smart Cache published in 1986"


Proceedings ArticleDOI
27 Oct 1986
TL;DR: This work presents new on-line algorithms which decide, for each cache, which blocks to retain and which to drop in order to minimize communication over the bus in a snoopy cache multiprocessor system.
Abstract: In a snoopy cache multiprocessor system, each processor has a cache in which it stores blocks of data. Each cache is connected to a bus used to communicate with the other caches and with main memory. For several of the proposed models of snoopy caching, we present new on-line algorithms which decide, for each cache, which blocks to retain and which to drop in order to minimize communication over the bus. We prove that, for any sequence of operations, our algorithms' communication costs are within a constant factor of the minimum required for that sequence; for some of our algorithms we prove that no on-line algorithm has this property with a smaller constant.
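
The flavor of such competitive rules can be sketched as a rent-or-buy decision: keep paying small bus costs to stay coherent until they total the cost of one block reload, then drop the block. The Python below is a minimal illustration of that style of rule, not the paper's actual algorithms; the class name, unit costs, and cost model are assumptions.

    class SnoopyLine:
        """One cached block and the bus traffic spent keeping it coherent."""
        def __init__(self, block_size):
            self.block_size = block_size   # bus cost of reloading the block
            self.update_cost = 0           # bus cost paid so far on remote writes

        def on_remote_write(self):
            """A remote cache wrote this block: pay one bus cycle to stay
            updated. Returns False once updates have cost as much as one
            reload, signalling that the block should be dropped."""
            self.update_cost += 1
            return self.update_cost < self.block_size

Dropping the block exactly at that threshold pays at most twice what any off-line strategy pays on the same request sequence, which is the style of constant-factor guarantee the abstract describes.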

268 citations


Proceedings ArticleDOI
01 May 1986
TL;DR: An analytical model for a cache-reload transient is developed and it is shown that the reload transient is related to the area in the tail of a normal distribution whose mean is a function of the footprints of the programs that compete for the cache.
Abstract: This paper develops an analytical model for a cache-reload transient. When an interrupt program or system program runs periodically in a cache-based computer, a short cache-reload transient occurs each time the interrupt program is invoked. That transient depends on the size of the cache, the fraction of the cache used by the interrupt program, and the fraction of the cache used by background programs that run between interrupts. We call the portion of a cache used by a program its footprint in the cache, and we show that the reload transient is related to the area in the tail of a normal distribution whose mean is a function of the footprints of the programs that compete for the cache. We believe that the model may be useful as well for predicting paging behavior in virtual-memory systems with round-robin scheduling.
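
As a back-of-envelope companion to the model (not the paper's exact derivation), the expected overlap of two footprints placed independently in a cache can be estimated from a binomial mean and its normal approximation; the function and parameter names below are invented for illustration.

    import math

    def reload_transient(cache_lines, interrupt_fp, background_fp):
        """Estimate reload misses when an interrupt program with footprint
        `interrupt_fp` returns to a cache of `cache_lines` lines, of which
        the background program has refilled `background_fp`, assuming
        uniformly random placement of both footprints."""
        p = background_fp / cache_lines          # chance a given line was displaced
        mean = interrupt_fp * p                  # mean of the binomial overlap
        sigma = math.sqrt(interrupt_fp * p * (1 - p))  # normal approximation
        return mean, sigma

    mean, sigma = reload_transient(4096, interrupt_fp=512, background_fp=2048)
    print(f"expect ~{mean:.0f} +/- {sigma:.0f} reload misses per invocation")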

112 citations


Patent
12 Nov 1986
TL;DR: In this paper, the authors propose a system for maintaining data consistency among distributed processors, each having its associated cache memory, where a processor addresses data in its cache by specifying the virtual address.
Abstract: A system for maintaining data consistency among distributed processors, each having its associated cache memory. A processor addresses data in its cache by specifying the virtual address. The cache will search its cells for the data associatively. Each cell has a virtual address, a real address, flags and a plurality of associated data words. If there is no hit on the virtual address supplied by the processor, a map processor supplies the equivalent real address which the cache uses to access the data from another cache if one has it, or else from real memory. When a processor writes into a data word in the cache, the cache will update all other caches that share the data before allowing the write to the local cache.
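
The lookup-and-update flow described in the abstract can be paraphrased in a short sketch; the class and method names are illustrative, not the patent's terminology, and flags and error handling are omitted.

    class Cell:
        """One cache cell: virtual address, real address, and data."""
        def __init__(self, vaddr, raddr, data):
            self.vaddr, self.raddr, self.data = vaddr, raddr, data

    class Cache:
        def __init__(self, map_processor, peers, memory):
            self.cells = {}              # associative search keyed by vaddr
            self.map = map_processor     # supplies vaddr -> raddr translations
            self.peers = peers           # the other distributed caches
            self.memory = memory         # backing real memory

        def read(self, vaddr):
            if vaddr in self.cells:                  # hit on the virtual address
                return self.cells[vaddr].data
            raddr = self.map.translate(vaddr)        # miss: ask the map processor
            for peer in self.peers:                  # prefer another cache...
                cell = peer.lookup_real(raddr)
                if cell is not None:
                    data = cell.data
                    break
            else:
                data = self.memory[raddr]            # ...else real memory
            self.cells[vaddr] = Cell(vaddr, raddr, data)
            return data

        def write(self, vaddr, value):
            raddr = self.map.translate(vaddr)
            for peer in self.peers:                  # update all sharers first
                peer.update_real(raddr, value)
            self.cells[vaddr] = Cell(vaddr, raddr, value)  # then write locally

        def lookup_real(self, raddr):
            for cell in self.cells.values():
                if cell.raddr == raddr:
                    return cell
            return None

        def update_real(self, raddr, value):
            cell = self.lookup_real(raddr)
            if cell is not None:
                cell.data = value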

106 citations


Journal ArticleDOI
01 May 1986
TL;DR: This paper shows how the VMP design provides the high memory bandwidth required by modern high-performance processors with a minimum of hardware complexity and cost, and describes simple solutions to the consistency problems associated with virtually addressed caches.
Abstract: VMP is an experimental multiprocessor that follows the familiar basic design of multiple processors, each with a cache, connected by a shared bus to global memory. Each processor has a synchronous, virtually addressed, single-master connection to its cache, providing very high memory bandwidth. An unusually large cache page size and fast sequential memory copy hardware make it feasible for cache misses to be handled in software, analogously to the handling of virtual memory page faults. Hardware support for cache consistency is limited to a simple state machine that monitors the bus and interrupts the processor when a cache consistency action is required. In this paper, we show how the VMP design provides the high memory bandwidth required by modern high-performance processors with a minimum of hardware complexity and cost. We also describe simple solutions to the consistency problems associated with virtually addressed caches. Simulation results indicate that the design achieves good performance, provided data contention is not excessive.
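
A software miss handler in this style behaves like a page-fault handler: trap, pick a victim, write it back if dirty, and block-copy the missing cache page from global memory. The sketch below shows a plausible shape under those assumptions; the helper names and the page size are mine, not VMP's.

    PAGE_WORDS = 512   # unusually large cache page (size chosen for illustration)

    def on_cache_miss(cache, memory, vaddr):
        """Software-handled cache miss, analogous to a page-fault handler."""
        base = vaddr - (vaddr % PAGE_WORDS)        # align to the cache page
        victim = cache.pick_victim()               # hypothetical replacement hook
        if victim is not None and victim.dirty:
            memory.write_block(victim.base, victim.data)   # write back over the bus
        data = memory.read_block(base, PAGE_WORDS)         # fast sequential copy
        cache.install(base, data)                          # then resume the processor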

99 citations


Patent
Lishing Liu1
12 Sep 1986
TL;DR: In this article, a method and apparatus are provided for associating in cache directories the Control Domain Identifications (CDIDs) of the software covered by each cache line. Through this provision, and/or the addition of identifications of the users actively using lines, cache coherence for certain data is controlled without performing conventional Cross-Interrogates (XIs), provided accesses to such objects are properly synchronized with locking-type concurrency controls.
Abstract: A method and apparatus are provided for associating in cache directories the Control Domain Identifications (CDIDs) of the software covered by each cache line. Through this provision, and/or the addition of identifications of users actively using lines, cache coherence of certain data is controlled without performing conventional Cross-Interrogates (XIs), if the accesses to such objects are properly synchronized with locking-type concurrency controls. Software protocols to the caches are provided for the resource kernel to control the flushing of released cache lines. The parameters of these protocols are high-level Domain Identifications and Task Identifications.
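
One way to picture the scheme: each directory entry carries a CDID and the tasks using the line, and the resource kernel flushes a domain's lines when its lock is released, so later accesses need no Cross-Interrogate. This is a hypothetical sketch; the names and data layout are not the patent's.

    class DirectoryEntry:
        def __init__(self, tag, cdid, users=()):
            self.tag = tag                 # cache line address tag
            self.cdid = cdid               # Control Domain Identification
            self.users = set(users)        # Task Identifications using the line

    def release_domain(directory, cdid, task_id):
        """Resource-kernel protocol: on lock release, retire `task_id` from
        every line of control domain `cdid`, flushing lines with no users
        left so no XI is needed when another processor touches them."""
        for entry in list(directory):
            if entry.cdid == cdid:
                entry.users.discard(task_id)
                if not entry.users:
                    directory.remove(entry)    # flush the released line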

86 citations



Proceedings Article
01 Jun 1986
TL;DR: The design and implementation issues associated with realizing an instruction cache for a machine that uses an "extended" version of the delayed branch instruction are discussed, and timing results are presented which indicate the performance of critical circuits.
Abstract: In this paper, we present the design of an instruction cache for a machine that uses an "extended" version of the delayed branch instruction. The extended delayed branch, which we call the prepare-to-branch, or PBR, instruction, permits the unconditional execution of between 0 and 7 instruction parcels after the branch instruction. The instruction cache is designed to fit on the same chip as the processor and takes advantage of the PBR instruction to minimize the effective latency associated with memory references and the filling of the instruction register. This paper discusses the design and implementation issues associated with realizing such an instruction cache. We present critical aspects of the design and the philosophy used to guide the development of the design. Finally, some timing results are presented which indicate the performance of critical circuits.
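
The PBR semantics can be mimicked with a tiny interpreter: the 0 to 7 parcels following the branch always execute, giving the on-chip cache time to fetch the target line. Purely illustrative; the instruction encoding below is invented.

    def run(program, pc=0):
        """Toy interpreter for a prepare-to-branch (PBR) instruction.
        ('pbr', target, n) executes the next n parcels (0 <= n <= 7)
        unconditionally, then transfers control to `target`; the delay
        lets the instruction cache fetch the target line."""
        trace = []
        while pc < len(program):
            op = program[pc]
            if op[0] == 'pbr':
                _, target, n = op
                assert 0 <= n <= 7
                for parcel in program[pc + 1 : pc + 1 + n]:
                    trace.append(parcel)        # delay parcels always execute
                pc = target                     # then the branch takes effect
            else:
                trace.append(op)
                pc += 1
        return trace

    # branch to parcel 5, executing two delay parcels first
    print(run([('pbr', 5, 2), ('add',), ('sub',), ('mul',), ('div',), ('halt',)]))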

6 citations


Journal ArticleDOI
TL;DR: There is a significant improvement in the instruction execution rate due to the increased bandwidth and decreased access time, and performance is further improved by using the C-access (concurrent access) notion in cache interleaving.
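
For reference, C-access interleaving maps consecutive line addresses to different banks so several fetches can be in flight at once; a minimal sketch, with an invented bank count:

    BANKS = 4   # number of interleaved cache banks (illustrative)

    def bank_and_index(line_address):
        """C-access interleaving: consecutive lines land in different
        banks, so up to BANKS references can proceed concurrently."""
        return line_address % BANKS, line_address // BANKS

    print([bank_and_index(a) for a in range(8)])
    # [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1), (2, 1), (3, 1)]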

3 citations


01 Jan 1986
TL;DR: The style of use and the performance improvement of caching in an existing file system are measured, and the protocol and interface architecture of the Caching Ring is developed: a combination of an intelligent network interface and an efficient network protocol that allows caching of all types of file blocks at the client machines.
Abstract: Caching has long been recognized as a powerful performance enhancement technique in many areas of computer design. Most modern computer systems include a hardware cache between the processor and main memory, and many operating systems include a software cache between the file system routines and the disk hardware. In a distributed file system, where the file systems of several client machines are separated from the server backing store by a communications network, it is desirable to have a cache of recently used file blocks at each client, to avoid some of the communications overhead. In this configuration, special care must be taken to maintain consistency between the client caches, as some disk blocks may be in use by more than one client. For this reason, most current distributed file systems do not provide a cache at the client machine. Those systems that do either place restrictions on the types of file blocks that may be shared or require extra communication to confirm that a cached block is still valid each time the block is used. The Caching Ring is a combination of an intelligent network interface and an efficient network protocol that allows caching of all types of file blocks at the client machines. Blocks held in a client cache are guaranteed to be valid copies. We measure the style of use and the performance improvement of caching in an existing file system, and develop the protocol and interface architecture of the Caching Ring. Using simulation, we study the performance of the Caching Ring and compare it to similar schemes using conventional network hardware.
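
The client side of such a scheme can be sketched as a cache whose network interface snoops ring traffic and invalidates blocks written elsewhere, so a hit never needs a round-trip to revalidate. The names below are illustrative, not the dissertation's.

    class RingInterface:
        """Caching Ring client sketch: the interface snoops the ring and
        invalidates written blocks, so cached copies stay valid."""
        def __init__(self, server):
            self.server = server
            self.blocks = {}                 # (file_id, block_no) -> data

        def read_block(self, file_id, block_no):
            key = (file_id, block_no)
            if key in self.blocks:           # guaranteed-valid hit: no network
                return self.blocks[key]
            data = self.server.fetch(file_id, block_no)   # miss: one request
            self.blocks[key] = data
            return data

        def on_ring_write(self, file_id, block_no):
            # another client wrote the block: drop our now-stale copy
            self.blocks.pop((file_id, block_no), None)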

2 citations


Journal ArticleDOI
TL;DR: A UK company has addressed the problem of disc caching and claims to have radically improved access time; the caching strategies involved fall into two types, prefetch and replacement.
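
The two families combine naturally: read-ahead decides what to fetch early (prefetch) and LRU decides what to discard (replacement). A toy sketch under those assumptions, with an arbitrary capacity and one-sector read-ahead:

    from collections import OrderedDict

    class DiscCache:
        """Toy disc cache: prefetches the next sector (prefetch policy) and
        evicts the least recently used sector (replacement policy)."""
        def __init__(self, disc, capacity=64):
            self.disc, self.capacity = disc, capacity
            self.cache = OrderedDict()               # sector -> data, LRU order

        def read(self, sector):
            if sector in self.cache:
                self.cache.move_to_end(sector)       # refresh LRU position
                return self.cache[sector]
            data = self._fill(sector)
            if sector + 1 < len(self.disc):
                self._fill(sector + 1)               # prefetch the next sector
            return data

        def _fill(self, sector):
            data = self.disc[sector]
            self.cache[sector] = data
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)       # evict the LRU sector
            return data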

2 citations


Patent
28 May 1986
TL;DR: In this paper, a magnetic disk controller incorporating a cache system having a cache memory is presented; it comprises cache operation designation data producing means (45), or uses a data transfer direction control code in the cache operation designation data, representing five modes of operation.
Abstract: The present invention provides a magnetic disk controller incorporating a cache system having a cache memory. In order to improve the operation of the cache memory, the magnetic disk controller comprises cache operation designation data producing means (45), or uses a data transfer direction control code in the cache operation designation data, representing five modes of operation.
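
A five-mode transfer-direction control code might decode as in the sketch below; the abstract does not name the five modes, so these are invented placeholders.

    from enum import Enum

    class CacheMode(Enum):
        """Illustrative stand-ins for the five transfer modes."""
        READ_THROUGH  = 0   # disk -> host, bypassing the cache
        READ_CACHED   = 1   # disk -> cache -> host
        WRITE_THROUGH = 2   # host -> cache and disk together
        WRITE_BACK    = 3   # host -> cache, flushed to disk later
        PREFETCH      = 4   # disk -> cache ahead of demand

    def decode(control_code):
        """Map a data transfer direction control code to its mode."""
        return CacheMode(control_code)

    print(decode(3))   # CacheMode.WRITE_BACK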