Proceedings ArticleDOI

Synapse tightly coupled multiprocessors: a new approach to solve old problems

Steve Frank, +1 more
pp. 41-50
TLDR
Using a non-write-through cache and the Synapse Expansion Bus, Synapse has designed a symmetric, tightly coupled multiprocessor system, capable of being expanded on line and under power from two through twenty-eight processors with a linear improvement in system performance.
Abstract
The theoretical merits of a tightly coupled multiple-processor/shared-memory architecture have long been recognized. Two major problems in designing such an architecture are the performance limitations imposed by shared-memory bus contention in cached processors and multiple-processor data coherency. In the Synapse system, memory contention was significantly reduced by designing a processor cache employing a non-write-through algorithm, which minimized bandwidth between cache and shared memory. The multicache coherency problem was solved by a new bussing scheme, the Synapse Expansion Bus, which includes an ownership level protocol between processor caches. Using a non-write-through cache and the Synapse Expansion Bus, Synapse has designed a symmetric, tightly coupled multiprocessor system, capable of being expanded on line and under power from two through twenty-eight processors with a linear improvement in system performance.
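The abstract's two key ideas, a non-write-through (write-back) cache and a bus-level ownership protocol between caches, can be illustrated with a small sketch. The following Python model is a minimal simplification: the state names (INVALID, VALID, DIRTY), the Bus/Cache/Memory classes, and the bus operations are assumptions made for illustration and do not reproduce the actual Synapse Expansion Bus protocol.

```python
# Simplified sketch of an ownership-based, non-write-through (write-back)
# coherence scheme in the spirit of the abstract. State names and bus
# operations are illustrative only, not the real Synapse Expansion Bus.

INVALID, VALID, DIRTY = "INVALID", "VALID", "DIRTY"

class Memory:
    def __init__(self):
        self.data = {}

    def read(self, addr):
        return self.data.get(addr, 0)

    def write(self, addr, value):
        self.data[addr] = value

class Bus:
    """Broadcast medium connecting all caches and shared memory."""
    def __init__(self, memory):
        self.memory = memory
        self.caches = []

    def fetch(self, addr, requester):
        # If some cache owns the block (DIRTY), it supplies the data and
        # writes it back; otherwise shared memory responds.
        for cache in self.caches:
            if cache is not requester and cache.state.get(addr) == DIRTY:
                self.memory.write(addr, cache.data[addr])
                cache.state[addr] = VALID
                return cache.data[addr]
        return self.memory.read(addr)

    def invalidate(self, addr, requester):
        # Ownership request: all other copies are invalidated.
        for cache in self.caches:
            if cache is not requester and addr in cache.state:
                if cache.state[addr] == DIRTY:
                    self.memory.write(addr, cache.data[addr])
                cache.state[addr] = INVALID

class Cache:
    def __init__(self, bus):
        self.bus = bus
        self.state = {}
        self.data = {}
        bus.caches.append(self)

    def read(self, addr):
        if self.state.get(addr) in (VALID, DIRTY):
            return self.data[addr]          # hit, no bus traffic
        self.data[addr] = self.bus.fetch(addr, self)
        self.state[addr] = VALID
        return self.data[addr]

    def write(self, addr, value):
        if self.state.get(addr) != DIRTY:   # must own the block first
            self.bus.invalidate(addr, self)
        self.state[addr] = DIRTY            # write-back: memory updated later
        self.data[addr] = value

if __name__ == "__main__":
    mem = Memory()
    bus = Bus(mem)
    p0, p1 = Cache(bus), Cache(bus)
    p0.write(0x100, 42)        # p0 becomes owner; memory is still stale
    print(p1.read(0x100))      # 42: supplied by the owner, not stale memory
```

Because writes hit only the local cache until another processor needs the block, shared-bus bandwidth is spent on ownership transfers rather than on every store, which is the contention reduction the abstract describes.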


Citations
Patent

Internet-based shared file service with native pc client access and semantics and distributed version control

TL;DR: In this article, a multi-user file storage service and system enable each user of a pre-subscribed user group to operate an arbitrary client node at an arbitrary geographic location, communicating with a remote file server node over a wide area network to access the files of the file group.
Journal ArticleDOI

A characterization of sharing in parallel programs and its application to coherency protocol evaluation

TL;DR: Simulation results indicate that (1) neither protocol dominates in performance; and (2) the write run model is a good predictor of protocol performance when the unit of the coherency operations matches that in the sharing analysis.
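As a hedged sketch of the write-run idea mentioned in this TL;DR, the following Python fragment counts write-run lengths (uninterrupted writes to a block by one processor, ended by any access from another processor) in a toy reference trace; the trace format and function name are assumptions for illustration, not the paper's actual methodology.

```python
# Compute "write run" lengths from a shared-block reference trace.
# A run is a sequence of writes to a block by one processor, ended by
# any access to that block from a different processor.

def write_runs(trace):
    """trace: iterable of (processor_id, op, block) with op in {'R', 'W'}."""
    runs = []
    current = {}  # block -> (current writer, run length)
    for proc, op, block in trace:
        writer, length = current.get(block, (None, 0))
        if op == 'W':
            if writer == proc:
                current[block] = (proc, length + 1)   # run continues
            else:
                if length:
                    runs.append(length)               # foreign write ends run
                current[block] = (proc, 1)            # ...and starts a new one
        else:  # read
            if writer is not None and writer != proc and length:
                runs.append(length)                   # foreign read ends run
                current[block] = (None, 0)
    runs.extend(length for _, length in current.values() if length)
    return runs

if __name__ == "__main__":
    trace = [(0, 'W', 'A'), (0, 'W', 'A'), (1, 'R', 'A'),
             (1, 'W', 'A'), (0, 'W', 'A')]
    print(write_runs(trace))   # [2, 1, 1]
```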
Proceedings ArticleDOI

Correct memory operation of cache-based multiprocessors

TL;DR: It is shown that cache coherence protocols can reliably implement indivisible synchronization primitives and can also enforce sequential consistency; in particular, it is shown how such protocols can implement atomic READ&MODIFY operations for synchronization purposes.
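A rough sketch of the underlying idea, that holding exclusive ownership of a line for the whole read-modify-write makes it indivisible, is given below; the threading.Lock merely stands in for exclusive bus ownership, and all names are illustrative assumptions rather than the protocols analyzed in the paper.

```python
# Exclusive ownership of a cache line modeled with a lock: the owner
# performs the read and the write before any other processor can touch
# the line, so the READ&MODIFY appears atomic.

import threading

class Line:
    """One cache line; the lock stands in for exclusive bus ownership."""
    def __init__(self, value=0):
        self.value = value
        self._owner = threading.Lock()

    def read_and_modify(self, update):
        with self._owner:             # acquire exclusive ownership
            old = self.value          # READ
            self.value = update(old)  # MODIFY + WRITE while still owner
            return old

if __name__ == "__main__":
    line = Line(0)
    threads = [threading.Thread(
                   target=lambda: [line.read_and_modify(lambda v: v + 1)
                                   for _ in range(1000)])
               for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(line.value)  # 4000: no increments are lost
```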
Patent

Internet-based shared file service with native pc client access and semantics

TL;DR: In this article, a multi-user file storage service and system enables each user of a user group to operate an arbitrary client node at an arbitrary geographic location to communicate with a remote file server node via a wide area network.
Patent

Cache MMU system

TL;DR: In this article, a cache and memory management system architecture and associated protocol are described, comprising a set-associative memory cache subsystem, a set-associative translation logic memory subsystem, hardwired page translation, selectable access mode logic, and selectively enableable instruction prefetch logic.
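For illustration only, the sketch below walks a toy virtual address through page translation and a set-associative lookup (the index bits select a set, the tag identifies the line within it); all sizes, field widths, and the replacement rule are arbitrary assumptions, not those of the patented design.

```python
# Toy address path: virtual address -> page translation -> set-associative
# cache lookup. Field widths are arbitrary illustrative choices.

PAGE_BITS = 12          # 4 KiB pages
LINE_BITS = 5           # 32-byte lines
SET_BITS = 6            # 64 sets
WAYS = 4

page_table = {0x0: 0x8, 0x1: 0x9}   # virtual page -> physical page (toy)

# cache[set_index] holds the tags currently resident in that set
cache = [[] for _ in range(1 << SET_BITS)]

def translate(vaddr):
    vpn, offset = vaddr >> PAGE_BITS, vaddr & ((1 << PAGE_BITS) - 1)
    return (page_table[vpn] << PAGE_BITS) | offset

def access(vaddr):
    paddr = translate(vaddr)
    index = (paddr >> LINE_BITS) & ((1 << SET_BITS) - 1)
    tag = paddr >> (LINE_BITS + SET_BITS)
    ways = cache[index]
    if tag in ways:
        return "hit"
    if len(ways) >= WAYS:
        ways.pop(0)                 # simple FIFO replacement within the set
    ways.append(tag)
    return "miss"

if __name__ == "__main__":
    print(access(0x0040))   # miss (cold)
    print(access(0x0044))   # hit  (same 32-byte line)
    print(access(0x1040))   # miss (different physical page, different tag)
```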
References
Journal ArticleDOI

Cache Memories

TL;DR: Specific aspects of cache memories investigated include: the cache fetch algorithm (demand versus prefetch), the placement and replacement algorithms, line size, store-through versus copy-back updating of main memory, cold-start versus warm-start miss ratios, multicache consistency, the effect of input/output through the cache, the behavior of split data/instruction caches, and cache size.
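One of the surveyed design choices, store-through versus copy-back updating of main memory, is the same trade-off the Synapse abstract exploits. The hedged sketch below counts main-memory writes for each policy on a toy write trace, under the simplifying assumption of an unbounded cache with no evictions.

```python
# Compare main-memory write traffic of store-through (write-through) and
# copy-back (write-back) updating on a toy trace of written block ids.

def memory_writes(trace, policy):
    """Return the number of writes that reach main memory."""
    dirty = set()
    writes = 0
    for block in trace:
        if policy == "store-through":
            writes += 1            # every store goes through to memory
        else:                      # copy-back: block written back once
            dirty.add(block)
    if policy == "copy-back":
        writes = len(dirty)        # write-backs at the end (no evictions here)
    return writes

if __name__ == "__main__":
    trace = ["A"] * 100 + ["B"] * 50     # repeated writes to two blocks
    print(memory_writes(trace, "store-through"))  # 150
    print(memory_writes(trace, "copy-back"))      # 2
```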
Proceedings ArticleDOI

Using cache memory to reduce processor-memory traffic

TL;DR: It is demonstrated that a cache exploiting primarily temporal locality (look-behind) can indeed greatly reduce traffic to memory, and an elegant solution to the cache coherency problem is introduced.
Journal ArticleDOI

Effects of Cache Coherency in Multiprocessors

Dubois, +1 more
TL;DR: In this article, an analytical model for the program behavior of a multitasked system is introduced, including the behavior of each process and the interactions between processes with regard to the sharing of data blocks.
Journal ArticleDOI

Effects of cache coherency in multiprocessors

TL;DR: An in-depth analysis of the effects of cache coherency in multiprocessors is presented and a novel analytical model for the program behavior of a multitasked system is introduced.
Proceedings ArticleDOI

A study of instruction cache organizations and replacement policies

TL;DR: It is concluded theoretically that random replacement is better than LRU and FIFO, and that, under certain circumstances, direct-mapped or set-associative caches may perform better than a fully associative cache organization.
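The replacement-policy comparison can be sketched as follows; the cache size, synthetic looping trace, and random seed are arbitrary assumptions chosen only to show how LRU, FIFO, and random replacement are compared, not to reproduce the paper's results. On this looping pattern, which slightly exceeds the cache capacity, LRU and FIFO thrash while random replacement retains some blocks.

```python
# Miss counts for LRU, FIFO, and random replacement in a small fully
# associative cache, on a synthetic looping reference pattern.

import random
from collections import OrderedDict, deque

def misses(trace, capacity, policy, seed=0):
    rng = random.Random(seed)
    lru = OrderedDict()
    fifo = deque()
    resident = set()
    count = 0
    for block in trace:
        if policy == "LRU":
            if block in lru:
                lru.move_to_end(block)       # hit: refresh recency
                continue
            count += 1
            if len(lru) >= capacity:
                lru.popitem(last=False)      # evict least recently used
            lru[block] = True
        else:
            if block in resident:
                continue                     # hit
            count += 1
            if len(resident) >= capacity:
                if policy == "FIFO":
                    victim = fifo.popleft()  # evict oldest insertion
                else:                        # random
                    victim = rng.choice(list(resident))
                    if victim in fifo:
                        fifo.remove(victim)
                resident.discard(victim)
            resident.add(block)
            fifo.append(block)
    return count

if __name__ == "__main__":
    trace = [i % 5 for i in range(100)]      # loop over 5 blocks, capacity 4
    for policy in ("LRU", "FIFO", "random"):
        print(policy, misses(trace, 4, policy))
```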