scispace - formally typeset
Topic

Distributed shared memory

About: Distributed shared memory is a research topic. Over the lifetime, 6208 publications have been published within this topic receiving 136469 citations. The topic is also known as: DSM.


Papers
Journal ArticleDOI
TL;DR: In this paper, a ternary classificatory scheme of memory is proposed in which procedural, semantic, and episodic memory constitute a "monohierarchical" arrangement.
Abstract: Memory is made up of a number of interrelated systems, organized structures of operating components consisting of neural substrates and their behavioral and cognitive correlates. A ternary classificatory scheme of memory is proposed in which procedural, semantic, and episodic memory constitute a "monohierarchical" arrangement: Episodic memory is a specialized subsystem of semantic memory, and semantic memory is a specialized subsystem of procedural memory. The three memory systems differ from one another in a number of ways, including the kind of consciousness that characterizes their operations. The ternary scheme overlaps with dichotomies and trichotomies of memory proposed by others. Evidence for multiple systems is derived from many sources. Illustrative data are provided by experiments in which direct priming effects are found to be both functionally and stochastically independent of recognition memory. Solving puzzles in science has much in common with solving puzzles for amusement, but the two differ in important respects. Consider, for instance, the jigsaw puzzle that scientific activity frequently imitates. The everyday version of the puzzle is determinate: It consists of a target picture and jigsaw pieces that, when properly assembled, are guaranteed to match the picture. Scientific puzzles are indeterminate: The number of pieces required to complete a picture is unpredictable; a particular piece may fit many pictures or none; it may fit only one picture, but the picture itself may be unknown; or the hypothetical picture may be imagined, but its component pieces may remain undiscovered. This article is about a current puzzle in the science of memory. It entails an imaginary picture and a search for pieces that fit it. The picture, or the hypothesis, depicts memory as consisting of a number of systems, each system serving somewhat different purposes and operating according to somewhat different principles.
Together they form the marvelous capacity that we call by the single name of memory, the capacity that permits organisms to benefit from their past experiences. Such a picture is at variance with conventional wisdom that holds memory to be essentially a single system, the idea that "memory is memory." The article consists of three main sections. In the first, I present some pretheoretical reasons for hypothesizing the existence of multiple memory systems and briefly discuss the concept of memory system. In the second, I describe a ternary classificatory scheme of memory--consisting of procedural, semantic, and episodic memory--and briefly compare this scheme with those proposed by others. In the third, I discuss the nature and logic of evidence for multiple systems and describe some experiments that have yielded data revealing independent effects of one and the same act of learning, effects seemingly at variance with the idea of a single system. I answer the question posed in the title of the article in the short concluding section.

1,776 citations

Proceedings ArticleDOI
20 Aug 1995
TL;DR: STM is used to provide a general highly concurrent method for translating sequential object implementations to non-blocking ones based on implementing a k-word compare&swap STM-transaction, a novel software method for supporting flexible transactional programming of synchronization operations.
Abstract: As we learn from the literature, flexibility in choosing synchronization operations greatly simplifies the task of designing highly concurrent programs. Unfortunately, existing hardware is inflexible and is at best on the level of a Load–Linked/Store–Conditional operation on a single word. Building on the hardware based transactional synchronization methodology of Herlihy and Moss, we offer software transactional memory (STM), a novel software method for supporting flexible transactional programming of synchronization operations. STM is non-blocking, and can be implemented on existing machines using only a Load–Linked/Store–Conditional operation. We use STM to provide a general highly concurrent method for translating sequential object implementations to non-blocking ones based on implementing a k-word compare&swap STM-transaction. Empirical evidence collected on simulated multiprocessor architectures shows that our method always outperforms the non-blocking translation methods in the style of Barnes, and outperforms Herlihy’s translation method for sufficiently large numbers of processors. The key to the efficiency of our software-transactional approach is that unlike Barnes style methods, it is not based on a costly “recursive helping” policy.
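The abstract's core idea, translating a sequential object implementation into a non-blocking one via compare-and-swap, can be illustrated in the single-word case. The sketch below is not the paper's STM (which atomically applies a k-word compare&swap transaction); it only shows the underlying retry discipline, with names of my own choosing:

```cpp
#include <atomic>
#include <cassert>

// Illustrative sketch only: a sequential counter made non-blocking with a
// single-word compare-and-swap retry loop. The paper's STM generalizes
// this pattern to transactions over k independent words.
struct NonBlockingCounter {
    std::atomic<long> value{0};

    long increment() {
        long old = value.load();
        // Retry until our CAS succeeds. If it fails, some other thread's
        // CAS succeeded, so the system as a whole always makes progress
        // (the non-blocking property).
        while (!value.compare_exchange_weak(old, old + 1)) {
            // compare_exchange_weak refreshed `old` with the current
            // value; loop and try again.
        }
        return old + 1;
    }
};
```

The difficulty the paper addresses is exactly that hardware offers this only for a single word (or LL/SC on a single word), while many data structures need several words updated atomically.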

1,369 citations

Proceedings Article
01 Jan 2015
TL;DR: This work describes a new class of learning models called memory networks, which reason with inference components combined with a long-term memory component; they learn how to use these jointly.
Abstract: We describe a new class of learning models called memory networks. Memory networks reason with inference components combined with a long-term memory component; they learn how to use these jointly. The long-term memory can be read and written to, with the goal of using it for prediction. We investigate these models in the context of question answering (QA) where the long-term memory effectively acts as a (dynamic) knowledge base, and the output is a textual response. We evaluate them on a large-scale QA task, and a smaller, but more complex, toy task generated from a simulated world. In the latter, we show the reasoning power of such models by chaining multiple supporting sentences to answer questions that require understanding the intension of verbs.
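The read path described above (write facts to a long-term memory, score each memory against the question, emit a textual response) can be reduced to a toy skeleton. This is a sketch of the control flow only, with word-overlap scoring standing in for the learned scoring function; all names here are invented for illustration:

```cpp
#include <cassert>
#include <set>
#include <sstream>
#include <string>
#include <vector>

// Tokenize a sentence into a set of words (toy stand-in for an embedding).
static std::set<std::string> words(const std::string& s) {
    std::set<std::string> w;
    std::istringstream in(s);
    for (std::string t; in >> t;) w.insert(t);
    return w;
}

// Toy sketch: a memory-network-style QA loop with the learned components
// replaced by word overlap. Real memory networks learn how to score and
// combine memories; only the write/score/respond structure is kept here.
struct ToyMemoryNetwork {
    std::vector<std::string> memory;  // readable, writable long-term memory

    void write(const std::string& fact) { memory.push_back(fact); }

    std::string respond(const std::string& question) const {
        const auto q = words(question);
        std::string best;
        std::size_t best_score = 0;
        for (const auto& m : memory) {           // score each stored memory
            std::size_t score = 0;
            for (const auto& t : words(m)) score += q.count(t);
            if (score > best_score) { best_score = score; best = m; }
        }
        return best;                             // textual response
    }
};
```

Chaining multiple supporting sentences, as in the paper's toy-world task, would require repeating the scoring step with the retrieved memory folded into the query.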

1,220 citations

Proceedings ArticleDOI
01 May 1990
TL;DR: A new model of memory consistency, called release consistency, that allows for more buffering and pipelining than previously proposed models is introduced and is shown to be equivalent to the sequential consistency model for parallel programs with sufficient synchronization.
Abstract: Scalable shared-memory multiprocessors distribute memory among the processors and use scalable interconnection networks to provide high bandwidth and low latency communication. In addition, memory accesses are cached, buffered, and pipelined to bridge the gap between the slow shared memory and the fast processors. Unless carefully controlled, such architectural optimizations can cause memory accesses to be executed in an order different from what the programmer expects. The set of allowable memory access orderings forms the memory consistency model or event ordering model for an architecture. This paper introduces a new model of memory consistency, called release consistency, that allows for more buffering and pipelining than previously proposed models. A framework for classifying shared accesses and reasoning about event ordering is developed. The release consistency model is shown to be equivalent to the sequential consistency model for parallel programs with sufficient synchronization. Possible performance gains from the less strict constraints of the release consistency model are explored. Finally, practical implementation issues are discussed, concentrating on issues relevant to scalable architectures.
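The acquire/release discipline that release consistency relies on survives in today's programming models. A minimal sketch, using C++11 atomics rather than the paper's hardware model: ordinary writes may be buffered and reordered freely, and ordering is enforced only at synchronization accesses.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

int payload = 0;                 // ordinary (unsynchronized) shared data
std::atomic<bool> ready{false};  // synchronization variable

void producer() {
    payload = 42;  // ordinary write: may be cached, buffered, pipelined
    // Release: all prior writes must be visible before the flag is set.
    ready.store(true, std::memory_order_release);
}

void consumer() {
    // Acquire: once the flag is seen, all writes before the matching
    // release are guaranteed visible.
    while (!ready.load(std::memory_order_acquire)) {}
    assert(payload == 42);
}
```

For programs whose shared accesses are all ordered through such acquire/release pairs ("sufficient synchronization" in the paper's terms), the observable behavior matches sequential consistency while still permitting the buffering and pipelining between synchronization points.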

1,169 citations

Journal ArticleDOI
TL;DR: Both theoretical and practical results show that the memory coherence problem can indeed be solved efficiently on a loosely coupled multiprocessor.
Abstract: The memory coherence problem in designing and implementing a shared virtual memory on loosely coupled multiprocessors is studied in depth. Two classes of algorithms, centralized and distributed, for solving the problem are presented. A prototype shared virtual memory on an Apollo ring based on these algorithms has been implemented. Both theoretical and practical results show that the memory coherence problem can indeed be solved efficiently on a loosely coupled multiprocessor.
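The centralized class of algorithms mentioned above can be sketched in a few lines: one manager tracks, for each shared virtual page, which processor currently owns it, and a faulting processor asks the manager to locate (and take over) the page. This is a hypothetical illustration of the bookkeeping only, not the paper's protocol; the distributed algorithms spread this table across processors.

```cpp
#include <cassert>
#include <unordered_map>

// Hypothetical sketch of a centralized ownership manager for a shared
// virtual memory: page id -> owning processor. Invalidations, copy sets,
// and the actual page transfer are omitted.
class CentralManager {
    std::unordered_map<int, int> owner_;
public:
    // Processor `proc` faults on `page`. Returns the previous owner,
    // which must send the page contents, and records `proc` as the new
    // owner. An unowned page is assigned directly to the requester.
    int acquire_page(int page, int proc) {
        auto it = owner_.find(page);
        int prev = (it == owner_.end()) ? proc : it->second;
        owner_[page] = proc;
        return prev;
    }
};
```

The coherence problem studied in the paper is keeping such ownership information consistent and cheap to query when the table itself, and the pages, move between loosely coupled processors.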

1,139 citations


Network Information
Related Topics (5)
Cache
59.1K papers, 976.6K citations
90% related
Compiler
26.3K papers, 578.5K citations
89% related
Data structure
28.1K papers, 608.6K citations
88% related
Scalability
50.9K papers, 931.6K citations
88% related
Server
79.5K papers, 1.4M citations
88% related
Performance Metrics
No. of papers in the topic in previous years

Year  Papers
2023  2
2022  11
2021  13
2020  17
2019  26
2018  31