
Memory management

About: Memory management is a research topic. Over the lifetime, 16743 publications have been published within this topic receiving 312028 citations. The topic is also known as: memory allocation.


Papers
Patent
22 Jan 1988
TL;DR: In this paper, a modular, expandable, topologically-distributed-memory multiprocessor computer comprises a plurality of non-directly communicating slave processors under the control of a synchronizer and a master processor.
Abstract: A modular, expandable, topologically-distributed-memory multiprocessor computer comprises a plurality of non-directly communicating slave processors under the control of a synchronizer and a master processor. Memory space is partitioned into a plurality of memory cells. Dynamic variables may be mapped into the memory cells so that they depend upon processing in nearby partitions. Each slave processor is connected in a topologically well-defined way through a dynamic bi-directional switching system (gateway) to different respective ones of the memory cells. Access by the slave processors to their respective topologically similar memory cells occurs concurrently or in parallel in such a way that no data-flow conflicts occur. The topology of data distribution may be chosen to take advantage of symmetries which occur in broad classes of problems. The system may be tied to a host computer used for data storage and analysis of data not efficiently processed by the multiprocessor computer.
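
The conflict-free access pattern described above can be sketched in plain Python. This is a toy illustration, not the patented mechanism: a 1-D domain stands in for the partitioned memory cells, one cell per slave processor, and each update step reads neighbour boundaries from a snapshot (playing the role of the synchronizer) so that no two "slaves" ever write the same cell.

```python
# Toy sketch of topologically-distributed memory cells. Each slave owns one
# cell; updates read neighbour boundary values from a frozen snapshot, so
# concurrent writes to distinct cells cannot conflict.

def step(cells):
    """One synchronized update: cell edges are averaged with neighbour edges."""
    snapshot = [list(c) for c in cells]      # synchronizer: freeze prior state
    n = len(cells)
    for i, cell in enumerate(cells):         # conceptually runs in parallel
        left = snapshot[(i - 1) % n][-1]     # boundary of left neighbour
        right = snapshot[(i + 1) % n][0]     # boundary of right neighbour
        cell[0] = (left + snapshot[i][0]) / 2
        cell[-1] = (snapshot[i][-1] + right) / 2
    return cells

cells = [[1.0, 1.0], [3.0, 3.0], [5.0, 5.0]]  # three memory cells
step(cells)
```

Because every slave reads only the snapshot and writes only its own cell, the iteration order is irrelevant, which mirrors the patent's claim that access occurs "concurrently or in parallel in such a way that no data-flow conflicts occur".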

133 citations

Patent
19 Apr 2004
TL;DR: In this article, the virtualization of virtual memory in a virtual machine environment within a virtual machine monitor (VMM) is described. The VMM may dynamically allocate memory resources to the various virtual machines running on the platform.
Abstract: An embodiment of the present invention enables the virtualizing of virtual memory in a virtual machine environment within a virtual machine monitor (VMM). Memory required for direct memory access (DMA) for device drivers, for example, is pinned by the VMM and prevented from being swapped out. The VMM may dynamically allocate memory resources to various virtual machines running in the platform. Other embodiments may be described and claimed.
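
The pinning idea can be illustrated with a small sketch (a toy model, not the patented VMM): pages backing DMA buffers are marked pinned, and the swapper refuses to evict them, while ordinary pages remain eligible for swap-out.

```python
# Toy model of a VMM's pinning policy: pinned pages (e.g. DMA buffers for
# device drivers) are never swapped out; unpinned pages may be.

class ToyVMM:
    def __init__(self, num_pages):
        self.pages = {i: "resident" for i in range(num_pages)}
        self.pinned = set()

    def pin(self, page):
        """Mark a page as non-swappable, e.g. because a device DMAs into it."""
        self.pinned.add(page)

    def swap_out(self, page):
        """Evict a page to backing store; refuse if it is pinned."""
        if page in self.pinned:
            return False            # pinned pages must stay resident
        self.pages[page] = "swapped"
        return True

vmm = ToyVMM(4)
vmm.pin(0)                          # page 0 backs a DMA buffer
vmm.swap_out(0)                     # refused: page 0 stays resident
vmm.swap_out(1)                     # ordinary page: swapped out
```

The invariant matters because a device performing DMA writes to physical addresses directly; if the VMM swapped the page out, the device would scribble over whatever memory replaced it.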

132 citations

Patent
14 Jun 2007
TL;DR: In this article, the authors present a memory module that includes at least one memory chip, and an intelligent chip coupled to the memory module, where the intelligent chip is configured to implement at least a part of a RAS feature.
Abstract: One embodiment of the present invention sets forth a memory module that includes at least one memory chip, and an intelligent chip coupled to the at least one memory chip and a memory controller, where the intelligent chip is configured to implement at least a part of a RAS feature. The disclosed architecture allows one or more RAS features to be implemented locally to the memory module using one or more intelligent register chips, one or more intelligent buffer chips, or some combination thereof. Such an approach not only increases the effectiveness of certain RAS features that were available in prior art systems, but also enables the implementation of certain RAS features that were not available in prior art systems.
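
As an illustration of a RAS feature implemented locally on the module (a hedged sketch only; the patent does not specify this particular scheme), an intelligent chip could attach a parity bit to each stored word and verify it on every read:

```python
# Illustrative sketch: per-word even parity checked on read, the kind of
# error-detection RAS feature an intelligent chip on the module could apply.

def parity(word):
    """Even-parity bit of an integer word."""
    return bin(word).count("1") % 2

class ToyModule:
    def __init__(self):
        self.cells = {}

    def write(self, addr, word):
        self.cells[addr] = (word, parity(word))   # store word + check bit

    def read(self, addr):
        word, p = self.cells[addr]
        if parity(word) != p:
            raise RuntimeError(f"parity error at {addr:#x}")
        return word

m = ToyModule()
m.write(0x10, 0b1011)
m.read(0x10)                       # parity checks out, returns the word
m.cells[0x20] = (0b1010, 1)        # simulate a bit flip: stored parity wrong
```

Performing the check on the module itself, rather than in the memory controller, is what lets such features work even with controllers that do not support them, which is the effectiveness argument the abstract makes.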

132 citations

Journal ArticleDOI
G.J. Foschini, B. Gopinath
TL;DR: The structure of optimal policies for the model considered with three types of users is determined, which consists of limiting the number of waiting requests of each type, and reserving a part of the memory to each type.
Abstract: Efficient design of service facilities, such as data or computer networks that meet random demands, often leads to the sharing of resources among users. Contention for the use of a resource results in queueing. The waiting room is a part of any such service facility. The number of accepted service requests per unit of time (throughput), or the fraction of the time the servers are busy (utilization), are often used as performance measures to compare designs. Most common models in queueing theory consider the design of the waiting rooms with the assumption that, although individual requests may differ from one another, they are statistically indistinguishable. However, there are several instances where available information allows us to classify the requests for service into different types. In such cases the design of the service facility not only involves the determination of an optimum size for the waiting room but also the rules of sharing it among the different types. Even with a fixed set of resources, the rules of sharing them can influence performance. In data networks (or computer networks) the "waiting room" consists of memory of one kind or another. Messages (jobs) destined for different locations (processors) sharing common storage is an important example of shared use of memory. Recently, Kleinrock and Kamoun have modeled such use of memory and computed the performance of various policies for managing the allocation of memory to several types of users. Decisions to accept or reject a demand for service were based on the number of waiting requests of each type. However, the optimal policy was not determined even in the case where there were only two types of users. We determine the structure of optimal policies for the model considered with three types of users. The optimal policy consists of limiting the number of waiting requests of each type, and reserving a part of the memory to each type.
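
The *structure* of the optimal policy described above (per-type caps plus per-type reservations over a common pool) can be sketched as an admission rule. The specific parameter values below are arbitrary placeholders, not the paper's optimal ones:

```python
# Sketch of the policy structure: each request type t has a cap limit[t] on
# waiting requests and a reserved share reserve[t] of the memory; remaining
# space is a common pool any type may draw on.

class SharedBuffer:
    def __init__(self, total, limit, reserve):
        assert sum(reserve.values()) <= total
        self.common = total - sum(reserve.values())   # unreserved slots
        self.limit, self.reserve = limit, reserve
        self.count = {t: 0 for t in limit}

    def admit(self, t):
        """Accept a type-t request iff its cap and the memory allow it."""
        if self.count[t] >= self.limit[t]:
            return False                      # per-type cap reached
        in_reserve = self.count[t] < self.reserve[t]
        if not in_reserve and self.common <= 0:
            return False                      # common pool exhausted
        if not in_reserve:
            self.common -= 1
        self.count[t] += 1
        return True

# total of 4 slots: 1 reserved per type, 2 in the common pool
buf = SharedBuffer(total=4, limit={"a": 3, "b": 2}, reserve={"a": 1, "b": 1})
buf.admit("a")    # accepted into type-a reservation
buf.admit("a")    # accepted from the common pool
```

The reservation guarantees each type a minimum share even when another type floods the buffer, while the caps prevent any one type from monopolizing the common pool; the paper's contribution is proving this limit-plus-reservation shape is optimal for three user types.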

132 citations

Journal ArticleDOI
TL;DR: This paper presents the solution of an optimization problem that appears in the design of double-loop structures for local networks, and also in data memory allocation and data alignment in SIMD processors.
Abstract: This paper presents the solution of the following optimization problem that appears in the design of double-loop structures for local networks and also in data memory allocation and data alignment in SIMD processors.

132 citations


Network Information
Related Topics (5)
- Cache: 59.1K papers, 976.6K citations (94% related)
- Scalability: 50.9K papers, 931.6K citations (92% related)
- Server: 79.5K papers, 1.4M citations (89% related)
- Virtual machine: 43.9K papers, 718.3K citations (87% related)
- Scheduling (computing): 78.6K papers, 1.3M citations (86% related)
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2023  33
2022  88
2021  629
2020  467
2019  461
2018  591