Topic

Memory management

About: Memory management is a research topic. Over its lifetime, 16,743 publications have been published within this topic, receiving 312,028 citations. The topic is also known as: memory allocation.


Papers
Proceedings ArticleDOI
15 Nov 2015
TL;DR: GraphReduce is presented, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device's internal memory capacity and significantly outperforms other competing out-of-memory approaches.
Abstract: Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device's internal memory capacity. GraphReduce adopts a combination of edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs with efficient graph data movement between the host and device. GraphReduce-based programming is performed via device functions that include gatherMap, gatherReduce, apply, and scatter, implemented by programmers for the graph algorithms they wish to realize. Extensive experimental evaluations for a wide variety of graph inputs and algorithms demonstrate that GraphReduce significantly outperforms other competing out-of-memory approaches.

81 citations
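The abstract above names the four user-supplied device functions of GraphReduce's Gather-Apply-Scatter (GAS) model: gatherMap, gatherReduce, apply, and scatter. As a rough illustration of how that split divides a graph algorithm, here is a minimal host-side C++ sketch of a toy PageRank written against those four hooks. The function names come from the abstract; the PageRank program, the plain CPU loops, and the convergence check are illustrative assumptions, since GraphReduce itself implements these as CUDA device functions run over asynchronous GPU streams with explicit host/device data movement.

// Toy PageRank expressed through the four GAS hooks named in the abstract.
// Host-side sketch only; not the GraphReduce API.
#include <cmath>
#include <cstdio>
#include <vector>

struct Edge { int src, dst; };

struct PageRankProgram {
    // gatherMap: contribution carried by a single in-edge
    static double gatherMap(const std::vector<double>& rank,
                            const std::vector<int>& outDeg, const Edge& e) {
        return rank[e.src] / outDeg[e.src];
    }
    // gatherReduce: combine two gathered contributions into one
    static double gatherReduce(double a, double b) { return a + b; }
    // apply: fold the reduced value into the vertex state, return the change
    static double apply(std::vector<double>& rank, int v, double acc) {
        double updated = 0.15 + 0.85 * acc;
        double delta = std::fabs(updated - rank[v]);
        rank[v] = updated;
        return delta;
    }
    // scatter: mark out-neighbours of vertices whose value moved as active
    static void scatter(const Edge& e, std::vector<char>& nextActive) {
        nextActive[e.dst] = 1;
    }
};

int main() {
    const int n = 4;
    std::vector<Edge> edges = {{0,1},{1,2},{2,0},{2,3},{3,0}};
    std::vector<int> outDeg(n, 0);
    for (const Edge& e : edges) ++outDeg[e.src];

    std::vector<double> rank(n, 1.0);
    for (int step = 0; step < 20; ++step) {            // bounded supersteps
        std::vector<double> acc(n, 0.0);
        for (const Edge& e : edges)                    // gather phase
            acc[e.dst] = PageRankProgram::gatherReduce(
                acc[e.dst], PageRankProgram::gatherMap(rank, outDeg, e));

        std::vector<char> nextActive(n, 0);
        for (int v = 0; v < n; ++v) {                  // apply + scatter phases
            double delta = PageRankProgram::apply(rank, v, acc[v]);
            if (delta > 1e-9)                          // naive O(V*E) scatter scan
                for (const Edge& e : edges)
                    if (e.src == v) PageRankProgram::scatter(e, nextActive);
        }
        int active = 0;
        for (char a : nextActive) active += a;
        if (active == 0) break;                        // empty frontier: converged
    }
    for (int v = 0; v < n; ++v) std::printf("rank[%d] = %.4f\n", v, rank[v]);
}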

Patent
20 Mar 1991
TL;DR: In this paper, a method of managing the memory of a CM multiprocessor computer system is described, in which the data and stack pages of a process are transferred, when called for by the process, to the coupled memory region of the CPU module to which the process is assigned.
Abstract: A method of managing the memory of a CM multiprocessor computer system is disclosed. A CM multiprocessor computer system includes: a plurality of CPU modules 11a . . . 11n to which processes are assigned; one or more optional global memories 13a . . . 13n; a storage medium 15a, 15b . . . 15n; and a global interconnect 12. Each of the CPU modules 11a . . . 11n includes a processor 21 and a coupled memory 23 accessible by the local processor without using the global interconnect 12. Processors have access to remote coupled memory regions via the global interconnect 12. Memory is managed by transferring, from said storage medium, the data and stack pages of a process to be run to the coupled memory region of the CPU module to which the process is assigned, when the pages are called for by the process. Other pages are transferred to global memory, if available. At prescribed intervals, the free memory of each coupled memory region and global memory is evaluated to determine if it is below a threshold. If below the threshold, a predetermined number of pages of the memory region are scanned. Infrequently used pages are placed on the end of a list of pages that can be replaced with pages stored in the storage medium. Pages associated with processes that are terminating are placed at the head of the list of replacement pages.

81 citations
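The page-management steps in this abstract (an interval-based free-memory check, a bounded page scan, infrequently used pages appended to a replacement list, and pages of terminating processes placed at its head) can be summarized in a short C++ sketch. The thresholds, the use-count heuristic, and all names below are illustrative assumptions rather than details taken from the patent.

// Sketch of the interval-driven reclamation policy described above.
#include <algorithm>
#include <cstdio>
#include <deque>
#include <vector>

struct Page {
    int id;
    int owner;       // owning process id
    int useCount;    // low values approximate "infrequently used"
};

struct MemoryRegion {            // one coupled-memory region or global memory
    std::vector<Page> pages;
    int freePages;
};

constexpr int kFreeThreshold = 8;  // reclaim when free pages drop below this
constexpr int kScanBatch     = 4;  // pages scanned per interval
constexpr int kUseCutoff     = 2;  // below this, a page counts as infrequently used

// Called at prescribed intervals for each memory region.
void reclaimInterval(MemoryRegion& region, std::deque<int>& replaceList,
                     const std::vector<int>& terminatingProcs) {
    auto isTerminating = [&](int owner) {
        return std::find(terminatingProcs.begin(), terminatingProcs.end(), owner)
               != terminatingProcs.end();
    };

    // Pages of terminating processes go to the head of the replacement list.
    for (const Page& p : region.pages)
        if (isTerminating(p.owner)) replaceList.push_front(p.id);

    if (region.freePages >= kFreeThreshold) return;   // above threshold: done

    int scanned = 0;
    for (const Page& p : region.pages) {              // scan a bounded batch
        if (scanned == kScanBatch) break;
        ++scanned;
        if (!isTerminating(p.owner) && p.useCount < kUseCutoff)
            replaceList.push_back(p.id);              // infrequently used -> tail
    }
}

int main() {
    MemoryRegion region{{{0, 10, 0}, {1, 10, 5}, {2, 11, 1}, {3, 12, 9}}, 2};
    std::deque<int> replaceList;
    reclaimInterval(region, replaceList, /*terminatingProcs=*/{11});
    for (int id : replaceList)
        std::printf("page %d can be replaced from backing storage\n", id);
}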

Patent
14 May 2003
TL;DR: Memory access requests are successively received in a memory request queue of a memory controller, and any conflicts or potential delays between temporally proximate requests that would occur if the memory access requests were to be executed in the received order are detected.
Abstract: Memory access requests are successively received in a memory request queue of a memory controller. Any conflicts or potential delays between temporally proximate requests that would occur if the memory access requests were to be executed in the received order are detected, and the received order of the memory access requests is rearranged to avoid or minimize the conflicts or delays and to optimize the flow of data to and from the memory data bus. The memory access requests are executed in the reordered sequence, while the originally received order of the requests is tracked. After execution, data read from the memory device by the execution of the read-type memory access requests are transferred to the respective requestors in the order in which the read requests were originally received.

81 citations
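The mechanism described above has three parts: queue incoming requests, reorder temporally proximate ones to avoid conflicts on the memory data bus, and still return read data in the originally received order. The C++ sketch below illustrates that flow with a deliberately simplified DRAM model: grouping requests by (bank, row) stands in for whatever conflict-avoidance policy the controller applies, and a map keyed by arrival sequence number stands in for the hardware that tracks original order.

// Simplified model of the reordering flow described in the abstract above.
// The (bank, row) grouping heuristic and the data values are illustrative.
#include <algorithm>
#include <cstdio>
#include <map>
#include <tuple>
#include <vector>

struct Request {
    int seq;          // original arrival order in the memory request queue
    int bank, row;    // simplified DRAM address
    bool isRead;
};

int main() {
    // Arrival order alternates rows within bank 0, which would force a row
    // close/open between every pair of requests if executed as received.
    std::vector<Request> queue = {
        {0, 0, 5, true}, {1, 0, 9, true}, {2, 0, 5, true}, {3, 0, 9, false},
    };

    // Reorder: stable-sort by (bank, row) so requests to the same open row
    // execute back to back, avoiding conflicts between temporally close requests.
    std::vector<Request> schedule = queue;
    std::stable_sort(schedule.begin(), schedule.end(),
                     [](const Request& a, const Request& b) {
                         return std::tie(a.bank, a.row) < std::tie(b.bank, b.row);
                     });

    // Execute in the reordered sequence; buffer read data keyed by the original
    // sequence number so the original order can still be honoured on return.
    std::map<int, int> readBuffer;                        // seq -> read data
    for (const Request& r : schedule) {
        std::printf("execute seq %d (bank %d, row %d, %s)\n",
                    r.seq, r.bank, r.row, r.isRead ? "read" : "write");
        if (r.isRead) readBuffer[r.seq] = 100 + r.seq;    // stand-in data value
    }

    // Transfer read data to the requestors in the originally received order.
    for (const Request& r : queue)
        if (r.isRead)
            std::printf("return data for seq %d: %d\n", r.seq, readBuffer[r.seq]);
}

Executed as written, the two row-5 requests and the two row-9 requests are issued back to back, yet the read data is still returned for sequence numbers 0, 1, and 2 in that order.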

Patent
27 Dec 1991
TL;DR: In this paper, an apparatus for memory management in network systems provides added margins of reliability for the receipt of vital maintenance operations protocol (MOP) and station management packets (SMP); in addition, overflow allocations of buffers are assigned for the receipt of critical system packets that would otherwise be discarded in a highly congested system.
Abstract: An apparatus for memory management in network systems provides added margins of reliability for the receipt of vital maintenance operations protocol (MOP) and station management packets (SMP). In addition, overflow allocations of buffers are assigned for the receipt of critical system packets which would otherwise typically be discarded in a highly congested system. Thus, if a MOP or an SMP packet is received from the network when the allocated space for storing these types of packets is full, the packet is stored in the overflow allocation, and thus the critical packets are not lost.

81 citations
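The buffering policy in this abstract reduces to a small decision rule: ordinary packets may be dropped when their allocation is exhausted, while MOP and SMP packets fall back to a reserved overflow allocation before being discarded. The C++ sketch below illustrates that rule; the pool sizes, packet classes, and function names are illustrative assumptions, not details from the patent.

// Illustrative overflow-allocation rule for critical (MOP/SMP) packets.
#include <cstdio>

enum class PacketClass { Normal, MOP, SMP };

struct BufferPools {
    int normalFree   = 2;   // regular receive buffers
    int criticalFree = 2;   // normal allocation shared by MOP/SMP packets
    int overflowFree = 2;   // reserved overflow allocation for critical packets
};

// Returns true if the packet was buffered, false if it had to be discarded.
bool receivePacket(BufferPools& pools, PacketClass cls) {
    if (cls == PacketClass::Normal) {
        if (pools.normalFree > 0) { --pools.normalFree; return true; }
        return false;                         // non-critical packets may be dropped
    }
    // MOP / SMP: use the normal critical allocation first...
    if (pools.criticalFree > 0) { --pools.criticalFree; return true; }
    // ...and fall back to the reserved overflow allocation when it is full.
    if (pools.overflowFree > 0) { --pools.overflowFree; return true; }
    return false;
}

int main() {
    BufferPools pools;
    const PacketClass burst[] = {PacketClass::MOP, PacketClass::SMP, PacketClass::MOP,
                                 PacketClass::Normal, PacketClass::Normal,
                                 PacketClass::Normal, PacketClass::SMP};
    for (PacketClass p : burst)
        std::printf("%s packet -> %s\n",
                    p == PacketClass::Normal ? "normal" : "critical",
                    receivePacket(pools, p) ? "buffered" : "dropped");
}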

Patent
21 Nov 2011
TL;DR: In this article, the authors present systems and/or methods of managing content in providing a playback experience associated with a portable storage medium by detecting access to a first portable storage medium with multimedia content recorded on it, evaluating the content on that medium, and evaluating the local memory of the multimedia playback device.
Abstract: Some embodiments provide systems and/or methods of managing content in providing a playback experience associated with a portable storage medium by detecting access to a first portable storage medium with multimedia content recorded on the first portable storage medium; evaluating content on the first portable storage medium; evaluating local memory of the multimedia playback device; determining, in response to the evaluation of the content on the first portable storage medium and the evaluation of the local memory, whether memory on the local memory needs to be freed up in implementing playback of multimedia content in association with the first portable storage medium; and moving one or more contents stored on the local memory of the multimedia playback device to a virtual storage accessible by the multimedia playback device over a distributed network in response to determining that memory on the local memory needs to be freed up.

80 citations
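The decision flow described above (evaluate the medium's content, evaluate local memory, and free local memory by moving items to network-backed virtual storage when playback needs the space) can be sketched in a few lines of C++. The sizes, item names, and the least-recently-used offload order are illustrative assumptions, not details from the patent.

// Sketch of freeing local memory by offloading items to virtual storage.
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

struct LocalItem {
    std::string name;
    long sizeMB;
    long lastUsed;   // smaller value = used longer ago
};

int main() {
    long localFreeMB = 300;                   // evaluation of local memory
    long requiredMB  = 900;                   // evaluation of the medium's content
    std::vector<LocalItem> local = {
        {"trailer_pack", 400, 10}, {"bonus_feature", 500, 42}, {"game_save", 50, 90},
    };

    if (requiredMB > localFreeMB) {
        // Free memory by moving the least recently used items to virtual storage.
        std::vector<LocalItem*> byAge;
        for (LocalItem& it : local) byAge.push_back(&it);
        std::sort(byAge.begin(), byAge.end(),
                  [](const LocalItem* a, const LocalItem* b) {
                      return a->lastUsed < b->lastUsed;
                  });
        for (LocalItem* it : byAge) {
            if (requiredMB <= localFreeMB) break;
            std::printf("move %s (%ld MB) to virtual storage\n",
                        it->name.c_str(), it->sizeMB);
            localFreeMB += it->sizeMB;        // space reclaimed locally
        }
    }
    std::printf("local free after offload: %ld MB (needed %ld MB)\n",
                localFreeMB, requiredMB);
}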


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations, 94% related
Scalability: 50.9K papers, 931.6K citations, 92% related
Server: 79.5K papers, 1.4M citations, 89% related
Virtual machine: 43.9K papers, 718.3K citations, 87% related
Scheduling (computing): 78.6K papers, 1.3M citations, 86% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    33
2022    88
2021    629
2020    467
2019    461
2018    591