Topic

Memory management

About: Memory management is a research topic. Over the lifetime, 16743 publications have been published within this topic, receiving 312028 citations. The topic is also known as: memory allocation.


Papers
Patent
13 Jul 1995
TL;DR: In this article, the authors propose an apparatus and method for dynamically adjusting the power/performance characteristics of a memory subsystem by dynamically tracking the behavior of the memory subsystem and predicting the probability that the next event will have certain characteristics, such as whether it will result in a memory cycle that requires the attention of a cache memory.
Abstract: An apparatus and method for dynamically adjusting the power/performance characteristics of a memory subsystem. Since the memory subsystem access requirements are heavily dependent on the application being executed, static methods of enabling or disabling the individual memory system components (as are used in prior art) are less than optimal from a power consumption perspective. By dynamically tracking the behavior of the memory subsystem, the invention predicts the probability that the next event will have certain characteristics, such as whether it will result in a memory cycle that requires the attention of a cache memory, whether that memory cycle will result in a cache memory hit, and whether a DRAM page hit in main memory will occur if the requested data is not in one of the levels of cache memory. Based on these probabilities, the invention dynamically enables or disables components of the subsystem. By intelligently adjusting the state of these components, significant power savings are achieved without degradation in performance.
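The patent describes the mechanism only at the level of tracked probabilities and enable/disable decisions; as a rough illustration, the C sketch below tracks the recent cache-hit probability with an exponential moving average and gates a cache component on it. The `set_l2_enabled` hook, the smoothing factor, and the thresholds are illustrative assumptions, not the patented design.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical hardware hook: enable or disable an L2 cache array.
 * In real hardware this would gate clocks/power to the component. */
static void set_l2_enabled(bool on) {
    printf("L2 cache %s\n", on ? "enabled" : "disabled");
}

/* Track the recent probability of an L2 hit with an exponential
 * moving average; thresholds and alpha are assumed values. */
typedef struct {
    double hit_prob;   /* estimated P(next access hits in L2) */
    bool   l2_on;
} mem_power_ctl;

static void record_access(mem_power_ctl *c, bool l2_hit) {
    const double alpha = 0.05;   /* smoothing factor (assumed) */
    c->hit_prob = (1.0 - alpha) * c->hit_prob + alpha * (l2_hit ? 1.0 : 0.0);

    /* Hysteresis: power down when hits become unlikely, power up
     * again once the predicted hit probability recovers. */
    if (c->l2_on && c->hit_prob < 0.10) {
        c->l2_on = false;
        set_l2_enabled(false);
    } else if (!c->l2_on && c->hit_prob > 0.30) {
        c->l2_on = true;
        set_l2_enabled(true);
    }
}

int main(void) {
    mem_power_ctl c = { .hit_prob = 0.5, .l2_on = true };
    /* Simulated access stream: a streaming phase with almost no reuse. */
    for (int i = 0; i < 200; i++)
        record_access(&c, /*l2_hit=*/ (i % 50 == 0));
    return 0;
}
```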

179 citations

Journal ArticleDOI
01 Sep 1992
TL;DR: This work describes the design, implementation and evaluation of a virtual memory system that provides application control of physical memory using external page-cache management and claims that this approach can significantly improve performance for many memory-bound applications while reducing kernel complexity, yet does not complicate other applications or reduce their performance.
Abstract: Next generation computer systems will have gigabytes of physical memory and processors in the 100 MIPS range or higher. Contrary to some conjectures, this trend requires more sophisticated memory management support for memory-bound computations such as scientific simulations and systems such as large-scale database systems, even though memory management for most programs will be less of a concern. We describe the design, implementation and evaluation of a virtual memory system that provides application control of physical memory using external page-cache management. In this approach, a sophisticated application is able to monitor and control the amount of physical memory it has available for execution, the exact contents of this memory, and the scheduling and nature of page-in and page-out using the abstraction of a physical page cache provided by the kernel. We claim that this approach can significantly improve performance for many memory-bound applications while reducing kernel complexity, yet does not complicate other applications or reduce their performance.
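The paper proposes a kernel abstraction (an external page cache) rather than publishing a fixed API, so the C sketch below only suggests what application-level control of physical memory could look like. Every function name here (`pcache_size`, `pcache_resize`, `pcache_pageout`, `pcache_pagein`) is a hypothetical stand-in, not a system call of the described system.

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical external page-cache interface: illustrative stand-ins
 * for kernel calls, stubbed out so the sketch is self-contained. */
typedef unsigned long pfn_t;               /* physical page frame handle */

static size_t pcache_size(void)            { return 1024; }   /* frames granted */
static int    pcache_resize(size_t frames) { (void)frames; return 0; }
static int    pcache_pageout(pfn_t frame)  { (void)frame; return 0; }
static int    pcache_pagein(void *vaddr, pfn_t frame) { (void)vaddr; (void)frame; return 0; }

/* A memory-bound application reacting to the frames it actually has:
 * it asks for more physical memory and directs its own replacement
 * instead of leaving victim selection to the kernel. */
int main(void) {
    size_t frames = pcache_size();
    printf("granted %zu physical frames\n", frames);

    if (frames < 2048)
        pcache_resize(2048);               /* request a larger page cache */

    /* Application-directed replacement: evict a page it knows is cold,
     * then map in the page it will touch next. */
    pcache_pageout(/*frame=*/42);
    pcache_pagein(/*vaddr=*/NULL, /*frame=*/42);
    return 0;
}
```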

178 citations

Patent
Darrell L. Cox
12 Oct 1993
TL;DR: An expandable memory system as discussed by the authors includes a central memory controller and one or more plug-in memory modules, each memory module having an on-board memory module controller coupled in a serial network architecture which forms a memory command link.
Abstract: An expandable memory system including a central memory controller and one or more plug-in memory modules, each memory module having an on-board memory module controller coupled in a serial network architecture which forms a memory command link. Each memory module controller is serially linked to the central memory controller. The memory system is automatically configured by the central controller: each memory module in the system is assigned a base address, in turn, to define a contiguous memory space without user intervention or the need to physically reset switches. The memory system includes the capability to disable and bypass bad memory modules and reassign memory addresses without leaving usable memory unallocated.
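As a rough sketch of the auto-configuration step described above, the C code below walks the modules in daisy-chain order, assigns contiguous base addresses, and bypasses modules that fail their self-test. The structure layout and the health flag are assumptions made for illustration; they are not the patented command-link protocol.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative model of a plug-in module as seen over the serial
 * command link; the fields are assumptions, not the patented format. */
typedef struct {
    uint32_t size;       /* capacity in bytes reported by the module */
    bool     healthy;    /* result of the module's self-test         */
    uint32_t base_addr;  /* assigned by the central controller       */
    bool     enabled;
} mem_module;

/* Assign base addresses in daisy-chain order so that good modules
 * form one contiguous space; bad modules are disabled and bypassed. */
static void configure_modules(mem_module *mods, int n) {
    uint32_t next_base = 0;
    for (int i = 0; i < n; i++) {
        if (!mods[i].healthy) {
            mods[i].enabled = false;        /* bypass bad module */
            continue;
        }
        mods[i].base_addr = next_base;
        mods[i].enabled   = true;
        next_base += mods[i].size;          /* keep the space contiguous */
    }
    printf("configured %u bytes of usable memory\n", (unsigned)next_base);
}

int main(void) {
    mem_module mods[] = {
        { .size = 1u << 20, .healthy = true  },
        { .size = 1u << 20, .healthy = false },   /* will be bypassed */
        { .size = 2u << 20, .healthy = true  },
    };
    configure_modules(mods, 3);
    return 0;
}
```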

178 citations

Patent
02 Nov 2006
TL;DR: In this article, write operations store data in different physical memory locations that share a common logical address, with sequence information stored in each location to indicate which write occurred last; the erased memory locations are split into a list available to be used and a list not available to be used, so that after a failure only the available list needs to be analyzed.
Abstract: Write operations store data in different physical memory locations. Each of the physical memory locations is associated with a logical address that is shared in common among the physical locations. Sequence information stored in the physical memory location indicates which one of the write operations occurred last. The available erased memory locations can be split into a list of erased memory locations available to be used and a list of erased memory locations not available to be used. Then, on a failure, only the list of erased memory locations available to be used needs to be analyzed to reconstruct the consumption states of memory locations.
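The following C sketch illustrates the bookkeeping idea in the abstract: multiple physical locations can hold data for the same logical address, a stored sequence number identifies the most recent write, and erased locations are split into an available list and an unavailable list. All field names, sizes, and list handling are assumptions for illustration, not the patented format.

```c
#include <stdint.h>
#include <stdio.h>

#define NLOC 8

/* Several physical locations can hold versions of the same logical
 * address; a monotonically increasing sequence number stored with the
 * data says which write happened last. */
typedef struct {
    int      erased;     /* 1 if the location is erased         */
    uint32_t logical;    /* logical address the data belongs to */
    uint32_t seq;        /* sequence number of the write        */
} phys_loc;

static phys_loc flash[NLOC];
static uint32_t next_seq = 1;

/* Erased locations are kept on two lists; only "available" locations
 * are handed out, so after a crash only that list needs scanning. */
static int avail[NLOC], navail;          /* erased and allowed to be used */
static int unavail[NLOC], nunavail;      /* erased but held back          */

static void write_logical(uint32_t logical) {
    int loc = avail[--navail];           /* take an available erased location */
    flash[loc].erased  = 0;
    flash[loc].logical = logical;
    flash[loc].seq     = next_seq++;     /* marks this as the latest write */
}

/* Find the current data for a logical address: highest sequence wins. */
static int latest(uint32_t logical) {
    int best = -1;
    for (int i = 0; i < NLOC; i++)
        if (!flash[i].erased && flash[i].logical == logical &&
            (best < 0 || flash[i].seq > flash[best].seq))
            best = i;
    return best;
}

int main(void) {
    for (int i = 0; i < NLOC; i++) flash[i].erased = 1;
    /* Split the erased locations: hold one back, hand out the rest. */
    unavail[nunavail++] = 0;
    for (int i = 1; i < NLOC; i++) avail[navail++] = i;

    write_logical(7);
    write_logical(7);                    /* same logical address, new location */
    printf("latest copy of logical 7 is at physical %d\n", latest(7));
    return 0;
}
```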

178 citations

Proceedings ArticleDOI
Minsoo Rhu, Natalia Gimelshein, Jason Clemons, Arslan Zulfiqar, Stephen W. Keckler
15 Oct 2016
TL;DR: In this article, the authors propose a runtime memory manager that virtualizes the memory usage of DNNs such that both GPU and CPU memory can simultaneously be utilized for training larger DNNs.
Abstract: The most widely used machine learning frameworks require users to carefully tune their memory usage so that the deep neural network (DNN) fits into the DRAM capacity of a GPU. This restriction hampers a researcher's flexibility to study different machine learning algorithms, forcing them to either use a less desirable network architecture or parallelize the processing across multiple GPUs. We propose a runtime memory manager that virtualizes the memory usage of DNNs such that both GPU and CPU memory can simultaneously be utilized for training larger DNNs. Our virtualized DNN (vDNN) reduces the average GPU memory usage of AlexNet by up to 89%, OverFeat by 91%, and GoogLeNet by 95%, a significant reduction in the memory requirements of DNNs. Similar experiments on VGG-16, one of the deepest and most memory-hungry DNNs to date, demonstrate the memory efficiency of our proposal. vDNN enables VGG-16 with batch size 256 (requiring 28 GB of memory) to be trained on a single NVIDIA Titan X GPU card containing 12 GB of memory, with 18% performance loss compared to a hypothetical, oracular GPU with enough memory to hold the entire DNN.
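vDNN itself is built on CUDA's memory and stream APIs; the plain-C sketch below only illustrates the offload/prefetch bookkeeping: a layer's feature maps are copied to host memory after the forward step and brought back to GPU memory just before the corresponding backward step. The `gpu_to_host`/`host_to_gpu` stand-ins and the buffer handling are assumptions, not vDNN's implementation.

```c
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

#define NLAYERS 4

/* Stand-ins for GPU<->CPU transfers (asynchronous copies in a real system). */
static void gpu_to_host(void *dst, const void *src, size_t n) { memcpy(dst, src, n); }
static void host_to_gpu(void *dst, const void *src, size_t n) { memcpy(dst, src, n); }

typedef struct {
    size_t bytes;
    void  *gpu_buf;     /* feature maps resident in GPU memory (simulated) */
    void  *host_buf;    /* host-side copy after offload                    */
} layer_t;

int main(void) {
    layer_t layers[NLAYERS];

    /* Forward pass: after computing each layer, offload its feature maps
     * to host memory so GPU memory only holds the layer currently in use. */
    for (int i = 0; i < NLAYERS; i++) {
        layers[i].bytes    = 1 << 20;
        layers[i].gpu_buf  = malloc(layers[i].bytes);    /* produced by layer i */
        layers[i].host_buf = malloc(layers[i].bytes);
        gpu_to_host(layers[i].host_buf, layers[i].gpu_buf, layers[i].bytes);
        free(layers[i].gpu_buf);                         /* release GPU memory */
        layers[i].gpu_buf = NULL;
    }

    /* Backward pass: prefetch each layer's feature maps back just before
     * its gradient computation needs them. */
    for (int i = NLAYERS - 1; i >= 0; i--) {
        layers[i].gpu_buf = malloc(layers[i].bytes);
        host_to_gpu(layers[i].gpu_buf, layers[i].host_buf, layers[i].bytes);
        printf("layer %d feature maps resident for backward\n", i);
        free(layers[i].gpu_buf);
        free(layers[i].host_buf);
    }
    return 0;
}
```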

178 citations


Network Information
Related Topics (5)
Cache
59.1K papers, 976.6K citations
94% related
Scalability
50.9K papers, 931.6K citations
92% related
Server
79.5K papers, 1.4M citations
89% related
Virtual machine
43.9K papers, 718.3K citations
87% related
Scheduling (computing)
78.6K papers, 1.3M citations
86% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    33
2022    88
2021    629
2020    467
2019    461
2018    591