Memory management

About: Memory management is a research topic. Over its lifetime, 16,743 publications have been published within this topic, receiving 312,028 citations. The topic is also known as: memory allocation.


Papers
Patent
27 Nov 1996
TL;DR: In this patent, a mechanism for maintaining a consistent, periodically updated state in main memory without constraining normal computer operation is provided, thereby enabling a computer system to recover from faults without loss of data or processing continuity.
Abstract: A mechanism for maintaining a consistent, periodically updated state in main memory without constraining normal computer operation is provided, thereby enabling a computer system to recover from faults without loss of data or processing continuity. In this invention, a first computer includes a processor and input/output elements connected to a main memory subsystem including a primary element. A second computer has a remote checkpoint memory element, which may include one or more buffer memories and a shadow memory, which is connected to the main memory subsystem of the first computer. During normal processing, an image of data written to the primary memory element is captured by the remote checkpoint memory element. When a new checkpoint is desired (thereby establishing a consistent state in main memory to which all executing applications can safely return following a fault), the data previously captured is used to establish a new checkpointed state in the second computer. In case of failure of the first computer, the second computer can be restarted to operate from the last checkpoint established for the first computer. This structure and protocol can guarantee a consistent state in main memory, thus enabling fault-tolerant operation.
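As a rough illustration of the protocol described above (a software model only; the patent specifies hardware, and all names below are hypothetical), the write-capture and checkpoint-commit flow might look like this in C++:

    // Software model of the checkpointing scheme described in the abstract.
    // The patent describes hardware; this only mimics the data flow.
    #include <cstdint>
    #include <cstdio>
    #include <map>
    #include <vector>

    constexpr size_t kMemSize = 1 << 16;

    struct RemoteCheckpointMemory {              // lives in the second computer
        std::map<size_t, uint8_t> buffer;        // writes captured since the last checkpoint
        std::vector<uint8_t> shadow;             // last consistent, checkpointed state
        RemoteCheckpointMemory() : shadow(kMemSize, 0) {}

        void capture(size_t addr, uint8_t v) { buffer[addr] = v; }

        void commit_checkpoint() {               // establish a new consistent state
            for (const auto& [addr, v] : buffer) shadow[addr] = v;
            buffer.clear();
        }
    };

    struct PrimaryComputer {                     // first computer: processor + main memory
        std::vector<uint8_t> main_memory;
        RemoteCheckpointMemory& remote;
        explicit PrimaryComputer(RemoteCheckpointMemory& r)
            : main_memory(kMemSize, 0), remote(r) {}

        void write(size_t addr, uint8_t v) {
            main_memory[addr] = v;
            remote.capture(addr, v);             // an image of every write is captured remotely
        }
    };

    int main() {
        RemoteCheckpointMemory remote;
        PrimaryComputer primary(remote);
        primary.write(0x10, 42);
        remote.commit_checkpoint();              // the state containing 42 is now recoverable
        primary.write(0x10, 99);                 // not yet checkpointed: lost on a fault
        std::printf("recoverable value: %u\n", remote.shadow[0x10]);
        return 0;
    }

On failure of the first computer, recovery amounts to restarting from the shadow copy and discarding whatever is still in the capture buffer.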

160 citations

Journal ArticleDOI
Denis Foley, John M. Danskin
TL;DR: Nvidia's high-performance Pascal GPU, GP100, features in-package high-bandwidth memory, support for efficient FP16 operations, unified memory, and instruction preemption, and incorporates Nvidia's NVLink I/O for high-bandwidth connections between GPUs and between GPUs and CPUs.
Abstract: This article introduces Nvidia's high-performance Pascal GPU. GP100 features in-package high-bandwidth memory, support for efficient FP16 operations, unified memory, and instruction preemption, and incorporates Nvidia's NVLink I/O for high-bandwidth connections between GPUs and between GPUs and CPUs.

159 citations

Journal ArticleDOI
01 Aug 2009
TL;DR: The Lazy-Adaptive Tree (LA-tree), a novel index structure designed to improve performance by minimizing accesses to flash, is presented; it amortizes the cost of node reads and writes by performing update operations lazily through cascaded buffers.
Abstract: Flash memories are in ubiquitous use for storage on sensor nodes, mobile devices, and enterprise servers. However, they present significant challenges in designing tree indexes due to their fundamentally different read and write characteristics in comparison to magnetic disks. In this paper, we present the Lazy-Adaptive Tree (LA-tree), a novel index structure that is designed to improve performance by minimizing accesses to flash. The LA-tree has three key features: 1) it amortizes the cost of node reads and writes by performing update operations in a lazy manner using cascaded buffers, 2) it dynamically adapts buffer sizes to workload using an online algorithm, which we prove to be optimal under the cost model for raw NAND flashes, and 3) it optimizes index parameters, memory management, and storage reclamation to address flash constraints. Our performance results on raw NAND flashes show that the LA-tree achieves 2x to 12x gains over the best of alternate schemes across a range of workloads and memory constraints. Initial results on SSDs are also promising, with 3x to 6x gains in most cases.
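The lazy, cascaded buffering at the heart of the LA-tree can be sketched as follows (a toy model with hypothetical names; the paper's online buffer-size adaptation, flash cost model, and storage reclamation are omitted):

    // Toy sketch of lazy, cascaded buffering (not the paper's implementation).
    #include <cstdio>
    #include <utility>
    #include <vector>

    struct Node {
        int lo, hi;                               // key range this node covers
        size_t cap;                               // buffer capacity; adapted online in the real LA-tree
        std::vector<std::pair<int, int>> buffer;  // pending (key, value) updates
        std::vector<std::pair<int, int>> entries; // materialized entries (leaves only)
        std::vector<Node> children;

        Node(int l, int h, size_t c) : lo(l), hi(h), cap(c) {}

        void insert(int key, int value) {
            buffer.emplace_back(key, value);      // lazy: buffer the update, defer flash I/O
            if (buffer.size() >= cap) flush();    // amortize one node write over `cap` updates
        }

        void flush() {
            if (children.empty()) {               // leaf: apply the buffered updates
                entries.insert(entries.end(), buffer.begin(), buffer.end());
            } else {                              // interior node: cascade one level down
                for (const auto& kv : buffer)
                    for (auto& c : children)
                        if (kv.first >= c.lo && kv.first < c.hi) {
                            c.insert(kv.first, kv.second);
                            break;
                        }
            }
            buffer.clear();
        }
    };

    int main() {
        Node root(0, 100, 4);
        root.children.emplace_back(0, 50, 4);
        root.children.emplace_back(50, 100, 4);
        for (int k = 0; k < 10; ++k) root.insert(k * 9, k);
        root.flush();                             // drain what the inserts left behind
        for (auto& c : root.children) c.flush();
        std::printf("left leaf holds %zu entries\n", root.children[0].entries.size());
        return 0;
    }

Each flush writes one node while retiring many buffered updates, which is where the amortization over expensive flash writes comes from.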

158 citations

Patent
Aram Lindahl, Jesse Boettcher, David J. Rempel, Pulkit Desai, Vincent Y. Wong
05 Sep 2008
TL;DR: In this patent, a technique for managing memory allocation in an electronic device is described: a memory allocation strategy is loaded for an application executed by the device's processor, and memory for the application is requested from various memory locations in accordance with that strategy.
Abstract: A technique for managing memory allocation in an electronic device is provided. In one embodiment, a method includes loading a memory allocation strategy for an application executed by a processor of a device, and requesting memory for the application from various memory locations in accordance with the memory allocation strategy. In one embodiment, the device includes multiple sets of contiguous memory blocks and a memory heap, memory may be requested from at least one of these memory locations, and memory may then be allocated to the application in response to the request. In some embodiments, the memory allocation strategy may be stored in the device prior to execution of the application. Various other methods, devices, and manufactures are also provided.
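A minimal sketch of this strategy-driven flow, with the pool layout, strategy format, and all names assumed for illustration rather than taken from the patent:

    // Sketch of strategy-driven allocation: try preferred locations in order.
    // BlockPool stands in for a set of physically contiguous blocks.
    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    enum class Region { ContiguousPool, Heap };

    struct BlockPool {
        static constexpr size_t kSize = 4096;
        unsigned char bytes[kSize];
        size_t used = 0;
        void* alloc(size_t n) {
            if (used + n > kSize) return nullptr; // pool exhausted; caller falls back
            void* p = bytes + used;
            used += n;
            return p;
        }
    };

    struct AllocationStrategy {                   // loaded before the application runs
        std::vector<Region> order;                // preference order over memory locations
    };

    BlockPool g_pool;

    void* allocate(const AllocationStrategy& s, size_t n) {
        for (Region r : s.order) {                // request from each location in turn
            void* p = (r == Region::Heap) ? std::malloc(n) : g_pool.alloc(n);
            if (p) return p;
        }
        return nullptr;                           // no location could satisfy the request
    }

    int main() {
        AllocationStrategy audio{{Region::ContiguousPool, Region::Heap}};
        void* buf = allocate(audio, 1024);        // lands in the pool while space remains
        std::printf("allocated at %p\n", buf);
        return 0;
    }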

157 citations

Patent
13 Sep 1996
TL;DR: In this patent, a computer system includes a memory controller, a unified system memory, and memory clients, each having access to the system memory via the memory controller; translation hardware maps virtual addresses of pixel buffers to physical memory locations.
Abstract: A computer system provides dynamic memory allocation for graphics. The computer system includes a memory controller, a unified system memory, and memory clients each having access to the system memory via the memory controller. Memory clients can include a graphics rendering engine, a CPU, an image processor, a data compression/expansion device, an input/output device, and a graphics back-end device. The computer system provides read/write access to the unified system memory, through the memory controller, for each of the memory clients. Translation hardware is included for mapping virtual addresses of pixel buffers to physical memory locations in the unified system memory. Pixel buffers are dynamically allocated as tiles of physically contiguous memory. Translation hardware is implemented in each of the computational devices included as memory clients in the computer system, primarily the rendering engine.
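The translation step for one access could look roughly like this (a sketch under an assumed tile size and table format; the patent implements the equivalent logic in per-client hardware):

    // Sketch of tile-based translation for a dynamically allocated pixel buffer.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    constexpr uint32_t kTileSize = 64 * 1024;     // one physically contiguous tile

    struct PixelBufferMapping {
        std::vector<uint32_t> tile_base;          // physical base address of each tile
    };

    // What the per-client translation hardware would do for a single access.
    uint32_t translate(const PixelBufferMapping& m, uint32_t vaddr) {
        uint32_t tile   = vaddr / kTileSize;      // which tile of the buffer
        uint32_t offset = vaddr % kTileSize;      // offset within that tile
        return m.tile_base[tile] + offset;        // tiles need not be adjacent to each other
    }

    int main() {
        // A pixel buffer allocated as three scattered, individually contiguous tiles.
        PixelBufferMapping fb{{0x10000, 0xA0000, 0x40000}};
        std::printf("vaddr 0x%X -> paddr 0x%X\n", 0x12345u, translate(fb, 0x12345u));
        return 0;
    }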

156 citations


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations (94% related)
Scalability: 50.9K papers, 931.6K citations (92% related)
Server: 79.5K papers, 1.4M citations (89% related)
Virtual machine: 43.9K papers, 718.3K citations (87% related)
Scheduling (computing): 78.6K papers, 1.3M citations (86% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    33
2022    88
2021    629
2020    467
2019    461
2018    591