Topic

Memory management

About: Memory management is a research topic. Over its lifetime, 16,743 publications have been published within this topic, receiving 312,028 citations. The topic is also known as: memory allocation.


Papers
Proceedings ArticleDOI
12 Sep 2005
TL;DR: Presents a monadic interface to low-level hardware features that is a suitable basis for building operating systems in Haskell, and shows how a variety of simple O/S kernels can be constructed on top of the interface.
Abstract: We describe a monadic interface to low-level hardware features that is a suitable basis for building operating systems in Haskell. The interface includes primitives for controlling memory management hardware, user-mode process execution, and low-level device I/O. The interface enforces memory safety in nearly all circumstances. Its behavior is specified in part by formal assertions written in a programming logic called P-Logic. The interface has been implemented on bare IA32 hardware using the Glasgow Haskell Compiler (GHC) runtime system. We show how a variety of simple O/S kernels can be constructed on top of the interface, including a simple separation kernel and a demonstration system in which the kernel, window system, and all device drivers are written in Haskell.

82 citations
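As a concrete illustration of the hardware features such an interface must control, the sketch below builds an IA32 page-table entry in C. The field layout follows the standard IA32 format for 4 KiB pages; the helper function and constant names are hypothetical and are not taken from the paper's Haskell interface.

```c
/* Illustrative sketch: constructing a 32-bit IA32 page-table entry,
 * the kind of structure a "memory management hardware" primitive
 * would install. Helper and constant names are hypothetical. */
#include <stdint.h>
#include <stdio.h>

#define PTE_PRESENT  (1u << 0)   /* mapping is valid               */
#define PTE_WRITABLE (1u << 1)   /* page may be written            */
#define PTE_USER     (1u << 2)   /* accessible from user mode      */

/* Combine a 4 KiB-aligned physical frame address with access flags. */
static uint32_t make_pte(uint32_t frame_addr, int writable, int user) {
    uint32_t pte = frame_addr & 0xFFFFF000u;   /* bits 31..12: frame address */
    pte |= PTE_PRESENT;
    if (writable) pte |= PTE_WRITABLE;
    if (user)     pte |= PTE_USER;
    return pte;
}

int main(void) {
    /* Map a user-mode, read-only page backed by physical frame 0x00234000. */
    printf("pte = 0x%08x\n", (unsigned)make_pte(0x00234000u, 0, 1));
    return 0;
}
```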

Journal ArticleDOI
01 Jan 2014
TL;DR: This work presents four different techniques to incorporate persistent memory into the database processing stack in light of a high-level API that enables the system to dynamically optimize the workflow of the algorithms or, alternatively, allows the developer to tune the write profile of the algorithms.
Abstract: To mitigate the impact of the widening gap between the memory needs of CPUs and what standard memory technology can deliver, system architects have introduced a new class of memory technology termed persistent memory. Persistent memory is byte-addressable, but exhibits asymmetric I/O: writes are typically one order of magnitude more expensive than reads. Byte addressability combined with I/O asymmetry render the performance profile of persistent memory unique. Thus, it becomes imperative to find new ways to seamlessly incorporate it into database systems. We do so in the context of query processing. We focus on the fundamental operations of sort and join processing. We introduce the notion of write-limited algorithms that effectively minimize the I/O cost. We give a high-level API that enables the system to dynamically optimize the workflow of the algorithms; or, alternatively, allows the developer to tune the write profile of the algorithms. We present four different techniques to incorporate persistent memory into the database processing stack in light of this API. We have implemented and extensively evaluated all our proposals. Our results show that the algorithms deliver on their promise of I/O-minimality and tunable performance. We showcase the merits and deficiencies of each implementation technique, thus taking a solid first step towards incorporating persistent memory into query processing.

82 citations
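To make the read/write asymmetry concrete, here is a minimal cost-model sketch in C showing why trading extra read passes for fewer writes can pay off on persistent memory. The two strategies, the constants, and the roughly 10x write penalty are illustrative assumptions, not the paper's actual algorithms or measurements.

```c
/* Minimal sketch of an asymmetric I/O cost model for persistent memory.
 * All names and constants are illustrative. */
#include <stdio.h>

#define READ_COST   1.0   /* relative cost of reading one block  */
#define WRITE_COST 10.0   /* writes assumed ~10x more expensive  */

/* Classic external merge sort: every block is read and written
 * once per merge pass. */
static double mergesort_cost(double blocks, int passes) {
    return passes * blocks * (READ_COST + WRITE_COST);
}

/* A hypothetical "write-limited" sort: the input is scanned several
 * extra times (cheap reads) so each output block is written only once. */
static double write_limited_cost(double blocks, int extra_scans) {
    return blocks * WRITE_COST + (1 + extra_scans) * blocks * READ_COST;
}

int main(void) {
    double blocks = 1e6;
    printf("merge sort (2 passes):   %.0f\n", mergesort_cost(blocks, 2));
    printf("write-limited (3 scans): %.0f\n", write_limited_cost(blocks, 3));
    return 0;
}
```

Under these assumed costs the write-limited variant wins despite doing more reads, which is the trade-off the write-limited algorithms are designed to exploit.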

Patent
29 Feb 2000
TL;DR: In this patent, the inventors propose a method of memory management in a computer system comprising memory, which includes the steps of: in response to requests for allocation of memory blocks that remain allocated for different durations, allocating each memory block from one of a plurality of regions in the memory based on the duration that the memory block is to remain allocated; and maintaining a plurality of memory segments of one or more sizes in memory.
Abstract: A method of memory management in a computer system comprising memory. The method includes the steps of: in response to requests for allocation of memory blocks that remain allocated for different durations, allocating each memory block from one of a plurality of regions in the memory based on the duration that the memory block is to remain allocated; and maintaining a plurality of memory segments of one or more sizes in the memory, and in response to a request for allocation of a memory block, if the requested block size is less than a predetermined size, then allocating the requested block from among said segments, otherwise allocating the requested block from another portion of the memory. The number of data segments is changed in relation to memory requests. Further, at least a portion of the memory is allocated to a cache having one or more buffers. The cache buffers can be allocated for non-cache use, including increasing the number of said data segments, and are then deallocated back to the cache.

82 citations
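The size-based routing described above can be sketched loosely as follows: small requests come from a pool of reusable fixed-size segments, larger requests go to the general heap. This is only an illustration of the general idea; the threshold, pool size, and function names are hypothetical and do not reproduce the patented method (which additionally routes blocks into regions by expected allocation duration).

```c
/* Illustrative allocator front end: small requests are served from a
 * pool of reusable fixed-size segments, large requests from the heap.
 * All names and sizes are hypothetical. */
#include <stdlib.h>

#define SEGMENT_SIZE  64          /* "predetermined size" threshold  */
#define POOL_CAPACITY 1024        /* number of segments kept on hand */

static void *segment_pool[POOL_CAPACITY];
static int   pool_count = 0;

void *mm_alloc(size_t size) {
    if (size <= SEGMENT_SIZE) {
        if (pool_count > 0)                    /* reuse a cached segment */
            return segment_pool[--pool_count];
        return malloc(SEGMENT_SIZE);           /* grow the pool lazily   */
    }
    return malloc(size);                       /* large block: general heap */
}

void mm_free(void *p, size_t size) {
    if (size <= SEGMENT_SIZE && pool_count < POOL_CAPACITY) {
        segment_pool[pool_count++] = p;        /* return segment to pool */
        return;
    }
    free(p);
}
```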

Journal ArticleDOI
TL;DR: The goal is to design an efficient memory management protocol which guarantees that the memory consumption of parallel simulation is of the same order as that of sequential simulation; an optimal algorithm called artificial rollback is proposed.
Abstract: Recently there has been a great deal of interest in performance evaluation of parallel simulation. Most work is devoted to the time complexity and assumes that the amount of memory available for parallel simulation is unlimited. This paper studies the space complexity of parallel simulation. Our goal is to design an efficient memory management protocol which guarantees that the memory consumption of parallel simulation is of the same order as sequential simulation. (Such an algorithm is referred to as optimal.) First, we derive the relationships among the space complexities of sequential simulation, Chandy-Misra simulation [2], and Time Warp simulation [7]. We show that Chandy-Misra may consume more storage than sequential simulation, or vice versa. We then show that Time Warp never consumes less memory than sequential simulation. Next, we describe cancelback, an optimal Time Warp memory management protocol proposed by Jefferson. Although cancelback is considered to be a complete solution to the storage management problem in Time Warp, some efficiency issues in implementing this algorithm must be considered. We propose an optimal algorithm called artificial rollback. We show that this algorithm is easy to implement and analyze. An implementation of artificial rollback is given, which is integrated with processor scheduling to adjust the memory consumption rate based on the amount of free storage available in the system.

82 citations
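As a rough illustration of one decision a storage-triggered rollback has to make, the C sketch below picks the logical process furthest ahead in virtual time as the rollback victim, since only optimistic work past GVT (global virtual time) can be safely undone and its storage reclaimed. The data structures and selection policy shown are simplified assumptions, not the paper's full artificial rollback or cancelback protocol.

```c
/* Hypothetical sketch: choose which logical process (LP) to roll back
 * when free storage runs low in an optimistic parallel simulation. */
#include <stddef.h>
#include <stdio.h>

typedef struct {
    double lvt;            /* local virtual time of this logical process */
    size_t mem_in_use;     /* storage held by its saved states/messages  */
} LogicalProcess;

/* Pick the LP furthest ahead in virtual time as the rollback victim;
 * LPs at or behind GVT cannot be rolled back, so they are skipped.
 * Returns -1 if no LP is ahead of GVT. */
static int pick_rollback_victim(const LogicalProcess *lps, int n, double gvt) {
    int victim = -1;
    for (int i = 0; i < n; i++) {
        if (lps[i].lvt <= gvt)
            continue;
        if (victim < 0 || lps[i].lvt > lps[victim].lvt)
            victim = i;
    }
    return victim;
}

int main(void) {
    LogicalProcess lps[] = { {10.0, 4096}, {42.0, 16384}, {17.0, 2048} };
    int v = pick_rollback_victim(lps, 3, 12.0);
    if (v >= 0)
        printf("roll back LP %d (lvt=%.1f) to reclaim %zu bytes\n",
               v, lps[v].lvt, lps[v].mem_in_use);
    return 0;
}
```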

Proceedings ArticleDOI
31 May 2015
TL;DR: JAFAR, a Near-Data Processing (NDP) accelerator for pushing selects down to memory in modern column-stores, is presented, and it is shown that it has the potential to provide a 9x improvement for selects in column-stores.
Abstract: The continuous growth of main memory size allows modern data systems to process entire large scale datasets in memory. The increase in memory capacity, however, is not matched by a proportional decrease in memory latency, causing a mismatch for in-memory processing. As a result, data movement through the memory hierarchy is now one of the main performance bottlenecks for main memory data systems. Database systems researchers have proposed several innovative solutions to minimize data movement and to make data access patterns hardware-aware. Nevertheless, all relevant rows and columns for a given query have to be moved through the memory hierarchy; hence, movement of large data sets is on the critical path. In this paper, we present JAFAR, a Near-Data Processing (NDP) accelerator for pushing selects down to memory in modern column-stores. JAFAR implements the select operator and allows only qualifying data to travel up the memory hierarchy. Through a detailed simulation of JAFAR hardware we show that it has the potential to provide a 9x improvement for selects in column-stores. In addition, we discuss both hardware and software challenges for using NDP in database systems as well as opportunities for further NDP accelerators to boost additional relational operators.

82 citations
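The core idea, evaluating the select near the data so that only qualifying values travel up the memory hierarchy, can be sketched in software as below. This is purely an illustrative analogue in C; JAFAR itself is evaluated as simulated hardware, and the function and predicate here are hypothetical.

```c
/* Software analogue of a pushed-down select: scan a column "near the
 * data" and return only the qualifying row positions, so non-matching
 * values never cross the memory hierarchy. Illustrative only. */
#include <stdint.h>
#include <stdio.h>

/* Returns the number of matches; qualifying row ids are written to out. */
static size_t select_push_down(const int32_t *column, size_t n,
                               int32_t lo, int32_t hi, uint32_t *out) {
    size_t m = 0;
    for (size_t i = 0; i < n; i++)
        if (column[i] >= lo && column[i] <= hi)   /* predicate: lo <= v <= hi */
            out[m++] = (uint32_t)i;
    return m;
}

int main(void) {
    int32_t  col[] = { 5, 42, 17, 99, 23, 8 };
    uint32_t ids[6];
    size_t m = select_push_down(col, 6, 10, 50, ids);
    for (size_t i = 0; i < m; i++)
        printf("row %u qualifies\n", (unsigned)ids[i]);
    return 0;
}
```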


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations (94% related)
Scalability: 50.9K papers, 931.6K citations (92% related)
Server: 79.5K papers, 1.4M citations (89% related)
Virtual machine: 43.9K papers, 718.3K citations (87% related)
Scheduling (computing): 78.6K papers, 1.3M citations (86% related)
Performance
Metrics
No. of papers in the topic in previous years
Year: Papers
2023: 33
2022: 88
2021: 629
2020: 467
2019: 461
2018: 591