Topic

Memory management

About: Memory management is a research topic. Over its lifetime, 16,743 publications have been published within this topic, receiving 312,028 citations. The topic is also known as: memory allocation.


Papers
Patent
James Chow, Thomas K. Gender
03 Jun 2002
TL;DR: In this paper, the authors present a flash memory management system and method with increased performance, which includes a free block mechanism, a disk maintenance mechanism, and a bad block detection mechanism.
Abstract: The present invention provides a flash memory management system and method with increased performance. The flash memory management system provides the ability to efficiently manage and allocate flash memory use in a way that improves reliability and longevity, while maintaining good performance levels. The flash memory management system includes a free block mechanism, a disk maintenance mechanism, and a bad block detection mechanism. The free block mechanism provides efficient sorting of free blocks to facilitate selecting low use blocks for writing. The disk maintenance mechanism provides for the ability to efficiently clean flash memory blocks during processor idle times. The bad block detection mechanism provides the ability to better detect when a block of flash memory is likely to go bad. The flash status mechanism stores information in fast access memory that describes the content and status of the data in the flash disk. The new bank detection mechanism provides the ability to automatically detect when new banks of flash memory are added to the system. Together, these mechanisms provide a flash memory management system that can improve the operational efficiency of systems that utilize flash memory.

112 citations
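The free block mechanism above keeps free blocks sorted so that low-use blocks are selected for writing. A minimal C sketch of that wear-leveling idea, assuming a per-block erase count as the wear metric (all names and the sorting strategy here are illustrative, not taken from the patent):

```c
#include <stdlib.h>

/* Illustrative free-block record: block number plus how often it was erased. */
struct free_block {
    unsigned index;        /* physical block number            */
    unsigned erase_count;  /* wear indicator used for ordering */
};

/* Order free blocks by ascending erase count (least-worn first). */
static int by_erase_count(const void *a, const void *b)
{
    const struct free_block *x = a, *y = b;
    return (x->erase_count > y->erase_count) - (x->erase_count < y->erase_count);
}

/* Pick the least-worn free block for the next write; returns -1 if none free. */
int select_block_for_write(struct free_block *free_list, size_t n)
{
    if (n == 0)
        return -1;
    qsort(free_list, n, sizeof free_list[0], by_erase_count);
    return (int)free_list[0].index;
}
```

Re-sorting the whole list on every write is only for clarity; a real implementation would keep the free list ordered incrementally, for example with a heap.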

Proceedings ArticleDOI
24 Feb 2014
TL;DR: A novel unified working memory and persistent store architecture, NVM Duet, which provides the required consistency and durability guarantees for persistent store while relaxing these constraints if accesses to NVM are for working memory is proposed.
Abstract: Emerging non-volatile memory (NVM) technologies have gained a lot of attention recently. The byte-addressability and high density of NVM enable computer architects to build large-scale main memory systems. NVM has also been shown to be a promising alternative to conventional persistent store. With NVM, programmers can persistently retain in-memory data structures without writing them to disk. Therefore, one can envision that in the future, NVM will play the role of both working memory and persistent store at the same time. Persistent store demands consistency and durability guarantees, thereby imposing new design constraints on the memory system. Consistency is achieved at the expense of serializing multiple write operations. Durability requires memory cells to guarantee non-volatility and thus reduces the write speed. Therefore, a unified architecture oblivious to these two use cases would lead to suboptimal design. In this paper, we propose a novel unified working memory and persistent store architecture, NVM Duet, which provides the required consistency and durability guarantees for persistent store while relaxing these constraints if accesses to NVM are for working memory. A cross-layer design approach is adopted to achieve the design goal. Overall, simulation results demonstrate that NVM Duet achieves up to 1.68x (1.32x on average) speedup compared with the baseline design.

112 citations
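The core distinction in the abstract is that persistent-store writes need ordering and durability guarantees while working-memory writes do not. The x86-flavored C sketch below shows that software-visible split using cache-line flush and store-fence intrinsics; it only illustrates the two access classes, since NVM Duet itself relaxes the constraints inside the memory system rather than in software.

```c
#include <stdint.h>
#include <emmintrin.h>   /* _mm_clflush (SSE2); also pulls in _mm_sfence */

/* Working-memory write: no durability guarantee is needed, so an ordinary
 * store suffices and the memory system is free to buffer and reorder it. */
static inline void nvm_store_working(uint64_t *dst, uint64_t val)
{
    *dst = val;
}

/* Persistent-store write: the update must become durable in order, so the
 * cache line is flushed to NVM and a fence orders it before later persists. */
static inline void nvm_store_persistent(uint64_t *dst, uint64_t val)
{
    *dst = val;
    _mm_clflush(dst);    /* evict the line from the volatile cache hierarchy */
    _mm_sfence();        /* order this persist before any following persists */
}
```

Serializing persists behind flushes and fences is exactly the cost the paper argues working-memory traffic should not have to pay.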

Proceedings ArticleDOI
20 Jun 2002
TL;DR: Measurements of thread-local heaps with direct global allocation on a 4-way multiprocessor IBM Netfinity server show that the overall garbage collection times have been substantially reduced, and that most long pauses have been eliminated.
Abstract: We present a memory management scheme for Java based on thread-local heaps. Assuming most objects are created and used by a single thread, it is desirable to free the memory manager from redundant synchronization for thread-local objects. Therefore, in our scheme each thread receives a partition of the heap in which it allocates its objects and in which it does local garbage collection without synchronization with other threads. We monitor objects dynamically to determine which are local and which are global. Furthermore, we suggest using profiling to identify allocation sites that almost exclusively allocate global objects, and allocate objects at these sites directly in a global area. We have implemented the thread-local heap memory manager and a preliminary mechanism for direct global allocation on an IBM prototype of JDK 1.3.0 for Windows. Our measurements of thread-local heaps with direct global allocation on a 4-way multiprocessor IBM Netfinity server show that the overall garbage collection times have been substantially reduced, and that most long pauses have been eliminated.

112 citations
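Because each thread allocates from its own heap partition, the common allocation path needs no synchronization; only the shared global area does. A minimal C sketch of that split (bump-pointer thread-local allocation plus a locked global area; sizes and names are illustrative, and the paper's actual implementation lives inside an IBM JDK 1.3.0 prototype):

```c
#include <pthread.h>
#include <stddef.h>

#define LOCAL_HEAP_SIZE  (64u * 1024)     /* per-thread partition (illustrative) */
#define GLOBAL_HEAP_SIZE (1024u * 1024)   /* shared area for escaping objects    */

/* Per-thread partition: bump-pointer allocation, no synchronization needed. */
static _Thread_local char   local_heap[LOCAL_HEAP_SIZE];
static _Thread_local size_t local_top;

/* Shared global area: every allocation here must synchronize across threads. */
static char            global_heap[GLOBAL_HEAP_SIZE];
static size_t          global_top;
static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;

static size_t align8(size_t n) { return (n + 7) & ~(size_t)7; }

/* Fast path: thread-local allocation; NULL would trigger a local collection. */
void *alloc_local(size_t size)
{
    size = align8(size);
    if (local_top + size > LOCAL_HEAP_SIZE)
        return NULL;
    void *p = local_heap + local_top;
    local_top += size;
    return p;
}

/* Slow path: objects known (or profiled) to be global are allocated here. */
void *alloc_global(size_t size)
{
    void *p = NULL;
    size = align8(size);
    pthread_mutex_lock(&global_lock);
    if (global_top + size <= GLOBAL_HEAP_SIZE) {
        p = global_heap + global_top;
        global_top += size;
    }
    pthread_mutex_unlock(&global_lock);
    return p;
}
```

Allocation sites that profiling shows to allocate almost only global objects would call alloc_global directly, which is the direct global allocation the abstract describes.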

Patent
Barry A. Burke
27 Jun 2007
TL;DR: In this article, a system for managing data includes providing at least one logical device having a table of information that maps sections of the logical device to sections of at least two storage areas.
Abstract: A system for managing data includes providing at least one logical device having a table of information that maps sections of the logical device to sections of at least two storage areas. Characteristics of data associated with one section of the logical device may be evaluated. The section of the data may be moved between the at least two storage areas from a first location to a second location according to a policy and based on the characteristics of the data. A copy of the data may be retained in the first location and a list maintained that identifies the copy of the data in the first location. The system provides for garbage collection processing for memory management.

112 citations
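The patent describes a per-logical-device table that maps each section to one of at least two storage areas, migrates sections by policy based on data characteristics, and retains the old copy for later garbage collection. A small C sketch of such a mapping entry and a hot-data promotion policy (the access-count policy and all field names are assumptions made for illustration, not taken from the patent):

```c
#include <stddef.h>
#include <stdbool.h>

enum tier { TIER_FAST, TIER_SLOW };   /* the "at least two storage areas" */

/* One row of the logical device's mapping table. */
struct section_map {
    enum tier tier;          /* storage area currently holding the section     */
    size_t    offset;        /* location of the live copy within that area     */
    bool      has_old_copy;  /* a retained copy is awaiting garbage collection */
    size_t    old_offset;    /* where the retained copy lives                  */
    unsigned  access_count;  /* data characteristic driving the policy         */
};

/* Policy example: promote a frequently accessed section to the fast area,
 * retaining the old copy and recording it for later garbage collection. */
void maybe_promote(struct section_map *m, size_t new_fast_offset, unsigned hot_threshold)
{
    if (m->tier == TIER_SLOW && m->access_count >= hot_threshold) {
        m->has_old_copy = true;        /* old copy stays in the slow area  */
        m->old_offset   = m->offset;
        m->tier         = TIER_FAST;   /* live data now lives in fast area */
        m->offset       = new_fast_offset;
    }
}
```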

Journal ArticleDOI
TL;DR: This survey introduces the problems of virtual caches and discusses solutions in the context of single-processor systems, with the aim of cataloging all solutions, past and present, and identifying technology trends and attractive future approaches.
Abstract: This survey exposes the problems related to virtual caches in the context of uniprocessor (Part 1) and multiprocessor (Part 2) systems. We review solutions that have been implemented or proposed in different contexts. The idea is to catalog all solutions, past and present, and to identify technology trends and attractive future approaches. We first overview the relevant properties of virtual memory and of physical caches. To solve the virtual-to-physical address bottleneck, processors may access caches directly with virtual addresses. This survey introduces the problems and discusses solutions in the context of single-processor systems.

112 citations
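The bottleneck the survey refers to is that a physically addressed cache cannot be indexed until the TLB has translated the virtual address; indexing with virtual-address bits removes translation from the lookup's critical path but creates the synonym and homonym problems the survey catalogs. A small C sketch of the two index calculations, assuming 64-byte lines and 4 KiB pages (both parameters are illustrative):

```c
#include <stdint.h>

#define LINE_BITS 6    /* 64-byte cache lines (assumed) */
#define PAGE_BITS 12   /* 4 KiB pages (assumed)         */

/* Virtually indexed lookup: the set index comes straight from the virtual
 * address, so it can be computed before (or in parallel with) translation.
 * If index_bits + LINE_BITS exceeds PAGE_BITS, some index bits are subject
 * to translation, and two virtual aliases of one physical line (synonyms)
 * can land in different sets; that is the central problem being surveyed. */
static inline uint64_t virt_set_index(uint64_t vaddr, unsigned index_bits)
{
    return (vaddr >> LINE_BITS) & ((1ull << index_bits) - 1);
}

/* Physically indexed lookup: indexing must wait for address translation. */
static inline uint64_t phys_set_index(uint64_t paddr, unsigned index_bits)
{
    return (paddr >> LINE_BITS) & ((1ull << index_bits) - 1);
}
```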


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations, 94% related
Scalability: 50.9K papers, 931.6K citations, 92% related
Server: 79.5K papers, 1.4M citations, 89% related
Virtual machine: 43.9K papers, 718.3K citations, 87% related
Scheduling (computing): 78.6K papers, 1.3M citations, 86% related
Performance Metrics
No. of papers in the topic in previous years
Year: Papers
2023: 33
2022: 88
2021: 629
2020: 467
2019: 461
2018: 591