Topic

Memory management

About: Memory management is a research topic. Over its lifetime, 16,743 publications have been published within this topic, receiving 312,028 citations. The topic is also known as: memory allocation.


Papers
Patent
26 Jul 2001
TL;DR: A Compressed Memory Management Unit (CMMU) as discussed by the authors allows a processor or I/O master to address more system memory than physically exists, by translating system addresses into physical addresses and managing the compression and/or decompression of data at the physical addresses as required.
Abstract: A method and system for allowing a processor or I/O master to address more system memory than physically exists are described. A Compressed Memory Management Unit (CMMU) may keep least recently used pages compressed, and most recently and/or frequently used pages uncompressed in physical memory. The CMMU translates system addresses into physical addresses, and may manage the compression and/or decompression of data at the physical addresses as required. The CMMU may provide data to be compressed or decompressed to a compression/decompression engine. In some embodiments, the data to be compressed or decompressed may be provided to a plurality of compression/decompression engines that may be configured to operate in parallel. The CMMU may pass the resulting physical address to the system memory controller to access the physical memory. A CMMU may be integrated in a processor, a system memory controller or elsewhere within the system.

76 citations
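The CMMU described above can be illustrated with a minimal Python sketch. This is a toy software model, not the patented hardware design: the `CMMU` class, the `PAGE_SIZE` constant, and the use of `zlib` stand in for the hardware compression/decompression engines, and the LRU policy is modeled with an `OrderedDict`.

```python
import zlib
from collections import OrderedDict

PAGE_SIZE = 4096  # hypothetical page size for illustration

class CMMU:
    """Toy model of a Compressed Memory Management Unit: most recently
    used pages stay uncompressed; least recently used pages are kept
    compressed and are decompressed on demand."""

    def __init__(self, max_uncompressed_pages):
        self.max_uncompressed = max_uncompressed_pages
        self.uncompressed = OrderedDict()  # page number -> raw bytes, in LRU order
        self.compressed = {}               # page number -> zlib-compressed bytes

    def _evict_lru(self):
        # Compress the least recently used page to free physical space.
        page_no, data = self.uncompressed.popitem(last=False)
        self.compressed[page_no] = zlib.compress(bytes(data))

    def _fault_in(self, system_addr):
        # Translate a system address to a resident page, decompressing if needed.
        page_no, offset = divmod(system_addr, PAGE_SIZE)
        if page_no in self.uncompressed:
            self.uncompressed.move_to_end(page_no)  # mark most recently used
        else:
            if page_no in self.compressed:
                raw = zlib.decompress(self.compressed.pop(page_no))
            else:
                raw = bytes(PAGE_SIZE)  # never-touched page reads as zeros
            if len(self.uncompressed) >= self.max_uncompressed:
                self._evict_lru()
            self.uncompressed[page_no] = bytearray(raw)
        return page_no, offset

    def read(self, system_addr):
        page_no, offset = self._fault_in(system_addr)
        return self.uncompressed[page_no][offset]

    def write(self, system_addr, value):
        page_no, offset = self._fault_in(system_addr)
        self.uncompressed[page_no][offset] = value
```

With `max_uncompressed_pages=2`, touching a third page compresses the least recently used one, and a later access to that page transparently decompresses it again, which is the behavior the abstract describes for system addresses that exceed physical memory.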

Patent
06 Feb 1981
TL;DR: In this article, the number of information units to be transferred is determined by operating the keyboard in a certain, preferably combinatory, fashion; each information unit or group of units transferred from the main memory by operating the keyboard in some other predetermined fashion is either given a certain address code or allocated to a specific accounting memory.
Abstract: A device for the use of and easily portable by an individual, consisting of a main memory, a microprocessor, a keyboard for the control of the microprocessor, etc., and at least one accounting memory and a display. Operation of the keyboard in a pre-determined fashion will cause the microprocessor to transfer one or more units of information from the main memory to at least one accounting memory. The point in time at which this transfer is made directly or indirectly (for example via internal delay circuits) may be determined by the individual. The number of information units to be transferred is determined by operating the keyboard in a certain, preferably combinatory fashion. Each information unit or group of units which is transferred from the main memory to the accounting memory by operating the keyboard in some other, certain, preferably combinatory fashion will either be given a certain address code or will be allocated to a specific accounting memory.

76 citations
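The transfer operation the patent describes can be sketched as a few lines of Python. This is a loose illustrative model, assuming the device behaves like an electronic purse: the `PortableDevice` class and its method names are invented here and do not appear in the patent.

```python
class PortableDevice:
    """Toy model of the patented device: keyboard operations transfer
    information units from a main memory to named accounting memories."""

    def __init__(self, main_units):
        self.main = main_units   # units held in the main memory
        self.accounts = {}       # accounting-memory name -> units held

    def transfer(self, units, account):
        # A keyboard combination selects both the number of units and
        # the target accounting memory (or its address code).
        if units > self.main:
            raise ValueError("insufficient units in main memory")
        self.main -= units
        self.accounts[account] = self.accounts.get(account, 0) + units
        return self.accounts[account]
```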

Patent
14 Oct 2003
TL;DR: In this paper, power-up and power-down methods for non-volatile memory systems with at least one reserved memory area are described; signatures written into the reserved area record whether initialization and an orderly power-down completed successfully.
Abstract: Methods and apparatus for enabling a power up process of a non-volatile memory to occur efficiently are disclosed. According to one aspect of the present invention, a method for utilizing a memory system that has a non-volatile memory with at least one reserved memory area includes providing power to the memory system, initializing the non-volatile memory, and writing a first signature into the reserved memory area. The first signature is arranged to indicate that the memory system was successfully initialized. In one embodiment, the method also includes executing a power down process on the memory system, and writing a second signature into the reserved memory area which indicates that the power down process has been executed.

76 citations
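The signature scheme in the abstract can be sketched briefly in Python. This is a simplified model under stated assumptions: the signature byte strings, the `NonVolatileMemory` class, and the `power_up`/`power_down` functions are all invented for illustration, and a real implementation would persist the reserved area in flash.

```python
INIT_OK = b"INIT_OK"       # hypothetical "first signature": init succeeded
SHUTDOWN_OK = b"PWRDN_OK"  # hypothetical "second signature": clean power-down

class NonVolatileMemory:
    """Toy model: a non-volatile memory with one reserved signature area."""
    def __init__(self):
        self.reserved = b""  # would survive power cycles in a real device

def power_up(nvm):
    # A clean-shutdown signature means the last power-down completed,
    # so a costly full scan/rebuild of the memory can be skipped.
    clean = nvm.reserved == SHUTDOWN_OK
    # ... initialize data structures (full rebuild only if not clean) ...
    nvm.reserved = INIT_OK   # record that initialization succeeded
    return clean

def power_down(nvm):
    # ... flush caches and finish pending writes ...
    nvm.reserved = SHUTDOWN_OK  # record that power-down completed
```

On the first boot no signature is present, so the slow path runs; after a clean power-down, the next power-up finds the second signature and can initialize efficiently.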

Journal ArticleDOI
01 Mar 1993
TL;DR: A flexible memory management scheme which adapts well to a variation in the size of the working area and/or the number of processors is designed, which is useful when designing an efficient memory management scheme for a wider range of parallel applications.
Abstract: This article addresses the problems of memory management in a parallel sparse matrix factorization based on a multifrontal approach. We describe how we have adapted and modified the ideas of Duff and Reid used in a sequential symmetric multifrontal method to design an efficient memory management scheme for parallel sparse matrix factorization. With our solution, using the minimum size of the working area to run the multifrontal method on a multiprocessor, we can exploit only a part of the parallelism of the method. If we slightly increase the size of the working space, then most of the potential parallelism of the method can be exploited. We have designed a flexible memory management scheme which adapts well to a variation in the size of the working area and/or the number of processors. General parallel applications can always be represented in terms of a computational graph, which is effectively the underlying structure of a parallel multifrontal method. Therefore, we believe that the techniques presented here are useful when designing an efficient memory management scheme for a wider range of parallel applications.

76 citations
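The working-area/parallelism trade-off described in the abstract can be illustrated with a crude Python sketch. This is not the Duff and Reid scheme or the authors' algorithm; it is a hypothetical greedy model in which each active front needs its own frontal-matrix workspace inside a fixed working area, so a slightly larger area admits more concurrent fronts.

```python
def schedulable_fronts(front_sizes, working_area):
    """Greedy sketch: how many fronts (each needing its own frontal-matrix
    workspace) can be active simultaneously within a fixed working area.
    Smallest fronts are admitted first to maximize the count."""
    active, used = 0, 0
    for size in sorted(front_sizes):
        if used + size > working_area:
            break
        used += size
        active += 1
    return active
```

With fronts of sizes 40, 30, 20, and 10, a working area of 40 (the minimum needed to run any single front) allows only two fronts at once, while a working area of 100 allows all four: increasing the workspace exposes more of the method's parallelism, which is the observation the article builds on.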

Journal ArticleDOI
09 Jun 2012
TL;DR: The architectural supports for nested page table walks are revisited to incorporate the unique characteristics of memory management by hypervisors, and a speculative shadow paging mechanism is proposed, backed by non-speculative flat nested page tables.
Abstract: Recent improvements in architectural supports for virtualization have extended traditional hardware page walkers to traverse nested page tables. However, current two-dimensional (2D) page walkers have been designed under the assumption that the usage patterns of guest and nested page tables are similar. In this paper, we revisit the architectural supports for nested page table walks to incorporate the unique characteristics of memory management by hypervisors. Unlike page tables in native systems, nested page table sizes do not impose significant overheads on the overall memory usage. Based on this observation, we propose to use flat nested page tables to reduce unnecessary memory references for nested walks. A competing mechanism to HW 2D page walkers is shadow paging, which duplicates guest page tables but provides direct translations from guest virtual to system physical addresses. However, shadow paging has been suffering from the overheads of synchronization between guest and shadow page tables. The second mechanism we propose is a speculative shadow paging mechanism, called speculative inverted shadow paging, which is backed by non-speculative flat nested page tables. The speculative mechanism provides a direct translation with a single memory reference for common cases, and eliminates the page table synchronization overheads. We evaluate the proposed schemes with the real Xen hypervisor running on a full system simulator. The flat page tables improve a state-of-the-art 2D page walker with a page walk cache and nested TLB by 7%. The speculative shadow paging improves the same 2D page walker by 14%.

76 citations
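The benefit of flat nested page tables can be made concrete with a simple reference-counting model. This sketch is an illustration of the well-known 2D walk cost, not the paper's evaluation: in a 2D walk, every guest page-table access itself requires a nested translation, plus one final nested translation of the resulting guest physical address.

```python
def walk_refs(guest_levels, nested_refs_per_translation):
    """Memory references for one uncached 2D page walk: each of the
    guest page-table levels needs a nested translation followed by a
    read of the guest entry, plus one final nested translation of the
    guest physical address."""
    per_level = nested_refs_per_translation + 1  # translate, then read the entry
    return guest_levels * per_level + nested_refs_per_translation

# Conventional 4-level nested table: each nested translation walks 4 levels.
conventional = walk_refs(4, 4)   # the classic worst case of 24 references
# Flat nested table: each nested translation is a single indexed lookup.
flat = walk_refs(4, 1)
```

Under this model a 4-level guest table over a 4-level nested table costs 24 memory references in the worst case, while flattening the nested table to a single-lookup structure cuts that to 9, which is the kind of reduction the flat-table proposal targets.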


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations, 94% related
Scalability: 50.9K papers, 931.6K citations, 92% related
Server: 79.5K papers, 1.4M citations, 89% related
Virtual machine: 43.9K papers, 718.3K citations, 87% related
Scheduling (computing): 78.6K papers, 1.3M citations, 86% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    33
2022    88
2021    629
2020    467
2019    461
2018    591