scispace - formally typeset
Topic

Memory management

About: Memory management is a research topic. Over its lifetime, 16,743 publications have been published within this topic, receiving 312,028 citations. The topic is also known as: memory allocation.


Papers
Patent
Paul Crowley, John M. Jaugilas, David S. Lampert, Alex Nash, Senthil K. Natesan
24 Mar 1999
TL;DR: In this article, a method and system are presented for managing memory resources in a system used in conjunction with a navigation application program that accesses geographic data, where the data records that form each parcel are accessed together.
Abstract: A method and system for managing memory resources in a system used in conjunction with a navigation application program that accesses geographic data. The geographic data consist of a plurality of data records organized into parcels, each of which contains a portion of the data records, such that the records forming each parcel are accessed together. One or more buffers, each forming a contiguous portion of the navigation system's memory, are provided as a cache to store a plurality of parcels. One or more data structures located outside this contiguous portion of memory identify the parcels stored in the cache and the locations in the cache at which they are stored. These external data structures are used to manage the parcel cache efficiently and to defragment it.
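As an illustration of the bookkeeping pattern this abstract describes, here is a minimal C sketch of a parcel cache whose index and defragmentation logic live outside the contiguous buffer. The names, sizes, and linear index are illustrative assumptions, not the patent's implementation.

```c
/* Minimal sketch: a contiguous buffer used as a parcel cache, managed
 * through bookkeeping structures kept outside that buffer. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define CACHE_BYTES  (256 * 1024)   /* contiguous buffer used as the cache */
#define MAX_PARCELS  64

typedef struct {
    uint32_t parcel_id;   /* which geographic parcel is stored           */
    size_t   offset;      /* where it lives inside the contiguous buffer */
    size_t   size;        /* length in bytes                             */
    int      in_use;
} parcel_entry_t;

typedef struct {
    unsigned char  buffer[CACHE_BYTES];   /* the cache itself              */
    parcel_entry_t index[MAX_PARCELS];    /* bookkeeping outside the cache */
    size_t         bytes_used;
} parcel_cache_t;

/* Look up a parcel using only the external index. */
static void *parcel_cache_find(parcel_cache_t *pc, uint32_t parcel_id)
{
    for (int i = 0; i < MAX_PARCELS; i++)
        if (pc->index[i].in_use && pc->index[i].parcel_id == parcel_id)
            return pc->buffer + pc->index[i].offset;
    return NULL;
}

/* Defragment: slide live parcels toward the front of the buffer and
 * update their offsets in the external index, reclaiming holes left by
 * evicted parcels. For simplicity this sketch assumes index entries are
 * kept ordered by increasing offset. */
static void parcel_cache_defragment(parcel_cache_t *pc)
{
    size_t write_pos = 0;
    for (int i = 0; i < MAX_PARCELS; i++) {
        parcel_entry_t *e = &pc->index[i];
        if (!e->in_use)
            continue;
        if (e->offset != write_pos) {
            memmove(pc->buffer + write_pos, pc->buffer + e->offset, e->size);
            e->offset = write_pos;
        }
        write_pos += e->size;
    }
    pc->bytes_used = write_pos;
}
```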

85 citations

Patent
20 Dec 1985
TL;DR: In this article, a logic state analyzer allows a user to include symbols defined in source program listings, as well as other specially defined symbols, in the trace specification, and where possible, all addresses, operands, etc., are expressed in such terms.
Abstract: A logic state analyzer allows a user to include symbols defined in source program listings, as well as other specially defined symbols, in the trace specification. Such symbols represent unique individual values or ranges of values. The resulting trace list includes these symbols, and where possible, all addresses, operands, etc., are expressed in such terms. When those symbols are relocatable entities produced by compilers and assemblers, the user is freed from having to duplicate the relocation process to specify absolute values in the trace specification, and later reverse it to interpret absolute values in the listing in terms of symbols originally defined in the source programming. A further result is that the states within an arbitrary finite state machine can be assigned descriptive labels, with the trace specification and trace listing subsequently expressed in those terms. Trace values can also be represented relative to a symbol. The same principles are extendable to handle memory segment offsets invoked by memory management units that automatically convert a relocated virtual address emitted by a processor into a dynamically adjusted run-time physical address actually sent to the memory. According to a preferred embodiment of the invention, the analyzer makes use of various symbol tables produced by any associated assemblers and compilers, as well as of any additional special symbol definitions desired by the user. The analyzer provides absolute values for these symbols by applying the load map produced during the relocation of the various programs into the target system monitored by the logic analyzer.
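To make the symbol-to-address mapping concrete, the following is a minimal C sketch of resolving relocatable symbols to absolute addresses through a load map, and of translating absolute trace values back into "symbol+offset" form. The tables, module names, and addresses are hypothetical; this is not the analyzer's implementation.

```c
/* Minimal sketch: forward (symbol -> absolute address via a load map)
 * and reverse (absolute address -> "symbol+offset") resolution. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct { const char *module; uint32_t load_base; } load_map_entry_t;
typedef struct { const char *name; const char *module; uint32_t rel_addr; } symbol_t;

/* Hypothetical load map and symbol table. */
static const load_map_entry_t load_map[] = {
    { "main.o", 0x1000 }, { "io.o", 0x2400 },
};
static const symbol_t symbols[] = {
    { "reset_handler", "main.o", 0x0010 },
    { "read_sector",   "io.o",   0x00A0 },
};

/* Forward: symbol name -> absolute (relocated) address. */
static int symbol_to_absolute(const char *name, uint32_t *out)
{
    for (size_t s = 0; s < sizeof symbols / sizeof *symbols; s++) {
        if (strcmp(symbols[s].name, name) != 0) continue;
        for (size_t m = 0; m < sizeof load_map / sizeof *load_map; m++) {
            if (strcmp(load_map[m].module, symbols[s].module) == 0) {
                *out = load_map[m].load_base + symbols[s].rel_addr;
                return 0;
            }
        }
    }
    return -1;   /* unknown symbol or module */
}

/* Reverse: absolute address -> "symbol+offset" for the trace listing. */
static void absolute_to_symbolic(uint32_t addr, char *buf, size_t len)
{
    const symbol_t *best = NULL;
    uint32_t best_abs = 0;
    for (size_t s = 0; s < sizeof symbols / sizeof *symbols; s++) {
        uint32_t abs_addr;
        if (symbol_to_absolute(symbols[s].name, &abs_addr) == 0 &&
            abs_addr <= addr && (!best || abs_addr > best_abs)) {
            best = &symbols[s];
            best_abs = abs_addr;
        }
    }
    if (best)
        snprintf(buf, len, "%s+0x%X", best->name, (unsigned)(addr - best_abs));
    else
        snprintf(buf, len, "0x%X", (unsigned)addr);
}
```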

85 citations

Proceedings ArticleDOI
05 Dec 2011
TL;DR: This paper presents a novel real-time loop closure detection approach for large-scale and long-term SLAM based on a memory management method that keeps computation time for each new observation under a fixed limit.
Abstract: Loop closure detection is the process of finding a match between the current location and previously visited locations in SLAM. Over time, the amount of time required to process new observations increases with the size of the internal map, which can hinder real-time processing. In this paper, we present a novel real-time loop closure detection approach for large-scale and long-term SLAM. Our approach is based on a memory management method that keeps computation time for each new observation under a fixed limit. Results demonstrate the approach's adaptability and scalability using four standard data sets.
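A minimal C sketch of the bounded working-memory idea: only a fixed number of locations are compared against each new observation, and the least recently matched location is transferred to long-term memory when the limit is reached. The data structures and capacity are illustrative assumptions, not the paper's implementation.

```c
/* Minimal sketch: keep the set of locations compared per observation
 * bounded by moving stale locations into long-term memory. */
#include <stddef.h>

#define WM_CAPACITY 100   /* bounds per-observation comparison cost */

typedef struct location {
    int   id;
    long  last_matched;       /* timestamp of last successful match */
    struct location *next;    /* simple linked list for the sketch  */
} location_t;

typedef struct {
    location_t *working;      /* compared against every new observation    */
    location_t *long_term;    /* not compared; could be retrieved on match */
    size_t      wm_count;
} memory_t;

/* Move the least recently matched location out of working memory so the
 * next observation is still processed within the fixed budget. */
static void transfer_oldest_to_ltm(memory_t *mem)
{
    location_t **oldest = NULL;
    for (location_t **p = &mem->working; *p; p = &(*p)->next)
        if (!oldest || (*p)->last_matched < (*oldest)->last_matched)
            oldest = p;
    if (!oldest)
        return;
    location_t *moved = *oldest;
    *oldest = moved->next;            /* unlink from working memory */
    moved->next = mem->long_term;     /* push onto long-term memory */
    mem->long_term = moved;
    mem->wm_count--;
}

/* Add a new location, enforcing the working-memory bound. */
static void add_location(memory_t *mem, location_t *loc)
{
    if (mem->wm_count >= WM_CAPACITY)
        transfer_oldest_to_ltm(mem);
    loc->next = mem->working;
    mem->working = loc;
    mem->wm_count++;
}
```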

84 citations

Proceedings ArticleDOI
21 May 2012
TL;DR: This paper proposes a novel approach for exploiting NVM as a secondary memory partition so that applications can explicitly allocate and manipulate memory regions therein, and proposes an NVMalloc library with a suite of services that enables applications to access a distributed NVM storage system.
Abstract: DRAM is a precious resource in extreme-scale machines and is increasingly becoming scarce, mainly due to the growing number of cores per node. On future multi-petaflop and exaflop machines, the memory pressure is likely to be so severe that we need to rethink our memory usage models. Fortunately, the advent of non-volatile memory (NVM) offers a unique opportunity in this space. Current NVM offerings possess several desirable properties, such as low cost and power efficiency, but suffer from high latency and lifetime issues. We need rich techniques to be able to use them alongside DRAM. In this paper, we propose a novel approach for exploiting NVM as a secondary memory partition so that applications can explicitly allocate and manipulate memory regions therein. More specifically, we propose an NVMalloc library with a suite of services that enables applications to access a distributed NVM storage system. We have devised ways within NVMalloc so that the storage system, built from compute node-local NVM devices, can be accessed in a byte-addressable fashion using the memory-mapped I/O interface. Our approach has the potential to re-energize out-of-core computations on large-scale machines by having applications allocate certain variables through NVMalloc, thereby increasing the overall memory capacity available. Our evaluation on a 128-core cluster shows that NVMalloc enables applications to compute problem sizes larger than the physical memory in a cost-effective manner. It brings greater performance/efficiency gains when there is more computation time between NVM memory accesses or greater data access locality. In addition, our results suggest that while NVMalloc enables transparent access to NVM-resident variables, the explicit control it provides is crucial to optimize application performance.
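For a concrete picture of how a node-local NVM device can be exposed as a byte-addressable secondary memory partition via memory-mapped I/O, here is a minimal POSIX C sketch. This is not the NVMalloc API; the mount point and sizes are assumptions for illustration.

```c
/* Minimal sketch: back a large variable with an NVM-resident file and
 * access it through a byte-addressable memory mapping. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

/* Map a region of an NVM-backed file so the application can place a
 * large variable there instead of in DRAM. */
static void *nvm_region_alloc(const char *path, size_t bytes)
{
    int fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, (off_t)bytes) != 0) { close(fd); return NULL; }

    void *region = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    close(fd);   /* the mapping stays valid after the descriptor closes */
    return region == MAP_FAILED ? NULL : region;
}

int main(void)
{
    /* Hypothetical mount point of a node-local NVM device. */
    size_t n = 1u << 21;   /* ~16 MB of doubles */
    double *big = nvm_region_alloc("/mnt/nvm/scratch.dat", n * sizeof *big);
    if (!big) { perror("nvm_region_alloc"); return 1; }

    big[0] = 3.14;                          /* byte-addressable stores */
    msync(big, n * sizeof *big, MS_SYNC);   /* flush to the NVM device */
    munmap(big, n * sizeof *big);
    return 0;
}
```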

84 citations

Proceedings ArticleDOI
25 Apr 2006
TL;DR: This paper examines how to apply and adapt phase analysis algorithms to parallel applications running on shared-memory processors, and uses the phase analysis to pick simulation points that guide multithreaded simulation.
Abstract: Most programs are repetitive, where similar behavior can be seen at different execution times. Algorithms have been proposed that automatically group similar portions of a program's execution into phases, where samples of execution in the same phase have homogeneous behavior and similar resource requirements. In this paper, we examine how to apply and adapt these phase analysis algorithms to parallel applications running on shared-memory processors. Our approach relies on a separate representation of each thread's activity. We first focus on showing its ability to identify similar intervals of execution across threads for a single run. We then show that it is effective at identifying similar behavior of a program when the number of threads is varied between runs. This can be used by developers to examine how different phases scale across different numbers of threads. Finally, we examine using the phase analysis to pick simulation points to guide multithreaded simulation.
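A minimal C sketch of per-thread phase classification in this spirit: each execution interval is summarized as a normalized activity vector and assigned to the nearest existing phase signature, or it starts a new phase when no signature is close enough. The vector length, distance threshold, and table size are illustrative assumptions, not the paper's exact algorithm.

```c
/* Minimal sketch: classify one thread's execution intervals into phases
 * by comparing normalized activity vectors against phase signatures. */
#include <math.h>
#include <stddef.h>
#include <string.h>

#define VEC_LEN    32     /* e.g. activity counts over 32 code regions  */
#define MAX_PHASES 16
#define THRESHOLD  0.25   /* Manhattan distance on normalized vectors   */

typedef struct {
    double signature[MAX_PHASES][VEC_LEN];
    int    num_phases;
} phase_table_t;

static double manhattan(const double *a, const double *b)
{
    double d = 0.0;
    for (int i = 0; i < VEC_LEN; i++)
        d += fabs(a[i] - b[i]);
    return d;
}

/* Returns the phase id for this interval, creating a new phase if no
 * existing signature is within the threshold. */
static int classify_interval(phase_table_t *t, const double *interval_vec)
{
    int best = -1;
    double best_d = THRESHOLD;
    for (int p = 0; p < t->num_phases; p++) {
        double d = manhattan(t->signature[p], interval_vec);
        if (d < best_d) { best_d = d; best = p; }
    }
    if (best >= 0)
        return best;                       /* similar to a known phase  */
    if (t->num_phases < MAX_PHASES) {      /* otherwise start a new one */
        memcpy(t->signature[t->num_phases], interval_vec,
               VEC_LEN * sizeof(double));
        return t->num_phases++;
    }
    return MAX_PHASES - 1;                 /* table full: fall back     */
}
```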

84 citations


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations (94% related)
Scalability: 50.9K papers, 931.6K citations (92% related)
Server: 79.5K papers, 1.4M citations (89% related)
Virtual machine: 43.9K papers, 718.3K citations (87% related)
Scheduling (computing): 78.6K papers, 1.3M citations (86% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    33
2022    88
2021    629
2020    467
2019    461
2018    591