scispace - formally typeset
Topic

Memory management

About: Memory management is a research topic. Over its lifetime, 16,743 publications have been published within this topic, receiving 312,028 citations. The topic is also known as: memory allocation.


Papers
Journal ArticleDOI
22 Jan 1995
TL;DR: A novel scalable shared memory multiprocessor architecture that features the automatic data migration and replication capabilities of cache-only memory architecture (COMA) machines, without the accompanying hardware complexity.
Abstract: We present design details and some initial performance results of a novel scalable shared memory multiprocessor architecture. This architecture features the automatic data migration and replication capabilities of cache-only memory architecture (COMA) machines, without the accompanying hardware complexity. A software layer manages cache space allocation at page granularity, similarly to distributed virtual shared memory (DVSM) systems, leaving simpler hardware to maintain shared memory coherence at cache-line granularity. By reducing the hardware complexity, the machine cost and development time are reduced. We call the resulting hybrid hardware and software multiprocessor architecture Simple COMA. Preliminary results indicate that the performance of Simple COMA is comparable to that of more complex contemporary all-hardware designs.
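The division of labor described above can be sketched in Python (all names hypothetical; the real design is OS software plus coherence hardware, not application code): the "software layer" allocates local cache space a page at a time, while per-line valid bits stand in for the hardware's cache-line coherence state.

```python
# Hypothetical sketch of the Simple COMA split: software allocates pages,
# hardware tracks individual cache lines within each page.
PAGE_LINES = 4  # cache lines per page (assumed value, for illustration)

class SimpleComaNode:
    def __init__(self):
        self.pages = {}  # page number -> per-line valid bits ("hardware" state)

    def access(self, addr):
        page, line = divmod(addr, PAGE_LINES)
        if page not in self.pages:
            # Software layer: allocate local cache space at page granularity.
            self.pages[page] = [False] * PAGE_LINES
        if not self.pages[page][line]:
            # Hardware layer: fetch and keep coherent at line granularity.
            self.pages[page][line] = True
            return "line miss"
        return "hit"

node = SimpleComaNode()
print(node.access(0))  # first touch: software allocates the page, line fetched
print(node.access(1))  # same page, new line: only a line miss
print(node.access(1))  # hit
```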

132 citations

Journal ArticleDOI
TL;DR: This article proposes a tree-based management scheme that adopts multiple granularities in flash-memory management, both to reduce the run-time RAM footprint and to limit the extra write workload caused by housekeeping.
Abstract: Many existing approaches to flash-memory management are based on RAM-resident tables in which a single granularity size is used for both address translation and space management. As high-capacity flash memory becomes more affordable than ever, many vendors face the dilemma of how to manage the RAM space and how to improve access performance. In this article, we propose a tree-based management scheme which adopts multiple granularities in flash-memory management. Our objective is not only to reduce the run-time RAM footprint but also to limit the extra write workload caused by housekeeping. The proposed method was evaluated under realistic workloads, where significant advantages over existing approaches were observed in terms of RAM space, access performance, and flash-memory lifetime.
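A minimal sketch of the multi-granularity idea, assuming a run-based translation table (all names hypothetical; the paper's actual structure is a tree): one RAM-resident entry maps a whole run of consecutive logical pages to consecutive physical pages, instead of one entry per page.

```python
import bisect

# Hypothetical multi-granularity address translation: a single entry covers
# a run of consecutive pages, shrinking the RAM-resident mapping table.
class RunTable:
    def __init__(self):
        self.starts = []   # sorted logical start addresses of runs
        self.entries = []  # parallel (length, physical_start) records

    def map_run(self, lstart, length, pstart):
        i = bisect.bisect_left(self.starts, lstart)
        self.starts.insert(i, lstart)
        self.entries.insert(i, (length, pstart))

    def translate(self, laddr):
        i = bisect.bisect_right(self.starts, laddr) - 1
        if i >= 0:
            length, pstart = self.entries[i]
            off = laddr - self.starts[i]
            if off < length:
                return pstart + off
        return None  # unmapped logical address

tbl = RunTable()
tbl.map_run(0, 1024, 50_000)  # 1024 logical pages mapped by one entry
print(tbl.translate(10))      # 50010
print(tbl.translate(2000))    # None
```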

132 citations

Journal ArticleDOI
TL;DR: An out-of-core approach for interactive streamline construction on large unstructured tetrahedral meshes containing millions of elements using an octree to partition and restructure the raw data into subsets stored into disk files for fast data retrieval.
Abstract: This paper presents an out-of-core approach for interactive streamline construction on large unstructured tetrahedral meshes containing millions of elements. The out-of-core algorithm uses an octree to partition and restructure the raw data into subsets stored in disk files for fast data retrieval. A memory management policy tailored to the streamline calculations is used such that, during streamline construction, only a very small amount of data is brought into main memory on demand. By carefully scheduling computation and data fetching, the overhead of reading data from disk is significantly reduced and good memory performance results. This out-of-core algorithm makes interactive streamline visualization of large unstructured-grid data sets possible on a single mid-range workstation with relatively low main-memory capacity: 5-15 megabytes. We also demonstrate that this approach is much more efficient than relying on virtual memory and the operating system's paging algorithms.
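The on-demand policy can be sketched as a small block cache with least-recently-used eviction (all names hypothetical; a dict stands in for the octree subset files on disk):

```python
from collections import OrderedDict

# Sketch of an on-demand loading policy: keep only a few data blocks
# resident, reading from "disk" when a streamline enters a new block.
class BlockCache:
    def __init__(self, disk, capacity):
        self.disk = disk          # block id -> data (stands in for disk files)
        self.capacity = capacity  # max resident blocks (small memory budget)
        self.resident = OrderedDict()
        self.disk_reads = 0

    def get(self, block_id):
        if block_id in self.resident:
            self.resident.move_to_end(block_id)  # mark most recently used
        else:
            self.disk_reads += 1
            if len(self.resident) >= self.capacity:
                self.resident.popitem(last=False)  # evict least recently used
            self.resident[block_id] = self.disk[block_id]
        return self.resident[block_id]

disk = {i: f"block-{i}" for i in range(100)}
cache = BlockCache(disk, capacity=3)
for b in [0, 1, 0, 2, 3, 0]:  # a streamline crossing octree blocks
    cache.get(b)
print(cache.disk_reads)  # 4: blocks 0,1,2,3 each read once; 0 stays resident
```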

132 citations

Patent
Jay Wang1
20 Apr 1999
TL;DR: In this article, a method and system for storing data in data blocks of predetermined size in an electronic memory (e.g., FLASH memory), particularly data such as updatable record of database transactions, is presented.
Abstract: A method and system for storing data in data blocks of predetermined size in an electronic memory (e.g. FLASH memory), particularly data such as an updatable record of database transactions. The FLASH operates logically as two stacks, with data pushed into either end of the memory in alternating cycles. Between each push or write cycle, a garbage collection cycle is performed whereby only the most recent transaction performed on any particular record is preserved at one end of the stack, while the rest of the stack is made available for new data. When the database being monitored is written to permanent memory, the entire FLASH is again made available for new data. If the database is periodically backed up to permanent memory, it can be restored to RAM by reading the copy from the permanent memory and modifying it according to the record of database transactions in the electronic memory.
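The garbage-collection cycle described above can be sketched as follows (a simplified single-list sketch with hypothetical names; the alternating two-stack layout within one memory block is omitted):

```python
# Hypothetical sketch of the transaction-log compaction: writes append to
# the log; a garbage-collection pass keeps only each record's most recent
# transaction, freeing the rest of the block for new data.
def gc(log):
    latest = {}
    for record_id, value in log:  # later entries overwrite earlier ones
        latest[record_id] = value
    return [(r, v) for r, v in latest.items()]

log = []
for txn in [("a", 1), ("b", 2), ("a", 3), ("c", 4), ("a", 5)]:
    log.append(txn)  # push/write cycle: append at the active end
log = gc(log)        # between cycles: compact to the surviving entries
print(log)           # [('a', 5), ('b', 2), ('c', 4)]
```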

132 citations

Proceedings ArticleDOI
02 Oct 2002
TL;DR: In this paper, a new software technique is presented which supports the use of an on-chip scratchpad memory by dynamically copying program parts into it, with an optimal selection algorithm based on integer linear programming.
Abstract: The number of mobile embedded systems is increasing, and all of them are limited in their uptime by their battery capacity. Several hardware changes have been introduced in recent years, but the steadily growing functionality still requires further energy reductions, e.g. through software optimizations. A significant amount of energy can be saved in the memory hierarchy, where most of the energy is consumed. In this paper, a new software technique is presented which supports the use of an on-chip scratchpad memory by dynamically copying program parts into it. The set of selected program parts is determined with an optimal algorithm using integer linear programming. Experimental results show a reduction in energy consumption of nearly 30%, a performance increase of 25% over a common cache system, and energy improvements of up to 38% over a static approach.
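The selection step can be sketched as a 0/1 knapsack problem (hypothetical part names and numbers; the paper solves its real formulation with integer linear programming, but a dynamic program finds the same optimum for this simplified single-capacity form): pick program parts to copy into the scratchpad so that the total size fits and the energy saving is maximized.

```python
# Hypothetical knapsack sketch of scratchpad allocation: each program part
# has a size and an estimated energy saving; choose the subset with maximum
# saving that fits in the scratchpad capacity.
def select_parts(parts, capacity):
    # parts: list of (name, size_bytes, energy_saving)
    best = [(0, [])] * (capacity + 1)  # best (saving, chosen) per size budget
    for name, size, saving in parts:
        for c in range(capacity, size - 1, -1):  # 0/1 knapsack, reverse scan
            cand = best[c - size][0] + saving
            if cand > best[c][0]:
                best[c] = (cand, best[c - size][1] + [name])
    return best[capacity]

parts = [("loop_A", 512, 40), ("loop_B", 256, 25), ("func_C", 768, 50)]
print(select_parts(parts, capacity=1024))  # (75, ['loop_B', 'func_C'])
```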

131 citations


Network Information
Related Topics (5)
Cache
59.1K papers, 976.6K citations
94% related
Scalability
50.9K papers, 931.6K citations
92% related
Server
79.5K papers, 1.4M citations
89% related
Virtual machine
43.9K papers, 718.3K citations
87% related
Scheduling (computing)
78.6K papers, 1.3M citations
86% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    33
2022    88
2021    629
2020    467
2019    461
2018    591