Topic

Memory management

About: Memory management is a research topic. Over its lifetime, 16,743 publications have been published on this topic, receiving 312,028 citations. The topic is also known as: memory allocation.


Papers
Patent
08 Sep 2006
TL;DR: In this patent, an apparatus and method are described for identifying uncommitted memory in a system RAM during an initialization process of a computer system, such as a boot procedure or power-on self test, during which memory management is uncontrolled.
Abstract: An apparatus and method are described for identifying uncommitted memory in a system RAM during an initialization process of a computer system, such as a boot procedure or power-on self test, during which memory management is uncontrolled. In various embodiments of the invention, repeating patterns that are indicative of uncommitted memory blocks are identified within a conventional memory area of the system RAM. At least some of the uncommitted memory blocks are allocated for use by an option ROM or other BIOS data and a table is created identifying these uncommitted memory blocks. After the BIOS code exits the system RAM, the table is used to restore the uncommitted memory blocks into their previous data states.
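The scan described above can be pictured with a small sketch. The fragment below is a minimal illustration rather than the patented implementation: it flags a fixed-size block as uncommitted when it consists of a single repeating byte, records it in a table together with a copy of its contents, and restores those contents later. The block size, the pattern test, and the table layout are all illustrative assumptions.

```c
/* Minimal sketch (not the patented implementation): scan a memory region
 * for blocks filled with a repeating byte pattern, treat them as
 * uncommitted, record them in a table, and restore them afterwards. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 512          /* assumed granularity of a scan block */
#define MAX_BLOCKS 128          /* assumed table capacity */

typedef struct {
    uint8_t *addr;              /* start of the uncommitted block */
    uint8_t  saved[BLOCK_SIZE]; /* copy used to restore the block later */
} block_entry;

static block_entry table[MAX_BLOCKS];
static size_t table_len = 0;

/* A block "looks uncommitted" if every byte repeats the first byte. */
static int looks_uncommitted(const uint8_t *p)
{
    for (size_t i = 1; i < BLOCK_SIZE; i++)
        if (p[i] != p[0])
            return 0;
    return 1;
}

/* Scan [base, base+len) and record candidate blocks in the table. */
void scan_conventional_memory(uint8_t *base, size_t len)
{
    for (size_t off = 0; off + BLOCK_SIZE <= len && table_len < MAX_BLOCKS;
         off += BLOCK_SIZE) {
        uint8_t *blk = base + off;
        if (looks_uncommitted(blk)) {
            table[table_len].addr = blk;
            memcpy(table[table_len].saved, blk, BLOCK_SIZE);
            table_len++;
        }
    }
}

/* After the borrowed blocks are no longer needed, restore their contents. */
void restore_blocks(void)
{
    for (size_t i = 0; i < table_len; i++)
        memcpy(table[i].addr, table[i].saved, BLOCK_SIZE);
    table_len = 0;
}
```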

104 citations

Proceedings ArticleDOI
Zhiwei Qin, Yi Wang, Duo Liu, Zili Shao, Yong Guan
05 Jun 2011
TL;DR: This paper proposes two approaches, namely concentrated mapping and postponed reclamation, to effectively reduce valid page copies in the design of the MLC flash translation layer, thereby lowering garbage collection overhead.
Abstract: The new write constraints of multi-level cell (MLC) NAND flash memory make most of the existing flash translation layer (FTL) schemes inefficient or inapplicable. In this paper, we solve several fundamental problems in the design of the MLC flash translation layer. The objective is to reduce the garbage collection overhead so as to reduce the average system response time. We make the key observation that valid page copies are the essential garbage collection overhead. Based on this observation, we propose two approaches, namely, concentrated mapping and postponed reclamation, to effectively reduce the valid page copies. We conduct experiments on a set of benchmarks from both real-world and synthetic traces. The experimental results show that our scheme can achieve a significant reduction in the average system response time compared with previous work.
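The observation that valid page copies are the essential garbage collection cost can be illustrated with a small sketch. The fragment below is not the paper's concentrated-mapping or postponed-reclamation scheme; it only shows the baseline cost model, where reclaiming a block requires copying each still-valid page before the erase, so a greedy policy picks the block with the fewest valid pages. The block layout and field names are illustrative assumptions.

```c
/* Minimal sketch of why valid-page copies dominate FTL garbage collection
 * cost: every valid page in the victim block must be copied before erase. */
#include <stddef.h>

#define PAGES_PER_BLOCK 64

typedef struct {
    int valid_count;   /* number of pages in this block still holding live data */
} flash_block;

/* Greedy victim selection: fewest valid pages means fewest copies. */
size_t pick_victim(const flash_block *blocks, size_t nblocks)
{
    size_t victim = 0;
    for (size_t i = 1; i < nblocks; i++)
        if (blocks[i].valid_count < blocks[victim].valid_count)
            victim = i;
    return victim;
}

/* Reclamation cost model: one page copy per valid page, plus the erase. */
int gc_copy_cost(const flash_block *b)
{
    return b->valid_count;
}
```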

104 citations

Proceedings ArticleDOI
04 Dec 2010
TL;DR: This paper proposes SD3, a scalable approach to data-dependence profiling that addresses both runtime and memory overhead in a single framework: it reduces runtime overhead by parallelizing the dependence profiling step itself, and it reduces memory overhead by compressing memory accesses that exhibit stride patterns and computing data dependences directly in the compressed format.
Abstract: As multicore processors are deployed in mainstream computing, the need for software tools to help parallelize programs is increasing dramatically. Data-dependence profiling is an important technique to exploit parallelism in programs. More specifically, manual or automatic parallelization can use the outcomes of data-dependence profiling to guide where to parallelize in a program. However, state-of-the-art data-dependence profiling techniques are not scalable as they suffer from two major issues when profiling large and long-running applications: (1) runtime overhead and (2) memory overhead. Existing data-dependence profilers are either unable to profile large-scale applications or only report very limited information. In this paper, we propose a scalable approach to data-dependence profiling that addresses both runtime and memory overhead in a single framework. Our technique, called SD3, reduces the runtime overhead by parallelizing the dependence profiling step itself. To reduce the memory overhead, we compress memory accesses that exhibit stride patterns and compute data dependences directly in a compressed format. We demonstrate that SD3 reduces the runtime overhead when profiling SPEC 2006 by a factor of 4.1X and 9.7X on eight cores and 32 cores, respectively. For the memory overhead, we successfully profile SPEC 2006 with the reference input, while the previous approaches fail even with the train input. In some cases, we observe more than a 20X improvement in memory consumption and a 16X speedup in profiling time when 32 cores are used.
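The idea of computing dependences in a compressed format can be sketched as follows. The fragment below represents a run of strided accesses as (base, stride, count) and applies a conservative range-plus-GCD overlap test directly on that representation; this is an illustrative assumption about the data structure, not SD3's actual algorithm.

```c
/* Minimal sketch of stride-compressed dependence checking: a run of
 * addresses is stored as (base, stride, count) instead of a full list,
 * and two runs are tested for overlap in that compressed form. */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint64_t base;    /* first address in the run */
    uint64_t stride;  /* distance between consecutive accesses, > 0 */
    uint64_t count;   /* number of accesses in the run, >= 1 */
} stride_run;

static uint64_t gcd_u64(uint64_t a, uint64_t b)
{
    while (b) { uint64_t t = a % b; a = b; b = t; }
    return a;
}

/* Conservative overlap test: returns false only when the two runs
 * provably cannot touch the same address. */
bool may_overlap(const stride_run *a, const stride_run *b)
{
    uint64_t a_last = a->base + (a->count - 1) * a->stride;
    uint64_t b_last = b->base + (b->count - 1) * b->stride;

    /* Disjoint address ranges cannot overlap. */
    if (a_last < b->base || b_last < a->base)
        return false;

    /* GCD test: the base offset must be reachable by combined strides. */
    uint64_t g = gcd_u64(a->stride, b->stride);
    uint64_t diff = (a->base > b->base) ? a->base - b->base
                                        : b->base - a->base;
    return (diff % g) == 0;
}
```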

104 citations

Patent
David J. Zimmerman
13 Nov 2003
TL;DR: In this patent, the memory module can initiate commands and transmit them over its downstream memory channel port as if they had originated from a host connected to the host-side memory channel port.
Abstract: Method and apparatus for use with buffered memory modules are included among the embodiments. In exemplary systems, the memory module has a host-side memory channel port and a downstream memory channel port, allowing multiple modules to be chained point-to-point. In the present disclosure, a separate bus, such as a low-speed system management bus, connects to a memory module buffer. In response to commands received over the system management bus, the memory module can initiate commands and transmit those commands over its downstream memory channel port as if the commands originated from a host connected to the host-side memory channel port. This functionality allows module-to-module memory channels and memory modules to be tested independently of a host memory controller and host memory channel. Other embodiments are described and claimed.
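A rough software model of the behaviour described in the abstract is sketched below: the module normally relays commands arriving on its host-side port, but a request on the separate management bus can make the module originate a command of its own on the downstream port, which is what allows the channel to be exercised without the host controller. The types and function names are illustrative assumptions, not the patented design.

```c
/* Minimal sketch of a buffered module that forwards host commands or,
 * on a management-bus request, originates its own downstream command. */
#include <stdio.h>
#include <stdint.h>

typedef struct {
    uint8_t  opcode;
    uint32_t addr;
} mem_cmd;

/* Stand-in for driving the module's downstream memory-channel port. */
static void send_downstream(const mem_cmd *c)
{
    printf("downstream: opcode=0x%02x addr=0x%08x\n",
           (unsigned)c->opcode, (unsigned)c->addr);
}

/* Normal path: commands from the host-side port are forwarded as-is. */
void forward_from_host(const mem_cmd *c)
{
    send_downstream(c);
}

/* Management-bus path: the module itself originates a command, as if it
 * had come from a host, enabling host-independent channel testing. */
void initiate_from_smbus(uint8_t opcode, uint32_t addr)
{
    mem_cmd c = { opcode, addr };
    send_downstream(&c);
}
```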

104 citations

Patent
10 Feb 1997
TL;DR: In this patent, a hand-held video game system has a microprocessor controller with address and data buses for providing memory accesses during memory cycles to a plurality of cartridge slots that electrically connect memory-containing cartridges to the address and data buses.
Abstract: A hand-held video game system having a microprocessor controller with address and data buses for providing memory accesses during memory cycles to a plurality of cartridge slots for electrically connecting cartridges containing memory to the address and data buses. An output terminal of the microprocessor controller provides a cartridge-select signal which identifies a first memory-containing cartridge to be accessed during an initial memory cycle, with the microprocessor controller controlling the output terminal to change the cartridge-select signal to transparently access a second memory-containing cartridge during a subsequent memory cycle. The cartridge slot may also provide a port for transferring and receiving information over a bi-directional communication link, in which a communication cartridge allows communication over the Internet and allows for interactive play of a video game.
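The cartridge-select mechanism can be pictured with a small sketch: a select signal determines which cartridge's memory answers the next memory cycle, and the controller flips the signal between cycles to reach a second cartridge transparently. The two-slot model, the cartridge size, and the function names below are illustrative assumptions, not the patented hardware.

```c
/* Minimal sketch of cartridge selection: the select signal picks which
 * cartridge's memory serves the next memory cycle. */
#include <stdint.h>

#define CART_SLOTS 2
#define CART_SIZE  0x8000u      /* assumed 32 KiB of memory per cartridge */

static uint8_t cartridge_rom[CART_SLOTS][CART_SIZE];
static int cart_select = 0;     /* models the cartridge-select output */

/* Change the cartridge-select signal before the next memory cycle. */
void select_cartridge(int slot)
{
    cart_select = slot % CART_SLOTS;
}

/* A memory cycle: the address is served by whichever cartridge the
 * select signal currently identifies. */
uint8_t read_cycle(uint16_t addr)
{
    return cartridge_rom[cart_select][addr % CART_SIZE];
}
```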

104 citations


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations (94% related)
Scalability: 50.9K papers, 931.6K citations (92% related)
Server: 79.5K papers, 1.4M citations (89% related)
Virtual machine: 43.9K papers, 718.3K citations (87% related)
Scheduling (computing): 78.6K papers, 1.3M citations (86% related)
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    33
2022    88
2021    629
2020    467
2019    461
2018    591