Topic

Memory management

About: Memory management is a research topic. Over its lifetime, 16,743 publications have been published within this topic, receiving 312,028 citations. The topic is also known as: memory allocation.


Papers
Patent
10 Jan 2006
TL;DR: In this article, a processor whose virtualization system includes memory virtualization support is used to map a reference to guest-physical memory, made by guest software running on a virtual machine that in turn runs on a host machine in which the processor operates, to a reference to host-physical memory of the host machine.
Abstract: A processor including a virtualization system of the processor with a memory virtualization support system to map a reference to guest-physical memory, made by guest software executable on a virtual machine which in turn is executable on a host machine in which the processor is operable, to a reference to host-physical memory of the host machine.

98 citations
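The mapping the patent describes, from guest-physical addresses used by software in a virtual machine to host-physical addresses of the host machine, amounts to a second level of address translation maintained by the processor's virtualization support. A minimal single-level sketch in C is shown below; real processors walk a multi-level structure (for example, extended page tables), and the flat frame array and the particular frame numbers here are purely illustrative assumptions, not the patent's mechanism.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT   12          /* 4 KiB pages */
#define GUEST_FRAMES 16          /* tiny guest address space, for illustration only */

/* One entry per guest-physical frame: the host-physical frame backing it.
 * Real hardware uses a multi-level structure; a flat array keeps the idea visible. */
static uint64_t gpa_to_hpa_frame[GUEST_FRAMES];

/* Translate a guest-physical address to a host-physical address. */
static uint64_t translate_gpa(uint64_t gpa)
{
    uint64_t frame  = gpa >> PAGE_SHIFT;
    uint64_t offset = gpa & ((1u << PAGE_SHIFT) - 1);
    return (gpa_to_hpa_frame[frame] << PAGE_SHIFT) | offset;
}

int main(void)
{
    /* Pretend the hypervisor backed guest frame 3 with host frame 0x42. */
    gpa_to_hpa_frame[3] = 0x42;

    uint64_t gpa = (3u << PAGE_SHIFT) | 0x10;
    printf("guest-physical 0x%llx -> host-physical 0x%llx\n",
           (unsigned long long)gpa,
           (unsigned long long)translate_gpa(gpa));
    return 0;
}
```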

Proceedings ArticleDOI
12 Aug 1996
TL;DR: This formalized methodology is based on the observation that, for this type of application, the power consumption is dominated by the memory architecture; hence the first exploration stage should be to come up with an optimized memory organisation.
Abstract: In this paper we present our power exploration methodology for data dominated video applications. This formalized methodology is based on the observation that for this type of application the power consumption is dominated by the memory architecture. Hence, the first exploration stage should be to come up with an optimized memory organisation. Other important observations are that the power consumption of the address generators is of the same magnitude as that of the data-paths and that the address generators are better optimized using specialized techniques.

98 citations
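The methodology's core observation, that power in data-dominated video applications is dominated by the memory architecture, can be made concrete with a first-order estimate: total energy is roughly the number of accesses to each memory multiplied by the energy per access. The C sketch below compares a flat hierarchy against one with a small on-chip buffer that captures most reuse; the access counts and per-access energies are invented for illustration and are not figures from the paper.

```c
#include <stdio.h>

/* First-order energy model: sum over memories of accesses x energy-per-access.
 * All numbers below are placeholders chosen only to illustrate the comparison. */
struct memory {
    const char *name;
    double energy_per_access_nj;
    long   accesses;
};

int main(void)
{
    /* Same video kernel, two candidate memory organisations:
     * (a) every frame access goes off chip,
     * (b) a small on-chip buffer captures most of the reuse. */
    struct memory flat[]     = { { "off-chip DRAM", 10.0, 1000000L } };
    struct memory buffered[] = { { "off-chip DRAM", 10.0,  100000L },
                                 { "on-chip buffer", 0.5, 1000000L } };

    double e_flat = flat[0].energy_per_access_nj * flat[0].accesses;
    double e_buf  = 0.0;
    for (size_t i = 0; i < sizeof buffered / sizeof buffered[0]; i++)
        e_buf += buffered[i].energy_per_access_nj * buffered[i].accesses;

    printf("flat hierarchy     : %.1f uJ\n", e_flat / 1000.0);
    printf("with on-chip buffer: %.1f uJ\n", e_buf / 1000.0);
    return 0;
}
```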

Proceedings ArticleDOI
01 May 2000
TL;DR: An object inlining transformation is presented, focusing on a new algorithm which optimizes class field layout to minimize code expansion, and it is shown that, compared to traditional 1-CFA, the authors' analysis infrastructure provides better results at lower and more scalable cost.
Abstract: Automatic object inlining [19, 20] transforms heap data structures by fusing parent and child objects together. It can improve runtime by reducing object allocation and pointer dereference costs. We report continuing work studying object inlining optimizations. In particular, we present a new semantic derivation of the correctness conditions for object inlining, and a program analysis which extends our previous work. We also present an object inlining transformation, focusing on a new algorithm which optimizes class field layout to minimize code expansion. Finally, we detail a fuller evaluation on eleven programs and libraries (including Xpdf, the 25,000 line Portable Document Format (PDF) file browser) that utilizes hardware measures of impact on the memory system. We show that our analysis scales effectively to large programs, finding many inlinable fields (45 in xpdf) at acceptable cost, and we show that, on some programs, it finds nearly all fields for which object inlining is correct, and averages 40% of such fields across our benchmarks. We implement our analyses in an advanced analysis infrastructure, and we show that, compared to traditional 1-CFA, that infrastructure provides better results at lower and more scalable cost. Across all programs, analysis identified about 30% of objects as inlinable on average. Our transformation increases code size by only 20% while inlining this 30% of fields. Inlining these objects eliminated on average 28% of field reads, 58% of object creations, and 12% of all loads. Further, the optimized programs have significantly improved memory reference behavior, producing 25% fewer L1 data cache misses and 25% fewer read stalls. On average, runtime improved by 14%.

98 citations
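The paper's transformation works automatically on Java-style heap objects, guided by the analysis described above; the C sketch below only illustrates the underlying data-layout effect. Fusing a child object into its parent removes one heap allocation per parent and one pointer dereference on every access to the child's fields, which is where the reported reductions in object creations and field reads come from. The struct names are invented for the example.

```c
#include <stdlib.h>

/* Before inlining: the child lives in its own heap object, so building a
 * Document costs two allocations and each header access pays an extra
 * pointer dereference. */
struct Header   { int version; int length; };
struct Document { struct Header *header; };

/* After inlining: the child is fused into the parent, so there is one
 * allocation and header fields are reached directly. */
struct DocumentInlined { struct Header header; };

int main(void)
{
    struct Document *d = malloc(sizeof *d);
    d->header = malloc(sizeof *d->header);            /* second allocation */
    d->header->version = 1;                           /* extra dereference */

    struct DocumentInlined *di = malloc(sizeof *di);  /* single allocation */
    di->header.version = 1;                           /* direct field access */

    free(d->header);
    free(d);
    free(di);
    return 0;
}
```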

Journal ArticleDOI
TL;DR: A survey of techniques for using compression in cache and main memory systems is presented; it classifies the techniques based on key parameters to highlight their similarities and differences.
Abstract: As the number of cores on a chip increases and key applications become even more data-intensive, memory systems in modern processors have to deal with increasingly large amounts of data. In the face of such challenges, data compression is a promising approach to increase effective memory system capacity and also provide performance and energy advantages. This paper presents a survey of techniques for using compression in cache and main memory systems. It also classifies the techniques based on key parameters to highlight their similarities and differences. It discusses compression in CPUs and GPUs, conventional and non-volatile memory (NVM) systems, and 2D and 3D memory systems. We hope that this survey will help researchers gain insight into the potential role of compression in the memory components of future extreme-scale systems.

98 citations
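One family of techniques covered by such surveys stores a cache line as a base value plus small per-word deltas, which works well when the words in a line cluster near a common value (pointers and counters often do). The C sketch below is a simplified, hypothetical base-plus-delta compressor for one 64-byte line, not the exact scheme of any particular paper in the survey.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WORDS_PER_LINE 8   /* 8 x 64-bit words = one 64-byte cache line */

/* Try to encode a line as one 64-bit base plus one signed byte delta per
 * word. Returns false if any delta does not fit, in which case the line
 * would be stored uncompressed. */
static bool compress_line(const uint64_t line[WORDS_PER_LINE],
                          uint64_t *base, int8_t deltas[WORDS_PER_LINE])
{
    *base = line[0];
    for (int i = 0; i < WORDS_PER_LINE; i++) {
        int64_t d = (int64_t)(line[i] - *base);
        if (d < -128 || d > 127)
            return false;
        deltas[i] = (int8_t)d;
    }
    return true;
}

int main(void)
{
    /* Pointer-like values that differ only slightly compress well. */
    uint64_t line[WORDS_PER_LINE] = {
        0x7fff1000, 0x7fff1008, 0x7fff1010, 0x7fff1018,
        0x7fff1020, 0x7fff1028, 0x7fff1030, 0x7fff1038
    };
    uint64_t base;
    int8_t   deltas[WORDS_PER_LINE];

    if (compress_line(line, &base, deltas))
        printf("64-byte line stored as 8-byte base + 8 one-byte deltas (16 bytes)\n");
    else
        printf("line stored uncompressed (64 bytes)\n");
    return 0;
}
```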

Proceedings ArticleDOI
01 Dec 1993
TL;DR: By comparing the performance of implementations that make use of page-protection and related virtual memory primitives with others that do not, the results show that for certain applications, software solutions outperform solutions that rely on those primitives.
Abstract: Many operating systems allow user programs to specify the protection level (inaccessible, read-only, read-write) of pages in their virtual memory address space, and to handle any protection violations that may occur. Such page-protection techniques have been exploited by several user-level algorithms for applications including generational garbage collection and persistent stores. Unfortunately, modern hardware has made efficient handling of page protection faults more difficult. Moreover, page-sized granularity may not match the natural granularity of a given application. In light of these problems, we reevaluate the usefulness of page-protection primitives in such applications, by comparing the performance of implementations that make use of the primitives with others that do not. Our results show that for certain applications, software solutions outperform solutions that rely on page-protection or other related virtual memory primitives.

98 citations
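On POSIX systems, the page-protection primitives the paper evaluates are exposed through mprotect() and a SIGSEGV handler: a page is made read-only, the first write to it faults, and user-level code catches the fault (for example, to record a dirty page for a generational collector) before unprotecting the page and retrying. The sketch below shows this common idiom; mprotect() is not formally async-signal-safe, so treat it as an illustration rather than a hardened implementation.

```c
#define _GNU_SOURCE
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static char  *page;
static size_t page_size;

/* Fault handler: find the page containing the faulting address and make it
 * writable again, so the interrupted write can be retried and succeed. */
static void on_fault(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    void *start = (void *)((uintptr_t)info->si_addr & ~(page_size - 1));
    mprotect(start, page_size, PROT_READ | PROT_WRITE);
}

int main(void)
{
    page_size = (size_t)sysconf(_SC_PAGESIZE);
    page = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    struct sigaction sa = { 0 };
    sa.sa_sigaction = on_fault;
    sa.sa_flags     = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    mprotect(page, page_size, PROT_READ);   /* protect: page is now read-only */
    page[0] = 'x';                          /* faults; handler unprotects page */

    printf("write completed after user-level fault handling: %c\n", page[0]);
    munmap(page, page_size);
    return 0;
}
```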


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations (94% related)
Scalability: 50.9K papers, 931.6K citations (92% related)
Server: 79.5K papers, 1.4M citations (89% related)
Virtual machine: 43.9K papers, 718.3K citations (87% related)
Scheduling (computing): 78.6K papers, 1.3M citations (86% related)
Performance
Metrics
No. of papers in the topic in previous years:
Year    Papers
2023    33
2022    88
2021    629
2020    467
2019    461
2018    591