Topic
Memory management
About: Memory management is a research topic. Over its lifetime, 16,743 publications have been published within this topic, receiving 312,028 citations. The topic is also known as: memory allocation.
Papers
TL;DR: An adaptive memory management algorithm allows substantial improvement in locality of reference in garbage-collected systems and indicates that page-wait time typically is reduced by a factor of four with constant memory size and disk technology.
Abstract: Modern Lisp systems make heavy use of a garbage-collecting style of memory management. Generally, the locality of reference in garbage-collected systems has been very poor. In virtual memory systems, this poor locality of reference generally causes a large amount of wasted time waiting on page faults or uses excessively large amounts of main memory. An adaptive memory management algorithm, described in this article, allows substantial improvement in locality of reference. Performance measurements indicate that page-wait time typically is reduced by a factor of four with constant memory size and disk technology. Alternatively, the size of memory typically can be reduced by a factor of two with constant performance.
142 citations
28 Jul 1980
TL;DR: In this article, an extended memory system includes a processor, a plurality of input/output devices, and a real memory having first and second portions thereof, the first portion storing a system operation program and the second portion storing a plurality of user programs.
Abstract: An extended memory system includes a processor, a plurality of input/output devices, a real memory having first and second portions thereof, the first portion storing a system operation program and the second portion storing a plurality of user programs. The processor produces logical addresses when executing user programs or input/output routines, which logical addresses are translated to real memory addresses by one of two first translation memories. System operation instructions common to all user programs are stored in a low order address portion of the real memory. Different user programs are mapped into mutually exclusive portions of the real memory by means of the two translation memories.
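The translation scheme in this abstract — logical addresses produced by the processor are mapped to real addresses through a per-program translation memory, with user programs confined to mutually exclusive real-memory regions — can be sketched as follows. This is an illustrative model only; the names (`PAGE_SIZE`, `TranslationMemory`) and the page granularity are assumptions, not details from the patent.

```python
# Illustrative sketch of logical-to-real address translation via a
# per-program translation memory, as described in the abstract above.

PAGE_SIZE = 256  # assumed page granularity


class TranslationMemory:
    """Maps logical page numbers to real page numbers for one user program."""

    def __init__(self, mapping):
        self.mapping = mapping  # logical page number -> real page number

    def translate(self, logical_addr):
        # Split the logical address into a page number and an offset,
        # look up the real page, and recombine.
        page, offset = divmod(logical_addr, PAGE_SIZE)
        real_page = self.mapping[page]
        return real_page * PAGE_SIZE + offset


# Real pages 0..3 are reserved for the system operation program common to
# all users; the two user programs are mapped into mutually exclusive
# real-memory regions by their separate translation memories.
user_a = TranslationMemory({0: 4, 1: 5})
user_b = TranslationMemory({0: 6, 1: 7})

assert user_a.translate(10) == 4 * PAGE_SIZE + 10
assert user_b.translate(10) == 6 * PAGE_SIZE + 10
```

The same logical address (10) resolves to different real addresses for the two programs, while neither mapping can reach the other's real pages.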
142 citations
16 Jul 2010
TL;DR: In this article, a method for caching in a processor system having virtual memory is presented, the method comprising: monitoring slow memory in the processor system to determine frequently accessed pages, and copying each frequently accessed page from slow memory to a location in fast memory.
Abstract: In a first embodiment of the present invention, a method for caching in a processor system having virtual memory is provided, the method comprising: monitoring slow memory in the processor system to determine frequently accessed pages; for a frequently accessed page in slow memory: copy the frequently accessed page from slow memory to a location in fast memory; and update virtual address page tables to reflect the location of the frequently accessed page in fast memory.
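The method in this abstract has three steps: count accesses to pages in slow memory, copy a page to fast memory once it is frequently accessed, and update the page tables to point at the new location. A minimal sketch of that flow, assuming a simple access-count threshold and dictionary-backed memories (the threshold value and class names are illustrative, not from the patent):

```python
# Hypothetical sketch of the hot-page caching method summarized above:
# monitor slow-memory accesses, promote hot pages to fast memory, and
# repoint the page table. HOT_THRESHOLD is an assumed policy parameter.

from collections import Counter

HOT_THRESHOLD = 3  # assumed count that marks a page "frequently accessed"


class HybridMemory:
    def __init__(self):
        self.slow = {}           # page id -> data held in slow memory
        self.fast = {}           # page id -> data held in fast memory
        self.page_table = {}     # page id -> "slow" or "fast"
        self.access_counts = Counter()

    def add_page(self, pid, data):
        self.slow[pid] = data
        self.page_table[pid] = "slow"

    def access(self, pid):
        if self.page_table[pid] == "slow":
            # Step 1: monitor slow memory for frequently accessed pages.
            self.access_counts[pid] += 1
            if self.access_counts[pid] >= HOT_THRESHOLD:
                # Step 2: copy the hot page from slow to fast memory.
                self.fast[pid] = self.slow[pid]
                # Step 3: update the page table to reflect the new location.
                self.page_table[pid] = "fast"
            return self.slow[pid]
        return self.fast[pid]


mem = HybridMemory()
mem.add_page(1, "payload")
for _ in range(3):
    mem.access(1)
assert mem.page_table[1] == "fast"   # page was promoted after 3 accesses
```

Because only the page-table entry changes, subsequent accesses through the virtual address transparently hit the fast-memory copy.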
142 citations
TL;DR: This paper surveys topics that presently define the state of the art in parallel simulation, and includes discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.
Abstract: This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.
142 citations
IBM
TL;DR: The goal is to quantify the floating point, memory, I/O and communication requirements of highly parallel scientific applications that perform explicit communication and develop analytical models for the effects of changing the problem size and the degree of parallelism.
Abstract: This paper studies the behavior of scientific applications running on distributed memory parallel computers. Our goal is to quantify the floating point, memory, I/O and communication requirements of highly parallel scientific applications that perform explicit communication. In addition to quantifying these requirements for fixed problem sizes and numbers of processors, we develop analytical models for the effects of changing the problem size and the degree of parallelism for several of the applications. We use the results to evaluate the trade-offs in the design of multicomputer architectures.
141 citations