scispace - formally typeset
Topic

Memory management

About: Memory management is a research topic. Over its lifetime, 16,743 publications have been published within this topic, receiving 312,028 citations. The topic is also known as: memory allocation.


Papers
Proceedings ArticleDOI
18 Jun 2017
TL;DR: This paper proposes an ultra-efficient approximate processing in-memory architecture, called APIM, which exploits the analog characteristics of non-volatile memories to support addition and multiplication inside the crossbar memory, while storing the data.
Abstract: Recent years have witnessed rapid growth in the Internet of Things (IoT). This network of billions of devices generates and exchanges huge amounts of data. Limited cache capacity and memory bandwidth make transferring and processing such data on traditional CPUs and GPUs highly inefficient, both in terms of energy consumption and delay. However, many IoT applications are statistical at heart and can tolerate some inaccuracy in their computation. This enables designers to reduce the complexity of processing by approximating results to a desired accuracy. In this paper, we propose an ultra-efficient approximate processing in-memory architecture, called APIM, which exploits the analog characteristics of non-volatile memories to support addition and multiplication inside the crossbar memory where the data is stored. The proposed design eliminates the overhead of transferring data to the processor by virtually bringing the processor inside the memory. APIM dynamically configures the precision of computation for each application in order to tune the level of accuracy at runtime. Our experimental evaluation, running six general OpenCL applications, shows that the proposed design achieves up to 20× performance improvement and a 480× improvement in energy-delay product while ensuring acceptable quality of service. In exact mode, it achieves 28× energy savings and a 4.8× speedup compared to state-of-the-art GPU cores.

79 citations
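APIM's runtime-configurable precision can be illustrated with a small software model (an assumed sketch for intuition only, not the paper's analog crossbar design): truncating low-order operand bits trades accuracy for reduced computation cost, and the number of dropped bits plays the role of the per-application precision knob.

```python
def truncate(x: int, drop_bits: int) -> int:
    """Zero out the low-order bits of x (models reduced operand precision)."""
    return (x >> drop_bits) << drop_bits

def approx_mul(a: int, b: int, drop_bits: int) -> int:
    """Multiply after truncating both operands to the configured precision.

    drop_bits = 0 corresponds to the architecture's exact mode.
    """
    return truncate(a, drop_bits) * truncate(b, drop_bits)

# The relative error shrinks as fewer bits are dropped.
exact = 1234 * 5678
for drop in (8, 4, 0):
    approx = approx_mul(1234, 5678, drop)
    rel_err = abs(exact - approx) / exact
    print(f"drop {drop} bits: {approx} (relative error {rel_err:.4f})")
```

Tuning `drop_bits` per application mirrors how APIM selects a precision level that keeps the output within an acceptable quality-of-service bound.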

Patent
13 Feb 2001
TL;DR: In this article, a system and method for managing real memory usage is presented, comprising: a compressed memory device driver for receiving real memory usage information from the compressed memory hardware controller, the information including a characterization of the real memory usage state; and a compression management subsystem for monitoring memory usage and initiating memory allocation and memory recovery in accordance with the memory usage state, the subsystem including a mechanism for adjusting memory usage thresholds that control memory state changes.
Abstract: In a computer system having an operating system and a compressed main memory defining a physical memory and a real memory characterized as an amount of main memory as seen by a processor, and including a compressed memory hardware controller device for controlling processor access to the compressed main memory, there is provided a system and method for managing real memory usage comprising: a compressed memory device driver for receiving real memory usage information from the compressed memory hardware controller, the information including a characterization of the real memory usage state; and a compression management subsystem for monitoring the memory usage and initiating memory allocation and memory recovery in accordance with the memory usage state, the subsystem including a mechanism for adjusting memory usage thresholds for controlling memory state changes. Such a system and method is implemented in software operating such that control of the real memory usage in the computer system is transparent to the operating system.

79 citations
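The threshold-driven behavior the patent describes can be sketched as a small state machine (a hypothetical model with assumed names and threshold values, not the patented implementation): usage reports from the memory controller drive transitions between a normal state and a recovery state, with separate upper and lower thresholds providing hysteresis.

```python
class CompressionManager:
    """Toy sketch of threshold-driven real-memory management.

    When utilization crosses the upper threshold, memory recovery is
    initiated; only after it falls back below the lower threshold does
    normal allocation resume. Both thresholds are adjustable.
    """

    def __init__(self, capacity: int, low: float = 0.70, high: float = 0.90):
        self.capacity = capacity
        self.low, self.high = low, high
        self.state = "normal"

    def report_usage(self, used: int) -> str:
        """Accept a usage report (as if from the memory controller)."""
        util = used / self.capacity
        if util >= self.high:
            self.state = "recovering"   # initiate memory recovery
        elif util <= self.low:
            self.state = "normal"       # safe to allocate again
        # Between the thresholds the previous state is kept (hysteresis),
        # which avoids oscillating at a single cutoff point.
        return self.state
```

The gap between `low` and `high` is what makes state changes stable: a system hovering near one threshold does not flip back and forth on every report.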

Journal ArticleDOI
TL;DR: A new algorithm called Priority Adaptation Query Resource Scheduling (PAQRS) is introduced and evaluated for handling both single class and multiclass query workloads and confirms that PAQRS is very effective for real-time query scheduling.
Abstract: In recent years, a demand for real-time systems that can manipulate large amounts of shared data has led to the emergence of real-time database systems (RTDBS) as a research area. This paper focuses on the problem of scheduling queries in RTDBSs. We introduce and evaluate a new algorithm called Priority Adaptation Query Resource Scheduling (PAQRS) for handling both single class and multiclass query workloads. The performance objective of the algorithm is to minimize the number of missed deadlines, while at the same time ensuring that any deadline misses are scattered across the different classes according to an administratively-defined miss distribution. This objective is achieved by dynamically adapting the system's admission, memory allocation, and priority assignment policies according to its current resource configuration and workload characteristics. A series of experiments confirms that PAQRS is very effective for real-time query scheduling.

79 citations
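The miss-distribution objective can be illustrated with a toy balancer (hypothetical names and logic, not the actual PAQRS algorithm, which also adapts admission and memory allocation): deadline misses are tallied per query class, and the class furthest above its administratively defined target share is flagged for a priority boost.

```python
from collections import defaultdict

class MissBalancer:
    """Toy sketch of steering deadline misses toward a target distribution."""

    def __init__(self, target_share: dict):
        # e.g. {"gold": 0.1, "silver": 0.9}: gold should absorb
        # only 10% of all deadline misses.
        self.target = target_share
        self.misses = defaultdict(int)

    def record_miss(self, cls: str) -> None:
        self.misses[cls] += 1

    def class_to_boost(self) -> str:
        """Class with the largest excess of observed over target miss share."""
        total = sum(self.misses.values()) or 1
        return max(self.target,
                   key=lambda c: self.misses[c] / total - self.target[c])
```

Raising the priority of the most over-penalized class pushes future misses toward the under-penalized ones, nudging the observed distribution back toward the target.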

Journal ArticleDOI
TL;DR: This article surveys the buffer management methods that have been proposed for shared-memory packet switches; the policies' strengths and weaknesses are described, and their performance is evaluated using computer simulations.
Abstract: In the shared-memory switch architecture, output links share a single large memory, in which logical FIFO queues are assigned to each link. Although memory sharing can provide a better queuing performance than physically separated buffers, it requires carefully designed buffer management schemes for a fair and robust operation. This article presents a survey of the buffer management methods that have been proposed for shared-memory packet switches. Several buffer management policies are described, and their strengths and weaknesses are examined. The performances of various policies are evaluated using computer simulations. A comparison of the most important schemes is obtained with the help of the simulation results and the results provided in the literature. The survey concludes with a discussion of the possible future research areas related to shared-memory ATM switches.

79 citations
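One widely cited policy in this literature is dynamic-threshold sharing, which can be sketched as follows (a minimal model with assumed parameters, not any particular switch's implementation): an arriving cell for output port p is accepted only if p's queue length is below alpha times the currently unused buffer space, so no single port can monopolize the shared memory.

```python
class DynamicThresholdBuffer:
    """Toy sketch of dynamic-threshold buffer sharing in a shared-memory switch."""

    def __init__(self, total_cells: int, alpha: float = 1.0):
        self.total = total_cells
        self.alpha = alpha
        self.queues = {}            # per-output-port logical FIFO lengths

    def free(self) -> int:
        return self.total - sum(self.queues.values())

    def enqueue(self, port: int) -> bool:
        """Accept a cell for `port` only if its queue is under the threshold."""
        qlen = self.queues.get(port, 0)
        if self.free() > 0 and qlen < self.alpha * self.free():
            self.queues[port] = qlen + 1
            return True
        return False                # cell dropped

    def dequeue(self, port: int) -> None:
        if self.queues.get(port, 0) > 0:
            self.queues[port] -= 1
```

Because the threshold shrinks as the buffer fills, a congested port is cut off while spare cells remain for the other ports, which is the fairness property physically separated buffers get for free.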

Proceedings ArticleDOI
24 Feb 2008
TL;DR: CHiMPS is a C-based accelerator compiler for hybrid CPU-FPGA computing platforms that inputs generic ANSI C code and automatically generates VHDL blocks for an FPGA.
Abstract: This poster describes CHiMPS, a toolflow that aims to provide software developers with a way to program hybrid CPU-FPGA platforms using familiar tools, languages, and techniques. CHiMPS starts with C and produces a specialized spatial dataflow architecture that supports coherent caches and the shared-memory programming model. The toolflow is designed to abstract away the complex details of data movement and separate memories on the hybrid platforms, as well as take advantage of memory management and computation techniques unique to reconfigurable hardware. This poster focuses on the memory design for CHiMPS, particularly the use of numerous small caches customized for various phases of program execution. The poster also addresses area vs. performance tradeoffs for various configurations. Applications compiled using CHiMPS show performance improvements of more than 36x on simple compute-intensive kernels, and 4.3x on the difficult-to-parallelize STSWM application without any special optimizations compared to running only on the CPU. The toolflow supports full ANSI C, and produces hardware that runs on platforms that are expected to be available within one year.

79 citations
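The benefit of numerous small, stream-dedicated caches can be shown with a toy model (assumed addresses and cache sizes chosen for illustration, not CHiMPS's actual memory design): two access streams whose addresses collide in one shared direct-mapped cache run conflict-free when each gets its own small cache.

```python
def hits(addresses, lines: int) -> int:
    """Hit count for a tiny direct-mapped cache with `lines` one-word lines."""
    cache = [None] * lines
    h = 0
    for a in addresses:
        idx = a % lines             # direct mapping: address modulo line count
        if cache[idx] == a:
            h += 1
        else:
            cache[idx] = a          # miss: fill the line
    return h

# Two streams whose addresses collide modulo 4 but not modulo 2.
stream_a = [0, 1] * 16
stream_b = [4, 5] * 16
interleaved = [x for pair in zip(stream_a, stream_b) for x in pair]

shared = hits(interleaved, 4)                       # one 4-line shared cache
split = hits(stream_a, 2) + hits(stream_b, 2)       # two dedicated 2-line caches
print(shared, split)   # the shared cache thrashes; the split caches mostly hit
```

The same total capacity, partitioned per stream, eliminates the conflict misses, which is the intuition behind customizing many small caches to distinct phases of program execution.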


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations, 94% related
Scalability: 50.9K papers, 931.6K citations, 92% related
Server: 79.5K papers, 1.4M citations, 89% related
Virtual machine: 43.9K papers, 718.3K citations, 87% related
Scheduling (computing): 78.6K papers, 1.3M citations, 86% related
Performance Metrics
No. of papers in the topic in previous years:
2023: 33
2022: 88
2021: 629
2020: 467
2019: 461
2018: 591