Topic

Memory management

About: Memory management is a research topic. Over its lifetime, 16,743 publications have been published within this topic, receiving 312,028 citations. The topic is also known as: memory allocation.


Papers
Patent
10 May 2004
TL;DR: In this article, the memory modules are coupled serially in a chain to the host via a plurality of memory links, and each memory link may include an uplink for conveying transactions toward the host and a downlink for transferring transactions originating at the host to a next memory module in the chain.
Abstract: A system including a host coupled to a serially connected chain of memory modules. In one embodiment, each of the memory modules includes a memory control hub for controlling access to a plurality of memory chips on the memory module. The memory modules are coupled serially in a chain to the host via a plurality of memory links. Each memory link may include an uplink for conveying transactions toward the host and a downlink for conveying transactions originating at the host to a next memory module in the chain. The uplink and the downlink may convey transactions using packets that include control and configuration packets and memory access packets. The memory control hub may convey a transaction received on a first downlink of a first memory link on a second downlink of a second memory link independent of decoding the transaction.
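The chained topology above can be sketched in a few lines: each module owns an address range and forwards packets it does not own down the next link, mirroring the hub's ability to pass a transaction along without fully decoding it. This is a minimal illustrative model; the class, field, and packet names are assumptions, not taken from the patent.

```python
# Hypothetical sketch of a serially chained memory-module topology.
# Each module serves its own address range and forwards other packets
# to the next module on the downlink.

class MemoryModule:
    """A module that owns [base, base + size) and forwards the rest downstream."""

    def __init__(self, base, size, downstream=None):
        self.base = base
        self.size = size
        self.downstream = downstream
        self.cells = {}

    def handle(self, packet):
        addr = packet["addr"]
        # Not our range: pass the packet down the next link untouched,
        # analogous to forwarding independent of decoding.
        if not (self.base <= addr < self.base + self.size):
            if self.downstream is None:
                raise ValueError("address not mapped anywhere in the chain")
            return self.downstream.handle(packet)
        if packet["op"] == "write":
            self.cells[addr] = packet["data"]
            return {"status": "ok"}
        return {"status": "ok", "data": self.cells.get(addr, 0)}


# The host talks only to the head of the chain.
m2 = MemoryModule(base=0x2000, size=0x1000)
m1 = MemoryModule(base=0x1000, size=0x1000, downstream=m2)

m1.handle({"op": "write", "addr": 0x2004, "data": 42})  # lands in m2
resp = m1.handle({"op": "read", "addr": 0x2004})
```

A real link layer would of course carry framed control, configuration, and memory-access packets rather than Python dicts; the point is only the range check plus blind forwarding.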

123 citations

Proceedings ArticleDOI
Dan Kondratyuk1, Liangzhe Yuan1, Yandong Li1, Li Zhang1, Mingxing Tan1, Matthew Brown1, Boqing Gong1 
21 Mar 2021
TL;DR: MoViNets, proposed in this paper, use a three-step approach to improve computational efficiency while substantially reducing the peak memory usage of 3D convolutional neural networks, allowing them to operate on streaming video for online inference.
Abstract: We present Mobile Video Networks (MoViNets), a family of computation and memory efficient video networks that can operate on streaming video for online inference. 3D convolutional neural networks (CNNs) are accurate at video recognition but require large computation and memory budgets and do not support online inference, making them difficult to work on mobile devices. We propose a three-step approach to improve computational efficiency while substantially reducing the peak memory usage of 3D CNNs. First, we design a video network search space and employ neural architecture search to generate efficient and diverse 3D CNN architectures. Second, we introduce the Stream Buffer technique that decouples memory from video clip duration, allowing 3D CNNs to embed arbitrary-length streaming video sequences for both training and inference with a small constant memory footprint. Third, we propose a simple ensembling technique to improve accuracy further without sacrificing efficiency. These three progressive techniques allow MoViNets to achieve state-of-the-art accuracy and efficiency on the Kinetics, Moments in Time, and Charades video action recognition datasets. For instance, MoViNet-A5-Stream achieves the same accuracy as X3D-XL on Kinetics 600 while requiring 80% fewer FLOPs and 65% less memory. Code is available at https://github.com/google-research/movinet.
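The Stream Buffer idea described above can be illustrated with a causal temporal convolution: instead of holding a whole clip in memory, the model carries only the last k-1 frames of activations between clips, so peak memory is constant regardless of video length. This is a hedged, simplified sketch (a 1D convolution standing in for a 3D CNN layer); function and variable names are my own, not from the MoViNet code.

```python
import numpy as np

def causal_temporal_conv(x, kernel, buffer):
    """Causal convolution over the time axis. `buffer` carries the last
    k-1 frames from the previous clip, so each clip is processed with a
    small, constant memory footprint."""
    k = len(kernel)
    padded = np.concatenate([buffer, x])
    out = np.array([padded[t:t + k] @ kernel for t in range(len(x))])
    new_buffer = padded[-(k - 1):]  # state handed to the next clip
    return out, new_buffer

kernel = np.array([0.25, 0.5, 0.25])
video = np.arange(12, dtype=float)

# Offline: process the full video at once (zero-initialized history).
full, _ = causal_temporal_conv(video, kernel, np.zeros(2))

# Streaming: process 4-frame clips, carrying the buffer between them.
buf = np.zeros(2)
chunks = []
for clip in video.reshape(3, 4):
    y, buf = causal_temporal_conv(clip, kernel, buf)
    chunks.append(y)
stream = np.concatenate(chunks)
```

The streaming result matches the offline result exactly, which is the decoupling of memory from clip duration that the abstract describes.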

123 citations

Patent
01 Aug 2008
TL;DR: In this article, a system and method for employing memory forensic techniques to determine operating system type, memory management configuration, and virtual machine status on a running computer system is presented. The system applies advanced techniques in a way that makes them usable and accessible to Information Technology professionals who may not be versed in the specifics of memory forensic methodologies and theory.

Abstract: A system and method for employing memory forensic techniques to determine operating system type, memory management configuration, and virtual machine status on a running computer system. The system applies advanced techniques in a way that makes them usable and accessible to Information Technology professionals who may not necessarily be versed in the specifics of memory forensic methodologies and theory.
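One common memory-forensic building block for determining operating system type is scanning a raw memory image for known kernel version-string signatures. The sketch below is purely illustrative of that idea; the signature table and function names are assumptions, not the patent's actual method.

```python
# Hypothetical sketch: guessing the operating system of a raw memory
# image by scanning for well-known kernel version strings.

OS_SIGNATURES = {
    b"Linux version": "Linux",
    b"Windows NT": "Windows",
    b"Darwin Kernel Version": "macOS",
}

def identify_os(image: bytes) -> str:
    """Return the first OS whose signature appears in the memory image,
    or "unknown" if none match."""
    for sig, name in OS_SIGNATURES.items():
        if sig in image:
            return name
    return "unknown"

# A toy "memory dump" with a Linux banner embedded in it.
dump = b"\x00" * 64 + b"Linux version 5.15.0-generic" + b"\x00" * 64
```

Production tools combine many such heuristics (page-table layouts, known structure offsets) rather than a single string scan, which is exactly the complexity the patent aims to hide from non-specialist IT staff.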

123 citations

Patent
07 Dec 2006
TL;DR: In this paper, a multi-stage video memory management system for a vehicle event recorder is provided that includes the management of a plurality of stage memories and the transfer of data therebetween.
Abstract: A multi-stage video memory management system for a vehicle event recorder is provided that includes the management of a plurality of stage memories and the transfer of data therebetween. A managed loop memory receives data from a video camera in real-time and continuously overwrites expired data determined to be no longer useful. Data in the managed loop memory is transferred to a more stable memory in response to an event to be recorded. An event trigger first produces a signal causing data transfer between the managed loop memory and an on-board, high-capacity buffer memory, suitable for storing video series associated with a plurality of events. Subsequently, a permanent data store receives data from the high-capacity buffer memory whenever the system reaches a predetermined distance from a download station.
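The staged pipeline above maps naturally onto a fixed-size ring buffer that overwrites expired frames, an event trigger that snapshots the ring into a larger buffer, and a download step that flushes the buffer to permanent storage. A minimal sketch, with class and method names that are assumptions rather than the patent's terminology:

```python
from collections import deque

class EventRecorder:
    """Three-stage memory: loop memory -> event buffer -> permanent store."""

    def __init__(self, loop_frames=8):
        self.loop = deque(maxlen=loop_frames)  # managed loop memory
        self.buffer = []                       # on-board high-capacity buffer
        self.permanent = []                    # permanent data store

    def capture(self, frame):
        # deque with maxlen overwrites the oldest (expired) frame.
        self.loop.append(frame)

    def on_event(self):
        # Event trigger: copy the loop's current contents into the buffer.
        self.buffer.append(list(self.loop))

    def at_download_station(self):
        # Within range of the download station: flush buffered clips.
        self.permanent.extend(self.buffer)
        self.buffer.clear()

rec = EventRecorder(loop_frames=4)
for frame in range(10):
    rec.capture(frame)
rec.on_event()             # snapshot keeps only the last 4 frames
rec.at_download_station()  # move the event clip to permanent storage
```

The `deque(maxlen=...)` gives the continuous-overwrite behavior of the managed loop memory for free; real systems would store frame data rather than integers.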

123 citations

Journal Article
TL;DR: This paper gives an overview of self-tuning methods for a spectrum of memory management issues, ranging from traditional caching to exploiting distributed memory in a server cluster and speculative prefetching in a Web-based system.
Abstract: Although today’s computers provide huge amounts of main memory, the ever-increasing load of large data servers, imposed by resource-intensive decision-support queries and accesses to multimedia and other complex data, often leads to memory contention and may result in severe performance degradation. Therefore, careful tuning of memory management is crucial for heavy-load data servers. This paper gives an overview of self-tuning methods for a spectrum of memory management issues, ranging from traditional caching to exploiting distributed memory in a server cluster and speculative prefetching in a Web-based system. The common, fundamental elements in these methods include on-line load tracking, near-future access prediction based on stochastic models and the available on-line statistics, and dynamic and automatic adjustment of control parameters in a feedback loop.

1 The Need for Memory Tuning

Although memory is relatively inexpensive and modern computer systems are amply equipped with it, memory contention on heavily loaded data servers is a common cause of performance problems. The reasons are threefold. First, servers operate with a multitude of complex software, ranging from the operating system to database systems, object request brokers, and application services; much of this software has been written to quickly penetrate the market rather than to optimize memory usage and other resource consumption. Second, the distinctive characteristic and key problem of a data server is that it operates in multi-user mode, serving many clients concurrently or in parallel; a server therefore needs to divide up its resources among the simultaneously active threads for executing queries, transactions, stored procedures, Web applications, etc., and often multiple data-intensive decision-support queries compete for memory. Third, the data volumes that need to be managed by a server seem to be growing without limits.
One part of this trend is that multimedia data types such as images, speech, or video have become more popular and are being merged into conventional-data applications (e.g., images or videos for insurance claims). The other
Copyright 1999 IEEE. Bulletin of the IEEE Computer Society Technical Committee on Data Engineering
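The feedback loop the abstract describes — observe on-line statistics, compare against a goal, adjust a control parameter — can be sketched as a simple proportional controller over cache size. The target hit rate, gain, and bounds here are assumed values for illustration, not figures from the paper.

```python
def tune_cache_size(size, hits, accesses, target=0.9, gain=1.0,
                    min_size=64, max_size=65536):
    """One iteration of a self-tuning feedback loop: compare the
    observed cache hit rate with a target and adjust the cache size
    proportionally to the error, clamped to sane bounds."""
    hit_rate = hits / accesses if accesses else target
    error = target - hit_rate          # positive -> cache is too small
    new_size = int(size * (1 + gain * error))
    return max(min_size, min(max_size, new_size))

# Hit rate below target: the loop grows the cache.
grown = tune_cache_size(1024, hits=700, accesses=1000)

# Hit rate above target: the loop reclaims memory.
shrunk = tune_cache_size(1024, hits=980, accesses=1000)
```

The papers surveyed add the two missing pieces this toy omits: stochastic models that predict near-future accesses rather than reacting to past hit rates alone, and coordination across competing consumers of the same memory pool.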

123 citations


Network Information
Related Topics (5)
Cache
59.1K papers, 976.6K citations
94% related
Scalability
50.9K papers, 931.6K citations
92% related
Server
79.5K papers, 1.4M citations
89% related
Virtual machine
43.9K papers, 718.3K citations
87% related
Scheduling (computing)
78.6K papers, 1.3M citations
86% related
Performance Metrics
No. of papers in the topic in previous years
Year	Papers
2023	33
2022	88
2021	629
2020	467
2019	461
2018	591