Topic

Encoding (memory)

About: Encoding (memory) is a research topic. Over the lifetime, 7,547 publications have been published within this topic, receiving 120,214 citations. The topic is also known as: memory encoding & encoding of memories.


Papers
Journal ArticleDOI
TL;DR: In tasks that require actively manipulating information, persistent activity naturally emerges from learning, and the amount of persistent activity scales with the degree of manipulation required, suggesting that persistent activity can vary markedly between short-term memory tasks with different cognitive demands.
Abstract: Recently it has been proposed that information in working memory (WM) may not always be stored in persistent neuronal activity but can be maintained in 'activity-silent' hidden states, such as synaptic efficacies endowed with short-term synaptic plasticity. To test this idea computationally, we investigated recurrent neural network models trained to perform several WM-dependent tasks, in which WM representation emerges from learning and is not a priori assumed to depend on self-sustained persistent activity. We found that short-term synaptic plasticity can support the short-term maintenance of information, provided that the memory delay period is sufficiently short. However, in tasks that require actively manipulating information, persistent activity naturally emerges from learning, and the amount of persistent activity scales with the degree of manipulation required. These results provide insight into the current debate on WM encoding and suggest that persistent activity can vary markedly between short-term memory tasks with different cognitive demands.

137 citations
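
To make the 'activity-silent' mechanism concrete, the sketch below simulates a small rate-based recurrent network whose synapses carry Mongillo-style short-term plasticity. The network, parameter values, and stimulus protocol are illustrative assumptions, not the authors' trained models; the point is only that a brief cue can leave a trace in the synaptic variables after firing rates decay.

import numpy as np

rng = np.random.default_rng(0)
N, dt = 100, 1e-3                       # neurons, time step (s)
tau_r = 0.02                            # firing-rate time constant (s)
tau_f, tau_d, U = 1.5, 0.2, 0.2         # facilitation / depression constants
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))   # random recurrent weights

r = np.zeros(N)                         # firing rates
u = np.full(N, U)                       # release probability (facilitation variable)
x = np.ones(N)                          # available resources (depression variable)

def step(r, u, x, I_ext):
    # Effective recurrent strength is gated presynaptically by u * x.
    W_eff = W * (u * x)[None, :]
    r = r + dt / tau_r * (-r + np.maximum(W_eff @ r + I_ext, 0.0))
    u = u + dt * ((U - u) / tau_f + U * (1.0 - u) * r)
    x = x + dt * ((1.0 - x) / tau_d - u * x * r)
    return r, u, x

cue = np.zeros(N); cue[:20] = 5.0       # brief cue to a subpopulation
for t in range(2000):                   # 2 s of simulated time
    r, u, x = step(r, u, x, cue if t < 200 else 0.0)

# After the delay the rates have decayed, but the cued synapses remain
# facilitated (u above its baseline U): an activity-silent memory trace.
print(r[:20].mean(), u[:20].mean(), u[20:].mean())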

Journal ArticleDOI
TL;DR: Activity of a distributed paralimbic system, centered on the left hippocampus, correlated selectively with predictability as measured with mutual information, providing clear evidence that the brain is sensitive to the probabilistic context in which events are encountered.

137 citations
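
As a toy illustration of "predictability as measured with mutual information", the snippet below estimates the mutual information between each event and the event preceding it from an observed sequence. The sequences and the plug-in estimator are hypothetical stand-ins, not the study's stimuli or analysis pipeline.

import numpy as np
from collections import Counter

def mutual_information(seq):
    # Plug-in estimate of I(X_t ; X_{t-1}) in bits from one observed sequence.
    pairs = list(zip(seq[:-1], seq[1:]))
    n = len(pairs)
    p_joint = Counter(pairs)
    p_prev = Counter(seq[:-1])
    p_curr = Counter(seq[1:])
    mi = 0.0
    for (a, b), c in p_joint.items():
        p_ab = c / n
        mi += p_ab * np.log2(p_ab / ((p_prev[a] / n) * (p_curr[b] / n)))
    return mi

predictable = list("ABABABABABABABAB")                       # fully predictable
random_seq = list(np.random.default_rng(1).choice(list("AB"), size=16))
print(mutual_information(predictable), mutual_information(random_seq))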

Patent
15 May 1995
TL;DR: In this article, a motion vector detection circuit detects the motion vector for each macro-block between an odd field and an even field, and a decision circuit decides the type of the encoding system.
Abstract: A motion vector detection circuit detects the motion vector for each macro-block between an odd field and an even field. An encoding system decision circuit decides the type of encoding system, that is, whether the encoding is field-based or frame-based, based on the median of the motion vectors. A control circuit controls gates and changeover switches, in accordance with the encoding system type as decided by the decision circuit, to generate a field-based or frame-based reference picture from buffer memories. The circuitry from an additive node to a VLC circuit finds difference data between the reference picture and the picture to be encoded, transforms the difference data by discrete cosine transform, and variable-length encodes the transformed data. The VLC circuit sets the encoding system type as a flag in a header of a predetermined hierarchical layer of the bit stream. In this way, any interlaced picture may be encoded efficiently, whether it contains little motion, abundant motion, or a combination of both. A picture data decoding device detects the flag and switches between field-based and frame-based decoding accordingly when reproducing the picture data.

137 citations
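
A toy sketch of the decision step described above: choosing field-based or frame-based encoding for a macro-block from the median of its motion vectors. The threshold and the exact rule here are assumptions for illustration, not the patent's specified criterion.

import numpy as np

def choose_encoding(motion_vectors, threshold=4.0):
    # motion_vectors: (dx, dy) vectors measured between the odd and even
    # fields of one macro-block; threshold is an illustrative assumption.
    mv = np.asarray(motion_vectors, dtype=float)
    median_magnitude = np.median(np.hypot(mv[:, 0], mv[:, 1]))
    # Large inter-field motion -> the two fields differ, so encode them
    # separately (field-based); otherwise encode the combined frame.
    return "field" if median_magnitude > threshold else "frame"

print(choose_encoding([(0, 1), (1, 0), (0, 0)]))   # little motion   -> 'frame'
print(choose_encoding([(8, 6), (7, 5), (9, 6)]))   # abundant motion -> 'field'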

Journal ArticleDOI
TL;DR: Both exploratory and confirmatory factor analyses distinguished short-term memory tasks from working memory tasks, and performance on working memory tasks was related to word decoding skill but performance on short-term memory tasks was not.
Abstract: The aim of the present research was to determine whether short-term memory and working memory could be distinguished. In two studies, 7- to 13-year-olds (N = 155, N = 132) were administered tasks thought to assess short-term memory as well as tasks thought to assess working memory. Both exploratory and confirmatory factor analyses distinguished short-term memory tasks from working memory tasks. In addition, performance on working memory tasks was related to word decoding skill but performance on short-term memory tasks was not. Finally, performance on both short-term memory and working memory tasks was associated with age-related increases in processing speed. Results are discussed in relation to models of short-term and working memory.

136 citations
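
The analysis pattern in this abstract can be sketched on simulated data: an exploratory factor analysis should pull the short-term memory (STM) tasks and working memory (WM) tasks onto separate factors, and only the WM factor should predict word decoding. Everything below is simulated with assumed loadings and effect sizes, not the study's data or its confirmatory models.

import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 150
stm_ability = rng.normal(size=n)        # latent short-term memory ability
wm_ability = rng.normal(size=n)         # latent working memory ability
# Three tasks load on each latent ability, plus task-specific noise.
X = np.column_stack(
    [stm_ability + rng.normal(scale=0.5, size=n) for _ in range(3)] +
    [wm_ability + rng.normal(scale=0.5, size=n) for _ in range(3)])
decoding = 0.7 * wm_ability + rng.normal(scale=0.5, size=n)   # tracks WM only

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
print(np.round(fa.components_, 2))      # STM and WM tasks separate onto two factors

scores = fa.transform(X)                # per-child factor scores
reg = LinearRegression().fit(scores, decoding)
print(np.round(reg.coef_, 2))           # only the WM factor has a sizeable coefficient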

Proceedings ArticleDOI
02 Jun 2018
TL;DR: This paper investigates widely used DNNs and finds that the major contributors to memory footprint are intermediate layer outputs (feature maps), and introduces a framework for DNN-layer-specific optimizations that significantly reduce this source of main memory pressure on GPUs.
Abstract: Training modern deep neural networks (DNNs) typically relies on GPUs to train complex hundred-layer deep networks. A significant problem facing both researchers and industry practitioners is that, as the networks get deeper, the available GPU main memory becomes a primary bottleneck, limiting the size of the networks that can be trained. In this paper, we investigate widely used DNNs and find that the major contributors to memory footprint are intermediate layer outputs (feature maps). We then introduce a framework for DNN-layer-specific optimizations (e.g., convolution, ReLU, pool) that significantly reduce this source of main memory pressure on GPUs. We find that a feature map typically has two uses that are spread far apart temporally. Our key approach is to store an encoded representation of feature maps for this temporal gap and decode this data for use in the backward pass; the full-fidelity feature maps are used in the forward pass and relinquished immediately. Based on this approach, we present Gist, our system that employs two classes of layer-specific encoding schemes -- lossless and lossy -- to exploit existing value redundancy in DNN training to significantly reduce the memory consumption of targeted feature maps. For example, one insight is that by taking advantage of the computational nature of back propagation from pool to ReLU layer, we can store the intermediate feature map using just 1 bit instead of 32 bits per value. We deploy these mechanisms in a state-of-the-art DNN framework (CNTK) and observe that Gist reduces the memory footprint by up to 2X across 5 state-of-the-art image classification DNNs, with an average of 1.8X with only 4% performance overhead. We also show that further software (e.g., CuDNN) and hardware (e.g., dynamic allocation) optimizations can result in an even larger footprint reduction (up to 4.1X).

136 citations
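
A minimal NumPy sketch of the 1-bit example in the abstract: when a ReLU feeds a pooling layer, the ReLU backward pass only needs to know which activations were positive, so the stashed feature map can be kept as a packed 1-bit mask and decoded just before the backward pass. This illustrates the idea only; it is not Gist's or CNTK's implementation.

import numpy as np

def relu_forward(x):
    y = np.maximum(x, 0.0)
    # Stash a 1-bit-per-element mask (~32x smaller than float32 activations).
    stash = (np.packbits(x > 0), x.shape)
    return y, stash

def relu_backward(grad_y, stash):
    encoded, shape = stash
    mask = np.unpackbits(encoded)[:np.prod(shape)].reshape(shape)
    return grad_y * mask                # dL/dx = dL/dy where x > 0, else 0

x = np.random.randn(2, 8, 8).astype(np.float32)
y, stash = relu_forward(x)
grad_x = relu_backward(np.ones_like(y), stash)
print(x.nbytes, stash[0].nbytes)        # 512 bytes of floats vs 16 bytes of mask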


Network Information
Related Topics (5)
Artificial neural network
207K papers, 4.5M citations
83% related
Deep learning
79.8K papers, 2.1M citations
83% related
Feature extraction
111.8K papers, 2.1M citations
82% related
Convolutional neural network
74.7K papers, 2M citations
81% related
Cluster analysis
146.5K papers, 2.9M citations
81% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    1,083
2022    2,253
2021    450
2020    378
2019    358
2018    363