Topic

Encoding (memory)

About: Encoding (memory) is a research topic. Over its lifetime, 7,547 publications have been published within this topic, receiving 120,214 citations. The topic is also known as: memory encoding & encoding of memories.


Papers
Journal Article
TL;DR: Across three paired-associate learning experiments, recall of four-digit number responses to word stimuli favored the most significant digit over the least significant (magnitude encoding) rather than exhibiting typical bowed serial position functions (nominal encoding), which is interpreted as evidence for two kinds of number coding in semantic memory.
Abstract: Across three paired-associate learning experiments, the recall of four-digit number responses to word stimuli favored the most significant digit over the least significant (magnitude encoding) rather than exhibiting typical bowed serial position functions (nominal encoding). Only with instructions emphasizing exact recall of each individual digit did recall functions revert to bowed curves. The results are interpreted as evidence for two kinds of number coding in semantic memory.

31 citations

01 Jan 2010
TL;DR: It is concluded that random permutations are a scalable alternative to circular convolution with several desirable properties.
Abstract: Encoding information about the order in which words typically appear has been shown to improve the performance of high-dimensional semantic space models. This requires an encoding operation capable of binding together vectors in an order-sensitive way, and efficient enough to scale to large text corpora. Although both circular convolution and random permutations have been enlisted for this purpose in semantic models, these operations have never been systematically compared. In Experiment 1 we compare their storage capacity and probability of correct retrieval; in Experiments 2 and 3 we compare their performance on semantic tasks when integrated into existing models. We conclude that random permutations are a scalable alternative to circular convolution with several desirable properties.
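The comparison above hinges on two binding operations. As a rough illustration (not the authors' code; the dimensionality, seed, and similarity checks are arbitrary assumptions), the sketch below binds a pair of random vectors with circular convolution and with a fixed random permutation, then undoes each binding:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024  # vector dimensionality (arbitrary choice)

def cconv(x, y):
    """Bind two vectors with circular convolution, computed via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

def cconv_inv(x):
    """Approximate inverse under circular convolution (Plate's involution)."""
    return np.concatenate(([x[0]], x[:0:-1]))

# A fixed random permutation is the alternative binding operation;
# unlike circular convolution, its inverse is exact.
perm = rng.permutation(n)
inv_perm = np.argsort(perm)

a = rng.normal(0, 1 / np.sqrt(n), n)
b = rng.normal(0, 1 / np.sqrt(n), n)

cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

bound_conv = cconv(a, b)  # convolution-bound pair
bound_perm = b[perm]      # permutation-bound: a permuted copy of b

print(cos(bound_conv, b))                       # ~0: binding hides b
print(cos(cconv(bound_conv, cconv_inv(a)), b))  # high: unbinding ~recovers b
print(cos(bound_perm, b))                       # ~0: permutation hides b
print(cos(bound_perm[inv_perm], b))             # 1.0: exact inverse
```

The exact, cheap inverse of a permutation, versus the approximate inverse and FFT cost of convolution, is part of what makes permutations attractive at scale.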

31 citations

01 Jan 2010
TL;DR: A proof-of-concept VSA-based model of serial recall implemented in a network of spiking neurons is presented and its ability to successfully encode and decode item sequences is demonstrated.
Abstract: A Spiking Neuron Model of Serial-Order Recall. Feng-Xuan Choo (fchoo@uwaterloo.ca) and Chris Eliasmith (celiasmith@uwaterloo.ca), Center for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada N2L 3G1.

Abstract: Vector symbolic architectures (VSAs) have been used to model the human serial-order memory system for decades. Despite their success, however, none of these models have yet been shown to work in a spiking neuron network. In an effort to take the first step, we present a proof-of-concept VSA-based model of serial-order memory implemented in a network of spiking neurons and demonstrate its ability to successfully encode and decode item sequences. This model also provides some insight into the differences between the cognitive processes of memory encoding and subsequent recall, and establishes a firm foundation on which more complex VSA-based models of memory can be developed. Keywords: serial-order memory; serial-order recall; vector symbolic architectures; holographic reduced representation; population coding; LIF neurons; neural engineering framework.

Introduction. The human memory system is able to perform a multitude of tasks, one of which is the ability to remember and recall sequences of serially ordered items. In human serial recall experiments, subjects are presented items at a fixed interval, typically in the range of two items per second up to one item every 4 seconds. After the entire sequence has been presented, the subjects are asked to recall the items presented to them, either in order (serial recall) or in any order the subject desires (free recall). Plotting the recall accuracy of the subjects, experimenters often obtain a graph with a distinctive U-shape. This shape arises from what are known as the primacy and recency effects. The primacy effect refers to the increase in recall accuracy the closer an item is to the start of the sequence, and the recency effect refers to the same increase in recall accuracy as an item gets closer to the end of the sequence. Many models have been proposed to explain this behaviour in the recall accuracy data. Here we concentrate on one class of models which employ vector symbolic architectures (VSAs) to perform serial memory and recall. Using VSAs to perform serial memory tasks would be insufficient, however, if the VSA-based model cannot be implemented in spiking neurons, and thus cannot be used to explain what the brain is actually doing. In this paper, we therefore present a proof-of-concept VSA-based model of serial recall implemented using spiking neurons.

Vector Symbolic Architecture. There are four core features of vector symbolic architectures. First, information is represented by randomly chosen vectors that are combined in a symbol-like manner. Second, a superposition operation (here denoted with a +) is used to combine vectors such that the result is another vector that is similar to the original input vectors. Third, a binding operation (⊗) is used to combine vectors such that the result is a vector that is dissimilar to the original vectors. Last, an approximate inverse operation (denoted with *, such that A* is the approximate inverse of A) is needed so that previously bound vectors can be unbound:

A ⊗ B ⊗ B* ≈ A

Just like addition and multiplication, the VSA operations are associative, commutative, and distributive. The class of VSA used in this model is the Holographic Reduced Representation (HRR) (Plate, 2003). In this representation, each element of an HRR vector is chosen from a normal distribution with a mean of 0 and a variance of 1/n, where n is the number of elements in the vector. The standard addition operator is used to perform the superposition operation, and the circular convolution operation is used to perform the binding operation. The circular convolution of two vectors can be efficiently computed by utilizing the Fast Fourier Transform (FFT) algorithm:

x ⊗ y = F⁻¹(F(x) ⊙ F(y)),

where F and F⁻¹ are the FFT and inverse FFT operations respectively, and ⊙ is the element-wise multiplication of the two vectors. The circular convolution operation, unlike the standard convolution operation, does not change the dimensionality of the result vector. This makes the HRR extremely suitable for a neural implementation because it means that the dimensionality of the network remains constant regardless of the number of operations performed.

The VSA-based Approach to Serial Memory. There are multiple ways in which VSAs can be used to encode serially ordered items into a memory trace. The CADAM model (Liepa, 1977) provides a simple example of how a sequence of items can be encoded as a single memory trace. In the CADAM model, the sequence containing the items A, B, and C would be encoded as a single memory trace M_ABC as follows:

M_A = A
M_AB = A + A ⊗ B
M_ABC = A + A ⊗ B + A ⊗ B ⊗ C

The model presented in this paper, however, takes inspiration from behavioural data obtained from macaque monkeys. This data suggests that each sequence item is encoded using …
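The HRR operations above map almost line-for-line onto NumPy. The following is a minimal sketch of the CADAM encoding, not the paper's spiking-neuron implementation: it builds the trace M_ABC and recovers the second item by unbinding the prefix A and comparing against a small clean-up vocabulary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512  # HRR dimensionality; elements drawn from N(0, 1/n)

def hrr():
    return rng.normal(0, 1 / np.sqrt(n), n)

def bind(x, y):
    # Circular convolution via FFT: x ⊗ y = F^-1(F(x) ⊙ F(y))
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

def inverse(x):
    # Approximate inverse x*, so that A ⊗ B ⊗ B* ≈ A
    return np.concatenate(([x[0]], x[:0:-1]))

cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Vocabulary of random item vectors.
items = {name: hrr() for name in "ABC"}
A, B, C = items["A"], items["B"], items["C"]

# CADAM-style memory trace for the sequence A, B, C:
# M_ABC = A + A ⊗ B + A ⊗ B ⊗ C
M = A + bind(A, B) + bind(bind(A, B), C)

# Decode the second item by unbinding the prefix A from the trace
# and comparing against the vocabulary (clean-up memory).
probe = bind(M, inverse(A))
for name, v in items.items():
    print(name, round(cos(probe, v), 2))  # B should score highest
```

Because circular convolution keeps the dimensionality fixed, the same n-dimensional trace holds the whole sequence, which is the property the abstract highlights as making HRRs suitable for a neural implementation.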

31 citations

Journal Article
TL;DR: The results further advance the idea that self-generation is a powerful mnemonic that leads to enriched memory representations for both the item and its context, especially when fewer generation constraints are imposed.

31 citations

Patent
20 Dec 2013
TL;DR: The patent proposes a method for encoding/decoding a video stream using macroblocks containing more than 16×16 pixels (for example, 64×64 pixels), where each macroblock is divided into two or more partitions that can be encoded using different modes.
Abstract: FIELD: physics, video. SUBSTANCE: The invention relates to encoding digital video, and particularly to block-based video encoding. The method comprises encoding/decoding a video stream using macroblocks containing more than 16×16 pixels, for example 64×64 pixels. Each macroblock is divided into two or more partitions which can be encoded using various modes, wherein the video coder is configured to: receive a video block having a size greater than 16×16 pixels; divide the video block into partitions; encode one of the partitions using a first encoding mode; encode another partition using a second encoding mode different from the first; and generate video-block-type syntax information indicating the size of the video block and identifying the partitions and the encoding modes used to encode them. EFFECT: higher video encoding efficiency by exploiting the greater redundancy that comes with increased spatial resolution and/or frame rate. 48 claims, 18 drawings, 2 tables.
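The claim describes a data layout more than an algorithm. The sketch below is purely illustrative: the class names, fields, and mode labels are hypothetical, not the patent's actual bitstream syntax. It shows a 64×64 macroblock split into partitions that each carry their own coding mode, plus the block-type syntax information summarizing the layout.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Partition:
    x: int       # top-left offset within the macroblock
    y: int
    width: int
    height: int
    mode: str    # per-partition coding mode; labels here are hypothetical

@dataclass
class Macroblock:
    size: int    # e.g. 64 for a 64x64 block (larger than the usual 16x16)
    partitions: List[Partition]

    def syntax_info(self) -> dict:
        # Block-type syntax: the macroblock size plus the layout and
        # coding mode of each partition, as the claim describes.
        return {
            "size": self.size,
            "partitions": [
                (p.x, p.y, p.width, p.height, p.mode)
                for p in self.partitions
            ],
        }

# A 64x64 macroblock split into two 64x32 partitions coded with
# different modes:
mb = Macroblock(
    size=64,
    partitions=[
        Partition(0, 0, 64, 32, "intra"),
        Partition(0, 32, 64, 32, "inter"),
    ],
)
print(mb.syntax_info())
```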

31 citations


Network Information
Related Topics (5)
Artificial neural network
207K papers, 4.5M citations
83% related
Deep learning
79.8K papers, 2.1M citations
83% related
Feature extraction
111.8K papers, 2.1M citations
82% related
Convolutional neural network
74.7K papers, 2M citations
81% related
Cluster analysis
146.5K papers, 2.9M citations
81% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    1,083
2022    2,253
2021    450
2020    378
2019    358
2018    363