scispace - formally typeset
Topic

Encoding (memory)

About: Encoding (memory) is a research topic. Over its lifetime, 7,547 publications have been published on this topic, receiving 120,214 citations. The topic is also known as: memory encoding & encoding of memories.


Papers
Journal ArticleDOI
TL;DR: Participants' confidence judgments about their memory predicted their decisions to use the saved information, indicating that cognitive offloading is associated with metacognitive evaluation about memory performance.

35 citations

Journal ArticleDOI
TL;DR: Presents a novel analysis of the mathematics of VSAs and a novel technique for representing data in HRRs, showing that HRRs can successfully encode vectors of locally structured data if the vectors are shuffled.
Abstract: Vector Symbolic Architectures (VSAs) such as Holographic Reduced Representations (HRRs) are computational associative memories used by cognitive psychologists to model behavioural and neurological aspects of human memory. We present a novel analysis of the mathematics of VSAs and a novel technique for representing data in HRRs. Encoding and decoding in VSAs can be characterised by Latin squares. Successful encoding requires the structure of the data to be orthogonal to the structure of the Latin squares. However, HRRs can successfully encode vectors of locally structured data if the vectors are shuffled. Shuffling results are illustrated using images but are applicable to any nonrandom data. The ability to use locally structured vectors provides a technique for detailed modelling of stimuli in HRR models.

Keywords: holographic reduced representations, vector symbolic architectures, associative memory, Latin squares, permutation

First proposed by Longuet-Higgins (1968) and Gabor (1969), a holographic associative memory is a computational memory based on the mathematics of holography. Holographic associative memory has been of interest to cognitive psychologists because of the following:

(i) Associative memories are content-addressable, allowing items to be retrieved without search, in a manner similar to the fast, parallel retrieval of memories in the human mind.
(ii) Just as human memory can store complicated and recursive relations between ideas, holographic associative memories can compactly store associations between associations.
(iii) Holographic associative memories have what is called "lossy" storage, which is useful for modelling human forgetting.

The mathematics of holography has long been suggested as the principle underlying the neural basis of memory (Pribram, 1969). Cognitive models based on holographic associative memory, such as TODAM (Murdock, 1982), TODAM2 (Murdock, 1993), and CHARM (Metcalfe-Eich, 1982), can explain and predict a variety of human memory phenomena.

Holographic Reduced Representations (HRRs; Plate, 1994) are a refinement of Gabor's holographic associative memory. HRRs have also been used to model how humans understand analogies (Plate, 2000b; Eliasmith & Thagard, 2001) and the meaning of words (BEAGLE; Jones & Mewhort, 2007), how humans encode strings of characters (Hannagan, Dupoux, & Christophe, 2011), and how humans perform simple memory and problem-solving tasks such as playing rock, paper, scissors (Rutledge-Taylor, 2010) and solving Raven's progressive matrices (Rasmussen & Eliasmith, 2011).

Research into HRRs and HRR-based models has been motivated by limitations in the ability of traditional connectionist models (i.e., nonrecurrent models with one or two layers of connections) to represent knowledge with complicated structure (Plate, 1995). In traditional connectionist models, an item is represented by a pattern of activation across a group of neurons. Mathematically, the pattern of activation is represented by a vector of numbers that stand for the activations of the neurons. Relationships between pairs of items are defined by the connection weights between groups of neurons. The connection weights between two groups of neurons can be represented as a matrix of numbers. Relationships between more than two items can be defined using more connections, represented as tensors of numbers (i.e., multidimensional arrays; for psychological theory see Humphreys, Bain, & Pike, 1989; for computational theory see Smolensky, 1990). Smolensky's tensor memories provide a powerful approach to representing and manipulating relationships between items that differs substantively from traditional connectionist models. In particular, tensor memories (and, in fact, HRRs) do not need to be trained. However, as the number of items bound together into an association grows, the size of the tensor needed to represent the relationship grows exponentially. …
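The HRR operations the abstract refers to can be illustrated with a minimal NumPy sketch (not from the paper; the dimensionality, the random vectors, and the test signal are all illustrative): binding by circular convolution, approximate unbinding by correlation, and a fixed shuffle (permutation) applied to locally structured data before encoding.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1024  # vector dimensionality (illustrative choice)

def bind(a, b):
    # HRR binding: circular convolution, computed via the FFT
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(trace, a):
    # Approximate unbinding: convolve the trace with the involution
    # of a (index reversal), which correlates the trace with a
    a_inv = np.concatenate(([a[0]], a[:0:-1]))
    return bind(trace, a_inv)

# Random, roughly orthogonal vectors encode and decode cleanly
key = rng.normal(0.0, 1.0 / np.sqrt(d), d)
value = rng.normal(0.0, 1.0 / np.sqrt(d), d)
trace = bind(key, value)
recovered = unbind(trace, key)
sim = recovered @ value / (np.linalg.norm(recovered) * np.linalg.norm(value))
# sim is well above chance: the retrieved vector is a noisy copy of value

# Locally structured data (e.g. a smooth signal) violates the
# randomness assumption HRRs rely on; a fixed shuffle (permutation)
# removes the local structure while remaining exactly invertible
structured = np.sin(np.linspace(0.0, 4.0 * np.pi, d))
perm = rng.permutation(d)
shuffled = structured[perm]
restored = shuffled[np.argsort(perm)]  # undo the shuffle
```

Computing the circular convolution through the FFT is the standard trick that keeps HRR binding at O(d log d) rather than O(d²).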

35 citations

Journal ArticleDOI
TL;DR: Wang et al. propose an accident detection approach based on spatio-temporal feature encoding with a multilayer neural network; it achieves promising detection accuracy and efficiency for traffic accident detection and meets the real-time detection requirement in the VANET environment.
Abstract: In the Vehicular Ad hoc Networks (VANET) environment, recognizing traffic accident events in the driving videos captured by vehicle-mounted cameras is an essential task. Generally, traffic accidents have a short duration in driving videos, and the backgrounds of driving videos are dynamic and complex. These make traffic accident detection quite challenging. To effectively and efficiently detect accidents from the driving videos, we propose an accident detection approach based on spatio–temporal feature encoding with a multilayer neural network. Specifically, the multilayer neural network is used to encode the temporal features of video for clustering the video frames. From the obtained frame clusters, we detect the border frames as the potential accident frames. Then, we capture and encode the spatial relationships of the objects detected from these potential accident frames to confirm whether these frames are accident frames. The extensive experiments demonstrate that the proposed approach achieves promising detection accuracy and efficiency for traffic accident detection, and meets the real-time detection requirement in the VANET environment.
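The frame-clustering stage of the pipeline described above can be sketched as follows. This is a toy illustration, not the authors' code: the frame features, network weights, and two-cluster k-means are all invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in frame features (in the paper these come from the driving
# video); T frames, D feature dims, H hidden units -- all illustrative.
T, D, H = 200, 64, 16
frames = rng.normal(size=(T, D))

# 1. Encode temporal features with a small multilayer network
#    (random, untrained weights here, purely for illustration).
W1 = rng.normal(scale=0.1, size=(D, H))
W2 = rng.normal(scale=0.1, size=(H, H))
encoded = np.tanh(np.tanh(frames @ W1) @ W2)

# 2. Cluster the encoded frames (minimal k-means).
def kmeans(X, k=2, iters=10):
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels

labels = kmeans(encoded)

# 3. Frames where the cluster label changes are the "border frames",
#    i.e. potential accident frames; the paper then confirms them by
#    encoding spatial relationships of detected objects (omitted here).
borders = np.flatnonzero(np.diff(labels) != 0) + 1
```

The spatial-relationship verification step depends on an object detector and is therefore left out of this sketch.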

35 citations

Posted Content
TL;DR: This work considers recurrent neural networks for sim-to-real biped locomotion, allowing for policies that learn to use internal memory to model important physical properties, and shows that RNNs can use their learned memory states to perform online system identification by encoding parameters of the dynamics into memory.
Abstract: Controlling a non-statically stable biped is a difficult problem largely due to the complex hybrid dynamics involved. Recent work has demonstrated the effectiveness of reinforcement learning (RL) for simulation-based training of neural network controllers that successfully transfer to real bipeds. The existing work, however, has primarily used simple memoryless network architectures, even though more sophisticated architectures, such as those including memory, often yield superior performance in other RL domains. In this work, we consider recurrent neural networks (RNNs) for sim-to-real biped locomotion, allowing for policies that learn to use internal memory to model important physical properties. We show that while RNNs are able to significantly outperform memoryless policies in simulation, they do not exhibit superior behavior on the real biped due to overfitting to the simulation physics unless trained using dynamics randomization to prevent overfitting; this leads to consistently better sim-to-real transfer. We also show that RNNs could use their learned memory states to perform online system identification by encoding parameters of the dynamics into memory.
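The dynamics-randomization idea the abstract describes can be sketched schematically. This is not the authors' setup: the parameter names and ranges below are invented for illustration.

```python
import random

# Hypothetical physical parameters and ranges, sampled fresh for each
# training episode so that a recurrent policy cannot overfit one fixed
# simulator configuration and is pushed to infer the dynamics online.
PARAM_RANGES = {
    "mass_scale":  (0.8, 1.2),
    "friction":    (0.5, 1.5),
    "motor_delay": (0.0, 0.02),  # seconds
}

def sample_dynamics(rng):
    """Draw one randomized dynamics configuration for an episode."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

rng = random.Random(0)
episode_params = [sample_dynamics(rng) for _ in range(3)]
# Each episode runs the simulator with its own parameters; the RNN's
# hidden state can later be probed (e.g. with a linear readout) to
# check whether it has encoded these parameters, as the paper reports.
```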

35 citations

Patent
Lei Shao
15 Sep 2004
TL;DR: In this paper, an apparatus and associated method for a multicarrier MIMO transmitter with space-frequency encoding and power allocation is presented, along with a power allocation algorithm.
Abstract: An apparatus and associated method for a multicarrier MIMO transmitter with space-frequency encoding and power allocation.

35 citations


Network Information
Related Topics (5)
Artificial neural network: 207K papers, 4.5M citations, 83% related
Deep learning: 79.8K papers, 2.1M citations, 83% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Convolutional neural network: 74.7K papers, 2M citations, 81% related
Cluster analysis: 146.5K papers, 2.9M citations, 81% related
Performance
Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    1,083
2022    2,253
2021    450
2020    378
2019    358
2018    363