
Encoding (memory)

About: Encoding (memory) is a research topic. Over its lifetime, 7,547 publications have been published on this topic, receiving 120,214 citations. The topic is also known as: memory encoding & encoding of memories.


Papers
Proceedings ArticleDOI
01 Jan 2018
TL;DR: Phrase-level Self-Attention Networks (PSAN) are proposed, which perform self-attention across words inside a phrase to capture context dependencies at the phrase level, and use a gated memory-updating mechanism to refine each word’s representation hierarchically with longer-term context dependencies captured in larger phrases.
Abstract: Universal sentence encoding is a hot topic in recent NLP research. Attention mechanisms have been an integral part of many sentence encoding models, allowing the models to capture context dependencies regardless of the distance between the elements in the sequence. Fully attention-based models have recently attracted enormous interest due to their highly parallelizable computation and significantly shorter training time. However, the memory consumption of these models grows quadratically with the sentence length, and syntactic information is neglected. To this end, we propose Phrase-level Self-Attention Networks (PSAN) that perform self-attention across words inside a phrase to capture context dependencies at the phrase level, and use a gated memory-updating mechanism to refine each word’s representation hierarchically with longer-term context dependencies captured in larger phrases. As a result, memory consumption is reduced because self-attention is performed at the phrase level instead of the sentence level. At the same time, syntactic information can easily be integrated into the model. Experimental results show that PSAN achieves state-of-the-art performance across a range of NLP tasks including binary and multi-class classification, natural language inference, and sentence similarity.
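To make the memory claim concrete, here is a minimal PyTorch sketch of phrase-level self-attention. It is an illustration, not the authors' implementation: it assumes fixed-length, non-overlapping phrase chunks (the paper derives phrases from syntactic structure), a single head, and omits the gated memory-updating mechanism. Restricting attention to phrases of length p shrinks the score matrix from O(n²) to O(n·p) for a sentence of n words.

```python
import torch
import torch.nn.functional as F

def phrase_level_self_attention(x: torch.Tensor, phrase_len: int) -> torch.Tensor:
    """Scaled dot-product self-attention restricted to phrase chunks.

    x: (seq_len, d_model) word representations; for simplicity this
    sketch assumes seq_len is a multiple of phrase_len.
    """
    seq_len, d_model = x.shape
    # Group the sentence into contiguous, non-overlapping phrases
    # (an assumption; the paper derives phrases from syntax).
    phrases = x.view(seq_len // phrase_len, phrase_len, d_model)
    # Attention scores are computed only among words within a phrase,
    # so the score tensor is (n_phrases, p, p) rather than (n, n).
    scores = phrases @ phrases.transpose(1, 2) / d_model ** 0.5
    weights = F.softmax(scores, dim=-1)
    attended = weights @ phrases  # (n_phrases, p, d_model)
    return attended.reshape(seq_len, d_model)

# Example: a 12-word sentence, model dim 8, phrases of 4 words each.
words = torch.randn(12, 8)
out = phrase_level_self_attention(words, phrase_len=4)
print(out.shape)  # torch.Size([12, 8])
```

For a 40-word sentence with 5-word phrases, the score tensor shrinks from one 40×40 matrix to eight 5×5 blocks, which is the source of the memory saving described in the abstract.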

53 citations

Journal ArticleDOI
TL;DR: A novel paradigm is used to investigate how control influences memory encoding and, conversely, how memory measures can provide new insight into flexible cognitive control, illustrating how cognitive control and bottom-up factors interact to simultaneously influence both current performance and future memory.
Abstract: Cognitive control and memory are fundamentally intertwined, but interactions between the two have only recently received sustained research interest. In the study reported here, we used a novel paradigm to investigate how control influences memory encoding and, conversely, how memory measures can provide new insight into flexible cognitive control. Participants switched between classifying objects and words, then were tested for their recognition memory of items presented in this task-switching phase. Task switching impaired memory for task-relevant information but actually improved memory for task-irrelevant information, which indicates that control demands reduced the selectivity of memory encoding rather than causing a general memory decline. Recognition memory strength provided a robust trial-by-trial measure of the effectiveness of cognitive control that "predicted" earlier task-switching performance. It also revealed a substantial influence of bottom-up factors on between-task competition, but only on trials in which participants had to switch from one type of classification to the other. Collectively, our findings illustrate how cognitive control and bottom-up factors interact to simultaneously influence both current performance and future memory.

53 citations

Journal ArticleDOI
TL;DR: DSwarm-Net, as proposed in this paper, employs deep learning and a swarm-intelligence-based metaheuristic for HAR, using 3D skeleton data for action classification; it extracts four types of features from the skeletal data, namely Distance, Distance Velocity, Angle, and Angle Velocity, which capture complementary information from the skeleton joints for encoding into images.
Abstract: Human Action Recognition (HAR) is a popular area of research in computer vision due to its wide range of applications, such as surveillance, health care, and gaming. Action recognition based on 3D skeleton data allows simple, cost-efficient models to be built, making it a widely used method. In this work, we propose DSwarm-Net, a framework that employs deep learning and a swarm-intelligence-based metaheuristic for HAR that uses 3D skeleton data for action classification. We extract four different types of features from the skeletal data, namely Distance, Distance Velocity, Angle, and Angle Velocity, which capture complementary information from the skeleton joints for encoding them into images. Encoding the skeleton-data features into images is an alternative to the traditional video-processing approach and helps make the classification task less complex. The Distance and Distance Velocity encoded images are stacked depth-wise and fed into a Convolutional Neural Network model, a modified version of Inception-ResNet. Similarly, the Angle and Angle Velocity encoded images are stacked depth-wise and fed into the same network. After training these models, deep features are extracted from the pre-final layer of the networks, and the obtained feature representation is optimized by a nature-inspired metaheuristic, called the Ant Lion Optimizer, to eliminate non-informative or misleading features and to reduce the dimensionality of the feature set. DSwarm-Net has been evaluated on three publicly available HAR datasets, namely UTD-MHAD, HDM05, and NTU RGB+D 60, achieving competitive results and confirming the superiority of the proposed model compared to state-of-the-art models.
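As a rough illustration of the four feature streams, the NumPy sketch below computes per-frame joint distances and joint angles, plus their frame-to-frame velocities, from a skeleton sequence. The specific joint pairs and triples, and the function name skeleton_features, are assumptions for illustration; the paper's exact feature definitions and its image-encoding step may differ.

```python
import numpy as np

def skeleton_features(seq: np.ndarray):
    """Four feature streams from a skeleton sequence.

    seq: (frames, joints, 3) array of 3D joint coordinates.
    Joint pairings/triples here are illustrative assumptions.
    """
    # Distance: pairwise Euclidean distances between all joints per frame.
    diff = seq[:, :, None, :] - seq[:, None, :, :]   # (T, J, J, 3)
    distance = np.linalg.norm(diff, axis=-1)         # (T, J, J)
    # Distance Velocity: frame-to-frame change of those distances.
    distance_vel = np.diff(distance, axis=0)         # (T-1, J, J)
    # Angle: angle at each interior joint between the vectors to its
    # index-neighbors (an assumption; the paper may use the skeleton's
    # actual bone connectivity).
    a = seq[:, :-2] - seq[:, 1:-1]
    b = seq[:, 2:] - seq[:, 1:-1]
    cos = np.sum(a * b, axis=-1) / (
        np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + 1e-8)
    angle = np.arccos(np.clip(cos, -1.0, 1.0))       # (T, J-2)
    # Angle Velocity: frame-to-frame change of the angles.
    angle_vel = np.diff(angle, axis=0)               # (T-1, J-2)
    return distance, distance_vel, angle, angle_vel

# Example: 30 frames, 20 joints.
d, dv, a, av = skeleton_features(np.random.rand(30, 20, 3))
print(d.shape, dv.shape, a.shape, av.shape)
```

Per the abstract, each stream would then be rendered as an image (e.g., normalized to 8-bit pixel values) and stacked depth-wise before being fed to the CNN.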

52 citations

Journal ArticleDOI
TL;DR: Initial evidence is reviewed suggesting that both training regimes (i.e., strategy and/or process training) can successfully be applied to improve prospective memory.
Abstract: In research on cognitive plasticity, two training approaches have been established: (1) training of strategies to improve performance in a given task (e.g., encoding strategies to improve episodic memory performance) and (2) training of basic cognitive processes (e.g., working memory, inhibition) that underlie a range of more complex cognitive tasks (e.g., planning) to improve both the training target and the complex transfer tasks. Strategy training aims to compensate or circumvent limitations in underlying processes, while process training attempts to augment or to restore these processes. Although research on both approaches has produced some promising findings, results are still heterogeneous and the impact of most training regimes for everyday life is unknown. We, therefore, discuss recent proposals of training regimes aiming to improve prospective memory (i.e., forming and realizing delayed intentions) as this type of complex cognition is highly relevant for independent living. Furthermore, prospective memory is associated with working memory and executive functions and age-related decline is widely reported. We review initial evidence suggesting that both training regimes (i.e., strategy and/or process training) can successfully be applied to improve prospective memory. Conceptual and methodological implications of the findings for research on age-related prospective memory and for training research in general are discussed.

52 citations

Journal ArticleDOI
TL;DR: A mechanism that focuses on exchange-relevant information and flexibly adapts to take into account the relative significance of this information in the encoding context may be more beneficial than one that focuses exclusively on cheaters.

52 citations


Network Information
Related Topics (5)
Artificial neural network: 207K papers, 4.5M citations, 83% related
Deep learning: 79.8K papers, 2.1M citations, 83% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Convolutional neural network: 74.7K papers, 2M citations, 81% related
Cluster analysis: 146.5K papers, 2.9M citations, 81% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    1,083
2022    2,253
2021    450
2020    378
2019    358
2018    363