
Encoding (memory)

About: Encoding (memory) is a research topic. Over its lifetime, 7,547 publications have been published within this topic, receiving 120,214 citations. The topic is also known as: memory encoding & encoding of memories.


Papers
Posted Content
TL;DR: This paper proposes a neural language model with a key-value attention mechanism that outputs separate representations for the key and value of a differentiable memory, as well as for encoding the next-word distribution; the model outperforms existing memory-augmented neural language models on two corpora.
Abstract: Neural language models predict the next token using a latent representation of the immediate token history. Recently, various methods for augmenting neural language models with an attention mechanism over a differentiable memory have been proposed. For predicting the next token, these models query information from a memory of the recent history which can facilitate learning mid- and long-range dependencies. However, conventional attention mechanisms used in memory-augmented neural language models produce a single output vector per time step. This vector is used both for predicting the next token as well as for the key and value of a differentiable memory of a token history. In this paper, we propose a neural language model with a key-value attention mechanism that outputs separate representations for the key and value of a differentiable memory, as well as for encoding the next-word distribution. This model outperforms existing memory-augmented neural language models on two corpora. Yet, we found that our method mainly utilizes a memory of the five most recent output representations. This led to the unexpected main finding that a much simpler model based only on the concatenation of recent output representations from previous time steps is on par with more sophisticated memory-augmented neural language models.
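
To make the key-value separation concrete, the NumPy sketch below splits each time step's output vector into a key, a value, and a prediction part, then attends over the past values with the current key. The three-way split, dot-product scoring, and tanh combination are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def key_value_attention(outputs, d):
    """Attend over a memory of past (key, value) pairs.

    `outputs` has shape (T, 3*d): each time step's output vector is split
    into a key, a value, and a part used for predicting the next word.
    Returns the attention-augmented prediction vector for the last step.
    """
    keys, values, predicts = outputs[:, :d], outputs[:, d:2*d], outputs[:, 2*d:]
    q = keys[-1]                            # current key acts as the query
    scores = keys[:-1] @ q / np.sqrt(d)     # dot-product scores over history
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                # softmax over past time steps
    context = weights @ values[:-1]         # weighted sum of past values
    return np.tanh(context + predicts[-1])  # combine with the predict part

rng = np.random.default_rng(0)
print(key_value_attention(rng.standard_normal((6, 3 * 4)), d=4))
```

Because the key, value, and prediction parts no longer share one vector, the model can keep what it exposes to future attention separate from what it uses to predict the next token, which is the paper's central design point.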

110 citations

Proceedings ArticleDOI
11 May 2014
TL;DR: This paper introduces an FPGA-optimized SMVM architecture and a novel sparse matrix encoding that explicitly exposes parallelism across rows, while keeping the hardware complexity and on-chip memory usage low.
Abstract: Sparse matrix-vector multiplication (SMVM) is a crucial primitive used in a variety of scientific and commercial applications. Despite having significant parallelism, SMVM is a challenging kernel to optimize due to its irregular memory access characteristics. Numerous studies have proposed the use of FPGAs to accelerate SMVM implementations. However, most prior approaches focus on parallelizing multiply-accumulate operations within a single row of the matrix (which limits parallelism if rows are small) and/or make inefficient use of the memory system when fetching matrix and vector elements. In this paper, we introduce an FPGA-optimized SMVM architecture and a novel sparse matrix encoding that explicitly exposes parallelism across rows, while keeping the hardware complexity and on-chip memory usage low. This system compares favorably with prior FPGA SMVM implementations. For the over 700 University of Florida sparse matrices we evaluated, it also performs within about two thirds of CPU SMVM performance on average, even though it has 2.4× lower DRAM memory bandwidth, and within almost one third of GPU SMVM performance on average, even at 9× lower memory bandwidth. Additionally, it consumes only 25 W, for power efficiencies 2.6× and 2.3× higher than CPU and GPU, respectively, based on maximum device power.
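
For context, the sketch below shows SMVM over the common CSR (compressed sparse row) format, with a comment marking where row-level parallelism comes from. This is a generic reference implementation; the paper's FPGA-specific encoding, which interleaves work across rows, is not reproduced here.

```python
import numpy as np

def csr_spmv(values, col_idx, row_ptr, x):
    """Compute y = A @ x with A stored in CSR (compressed sparse row) form."""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for r in range(n_rows):                      # rows are independent, so an
        start, end = row_ptr[r], row_ptr[r + 1]  # accelerator can process
        y[r] = values[start:end] @ x[col_idx[start:end]]  # several in parallel
    return y

# A = [[4, 0, 9],
#      [0, 7, 0]] stored in CSR form
values  = np.array([4.0, 9.0, 7.0])   # nonzero entries, row by row
col_idx = np.array([0, 2, 1])         # column index of each nonzero
row_ptr = np.array([0, 2, 3])         # where each row starts in `values`
print(csr_spmv(values, col_idx, row_ptr, np.array([1.0, 2.0, 3.0])))  # [31. 14.]
```

The irregularity the abstract mentions is visible even here: the gathers `x[col_idx[...]]` hit memory at data-dependent addresses, which is what makes SMVM hard to optimize on any platform.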

109 citations

Journal ArticleDOI
TL;DR: Learned items can be integrated into cortical memory networks at an accelerated rate through fast mapping; the retrieval of a related known concept, in order to infer the target of the FM question, is critical for this effect.
Abstract: Successful learning involves integrating new material into existing memory networks. A learning procedure known as fast mapping (FM), thought to simulate the word-learning environment of children, has recently been linked to distinct neuroanatomical substrates in adults. This idea has suggested the (never-before-tested) hypothesis that FM may promote rapid incorporation into cortical memory networks. We test this hypothesis here in 2 experiments. In our 1st experiment, we introduced 50 participants to 16 unfamiliar animals and names through FM or explicit encoding (EE) and tested participants on the training day and again after sleep. Learning through EE produced strong declarative memories, without immediate lexical competition, as expected from slow-consolidation models. Learning through FM, however, led to almost immediate lexical competition, which continued to the next day. Additionally, the learned words began to prime related concepts on the day following FM (but not EE) training. In a 2nd experiment, we replicated the lexical integration results and determined that presenting an already-known item during learning was crucial for rapid integration through FM. The findings presented here indicate that learned items can be integrated into cortical memory networks at an accelerated rate through fast mapping. The retrieval of a related known concept, in order to infer the target of the FM question, is critical for this effect.

109 citations

Patent
22 Dec 2005
TL;DR: In this article, an encoding and decoding method and apparatus is proposed that improves decoding performance without using a large memory capacity and reduces the hardware complexity of the implementation; an encoded signal is received from a transmitting side and decoded using the parity check matrix.
Abstract: An encoding and decoding method and apparatus is disclosed. The method and apparatus improve encoding and decoding performance without using a large memory capacity and also reduce the complexity of hardware for implementation. According to the method, an encoded signal is received from a transmitting side, and the received signal is decoded using the parity check matrix. The parity check matrix includes layers in which the nonzero elements of a specific number of layers do not overlap in the column direction.
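
As background on parity-check decoding, the sketch below shows the basic validity test a decoder relies on: a received word c is a codeword exactly when Hc = 0 over GF(2). The (7,4) Hamming matrix is used purely for illustration; the patent's layered LDPC parity-check structure is not reproduced.

```python
import numpy as np

def syndrome_check(H, word):
    """Return True iff `word` is a valid codeword, i.e. H @ word = 0 (mod 2)."""
    return not np.any((H @ word) % 2)

# Parity-check matrix of the (7,4) Hamming code: column j is the binary
# representation of j+1, so any single-bit error yields a nonzero syndrome.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])
codeword = np.array([0, 1, 1, 0, 0, 1, 1])
print(syndrome_check(H, codeword))                             # True
print(syndrome_check(H, codeword ^ np.eye(7, dtype=int)[3]))   # bit flip -> False
```

Practical LDPC decoders iterate message passing over H rather than just testing the syndrome; structuring H in non-overlapping layers, as the patent describes, lets those per-layer updates run without memory conflicts.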

109 citations

Patent
31 Oct 2007
TL;DR: In this article, a network-based video encoding and decoding system encodes and decodes remotely displayed user application data on a centralized desktop computer, where the encoding system consults a back channel information manager to dynamically adjust encoding parameters.
Abstract: A network-based video encoding and decoding system encodes and decodes remotely displayed user application data on a centralized desktop computer. Remotely displayed user application data are screen captures of a browsing application run by the centralized desktop computer on the user's behalf. The encoding system optimizes its encoding performance using back channel information, which includes real-time network capacity information and decoder feedback. The encoding system consults a back channel information manager to dynamically adjust encoding parameters. Based on the real-time network capacity information received, the encoding system adjusts its capturing sampling rate. Based on encoding errors identified by the decoding system, the encoding system selectively re-sends previously encoded frames/blocks, or sends intra frames on demand, to allow the decoding system to correct encoding errors. In response to encoding customization requests from the decoding system, the encoding system adjusts its encoding parameters to meet such requests.
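
As a rough illustration of back-channel-driven adaptation, the sketch below picks a capture sampling rate from the reported network capacity. The function name, parameters, and linear bits-per-frame model are all hypothetical; the patent does not specify this scheme.

```python
def adjust_capture_rate(available_kbps, kbps_per_frame=50, min_fps=1, max_fps=30):
    """Pick the highest capture rate the reported network capacity sustains.

    Assumes a roughly constant cost per encoded frame (kbps_per_frame),
    an illustrative simplification of the back-channel feedback loop.
    """
    sustainable = available_kbps // kbps_per_frame
    return max(min_fps, min(max_fps, sustainable))

print(adjust_capture_rate(600))    # constrained link -> 12 fps
print(adjust_capture_rate(2000))   # ample capacity   -> capped at 30 fps
```

The same feedback loop drives the error-recovery path: when the decoder reports a corrupted block, the encoder can re-send it or emit an intra frame, trading momentary bandwidth for a clean reference.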

109 citations


Network Information
Related Topics (5)
Artificial neural network: 207K papers, 4.5M citations, 83% related
Deep learning: 79.8K papers, 2.1M citations, 83% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Convolutional neural network: 74.7K papers, 2M citations, 81% related
Cluster analysis: 146.5K papers, 2.9M citations, 81% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    1,083
2022    2,253
2021    450
2020    378
2019    358
2018    363