Topic

Encoding (memory)

About: Encoding (memory) is a research topic. Over its lifetime, 7,547 publications have been published on this topic, receiving 120,214 citations. The topic is also known as: memory encoding & encoding of memories.


Papers
Posted Content
TL;DR: This paper proposes a model, called "bi-directional block self-attention network (Bi-BloSAN)", for RNN/CNN-free sequence encoding that achieves or improves upon state-of-the-art accuracy, and shows a better efficiency-memory trade-off than existing RNN/CNN/SAN models.
Abstract: Recurrent neural networks (RNN), convolutional neural networks (CNN) and self-attention networks (SAN) are commonly used to produce context-aware representations. RNN can capture long-range dependency but is hard to parallelize and not time-efficient. CNN focuses on local dependency but does not perform well on some tasks. SAN can model both such dependencies via highly parallelizable computation, but memory requirement grows rapidly in line with sequence length. In this paper, we propose a model, called "bi-directional block self-attention network (Bi-BloSAN)", for RNN/CNN-free sequence encoding. It requires as little memory as RNN but with all the merits of SAN. Bi-BloSAN splits the entire sequence into blocks, and applies an intra-block SAN to each block for modeling local context, then applies an inter-block SAN to the outputs for all blocks to capture long-range dependency. Thus, each SAN only needs to process a short sequence, and only a small amount of memory is required. Additionally, we use feature-level attention to handle the variation of contexts around the same word, and use forward/backward masks to encode temporal order information. On nine benchmark datasets for different NLP tasks, Bi-BloSAN achieves or improves upon state-of-the-art accuracy, and shows a better efficiency-memory trade-off than existing RNN/CNN/SAN models.

121 citations
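
The two-level attention scheme the abstract describes (an intra-block SAN for local context, then an inter-block SAN over block summaries for long-range dependency) can be sketched in a few lines. The NumPy sketch below is a simplified illustration under assumed details: plain dot-product attention, mean-pooled block summaries, and no feature-level attention or forward/backward masks; the function names and block size are invented for the example.

```python
# Minimal sketch of intra-block + inter-block self-attention (Bi-BloSAN-style),
# under the simplifying assumptions noted above.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    """Plain dot-product self-attention over a (length, dim) sequence."""
    scores = x @ x.T / np.sqrt(x.shape[-1])
    return softmax(scores, axis=-1) @ x

def block_self_attention(x, block_size=8):
    """Intra-block attention per block, then inter-block attention on block summaries."""
    n, d = x.shape
    pad = (-n) % block_size
    x = np.pad(x, ((0, pad), (0, 0)))            # pad so the length divides evenly
    blocks = x.reshape(-1, block_size, d)        # (num_blocks, block_size, dim)

    # Local context: attention restricted to each block (memory ~ block_size^2).
    local = np.stack([self_attention(b) for b in blocks])

    # Long-range context: attend over one summary vector per block.
    summaries = local.mean(axis=1)               # (num_blocks, dim)
    global_ctx = self_attention(summaries)       # (num_blocks, dim)

    # Broadcast each block's global context back to every position in the block.
    out = local + global_ctx[:, None, :]
    return out.reshape(-1, d)[:n]

# Example: encode a 20-token sequence of 16-dim embeddings.
print(block_self_attention(np.random.randn(20, 16)).shape)  # (20, 16)
```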

Journal ArticleDOI
TL;DR: This article attempts to determine whether these effects are a result of the interruption of encoding of associative information among the components of an episode or whether the cause lies somewhere other than in the associative processes that are engaged.
Abstract: Divided attention at encoding is well known to have adverse effects on episodic memory performance (e.g., Naveh-Benjamin & Greg, 2000). This article attempts to determine whether these effects are a result of the interruption of encoding of associative information among the components of an episode. Five experiments, using different types of episodes and episode components, were conducted. Participants studied information under either full or divided attention and were then tested on their memory for both the episodes’ components and the associations between them. Divided attention did not produce a differential deficit in memory for associative information; memory for the components suffered to the same degree as memory for the associations among the components. The cause of the divided-attention effect at encoding lies somewhere other than in the associative processes that are engaged.

120 citations

Patent
25 Aug 1995
TL;DR: In this paper, a method and system for the remote control of devices having a secure self-learn capability is presented, which includes an encoder and a decoder, the encoder encoding variable information including a user key using a non-linear algorithm to produce an encoded value transmitted to the decoder.
Abstract: A method and system for the remote control of devices having a secure self-learn capability. The system includes an encoder and a decoder, the encoder encoding variable information including a user key using a non-linear algorithm to produce an encoded value transmitted to the decoder, the decoder decoding the value using the same algorithm. In a learning mode, a new encoder is added to the system. The new encoder produces an encoded value using a key generation seed. The decoder, upon receiving the encoded key generation seed, produces a decoding key based upon the decoded key generation seed. The decoding key is stored in the decoder memory, allowing valid recognition of the new encoder in a secure manner.

120 citations
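
The learn-then-operate flow in the abstract above (the encoder encodes variable information under a device key with a non-linear algorithm; in learning mode the decoder derives and stores a decoding key from a key-generation seed) can be sketched roughly as follows. The non-linear function (HMAC-SHA-256), field layout, and key-derivation rule here are stand-ins chosen for illustration, not the patent's actual algorithm.

```python
# Hedged sketch of a secure self-learn remote-control flow; the crypto primitives
# and message layout are illustrative stand-ins, not the patented algorithm.
import hashlib, hmac, os

MANUFACTURER_SECRET = b"shared-manufacturer-secret"   # hypothetical shared secret

def derive_key(seed: bytes) -> bytes:
    """Both sides derive the same device key from the key-generation seed."""
    return hmac.new(MANUFACTURER_SECRET, seed, hashlib.sha256).digest()

def encode(key: bytes, counter: int) -> bytes:
    """Non-linear encoding of the variable (counter) portion under the device key."""
    return hmac.new(key, counter.to_bytes(4, "big"), hashlib.sha256).digest()

class Decoder:
    def __init__(self):
        self.keys = {}                                 # learned device keys by serial

    def learn(self, serial: int, seed: bytes):
        # Learning mode: derive the decoding key from the received seed and store it.
        self.keys[serial] = derive_key(seed)

    def accept(self, serial: int, counter: int, code: bytes) -> bool:
        # Normal mode: re-encode locally and compare; a real decoder would also
        # enforce a sliding counter window to reject replayed transmissions.
        key = self.keys.get(serial)
        return key is not None and hmac.compare_digest(encode(key, counter), code)

# Example: pair a new encoder, then validate one of its transmissions.
seed, serial = os.urandom(8), 42
encoder_key = derive_key(seed)
decoder = Decoder()
decoder.learn(serial, seed)
print(decoder.accept(serial, counter=1, code=encode(encoder_key, 1)))  # True
```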

Journal ArticleDOI
TL;DR: Evaluation on the Airline Travel Information System (ATIS) task shows that in comparison to its parent CDHMM system, a converted SDCHMM system achieves seven- to 18-fold reduction in memory requirement for acoustic models, and runs 30%-60% faster without any loss of recognition accuracy.
Abstract: Most contemporary laboratory recognizers require too much memory to run, and are too slow for mass applications. One major cause of the problem is the large parameter space of their acoustic models. In this paper, we propose a new acoustic modeling methodology which we call subspace distribution clustering hidden Markov modeling (SDCHMM) with the aim of achieving much more compact acoustic models. The theory of SDCHMM is based on tying the parameters of a new unit, namely the subspace distribution, of continuous density hidden Markov models (CDHMMs). SDCHMMs can be converted from CDHMMs by projecting the distributions of the CDHMMs onto orthogonal subspaces, and then tying similar subspace distributions over all states and all acoustic models in each subspace. By exploiting the combinatorial effect of subspace distribution encoding, all original full-space distributions can be represented by combinations of a small number of subspace distribution prototypes. Consequently, there is a great reduction in the number of model parameters, and thus substantial savings in memory and computation. This renders SDCHMM very attractive in the practical implementation of acoustic models. Evaluation on the Airline Travel Information System (ATIS) task shows that in comparison to its parent CDHMM system, a converted SDCHMM system achieves seven- to 18-fold reduction in memory requirement for acoustic models, and runs 30%-60% faster without any loss of recognition accuracy.

120 citations
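
A rough sketch of the parameter-tying step described above: each full-space Gaussian mean is projected onto subspaces (here, disjoint groups of feature dimensions), similar subspace pieces are clustered, and every original distribution is then stored as one prototype index per subspace. The subspace split, plain k-means clustering, and codebook size are illustrative assumptions, not the paper's exact conversion procedure.

```python
# Sketch of subspace distribution tying: cluster per-subspace pieces of Gaussian
# means and keep only prototype codebooks plus index tables.
import numpy as np

def tie_subspace_distributions(means, n_subspaces=4, n_prototypes=16, n_iters=20, seed=0):
    """means: (n_gaussians, dim) full-space Gaussian means.
    Returns per-subspace prototype codebooks and, per Gaussian, one prototype index
    per subspace, so N*dim parameters shrink to small codebooks + an index table."""
    rng = np.random.default_rng(seed)
    n, dim = means.shape
    sub_dims = np.array_split(np.arange(dim), n_subspaces)
    codebooks, indices = [], []
    for dims in sub_dims:
        pieces = means[:, dims]                              # projection onto this subspace
        centers = pieces[rng.choice(n, n_prototypes, replace=False)]
        for _ in range(n_iters):                             # plain k-means tying step
            d2 = ((pieces[:, None, :] - centers[None]) ** 2).sum(-1)
            assign = d2.argmin(axis=1)
            for k in range(n_prototypes):
                if (assign == k).any():
                    centers[k] = pieces[assign == k].mean(axis=0)
        codebooks.append(centers)
        indices.append(assign)
    return codebooks, np.stack(indices, axis=1)              # indices: (n_gaussians, n_subspaces)

# Example: 1000 Gaussian means of dimension 39 (a typical acoustic feature size).
cb, idx = tie_subspace_distributions(np.random.randn(1000, 39))
print(idx.shape)  # (1000, 4): each Gaussian is now just 4 prototype indices
```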

Patent
01 Jul 2009
TL;DR: In this paper, an intra-TP motion prediction/compensation unit 75 performs motion prediction within a predetermined search range by taking predicted motion vector information generated by an intrapredicted motion vector generating unit 76 as the center of search, on the basis of an image to be intra-predicted from a screen rearrangement buffer 62 and reference images from a frame memory 72.
Abstract: The present invention relates to an image processing apparatus and method which make it possible to prevent a decrease in compression efficiency without increasing computational complexity. An intra-TP motion prediction/compensation unit 75 performs motion prediction within a predetermined search range by taking predicted motion vector information generated by an intra-predicted motion vector generating unit 76 as the center of search, on the basis of an image to be intra-predicted from a screen rearrangement buffer 62, and reference images from a frame memory 72. An inter-TP motion prediction/compensation unit 78 performs motion prediction within a predetermined search range by taking predicted motion vector information generated by an inter-predicted motion vector generating unit 79 as the center of search, on the basis of an image to be inter-encoded from the screen rearrangement buffer 62, and reference images from the frame memory 72. The present invention can be applied to, for example, an image encoding apparatus that performs encoding in H.264/AVC format.

119 citations
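
The core operation the abstract describes, motion prediction within a predetermined search range centered on a predicted motion vector, can be sketched as below. The median motion-vector predictor, SAD cost, and search-window size are generic illustrative choices rather than the patent's specific intra-/inter-TP template-matching units.

```python
# Simplified sketch of motion search centered on a predicted motion vector;
# the predictor and cost function are generic choices for illustration.
import numpy as np

def predict_mv(neighbor_mvs):
    """Predicted MV as the component-wise median of neighboring block MVs."""
    return np.median(np.array(neighbor_mvs), axis=0).astype(int)

def motion_search(cur_block, ref_frame, block_pos, pred_mv, search_range=4):
    """Search a (2*search_range+1)^2 window around pred_mv, minimizing SAD."""
    by, bx = block_pos
    h, w = cur_block.shape
    best = (None, np.inf)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = by + pred_mv[0] + dy, bx + pred_mv[1] + dx
            if y < 0 or x < 0 or y + h > ref_frame.shape[0] or x + w > ref_frame.shape[1]:
                continue
            sad = np.abs(cur_block - ref_frame[y:y + h, x:x + w]).sum()
            if sad < best[1]:
                best = (np.array([pred_mv[0] + dy, pred_mv[1] + dx]), sad)
    return best  # (motion vector, SAD cost)

# Example: find the MV for one 8x8 block against a reference frame.
ref = np.random.randint(0, 256, (64, 64)).astype(np.int32)
cur = ref[18:26, 21:29].copy()                  # block content located at offset (2, 5)
pred = predict_mv([[1, 4], [2, 5], [0, 3]])     # neighbors suggest searching near (1, 4)
mv, cost = motion_search(cur, ref, block_pos=(16, 16), pred_mv=pred)
print(mv, cost)                                 # expected: [2 5] with SAD 0
```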


Network Information

Related Topics (5)
Artificial neural network: 207K papers, 4.5M citations (83% related)
Deep learning: 79.8K papers, 2.1M citations (83% related)
Feature extraction: 111.8K papers, 2.1M citations (82% related)
Convolutional neural network: 74.7K papers, 2M citations (81% related)
Cluster analysis: 146.5K papers, 2.9M citations (81% related)
Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2023    1,083
2022    2,253
2021    450
2020    378
2019    358
2018    363