Topic

Encoding (memory)

About: Encoding (memory) is a research topic. Over its lifetime, 7,547 publications have been published within this topic, receiving 120,214 citations. The topic is also known as: memory encoding & encoding of memories.


Papers
Patent
23 Feb 1990
TL;DR: This patent describes a signal processing device in which compressed encoded information, obtained by encoding an input information signal using the correlation between its components, is written in a memory capable of simultaneously performing write and read operations, and the encoded information read out from the memory is decoded to restore the information signal.
Abstract: A signal processing device in which compressed encoded information, obtained by encoding an input information signal based on the use of correlation between components thereof, is written in a memory capable of simultaneously performing write and read operations, and the encoded information read out from the memory is decoded to restore the information signal, while signal processing is effected by using both the information signal prior to the encoding and the decoded information signal.

47 citations

Journal ArticleDOI
TL;DR: In this article, the authors propose a distributed representation for categorical features in which each category is mapped to a distinct vector whose properties are learned while training a neural network.
Abstract: Many machine learning algorithms and almost all deep learning architectures are incapable of processing plain text in its raw form; their inputs must be numerical in order to solve classification or regression problems. Hence, it is necessary to encode categorical variables into numerical values using encoding techniques. Categorical features are common and often of high cardinality. One-hot encoding in such circumstances leads to very high-dimensional vector representations, raising memory and computability concerns for machine learning models. This paper proposes a deep-learned embedding technique for encoding categorical features in categorical datasets. Our technique is a distributed representation for categorical features in which each category is mapped to a distinct vector, and the properties of the vector are learned while training a neural network. First, we create a data vocabulary that includes only categorical data, and then we use word tokenization to make each categorical value a single word. After that, feature learning is introduced to map all of the categorical data from the vocabulary to word vectors. Three different datasets provided by the University of California Irvine (UCI) are used for training. The experimental results show that the proposed deep-learned embedding technique for categorical data achieves an F1 score of 89%, compared with 71% for one-hot encoding, in the case of a Long Short-Term Memory (LSTM) model. Moreover, the deep-learned embedding technique uses less memory and generates fewer features than one-hot encoding.

47 citations
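
A minimal sketch, in PyTorch, of the embedding idea described in the abstract above: each category index is mapped to a dense vector that is learned jointly with the rest of the network, instead of a sparse one-hot vector. The model, dimensions, and usage below are hypothetical and are not the paper's exact pipeline.

import torch
import torch.nn as nn

class CategoricalEmbeddingClassifier(nn.Module):
    def __init__(self, num_categories, embedding_dim, num_classes):
        super().__init__()
        # Each category index maps to a learned dense vector (the embedding),
        # replacing a one-hot vector of length num_categories.
        self.embedding = nn.Embedding(num_categories, embedding_dim)
        self.classifier = nn.Linear(embedding_dim, num_classes)

    def forward(self, category_ids):
        # category_ids: integer tensor of shape (batch,)
        return self.classifier(self.embedding(category_ids))

# Hypothetical usage: 10,000 categories compressed into 16-dimensional vectors.
model = CategoricalEmbeddingClassifier(num_categories=10_000, embedding_dim=16, num_classes=2)
batch = torch.randint(0, 10_000, (32,))  # a batch of raw category indices
logits = model(batch)                    # shape (32, 2)

Because the embedding weights are trained by backpropagation together with the classifier, the representation can use far fewer dimensions (and far less memory) than one-hot encoding, which is the advantage the abstract reports.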

Proceedings ArticleDOI
01 Jul 2012
TL;DR: Fundamental information-theoretic bounds are provided on the circuit wiring complexity and power consumption required for encoding and decoding error-correcting codes; these bounds show a fundamental tradeoff between transmit power and encoding/decoding power, and apply also to bounded transmit-power schemes.
Abstract: We provide fundamental information-theoretic bounds on the required circuit wiring complexity and power consumption for encoding and decoding of error-correcting codes. These bounds hold for all codes and all encoding and decoding algorithms implemented within the paradigm of our VLSI model. This model essentially views computation on a 2-D VLSI circuit as a computation on a network of connected nodes. The bounds are derived based on analyzing information flow in the circuit. They are then used to show that there is a fundamental tradeoff between the transmit and encoding/decoding power, and that the total (transmit + encoding + decoding) power must diverge to infinity at least as fast as the cube root of log(1/P_e), where P_e is the average block-error probability. On the other hand, for bounded transmit-power schemes, the total power must diverge to infinity at least as fast as the square root of log(1/P_e), due to the burden of encoding/decoding.

47 citations
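
Restated in asymptotic notation (assumed here; the constants and exact conditions depend on the paper's VLSI model), with total power $P_{\mathrm{total}} = P_{\mathrm{transmit}} + P_{\mathrm{enc}} + P_{\mathrm{dec}}$ and average block-error probability $P_e$, the scaling claims in the abstract above read:

$P_{\mathrm{total}} = \Omega\!\left((\log(1/P_e))^{1/3}\right)$ as $P_e \to 0$ in general, and
$P_{\mathrm{total}} = \Omega\!\left((\log(1/P_e))^{1/2}\right)$ as $P_e \to 0$ for bounded transmit-power schemes.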

Journal ArticleDOI
TL;DR: A solution methodology for obtaining a sequential decomposition of the global optimization problem is developed and is extended to the case when the sensor makes an imperfect observation of the state of the plant.
Abstract: A discrete time stochastic feedback control system consisting of a nonlinear plant, a sensor, a controller, and a noisy communication channel between the sensor and the controller is considered. The sensor has limited memory and, at each time, it transmits an encoded symbol over the channel and updates its memory. The controller receives a noise-corrupted copy of the transmitted symbol. It generates a control action based on all its past observations and all its past actions. This control action is fed back to the plant. At each time instant the system incurs an instantaneous cost depending on the state of the plant and the control action. The objective is to choose encoding, memory update, and control strategies to minimize an expected total cost over a finite horizon, or an expected discounted cost over an infinite horizon, or an average cost per unit time over an infinite horizon. A solution methodology for obtaining a sequential decomposition of the global optimization problem is developed. This solution methodology is extended to the case when the sensor makes an imperfect observation of the state of the plant.

47 citations
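
In symbols, the three optimality criteria mentioned in the abstract above take the standard forms below. The notation (plant state $X_t$, control action $U_t$, instantaneous cost $c$, discount factor $\beta$, horizon $T$) is assumed here rather than taken from the paper; the minimization is over the joint choice of encoding, memory-update, and control strategies.

$J_{\mathrm{finite}} = \mathbb{E}\left[\sum_{t=1}^{T} c(X_t, U_t)\right]$
$J_{\beta} = \mathbb{E}\left[\sum_{t=1}^{\infty} \beta^{t} c(X_t, U_t)\right], \quad 0 < \beta < 1$
$J_{\mathrm{avg}} = \limsup_{T \to \infty} \tfrac{1}{T}\, \mathbb{E}\left[\sum_{t=1}^{T} c(X_t, U_t)\right]$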

Posted Content
TL;DR: This paper proposes a new method, evolution of a tree-based encoding of the gated memory nodes, and shows that it makes it possible to explore new variations more effectively than other methods, discovering nodes with multiple recurrent paths and multiple memory cells that lead to significant improvement on the standard language modeling benchmark task.
Abstract: Gated recurrent networks such as those composed of Long Short-Term Memory (LSTM) nodes have recently been used to improve the state of the art in many sequential processing tasks such as speech recognition and machine translation. However, the basic structure of the LSTM node is essentially the same as when it was first conceived 25 years ago. Recently, evolutionary and reinforcement learning mechanisms have been employed to create new variations of this structure. This paper proposes a new method, evolution of a tree-based encoding of the gated memory nodes, and shows that it makes it possible to explore new variations more effectively than other methods. The method discovers nodes with multiple recurrent paths and multiple memory cells, which lead to significant improvement in the standard language modeling benchmark task. The paper also shows how the search process can be sped up by training an LSTM network to estimate the performance of candidate structures, and by encouraging exploration of novel solutions. Thus, evolutionary design of complex neural network structures promises to improve the performance of deep learning architectures beyond human ability to do so.

47 citations
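
A small illustrative Python sketch of the general idea of a tree-based encoding that an evolutionary search can mutate. The operator set, leaf names, and mutation rule below are invented for illustration and are not the paper's actual encoding.

import random

OPS = ("add", "mul")  # invented operator set, for illustration only

def random_tree(depth=2):
    """Build a random expression tree over a memory node's input signals."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", "h", "c"])  # leaves: input, hidden state, cell state
    op = random.choice(OPS)
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def mutate(tree, depth=2):
    """Replace a randomly chosen subtree with a freshly generated one."""
    if isinstance(tree, str) or random.random() < 0.3:
        return random_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left, depth - 1), right)
    return (op, left, mutate(right, depth - 1))

parent = random_tree()
child = mutate(parent)
print("parent:", parent)
print("child :", child)

In a full system, each tree would be compiled into a candidate memory-node update, trained on the task (or scored by a surrogate model, as the abstract describes), and the fittest candidates would be selected and mutated again.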


Network Information
Related Topics (5)

Topic                          Papers    Citations   Related
Artificial neural network      207K      4.5M        83%
Deep learning                  79.8K     2.1M        83%
Feature extraction             111.8K    2.1M        82%
Convolutional neural network   74.7K     2M          81%
Cluster analysis               146.5K    2.9M        81%
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    1,083
2022    2,253
2021    450
2020    378
2019    358
2018    363