Topic

Entropy (information theory)

About: Entropy (information theory) is a research topic. Over its lifetime, 23,293 publications have been published within this topic, receiving 472,212 citations. The topic is also known as: entropy and Shannon entropy.


Papers
Posted Content
TL;DR: In this article, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework is proposed, where the actor aims to maximize expected reward while also maximizing entropy.
Abstract: Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy. That is, to succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as Q-learning methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds.

3,141 citations
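The maximum entropy framework this paper builds on augments the usual discounted return with a policy-entropy bonus at each step. A minimal sketch of that objective follows (not the authors' implementation; the temperature alpha and discount gamma values are illustrative):

```python
import numpy as np

def policy_entropy(probs):
    """Shannon entropy H(pi(.|s)) of a discrete action distribution."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]                      # ignore zero-probability actions
    return float(-np.sum(p * np.log(p)))

def soft_return(rewards, entropies, alpha=0.2, gamma=0.99):
    """Entropy-augmented return J = sum_t gamma^t * (r_t + alpha * H_t),
    the quantity a maximum-entropy policy seeks to maximize."""
    return sum(gamma**t * (r + alpha * h)
               for t, (r, h) in enumerate(zip(rewards, entropies)))
```

The temperature alpha trades off reward against randomness: larger values push the policy toward higher-entropy (more exploratory) behaviour.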

Journal ArticleDOI
TL;DR: A new method of estimating the entropy and redundancy of a language is described, which exploits the knowledge of the language statistics possessed by those who speak the language, and depends on experimental results in prediction of the next letter when the preceding text is known.
Abstract: A new method of estimating the entropy and redundancy of a language is described. This method exploits the knowledge of the language statistics possessed by those who speak the language, and depends on experimental results in prediction of the next letter when the preceding text is known. Results of experiments in prediction are given, and some properties of an ideal predictor are developed.

2,556 citations
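Shannon's method bounds the entropy of English via human prediction experiments. For contrast, a zeroth-order estimate from letter frequencies alone can be sketched as below; this is illustrative only, since the prediction method exploits contextual statistics that raw frequencies ignore:

```python
import math
from collections import Counter

def letter_entropy(text):
    """Zeroth-order entropy estimate (bits per symbol) from letter
    frequencies over a 27-symbol alphabet (a-z plus space)."""
    symbols = [c for c in text.lower() if c.isalpha() or c == ' ']
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(letter_entropy("the quick brown fox jumps over the lazy dog"))
```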

Journal ArticleDOI
TL;DR: Results indicate that the normalised entropy measure provides significantly improved behaviour over a range of imaged fields of view.

2,364 citations
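The TL;DR does not state the measure's form. Assuming it refers to the normalised mutual information commonly used as an overlap-invariant registration criterion, NMI(A, B) = (H(A) + H(B)) / H(A, B), a minimal sketch:

```python
import numpy as np

def normalised_mutual_information(a, b, bins=32):
    """NMI(A, B) = (H(A) + H(B)) / H(A, B) from a joint intensity
    histogram; `a` and `b` are image intensity arrays of equal shape.
    The bin count is an illustrative choice."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def H(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    return (H(px) + H(py)) / H(pxy.ravel())
```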

Journal ArticleDOI
TL;DR: The authors outline a new scheme for parameterizing polarimetric scattering problems that relies on an eigenvalue analysis of the coherency matrix and employs a three-level Bernoulli statistical model to generate estimates of the average target scattering matrix parameters from the data.
Abstract: The authors outline a new scheme for parameterizing polarimetric scattering problems, which has application in the quantitative analysis of polarimetric SAR data. The method relies on an eigenvalue analysis of the coherency matrix and employs a three-level Bernoulli statistical model to generate estimates of the average target scattering matrix parameters from the data. The scattering entropy is a key parameter in determining the randomness in this model and is seen as a fundamental parameter in assessing the importance of polarimetry in remote sensing problems. The authors show application of the method to some important classical random media scattering problems and apply it to POLSAR data from the NASA/JPL AIRSAR database.

2,262 citations
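The scattering entropy described above is computed from the normalised eigenvalues of the coherency matrix. A minimal sketch, assuming the standard base-3 logarithm for the 3x3 case so that H lies in [0, 1]:

```python
import numpy as np

def scattering_entropy(T):
    """Scattering entropy H = -sum_i p_i * log3(p_i), where the p_i are
    the normalised eigenvalues of a 3x3 Hermitian coherency matrix T.
    H = 0 indicates a single deterministic scatterer; H = 1 indicates
    fully random scattering."""
    eigvals = np.linalg.eigvalsh(T)        # real eigenvalues, Hermitian T
    eigvals = np.clip(eigvals, 0.0, None)  # guard against tiny negatives
    p = eigvals / eigvals.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p) / np.log(3)))
```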

Book ChapterDOI
09 Jul 1995
TL;DR: Binning, an unsupervised discretization method, is compared to entropy-based and purity-based methods, which are supervised algorithms, and it is found that the performance of the Naive-Bayes algorithm significantly improved when features were discretized using an entropy- based method.
Abstract: Many supervised machine learning algorithms require a discrete feature space. In this paper, we review previous work on continuous feature discretization, identify defining characteristics of the methods, and conduct an empirical evaluation of several methods. We compare binning, an unsupervised discretization method, to entropy-based and purity-based methods, which are supervised algorithms. We found that the performance of the Naive-Bayes algorithm significantly improved when features were discretized using an entropy-based method. In fact, over the 16 tested datasets, the discretized version of Naive-Bayes slightly outperformed C4.5 on average. We also show that in some cases, the performance of the C4.5 induction algorithm significantly improved if features were discretized in advance; in our experiments, the performance never significantly degraded, an interesting phenomenon considering the fact that C4.5 is capable of locally discretizing features.

2,089 citations
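A minimal sketch of one step of entropy-based discretization as the abstract describes it: choose the cut point that minimises the weighted class entropy of the two resulting bins. The recursive application and MDL-style stopping test are omitted, and the function names are illustrative:

```python
import math
from collections import Counter

def entropy(labels):
    """Class entropy in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_entropy_split(values, labels):
    """Return the cut point on a continuous feature that minimises the
    size-weighted entropy of the two bins it induces."""
    pairs = sorted(zip(values, labels))
    best_cut, best_e = None, float("inf")
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue                       # no valid cut between equal values
        left = [l for _, l in pairs[:i]]
        right = [l for _, l in pairs[i:]]
        e = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if e < best_e:
            best_cut, best_e = (pairs[i - 1][0] + pairs[i][0]) / 2, e
    return best_cut
```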


Network Information
Related Topics (5)

Topic | Papers | Citations | Relatedness
Cluster analysis | 146.5K | 2.9M | 87%
Artificial neural network | 207K | 4.5M | 87%
Matrix (mathematics) | 105.5K | 1.9M | 85%
Nonlinear system | 208.1K | 4M | 85%
Optimization problem | 96.4K | 2.1M | 85%
Performance Metrics
No. of papers in the topic in previous years:

Year | Papers
2022 | 10
2021 | 991
2020 | 1,509
2019 | 1,717
2018 | 1,557
2017 | 1,317