
Showing papers on "Entropy (information theory)" published in 2018


Posted Content
TL;DR: In this article, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework is proposed, where the actor aims to maximize expected reward while also maximizing entropy.
Abstract: Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy. That is, to succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as Q-learning methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds.

3,141 citations
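For reference, the maximum entropy objective underlying the soft actor-critic work above augments the expected return with a per-state entropy bonus weighted by a temperature parameter alpha (standard notation for this framework, not a verbatim excerpt from the paper):

    J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi} \Big[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \Big]

Setting alpha to zero recovers the conventional expected-return objective.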


Proceedings Article
03 Jul 2018
TL;DR: This paper proposes soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework, and achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods.
Abstract: Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy. That is, to succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as Q-learning methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds.

1,500 citations


Journal ArticleDOI
TL;DR: The authors introduce an entropy-forming-ability descriptor capturing the synthesizability of high-entropy materials, and apply the model to the discovery of new refractory metal carbides.
Abstract: High-entropy materials have attracted considerable interest due to the combination of useful properties and promising applications. Predicting their formation remains the major hindrance to the discovery of new systems. Here we propose a descriptor, the entropy forming ability, for addressing synthesizability from first principles. The formalism, based on the energy distribution spectrum of randomized calculations, captures the accessibility of equally-sampled states near the ground state and quantifies configurational disorder capable of stabilizing high-entropy homogeneous phases. The methodology is applied to disordered refractory 5-metal carbides, promising candidates for high-hardness applications. The descriptor correctly predicts the ease with which compositions can be experimentally synthesized as rock-salt high-entropy homogeneous phases, validating the ansatz and, in some cases, going beyond intuition. Several of these materials exhibit hardness up to 50% higher than rule-of-mixtures estimates. The entropy descriptor method has the potential to accelerate the search for high-entropy systems by rationally combining first principles with experimental synthesis and characterization.

511 citations
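As a rough sketch of the idea above (one common formulation, not necessarily the authors' exact formalism): the entropy-forming ability can be taken as the inverse spread of the energy spectrum obtained from many randomized orderings of a candidate composition, so a narrower spectrum signals easier stabilization of a single disordered phase. The energy values below are placeholders.

    import numpy as np

    # Hypothetical relative energies (eV/atom above the ground state) of
    # randomly ordered supercells of one candidate carbide composition.
    energies = np.array([0.00, 0.03, 0.05, 0.06, 0.08, 0.11, 0.12, 0.15])

    # A narrow spectrum means many configurations are easily accessible,
    # so configurational disorder can stabilize a homogeneous phase.
    efa = 1.0 / np.std(energies, ddof=1)  # larger EFA -> easier to synthesize
    print(f"entropy-forming ability ~ {efa:.1f} (eV/atom)^-1")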


Proceedings ArticleDOI
Fabian Mentzer1, Eirikur Agustsson1, Michael Tschannen1, Radu Timofte1, Luc Van Gool1 
18 Jun 2018
TL;DR: In this article, a 3D-CNN is used to learn a conditional probability model of the latent distribution of the auto-encoder during training, and the context model is updated to learn the dependencies between the symbols in the latent representation.
Abstract: Deep Neural Networks trained as image auto-encoders have recently emerged as a promising direction for advancing the state-of-the-art in image compression. The key challenge in learning such networks is twofold: To deal with quantization, and to control the trade-off between reconstruction error (distortion) and entropy (rate) of the latent image representation. In this paper, we focus on the latter challenge and propose a new technique to navigate the rate-distortion trade-off for an image compression auto-encoder. The main idea is to directly model the entropy of the latent representation by using a context model: A 3D-CNN which learns a conditional probability model of the latent distribution of the auto-encoder. During training, the auto-encoder makes use of the context model to estimate the entropy of its representation, and the context model is concurrently updated to learn the dependencies between the symbols in the latent representation. Our experiments show that this approach, when measured in MS-SSIM, yields a state-of-the-art image compression system based on a simple convolutional auto-encoder.

410 citations
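The rate term that such a context model supplies can be sketched as the negative log-likelihood of the quantized latent symbols under the learned conditional distribution; the probabilities below are toy stand-ins for what the 3D-CNN would predict.

    import numpy as np

    # Toy stand-ins for p(z_i | context): the probability the context model
    # assigns to each quantized latent symbol that was actually emitted.
    probs = np.array([0.62, 0.20, 0.45, 0.81, 0.33])

    # Estimated rate in bits: the negative log-likelihood under the context
    # model, i.e. the differentiable entropy estimate traded off against
    # reconstruction distortion during training.
    rate_bits = -np.sum(np.log2(probs))
    print(f"estimated rate: {rate_bits:.2f} bits for {probs.size} symbols")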


Posted Content
TL;DR: It is found that in terms of compression performance, autoregressive and hierarchical priors are complementary and can be combined to exploit the probabilistic structure in the latents better than all previous learned models.
Abstract: Recent models for learned image compression are based on autoencoders, learning approximately invertible mappings from pixels to a quantized latent representation. These are combined with an entropy model, a prior on the latent representation that can be used with standard arithmetic coding algorithms to yield a compressed bitstream. Recently, hierarchical entropy models have been introduced as a way to exploit more structure in the latents than simple fully factorized priors, improving compression performance while maintaining end-to-end optimization. Inspired by the success of autoregressive priors in probabilistic generative models, we examine autoregressive, hierarchical, as well as combined priors as alternatives, weighing their costs and benefits in the context of image compression. While it is well known that autoregressive models come with a significant computational penalty, we find that in terms of compression performance, autoregressive and hierarchical priors are complementary and, together, exploit the probabilistic structure in the latents better than all previous learned models. The combined model yields state-of-the-art rate-distortion performance, providing a 15.8% average reduction in file size over the previous state-of-the-art method based on deep learning, which corresponds to a 59.8% size reduction over JPEG, more than 35% reduction compared to WebP and JPEG2000, and bitstreams 8.4% smaller than BPG, the current state-of-the-art image codec. To the best of our knowledge, our model is the first learning-based method to outperform BPG on both PSNR and MS-SSIM distortion metrics.

391 citations


Proceedings Article
03 Dec 2018
TL;DR: In this article, the authors compare the performance of autoregressive, hierarchical, and combined priors in the context of image compression and find that, in terms of compression performance, autoregressive and hierarchical priors are complementary and can be combined to exploit the probabilistic structure in the latents better than all previous learned models.
Abstract: Recent models for learned image compression are based on autoencoders that learn approximately invertible mappings from pixels to a quantized latent representation. The transforms are combined with an entropy model, which is a prior on the latent representation that can be used with standard arithmetic coding algorithms to generate a compressed bitstream. Recently, hierarchical entropy models were introduced as a way to exploit more structure in the latents than previous fully factorized priors, improving compression performance while maintaining end-to-end optimization. Inspired by the success of autoregressive priors in probabilistic generative models, we examine autoregressive, hierarchical, and combined priors as alternatives, weighing their costs and benefits in the context of image compression. While it is well known that autoregressive models can incur a significant computational penalty, we find that in terms of compression performance, autoregressive and hierarchical priors are complementary and can be combined to exploit the probabilistic structure in the latents better than all previous learned models. The combined model yields state-of-the-art rate–distortion performance and generates smaller files than existing methods: 15.8% rate reductions over the baseline hierarchical model and 59.8%, 35%, and 8.4% savings over JPEG, JPEG2000, and BPG, respectively. To the best of our knowledge, our model is the first learning-based method to outperform the top standard image codec (BPG) on both the PSNR and MS-SSIM distortion metrics.

355 citations


ReportDOI
10 Jan 2018
TL;DR: This Recommendation specifies the design principles and requirements for the entropy sources used by Random Bit Generators, and the tests for the validation of entropy sources.
Abstract: This Recommendation specifies the design principles and requirements for the entropy sources used by Random Bit Generators, and the tests for the validation of entropy sources. These entropy sources are intended to be combined with Deterministic Random Bit Generator mechanisms that are specified in SP 800-90A to construct Random Bit Generators, as specified in SP 800-90C.

313 citations
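A minimal sketch in the spirit of this Recommendation, a most-common-value style min-entropy estimate per sample with a simple confidence adjustment (illustrative only, not the normative SP 800-90B procedure):

    from collections import Counter
    from math import log2, sqrt

    def min_entropy_mcv(samples):
        """Most-common-value style min-entropy estimate in bits per sample."""
        n = len(samples)
        p_hat = Counter(samples).most_common(1)[0][1] / n
        # Conservative upper bound on the most likely symbol's probability.
        p_upper = min(1.0, p_hat + 2.576 * sqrt(p_hat * (1 - p_hat) / (n - 1)))
        return -log2(p_upper)

    raw_noise = [0, 1, 1, 0, 3, 2, 1, 0, 2, 1, 0, 3, 1, 2, 0, 1]
    print(f"estimated min-entropy: {min_entropy_mcv(raw_noise):.3f} bits/sample")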


Proceedings ArticleDOI
18 Jun 2018
TL;DR: Zhang et al. as mentioned in this paper proposed to adapt the bit rate of different parts of the image to local content and allocate the content-aware bit rate under the guidance of a content-weighted importance map.
Abstract: Lossy image compression is generally formulated as a joint rate-distortion optimization problem to learn the encoder, quantizer, and decoder. Due to the non-differentiable quantizer and discrete entropy estimation, it is very challenging to develop a convolutional network (CNN)-based image compression system. In this paper, motivated by the observation that the local information content is spatially variant in an image, we suggest that: (i) the bit rate of different parts of the image should be adapted to the local content, and (ii) the content-aware bit rate should be allocated under the guidance of a content-weighted importance map. The sum of the importance map can thus serve as a continuous alternative to discrete entropy estimation for controlling the compression rate. A binarizer is adopted to quantize the output of the encoder, and a proxy function is introduced to approximate the binary operation in backward propagation and make it differentiable. The encoder, decoder, binarizer, and importance map can be jointly optimized in an end-to-end manner. A convolutional entropy encoder is further presented for lossless compression of the importance map and binary codes. In low bit rate image compression, experiments show that our system significantly outperforms JPEG and JPEG 2000 in terms of the structural similarity (SSIM) index, and can produce much better visual results with sharp edges, rich textures, and fewer artifacts.

259 citations


Journal ArticleDOI
TL;DR: Results show that the EWT outperforms empirical mode decomposition for decomposing the signal into multiple components, and the proposed EWTFSFD method can accurately and effectively achieve the fault diagnosis of motor bearing.
Abstract: Motor bearings are subjected to the combined effects of loads, transmissions, and shocks that cause bearing faults and machinery breakdown. Vibration signal analysis is the most popular technique used to monitor and diagnose motor bearing faults. However, the application of vibration signal analysis to motor bearings is still very limited in engineering practice. In this paper, on the basis of comparing fault feature extraction using the empirical wavelet transform (EWT) and the Hilbert transform with theoretical calculation, a new motor bearing fault diagnosis method integrating EWT, fuzzy entropy, and a support vector machine (SVM), called EWTFSFD, is proposed. In the proposed method, a novel signal processing method called EWT is used to decompose the vibration signal into multiple components in order to extract a series of amplitude modulated-frequency modulated (AM-FM) components with supporting Fourier spectrum under an orthogonal basis. Then, fuzzy entropy is utilized to measure the complexity of the vibration signal and reflect the complexity changes of the intrinsic oscillation, and the fuzzy entropy values of the AM-FM components are regarded as the inputs of an SVM model to train and construct an SVM classifier for fault pattern recognition. Finally, the effectiveness of the proposed method is validated using a simulated signal and real motor bearing vibration signals. The experimental results show that the EWT outperforms empirical mode decomposition for decomposing the signal into multiple components, and the proposed EWTFSFD method can accurately and effectively achieve the fault diagnosis of motor bearings.

225 citations
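For context, the fuzzy entropy of a time series can be computed roughly as follows (a common formulation with embedding dimension m, tolerance r, and fuzzy exponent n; the paper's exact variant and parameters may differ):

    import numpy as np

    def fuzzy_entropy(x, m=2, r=0.2, n=2):
        """Fuzzy entropy of a 1-D signal; higher values mean greater complexity."""
        x = np.asarray(x, dtype=float)
        tol = r * x.std()

        def phi(dim):
            # Embed the signal and remove each vector's own mean (local baseline).
            vecs = np.array([x[i:i + dim] for i in range(len(x) - dim)])
            vecs -= vecs.mean(axis=1, keepdims=True)
            # Chebyshev distance between every pair of embedded vectors.
            d = np.max(np.abs(vecs[:, None, :] - vecs[None, :, :]), axis=2)
            sim = np.exp(-(d ** n) / tol)      # fuzzy similarity degree
            np.fill_diagonal(sim, 0.0)         # exclude self-matches
            return sim.sum() / (len(vecs) * (len(vecs) - 1))

        return np.log(phi(m)) - np.log(phi(m + 1))

    rng = np.random.default_rng(0)
    print(fuzzy_entropy(rng.standard_normal(300)))  # white noise gives a relatively high value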


Posted Content
TL;DR: In this article, the authors show that for a fixed amount of coarse-grained information, measured by the von Neumann entropy, any system can be transformed to a state that possesses minimal energy, without changing its entropy.
Abstract: Thermodynamics and information have intricate inter-relations. The fact that information is physical is justified by inter-linking information and thermodynamics through Landauer's principle. This modern approach to information has recently improved our understanding of thermodynamics, in both the classical and quantum domains. Here we show thermodynamics as a consequence of information conservation. Our approach applies to the most general situations, where systems and thermal baths can be quantum, of arbitrary size, and may even possess inter-system correlations. The approach does not rely on an a priori predetermined temperature associated with a thermal bath, which is not meaningful for finite-size cases. Hence, the thermal baths and systems are not different; rather, both are treated on an equal footing. This results in a "temperature"-independent formulation of thermodynamics. We exploit the fact that, for a fixed amount of coarse-grained information, measured by the von Neumann entropy, any system can be transformed to a state that possesses minimal energy, without changing its entropy. This state is known as a completely passive state, which assumes the Boltzmann-Gibbs canonical form with an intrinsic temperature. This leads us to introduce the notions of bound and free energy, which we further use to quantify heat and work, respectively. With this guiding principle of information conservation, we develop universal notions of equilibrium, heat and work, Landauer's principle, and universal fundamental laws of thermodynamics. We show that the maximum efficiency of a quantum engine equipped with finite baths is in general lower than that of an ideal Carnot engine. We also introduce a resource-theoretic framework for intrinsic-temperature-based thermodynamics, within which we address the problem of work extraction and state transformations.

193 citations


Journal ArticleDOI
TL;DR: This paper investigates the applications of entropy for fault characteristic extraction in rotating machines and reviews applications of both the original entropy methods and their improved variants.
Abstract: Rotating machines have been widely used in industrial engineering. The fault diagnosis of rotating machines plays a vital role in reducing catastrophic failures and heavy economic losses. However, the measured vibration signal of rotating machinery often exhibits non-linear and non-stationary characteristics, resulting in difficulty in fault feature extraction. As a statistical measure, entropy can quantify complexity and detect dynamic change by taking into account the non-linear behavior of time series. Therefore, entropy can serve as a promising tool to extract the dynamic characteristics of rotating machines. Recently, many studies have applied entropy in the fault diagnosis of rotating machinery. This paper investigates the applications of entropy for the fault characteristic extraction of rotating machines. First, various entropy methods are briefly introduced; their foundations, applications, and improvements are described and discussed. The review covers Shannon entropy, Renyi entropy, approximate entropy, sample entropy, fuzzy entropy, permutation entropy, and other entropy methods, and for each it reviews the applications of the original entropy method and its improved versions, respectively. In the end, a summary and some research prospects are given.
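Of the measures surveyed above, permutation entropy is among the simplest to state; a minimal sketch of the standard definition (order m, unit delay), intended only to illustrate the family of methods being reviewed:

    import math
    from collections import Counter
    import numpy as np

    def permutation_entropy(x, m=3):
        """Normalized permutation entropy of a 1-D signal (order m, delay 1)."""
        patterns = [tuple(np.argsort(x[i:i + m])) for i in range(len(x) - m + 1)]
        counts = np.array(list(Counter(patterns).values()), dtype=float)
        p = counts / counts.sum()
        return float(-(p * np.log(p)).sum() / math.log(math.factorial(m)))

    t = np.linspace(0, 10 * np.pi, 500)
    print(permutation_entropy(np.sin(t)))            # regular signal: low value
    print(permutation_entropy(np.random.rand(500)))  # noisy signal: close to 1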

Posted Content
Fabian Mentzer1, Eirikur Agustsson1, Michael Tschannen1, Radu Timofte1, Luc Van Gool1 
TL;DR: This paper proposes a new technique to navigate the rate-distortion trade-off for an image compression auto-encoder by using a context model: A 3D-CNN which learns a conditional probability model of the latent distribution of the auto- Encoder.
Abstract: Deep Neural Networks trained as image auto-encoders have recently emerged as a promising direction for advancing the state-of-the-art in image compression. The key challenge in learning such networks is twofold: To deal with quantization, and to control the trade-off between reconstruction error (distortion) and entropy (rate) of the latent image representation. In this paper, we focus on the latter challenge and propose a new technique to navigate the rate-distortion trade-off for an image compression auto-encoder. The main idea is to directly model the entropy of the latent representation by using a context model: A 3D-CNN which learns a conditional probability model of the latent distribution of the auto-encoder. During training, the auto-encoder makes use of the context model to estimate the entropy of its representation, and the context model is concurrently updated to learn the dependencies between the symbols in the latent representation. Our experiments show that this approach, when measured in MS-SSIM, yields a state-of-the-art image compression system based on a simple convolutional auto-encoder.

Posted Content
TL;DR: In this article, two complementary losses based on the entropy of the pixel-wise predictions, an entropy loss and an adversarial loss, are proposed for unsupervised domain adaptation in semantic segmentation.
Abstract: Semantic segmentation is a key problem for many computer vision tasks. While approaches based on convolutional neural networks constantly break new records on different benchmarks, generalizing well to diverse testing environments remains a major challenge. In numerous real world applications, there is indeed a large gap between data distributions in train and test domains, which results in severe performance loss at run-time. In this work, we address the task of unsupervised domain adaptation in semantic segmentation with losses based on the entropy of the pixel-wise predictions. To this end, we propose two novel, complementary methods using (i) entropy loss and (ii) adversarial loss respectively. We demonstrate state-of-the-art performance in semantic segmentation on two challenging "synthetic-2-real" set-ups and show that the approach can also be used for detection.
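The entropy loss referred to above can be sketched as the mean Shannon entropy of the per-pixel softmax predictions, which adaptation then minimizes on unlabeled target images (a schematic numpy version, not the authors' implementation):

    import numpy as np

    def pixelwise_entropy_loss(prob_maps, eps=1e-12):
        """Mean entropy of per-pixel class probabilities.
        prob_maps: array of shape (C, H, W) holding softmax outputs over C classes."""
        ent = -np.sum(prob_maps * np.log(prob_maps + eps), axis=0)  # (H, W)
        return ent.mean()

    # Confident predictions give a low loss, near-uniform ones a high loss.
    confident = np.zeros((3, 4, 4)); confident[0] = 0.98; confident[1:] = 0.01
    uniform = np.full((3, 4, 4), 1.0 / 3.0)
    print(pixelwise_entropy_loss(confident), pixelwise_entropy_loss(uniform))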

Journal ArticleDOI
TL;DR: A new score function for interval-valued fuzzy numbers is proposed for tackling the comparison problem, formulae for information measures are introduced along with their transformation relations, and the objective weights of various parameters are determined via a new entropy method.

Journal ArticleDOI
TL;DR: Extensions to data-driven computing for both distance-minimizing and entropy-maximizing schemes are formulated to incorporate time integration, and selected numerical tests are presented that establish the convergence properties of both types of data-driven solvers and solutions.
Abstract: We formulate extensions to Data Driven Computing for both distance minimizing and entropy maximizing schemes to incorporate time integration. Previous works focused on formulating both types of solvers in the presence of static equilibrium constraints. Here, the formulations assign data points a variable relevance depending on distance to the solution and on maximum-entropy weighting, with distance-minimizing schemes discussed as a special case. The resulting schemes consist of the minimization of a suitably defined free energy over phase space, subject to compatibility and a time-discretized momentum conservation constraint. We present selected numerical tests that establish the convergence properties of both types of Data Driven solvers and solutions.
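A hedged illustration of the maximum-entropy weighting described above (the Gaussian form and parameter names are assumptions for illustration, not the paper's exact scheme): each material data point receives a relevance that decays with its distance to the current trial state.

    import numpy as np

    def maxent_weights(distances, beta=1.0):
        """Boltzmann-like relevance weights over material data points.
        Larger beta concentrates weight on the nearest point, recovering a
        distance-minimizing scheme as a limiting case."""
        w = np.exp(-beta * np.asarray(distances, dtype=float) ** 2)
        return w / w.sum()

    print(maxent_weights([0.1, 0.5, 1.2], beta=4.0))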

Journal ArticleDOI
TL;DR: A new definition of entropy of basic probability assignments in the Dempster–Shafer theory of belief functions, which is interpreted as a measure of total uncertainty in the BPA, is proposed, which satisfies all six properties in the list, whereas none of the existing definitions do.

Journal ArticleDOI
TL;DR: A novel early fault feature extraction method based on the proposed hierarchical symbol dynamic entropy (HSDE) and the binary tree support vector machine (BT-SVM) is proposed to recognize the early fault types of rolling bearings.

Journal ArticleDOI
TL;DR: This work derives a generalization of the operationally accessible entanglement that is both computationally and experimentally measurable, investigates its scaling with the size of a spatial subregion for free fermions, and finds a logarithmically violated area-law scaling, similar to the spatial entanglement entropy.
Abstract: Operationally accessible entanglement in bipartite systems of indistinguishable particles could be reduced due to restrictions on the allowed local operations as a result of particle number conservation. In order to quantify this effect, Wiseman and Vaccaro [Phys. Rev. Lett. 91, 097902 (2003)] introduced an operational measure of the von Neumann entanglement entropy. Motivated by advances in measuring Rényi entropies in quantum many-body systems subject to conservation laws, we derive a generalization of the operationally accessible entanglement that is both computationally and experimentally measurable. Using the Widom theorem, we investigate its scaling with the size of a spatial subregion for free fermions and find a logarithmically violated area law scaling, similar to the spatial entanglement entropy, with at most a double-log leading-order correction. A modification of the correlation matrix method confirms our findings in systems of up to 10^5 particles.

Journal ArticleDOI
TL;DR: This paper defines semantic information as the syntactic information that a physical system has about its environment which is causally necessary for the system to maintain its own existence, and uses recent results in non-equilibrium statistical physics to analyse semantic information from a thermodynamic point of view.
Abstract: Shannon information theory provides various measures of so-called syntactic information, which reflect the amount of statistical correlation between systems. By contrast, the concept of ‘semantic information’ ...

Journal ArticleDOI
Fuyuan Xiao1
TL;DR: From the experimental results, it is demonstrated that the proposed method outperforms the related methods, because the uncertainty resulting from human’s subjective cognition can be reduced; meanwhile, the decision-making level can also be improved with better performance.
Abstract: The existing approaches for fuzzy soft sets decision-making are mainly based on different types of level soft sets. How to deal with such kinds of fuzzy soft sets decision-making problems via decreasing the uncertainty resulting from human’s subjective cognition is still an open issue. To address this issue, a hybrid method for utilizing fuzzy soft sets in decision-making by integrating a fuzzy preference relations analysis based on the belief entropy with the Dempster–Shafer evidence theory is proposed. The proposed method is composed of four procedures. First, we measure the uncertainties of parameters by leveraging the belief entropy. Second, with the fuzzy preference relations analysis, the uncertainties of parameters are modulated by making use of the relative reliability preference of parameters. Third, an appropriate basic probability assignment in terms of each parameter is generated on the modulated uncertainty degrees of parameters basis. Finally, we adopt Dempster’s combination rule to fuse the independent parameters into an integrated one; thus, the best one can be obtained based on the ranking candidate alternatives. In order to validate the feasibility and effectiveness of the proposed method, a numerical example and a medical diagnosis application are implemented. From the experimental results, it is demonstrated that the proposed method outperforms the related methods, because the uncertainty resulting from human’s subjective cognition can be reduced; meanwhile, the decision-making level can also be improved with better performance.

Journal ArticleDOI
TL;DR: A new type of operator called an intuitionistic fuzzy entropy weighted power average aggregation operator is proposed, which is completely driven by data and fully takes into account the relationship among values.
Abstract: Atanassov's intuitionistic fuzzy set (IFS) is a generalization of a fuzzy set that can express and process uncertainty much better. There are various averaging operators defined for IFSs. In this paper, a new type of operator called an intuitionistic fuzzy entropy weighted power average aggregation operator is proposed. The entropy among IFSs is taken into consideration to determine the weights. Moreover, similarity is considered to measure the support degree between two elements of the IFS. Compared with other classical power average operators, the proposed operator is completely driven by data and fully takes into account the relationship among values. Finally, an illustrative example of multiple attribute group decision making is presented to show that the proposed operator is effective and practical.

Journal ArticleDOI
TL;DR: In the absence of dissipation, it is proved that the semi-discretization of the Euler equations based on high-order summation-by-parts operators conserves entropy; significantly, this proof of nonlinear L2 stability does not rely on integral exactness.

Journal ArticleDOI
Fuyuan Xiao1
TL;DR: An improved conflicting evidence combination approach based on a similarity measure and belief function entropy is proposed and shown to be reasonable and efficient in dealing with conflicting evidences, with better convergence.
Abstract: Dempster–Shafer evidence theory is widely adopted in a variety of fields of information fusion. Nevertheless, how to avoid counter-intuitive results when combining conflicting evidences is still an open issue. In order to overcome this problem, an improved conflicting evidence combination approach based on a similarity measure and belief function entropy is proposed. First, the credibility degree of the evidences and their corresponding global credibility degree are calculated on the basis of the modified cosine similarity measure of the basic probability assignments. Next, according to the global credibility degree of the evidences, the primitive evidences are divided into two categories, namely, reliable evidences and unreliable evidences. In addition, to strengthen the positive effect of the reliable evidences and alleviate the negative impact of the unreliable evidences, a reward function and a penalty function are designed, respectively, to measure the information volume of the different types of evidences by taking advantage of the Deng entropy function. Then, the weight value obtained from the first step is modified by making use of the measured information volume. Finally, the modified weights of the evidences are applied to adjust the bodies of evidence before using Dempster's combination rule. A numerical example is provided to illustrate that the proposed method is reasonable and efficient in dealing with conflicting evidences, with better convergence. The results show that the proposed method is not only efficient but also reliable; it outperforms other related methods, recognising the target with an accuracy of 98.92%.
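Dempster's combination rule, which the modified weights above ultimately feed into, multiplies the masses of intersecting focal elements and renormalizes by the conflict; a minimal sketch using frozensets as focal elements (illustrative, not the paper's code):

    def dempster_combine(m1, m2):
        """Combine two BPAs given as dicts {frozenset of hypotheses: mass}."""
        combined, conflict = {}, 0.0
        for a, ma in m1.items():
            for b, mb in m2.items():
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + ma * mb
                else:
                    conflict += ma * mb  # mass assigned to the empty set
        # Renormalize by the non-conflicting mass.
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    m1 = {frozenset({"A"}): 0.6, frozenset({"A", "B"}): 0.4}
    m2 = {frozenset({"B"}): 0.7, frozenset({"A", "B"}): 0.3}
    print(dempster_combine(m1, m2))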

Journal ArticleDOI
Huimin Zhao1, Rui Yao1, Ling Xu, Yu Yuan1, Guangyu Li1, Wu Deng 
07 Sep 2018-Entropy
TL;DR: The HMGSEDI method is an effective quantitative fault damage degree identification method, and provides a new way to identify the fault damage degree and perform fault prediction of rotating machinery.
Abstract: A damage degree identification method based on high-order difference mathematical morphology gradient spectrum entropy (HMGSEDI) is proposed in this paper to solve the problem that fault signals of rolling bearings are weak and difficult to measure quantitatively. In the HMGSEDI method, on the basis of the mathematical morphology gradient spectrum and spectrum entropy, the influence of the changing scale of structure elements on damage degree identification is thoroughly analyzed to determine its optimal scale range. The high-order difference mathematical morphology gradient spectrum entropy is then defined in order to quantitatively describe the fault damage degree of a bearing. A discrimination concept of fault damage degree is defined to quantitatively describe the difference between the high-order difference mathematical morphology entropy and the general mathematical morphology entropy, in order to propose a fault damage degree identification method. The vibration signals of motors under no-load and load states are used to verify the effectiveness of the proposed HMGSEDI method. The experiments show that the high-order difference mathematical morphology entropy can more effectively identify the fault damage degree of bearings and that the identification accuracy of the fault damage degree can be greatly improved. Therefore, the HMGSEDI method is an effective quantitative fault damage degree identification method and provides a new way to identify the fault damage degree and perform fault prediction of rotating machinery.

Book
30 Apr 2018
TL;DR: In this article, the authors present a factorization method for estimating resolvents of non-symmetric operators in Banach or Hilbert spaces in terms of estimates in another (typically smaller) reference space.
Abstract: We present a factorization method for estimating resolvents of non-symmetric operators in Banach or Hilbert spaces in terms of estimates in another (typically smaller) "reference" space. This applies to a class of operators that can be written as a "regularizing" part (in a broad sense) plus a dissipative part. Then, in the Hilbert case, we combine this factorization approach with an abstract Plancherel identity on the resolvent into a method for enlarging the functional space of decay estimates on semigroups. In the Banach case, we prove the same result, however with some loss on the norm. We then apply this functional analysis approach to several PDEs: the Fokker-Planck and kinetic Fokker-Planck equations, the linear scattering Boltzmann equation in the torus, and, most importantly, the linearized Boltzmann equation in the torus (at the price of extra specific work in the latter case). In addition to the abstract method itself, the main outcome of the paper is the first proof of exponential decay towards global equilibrium (e.g. in terms of the relative entropy) for the full Boltzmann equation for hard spheres, conditionally on some smoothness and (polynomial) moment estimates. This improves on the result in [Desvillettes-Villani, Invent. Math., 2005], where the rate was "almost exponential", that is, polynomial with exponent as high as wanted, and solves a long-standing conjecture about the rate of decay in the H-theorem for the nonlinear Boltzmann equation, see for instance [Cercignani, Arch. Mech., 1982] and [Rezakhanlou-Villani, Lecture Notes Springer, 2001].

Journal ArticleDOI
TL;DR: An improved environmental DEA cross model based on information entropy is proposed to analyze and evaluate the carbon emissions of industrial departments in China, and it can obtain the carbon emission reduction potential of industrial departments to improve energy efficiency.

Journal ArticleDOI
12 Feb 2018-PLOS ONE
TL;DR: It is demonstrated that access to variable neural states predicts complex behavioral performance, and specifically shows that entropy derived from neuroimaging signals at rest carries information about intellectual capacity.
Abstract: Human intelligence comprises comprehension of and reasoning about an infinitely variable external environment. A brain capable of large variability in neural configurations, or states, will more easily understand and predict variable external events. Entropy measures the variety of configurations possible within a system, and recently the concept of brain entropy has been defined as the number of neural states a given brain can access. This study investigates the relationship between human intelligence and brain entropy, to determine whether neural variability as reflected in neuroimaging signals carries information about intellectual ability. We hypothesize that intelligence will be positively associated with entropy in a sample of 892 healthy adults, using resting-state fMRI. Intelligence is measured with the Shipley Vocabulary and WASI Matrix Reasoning tests. Brain entropy was positively associated with intelligence. This relation was most strongly observed in the prefrontal cortex, inferior temporal lobes, and cerebellum. This relationship between high brain entropy and high intelligence indicates an essential role for entropy in brain functioning. It demonstrates that access to variable neural states predicts complex behavioral performance, and specifically shows that entropy derived from neuroimaging signals at rest carries information about intellectual capacity. Future work in this area may elucidate the links between brain entropy in both resting and active states and various forms of intelligence. This insight has the potential to provide predictive information about adaptive behavior and to delineate the subdivisions and nature of intelligence based on entropic patterns.

Journal ArticleDOI
TL;DR: This study provides a comprehensive comparison of these belief-interval-based uncertainty measures and is very useful for choosing the appropriate uncertainty measure in practical applications.

Journal ArticleDOI
03 Nov 2018-Entropy
TL;DR: The main work of this paper is to propose a new belief entropy, which is mainly used to measure the uncertainty of BPA, and is based on Deng entropy and probability interval consisting of lower and upper probabilities.
Abstract: How to measure the uncertainty of the basic probability assignment (BPA) function is an open issue in Dempster–Shafer (D–S) theory. The main work of this paper is to propose a new belief entropy, which is mainly used to measure the uncertainty of a BPA. The proposed belief entropy is based on Deng entropy and a probability interval consisting of lower and upper probabilities. In addition, under certain conditions, it can be transformed into Shannon entropy. Numerical examples are used to illustrate the efficiency of the new belief entropy in measuring uncertainty.
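Deng entropy, which the proposed measure builds on, weighs each focal element's mass against the number of nonempty subsets its cardinality allows; a minimal sketch of that baseline quantity (the paper's probability-interval extension is not reproduced here):

    from math import log2

    def deng_entropy(bpa):
        """Deng entropy of a BPA given as {frozenset of hypotheses: mass}.
        Reduces to Shannon entropy when every focal element is a singleton."""
        return -sum(m * log2(m / (2 ** len(A) - 1)) for A, m in bpa.items() if m > 0)

    bpa = {frozenset("a"): 0.5, frozenset("bc"): 0.3, frozenset("abc"): 0.2}
    print(f"Deng entropy: {deng_entropy(bpa):.4f} bits")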

Journal ArticleDOI
TL;DR: In this article, the authors studied the entanglement entropy of a single interval on a cylinder in two-dimensional TT̄-deformed conformal field theory (CFT) and showed that the Rényi entropy takes a universal form in a CFT.
Abstract: In this paper, we study the entanglement entropy of a single interval on a cylinder in two-dimensional TT̄-deformed conformal field theory (CFT). For such a case, the (Rényi) entanglement entropy takes a universal form in a CFT. We compute the correction due to the deformation up to the leading order of the deformation parameter in the framework of conformal perturbation theory. We find that the correction to the entanglement entropy is nonvanishing in the finite temperature case, while it vanishes in the finite size case. For the deformed holographic large-c CFT, which is proposed to be dual to AdS3 gravity in a finite region, we find agreement with the holographic entanglement entropy via the Ryu-Takayanagi formula. Moreover, we compute the leading-order correction to the Rényi entropy, and discuss its holographic picture as well.