Showing papers on "Entropy (information theory) published in 2012"


Journal ArticleDOI
23 Aug 2012-Entropy
TL;DR: The theoretical foundations of permutation entropy are analyzed, as well as its main recent applications to the analysis of economic markets and to the understanding of biomedical systems.
Abstract: Entropy is a powerful tool for the analysis of time series, as it allows one to describe the probability distribution of the possible states of a system, and therefore the information encoded in it. Nevertheless, important information may also be encoded in the temporal dynamics, an aspect that is not usually taken into account. The idea of calculating entropy from permutation patterns (that is, permutations defined by the order relations among values of a time series) has received a great deal of attention in recent years, especially for the understanding of complex and chaotic systems. Permutation entropy directly accounts for the temporal information contained in the time series; furthermore, it has the virtues of simplicity, robustness, and very low computational cost. To celebrate the tenth anniversary of the original work, here we analyze the theoretical foundations of permutation entropy, as well as its main recent applications to the analysis of economic markets and to the understanding of biomedical systems.
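To make the ordinal-pattern idea concrete, here is a minimal Python sketch of permutation entropy for a univariate series; the function name, defaults, and normalization choice are our own, not taken from the review.

```python
import math
import random
from itertools import permutations

def permutation_entropy(series, order=3, delay=1, normalize=True):
    """Permutation entropy of a 1-D series via ordinal patterns (Bandt-Pompe style)."""
    counts = {p: 0 for p in permutations(range(order))}
    n = len(series) - (order - 1) * delay
    for i in range(n):
        window = [series[i + j * delay] for j in range(order)]
        # The ordinal pattern is the permutation that sorts the window.
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] += 1
    probs = [c / n for c in counts.values() if c > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(math.factorial(order)) if normalize else h

# A regular signal yields low permutation entropy; white noise is close to 1.
print(permutation_entropy([math.sin(0.1 * t) for t in range(1000)]))
print(permutation_entropy([random.random() for _ in range(1000)]))
```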

537 citations


Journal ArticleDOI
TL;DR: The authors develop desiderata for probabilistic optimization algorithms, then present a concrete algorithm that addresses each of the computational intractabilities with a sequence of approximations and explicitly addresses the decision problem of maximizing information gain from each evaluation.
Abstract: Contemporary global optimization algorithms are based on local measures of utility, rather than a probability measure over location and value of the optimum. They thus attempt to collect low function values, not to learn about the optimum. The reason for the absence of probabilistic global optimizers is that the corresponding inference problem is intractable in several ways. This paper develops desiderata for probabilistic optimization algorithms, then presents a concrete algorithm which addresses each of the computational intractabilities with a sequence of approximations and explicitly addresses the decision problem of maximizing information gain from each evaluation.

403 citations


Proceedings Article
03 Dec 2012
TL;DR: This work proposes a minimax entropy principle to improve the quality of noisy labels from crowds of nonexperts, and shows that a simple coordinate descent scheme can optimize minimax entropy.
Abstract: An important way to make large training sets is to gather noisy labels from crowds of nonexperts. We propose a minimax entropy principle to improve the quality of these labels. Our method assumes that labels are generated by a probability distribution over workers, items, and labels. By maximizing the entropy of this distribution, the method naturally infers item confusability and worker expertise. We infer the ground truth by minimizing the entropy of this distribution, which we show minimizes the Kullback-Leibler (KL) divergence between the probability distribution and the unknown truth. We show that a simple coordinate descent scheme can optimize minimax entropy. Empirically, our results are substantially better than previously published methods for the same problem.

393 citations


Journal ArticleDOI
TL;DR: The most important properties of Rényi divergence and Kullback-Leibler divergence are reviewed, including convexity, continuity, limits of σ-algebras, and the relation of the special order 0 to the Gaussian dichotomy and contiguity.
Abstract: Rényi divergence is related to Rényi entropy much like Kullback-Leibler divergence is related to Shannon's entropy, and comes up in many settings. It was introduced by Rényi as a measure of information that satisfies almost the same axioms as Kullback-Leibler divergence, and depends on a parameter that is called its order. In particular, the Rényi divergence of order 1 equals the Kullback-Leibler divergence. We review and extend the most important properties of Rényi divergence and Kullback-Leibler divergence, including convexity, continuity, limits of σ-algebras and the relation of the special order 0 to the Gaussian dichotomy and contiguity. We also show how to generalize the Pythagorean inequality to orders different from 1, and we extend the known equivalence between channel capacity and minimax redundancy to continuous channel inputs (for all orders) and present several other minimax results.
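As a small illustration of the order parameter (our own sketch, not code from the paper), the Rényi divergence between two discrete distributions can be computed as follows, with order 1 handled as the Kullback-Leibler limit.

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """D_alpha(P || Q) for discrete distributions with q > 0 wherever p > 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    if np.isclose(alpha, 1.0):
        # Order 1 is the Kullback-Leibler divergence.
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
    return float(np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0))

p, q = [0.5, 0.3, 0.2], [0.4, 0.4, 0.2]
for a in (0.5, 0.99, 1.0, 2.0):
    print(a, renyi_divergence(p, q, a))  # D_0.99 is already close to the KL value
```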

350 citations


Journal ArticleDOI
TL;DR: A formula is presented that decomposes TE into a sum of finite-dimensional contributions, called decomposed transfer entropy, and the method's performance is demonstrated using examples of nonlinear stochastic delay-differential equations and observational climate data.
Abstract: Multivariate transfer entropy (TE) is a model-free approach to detect causalities in multivariate time series. It is able to distinguish direct from indirect causality and common drivers without assuming any underlying model. But despite these advantages it has mostly been applied in a bivariate setting as it is hard to estimate reliably in high dimensions since its definition involves infinite vectors. To overcome this limitation, we propose to embed TE into the framework of graphical models and present a formula that decomposes TE into a sum of finite-dimensional contributions that we call decomposed transfer entropy. Graphical models further provide a richer picture because they also yield the causal coupling delays. To estimate the graphical model we suggest an iterative algorithm, a modified version of the PC-algorithm with a very low estimation dimension. We present an appropriate significance test and demonstrate the method's performance using examples of nonlinear stochastic delay-differential equations and observational climate data (sea level pressure).
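For readers unfamiliar with TE itself, the following plug-in estimate of bivariate transfer entropy for discrete-valued series with history length 1 illustrates the quantity being decomposed; it is our own simplified sketch and does not implement the graphical-model decomposition or the modified PC-algorithm described above.

```python
from collections import Counter
from math import log
import random

def transfer_entropy(x, y, base=2):
    """Plug-in estimate of TE(Y -> X): information y[t] gives about x[t+1]
    beyond x's own past, for discrete series and history length 1."""
    n = len(x) - 1
    triples = Counter((x[t + 1], x[t], y[t]) for t in range(n))
    pairs_xx = Counter((x[t + 1], x[t]) for t in range(n))
    pairs_xy = Counter((x[t], y[t]) for t in range(n))
    singles_x = Counter(x[t] for t in range(n))
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n
        p_cond_xy = c / pairs_xy[(x0, y0)]               # p(x_{t+1} | x_t, y_t)
        p_cond_x = pairs_xx[(x1, x0)] / singles_x[x0]    # p(x_{t+1} | x_t)
        te += p_joint * log(p_cond_xy / p_cond_x, base)
    return te

# y drives x with one step of lag, so TE(y -> x) is large and TE(x -> y) is ~0.
y = [random.randint(0, 1) for _ in range(5000)]
x = [0] + y[:-1]
print(transfer_entropy(x, y), transfer_entropy(y, x))
```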

285 citations


Journal ArticleDOI
TL;DR: Two multiattribute decision-making methods are developed in which the attribute values are given in the form of hesitant fuzzy sets, reflecting humans' hesitant thinking comprehensively, and it is found that the three measures are interchangeable under certain conditions.
Abstract: We introduce the concepts of entropy and cross-entropy for hesitant fuzzy information, and discuss their desirable properties. Several measure formulas are further developed, and the relationships among the proposed entropy, cross-entropy, and similarity measures are analyzed, from which we find that the three measures are interchangeable under certain conditions. Then we develop two multiattribute decision-making methods in which the attribute values are given in the form of hesitant fuzzy sets, reflecting humans' hesitant thinking comprehensively. In one method, the weight vector is determined by the hesitant fuzzy entropy measure, and the optimal alternative is obtained by comparing the hesitant fuzzy cross-entropies between the alternatives and the ideal solutions; in the other method, the weight vector is derived from the maximizing deviation method and the optimal alternative is obtained by using the TOPSIS method. An actual example is provided to compare our methods with existing ones. © 2012 Wiley Periodicals, Inc.
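The paper's weighting step uses a hesitant fuzzy entropy measure; as a simpler stand-in that conveys the same idea (attributes whose values vary little carry little weight), here is the classical Shannon entropy weight method for a crisp decision matrix. The function and example data are illustrative assumptions, not the authors' formulas.

```python
import numpy as np

def entropy_weights(decision_matrix):
    """Classical Shannon entropy weight method for a crisp (alternatives x
    attributes) matrix of benefit-type values; a simplified stand-in for the
    hesitant fuzzy entropy weighting described in the paper."""
    X = np.asarray(decision_matrix, float)
    m, _ = X.shape
    P = X / X.sum(axis=0)                    # column-normalized proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    E = -(P * logs).sum(axis=0) / np.log(m)  # entropy per attribute, in [0, 1]
    d = 1.0 - E                              # divergence: higher -> more informative
    return d / d.sum()

X = [[7, 5, 9],
     [6, 5, 3],
     [8, 5, 6]]
print(entropy_weights(X))  # the constant second attribute gets zero weight
```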

274 citations


Journal ArticleDOI
TL;DR: Two methods to determine the optimal weights of attributes are given, and three new aggregation operators are introduced, which treat the membership and non-membership information fairly, to aggregate intuitionistic fuzzy information.

267 citations


Journal ArticleDOI
TL;DR: In this paper, a measure of holographic information based on a causal wedge construction is proposed: the area of an extremal surface lying on the boundary of the bulk causal wedge quantifies the amount of information contained in a given spatial region of the field theory.
Abstract: We propose a measure of holographic information based on a causal wedge construction. The motivation behind this comes from an attempt to understand how boundary field theories can holographically reconstruct spacetime. We argue that, given knowledge of the reduced density matrix in a spatial region of the boundary, one should be able to reconstruct at least the corresponding bulk causal wedge. In an attempt to quantify the ‘amount of information’ contained in a given spatial region in field theory, we consider a particular bulk surface (specifically a co-dimension two surface in the bulk spacetime which is an extremal surface on the boundary of the bulk causal wedge), and propose that the area of this surface, measured in Planck units, naturally quantifies the information content. We therefore call this area the causal holographic information. We also contrast our ideas with earlier studies of holographic entanglement entropy. In particular, we establish that the causal holographic information, whilst not being a von Neumann entropy, curiously enough agrees with the entanglement entropy in all cases where one has a microscopic understanding of entanglement entropy.

244 citations


Journal ArticleDOI
TL;DR: In this paper, the authors use information entropy as an objective measure to compare and evaluate model and observational results, and apply it to the visualization of uncertainties in geological models, here understood as structural representations of the subsurface.
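A hedged sketch of how information entropy can serve as a per-cell uncertainty measure for an ensemble of geological model realizations is given below; the array layout, unit coding, and function name are our own assumptions rather than the authors' implementation.

```python
import numpy as np

def cell_information_entropy(realizations, n_units):
    """Per-cell Shannon entropy of geological-unit membership across an
    ensemble of model realizations (shape: n_realizations x n_cells)."""
    realizations = np.asarray(realizations)
    n_real, n_cells = realizations.shape
    entropy = np.zeros(n_cells)
    for k in range(n_units):
        p = (realizations == k).sum(axis=0) / n_real  # probability of unit k per cell
        with np.errstate(divide="ignore", invalid="ignore"):
            entropy += np.where(p > 0, -p * np.log(p), 0.0)
    return entropy  # 0 where all realizations agree, log(n_units) at maximum uncertainty

# Three realizations of a 4-cell model with units {0, 1}:
ens = [[0, 0, 1, 1],
       [0, 1, 1, 1],
       [0, 1, 0, 1]]
print(cell_information_entropy(ens, 2))  # high entropy marks the uncertain cells
```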

215 citations


Proceedings ArticleDOI
16 Apr 2012
TL;DR: A measure of causal relationships between nodes is proposed, based on the information-theoretic notion of transfer entropy, or information transfer, which makes it possible to differentiate between weak influence over large groups and strong influence over small groups.
Abstract: Recent research has explored the increasingly important role of social media by examining the dynamics of individual and group behavior, characterizing patterns of information diffusion, and identifying influential individuals. In this paper we suggest a measure of causal relationships between nodes based on the information-theoretic notion of transfer entropy, or information transfer. This theoretically grounded measure is based on dynamic information, captures fine-grain notions of influence, and admits a natural, predictive interpretation. Networks inferred by transfer entropy can differ significantly from static friendship networks because most friendship links are not useful for predicting future dynamics. We demonstrate through analysis of synthetic and real-world data that transfer entropy reveals meaningful hidden network structures. In addition to altering our notion of who is influential, transfer entropy allows us to differentiate between weak influence over large groups and strong influence over small groups.

196 citations


Journal ArticleDOI
TL;DR: This work first introduces multivariate sample entropy (MSampEn) and evaluates it over multiple time scales to perform the multivariate multiscale entropy (MMSE) analysis, which makes it possible to assess structural complexity of multivariate physical or physiological systems, together with more degrees of freedom and enhanced rigor in the analysis.
Abstract: Multivariate physical and biological recordings are common and their simultaneous analysis is a prerequisite for the understanding of the complexity of underlying signal generating mechanisms. Traditional entropy measures are maximized for random processes and fail to quantify inherent long-range dependencies in real world data, a key feature of complex systems. The recently introduced multiscale entropy (MSE) is a univariate method capable of detecting intrinsic correlations and has been used to measure complexity of single channel physiological signals. To generalize this method for multichannel data, we first introduce multivariate sample entropy (MSampEn) and evaluate it over multiple time scales to perform the multivariate multiscale entropy (MMSE) analysis. This makes it possible to assess structural complexity of multivariate physical or physiological systems, together with more degrees of freedom and enhanced rigor in the analysis. Simulations on both multivariate synthetic data and real world postural sway analysis support the approach.
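As background for the multivariate extension, here is a minimal sketch of the univariate building blocks, sample entropy and coarse-grained multiscale entropy; the paper's MSampEn additionally forms composite delay vectors across channels, which this simplified version does not do.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) with Chebyshev distance; r defaults to 0.15 * std(x)."""
    x = np.asarray(x, float)
    if r is None:
        r = 0.15 * x.std()
    def match_count(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        total = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            total += int(np.sum(dist <= r))
        return total
    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 6), m=2):
    """Coarse-grain the series at each scale, then compute SampEn; the tolerance
    r is fixed from the original (scale-1) series, as in the usual MSE procedure."""
    x = np.asarray(x, float)
    r = 0.15 * x.std()
    values = []
    for s in scales:
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)  # non-overlapping averages
        values.append(sample_entropy(coarse, m, r))
    return values

print(multiscale_entropy(np.random.randn(3000)))  # white noise: entropy drops with scale
```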

Journal ArticleDOI
TL;DR: This article theoretically analyzes the requirements for structural representations and introduces two approaches to create such representations, which are based on the calculation of patch entropy and manifold learning, respectively.

Journal ArticleDOI
TL;DR: The use of genetic algorithms for image encryption is attempted for the first time in this paper, and the experiments clearly demonstrate a high level of resistance of the proposed method against brute-force and statistical attacks.
Abstract: The security of digital images has attracted much attention recently. In this study, a new method based on a hybrid model is proposed for image encryption. The hybrid model is composed of a genetic algorithm and a chaotic function. In the first stage of the proposed method, a number of encrypted images are constructed using the original image and the chaotic function. In the next stage, these encrypted images are used as the initial population for the genetic algorithm. In each stage of the genetic algorithm, the answer obtained from the previous iteration is optimized to produce the best-encrypted image. The best-encrypted image is defined as the image with the highest entropy and the lowest correlation coefficient among adjacent pixels. The use of genetic algorithms in image encryption is attempted here for the first time. The results of the experiments demonstrate a high level of resistance of the proposed method against brute-force and statistical attacks. The obtained entropy and correlation coefficients of the method are approximately 7.9978 and −0.0009, respectively.
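The two fitness criteria mentioned above can be computed directly; the following generic sketch (not the authors' code) measures the Shannon entropy of an 8-bit image and the correlation between horizontally adjacent pixels.

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy in bits per pixel of an 8-bit grayscale image."""
    hist = np.bincount(np.asarray(img, dtype=np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def adjacent_pixel_correlation(img):
    """Correlation coefficient of horizontally adjacent pixel pairs."""
    img = np.asarray(img, dtype=float)
    left, right = img[:, :-1].ravel(), img[:, 1:].ravel()
    return float(np.corrcoef(left, right)[0, 1])

cipher_like = np.random.randint(0, 256, size=(256, 256))  # an ideal cipher image
print(image_entropy(cipher_like))                # close to 8 bits
print(adjacent_pixel_correlation(cipher_like))   # close to 0
```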

Journal ArticleDOI
TL;DR: The active information storage is introduced, which quantifies the information storage component that is directly in use in the computation of the next state of a process, and it is demonstrated that the local entropy rate is a useful spatiotemporal filter for information transfer structure.

Journal ArticleDOI
14 Mar 2012-Entropy
TL;DR: This paper presents a taxonomy and overview of approaches to the measurement of graph and network complexity and distinguishes between deterministic and probabilistic approaches with a view to placing entropy-based probabilistic measurement in context.
Abstract: This paper presents a taxonomy and overview of approaches to the measurement of graph and network complexity. The taxonomy distinguishes between deterministic (e.g., Kolmogorov complexity) and probabilistic approaches with a view to placing entropy-based probabilistic measurement in context. Entropy-based measurement is the main focus of the paper. Relationships between the different entropy functions used to measure complexity are examined, and intrinsic (e.g., classical measures) and extrinsic (e.g., Körner entropy) variants of entropy-based models are discussed in some detail.
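As a concrete example of the probabilistic, entropy-based family discussed here (and only one of the many measures the paper surveys), the Shannon entropy of a graph's degree distribution can be computed as follows.

```python
import math
from collections import Counter

def degree_distribution_entropy(adjacency):
    """Shannon entropy (bits) of the degree distribution, one of the simplest
    probabilistic graph-complexity measures."""
    degrees = [sum(row) for row in adjacency]
    counts = Counter(degrees)
    n = len(degrees)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A 4-node star (heterogeneous degrees) vs. a 4-cycle (all degrees equal):
star = [[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]]
cycle = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
print(degree_distribution_entropy(star), degree_distribution_entropy(cycle))  # ~0.81 vs 0.0
```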

Journal ArticleDOI
TL;DR: This article proposes a certain time-delayed conditional mutual information, the momentary information transfer (MIT), as a lag-specific measure of association that is general, causal, reflects a well interpretable notion of coupling strength, and is practically computable.
Abstract: While it is an important problem to identify the existence of causal associations between two components of a multivariate time series, a topic addressed in Runge, Heitzig, Petoukhov, and Kurths [Phys. Rev. Lett. 108, 258701 (2012)], it is even more important to assess the strength of their association in a meaningful way. In the present article we focus on the problem of defining a meaningful coupling strength using information-theoretic measures and demonstrate the shortcomings of the well-known mutual information and transfer entropy. Instead, we propose a certain time-delayed conditional mutual information, the momentary information transfer (MIT), as a lag-specific measure of association that is general, causal, reflects a well interpretable notion of coupling strength, and is practically computable. Rooted in information theory, MIT is general in that it does not assume a certain model class underlying the process that generates the time series. As discussed in a previous paper [Runge, Heitzig, Petoukhov, and Kurths, Phys. Rev. Lett. 108, 258701 (2012)], the general framework of graphical models makes MIT causal in that it gives a nonzero value only to lagged components that are not independent conditional on the remaining process. Further, graphical models admit a low-dimensional formulation of conditions, which is important for a reliable estimation of conditional mutual information and, thus, makes MIT practically computable. MIT is based on the fundamental concept of source entropy, which we utilize to yield a notion of coupling strength that is, compared to mutual information and transfer entropy, well interpretable in that, for many cases, it solely depends on the interaction of the two components at a certain lag. In particular, MIT is, thus, in many cases able to exclude the misleading influence of autodependency within a process in an information-theoretic way. We formalize and prove this idea analytically and numerically for a general class of nonlinear stochastic processes and illustrate the potential of MIT on climatological data.

Journal ArticleDOI
TL;DR: A new measure of feature quality, called rank mutual information (RMI), is introduced, which combines the robustness of Shannon's entropy with the ability of dominance rough sets to extract ordinal structures from monotonic data sets and can yield monotonically consistent decision trees.
Abstract: In many decision making tasks, values of features and decision are ordinal. Moreover, there is a monotonic constraint that objects with better feature values should not be assigned to a worse decision class. Such problems are called ordinal classification with monotonicity constraints. Some learning algorithms have been developed to handle this kind of task in recent years. However, experiments show that these algorithms are sensitive to noisy samples and do not work well in real-world applications. In this work, we introduce a new measure of feature quality, called rank mutual information (RMI), which combines the robustness of Shannon's entropy with the ability of dominance rough sets to extract ordinal structures from monotonic data sets. Then, we design a decision tree algorithm (REMT) based on rank mutual information. The theoretical and experimental analysis shows that the proposed algorithm can yield monotonically consistent decision trees if the training samples are monotonically consistent. Its performance remains good when the data are contaminated with noise.

Journal ArticleDOI
01 Nov 2012
TL;DR: An information-theoretic method is presented to measure the similarity between a given set of observed, real-world data and a visual simulation technique for aggregate crowd motions of a complex system consisting of many individual agents.
Abstract: We present an information-theoretic method to measure the similarity between a given set of observed, real-world data and a visual simulation technique for aggregate crowd motions of a complex system consisting of many individual agents. This metric uses a two-step process to quantify a simulator's ability to reproduce the collective behaviors of the whole system, as observed in the recorded real-world data. First, Bayesian inference is used to estimate the simulation states which best correspond to the observed data; then a maximum likelihood estimator is used to approximate the prediction errors. This process is iterated using the EM algorithm to produce a robust, statistical estimate of the magnitude of the prediction error as measured by its entropy (smaller is better). This metric serves as a simulator-to-data similarity measurement. We evaluated the metric in terms of robustness to sensor noise, consistency across different datasets and simulation methods, and correlation to perceptual metrics.

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the rough decision entropy measure and the interval approximation roughness measure are effective and valid for evaluating the uncertainty of interval-valued decision systems.
Abstract: Uncertainty measures can supply new points of view for analyzing data and help us to disclose the substantive characteristics of data sets. Some uncertainty measures for single-valued information systems or single-valued decision systems have been developed. However, there are few studies on uncertainty measurement for interval-valued information systems or interval-valued decision systems. This paper addresses the uncertainty measurement problem in interval-valued decision systems. An extended conditional entropy is proposed in interval-valued decision systems based on the possible degree between interval values. Consequently, a concept called rough decision entropy is introduced to evaluate the uncertainty of an interval-valued decision system. Besides, the original approximation accuracy measure proposed by Pawlak is extended to deal with interval-valued decision systems and the concept of interval approximation roughness is presented. Experimental results demonstrate that the rough decision entropy measure and the interval approximation roughness measure are effective and valid for evaluating the uncertainty of interval-valued decision systems. Experimental results also indicate that the rough decision entropy measure outperforms the interval approximation roughness measure.

Journal ArticleDOI
TL;DR: This paper derives expressions for the entropy of stochastic blockmodel ensembles for several ensemble variants, including the traditional model as well as the newly introduced degree-corrected version, which imposes a degree sequence on the vertices in addition to the block structure.
Abstract: Stochastic blockmodels are generative network models where the vertices are separated into discrete groups, and the probability of an edge existing between two vertices is determined solely by their group membership. In this paper, we derive expressions for the entropy of stochastic blockmodel ensembles. We consider several ensemble variants, including the traditional model as well as the newly introduced degree-corrected version [Karrer et al., Phys. Rev. E 83, 016107 (2011)], which imposes a degree sequence on the vertices, in addition to the block structure. The imposed degree sequence is implemented both as "soft" constraints, where only the expected degrees are imposed, and as "hard" constraints, where they are required to be the same on all samples of the ensemble. We also consider generalizations to multigraphs and directed graphs. We illustrate one of many applications of this measure by directly deriving a log-likelihood function from the entropy expression, and using it to infer latent block structure in observed data. Due to the general nature of the ensembles considered, the method works well for ensembles with intrinsic degree correlations (i.e., with entropic origin) as well as extrinsic degree correlations, which go beyond the block structure.
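For the traditional (non-degree-corrected) variant with independent Bernoulli edges, the ensemble entropy reduces to a sum of binary entropies over block pairs; the sketch below implements only that case, neglects self-pair corrections on the diagonal, and uses our own function names.

```python
import numpy as np

def sbm_ensemble_entropy(group_sizes, edge_probs):
    """Entropy (nats) of a traditional undirected stochastic blockmodel:
    S ~ (1/2) * sum_{r,s} n_r * n_s * H_b(p_rs), with H_b the binary entropy.
    Diagonal self-pair corrections are neglected for simplicity."""
    n = np.asarray(group_sizes, float)
    p = np.asarray(edge_probs, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        hb = np.where((p > 0) & (p < 1),
                      -p * np.log(p) - (1 - p) * np.log(1 - p), 0.0)
    return 0.5 * float((np.outer(n, n) * hb).sum())

# Two groups of 500 nodes with assortative structure:
print(sbm_ensemble_entropy([500, 500], [[0.02, 0.002], [0.002, 0.02]]))
```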

Journal ArticleDOI
TL;DR: In this paper, a new notion of Ricci curvature that applies to Markov chains on discrete spaces is introduced, in which the role of the Wasserstein metric is taken over by a different metric having the property that continuous-time Markov chains are gradient flows of the entropy.
Abstract: We study a new notion of Ricci curvature that applies to Markov chains on discrete spaces. This notion relies on geodesic convexity of the entropy and is analogous to the one introduced by Lott, Sturm, and Villani for geodesic measure spaces. In order to apply to the discrete setting, the role of the Wasserstein metric is taken over by a different metric, having the property that continuous time Markov chains are gradient flows of the entropy. Using this notion of Ricci curvature we prove discrete analogues of fundamental results by Bakry–Emery and Otto–Villani. Further, we show that Ricci curvature bounds are preserved under tensorisation. As a special case we obtain the sharp Ricci curvature lower bound for the discrete hypercube.

Journal ArticleDOI
TL;DR: A hybrid intelligent algorithm is designed to obtain the optimal portfolio strategy by taking into account four criteria, viz. return, risk, transaction cost, and the diversification degree of the portfolio.

Journal Article
TL;DR: An automatic system for the extraction of normal and abnormal features in color retinal images could assist ophthalmologists in detecting the signs of diabetic retinopathy at an early stage, enabling a better treatment plan and improving vision-related quality of life.
Abstract: Diabetic retinopathy is one of the serious eye diseases that can cause blindness and vision loss. Diabetes mellitus, a metabolic disorder, has become one of the rapidly increasing health threats both in India and worldwide. Diabetic retinopathy is the complication of diabetes that affects the retina of the eye. A patient with the disease has to undergo periodic screening of the eyes. For diagnosis, ophthalmologists use color retinal images of a patient acquired with a digital fundus camera. The present study is aimed at developing an automatic system for the extraction of normal and abnormal features in color retinal images. Prolonged diabetes causes micro-vascular leakage and micro-vascular blockage within the retinal blood vessels. A filter-based approach with morphological filters is used to segment the vessels. The morphological filters are tuned to match the part of the vessel to be extracted in a green-channel image. To classify pixels into vessels and non-vessels, local thresholding based on the gray-level co-occurrence matrix is applied. The performance of the method is evaluated on two publicly available retinal databases with hand-labeled ground truths. On the DRIVE database, the method achieves a sensitivity of 86.39% with a specificity of 91.2%, while on the STARE database it achieves a sensitivity of 92.15% and a specificity of 84.46%. The system could assist ophthalmologists in detecting the signs of diabetic retinopathy at an early stage, enabling a better treatment plan and improving vision-related quality of life. Keywords: Vessel segmentation, Morphological filter, Image Processing, Diabetic Retinopathy.

Journal ArticleDOI
TL;DR: This work shows how to systematically construct quantum models that break this classical bound, and that the system of minimal entropy that simulates such processes must necessarily feature quantum dynamics.
Abstract: Mathematical models are an essential component of quantitative science. They generate predictions about the future based on information available in the present. In the spirit of "simpler is better", should two models make identical predictions, the one that requires less input is preferred. Yet, for almost all stochastic processes, even the provably optimal classical models waste information: the amount of input information they demand exceeds the amount of predictive information they output. Here we show how to systematically construct quantum models that break this classical bound, and that the system of minimal entropy that simulates such processes must necessarily feature quantum dynamics. This indicates that many observed phenomena could be significantly simpler than classically possible should quantum effects be involved.

Journal ArticleDOI
TL;DR: Three types of definitions of lower and upper approximations and corresponding uncertainty measurement concepts, including accuracy, roughness, and approximation accuracy, are investigated, and theoretical analysis indicates that two of the three types can be used to evaluate the uncertainty in incomplete information systems.

Journal ArticleDOI
TL;DR: An improvement on the region-scalable fitting (RSF) model proposed by Li et al. is presented, which allows for more flexible initialization and greater robustness to noise compared to the original RSF model.

Book
30 Jun 2012
TL;DR: The National Institute of Standards and Technology Special Publication 800-90A: Recommendation for Random Number Generation Using Deterministic Random Bit Generators specifies techniques for the generation of random bits that may then be used directly or converted to random numbers when random values are required by applications using cryptography.
Abstract: The National Institute of Standards and Technology Special Publication 800-90A: Recommendation for Random Number Generation Using Deterministic Random Bit Generators specifies techniques for the generation of random bits that may then be used directly or converted to random numbers when random values are required by applications using cryptography. There are two fundamentally different strategies for generating random bits. One strategy is to produce bits non-deterministically, where every bit of output is based on a physical process that is unpredictable; this class of random bit generators (RBGs) is commonly known as non-deterministic random bit generators (NRBGs). The other strategy is to compute bits deterministically using an algorithm; this class of RBGs is known as deterministic random bit generators (DRBGs). A DRBG is based on a DRBG mechanism as specified in this Recommendation and includes a source of entropy input. A DRBG mechanism uses an algorithm (i.e., a DRBG algorithm) that produces a sequence of bits from an initial value determined by a seed, which is in turn derived from the entropy input. Once the seed is provided and the initial value is determined, the DRBG is said to be instantiated and may be used to produce output. Because of the deterministic nature of the process, a DRBG is said to produce pseudorandom bits, rather than random bits. The seed used to instantiate the DRBG must contain sufficient entropy to provide an assurance of randomness. If the seed is kept secret, and the algorithm is well designed, the bits output by the DRBG will be unpredictable, up to the instantiated security strength of the DRBG. The security provided by an RBG that uses a DRBG mechanism is a system implementation issue; both the DRBG mechanism and its source of entropy input must be considered when determining whether the RBG is appropriate for use by consuming applications.
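To illustrate the DRBG-mechanism structure (seeded instantiation, then deterministic generation), here is a stripped-down sketch of the HMAC_DRBG mechanism from SP 800-90A using SHA-256; reseed counters, security-strength checks, and prediction resistance are omitted, so this is for illustration only, not for production use.

```python
import hmac
import hashlib
import os

class HmacDrbgSketch:
    """Minimal HMAC_DRBG (SP 800-90A, Section 10.1.2) sketch with SHA-256.
    Reseeding, health tests, and request-size limits are omitted."""

    def __init__(self, entropy_input, nonce=b"", personalization=b""):
        self.K = b"\x00" * 32
        self.V = b"\x01" * 32
        self._update(entropy_input + nonce + personalization)

    def _hmac(self, data):
        return hmac.new(self.K, data, hashlib.sha256).digest()

    def _update(self, provided=b""):
        # HMAC_DRBG_Update: mix provided data into the internal state (K, V).
        self.K = self._hmac(self.V + b"\x00" + provided)
        self.V = self._hmac(self.V)
        if provided:
            self.K = self._hmac(self.V + b"\x01" + provided)
            self.V = self._hmac(self.V)

    def generate(self, n_bytes):
        # Generate pseudorandom bytes by iterating V = HMAC(K, V).
        out = b""
        while len(out) < n_bytes:
            self.V = self._hmac(self.V)
            out += self.V
        self._update()
        return out[:n_bytes]

drbg = HmacDrbgSketch(os.urandom(32), os.urandom(16))
print(drbg.generate(32).hex())
```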

Journal ArticleDOI
TL;DR: Compared with several representative reducts, the proposed reduction method for incomplete decision systems provides a mathematical quantitative measure of knowledge uncertainty, is efficient, and outperforms other available approaches for feature selection from incomplete and complete data sets.
Abstract: Feature selection in large, incomplete decision systems is a challenging problem. To avoid exponential computation in exhaustive feature selection methods, many heuristic feature selection algorithms have been presented in rough set theory. However, these algorithms are still time-consuming to compute. It is therefore necessary to investigate effective and efficient heuristic algorithms. In this paper, rough entropy-based uncertainty measures are introduced to evaluate the roughness and accuracy of knowledge. Moreover, some of their properties are derived and the relationships among these measures are established. Furthermore, compared with several representative reducts, the proposed reduction method in incomplete decision systems can provide a mathematical quantitative measure of knowledge uncertainty. Then, a heuristic algorithm with low computational complexity is constructed to improve computational efficiency of feature selection in incomplete decision systems. Experimental results show that the proposed method is indeed efficient, and outperforms other available approaches for feature selection from incomplete and complete data sets.

Journal ArticleDOI
TL;DR: This paper investigates a channel model describing optical communication based on intensity modulation and derives both the high-power and low-power asymptotic capacities under simultaneous peak-power and average-power constraints.
Abstract: This paper investigates a channel model describing optical communication based on intensity modulation. It is assumed that the main distortion is caused by additive Gaussian noise, however with a noise variance depending on the current signal strength. Both the high-power and low-power asymptotic capacities under simultaneous peak-power and average-power constraints are derived. The high-power results are based on a new firm (nonasymptotic) lower bound and a new asymptotic upper bound. The upper bound relies on a dual expression for channel capacity and the notion of capacity-achieving input distributions that escape to infinity. The lower bound is based on a new lower bound on the differential entropy of the channel output in terms of the differential entropy of the channel input. The low-power results make use of a theorem by Prelov and van der Meulen.

Journal ArticleDOI
TL;DR: A simplified form for the von Neumann entropy of a graph that can be computed in terms of node degree statistics is developed, and the resulting complexity measure is compared with Estrada's heterogeneity index, which measures the heterogeneity of node degree across a graph.
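For reference, the exact quantity being approximated can be computed from the Laplacian spectrum as below; this sketch uses the common density-matrix convention ρ = L / tr(L) and does not reproduce the paper's degree-statistics approximation.

```python
import numpy as np

def von_neumann_graph_entropy(adjacency):
    """Exact von Neumann entropy of an undirected graph: the eigenvalue entropy
    of the density matrix rho = L / trace(L), with L the combinatorial Laplacian."""
    A = np.asarray(adjacency, float)
    L = np.diag(A.sum(axis=1)) - A
    eig = np.linalg.eigvalsh(L / np.trace(L))
    eig = eig[eig > 1e-12]                    # drop (near-)zero eigenvalues
    return float(-(eig * np.log(eig)).sum())

path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]      # path graph P3
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]  # complete graph K3
print(von_neumann_graph_entropy(path), von_neumann_graph_entropy(triangle))
```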