Showing papers on "Entropy (information theory) published in 1991"


Book
01 Jan 1991
TL;DR: The authors develop entropy, relative entropy, and mutual information and apply them to data compression, channel capacity, and related problems in statistics, gambling, and portfolio theory.
Abstract: Preface to the Second Edition. Preface to the First Edition. Acknowledgments for the Second Edition. Acknowledgments for the First Edition. 1. Introduction and Preview. 1.1 Preview of the Book. 2. Entropy, Relative Entropy, and Mutual Information. 2.1 Entropy. 2.2 Joint Entropy and Conditional Entropy. 2.3 Relative Entropy and Mutual Information. 2.4 Relationship Between Entropy and Mutual Information. 2.5 Chain Rules for Entropy, Relative Entropy, and Mutual Information. 2.6 Jensen's Inequality and Its Consequences. 2.7 Log Sum Inequality and Its Applications. 2.8 Data-Processing Inequality. 2.9 Sufficient Statistics. 2.10 Fano's Inequality. Summary. Problems. Historical Notes. 3. Asymptotic Equipartition Property. 3.1 Asymptotic Equipartition Property Theorem. 3.2 Consequences of the AEP: Data Compression. 3.3 High-Probability Sets and the Typical Set. Summary. Problems. Historical Notes. 4. Entropy Rates of a Stochastic Process. 4.1 Markov Chains. 4.2 Entropy Rate. 4.3 Example: Entropy Rate of a Random Walk on a Weighted Graph. 4.4 Second Law of Thermodynamics. 4.5 Functions of Markov Chains. Summary. Problems. Historical Notes. 5. Data Compression. 5.1 Examples of Codes. 5.2 Kraft Inequality. 5.3 Optimal Codes. 5.4 Bounds on the Optimal Code Length. 5.5 Kraft Inequality for Uniquely Decodable Codes. 5.6 Huffman Codes. 5.7 Some Comments on Huffman Codes. 5.8 Optimality of Huffman Codes. 5.9 Shannon-Fano-Elias Coding. 5.10 Competitive Optimality of the Shannon Code. 5.11 Generation of Discrete Distributions from Fair Coins. Summary. Problems. Historical Notes. 6. Gambling and Data Compression. 6.1 The Horse Race. 6.2 Gambling and Side Information. 6.3 Dependent Horse Races and Entropy Rate. 6.4 The Entropy of English. 6.5 Data Compression and Gambling. 6.6 Gambling Estimate of the Entropy of English. Summary. Problems. Historical Notes. 7. Channel Capacity. 7.1 Examples of Channel Capacity. 7.2 Symmetric Channels. 7.3 Properties of Channel Capacity. 7.4 Preview of the Channel Coding Theorem. 7.5 Definitions. 7.6 Jointly Typical Sequences. 7.7 Channel Coding Theorem. 7.8 Zero-Error Codes. 7.9 Fano's Inequality and the Converse to the Coding Theorem. 7.10 Equality in the Converse to the Channel Coding Theorem. 7.11 Hamming Codes. 7.12 Feedback Capacity. 7.13 Source-Channel Separation Theorem. Summary. Problems. Historical Notes. 8. Differential Entropy. 8.1 Definitions. 8.2 AEP for Continuous Random Variables. 8.3 Relation of Differential Entropy to Discrete Entropy. 8.4 Joint and Conditional Differential Entropy. 8.5 Relative Entropy and Mutual Information. 8.6 Properties of Differential Entropy, Relative Entropy, and Mutual Information. Summary. Problems. Historical Notes. 9. Gaussian Channel. 9.1 Gaussian Channel: Definitions. 9.2 Converse to the Coding Theorem for Gaussian Channels. 9.3 Bandlimited Channels. 9.4 Parallel Gaussian Channels. 9.5 Channels with Colored Gaussian Noise. 9.6 Gaussian Channels with Feedback. Summary. Problems. Historical Notes. 10. Rate Distortion Theory. 10.1 Quantization. 10.2 Definitions. 10.3 Calculation of the Rate Distortion Function. 10.4 Converse to the Rate Distortion Theorem. 10.5 Achievability of the Rate Distortion Function. 10.6 Strongly Typical Sequences and Rate Distortion. 10.7 Characterization of the Rate Distortion Function. 10.8 Computation of Channel Capacity and the Rate Distortion Function. Summary. Problems. Historical Notes. 11. Information Theory and Statistics. 11.1 Method of Types. 11.2 Law of Large Numbers. 
11.3 Universal Source Coding. 11.4 Large Deviation Theory. 11.5 Examples of Sanov's Theorem. 11.6 Conditional Limit Theorem. 11.7 Hypothesis Testing. 11.8 Chernoff-Stein Lemma. 11.9 Chernoff Information. 11.10 Fisher Information and the Cramér-Rao Inequality. Summary. Problems. Historical Notes. 12. Maximum Entropy. 12.1 Maximum Entropy Distributions. 12.2 Examples. 12.3 Anomalous Maximum Entropy Problem. 12.4 Spectrum Estimation. 12.5 Entropy Rates of a Gaussian Process. 12.6 Burg's Maximum Entropy Theorem. Summary. Problems. Historical Notes. 13. Universal Source Coding. 13.1 Universal Codes and Channel Capacity. 13.2 Universal Coding for Binary Sequences. 13.3 Arithmetic Coding. 13.4 Lempel-Ziv Coding. 13.5 Optimality of Lempel-Ziv Algorithms. Summary. Problems. Historical Notes. 14. Kolmogorov Complexity. 14.1 Models of Computation. 14.2 Kolmogorov Complexity: Definitions and Examples. 14.3 Kolmogorov Complexity and Entropy. 14.4 Kolmogorov Complexity of Integers. 14.5 Algorithmically Random and Incompressible Sequences. 14.6 Universal Probability. 14.7 Kolmogorov Complexity. 14.9 Universal Gambling. 14.10 Occam's Razor. 14.11 Kolmogorov Complexity and Universal Probability. 14.12 Kolmogorov Sufficient Statistic. 14.13 Minimum Description Length Principle. Summary. Problems. Historical Notes. 15. Network Information Theory. 15.1 Gaussian Multiple-User Channels. 15.2 Jointly Typical Sequences. 15.3 Multiple-Access Channel. 15.4 Encoding of Correlated Sources. 15.5 Duality Between Slepian-Wolf Encoding and Multiple-Access Channels. 15.6 Broadcast Channel. 15.7 Relay Channel. 15.8 Source Coding with Side Information. 15.9 Rate Distortion with Side Information. 15.10 General Multiterminal Networks. Summary. Problems. Historical Notes. 16. Information Theory and Portfolio Theory. 16.1 The Stock Market: Some Definitions. 16.2 Kuhn-Tucker Characterization of the Log-Optimal Portfolio. 16.3 Asymptotic Optimality of the Log-Optimal Portfolio. 16.4 Side Information and the Growth Rate. 16.5 Investment in Stationary Markets. 16.6 Competitive Optimality of the Log-Optimal Portfolio. 16.7 Universal Portfolios. 16.8 Shannon-McMillan-Breiman Theorem (General AEP). Summary. Problems. Historical Notes. 17. Inequalities in Information Theory. 17.1 Basic Inequalities of Information Theory. 17.2 Differential Entropy. 17.3 Bounds on Entropy and Relative Entropy. 17.4 Inequalities for Types. 17.5 Combinatorial Bounds on Entropy. 17.6 Entropy Rates of Subsets. 17.7 Entropy and Fisher Information. 17.8 Entropy Power Inequality and Brunn-Minkowski Inequality. 17.9 Inequalities for Determinants. 17.10 Inequalities for Ratios of Determinants. Summary. Problems. Historical Notes. Bibliography. List of Symbols. Index.

45,034 citations


Journal ArticleDOI
TL;DR: Analysis of a recently developed family of formulas and statistics, approximate entropy (ApEn), suggests that ApEn can classify complex systems, given at least 1000 data values in diverse settings that include both deterministic chaotic and stochastic processes.
Abstract: Techniques to determine changing system complexity from data are evaluated. Convergence of a frequently used correlation dimension algorithm to a finite value does not necessarily imply an underlying deterministic model or chaos. Analysis of a recently developed family of formulas and statistics, approximate entropy (ApEn), suggests that ApEn can classify complex systems, given at least 1000 data values in diverse settings that include both deterministic chaotic and stochastic processes. The capability to discern changing complexity from such a relatively small amount of data holds promise for applications of ApEn in a variety of contexts.
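For readers who want to experiment with the statistic, the sketch below computes ApEn for a one-dimensional series following the usual template-matching definition, with embedding dimension m and tolerance r as the two key inputs. The function name, the 20%-of-SD default for r, and the test signals are illustrative choices, not taken from the paper.

```python
import numpy as np

def apen(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D series (illustrative sketch).

    ApEn = Phi(m) - Phi(m+1), where Phi(m) is the average log fraction of
    m-length templates lying within tolerance r (Chebyshev distance) of each
    template; self-matches are included, as in the standard definition.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * x.std()          # common choice: 20% of the series SD

    def phi(m):
        # all overlapping templates of length m
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of templates
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # fraction of templates within r of each template
        c = (dist <= r).mean(axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# A regular signal scores lower than an irregular one.
t = np.arange(1000)
print(apen(np.sin(0.1 * t)))                                   # low ApEn: regular
print(apen(np.random.default_rng(0).standard_normal(1000)))    # higher ApEn: irregular
```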

5,055 citations


Journal ArticleDOI
J. Lin1
TL;DR: A novel class of information-theoretic divergence measures based on the Shannon entropy is introduced; these measures do not require absolute continuity of the probability distributions involved, and their relationships to the variational distance and to the probability of misclassification error are established in terms of bounds.
Abstract: A novel class of information-theoretic divergence measures based on the Shannon entropy is introduced. Unlike the well-known Kullback divergences, the new measures do not require the condition of absolute continuity to be satisfied by the probability distributions involved. More importantly, their close relationship with the variational distance and the probability of misclassification error are established in terms of bounds. These bounds are crucial in many applications of divergence measures. The measures are also well characterized by the properties of nonnegativity, finiteness, semiboundedness, and boundedness. >
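One member of this family is the Jensen-Shannon divergence introduced in this line of work. The sketch below computes it purely from Shannon entropies, which makes clear why no absolute-continuity condition is needed; function names and the example distributions are illustrative.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in bits of a discrete distribution (0 log 0 := 0)."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log2(p[nz]))

def jensen_shannon(p, q):
    """Jensen-Shannon divergence, one member of this family of measures.

    JS(P, Q) = H((P+Q)/2) - (H(P) + H(Q))/2.  Unlike the Kullback divergence,
    it stays finite and bounded even when P and Q have different supports.
    """
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    return shannon_entropy(m) - 0.5 * (shannon_entropy(p) + shannon_entropy(q))

# P puts zero mass where Q does not: KL would be infinite, JS is not.
print(jensen_shannon([1.0, 0.0], [0.5, 0.5]))   # ~0.311 bits, bounded by 1 bit
```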

4,113 citations


01 Jan 1991
TL;DR: In this article, a new statistic called approximate entropy (ApEn) was developed to quantify the amount of regularity in data, which has potential application throughout medicine, notably in electrocardiogram and related heart rate data analyses and in the analysis of endocrine hormone release pulsatility.
Abstract: A new statistic has been developed to quantify the amount of regularity in data. This statistic, ApEn (approximate entropy), appears to have potential application throughout medicine, notably in electrocardiogram and related heart rate data analyses and in the analysis of endocrine hormone release pulsatility. The focus of this article is ApEn. We commence with a simple example of what we are trying to discern. We then discuss exact regularity statistics and practical difficulties of using them in data analysis. The mathematic formula development for ApEn concludes the Solution section. We next discuss the two key input requirements, followed by an account of a pilot study successfully applying ApEn to neonatal heart rate analysis. We conclude with the important topic of ApEn as a relative (not absolute) measure, potential applications, and some caveats about appropriate usage of ApEn. Appendix A provides example ApEn and entropy computations to develop intuition about these measures. Appendix B contains a Fortran program for computing ApEn. This article can be read from at least three viewpoints. The practitioner who wishes to use a "black box" to measure regularity should concentrate on the exact formula, choices for the two input variables, potential applications, and caveats about appropriate usage. The physician who wishes to apply ApEn to heart rate analysis should particularly note the pilot study discussion. The more mathematically inclined reader will benefit from discussions of the relative (comparative) property of ApEn and from Appendix A.

508 citations


Journal ArticleDOI
01 Sep 1991
TL;DR: Four algorithms for object extraction are developed, one of which uses a Poisson distribution-based model of an ideal image, and a concept of positional entropy, which gives information regarding the location of an object in a scene, is introduced.
Abstract: Shannon's definition of entropy is critically examined and a new definition of classical entropy based on the exponential behavior of information gain is proposed along with its justification. The concept is extended to defining the global, local, and conditional entropy of a gray-level image. Based on these definitions, four algorithms for object extraction are developed. One of these algorithms uses a Poisson distribution-based model of an ideal image. A concept of positional entropy, giving information regarding the location of an object in a scene, is introduced.
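The paper's own exponential-entropy algorithms are not fully specified in this abstract, so the sketch below shows a generic entropic thresholding rule in the same spirit: pick the gray level that maximizes the sum of object and background histogram entropies. The function name, bin count, and synthetic test image are assumptions for illustration only, not the paper's exact method.

```python
import numpy as np

def entropy_threshold(image, levels=256):
    """Pick a gray-level threshold by maximizing the sum of the object and
    background entropies of the normalized histogram (a generic entropic
    thresholding sketch, not the paper's algorithm)."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    best_t, best_score = 0, -np.inf
    for t in range(1, levels - 1):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        p0, p1 = p[:t] / w0, p[t:] / w1          # class-conditional distributions
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_score:
            best_score, best_t = h0 + h1, t
    return best_t

# Synthetic bimodal gray-level data: the chosen threshold lands between the modes.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 2000),
                      rng.normal(180, 10, 2000)]).clip(0, 255).astype(int)
print(entropy_threshold(img))
```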

306 citations


Journal ArticleDOI
TL;DR: Analogous convergence results for the relative entropy are shown to hold in general, for any class of log-density functions and sequence of finite-dimensional linear spaces having L2 and L∞ approximation properties.
Abstract: 1. Introduction. Consider the estimation of a probability density function p(x) defined on a bounded interval. We approximate the logarithm of the density by a basis function expansion consisting of polynomials, splines or trigonometric series. The expansion yields a regular exponential family within which we estimate the density by the method of maximum likelihood. This method of density estimation arises by application of the principle of maximum entropy or minimum relative entropy subject to empirical constraints. We show that if the logarithm of the density has r square-integrable derivatives, ∫ |D^r log p|^2 < ∞, then the sequence of density estimators p̂_n converges to p in the sense of relative entropy (Kullback-Leibler distance ∫ p log(p/p̂_n)) at rate O_P(1/m^(2r) + m/n) as m → ∞ and m^2/n → 0 in the spline and trigonometric cases and m^3/n → 0 in the polynomial case, where m is the dimension of the family and n is the sample size. Boundary conditions are assumed for the density in the trigonometric case. This convergence rate specializes to O_P(n^(-2r/(2r+1))) by setting m = n^(1/(2r+1)) when the log-density is known to have degree of smoothness at least r. Analogous convergence results for the relative entropy are shown to hold in general, for any class of log-density functions and sequence of finite-dimensional linear spaces having L2 and L∞ approximation properties. The approximation of log-densities using polynomials has previously been considered by Neyman (1937) to define alternatives for goodness-of-fit tests, by Good (1963) as an application of the method of maximum entropy or minimum relative entropy, by Crain (1974, 1976a, b, 1977) who demonstrates existence and consistency of the maximum likelihood estimator and by Mead and
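A minimal numerical sketch of the construction described above, under illustrative assumptions (monomial basis on [0, 1], degree m = 4, a gradient-free optimizer, grid approximation of the normalizing integral): the log-density is expanded in a polynomial basis and the resulting exponential family is fitted by maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize

def fit_log_density(x, m=4, grid=512):
    """Maximum-likelihood fit of an exponential family whose log-density is a
    degree-m polynomial on [0, 1] -- a sketch of the construction studied in
    the paper; basis, degree, and grid size are illustrative choices.

    p_theta(z) = exp(theta_1 z + ... + theta_m z^m - psi(theta)), with the
    log-partition function psi approximated on a fixed midpoint grid.
    """
    x = np.asarray(x, dtype=float)
    t = (np.arange(grid) + 0.5) / grid                        # midpoint grid on [0, 1]
    basis_grid = np.vander(t, m + 1, increasing=True)[:, 1:]  # columns t, t^2, ..., t^m
    basis_data = np.vander(x, m + 1, increasing=True)[:, 1:]
    mean_suff = basis_data.mean(axis=0)                       # empirical sufficient statistics

    def psi(theta):
        return np.log(np.mean(np.exp(basis_grid @ theta)))    # log normalizing constant

    def neg_loglik(theta):
        return psi(theta) - mean_suff @ theta                 # average negative log-likelihood

    theta = minimize(neg_loglik, np.zeros(m)).x
    log_norm = psi(theta)

    def density(z):
        phi = np.vander(np.atleast_1d(np.asarray(z, float)), m + 1, increasing=True)[:, 1:]
        return np.exp(phi @ theta - log_norm)
    return density

# Example: fit samples from a Beta(2, 5) density and evaluate the estimate.
rng = np.random.default_rng(1)
p_hat = fit_log_density(rng.beta(2.0, 5.0, size=500), m=4)
print(p_hat([0.1, 0.3, 0.7]))
```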

244 citations


Journal ArticleDOI
TL;DR: In this paper, a new class of tests for nonparametric hypotheses, with special reference to the problem of testing for independence in time series in the presence of a non-parametric marginal distribution under the null, is proposed.
Abstract: form the test statistic. In order to obtain a normal null limiting distribution, a form of weighting is employed. The test is also shown to be consistent against a class of alternatives. The exposition focusses on testing for serial independence in time series, with a small application to testing the random walk hypothesis for exchange rate series, and tests of some other hypotheses of econometric interest are briefly described. This paper proposes a new class of tests for nonparametric hypotheses, with special reference to the problem of testing for independence in time series in the presence of a nonparametric marginal distribution under the null. The critical region of the tests is the upper tail of the distribution of an estimate of the Kullback-Leibler information criterion, whose desirable properties make it convenient to describe in a reasonably comprehensible fashion some of the consistent directions. A test is consistent against one direction of departure from the null hypothesis if the probability of rejection approaches one no matter how small the departure in that direction. For continuous distributions, two random variables Y and Z are independent if and only if their joint probability density f(y, z) equals the product of the marginal densities g(y) and h(z) for all y, z; our test for independence is consistent wherever f(y, z) deviates from g(y)h(z) by even small amounts on a set of arbitrarily small non-zero measure, providing some regularity conditions hold. It will be helpful to briefly introduce the Kullback-Leibler information criterion and describe some of its properties. Following definitions of the entropy of a distribution by Shannon (1948), Wiener (1948), a measure of information for discriminating between two hypotheses was proposed by Kullback and Leibler (1951). Let X be a p-vector-valued random variable with absolutely continuous distribution function. Consider the hypotheses HI: pdf(X) = f(x) H2: pdf(X) = g(x). The mean information for discrimination between HI and H2 per observation from f is
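The quantity underlying the test is the Kullback-Leibler divergence between the joint distribution of (y_t, y_{t-k}) and the product of its marginals. The sketch below estimates it with a crude histogram plug-in purely to build intuition; the paper itself uses smoothed density estimates and a weighting scheme to obtain a normal null limiting distribution, which this sketch does not attempt.

```python
import numpy as np

def serial_independence_kl(y, lag=1, bins=10):
    """Histogram plug-in estimate of KL( joint of (y_t, y_{t-lag}) || product of
    marginals ).  Zero under serial independence (up to estimation bias),
    positive when the lagged pair is dependent."""
    y = np.asarray(y, dtype=float)
    a, b = y[lag:], y[:-lag]
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    prod = px * py
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / prod[nz])))

rng = np.random.default_rng(0)
print(serial_independence_kl(rng.standard_normal(5000)))   # near 0 under independence
ar = np.zeros(5000)
for t in range(1, 5000):
    ar[t] = 0.8 * ar[t - 1] + rng.standard_normal()
print(serial_independence_kl(ar))                          # clearly positive for an AR(1) series
```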

228 citations


Journal ArticleDOI
TL;DR: A parallel structured VLC decoder that decodes each codeword in one clock cycle regardless of its length is introduced; the required clock rate of the decoder is therefore lower, and parallel processing architectures become easy to adopt in the entropy coding system.
Abstract: Run-length coding (RLC) and variable-length coding (VLC) are widely used techniques for lossless data compression. A high-speed entropy coding system using these two techniques is considered for digital high definition television (HDTV) applications. Traditionally, VLC decoding is implemented through a tree-searching algorithm as the input bits are received serially. For HDTV applications, it is very difficult to implement a real-time VLC decoder of this kind due to the very high data rate required. A parallel structured VLC decoder which decodes each codeword in one clock cycle regardless of its length is introduced. The required clock rate of the decoder is thus lower, and parallel processing architectures become easy to adopt in the entropy coding system. The parallel entropy coder and decoder are designed for implementation in two experimental prototype chips which are designed to encode and decode more than 52 million samples/s. Some related system issues, such as the synchronization of variable-length codewords and error concealment, are also discussed. >
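A software analogue of the one-codeword-per-cycle idea is a lookup table indexed by the next max-length window of bits, so that every codeword is resolved in a single access regardless of its length. The prefix code and helper names below are hypothetical; the hardware design in the paper is of course more involved (synchronization of variable-length codewords, error concealment, pipelining).

```python
def build_decode_table(code):
    """Lookup table indexed by the next max_len bits: entry -> (symbol, length).

    Every codeword is resolved with a single table access, whatever its length.
    `code` maps symbols to bit strings and is assumed to be prefix-free.
    """
    max_len = max(len(bits) for bits in code.values())
    table = [None] * (1 << max_len)
    for symbol, bits in code.items():
        prefix = int(bits, 2) << (max_len - len(bits))
        for pad in range(1 << (max_len - len(bits))):   # all completions of the prefix
            table[prefix | pad] = (symbol, len(bits))
    return table, max_len

def decode(bitstring, code):
    table, max_len = build_decode_table(code)
    out, pos = [], 0
    while pos < len(bitstring):
        window = bitstring[pos:pos + max_len].ljust(max_len, '0')   # pad the tail
        symbol, length = table[int(window, 2)]
        out.append(symbol)
        pos += length
    return out

# Hypothetical prefix code for illustration.
code = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
print(decode('0101100111', code))   # ['a', 'b', 'c', 'a', 'd']
```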

219 citations


Journal ArticleDOI
01 Feb 1991-EPL
TL;DR: It is shown that sporadic systems give rise to peculiar scaling properties as a result of long-range correlations, and the potential implications of this possibility in the structure of natural languages are explored.
Abstract: The role of correlations in the a priori probability of occurrence of a symbolic sequence is analysed, on the basis of the scaling behaviour of the entropy as a function of the sequence length. It is shown that sporadic systems give rise to peculiar scaling properties as a result of long-range correlations. The potential implications of this possibility in the structure of natural languages are explored.
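The scaling analysis described above rests on the block entropy H(n) of length-n subwords as a function of n. A minimal sketch for estimating H(n) from empirical block frequencies is given below, with an illustrative memoryless test sequence for which H(n) grows linearly at about 1 bit per symbol; long-range correlated (sporadic) sequences are precisely those whose H(n) deviates from this linear behaviour.

```python
from collections import Counter
import math
import random

def block_entropy(sequence, n):
    """Empirical entropy (bits) of overlapping length-n blocks of a symbol
    sequence; plotting H(n) against n gives the scaling analysis above."""
    blocks = [tuple(sequence[i:i + n]) for i in range(len(sequence) - n + 1)]
    counts = Counter(blocks)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Memoryless coin flips: H(n) grows linearly, slope ~1 bit/symbol.
random.seed(0)
seq = [random.choice('01') for _ in range(20000)]
for n in range(1, 6):
    print(n, round(block_entropy(seq, n), 3))
```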

124 citations


Patent
10 May 1991
TL;DR: In this article, a method and apparatus for encoding interframe error data in an image transmission system, and in particular in a motion compensated image transmission systems for transmitting a sequence of image frames from a transmitter to a receiver, employ hierarchical entropy coded lattice threshold quantization (46) to increase the data compression of the images being transmitted.
Abstract: A method and apparatus for encoding interframe error data in an image transmission system, and in particular in a motion compensated image transmission system for transmitting a sequence of image frames from a transmitter (8) to a receiver (21), employ hierarchical entropy coded lattice threshold quantization (46) to increase the data compression of the images being transmitted. The method and apparatus decimate (502) an interframe predicted image data and an uncoded current image data (504), and apply hierarchical entropy coded lattice threshold quantization encoding (506) to the resulting pyramid data structures. Lossy coding is applied on a level-by-level basis for generating the encoded data representation of the image difference between the predicted image data and the uncoded original image. The method and apparatus are applicable to systems transmitting a sequence of image frames (or other pattern data, such as speech) both with and without motion compensation.

119 citations


Journal ArticleDOI
01 Jan 1991
TL;DR: A measure of information is developed based on the estimation entropy utilizing the Kalman filter state estimator that can be used to determine which process to observe in order to maximize a measure of global information flow.
Abstract: A method that maximizes the information flow through a constrained communications channel when it is desired to estimate the state of multiple nonstationary processes is described. The concept of a constrained channel is introduced as a channel that is not capable of transferring all of the information required. A measure of information is developed based on the estimation entropy utilizing the Kalman filter state estimator. It is shown that this measure of information can be used to determine which process to observe in order to maximize a measure of global information flow. For stationary processes, the sampling sequence can be computed a priori, but nonstationary processes require real-time sequence computation. >
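A hedged sketch of the selection rule suggested by the abstract: take the estimation entropy of a Gaussian state estimate to be the differential entropy determined by its Kalman covariance, and observe the process whose update removes the most of it. The function names and the scalar example are assumptions, not the paper's formulation of the constrained channel or its a priori sequencing result.

```python
import numpy as np

def gaussian_entropy(P):
    """Differential entropy (nats) of a Gaussian estimate with covariance P:
    H = 0.5 * log((2*pi*e)^n * det P) -- used here as the estimation entropy."""
    n = P.shape[0]
    return 0.5 * (n * np.log(2 * np.pi * np.e) + np.log(np.linalg.det(P)))

def pick_process(predicted_covs, updated_covs):
    """Choose the process whose hypothetical measurement update removes the most
    estimation entropy, i.e. maximizes H(predicted) - H(updated).

    predicted_covs / updated_covs: one pair of Kalman covariances per candidate
    process.  A sketch of the selection rule only.
    """
    gains = [gaussian_entropy(Pp) - gaussian_entropy(Pu)
             for Pp, Pu in zip(predicted_covs, updated_covs)]
    return int(np.argmax(gains))

# Two scalar processes: observing the first removes more entropy.
print(pick_process([np.array([[4.0]]), np.array([[2.0]])],
                   [np.array([[1.0]]), np.array([[1.5]])]))   # -> 0
```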

Journal ArticleDOI
TL;DR: In this article, a strict lower bound on the entropy produced when independent random variables are summed and rescaled is established, and the entropy is used as a Lyapunov functional governing the approach to the Gaussian limit.
Abstract: We prove a strict lower bound on the entropy produced when independent random variables are summed and rescaled. Using this, we develop an approach to central limit theorems from a dynamical point of view in which the entropy is a Lyapunov functional governing approach to the Gaussian limit. This dynamical approach naturally extends to cover dependent variables, and leads to new results in pure probability theory as well as in statistical mechanics. It also provides a unified framework within which many previous results are easily derived.
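For orientation, the quantity being bounded is the entropy jump under summing and rescaling two independent copies. A LaTeX sketch of the classical non-strict statement that the paper strengthens (it follows from the Shannon-Stam entropy power inequality), together with the monotone approach to the Gaussian limit that motivates the Lyapunov-functional view:

```latex
% Entropy jump of the rescaled sum of two i.i.d. copies X_1, X_2 of X.
% The classical Shannon--Stam inequality gives only non-negativity; the paper's
% result is a strict, quantitative lower bound away from the Gaussian case.
\[
  \Delta H(X) \;=\; h\!\left(\frac{X_1 + X_2}{\sqrt{2}}\right) - h(X) \;\ge\; 0,
  \qquad \text{with equality iff } X \text{ is Gaussian.}
\]
% Iterating the doubling map X^{(k+1)} = (X^{(k)}_1 + X^{(k)}_2)/\sqrt{2} therefore
% drives h(X^{(k)}) monotonically upward, toward the entropy of the Gaussian with
% the same variance -- the Lyapunov-functional view of the central limit theorem.
```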

Journal ArticleDOI
24 Jun 1991
TL;DR: An upper bound on the probability of a sequence drawn from a finite-state source is derived in terms of the number of phrases obtained by parsing the sequence according to the Lempel-Ziv (L-Z) incremental parsing rule, and is universal in the sense that it does not depend on the statistical parameters that characterize the source.
Abstract: An upper bound on the probability of a sequence drawn from a finite-state source is derived. The bound is given in terms of the number of phrases obtained by parsing the sequence according to the Lempel-Ziv (L-Z) incremental parsing rule, and is universal in the sense that it does not depend on the statistical parameters that characterize the source. This bound is used to derive an upper bound on the redundancy of the L-Z universal data compression algorithm applied to finite-state sources, that depends on the length N of the sequence, on the number K of states of the source, and, eventually, on the source entropy. A variation of the L-Z algorithm is presented, and an upper bound on its redundancy is derived for finite-state sources. A method to derive tighter implicit upper bounds on the redundancy of both algorithms is also given, and it is shown that for the proposed variation this bound is smaller than for the original L-Z algorithm, for every value of N and K.
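The phrase count c(N) of the Lempel-Ziv incremental parsing is the quantity in which the bound is expressed. A minimal sketch of the parsing rule (each new phrase is the shortest prefix of the remaining text not previously seen as a phrase) is shown below with a toy string; the probability and redundancy bounds themselves are not reproduced here.

```python
def lz_incremental_parse(s):
    """Lempel-Ziv incremental parsing: each new phrase is the shortest prefix of
    the remaining text not seen as a phrase before.  Returns the phrase list;
    its length is the phrase count c(N) appearing in the bounds."""
    phrases, seen, current = [], set(), ""
    for ch in s:
        current += ch
        if current not in seen:
            seen.add(current)
            phrases.append(current)
            current = ""
    if current:                      # possibly repeated final fragment
        phrases.append(current)
    return phrases

s = "aababcbababaaab"
p = lz_incremental_parse(s)
print(p, len(p))   # ['a', 'ab', 'abc', 'b', 'aba', 'ba', 'aa', 'b'], c = 8
```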

Proceedings Article
02 Dec 1991
TL;DR: Criteria for training adaptive classifier networks to perform unsupervised data analysis are derived; the mutual-information-based criterion simplifies to an intuitively reasonable difference between two entropy functions, one encouraging 'decisiveness,' the other 'fairness' to the alternative interpretations of the input.
Abstract: We derive criteria for training adaptive classifier networks to perform unsupervised data analysis. The first criterion turns a simple Gaussian classifier into a simple Gaussian mixture analyser. The second criterion, which is much more generally applicable, is based on mutual information. It simplifies to an intuitively reasonable difference between two entropy functions, one encouraging 'decisiveness,' the other 'fairness' to the alternative interpretations of the input. This 'firm but fair' criterion can be applied to any network that produces probability-type outputs, but it does not necessarily lead to useful behavior.
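A sketch of the 'firm but fair' objective as described: the mutual-information criterion evaluated on a batch of probability-type outputs is the entropy of the average output minus the average output entropy. Function and variable names are illustrative, and the training loop (maximizing this quantity by gradient ascent) is omitted.

```python
import numpy as np

def firm_but_fair(probs, eps=1e-12):
    """Mutual-information objective for a batch of probability outputs
    (rows = examples, columns = classes), as a difference of two entropies:

        H(average over examples of p)  -  average over examples of H(p).

    The first term rewards 'fairness' (all classes used across the batch); the
    subtracted second term rewards 'decisiveness' (confident individual outputs).
    """
    probs = np.asarray(probs, dtype=float)
    p_mean = probs.mean(axis=0)
    h_of_mean = -np.sum(p_mean * np.log(p_mean + eps))                # entropy of the average
    mean_of_h = -np.mean(np.sum(probs * np.log(probs + eps), axis=1)) # average entropy
    return h_of_mean - mean_of_h

# Decisive and fair: high score.  Indecisive: low score.
print(firm_but_fair([[0.99, 0.01], [0.01, 0.99]]))   # close to log 2
print(firm_but_fair([[0.5, 0.5], [0.5, 0.5]]))       # close to 0
```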

Journal ArticleDOI
TL;DR: In this paper, a dynamical model adapted to evolutionary games is presented which has the property that relative entropy decreases monotonically, if the state of a (complex) population is close to an uninvadable state.
Abstract: Selection is often viewed as a process that maximizes the average fitness of a population. However, there are often constraints even on the phenotypic level which may prevent fitness optimization. Consequently, in evolutionary game theory, models of frequency dependent selection are investigated, which focus on equilibrium states that are characterized by stability (or uninvadability) rather than by optimality. The aim of this article is to show that nevertheless there is a biologically meaningful quantity, namely cross (fitness) entropy, which is optimized during the course of evolution: a dynamical model adapted to evolutionary games is presented which has the property that relative entropy decreases monotonically, if the state of a (complex) population is close to an uninvadable state. This result may be interpreted as if evolution has an “order stabilizing” effect.
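The paper's dynamics is adapted specifically to evolutionary games, but the classical replicator dynamics already illustrates the Lyapunov property described above: the relative entropy from an uninvadable (evolutionarily stable) state to the current population state decreases along trajectories. The payoff matrix below is a hypothetical rock-paper-scissors game (win 3, tie 1, loss 0) chosen so that the uniform mixed state is uninvadable; it is an illustration, not the paper's model.

```python
import numpy as np

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator dynamics  dx_i/dt = x_i[(Ax)_i - x.Ax]."""
    f = A @ x
    x = np.clip(x + dt * x * (f - x @ f), 1e-12, None)
    return x / x.sum()

def relative_entropy(p, q):
    return float(np.sum(p * np.log(p / q)))

# Rock-paper-scissors payoffs (win 3, tie 1, loss 0): the uniform mixed state
# p* = (1/3, 1/3, 1/3) is uninvadable for this choice.
A = np.array([[1.0, 3.0, 0.0],
              [0.0, 1.0, 3.0],
              [3.0, 0.0, 1.0]])
p_star = np.ones(3) / 3
x = np.array([0.6, 0.3, 0.1])
for step in range(3001):
    if step % 1000 == 0:
        print(relative_entropy(p_star, x))   # decreases toward 0 along the trajectory
    x = replicator_step(x, A)
```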

Patent
12 Jul 1991
TL;DR: In this article, a system for compressing information arranges unprocessed information into a plurality of data planes, and the data planes are converted into a combined planar data output.
Abstract: A system for compressing information arranges unprocessed information into a plurality of data planes. The data planes are converted into a combined planar data output. The combined planar data output is created by regrouping data elements which make up the unprocessed information. The regrouping is such that the entropy of the unprocessed information is increased. This provides increased compressibility of the data. The combined planar data output is compressed using standard information compression techniques. Data is reconstructed by uncompressing compressed data and rearranging it into its original format.

Journal ArticleDOI
TL;DR: The entropy of the infinitely convolved Bernoulli measure μβ is estimated, to several decimal places, for the case when β is the reciprocal of the golden ratio.
Abstract: An entropy was introduced by A. Garsia to study certain infinitely convolved Bernoulli measures (ICBMs) μβ, and he showed it was strictly less than 1 when β is the reciprocal of a Pisot-Vijayaraghavan number. However, it is impossible to estimate values from Garsia's work. The first author and J. A. Yorke have shown this entropy is closely related to the 'information dimension' of the attractors of fat baker transformations Tβ. When the entropy is strictly less than 1, the attractor is a type of strange attractor. In this paper, the entropy of μβ is estimated for the case when β is the reciprocal of the golden ratio. The estimate is fine enough to determine the entropy to several decimal places. The method of proof is totally unlike usual methods for determining dimensions of attractors; rather, a relation with the Euclidean algorithm is exploited, and the proof has a number-theoretic flavour. It suggests that some interesting features of the Euclidean algorithm remain to be explored.


Book
01 Jan 1991
TL;DR: This book presents an introduction to the maximum entropy method, addresses some misconceptions about entropy, and surveys applications in data analysis ranging from nuclear magnetic resonance and spectroscopy to plasma physics and the X-ray crystallographic phase problem.
Abstract: Preface Editors' introduction About the authors G.J. Daniell: Of maps and monkeys - an introduction to the maximum entropy method J. Skilling: Fundamentals of MaxEnt in data analysis P.J. Hore: Maximum entropy and nuclear magnetic resonance S. Davies, K.J. Packer, A. Baruya & A.I. Grant: Enhanced information recovery in spectroscopy using the maximum entropy method G.A. Cottrell: Maximum entropy and plasma physics A.J.M. Garrett: Macroreversibility and microreversibility reconciled - the second law S.F. Gull: Some misconceptions about entropy G. Bricogne: The X-ray crystallographic phase problem Index.

Journal ArticleDOI
A.D. Wyner1, Jacob Ziv
01 May 1991
TL;DR: It is demonstrated that a variant of the Lempel-Ziv data compression algorithm where the database is held fixed and is reused to encode successive strings of incoming input symbols is optimal, provided that the source is stationary and satisfies certain conditions.
Abstract: It is demonstrated that a variant of the Lempel-Ziv data compression algorithm where the database is held fixed and is reused to encode successive strings of incoming input symbols is optimal, provided that the source is stationary and satisfies certain conditions (e.g., a finite-order Markov source). A finite memory version of the Lempel-Ziv algorithm compresses (on the average) to about the entropy rate. The necessary memory size depends on the nature of the source. >

Book ChapterDOI
01 Jan 1991
TL;DR: In this paper, a Bayesian interpretation of maximum entropy image reconstruction is presented and it is shown that exp(αS(f, m)) is the only consistent prior probability distribution for positive, additive images.
Abstract: This paper presents a Bayesian interpretation of maximum entropy image reconstruction and shows that exp(αS(f, m)), where S(f, m) is the entropy of image f relative to model m, is the only consistent prior probability distribution for positive, additive images. It also leads to a natural choice for the regularizing parameter α that supersedes the traditional practice of setting χ² = N. The new condition is that the dimensionless measure of structure −2αS should be equal to the number of good singular values contained in the data. The performance of this new condition is discussed with reference to image deconvolution, but leads to a reconstruction that is visually disappointing. A deeper hypothesis space is proposed that overcomes these difficulties, by allowing for spatial correlations across the image.
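For reference, a LaTeX sketch of the prior and the stopping condition described above; the explicit form of S(f, m) is the one standard in this framework and is an assumption here, since the abstract does not spell it out.

```latex
% Prior over positive, additive images f with model m, and the proposed rule for
% fixing the regularizing parameter alpha.  The explicit entropy form below is
% the one standard in this framework (an assumption; not stated in the abstract).
\[
  \Pr(f \mid m, \alpha) \;\propto\; \exp\bigl(\alpha S(f, m)\bigr),
  \qquad
  S(f, m) \;=\; \sum_i \Bigl( f_i - m_i - f_i \log \tfrac{f_i}{m_i} \Bigr),
\]
\[
  -2\alpha S \;=\; \text{(number of good singular values in the data)} .
\]
```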

Proceedings ArticleDOI
04 Apr 1991
TL;DR: A recently developed family of statistics, ApEn, can classify complex systems, given at least 1000 data values in diverse settings that include both deterministic chaotic and stochastic processes as discussed by the authors.
Abstract: Difficulties with, and the possible inappropriateness of, applications of dimension and entropy algorithms to biological data, such as heart rate and electroencephalography (EEG) data, are indicated. A recently developed family of statistics, ApEn, can classify complex systems, given at least 1000 data values in diverse settings that include both deterministic chaotic and stochastic processes. The ability to discern changing complexity from such a relatively small amount of data holds substantial promise for diverse applications. ApEn can potentially distinguish low-dimensional deterministic systems, periodic and multiply periodic systems, high-dimensional chaotic systems, and stochastic and mixed (stochastic and deterministic) systems. Variance estimates for ApEn yield rigorous error bars for appropriate statistical interpretation of results; no such valid statistics have been established for dimension and entropy algorithms in the general setting.

Journal ArticleDOI
TL;DR: The theory of formation of an ideal image has been described which shows that the gray level in an image follows the Poisson distribution, and proposed algorithms for object background classification involve either the maximum entropy principle or the minimum χ2 statistic.
Abstract: The theory of formation of an ideal image has been described which shows that the gray level in an image follows the Poisson distribution. Based on this concept, various algorithms for object background classification have been developed. Proposed algorithms involve either the maximum entropy principle or the minimum χ2 statistic. The appropriateness of the Poisson distribution is further strengthened by comparing the results with those of similar algorithms which use conventional normal distribution. A set of images with various types of histograms has been considered here as the test data.

Journal ArticleDOI
TL;DR: Results of a second experiment show that in young listeners and with the sentences employed, manipulating linguistic entropy can result in an effect on SRT of approximately 4 dB in terms of signal-to-noise ratio; the range of this effect is approximately the same in elderly listeners.
Abstract: The rationale for a method to quantify the information content of linguistic stimuli, i.e., the linguistic entropy, is developed. The method is an adapted version of the letter-guessing procedure originally devised by Shannon [Bell Syst. Tech. J. 30, 50-64 (1951)]. It is applied to sentences included in a widely used test to measure speech-reception thresholds and originally selected to be approximately equally redundant. Results of a first experiment reveal that this method enables one to detect subtle differences between sentences and sentence lists with respect to linguistic entropy. Results of a second experiment show that (1) in young listeners and with the sentences employed, manipulating linguistic entropy can result in an effect on SRT of approximately 4 dB in terms of signal-to-noise ratio; (2) the range of this effect is approximately the same in elderly listeners.

Journal ArticleDOI
TL;DR: New architectures for short-kernel filters are developed which can reduce the entropy of subband signals better than conventional two-tap filters.
Abstract: The authors present a subband coding scheme which has the possibility of distortion-free encoding. The coding scheme, which divides input signals into frequency bands, lends itself to parallel implementation. Computer simulation is conducted using high-quality HDTV component signals. Quadrature mirror filters (QMFs) and short-kernel subband filters are compared in terms of entropy (bit-per-pel) and signal-to-noise ratio. Simulation results show that the short-kernel filters can reduce the entropy while maintaining the original picture quality. The number of subband-signal levels was found to be increased. Reduction of the number of signal levels by transformation during the filtering process is studied. From this study, new architectures for short-kernel filters are developed which can reduce the entropy of subband signals better than conventional two-tap filters. >

15 Feb 1991
TL;DR: A universal noiseless coding structure was developed that provides efficient performance over an extremely broad range of source entropy, by adaptively selecting the best of several easily implemented variable length coding algorithms.
Abstract: A universal noiseless coding structure was developed that provides efficient performance over an extremely broad range of source entropy. This is accomplished by adaptively selecting the best of several easily implemented variable length coding algorithms. Custom VLSI coder and decoder modules capable of processing over 20 million samples per second are currently under development. The first of the code options used in this module development is shown to be equivalent to a class of Huffman codes under the Humblet condition; other options are shown to be equivalent to the Huffman codes of a modified Laplacian symbol set, at specified symbol entropy values. Simulation results are obtained on actual aerial imagery, and they confirm the optimality of the scheme. On sources having Gaussian or Poisson distributions, coder performance is also projected through analysis and simulation.
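In the spirit of "adaptively selecting the best of several easily implemented variable length coding algorithms", the sketch below picks, per block of nonnegative samples, the Rice/Golomb parameter that minimizes the coded length and then encodes with it. The code options and identifiers in the actual module differ; this is an illustration of the selection idea only.

```python
def rice_encode_block(samples, ks=range(0, 8)):
    """Choose, per block, the Rice parameter k that minimizes the coded length,
    then encode each nonnegative sample as (quotient in unary, k remainder bits).

    A sketch of selecting the best of several simple variable-length options;
    not the module's actual code options."""
    def length(k):
        return sum((s >> k) + 1 + k for s in samples)   # unary quotient + stop bit + k bits
    best_k = min(ks, key=length)
    bits = []
    for s in samples:
        q, r = s >> best_k, s & ((1 << best_k) - 1)
        bits.append('1' * q + '0' + format(r, f'0{best_k}b') if best_k else '1' * q + '0')
    return best_k, ''.join(bits)

k, stream = rice_encode_block([3, 0, 5, 2, 1, 0, 4, 7])
print(k, stream, len(stream))   # the selected parameter and the coded length in bits
```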

Journal ArticleDOI
TL;DR: It is concluded that quantitative measures of information transmission can be used to locate message boundaries and provide insight into how receivers parse the behavioral stream.
Abstract: What's in a smile? Why do we respond so powerfully to this visual display? Or, phrased in information processing terms: What characteristics of the signal define the boundaries of the social message? We have used digital image analysis and subject ratings to answer this question for videotaped smiles. A simple measure of the information provided by facial movement—the entropy of the distribution of pixel intensities in the subtracted or difference images—traced the changing facial expression (the signal) through time. Raters categorized the individual videoprints in order to locate message boundaries. We found a remarkable coincidence between changes in signal entropy and message. In each smile, rapid increases in positive messages occurred within entropy crests. We conclude that quantitative measures of information transmission can be used to locate message boundaries and provide insight into how receivers parse the behavioral stream.

Journal ArticleDOI
TL;DR: In this article, an extensive analysis of the entropy of 22 nearly free electron metals is presented, where pair correlation entropy is computed from measured pair correlation functions, and the magnitude and temperature dependence of this entropy contribution shows approximately universal behaviour for most liquid metals.
Abstract: An extensive analysis of the entropy is reported for 22 nearly free electron metals. The pair correlation entropy is computed from measured pair correlation functions, and the magnitude and temperature dependence of this entropy contribution shows approximately universal behaviour for most liquid metals. The higher order correlation entropy is zero for the simple metals, but is non-zero for several complex liquid metals, including Hg, Ga and Sn. The results are interpreted in terms of the motion of ions in a liquid, in which each ion is approximately trapped in the potential well of its neighbours, and where the presence of higher-order correlation entropy implies the existence of an n-ion interaction, for n ≥ 3.
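The abstract does not state the formula used, but the pair-correlation (two-body) entropy per ion is commonly computed from the measured g(r) in the following form, with the higher-order correlation entropy defined as the remainder of the excess entropy; a LaTeX sketch under that assumption:

```latex
% Two-body (pair-correlation) entropy per ion from the measured pair correlation
% function g(r), in the commonly used form (assumed here; not given in the abstract);
% the higher-order correlation entropy is the remaining part of the excess entropy.
\[
  \frac{S_2}{N k_B} \;=\; -2\pi\rho \int_0^{\infty}
    \bigl[\, g(r)\ln g(r) - g(r) + 1 \,\bigr]\, r^{2}\, dr ,
  \qquad
  \Delta S_{n \ge 3} \;=\; S_{\mathrm{ex}} - S_2 .
\]
```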

Journal ArticleDOI
TL;DR: Two special-purpose iterative algorithms for maximization of Burg's entropy function subject to linear inequalities are presented, both of which are “row-action” methods which use in each iteration the information contained in only one constraint.
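For context, Burg's entropy function here is the usual sum of logarithms, so the problem the row-action iterations address can be written as follows; this is a sketch of the problem statement only, and the specific iterative updates are not reproduced from the abstract.

```latex
% Burg's entropy objective under linear inequality constraints; the row-action
% iterations use one constraint (one row a_i) at a time.
\[
  \max_{x > 0}\; \sum_{j=1}^{n} \ln x_j
  \quad \text{subject to} \quad
  \langle a_i, x \rangle \le b_i , \qquad i = 1, \dots, m .
\]
```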

Journal ArticleDOI
TL;DR: The main result is that the spectral roughness measure of a pyramid structure is less than that of the original fullband process, which means that substantial reduction in bits can be obtained by merely representing a source as a pyramid.
Abstract: An information-theoretic analysis of multiresolution pyramid structures is presented. The analysis is carried out using the concept of spectral entropy which, for Gaussian sources, is linearly related to the differential entropy. The spectral entropy is used to define the spectral roughness measure that, in turn, is an indicator of the amount of memory in a source. The more the memory in a source, the greater is the value of its spectral roughness measure. The spectral roughness measure also plays an important role in lower bounding the rate-distortion function. The main result is that the spectral roughness measure of a pyramid structure is less than that of the original fullband process. This means that substantial reduction in bits can be obtained by merely representing a source as a pyramid. Simulations using one-dimensional and two-dimensional sources verifying these claims are presented. >
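As background for the Gaussian case mentioned above, the differential entropy rate of a stationary Gaussian source is tied to the logarithm of its power spectrum by the Kolmogorov-Szego formula below; presumably the paper's spectral entropy corresponds, up to constants, to the integral term, which is largest for flat (memoryless) spectra. This linkage is an inference from the abstract, not a definition taken from the paper.

```latex
% Differential entropy rate of a stationary Gaussian source with power spectral
% density S(omega) (Kolmogorov--Szego formula); flatter spectra, i.e. sources
% with less memory, make the integral term larger.
\[
  \bar{h} \;=\; \tfrac{1}{2}\log(2\pi e)
  \;+\; \frac{1}{4\pi}\int_{-\pi}^{\pi} \log S(\omega)\, d\omega .
\]
```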