
Showing papers on "Entropy (information theory) published in 1972"


Journal ArticleDOI
TL;DR: In this paper, the authors discuss entropy and market risk in the selection of efficient portfolios and apply the approach to the stock market.
Abstract: (1972) Entropy, market risk, and the selection of efficient portfolios. Applied Economics, Vol. 4, No. 3, pp. 209-220.

171 citations


Journal ArticleDOI
TL;DR: It turns out that the optimum error exponents of variable-length-to-block coding are identical with those of block-to-variable-length coding and are related in an interesting way to Renyi's generalized entropy function.
Abstract: Variable-length-to-block codes are a generalization of run-length codes. A coding theorem is first proved. When the codes are used to transmit information from fixed-rate sources through fixed-rate noiseless channels, buffer overflow results. The latter phenomenon is an important consideration in the retrieval of compressed data from storage. The probability of buffer overflow decreases exponentially with buffer length and we determine the relation between rate and exponent size for memoryless sources. We obtain codes that maximize the overflow exponent for any given transmission rate exceeding the source entropy and present asymptotically optimal coding algorithms whose complexity grows linearly with codeword length. It turns out that the optimum error exponents of variable-length-to-block coding are identical with those of block-to-variable-length coding and are related in an interesting way to Renyi's generalized entropy function.

101 citations
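The abstract above notes that variable-length-to-block codes generalize run-length codes. Purely as a point of reference (this is not the paper's coding scheme, and the function names are placeholders), a minimal run-length encoder/decoder sketch in Python:

```python
from itertools import groupby

def run_length_encode(symbols):
    """Collapse runs of identical symbols into (symbol, run_length) pairs."""
    return [(sym, sum(1 for _ in run)) for sym, run in groupby(symbols)]

def run_length_decode(pairs):
    """Expand (symbol, run_length) pairs back into the original sequence."""
    return [sym for sym, count in pairs for _ in range(count)]

if __name__ == "__main__":
    data = "0001100000111"
    encoded = run_length_encode(data)
    print(encoded)  # [('0', 3), ('1', 2), ('0', 5), ('1', 3)]
    assert "".join(run_length_decode(encoded)) == data
```

A variable-length-to-block code maps such variable-length source segments onto fixed-length output blocks, which is the setting in which the paper's buffer-overflow analysis arises.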


Journal ArticleDOI
A.D. Wyner1
TL;DR: An upper bound is established for the entropy corresponding to a positive integer valued random variable X in terms of the expectation of certain functions of X if E log X is finite.
Abstract: An upper bound is established for the entropy corresponding to a positive integer valued random variable X in terms of the expectation of certain functions of X. In particular, we show that the entropy is finite if E log X is finite.

95 citations
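The two quantities the abstract relates are the entropy H(X) of a positive-integer-valued random variable and the expectation E log X. The sketch below is not Wyner's bound itself (which the abstract does not state in full); it only computes both quantities for an assumed truncated geometric distribution to make the objects concrete:

```python
import math

def entropy_and_elog(pmf):
    """H(X) and E[log X] (both in nats) for a positive-integer-valued pmf {x: p(x)}."""
    H = -sum(p * math.log(p) for p in pmf.values() if p > 0)
    e_log = sum(p * math.log(x) for x, p in pmf.items())
    return H, e_log

# Illustrative distribution: geometric on {1, 2, ...}, truncated at n_max.
r, n_max = 0.5, 60
pmf = {x: (1 - r) * r ** (x - 1) for x in range(1, n_max + 1)}
H, e_log = entropy_and_elog(pmf)
print(f"H(X) = {H:.4f} nats, E[log X] = {e_log:.4f} nats")
```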


Journal ArticleDOI
TL;DR: As discussed by the authors, the most probable distribution of a stochastic variable is obtained by maximizing the entropy of its distribution under given constraints via Lagrange's procedure; the constraints then determine the type of frequency distribution.
Abstract: The most probable distribution of a stochastic variable is obtained by maximizing the entropy of its distribution under given constraints, by applying Lagrange's procedure. The constraints then determine the type of frequency distribution. The above holds for continuous as well as for discrete distributions. In this note we give a survey of various constraints and the corresponding frequency distributions.

70 citations
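As a concrete illustration of the procedure this note surveys, the following sketch maximizes the entropy of a distribution on a finite support subject to a prescribed mean. Lagrange's procedure gives weights proportional to exp(-λx); here the multiplier λ is found by bisection. The function name, support, and target mean are illustrative choices, not taken from the paper:

```python
import numpy as np

def max_entropy_given_mean(support, target_mean, lam_lo=-50.0, lam_hi=50.0):
    """Maximum-entropy distribution on a finite support with a prescribed mean.

    Lagrange's procedure gives p_i proportional to exp(-lam * x_i); the
    multiplier lam is found by bisection so that the mean constraint holds.
    """
    x = np.asarray(list(support), dtype=float)

    def distribution(lam):
        w = np.exp(-lam * (x - x.mean()))   # centering x only rescales the weights
        return w / w.sum()

    for _ in range(200):                    # bisection on lam (the mean is decreasing in lam)
        lam = 0.5 * (lam_lo + lam_hi)
        p = distribution(lam)
        if p @ x > target_mean:
            lam_lo = lam
        else:
            lam_hi = lam
    return distribution(0.5 * (lam_lo + lam_hi))

if __name__ == "__main__":
    p = max_entropy_given_mean(range(1, 7), target_mean=4.5)   # a "loaded die"
    entropy = -(p * np.log(p)).sum()
    print(np.round(p, 4), f"entropy = {entropy:.4f} nats")
```

Other constraints (a fixed variance, a fixed E log X, and so on) lead in the same way to other exponential-family forms, which is the sense in which the constraints determine the type of frequency distribution.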



Journal ArticleDOI
TL;DR: A general statistical description of learning processes is given, using an extension of classical information theory which is able to characterize a system's subjective uncertainty about a set of data and the relevance associated with them.
Abstract: A general statistical description of learning processes is given, using an extension of classical information theory which is able to characterize a system's subjective uncertainty (subjective entropy) about a set of data and the relevance associated with them. Learning is defined as a process in which the system's subjective entropy or, equivalently, its missing information decreases in time. This definition implies that learning enables the system to optimize its responses upon external stimuli such that its expected profit increases. A physical model is discussed where the system's subjective probabilities, which enter into the definition of its missing information, can be considered as internal physical parameters. It is conjectured that the same might be true in models of brain and memory. Two forms of the associative memory model are briefly discussed.

35 citations


Journal ArticleDOI
01 Mar 1972

17 citations


Journal ArticleDOI
TL;DR: This paper considers the theoretical example of the information required in the synthesis of complex linear polymers, and the information manifested by such compounds once formed, by distinguishing three categories: structural, functional, and bound information.

15 citations


Journal ArticleDOI
TL;DR: The maximum entropy method, which chooses a probability distribution P to best estimate the unknown probability measure P* underlying the authors' experiment, is particularly appealing and has received considerable attention in recent years.
Abstract: Consider an experiment with a finite number of outcomes. Suppose, for example, we are given data on the yield strength of a Bofors steel [8], [9]. Suppose also that we know the means of certain random variables relating to our experiment; in the above example we might calculate the mean and variance of the observed outcomes: μ = 35.6, σ² = 4.19. On the basis of this information, how should we choose a probability distribution P to best estimate the unknown probability measure P* underlying our experiment? There is, of course, no set solution to this problem. The maximum entropy method, however, is particularly appealing and has received considerable attention in recent years. Introduced by Shannon [7] in connection with communication theory, entropy was given an information-theoretic interpretation first formulated by Jaynes [4] and further developed by Tribus [8]. According to Jaynes, P should be chosen to maximize the entropy E = -Σᵢ Pᵢ log Pᵢ.

15 citations
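For comparison with the moments quoted in the abstract, here is a tiny sketch of the continuous analogue: when only a mean and a variance are prescribed, the maximum-entropy density is the normal distribution, whose differential entropy has a closed form. Only the two sample moments are taken from the abstract; everything else is an illustrative assumption:

```python
import math

# Continuous analogue of the estimation problem in the abstract: with only a mean
# and a variance prescribed, the maximum-entropy density is normal, and its
# differential entropy is 0.5 * ln(2 * pi * e * sigma^2).
mu, sigma2 = 35.6, 4.19          # sample moments quoted in the abstract
entropy_nats = 0.5 * math.log(2 * math.pi * math.e * sigma2)
print(f"max-entropy fit: N(mu={mu}, sigma^2={sigma2}), "
      f"differential entropy = {entropy_nats:.3f} nats")
```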


Journal ArticleDOI
TL;DR: This short correspondence improves Pfaffelhuber's error estimate and thereby gives a new upper bound for the number of independent observations needed to obtain a “reliable” approximation of the exact entropy.
Abstract: Pfaffelhuber in his paper [1] deals with the approximation of the entropy H of finite information sources on the basis of independent observations. He derives an error estimate for the experimental entropy which depends only on the number of possible source outputs. Using this result, he succeeded in giving an upper bound for the number of independent observations needed to obtain a “reliable” approximation of the exact entropy. The aim of this short correspondence is to improve his error estimate and thereby to give a new upper bound for the number of necessary observations, which is roughly the logarithm of E. Pfaffelhuber's bound. We mention only that the same assertion holds for the information rate T of observation channels, too.

7 citations
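The correspondence concerns how many independent observations are needed before the experimental (empirical) entropy is reliable. A minimal sketch of that plug-in estimator, with an assumed three-symbol source, showing the estimate approaching the exact entropy as the sample size grows; it does not implement the paper's error bound:

```python
import math
import random
from collections import Counter

def empirical_entropy(samples):
    """Plug-in estimate of H (in nats) from i.i.d. observations, via empirical frequencies."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

if __name__ == "__main__":
    random.seed(0)
    probs = {"a": 0.5, "b": 0.25, "c": 0.25}                  # assumed source
    true_H = -sum(p * math.log(p) for p in probs.values())    # ~1.0397 nats
    for n in (100, 10_000):
        draws = random.choices(list(probs), weights=list(probs.values()), k=n)
        print(f"n = {n:>6}: estimate = {empirical_entropy(draws):.4f}, exact = {true_H:.4f}")
```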


Journal ArticleDOI
TL;DR: Ergodic computational aspects of the Jacobi algorithm, a generalization to two dimensions of the continued fraction algorithm, are considered and an approximation to the invariant measure of the transformation associated with the algorithm is obtained.
Abstract: Ergodic computational aspects of the Jacobi algorithm, a generalization to two dimensions of the continued fraction algorithm, are considered. By means of such computations the entropy of the algorithm is estimated to be 3.5. An approximation to the invariant measure of the transformation associated with the algorithm is obtained. The computations are tested by application to the continued fraction algorithm for which both entropy and the invariant measure are known.
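For the continued fraction test case mentioned at the end of the abstract, both the invariant (Gauss) measure and the entropy π²/(6 ln 2) are known, so the entropy can be checked numerically from Rokhlin's formula h = ∫ log|T′(x)| dμ(x), with T(x) = 1/x mod 1 and log|T′(x)| = -2 log x. The Monte Carlo sketch below is only that check, not the paper's Jacobi-algorithm computation; it samples the Gauss measure by inverting its CDF log₂(1 + x):

```python
import math
import random

def gauss_map_entropy_mc(n_samples=1_000_000, seed=0):
    """Monte Carlo check of the entropy of the continued fraction (Gauss) map.

    Rokhlin's formula: h = E_mu[-2 * log x], with mu the Gauss measure of density
    1 / (ln 2 * (1 + x)) on (0, 1).  Inverting its CDF F(x) = log2(1 + x) turns
    uniform draws u into Gauss-distributed samples x = 2**u - 1.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        u = rng.random() or 1e-16        # guard against log(0) in the u == 0 corner case
        total += -2.0 * math.log(2.0 ** u - 1.0)
    return total / n_samples

if __name__ == "__main__":
    estimate = gauss_map_entropy_mc()
    exact = math.pi ** 2 / (6 * math.log(2))   # ~2.3731 nats
    print(f"Monte Carlo estimate: {estimate:.4f} nats, exact: {exact:.4f} nats")
```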

07 Jun 1972
TL;DR: A sequential adaptive experimental design procedure is studied, whose aim is to sequentially design the most informative experiments so that the correct model equation can be determined with as little experimentation as possible.
Abstract: A sequential adaptive experimental design procedure for a related problem is studied. It is assumed that a finite set of potential linear models relating certain controlled variables to an observed variable is postulated, and that exactly one of these models is correct. The problem is to sequentially design most informative experiments so that the correct model equation can be determined with as little experimentation as possible. Discussion includes: structure of the linear models; prerequisite distribution theory; entropy functions and the Kullback-Leibler information function; the sequential decision procedure; and computer simulation results. An example of application is given.
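The abstract lists the Kullback-Leibler information function among the ingredients of the sequential procedure. A minimal sketch of that quantity for discrete distributions, with made-up predictive distributions for two rival models; the paper's actual design criterion and linear-model machinery are not reproduced here:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler information D(p || q) for discrete distributions (in nats)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)

# Hypothetical predictive distributions of two rival models for the outcome of one
# candidate experiment.  The further apart the predictions, the more that experiment
# is expected to help discriminate between the models.
model_a = [0.70, 0.20, 0.10]
model_b = [0.30, 0.40, 0.30]
print(f"D(a || b) = {kl_divergence(model_a, model_b):.4f} nats")
```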

Journal ArticleDOI
01 Dec 1972-Metrika
TL;DR: Equivalence of the generalized entropy H_β(P, Φ_t) defined in this paper and Kapur's entropy of order α and type β, i.e. H_α^β(P), is established.
Abstract: Equivalence of the generalized entropy H_β(P, Φ_t) defined in this paper and Kapur's entropy of order α and type β, i.e. H_α^β(P), is established. The results given recently by Campbell follow as special cases.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the fluctuations of any locally conserved quantity give rise to a contribution to the entropy density of a large system of at least a constant times T³, this T³ term being the entropy of phonons propagating at the hydrodynamic sound velocity determined by the macroscopic compressibility.

Journal ArticleDOI
TL;DR: An extension of the concept of informational entropy, I, is introduced which allows for the fact that the size distribution is known only to within an accuracy inherent to the method (instrument) and described by a conditional probability density.

Journal ArticleDOI
01 Jul 1972
TL;DR: The development and comparison of a class of nonparametric probability density function modeling algorithms is presented and variations of the algorithms are compared as to rate of convergence and limit cycle stability relative to ease of implementation.
Abstract: The development and comparison of a class of nonparametric probability density function modeling algorithms is presented. Each algorithm iteratively estimates a model of the sampled density function based upon a description of a set of equiprobable regions over the range of the variable of interest. Minimization of computational complexity and memory capacity while maintaining convergence and stability are the principal considerations. Variations of the algorithms are compared as to rate of convergence and limit-cycle stability relative to ease of implementation. Results, including comparative curves, are presented.
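The algorithms in this paper build their model from a set of equiprobable regions. As a rough, non-iterative stand-in for that idea (not the paper's recursive algorithms), the sketch below forms equiprobable regions from sample quantiles and assigns each region a constant density equal to its probability mass divided by its width; the names and the test distribution are illustrative:

```python
import numpy as np

def equiprobable_density_model(samples, n_regions=10):
    """Piecewise-constant density model built from equiprobable regions.

    Region boundaries are sample quantiles, so each region holds (roughly) the same
    probability mass 1/n_regions; the density on a region is that mass over its width.
    """
    edges = np.quantile(samples, np.linspace(0.0, 1.0, n_regions + 1))
    widths = np.diff(edges)
    densities = (1.0 / n_regions) / widths
    return edges, densities

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(loc=0.0, scale=1.0, size=5_000)
    edges, dens = equiprobable_density_model(data, n_regions=8)
    for lo, hi, d in zip(edges[:-1], edges[1:], dens):
        print(f"[{lo:6.2f}, {hi:6.2f})  density ~ {d:.3f}")
```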

Journal ArticleDOI
TL;DR: The evaluation of the informational entropy of terpolymers, carried out by applying the methods of information theory, is described.

Book ChapterDOI
W. Kuich1
01 Jan 1972
TL;DR: This chapter focuses on the entropy of transformed finite-state automata and associated languages, where the S transformation replaces a transition between two states of the original automaton by the transitions of an automaton of simple structure.
Abstract: This chapter focuses on the entropy of transformed finite-state automata and associated languages. This transformation replaces a transition between two states of the original automaton by the transitions of an automaton of simple structure. In terms of language theory, this transformation is equivalent to a language-preserving function called substitution or homomorphism. While defining the entropy of finite-state automata and associated languages, it is obvious to ask for the change in the entropy caused by applying the S transformation. S transformation, transforming M into M (0,…,0, r ) generalizes two transformations r = 1 yields the η k transformation, while k = 1 yields the η r transformation. Hence, a homomorphism, mapping each symbol on a word consisting of k symbols, diminishes the entropy H M of the original language to the k th part, k -1 HM.