
Showing papers on "Entropy (information theory)" published in 2014


Journal ArticleDOI
TL;DR: The authors review and extend the key properties of Rényi divergence and Kullback-Leibler divergence; in particular, the Rényi divergence of order 1 equals the Kullback-Leibler divergence, and the relation of the special order 0 to the Gaussian dichotomy and contiguity is discussed.
Abstract: Rényi divergence is related to Rényi entropy much like Kullback-Leibler divergence is related to Shannon's entropy, and comes up in many settings. It was introduced by Rényi as a measure of information that satisfies almost the same axioms as Kullback-Leibler divergence, and depends on a parameter that is called its order. In particular, the Rényi divergence of order 1 equals the Kullback-Leibler divergence. We review and extend the most important properties of Rényi divergence and Kullback-Leibler divergence, including convexity, continuity, limits of σ-algebras, and the relation of the special order 0 to the Gaussian dichotomy and contiguity. We also show how to generalize the Pythagorean inequality to orders different from 1, and we extend the known equivalence between channel capacity and minimax redundancy to continuous channel inputs (for all orders) and present several other minimax results.

1,234 citations
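For illustration, here is a minimal Python sketch of the order-α Rényi divergence between two discrete distributions (the function name and the toy distributions are mine, not the paper's); the order-1 case is computed as the Kullback-Leibler divergence, which the α → 1 limit recovers:

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Renyi divergence D_alpha(p || q) for discrete distributions p and q (in nats):
    D_alpha = log( sum_i p_i**alpha * q_i**(1 - alpha) ) / (alpha - 1),
    with the order-1 case defined as the Kullback-Leibler divergence."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    if np.isclose(alpha, 1.0):
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))  # KL divergence
    return float(np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
for a in (0.5, 0.999, 1.0, 2.0):
    print(a, renyi_divergence(p, q, a))   # the value at order 0.999 approaches the KL value
```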


Journal ArticleDOI
19 Feb 2014-PLOS ONE
TL;DR: An accurate, non-binning MI estimator for the case of one discrete data set and one continuous data set is presented, which applies when measuring the relationship between base sequence and gene expression level, or the effect of a cancer drug on patient survival time.
Abstract: Mutual information (MI) is a powerful method for detecting relationships between data sets. There are accurate methods for estimating MI that avoid problems with “binning” when both data sets are discrete or when both data sets are continuous. We present an accurate, non-binning MI estimator for the case of one discrete data set and one continuous data set. This case applies when measuring, for example, the relationship between base sequence and gene expression level, or the effect of a cancer drug on patient survival time. We also show how our method can be adapted to calculate the Jensen–Shannon divergence of two or more data sets.

511 citations
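A sketch of a nearest-neighbour estimator of this discrete-continuous kind, using the digamma-function structure common to such estimators; the neighbour-counting and tie-handling conventions of the published estimator may differ, and the data below are synthetic:

```python
import numpy as np
from scipy.special import digamma

def mi_discrete_continuous(x_discrete, y_continuous, k=3):
    """Sketch of a k-nearest-neighbour MI estimator for one discrete and one
    continuous variable (in nats). Simplified conventions, for illustration only."""
    x = np.asarray(x_discrete)
    y = np.asarray(y_continuous, dtype=float)
    n = len(y)
    psi_nx, psi_m = [], []
    for i in range(n):
        same = np.flatnonzero(x == x[i])            # samples sharing this discrete value
        d = np.abs(y[same] - y[i])
        d_k = np.sort(d)[k]                         # distance to k-th neighbour (excluding self)
        m_i = np.count_nonzero(np.abs(y - y[i]) <= d_k) - 1  # neighbours in the full sample
        psi_nx.append(digamma(len(same)))
        psi_m.append(digamma(max(m_i, 1)))
    return digamma(n) - np.mean(psi_nx) + digamma(k) - np.mean(psi_m)

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=500)
values = rng.normal(loc=labels.astype(float), scale=1.0)   # continuous values depend on the label
print(mi_discrete_continuous(labels, values, k=3))          # positive, since label and value are related
```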


Posted Content
TL;DR: Predictive Entropy Search (PES) as mentioned in this paper selects, at each iteration, the next evaluation point that maximizes the expected information gained with respect to the global maximum.
Abstract: We propose a novel information-theoretic approach for Bayesian optimization called Predictive Entropy Search (PES). At each iteration, PES selects the next evaluation point that maximizes the expected information gained with respect to the global maximum. PES codifies this intractable acquisition function in terms of the expected reduction in the differential entropy of the predictive distribution. This reformulation allows PES to obtain approximations that are both more accurate and efficient than other alternatives such as Entropy Search (ES). Furthermore, PES can easily perform a fully Bayesian treatment of the model hyperparameters while ES cannot. We evaluate PES in both synthetic and real-world applications, including optimization problems in machine learning, finance, biotechnology, and robotics. We show that the increased accuracy of PES leads to significant gains in optimization performance.

421 citations
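In symbols (notation below is mine, following the abstract), the PES acquisition function is the mutual information between the outcome y of a candidate evaluation at x and the location x⋆ of the global maximum, written as an expected reduction in the differential entropy of the predictive distribution:

```latex
\alpha_{\mathrm{PES}}(\mathbf{x})
  \;=\; I\big(\mathbf{x}_{\star};\, y \mid \mathcal{D}, \mathbf{x}\big)
  \;=\; H\!\big[p(y \mid \mathcal{D}, \mathbf{x})\big]
  \;-\; \mathbb{E}_{p(\mathbf{x}_{\star} \mid \mathcal{D})}
        \Big[ H\!\big[p(y \mid \mathcal{D}, \mathbf{x}, \mathbf{x}_{\star})\big] \Big]
```

Here D denotes the data observed so far and H[·] is differential entropy; both terms involve predictive entropies of the surrogate model, which PES approximates.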


Journal ArticleDOI
TL;DR: The notion of distance between two single valued neutrosophic sets is introduced and its properties are studied and several similarity measures are defined.
Abstract: In this paper we have introduced the notion of distance between two single valued neutrosophic sets and studied its properties. We have also defined several similarity measures between them and investigated their characteristics. A measure of entropy of a single valued neutrosophic set has also been introduced.

401 citations
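The abstract does not state the definitions used, so purely to fix ideas, below is a common normalized Hamming-type distance between single valued neutrosophic sets over the same universe; this is a textbook-style choice for illustration and is not necessarily the paper's exact measure:

```python
import numpy as np

def svns_hamming_distance(A, B):
    """Normalized Hamming-type distance between two single valued neutrosophic sets.
    Each set is an (n, 3) array whose rows hold (truth, indeterminacy, falsity)
    membership degrees in [0, 1]. Illustrative definition only; the paper's own
    distance and entropy measures may differ."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    return float(np.abs(A - B).sum() / (3 * len(A)))

# Toy universe with three elements.
A = [(0.7, 0.2, 0.1), (0.4, 0.4, 0.3), (0.9, 0.0, 0.1)]
B = [(0.6, 0.3, 0.2), (0.5, 0.3, 0.3), (0.8, 0.1, 0.1)]
print(svns_hamming_distance(A, B))   # 0 for identical sets, larger as memberships diverge
```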


Journal ArticleDOI
TL;DR: The results based on Kapur's entropy reveal that the CS, ELR-CS and WDO methods can be used accurately and efficiently for the multilevel thresholding problem.
Abstract: The objective of image segmentation is to extract meaningful objects by selecting threshold values that optimize an entropy-based criterion. Conventional thresholding methods are efficient for bi-level thresholding, but they become computationally expensive when extended to multilevel thresholding, since they exhaustively search for the optimal thresholds. To overcome this problem, two successful swarm-intelligence-based global optimization algorithms, the cuckoo search (CS) algorithm and wind driven optimization (WDO), are employed for multilevel thresholding using Kapur's entropy. Starting from initial random threshold values, both algorithms search for the thresholds that maximize Kapur's entropy as the fitness function, and a correlation function is used to evaluate the quality of a solution. Experimental results are reported on a standard set of satellite images using various numbers of thresholds. The results based on Kapur's entropy reveal that the CS, ELR-CS and WDO methods can be used accurately and efficiently for the multilevel thresholding problem.

392 citations
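As a sketch of the objective being optimized (the swarm optimizers themselves are not reproduced here, and the data are synthetic), Kapur's entropy sums the Shannon entropies of the normalized within-class histograms induced by a set of thresholds:

```python
import numpy as np

def kapur_entropy(hist, thresholds):
    """Kapur's entropy objective for multilevel thresholding (to be maximized).
    `hist` is a normalized grey-level histogram; `thresholds` are indices that
    split it into classes. Each class contributes the entropy of its normalized
    within-class distribution."""
    hist = np.asarray(hist, dtype=float)
    edges = [0] + sorted(thresholds) + [len(hist)]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = hist[lo:hi].sum()                 # class probability mass
        if w <= 0:
            continue
        p = hist[lo:hi] / w                   # within-class distribution
        p = p[p > 0]
        total += -(p * np.log(p)).sum()       # class entropy
    return total

# Toy bimodal "image" histogram; exhaustive search over one threshold for illustration.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 5000)])
hist, _ = np.histogram(img, bins=256, range=(0, 256))
hist = hist / hist.sum()
best_t = max(range(1, 256), key=lambda t: kapur_entropy(hist, [t]))
print("best single threshold:", best_t)   # falls between the two modes
```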


Proceedings Article
08 Dec 2014
TL;DR: This work proposes a novel information-theoretic approach for Bayesian optimization called Predictive Entropy Search (PES), which codifies this intractable acquisition function in terms of the expected reduction in the differential entropy of the predictive distribution.
Abstract: We propose a novel information-theoretic approach for Bayesian optimization called Predictive Entropy Search (PES). At each iteration, PES selects the next evaluation point that maximizes the expected information gained with respect to the global maximum. PES codifies this intractable acquisition function in terms of the expected reduction in the differential entropy of the predictive distribution. This reformulation allows PES to obtain approximations that are both more accurate and efficient than other alternatives such as Entropy Search (ES). Furthermore, PES can easily perform a fully Bayesian treatment of the model hyperparameters while ES cannot. We evaluate PES in both synthetic and real-world applications, including optimization problems in machine learning, finance, biotechnology, and robotics. We show that the increased accuracy of PES leads to significant gains in optimization performance.

288 citations


Journal ArticleDOI
TL;DR: This work introduces incremental mechanisms for three representative information entropies and develops a group incremental rough feature selection algorithm based on information entropy that aims to find the new feature subset in a much shorter time when multiple objects are added to a decision table.
Abstract: Many real data sets increase dynamically in size. This phenomenon occurs in several fields, including economics, population studies, and medical research. As an effective and efficient mechanism for dealing with such data, the incremental technique has been proposed in the literature and has attracted much attention, which motivates the work in this paper. When a group of objects is added to a decision table, we first introduce incremental mechanisms for three representative information entropies and then develop a group incremental rough feature selection algorithm based on information entropy. When multiple objects are added to a decision table, the algorithm aims to find the new feature subset in a much shorter time. Experiments have been carried out on eight UCI data sets, and the experimental results show that the algorithm is effective and efficient.

264 citations


Journal ArticleDOI
TL;DR: The dimension of self-similar sets and measures on the line was studied in this article, where it was shown that if the dimension is less than the generic bound of min{1, s}, where s is the similarity dimension, then there are superexponentially close cylinders at all small enough scales.
Abstract: We study the dimension of self-similar sets and measures on the line. We show that if the dimension is less than the generic bound of min{1, s}, where s is the similarity dimension, then there are superexponentially close cylinders at all small enough scales. This is a step towards the conjecture that such a dimension drop implies exact overlaps and confirms it when the generating similarities have algebraic coefficients. As applications we prove Furstenberg's conjecture on projections of the one-dimensional Sierpinski gasket and achieve some progress on the Bernoulli convolutions problem and, more generally, on problems about parametric families of self-similar measures. The key tool is an inverse theorem on the structure of pairs of probability measures whose mean entropy at scale 2^{-n} has only a small amount of growth under convolution.

257 citations


Journal ArticleDOI
TL;DR: The Java Information Dynamics Toolkit (JIDT) is introduced, a Google code project which provides a standalone, (GNU GPL v3 licensed) open-source code implementation for empirical estimation of information-theoretic measures from time-series data.
Abstract: Complex systems are increasingly being viewed as distributed information processing systems, particularly in the domains of computational neuroscience, bioinformatics and Artificial Life. This trend has resulted in a strong uptake in the use of (Shannon) information-theoretic measures to analyse the dynamics of complex systems in these fields. We introduce the Java Information Dynamics Toolkit (JIDT): a Google code project which provides a standalone, (GNU GPL v3 licensed) open-source code implementation for empirical estimation of information-theoretic measures from time-series data. While the toolkit provides classic information-theoretic measures (e.g. entropy, mutual information, conditional mutual information), it ultimately focusses on implementing higher-level measures for information dynamics. That is, JIDT focusses on quantifying information storage, transfer and modification, and the dynamics of these operations in space and time. For this purpose, it includes implementations of the transfer entropy and active information storage, their multivariate extensions and local or pointwise variants. JIDT provides implementations for both discrete and continuous-valued data for each measure, including various types of estimator for continuous data (e.g. Gaussian, box-kernel and Kraskov-Stoegbauer-Grassberger) which can be swapped at run-time due to Java's object-oriented polymorphism. Furthermore, while written in Java, the toolkit can be used directly in MATLAB, GNU Octave, Python and other environments. We present the principles behind the code design, and provide several examples to guide users.

250 citations
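JIDT itself is a Java library; as a plain-Python illustration of one of the quantities it estimates (not JIDT's API; see the toolkit's documentation for actual usage), the sketch below computes plug-in transfer entropy for discrete time series with history length 1, TE(X→Y) = I(Y_{t+1}; X_t | Y_t):

```python
import numpy as np
from collections import Counter

def transfer_entropy_discrete(source, dest, base=2):
    """Plug-in transfer entropy TE(source -> dest) = I(dest_{t+1}; source_t | dest_t)
    for discrete series, history length 1. Didactic sketch only; JIDT additionally
    offers longer histories, bias correction and continuous-valued estimators."""
    s, d = np.asarray(source), np.asarray(dest)
    triples = list(zip(d[1:], s[:-1], d[:-1]))       # (y_{t+1}, x_t, y_t)
    n = len(triples)
    p_xyz = Counter(triples)
    p_yz = Counter((y1, y0) for y1, _, y0 in triples)
    p_xz = Counter((x0, y0) for _, x0, y0 in triples)
    p_z = Counter(y0 for _, _, y0 in triples)
    te = 0.0
    for (y1, x0, y0), c in p_xyz.items():
        p_joint = c / n
        te += p_joint * np.log((p_joint * (p_z[y0] / n)) /
                               ((p_yz[(y1, y0)] / n) * (p_xz[(x0, y0)] / n)))
    return te / np.log(base)

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 10000)
y = np.roll(x, 1)                        # y copies x with a one-step lag
y[0] = 0
print(transfer_entropy_discrete(x, y))   # close to 1 bit
print(transfer_entropy_discrete(y, x))   # close to 0 bits
```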


Journal ArticleDOI
TL;DR: Nonlinear entropy stability and a summation-by-parts framework are used to derive provably stable, polynomial-based spectral collocation element methods of arbitrary order for the compressible Navier--Stokes equations.
Abstract: Nonlinear entropy stability and a summation-by-parts framework are used to derive provably stable, polynomial-based spectral collocation element methods of arbitrary order for the compressible Navier--Stokes equations. The new methods are similar to strong form, nodal discontinuous Galerkin spectral elements but conserve entropy for the Euler equations and are entropy stable for the Navier--Stokes equations. Shock capturing follows immediately by combining them with a dissipative companion operator via a comparison approach. Smooth and discontinuous test cases are presented that demonstrate their efficacy.

246 citations


Posted Content
TL;DR: This analysis inherits the simplicity and elegance of information theory and leads to regret bounds that scale with the entropy of the optimal-action distribution, which strengthens preexisting results and yields new insight into how information improves performance.
Abstract: We provide an information-theoretic analysis of Thompson sampling that applies across a broad range of online optimization problems in which a decision-maker must learn from partial feedback. This analysis inherits the simplicity and elegance of information theory and leads to regret bounds that scale with the entropy of the optimal-action distribution. This strengthens preexisting results and yields new insight into how information improves performance.
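As a concrete instance of the algorithm being analysed, here is a minimal Beta-Bernoulli Thompson sampler for a multi-armed bandit (standard formulation; the information-theoretic regret bound itself, which scales with the entropy of the optimal-action distribution, is not computed here):

```python
import numpy as np

def thompson_bernoulli(true_means, horizon, seed=0):
    """Standard Beta-Bernoulli Thompson sampling; returns cumulative (pseudo-)regret."""
    rng = np.random.default_rng(seed)
    k = len(true_means)
    alpha = np.ones(k)                         # Beta posterior parameters per arm
    beta = np.ones(k)
    best = max(true_means)
    regret = 0.0
    for _ in range(horizon):
        samples = rng.beta(alpha, beta)        # one posterior sample per arm
        arm = int(np.argmax(samples))          # play the arm that currently looks best
        reward = rng.random() < true_means[arm]
        alpha[arm] += reward
        beta[arm] += 1 - reward
        regret += best - true_means[arm]
    return regret

print(thompson_bernoulli([0.3, 0.5, 0.7], horizon=5000))   # grows sub-linearly in the horizon
```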


Journal ArticleDOI
TL;DR: This Perspectives article gives a broad introduction to the maximum entropy method, in an attempt to encourage its further adoption, and highlights three recent contributions that apply it to molecular simulations.
Abstract: A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
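A toy version of the general procedure described here, under the standard result that the maximum entropy correction of an ensemble subject to one average constraint reweights the prior exponentially, w_i ∝ w0_i exp(−λ f_i), with λ set so the constraint holds (names and data below are illustrative, not from the article):

```python
import numpy as np
from scipy.optimize import brentq

def maxent_reweight(f, target, w0=None):
    """Maximum entropy reweighting so that the weighted average of observable f
    equals `target`. The MaxEnt solution is w_i proportional to w0_i * exp(-lam * f_i);
    lam is found by one-dimensional root finding."""
    f = np.asarray(f, dtype=float)
    w0 = np.ones_like(f) / len(f) if w0 is None else np.asarray(w0, float)

    def constraint_gap(lam):
        w = w0 * np.exp(-lam * f)
        w /= w.sum()
        return w @ f - target

    lam = brentq(constraint_gap, -20.0, 20.0)   # assumes target lies inside the ensemble's range
    w = w0 * np.exp(-lam * f)
    return w / w.sum()

# Simulated ensemble whose prior average (about 3.0) disagrees with an "experimental" value of 2.5.
rng = np.random.default_rng(0)
f = rng.normal(3.0, 1.0, size=1000)
w = maxent_reweight(f, target=2.5)
print(w @ f)   # ~2.5 after reweighting
```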

Journal ArticleDOI
14 Oct 2014-PLOS ONE
TL;DR: Different approaches to evaluating transfer entropy, some already proposed and some novel, are compared; their implementation in a freeware MATLAB toolbox is presented, together with applications to simulated and real data.
Abstract: A challenge for physiologists and neuroscientists is to map information transfer between components of the systems that they study at different scales, in order to derive important knowledge on structure and function from the analysis of the recorded dynamics. The components of physiological networks often interact in a nonlinear way and through mechanisms which are in general not completely known. It is then safer that the method of choice for analyzing these interactions does not rely on any model or assumption on the nature of the data and their interactions. Transfer entropy has emerged as a powerful tool to quantify directed dynamical interactions. In this paper we compare different approaches to evaluate transfer entropy, some of them already proposed, some novel, and present their implementation in a freeware MATLAB toolbox. Applications to simulated and real data are presented.

Journal ArticleDOI
TL;DR: In this article, a novel fault feature extraction method based on local mean decomposition (LMD) and multi-scale entropy is proposed; LMD is used as a pretreatment to decompose the nonstationary vibration signal of a roller bearing into a number of product functions.

Journal ArticleDOI
TL;DR: A measure called causation entropy is developed and it is shown that its application can lead to reliable identification of true couplings.

Journal ArticleDOI
TL;DR: How statistical entropy and entropy rate relate to other notions of entropy relevant to probability theory, computer science, the ergodic theory of dynamical systems and statistical physics is described.
Abstract: Statistical entropy was introduced by Shannon as a basic concept in information theory measuring the average missing information in a random source. Extended into an entropy rate, it gives bounds in coding and compression theorems. In this paper, I describe how statistical entropy and entropy rate relate to other notions of entropy that are relevant to probability theory (entropy of a discrete probability distribution measuring its unevenness), computer sciences (algorithmic complexity), the ergodic theory of dynamical systems (Kolmogorov–Sinai or metric entropy) and statistical physics (Boltzmann entropy). Their mathematical foundations and correlates (the entropy concentration, Sanov, Shannon–McMillan–Breiman, Lempel–Ziv and Pesin theorems) clarify their interpretation and offer a rigorous basis for maximum entropy principles. Although often ignored, these mathematical perspectives give a central position to entropy and relative entropy in statistical laws describing generic collective behaviours, and provide insights into the notions of randomness, typicality and disorder. The relevance of entropy beyond the realm of physics, in particular for living systems and ecosystems, is yet to be demonstrated.
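As a small worked example of the entropy-rate notion discussed here, the sketch below computes the Shannon entropy rate of a stationary Markov chain, h = Σ_i π_i · H(P[i, :]), where π is the stationary distribution of the transition matrix P (toy numbers, mine):

```python
import numpy as np

def markov_entropy_rate(P, base=2):
    """Entropy rate of a stationary Markov chain with transition matrix P:
    h = sum_i pi_i * H(P[i, :]), pi being the stationary distribution."""
    P = np.asarray(P, dtype=float)
    vals, vecs = np.linalg.eig(P.T)                        # left eigenvector for eigenvalue 1
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = pi / pi.sum()
    row_entropies = np.where(P > 0, -P * np.log(P), 0.0).sum(axis=1)
    return float((pi * row_entropies).sum() / np.log(base))

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
print(markov_entropy_rate(P))   # bits per step, below the 1-bit i.i.d. maximum
```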

ReportDOI
TL;DR: In this paper, the authors propose two performance measures for asset pricing models and apply them to representative agent models with recursive preferences, habits, and jumps, and compare their magnitudes to estimates derived from asset returns.
Abstract: We propose two performance measures for asset pricing models and apply them to representative agent models with recursive preferences, habits, and jumps. The measures describe the pricing kernel’s dispersion (the entropy of the title) and dynamics (horizon dependence, a measure of how entropy varies over different time horizons). We show how each model generates entropy and horizon dependence, and compare their magnitudes to estimates derived from asset returns. This exercise — and transparent loglinear approximations — clarify the mechanisms underlying these models. It also reveals, in some cases, tension between entropy, which should be large enough to account for observed excess returns, and horizon dependence, which should be small enough to account for mean yield spreads. JEL Classification Codes: E44, G12.

Journal ArticleDOI
TL;DR: The experimental results indicate that HE can depict the characteristics of the bearing vibration signal more accurately and more completely than MSE, and the proposed approach based on HE can identify various bearing conditions effectively and accurately and is superior to that based on MSE.
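For orientation, here is a minimal sketch of the building blocks being compared (coarse-graining plus sample entropy, i.e. MSE); the hierarchical entropy (HE) variant discussed in the paper additionally analyses the high-frequency components at each level and is not reproduced here, and the signal below is synthetic:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r) of a 1-D series, with r = r_factor * std(x).
    Brute-force O(n^2) version, for illustration only."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def match_count(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return (np.sum(d <= r) - len(templates)) / 2   # exclude self-matches, count pairs once

    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def coarse_grain(x, scale):
    """Non-overlapping averaging used by multiscale entropy (MSE)."""
    n = (len(x) // scale) * scale
    return np.asarray(x[:n], dtype=float).reshape(-1, scale).mean(axis=1)

rng = np.random.default_rng(0)
signal = rng.normal(size=1000)
for s in (1, 2, 4):
    print(s, sample_entropy(coarse_grain(signal, s)))   # entropy value per scale
```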

Journal ArticleDOI
TL;DR: Stochastic thermodynamics is generalized to the presence of an information reservoir, and it is shown that both the entropy production involving mutual information between system and controller and the one involving a Shannon entropy difference of an information reservoir like a tape carry an extra term different from the usual current times affinity.
Abstract: So far, feedback-driven systems have been discussed using (i) measurement and control, (ii) a tape interacting with a system, or (iii) by identifying an implicit Maxwell demon in steady-state transport. We derive the corresponding second laws from one master fluctuation theorem and discuss their relationship. In particular, we show that both the entropy production involving mutual information between system and controller and the one involving a Shannon entropy difference of an information reservoir like a tape carry an extra term different from the usual current times affinity. We, thus, generalize stochastic thermodynamics to the presence of an information reservoir.

Journal ArticleDOI
TL;DR: A general universal achievability procedure for finite blocklength analyses of network information theory problems such as the multiple-access channel and broadcast channel is developed, and a so-called entropy dispersion matrix characterizes the resulting inner bounds.
Abstract: We analyze the dispersions of distributed lossless source coding (the Slepian-Wolf problem), the multiple-access channel, and the asymmetric broadcast channel. For the two-encoder Slepian-Wolf problem, we introduce a quantity known as the entropy dispersion matrix, which is analogous to the scalar dispersions that have gained interest recently. We prove a global dispersion result that can be expressed in terms of this entropy dispersion matrix and provides intuition on the approximate rate losses at a given blocklength and error probability. To gain better intuition about the rate at which the nonasymptotic rate region converges to the Slepian-Wolf boundary, we define and characterize two operational dispersions: 1) the local dispersion and 2) the weighted sum-rate dispersion. The former represents the rate of convergence to a point on the Slepian-Wolf boundary, whereas the latter represents the fastest rate for which a weighted sum of the two rates converges to its asymptotic fundamental limit. Interestingly, when we approach either of the two corner points, the local dispersion is characterized not by a univariate Gaussian, but a bivariate one as well as a subset of off-diagonal elements of the aforementioned entropy dispersion matrix. Finally, we demonstrate the versatility of our achievability proof technique by providing inner bounds for the multiple-access channel and the asymmetric broadcast channel in terms of dispersion matrices. All our proofs are unified by a so-called vector rate redundancy theorem, which is proved using the multidimensional Berry-Esseen theorem.

Book
01 May 2014
TL;DR: This book presents a systematic framework for system identification and information processing, investigating system identification from an information theory point of view, and contains numerous illustrative examples to help the reader grasp basic methods.
Abstract: Recently, criterion functions based on information theoretic measures (entropy, mutual information, information divergence) have attracted attention and become an emerging area of study in signal processing and system identification domain. This book presents a systematic framework for system identification and information processing, investigating system identification from an information theory point of view. The book is divided into six chapters, which cover the information needed to understand the theory and application of system parameter identification. The authors' research provides a base for the book, but it incorporates the results from the latest international research publications. One of the first books to present system parameter identification with information theoretic criteria, so readers can track the latest developments. Contains numerous illustrative examples to help the reader grasp basic methods.
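A generic sketch of one such criterion, the quadratic information potential underlying the minimum error entropy (MEE) idea: the Parzen/Gaussian-kernel estimate of V(e) rises as the identification errors become more concentrated, so maximizing V(e) is equivalent to minimizing Rényi's quadratic entropy of the error. This is an illustration of the concept, not code from the book, and the data are synthetic:

```python
import numpy as np

def information_potential(e, sigma=1.0):
    """Parzen/Gaussian-kernel estimate of the quadratic information potential
    V(e) = (1/N^2) * sum_ij G_{sigma*sqrt(2)}(e_i - e_j). Larger V means lower
    quadratic error entropy."""
    e = np.asarray(e, dtype=float)
    diff = e[:, None] - e[None, :]
    return float(np.mean(np.exp(-diff**2 / (4.0 * sigma**2))))

# Identification example: the error-entropy criterion prefers the true parameters,
# even under heavy-tailed (impulsive) noise where squared error can mislead.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = X @ np.array([1.5, -0.7]) + 0.1 * rng.standard_t(df=2, size=300)

for w in (np.array([1.5, -0.7]), np.array([1.0, 0.0]), np.zeros(2)):
    print(w, information_potential(y - X @ w, sigma=0.5))   # largest for the true weights
```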

Journal ArticleDOI
28 Jul 2014-PLOS ONE
TL;DR: This work combines the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series and tests the performance and robustness of the implementation on data from numerical simulations of stochastic processes.
Abstract: Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. Especially the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of processes to allow pooling of observations over time. This assumption however, is a major obstacle to the application of these estimators in neuroscience as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble of realizations is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the computationally most heavy aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data. While we mainly evaluate the proposed method for neuroscience data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and artificial systems.

Journal ArticleDOI
TL;DR: In this article, an information-theoretic statistic called causation entropy was proposed for model-free causality inference, where the causal parents of a node form the minimal set of nodes that maximizes causation entropy.
Abstract: The broad abundance of time series data, which is in sharp contrast to limited knowledge of the underlying network dynamic processes that produce such observations, calls for a rigorous and efficient method of causal network inference. Here we develop the mathematical theory of causation entropy, an information-theoretic statistic designed for model-free causality inference. For stationary Markov processes, we prove that for a given node in the network, its causal parents form the minimal set of nodes that maximizes causation entropy, a result we refer to as the optimal causation entropy principle. Furthermore, this principle guides us to develop computationally and data efficient algorithms for causal network inference, based on a two-step discovery and removal algorithm for time series data from a network-coupled dynamical system. Validation in terms of analytical and numerical results for Gaussian processes on large random networks highlights that inference by our algorithm outperforms previous leading methods, including conditioned Granger causality and transfer entropy. Interestingly, our numerical results suggest that the number of samples required for accurate inference depends strongly on network characteristics such as the density of links and the information diffusion rate, and not necessarily on the number of nodes.
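In symbols (notation mine), the causation entropy from a node X to a node Y conditioned on a set of nodes Z is the conditional mutual information

```latex
C_{X \to Y \mid Z}
  \;=\; I\!\left(Y_{t+1};\, X_{t} \mid Z_{t}\right)
  \;=\; H\!\left(Y_{t+1} \mid Z_{t}\right) \;-\; H\!\left(Y_{t+1} \mid Z_{t},\, X_{t}\right),
```

and transfer entropy is recovered as the special case Z = Y; the optimal causation entropy principle stated in the abstract then identifies the causal parents of Y as the minimal conditioning set that maximizes this quantity.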

Journal ArticleDOI
TL;DR: The root assumptions of the A* algorithm are examined and reformulated in a manner that enables a direct use of the search strategy as the driving force behind the generation of new samples in a motion graph, leading to a highly exploitative method which does not sacrifice entropy.
Abstract: This paper presents a generalization of the classic A* algorithm to the domain of sampling-based motion planning. The root assumptions of the A* algorithm are examined and reformulated in a manner that enables a direct use of the search strategy as the driving force behind the generation of new samples in a motion graph. Formal analysis is presented to show probabilistic completeness and convergence of the method. This leads to a highly exploitative method which does not sacrifice entropy. Many improvements are presented to this versatile method, most notably, an optimal connection strategy, a bias towards the goal region via an Anytime A* heuristic, and balancing of exploration and exploitation on a simulated annealing schedule. Empirical results are presented to assess the proposed method both qualitatively and quantitatively in the context of high-dimensional planning problems. The potential of the proposed methods is apparent, both in terms of reliability and quality of solutions found.

Journal ArticleDOI
24 Apr 2014-Entropy
TL;DR: A novel expression for entropy inspired by the properties of fractional calculus is formulated; the results reveal that tuning the fractional order allows a high sensitivity to the signal evolution, which is useful in describing the dynamics of complex systems.
Abstract: This paper formulates a novel expression for entropy inspired by the properties of Fractional Calculus. The characteristics of the generalized fractional entropy are tested both in standard probability distributions and real world data series. The results reveal that tuning the fractional order allows a high sensitivity to the signal evolution, which is useful in describing the dynamics of complex systems. The concepts are also extended to relative distances and tested with several sets of data, confirming the goodness of the generalization.

Journal ArticleDOI
TL;DR: This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with an entropy constraint; the modified algorithm proves to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improves the results.
Abstract: The portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with an entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with an entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome the lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved the results.

Journal ArticleDOI
13 May 2014-PLOS ONE
TL;DR: It is suggested that schizophrenia is associated with more complex fMRI signal patterns than healthy controls, supporting the hypothesis that system complexity increases with age or disease, and consistent with the notion that schizophrenia is characterised by a dysregulation of the nonlinear dynamics of underlying neuronal systems.
Abstract: We investigated the differences in brain fMRI signal complexity in patients with schizophrenia while performing the Cyberball social exclusion task, using measures of sample entropy and the Hurst exponent (H). 13 patients meeting Diagnostic and Statistical Manual of Mental Disorders, 4th Edition (DSM-IV) criteria for schizophrenia and 16 healthy controls underwent fMRI scanning at 1.5 T. The fMRI data of both groups of participants were pre-processed, the entropy characterized and the Hurst exponent extracted. Whole-brain entropy and H maps of the groups were generated and analysed. The results, after adjusting for age and sex differences, show that patients with schizophrenia exhibited higher complexity than healthy controls, at mean whole-brain and regional levels. Both sample entropy and the Hurst exponent agree that patients with schizophrenia have more complex fMRI signals than healthy controls. These results suggest that schizophrenia is associated with more complex signal patterns when compared to healthy controls, supporting the increase-in-complexity hypothesis, where system complexity increases with age or disease, and are also consistent with the notion that schizophrenia is characterised by a dysregulation of the nonlinear dynamics of underlying neuronal systems.

Journal ArticleDOI
TL;DR: In this paper, it is shown that the continuous entropy of an interval-valued intuitionistic fuzzy set is the average of the entropies of its interval-valued intuitionistic fuzzy values (IVIFVs), and a programming model to determine the optimal weights of criteria with the principle of minimum entropy is established.
Abstract: In this paper, we propose the interval-valued intuitionistic fuzzy continuous weighted entropy which generalizes intuitionistic fuzzy entropy measures defined by Szmidt and Kacprzyk on the basis of the continuous ordered weighted averaging (COWA) operator. It is shown that the continuous entropy of interval-valued intuitionistic fuzzy set is the average of the entropies of its interval-valued intuitionistic fuzzy values (IVIFVs). We also establish the programming model to determine optimal weight of criteria with the principle of minimum entropy. Furthermore, we investigate the multi-criteria group decision making (MCGDM) problems in which criteria values take the form of interval-valued intuitionistic fuzzy information. An approach to interval-valued intuitionistic fuzzy multi-criteria group decision making is given, which is based on the weighted relative closeness and the IVIFV attitudinal expected score function. Finally, emergency risk management (ERM) evaluation is provided to illustrate the application of the developed approach.

Journal ArticleDOI
TL;DR: All the FD metrics were highly sensitive to failing to measure the traits of all the species present; future studies need to consider the potential impact of the sampling regime of both traits and species, and of the scale at which the computations are made, on the behaviour of the metrics and the robustness of the results.
Abstract: Functional diversity (FD) is an important concept for studies of both ecosystem processes and community assembly, so it is important to understand the behaviour of common metrics used to express it. Data from an existing study of the relationship between FD and environmental drivers were used to simulate the impact of a progressive failure to measure the traits of all the species present under three scenarios: intraspecific variation between sites (i) ignored, (ii) assessed, or (iii) ignored but with metrics calculated at the sampling unit rather than the site level. All the FD metrics were highly sensitive to failing to measure the traits of all the species present. Functional dispersion, functional richness and Rao's entropy all generally declined with a reduced proportion of species or cover assessed for traits, whilst functional divergence and evenness increased for some sites and decreased for others. Functional richness was the most sensitive (mean absolute deviation at 70% of species assessed had a range of 11.2–28.2% across scenarios), followed by functional evenness (range 6.4–38.5%), functional divergence (5.2–8.3%), Rao's entropy (1.4–7.0%) and functional dispersion (0.7–3.5%). It is clear that failing to measure the traits of all species at a site can have a serious impact on the value of any functional trait metric computed and on any conclusions drawn from such data. Future studies of FD need to concentrate on the potential impact of the sampling regime of both traits and species, and of the scale at which the computations are made, on the behaviour of metrics and the subsequent robustness of the results.
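For reference, Rao's quadratic entropy, one of the FD metrics examined, is Q = Σ_i Σ_j d_ij p_i p_j for relative abundances p and pairwise trait dissimilarities d. A minimal sketch follows (species data and variable names are illustrative); dropping a species' traits visibly changes the value, which is the sensitivity the study quantifies:

```python
import numpy as np

def rao_quadratic_entropy(abundances, trait_dissimilarity):
    """Rao's quadratic entropy Q = sum_ij d_ij * p_i * p_j, where p are relative
    abundances and d_ij are pairwise trait dissimilarities between species."""
    p = np.asarray(abundances, dtype=float)
    p = p / p.sum()
    d = np.asarray(trait_dissimilarity, dtype=float)
    return float(p @ d @ p)

# Four species; dissimilarity derived from a single standardized trait, for illustration.
trait = np.array([0.1, 0.4, 0.5, 0.9])
d = np.abs(trait[:, None] - trait[None, :])
cover = np.array([40.0, 30.0, 20.0, 10.0])

print(rao_quadratic_entropy(cover, d))               # all species measured
print(rao_quadratic_entropy(cover[:3], d[:3, :3]))   # rarest species' traits missing
```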