
Showing papers in "Entropy in 2017"


Journal ArticleDOI
23 Mar 2017-Entropy
TL;DR: The quantum Otto cycle serves as a bridge between the macroscopic world of heat engines and the quantum regime of thermal devices composed of a single element, and the dynamical model enables the study of finite-time cycles with limits set by the adiabatic and thermalization times.
Abstract: The quantum Otto cycle serves as a bridge between the macroscopic world of heat engines and the quantum regime of thermal devices composed of a single element. We compile recent studies of the quantum Otto cycle with a harmonic oscillator as a working medium. This model has the advantage that it is analytically tractable. In addition, an experimental realization has been achieved, employing a single ion in a harmonic trap. The review is embedded in the field of quantum thermodynamics and quantum open systems. The basic principles of the theory are explained by a specific example illuminating the basic definitions of work and heat. The relation between quantum observables and the state of the system is emphasized. The dynamical description of the cycle is based on a completely positive map formulated as a propagator for each stroke of the engine. Explicit solutions for these propagators are described on a vector space of quantum thermodynamical observables. These solutions, which employ different assumptions and techniques, are compared. The tradeoff between power and efficiency is the focal point of finite-time thermodynamics. The dynamical model enables the study of finite-time cycles, with limits set by the adiabatic and thermalization times. Explicit finite-time solutions are found which are frictionless (meaning that no coherence is generated), and are also known as shortcuts to adiabaticity. The transition from frictionless to sudden adiabats is characterized by a non-Hermitian degeneracy in the propagator. In addition, the influence of noise on the control is illustrated. These results are used to close the cycles either as engines or as refrigerators. The properties of the limit cycle are described. Methods to optimize the power by controlling the thermalization time are also introduced. At high temperatures, the Novikov–Curzon–Ahlborn efficiency at maximum power is obtained. The sudden limit of the engine, which allows finite power at zero cycle time, is shown. The refrigerator cycle is described within the frictionless limit, with emphasis on the cooling rate when the cold bath temperature approaches zero.
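The two efficiency benchmarks mentioned in the abstract have simple closed forms. As a reminder (these are standard results from the quantum Otto literature, not reproduced from this paper's derivations, and the paper's notation may differ):

```latex
% Quasi-static (frictionless) Otto efficiency for a harmonic oscillator
% working medium driven between frequencies \omega_c (cold) and \omega_h (hot):
\eta_{\mathrm{Otto}} = 1 - \frac{\omega_c}{\omega_h}

% Efficiency at maximum power in the high-temperature limit
% (Novikov--Curzon--Ahlborn), with bath temperatures T_c and T_h:
\eta_{\mathrm{NCA}} = 1 - \sqrt{\frac{T_c}{T_h}}
```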

323 citations


Journal ArticleDOI
26 Jan 2017-Entropy
TL;DR: This work forms a chain of connections from univariate methods like the Kolmogorov-Smirnov test, PP/QQ plots and ROC/ODC curves to multivariate tests involving energy statistics and kernel-based maximum mean discrepancy, aiming to provide useful links for theorists and practitioners familiar with one subset of methods but not others.
Abstract: Nonparametric two-sample or homogeneity testing is a decision theoretic problem that involves identifying differences between two random variables without making parametric assumptions about their underlying distributions. The literature is old and rich, with a wide variety of statistics having been designed and analyzed, both for the unidimensional and the multivariate setting. In this short survey, we focus on test statistics that involve the Wasserstein distance. Using an entropic smoothing of the Wasserstein distance, we connect these to very different tests including multivariate methods involving energy statistics and kernel-based maximum mean discrepancy and univariate methods like the Kolmogorov–Smirnov test, probability or quantile (PP/QQ) plots and receiver operating characteristic or ordinal dominance (ROC/ODC) curves. Some observations are implicit in the literature, while others seem to have not been noticed thus far. Given nonparametric two-sample testing’s classical and continued importance, we aim to provide useful connections for theorists and practitioners familiar with one subset of methods but not others.
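As a rough numerical illustration of the univariate statistics this survey connects, the sketch below computes the Kolmogorov-Smirnov test and the 1-D Wasserstein distance with SciPy. The permutation calibration of the Wasserstein statistic is an illustrative choice, not a procedure taken from the paper.

```python
# Sketch: univariate two-sample statistics discussed in the survey,
# computed with SciPy on synthetic data.
import numpy as np
from scipy.stats import ks_2samp, wasserstein_distance

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=300)   # sample from P
y = rng.normal(0.3, 1.0, size=300)   # sample from Q (shifted mean)

# Kolmogorov-Smirnov test: sup-norm distance between empirical CDFs.
ks_stat, ks_pval = ks_2samp(x, y)

# 1-D Wasserstein (earth mover's) distance between empirical distributions.
w_obs = wasserstein_distance(x, y)

# Calibrate the Wasserstein statistic by permuting the pooled sample
# (illustrative calibration, not from the paper).
pooled = np.concatenate([x, y])
perm_stats = []
for _ in range(500):
    rng.shuffle(pooled)
    perm_stats.append(wasserstein_distance(pooled[:len(x)], pooled[len(x):]))
w_pval = np.mean(np.array(perm_stats) >= w_obs)

print(f"KS: stat={ks_stat:.3f}, p={ks_pval:.3g}")
print(f"Wasserstein: stat={w_obs:.3f}, permutation p={w_pval:.3g}")
```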

287 citations


Journal ArticleDOI
19 Oct 2017-Entropy
TL;DR: The authors place the choice of prior into the context of the entire Bayesian analysis, from inference to prediction to model evaluation, resolving the tension that a prior encoding true prior information should be chosen without reference to the model of the measurement process, while most common prior modeling techniques are implicitly motivated by a reference likelihood.
Abstract: A key sticking point of Bayesian analysis is the choice of prior distribution, and there is a vast literature on potential defaults including uniform priors, Jeffreys’ priors, reference priors, maximum entropy priors, and weakly informative priors. These methods, however, often manifest a key conceptual tension in prior modeling: a model encoding true prior information should be chosen without reference to the model of the measurement process, but almost all common prior modeling techniques are implicitly motivated by a reference likelihood. In this paper we resolve this apparent paradox by placing the choice of prior into the context of the entire Bayesian analysis, from inference to prediction to model evaluation.

259 citations


Journal ArticleDOI
24 May 2017-Entropy
TL;DR: The proposed framework introduces a new optimization objective that combines the error rate with the information learnt by a set of feature maps using deconvolutional networks (deconvnet), enhancing performance by guiding the CNN through better visualization of its learnt features.
Abstract: Recent advances in Convolutional Neural Networks (CNNs) have obtained promising results in difficult deep learning tasks. However, the success of a CNN depends on finding an architecture to fit a given problem. Hand-crafting an architecture is a challenging, time-consuming process that requires expert knowledge and effort, due to the large number of architectural design choices. In this article, we present an efficient framework that automatically designs a high-performing CNN architecture for a given problem. In this framework, we introduce a new optimization objective function that combines the error rate and the information learnt by a set of feature maps using deconvolutional networks (deconvnet). The new objective function allows the hyperparameters of the CNN architecture to be optimized in a way that enhances the performance by guiding the CNN through better visualization of learnt features via deconvnet. The actual optimization of the objective function is carried out via the Nelder-Mead Method (NMM). Further, our new objective function results in much faster convergence towards a better architecture. The proposed framework has the ability to explore a CNN architecture’s numerous design choices in an efficient way and also allows effective, distributed execution and synchronization via web services. Empirically, we demonstrate that the CNN architecture designed with our approach outperforms several existing approaches in terms of its error rate. Our results are also competitive with state-of-the-art results on the MNIST dataset and perform reasonably against the state-of-the-art results on CIFAR-10 and CIFAR-100 datasets. Our approach has a significant role in increasing the depth, reducing the size of strides, and constraining some convolutional layers not followed by pooling layers in order to find a CNN architecture that produces a high recognition performance.
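A minimal sketch of the optimization loop only, using SciPy's Nelder-Mead solver over a few hyperparameters. The function `evaluate_architecture` is a hypothetical placeholder standing in for the paper's objective (error rate combined with a deconvnet feature-map term); the synthetic surrogate simply keeps the sketch runnable.

```python
# Sketch: optimizing CNN hyperparameters with the Nelder-Mead method via SciPy.
import numpy as np
from scipy.optimize import minimize

def evaluate_architecture(params):
    depth, filters, stride = params
    # Placeholder: in a real setting, train/evaluate a CNN here and return
    #   error_rate + lambda * feature_map_information_term.
    # A synthetic bowl-shaped surrogate keeps the sketch runnable.
    return (depth - 6) ** 2 + 0.01 * (filters - 64) ** 2 + (stride - 1) ** 2

x0 = np.array([4.0, 32.0, 2.0])  # initial depth, filter count, stride
result = minimize(evaluate_architecture, x0, method="Nelder-Mead",
                  options={"xatol": 0.5, "fatol": 1e-3, "maxiter": 200})
best = np.round(result.x).astype(int)  # snap the continuous simplex point to integers
print("suggested depth, filters, stride:", best)
```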

203 citations


Journal ArticleDOI
27 May 2017-Entropy
TL;DR: Experimental results demonstrate that the proposed epileptic seizure detection method can achieve a high average accuracy of 99.25%, indicating a powerful method in the detection and classification of epileptic seizures.
Abstract: Epileptic seizure detection is commonly implemented by expert clinicians with visual observation of electroencephalography (EEG) signals, which tends to be time consuming and sensitive to bias. Epileptic seizure detection in most previous research suffers from low power and unsuitability for processing large datasets. Therefore, a computerized epileptic seizure detection method is highly desirable to overcome these problems, expedite epilepsy research and aid medical professionals. In this work, we propose an automatic epilepsy diagnosis framework based on the combination of multi-domain feature extraction and nonlinear analysis of EEG signals. Firstly, EEG signals are pre-processed by using the wavelet threshold method to remove artifacts. We then extract representative features in the time domain, frequency domain, time-frequency domain and nonlinear analysis features based on information theory. These features are further extracted in five frequency sub-bands based on clinical interest, and the dimension of the original feature space is then reduced by using both a principal component analysis and an analysis of variance. Furthermore, the optimal combination of the extracted features is identified and evaluated via different classifiers for the epileptic seizure detection of EEG signals. Finally, the performance of the proposed method is investigated by using a public EEG database at the University Hospital Bonn, Germany. Experimental results demonstrate that the proposed epileptic seizure detection method can achieve a high average accuracy of 99.25%, indicating a powerful method for the detection and classification of epileptic seizures. The proposed seizure detection scheme is thus expected to relieve the burden on expert clinicians processing large amounts of data by visual observation and to speed up epilepsy diagnosis.
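The general shape of such a pipeline (feature matrix, ANOVA- and PCA-based reduction, classifier) can be sketched with scikit-learn. This is not the paper's implementation: the wavelet denoising and multi-domain feature extraction are represented by a random placeholder feature matrix, and the classifier choice is illustrative.

```python
# Sketch: dimensionality reduction (ANOVA F-test + PCA) followed by a
# classifier, mirroring the general shape of the described pipeline.
# The EEG feature matrix is random placeholder data, not the Bonn dataset.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 60))        # 200 EEG segments x 60 extracted features
y = rng.integers(0, 2, size=200)      # seizure / non-seizure labels (placeholder)

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=30),     # ANOVA-based feature selection
    PCA(n_components=10),             # principal component analysis
    SVC(kernel="rbf"),
)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```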

194 citations


Journal ArticleDOI
21 Dec 2017-Entropy
TL;DR: An abstract model of organisms as decision-makers with limited information-processing resources, which trade off utility maximization against computational costs measured by a relative entropy, is considered, in a similar fashion to thermodynamic systems undergoing isothermal transformations.
Abstract: Living organisms from single cells to humans need to adapt continuously to respond to changes in their environment. The process of behavioural adaptation can be thought of as improving decision-making performance according to some utility function. Here, we consider an abstract model of organisms as decision-makers with limited information-processing resources that trade off between maximization of utility and computational costs measured by a relative entropy, in a similar fashion to thermodynamic systems undergoing isothermal transformations. Such systems minimize the free energy to reach equilibrium states that balance internal energy and entropic cost. When there is a fast change in the environment, these systems evolve in a non-equilibrium fashion because they are unable to follow the path of equilibrium distributions. Here, we apply concepts from non-equilibrium thermodynamics to characterize decision-makers that adapt to changing environments under the assumption that the temporal evolution of the utility function is externally driven and does not depend on the decision-maker's action. This allows one to quantify performance loss due to imperfect adaptation in a general manner and, additionally, to find relations for decision-making similar to Crooks' fluctuation theorem and Jarzynski's equality. We provide simulations of several exemplary decision and inference problems in the discrete and continuous domains to illustrate the new relations.
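As a rough illustration of the utility/information trade-off described here, one common way to write it in the bounded-rationality literature is given below; sign conventions and notation vary, and the paper's exact formulation may differ.

```latex
% Free-energy trade-off for a bounded-rational decision-maker:
% choose the policy p(a) that balances expected utility against the
% relative-entropy cost of deviating from a prior policy p_0(a).
F[p] \;=\; \mathbb{E}_{p}\!\left[U(a)\right] \;-\; \frac{1}{\beta}\, D_{\mathrm{KL}}\!\left(p \,\|\, p_0\right),
\qquad
p^{*}(a) \;\propto\; p_0(a)\, e^{\beta U(a)}
```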

189 citations


Journal ArticleDOI
19 Apr 2017-Entropy
TL;DR: An improved LMD, based on the self-similarity of the roller bearing vibration signal and obtained by extending the left and right sides of the original signal to suppress its edge effect, is proposed and shown to effectively identify the different faults of the rolling bearing.
Abstract: Based on the combination of improved Local Mean Decomposition (LMD), Multi-scale Permutation Entropy (MPE) and Hidden Markov Model (HMM), the fault types of bearings are diagnosed. The improved LMD is proposed based on the self-similarity of the roller bearing vibration signal, extending the left and right sides of the original signal to suppress its edge effect. First, the vibration signals of the rolling bearing are decomposed into several product function (PF) components by the improved LMD. Then, the phase space reconstruction of PF1 is carried out by using the mutual information (MI) method and the false nearest neighbor (FNN) method to calculate the delay time and the embedding dimension, and the scale is then set to obtain the MPE of PF1. After that, the MPE features of the rolling bearings are extracted. Finally, the MPE features are used for HMM training and diagnosis. The experimental results show that the proposed method can effectively identify the different faults of the rolling bearing.
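A minimal sketch of multi-scale permutation entropy (coarse-graining followed by the entropy of ordinal patterns) is given below. The embedding dimension, delay and number of scales are illustrative defaults, not the parameters used in the paper, and the input is a placeholder signal rather than a bearing vibration record.

```python
# Sketch: multi-scale permutation entropy (MPE) of a 1-D signal.
# Coarse-grain the series at each scale, then compute the Shannon entropy
# of ordinal (permutation) patterns of the chosen embedding dimension.
import numpy as np
from math import factorial

def permutation_entropy(x, m=3, delay=1):
    n = len(x) - (m - 1) * delay
    patterns = [tuple(np.argsort(x[i:i + m * delay:delay])) for i in range(n)]
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p)) / np.log(factorial(m))  # normalized to [0, 1]

def multiscale_permutation_entropy(x, m=3, max_scale=5):
    out = []
    for s in range(1, max_scale + 1):
        # Coarse-graining: average consecutive, non-overlapping windows of length s.
        coarse = x[: len(x) // s * s].reshape(-1, s).mean(axis=1)
        out.append(permutation_entropy(coarse, m=m))
    return np.array(out)

signal = np.random.default_rng(1).normal(size=2000)  # placeholder vibration signal
print(multiscale_permutation_entropy(signal))
```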

128 citations


Journal ArticleDOI
29 Jun 2017-Entropy
TL;DR: This work presents a new measure of redundancy which measures the common change in surprisal shared between variables at the local or pointwise level, and shows how this redundancy measure can be used within the framework of the Partial Information Decomposition (PID) to give an intuitive decomposition of the multivariate mutual information into redundant, unique and synergistic contributions.
Abstract: The problem of how to properly quantify redundant information is an open question that has been the subject of much recent research. Redundant information refers to information about a target variable S that is common to two or more predictor variables Xi. It can be thought of as quantifying overlapping information content or similarities in the representation of S between the Xi. We present a new measure of redundancy which measures the common change in surprisal shared between variables at the local or pointwise level. We provide a game-theoretic operational definition of unique information, and use this to derive constraints which are used to obtain a maximum entropy distribution. Redundancy is then calculated from this maximum entropy distribution by counting only those local co-information terms which admit an unambiguous interpretation as redundant information. We show how this redundancy measure can be used within the framework of the Partial Information Decomposition (PID) to give an intuitive decomposition of the multivariate mutual information into redundant, unique and synergistic contributions. We compare our new measure to existing approaches over a range of example systems, including continuous Gaussian variables. Matlab code for the measure is provided, including all considered examples.

125 citations


Journal ArticleDOI
13 Sep 2017-Entropy
TL;DR: An automated method for the diagnosis of MI from ECG beats using the flexible analytic wavelet transform (FAWT) is proposed, which can be installed in the intensive care units of hospitals to aid clinicians in confirming their diagnosis.
Abstract: Myocardial infarction (MI) is a silent condition that irreversibly damages the heart muscles. It expands rapidly and, if not treated timely, continues to damage the heart muscles. An electrocardiogram (ECG) is generally used by clinicians to diagnose MI patients. Manual identification of the changes introduced by MI is a time-consuming and tedious task, and there is also a possibility of misinterpretation of the changes in the ECG. Therefore, a method for automatic diagnosis of MI using ECG beats with the flexible analytic wavelet transform (FAWT) is proposed in this work. First, the segmentation of ECG signals into beats is performed. Then, FAWT is applied to each ECG beat, which decomposes it into subband signals. Sample entropy (SEnt) is computed from these subband signals and fed to the random forest (RF), J48 decision tree, back propagation neural network (BPNN), and least-squares support vector machine (LS-SVM) classifiers to choose the highest performing one. We achieved the highest classification accuracy of 99.31% using the LS-SVM classifier. We also incorporated the Wilcoxon and Bhattacharya ranking methods and observed no improvement in performance. The proposed automated method can be installed in the intensive care units (ICUs) of hospitals to aid clinicians in confirming their diagnosis.
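The feature used here, sample entropy (SEnt), has a compact definition that the following sketch implements for a single sub-band signal. The template length m and tolerance factor r are conventional defaults, not necessarily the paper's settings, and the input is placeholder data rather than an ECG sub-band.

```python
# Sketch: sample entropy (SampEn) of a 1-D signal.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def count_matches(dim):
        # Build all overlapping templates of length `dim`.
        templates = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        # Chebyshev distance between every pair of templates.
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # Count ordered pairs (i != j) within tolerance r.
        return np.sum(d <= r) - len(templates)

    b = count_matches(m)       # matches of length m
    a = count_matches(m + 1)   # matches of length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

beat = np.random.default_rng(2).normal(size=400)  # placeholder sub-band signal
print("SampEn:", sample_entropy(beat))
```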

117 citations


Journal ArticleDOI
11 Oct 2017-Entropy
TL;DR: A short survey on the concept of contact Hamiltonian dynamics and its use in several areas of physics, namely reversible and irreversible thermodynamics, statistical physics and classical mechanics, and insights into possible future directions are given.
Abstract: We give a short survey on the concept of contact Hamiltonian dynamics and its use in several areas of physics, namely reversible and irreversible thermodynamics, statistical physics and classical mechanics. Some relevant examples are provided along the way. We conclude by giving insights into possible future directions.

115 citations


Journal ArticleDOI
23 Aug 2017-Entropy
TL;DR: The original motivation of the development of long memory and Mandelbrot’s influence on this fascinating field are discussed and the sometimes contrasting approaches to long memory in different scientific communities are elucidated.
Abstract: Long memory plays an important role in many fields by determining the behaviour and predictability of systems; for instance, climate, hydrology, finance, networks and DNA sequencing. In particular, it is important to test if a process is exhibiting long memory since that impacts the accuracy and confidence with which one may predict future events on the basis of a small amount of historical data. A major force in the development and study of long memory was the late Benoit B. Mandelbrot. Here, we discuss the original motivation of the development of long memory and Mandelbrot’s influence on this fascinating field. We will also elucidate the sometimes contrasting approaches to long memory in different scientific communities.

Journal ArticleDOI
14 Jul 2017-Entropy
TL;DR: A family of estimators of mixture entropy based on a pairwise distance function between mixture components is proposed, and this estimator class is shown to have many attractive properties and to be very useful in optimization problems involving maximization/minimization of entropy and mutual information, such as MaxEnt and rate distortion problems.
Abstract: Mixture distributions arise in many parametric and non-parametric settings—for example, in Gaussian mixture models and in non-parametric estimation. It is often necessary to compute the entropy of a mixture, but, in most cases, this quantity has no closed-form expression, making some form of approximation necessary. We propose a family of estimators based on a pairwise distance function between mixture components, and show that this estimator class has many attractive properties. For many distributions of interest, the proposed estimators are efficient to compute, differentiable in the mixture parameters, and become exact when the mixture components are clustered. We prove this family includes lower and upper bounds on the mixture entropy. The Chernoff α-divergence gives a lower bound when chosen as the distance function, with the Bhattacharyya distance providing the tightest lower bound for components that are symmetric and members of a location family. The Kullback–Leibler divergence gives an upper bound when used as the distance function. We provide closed-form expressions of these bounds for mixtures of Gaussians, and discuss their applications to the estimation of mutual information. We then use numeric simulations to demonstrate that our bounds are significantly tighter than well-known existing bounds. This estimator class is very useful in optimization problems involving maximization/minimization of entropy and mutual information, such as MaxEnt and rate distortion problems.
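Under my reading of the abstract, the pairwise-distance estimator takes the form H_hat = sum_i w_i H(p_i) - sum_i w_i log sum_j w_j exp(-D(p_i||p_j)); the sketch below evaluates this for a 1-D Gaussian mixture with the KL divergence as the distance, which per the abstract yields an upper bound. Component parameters are illustrative.

```python
# Sketch: pairwise-distance entropy bound for a 1-D Gaussian mixture,
# using the KL divergence as the pairwise distance (upper bound).
import numpy as np

w = np.array([0.5, 0.3, 0.2])      # mixture weights
mu = np.array([0.0, 2.0, 5.0])     # component means
sigma = np.array([1.0, 0.5, 1.5])  # component standard deviations

def gauss_entropy(s):
    return 0.5 * np.log(2.0 * np.pi * np.e * s ** 2)

def kl_gauss(m1, s1, m2, s2):
    return np.log(s2 / s1) + (s1 ** 2 + (m1 - m2) ** 2) / (2.0 * s2 ** 2) - 0.5

# Pairwise KL matrix D[i, j] = KL(p_i || p_j).
D = np.array([[kl_gauss(mu[i], sigma[i], mu[j], sigma[j])
               for j in range(len(w))] for i in range(len(w))])

# H_hat = sum_i w_i H(p_i) - sum_i w_i * log( sum_j w_j * exp(-D[i, j]) )
h_components = np.dot(w, gauss_entropy(sigma))
h_upper = h_components - np.dot(w, np.log(np.exp(-D) @ w))
print("pairwise-KL entropy bound (nats):", h_upper)
```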

Journal ArticleDOI
29 Aug 2017-Entropy
TL;DR: The aim of this paper is to determine the number of clusters of a dataset in model-based clustering by using an Analytic Hierarchy Process (AHP) built on the information criteria Akaike’s Information Criterion (AIC), Approximate Weight of Evidence (AWE), Bayesian Information Criterion (BIC), Classification Likelihood Criterion (CLC), and Kullback Information Criterion (KIC).
Abstract: To determine the number of clusters in clustering analysis, which has a broad range of applications in fields such as physics, chemistry, biology, engineering and economics, many methods have been proposed in the literature. The aim of this paper is to determine the number of clusters of a dataset in model-based clustering by using an Analytic Hierarchy Process (AHP). In this study, the AHP model has been created by using the information criteria Akaike’s Information Criterion (AIC), Approximate Weight of Evidence (AWE), Bayesian Information Criterion (BIC), Classification Likelihood Criterion (CLC), and Kullback Information Criterion (KIC). The performance of the proposed approach has been tested on common real and synthetic datasets. The proposed approach based on the corresponding information criteria has produced accurate results, and these results have been found to be more accurate than those obtained from the individual information criteria.
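Two of the listed criteria (AIC and BIC) are directly available for Gaussian-mixture model-based clustering in scikit-learn, as sketched below; the AHP step that aggregates several criteria into a single ranking is not reproduced here.

```python
# Sketch: computing information criteria (AIC, BIC) for Gaussian-mixture
# model-based clustering over candidate numbers of clusters.
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

for k in range(1, 8):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    print(f"k={k}: AIC={gmm.aic(X):.1f}  BIC={gmm.bic(X):.1f}")
```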

Journal ArticleDOI
23 Dec 2017-Entropy
TL;DR: This article proposes a fog computing framework enabling autonomous management and orchestration functionalities in 5G-enabled smart cities and shows that the proposed framework achieves a substantial reduction in network bandwidth usage and in latency when compared to centralized cloud solutions.
Abstract: Fog computing extends the cloud computing paradigm by placing resources close to the edges of the network to deal with the upcoming growth of connected devices. Smart city applications, such as health monitoring and predictive maintenance, will introduce a new set of stringent requirements, such as low latency, since resources can be requested on-demand simultaneously by multiple devices at different locations. It is then necessary to adapt existing network technologies to future needs and design new architectural concepts to help meet these strict requirements. This article proposes a fog computing framework enabling autonomous management and orchestration functionalities in 5G-enabled smart cities. Our approach follows the guidelines of the European Telecommunications Standards Institute (ETSI) NFV MANO architecture extending it with additional software components. The contribution of our work is its fully-integrated fog node management system alongside the foreseen application layer Peer-to-Peer (P2P) fog protocol based on the Open Shortest Path First (OSPF) routing protocol for the exchange of application service provisioning information between fog nodes. Evaluations of an anomaly detection use case based on an air monitoring application are presented. Our results show that the proposed framework achieves a substantial reduction in network bandwidth usage and in latency when compared to centralized cloud solutions.

Journal ArticleDOI
01 Dec 2017-Entropy
TL;DR: This work introduces a novel context-aware privacy framework called GAP, which leverages recent advancements in generative adversarial networks to allow the data holder to learn privatization schemes from the dataset itself, and demonstrates that the framework can be easily applied in practice, even in the absence of dataset statistics.
Abstract: Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. On the one hand, context-free privacy solutions, such as differential privacy, provide strong privacy guarantees, but often lead to a significant reduction in utility. On the other hand, context-aware privacy solutions, such as information theoretic privacy, achieve an improved privacy-utility tradeoff, but assume that the data holder has access to dataset statistics. We circumvent these limitations by introducing a novel context-aware privacy framework called generative adversarial privacy (GAP). GAP leverages recent advancements in generative adversarial networks (GANs) to allow the data holder to learn privatization schemes from the dataset itself. Under GAP, learning the privacy mechanism is formulated as a constrained minimax game between two players: a privatizer that sanitizes the dataset in a way that limits the risk of inference attacks on the individuals’ private variables, and an adversary that tries to infer the private variables from the sanitized dataset. To evaluate GAP’s performance, we investigate two simple (yet canonical) statistical dataset models: (a) the binary data model; and (b) the binary Gaussian mixture model. For both models, we derive game-theoretically optimal minimax privacy mechanisms, and show that the privacy mechanisms learned from data (in a generative adversarial fashion) match the theoretically optimal ones. This demonstrates that our framework can be easily applied in practice, even in the absence of dataset statistics.

Journal ArticleDOI
22 Nov 2017-Entropy
TL;DR: This paper derives alternative upper bounds on the mutual information and extends them to the case of two discrete random variables; normalized mutual information (NMI) measures are then obtained from these bounds, emphasizing the use of least upper bounds.
Abstract: Starting with a new formulation for the mutual information (MI) between a pair of events, this paper derives alternative upper bounds and extends those to the case of two discrete random variables. Normalized mutual information (NMI) measures are then obtained from those bounds, emphasizing the use of least upper bounds. Conditional NMI measures are also derived for three different events and three different random variables. Since the MI formulation for a pair of events is always nonnegative, it can properly be extended to include weighted MI and NMI measures for pairs of events or for random variables that are analogous to the well-known weighted entropy. This weighted MI is generalized to the case of continuous random variables. Such weighted measures have the advantage over previously proposed measures of always being nonnegative. A simple transformation is derived for the NMI, such that the transformed measures have the value-validity property necessary for making various appropriate comparisons between values of those measures. A numerical example is provided.
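A small numerical illustration of the idea of normalizing MI by a least upper bound is given below, using min(H(X), H(Y)), which is one valid least upper bound on I(X;Y) for discrete variables; the paper's specific bounds and weighted measures are not reproduced.

```python
# Sketch: mutual information of two discrete variables from a joint
# probability table, normalized by min(H(X), H(Y)).
import numpy as np

P = np.array([[0.30, 0.10],
              [0.05, 0.55]])           # joint distribution p(x, y)
px, py = P.sum(axis=1), P.sum(axis=0)  # marginals

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

mi = sum(P[i, j] * np.log2(P[i, j] / (px[i] * py[j]))
         for i in range(P.shape[0]) for j in range(P.shape[1]) if P[i, j] > 0)
nmi = mi / min(entropy(px), entropy(py))
print(f"I(X;Y) = {mi:.4f} bits, NMI = {nmi:.4f}")
```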

Journal ArticleDOI
26 Apr 2017-Entropy
TL;DR: It is argued that systems have a particular causal capacity, and that different descriptions of those systems take advantage of that capacity to various degrees, and this provides a general framework for understanding how the causal structure of some systems cannot be fully captured by even the most detailed microscale description.
Abstract: The causal structure of any system can be analyzed at a multitude of spatial and temporal scales. It has long been thought that while higher scale (macro) descriptions may be useful to observers, they are at best a compressed description and at worst leave out critical information and causal relationships. However, recent research applying information theory to causal analysis has shown that the causal structure of some systems can actually come into focus and be more informative at a macroscale. That is, a macroscale description of a system (a map) can be more informative than a fully detailed microscale description of the system (the territory). This has been called “causal emergence.” While causal emergence may at first seem counterintuitive, this paper grounds the phenomenon in a classic concept from information theory: Shannon’s discovery of the channel capacity. I argue that systems have a particular causal capacity, and that different descriptions of those systems take advantage of that capacity to various degrees. For some systems, only macroscale descriptions use the full causal capacity. These macroscales can either be coarse-grains, or may leave variables and states out of the model (exogenous, or “black boxed”) in various ways, which can improve the efficacy and informativeness via the same mathematical principles by which error-correcting codes take advantage of an information channel’s capacity. The causal capacity of a system can approach the channel capacity as more and different kinds of macroscales are considered. Ultimately, this provides a general framework for understanding how the causal structure of some systems cannot be fully captured by even the most detailed microscale description.
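As a toy illustration of the kind of quantity involved (based on the earlier causal-emergence literature this paper builds on, not on this paper's own formalism), effective information of a transition matrix can be computed as the mutual information between a uniform intervention on the current state and the resulting next state; in the toy example below, the coarse-grained macro description scores higher than the micro one.

```python
# Sketch: effective information (EI) of a transition probability matrix,
# i.e., the mutual information between a uniform (maximum-entropy)
# intervention on the current state and the resulting next state.
# Micro/macro matrices below are illustrative toy examples only.
import numpy as np

def effective_information(T):
    n = T.shape[0]
    p_joint = T / n                    # uniform intervention over current states
    p_next = p_joint.sum(axis=0)       # resulting distribution over next states
    mask = p_joint > 0
    denom = (1.0 / n) * np.tile(p_next, (n, 1))
    return np.sum(p_joint[mask] * np.log2(p_joint[mask] / denom[mask]))

# Micro: states 0-2 transition uniformly among themselves, state 3 is fixed.
T_micro = np.array([[1/3, 1/3, 1/3, 0.0],
                    [1/3, 1/3, 1/3, 0.0],
                    [1/3, 1/3, 1/3, 0.0],
                    [0.0, 0.0, 0.0, 1.0]])
# Macro: group {0,1,2} -> A, {3} -> B; the macro dynamics are deterministic.
T_macro = np.array([[1.0, 0.0],
                    [0.0, 1.0]])

print("EI micro:", effective_information(T_micro))  # ~0.81 bits
print("EI macro:", effective_information(T_macro))  # 1.0 bit
```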

Journal ArticleDOI
27 Dec 2017-Entropy
TL;DR: A novel three-dimensional chaotic system with three nonlinearities, exhibiting multiple attractors for different initial values, is reported, and an S-Box is developed from it for cryptographic operations.
Abstract: This paper reports on a novel three-dimensional chaotic system with three nonlinearities. The system has one stable equilibrium, two stable equilibria and one saddle node, or two saddle foci and one saddle node for different parameters. One salient feature of this novel system is its multiple attractors caused by different initial values. With the change of parameters, the system experiences mono-stability, bi-stability, mono-periodicity, bi-periodicity, one strange attractor, and two coexisting strange attractors. The complex dynamic behaviors of the system are revealed by analyzing the corresponding equilibria and using the numerical simulation method. In addition, an electronic circuit is given for implementing the chaotic attractors of the system. Using the new chaotic system, an S-Box is developed for cryptographic operations. Moreover, we test the performance of this produced S-Box and compare it with existing S-Box studies.

Journal ArticleDOI
03 Mar 2017-Entropy
TL;DR: The complexity of multivariate electroencephalogram (EEG) signals in different frequency scales is analyzed for the analysis and classification of focal and non-focal EEG signals and the proposed multivariate sub-band entropy measure has been built based on tunable-Q wavelet transform (TQWT).
Abstract: This paper analyses the complexity of multivariate electroencephalogram (EEG) signals in different frequency scales for the analysis and classification of focal and non-focal EEG signals. The proposed multivariate sub-band entropy measure has been built based on the tunable-Q wavelet transform (TQWT). In the field of multivariate entropy analysis, recent studies have performed analysis of biomedical signals with a multi-level filtering approach. This approach has become a useful tool for measuring the inherent complexity of biomedical signals. However, these methods may not be well suited for quantifying the complexity of the individual multivariate sub-bands of the analysed signal. In the present study, we have tried to resolve this difficulty by employing TQWT for analysing the sub-band signals of the analysed multivariate signal. It should be noted that a higher value of the Q-factor is suitable for analysing signals with an oscillatory nature, whereas a lower value of the Q-factor is suitable for analysing signals with non-oscillatory transients. Moreover, with an increased number of sub-bands and a higher value of the Q-factor, a reasonably good resolution can be achieved simultaneously in the high and low frequency regions of the considered signals. Finally, we have applied multivariate fuzzy entropy (mvFE) to the multivariate sub-band signals obtained from the analysed signal. The proposed Q-based multivariate sub-band entropy has been studied on the publicly available bivariate Bern Barcelona focal and non-focal EEG signals database to investigate the statistical significance of the proposed features in different time segmented signals. Finally, the features are fed to random forest and least squares support vector machine (LS-SVM) classifiers to select the best classifier. Our method has achieved the highest classification accuracy of 84.67% in classifying focal and non-focal EEG signals with the LS-SVM classifier. The proposed multivariate sub-band fuzzy entropy can also be applied to measure the complexity of other multivariate biomedical signals.

Journal ArticleDOI
18 Aug 2017-Entropy
TL;DR: This communication addresses a comparison of newly presented non-integer order derivatives with and without singular kernel, namely the Caputo–Fabrizio (CF) ∂^β/∂t^β and Atangana–Baleanu (AB) ∂^α/∂t^α fractional derivatives.
Abstract: This communication addresses a comparison of newly presented non-integer order derivatives with and without singular kernel, namely the Michele Caputo–Mauro Fabrizio (CF) ∂^β/∂t^β and Atangana–Baleanu (AB) ∂^α/∂t^α fractional derivatives. For this purpose, second grade fluid flow with combined gradients of mass concentration and temperature distribution over a vertical flat plate is considered. The problem is first written in non-dimensional form and then, based on the AB and CF fractional derivatives, it is developed in fractional form; using the Laplace transform technique, exact solutions are established for both the AB and CF cases. They are then expressed in terms of the newly defined M-function M_q^p(z) and the generalized hypergeometric function _pΨ_q(z). The obtained exact solutions are plotted graphically for several pertinent parameters and an interesting comparison is made between the AB and CF results, with various similarities and differences.
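For reference, the standard forms of these two non-singular-kernel derivatives as usually written in the literature are given below (normalization functions M(β) and B(α) as customary; the paper's exact notation may differ slightly).

```latex
% Caputo-Fabrizio derivative (exponential, non-singular kernel):
{}^{CF}\!D_t^{\beta} f(t) \;=\; \frac{M(\beta)}{1-\beta}\int_0^{t} f'(\tau)\,
  \exp\!\left(-\frac{\beta\,(t-\tau)}{1-\beta}\right) d\tau , \qquad 0<\beta<1

% Atangana-Baleanu derivative (Mittag-Leffler kernel, Caputo sense):
{}^{AB}\!D_t^{\alpha} f(t) \;=\; \frac{B(\alpha)}{1-\alpha}\int_0^{t} f'(\tau)\,
  E_{\alpha}\!\left(-\frac{\alpha\,(t-\tau)^{\alpha}}{1-\alpha}\right) d\tau , \qquad 0<\alpha<1
```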

Journal ArticleDOI
01 Aug 2017-Entropy
TL;DR: The results show that the proposed model of landslide hazard susceptibility can help to produce more objective and accurate landslide susceptibility maps, which not only take advantage of the information from the original data, but also reflect an expert’s knowledge and the opinions of decision-makers.
Abstract: Landslides are a common type of natural disaster in mountainous areas. As a result of the comprehensive influences of geology, geomorphology and climatic conditions, the susceptibility to landslide hazards in mountainous areas shows obvious regionalism. The evaluation of regional landslide susceptibility can help reduce the risk to the lives of mountain residents. In this paper, the Shannon entropy theory, a fuzzy comprehensive method and an analytic hierarchy process (AHP) have been used to demonstrate a variable type of weighting for landslide susceptibility evaluation modeling, combining subjective and objective weights. Further, based on a single factor sensitivity analysis, we established a strict criterion for landslide susceptibility assessments. Eight influencing factors have been selected for the study of Zhen’an County, Shan’xi Province: the lithology, relief amplitude, slope, aspect, slope morphology, altitude, annual mean rainfall and distance to the river. In order to verify the advantages of the proposed method, the landslide index, prediction accuracy P, the R-index and the area under the curve were used in this paper. The results show that the proposed model of landslide hazard susceptibility can help to produce more objective and accurate landslide susceptibility maps, which not only take advantage of the information from the original data, but also reflect an expert’s knowledge and the opinions of decision-makers.
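The objective (Shannon-entropy) weighting step used in this kind of combined subjective/objective weighting scheme can be sketched as below; the fuzzy comprehensive evaluation, the AHP pairwise comparisons and the actual landslide factors are not reproduced, and both the decision matrix and the multiplicative combination of weights are illustrative assumptions.

```python
# Sketch: Shannon-entropy (objective) weighting of evaluation factors,
# the step typically combined with AHP (subjective) weights.
import numpy as np

# rows = evaluation units (map cells), cols = influencing factors (illustrative)
X = np.array([[0.2, 30.0, 12.0],
              [0.5, 45.0,  3.0],
              [0.9, 10.0,  8.0],
              [0.4, 25.0, 15.0]])

P = X / X.sum(axis=0)                          # normalize each factor column
n = X.shape[0]
with np.errstate(divide="ignore", invalid="ignore"):
    logP = np.where(P > 0, np.log(P), 0.0)
e = -np.sum(P * logP, axis=0) / np.log(n)      # entropy of each factor, in [0, 1]
w_entropy = (1.0 - e) / np.sum(1.0 - e)        # objective entropy weights

w_ahp = np.array([0.5, 0.3, 0.2])              # illustrative subjective AHP weights
w_combined = (w_entropy * w_ahp) / np.sum(w_entropy * w_ahp)  # one common combination
print("entropy weights:", w_entropy)
print("combined weights:", w_combined)
```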

Journal ArticleDOI
08 Aug 2017-Entropy
TL;DR: In this paper, the exact expressions of the information transfer, as well as redundant and synergistic transfer, for coupled Gaussian processes observed at multiple temporal scales are derived using the theory of state space models.
Abstract: Exploiting the theory of state space models, we derive the exact expressions of the information transfer, as well as redundant and synergistic transfer, for coupled Gaussian processes observed at multiple temporal scales. All of the terms, constituting the frameworks known as interaction information decomposition and partial information decomposition, can thus be analytically obtained for different time scales from the parameters of the VAR model that fits the processes. We report the application of the proposed methodology firstly to benchmark Gaussian systems, showing that this class of systems may generate patterns of information decomposition characterized by prevalently redundant or synergistic information transfer persisting across multiple time scales or even by the alternating prevalence of redundant and synergistic source interaction depending on the time scale. Then, we apply our method to an important topic in neuroscience, i.e., the detection of causal interactions in human epilepsy networks, for which we show the relevance of partial information decomposition to the detection of multiscale information transfer spreading from the seizure onset zone.

Journal ArticleDOI
14 Apr 2017-Entropy
TL;DR: It is argued that the most compelling account of the relationship between life and mind treats them as strongly continuous, and that this continuity is based on particular concepts of life (autopoiesis and adaptivity) and mind (basic and non-semantic).
Abstract: This paper considers questions about continuity and discontinuity between life and mind. It begins by examining such questions from the perspective of the free energy principle (FEP). The FEP is becoming increasingly influential in neuroscience and cognitive science. It says that organisms act to maintain themselves in their expected biological and cognitive states, and that they can do so only by minimizing their free energy given that the long-term average of free energy is entropy. The paper then argues that there is no singular interpretation of the FEP for thinking about the relation between life and mind. Some FEP formulations express what we call an independence view of life and mind. One independence view is a cognitivist view of the FEP. It turns on information processing with semantic content, thus restricting the range of systems capable of exhibiting mentality. Other independence views exemplify what we call an overly generous non-cognitivist view of the FEP, and these appear to go in the opposite direction. That is, they imply that mentality is nearly everywhere. The paper proceeds to argue that non-cognitivist FEP, and its implications for thinking about the relation between life and mind, can be usefully constrained by key ideas in recent enactive approaches to cognitive science. We conclude that the most compelling account of the relationship between life and mind treats them as strongly continuous, and that this continuity is based on particular concepts of life (autopoiesis and adaptivity) and mind (basic and non-semantic).

Journal ArticleDOI
20 Jul 2017-Entropy
TL;DR: Experiments conducted on two real-world datasets show that the proposed ensemble learning method for predicting missing QoS in 5G network environments can produce superior prediction accuracy.
Abstract: Mobile service selection is an important but challenging problem in service and mobile computing. Quality of service (QoS) prediction is a critical step in service selection in 5G network environments. Traditional methods, such as collaborative filtering (CF), suffer from a series of defects, such as failing to handle data sparsity. In mobile network environments, abnormal QoS data are likely to result in inferior prediction accuracy. Unfortunately, these problems have not attracted enough attention, especially in a mixed mobile network environment with different network configurations, generations, or types. An ensemble learning method for predicting missing QoS in 5G network environments is proposed in this paper. There are two key principles: one is the newly proposed similarity computation method for identifying similar neighbors; the other is the extended ensemble learning model for discovering and filtering fake neighbors from the preliminary neighbor set. Moreover, three prediction models are also proposed: two individual models and one combination model. They are used for utilizing the similar neighbors of users and services, respectively. Experiments conducted on two real-world datasets show that our approaches produce superior prediction accuracy.

Journal ArticleDOI
28 Apr 2017-Entropy
TL;DR: A new version of permutation entropy, which is interpreted as distance to white noise, has a scale similar to the well-known χ² distributions and can be supported by a statistical model.
Abstract: Permutation entropy and order patterns in an EEG signal have been applied by several authors to study sleep, anesthesia, and epileptic absences. Here, we discuss a new version of permutation entropy, which is interpreted as distance to white noise. It has a scale similar to the well-known χ² distributions and can be supported by a statistical model. Critical values for significance are provided. Distance to white noise is used as a parameter which measures depth of sleep, where the vigilant awake state of the human EEG is interpreted as “almost white noise”. Classification of sleep stages from EEG data usually relies on delta waves and graphic elements, which can be seen on a macroscale of several seconds. The distance to white noise can anticipate such emerging waves before they become apparent, evaluating invisible tendencies of variations within 40 milliseconds. Data segments of 30 s of high-resolution EEG provide a reliable classification. Application to the diagnosis of sleep disorders is indicated.
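The sketch below computes order-pattern frequencies of a signal and their squared deviation from the uniform frequencies expected for white noise, to illustrate the kind of "distance to white noise" quantity discussed; the paper's exact statistic, scaling and critical values are not reproduced.

```python
# Sketch: order-pattern (permutation) frequencies of a signal and their
# squared deviation from the uniform distribution expected for white noise.
import numpy as np
from itertools import permutations
from math import factorial

def pattern_frequencies(x, m=3, delay=1):
    n = len(x) - (m - 1) * delay
    counts = {p: 0 for p in permutations(range(m))}
    for i in range(n):
        counts[tuple(np.argsort(x[i:i + m * delay:delay]))] += 1
    return np.array(list(counts.values()), dtype=float) / n

def distance_to_white_noise(x, m=3):
    q = pattern_frequencies(x, m)
    uniform = 1.0 / factorial(m)          # white noise: all patterns equally likely
    return np.sum((q - uniform) ** 2)

rng = np.random.default_rng(3)
print("white noise :", distance_to_white_noise(rng.normal(size=3000)))
print("slow wave   :", distance_to_white_noise(np.sin(np.linspace(0, 20, 3000))))
```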

Journal ArticleDOI
17 Jun 2017-Entropy
TL;DR: This work offers an approach that automatically explores word- and character-level features: a recurrent neural network using bidirectional long short-term memory (LSTM) with Conditional Random Fields decoding (LSTM-CRF), which outperforms the best system in the DDI2013 challenge.
Abstract: Drug-Named Entity Recognition (DNER) for biomedical literature is a fundamental facilitator of Information Extraction. For this reason, the DDIExtraction2011 (DDI2011) and DDIExtraction2013 (DDI2013) challenges introduced a task aiming at the recognition of drug names. State-of-the-art DNER approaches heavily rely on hand-engineered features and domain-specific knowledge which are difficult to collect and define. Therefore, we offer an approach that automatically explores word- and character-level features: a recurrent neural network using bidirectional long short-term memory (LSTM) with Conditional Random Fields decoding (LSTM-CRF). Two kinds of word representations are used in this work: word embeddings, which are trained from a large amount of text, and character-based representations, which can capture the orthographic features of words. Experimental results on the DDI2011 and DDI2013 datasets show the effectiveness of the proposed LSTM-CRF method. Our method outperforms the best system in the DDI2013 challenge.

Journal ArticleDOI
19 Dec 2017-Entropy
TL;DR: A three-dimensional cancer model is studied using the Caputo-Fabrizio-Caputo derivative and the new fractional derivative with Mittag-Leffler kernel in the Liouville-Caputo sense, with special solutions obtained via an iterative Laplace transform scheme, the Sumudu-Picard integration method and the Adams-Moulton rule, yielding novel chaotic attractors with total order less than three.
Abstract: In this paper, a three-dimensional cancer model was considered using the Caputo-Fabrizio-Caputo and the new fractional derivative with Mittag-Leffler kernel in Liouville-Caputo sense. Special solutions using an iterative scheme via Laplace transform, Sumudu-Picard integration method and Adams-Moulton rule were obtained. We studied the uniqueness and existence of the solutions. Novel chaotic attractors with total order less than three are obtained.

Journal ArticleDOI
14 Jun 2017-Entropy
TL;DR: Three parallel corpora, encompassing ca. 450 million words in 1916 texts and 1259 languages, are used to tackle major conceptual and practical problems of word entropy estimation; word entropies display relatively narrow, unimodal distributions, and unigram entropies are strongly linearly related to entropy rates.
Abstract: The choice associated with words is a fundamental property of natural languages. It lies at the heart of quantitative linguistics, computational linguistics and language sciences more generally. Information theory gives us tools at hand to measure precisely the average amount of choice associated with words: the word entropy. Here, we use three parallel corpora, encompassing ca. 450 million words in 1916 texts and 1259 languages, to tackle some of the major conceptual and practical problems of word entropy estimation: dependence on text size, register, style and estimation method, as well as non-independence of words in co-text. We present two main findings: Firstly, word entropies display relatively narrow, unimodal distributions. There is no language in our sample with a unigram entropy of less than six bits/word. We argue that this is in line with information-theoretic models of communication. Languages are held in a narrow range by two fundamental pressures: word learnability and word expressivity, with a potential bias towards expressivity. Secondly, there is a strong linear relationship between unigram entropies and entropy rates. The entropy difference between words with and without co-textual information is narrowly distributed around ca. three bits/word. In other words, knowing the preceding text reduces the uncertainty of words by roughly the same amount across languages of the world.
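The quantity being estimated can be illustrated with a minimal plug-in (maximum-likelihood) unigram entropy computed from word frequencies, as below; the paper relies on more careful estimators to correct for text-size dependence, which this sketch does not reproduce.

```python
# Sketch: plug-in (maximum-likelihood) unigram word entropy in bits/word.
from collections import Counter
from math import log2

text = "the cat sat on the mat and the dog sat on the rug"
counts = Counter(text.split())
total = sum(counts.values())

unigram_entropy = -sum((c / total) * log2(c / total) for c in counts.values())
print(f"{unigram_entropy:.3f} bits/word over {len(counts)} word types")
```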

Journal ArticleDOI
09 Sep 2017-Entropy
TL;DR: A theoretical and mathematical model is presented to determine the entropy generation in electro-kinetically modulated peristaltic propulsion of a magnetized nanofluid flowing through a microchannel with Joule heating.
Abstract: A theoretical and mathematical model is presented to determine the entropy generation in electro-kinetically modulated peristaltic propulsion of a magnetized nanofluid flowing through a microchannel with Joule heating. The mathematical modeling is based on the energy, momentum, continuity, and entropy equations in the Cartesian coordinate system. The effects of viscous dissipation, heat absorption, magnetic field, and electrokinetic body force are also taken into account. The electric field terms are helpful for modeling the electrical potential by means of the Poisson–Boltzmann equation, the ionic Nernst–Planck equation, and the Debye length approximation. A perturbation method has been applied to solve the coupled nonlinear partial differential equations, and a series solution is obtained up to second order. The physical behavior of all the governing parameters is discussed for the pressure rise, velocity profile, entropy profile, and temperature profile.

Journal ArticleDOI
18 Aug 2017-Entropy
TL;DR: An overview of recent research efforts on alternative approaches for securing IoT wireless communications at the physical layer, specifically the key topics of key generation and physical layer encryption are provided.
Abstract: The security of the Internet of Things (IoT) is receiving considerable interest as the low power constraints and complexity features of many IoT devices are limiting the use of conventional cryptographic techniques. This article provides an overview of recent research efforts on alternative approaches for securing IoT wireless communications at the physical layer, specifically the key topics of key generation and physical layer encryption. These schemes are lightweight and implementable, and thus offer practical solutions for providing effective IoT wireless security. Future research to make IoT-based physical layer security more robust and pervasive is also covered.