Showing papers on "Entropy (information theory) published in 2016"


Journal ArticleDOI
TL;DR: Fixed-to-fixed length, invertible, and low complexity encoders and decoders based on constant composition and arithmetic coding are presented; the encoder achieves the maximum rate, namely the entropy of the desired distribution, asymptotically in the blocklength.
Abstract: Distribution matching transforms independent and Bernoulli(1/2) distributed input bits into a sequence of output symbols with a desired distribution. Fixed-to-fixed length, invertible, and low complexity encoders and decoders based on constant composition and arithmetic coding are presented. The encoder achieves the maximum rate, namely, the entropy of the desired distribution, asymptotically in the blocklength. Furthermore, the normalized divergence of the encoder output and the desired distribution goes to zero in the blocklength.
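
As an illustrative check (not the paper's encoder), the rate of a fixed-to-fixed constant-composition code is log2 of a multinomial coefficient divided by the blocklength, and this rate approaches the entropy of the target distribution as the blocklength grows; the target distribution below is an arbitrary assumption.

```python
# Hedged sketch: rate of a constant-composition code vs. the entropy of the target
# distribution. Not the paper's arithmetic-coding encoder, only the rate calculation.
import math

def entropy_bits(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

def constant_composition_rate(p, n):
    counts = [round(x * n) for x in p]      # composition n_i ~ n * p_i
    counts[-1] = n - sum(counts[:-1])       # force the counts to sum to n
    log2_multinomial = (math.lgamma(n + 1) - sum(math.lgamma(c + 1) for c in counts)) / math.log(2)
    return log2_multinomial / n             # bits per output symbol

p = [0.7, 0.2, 0.1]                         # assumed target distribution
for n in (100, 1000, 10000):
    print(n, round(constant_composition_rate(p, n), 4), "vs H(p) =", round(entropy_bits(p), 4))
```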

510 citations


Journal ArticleDOI
TL;DR: A new method, termed dispersion entropy (DE), is introduced to quantify the regularity of time series, and insight is gained into the dependency of DE on several straightforward signal-processing concepts via a set of synthetic time series.
Abstract: One of the most powerful tools to assess the dynamical characteristics of time series is entropy. Sample entropy (SE), though powerful, is not fast enough, especially for long signals. Permutation entropy (PE), as a broadly used irregularity indicator, considers only the order of the amplitude values and hence some information regarding the amplitudes may be discarded. To tackle these problems, we introduce a new method, termed dispersion entropy (DE), to quantify the regularity of time series. We gain insight into the dependency of DE on several straightforward signal-processing concepts via a set of synthetic time series. The results show that DE, unlike PE, can detect the noise bandwidth and simultaneous frequency and amplitude change. We also apply DE to three publicly available real datasets. The simulations on real-valued signals show that the DE method considerably outperforms PE in discriminating different groups of each dataset. In addition, the computation time of DE is significantly less than that of SE and PE.
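
A minimal sketch of the dispersion entropy recipe described above (normal-CDF mapping to c classes, embedding dimension m, Shannon entropy of the dispersion-pattern frequencies normalized by log(c^m)); the parameter values are typical defaults, not necessarily those used in the paper.

```python
# Minimal dispersion entropy (DE) sketch; c, m and delay are assumed defaults.
import numpy as np
from collections import Counter
from scipy.stats import norm

def dispersion_entropy(x, c=6, m=2, delay=1):
    x = np.asarray(x, dtype=float)
    y = norm.cdf(x, loc=x.mean(), scale=x.std())           # map samples into (0, 1)
    z = np.clip(np.round(c * y + 0.5).astype(int), 1, c)   # assign classes 1..c
    patterns = [tuple(z[i + np.arange(m) * delay]) for i in range(len(z) - (m - 1) * delay)]
    freqs = np.array(list(Counter(patterns).values()), dtype=float)
    p = freqs / freqs.sum()
    return -np.sum(p * np.log(p)) / np.log(c ** m)         # normalized DE in [0, 1]

rng = np.random.default_rng(0)
print(dispersion_entropy(rng.normal(size=2000)))             # white noise: close to 1
print(dispersion_entropy(np.sin(0.05 * np.arange(2000))))    # regular signal: much lower
```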

429 citations


Journal ArticleDOI
TL;DR: This is the first known application of combined sample entropy and SBPM to battery health prognosis and the proposed approach allows for an analytical integration of temperature effects.
Abstract: Battery health monitoring and management is of extreme importance for the performance and cost of electric vehicles. This paper is concerned with machine-learning-enabled battery state-of-health (SOH) indication and prognosis. The sample entropy of short voltage sequence is used as an effective signature of capacity loss. Advanced sparse Bayesian predictive modeling (SBPM) methodology is employed to capture the underlying correspondence between the capacity loss and sample entropy. The SBPM-based SOH monitor is compared with a polynomial model developed in our prior work. The proposed approach allows for an analytical integration of temperature effects such that an explicitly temperature-perspective SOH estimator is established, whose performance and complexity are contrasted with the support vector machine (SVM) scheme. The forecast of remaining useful life is also performed via a combination of SBPM and bootstrap sampling concepts. Large amounts of experimental data from multiple lithium-ion battery cells at three different temperatures are deployed for model construction, verification, and comparison. Such a multi-cell setting is more useful and valuable than only considering a single cell (a common scenario). This is the first known application of combined sample entropy and SBPM to battery health prognosis.
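
The core signal feature here is the sample entropy of a short voltage sequence; a plain sample entropy sketch follows, with the usual m = 2 and r = 0.2*std defaults as assumptions (the SBPM, SVM and bootstrap stages of the paper are not reproduced).

```python
# Plain SampEn(m, r) sketch applied to a synthetic voltage snippet; m and r are
# generic defaults, and the synthetic data is only for illustration.
import numpy as np

def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def matches(dim):
        tpl = np.array([x[i:i + dim] for i in range(len(x) - dim + 1)])
        count = 0
        for i in range(len(tpl)):
            dist = np.max(np.abs(tpl - tpl[i]), axis=1)    # Chebyshev distance
            count += np.sum(dist <= r) - 1                 # exclude the self-match
        return count
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(1)
voltage = 3.6 + 0.01 * np.cumsum(rng.normal(size=300))     # synthetic voltage sequence
print(sample_entropy(voltage))
```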

370 citations


Journal ArticleDOI
TL;DR: In this article, the authors considered the problem of communication over a discrete memoryless channel (DMC) or an additive white Gaussian noise (AWGN) channel subject to the constraint that the probability that an adversary who observes the channel outputs can detect the communication is low.
Abstract: This paper considers the problem of communication over a discrete memoryless channel (DMC) or an additive white Gaussian noise (AWGN) channel subject to the constraint that the probability that an adversary who observes the channel outputs can detect the communication is low. In particular, the relative entropy between the output distributions when a codeword is transmitted and when no input is provided to the channel must be sufficiently small. For a DMC whose output distribution induced by the “off” input symbol is not a mixture of the output distributions induced by other input symbols, it is shown that the maximum amount of information that can be transmitted under this criterion scales like the square root of the blocklength. The same is true for the AWGN channel. Exact expressions for the scaling constant are also derived.
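
A back-of-the-envelope illustration of the square-root law for the AWGN case: holding the total output relative entropy at a fixed level forces the per-symbol power to shrink like 1/sqrt(n), so the number of reliably transmitted bits grows only like sqrt(n). The detection level delta below is an arbitrary assumption, and the calculation ignores the exact scaling constant derived in the paper.

```python
# Rough numeric illustration of the square-root law on the AWGN channel.
import math

def kl_centered_gaussians(v1, v0):
    # D( N(0, v1) || N(0, v0) ) in nats
    return 0.5 * (v1 / v0 - 1.0 - math.log(v1 / v0))

delta = 0.1                                   # assumed bound on total relative entropy
for n in (10**3, 10**4, 10**5, 10**6):
    # pick power P so that n * D(N(0, 1+P) || N(0, 1)) is about delta (D ~ P^2/4 for small P)
    P = math.sqrt(4.0 * delta / n)
    assert abs(n * kl_centered_gaussians(1 + P, 1) - delta) < 0.05 * delta
    bits = n * 0.5 * math.log2(1.0 + P)       # Gaussian capacity at power P
    print(f"n={n:>8}  P={P:.5f}  covert bits ~ {bits:8.1f}  bits/sqrt(n) = {bits/math.sqrt(n):.3f}")
```

The last column stays roughly constant, which is the square-root scaling in action.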

326 citations


Journal ArticleDOI
TL;DR: It is proved that the newly-defined entropy meets the common requirement of monotonicity and can equivalently characterize the existing attribute reductions in fuzzy rough set theory.

259 citations


Journal ArticleDOI
TL;DR: This work establishes an analytical connection between the coding pattern of an arbitrary coding metasurface and its far-field pattern, and introduces geometrical entropy to describe the information of the coding pattern and physical entropy to describe the information of the far-field pattern of the metasurface.
Abstract: Because of their exceptional capability to tailor the effective medium parameters, metamaterials have been widely used to control electromagnetic waves, which has led to the observation of many interesting phenomena, for example, negative refraction, invisibility cloaking, and anomalous reflections and transmissions. However, the studies of metamaterials or metasurfaces are mainly limited to their physical features; currently, there is a lack of viewpoints on metamaterials and metasurfaces from the information perspective. Here we propose to measure the information of a coding metasurface using Shannon entropy. We establish an analytical connection between the coding pattern of an arbitrary coding metasurface and its far-field pattern. We introduce geometrical entropy to describe the information of the coding pattern (or coding sequence) and physical entropy to describe the information of the far-field pattern of the metasurface. The coding metasurface is demonstrated to enhance the information in transmitting messages, and the amount of enhanced information can be manipulated by designing the coding pattern with different information entropies. The proposed concepts and entropy control method will be helpful in new information systems (for example, communication, radar and imaging) that are based on the coding metasurfaces. The amount of information held in a reflection from a metasurface coded with a digital pattern can be analysed using the concept of entropy. While metamaterials have been widely used to control electromagnetic waves, they have been little studied from an information perspective. Tie Jun Cui of Southeast University in China and co-workers have discovered that the far-field reflection pattern of a metasurface is the Fourier transform of its coding pattern. Using full-wave numerical simulations, the team studied the behaviour of various metasurface patterns, including periodic, non-periodic and random codes. They found that the amount of information in the far-field reflection is controlled by the coding pattern. The entropy of the far-field reflection pattern increases with increasing entropy of the code. This approach could find application in multi-target radar, imaging systems and multichannel communication.
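
A toy numeric illustration of the reported Fourier relationship (far-field pattern as the 2D Fourier transform of the coding pattern) and of how a more disordered code yields a higher-entropy far field. The entropy calculations below are plain Shannon entropies of normalized distributions, not the paper's exact geometrical/physical entropy constructions.

```python
# Toy sketch: compare the Shannon entropy of a 1-bit coding pattern with the
# entropy of its far-field intensity, taken here as |FFT2|^2 of the code.
import numpy as np

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def code_and_farfield_entropy(code):
    far = np.abs(np.fft.fftshift(np.fft.fft2(code))) ** 2
    p_code = np.bincount(code.ravel().astype(int), minlength=2) / code.size
    p_far = far.ravel() / far.sum()
    return shannon_entropy(p_code), shannon_entropy(p_far)

rng = np.random.default_rng(0)
periodic = np.tile([[0, 1], [1, 0]], (16, 16))        # ordered 32x32 chessboard code
random_code = rng.integers(0, 2, size=(32, 32))       # random 32x32 code
print("periodic code:", code_and_farfield_entropy(periodic))
print("random code:  ", code_and_farfield_entropy(random_code))
```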

247 citations


Journal ArticleDOI
TL;DR: In this paper, integral expressions of the Renyi divergence in terms of the relative information spectrum are derived, leading to bounds on the Renyi divergence in terms of either the variational distance or the relative entropy; special attention is devoted to the total variation distance and its relation to the relative entropy, including reverse Pinsker inequalities.
Abstract: This paper develops systematic approaches to obtain $f$ -divergence inequalities, dealing with pairs of probability measures defined on arbitrary alphabets. Functional domination is one such approach, where special emphasis is placed on finding the best possible constant upper bounding a ratio of $f$ -divergences. Another approach used for the derivation of bounds among $f$ -divergences relies on moment inequalities and the logarithmic-convexity property, which results in tight bounds on the relative entropy and Bhattacharyya distance in terms of $\chi ^{2}$ divergences. A rich variety of bounds are shown to hold under boundedness assumptions on the relative information. Special attention is devoted to the total variation distance and its relation to the relative information and relative entropy, including “reverse Pinsker inequalities,” as well as on the $E_\gamma $ divergence, which generalizes the total variation distance. Pinsker’s inequality is extended for this type of $f$ -divergence, a result which leads to an inequality linking the relative entropy and relative information spectrum. Integral expressions of the Renyi divergence in terms of the relative information spectrum are derived, leading to bounds on the Renyi divergence in terms of either the variational distance or relative entropy.
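
As a small concrete anchor for the divergence relations discussed above, the snippet below numerically checks the classical Pinsker inequality, TV(P,Q) <= sqrt(D(P||Q)/2) with D in nats, on random distributions; the refined and "reverse" bounds from the paper need boundedness assumptions on the relative information that are not modeled here.

```python
# Numeric sanity check of Pinsker's inequality on random 8-symbol distributions.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    p = rng.dirichlet(np.ones(8))
    q = rng.dirichlet(np.ones(8))
    tv = 0.5 * np.abs(p - q).sum()                  # total variation distance
    kl = np.sum(p * np.log(p / q))                  # relative entropy in nats
    print(f"TV={tv:.4f}  sqrt(D/2)={np.sqrt(kl / 2):.4f}  bound holds: {tv <= np.sqrt(kl / 2)}")
```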

214 citations


Journal ArticleDOI
TL;DR: In this article, an information-theoretic analysis of Thompson sampling is presented for online optimization problems in which a decision-maker must learn from partial feedback; the analysis leads to regret bounds that scale with the entropy of the optimal-action distribution.
Abstract: We provide an information-theoretic analysis of Thompson sampling that applies across a broad range of online optimization problems in which a decision-maker must learn from partial feedback. This analysis inherits the simplicity and elegance of information theory and leads to regret bounds that scale with the entropy of the optimal-action distribution. This strengthens preexisting results and yields new insight into how information improves performance.
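
A minimal Thompson sampling loop on a Bernoulli bandit with Beta posteriors is sketched below, together with the entropy of the optimal-action distribution under a uniform prior, which is the quantity the regret bound scales with; the arm means and horizon are arbitrary assumptions, and the snippet does not reproduce the paper's information-ratio analysis.

```python
# Minimal Thompson sampling on a 3-armed Bernoulli bandit (Beta(1,1) priors).
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.3, 0.5, 0.7])                        # unknown arm means (assumed)
alpha = np.ones(len(means)); beta = np.ones(len(means))  # Beta posterior parameters
regret = 0.0
for t in range(5000):
    theta = rng.beta(alpha, beta)                        # sample a plausible mean per arm
    a = int(np.argmax(theta))                            # play the arm that looks best
    reward = rng.random() < means[a]
    alpha[a] += reward; beta[a] += 1 - reward
    regret += means.max() - means[a]
print("cumulative regret:", round(regret, 1))

# Entropy of the optimal-action distribution under a uniform prior over the 3 arms.
p_opt = np.ones(3) / 3
print("H(A*) =", round(float(-(p_opt * np.log2(p_opt)).sum()), 3), "bits")
```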

201 citations


Journal ArticleDOI
TL;DR: The novel practical measure Φ* is derived by introducing the concept of mismatched decoding from information theory; Φ* is properly bounded from below and above, as required of a measure of integrated information.
Abstract: Accumulating evidence indicates that the capacity to integrate information in the brain is a prerequisite for consciousness. Integrated Information Theory (IIT) of consciousness provides a mathematical approach to quantifying the information integrated in a system, called integrated information, Φ. Integrated information is defined theoretically as the amount of information a system generates as a whole, above and beyond the amount of information its parts independently generate. IIT predicts that the amount of integrated information in the brain should reflect levels of consciousness. Empirical evaluation of this theory requires computing integrated information from neural data acquired from experiments, although difficulties with using the original measure Φ preclude such computations. Although some practical measures have been previously proposed, we found that these measures fail to satisfy the theoretical requirements as a measure of integrated information. Measures of integrated information should satisfy the lower and upper bounds as follows: The lower bound of integrated information should be 0 and is equal to 0 when the system does not generate information (no information) or when the system comprises independent parts (no integration). The upper bound of integrated information is the amount of information generated by the whole system. Here we derive the novel practical measure Φ* by introducing a concept of mismatched decoding developed from information theory. We show that Φ* is properly bounded from below and above, as required, as a measure of integrated information. We derive the analytical expression of Φ* under the Gaussian assumption, which makes it readily applicable to experimental data. Our novel measure Φ* can generally be used as a measure of integrated information in research on consciousness, and also as a tool for network analysis in diverse areas of biology.

195 citations


Journal ArticleDOI
TL;DR: The proposed classification method has the potential for identifying the epileptogenic zones, which is an important step prior to resective surgery usually performed on patients with low responsiveness to anti-epileptic medications.

185 citations


Journal ArticleDOI
31 Dec 2016-Entropy
TL;DR: The experimental results show that the proposed EDOMFE method can effectively extract fault features from the vibration signal and the proposed EOMSMFD method can accurately diagnose the fault types and fault severities for the inner race fault, the outer race fault, and the rolling element fault of the motor bearing.
Abstract: Feature extraction is one of the most important, pivotal, and difficult problems in mechanical fault diagnosis, which directly relates to the accuracy of fault diagnosis and the reliability of early fault prediction. Therefore, a new fault feature extraction method, called the EDOMFE method, based on integrating ensemble empirical mode decomposition (EEMD), mode selection, and multi-scale fuzzy entropy, is proposed in this paper to accurately diagnose faults. The EEMD method is used to decompose the vibration signal into a series of intrinsic mode functions (IMFs) with a different physical significance. The correlation coefficient analysis method is used to calculate and determine three improved IMFs, which are close to the original signal. The multi-scale fuzzy entropy, with its ability to effectively distinguish the complexity of different signals, is used to calculate the entropy values of the selected three IMFs in order to form a feature vector with the complexity measure, which is regarded as the input of the support vector machine (SVM) model for training and constructing an SVM classifier (EOMSMFD, based on EDOMFE and SVM) for fault pattern recognition. Finally, the effectiveness of the proposed method is validated by real bearing vibration signals of the motor with different loads and fault severities. The experimental results show that the proposed EDOMFE method can effectively extract fault features from the vibration signal and that the proposed EOMSMFD method can accurately diagnose the fault types and fault severities for the inner race fault, the outer race fault, and the rolling element fault of the motor bearing. Therefore, the proposed method provides a new fault diagnosis technology for rotating machinery.
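
The multi-scale fuzzy entropy feature at the heart of this pipeline can be sketched compactly (coarse-graining followed by fuzzy entropy with an exponential membership function); the EEMD decomposition, IMF selection and SVM stages are omitted, and the parameter choices below follow common conventions rather than the paper's settings.

```python
# Compact multi-scale fuzzy entropy sketch; m, r and the membership function are
# common defaults (assumptions), not necessarily the values used in the paper.
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2):
    x = np.asarray(x, dtype=float)
    r = r * x.std()
    def phi(dim):
        tpl = np.array([x[i:i + dim] - np.mean(x[i:i + dim]) for i in range(len(x) - dim)])
        sims = []
        for i in range(len(tpl)):
            d = np.max(np.abs(tpl - tpl[i]), axis=1)     # Chebyshev distance
            mu = np.exp(-(d ** 2) / r)                   # fuzzy membership degree
            sims.append((mu.sum() - 1) / (len(tpl) - 1)) # exclude the self-match
        return np.mean(sims)
    return -np.log(phi(m + 1) / phi(m))

def multiscale_fuzzy_entropy(x, scales=3):
    values = []
    for s in range(1, scales + 1):
        n = (len(x) // s) * s
        coarse = np.asarray(x[:n]).reshape(-1, s).mean(axis=1)   # coarse-grained series
        values.append(fuzzy_entropy(coarse))
    return values

rng = np.random.default_rng(0)
print(multiscale_fuzzy_entropy(rng.normal(size=1500)))
```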

Journal ArticleDOI
TL;DR: An effective small-target detection approach based on weighted image entropy is proposed; it aims to improve the signal-to-noise ratio for cases in which jamming objects in the scene have a thermal intensity relative to the background similar to that of the small target.
Abstract: We propose an effective small-target detection approach based on weighted image entropy. The approach weights the local entropy measure by the multiscale grayscale difference, followed by an adaptive threshold operation, and aims to improve the signal-to-noise ratio for cases in which jamming objects in the scene have a thermal intensity relative to the background similar to that of the small target. The detection capability of the proposed approach has been validated on six real sequences, and the results demonstrate its effectiveness and improvement.
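
A rough sketch of the underlying idea (a local entropy map weighted by a grayscale-difference map, then thresholded) is given below; the window size, the single-scale difference and the k-sigma threshold are assumptions, since the paper uses a specific multiscale formulation and its own adaptive threshold.

```python
# Rough sketch: grayscale-difference-weighted local entropy with a k-sigma threshold.
import numpy as np
from scipy.ndimage import uniform_filter

def local_entropy(img, win=9):
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + win, j:j + win]
            hist = np.bincount(patch.ravel(), minlength=256) / patch.size
            hist = hist[hist > 0]
            out[i, j] = -np.sum(hist * np.log2(hist))
    return out

def detect_small_targets(img, win=9, k=4.0):
    diff = (img.astype(float) - uniform_filter(img.astype(float), size=win)) ** 2
    score = diff * local_entropy(img, win)          # difference-weighted entropy map
    threshold = score.mean() + k * score.std()      # simple adaptive threshold
    return score > threshold

rng = np.random.default_rng(0)
frame = rng.integers(80, 100, size=(64, 64)).astype(np.uint8)
frame[30:33, 40:43] = 160                           # plant a dim "small target"
print(np.argwhere(detect_small_targets(frame)))
```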

Journal ArticleDOI
TL;DR: A secure image encryption scheme based on logistic and spatiotemporal chaotic systems is proposed that can resist different attacks, such as the brute-force attack, statistical attack and differential attack.
Abstract: Information security has become a more and more important issue in modern society, and digital image protection is one of its key concerns. In this paper, a secure image encryption scheme based on logistic and spatiotemporal chaotic systems is proposed. The extreme sensitivity of chaotic systems can greatly increase the complexity of the proposed scheme. Furthermore, the scheme also takes advantage of DNA coding, and eight DNA coding rules are mixed to enhance the efficiency of image confusion and diffusion. To resist the chosen-plaintext attack, the information entropy of the DNA-coded image is modulated as the parameter of the spatiotemporal chaotic system, which also guarantees sensitivity to the plain image in the encryption process, so even a slight change in the plain image causes a complete change in the cipher image. The experimental analysis shows that the scheme can resist different attacks, such as the brute-force attack, statistical attack and differential attack. Moreover, the image encryption scheme can be easily implemented in software and is promising in practical applications.
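
Two of these ingredients can be sketched in a highly simplified form: a logistic-map keystream for diffusion and plaintext sensitivity obtained by folding the plain image's information entropy into the chaotic initial condition. The DNA coding rules and the spatiotemporal (coupled-map-lattice) system of the actual scheme are omitted, and the key value is an arbitrary assumption; this is not a secure cipher, only an illustration.

```python
# Simplified sketch: logistic-map keystream XOR diffusion with an entropy-dependent seed.
import numpy as np

def image_entropy(img):
    hist = np.bincount(img.ravel(), minlength=256) / img.size
    hist = hist[hist > 0]
    return -np.sum(hist * np.log2(hist))

def logistic_keystream(n, x0, mu=3.9999):
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = mu * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF
    return out

def encrypt(img, key=0.123456789):
    x0 = (key + image_entropy(img) / 8.0) % 1.0 or 0.5   # plaintext-dependent seed
    keystream = logistic_keystream(img.size, x0)
    return (img.ravel() ^ keystream).reshape(img.shape)  # XOR diffusion

rng = np.random.default_rng(0)
plain = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
cipher = encrypt(plain)
print("plain entropy :", round(image_entropy(plain), 3))
print("cipher entropy:", round(image_entropy(cipher), 3))   # close to the ideal 8 bits
```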

Journal ArticleDOI
TL;DR: The authors of the paper have combined the best features of the entropy method and the CILOS approach to obtain a new method, Integrated Determination of Objective CRIteria Weights (IDOCRIW).
Abstract: In multiple criteria evaluation, criteria weights are of great importance. In practice, subjective criteria weights determined by specialists/experts are commonly used. The types of elements of a decision matrix also play an important role in the evaluation of alternatives. The objective weights help to estimate the structure of data. The entropy method is widely used for determining the weights (significances) of criteria. A new method of criterion impact loss, CILOS, is used for determining the relative impact loss experienced by the criterion of an alternative when another criterion is chosen to be the best. The authors of the paper have combined the best features of the entropy method and the CILOS approach to obtain a new method, Integrated Determination of Objective CRIteria Weights (IDOCRIW).
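
The entropy-weight half of this combination is the classical entropy method and fits in a few lines; the CILOS criterion-impact-loss weights and the final IDOCRIW aggregation are omitted, and the decision matrix below is made up for illustration.

```python
# Classical entropy method for objective criteria weights (benefit-type criteria assumed).
import numpy as np

def entropy_weights(X):
    # X: alternatives x criteria, with positive entries
    P = X / X.sum(axis=0)                              # normalize each criterion column
    m = X.shape[0]
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)       # entropy of each criterion in [0, 1]
    d = 1.0 - E                                        # degree of diversification
    return d / d.sum()                                 # objective weights

X = np.array([[7.0, 120.0, 3.2],
              [5.0, 150.0, 2.8],
              [9.0,  90.0, 3.9],
              [6.0, 110.0, 3.5]])                      # made-up decision matrix
print(entropy_weights(X).round(3))
```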

Journal ArticleDOI
TL;DR: This study presents a robust block-based image watermarking scheme based on the singular value decomposition (SVD) and human visual system in the discrete wavelet transform (DWT) domain that outperformed several previous schemes in terms of imperceptibility and robustness.
Abstract: Digital watermarking has been suggested as a way to achieve digital protection. The aim of digital watermarking is to insert the secret data into the image without significantly affecting the visual quality. This study presents a robust block-based image watermarking scheme based on the singular value decomposition (SVD) and human visual system in the discrete wavelet transform (DWT) domain. The proposed method is considered to be a block-based scheme that utilises the entropy and edge entropy as HVS characteristics for the selection of significant blocks to embed the watermark, which is a binary watermark logo. The blocks of the lowest entropy values and edge entropy values are selected as the best regions to insert the watermark. After the first level of DWT decomposition, the SVD is performed on the low-low sub-band to modify several elements in its U matrix according to predefined conditions. The experimental results of the proposed scheme showed high imperceptibility and high robustness against all image processing attacks and several geometrical attacks using examples of standard and real images. Furthermore, the proposed scheme outperformed several previous schemes in terms of imperceptibility and robustness. The security is further improved by encrypting a portion of the important information using the Advanced Encryption Standard with a key size of 192 bits (AES-192).
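
Only the block-selection criterion is sketched below: rank 8x8 blocks by the sum of their entropy and an edge-entropy proxy and keep the lowest-valued blocks for embedding. The DWT/SVD embedding and the AES-192 step are omitted, and "edge entropy" is approximated here from the histogram of gradient magnitudes, which is an assumption rather than the paper's exact definition.

```python
# Sketch of entropy-based block selection for watermark embedding.
import numpy as np

def hist_entropy(values, bins=256):
    hist, _ = np.histogram(values, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def select_embedding_blocks(img, block=8, n_keep=16):
    scored = []
    for i in range(0, img.shape[0] - block + 1, block):
        for j in range(0, img.shape[1] - block + 1, block):
            b = img[i:i + block, j:j + block].astype(float)
            gy, gx = np.gradient(b)
            score = hist_entropy(b) + hist_entropy(np.hypot(gx, gy))   # entropy + edge-entropy proxy
            scored.append((score, (i, j)))
    scored.sort()                                     # lowest-entropy blocks first
    return [pos for _, pos in scored[:n_keep]]

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)
print(select_embedding_blocks(image)[:5])
```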

Journal ArticleDOI
TL;DR: The proposed rule is more effective than classical evidence theory for performing fault diagnosis when fusing multi-symptom domains, and using the new belief entropy to determine the weights seems more reasonable than previous approaches.
Abstract: Dempster–Shafer evidence theory is widely used in information fusion. However, it may lead to an unreasonable result when dealing with high conflict evidence. In order to solve this problem, we put forward a new method based on the credibility of evidence. First, a novel belief entropy, Deng entropy, is applied to measure the information volume of the evidence, and then the discounting coefficient of each piece of evidence is obtained. Finally, after weighted averaging of the evidence in the system, the Dempster combination rule is used to realize information fusion. A weighted averaging combination rule is presented for multi-sensor data fusion in fault diagnosis, and using the new belief entropy to determine the weights seems more reasonable than previous approaches. A numerical example is given to illustrate that the proposed rule is more effective in performing fault diagnosis than classical evidence theory in fusing multi-symptom domains.
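
The Deng entropy used for weighting, E_d(m) = -sum_A m(A) log2( m(A) / (2^|A| - 1) ), reduces to the Shannon entropy when every focal element is a singleton; a small example follows (the basic probability assignment is made up, and the weighted-averaging and Dempster combination steps are not reproduced).

```python
# Deng entropy of a basic probability assignment (BPA) given as {focal set: mass}.
import math

def deng_entropy(bpa):
    return -sum(m * math.log2(m / (2 ** len(A) - 1)) for A, m in bpa.items() if m > 0)

bpa = {frozenset({"a"}): 0.6,
       frozenset({"b"}): 0.1,
       frozenset({"a", "b", "c"}): 0.3}   # made-up BPA with one multi-element focal set
print(round(deng_entropy(bpa), 4))
```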

Journal Article
TL;DR: In this article, a class of statistics based on spacings of increasing order is studied and it is shown that these statistics are almost surely consistent and that the entropy estimator is efficient under certain conditions on the unknown density.
Abstract: We consider estimation of functionals of a probability density of the elements of a sample. We discuss a class of statistics based on spacings of increasing order and show that these statistics are almost surely consistent. Special attention is paid to entropy estimation. In particular we derive asymptotic normality in that case. It turns out that the entropy estimator is efficient under certain conditions on the unknown density.
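
A standard member of this family is the Vasicek m-spacing estimator, shown below for illustration (it is a classical spacing-based estimator, not necessarily the exact statistic studied in the paper); on a large Gaussian sample it should come close to the true differential entropy 0.5*log(2*pi*e).

```python
# Vasicek m-spacing entropy estimator on a Gaussian sample.
import numpy as np

def vasicek_entropy(x, m=None):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    if m is None:
        m = max(1, int(round(np.sqrt(n))))                 # common heuristic for the spacing order
    upper = x[np.minimum(np.arange(n) + m, n - 1)]
    lower = x[np.maximum(np.arange(n) - m, 0)]
    return np.mean(np.log(n / (2.0 * m) * (upper - lower)))

rng = np.random.default_rng(0)
sample = rng.normal(size=5000)
print(vasicek_entropy(sample), "vs true", 0.5 * np.log(2 * np.pi * np.e))
```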

Journal ArticleDOI
TL;DR: In this article, a new method of fault detection and classification in asymmetrical distribution systems with dispersed generation is proposed to detect islanding and perform protective action, based on applying a combination of wavelet singular entropy and fuzzy logic.

Journal ArticleDOI
TL;DR: It is shown that the transfer entropy and a derivative quantity, the causation entropy, do not, in fact, quantify the flow of information, and why this is the case is isolated and several avenues to alternate measures for information flow are proposed.
Abstract: A central task in analyzing complex dynamics is to determine the loci of information storage and the communication topology of information flows within a system. Over the last decade and a half, diagnostics for the latter have come to be dominated by the transfer entropy. Via straightforward examples, we show that it and a derivative quantity, the causation entropy, do not, in fact, quantify the flow of information. At one and the same time they can overestimate flow or underestimate influence. We isolate why this is the case and propose several avenues to alternate measures for information flow. We also address an auxiliary consequence: The proliferation of networks as a now-common theoretical model for large-scale systems, in concert with the use of transferlike entropies, has shoehorned dyadic relationships into our structural interpretation of the organization and behavior of complex systems. This interpretation thus fails to include the effects of polyadic dependencies. The net result is that much of the sophisticated organization of complex systems may go undetected.

Journal ArticleDOI
TL;DR: It is argued that a proper understanding of information in terms of prediction is key to a number of disciplines beyond engineering, such as physics and biology.
Abstract: Information is a precise concept that can be defined mathematically, but its relationship to what we call ‘knowledge’ is not always made clear. Furthermore, the concepts ‘entropy’ and ‘information’, while deeply related, are distinct and must be used with care, something that is not always achieved in the literature. In this elementary introduction, the concepts of entropy and information are laid out one by one, explained intuitively, but defined rigorously. I argue that a proper understanding of information in terms of prediction is key to a number of disciplines beyond engineering, such as physics and biology.

Journal ArticleDOI
TL;DR: In this article, the authors introduce the basin entropy, a measure to quantify the uncertainty in nonlinear dynamics, and provide a sufficient condition for the existence of fractal basin boundaries.
Abstract: In nonlinear dynamics, basins of attraction link a given set of initial conditions to its corresponding final states. This notion appears in a broad range of applications where several outcomes are possible, which is a common situation in neuroscience, economy, astronomy, ecology and many other disciplines. Depending on the nature of the basins, prediction can be difficult even in systems that evolve under deterministic rules. From this respect, a proper classification of this unpredictability is clearly required. To address this issue, we introduce the basin entropy, a measure to quantify this uncertainty. Its application is illustrated with several paradigmatic examples that allow us to identify the ingredients that hinder the prediction of the final state. The basin entropy provides an efficient method to probe the behavior of a system when different parameters are varied. Additionally, we provide a sufficient condition for the existence of fractal basin boundaries: when the basin entropy of the boundaries is larger than log2, the basin is fractal.
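
A minimal basin-entropy computation on a synthetic grid of attractor labels is sketched below: partition the basin into small boxes, compute the Shannon entropy of the labels inside each box, and average over boxes. The two synthetic "basins" and the 5x5 box size are assumptions used only to show that a more intermingled basin yields an entropy closer to log 2.

```python
# Minimal basin entropy sketch on synthetic two-attractor label grids.
import numpy as np

def basin_entropy(labels, box=5):
    n = (labels.shape[0] // box) * box
    total, count = 0.0, 0
    for i in range(0, n, box):
        for j in range(0, n, box):
            block = labels[i:i + box, j:j + box].ravel()
            p = np.bincount(block) / block.size
            p = p[p > 0]
            total += -(p * np.log(p)).sum()
            count += 1
    return total / count

x, y = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))
smooth = (x + y > 0).astype(int)                       # basin with a smooth boundary
rng = np.random.default_rng(0)
mixed = (np.sin(8 * x) * np.cos(8 * y) + 0.3 * rng.normal(size=x.shape) > 0).astype(int)
print("smooth boundary:", round(basin_entropy(smooth), 3))
print("intermingled:   ", round(basin_entropy(mixed), 3), " (log 2 =", round(np.log(2), 3), ")")
```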

Journal ArticleDOI
TL;DR: This paper solves the key technical challenge of analytically computing the expected value and variance of generalized IT measures and proposes guidelines for using ARI and AMI as external validation indices.
Abstract: Adjusted for chance measures are widely used to compare partitions/clusterings of the same data set. In particular, the Adjusted Rand Index (ARI) based on pair-counting, and the Adjusted Mutual Information (AMI) based on Shannon information theory are very popular in the clustering community. Nonetheless it is an open problem as to what are the best application scenarios for each measure and guidelines in the literature for their usage are sparse, with the result that users often resort to using both. Generalized Information Theoretic (IT) measures based on the Tsallis entropy have been shown to link pair-counting and Shannon IT measures. In this paper, we aim to bridge the gap between adjustment of measures based on pair-counting and measures based on information theory. We solve the key technical challenge of analytically computing the expected value and variance of generalized IT measures. This allows us to propose adjustments of generalized IT measures, which reduce to well known adjusted clustering comparison measures as special cases. Using the theory of generalized IT measures, we are able to propose the following guidelines for using ARI and AMI as external validation indices: ARI should be used when the reference clustering has large equal sized clusters; AMI should be used when the reference clustering is unbalanced and there exist small clusters.
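
Both indices are available in scikit-learn; the toy run below simply computes them side by side on a perturbed labeling (the balanced/unbalanced usage guideline itself comes from the paper's analysis, not from this snippet).

```python
# ARI and AMI computed side by side with scikit-learn on a perturbed clustering.
import numpy as np
from sklearn.metrics import adjusted_rand_score, adjusted_mutual_info_score

rng = np.random.default_rng(0)
reference = np.repeat([0, 1, 2], [50, 50, 50])          # balanced reference clustering
predicted = reference.copy()
flip = rng.choice(len(predicted), size=20, replace=False)
predicted[flip] = rng.integers(0, 3, size=20)           # perturb 20 labels
print("ARI:", round(adjusted_rand_score(reference, predicted), 3))
print("AMI:", round(adjusted_mutual_info_score(reference, predicted), 3))
```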

Journal ArticleDOI
TL;DR: In this paper, a flow is defined as a divergenceless norm-bounded vector field, or equivalently a set of Planck-thickness "bit threads", which represent entanglement between points on the boundary, and naturally implement the holographic principle.
Abstract: The Ryu-Takayanagi (RT) formula relates the entanglement entropy of a region in a holographic theory to the area of a corresponding bulk minimal surface. Using the max flow-min cut principle, a theorem from network theory, we rewrite the RT formula in a way that does not make reference to the minimal surface. Instead, we invoke the notion of a "flow", defined as a divergenceless norm-bounded vector field, or equivalently a set of Planck-thickness "bit threads". The entanglement entropy of a boundary region is given by the maximum flux out of it of any flow, or equivalently the maximum number of bit threads that can emanate from it. The threads thus represent entanglement between points on the boundary, and naturally implement the holographic principle. As we explain, this new picture clarifies several conceptual puzzles surrounding the RT formula. We give flow-based proofs of strong subadditivity and related properties; unlike the ones based on minimal surfaces, these proofs correspond in a transparent manner to the properties' information-theoretic meanings. We also briefly discuss certain technical advantages that the flows offer over minimal surfaces. In a mathematical appendix, we review the max flow-min cut theorem on networks and on Riemannian manifolds, and prove in the network case that the set of max flows varies Lipschitz continuously in the network parameters.

Journal ArticleDOI
TL;DR: An iterative local strategy for updating individual beliefs in multi-agent networks is developed using an optimization-based framework and a Kullback-Leibler cost is introduced to compare the efficiency of the algorithm to its centralized counterpart.
Abstract: This paper addresses the problem of distributed detection in multi-agent networks. Agents receive private signals about an unknown state of the world. The underlying state is globally identifiable, yet informative signals may be dispersed throughout the network. Using an optimization-based framework, we develop an iterative local strategy for updating individual beliefs. In contrast to the existing literature which focuses on asymptotic learning, we provide a finite-time analysis. Furthermore, we introduce a Kullback-Leibler cost to compare the efficiency of the algorithm to its centralized counterpart. Our bounds on the cost are expressed in terms of network size, spectral gap, centrality of each agent and relative entropy of agents' signal structures. A key observation is that distributing more informative signals to central agents results in a faster learning rate. Furthermore, optimizing the weights, we can speed up learning by improving the spectral gap. We also quantify the effect of link failures on learning speed in symmetric networks. We finally provide numerical simulations for our method which verify our theoretical results.

Journal ArticleDOI
TL;DR: A simple two-state hidden Markov model is shown to emulate exactly the statistics of bit sequences generated both from natural and white noise iris images, including their imposter distributions, and may be useful for generating large synthetic IrisCode databases.
Abstract: Iris recognition has legendary resistance to false matches, and the tools of information theory can help to explain why. The concept of entropy is fundamental to understanding biometric collision avoidance. This paper analyses the bit sequences of IrisCodes computed both from real iris images and from synthetic white noise iris images, whose pixel values are random and uncorrelated. The capacity of the IrisCode as a channel is found to be 0.566 bits per bit encoded, of which 0.469 bits of entropy per bit is encoded from natural iris images. The difference between these two rates reflects the existence of anatomical correlations within a natural iris, and the remaining gap from one full bit of entropy per bit encoded reflects the correlations in both phase and amplitude introduced by the Gabor wavelets underlying the IrisCode. A simple two-state hidden Markov model is shown to emulate exactly the statistics of bit sequences generated both from natural and white noise iris images, including their imposter distributions, and may be useful for generating large synthetic IrisCode databases.
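
For a symmetric two-state Markov bit model, the entropy rate is simply the binary entropy of the flip probability, which shows how serial correlation pulls the entropy per encoded bit below 1; the flip probabilities below are illustrative values, not the parameters fitted in the paper.

```python
# Entropy rate of a symmetric two-state Markov bit source: H = H_b(p_flip).
import math

def binary_entropy(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p_flip in (0.5, 0.25, 0.13, 0.05):
    print(f"P(flip) = {p_flip:.2f}  ->  entropy rate = {binary_entropy(p_flip):.3f} bits per bit")
```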

Proceedings ArticleDOI
01 Sep 2016
TL;DR: Experimental results show that if entropy-based anomaly detection is applied to all CAN messages, it is only possible to detect attacks that comprise a high volume of forged CAN messages, and that attacks characterized by the injection of few forged CAN messages can be detected only by applying several independent instances of the entropy-based anomaly detector.
Abstract: This paper evaluates the effectiveness of information-theoretic anomaly detection algorithms applied to networks included in modern vehicles. In particular, we focus on providing an experimental evaluation of anomaly detectors based on entropy. Attacks to in-vehicle networks were simulated by injecting different classes of forged CAN messages in traces captured from a modern licensed vehicle. Experimental results show that if entropy-based anomaly detection is applied to all CAN messages, it is only possible to detect attacks that comprise a high volume of forged CAN messages. On the other hand, attacks characterized by the injection of few forged CAN messages can be detected only by applying several independent instances of the entropy-based anomaly detector, one for each class of CAN messages.
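
A minimal sliding-window entropy detector over CAN identifiers is sketched below: an attack that floods a single forged ID pushes the window entropy outside its baseline range, while a handful of injected messages barely moves it. The ID mix, window size and range-based flagging are assumptions for illustration, not the detector configuration evaluated in the paper.

```python
# Sliding-window entropy of CAN IDs with a simple baseline-range detector.
import numpy as np

def window_entropy(ids):
    _, counts = np.unique(ids, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
normal_ids = rng.choice([0x110, 0x1A0, 0x2C0, 0x316, 0x4F0], size=5000,
                        p=[0.35, 0.25, 0.2, 0.15, 0.05])      # assumed benign ID mix
attack_ids = normal_ids.copy()
attack_ids[2000:2500] = 0x7FF                                 # high-volume forged burst

win = 200
baseline = [window_entropy(normal_ids[i:i + win]) for i in range(0, 4000, win)]
lo, hi = np.min(baseline), np.max(baseline)
for start in range(0, 4000, win):
    h = window_entropy(attack_ids[start:start + win])
    if not (lo <= h <= hi):
        print(f"window at {start}: entropy {h:.3f} outside baseline [{lo:.3f}, {hi:.3f}]")
```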

Journal ArticleDOI
TL;DR: Comparative study shows the potential application of the proposed methodology with machine learning techniques for the development of a real-time system to diagnose faults and their severity in ball bearings.
Abstract: The present study attempts to diagnose severity of faults in ball bearings using various machine learning techniques, like support vector machine (SVM) and artificial neural network (ANN). Various features are extracted from raw vibration signals which include statistical features such as skewness, kurtosis, standard deviation and measures of uncertainty such as Shannon entropy, log energy entropy, sure entropy, etc. The calculated features are examined for their sensitivity towards fault of different severity in bearings. The proposed methodology incorporates extraction of the most appropriate features from raw vibration signals. Results revealed that, apart from statistical features, uncertainty measures like log energy entropy and sure entropy are also good indicators of variation in fault severity. This work attempts to classify faults of different severity level in each bearing component, which is not considered in most of the previous studies. Classification efficiency achieved by the proposed methodology is c...
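
The feature-extraction step named above can be sketched as follows: statistical features plus the Shannon and log-energy entropy measures computed on one vibration frame. The entropy definitions follow the common wavelet-toolbox conventions and the synthetic fault signal is made up, so this is an assumption-laden illustration rather than the paper's exact feature set; the SVM/ANN classification stage is omitted.

```python
# Statistical and entropy-based features from a single (synthetic) vibration frame.
import numpy as np
from scipy.stats import skew, kurtosis

def extract_features(x):
    x = np.asarray(x, dtype=float)
    e = x ** 2
    e = np.where(e > 0, e, np.finfo(float).tiny)       # avoid log(0)
    return {
        "std": x.std(),
        "skewness": skew(x),
        "kurtosis": kurtosis(x),
        "shannon_entropy": -np.sum(e * np.log(e)),     # -sum(x_i^2 * log(x_i^2))
        "log_energy_entropy": np.sum(np.log(e)),       #  sum(log(x_i^2))
    }

rng = np.random.default_rng(0)
t = np.arange(0, 1, 1 / 12000)                         # 1 s at an assumed 12 kHz rate
healthy = 0.2 * rng.normal(size=t.size)
faulty = healthy + 0.8 * (np.sin(2 * np.pi * 157 * t) > 0.99)   # periodic impact train
print({k: round(float(v), 3) for k, v in extract_features(faulty).items()})
```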

Journal ArticleDOI
TL;DR: This paper presents a systematic study of optimal constants in strong data processing inequalities (SDPIs) for discrete channels and relates them to Φ-Sobolev inequalities, another class of inequalities that quantify the noisiness of a channel by controlling entropy-like functionals of the input distribution by suitable measures of input-output correlation.
Abstract: The noisiness of a channel can be measured by comparing suitable functionals of the input and output distributions. For instance, the worst case ratio of output relative entropy to input relative entropy for all possible pairs of input distributions is bounded from above by unity, by the data processing theorem. However, for a fixed reference input distribution, this quantity may be strictly smaller than one, giving the so-called strong data processing inequalities (SDPIs). The same considerations apply to an arbitrary $\Phi $ -divergence. This paper presents a systematic study of optimal constants in the SDPIs for discrete channels, including their variational characterizations, upper and lower bounds, structural results for channels on product probability spaces, and the relationship between the SDPIs and the so-called $\Phi $ -Sobolev inequalities (another class of inequalities that can be used to quantify the noisiness of a channel by controlling entropy-like functionals of the input distribution by suitable measures of input–output correlation). Several applications to information theory, discrete probability, and statistical physics are discussed.
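
A numeric illustration for the binary symmetric channel with a uniform reference input: scanning perturbed input distributions shows the ratio of output to input relative entropy staying strictly below 1 and approaching (1 - 2*delta)^2, the local (chi-squared) contraction value, near the reference input. The scan grid and delta values are assumptions for illustration.

```python
# SDPI illustration: ratio D(PW || QW) / D(P || Q) for a BSC with uniform Q.
import numpy as np

def kl(p, q):
    return sum(a * np.log(a / b) for a, b in zip(p, q) if a > 0)

def max_kl_ratio_bsc(delta, q=(0.5, 0.5)):
    W = np.array([[1 - delta, delta], [delta, 1 - delta]])
    q_out = np.array(q) @ W
    best = 0.0
    for p0 in np.linspace(1e-6, 1 - 1e-6, 2001):
        p = np.array([p0, 1 - p0])
        if np.allclose(p, q):
            continue                                   # skip the reference input itself
        best = max(best, kl(p @ W, q_out) / kl(p, q))
    return best

for delta in (0.05, 0.1, 0.25):
    print(delta, round(max_kl_ratio_bsc(delta), 4), "vs (1 - 2*delta)^2 =", round((1 - 2 * delta) ** 2, 4))
```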

Book ChapterDOI
08 May 2016
TL;DR: The first reusable fuzzy extractor that makes no assumptions about how multiple readings of the source are correlated is constructed; in addition, a computationally secure and an information-theoretically secure construction are built for large-alphabet sources.
Abstract: Fuzzy extractors [Dodis et al., Eurocrypt 2004] convert repeated noisy readings of a secret into the same uniformly distributed key. To eliminate noise, they require an initial enrollment phase that takes the first noisy reading of the secret and produces a nonsecret helper string to be used in subsequent readings. Reusable fuzzy extractors [Boyen, CCS 2004] remain secure even when this initial enrollment phase is repeated multiple times with noisy versions of the same secret, producing multiple helper strings (for example, when a single person's biometric is enrolled with multiple unrelated organizations). We construct the first reusable fuzzy extractor that makes no assumptions about how multiple readings of the source are correlated (the only prior construction assumed a very specific, unrealistic class of correlations). The extractor works for binary strings with Hamming noise; it achieves computational security under assumptions on the security of hash functions or in the random oracle model. It is simple and efficient and tolerates near-linear error rates. Our reusable extractor is secure for source distributions of linear min-entropy rate. The construction is also secure for sources with much lower entropy rates--lower than those supported by prior nonreusable constructions--assuming that the distribution has some additional structure, namely, that random subsequences of the source have sufficient min-entropy. We show that such structural assumptions are necessary to support low entropy rates. We then explore further how different structural properties of a noisy source can be used to construct fuzzy extractors when the error rates are high, building a computationally secure and an information-theoretically secure construction for large-alphabet sources.