
Showing papers on "Entropy (information theory)" published in 2021


Journal ArticleDOI
TL;DR: The presented model is effective for selecting important features with higher classification stability in neighborhood decision systems; the Fisher score model is used to delete irrelevant features and reduce the complexity of high-dimensional data sets.
Abstract: For heterogeneous data sets containing numerical and symbolic feature values, feature selection based on fuzzy neighborhood multigranulation rough sets (FNMRS) is a very significant step to preprocess data and improve its classification performance. This article presents an FNMRS-based feature selection approach in neighborhood decision systems. First, some concepts of fuzzy neighborhood rough sets and neighborhood multigranulation rough sets are given, and the FNMRS model is investigated to construct uncertainty measures. Second, the optimistic and pessimistic FNMRS models are built by using fuzzy neighborhood multigranulation lower and upper approximations from the algebra view, and some fuzzy neighborhood entropy-based uncertainty measures are developed from the information view. Inspired by both the algebra and information views of the FNMRS model, the fuzzy neighborhood pessimistic multigranulation entropy is proposed. Third, the Fisher score model is utilized to delete irrelevant features and decrease the complexity of high-dimensional data sets, and then a forward feature selection algorithm is provided to improve the performance of heterogeneous data classification. Experimental results on 12 data sets show that the presented model is effective for selecting important features with higher classification stability in neighborhood decision systems.
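The two-stage pipeline the abstract describes (a Fisher-score prefilter followed by greedy forward selection) can be sketched generically. Below is a minimal Python illustration; the cross-validated k-NN accuracy is a stand-in for the paper's FNMRS-based significance measure, and all names here are illustrative, not the authors' code:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fisher_score(X, y):
    """Classical Fisher score per feature: between-class variance
    over within-class variance (higher = more discriminative)."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

def forward_select(X, y, candidates, max_feats=20):
    """Greedy forward selection over the prefiltered candidates
    (`candidates` is a plain list of column indices; it is consumed)."""
    selected, best_acc = [], 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    while candidates and len(selected) < max_feats:
        scores = [(cross_val_score(clf, X[:, selected + [f]], y, cv=5).mean(), f)
                  for f in candidates]
        acc, f = max(scores)
        if acc <= best_acc:          # stop when no candidate helps
            break
        best_acc = acc
        selected.append(f)
        candidates.remove(f)
    return selected

# Stage 1: keep the top-k features by Fisher score to cut dimensionality.
# Stage 2: forward selection refines the subset.
# X, y = ...  (load a dataset)
# top_k = list(np.argsort(fisher_score(X, y))[::-1][:50])
# subset = forward_select(X, y, top_k)
```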

132 citations


Journal ArticleDOI
TL;DR: The proposed coordinated VMD-TSMDE-VHHO-SVM approach to fault diagnosis for rolling bearings achieves better diagnostic performance than comparable approaches.

90 citations


Journal ArticleDOI
TL;DR: This paper proposes a Recurrent Learned Video Compression (RLVC) approach with the Recurrent Auto-Encoder (RAE) and Recurrent Probability Model (RPM), which achieves the state-of-the-art learned video compression performance in terms of both PSNR and MS-SSIM.
Abstract: The past few years have witnessed increasing interest in applying deep learning to video compression. However, the existing approaches compress a video frame with only a small number of reference frames, which limits their ability to fully exploit the temporal correlation among video frames. To overcome this shortcoming, this paper proposes a Recurrent Learned Video Compression (RLVC) approach with the Recurrent Auto-Encoder (RAE) and Recurrent Probability Model (RPM). Specifically, the RAE employs recurrent cells in both the encoder and decoder. As such, the temporal information in a large range of frames can be used for generating latent representations and reconstructing compressed outputs. Furthermore, the proposed RPM network recurrently estimates the Probability Mass Function (PMF) of the latent representation, conditioned on the distribution of previous latent representations. Due to the correlation among consecutive frames, the conditional cross entropy can be lower than the independent cross entropy, thus reducing the bit-rate. The experiments show that our approach achieves the state-of-the-art learned video compression performance in terms of both PSNR and MS-SSIM. Moreover, our approach outperforms the default Low-Delay P (LDP) setting of x265 on PSNR, and also has better performance on MS-SSIM than the SSIM-tuned x265 and the slowest setting of x265. The codes are available at https://github.com/RenYang-home/RLVC.git.
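The bit-rate argument, that conditioning the PMF on previous latents cannot increase the coding cost when frames are correlated, is easy to verify numerically. A small sketch with toy discrete latents (the correlated sequences below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two correlated "latent" sequences: x2 mostly copies x1.
x1 = rng.integers(0, 4, size=100_000)
noise = rng.random(x1.size) < 0.1
x2 = np.where(noise, rng.integers(0, 4, size=x1.size), x1)

def entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Independent coding cost per symbol: H(x2).
h_ind = entropy(np.bincount(x2))

# Conditional coding cost: H(x2 | x1), the entropy of x2 averaged
# over each context x1 = a.
h_cond = sum(
    (x1 == a).mean() * entropy(np.bincount(x2[x1 == a]))
    for a in np.unique(x1)
)
print(f"H(x2) = {h_ind:.3f} bits, H(x2|x1) = {h_cond:.3f} bits")
# The conditional entropy is lower, so an RPM-style conditional PMF
# spends fewer bits per symbol than an independent model.
```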

87 citations


Journal ArticleDOI
TL;DR: In this article, the authors present data-driven discussions with examples collected from the fields of geology and materials science over the past 50 years to highlight critical thermodynamic parameters and principles that can be used for the design of high-entropy oxides (HEOs).

86 citations


Journal ArticleDOI
TL;DR: In this article, the authors compare results of flood susceptibility modelling in part of the Middle Ganga Plain, Ganga foreland basin, using 12 major flood explanatory factors.
Abstract: This work focuses on comparing results of flood susceptibility modelling in part of the Middle Ganga Plain, Ganga foreland basin. Following the inclusivity rule, 12 major flood explanatory factors incl...

84 citations


Journal ArticleDOI
TL;DR: In this article, a causal context model is proposed that separates the latent space across channels and makes use of channel-wise relationships to generate highly informative adjacent contexts, and a causal global prediction model is used to find global reference points for accurate predictions of undecoded points.
Abstract: Over the past several years, we have witnessed impressive progress in the field of learned image compression. Recent learned image codecs are commonly based on autoencoders, which first encode an image into low-dimensional latent representations and then decode them for reconstruction purposes. To capture spatial dependencies in the latent space, prior works exploit a hyperprior and a spatial context model to build an entropy model, which estimates the bit-rate for end-to-end rate-distortion optimization. However, such an entropy model is suboptimal from two aspects: (1) it fails to capture global-scope spatial correlations among the latents; (2) cross-channel relationships of the latents remain unexplored. In this paper, we propose the concept of separate entropy coding to leverage a serial decoding process for causal contextual entropy prediction in the latent space. A causal context model is proposed that separates the latents across channels and makes use of channel-wise relationships to generate highly informative adjacent contexts. Furthermore, we propose a causal global prediction model to find global reference points for accurate predictions of undecoded points. Both models facilitate entropy estimation without the transmission of overhead. In addition, we adopt a new group-separated attention module to build more powerful transform networks. Experimental results demonstrate that our full image compression model outperforms the standard VVC/H.266 codec on the Kodak dataset in terms of both PSNR and MS-SSIM, yielding state-of-the-art rate-distortion performance.
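The causal factorization underlying such an entropy model, with each channel group coded under a PMF conditioned on the groups already decoded, is p(y) = ∏_k p(y_k | y_{<k}). A toy Python sketch with discretized-Gaussian conditionals; the predictor here is a hypothetical stand-in for the paper's learned context networks:

```python
import numpy as np
from scipy.stats import norm

def rate_bits(y_groups, predict):
    """Estimated bits for serially decoded channel groups.
    `predict(decoded)` returns (mean, scale) for the next group,
    conditioned only on causally available (already decoded) groups."""
    total, decoded = 0.0, []
    for y in y_groups:
        mu, sigma = predict(decoded)
        # Discretized-Gaussian likelihood of each integer latent value.
        p = norm.cdf(y + 0.5, mu, sigma) - norm.cdf(y - 0.5, mu, sigma)
        total += -np.log2(np.maximum(p, 1e-9)).sum()
        decoded.append(y)
    return total

def toy_predictor(decoded):
    """Crude stand-in: reuse statistics of already-decoded groups."""
    if not decoded:
        return 0.0, 2.0                  # uninformed prior for group 0
    prev = np.concatenate(decoded)
    return prev.mean(), max(prev.std(), 0.5)

rng = np.random.default_rng(1)
groups = [np.round(rng.normal(3, 1, 256)) for _ in range(4)]
print(f"{rate_bits(groups, toy_predictor):.0f} bits")
# Later groups become cheaper to code once the causal context is informative.
```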

70 citations


Journal ArticleDOI
TL;DR: Li et al. propose a weighted network method based on the ordered visibility graph, named OVGWP, which considers not only the belief value itself but also the cardinality of the basic probability assignment.

67 citations


Journal ArticleDOI
01 Jan 2021
TL;DR: An entropy measure (EM) and a Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) based on the correlation coefficient are investigated, and it is verified that the explored work can distinguish highly similar but inconsistent Cq-ROFSs.
Abstract: Entropy measures (EMs) and similarity measures (SMs) are important techniques in fuzzy set (FS) theory for resolving the similarity between two objects. The q-rung orthopair FS (q-ROFS) and the complex FS are new extensions of FS theory and have been widely used in various fields; their combination yields the complex q-rung orthopair fuzzy set (Cq-ROFS). In this article, an EM and a Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) based on the correlation coefficient are investigated, since it is very important to study the SM of Cq-ROFSs. The established approaches are then compared with existing ones by an example, and it is verified that the explored work can distinguish highly similar but inconsistent Cq-ROFSs. Finally, to examine the reliability and feasibility of the new approaches, an example using the TOPSIS method based on Cq-ROFSs is illustrated for a case concerning the selection of firewall products. A situation concerning the security evaluation of computer systems is then used to conduct a comparative analysis between the established TOPSIS method and previous decision-making methods, validating the advantages of the established work.
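For readers unfamiliar with TOPSIS, the crisp version of the ranking procedure is compact. A minimal numpy sketch; the paper's Cq-ROFS variant replaces these crisp values and Euclidean distances with complex q-rung orthopair ones, and the firewall scores below are invented:

```python
import numpy as np

def topsis(X, weights, benefit):
    """Rank alternatives (rows) over criteria (columns).
    `benefit[j]` is True if criterion j is to be maximized."""
    R = X / np.linalg.norm(X, axis=0)          # vector normalization
    V = R * weights                            # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)             # closeness: higher is better

# Four firewall products scored on cost, throughput, manageability.
X = np.array([[7., 9., 9.],
              [8., 7., 8.],
              [9., 6., 8.],
              [6., 7., 8.]])
score = topsis(X, weights=np.array([0.3, 0.4, 0.3]),
               benefit=np.array([False, True, True]))
print(np.argsort(score)[::-1])                 # best alternative first
```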

63 citations


Journal ArticleDOI
Fuyuan Xiao
TL;DR: In this paper, a generalized intelligent quality-based approach for fusing multisource information is proposed, which aims to fuse complex-valued information from multiple sources while maintaining a high quality of the fused result by considering the usage of credible information sources.
Abstract: In this article, we propose a generalized intelligent quality-based approach for fusing multisource information. The goal of the proposed approach is to fuse complex-valued information from multiple sources while maintaining a high quality of the fused result by considering the usage of credible information sources. First, a vector representation of complex-valued distributions is defined, along with definitions of the compatibility and conflict degrees between complex-valued distributions. Based on that, the information quality measure of a complex-valued distribution is devised by leveraging the concept of Gini entropy. After that, we study some special cases of the information quality measure for maximally certain and uncertain complex-valued distributions. Additionally, a uniform fusion method for complex-valued distributions is proposed on the basis of the complex-valued information quality as an initial feasible basis for decision-making. Taking into account a credibility measure in terms of the subsets of information sources, a weighted fusion method is then presented for complex-valued distributions. In particular, the weighted fusion method can achieve the highest quality of the fused result from the associated aggregations of information modeled in complex-valued distributions. Finally, some examples are illustrated to demonstrate the feasibility and effectiveness of the proposed methods.
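The Gini entropy the quality measure leverages is, for an ordinary discrete distribution, 1 − Σ p_i²: zero for a point mass and maximal for the uniform distribution. A sketch in that spirit with real-valued distributions; the paper works with complex-valued ones, and the weighting rule below is an illustration, not the authors' exact credibility-based scheme:

```python
import numpy as np

def gini_entropy(p):
    """Gini entropy 1 - sum(p_i^2): 0 for a point mass,
    1 - 1/n for the uniform distribution on n outcomes."""
    return 1.0 - np.sum(np.asarray(p) ** 2)

def quality_weighted_fusion(dists):
    """Fuse sources, weighting each by how *certain* it is
    (low Gini entropy = high information quality)."""
    dists = np.asarray(dists, dtype=float)
    quality = 1.0 - np.array([gini_entropy(p) for p in dists])
    w = quality / quality.sum()
    return w @ dists

sources = [[0.7, 0.2, 0.1],    # confident source
           [0.4, 0.3, 0.3],    # vaguer source
           [0.6, 0.3, 0.1]]
print(quality_weighted_fusion(sources))
# The confident sources dominate the fused distribution.
```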

63 citations


Journal ArticleDOI
TL;DR: In this paper, a possible explanation of the power set is proposed, in which the power set of n events can be seen as all possible k-combinations, where k ranges from 0 to n.
Abstract: The power set of a set S is defined as the set of all subsets of S, including S itself and the empty set, denoted as P(S) or 2^S. Given a finite set S with |S| = n, one property of the power set is that the number of subsets of S is |P(S)| = 2^n. However, the physical meaning of the power set needs exploration. To address this issue, a possible explanation of the power set is proposed in this paper. The power set of n events can be seen as all possible k-combinations, where k ranges from 0 to n. This means the power set extends the event space in probability theory into all possible combinations of the single basic events. From the view of the power set, all subsets, i.e., all combinations of basic events, are created equal. These subsets are assigned the mass function, whose uncertainty can be measured by Deng entropy. The relationship between combinatorial numbers, Pascal's triangle, and the power set is revealed quantitatively by Deng entropy from the view of information measure.
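Deng entropy of a mass function m over the power set is commonly written E_d(m) = −Σ_A m(A) log₂( m(A) / (2^{|A|} − 1) ), where 2^{|A|} − 1 counts the nonempty subsets of the focal element A. A short sketch:

```python
import numpy as np

def deng_entropy(masses):
    """Deng entropy of a basic probability assignment.
    `masses` maps frozenset focal elements to mass values."""
    e = 0.0
    for A, m in masses.items():
        if m > 0:
            e -= m * np.log2(m / (2 ** len(A) - 1))
    return e

# Mass spread over larger focal elements carries more Deng entropy,
# reflecting the extra uncertainty of multi-event subsets.
bpa_singletons = {frozenset({'a'}): 0.5, frozenset({'b'}): 0.5}
bpa_composite  = {frozenset({'a', 'b', 'c'}): 1.0}
print(deng_entropy(bpa_singletons))   # 1.0 bit (reduces to Shannon entropy)
print(deng_entropy(bpa_composite))    # log2(7) ~= 2.807 bits
```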

58 citations


Journal ArticleDOI
01 Nov 2021 - Energy
TL;DR: An integrated approach based on entropy, Step-wise Weight Assessment Ratio Analysis (SWARA), and Complex Proportional Assessment (COPRAS) methods under the Pythagorean fuzzy environment for FCH supplier selection can overcome the insufficiencies that arise in either a purely objective-weighting model or a purely subjective-weighting model.
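The objective side of such a combined weighting scheme is typically the classical entropy weight method: criteria whose values vary more across alternatives carry more information and receive larger weights. A crisp numpy sketch; the paper's version operates on Pythagorean fuzzy values instead, and the data below are made up:

```python
import numpy as np

def entropy_weights(X):
    """Classical entropy weight method for criteria weighting.
    Rows are alternatives, columns are criteria (positive values)."""
    P = X / X.sum(axis=0)                        # column-normalize
    k = 1.0 / np.log(X.shape[0])
    with np.errstate(divide='ignore', invalid='ignore'):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -k * plogp.sum(axis=0)                   # entropy of each criterion
    d = 1.0 - e                                  # divergence degree
    return d / d.sum()                           # normalized weights

X = np.array([[250., 16., 12.],
              [200., 16.,  8.],
              [300., 32., 16.],
              [275., 32.,  8.]])
print(entropy_weights(X))   # low-entropy (more varied) criteria weigh more
```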

Journal ArticleDOI
TL;DR: In this article, the authors proposed an approach based on high-throughput simulation combined machine learning to obtain medium entropy alloys with high strength and low cost, which not only obtains a large amount of data quickly and accurately, but also helps us to determine the relationship between the composition and mechanical properties.

Proceedings ArticleDOI
01 Jun 2021
TL;DR: In this article, the authors proposed a method that reduces the uncertainty of predictions on the target domain data by minimizing the entropy of the predicted posterior, and maximizing the noise robustness of the feature representation.
Abstract: Traditional methods for Unsupervised Domain Adaptation (UDA) targeting semantic segmentation exploit information common to the source and target domains, using both labeled source data and unlabeled target data. In this paper, we investigate a setting where the source data is unavailable, but the classifier trained on the source data is; hence named "model adaptation". Such a scenario arises when data sharing is prohibited, for instance, because of privacy or Intellectual Property (IP) issues. To tackle this problem, we propose a method that reduces the uncertainty of predictions on the target domain data. We accomplish this in two ways: minimizing the entropy of the predicted posterior, and maximizing the noise robustness of the feature representation. We show the efficacy of our method on the transfer of segmentation from computer-generated images to real-world driving images, and on transfer between data collected in different cities, and surprisingly reach performance comparable with that of methods that have access to source data.
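The first ingredient, minimizing the entropy of the predicted posterior, is a Shannon-entropy loss over the per-pixel softmax outputs. A numpy sketch of the loss itself; in practice this would be a differentiable term in the training graph rather than a standalone function:

```python
import numpy as np

def entropy_loss(probs, eps=1e-8):
    """Mean per-pixel Shannon entropy of softmax predictions.
    probs: array of shape (N, C, H, W) with class probabilities.
    Minimizing this pushes predictions toward confident one-hot
    outputs on unlabeled target data."""
    h = -(probs * np.log(probs + eps)).sum(axis=1)   # (N, H, W)
    return h.mean()

# A confident map has near-zero entropy; an ambiguous one is penalized.
confident = np.zeros((1, 3, 2, 2))
confident[0, 0] = 1.0
uniform = np.full((1, 3, 2, 2), 1 / 3)
print(entropy_loss(confident), entropy_loss(uniform))  # ~0 vs log(3)
```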

Journal ArticleDOI
TL;DR: In this article, a scale-adaptive mathematical morphology spectrum entropy (AMMSE) is proposed to improve scale selection; unlike existing methods, it is not constrained by information about the experiment or the signal.
Abstract: Mathematical morphology spectrum entropy is a signal feature extraction method based on information entropy and mathematical morphology. The scale of the structure element is a critical parameter, whose value determines the accuracy of feature extraction. Existing scale selection methods depend on experimental parameters or external indicators, including noise ratio, fault frequencies, etc. In many cases, existing methods yield a fixed scale and are not suitable for quantifying the performance degradation and fault degree of bearings. There has been little research on scale selection based on the properties of the mathematical morphology spectrum. In this study, a scale-adaptive mathematical morphology spectrum entropy (AMMSE) is proposed to improve scale selection. To support the proposed method, two properties of the mathematical morphology spectrum (MMS), namely non-negativity and monotonic decrease, are proved. It can be concluded from these two properties that the feature loss of the MMS decreases as the scale increases. Based on this conclusion, two adaptive scale selection strategies are proposed to automatically determine the scale by reducing the feature loss of the MMS; AMMSE is the integration of the two strategies. Compared to existing methods, AMMSE is not constrained by information about the experiment or the signal: its scale changes with the signal characteristics and is no longer fixed by experimental parameters, and its parameters are more generalizable as well. The presented method is applied to identify fault degree on the CWRU bearing data set and to evaluate performance degradation on the IMS bearing data set. The experimental results show that AMMSE performs better in both experiments with the same parameters.

Journal ArticleDOI
TL;DR: Class-specific information measures deepen existing classification-based information measures by a hierarchical isomorphism, while the informational class-specific reducts systematically perfect attribute reduction by level and viewpoint isomorphisms; both facilitate uncertainty measurement and information processing, especially at the class level.

Journal ArticleDOI
Fuyuan Xiao
TL;DR: In this paper, a generalized model of the traditional negation method is proposed to represent the knowledge involved with uncertain information, and an entropy measure, called $\mathcal{X}$ entropy, is proposed for the complex-valued distribution.
Abstract: In real applications of artificial and intelligent decision-making systems, how to represent the knowledge involved with uncertain information is still an open issue. The negation method has great significance for addressing this issue from another perspective. However, it has the limitation that it can be used only for the negation of a probability distribution. In this article, therefore, we propose a generalized model of the traditional one, so that it has a more powerful capability to represent knowledge and measure uncertainty. In particular, we first define a vector representation of the complex-valued distribution. Then, an entropy measure is proposed for the complex-valued distribution, called $\mathcal{X}$ entropy. In this context, a transformation function to acquire the negation of the complex-valued distribution is exploited on the basis of the newly defined $\mathcal{X}$ entropy. Afterward, the properties of this negation function are analyzed and investigated, along with some special cases. Finally, we study the negation function from the view of the $\mathcal{X}$ entropy. It is verified that the proposed negation method for the complex-valued distribution is a scheme with maximal entropy.
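The traditional negation being generalized here is usually attributed to Yager: each probability is replaced by its normalized complement, p̄_i = (1 − p_i)/(n − 1). Iterating it drives any distribution toward the maximum-entropy uniform one, which is the "maximal entropy" behavior the abstract refers to. A sketch:

```python
import numpy as np

def negation(p):
    """Yager's negation of a probability distribution."""
    p = np.asarray(p, dtype=float)
    return (1.0 - p) / (len(p) - 1)   # still sums to 1

def shannon(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

p = np.array([0.7, 0.2, 0.1])
for _ in range(5):
    print(np.round(p, 4), f"H = {shannon(p):.4f} bits")
    p = negation(p)
# Entropy increases toward log2(3): repeated negation converges
# to the maximal-entropy (uniform) distribution.
```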

Journal ArticleDOI
TL;DR: It is shown that this functional estimator enables straightforward measurement of information flow in realistic convolutional neural networks (CNNs) without any approximation. The partial information decomposition (PID) framework is then introduced, and three quantities are developed to analyze the synergy and redundancy in convolutional layer representations.
Abstract: A novel functional estimator for Renyi's $\alpha$-entropy and its multivariate extension was recently proposed in terms of the normalized eigenspectrum of a Hermitian matrix of the projected data in a reproducing kernel Hilbert space (RKHS). However, the utility and possible applications of these new estimators are rather new and mostly unknown to practitioners. In this brief, we first show that this estimator enables straightforward measurement of information flow in realistic convolutional neural networks (CNNs) without any approximation. Then, we introduce the partial information decomposition (PID) framework and develop three quantities to analyze the synergy and redundancy in convolutional layer representations. Our results validate two fundamental data processing inequalities and reveal more inner properties concerning CNN training.
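The estimator in question computes Rényi's α-entropy from the eigenspectrum of a normalized Gram matrix: S_α(A) = (1/(1−α)) log₂ Σ_i λ_i(A)^α, with A = K / tr(K) for a kernel matrix K. A compact sketch, assuming an RBF kernel and illustrative data:

```python
import numpy as np

def matrix_renyi_entropy(X, alpha=1.01, sigma=1.0):
    """Matrix-based Renyi alpha-entropy estimated from the
    eigenspectrum of a normalized RBF Gram matrix (an RKHS
    surrogate; alpha -> 1 approaches the von Neumann entropy)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    A = K / np.trace(K)                  # trace-one normalized Gram matrix
    lam = np.linalg.eigvalsh(A)
    lam = lam[lam > 1e-12]
    return np.log2((lam ** alpha).sum()) / (1 - alpha)

rng = np.random.default_rng(0)
tight = rng.normal(0, 0.1, (100, 5))     # low-complexity data: low entropy
spread = rng.normal(0, 2.0, (100, 5))    # high-complexity data: near log2(100)
print(matrix_renyi_entropy(tight), matrix_renyi_entropy(spread))
```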

Journal ArticleDOI
TL;DR: A fault diagnosis scheme based on multiscale diversity entropy (MDE) and extreme learning machine (ELM) is presented, achieving the highest classification accuracy compared with three existing approaches: sample entropy, fuzzy entropy, and permutation entropy.
Abstract: In this article, a fault diagnosis scheme based on multiscale diversity entropy (MDE) and extreme learning machine (ELM) is presented. First, a novel entropy method called diversity entropy (DE) is proposed to quantify dynamical complexity. DE utilizes the distribution of cosine similarity between adjacent orbits to track internal pattern changes, resulting in better performance in complexity estimation. Second, the proposed DE is extended to a multiscale analysis called MDE for a comprehensive feature description by combining it with the coarse-graining process. Third, the features obtained using MDE are fed into the ELM classifier for pattern identification of rotating machinery. The effectiveness of the proposed MDE method is verified using simulated signals and two experimental signals collected from the bearing test and the dual-rotor aeroengine test. The analysis results show that our proposed method has the highest classification accuracy compared with three existing approaches: sample entropy, fuzzy entropy, and permutation entropy.
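Based on the description above, diversity entropy tracks the distribution of cosine similarities between successive embedding orbits, and MDE wraps this in coarse-graining. A hedged reconstruction; the bin count and normalization are illustrative choices, not necessarily the paper's:

```python
import numpy as np

def diversity_entropy(x, m=4, n_bins=20):
    """Diversity entropy: normalized Shannon entropy of the histogram
    of cosine similarities between adjacent m-dimensional orbits."""
    orbits = np.lib.stride_tricks.sliding_window_view(x, m)
    a, b = orbits[:-1], orbits[1:]
    cos = (a * b).sum(1) / (np.linalg.norm(a, axis=1)
                            * np.linalg.norm(b, axis=1) + 1e-12)
    p, _ = np.histogram(cos, bins=n_bins, range=(-1, 1))
    p = p[p > 0] / p.sum()
    return -(p * np.log(p)).sum() / np.log(n_bins)   # in [0, 1]

def coarse_grain(x, scale):
    """Multiscale step: average non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(1)

rng = np.random.default_rng(0)
noise = rng.standard_normal(2000)              # complex signal: high DE
tone = np.sin(0.2 * np.arange(2000))           # regular signal: low DE
for s in (1, 2, 4):
    print(s, diversity_entropy(coarse_grain(noise, s)),
             diversity_entropy(coarse_grain(tone, s)))
```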

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a provably entropy stable shock capturing approach for the high order entropy stable Discontinuous Galerkin Spectral Element Method (DGSEM) based on a hybrid blending with a subcell low order variant.

Journal ArticleDOI
TL;DR: In this article, the geometric Renyi divergence (GRD) is studied from the point of view of quantum information theory; it has many appealing structural properties that are not satisfied by other quantum Renyi divergences.
Abstract: Having a distance measure between quantum states satisfying the right properties is of fundamental importance in all areas of quantum information. In this work, we present a systematic study of the geometric Renyi divergence (GRD), also known as the maximal Renyi divergence, from the point of view of quantum information theory. We show that this divergence, together with its extension to channels, has many appealing structural properties, which are not satisfied by other quantum Renyi divergences. For example, we prove a chain rule inequality that immediately implies the "amortization collapse" for the geometric Renyi divergence, addressing an open question by Berta et al. [Letters in Mathematical Physics 110:2277–2336, 2020, Equation (55)] in the area of quantum channel discrimination. As applications, we explore various channel capacity problems and construct new channel information measures based on the geometric Renyi divergence, sharpening the previously best-known bounds based on the max-relative entropy while keeping the new bounds single-letter and efficiently computable. A plethora of examples are investigated, and the improvements are evident for almost all cases.

Journal ArticleDOI
TL;DR: An intuitionistic fuzzy MABAC method is developed to solve real-life multiple attribute group decision-making problems by extending the conventional multi-attributive border approximation area comparison (MABAC) model.
Abstract: The development of information measures associated with fuzzy and intuitionistic fuzzy sets has been an important research area over the past few decades. Divergence and entropy are two significant information measures in intuitionistic fuzzy set (IFS) theory, which have gained wide attention from researchers due to their extensive applications in different areas. In the literature, the existing information measures for IFSs have some drawbacks, which make them unsuitable for use in application areas. In order to obtain more robust and flexible information measures for IFSs, the present work develops and studies some parametric information measures under the intuitionistic fuzzy environment. First, the paper reviews the existing intuitionistic fuzzy divergence measures in detail, with their shortcomings, and then proposes four new order-α divergence measures between two IFSs. It is worth mentioning that the developed divergence measures satisfy several elegant mathematical properties. Second, we define four new entropy measures, called order-α intuitionistic fuzzy entropy measures, to quantify the fuzziness associated with an IFS. We prove basic and advanced properties of the order-α intuitionistic fuzzy entropy measures to justify their validity. The paper shows that the introduced measures include various existing fuzzy and intuitionistic fuzzy information measures as special cases. Further, utilizing the conventional multi-attributive border approximation area comparison (MABAC) model, we develop an intuitionistic fuzzy MABAC method to solve real-life multiple attribute group decision-making problems. Finally, the proposed method is demonstrated using a practical application of personnel selection.
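The border approximation area comparison at the core of MABAC is straightforward in the crisp case: the border for each criterion is the geometric mean of the weighted normalized values, and alternatives are scored by their summed signed distance from it. A crisp sketch with invented candidate data; the paper's variant operates on intuitionistic fuzzy values instead:

```python
import numpy as np

def mabac(X, weights, benefit):
    """Crisp MABAC: score alternatives by summed signed distance
    from the border approximation area (BAA)."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    R = np.where(benefit, (X - lo) / (hi - lo), (hi - X) / (hi - lo))
    V = weights * (R + 1.0)                  # weighted normalized matrix
    G = V.prod(axis=0) ** (1.0 / len(X))     # BAA: geometric mean per criterion
    Q = V - G                                # signed distance from the border
    return Q.sum(axis=1)                     # higher = better

# Personnel selection toy data: salary demand, experience, test score.
X = np.array([[22000., 3., 2.],
              [21500., 4., 3.],
              [25000., 5., 3.],
              [23000., 2., 2.]])
scores = mabac(X, weights=np.array([0.4, 0.35, 0.25]),
               benefit=np.array([False, True, True]))
print(np.argsort(scores)[::-1])              # best candidate first
```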

Journal ArticleDOI
TL;DR: A new method based on Deng entropy to measure the uncertainty of orderable sets is proposed, and some numerical examples are used to illustrate the efficiency and accuracy of the proposed method.

Journal ArticleDOI
TL;DR: A methodology for measuring the degree of unpredictability in dynamical systems with memory, i.e., systems with responses dependent on a history of past states is developed, which can be employed in a variety of settings.
Abstract: The aim of this article is to develop a methodology for measuring the degree of unpredictability in dynamical systems with memory, i.e., systems with responses dependent on a history of past states. The proposed model is generic and can be employed in a variety of settings, although its applicability here is examined in the particular context of an industrial environment: gas turbine engines. The given approach consists of approximating the probability distribution of the outputs of a system with a deep recurrent neural network; such networks are capable of exploiting the memory in the system for enhanced forecasting capability. Once the probability distribution is retrieved, the entropy, or missing information about the underlying process, is computed, which is interpreted as the uncertainty with respect to the system's behavior. Hence, the model identifies how far the system dynamics are from its typical response, in order to evaluate system reliability and to predict system faults and/or normal accidents. The validity of the model is verified with sensor data recorded from commissioning gas turbines, covering both normal and faulty conditions.
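Once the recurrent network has produced a predictive distribution for the next output, the "missing information" is just its entropy; for a Gaussian prediction this is available in closed form, H = ½ log(2πeσ²). A sketch of the monitoring step, with the RNN abstracted into hypothetical per-timestep predictive standard deviations:

```python
import numpy as np

def gaussian_entropy(sigma):
    """Differential entropy of N(mu, sigma^2), in nats."""
    return 0.5 * np.log(2 * np.pi * np.e * np.asarray(sigma) ** 2)

def unpredictability(pred_sigmas, baseline_sigma):
    """How far current dynamics are from the typical response:
    excess predictive entropy over a healthy-operation baseline."""
    return gaussian_entropy(pred_sigmas) - gaussian_entropy(baseline_sigma)

# Hypothetical per-timestep predictive std-devs from the RNN head:
healthy = [0.9, 1.0, 1.1, 1.0]
drifting = [1.0, 1.6, 2.4, 3.5]   # widening uncertainty: possible fault
print(unpredictability(healthy, baseline_sigma=1.0))
print(unpredictability(drifting, baseline_sigma=1.0))
```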

Journal ArticleDOI
TL;DR: The concept of blind fault component separation should be used to separate low-frequency periodic components (a deterministic signal) from high-frequency repetitive transients (a sparse signal) before complexity measures are used for machine condition monitoring, so as to avoid uncertainty in the monitoring results.

Journal ArticleDOI
TL;DR: In this paper, the authors developed a method to learn physical systems from data that employs feedforward neural networks and whose predictions comply with the first and second principles of thermodynamics, by enforcing the metriplectic structure of dissipative Hamiltonian systems in the form of the so-called General Equation for the Non-Equilibrium Reversible-Irreversible Coupling.

Journal ArticleDOI
TL;DR: In this article, a new entropy metric is proposed that is universally applicable to crystalline materials, from simple solid-solution high-entropy alloys to high-entropy materials with complex crystal structures and multiple sublattices.

Journal ArticleDOI
TL;DR: This paper defines an entropy-based objective function for the initialization process, which outperforms other existing initialization methods for K-means clustering, and designs an algorithm to calculate the correct number of clusters in datasets using several cluster validity indexes.
Abstract: Clustering is an unsupervised learning approach used to group similar features using specific mathematical criteria. This mathematical criterion is known as the objective function, and any clustering is done according to some objective function. K-means is one of the widely used partitional clustering algorithms, whose performance depends on the initial points and the value of K. In this paper, we have considered both of these parameters. We have defined an entropy-based objective function for the initialization process, which is better than other existing initialization methods for K-means clustering. We have also designed an algorithm to calculate the correct number of clusters in datasets using several cluster validity indexes. The entropy-based initialization algorithm is proposed and applied to different 2D and 3D data sets, and a comparison with other existing initialization methods is presented.
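The paper's exact objective function is not reproduced here, but the shape of an entropy-guided initialization is easy to sketch: score each candidate seed set by the entropy of the cluster-membership distribution it induces and keep the most balanced one. A purely illustrative sketch, not the authors' algorithm:

```python
import numpy as np

def membership_entropy(X, centers):
    """Shannon entropy of the cluster-size distribution induced by
    assigning each point to its nearest center; higher entropy means
    a more balanced (less degenerate) initialization."""
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = d.argmin(1)
    p = np.bincount(labels, minlength=len(centers)) / len(X)
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def entropy_init(X, k, n_trials=50, seed=0):
    """Pick, among random seed sets, the one maximizing membership entropy."""
    rng = np.random.default_rng(seed)
    best, best_h = None, -1.0
    for _ in range(n_trials):
        centers = X[rng.choice(len(X), size=k, replace=False)]
        h = membership_entropy(X, centers)
        if h > best_h:
            best, best_h = centers, h
    return best

# X = ...; centers = entropy_init(X, k=3), then run standard K-means.
```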

Journal ArticleDOI
TL;DR: A novel approach to forecasting carbon prices is proposed, which combines a data preprocessing mechanism, decomposition technology, a forecast module with a selection and matching strategy, and an ensemble model based on an original hybrid optimization algorithm, providing a convincing tool for the operation of, and investment in, carbon markets.

Journal ArticleDOI
TL;DR: This paper critically analyzes the available ranking techniques for q-rung orthopair fuzzy values and proposes a new graphical ranking method based on the hesitancy index and entropy that is intuitive and convenient for real-life applications.
Abstract: In intuitionistic fuzzy sets and their generalizations, such as Pythagorean fuzzy sets and q-rung orthopair fuzzy sets, ranking is not easy to define. Several techniques are available in the literature for ranking values in the above-mentioned orthopair fuzzy sets, and it is interesting to see that almost all the proposed ranking methods produce distinct rankings. The notion of a knowledge base is very important for studying the rankings proposed by different techniques. The aim of this paper is to critically analyze the available ranking techniques for q-rung orthopair fuzzy values and to propose a new graphical ranking method based on the hesitancy index and entropy. Several numerical examples are tested with the proposed technique, showing that it is intuitive and convenient for real-life applications.
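For context, a q-rung orthopair fuzzy value is a pair (μ, ν) with μ^q + ν^q ≤ 1, and its hesitancy index is π = (1 − μ^q − ν^q)^{1/q}. A small sketch of the two ingredients the graphical method ranks by; the entropy form below is one common choice from the literature, not necessarily the paper's:

```python
import numpy as np

def hesitancy(mu, nu, q=3):
    """Hesitancy index of a q-rung orthopair fuzzy value (mu, nu)."""
    return (1.0 - mu ** q - nu ** q) ** (1.0 / q)

def qrofs_entropy(mu, nu, q=3):
    """An illustrative entropy: 1 at total ambiguity (mu = nu),
    smaller for crisper values; one of several forms in use."""
    return 1.0 - abs(mu ** q - nu ** q)

for mu, nu in [(0.9, 0.3), (0.7, 0.7), (0.5, 0.5)]:
    print(mu, nu, hesitancy(mu, nu), qrofs_entropy(mu, nu))
# Two values with equal score can still differ in hesitancy and entropy,
# which is what a graphical ranking over both axes can exploit.
```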

Journal ArticleDOI
TL;DR: An efficient kernel sample equivalence replacement method is established to replace the partial differential operations of the kernel gradient algorithm; it converts nonlinear fault detection indicators into standard quadratic forms of the original variable samples, thereby making it possible to solve the nonlinear fault diagnosis problem by linear means.
Abstract: This article is devoted to solving the problem of quality-related root cause diagnosis for nonlinear processes. First, an orthogonal kernel principal component regression model is constructed to achieve orthogonal decomposition of the feature space, such that quality-related and quality-unrelated faults can be detected separately in subspaces of opposite correlation to the output, without any effect on each other. Then, in view of the high complexity of traditional nonlinear fault diagnosis methods, an efficient kernel sample equivalence replacement method is established to replace the partial differential operations of the kernel gradient algorithm, which converts nonlinear fault detection indicators into standard quadratic forms of the original variable samples, thereby making it possible to solve the nonlinear fault diagnosis problem by linear means. Furthermore, a transfer entropy algorithm is applied to the new model to analyze the causality between the diagnosed candidate faulty variables and find the exact root cause of the fault. Finally, comparative studies between the latest results and the proposed method are carried out on the Tennessee Eastman process to verify the effectiveness and superiority of the new method.
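Transfer entropy from x to y quantifies how much the past of x reduces uncertainty about the next value of y beyond y's own past: TE_{x→y} = Σ p(y_{t+1}, y_t, x_t) log [ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ]. A plug-in (histogram) sketch for discretized process variables, with a synthetic causal pair for illustration:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE(x -> y), in bits, for discrete series."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]        # p(y1 | y0, x0)
        p_cond_self = pairs_yy[(y1, y0)] / singles[y0]  # p(y1 | y0)
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 10_000)
y = np.empty_like(x)
y[0] = 0
y[1:] = x[:-1] ^ (rng.random(x.size - 1) < 0.05)  # y driven by the past of x
print(transfer_entropy(x.tolist(), y.tolist()))   # clearly positive
print(transfer_entropy(y.tolist(), x.tolist()))   # near zero: no reverse causation
```

In a root cause analysis, such pairwise TE values over the candidate faulty variables give a directed influence graph whose source nodes point to the fault origin.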