
Showing papers on "Entropy (information theory) published in 2019"


Proceedings ArticleDOI
15 Jun 2019
TL;DR: This work addresses unsupervised domain adaptation in semantic segmentation with losses based on the entropy of the pixel-wise predictions, proposing two novel, complementary methods: (i) a direct entropy loss and (ii) an adversarial loss.
Abstract: Semantic segmentation is a key problem for many computer vision tasks. While approaches based on convolutional neural networks constantly break new records on different benchmarks, generalizing well to diverse testing environments remains a major challenge. In numerous real-world applications, there is indeed a large gap between data distributions in train and test domains, which results in severe performance loss at run-time. In this work, we address the task of unsupervised domain adaptation in semantic segmentation with losses based on the entropy of the pixel-wise predictions. To this end, we propose two novel, complementary methods using (i) entropy loss and (ii) adversarial loss respectively. We demonstrate state-of-the-art performance in semantic segmentation on two challenging “synthetic-2-real” set-ups and show that the approach can also be used for detection.

1,034 citations
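
The entropy loss in this setting is simple to state: for each target pixel, take the Shannon entropy of the softmax prediction and minimize its average. Below is a minimal PyTorch-style sketch; the shapes, names, and the log(C) normalization are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a pixel-wise entropy loss for unsupervised domain
# adaptation; shapes and normalization are assumptions for illustration.
import math
import torch
import torch.nn.functional as F

def pixelwise_entropy_loss(logits: torch.Tensor) -> torch.Tensor:
    """Mean normalized Shannon entropy of per-pixel class predictions.

    logits: (batch, num_classes, H, W) raw segmentation scores.
    """
    num_classes = logits.shape[1]
    probs = F.softmax(logits, dim=1)                      # (B, C, H, W)
    ent_map = -(probs * torch.log(probs + 1e-12)).sum(1)  # (B, H, W)
    # Divide by log(C) so the loss lies in [0, 1] regardless of class count.
    return ent_map.mean() / math.log(num_classes)

# Usage: add lambda_ent * pixelwise_entropy_loss(target_logits) to the
# supervised source loss when training on unlabeled target images.
```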


Journal ArticleDOI
Fuyuan Xiao
TL;DR: A novel method for multi-sensor data fusion based on a new belief divergence measure of evidences and the belief entropy is proposed; it outperforms other related methods, with the basic belief assignment of the true target reaching 89.73%.

447 citations


Journal ArticleDOI
Wu Deng, Rui Yao, Huimin Zhao, Xinhua Yang, Guangyu Li
01 Apr 2019
TL;DR: The fuzzy information entropy can accurately and more completely extract the characteristics of the vibration signal, the improved PSO algorithm can effectively improve the classification accuracy of LS-SVM, and the proposed fault diagnosis method outperforms the other mentioned methods.
Abstract: Aiming at the problem that most existing fault diagnosis methods cannot effectively recognize early faults in rotating machinery, empirical mode decomposition, fuzzy information entropy, an improved particle swarm optimization algorithm and least squares support vector machines are combined into a novel intelligent diagnosis method, which is applied to diagnose the faults of a motor bearing in this paper. In the proposed method, the vibration signal is decomposed into a set of intrinsic mode functions (IMFs) by the empirical mode decomposition method. The fuzzy information entropy values of the IMFs are calculated to reveal the intrinsic characteristics of the vibration signal and are taken as feature vectors. Then the diversity mutation strategy, neighborhood mutation strategy, learning factor strategy and inertia weight strategy for the basic particle swarm optimization (PSO) algorithm are used to propose an improved PSO algorithm. The improved PSO algorithm is used to optimize the parameters of least squares support vector machines (LS-SVM) in order to construct an optimal LS-SVM classifier, which is used to classify the faults. Finally, the proposed fault diagnosis method is fully evaluated by experiments and comparative studies on the motor bearing. The experimental results indicate that the fuzzy information entropy can accurately and completely extract the characteristics of the vibration signal. The improved PSO algorithm effectively improves the classification accuracy of LS-SVM, and the proposed fault diagnosis method outperforms the other methods mentioned in this paper and published in the literature. It provides a new method for fault diagnosis of rotating machinery.

365 citations
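
For readers unfamiliar with the entropy feature used here, the sketch below computes a generic fuzzy entropy of a 1-D signal (e.g. one IMF) in NumPy, following the common exponential-membership definition; the paper's exact "fuzzy information entropy" and the EMD, improved PSO, and LS-SVM stages are not reproduced, and the parameter defaults are assumptions.

```python
# Generic fuzzy entropy sketch; defaults and membership function are assumptions.
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2, n=2):
    """Fuzzy entropy with embedding dimension m, tolerance r*std(x), power n."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def phi(dim):
        # Embedding vectors with their own mean removed.
        vecs = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        vecs = vecs - vecs.mean(axis=1, keepdims=True)
        # Chebyshev distance between every pair of vectors.
        d = np.max(np.abs(vecs[:, None, :] - vecs[None, :, :]), axis=2)
        sim = np.exp(-(d ** n) / tol)      # fuzzy similarity degree
        np.fill_diagonal(sim, 0.0)         # exclude self-matches
        return sim.sum() / (len(vecs) * (len(vecs) - 1))

    return np.log(phi(m)) - np.log(phi(m + 1))
```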


Journal ArticleDOI
TL;DR: Entropy theory is used to describe the distribution and powering of small electronics in the era of the internet of things; it is concluded that “ordered” energy can meet part of the power need of distributed electronics for the IoT, while the remaining part has to be supplied by “random” energy harvested from the environment.

292 citations


Posted Content
TL;DR: A novel Minimax Entropy (MME) approach that adversarially optimizes an adaptive few-shot model for semi-supervised domain adaptation (SSDA) setting, setting a new state of the art for SSDA.
Abstract: Contemporary domain adaptation methods are very effective at aligning feature distributions of source and target domains without any target supervision. However, we show that these techniques perform poorly when even a few labeled examples are available in the target. To address this semi-supervised domain adaptation (SSDA) setting, we propose a novel Minimax Entropy (MME) approach that adversarially optimizes an adaptive few-shot model. Our base model consists of a feature encoding network, followed by a classification layer that computes the features' similarity to estimated prototypes (representatives of each class). Adaptation is achieved by alternately maximizing the conditional entropy of unlabeled target data with respect to the classifier and minimizing it with respect to the feature encoder. We empirically demonstrate the superiority of our method over many baselines, including conventional feature alignment and few-shot methods, setting a new state of the art for SSDA.

263 citations


Proceedings ArticleDOI
01 Oct 2019
TL;DR: In this paper, a novel Minimax Entropy (MME) approach that adversarially optimizes an adaptive few-shot model is proposed for semi-supervised domain adaptation, where a few labeled examples are available in the target domain.
Abstract: Contemporary domain adaptation methods are very effective at aligning feature distributions of source and target domains without any target supervision. However, we show that these techniques perform poorly when even a few labeled examples are available in the target domain. To address this semi-supervised domain adaptation (SSDA) setting, we propose a novel Minimax Entropy (MME) approach that adversarially optimizes an adaptive few-shot model. Our base model consists of a feature encoding network, followed by a classification layer that computes the features' similarity to estimated prototypes (representatives of each class). Adaptation is achieved by alternately maximizing the conditional entropy of unlabeled target data with respect to the classifier and minimizing it with respect to the feature encoder. We empirically demonstrate the superiority of our method over many baselines, including conventional feature alignment and few-shot methods, setting a new state of the art for SSDA. Our code is available at http://cs-people.bu.edu/keisaito/research/MME.html.

262 citations
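
The quantity being fought over in MME is the conditional entropy of the classifier's predictions on unlabeled target data. A minimal sketch under assumed shapes and names (not the released code at the URL above):

```python
# Minimal sketch of the conditional-entropy term in minimax entropy adaptation.
import torch
import torch.nn.functional as F

def conditional_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Average Shannon entropy H(p(y|x)) over a batch of unlabeled targets."""
    probs = F.softmax(logits, dim=1)
    return -(probs * torch.log(probs + 1e-12)).sum(dim=1).mean()

# Adversarial use: the classifier (prototype) parameters are updated to
# *maximize* this entropy while the feature encoder is updated to *minimize*
# it, typically via a gradient-reversal layer or a flipped-sign loss term.
```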


Proceedings ArticleDOI
01 Oct 2019
TL;DR: The maximum squares loss is proposed to balance the gradient of well-classified target samples and prevent easy-to-transfer samples from dominating training, and an image-wise weighting ratio is introduced to alleviate the class imbalance in the unlabeled target domain.
Abstract: Deep neural networks for semantic segmentation always require a large number of samples with pixel-level labels, which becomes the major difficulty in their real-world applications. To reduce the labeling cost, unsupervised domain adaptation (UDA) approaches are proposed to transfer knowledge from labeled synthesized datasets to unlabeled real-world datasets. Recently, some semi-supervised learning methods have been applied to UDA and achieved state-of-the-art performance. One of the most popular approaches in semi-supervised learning is the entropy minimization method. However, when applying the entropy minimization to UDA for semantic segmentation, the gradient of the entropy is biased towards samples that are easy to transfer. To balance the gradient of well-classified target samples, we propose the maximum squares loss. Our maximum squares loss prevents the training process being dominated by easy-to-transfer samples in the target domain. Besides, we introduce the image-wise weighting ratio to alleviate the class imbalance in the unlabeled target domain. Both synthetic-to-real and cross-city adaptation experiments demonstrate the effectiveness of our proposed approach. The code is released at https://github.com/ZJULearning/MaxSquareLoss.

170 citations
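
A minimal sketch of a maximum squares loss on unlabeled target predictions, under assumed tensor shapes (the authors' implementation is in the linked repository): minimizing the negative mean of the squared softmax probabilities pushes predictions toward one-hot, like entropy minimization, but with a gradient that grows only linearly in the probability, so confident, easy-to-transfer pixels do not dominate.

```python
# Minimal sketch of a maximum squares loss; shapes are assumptions.
import torch
import torch.nn.functional as F

def max_squares_loss(logits: torch.Tensor) -> torch.Tensor:
    """Negative half of the mean squared softmax probabilities."""
    probs = F.softmax(logits, dim=1)             # (B, C, H, W)
    return -0.5 * (probs ** 2).sum(dim=1).mean()
```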


Posted Content
TL;DR: A general theory of regularized Markov Decision Processes that generalizes these approaches in two directions: a larger class of regularizers, and the general modified policy iteration approach, encompassing both policy iteration and value iteration.
Abstract: Many recent successful (deep) reinforcement learning algorithms make use of regularization, generally based on entropy or Kullback-Leibler divergence. We propose a general theory of regularized Markov Decision Processes that generalizes these approaches in two directions: we consider a larger class of regularizers, and we consider the general modified policy iteration approach, encompassing both policy iteration and value iteration. The core building blocks of this theory are a notion of regularized Bellman operator and the Legendre-Fenchel transform, a classical tool of convex optimization. This approach allows for error propagation analyses of general algorithmic schemes of which (possibly variants of) classical algorithms such as Trust Region Policy Optimization, Soft Q-learning, Stochastic Actor Critic or Dynamic Policy Programming are special cases. This also draws connections to proximal convex optimization, especially to Mirror Descent.

160 citations
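
As a concrete member of this family, the sketch below applies one entropy-regularized (soft) Bellman backup in NumPy; the log-sum-exp appearing here is the Legendre-Fenchel conjugate of the negative Shannon entropy. Array shapes and the temperature value are assumptions for illustration, not the paper's notation.

```python
# One soft (entropy-regularized) Bellman backup on a tabular MDP.
import numpy as np
from scipy.special import logsumexp

def soft_bellman_backup(Q, P, R, gamma=0.99, tau=0.1):
    """Apply the soft Bellman operator once.

    Q: (S, A) action values, P: (S, A, S) transition probabilities,
    R: (S, A) rewards. Returns the updated (S, A) action values.
    """
    # Soft value V(s) = tau * log sum_a exp(Q(s, a) / tau): the
    # Legendre-Fenchel conjugate of the negative Shannon entropy.
    V = tau * logsumexp(Q / tau, axis=1)              # (S,)
    return R + gamma * np.einsum("sat,t->sa", P, V)   # (S, A)
```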


Journal ArticleDOI
TL;DR: A heuristic feature selection algorithm with low computational complexity is presented to improve the performance of cancer classification using gene expression data; it outperforms other related methods in terms of the number of selected genes and the classification accuracy, especially as the number of genes increases.

153 citations


Journal ArticleDOI
TL;DR: Experimental results indicate that the proposed preprocessing method is essential for extracting effective FWPDE features, and that FWPDE is more powerful than the classical wavelet package decomposition energy entropy.
Abstract: The train plug door is the only way for passengers to get on and off, and its failure makes train operation ineffective. Taking advantage of developed digital signal processing technologies, a data-driven diagnosis method for train plug doors is proposed based on sound recognition. First, a novel preprocessing method based on empirical mode decomposition and a hybrid intrinsic mode function (IMF) selection criterion is proposed. The selected significant IMFs are used to reconstruct the signals. Inspired by the idea of fractional calculus, a novel entropy named fractional wavelet package decomposition energy entropy (FWPDE) is proposed. Finally, a multi-class support vector machine is used for classification and validation. Experimental results indicate that the proposed preprocessing method is essential for extracting effective FWPDE features. In addition, FWPDE is more powerful than the classical wavelet package decomposition energy entropy. The identification accuracy of the proposed method reaches 96.28%, which demonstrates its effectiveness and superiority.

147 citations
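
The classical baseline the paper compares against, wavelet packet decomposition energy entropy, is just the Shannon entropy of the normalized sub-band energies. A minimal NumPy sketch is below; the sub-band arrays are assumed to come from a wavelet packet decomposition, and the fractional-calculus weighting that defines FWPDE is not reproduced.

```python
# Classical wavelet packet energy entropy from a list of sub-band coefficients.
import numpy as np

def energy_entropy(subbands):
    """Shannon entropy of the normalized energy distribution over sub-bands."""
    energies = np.array([np.sum(np.square(c)) for c in subbands], dtype=float)
    p = energies / energies.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))
```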


Journal ArticleDOI
TL;DR: A fractional Fourier entropy (FrFE)-based hyperspectral anomaly detection method is proposed that can significantly distinguish signal from background and noise, and is implemented in the optimal fractional domain.
Abstract: Anomaly detection is an important task in hyperspectral remote sensing. Most widely used detectors, such as Reed–Xiaoli (RX), have been developed only using original spectral signatures, which may lack the capability of signal enhancement and noise suppression. In this article, an effective alternative approach, fractional Fourier entropy (FrFE)-based hyperspectral anomaly detection method, is proposed. First, fractional Fourier transform (FrFT) is employed as preprocessing, which obtains features in an intermediate domain between the original reflectance spectrum and its Fourier transform with complementary strengths by space-frequency representations. It is desirable for noise removal so as to enhance the discrimination between anomalies and background. Furthermore, an FrFE-based step is developed to automatically determine an optimal fractional transform order. With a more flexible constraint, i.e., Shannon entropy uncertainty principle on FrFT, the proposed method can significantly distinguish signal from background and noise. Finally, the proposed FrFE-based anomaly detection method is implemented in the optimal fractional domain. Experimental results obtained on real hyperspectral datasets demonstrate that the proposed method is quite competitive.

Posted Content
TL;DR: The maximum squares loss prevents the training process being dominated by easy-to-transfer samples in the target domain, and introduces the image-wise weighting ratio to alleviate the class imbalance in the unlabeled target domain.
Abstract: Deep neural networks for semantic segmentation always require a large number of samples with pixel-level labels, which becomes the major difficulty in their real-world applications. To reduce the labeling cost, unsupervised domain adaptation (UDA) approaches are proposed to transfer knowledge from labeled synthesized datasets to unlabeled real-world datasets. Recently, some semi-supervised learning methods have been applied to UDA and achieved state-of-the-art performance. One of the most popular approaches in semi-supervised learning is the entropy minimization method. However, when applying the entropy minimization to UDA for semantic segmentation, the gradient of the entropy is biased towards samples that are easy to transfer. To balance the gradient of well-classified target samples, we propose the maximum squares loss. Our maximum squares loss prevents the training process being dominated by easy-to-transfer samples in the target domain. Besides, we introduce the image-wise weighting ratio to alleviate the class imbalance in the unlabeled target domain. Both synthetic-to-real and cross-city adaptation experiments demonstrate the effectiveness of our proposed approach. The code is released at https://github.com/ZJULearning/MaxSquareLoss.

Journal ArticleDOI
TL;DR: It is found, using both simulated and real-world complex data, that constraint-based algorithms are often less accurate than score-based algorithms, but are seldom faster (even at large sample sizes); and that hybrid algorithms are neither faster nor more accurate than constraint-based algorithms.

Journal ArticleDOI
TL;DR: A multi-objective particle swarm optimization (MOPSO) algorithm is proposed to optimize the parameters of VMD, and it is applied to the composite fault diagnosis of the gearbox.
Abstract: The selection of variational mode decomposition (VMD) parameters usually adopts an empirical method, a trial-and-error method, or a single-objective optimization method, none of which achieves a globally optimal result. Therefore, a multi-objective particle swarm optimization (MOPSO) algorithm is proposed to optimize the parameters of VMD, and it is applied to the composite fault diagnosis of a gearbox. The specific steps are as follows. First, symbol dynamic entropy (SDE) effectively removes background noise and uses state mode probabilities and state transitions to preserve fault information, while power spectral entropy (PSE) reflects the complexity of the signal's frequency composition; therefore, SDE and PSE are selected as fitness functions, and the Pareto-optimal solution set is obtained by the MOPSO algorithm. Finally, the optimal combination of VMD parameters (K, α) is obtained by normalization. The improved VMD is used to analyze a simulation signal and a gearbox fault signal. The effectiveness of the proposed method is verified by comparison with ensemble empirical mode decomposition (EEMD).
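
One of the two fitness functions, power spectral entropy, has a compact definition: the Shannon entropy of the normalized power spectrum. A minimal sketch follows; the FFT-based PSD estimate is an assumption, and symbol dynamic entropy and the MOPSO search itself are not shown.

```python
# Power spectral entropy of a 1-D signal (one of the MOPSO fitness functions).
import numpy as np

def power_spectral_entropy(x):
    """Shannon entropy of the normalized power spectrum of a 1-D signal."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))
```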

Journal ArticleDOI
TL;DR: This paper puts forward an ELECTRE II method with probabilistic linguistic information to handle the edge node selection problem and compares it with previous methods to verify its superiority.
Abstract: The edge node selection problem in edge computing is a typical multi-criteria group decision-making problem. In this paper, we put forward an ELECTRE II method with the probabilistic linguistic information to handle the edge node selection problem. First, a novel distance measure is developed for probabilistic linguistic term sets (PLTSs) and an entropy measure is devised to measure the uncertainty degree of PLTSs. Based on the score value and entropy, a novel method is put forward to compare two PLTSs. Next, a weight-determining method for criteria based on multiple correlation coefficient and a weight-determining method for experts based on entropy theory are proposed. After that, a novel probabilistic linguistic ELECTRE II method is put forward to deal with the edge node selection problem. Comparison with previous methods is provided to verify the superiority of our method.

Journal ArticleDOI
TL;DR: Uncertainty measures are introduced using a newly defined divergence-based cross entropy measure of Atanassov's intuitionistic fuzzy sets, and application examples demonstrate that the proposed measures give reasonable results coinciding with other existing methods.

Journal ArticleDOI
TL;DR: Statistical analyses show that this approach can protect the image against statistical attacks, and the entropy test results illustrate that the entropy values are close to the ideal; hence, the proposed algorithm is secure against entropy attacks.
Abstract: In this paper, a novel image encryption algorithm is proposed based on the combination of a chaos sequence and a modified AES algorithm. In this method, the encryption key is generated by an Arnold chaos sequence. Then, the original image is encrypted using the modified AES algorithm with the round keys produced by the chaos system. The proposed approach not only reduces the time complexity of the algorithm but also adds diffusion ability, which makes the images encrypted by the proposed algorithm resistant to differential attacks. The key space of the proposed method is large enough to resist brute-force attacks. The method is highly sensitive to the initial values and the input image, so that small changes in these values lead to significant changes in the encrypted image. Using statistical analyses, we show that this approach can protect the image against statistical attacks. The entropy test results illustrate that the entropy values are close to the ideal, and hence the proposed algorithm is secure against entropy attacks. The simulation results confirm that small changes in the original image or key result in significant changes in the encrypted image, and that the original image cannot be recovered.
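
The entropy test mentioned here is typically the Shannon entropy of the cipher-image histogram, which should approach 8 bits per pixel for an 8-bit image. A minimal sketch (the encryption algorithm itself is not reproduced):

```python
# Shannon entropy test for an 8-bit cipher image.
import numpy as np

def image_entropy(img_u8):
    """Shannon entropy (bits/pixel) of an 8-bit grayscale image."""
    hist = np.bincount(np.asarray(img_u8, dtype=np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```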

Proceedings Article
01 Jan 2019
TL;DR: Several fundamental statistical bounds for entropic OT are proved with the squared Euclidean cost between subgaussian probability measures in arbitrary dimension and a central limit theorem is established based on techniques developed by Del Barrio and Loubes.
Abstract: We prove several fundamental statistical bounds for entropic OT with the squared Euclidean cost between subgaussian probability measures in arbitrary dimension. First, through a new sample complexity result we establish the rate of convergence of entropic OT for empirical measures. Our analysis improves exponentially on the bound of Genevay et al. (2019) and extends their work to unbounded measures. Second, we establish a central limit theorem for entropic OT, based on techniques developed by Del Barrio and Loubes (2019). Previously, such a result was only known for finite metric spaces. As an application of our results, we develop and analyze a new technique for estimating the entropy of a random variable corrupted by Gaussian noise.
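
The quantity whose statistics are analyzed, entropic OT between discrete (e.g. empirical) measures, is usually computed with Sinkhorn iterations. A minimal NumPy sketch under one common convention; the regularization strength, iteration count, and the reported cost convention are assumptions, and the paper is about statistical properties of this quantity, not the algorithm.

```python
# Entropic OT between discrete measures via Sinkhorn iterations.
import numpy as np

def entropic_ot_cost(a, b, C, eps=0.1, n_iter=500):
    """Transport cost <P, C> under the entropic-optimal coupling P.

    a: (n,) and b: (m,) nonnegative weights summing to 1; C: (n, m) costs
    (e.g. squared Euclidean distances between the two samples).
    """
    K = np.exp(-C / eps)
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # optimal coupling
    return np.sum(P * C)
```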

Journal ArticleDOI
TL;DR: A novel method to obtain the negation of the basic probability assignment (BPA) is proposed, several methods are used to measure the uncertainty of the BPA after each negation process, and the connection between uncertain information and entropy is discussed.
Abstract: In the field of information science, how to represent uncertain information is still an open issue. Negation is an important way to represent information. However, the existing negation method has limitations since it can only be applied to probability distributions. To address this issue, this paper proposes a novel method to obtain the negation of the basic probability assignment (BPA). Moreover, several methods are used to measure the uncertainty of the BPA after each negation process, and the connection between uncertain information and entropy is discussed. Furthermore, based on the negation, this paper proposes a method to measure the uncertainty of the BPA. Finally, numerical examples are used to demonstrate the efficiency of the proposed method.
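
The paper's negation of a BPA is not reproduced here, but the probability-distribution negation it generalizes (Yager's negation) is easy to state and illustrates the entropy behaviour being discussed: repeated negation flattens the distribution and drives its entropy up. A hedged sketch:

```python
# Yager's negation of a probability distribution and its entropy growth;
# the paper's extension of this idea to BPAs is not reproduced.
import numpy as np

def negate(p):
    """Yager's negation: p_bar_i = (1 - p_i) / (n - 1)."""
    p = np.asarray(p, dtype=float)
    return (1.0 - p) / (len(p) - 1)

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Repeated negation drives the distribution toward uniform, so its entropy
# increases toward log2(n); the paper studies the analogous behaviour for BPAs.
p = np.array([0.7, 0.2, 0.1])
for _ in range(4):
    print(shannon_entropy(p))
    p = negate(p)
```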

Journal ArticleDOI
TL;DR: A modified belief entropy is proposed by considering the scale of the frame of discernment and the influence of the intersection between statements on uncertainty; it provides a promising way to measure uncertain information.
Abstract: How to manage the uncertainty of a basic probability assignment accurately and efficiently is significant and still an open issue. Many functions have been established to address this issue, most recently Deng entropy. Deng entropy can deal with more complex situations of the focal elements (propositions); however, it has some limitations when the propositions intersect. In this paper, a modified function is proposed by considering the scale of the frame of discernment and the influence of the intersection between statements on uncertainty. The proposed belief entropy provides a promising way to measure uncertain information. Some numerical examples and an application in pattern recognition are used to show the efficiency and accuracy of the proposed belief entropy.
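
For context, the quantity being modified is Deng entropy, which generalizes Shannon entropy to BPAs by crediting each focal element with the size of its power set. A minimal sketch of the baseline Deng entropy; the paper's modification for intersecting propositions is not reproduced.

```python
# Baseline Deng entropy of a basic probability assignment.
import numpy as np

def deng_entropy(bpa):
    """E_d = -sum_A m(A) * log2( m(A) / (2**|A| - 1) ).

    `bpa` maps each focal element (a frozenset of hypotheses) to its mass.
    """
    total = 0.0
    for focal, mass in bpa.items():
        if mass > 0:
            total -= mass * np.log2(mass / (2 ** len(focal) - 1))
    return total

# Example: m({a}) = 0.4, m({a, b}) = 0.6
print(deng_entropy({frozenset("a"): 0.4, frozenset("ab"): 0.6}))
```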

Posted Content
TL;DR: This work proposes learning both the energy function and an amortized approximate sampling mechanism using a neural generator network, which provides an efficient approximation of the log-likelihood gradient.
Abstract: Maximum likelihood estimation of energy-based models is a challenging problem due to the intractability of the log-likelihood gradient. In this work, we propose learning both the energy function and an amortized approximate sampling mechanism using a neural generator network, which provides an efficient approximation of the log-likelihood gradient. The resulting objective requires maximizing entropy of the generated samples, which we perform using recently proposed nonparametric mutual information estimators. Finally, to stabilize the resulting adversarial game, we use a zero-centered gradient penalty derived as a necessary condition from the score matching literature. The proposed technique can generate sharp images with Inception and FID scores competitive with recent GAN techniques, does not suffer from mode collapse, and is competitive with state-of-the-art anomaly detection techniques.

Journal ArticleDOI
TL;DR: The main conceptual contribution of this paper is to clarify how the choice of a covertness metric impacts the information-theoretic limits of covert communications.
Abstract: We study the first- and second-order asymptotics of covert communication over binary-input discrete memoryless channels for three different covertness metrics and under a maximum probability of error constraint. When covertness is measured in terms of the relative entropy between the channel output distributions induced with and without communication, we characterize the exact first- and second-order asymptotics of the number of bits that can be reliably transmitted with a maximum probability of error less than $\epsilon$ and a relative entropy less than $\delta$. When covertness is measured in terms of the variational distance between the channel output distributions or in terms of the probability of missed detection for fixed probability of false alarm, we establish the exact first-order asymptotics and bound the second-order asymptotics. Pulse position modulation achieves the optimal first-order asymptotics for all three metrics, as well as the optimal second-order asymptotics for relative entropy. The main conceptual contribution of this paper is to clarify how the choice of a covertness metric impacts the information-theoretic limits of covert communications. The main technical contribution underlying our results is a detailed expurgation argument to show the existence of a code satisfying the reliability and covertness criteria.

Journal ArticleDOI
TL;DR: In this article, the authors study weighted averages of the estimators originally proposed by Kozachenko and Leonenko [Probl. Inform. Transm. 23 (1987), 95–101], based on the $k$-nearest neighbor distances of a sample of independent and identically distributed random vectors in $\mathbb{R}^{d}$.
Abstract: Many statistical procedures, including goodness-of-fit tests and methods for independent component analysis, rely critically on the estimation of the entropy of a distribution. In this paper, we seek entropy estimators that are efficient and achieve the local asymptotic minimax lower bound with respect to squared error loss. To this end, we study weighted averages of the estimators originally proposed by Kozachenko and Leonenko [Probl. Inform. Transm. 23 (1987), 95–101], based on the $k$-nearest neighbour distances of a sample of $n$ independent and identically distributed random vectors in $\mathbb{R}^{d}$. A careful choice of weights enables us to obtain an efficient estimator in arbitrary dimensions, given sufficient smoothness, while the original unweighted estimator is typically only efficient when $d\leq 3$. In addition to the new estimator proposed and theoretical understanding provided, our results facilitate the construction of asymptotically valid confidence intervals for the entropy of asymptotically minimal width.
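
The unweighted estimator being averaged here is the classical Kozachenko-Leonenko k-nearest-neighbour entropy estimator. A minimal SciPy sketch of that baseline follows; the weighting scheme over several values of k that yields efficiency in higher dimensions is not reproduced.

```python
# Classical (unweighted) Kozachenko-Leonenko k-NN entropy estimator.
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def kl_entropy(X, k=3):
    """k-NN entropy estimate (in nats) for an (n, d) sample X."""
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    # Distance from each point to its k-th nearest neighbour (self excluded).
    rho = cKDTree(X).query(X, k=k + 1)[0][:, k]
    log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # log unit-ball volume
    return float(np.mean(d * np.log(rho)) + log_vd + digamma(n) - digamma(k))
```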

Proceedings ArticleDOI
15 Jun 2019
TL;DR: In this article, the relationship between the input feature maps and 2D kernels is revealed in a theoretical framework, based on which a kernel sparsity and entropy (KSE) indicator is proposed to quantitate the feature map importance in a feature-agnostic manner to guide model compression.
Abstract: Compressing convolutional neural networks (CNNs) has received ever-increasing research focus. However, most existing CNN compression methods do not interpret their inherent structures to distinguish the implicit redundancy. In this paper, we investigate the problem of CNN compression from a novel interpretable perspective. The relationship between the input feature maps and 2D kernels is revealed in a theoretical framework, based on which a kernel sparsity and entropy (KSE) indicator is proposed to quantitate the feature map importance in a feature-agnostic manner to guide model compression. Kernel clustering is further conducted based on the KSE indicator to accomplish high-precision CNN compression. KSE is capable of simultaneously compressing each layer in an efficient way, which is significantly faster compared to previous data-driven feature map pruning methods. We comprehensively evaluate the compression and speedup of the proposed method on CIFAR-10, SVHN and ImageNet 2012. Our method demonstrates superior performance gains over previous ones. In particular, it achieves 4.7× FLOPs reduction and 2.9× compression on ResNet-50 with only a top-5 accuracy drop of 0.35% on ImageNet 2012, which significantly outperforms state-of-the-art methods.

Journal ArticleDOI
TL;DR: This work proposes a test of independence of two multivariate random vectors, given a sample from the underlying population, based on the estimation of mutual information, whose decomposition into joint and marginal entropies facilitates the use of recently-developed efficient entropy estimators derived from nearest neighbour distances.
Abstract: We propose a test of independence of two multivariate random vectors, given a sample from the underlying population. Our approach, which we call MINT, is based on the estimation of mutual information, whose decomposition into joint and marginal entropies facilitates the use of recently-developed efficient entropy estimators derived from nearest neighbour distances. The proposed critical values, which may be obtained from simulation (in the case where one marginal is known) or resampling, guarantee that the test has nominal size, and we provide local power analyses, uniformly over classes of densities whose mutual information satisfies a lower bound. Our ideas may be extended to provide new goodness-of-fit tests of normal linear models based on assessing the independence of our vector of covariates and an appropriately-defined notion of an error vector. The theory is supported by numerical studies on both simulated and real data.

Journal ArticleDOI
TL;DR: A novel neighborhood rough set and entropy measure-based gene selection method with Fisher score for tumor classification is proposed, which can deal with real-valued data whilst maintaining the original gene classification information.
Abstract: Tumor classification is one of the most vital technologies for cancer diagnosis. Due to the high dimensionality, gene selection (finding a small, closely related gene set to accurately classify tumors) is an important step for improving gene expression data classification performance. The traditional rough set model, a classical attribute reduction method, deals with discrete data only. Gene expression data containing real-valued or noisy data therefore usually require a discretization preprocessing step, which may result in poor classification accuracy. In this paper, a novel neighborhood rough set and entropy measure-based gene selection method with Fisher score for tumor classification is proposed, which can deal with real-valued data whilst maintaining the original gene classification information. First, the Fisher score method is employed to eliminate irrelevant genes to significantly reduce computation complexity. Next, some neighborhood entropy-based uncertainty measures are investigated for handling the uncertainty and noise of gene expression data. Moreover, some of their properties are derived and the relationships among these measures are established. Finally, a joint neighborhood entropy-based gene selection algorithm with the Fisher score is presented to improve the classification performance of gene expression data. The experimental results on an example and several public gene expression data sets show that the proposed method is very effective for selecting the most relevant genes with high classification accuracy.
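
The first-stage filter used here, the Fisher score, is a standard ratio of between-class to within-class variance per gene. A minimal NumPy sketch follows; the neighborhood rough set and neighborhood entropy measures of the later stages are not reproduced.

```python
# Fisher score filter: between-class vs within-class variance per feature.
import numpy as np

def fisher_score(X, y):
    """Fisher score of each column (gene) of X given class labels y."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    overall_mean = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / den
```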

Journal ArticleDOI
26 Jun 2019-Entropy
TL;DR: This study proposed a novel method to handle spherical fuzzy multi-criteria group decision-making (MCGDM) problems, presented some novel logarithmic operations of spherical fuzzy sets (SFSs), and proposed the spherical fuzzy entropy to find the unknown weight information of the criteria.
Abstract: Keeping in view the importance of the newly defined and fast-growing spherical fuzzy sets, in this study we propose a novel method to handle spherical fuzzy multi-criteria group decision-making (MCGDM) problems. First, we present some novel logarithmic operations on spherical fuzzy sets (SFSs). Then, we propose a series of novel logarithmic operators, namely spherical fuzzy weighted average operators and spherical fuzzy weighted geometric operators, and we propose the spherical fuzzy entropy to find the unknown weight information of the criteria. We study some of their desirable properties, such as idempotency, boundary and monotonicity, in detail. Finally, the detailed steps for spherical fuzzy decision-making problems are developed, and a practical case is given to check the created approach and to illustrate its validity and superiority. Besides this, a systematic comparison analysis with other existing methods is conducted to reveal the advantages of our proposed method. Results indicate that the proposed method is suitable and effective for the decision process of evaluating the best alternative.

Journal ArticleDOI
01 Sep 2019-Entropy
TL;DR: Deng entropy, which can measure the uncertainty degree of a Basic Belief Assignment (BBA) in uncertain problems, is used as a measure of splitting rules to construct an evidential decision tree for fuzzy dataset classification.
Abstract: The decision tree is widely applied in many areas, such as classification and recognition. Traditional information entropy and Pearson's correlation coefficient are often applied as measures of splitting rules to find the best splitting attribute. However, these measures cannot handle uncertainty, since neither the relation between attributes nor the degree of disorder of attributes can be measured by them. Deng entropy, in contrast, can measure the uncertainty degree of a Basic Belief Assignment (BBA) in uncertain problems. In this paper, Deng entropy is used as a measure of splitting rules to construct an evidential decision tree for fuzzy dataset classification. Compared to the traditional combination rules used for combining BBAs, the evidential decision tree can be applied to classification directly, which efficiently reduces the complexity of the algorithm. In addition, experiments are conducted on the Iris dataset to build an evidential decision tree that achieves more accurate classification.

Journal ArticleDOI
TL;DR: A composite multi-scale weighted permutation entropy (CMWPE) methodology is proposed; the results show that CMWPE depends less on data length and that its estimated entropy values are much more stable than those of other existing methods.
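
The building block of CMWPE is weighted permutation entropy, in which each ordinal pattern is weighted by the variance of its embedding vector. A minimal single-scale NumPy sketch follows; the coarse-graining that gives the composite multi-scale version and the paper's exact parameter choices are not reproduced.

```python
# Single-scale weighted permutation entropy of a 1-D signal.
import numpy as np
from itertools import permutations
from math import factorial

def weighted_permutation_entropy(x, m=3, tau=1):
    """Normalized weighted permutation entropy of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    weights = {pat: 0.0 for pat in permutations(range(m))}
    for i in range(len(x) - (m - 1) * tau):
        v = x[i:i + m * tau:tau]
        weights[tuple(np.argsort(v))] += v.var()   # variance-weighted count
    total = sum(weights.values())
    p = np.array([w / total for w in weights.values() if w > 0])
    return -np.sum(p * np.log2(p)) / np.log2(factorial(m))
```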

Journal ArticleDOI
29 Sep 2019-Entropy
TL;DR: A TOPSIS method is proposed for probabilistic linguistic MAGDM in which the attribute weights are completely unknown, and the decision information is in the form of probabilistic linguistic numbers (PLNs).
Abstract: In multiple attribute group decision making (MAGDM) problems, uncertain decision information is well represented by linguistic term sets (LTSs). These LTSs are easily converted into probabilistic linguistic term sets (PLTSs). In this paper, a TOPSIS method is proposed for probabilistic linguistic MAGDM in which the attribute weights are completely unknown and the decision information is in the form of probabilistic linguistic numbers (PLNs). First, the definition of the scoring function is used to solve the probabilistic linguistic entropy, which is then employed to objectively derive the attribute weights. Second, the optimal alternatives are determined by calculating the shortest distance from the probabilistic linguistic positive ideal solution (PLPIS) and the farthest distance from the probabilistic linguistic negative ideal solution (PLNIS). The proposed method extends the application range of the traditional entropy-weighted method; moreover, it does not need the decision-maker to give the attribute weights in advance. Finally, a numerical example for supplier selection of new agricultural machinery products is used to illustrate the use of the proposed method. The result shows the approach is simple, effective and easy to calculate, and the proposed method can contribute to the successful selection of suitable alternatives in other selection problems.
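
The "traditional entropy-weighted method" this paper extends assigns each attribute a weight from the entropy of its column in the decision matrix: low-entropy (more discriminating) attributes get larger weights. A minimal crisp-number sketch follows; the probabilistic linguistic entropy and the PLTS distance measures of the paper are not reproduced.

```python
# Classical entropy-weight method for a crisp decision matrix.
import numpy as np

def entropy_weights(decision_matrix):
    """Attribute weights from column entropies of an (alternatives x attributes)
    matrix of nonnegative scores: low-entropy columns get larger weights."""
    X = np.asarray(decision_matrix, dtype=float)
    m, _ = X.shape
    P = X / X.sum(axis=0, keepdims=True)          # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logP).sum(axis=0) / np.log(m)       # normalized entropy per attribute
    d = 1.0 - e                                   # divergence degree
    return d / d.sum()
```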