
Showing papers on "Entropy (information theory)" published in 2020


Posted Content
TL;DR: This paper proposes to use discretized Gaussian Mixture Likelihoods to parameterize the distributions of latent codes, which yields a more accurate and flexible entropy model and achieves state-of-the-art performance compared with existing learned compression methods.
Abstract: Image compression is a fundamental research field and many well-known compression standards have been developed over many decades. Recently, learned compression methods have exhibited a fast development trend with promising results. However, there is still a performance gap between learned compression algorithms and reigning compression standards, especially in terms of the widely used PSNR metric. In this paper, we explore the remaining redundancy of recent learned compression algorithms. We have found that accurate entropy models for rate estimation largely affect the optimization of network parameters and thus affect the rate-distortion performance. Therefore, in this paper, we propose to use discretized Gaussian Mixture Likelihoods to parameterize the distributions of latent codes, which can achieve a more accurate and flexible entropy model. In addition, we take advantage of recent attention modules and incorporate them into the network architecture to enhance the performance. Experimental results demonstrate that our proposed method achieves state-of-the-art performance compared with existing learned compression methods on both Kodak and high-resolution datasets. To our knowledge, our approach is the first to achieve performance comparable with the latest compression standard, Versatile Video Coding (VVC), in terms of PSNR. More importantly, our approach generates more visually pleasant results when optimized by MS-SSIM. The project page is at this https URL.

310 citations
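
As a rough companion to the abstract above, here is a minimal numpy/scipy sketch of a discretized Gaussian mixture likelihood for one integer-quantized latent symbol: the probability mass on [y-0.5, y+0.5] under the mixture gives the estimated coding rate. The mixture parameters are illustrative placeholders, not the paper's learned model.

# Minimal sketch of a discretized Gaussian mixture likelihood for an
# integer-quantized latent value, as used for entropy modeling.
# The mixture parameters here are illustrative placeholders.
import numpy as np
from scipy.stats import norm

def discretized_gmm_likelihood(y, weights, means, scales):
    """P(y) = sum_k w_k * (Phi((y+0.5-mu_k)/s_k) - Phi((y-0.5-mu_k)/s_k))."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # mixture weights sum to 1
    upper = norm.cdf(y + 0.5, loc=means, scale=scales)
    lower = norm.cdf(y - 0.5, loc=means, scale=scales)
    return float(np.sum(weights * (upper - lower)))

# Estimated rate (in bits) of one quantized latent symbol under the model.
p = discretized_gmm_likelihood(y=2, weights=[0.6, 0.3, 0.1],
                               means=[0.0, 3.0, -4.0], scales=[1.0, 2.0, 0.5])
rate_bits = -np.log2(p)
print(f"P(y=2) = {p:.4f}, rate = {rate_bits:.2f} bits")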


Journal ArticleDOI
TL;DR: The development of Deng entropy as an effective way to measure uncertainty is discussed, including its definition, an analysis of its properties, and a comparison with other measures; the challenges for future studies on uncertainty measurement in evidence theory are also examined.
Abstract: As an extension of probability theory, evidence theory is able to better handle unknown and imprecise information. Owing to its advantages, evidence theory has more flexibility and effectiveness for modeling and processing uncertain information. Uncertainty measure plays an essential role both in evidence theory and probability theory. In probability theory, Shannon entropy provides a novel perspective for measuring uncertainty. Various entropies exist for measuring the uncertainty of basic probability assignment (BPA) in evidence theory. However, from the standpoint of the requirements of uncertainty measurement and physics, these entropies are controversial. Therefore, the process for measuring BPA uncertainty currently remains an open issue in the literature. Firstly, this paper reviews the measures of uncertainty in evidence theory followed by an analysis of some related controversies. Secondly, we discuss the development of Deng entropy as an effective way to measure uncertainty, including introducing its definition, analyzing its properties, and comparing it to other measures. We also examine the concept of maximum Deng entropy, the pseudo-Pascal triangle of maximum Deng entropy, generalized belief entropy, and measures of divergence. In addition, we conduct an analysis of the application of Deng entropy and further examine the challenges for future studies on uncertainty measurement in evidence theory. Finally, a conclusion is provided to summarize this study.

223 citations
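
For concreteness, a small sketch of the Deng entropy formula discussed above, E_d(m) = -sum_A m(A) log2(m(A)/(2^|A|-1)); the example BPA is illustrative.

# Sketch of Deng entropy for a basic probability assignment (BPA),
# E_d(m) = -sum_A m(A) * log2( m(A) / (2^|A| - 1) ),
# where |A| is the cardinality of the focal element A.
import numpy as np

def deng_entropy(bpa):
    """bpa: dict mapping focal elements (frozensets) to masses summing to 1."""
    e = 0.0
    for focal, mass in bpa.items():
        if mass > 0:
            e -= mass * np.log2(mass / (2 ** len(focal) - 1))
    return e

# Example BPA on the frame {a, b, c}; for singleton focal elements Deng entropy
# reduces to the Shannon entropy of the corresponding probabilities.
m = {frozenset({'a'}): 0.4, frozenset({'b'}): 0.3, frozenset({'a', 'b', 'c'}): 0.3}
print(f"Deng entropy: {deng_entropy(m):.4f} bits")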


Proceedings ArticleDOI
14 Jun 2020
TL;DR: To improve both discriminability and diversity, the proposed Batch Nuclear-norm Maximization (BNM) on the output matrix could boost the learning under typical label insufficient learning scenarios, such as semi-supervised learning, domain adaptation and open domain recognition.
Abstract: The learning of deep networks largely relies on data with human-annotated labels. In some label-insufficient situations, the performance degrades on the decision boundary with high data density. A common solution is to directly minimize the Shannon entropy, but the side effect caused by entropy minimization, i.e., reduction of the prediction diversity, is mostly ignored. To address this issue, we reinvestigate the structure of the classification output matrix of a randomly selected data batch. We find by theoretical analysis that the prediction discriminability and diversity could be separately measured by the Frobenius norm and rank of the batch output matrix. Besides, the nuclear norm is an upper bound of the Frobenius norm, and a convex approximation of the matrix rank. Accordingly, to improve both discriminability and diversity, we propose Batch Nuclear-norm Maximization (BNM) on the output matrix. BNM could boost the learning under typical label-insufficient learning scenarios, such as semi-supervised learning, domain adaptation and open domain recognition. On these tasks, extensive experimental results show that BNM outperforms competitors and works well with existing well-known methods. The code is available at https://github.com/cuishuhao/BNM

205 citations
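
A minimal numpy sketch of the quantities the abstract refers to, on a toy batch of softmax outputs: the Frobenius norm (discriminability), the matrix rank (diversity), and the nuclear norm that BNM maximizes as a convex surrogate. The random batch is purely illustrative.

# Quantities for a batch prediction matrix A in R^{B x C} (rows are softmax outputs).
import numpy as np

rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 4))                                # batch of 8, 4 classes
A = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax rows

fro = np.linalg.norm(A, 'fro')            # higher -> more confident predictions
rank = np.linalg.matrix_rank(A)           # higher -> more diverse predictions
nuclear = np.linalg.norm(A, 'nuc')        # upper-bounds fro, convex proxy of rank

bnm_loss = -nuclear / A.shape[0]          # minimizing this maximizes the nuclear norm
print(fro, rank, nuclear, bnm_loss)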


Journal ArticleDOI
TL;DR: A new hybrid chaotic map and a different way of using an optimization technique to improve the performance of encryption algorithms are proposed; the proposed map establishes excellent randomness and sensitivity.
Abstract: This paper proposes a new hybrid chaotic map and a different way of using an optimization technique to improve the performance of encryption algorithms. Compared to other chaotic functions, the proposed chaotic map establishes excellent randomness and sensitivity. Based on its Lyapunov exponents and entropy measure, the characteristics of the new mathematical function are better than those of classical maps. We propose a new image cipher based on the confusion/diffusion Shannon properties. The substitution phase of the proposed encryption algorithm, which depends on a new optimized substitution box, was carried out by a chaotic Jaya optimization algorithm that generates S-boxes according to their nonlinearity score. The goal of the optimization process is to obtain a bijective matrix with a high nonlinearity score. Furthermore, a dynamic key depending on the output of the encrypted image is proposed. Security analysis indicates that the proposed encryption scheme can withstand different cryptanalytic attacks.

161 citations
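
The paper's hybrid map is not reproduced here; purely as an illustration of one of the criteria mentioned above (a positive Lyapunov exponent indicating chaos), the sketch below estimates the Lyapunov exponent of the classical logistic map.

# Estimate the Lyapunov exponent of a 1-D chaotic map; the logistic map is a
# stand-in for the paper's proposed hybrid map.
import numpy as np

def lyapunov_logistic(r=3.99, x0=0.4, n_iter=100_000, burn_in=1_000):
    """lambda = average of log|f'(x)| along the orbit, with f(x) = r*x*(1-x)."""
    x, acc, count = x0, 0.0, 0
    for i in range(n_iter):
        x = r * x * (1.0 - x)
        if i >= burn_in:
            acc += np.log(abs(r * (1.0 - 2.0 * x)))   # |f'(x)| = |r(1 - 2x)|
            count += 1
    return acc / count

print(f"estimated Lyapunov exponent: {lyapunov_logistic():.3f}  (> 0 indicates chaos)")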


Journal ArticleDOI
TL;DR: A novel method based on an enhanced deep gated recurrent unit and complex wavelet packet energy moment entropy is proposed, together with a modified training algorithm based on a learning-rate decay strategy that enhances the prognosis capability of the constructed deep model.
Abstract: Early fault prognosis of bearings is a very meaningful yet challenging task for improving the security of rotating machinery. For this purpose, a novel method based on an enhanced deep gated recurrent unit and complex wavelet packet energy moment entropy is proposed in this paper. First, complex wavelet packet energy moment entropy is defined as a new monitoring index to characterize bearing performance degradation. Second, a deep gated recurrent unit network is constructed to capture the nonlinear mapping relationship hidden in the defined monitoring index. Finally, a modified training algorithm based on a learning-rate decay strategy is developed to enhance the prognosis capability of the constructed deep model. The proposed method is applied to analyze simulated and experimental bearing signals. The results demonstrate that the proposed method is superior in sensitivity and accuracy to existing methods.

157 citations
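
The exact "complex wavelet packet energy moment entropy" index is defined in the paper; the sketch below is only a simplified stand-in that computes a Shannon entropy over normalized sub-band energies, to illustrate the general idea of an entropy-based degradation index.

# Simplified stand-in for an entropy-based condition-monitoring index: split a
# vibration signal into frequency sub-bands, normalize the sub-band energies,
# and take their Shannon entropy. NOT the paper's exact index.
import numpy as np

def subband_energy_entropy(signal, n_bands=8):
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spectrum, n_bands)          # crude sub-band split
    energies = np.array([band.sum() for band in bands])
    p = energies / energies.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))                     # entropy in bits

t = np.linspace(0, 1, 4096, endpoint=False)
healthy = np.sin(2 * np.pi * 50 * t)                                    # narrow-band
faulty = healthy + 0.5 * np.random.default_rng(1).normal(size=t.size)   # broadband content
print(subband_energy_entropy(healthy), subband_energy_entropy(faulty))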


Posted Content
David Minnen, Saurabh Singh
TL;DR: In this article, channel-conditioning and latent residual prediction are introduced to improve the performance of the entropy-constrained autoencoder with an entropy model that uses both forward and backward adaptation.
Abstract: In learning-based approaches to image compression, codecs are developed by optimizing a computational model to minimize a rate-distortion objective. Currently, the most effective learned image codecs take the form of an entropy-constrained autoencoder with an entropy model that uses both forward and backward adaptation. Forward adaptation makes use of side information and can be efficiently integrated into a deep neural network. In contrast, backward adaptation typically makes predictions based on the causal context of each symbol, which requires serial processing that prevents efficient GPU / TPU utilization. We introduce two enhancements, channel-conditioning and latent residual prediction, that lead to network architectures with better rate-distortion performance than existing context-adaptive models while minimizing serial processing. Empirically, we see an average rate savings of 6.7% on the Kodak image set and 11.4% on the Tecnick image set compared to a context-adaptive baseline model. At low bit rates, where the improvements are most effective, our model saves up to 18% over the baseline and outperforms hand-engineered codecs like BPG by up to 25%.

129 citations
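
A conceptual numpy sketch (not the paper's architecture) of channel-conditioning with latent residual prediction: the latent channels are split into slices, the entropy parameters of each slice are predicted from side information plus already-decoded slices, and only the residual from the predicted mean is quantized. The linear "predictors" below are random placeholders.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
C, S = 8, 4                                    # latent channels, channel slices
y = rng.normal(size=C)                         # toy latent vector (one spatial site)
side_info = rng.normal(size=C)                 # stands in for the hyperprior output

decoded, total_bits = [], 0.0
for y_slice in np.split(y, S):
    context = np.concatenate([side_info] + decoded)              # side info + prior slices
    W_mu = rng.normal(size=(y_slice.size, context.size)) * 0.1   # placeholder predictors
    W_sig = rng.normal(size=(y_slice.size, context.size)) * 0.1
    mu = W_mu @ context                         # predicted mean (latent residual prediction)
    sigma = np.exp(W_sig @ context)             # predicted (positive) scales
    residual = np.round(y_slice - mu)           # quantize the residual, not y itself
    p = norm.cdf(residual + 0.5, scale=sigma) - norm.cdf(residual - 0.5, scale=sigma)
    total_bits += float(np.sum(-np.log2(np.maximum(p, 1e-9))))
    decoded.append(mu + residual)               # reconstruction is visible to later slices
print(f"estimated rate for this latent vector: {total_bits:.1f} bits")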


Posted Content
TL;DR: This work proposes a more universally applicable domain adaptation approach that can handle arbitrary category shift, called Domain Adaptative Neighborhood Clustering via Entropy optimization (DANCE), and uses entropy-based feature alignment and rejection to align target features with the source, or reject them as unknown categories based on their entropy.
Abstract: Unsupervised domain adaptation methods traditionally assume that all source categories are present in the target domain. In practice, little may be known about the category overlap between the two domains. While some methods address target settings with either partial or open-set categories, they assume that the particular setting is known a priori. We propose a more universally applicable domain adaptation framework that can handle arbitrary category shift, called Domain Adaptative Neighborhood Clustering via Entropy optimization (DANCE). DANCE combines two novel ideas: First, as we cannot fully rely on source categories to learn features discriminative for the target, we propose a novel neighborhood clustering technique to learn the structure of the target domain in a self-supervised way. Second, we use entropy-based feature alignment and rejection to align target features with the source, or reject them as unknown categories based on their entropy. We show through extensive experiments that DANCE outperforms baselines across open-set, open-partial and partial domain adaptation settings. Implementation is available at this https URL.

127 citations
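
A minimal sketch of the entropy-based rejection idea described above: target samples whose prediction entropy exceeds a threshold are treated as unknown categories, while low-entropy samples are aligned with source classes. The threshold and predictions are illustrative.

import numpy as np

def prediction_entropy(probs):
    probs = np.clip(probs, 1e-12, 1.0)
    return -np.sum(probs * np.log(probs), axis=-1)

n_classes = 5
threshold = 0.5 * np.log(n_classes)          # e.g., half of the maximum entropy

confident = np.array([0.90, 0.04, 0.03, 0.02, 0.01])   # looks like a known class
uniformish = np.full(n_classes, 1.0 / n_classes)        # model is unsure

for p in (confident, uniformish):
    h = prediction_entropy(p)
    label = "reject as unknown" if h > threshold else "align with source class"
    print(f"entropy={h:.3f} -> {label}")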


Journal ArticleDOI
TL;DR: In this paper, the confusion matrix is endowed with the semantics of three-way decisions, and a collection of measures is deduced and summarized into seven measure modes, which yield more satisfactory three-way regions.

126 citations


Journal ArticleDOI
TL;DR: Based on Deng entropy, the information volume of a mass function is presented in this paper; when the cardinality of the frame of discernment is identical, both the total uncertainty case and the BPA distribution of the maximum Deng entropy have the same information volume.
Abstract: Given a probability distribution, its corresponding information volume is the Shannon entropy. However, how to determine the information volume of a given mass function is still an open issue. Based on Deng entropy, the information volume of a mass function is presented in this paper. Given a mass function, the corresponding information volume is larger than its uncertainty measured by Deng entropy. In addition, when the cardinality of the frame of discernment is identical, both the total uncertainty case and the BPA distribution of the maximum Deng entropy have the same information volume. Some numerical examples are illustrated to show the efficiency of the proposed information volume of a mass function.

126 citations


Proceedings ArticleDOI
14 Jun 2020
TL;DR: This work introduces a new viewpoint entropy formulation, which is the basis of a novel active learning strategy for semantic segmentation that exploits viewpoint consistency in multi-view datasets; it also proposes uncertainty computations at the superpixel level, which exploit the inherently localized signal in the segmentation task and directly lower the annotation costs.
Abstract: We propose ViewAL, a novel active learning strategy for semantic segmentation that exploits viewpoint consistency in multi-view datasets. Our core idea is that inconsistencies in model predictions across viewpoints provide a very reliable measure of uncertainty and encourage the model to perform well irrespective of the viewpoint under which objects are observed. To incorporate this uncertainty measure, we introduce a new viewpoint entropy formulation, which is the basis of our active learning strategy. In addition, we propose uncertainty computations on a superpixel level, which exploit the inherently localized signal in the segmentation task, directly lowering the annotation costs. This combination of viewpoint entropy and the use of superpixels allows us to efficiently select samples that are highly informative for improving the network. We demonstrate that our proposed active learning strategy not only yields the best-performing models for the same amount of required labeled data, but also significantly reduces labeling effort. For instance, our method achieves 95% of the maximum achievable network performance using only 7%, 17%, and 24% labeled data on SceneNet-RGBD, ScanNet, and Matterport3D, respectively. On these datasets, the best state-of-the-art method achieves the same performance with 14%, 27% and 33% labeled data. Finally, we demonstrate that labeling using superpixels yields the same quality of ground truth compared to labeling whole images, but requires 25% less time.
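
A simplified sketch of a viewpoint-entropy-style score, assuming per-view class distributions for a region: average the distributions across views and take the Shannon entropy of the average (ViewAL's full formulation also uses a cross-view divergence term).

import numpy as np

def viewpoint_entropy(per_view_probs):
    """per_view_probs: array of shape (n_views, n_classes), rows sum to 1."""
    mean_probs = np.asarray(per_view_probs).mean(axis=0)
    mean_probs = np.clip(mean_probs, 1e-12, 1.0)
    return -np.sum(mean_probs * np.log(mean_probs))

consistent = [[0.9, 0.05, 0.05], [0.85, 0.1, 0.05]]        # views agree -> low entropy
inconsistent = [[0.9, 0.05, 0.05], [0.05, 0.9, 0.05]]      # views disagree -> high entropy
print(viewpoint_entropy(consistent), viewpoint_entropy(inconsistent))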

Proceedings Article
01 Jan 2020
TL;DR: An entropy regularization term is proposed that measures the dependency between the learned features and the class labels; combined with the usual task and adversarial losses, it is guaranteed to learn conditional-invariant features across all source domains and thus classifiers with better generalization capabilities.
Abstract: Domain generalization aims to learn from multiple source domains a predictive model that can generalize to unseen target domains. One essential problem in domain generalization is to learn discriminative domain-invariant features. To arrive at this, some methods introduce a domain discriminator through adversarial learning to match the feature distributions in multiple source domains. However, adversarial training can only guarantee that the learned features have invariant marginal distributions, while the invariance of conditional distributions is more important for prediction in new domains. To ensure the conditional invariance of learned features, we propose an entropy regularization term that measures the dependency between the learned features and the class labels. Combined with the typical task-related loss, e.g., cross-entropy loss for classification, and adversarial loss for domain discrimination, our overall objective is guaranteed to learn conditional-invariant features across all source domains and thus can learn classifiers with better generalization capabilities. We demonstrate the effectiveness of our method through comparison with state-of-the-art methods on both simulated and real-world datasets. Code is available at: https://github.com/sshan-zhao/DG_via_ER.

Posted Content
TL;DR: This work develops non-asymptotic convergence guarantees for entropy-regularized NPG methods under softmax parameterization, focusing on tabular discounted Markov decision processes, and demonstrates that the algorithm converges linearly at a rate independent of the dimension of the state-action space.
Abstract: Natural policy gradient (NPG) methods are among the most widely used policy optimization algorithms in contemporary reinforcement learning. This class of methods is often applied in conjunction with entropy regularization -- an algorithmic scheme that encourages exploration -- and is closely related to soft policy iteration and trust region policy optimization. Despite the empirical success, the theoretical underpinnings for NPG methods remain limited even for the tabular setting. This paper develops non-asymptotic convergence guarantees for entropy-regularized NPG methods under softmax parameterization, focusing on discounted Markov decision processes (MDPs). Assuming access to exact policy evaluation, we demonstrate that the algorithm converges linearly -- or even quadratically once it enters a local region around the optimal policy -- when computing optimal value functions of the regularized MDP. Moreover, the algorithm is provably stable vis-a-vis inexactness of policy evaluation. Our convergence results accommodate a wide range of learning rates, and shed light upon the role of entropy regularization in enabling fast convergence.
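
As background for the setting analyzed above, the sketch below computes the optimal value function of a small entropy-regularized tabular MDP by soft value iteration; it illustrates the regularized MDP, not the NPG algorithm or its convergence analysis, and the MDP is randomly generated.

# Soft value iteration for an entropy-regularized MDP:
#   V(s) = tau * log sum_a exp(Q(s,a)/tau),  Q(s,a) = r(s,a) + gamma * E[V(s')].
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
nS, nA, gamma, tau = 6, 3, 0.9, 0.1
P = rng.random((nS, nA, nS)); P /= P.sum(axis=2, keepdims=True)   # transition kernel
r = rng.random((nS, nA))                                           # rewards in [0, 1]

V = np.zeros(nS)
for _ in range(2000):
    Q = r + gamma * P @ V                     # shape (nS, nA)
    V_new = tau * logsumexp(Q / tau, axis=1)  # soft (entropy-regularized) backup
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

# The optimal regularized policy is the softmax of Q / tau.
pi = np.exp((Q - Q.max(axis=1, keepdims=True)) / tau)
pi /= pi.sum(axis=1, keepdims=True)
print("soft-optimal V:", np.round(V, 3))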

Journal ArticleDOI
01 May 2020-Entropy
TL;DR: This study presents the combination of deep learning of extracted features with the Q-deformed entropy handcrafted features for discriminating between COVID-19 coronavirus, pneumonia and healthy computed tomography (CT) lung scans.
Abstract: Many health systems over the world have collapsed due to limited capacity and a dramatic increase of suspected COVID-19 cases. What has emerged is the need for finding an efficient, quick and accurate method to mitigate the overloading of radiologists' efforts to diagnose the suspected cases. This study presents the combination of deep learning of extracted features with the Q-deformed entropy handcrafted features for discriminating between COVID-19 coronavirus, pneumonia and healthy computed tomography (CT) lung scans. In this study, pre-processing is used to reduce the effect of intensity variations between CT slices. Then histogram thresholding is used to isolate the background of the CT lung scan. Each CT lung scan undergoes a feature extraction which involves deep learning and a Q-deformed entropy algorithm. The obtained features are classified using a long short-term memory (LSTM) neural network classifier. Subsequently, combining all extracted features significantly improves the performance of the LSTM network to precisely discriminate between COVID-19, pneumonia and healthy cases. The maximum achieved accuracy for classifying the collected dataset comprising 321 patients is 99.68%.
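
The paper's Q-deformed entropy feature has its own definition; purely as a hedged illustration of a q-deformed entropy, the sketch below computes the Tsallis entropy S_q = (1 - sum_i p_i^q)/(q - 1) of a grey-level histogram, which reduces to Shannon entropy as q -> 1. The image below is a random stand-in, not CT data.

import numpy as np

def tsallis_entropy(image, q=1.5, levels=256):
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

toy_scan = np.random.default_rng(0).integers(0, 256, size=(64, 64))  # stand-in slice
print(f"Tsallis (q=1.5) entropy: {tsallis_entropy(toy_scan):.4f}")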

Journal ArticleDOI
TL;DR: From the results of the comprehensive evaluation of China's electric power development, it is found that, compared with other countries, the queueing indicator value relative to the subjective evaluation weight is lower in China.


Journal ArticleDOI
28 Feb 2020-Entropy
TL;DR: This paper proposes a novel system that is computationally less expensive and provides a higher level of security among chaos-based encryption schemes, based on a shuffling process with a fractal key along with a three-dimensional Lorenz chaotic map.
Abstract: Chaos-based encryption schemes have attracted many researchers around the world in the digital image security domain. Digital images can be secured using existing chaotic maps, multiple chaotic maps, and several other hybrid dynamic systems that enhance the non-linearity of digital images. The combined property of confusion and diffusion was introduced by Claude Shannon and can be employed for digital image security. In this paper, we propose a novel system that is computationally less expensive and provides a higher level of security. The system is based on a shuffling process with a fractal key along with a three-dimensional Lorenz chaotic map. The shuffling process adds the confusion property, and the pixels of the standard image are shuffled. The three-dimensional Lorenz chaotic map is used for the diffusion process, which distorts all pixels of the image. In the statistical security tests, the evaluated mean square error (MSE) was greater than the average value of 10000 for all standard images. The peak signal-to-noise ratio (PSNR) was 7.69 dB for the test image. Moreover, the calculated correlation coefficient values for each direction of the encrypted images were less than zero, with a number of pixel change rate (NPCR) higher than 99%. During the security tests, the entropy values were more than 7.9 for each grey channel, which is almost equal to the ideal value of 8 for an 8-bit system. Numerous security tests and low computational complexity tests validate the security, robustness, and real-time implementation of the presented scheme.
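
A short sketch of the information-entropy security test quoted above: the Shannon entropy of an 8-bit channel's grey-level histogram, which should approach the ideal value of 8 bits for a well-encrypted image. The arrays below are stand-ins for real cipher images.

import numpy as np

def channel_entropy(channel):
    hist, _ = np.histogram(channel, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
cipher_like = rng.integers(0, 256, size=(256, 256))     # near-uniform, entropy close to 8
flat_image = np.full((256, 256), 128)                   # constant image, entropy = 0
print(channel_entropy(cipher_like), channel_entropy(flat_image))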

Journal ArticleDOI
TL;DR: A Kapur's-entropy-based Crow Search Algorithm (CSA) for estimating optimal values of multilevel thresholds is presented, which performed better than PSO, DE, GWO, MFO and CS in terms of quality and consistency.
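
For reference, a sketch of the Kapur's-entropy objective that such threshold-search algorithms maximize: the thresholds partition the grey-level histogram into classes, and the objective is the sum of the Shannon entropies of the normalized within-class histograms. The Crow Search Algorithm itself is not reproduced, and the histogram below is synthetic.

import numpy as np

def kapur_objective(hist, thresholds):
    p = hist / hist.sum()
    edges = [0, *sorted(thresholds), len(p)]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                       # class probability
        if w > 0:
            q = p[lo:hi][p[lo:hi] > 0] / w       # normalized within-class histogram
            total += -np.sum(q * np.log(q))
    return total

rng = np.random.default_rng(0)
hist, _ = np.histogram(rng.normal(128, 40, 10_000).clip(0, 255), bins=256, range=(0, 256))
print(kapur_objective(hist, thresholds=[85, 170]))   # two-threshold (three-class) example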

Journal ArticleDOI
TL;DR: In this article, the theoretical development of some fundamental entropy measures is reviewed and the relations among them are clarified, with the intent of improving online entropy estimation and expanding its applicability to a wider range of intelligent fault-diagnostic systems.
Abstract: Entropy, as a complexity measure, has been widely applied for time series analysis. One preeminent example is the design of machine condition monitoring and industrial fault-diagnostic systems. The occurrence of failures in a machine will typically lead to nonlinear characteristics in the measurements, caused by instantaneous variations, which can increase the complexity in the system response. Entropy measures are suitable to quantify such dynamic changes in the underlying process, distinguishing between different system conditions. However, notions of entropy are defined differently in various contexts (e.g., information theory and dynamical systems theory), which may confound researchers in the applied sciences. In this article, we have systematically reviewed the theoretical development of some fundamental entropy measures and clarified the relations among them. Then, typical entropy-based applications of machine fault-diagnostic systems are summarized. Furthermore, insights into possible applications of the entropy measures are explained, as to where and how these measures can be useful toward future data-driven fault diagnosis methodologies. Finally, potential research trends in this area are discussed, with the intent of improving online entropy estimation and expanding its applicability to a wider range of intelligent fault-diagnostic systems.
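
As one concrete example of the complexity measures this line of work reviews, a compact implementation of permutation entropy (the Shannon entropy of ordinal-pattern frequencies in a time series) is sketched below.

import math
from collections import Counter
import numpy as np

def permutation_entropy(x, m=3, delay=1, normalize=True):
    x = np.asarray(x)
    patterns = Counter(
        tuple(np.argsort(x[i:i + m * delay:delay]))      # ordinal pattern of length m
        for i in range(len(x) - (m - 1) * delay)
    )
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    h = -np.sum(p * np.log(p))
    return h / math.log(math.factorial(m)) if normalize else h

rng = np.random.default_rng(0)
print(permutation_entropy(np.sin(np.linspace(0, 20 * np.pi, 2000))))  # regular -> low
print(permutation_entropy(rng.normal(size=2000)))                     # noisy -> near 1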

Journal ArticleDOI
26 Jul 2020-Entropy
TL;DR: A novel feature extraction method which incorporates robust entropy optimization and an efficient Maximum Entropy Markov Model (MEMM) for HIR via multiple vision sensors is proposed, which will be applicable to a wide variety of man–machine interfaces.
Abstract: Automatic identification of human interaction is a challenging task, especially in dynamic environments with cluttered backgrounds from video sequences. Advancements in computer vision sensor technologies provide powerful effects in human interaction recognition (HIR) during routine daily life. In this paper, we propose a novel feature extraction method which incorporates robust entropy optimization and an efficient Maximum Entropy Markov Model (MEMM) for HIR via multiple vision sensors. The main objectives of the proposed methodology are: (1) to propose a hybrid of four novel features, i.e., spatio-temporal features, energy-based features, shape-based angular and geometric features, and a motion-orthogonal histogram of oriented gradient (MO-HOG); (2) to encode the hybrid feature descriptors using a codebook, a Gaussian mixture model (GMM) and Fisher encoding; (3) to optimize the encoded features using a cross-entropy optimization function; (4) to apply a MEMM classification algorithm that examines empirical expectations and highest entropy, which measure pattern variances, to achieve superior HIR accuracy. Our system is tested over three well-known datasets: the SBU Kinect interaction, UoL 3D social activity and UT-Interaction datasets. Through extensive experimentation, the proposed feature extraction algorithm, along with cross-entropy optimization, achieved average accuracy rates of 91.25% with SBU, 90.4% with UoL and 87.4% with the UT-Interaction dataset. The proposed HIR system will be applicable to a wide variety of man–machine interfaces, such as public-place surveillance, future medical applications, virtual reality, fitness exercises and 3D interactive gaming.

Journal ArticleDOI
TL;DR: In this paper, the authors derived dynamics of the entanglement wedge cross section from the reflected entropy for local operator quench states in the holographic CFT and showed that reflected entropy can diagnose a new perspective of the chaotic nature for given mixed states and characterize classical correlations in the subregion/subregion duality.
Abstract: We derive dynamics of the entanglement wedge cross section from the reflected entropy for local operator quench states in the holographic CFT. By comparing between the reflected entropy and the mutual information in this dynamical setup, we argue that (1) the reflected entropy can diagnose a new perspective of the chaotic nature for given mixed states and (2) it can also characterize classical correlations in the subregion/subregion duality. Moreover, we point out that we must improve the bulk interpretation of a heavy state even in the case of well-studied entanglement entropy. Finally, we show that we can derive the same results from the odd entanglement entropy. The present paper is an extended version of our earlier report arXiv:1907.06646 and includes many new results: non-perturbative quantum correction to the reflected/odd entropy, detailed analysis in both CFT and bulk sides, many technical aspects of replica trick for reflected entropy which turn out to be important for general setup, and explicit forms of multi-point semi-classical conformal blocks under consideration.
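
For readers unfamiliar with the quantity, the standard definitions behind the abstract (not specific to this paper's derivation) can be stated in LaTeX form as follows. The reflected entropy of a bipartite state is the entropy of AA* in the canonical purification, and holographically it is computed by the entanglement wedge cross section at leading order in G_N:

% Canonical purification and reflected entropy of a bipartite state rho_{AB}:
\[
  S_R(A:B) \;=\; S\!\left(\rho_{AA^*}\right),
  \qquad
  \rho_{AA^*} \;=\; \operatorname{Tr}_{BB^*}
  \big|\sqrt{\rho_{AB}}\big\rangle\big\langle\sqrt{\rho_{AB}}\big| ,
\]
% with the holographic formula and the mutual-information lower bound
\[
  S_R(A:B) \;=\; \frac{2\,\operatorname{Area}(E_W)}{4 G_N} + \cdots ,
  \qquad
  S_R(A:B) \;\ge\; I(A:B).
\]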

Journal ArticleDOI
TL;DR: A new definition of the negation of a BBA is presented; in the proposed negation, BBAs are represented as vectors and the negation is realized by matrix operators, which has a clear interpretation and the merit of simplifying the problem.
Abstract: Negation is a new perspective from which to represent knowledge. The negation of a probability distribution has been proposed, and it has many interesting properties, in particular that it can reach a maximum entropy. Because of the defects of classical probability theory in the expression of uncertainty, basic belief assignments (BBAs) in the Dempster–Shafer theory (D–S theory) are widely used in decision theory. Thus, negation provides a new perspective for D–S theory to measure fuzziness. In this paper, a new definition of the negation of a BBA is presented. In the proposed negation, BBAs are represented as vectors, and the negation is realized by matrix operators. This method has a good interpretation of the matrix operators and has the merit of simplifying the problem. With several different definitions of entropy used to determine the uncertainty of a BBA, the proposed negation of a BBA can reach a maximum belief entropy when the entropies satisfy a certain property.
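
A small sketch of the negation of a probability distribution mentioned in the abstract, p_i -> (1 - p_i)/(n - 1): iterating it drives any distribution toward the uniform, maximum-entropy one. The paper's matrix-operator negation of a BBA generalizes this idea and is not reproduced here.

import numpy as np

def negate(p):
    """Negation of a probability distribution: p_i -> (1 - p_i) / (n - 1)."""
    p = np.asarray(p, dtype=float)
    return (1.0 - p) / (len(p) - 1)

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p = np.array([0.7, 0.2, 0.1])
for step in range(5):
    print(step, np.round(p, 4), round(shannon_entropy(p), 4))   # entropy increases
    p = negate(p)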

Journal ArticleDOI
20 May 2020-Entropy
TL;DR: A human activity recognition model that acquires signal data from motion node sensors, including inertial sensors such as gyroscopes and accelerometers, is presented, which outperformed existing well-known statistical state-of-the-art methods by achieving an improved recognition accuracy.
Abstract: Advancements in wearable sensor technologies provide prominent effects in the daily life activities of humans. These wearable sensors are gaining more awareness in healthcare for the elderly to ensure their independent living and to improve their comfort. In this paper, we present a human activity recognition model that acquires signal data from motion node sensors including inertial sensors, i.e., gyroscopes and accelerometers. First, the inertial data is processed via multiple filters such as Savitzky-Golay, median and Hampel filters to examine lower/upper cutoff frequency behaviors. Second, it extracts a multifused model for statistical, wavelet and binary features to maximize the occurrence of optimal feature values. Then, adaptive moment estimation (Adam) and AdaDelta are introduced in a feature optimization phase to adopt learning rate patterns. These optimized patterns are further processed by the maximum entropy Markov model (MEMM) for empirical expectation and highest entropy, which measure signal variances for improved accuracy results. Our model was experimentally evaluated on the University of Southern California Human Activity Dataset (USC-HAD) as a benchmark dataset and on Intelligent Mediasporting behavior (IMSB), which is a new self-annotated sports dataset. For evaluation, we used the "leave-one-out" cross-validation scheme, and the results outperformed existing well-known statistical state-of-the-art methods by achieving improved recognition accuracies of 91.25%, 93.66% and 90.91% on the USC-HAD, IMSB, and Mhealth datasets, respectively. The proposed system should be applicable to man-machine interface domains, such as health exercises, robot learning, interactive games and pattern-based surveillance.

Journal ArticleDOI
TL;DR: Experimental results on seven UCI datasets and eight gene expression datasets illustrate that the proposed NMRS-based attribute reduction method using Lebesgue and entropy measures in incomplete neighborhood decision systems is effective in selecting the most relevant attributes with higher classification accuracy, as compared with representative algorithms.
Abstract: For incomplete data with mixed numerical and symbolic attributes, attribute reduction based on neighborhood multi-granulation rough sets (NMRS) is an important method to improve classification performance. However, most classical attribute reduction methods can only handle finite sets, and tend to produce more attributes and lower classification accuracy. This paper proposes a novel NMRS-based attribute reduction method using Lebesgue and entropy measures in incomplete neighborhood decision systems. First, some concepts of optimistic and pessimistic NMRS models in incomplete neighborhood decision systems are given, respectively. Then, a Lebesgue measure is combined with NMRS to study neighborhood tolerance class-based uncertainty measures. To analyze the uncertainty, noise and redundancy of incomplete neighborhood decision systems in detail, some neighborhood multi-granulation entropy-based uncertainty measures are developed by integrating Lebesgue and entropy measures. Inspired by both the algebraic view and the information view of NMRS, the pessimistic neighborhood multi-granulation dependency joint entropy is proposed. Moreover, the corresponding properties are further deduced and the relationships among these measures are discussed, which can help to investigate the uncertainty of incomplete neighborhood decision systems. Finally, the Fisher linear discriminant method is used to eliminate irrelevant attributes to significantly reduce computational complexity for high-dimensional datasets, and a heuristic attribute reduction algorithm with complexity analysis is designed to improve the classification performance of incomplete and mixed datasets. Experimental results on seven UCI datasets and eight gene expression datasets illustrate that the proposed method is effective in selecting the most relevant attributes with higher classification accuracy, as compared with representative algorithms.

Proceedings Article
30 Apr 2020
TL;DR: In this paper, V-MPO, an on-policy adaptation of Maximum a Posteriori Policy Optimization (MPO) that performs policy iteration based on a learned state-value function, is proposed as an alternative to policy gradient methods, which can suffer from large variance that may limit performance and in practice require carefully tuned entropy regularization to prevent policy collapse.
Abstract: Some of the most successful applications of deep reinforcement learning to challenging domains in discrete and continuous control have used policy gradient methods in the on-policy setting. However, policy gradients can suffer from large variance that may limit performance, and in practice require carefully tuned entropy regularization to prevent policy collapse. As an alternative to policy gradient algorithms, we introduce V-MPO, an on-policy adaptation of Maximum a Posteriori Policy Optimization (MPO) that performs policy iteration based on a learned state-value function. We show that V-MPO surpasses previously reported scores for both the Atari-57 and DMLab-30 benchmark suites in the multi-task setting, and does so reliably without importance weighting, entropy regularization, or population-based tuning of hyperparameters. On individual DMLab and Atari levels, the proposed algorithm can achieve scores that are substantially higher than has previously been reported. V-MPO is also applicable to problems with high-dimensional, continuous action spaces, which we demonstrate in the context of learning to control simulated humanoids with 22 degrees of freedom from full state observations and 56 degrees of freedom from pixel observations, as well as example OpenAI Gym tasks where V-MPO achieves substantially higher asymptotic scores than previously reported.

Journal ArticleDOI
TL;DR: In this paper, the saturation value of the bipartite entanglement and number entropy was analyzed starting from a random product state deep in the many-body localized (MBL) phase.
Abstract: We analyze the saturation value of the bipartite entanglement and number entropy starting from a random product state deep in the many-body localized (MBL) phase. By studying the probability distributions of these entropies we find that the growth of the saturation value of the entanglement entropy stems from a significant reshuffling of the weight in the probability distributions from the bulk to the exponential tails. In contrast, the probability distributions of the saturation value of the number entropy are converged with system size, and exhibit a sharp cutoff for values of the number entropy which correspond to one particle fluctuating across the boundary between the two halves of the system. Our results therefore rule out slow particle transport deep in the MBL phase and confirm that the slow entanglement entropy production stems uniquely from configurational entanglement.
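
For reference, the two saturation quantities discussed above have the standard definitions (not specific to this paper):

\[
  S \;=\; -\operatorname{Tr}\!\left(\rho_A \ln \rho_A\right),
  \qquad
  S_N \;=\; -\sum_{n} p(n)\,\ln p(n),
\]
% where rho_A is the reduced density matrix of one half of the system and p(n)
% is the probability of finding n particles in that half; with particle-number
% conservation the entanglement entropy splits as S = S_N + S_conf, the second
% term being the configurational entanglement mentioned above.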

Journal ArticleDOI
TL;DR: A new image encryption algorithm using chaos theory and dynamic substitution, based on two-dimensional Henon and Ikeda chaotic maps and a substitution box (S-box) transformation, is presented; simulation results and comparisons prove the efficiency and security of the proposed scheme.
Abstract: The evolution of wireless and mobile communication from 0G to the upcoming 5G gives rise to data sharing through the Internet. This data transfer via open public networks is susceptible to several types of attacks. Encryption is a method that can protect information from hackers, and hence confidential data can be secured through a cryptosystem. Due to the increased number of cyber attacks, encryption has become an important component of modern-day communication. In this article, a new image encryption algorithm is presented using chaos theory and dynamic substitution. The proposed scheme is based on two-dimensional Henon and Ikeda chaotic maps and a substitution box (S-box) transformation. Through the Henon map, a random S-box is selected and the image pixels are substituted randomly. To analyze the security and robustness of the proposed algorithm, several security tests such as information entropy, histogram investigation, correlation analysis, energy, homogeneity, and mean square error are performed. The entropy values of the test images are greater than 7.99 and the key space of the proposed algorithm is 2^798. Furthermore, the correlation values of the encrypted images using the proposed scheme are close to zero when compared with other conventional schemes. The number of pixel change rate (NPCR) and unified average change intensity (UACI) for the proposed scheme are higher than 99.50% and 33, respectively. The simulation results and comparison with state-of-the-art algorithms prove the efficiency and security of the proposed scheme.
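
A short sketch of the NPCR and UACI differential metrics quoted above, computed between two cipher images (ideally obtained from plaintexts differing in a single pixel); random arrays stand in for real cipher images.

import numpy as np

def npcr(c1, c2):
    """Number of pixel change rate: fraction of differing pixels, in percent."""
    return 100.0 * np.mean(c1 != c2)

def uaci(c1, c2, levels=255):
    """Unified average change intensity, in percent."""
    return 100.0 * np.mean(np.abs(c1.astype(float) - c2.astype(float)) / levels)

rng = np.random.default_rng(0)
c1 = rng.integers(0, 256, size=(256, 256))
c2 = rng.integers(0, 256, size=(256, 256))
print(f"NPCR = {npcr(c1, c2):.2f}%  (ideal around 99.6%)")
print(f"UACI = {uaci(c1, c2):.2f}%  (ideal around 33.5%)")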

Journal ArticleDOI
TL;DR: Multivariate analysis of the EEG signal is performed for the detection of schizophrenia, and five entropy measures computed from the IMF signals showed a significant difference.

Journal ArticleDOI
01 Feb 2020
TL;DR: A promising framework based on soft sets, TOPSIS and the Shannon entropy is developed, which aggregates concept selection on design parameter values by merging the acceptable- and satisfactory-level needs of the customers.
Abstract: Among the several phases of new product development, concept selection is the most crucial activity, and it shapes the further progress of a product. Customers' ideas and linguistic requirements are often essential in concept specifications for assessing quantitative criteria, which determine whether a product can progress in the markets. This work aggregates concept selection on design parameter values by merging the acceptable- and satisfactory-level needs of the customers. A promising framework is developed based on soft sets, TOPSIS and the Shannon entropy. Customers' preferences for the design values to be incorporated are identified based on acceptable- and satisfactory-level needs, and these preferences are weighted through Shannon entropy. By performing the AND operation on the soft set of level requirements of one customer with the soft set of requirements of another customer, several weighted tables of soft sets are obtained on pairs of design parameter values. To obtain the best concept under different levels of requirements, TOPSIS is performed, which provides several integrated evaluations. An illustration is considered to demonstrate the method; it yields the best concept for two customers, i.e., one that is acceptable for both customers, satisfactory for both customers, and vice versa. Finally, comparisons with recent major existing methods are presented.
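
A minimal sketch of the Shannon-entropy weighting and TOPSIS steps that the framework combines (the soft-set construction of the decision matrix is not reproduced); the alternatives-by-criteria matrix below is illustrative and treats all criteria as benefit-type.

import numpy as np

X = np.array([[7.0, 0.6, 8.0],
              [5.0, 0.9, 6.0],
              [9.0, 0.4, 7.0]])          # alternatives x criteria (all benefit-type)

# Entropy weights: criteria whose values are more dispersed get larger weights.
P = X / X.sum(axis=0)
E = -np.sum(P * np.log(P), axis=0) / np.log(X.shape[0])
w = (1 - E) / np.sum(1 - E)

# TOPSIS: closeness to the ideal solution on the weighted, normalized matrix.
V = w * (X / np.linalg.norm(X, axis=0))
ideal, anti = V.max(axis=0), V.min(axis=0)
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)
print("entropy weights:", np.round(w, 3))
print("TOPSIS ranking (best first):", np.argsort(-closeness))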

Journal ArticleDOI
TL;DR: In this paper, the classical Shannon entropy and the quantum von Neumann entropy are described, along with related concepts such as classical and quantum relative entropy, conditional entropy, and mutual information.
Abstract: This article consists of a very short introduction to classical and quantum information theory. Basic properties of the classical Shannon entropy and the quantum von Neumann entropy are described, along with related concepts such as classical and quantum relative entropy, conditional entropy, and mutual information. A few more detailed topics are considered in the quantum case.
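
A small numeric companion to the definitions the article reviews: Shannon entropy of a classical distribution and von Neumann entropy S(rho) = -Tr(rho log rho), computed from the eigenvalues of a density matrix.

import numpy as np

def shannon(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def von_neumann(rho):
    evals = np.linalg.eigvalsh(rho)          # eigenvalues of the density matrix
    evals = evals[evals > 1e-12]
    return -np.sum(evals * np.log2(evals))

print(shannon([0.5, 0.5]))                       # fair coin: 1 bit
rho_mixed = np.eye(2) / 2                        # maximally mixed qubit: 1 bit
rho_pure = np.array([[1.0, 0.0], [0.0, 0.0]])    # pure state: 0 bits
print(von_neumann(rho_mixed), von_neumann(rho_pure))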