
Showing papers in "Entropy in 2020"


Journal ArticleDOI

[...]

25 Dec 2020-Entropy
TL;DR: In this paper, a literature review and taxonomy of machine learning interpretability methods are presented, as well as links to their programming implementations, in the hope that this survey would serve as a reference point for both theorists and practitioners.
Abstract: Recent advances in artificial intelligence (AI) have led to its widespread industrial adoption, with machine learning systems demonstrating superhuman performance in a significant number of tasks. However, this surge in performance has often been achieved through increased model complexity, turning such systems into “black box” approaches and causing uncertainty regarding the way they operate and, ultimately, the way that they come to decisions. This ambiguity has made it problematic for machine learning systems to be adopted in sensitive yet critical domains, where their value could be immense, such as healthcare. As a result, scientific interest in the field of Explainable Artificial Intelligence (XAI), a field that is concerned with the development of new methods that explain and interpret machine learning models, has been tremendously reignited over recent years. This study focuses on machine learning interpretability methods; more specifically, a literature review and taxonomy of these methods are presented, as well as links to their programming implementations, in the hope that this survey would serve as a reference point for both theorists and practitioners.

67 citations


Journal ArticleDOI

[...]

24 Sep 2020-Entropy
TL;DR: The present analysis deals with the entropy analysis of blood flow through an anisotropically tapered artery under a suspension of magnetic zinc-oxide (ZnO) nanoparticles (NPs); the outcomes give valuable information to research scientists in the field of biomedical science.
Abstract: The present analysis deals with the entropy analysis of blood flow through an anisotropically tapered artery under a suspension of magnetic zinc-oxide (ZnO) nanoparticles (NPs). The Jeffrey fluid model is contemplated as blood that is electrically conducting and incompressible. The lubrication approach is used for the mathematical modeling. The second law of thermodynamics is used to examine the entropy generation. Exact solutions are obtained for the velocity and temperature profiles with the use of computational software. The results for entropy, velocity, Bejan number, temperature profile, and impedance profile are discussed by plotting graphs. ZnO-NPs have promising applications in biomedical engineering due to their low toxicity, economic reliability, and excellent biocompatibility. ZnO-NPs have also emerged in medicine, e.g., in antibacterial and anticancer activity, and are also beneficial in antidiabetic treatment. Monitoring the blood temperature in the case of a tapered artery is of supreme importance for controlling the temperature of blood in the living environment. The presence of a magnetic field is advantageous for managing and controlling the blood motion at different temperatures. The present outcomes give valuable information to research scientists in the field of biomedical science who are looking to examine blood flow under stenosis conditions, and are also beneficial in treating multiple diseases.
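For readers unfamiliar with the Bejan number reported above, the definition commonly used in entropy-generation analyses is sketched below; the split into heat-transfer, fluid-friction, and magnetic contributions is the usual one and the paper's exact non-dimensionalization may differ.

```latex
\mathrm{Be} \;=\; \frac{N_{\mathrm{heat}}}{N_{\mathrm{total}}}
           \;=\; \frac{N_{\mathrm{heat}}}{N_{\mathrm{heat}} + N_{\mathrm{fluid}} + N_{\mathrm{magnetic}}},
\qquad 0 \le \mathrm{Be} \le 1
```

Be close to 1 indicates that heat-transfer irreversibility dominates the total entropy generation, while Be close to 0 indicates domination by fluid friction and magnetic effects.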

62 citations


Journal ArticleDOI

[...]

01 May 2020-Entropy
TL;DR: This study presents the combination of deep learning of extracted features with the Q-deformed entropy handcrafted features for discriminating between COVID-19 coronavirus, pneumonia and healthy computed tomography (CT) lung scans.
Abstract: Many health systems over the world have collapsed due to limited capacity and a dramatic increase of suspected COVID-19 cases. What has emerged is the need for finding an efficient, quick and accurate method to mitigate the overloading of radiologists' efforts to diagnose the suspected cases. This study presents the combination of deep learning of extracted features with the Q-deformed entropy handcrafted features for discriminating between COVID-19 coronavirus, pneumonia and healthy computed tomography (CT) lung scans. In this study, pre-processing is used to reduce the effect of intensity variations between CT slices. Then histogram thresholding is used to isolate the background of the CT lung scan. Each CT lung scan undergoes a feature extraction which involves deep learning and a Q-deformed entropy algorithm. The obtained features are classified using a long short-term memory (LSTM) neural network classifier. Subsequently, combining all extracted features significantly improves the performance of the LSTM network to precisely discriminate between COVID-19, pneumonia and healthy cases. The maximum achieved accuracy for classifying the collected dataset comprising 321 patients is 99.68%.
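The entropy part of this pipeline can be illustrated with a Tsallis-style q-entropy of a slice's gray-level histogram. This is only a generic sketch: the paper's exact Q-deformed entropy definition, pre-processing, and deep-feature extractor are not reproduced, and the placeholder slice is random data.

```python
import numpy as np

def q_entropy(slice_2d: np.ndarray, q: float = 0.5, bins: int = 256) -> float:
    """Tsallis-style q-entropy of the gray-level distribution of a CT slice.
    For q -> 1 this reduces to the Shannon entropy."""
    hist, _ = np.histogram(slice_2d.ravel(), bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]                                  # ignore empty bins
    if abs(q - 1.0) < 1e-9:
        return float(-(p * np.log(p)).sum())
    return float((1.0 - (p ** q).sum()) / (q - 1.0))

# Example: entropy features of one (placeholder) slice for several q values
slice_img = np.random.randint(0, 256, size=(512, 512))
features = [q_entropy(slice_img, q) for q in (0.25, 0.5, 1.0, 1.5, 2.0)]
print(features)
```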

52 citations


Journal ArticleDOI

[...]

30 Apr 2020-Entropy
TL;DR: A key distinction in the physical sciences that may provide a foundation for the distinction between mind and matter, and between sentient and intentional systems is described.
Abstract: This essay addresses Cartesian duality and how its implicit dialectic might be repaired using physics and information theory. Our agenda is to describe a key distinction in the physical sciences that may provide a foundation for the distinction between mind and matter, and between sentient and intentional systems. From this perspective, it becomes tenable to talk about the physics of sentience and 'forces' that underwrite our beliefs (in the sense of probability distributions represented by our internal states), which may ground our mental states and consciousness. We will refer to this view as Markovian monism, which entails two claims: (1) fundamentally, there is only one type of thing and only one type of irreducible property (hence monism). (2) All systems possessing a Markov blanket have properties that are relevant for understanding the mind and consciousness: if such systems have mental properties, then they have them partly by virtue of possessing a Markov blanket (hence Markovian). Markovian monism rests upon the information geometry of random dynamic systems. In brief, the information geometry induced in any system-whose internal states can be distinguished from external states-must acquire a dual aspect. This dual aspect concerns the (intrinsic) information geometry of the probabilistic evolution of internal states and a separate (extrinsic) information geometry of probabilistic beliefs about external states that are parameterised by internal states. We call these intrinsic (i.e., mechanical, or state-based) and extrinsic (i.e., Markovian, or belief-based) information geometries, respectively. Although these mathematical notions may sound complicated, they are fairly straightforward to handle, and may offer a means through which to frame the origins of consciousness.

48 citations


Journal ArticleDOI

[...]

30 Jul 2020-Entropy
TL;DR: In this paper, the authors used decision tree, bagging, random forest, adaptive boosting (Adaboost), gradient boosting, and eXtreme gradient boosting (XGBoost), as well as artificial neural networks (ANN), recurrent neural networks (RNN), and long short-term memory (LSTM).
Abstract: The prediction of stock group values has always been attractive and challenging for shareholders due to its inherent dynamics, non-linearity, and complex nature. This paper concentrates on the future prediction of stock market groups. Four groups named diversified financials, petroleum, non-metallic minerals, and basic metals from the Tehran stock exchange were chosen for experimental evaluations. Data were collected for the groups based on 10 years of historical records. The value predictions are created for 1, 2, 5, 10, 15, 20, and 30 days in advance. Various machine learning algorithms were utilized for prediction of future values of stock market groups. We employed decision tree, bagging, random forest, adaptive boosting (Adaboost), gradient boosting, and eXtreme gradient boosting (XGBoost), as well as artificial neural networks (ANN), recurrent neural networks (RNN), and long short-term memory (LSTM). Ten technical indicators were selected as the inputs into each of the prediction models. Finally, the results of the predictions were presented for each technique based on four metrics. Among all algorithms used in this paper, LSTM shows more accurate results with the highest model fitting ability. In addition, for tree-based models, there is often intense competition among Adaboost, Gradient Boosting, and XGBoost.
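As a minimal illustration of this setup (not the paper's exact feature set or models), the sketch below derives two common technical indicators from a hypothetical price file and fits a gradient-boosting regressor to predict the value several days ahead; the file name, column name, horizon, and indicator choices are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

def rsi(close: pd.Series, period: int = 14) -> pd.Series:
    """Relative Strength Index, a standard momentum indicator."""
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(period).mean()
    loss = (-delta.clip(upper=0)).rolling(period).mean()
    return 100 - 100 / (1 + gain / (loss + 1e-12))

horizon = 10                                    # predict 10 days ahead
df = pd.read_csv("group_prices.csv")            # hypothetical file with a 'close' column
df["sma_10"] = df["close"].rolling(10).mean()   # simple moving average
df["rsi_14"] = rsi(df["close"])
df["target"] = df["close"].shift(-horizon)      # future value to predict
df = df.dropna()

X, y = df[["sma_10", "rsi_14"]], df["target"]
split = int(0.8 * len(df))                      # chronological split, no shuffling
model = GradientBoostingRegressor().fit(X[:split], y[:split])
print("MAE:", mean_absolute_error(y[split:], model.predict(X[split:])))
```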

43 citations


Journal ArticleDOI

[...]

29 Sep 2020-Entropy
TL;DR: The fundamental differential-geometric structures of information manifolds are described, the fundamental theorem of information geometry is stated, and some use cases of these information manifolds in information sciences are illustrated.
Abstract: In this survey, we describe the fundamental differential-geometric structures of information manifolds, state the fundamental theorem of information geometry, and illustrate some use cases of these information manifolds in information sciences. The exposition is self-contained by concisely introducing the necessary concepts of differential geometry. Proofs are omitted for brevity.
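As a reminder of the central object behind such information manifolds (a standard definition, not copied from the survey), the Fisher information metric of a parametric family {p_θ} is

```latex
g_{ij}(\theta) \;=\; \mathbb{E}_{x \sim p_\theta}\!\left[
  \frac{\partial \log p_\theta(x)}{\partial \theta^i}\,
  \frac{\partial \log p_\theta(x)}{\partial \theta^j}\right]
```

which equips the parameter space with a Riemannian structure; the pair of dual affine connections built on top of this metric is what the fundamental theorem discussed in the survey is concerned with.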

41 citations


Journal ArticleDOI

[...]

Ian Fischer1
08 Sep 2020-Entropy
TL;DR: This article proposes the Minimum Necessary Information (MNI) criterion for evaluating the quality of a model, addressing failures of robust generalization, which extends the traditional measure of generalization as accuracy or related metrics on a held-out set.
Abstract: Much of the field of Machine Learning exhibits a prominent set of failure modes, including vulnerability to adversarial examples, poor out-of-distribution (OoD) detection, miscalibration, and willingness to memorize random labelings of datasets. We characterize these as failures of robust generalization, which extends the traditional measure of generalization as accuracy or related metrics on a held-out set. We hypothesize that these failures to robustly generalize are due to the learning systems retaining too much information about the training data. To test this hypothesis, we propose the Minimum Necessary Information (MNI) criterion for evaluating the quality of a model. In order to train models that perform well with respect to the MNI criterion, we present a new objective function, the Conditional Entropy Bottleneck (CEB), which is closely related to the Information Bottleneck (IB). We experimentally test our hypothesis by comparing the performance of CEB models with deterministic models and Variational Information Bottleneck (VIB) models on a variety of different datasets and robustness challenges. We find strong empirical evidence supporting our hypothesis that MNI models improve on these problems of robust generalization.
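In one common parameterization (which may differ from the paper's exact notation), the Conditional Entropy Bottleneck replaces the Information Bottleneck's compression term I(X;Z) with the conditional term I(X;Z|Y):

```latex
\mathcal{L}_{\mathrm{CEB}} \;=\; I(X;Z \mid Y) \;-\; \gamma\, I(Y;Z)
```

For representations satisfying the Markov chain Z ← X → Y (so that I(Y;Z|X) = 0), the chain rule of mutual information gives

```latex
I(X;Z \mid Y) \;=\; I(X;Z) - I(Y;Z)
```

i.e., CEB penalizes only the information Z retains about X beyond what is needed to predict Y, which is the intuition behind the MNI criterion.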

41 citations


Journal ArticleDOI

[...]

27 Jan 2020-Entropy
TL;DR: This tutorial paper focuses on the variants of the bottleneck problem taking an information theoretic perspective and discusses practical methods to solve it, as well as its connection to coding and learning aspects.
Abstract: This tutorial paper focuses on the variants of the bottleneck problem taking an information theoretic perspective and discusses practical methods to solve it, as well as its connection to coding and learning aspects. The intimate connections of this setting to remote source-coding under logarithmic loss distortion measure, information combining, common reconstruction, the Wyner–Ahlswede–Korner problem, the efficiency of investment information, as well as generalization, variational inference, representation learning, autoencoders, and others are highlighted. We discuss its extension to the distributed information bottleneck problem with emphasis on the Gaussian model and highlight the basic connections to the uplink Cloud Radio Access Networks (CRAN) with oblivious processing. For this model, the optimal trade-offs between relevance (i.e., information) and complexity (i.e., rates) in the discrete and vector Gaussian frameworks are determined. In the concluding outlook, some interesting problems are mentioned such as the characterization of the optimal input (“feature”) distributions under power limitations maximizing the “relevance” for the Gaussian information bottleneck, under “complexity” constraints.
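For reference, the standard information bottleneck formulation that the tutorial builds on (textbook form, not copied from the paper) is

```latex
\min_{p(z \mid x)} \; I(X;Z) \;-\; \beta\, I(Y;Z),
\qquad \text{subject to the Markov chain } Y \text{--} X \text{--} Z
```

where I(X;Z) plays the role of complexity (rate), I(Y;Z) the role of relevance, and β trades the two off.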

40 citations


Journal ArticleDOI

[...]

28 Feb 2020-Entropy
TL;DR: This paper proposes a novel chaos-based encryption system that is computationally less expensive and provides a higher level of security, based on a shuffling process with a fractal key along with a three-dimensional Lorenz chaotic map.
Abstract: Chaos-based encryption schemes have attracted many researchers around the world in the digital image security domain. Digital images can be secured using existing chaotic maps, multiple chaotic maps, and several other hybrid dynamic systems that enhance the non-linearity of digital images. The combined property of confusion and diffusion was introduced by Claude Shannon and can be employed for digital image security. In this paper, we propose a novel system that is computationally less expensive and provides a higher level of security. The system is based on a shuffling process with a fractal key along with a three-dimensional Lorenz chaotic map. The shuffling process adds the confusion property, and the pixels of the standard image are shuffled. The three-dimensional Lorenz chaotic map is used for the diffusion process, which distorts all pixels of the image. In the statistical security test, the mean square error (MSE) was greater than the average value of 10000 for all standard images. The value of the peak signal-to-noise ratio (PSNR) was 7.69 dB for the test image. Moreover, the calculated correlation coefficient values for each direction of the encrypted images were less than zero, with a number of pixel change rate (NPCR) higher than 99%. During the security test, the entropy values were more than 7.9 for each grey channel, which is almost equal to the ideal value of 8 for an 8-bit system. Numerous security tests and low computational complexity tests validate the security, robustness, and real-time implementation of the presented scheme.
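A toy illustration of the diffusion idea only (not the paper's scheme: the fractal-key shuffling stage is omitted, and the quantization of the Lorenz state to bytes is an arbitrary choice made here for simplicity):

```python
import numpy as np

def lorenz_keystream(n_bytes: int, x=0.1, y=0.0, z=0.0,
                     sigma=10.0, rho=28.0, beta=8.0/3.0, dt=0.001):
    """Generate a byte keystream from an Euler-integrated Lorenz system (toy example)."""
    for _ in range(1000):                     # burn-in away from the initial condition
        x, y, z = x + dt*sigma*(y - x), y + dt*(x*(rho - z) - y), z + dt*(x*y - beta*z)
    ks = np.empty(n_bytes, dtype=np.uint8)
    for i in range(n_bytes):
        x, y, z = x + dt*sigma*(y - x), y + dt*(x*(rho - z) - y), z + dt*(x*y - beta*z)
        ks[i] = int(abs(x + y + z) * 1e6) % 256   # quantize the chaotic state to a byte
    return ks

def shannon_entropy(img: np.ndarray) -> float:
    """Shannon entropy of an 8-bit image (ideal value: 8 bits/pixel)."""
    p = np.bincount(img.ravel(), minlength=256) / img.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

img = np.random.randint(0, 256, size=(128, 128), dtype=np.uint8)   # placeholder image
ks = lorenz_keystream(img.size).reshape(img.shape)
cipher = img ^ ks                                                  # XOR diffusion step
print("plain entropy:", shannon_entropy(img), "cipher entropy:", shannon_entropy(cipher))
```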

39 citations


Journal ArticleDOI

[...]

20 May 2020-Entropy
TL;DR: A human activity recognition model is presented that acquires signal data from motion node sensors, including inertial sensors (i.e., gyroscopes and accelerometers), and that outperformed existing well-known statistical state-of-the-art methods by achieving improved recognition accuracy.
Abstract: Advancements in wearable sensor technologies provide prominent effects in the daily life activities of humans. These wearable sensors are gaining more awareness in healthcare for the elderly to ensure their independent living and to improve their comfort. In this paper, we present a human activity recognition model that acquires signal data from motion node sensors including inertial sensors, i.e., gyroscopes and accelerometers. First, the inertial data is processed via multiple filters such as Savitzky-Golay, median and Hampel filters to examine lower/upper cutoff frequency behaviors. Second, it extracts a multifused model for statistical, wavelet and binary features to maximize the occurrence of optimal feature values. Then, adaptive moment estimation (Adam) and AdaDelta are introduced in a feature optimization phase to adopt learning rate patterns. These optimized patterns are further processed by the maximum entropy Markov model (MEMM) for empirical expectation and highest entropy, which measure signal variances for improved accuracy results. Our model was experimentally evaluated on the University of Southern California Human Activity Dataset (USC-HAD) as a benchmark dataset and on the Intelligent Mediasporting behavior (IMSB) dataset, a new self-annotated sports dataset. For evaluation, we used the "leave-one-out" cross validation scheme and the results outperformed existing well-known statistical state-of-the-art methods by achieving an improved recognition accuracy of 91.25%, 93.66% and 90.91% when compared with the USC-HAD, IMSB, and Mhealth datasets, respectively. The proposed system should be applicable to man-machine interface domains, such as health exercises, robot learning, interactive games and pattern-based surveillance.
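The filtering step can be illustrated with SciPy; the window lengths, the simple Hampel implementation, and the synthetic accelerometer signal below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import savgol_filter, medfilt

def hampel(x: np.ndarray, window: int = 11, n_sigmas: float = 3.0) -> np.ndarray:
    """Simple Hampel filter: replace outliers with the rolling median."""
    y = x.copy()
    k = window // 2
    for i in range(k, len(x) - k):
        win = x[i - k:i + k + 1]
        med = np.median(win)
        mad = 1.4826 * np.median(np.abs(win - med))   # robust estimate of the std
        if mad > 0 and abs(x[i] - med) > n_sigmas * mad:
            y[i] = med
    return y

t = np.linspace(0, 10, 500)
acc = np.sin(2 * np.pi * 1.5 * t) + 0.1 * np.random.randn(500)  # placeholder accelerometer axis
acc_sg = savgol_filter(acc, window_length=15, polyorder=3)      # Savitzky-Golay smoothing
acc_med = medfilt(acc, kernel_size=5)                           # median filtering
acc_hampel = hampel(acc)                                        # outlier removal
```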

36 citations


Journal ArticleDOI

[...]

13 Aug 2020-Entropy
TL;DR: It is argued that the model of adaptive phenotypes under the free-energy principle can be used to furnish a formal semantics, enabling us to assign semantic content to specific phenotypic states (the internal states of a Markovian system that exists far from equilibrium).
Abstract: The aim of this paper is twofold: (1) to assess whether the construct of neural representations plays an explanatory role under the variational free-energy principle and its corollary process theory, active inference; and (2) if so, to assess which philosophical stance-in relation to the ontological and epistemological status of representations-is most appropriate. We focus on non-realist (deflationary and fictionalist-instrumentalist) approaches. We consider a deflationary account of mental representation, according to which the explanatorily relevant contents of neural representations are mathematical, rather than cognitive, and a fictionalist or instrumentalist account, according to which representations are scientifically useful fictions that serve explanatory (and other) aims. After reviewing the free-energy principle and active inference, we argue that the model of adaptive phenotypes under the free-energy principle can be used to furnish a formal semantics, enabling us to assign semantic content to specific phenotypic states (the internal states of a Markovian system that exists far from equilibrium). We propose a modified fictionalist account-an organism-centered fictionalism or instrumentalism. We argue that, under the free-energy principle, pursuing even a deflationary account of the content of neural representations licenses the appeal to the kind of semantic content involved in the 'aboutness' or intentionality of cognitive systems; our position is thus coherent with, but rests on distinct assumptions from, the realist position. We argue that the free-energy principle thereby explains the aboutness or intentionality in living systems and hence their capacity to parse their sensory stream using an ontology or set of semantic factors.

Journal ArticleDOI

[...]

04 Feb 2020-Entropy
TL;DR: This work proposes a permissioned private blockchain-based solution to secure the image while encrypting it, ensuring the privacy and security of the image data on the blockchain.
Abstract: Smart cameras and image sensors are widely used in industrial processes, from the design to the quality checking of the final product. Images generated by these sensors are at continuous risk of disclosure and privacy breach in the industrial Internet of Things (IIoT). Traditional solutions to secure sensitive data fade in IIoT environments because of the involvement of third parties. Blockchain technology is the modern-day solution for trust issues and eliminating or minimizing the role of the third party. In the context of the IIoT, we propose a permissioned private blockchain-based solution to secure the image while encrypting it. In this scheme, the cryptographic pixel values of an image are stored on the blockchain, ensuring the privacy and security of the image data. Based on the number of pixel change rate (NPCR), the unified averaged changed intensity (UACI), and information entropy analysis, we evaluate the strength of the proposed image encryption algorithm's ciphers with respect to differential attacks. We obtained entropy values near the ideal value of 8, which is considered to be safe from brute-force attacks. Encrypted results show that the proposed scheme is highly effective for data leakage prevention and security.
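The two differential-attack metrics named above have simple textbook definitions for 8-bit images; the sketch below implements them (the random arrays are placeholders for two ciphers of plain images differing in a single pixel), and the information-entropy test is the same Shannon-entropy computation shown for the chaos-based scheme earlier in this listing.

```python
import numpy as np

def npcr(c1: np.ndarray, c2: np.ndarray) -> float:
    """Number of Pixel Change Rate between two cipher images, in percent."""
    return float((c1 != c2).mean() * 100.0)

def uaci(c1: np.ndarray, c2: np.ndarray) -> float:
    """Unified Averaged Changed Intensity for 8-bit images, in percent."""
    return float((np.abs(c1.astype(int) - c2.astype(int)) / 255.0).mean() * 100.0)

# Placeholders for two encryptions of plain images that differ in one pixel
c1 = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
c2 = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
print("NPCR:", npcr(c1, c2), "UACI:", uaci(c1, c2))
```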

Journal ArticleDOI

[...]

10 Nov 2020-Entropy
TL;DR: This is the Editorial article summarizing the scope of the Special Issue: Approximate Bayesian Inference.
Abstract: This is the Editorial article summarizing the scope of the Special Issue: Approximate Bayesian Inference.

Journal ArticleDOI

[...]

Chungu Guo, Liangwei Yang, Xiao Chen, Duanbing Chen, Hui Gao, Jing Ma1 
21 Feb 2020-Entropy
TL;DR: The proposed EnRenew algorithm measures the importance of nodes based on information entropy and selects a group of important nodes through a dynamic update strategy, shedding light on a new method of node mining in complex networks for information spreading and epidemic prevention.
Abstract: Identifying a set of influential nodes is an important topic in complex networks which plays a crucial role in many applications, such as market advertising, rumor controlling, and predicting valuable scientific publications. In regard to this, researchers have developed algorithms from simple degree methods to all kinds of sophisticated approaches. However, a more robust and practical algorithm is required for the task. In this paper, we propose the EnRenew algorithm, aimed at identifying a set of influential nodes via information entropy. Firstly, the information entropy of each node is calculated as its initial spreading ability. Then, the node with the largest information entropy is selected and the spreading ability of its l-length reachable nodes is renewed by an attenuation factor; this process is repeated until the specified number of influential nodes is selected. Compared with the best state-of-the-art benchmark methods, the performance of the proposed algorithm improved by 21.1%, 7.0%, 30.0%, 5.0%, 2.5%, and 9.0% in final affected scale on the CEnew, Email, Hamster, Router, Condmat, and Amazon networks, respectively, under the Susceptible-Infected-Recovered (SIR) simulation model. The proposed algorithm measures the importance of nodes based on information entropy and selects a group of important nodes through a dynamic update strategy. The impressive results on the SIR simulation model shed light on a new method of node mining in complex networks for information spreading and epidemic prevention.
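The selection loop described above can be sketched generically with networkx; note that the node-entropy definition used here (based on neighbors' current spreading ability) and the hop-dependent attenuation are simplified stand-ins, not EnRenew's exact formulas.

```python
import math
import networkx as nx

def node_entropy(G: nx.Graph, spread: dict) -> dict:
    """Entropy of each node computed from its neighbours' current spreading ability."""
    H = {}
    for u in G:
        total = sum(spread[v] for v in G[u]) or 1.0
        H[u] = -sum((spread[v] / total) * math.log(spread[v] / total)
                    for v in G[u] if spread[v] > 0)
    return H

def select_influential(G: nx.Graph, k: int, l: int = 2, attenuation: float = 0.5) -> list:
    spread = {u: float(G.degree(u)) or 1.0 for u in G}   # initial spreading ability
    chosen = []
    for _ in range(k):
        H = node_entropy(G, spread)
        u = max((v for v in G if v not in chosen), key=H.get)
        chosen.append(u)
        # attenuate the spreading ability of nodes within l hops of the chosen node
        for v, d in nx.single_source_shortest_path_length(G, u, cutoff=l).items():
            if d > 0:
                spread[v] *= attenuation ** d
    return chosen

G = nx.karate_club_graph()
print(select_influential(G, k=5))
```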

Journal ArticleDOI

[...]

21 Mar 2020-Entropy
TL;DR: A systematic literature review of the most important variants and improvements of the Particle Swarm Optimisation algorithm is presented, covering its hybridisation and parallelisation as well as its extensions to other classes of optimisation problems.
Abstract: The Particle Swarm Optimisation (PSO) algorithm was inspired by the social and biological behaviour of bird flocks searching for food sources. In this nature-based algorithm, individuals are referred to as particles and fly through the search space seeking the global best position that minimises (or maximises) a given problem. Today, PSO is one of the most well-known and widely used swarm intelligence algorithms and metaheuristic techniques, because of its simplicity and ability to be used in a wide range of applications. However, in-depth studies of the algorithm have led to the detection and identification of a number of problems with it, especially convergence problems and performance issues. Consequently, a myriad of variants, enhancements and extensions to the original version of the algorithm, developed and introduced in the mid-1990s, have been proposed, especially in the last two decades. In this article, a systematic literature review of those variants and improvements is made, which also covers the hybridisation and parallelisation of the algorithm and its extensions to other classes of optimisation problems, taking into consideration the most important ones. These approaches and improvements are appropriately summarised, organised and presented, in order to allow and facilitate the identification of the most appropriate PSO variant for a particular application.
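As a reference for readers new to the method, a minimal global-best PSO in NumPy is sketched below; the inertia and acceleration coefficients are typical textbook values, not recommendations from this review.

```python
import numpy as np

def pso(f, dim=2, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO for minimising f over a box-constrained search space."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))           # particle positions
    v = np.zeros_like(x)                                  # particle velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

sphere = lambda p: float((p ** 2).sum())
print(pso(sphere))   # should approach the global minimum at the origin
```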

Journal ArticleDOI

[...]

26 Jul 2020-Entropy
TL;DR: A novel feature extraction method is proposed that incorporates robust entropy optimization and an efficient Maximum Entropy Markov Model (MEMM) for HIR via multiple vision sensors, and that will be applicable to a wide variety of man–machine interfaces.
Abstract: Automatic identification of human interaction is a challenging task, especially in dynamic environments with cluttered backgrounds from video sequences. Advancements in computer vision sensor technologies provide powerful effects in human interaction recognition (HIR) during routine daily life. In this paper, we propose a novel feature extraction method which incorporates robust entropy optimization and an efficient Maximum Entropy Markov Model (MEMM) for HIR via multiple vision sensors. The main objectives of the proposed methodology are: (1) to propose a hybrid of four novel features—i.e., spatio-temporal features, energy-based features, shape-based angular and geometric features—and a motion-orthogonal histogram of oriented gradient (MO-HOG); (2) to encode hybrid feature descriptors using a codebook, a Gaussian mixture model (GMM) and Fisher encoding; (3) to optimize the encoded features using a cross-entropy optimization function; (4) to apply a MEMM classification algorithm to examine empirical expectations and highest entropy, which measure pattern variances to achieve improved HIR accuracy results. Our system is tested over three well-known datasets: SBU Kinect interaction; UoL 3D social activity; UT-interaction datasets. Through extensive experimentation, the proposed feature extraction algorithm, along with cross-entropy optimization, has achieved an average accuracy rate of 91.25% on SBU, 90.4% on UoL and 87.4% on UT-Interaction datasets. The proposed HIR system will be applicable to a wide variety of man–machine interfaces, such as public-place surveillance, future medical applications, virtual reality, fitness exercises and 3D interactive gaming.

Journal ArticleDOI

[...]

27 Jul 2020-Entropy
TL;DR: This paper aims to describe the state of the art in the implementation of thermodynamics into ecology, and shows that natural and culturally induced changes in ecosystems are accompanied by variations in exergy.
Abstract: How to predict the evolution of ecosystems is one of the numerous questions asked of ecologists by managers and politicians. To answer this, we will need to give a scientific definition to concepts like sustainability, integrity, resilience and ecosystem health. This is not an easy task, as modern ecosystem theory exemplifies. Ecosystems show a high degree of complexity, based upon a high number of compartments, interactions and regulations. The last two decades have offered proposals for the interpretation of ecosystems within a framework of thermodynamics. The entrance point of such an understanding of ecosystems was delivered more than 50 years ago through Schrödinger's and Prigogine's interpretations of living systems as "negentropy feeders" and "dissipative structures", respectively. Combining these views from far-from-equilibrium thermodynamics with traditional classical thermodynamics and with ecology is obviously not going to happen without problems. There seems little reason to doubt that far-from-equilibrium systems, such as organisms or ecosystems, also have to obey fundamental physical principles such as mass conservation and the first and second laws of thermodynamics. Both have been applied in ecology since the 1950s, and lately the concepts of exergy and entropy have been introduced. Exergy has recently been proposed, from several directions, as a useful indicator of the state, structure and function of the ecosystem. The proposals take two main directions, one concerned with the exergy stored in the ecosystem, the other with the exergy degraded and entropy formation. The implementation of exergy in ecology has often been explained as a translation of the Darwinian principle of "survival of the fittest" into thermodynamics, the fittest ecosystem being the one able to use and store fluxes of energy and materials in the most efficient manner. The major problem in the transfer to ecology is that thermodynamic properties can only be calculated and not measured. Most of the supportive evidence comes from aquatic ecosystems. Results show that natural and culturally induced changes in ecosystems are accompanied by variations in exergy. In brief, ecological succession is followed by an increase of exergy. This paper aims to describe the state of the art in the implementation of thermodynamics into ecology. This includes a brief outline of the history and the derivation of the thermodynamic functions used today. Examples of applications and results achieved up to now are given, and the importance to management is laid out. Some suggestions for essential future research agendas on issues that need resolution are given.

Journal ArticleDOI

[...]

24 Jan 2020-Entropy
TL;DR: The extensive experimental results indicated that the proposed CEEMD-XGBoost can significantly enhance the detection performance of epileptic seizures in terms of sensitivity, specificity, and accuracy.
Abstract: Epilepsy is a common nervous system disease that is characterized by recurrent seizures. An electroencephalogram (EEG) records neural activity, and it is commonly used for the diagnosis of epilepsy. To achieve accurate detection of epileptic seizures, an automatic detection approach of epileptic seizures, integrating complementary ensemble empirical mode decomposition (CEEMD) and extreme gradient boosting (XGBoost), named CEEMD-XGBoost, is proposed. Firstly, the decomposition method, CEEMD, which is capable of effectively reducing the influence of mode mixing and end effects, was utilized to divide raw EEG signals into a set of intrinsic mode functions (IMFs) and residues. Secondly, the multi-domain features were extracted from raw signals and the decomposed components, and they were further selected according to the importance scores of the extracted features. Finally, XGBoost was applied to develop the epileptic seizure detection model. Experiments were conducted on two benchmark epilepsy EEG datasets, named the Bonn dataset and the CHB-MIT (Children's Hospital Boston and Massachusetts Institute of Technology) dataset, to evaluate the performance of our proposed CEEMD-XGBoost. The extensive experimental results indicated that, compared with some previous EEG classification models, CEEMD-XGBoost can significantly enhance the detection performance of epileptic seizures in terms of sensitivity, specificity, and accuracy.
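A rough sketch of this pipeline under stated assumptions: the PyEMD package (which provides CEEMDAN, a close relative of CEEMD), the xgboost package, simple per-IMF statistics in place of the paper's multi-domain features, and random arrays standing in for segmented EEG epochs and labels.

```python
import numpy as np
from PyEMD import CEEMDAN               # pip install EMD-signal
from xgboost import XGBClassifier

def epoch_features(epoch: np.ndarray, max_imfs: int = 5) -> np.ndarray:
    """Decompose one EEG epoch and return simple statistics of each IMF."""
    imfs = CEEMDAN(trials=20)(epoch)[:max_imfs]     # fewer trials to keep the sketch fast
    feats = []
    for imf in imfs:
        feats += [imf.mean(), imf.std(), np.abs(np.diff(imf)).mean()]
    feats += [0.0] * (3 * max_imfs - len(feats))    # pad if fewer IMFs were found
    return np.array(feats)

# Placeholders: (n_epochs, n_samples) EEG segments and seizure / non-seizure labels
X_epochs = np.random.randn(40, 512)
y_labels = np.random.randint(0, 2, 40)

X = np.vstack([epoch_features(e) for e in X_epochs])
clf = XGBClassifier(n_estimators=200, max_depth=4).fit(X, y_labels)
print("training accuracy:", clf.score(X, y_labels))
```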

Journal ArticleDOI

[...]

16 Apr 2020-Entropy
TL;DR: This paper shows that non-extensive cross-entropy econometrics is a valuable complement to traditional econometrics as it explains phenomena based on power-law probability distributions and enables econometric model estimation for non-ergodic ill-behaved (troublesome) inverse problems.
Abstract: The aim of this paper is to examine the role of thermodynamics, and in particular, entropy, for the development of economics within the last 150 years. The use of entropy has not only led to a significant increase in economic knowledge, but also to the emergence of such scientific disciplines as econophysics, complexity economics and quantum economics. Nowadays, an interesting phenomenon can be observed; namely, that rapid progress in economics is being made outside the mainstream. The first significant achievement was the emergence of entropy economics in the early 1970s, which introduced the second law of thermodynamics to considerations regarding production processes. In this way, not only was ecological economics born but also an entropy-based econometric approach developed. This paper shows that non-extensive cross-entropy econometrics is a valuable complement to traditional econometrics as it explains phenomena based on power-law probability distribution and enables econometric model estimation for non-ergodic ill-behaved (troublesome) inverse problems. Furthermore, the entropy economics has accelerated the emergence of modern econophysics and complexity economics. These new directions of research have led to many interesting discoveries that usually contradict the claims of conventional economics. Econophysics has questioned the efficient market hypothesis, while complexity economics has shown that markets and economies function best near the edge of chaos. Quantum economics has already appeared on the horizon, which recognizes money as a fundamental measurement device in the economy. The development of these sciences may indicate the need to reformulate all mainstream economics from its foundations.

Journal ArticleDOI

[...]

23 Apr 2020-Entropy
TL;DR: An interdependent network model is considered in which individuals in each layer follow different evolutionary games and each player is a mobile agent that can move locally inside its own layer to improve its fitness.
Abstract: Evolutionary game theory in the realm of network science appeals to a lot of research communities, as it constitutes a popular theoretical framework for studying the evolution of cooperation in social dilemmas. Recent research has shown that cooperation is markedly more resistant in interdependent networks, where traditional network reciprocity can be further enhanced due to various forms of interdependence between different network layers. However, the role of mobility in interdependent networks is yet to gain its well-deserved attention. Here we consider an interdependent network model, where individuals in each layer follow different evolutionary games, and where each player is considered as a mobile agent that can move locally inside its own layer to improve its fitness. Probabilistically, we also consider an imitation possibility from a neighbor on the other layer. We show that, by considering migration and stochastic imitation, further fascinating gateways to cooperation on interdependent networks can be observed. Notably, cooperation can be promoted on both layers, even if cooperation without interdependence would be improbable on one of the layers due to adverse conditions. Our results provide a rationale for engineering better social systems at the interface of networks and human decision making under testing dilemmas.

Journal ArticleDOI

[...]

23 Apr 2020-Entropy
TL;DR: A novel procedure is proposed and justified for adjusting the evaluation metrics for imbalanced datasets, which are common for different kinds of skin lesions.
Abstract: In this paper, a new Computer-Aided Detection (CAD) system for the detection and classification of dangerous skin lesions (melanoma type) is presented, through a fusion of handcrafted features related to the medical algorithm ABCD rule (Asymmetry Borders-Colors-Dermatoscopic Structures) and deep learning features employing Mutual Information (MI) measurements. The steps of a CAD system can be summarized as preprocessing, feature extraction, feature fusion, and classification. During the preprocessing step, a lesion image is enhanced, filtered, and segmented, with the aim of obtaining the Region of Interest (ROI); in the next step, the feature extraction is performed. Handcrafted features such as shape, color, and texture are used as the representation of the ABCD rule, and deep learning features are extracted using a Convolutional Neural Network (CNN) architecture, which is pre-trained on Imagenet (an ILSVRC Imagenet task). MI measurement is used as a fusion rule, gathering the most important information from both types of features. Finally, at the classification step, several methods are employed such as Linear Regression (LR), Support Vector Machines (SVMs), and Relevance Vector Machines (RVMs). The designed framework was tested using the ISIC 2018 public dataset. The proposed framework appears to demonstrate an improved performance in comparison with other state-of-the-art methods in terms of the accuracy, specificity, and sensitivity obtained in the training and test stages. Additionally, we propose and justify a novel procedure that should be used for adjusting the evaluation metrics for imbalanced datasets, which are common for different kinds of skin lesions.
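A hedged sketch of the fusion idea using scikit-learn: the handcrafted and deep feature arrays are placeholders, and ranking the concatenated features by mutual information with the label is one simple MI-based fusion rule, not necessarily the paper's exact measurement.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

# Placeholders: handcrafted ABCD-rule features and CNN features for n lesions
n = 200
handcraft = np.random.randn(n, 30)      # shape / colour / texture descriptors
deep = np.random.randn(n, 512)          # CNN embedding (e.g., from a pre-trained backbone)
y = np.random.randint(0, 2, n)          # melanoma / benign labels

fused = np.hstack([handcraft, deep])
mi = mutual_info_classif(fused, y, random_state=0)   # MI of each feature with the label
top = np.argsort(mi)[::-1][:64]                      # keep the 64 most informative features
clf = SVC(kernel="rbf", class_weight="balanced").fit(fused[:, top], y)
print("training accuracy:", clf.score(fused[:, top], y))
```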

Journal ArticleDOI

[...]

21 May 2020-Entropy
TL;DR: In this article, a comparison of distribution matching (DM) and sphere shaping (SpSh) algorithms for short blocklength probabilistic amplitude shaping is presented, where the objective of shaping is reformulated as obtaining the most energy-efficient signal space for a given rate.
Abstract: In this paper, we provide a systematic comparison of distribution matching (DM) and sphere shaping (SpSh) algorithms for short blocklength probabilistic amplitude shaping. For asymptotically large blocklengths, constant composition distribution matching (CCDM) is known to generate the target capacity-achieving distribution. However, as the blocklength decreases, the resulting rate loss diminishes the efficiency of CCDM. We claim that for such short blocklengths over the additive white Gaussian noise (AWGN) channel, the objective of shaping should be reformulated as obtaining the most energy-efficient signal space for a given rate (rather than matching distributions). In light of this interpretation, multiset-partition DM (MPDM) and SpSh are reviewed as energy-efficient shaping techniques. Numerical results show that both have smaller rate losses than CCDM. SpSh-whose sole objective is to maximize the energy efficiency-is shown to have the minimum rate loss amongst all, which is particularly apparent for ultra short blocklengths. We provide simulation results of the end-to-end decoding performance showing that up to 1 dB improvement in power efficiency over uniform signaling can be obtained with MPDM and SpSh at blocklengths around 200. Finally, we present a discussion on the complexity of these algorithms from the perspectives of latency, storage and computations.
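The rate loss discussed here has a simple closed form for CCDM; using the usual definitions (a standard result, not copied from the paper), for a block of n amplitudes with composition (n_1, …, n_M) targeting the distribution P_A with P_A(a_i) = n_i/n:

```latex
R_{\mathrm{ccdm}} \;=\; \frac{1}{n}\,\log_2 \binom{n}{n_1,\,n_2,\,\dots,\,n_M},
\qquad
R_{\mathrm{loss}} \;=\; \mathbb{H}(P_A) \;-\; R_{\mathrm{ccdm}} \;\ge\; 0
```

R_loss vanishes only as n grows large, which is why CCDM loses efficiency at the short blocklengths considered in the paper.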

Journal ArticleDOI

[...]

13 May 2020-Entropy
TL;DR: In this paper, the authors consider multi-message communication (MMC) by allowing multiple computations to be conveyed from each worker per iteration, and propose novel straggler avoidance techniques for both coded computation and coded communication with MMC.
Abstract: When gradient descent (GD) is scaled to many parallel workers for large-scale machine learning applications, its per-iteration computation time is limited by straggling workers. Straggling workers can be tolerated by assigning redundant computations and/or coding across data and computations, but in most existing schemes, each non-straggling worker transmits one message per iteration to the parameter server (PS) after completing all its computations. Imposing such a limitation results in two drawbacks: over-computation due to inaccurate prediction of the straggling behavior, and under-utilization due to discarding partial computations carried out by stragglers. To overcome these drawbacks, we consider multi-message communication (MMC) by allowing multiple computations to be conveyed from each worker per iteration, and propose novel straggler avoidance techniques for both coded computation and coded communication with MMC. We analyze how the proposed designs can be employed efficiently to seek a balance between the computation and communication latency. Furthermore, we identify the advantages and disadvantages of these designs in different settings through extensive simulations, both model-based and real implementation on Amazon EC2 servers, and demonstrate that proposed schemes with MMC can help improve upon existing straggler avoidance schemes.

Journal ArticleDOI

[...]

24 Feb 2020-Entropy
TL;DR: This paper combines entropy weight, analytic hierarchy process (AHP) weight, and the technique for order preference by similarity to an ideal solution (TOPSIS) method into a suitable multi-criteria decision making (MCDM) solution.
Abstract: The type of criterion weight can be distinguished according to different decision methods. Subjective weights are given by decision makers based on their knowledge, experience, expertise, and other factors. Objective weights are obtained through multi-step calculations of the evaluation matrix constructed from the actual information about the evaluation criteria of the alternatives. A single consideration of these two types of weights often results in biased results. In addition, in order to build an effective supply chain source, buyers must find suitable quality products and/or service providers in the process of supplier selection. Based on the above reasons, it is difficult to accurately select the appropriate alternative. The main contribution of this paper is to combine entropy weight, analytic hierarchy process (AHP) weight, and the technique for order preference by similarity to an ideal solution (TOPSIS) method into a suitable multi-criteria decision making (MCDM) solution. The TOPSIS method is extended with entropy-AHP weights, and entropy-AHP weights are used instead of subjective weights. A novel decision-making model of TOPSIS integrated entropy-AHP weights is proposed to select the appropriate supplier. Finally, we take the selection of building material suppliers as an example and use sensitivity analysis to show that the combination of the TOPSIS method based on entropy-AHP weights can effectively select the appropriate supplier.
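A compact sketch of the entropy-weight and TOPSIS steps in NumPy; the decision matrix, the AHP weights, and the normalized-product rule used here to combine the two weight vectors are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def entropy_weights(X: np.ndarray) -> np.ndarray:
    """Objective weights from the entropy of each criterion column (m alternatives x n criteria)."""
    P = np.clip(X / X.sum(axis=0), 1e-12, 1.0)
    E = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
    d = 1.0 - E                                   # degree of diversification
    return d / d.sum()

def topsis(X: np.ndarray, w: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """Closeness coefficient of each alternative (higher is better)."""
    V = (X / np.linalg.norm(X, axis=0)) * w       # weighted, vector-normalised matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

X = np.array([[3.2, 0.8, 70.0],                   # supplier decision matrix (placeholder data)
              [2.9, 1.1, 85.0],
              [3.5, 0.9, 60.0]])
benefit = np.array([True, False, True])            # second criterion is a cost criterion
w_entropy = entropy_weights(X)
w_ahp = np.array([0.5, 0.3, 0.2])                  # AHP (subjective) weights, assumed given
w = w_entropy * w_ahp / (w_entropy * w_ahp).sum()  # one simple way to combine the two weights
print(np.argsort(topsis(X, w, benefit))[::-1])     # supplier ranking, best first
```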

Journal ArticleDOI

[...]

09 Jan 2020-Entropy
TL;DR: A brief explanatory review of recent ideas, results and hypotheses about the blessing of dimensionality and related simplifying effects relevant to machine learning and neuroscience can be found in this paper, where the authors show that generic high-dimensional datasets exhibit fairly simple geometric properties and there is a fundamental tradeoff between complexity and simplicity in high dimensional spaces.
Abstract: High-dimensional data and high-dimensional representations of reality are inherent features of modern Artificial Intelligence systems and applications of machine learning. The well-known phenomenon of the “curse of dimensionality” states: many problems become exponentially difficult in high dimensions. Recently, the other side of the coin, the “blessing of dimensionality”, has attracted much attention. It turns out that generic high-dimensional datasets exhibit fairly simple geometric properties. Thus, there is a fundamental tradeoff between complexity and simplicity in high dimensional spaces. Here we present a brief explanatory review of recent ideas, results and hypotheses about the blessing of dimensionality and related simplifying effects relevant to machine learning and neuroscience.

Journal ArticleDOI

[...]

26 Oct 2020-Entropy
TL;DR: In this paper, it was shown that the learning dynamics of a neural network can exhibit approximate behaviors that were described by both quantum mechanics and general relativity, and that the two descriptions are holographic duals of each other.
Abstract: We discuss a possibility that the entire universe on its most fundamental level is a neural network. We identify two different types of dynamical degrees of freedom: “trainable” variables (e.g., bias vector or weight matrix) and “hidden” variables (e.g., state vector of neurons). We first consider stochastic evolution of the trainable variables to argue that near equilibrium their dynamics is well approximated by Madelung equations (with free energy representing the phase) and further away from the equilibrium by Hamilton–Jacobi equations (with free energy representing the Hamilton’s principal function). This shows that the trainable variables can indeed exhibit classical and quantum behaviors with the state vector of neurons representing the hidden variables. We then study stochastic evolution of the hidden variables by considering D non-interacting subsystems with average state vectors x̄1, …, x̄D and an overall average state vector x̄0. In the limit when the weight matrix is a permutation matrix, the dynamics of x̄μ can be described in terms of relativistic strings in an emergent D+1 dimensional Minkowski space-time. If the subsystems are minimally interacting, with interactions that are described by a metric tensor, then the emergent space-time becomes curved. We argue that the entropy production in such a system is a local function of the metric tensor which should be determined by the symmetries of the Onsager tensor. It turns out that a very simple and highly symmetric Onsager tensor leads to the entropy production described by the Einstein–Hilbert term. This shows that the learning dynamics of a neural network can indeed exhibit approximate behaviors that were described by both quantum mechanics and general relativity. We also discuss a possibility that the two descriptions are holographic duals of each other.

Journal ArticleDOI

[...]

06 Mar 2020-Entropy
TL;DR: The criticism of quantum nonlocality is explained in the spirit of the Hertz–Boltzmann methodology of scientific theories, showing that the quantum paradoxes disappear if one adopts the purely statistical interpretation of quantum mechanics.
Abstract: This paper is a new step towards understanding why "quantum nonlocality" is a misleading concept. Metaphorically speaking, "quantum nonlocality" is Janus faced. One face is an apparent nonlocality ...

Journal ArticleDOI

[...]

04 Mar 2020-Entropy
TL;DR: ElPiGraph exploits and further develops the concept of elastic energy, the topological graph grammar approach, and a gradient descent-like optimization of the graph topology, and is capable of approximating data point clouds via principal graph ensembles.
Abstract: Multidimensional datapoint clouds representing large datasets are frequently characterized by non-trivial low-dimensional geometry and topology which can be recovered by unsupervised machine learning approaches, in particular, by principal graphs. Principal graphs approximate the multivariate data by a graph injected into the data space with some constraints imposed on the node mapping. Here we present ElPiGraph, a scalable and robust method for constructing principal graphs. ElPiGraph exploits and further develops the concept of elastic energy, the topological graph grammar approach, and a gradient descent-like optimization of the graph topology. The method is able to withstand high levels of noise and is capable of approximating data point clouds via principal graph ensembles. This strategy can be used to estimate the statistical significance of complex data features and to summarize them into a single consensus principal graph. ElPiGraph deals efficiently with large datasets in various fields such as biology, where it can be used for example with single-cell transcriptomic or epigenomic datasets to infer gene expression dynamics and recover differentiation landscapes.

Journal ArticleDOI

[...]

11 Mar 2020-Entropy
TL;DR: This paper presents a new extended fuzzy TOPSIS method for dealing with uncertainty in the form of PyPHFS in real-life problems and applies it to practical examples of selecting the most critical fog-haze influence factor.
Abstract: The Pythagorean probabilistic hesitant fuzzy set (PyPHFS) is an effective, generalized and powerful tool for expressing fuzzy information. It can cover more complex and more hesitant fuzzy evaluation information. Therefore, based on the advantages of PyPHFSs, this paper presents a new extended fuzzy TOPSIS method for dealing with uncertainty in the form of PyPHFS in real life problems. The paper is divided into three main parts. Firstly, the novel Pythagorean probabilistic hesitant fuzzy entropy measure is established using generalized distance measure under PyPHFS information to find out the unknown weights information of the attributes. The second part consists of the algorithm sets of the TOPSIS technique under PyPHFS environment, where the weights of criteria are completely unknown. Finally, in order to verify the efficiency and superiority of the proposed method, this paper applies some practical examples of the selection of the most critical fog-haze influence factor and makes a detailed comparison with other existing methods.

Journal ArticleDOI

[...]

22 Dec 2020-Entropy
TL;DR: In this article, the authors present a comprehensive review of qualitative intelligent fault diagnosis approaches from both theoretical and practical aspects, and present some of the latest results on qualitative fault diagnosis in high-speed trains.
Abstract: For ensuring the safety and reliability of high-speed trains, fault diagnosis (FD) techniques play an important role. Benefiting from the rapid developments of artificial intelligence, intelligent FD (IFD) strategies have attracted much attention in academia and in applications, where the qualitative approach is an important branch. Therefore, this survey presents a comprehensive review of these qualitative approaches from both theoretical and practical aspects. The primary task of this paper is to review the current development of these qualitative IFD techniques and then to present some of the latest results. Another major focus of our research is to introduce the background of high-speed trains, such as the composition of the core subsystems, the system structure, etc., which makes it convenient for researchers to extract the diagnostic knowledge of high-speed trains and to understand how to use these types of knowledge. Through reasonable utilization of this knowledge, it is hoped that various challenges caused by the coupling among subsystems of high-speed trains can be addressed. Furthermore, future research trends for qualitative IFD approaches are also presented.