
Showing papers in "Information Fusion in 2020"


Journal ArticleDOI
TL;DR: This paper presents a taxonomy of recent contributions on the explainability of different machine learning models, and builds and examines in detail a second, dedicated taxonomy for approaches aimed at explaining Deep Learning methods.

2,827 citations


Journal ArticleDOI
TL;DR: The experimental results show that the proposed model demonstrates better generalization ability than the existing image fusion models for fusing various types of images, such as multi-focus, infrared-visual, multi-modal medical and multi-exposure images.

524 citations


Journal ArticleDOI
TL;DR: This survey provides a thorough review of techniques for manipulating face images including DeepFake methods, and methods to detect such manipulations, with special attention to the latest generation of DeepFakes.

502 citations


Journal ArticleDOI
TL;DR: A smart healthcare system for heart disease prediction is proposed using ensemble deep learning and feature fusion approaches; it obtains an accuracy of 98.5%, higher than existing systems.

379 citations
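
As an illustration of the feature fusion and ensemble idea described above (not the authors' implementation), the following sketch concatenates two hypothetical feature sources and combines several small neural networks by soft voting; all data, shapes and hyperparameters are invented.

```python
# Hypothetical sketch: fuse two feature sources by concatenation, then
# combine several small neural networks by soft voting.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
sensor_feats = rng.normal(size=(500, 12))    # wearable-sensor features (made up)
record_feats = rng.normal(size=(500, 8))     # health-record features (made up)
y = rng.integers(0, 2, size=500)             # heart-disease label (made up)

X = np.hstack([sensor_feats, record_feats])  # feature-level fusion by concatenation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[(f"mlp{i}", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                                          random_state=i)) for i in range(3)],
    voting="soft",                           # average predicted probabilities
)
ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", ensemble.score(X_te, y_te))
```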


Journal ArticleDOI
TL;DR: This paper offers a detailed introduction to the background of data fusion and machine learning in terms of definitions, applications, architectures, processes, and typical techniques, and proposes a number of requirements to review and evaluate the performance of existing fusion methods based on machine learning.

309 citations


Journal ArticleDOI
TL;DR: The emotion recognition methods based on multi-channel EEG signals as well as multi-modal physiological signals are reviewed and the correlation between different brain areas and emotions is discussed.

281 citations


Journal ArticleDOI
TL;DR: This work proposes a novel unsupervised framework for pan-sharpening based on a generative adversarial network, termed Pan-GAN, which does not rely on so-called ground truth during network training and shows promising performance in terms of qualitative visual effects and quantitative evaluation metrics.

261 citations


Journal ArticleDOI
TL;DR: This paper proposes an end-to-end model for infrared and visible image fusion based on detail-preserving adversarial learning, which is able to overcome the limitations of the manual and complicated design of activity-level measurement and fusion rules in traditional fusion methods.

251 citations


Journal ArticleDOI
Qihang Yao1, Ruxin Wang1, Xiaomao Fan1, Jikui Liu1, Ye Li1 
TL;DR: This work proposes an attention-based time-incremental convolutional neural network (ATI-CNN), a deep neural network model that achieves both spatial and temporal fusion of information from ECG signals by integrating a CNN, recurrent cells and an attention module, reaching an overall classification accuracy of 81.2%.

242 citations
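
A minimal PyTorch sketch of the CNN + recurrent + attention pattern summarized above; the layer sizes, lead count and class count are placeholders, not the published ATI-CNN architecture.

```python
# Sketch (not the authors' code): a 1-D CNN extracts local ECG features, a GRU
# fuses them over time, and a learned attention weighting pools the sequence.
import torch
import torch.nn as nn

class CnnRnnAttention(nn.Module):
    def __init__(self, n_leads=12, n_classes=9, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_leads, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(),
        )
        self.rnn = nn.GRU(64, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)      # scores each time step
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, ecg):                   # ecg: (batch, leads, samples)
        feats = self.cnn(ecg).transpose(1, 2) # (batch, time, channels)
        seq, _ = self.rnn(feats)              # temporal fusion
        weights = torch.softmax(self.attn(seq), dim=1)
        pooled = (weights * seq).sum(dim=1)   # attention-weighted summary
        return self.head(pooled)

model = CnnRnnAttention()
logits = model(torch.randn(4, 12, 3000))      # 4 synthetic 12-lead ECG segments
print(logits.shape)                           # torch.Size([4, 9])
```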


Journal ArticleDOI
TL;DR: This paper presents state-of-the-art dimensionality reduction techniques, their suitability for different types of data and application areas, and the issues of dimensionality reduction techniques that can affect the accuracy and relevance of results.

212 citations
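
For readers new to the topic, here is a tiny example of two dimensionality reduction techniques commonly covered by such surveys (linear PCA and non-linear t-SNE), run on synthetic data; it does not reproduce the paper's experiments.

```python
# Project 50-dimensional synthetic data to 2 dimensions with PCA and t-SNE.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X = np.random.default_rng(0).normal(size=(300, 50))  # 300 samples, 50 features

X_pca = PCA(n_components=2).fit_transform(X)          # variance-preserving linear projection
X_tsne = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

print(X_pca.shape, X_tsne.shape)                      # (300, 2) (300, 2)
```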


Journal ArticleDOI
TL;DR: This article focuses on classifying and comparing some of the significant works in the field of denoising and explains why some methods work optimally and others tend to create artefacts and remove fine structural details under general conditions.

Journal ArticleDOI
TL;DR: The results indicate that both the simple integration model and the embedded integration model can greatly improve the recognition ability for the minority financial distress samples, and the embedded integration model is preferable because it also significantly outperforms the simple integration model.

Journal ArticleDOI
TL;DR: A data fusion enabled ensemble approach is proposed to work with medical data obtained from body sensor networks (BSNs) in a fog computing environment; the results are promising, reaching 98% accuracy when the tree depth is 15, the number of estimators is 40, and 8 features are considered for the prediction task.
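
A rough scikit-learn sketch of the reported configuration (tree depth 15, 40 estimators, 8 selected features); the data are synthetic stand-ins for the fused body sensor features, not the paper's dataset.

```python
# Select 8 features, then fit a 40-tree forest with depth 15 on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))              # fused body-sensor features (made up)
y = rng.integers(0, 2, size=1000)            # health-event label (made up)

model = make_pipeline(
    SelectKBest(f_classif, k=8),             # keep the 8 most informative features
    RandomForestClassifier(n_estimators=40, max_depth=15, random_state=0),
)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```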

Journal ArticleDOI
TL;DR: This paper analyzes the origin and basic research paradigm of the feedback mechanism with minimum adjustment or cost (FMMA/C), which has been developed and widely used in various group decision making contexts to improve consensus efficiency.

Journal ArticleDOI
TL;DR: Overall, multi-modal fusion shows significant benefits in clinical diagnosis and neuroscience research; widespread education and further research amongst engineers, researchers and clinicians will benefit the field of multimodal neuroimaging.

Journal ArticleDOI
TL;DR: The performance of 14 different bagging- and boosting-based ensembles, including XGBoost, LightGBM and Random Forest, is empirically analyzed in terms of predictive capability and efficiency.
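
A toy cross-validation loop in the spirit of such a benchmark; only scikit-learn estimators are shown here, but xgboost.XGBClassifier and lightgbm.LGBMClassifier expose the same fit/predict interface and drop into the same loop.

```python
# Compare bagging- and boosting-style ensembles on one synthetic dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier, BaggingClassifier,
                              GradientBoostingClassifier)
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

candidates = {
    "bagging": BaggingClassifier(n_estimators=100, random_state=0),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}
for name, clf in candidates.items():
    scores = cross_val_score(clf, X, y, cv=5)     # 5-fold accuracy
    print(f"{name:18s} accuracy = {scores.mean():.3f}")
```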

Journal ArticleDOI
TL;DR: A clear definition and characterization of large-scale decision making (LSDM) events are proposed as a basis for this emerging family of decision frameworks, and a taxonomy and an overview of LSDM models, predicated on their key elements, are presented.

Journal ArticleDOI
TL;DR: The results show that the feature fusion methods, although time consuming, can provide superior classification accuracy compared to other methods.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed an environment-fusion multipath routing protocol (EFMRP) to provide a sustainable message forwarding service in harsh environments, where routing decisions are made according to a mixed potential field in terms of depth, residual energy and environment.
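
A hypothetical illustration of the mixed-potential-field idea: each candidate next hop is scored by a weighted combination of depth progress, residual energy and an environment-quality term. The weights and scoring function below are invented for the sketch and are not the protocol's actual formulas.

```python
# Toy next-hop selection driven by a mixed potential field.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    depth: float        # metres below surface (smaller is closer to the sink)
    energy: float       # residual energy, normalised to [0, 1]
    env_quality: float  # channel/environment quality, normalised to [0, 1]

def next_hop(current: Node, neighbours: list, w_depth=0.5, w_energy=0.3, w_env=0.2) -> Node:
    """Pick the neighbour with the highest mixed potential."""
    def potential(n: Node) -> float:
        progress = (current.depth - n.depth) / max(current.depth, 1e-9)
        return w_depth * progress + w_energy * n.energy + w_env * n.env_quality
    return max(neighbours, key=potential)

src = Node(0, depth=120.0, energy=0.9, env_quality=0.6)
candidates = [Node(1, 80.0, 0.4, 0.8), Node(2, 60.0, 0.7, 0.3), Node(3, 100.0, 0.9, 0.9)]
print("chosen next hop:", next_hop(src, candidates).node_id)
```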

Journal ArticleDOI
TL;DR: A comprehensive overview of existing multi-focus image fusion methods is presented and a new taxonomy is introduced to classify existing methods into four main categories: transform domain methods, spatial domain methods, methods combining transform domain and spatial domain, and deep learning methods.

Journal ArticleDOI
TL;DR: This study aimed to construct a novel multimodal model by fusing different electroencephalogram (EEG) data sources, which were under neutral, negative and positive audio stimulation, to discriminate between depressed patients and normal controls, and the results were encouraging.

Journal ArticleDOI
TL;DR: Experiments demonstrate that the proposed Two-stream Fusion Network (TFNet) can fuse PAN and MS images effectively and produce pan-sharpened images that are competitive with, or even superior to, the state of the art.
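
A schematic PyTorch two-stream network illustrating the fusion pattern described above: separate encoders for the PAN and upsampled MS images, feature concatenation, and a decoder that reconstructs the sharpened image. Layer sizes are arbitrary and this is not the published TFNet.

```python
# Two-stream feature-level fusion for pan-sharpening (illustrative only).
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    def __init__(self, ms_bands=4):
        super().__init__()
        self.pan_stream = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.ms_stream = nn.Sequential(nn.Conv2d(ms_bands, 32, 3, padding=1), nn.ReLU(),
                                       nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(32, ms_bands, 3, padding=1))

    def forward(self, pan, ms_upsampled):
        fused = torch.cat([self.pan_stream(pan), self.ms_stream(ms_upsampled)], dim=1)
        return self.decoder(fused)            # pan-sharpened multispectral image

net = TwoStreamFusion()
out = net(torch.randn(1, 1, 256, 256), torch.randn(1, 4, 256, 256))
print(out.shape)                              # torch.Size([1, 4, 256, 256])
```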

Journal ArticleDOI
Jia Liu1, Tianrui Li1, Peng Xie1, Shengdong Du1, Fei Teng1, Xin Yang1 
TL;DR: To clarify the methodologies of urban big data fusion based on deep learning (DL), this paper classifies them into three categories: DL-output-based fusion, DL-input-based fusion and DL-double-stage-based fusion.

Journal ArticleDOI
TL;DR: A general panorama of the state of the art of the Choquet integral generalizations is offered, showing the relations and intersections among the five classes of generalizations.
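
For context, here is a worked discrete Choquet integral (the base aggregation that these generalizations extend); the two-criteria fuzzy measure used here is invented for the example.

```python
# Discrete Choquet integral of a score vector w.r.t. a fuzzy measure.
def choquet(x, mu):
    """mu maps frozensets of criterion indices to values in [0, 1]."""
    order = sorted(range(len(x)), key=lambda i: x[i])   # ascending permutation
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        coalition = frozenset(order[k:])                # criteria still at or above x[i]
        total += (x[i] - prev) * mu[coalition]
        prev = x[i]
    return total

mu = {frozenset(): 0.0, frozenset({0}): 0.3, frozenset({1}): 0.6, frozenset({0, 1}): 1.0}
print(choquet([0.8, 0.4], mu))   # 0.4*1.0 + (0.8-0.4)*0.3 = 0.52
```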

Journal ArticleDOI
TL;DR: This review article focuses on the literature of empathetic dialogue systems, whose goal is to enhance the perception and expression of emotional states, personal preference, and knowledge, and identifies three key features that underpin such systems: emotion-awareness, personality-awareness and knowledge-accessibility.

Journal ArticleDOI
TL;DR: This work proposes the definition of four different dimensions, namely Pattern & Knowledge discovery, Information Fusion & Integration, Scalability, and Visualization, which are used to define a set of new metrics (termed degrees) in order to evaluate the different software tools and frameworks for social network analysis (SNA).

Journal ArticleDOI
TL;DR: In this paper, Wang et al. introduce four main factors affecting urban flow and organize the preparation of multi-source spatiotemporal data related to urban flow, including mobile phone data, taxi trajectory data, metro/bus card-swiping data, and bike-sharing data.

Journal ArticleDOI
TL;DR: A comprehensive review of the state-of-the-art solutions in the domain of distributed estimation over a low-cost sensor network, exploring their characteristics, advantages, and challenging issues is presented.
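
A tiny consensus sketch of the distributed-estimation idea such reviews cover: each low-cost sensor holds a noisy local estimate and repeatedly averages with its neighbours until all nodes agree on a common value, with no fusion centre. The topology and noise are invented.

```python
# Synchronous average-consensus over a small, fixed sensor network.
import numpy as np

rng = np.random.default_rng(0)
true_value = 10.0
estimates = true_value + rng.normal(scale=1.0, size=6)   # one noisy reading per node
neighbours = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}

for _ in range(50):                                       # consensus iterations
    new = estimates.copy()
    for i, nbrs in neighbours.items():
        new[i] = np.mean([estimates[i]] + [estimates[j] for j in nbrs])
    estimates = new

print("node estimates after consensus:", np.round(estimates, 3))
```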

Journal ArticleDOI
TL;DR: This work proposes a body sensor-based system for behavior recognition using a deep Recurrent Neural Network (RNN), a promising deep learning algorithm based on sequential information, which outperforms the available state-of-the-art methods.
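
A minimal PyTorch stand-in for such a recurrent classifier over multichannel body sensor streams; the channel count, sequence length and activity classes are placeholders, not the paper's setup.

```python
# An LSTM reads the multichannel time series; its last hidden state is classified.
import torch
import torch.nn as nn

class SensorRNN(nn.Module):
    def __init__(self, channels=9, hidden=64, n_activities=6):
        super().__init__()
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_activities)

    def forward(self, x):            # x: (batch, time, channels)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])      # classify from the final hidden state

model = SensorRNN()
print(model(torch.randn(8, 200, 9)).shape)   # torch.Size([8, 6])
```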

Journal ArticleDOI
TL;DR: In this article, a deep neural network architecture for human activity recognition based on multiple sensor data is proposed, which encodes the time series of sensor data as images and leverages these transformed images to retain the necessary features for activity recognition.
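
One common recipe for turning a sensor time series into an image is a Gramian angular field; the sketch below shows that encoding as an illustration of the signal-to-image idea, though the paper's exact transformation may differ.

```python
# Gramian angular summation field: rescale the signal, map samples to polar
# angles, and form a pairwise cosine matrix that a 2-D CNN can consume.
import numpy as np

def gasf(signal):
    x = np.asarray(signal, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1     # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1, 1))                  # polar-angle encoding
    return np.cos(phi[:, None] + phi[None, :])          # (len, len) "image"

accel_z = np.sin(np.linspace(0, 6 * np.pi, 128))        # synthetic accelerometer axis
image = gasf(accel_z)
print(image.shape)                                       # (128, 128)
```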