Author

Chang Li

Bio: Chang Li is an academic researcher at Hefei University of Technology. The author has contributed to research on topics including hyperspectral imaging and image fusion, has an h-index of 19, and has co-authored 65 publications that have received 2,084 citations. Previous affiliations of Chang Li include Wuhan University and Huazhong University of Science and Technology.

Papers published on a yearly basis

Papers
Journal ArticleDOI
TL;DR: This paper proposes a novel method to fuse infrared and visible images using a generative adversarial network, termed FusionGAN, which establishes an adversarial game between a generator and a discriminator: the generator aims to produce a fused image that keeps the major infrared intensities together with additional visible gradients (a loss sketch follows below).

853 citations
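
For orientation, here is a minimal PyTorch sketch of the kind of generator content loss the summary above describes: the fused image is pulled toward the infrared intensities while its gradients are matched to those of the visible image. This is not the authors' code; the gradient operator, the function names, and the weight `lam` are illustrative assumptions, and the adversarial term is omitted.

```python
# Sketch only: a FusionGAN-style generator content loss (adversarial loss omitted).
import torch
import torch.nn.functional as F

def gradient(img):
    # Fixed Laplacian kernel as a stand-in gradient operator (assumption).
    kernel = torch.tensor([[0., 1., 0.],
                           [1., -4., 1.],
                           [0., 1., 0.]]).view(1, 1, 3, 3)
    return F.conv2d(img, kernel, padding=1)

def content_loss(fused, ir, vis, lam=5.0):
    intensity_term = F.mse_loss(fused, ir)                       # keep infrared intensities
    gradient_term = F.mse_loss(gradient(fused), gradient(vis))   # keep visible gradients
    return intensity_term + lam * gradient_term                  # lam: illustrative weight

# Usage with random single-channel images (batch, channel, H, W).
fused = torch.rand(1, 1, 64, 64, requires_grad=True)
ir, vis = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
content_loss(fused, ir, vis).backward()
```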

Journal ArticleDOI
Jiayi Ma, Yong Ma, Chang Li
TL;DR: This paper comprehensively surveys the existing methods and applications for the fusion of infrared and visible images, and can serve as a reference for researchers in infrared and visible image fusion and related fields.

849 citations

Journal ArticleDOI
TL;DR: A novel fusion algorithm named Gradient Transfer Fusion (GTF), based on gradient transfer and total variation (TV) minimization, is proposed; it keeps both the thermal radiation and the appearance information of the source images (a hedged formulation follows below).

729 citations
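
Read as an optimization problem, the GTF idea above can be written roughly as follows, under assumed notation (infrared image $u$, visible image $v$, fused image $x$, trade-off weight $\lambda$); this is a hedged paraphrase of the TL;DR, not necessarily the paper's exact formulation:

$$\min_{x}\ \|x - u\|_{1} \;+\; \lambda\,\|\nabla x - \nabla v\|_{1}$$

The first term preserves the thermal radiation (pixel intensities) of the infrared image, while the second transfers the gradients of the visible image, i.e. a total-variation-style penalty on $x - v$.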

Journal ArticleDOI
TL;DR: In this paper, an attention-based convolutional recurrent neural network (ACRNN) is proposed to extract more discriminative features from EEG signals and improve the accuracy of emotion recognition (a sketch of the channel-wise attention step follows below).
Abstract: Emotion recognition based on electroencephalography (EEG) is a significant task in the brain-computer interface field. Recently, many deep learning-based emotion recognition methods have been demonstrated to outperform traditional methods. However, it remains challenging to extract discriminative features for EEG emotion recognition, and most methods ignore useful information across channels and time. This paper proposes an attention-based convolutional recurrent neural network (ACRNN) to extract more discriminative features from EEG signals and improve the accuracy of emotion recognition. First, the proposed ACRNN adopts a channel-wise attention mechanism to adaptively assign weights to different channels, and a CNN is employed to extract the spatial information of the encoded EEG signals. Then, to explore the temporal information of the EEG signals, extended self-attention is integrated into an RNN to recode the importance based on intrinsic similarity in the EEG signals. We conducted extensive experiments on the DEAP and DREAMER databases. The experimental results demonstrate that the proposed ACRNN outperforms state-of-the-art methods.

166 citations
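
As a rough illustration of the channel-wise attention step described in the abstract, the following PyTorch sketch re-weights EEG channels with softmax-normalized weights before any spatial convolution. The mean-pooled channel summary and the layer sizes are assumptions for illustration, not details taken from the paper.

```python
# Sketch only: channel-wise attention over EEG channels (ACRNN-style gating).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, n_channels, hidden=16):
        super().__init__()
        # Small MLP that maps a per-channel summary to per-channel scores.
        self.fc = nn.Sequential(
            nn.Linear(n_channels, hidden),
            nn.Tanh(),
            nn.Linear(hidden, n_channels),
        )

    def forward(self, x):
        # x: (batch, channels, time) raw EEG segment
        summary = x.mean(dim=-1)                            # per-channel summary statistic
        weights = torch.softmax(self.fc(summary), dim=-1)   # adaptive channel weights
        return x * weights.unsqueeze(-1)                    # re-weighted channels

# Usage: batch of 8 segments, 32-channel EEG, 384 time samples.
x = torch.randn(8, 32, 384)
print(ChannelAttention(n_channels=32)(x).shape)  # torch.Size([8, 32, 384])
```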

Journal ArticleDOI
TL;DR: Both simulated and real data experiments demonstrate that the proposed SSTV-LRTF method achieves superior performance for HSI mixed-noise removal compared to state-of-the-art TV regularized and LR-based methods (a hedged formulation of the model follows below).
Abstract: Several bandwise total variation (TV) regularized low-rank (LR)-based models have been proposed to remove mixed noise in hyperspectral images (HSIs). These methods convert high-dimensional HSI data into 2-D data based on LR matrix factorization, a strategy that discards useful multiway structure information. Moreover, these bandwise TV-based methods exploit the spatial information of each band separately. To cope with these problems, we propose a spatial–spectral TV regularized LR tensor factorization (SSTV-LRTF) method to remove mixed noise in HSIs. On the one hand, the hyperspectral data are assumed to lie in an LR tensor, which exploits the inherent tensorial structure of hyperspectral data; the LRTF-based method can effectively separate the LR clean image from sparse noise. On the other hand, HSIs are assumed to be piecewise smooth in the spatial domain; TV regularization is effective in preserving spatial piecewise smoothness and removing Gaussian noise. These facts motivate integrating LRTF with TV regularization. To address the limitations of bandwise TV, we use SSTV regularization to simultaneously consider the local spatial structure and the spectral correlation of neighboring bands. Both simulated and real data experiments demonstrate that the proposed SSTV-LRTF method achieves superior performance for HSI mixed-noise removal compared to state-of-the-art TV regularized and LR-based methods.

144 citations
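
One plausible way to write the SSTV-LRTF model described above, under assumed notation (noisy HSI tensor $\mathcal{Y}$, low-rank clean tensor $\mathcal{L}$, sparse noise $\mathcal{S}$, Gaussian-noise bound $\varepsilon$, weights $\lambda$ and $\tau$), is

$$\min_{\mathcal{L},\,\mathcal{S}}\ \operatorname{rank}(\mathcal{L}) \;+\; \lambda\,\|\mathcal{L}\|_{\mathrm{SSTV}} \;+\; \tau\,\|\mathcal{S}\|_{1} \quad \text{s.t.} \quad \|\mathcal{Y} - \mathcal{L} - \mathcal{S}\|_{F} \le \varepsilon,$$

where $\|\mathcal{L}\|_{\mathrm{SSTV}}$ sums absolute first-order differences along the two spatial directions and the spectral direction, so that local spatial structure and the correlation of neighboring bands are penalized jointly. The paper's exact rank surrogate and constraint form may differ; this is only a reconstruction from the abstract.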


Cited by
Journal ArticleDOI
Alan R. Jones

1,349 citations

Journal ArticleDOI
TL;DR: This paper proposes a novel method to fuse infrared and visible images using a generative adversarial network, termed FusionGAN, which establishes an adversarial game between a generator and a discriminator: the generator aims to produce a fused image that keeps the major infrared intensities together with additional visible gradients.

853 citations

Journal ArticleDOI
Jiayi Ma, Yong Ma, Chang Li
TL;DR: This paper comprehensively surveys the existing methods and applications for the fusion of infrared and visible images, and can serve as a reference for researchers in infrared and visible image fusion and related fields.

849 citations

Journal ArticleDOI
Hui Li, Xiaojun Wu
TL;DR: A novel deep learning architecture for infrared and visible image fusion is presented, in which the encoding network combines convolutional layers, a fusion layer, and a dense block whose layer outputs are connected to every other layer (a sketch follows below).
Abstract: In this paper, we present a novel deep learning architecture for infrared and visible image fusion. In contrast to conventional convolutional networks, our encoding network combines convolutional layers, a fusion layer, and a dense block in which the output of each layer is connected to every other layer. We use this architecture to obtain more useful features from the source images in the encoding process, and two fusion layers (fusion strategies) are designed to fuse these features. Finally, the fused image is reconstructed by a decoder. Compared with existing fusion methods, the proposed method achieves state-of-the-art performance in both objective and subjective assessment.

703 citations
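
The encoder–fusion–decoder pipeline described above can be pictured with the following minimal PyTorch sketch (an illustration under assumptions, not the published DenseFuse code): a dense block concatenates every layer's output, the two encoded feature maps are fused by element-wise addition (one simple fusion strategy), and a small decoder reconstructs the fused image. Channel counts are illustrative.

```python
# Sketch only: dense-block encoder, addition fusion, and decoder.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_ch=16, growth=16, layers=3):
        super().__init__()
        self.convs = nn.ModuleList()
        ch = in_ch
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=1), nn.ReLU()))
            ch += growth  # each layer sees the concatenation of all previous outputs

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)  # 16 + 3*16 = 64 channels here

class FusionSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), DenseBlock())
        self.decode = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, ir, vis):
        fused = self.encode(ir) + self.encode(vis)  # element-wise addition fusion
        return self.decode(fused)

# Usage with random single-channel inputs.
out = FusionSketch()(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
print(out.shape)  # torch.Size([1, 1, 64, 64])
```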

Journal ArticleDOI
TL;DR: This survey presents a systematic review of the DL-based pixel-level image fusion literature, summarizes the main difficulties in conventional image fusion research, and discusses the advantages that DL can offer in addressing each of these problems.

493 citations