Author

Fan Fan

Bio: Fan Fan is an academic researcher from Wuhan University. The author has contributed to research topics including computer science and image fusion, has an h-index of 12, and has co-authored 57 publications receiving 794 citations. Previous affiliations of Fan Fan include Huazhong University of Science and Technology.


Papers
Journal ArticleDOI
TL;DR: A robust IR small target detection algorithm based on HVS is proposed to pursue good performance in detection rate, false alarm rate, and speed simultaneously.
Abstract: Robust human visual system (HVS) properties can effectively improve infrared (IR) small target detection capabilities such as detection rate, false alarm rate, and speed. However, current HVS-based algorithms usually improve one or two of these capabilities while sacrificing the others. In this letter, a robust HVS-based IR small target detection algorithm is proposed to pursue good performance in detection rate, false alarm rate, and speed simultaneously. First, an HVS size-adaptation process is used, and the preprocessed IR image is divided into subblocks to improve detection speed. Then, based on the HVS contrast mechanism, an improved local contrast measure, which can improve the detection rate and reduce the false alarm rate, is proposed to calculate the saliency map, and a threshold operation along with a rapid traversal mechanism based on the HVS attention-shift mechanism is used to obtain the target subblocks quickly. Experimental results show that the proposed algorithm has good robustness and efficiency in real IR small target detection applications.
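For intuition, here is a minimal sketch of a local-contrast saliency detector in this spirit: the image is scanned cell by cell, each cell's mean is contrasted against its brightest neighbour, and the resulting map is thresholded. The cell size, the contrast formula, and the mean-plus-k-sigma threshold are illustrative stand-ins, not the letter's exact improved local contrast measure.

```python
import numpy as np

def local_contrast_saliency(img, cell=9):
    """Simplified local-contrast saliency map (not the letter's exact measure).

    A 3x3 grid of cells slides over the image; the contrast of the central
    cell against its brightest neighbour approximates how much a small
    bright target stands out from the local background.
    """
    h, w = img.shape
    sal = np.zeros((h, w), dtype=np.float64)
    for y in range(cell, h - 2 * cell):
        for x in range(cell, w - 2 * cell):
            center = img[y:y + cell, x:x + cell].mean()
            neigh = []
            for dy in (-cell, 0, cell):
                for dx in (-cell, 0, cell):
                    if dy == 0 and dx == 0:
                        continue
                    neigh.append(img[y + dy:y + dy + cell,
                                     x + dx:x + dx + cell].mean())
            # A cell is salient only when brighter than ALL its neighbours.
            sal[y, x] = max(center * (center - max(neigh)), 0.0)
    return sal

def detect(img, k=4.0):
    """Threshold the saliency map at mean + k * std (a common HVS-style rule)."""
    sal = local_contrast_saliency(img.astype(np.float64))
    return sal > sal.mean() + k * sal.std()
```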

324 citations

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed spectral-spatial attention network for hyperspectral image classification can fully utilize the spectral and spatial information to obtain competitive performance.
Abstract: Hyperspectral image classification distinguishes land covers by exploiting their abundant spectral and spatial information. Many deep learning models, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have been successfully applied to extract deep features for hyperspectral tasks. Motivated by the attention mechanism of the human visual system, in this study we propose a spectral-spatial attention network for hyperspectral image classification. In our method, an RNN with attention learns inner spectral correlations within a continuous spectrum, while a CNN with attention is designed to focus on saliency features and spatial relevance between neighboring pixels in the spatial dimension. Experimental results demonstrate that our method can fully utilize the spectral and spatial information to obtain competitive performance.
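A minimal sketch of the two-branch idea follows, assuming PyTorch and hypothetical layer sizes: a GRU with attention pooling over the centre pixel's spectrum, and a CNN whose feature maps are pooled by a learned spatial attention map. It illustrates the architecture's shape, not the authors' released model.

```python
import torch
import torch.nn as nn

class SpectralSpatialAttentionNet(nn.Module):
    """Sketch of a spectral-spatial attention classifier (illustrative sizes)."""

    def __init__(self, bands, n_classes, hidden=64):
        super().__init__()
        # Spectral branch: RNN over the band sequence + attention pooling.
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.spec_attn = nn.Linear(hidden, 1)
        # Spatial branch: CNN on the patch + a 1-channel spatial attention map.
        self.conv = nn.Conv2d(bands, hidden, kernel_size=3, padding=1)
        self.spat_attn = nn.Conv2d(hidden, 1, kernel_size=1)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, patch):                        # patch: (B, bands, H, W)
        b, c, h, w = patch.shape
        # Spectral stream on the centre pixel's spectrum.
        spec = patch[:, :, h // 2, w // 2].unsqueeze(-1)   # (B, bands, 1)
        out, _ = self.rnn(spec)                            # (B, bands, hidden)
        a = torch.softmax(self.spec_attn(out), dim=1)      # per-band weights
        spec_feat = (a * out).sum(dim=1)                   # (B, hidden)
        # Spatial stream with attention-weighted pooling.
        f = torch.relu(self.conv(patch))                   # (B, hidden, H, W)
        s = torch.softmax(self.spat_attn(f).flatten(2), dim=-1).view(b, 1, h, w)
        spat_feat = (f * s).sum(dim=(2, 3))                # (B, hidden)
        return self.fc(torch.cat([spec_feat, spat_feat], dim=1))
```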

163 citations

Journal ArticleDOI
TL;DR: This work proposes a multi-scale decomposition image fusion method based on a local edge-preserving (LEP) filter and saliency detection to retain the details of a visible image with a discernible target area.
Abstract: To retain the details of a visible image while keeping a discernible target area, we propose a multi-scale decomposition image fusion method based on a local edge-preserving (LEP) filter and saliency detection. We first use an LEP filter to decompose the infrared and visible images. Then, a modified saliency detection method is utilized to detect the salient target areas of the infrared image, which determine the base-layer weights of the fusion strategy. Finally, each layer is reconstructed to obtain a visually pleasing fused image. Comparison with 11 other state-of-the-art methods reveals the superiority of the proposed method in terms of both qualitative and quantitative results.
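The decompose-weight-reconstruct pipeline can be sketched in a few lines. In the sketch below a box filter stands in for the LEP filter and a distance-from-mean map stands in for the modified saliency detector; both substitutions are ours, chosen only to make the two-scale structure concrete.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_ir_vis(ir, vis, win=31):
    """Two-scale fusion with saliency-weighted base layers (illustrative)."""
    ir, vis = ir.astype(np.float64), vis.astype(np.float64)
    # 1. Decompose each image into a base (smooth) and a detail layer.
    ir_base, vis_base = uniform_filter(ir, win), uniform_filter(vis, win)
    ir_det, vis_det = ir - ir_base, vis - vis_base
    # 2. Saliency of the infrared image drives the base-layer weights,
    #    so thermally salient targets dominate the fused base.
    sal = np.abs(ir - ir.mean())
    w = sal / (sal.max() + 1e-12)
    base = w * ir_base + (1.0 - w) * vis_base
    # 3. Keep the stronger detail coefficient at each pixel, then reconstruct.
    det = np.where(np.abs(ir_det) > np.abs(vis_det), ir_det, vis_det)
    return np.clip(base + det, 0, 255)    # assumes 8-bit intensity range
```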

129 citations

Journal ArticleDOI
TL;DR: This paper integrates superpixel segmentation (SS) into LRR and proposes a novel denoising method called SSLRR, which excavates the spatial-spectral information of HSI by combining PCA with SS and outperforms simply dividing the HSI into square patches.
Abstract: Recently, low-rank representation (LRR) based hyperspectral image (HSI) restoration has proven to be a powerful tool for simultaneously removing different types of noise, such as Gaussian noise, dead pixels, and impulse noise. However, LRR-based methods adopt only a square-patch denoising strategy, which prevents them from exploiting the spatial information in an HSI. This paper integrates superpixel segmentation (SS) into LRR and proposes a novel denoising method called SSLRR. First, principal component analysis (PCA) is adopted to obtain the first principal component of the HSI. Then SS is applied to the first principal component to obtain homogeneous regions. Since we excavate the spatial-spectral information of the HSI by combining PCA with SS, this is better than simply dividing the HSI into square patches. Finally, we apply LRR to each homogeneous region of the HSI, which enables us to remove all the aforementioned types of noise simultaneously. Extensive experiments conducted on synthetic and real HSIs indicate that SSLRR is effective for HSI denoising.
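A rough sketch of the pipeline, assuming NumPy and scikit-image are available: PCA via SVD yields the first principal component, SLIC provides the superpixels, and singular-value soft-thresholding acts as a simple low-rank surrogate for the paper's actual LRR solver, which is more involved.

```python
import numpy as np
from skimage.segmentation import slic

def sslrr_denoise(hsi, n_segments=200, tau=0.1):
    """Sketch of the SSLRR pipeline with a simple low-rank surrogate."""
    h, w, bands = hsi.shape
    X = hsi.reshape(-1, bands).astype(np.float64)
    # 1. PCA via SVD; the first principal component summarises the scene.
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    pc1 = (Xc @ vt[0]).reshape(h, w)
    # 2. Superpixel segmentation of the first PC gives homogeneous regions.
    pc1 = (pc1 - pc1.min()) / (pc1.max() - pc1.min() + 1e-12)
    labels = slic(pc1, n_segments=n_segments, compactness=0.1,
                  channel_axis=None).reshape(-1)
    # 3. Low-rank recovery inside each homogeneous region: soft-threshold
    #    the singular values of the region's pixel-by-band matrix.
    out = np.empty_like(X)
    for r in np.unique(labels):
        idx = labels == r
        u, s, v = np.linalg.svd(X[idx], full_matrices=False)
        s = np.maximum(s - tau * s[0], 0.0)
        out[idx] = (u * s) @ v
    return out.reshape(h, w, bands)
```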

120 citations

Journal ArticleDOI
TL;DR: An attention-guided cross-domain module is devised to achieve sufficient integration of complementary information and global interaction, and an elaborate loss function, consisting of SSIM loss, texture loss, and intensity loss, drives the network to preserve abundant texture details and structural information, as well as presenting optimal apparent intensity.
Abstract: This study proposes a novel general image fusion framework based on cross-domain long-range learning and the Swin Transformer, termed SwinFusion. On the one hand, an attention-guided cross-domain module is devised to achieve sufficient integration of complementary information and global interaction. More specifically, the proposed method involves an intra-domain fusion unit based on self-attention and an inter-domain fusion unit based on cross-attention, which mine and integrate long-range dependencies within the same domain and across domains. Through long-range dependency modeling, the network is able to fully implement domain-specific information extraction and cross-domain complementary information integration, as well as maintain an appropriate apparent intensity from a global perspective. In particular, we introduce the shifted-windows mechanism into the self-attention and cross-attention, which allows our model to receive images of arbitrary size. On the other hand, multi-scene image fusion problems are generalized to a unified framework with structure maintenance, detail preservation, and proper intensity control. Moreover, an elaborate loss function, consisting of an SSIM loss, a texture loss, and an intensity loss, drives the network to preserve abundant texture details and structural information, as well as to present an optimal apparent intensity. Extensive experiments on both multi-modal image fusion and digital photography image fusion demonstrate the superiority of our SwinFusion over state-of-the-art unified image fusion algorithms and task-specific alternatives. Implementation code and pre-trained weights can be accessed at https://github.com/Linfeng-Tang/SwinFusion.
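The three-term loss is easy to sketch. The PyTorch version below uses a single-window SSIM, Sobel gradients for the texture term, and max-based fusion targets; the exact targets and the weights w_ssim, w_text, and w_int are our illustrative assumptions, not the paper's tuned values.

```python
import torch
import torch.nn.functional as F

def sobel_grad(x):
    """Magnitude of Sobel gradients for (B, 1, H, W) images; a texture proxy."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=x.device).view(1, 1, 3, 3)
    gx = F.conv2d(x, kx, padding=1)
    gy = F.conv2d(x, kx.transpose(2, 3), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)

def ssim_global(a, b, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM over the whole image (a crude stand-in for the
    usual sliding-window SSIM)."""
    ma, mb = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - ma) * (b - mb)).mean()
    return ((2 * ma * mb + c1) * (2 * cov + c2)) / \
           ((ma ** 2 + mb ** 2 + c1) * (va + vb + c2))

def fusion_loss(fused, src_a, src_b, w_ssim=1.0, w_text=10.0, w_int=10.0):
    """SSIM + texture + intensity loss with max-based targets (illustrative)."""
    l_ssim = (1 - ssim_global(fused, src_a)) + (1 - ssim_global(fused, src_b))
    l_text = F.l1_loss(sobel_grad(fused),
                       torch.maximum(sobel_grad(src_a), sobel_grad(src_b)))
    l_int = F.l1_loss(fused, torch.maximum(src_a, src_b))
    return w_ssim * l_ssim + w_text * l_text + w_int * l_int
```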

112 citations


Cited by
Journal ArticleDOI
TL;DR: This paper proposes a novel method to fuse two types of information using a generative adversarial network, termed FusionGAN, which establishes an adversarial game between a generator and a discriminator, where the generator aims to generate a fused image with major infrared intensities together with additional visible gradients.
Abstract: Infrared images can distinguish targets from their backgrounds on the basis of differences in thermal radiation, which works well at all day/night times and under all weather conditions. By contrast, visible images can provide texture details with high spatial resolution and definition in a manner consistent with the human visual system. This paper proposes a novel method to fuse these two types of information using a generative adversarial network, termed FusionGAN. Our method establishes an adversarial game between a generator and a discriminator, where the generator aims to generate a fused image with major infrared intensities together with additional visible gradients, and the discriminator aims to force the fused image to have more of the details present in visible images. This ensures that the final fused image simultaneously keeps the thermal radiation of the infrared image and the textures of the visible image. In addition, our FusionGAN is an end-to-end model, avoiding the manual design of complicated activity-level measurements and fusion rules used in traditional methods. Experiments on public datasets demonstrate the superiority of our strategy over state-of-the-art methods; our results look like sharpened infrared images with clearly highlighted targets and abundant details. Moreover, we also generalize our FusionGAN to fuse images with different resolutions, say a low-resolution infrared image and a high-resolution visible image. Extensive results demonstrate that our strategy can generate clear and clean fused images that do not suffer from noise caused by upsampling of infrared information.
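The generator objective can be sketched as an adversarial term plus a content term that ties the fused image to infrared intensities and visible gradients. The weights lam and xi below are illustrative, not the paper's tuned values, and the gradient operator is a simple forward difference rather than the paper's choice.

```python
import torch
import torch.nn.functional as F

def gradient(x):
    """Forward-difference gradients, zero-padded to keep the image size."""
    dx = F.pad(x[..., :, 1:] - x[..., :, :-1], (0, 1))
    dy = F.pad(x[..., 1:, :] - x[..., :-1, :], (0, 0, 0, 1))
    return dx, dy

def generator_loss(fused, ir, vis, d_fake, lam=100.0, xi=5.0):
    """Generator objective in the spirit of FusionGAN: fool the discriminator
    while keeping infrared intensities and visible gradients."""
    # Adversarial term: fused images should score as "real" at the critic.
    adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    # Content term: stay close to IR intensities and visible-image gradients.
    fx, fy = gradient(fused)
    vx, vy = gradient(vis)
    content = F.mse_loss(fused, ir) + xi * (F.mse_loss(fx, vx) +
                                            F.mse_loss(fy, vy))
    return adv + lam * content
```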

853 citations

Journal ArticleDOI
Jiayi Ma, Yong Ma, Chang Li
TL;DR: This work comprehensively surveys the existing methods and applications for the fusion of infrared and visible images and can serve as a reference for researchers in infrared and visible image fusion and related fields.
Abstract: Infrared images can distinguish targets from their backgrounds based on the difference in radiation, which works well in all-weather and all-day/night conditions. By contrast, visible images can provide texture details with high spatial resolution and definition in a manner consistent with the human visual system. Therefore, it is desirable to fuse these two types of images, combining the advantages of the thermal radiation information in infrared images and the detailed texture information in visible images. In this work, we comprehensively survey the existing methods and applications for the fusion of infrared and visible images. First, infrared and visible image fusion methods are reviewed in detail; meanwhile, image registration, as a prerequisite of image fusion, is briefly introduced. Second, we provide an overview of the main applications of infrared and visible image fusion. Third, the evaluation metrics of fusion performance are discussed and summarized. Fourth, we select eighteen representative methods and nine assessment metrics to conduct qualitative and quantitative experiments, which can provide an objective performance reference for different fusion methods and thus support related engineering with credible and solid evidence. Finally, we conclude with the current status of infrared and visible image fusion and deliver insightful discussions and prospects for future work. This survey can serve as a reference for researchers in infrared and visible image fusion and related fields.

849 citations

Journal ArticleDOI
TL;DR: A comprehensive review of the current state of the art in DL for HSI classification is provided, analyzing the strengths and weaknesses of the most widely used classifiers in the literature and offering an exhaustive comparison of the discussed techniques.
Abstract: Advances in computing technology have fostered the development of new and powerful deep learning (DL) techniques, which have demonstrated promising results in a wide range of applications. In particular, DL methods have been successfully used to classify remotely sensed data collected by Earth Observation (EO) instruments. Hyperspectral imaging (HSI) is a hot topic in remote sensing data analysis due to the vast amount of information comprised by this kind of image, which allows for a better characterization and exploitation of the Earth's surface by combining rich spectral and spatial information. However, HSI poses major challenges for supervised classification methods due to the high dimensionality of the data and the limited availability of training samples. These issues, together with the high intraclass variability (and interclass similarity) often present in HSI data, may hamper the effectiveness of classifiers. To address these limitations, several DL-based architectures have recently been developed, exhibiting great potential in HSI data interpretation. This paper provides a comprehensive review of the current state of the art in DL for HSI classification, analyzing the strengths and weaknesses of the most widely used classifiers in the literature. For each discussed method, we provide quantitative results using several well-known and widely used HSI scenes, thus providing an exhaustive comparison of the discussed techniques. The paper concludes with some remarks and hints about future challenges in the application of DL techniques to HSI classification. The source code of the methods discussed in this paper is available at: https://github.com/mhaut/hyperspectral_deeplearning_review .

534 citations

Journal ArticleDOI
TL;DR: This survey introduces feature detection, description, and matching techniques, from handcrafted methods to trainable ones, provides an analysis of the development of these methods in theory and practice, and briefly introduces several typical image-matching applications.
Abstract: As a fundamental and critical task in various visual applications, image matching identifies and then corresponds the same or similar structures/content across two or more images. Over the past decades, a growing number and diversity of methods have been proposed for image matching, particularly with the development of deep learning techniques in recent years. However, open questions remain about which method is a suitable choice for specific applications with respect to different scenarios and task requirements, and about how to design better image matching methods with superior performance in accuracy, robustness, and efficiency. This encourages us to conduct a comprehensive and systematic review and analysis of these classical and latest techniques. Following the feature-based image matching pipeline, we first introduce feature detection, description, and matching techniques, from handcrafted methods to trainable ones, and provide an analysis of the development of these methods in theory and practice. Second, we briefly introduce several typical image matching-based applications for a comprehensive understanding of the significance of image matching. In addition, we provide a comprehensive and objective comparison of these classical and latest techniques through extensive experiments on representative datasets. Finally, we conclude with the current status of image matching technologies and deliver insightful discussions and prospects for future work. This survey can serve as a reference for (but not limited to) researchers and engineers in image matching and related fields.
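As a concrete instance of the detect-describe-match pipeline the survey organizes its review around, here is a short OpenCV example using ORB features, Hamming-distance kNN matching, and Lowe's ratio test; the parameter values are common defaults, not the survey's recommendations.

```python
import cv2

def match_features(img1, img2, ratio=0.75):
    """Classical handcrafted pipeline: detect (ORB), describe (binary
    descriptors), match (Hamming kNN + ratio test)."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)   # detection + description
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des1, des2, k=2)
    # Keep a match only when it is clearly better than the runner-up.
    good = [m for m, n in pairs if m.distance < ratio * n.distance]
    return kp1, kp2, good
```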

474 citations

Journal ArticleDOI
TL;DR: In this article, the authors provide a comprehensive survey of state-of-the-art deep learning research for remote sensing applications, focusing on theories, tools, and challenges for the remote sensing community.
Abstract: In recent years, deep learning (DL), a rebranding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, and natural language processing. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should not only be aware of advancements such as DL, but also be among the leading researchers in this area. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools, and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modeling physical phenomena, (iii) big data, (iv) nontraditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial, and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL models.

467 citations