Author
Jun Huang
Other affiliations: Huazhong University of Science and Technology
Bio: Jun Huang is an academic researcher from Wuhan University. The author has contributed to research in topics: Computer science & Hyperspectral imaging. The author has an h-index of 17, co-authored 49 publications receiving 1195 citations. Previous affiliations of Jun Huang include Huazhong University of Science and Technology.
Papers
TL;DR: A novel fusion algorithm, named Gradient Transfer Fusion (GTF), based on gradient transfer and total variation (TV) minimization is proposed, which can keep both the thermal radiation and the appearance information in the source images.
Abstract: We propose a new IR/visible fusion method based on gradient transfer and TV minimization. It can keep both the thermal radiation and the appearance information in the source images. We generalize the proposed method to fuse image pairs without pre-registration. Our fusion results look like sharpened IR images with highlighted targets and abundant textures. To the best of our knowledge, the proposed fusion strategy has not yet been studied. In image fusion, the most desirable information is obtained from multiple images of the same scene and merged to generate a composite image. The resulting image is more appropriate for human visual perception and further image-processing tasks. Existing methods typically use the same representations and extract similar characteristics from different source images during the fusion process. However, this may not be appropriate for infrared and visible images, as the thermal radiation in infrared images and the appearance in visible images are manifestations of two different phenomena. To keep the thermal radiation and appearance information simultaneously, in this paper we propose a novel fusion algorithm, named Gradient Transfer Fusion (GTF), based on gradient transfer and total variation (TV) minimization. We formulate the fusion problem as an ℓ1-TV minimization problem, where the data fidelity term keeps the main intensity distribution in the infrared image, and the regularization term preserves the gradient variation in the visible image. We also generalize the formulation to fuse image pairs without pre-registration, which greatly enhances its applicability, as high-precision registration is very challenging for multi-sensor data. Qualitative and quantitative comparisons with eight state-of-the-art methods on publicly available databases demonstrate the advantages of GTF, whose results look like sharpened infrared images with more appearance details.
729 citations
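The ℓ1-TV idea above can be sketched in a few lines. The following is an illustrative re-implementation on 1-D signals, not the authors' released code: it minimizes a Charbonnier-smoothed version of |x − u| + λ|∇x − ∇v| by plain gradient descent, and the function name `gtf_1d` as well as the step size, iteration count, and smoothing constant `eps` are arbitrary choices.

```python
import numpy as np

def grad(x):
    # forward difference; appending the edge value makes the last entry 0
    return np.diff(x, append=x[-1])

def gtf_1d(u, v, lam=4.0, lr=0.05, iters=500, eps=1e-2):
    """Minimize a smoothed sum|x - u| + lam * sum|grad(x) - grad(v)|.

    u: infrared signal (intensities to keep); v: visible signal
    (gradients to transfer). Illustrative settings throughout.
    """
    x = u.astype(float).copy()
    gv = grad(v.astype(float))
    for _ in range(iters):
        d = x - u
        g_data = d / np.sqrt(d * d + eps)     # derivative of smoothed |x - u|
        e = grad(x) - gv
        w = e / np.sqrt(e * e + eps)          # smoothed sign of the gradient mismatch
        g_reg = -np.diff(w, prepend=0.0)      # adjoint of the forward difference
        x -= lr * (g_data + lam * g_reg)
    return x
```

With `lam = 0` the result stays at the infrared input `u`; larger `lam` transfers more of the visible gradients into the result, flattening it toward `∇v`.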
TL;DR: Experimental results demonstrate that the proposed spectral-spatial attention network for hyperspectral image classification can fully utilize the spectral and spatial information to obtain competitive performance.
Abstract: Many deep learning models, such as the convolutional neural network (CNN) and the recurrent neural network (RNN), have been successfully applied to extracting deep features for hyperspectral tasks. Hyperspectral image classification distinguishes land covers by exploiting the abundant information they carry. Motivated by the attention mechanism of the human visual system, in this study we propose a spectral-spatial attention network for hyperspectral image classification. In our method, an RNN with attention learns inner spectral correlations within a continuous spectrum, while a CNN with attention is designed to focus on saliency features and the spatial relevance between neighboring pixels. Experimental results demonstrate that our method can fully utilize the spectral and spatial information to obtain competitive performance.
163 citations
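The spectral half of the attention idea can be illustrated with plain matrix arithmetic. This is a hand-rolled sketch, not the authors' network: the matrix `W` stands in for the learned recurrent scoring of inter-band correlations, and a softmax turns the scores into per-band attention weights.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def spectral_attention(pixels, W):
    """Reweight each pixel's spectrum by per-band attention weights.

    pixels: (n, bands) spectra; W: (bands, bands) scoring matrix that
    stands in for the learned scoring of inter-band correlations.
    """
    scores = pixels @ W        # one attention score per band, per pixel
    alpha = softmax(scores)    # weights sum to 1 over the bands
    return alpha * pixels      # emphasize informative bands
```

A spatial-attention branch would do the analogous reweighting over neighboring pixels instead of bands.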
TL;DR: This work proposes a multi-scale decomposition image fusion method based on a local edge-preserving (LEP) filter and saliency detection, which retains the details of the visible image while keeping a discernible target area.
Abstract: To retain the details of a visible image while keeping a discernible target area, we propose a multi-scale decomposition image fusion method based on a local edge-preserving (LEP) filter and saliency detection. We first use an LEP filter to decompose the infrared and visible images. Then, a modified saliency detection method is utilized to detect the salient target areas of the infrared image, which determine the base layer's fusion weights. Finally, each layer is reconstructed to obtain a visually pleasing fused image. Comparison with 11 other state-of-the-art methods reveals the superiority of the proposed method in terms of both qualitative and quantitative results.
129 citations
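The two-scale decomposition-and-fusion pipeline above can be sketched as follows. A box filter stands in for the LEP filter and a simple intensity ratio stands in for the saliency detector, so this is only a structural illustration of base/detail fusion, not the published method.

```python
import numpy as np

def smooth(img, k=3):
    # box filter as a stand-in for the edge-preserving (LEP) filter
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse_two_scale(ir, vis, k=3):
    """Two-scale fusion: base layers weighted toward the (salient) bright
    IR regions, detail layers merged by max-absolute selection."""
    b_ir, b_vis = smooth(ir, k), smooth(vis, k)
    d_ir, d_vis = ir - b_ir, vis - b_vis
    w = b_ir / (b_ir + b_vis + 1e-8)       # crude saliency: relative IR intensity
    base = w * b_ir + (1 - w) * b_vis
    detail = np.where(np.abs(d_ir) >= np.abs(d_vis), d_ir, d_vis)
    return base + detail
```

Fusing an image with itself returns the image unchanged, a quick sanity check for any decomposition-based fusion rule.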
TL;DR: This paper integrates superpixel segmentation (SS) into LRR and proposes a novel denoising method called SSLRR, which exploits the spatial-spectral information of HSI by combining PCA with SS and is therefore better than simply dividing the HSI into square patches.
Abstract: Recently, low-rank representation (LRR) based hyperspectral image (HSI) restoration methods have been proven to be a powerful tool for simultaneously removing different types of noise, such as Gaussian noise, dead pixels, and impulse noise. However, the LRR-based method adopts a square-patch denoising strategy, which prevents it from exploiting the spatial information in the HSI. This paper integrates superpixel segmentation (SS) into LRR and proposes a novel denoising method called SSLRR. First, principal component analysis (PCA) is adopted to obtain the first principal component of the HSI. Then, SS is applied to the first principal component to obtain homogeneous regions. Since we exploit the spatial-spectral information of the HSI by combining PCA with SS, this is better than simply dividing the HSI into square patches. Finally, we apply LRR to each homogeneous region of the HSI, which enables us to remove all the aforementioned types of noise simultaneously. Extensive experiments conducted on synthetic and real HSIs indicate that SSLRR is efficient for HSI denoising.
120 citations
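The SSLRR pipeline can be sketched with stand-ins for its two heavy components: square blocks replace the superpixels (which the paper computes on the first PCA component), and a truncated SVD replaces the LRR solver. The name `sslrr_like` and the `rank`/`block` parameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def truncated_svd_denoise(M, rank):
    # project onto the top singular directions (low-rank stand-in for LRR)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[rank:] = 0.0
    return (U * s) @ Vt

def sslrr_like(cube, rank=1, block=4):
    """Per-region low-rank denoising of an HSI cube of shape (h, w, bands).
    Square blocks stand in for superpixel regions."""
    h, w, b = cube.shape
    out = np.zeros_like(cube, dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = cube[i:i + block, j:j + block, :]
            M = patch.reshape(-1, b)                     # pixels x bands
            out[i:i + block, j:j + block, :] = \
                truncated_svd_denoise(M, rank).reshape(patch.shape)
    return out
```

A clean rank-1 cube passes through unchanged, while added noise is largely projected away, which is the basic mechanism the paper exploits per homogeneous region.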
TL;DR: The difference of Gabor (DoGb) filter is proposed and improved (IDoGb); it is an extension of DoG that is sensitive to orientations, better suppresses complex background edges, and thus achieves a lower false alarm rate.
Abstract: Infrared (IR) small target detection with a high detection rate, low false alarm rate, and multiscale detection ability is a challenging task, since raw IR images usually have low contrast and complex backgrounds. In recent years, robust human visual system (HVS) properties have been introduced into the IR small target detection field. However, existing algorithms based on HVS, such as difference of Gaussians (DoG) filters, are sensitive not only to real small targets but also to background edges, which results in a high false alarm rate. In this letter, the difference of Gabor (DoGb) filter is proposed and improved (IDoGb); it is an extension of DoG that is sensitive to orientations and can better suppress complex background edges, thus achieving a lower false alarm rate. In addition, multiscale detection can also be achieved. Experimental results show that the IDoGb filter produces fewer false alarms at the same detection rate, while consuming only about 0.1 s per frame.
117 citations
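The DoG baseline that the letter improves on is easy to reproduce; a 1-D slice suffices to show both the desired response to a point-like target and the unwanted response to an edge, the false-alarm source that motivates the orientation-sensitive Gabor variant. The scales `s1`/`s2` and the kernel radius are arbitrary choices.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x * x / (2.0 * sigma * sigma))
    return g / g.sum()                      # normalized so flat regions give 0 response

def dog_response(signal, s1=1.0, s2=2.0, radius=6):
    """Difference-of-Gaussians band-pass response on a 1-D image slice."""
    g1 = np.convolve(signal, gaussian_kernel(s1, radius), mode='same')
    g2 = np.convolve(signal, gaussian_kernel(s2, radius), mode='same')
    return g1 - g2
```

An isolated impulse (a small target) produces a positive peak at its location, but a step edge also produces a nonzero response, which is exactly the false-alarm behavior IDoGb is designed to suppress.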
Cited by
TL;DR: This paper proposes a novel method to fuse two types of information using a generative adversarial network, termed FusionGAN, which establishes an adversarial game between a generator and a discriminator, where the generator aims to generate a fused image with major infrared intensities together with additional visible gradients.
Abstract: Infrared images can distinguish targets from their backgrounds on the basis of differences in thermal radiation, which works well at all day/night times and under all weather conditions. By contrast, visible images can provide texture details with high spatial resolution and definition in a manner consistent with the human visual system. This paper proposes a novel method to fuse these two types of information using a generative adversarial network, termed FusionGAN. Our method establishes an adversarial game between a generator and a discriminator, where the generator aims to generate a fused image with major infrared intensities together with additional visible gradients, and the discriminator aims to force the fused image to have more of the details existing in visible images. This enables the final fused image to simultaneously keep the thermal radiation of the infrared image and the textures of the visible image. In addition, our FusionGAN is an end-to-end model, avoiding the manual design of complicated activity level measurements and fusion rules as in traditional methods. Experiments on public datasets demonstrate the superiority of our strategy over state-of-the-art methods, where our results look like sharpened infrared images with clearly highlighted targets and abundant details. Moreover, we also generalize our FusionGAN to fuse images with different resolutions, say a low-resolution infrared image and a high-resolution visible image. Extensive results demonstrate that our strategy can generate clear and clean fused images that do not suffer from noise caused by upsampling of infrared information.
853 citations
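The content half of a FusionGAN-style generator objective (keep infrared intensities, add visible gradients) can be written down directly. The adversarial term is omitted here, and the function name and the weight `xi` are illustrative assumptions, not the paper's exact loss or value.

```python
import numpy as np

def grad2d(img):
    # forward differences along each axis; appended edge rows/cols give 0
    gy = np.diff(img, axis=0, append=img[-1:, :])
    gx = np.diff(img, axis=1, append=img[:, -1:])
    return gy, gx

def generator_content_loss(fused, ir, vis, xi=5.0):
    """Content part of a FusionGAN-style generator loss: an intensity term
    toward the infrared image plus a gradient term toward the visible one
    (the adversarial term from the discriminator is omitted)."""
    intensity = np.mean((fused - ir) ** 2)
    fy, fx = grad2d(fused)
    vy, vx = grad2d(vis)
    gradient = np.mean((fy - vy) ** 2 + (fx - vx) ** 2)
    return intensity + xi * gradient
```

The loss is zero exactly when the fused image matches the infrared intensities and the visible gradients at once, which is the trade-off the generator is trained to balance.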
TL;DR: This work comprehensively surveys the existing methods and applications for the fusion of infrared and visible images, and can serve as a reference for researchers in infrared and visible image fusion and related fields.
Abstract: Infrared images can distinguish targets from their backgrounds based on the radiation difference, which works well in all-weather and all-day/night conditions. By contrast, visible images can provide texture details with high spatial resolution and definition in a manner consistent with the human visual system. Therefore, it is desirable to fuse these two types of images, which can combine the advantages of the thermal radiation information in infrared images and the detailed texture information in visible images. In this work, we comprehensively survey the existing methods and applications for the fusion of infrared and visible images. First, infrared and visible image fusion methods are reviewed in detail. Meanwhile, image registration, as a prerequisite of image fusion, is briefly introduced. Second, we provide an overview of the main applications of infrared and visible image fusion. Third, the evaluation metrics of fusion performance are discussed and summarized. Fourth, we select eighteen representative methods and nine assessment metrics to conduct qualitative and quantitative experiments, which can provide an objective performance reference for different fusion methods and thus support relevant engineering applications with credible and solid evidence. Finally, we conclude with the current status of infrared and visible image fusion and deliver insightful discussions and prospects for future work. This survey can serve as a reference for researchers in infrared and visible image fusion and related fields.
849 citations
TL;DR: A novel deep learning architecture for infrared and visible image fusion problems is presented, whose encoding network combines convolutional layers, a fusion layer, and a dense block in which the output of each layer is connected to every other layer.
Abstract: In this paper, we present a novel deep learning architecture for infrared and visible image fusion problems. In contrast to conventional convolutional networks, our encoding network combines convolutional layers, a fusion layer, and a dense block in which the output of each layer is connected to every other layer. We attempt to use this architecture to obtain more useful features from the source images in the encoding process, and two fusion layers (fusion strategies) are designed to fuse these features. Finally, the fused image is reconstructed by a decoder. Compared with existing fusion methods, the proposed method achieves state-of-the-art performance in objective and subjective assessment.
703 citations
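The dense connectivity the abstract describes (each layer's output fed to every later layer) can be shown with plain matrix multiplies standing in for convolutions. `dense_block` and its weight list are illustrative, a sketch of the wiring rather than the paper's architecture.

```python
import numpy as np

def dense_block(x, weights, act=np.tanh):
    """Dense connectivity: each layer receives the concatenation of the
    input and all previous layers' outputs. Matrix multiplies stand in
    for convolutions; the final output concatenates every feature map."""
    feats = [x]
    for W in weights:
        inp = np.concatenate(feats, axis=-1)   # skip connections from every earlier layer
        feats.append(act(inp @ W))
    return np.concatenate(feats, axis=-1)
```

Note how the input width grows with each layer (here 4, then 4+3, then 4+3+3): that widening is the point of dense blocks, since early features stay directly available to the decoder.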
TL;DR: A comprehensive review of the current state of the art in DL for HSI classification, analyzing the strengths and weaknesses of the most widely used classifiers in the literature and providing an exhaustive comparison of the discussed techniques.
Abstract: Advances in computing technology have fostered the development of new and powerful deep learning (DL) techniques, which have demonstrated promising results in a wide range of applications. In particular, DL methods have been successfully used to classify remotely sensed data collected by Earth Observation (EO) instruments. Hyperspectral imaging (HSI) is a hot topic in remote sensing data analysis due to the vast amount of information comprised by this kind of image, which allows for a better characterization and exploitation of the Earth's surface by combining rich spectral and spatial information. However, HSI poses major challenges for supervised classification methods due to the high dimensionality of the data and the limited availability of training samples. These issues, together with the high intraclass variability (and interclass similarity) often present in HSI data, may hamper the effectiveness of classifiers. To address these limitations, several DL-based architectures have been recently developed, exhibiting great potential in HSI data interpretation. This paper provides a comprehensive review of the current state of the art in DL for HSI classification, analyzing the strengths and weaknesses of the most widely used classifiers in the literature. For each discussed method, we provide quantitative results using several well-known and widely used HSI scenes, thus offering an exhaustive comparison of the discussed techniques. The paper concludes with some remarks and hints about future challenges in the application of DL techniques to HSI classification. The source codes of the methods discussed in this paper are available from: https://github.com/mhaut/hyperspectral_deeplearning_review .
534 citations
TL;DR: This survey paper presents a systematic review of the DL-based pixel-level image fusion literature, summarizes the main difficulties that exist in conventional image fusion research, and discusses the advantages that DL can offer to address each of these problems.
Abstract: By integrating the information contained in multiple images of the same scene into one composite image, pixel-level image fusion is recognized as having high significance in a variety of fields including medical imaging, digital photography, remote sensing, video surveillance, etc. In recent years, deep learning (DL) has achieved great success in a number of computer vision and image processing problems. The application of DL techniques in the field of pixel-level image fusion has also emerged as an active topic in the last three years. This survey paper presents a systematic review of the DL-based pixel-level image fusion literature. Specifically, we first summarize the main difficulties that exist in conventional image fusion research and discuss the advantages that DL can offer to address each of these problems. Then, the recent achievements in DL-based image fusion are reviewed in detail. More than a dozen recently proposed image fusion methods based on DL techniques, including convolutional neural networks (CNNs), convolutional sparse representation (CSR), and stacked autoencoders (SAEs), are introduced. Finally, by organizing the existing DL-based image fusion methods into several generic frameworks and presenting a potential DL-based framework for developing objective evaluation metrics, we put forward some prospects for future study on this topic. The key issues and challenges that exist in each framework are discussed.
493 citations