Open Access · Journal Article · DOI

NestFuse: An Infrared and Visible Image Fusion Architecture Based on Nest Connection and Spatial/Channel Attention Models

TL;DR
A novel infrared and visible image fusion method is proposed, built on a nest connection-based network together with spatial and channel attention models that describe the importance of each spatial position and of each channel of the deep features.
Abstract
In this article, we propose a novel method for infrared and visible image fusion in which we develop a nest connection-based network and spatial/channel attention models. The nest connection-based network preserves a significant amount of information from the input data from a multiscale perspective. The approach comprises three key elements: an encoder, a fusion strategy, and a decoder. In the proposed fusion strategy, spatial attention models and channel attention models describe the importance of each spatial position and of each channel of the deep features. First, the source images are fed into the encoder to extract multiscale deep features. The fusion strategy then fuses these features at each scale. Finally, the fused image is reconstructed by the nest connection-based decoder. Experiments on publicly available data sets show that the proposed approach achieves better fusion performance than other state-of-the-art methods, in both subjective and objective evaluations. The code of our fusion method is available at https://github.com/hli1221/imagefusion-nestfuse .
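As a rough illustration of the encoder / attention-fusion / decoder pipeline described in the abstract, the PyTorch sketch below fuses multiscale features from two source images with a spatial attention weight (per-pixel l1-norm, softmaxed across sources) and a channel attention weight (global average pooling, softmaxed across sources). The channel counts, depths, and exact attention formulations are illustrative assumptions, not the authors' NestFuse configuration.

```python
# Illustrative sketch only: simplified two-scale encoder, attention-based fusion,
# and decoder, loosely following the abstract. All sizes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))


class Encoder(nn.Module):
    """Extracts multiscale deep features from a single-channel source image."""
    def __init__(self):
        super().__init__()
        self.scale1 = conv_block(1, 16)
        self.scale2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        f1 = self.scale1(x)              # full-resolution features
        f2 = self.scale2(self.pool(f1))  # half-resolution features
        return [f1, f2]


def spatial_attention_fusion(fa, fb):
    # Spatial weight of each source: softmax over the per-pixel l1-norm of its features.
    wa = fa.abs().sum(dim=1, keepdim=True)
    wb = fb.abs().sum(dim=1, keepdim=True)
    w = torch.softmax(torch.cat([wa, wb], dim=1), dim=1)
    return w[:, 0:1] * fa + w[:, 1:2] * fb


def channel_attention_fusion(fa, fb):
    # Channel weight of each source: softmax over globally average-pooled channel responses.
    ca = F.adaptive_avg_pool2d(fa, 1)
    cb = F.adaptive_avg_pool2d(fb, 1)
    w = torch.softmax(torch.stack([ca, cb], dim=0), dim=0)
    return w[0] * fa + w[1] * fb


class Decoder(nn.Module):
    """Reconstructs the fused image from fused multiscale features (nest connections simplified)."""
    def __init__(self):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.merge = conv_block(16 + 32, 16)
        self.out = nn.Conv2d(16, 1, 3, padding=1)

    def forward(self, feats):
        f1, f2 = feats
        return self.out(self.merge(torch.cat([f1, self.up(f2)], dim=1)))


# Usage: fuse an infrared / visible pair (random tensors stand in for real images).
encoder, decoder = Encoder(), Decoder()
ir, vis = torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128)
feats_ir, feats_vis = encoder(ir), encoder(vis)
fused = [0.5 * (spatial_attention_fusion(a, b) + channel_attention_fusion(a, b))
         for a, b in zip(feats_ir, feats_vis)]
fused_image = decoder(fused)
print(fused_image.shape)  # torch.Size([1, 1, 128, 128])
```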


Citations
Journal Article · DOI

RFN-Nest: An end-to-end residual fusion network for infrared and visible images

TL;DR: A residual fusion network (RFN), built on a residual architecture, is proposed to replace the traditional fusion approach; it delivers better performance than state-of-the-art methods in both subjective and objective evaluation.
Journal Article · DOI

Image fusion meets deep learning: A survey and perspective

TL;DR: In this paper, a comprehensive review and analysis of the latest deep learning methods in different image fusion scenarios is provided, and representative methods on specific fusion tasks are evaluated qualitatively and quantitatively.
Journal Article · DOI

GANMcC: A Generative Adversarial Network With Multiclassification Constraints for Infrared and Visible Image Fusion

TL;DR: A new fusion framework called generative adversarial network with multiclassification constraints (GANMcC) is proposed, which transforms image fusion into a multidistribution simultaneous estimation problem to fuse infrared and visible images in a more reasonable way.
Journal Article · DOI

Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network

TL;DR: Tang et al. propose a semantic-aware real-time image fusion network (SeAFusion) that cascades an image fusion module with a semantic segmentation module and leverages a semantic loss to guide high-level semantic information back into the fusion module.
Journal Article · DOI

Infrared and Visible Image Fusion via Decoupling Network

TL;DR: Wang et al. propose a decoupling network-based IVIF method (DNFusion) that uses the decoupled maps to impose additional constraints on the network, forcing it to effectively retain the salient information of the source images.
References
Proceedings Article · DOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; the resulting model won 1st place in the ILSVRC 2015 classification task.
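For reference, a residual block learns a residual mapping F(x) and adds it to the input through an identity shortcut, so the output is F(x) + x. The PyTorch block below is a generic, minimal illustration with arbitrary channel counts, not the exact ResNet configuration.

```python
# Minimal residual (identity-shortcut) block: the stacked layers only learn F(x),
# and the block outputs F(x) + x.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut


x = torch.rand(1, 64, 32, 32)
print(ResidualBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```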
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting using an architecture with very small (3 x 3) convolution filters, showing that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
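The design idea can be sketched as stacks of 3 x 3 convolutions followed by pooling, with depth coming from many such small-filter layers. The snippet below is only an illustration of that pattern; channel counts and the number of stages are assumptions, not the paper's exact configurations.

```python
# Sketch of the small-filter, deep-stack idea: repeated 3x3 convolutions per stage,
# each stage ending in 2x2 max pooling. Sizes here are illustrative.
import torch
import torch.nn as nn


def small_filter_stage(in_ch, out_ch, num_convs):
    layers = []
    for i in range(num_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)


features = nn.Sequential(
    small_filter_stage(3, 64, 2),     # two 3x3 convs, then pool
    small_filter_stage(64, 128, 2),
    small_filter_stage(128, 256, 3),
)
print(features(torch.rand(1, 3, 224, 224)).shape)  # torch.Size([1, 256, 28, 28])
```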
Journal Article · DOI

Image quality assessment: from error visibility to structural similarity

TL;DR: In this article, a structural similarity (SSIM) index is proposed for image quality assessment based on the degradation of structural information, and it is validated against subjective ratings on a database of images compressed with JPEG and JPEG2000.
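The structural similarity index combines luminance, contrast, and structure comparisons with small stabilizing constants. The NumPy snippet below computes a simplified global SSIM over whole images to illustrate the formula; reference implementations instead compute SSIM over local (often Gaussian-weighted) sliding windows and average the resulting map.

```python
# Simplified global SSIM between two images with values in [0, data_range].
# Only illustrates the formula; standard SSIM is computed over local windows.
import numpy as np


def global_ssim(x, y, data_range=1.0):
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))


a = np.random.rand(64, 64)
print(global_ssim(a, a))                       # 1.0 for identical images
print(global_ssim(a, np.clip(a + 0.1, 0, 1)))  # < 1.0 for a distorted copy
```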
Journal Article · DOI

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
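The adversarial process is a two-player minimax game over the value function E[log D(x)] + E[log(1 - D(G(z)))]. The PyTorch sketch below shows one training step on toy data, using the common non-saturating generator loss rather than the original log(1 - D(G(z))) form; network sizes and the toy data are illustrative assumptions.

```python
# One adversarial training step on toy 1-D data: D learns to separate real from
# generated samples, G learns to fool D (non-saturating variant of the minimax game).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))   # noise -> fake sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(64, 2) + 3.0   # toy "real" distribution
z = torch.randn(64, 8)

# Discriminator step: push D(real) toward 1 and D(G(z)) toward 0.
fake = G(z).detach()
loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: push D(G(z)) toward 1 (non-saturating generator loss).
loss_g = bce(D(G(z)), torch.ones(64, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
print(loss_d.item(), loss_g.item())
```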
Journal Article · DOI

ImageNet classification with deep convolutional neural networks

TL;DR: A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images of the ImageNet LSVRC-2010 contest into 1000 different classes, employing a recently developed regularization method called "dropout" that proved to be very effective.
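Dropout, mentioned above, randomly zeroes activations during training so that units cannot co-adapt too strongly; at test time it is disabled. The snippet below shows its standard use in PyTorch with arbitrary, illustrative layer sizes.

```python
# Dropout zeroes each activation with probability p during training (rescaling the rest);
# in eval mode it is a no-op.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Linear(256, 128), nn.ReLU(),
    nn.Dropout(p=0.5),            # active only in training mode
    nn.Linear(128, 10),
)

x = torch.rand(4, 256)
classifier.train()
y_train = classifier(x)   # dropout applied
classifier.eval()
y_eval = classifier(x)    # dropout disabled
print(y_train.shape, y_eval.shape)
```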