Journal ArticleDOI
Multi-focus image fusion with a deep convolutional neural network
TLDR
A new multi-focus image fusion method is proposed that learns a direct mapping between source images and a focus map, using a deep convolutional neural network trained on high-quality image patches and their blurred versions to encode the mapping.
About
This article was published in Information Fusion on 2017-07-01 and has received 826 citations to date. It focuses on the topics: Image fusion & Convolutional neural network.
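The training-data construction described in the summary (sharp patches paired with artificially blurred versions, labeled so a classifier can learn which input is in focus) can be sketched as below. This is an illustration only: `box_blur` and `make_training_pair` are hypothetical names, and the paper's actual blurring procedure and CNN classifier are not reproduced here.

```python
import random

def box_blur(img, k=3):
    """Naive box blur on a 2-D list of intensities; stands in for the
    defocus blur used to synthesize out-of-focus training patches."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Clamp coordinates at the borders and average the k*k window.
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            out[y][x] = sum(vals) / len(vals)
    return out

def make_training_pair(sharp_patch):
    """Return ((patch_a, patch_b), label): label is 1 when patch_a is the
    focused (unblurred) patch, 0 when patch_b is."""
    blurred = box_blur(sharp_patch)
    if random.random() < 0.5:
        return (sharp_patch, blurred), 1
    return (blurred, sharp_patch), 0
```

A CNN trained on many such labeled pairs can then score, patch by patch, which source image is in focus, yielding the focus map used for fusion.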
Citations
Journal ArticleDOI
FDGNet: A pair feature difference guided network for multimodal medical image fusion
TL;DR: Wang et al. propose a pair feature difference guided network (FDGNet) to extract complementary features from source images: the feature extraction framework computes differences among features at various levels, so that the feature reconstruction framework can generate a pair of interactive weights, guided by the feature differences, to directly produce the fused result.
Journal ArticleDOI
A novel multi-focus image fusion method based on joint regularization optimization layering and sparse representation
TL;DR: A multi-source regularization optimization model is proposed to jointly divide the source images into a common background layer and respective detail layers; the resulting detail layers are then fused in the sparse representation domain.
Journal ArticleDOI
A new multi-focus image fusion method based on multi-classification focus learning and multi-scale decomposition
Proceedings ArticleDOI
A New Infrared and Visible Image Fusion Method Based on Generative Adversarial Networks and Attention Mechanism
Jixiao Wang,Yang Li,Zhuang Miao +2 more
TL;DR: Li et al. incorporate three types of attention mechanism (self-attention, dual attention and multi-scale attention) into a basic image fusion network to improve the quality of the fused images.
Proceedings ArticleDOI
Multi-focus image fusion using pixel level deep learning convolutional neural network
Manmay Rout,Siddheswar Nahak,Subhashree Priyadarshinee,Prasanti Santoshroy,Kodanda Dhar Sa,Dillip Dash +5 more
TL;DR: A pixel-level deep learning method using a 3-channel convolutional neural network to fuse two multi-focus images into a single high-quality fused image.
References
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: A deep convolutional neural network achieving state-of-the-art image classification performance; it consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
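The 1000-way softmax output layer mentioned above converts the network's raw class scores into a probability distribution; a minimal, numerically stable sketch (not the paper's implementation):

```python
import math

def softmax(logits):
    """Numerically stable softmax: subtract the max logit before
    exponentiating so large scores cannot overflow."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

The outputs sum to 1 and preserve the ordering of the input scores, so the arg-max class is unchanged.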
Journal ArticleDOI
Gradient-based learning applied to document recognition
Yann LeCun,Léon Bottou,Yoshua Bengio,Patrick Haffner
TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition, which can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters.
Journal ArticleDOI
Image quality assessment: from error visibility to structural similarity
TL;DR: A structural similarity (SSIM) index is proposed for image quality assessment based on the degradation of structural information, and is validated against subjective ratings on a database of images compressed with JPEG and JPEG2000.
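As a rough illustration of the structural-similarity idea, here is a global (single-window) SSIM over two flat intensity sequences. The published index is computed locally with a sliding Gaussian window, which this sketch omits; the constants assume the common defaults K1=0.01, K2=0.03 with dynamic range L=255.

```python
def ssim_global(x, y, c1=6.5025, c2=58.5225):
    """Single-window SSIM between two equal-length intensity lists.
    c1 = (0.01 * 255)**2 and c2 = (0.03 * 255)**2 stabilize the
    division when means or variances are near zero."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

Identical signals score exactly 1; any luminance, contrast, or structure degradation pulls the score below 1, which is what makes SSIM useful for comparing a fused image against its sources.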
Proceedings ArticleDOI
Fully convolutional networks for semantic segmentation
TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
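The "arbitrary input size, correspondingly-sized output" property comes from convolution itself: a conv layer fixes only the kernel, not the input dimensions. A toy valid-mode 2-D convolution makes this concrete (no framework, illustrative only):

```python
def conv2d_valid(img, kernel):
    """'Valid'-mode 2-D convolution (technically cross-correlation) on
    nested lists. Any H x W input yields an (H-k+1) x (W-k+1) output,
    which is why fully convolutional networks accept arbitrary sizes."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    out = []
    for y in range(h - kh + 1):
        row = []
        for x in range(w - kw + 1):
            # Dot product of the kernel with the window anchored at (y, x).
            row.append(sum(img[y + i][x + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out
```

Feeding a 5x5 or an 8x6 image through the same 3x3 kernel simply produces a 3x3 or 6x4 output; no reshaping to a fixed input size is ever needed.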
Proceedings Article
Rectified Linear Units Improve Restricted Boltzmann Machines
Vinod Nair,Geoffrey E. Hinton
TL;DR: Replacing the binary stochastic hidden units of restricted Boltzmann machines with rectified linear units yields features that are better for object recognition on the NORB dataset and face verification on the Labeled Faces in the Wild dataset.
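For reference, the rectified linear unit and its subgradient, applied elementwise (a plain sketch; the paper's noisy-ReLU sampling inside the RBM is not reproduced here):

```python
def relu(x):
    """Rectified linear unit: max(0, x) applied elementwise."""
    return [max(0.0, v) for v in x]

def relu_grad(x):
    """Subgradient of ReLU: 1 for positive inputs, 0 otherwise.
    Positive activations never saturate, unlike logistic units."""
    return [1.0 if v > 0 else 0.0 for v in x]
```

The non-saturating positive side is the practical reason ReLUs train faster than sigmoid units in deep networks such as the fusion CNN of the main article.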
Related Papers (5)
A general framework for image fusion based on multi-scale transform and sparse representation
Yu Liu,Shuping Liu,Zengfu Wang +2 more