Journal ArticleDOI

Multi-focus image fusion with a deep convolutional neural network

TLDR
A new multi-focus image fusion method is proposed that learns a direct mapping between the source images and the focus map, using a deep convolutional neural network trained on high-quality image patches and their blurred versions to encode the mapping.
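As a rough illustration of the training idea in the TL;DR (clean patches versus their blurred versions, with a CNN scoring focus), here is a minimal PyTorch sketch; the patch size, blur parameters, and tiny architecture are illustrative assumptions, not the authors' network.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FocusNet(nn.Module):
    """Tiny CNN mapping a grayscale patch to a focus score in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.classifier(h))

def gaussian_blur(patches, kernel_size=7, sigma=2.0):
    """Blur clean patches to synthesize the 'defocused' class."""
    coords = torch.arange(kernel_size, dtype=patches.dtype) - kernel_size // 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    g = g / g.sum()
    kernel = (g[:, None] * g[None, :]).view(1, 1, kernel_size, kernel_size)
    return F.conv2d(patches, kernel, padding=kernel_size // 2)

# Training pairs: label 1 for sharp patches, 0 for their blurred counterparts.
sharp = torch.rand(8, 1, 32, 32)              # stand-in for high-quality patches
blurred = gaussian_blur(sharp)
x = torch.cat([sharp, blurred])
y = torch.cat([torch.ones(8, 1), torch.zeros(8, 1)])

model = FocusNet()
loss = F.binary_cross_entropy(model(x), y)    # an optimizer step would follow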
About
This article was published in Information Fusion on 2017-07-01 and has received 826 citations to date. The article focuses on the topics: Image fusion & Convolutional neural network.


Citations
Journal ArticleDOI

FusionGAN: A generative adversarial network for infrared and visible image fusion

TL;DR: This paper proposes a novel method to fuse the two types of information using a generative adversarial network, termed FusionGAN, which establishes an adversarial game between a generator and a discriminator, where the generator aims to generate a fused image with major infrared intensities together with additional visible gradients.
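The adversarial game can be sketched at the loss level as follows; the finite-difference gradient operator, loss weight, and binary cross-entropy objectives are illustrative assumptions, not the FusionGAN implementation.

import torch
import torch.nn.functional as F

def grad(x):
    """Horizontal and vertical finite differences (stand-in for the gradient operator)."""
    return x[..., :, 1:] - x[..., :, :-1], x[..., 1:, :] - x[..., :-1, :]

def generator_loss(fused, ir, vis, d_fused, lam=5.0):
    gx_f, gy_f = grad(fused)
    gx_v, gy_v = grad(vis)
    # Content term: keep infrared intensities and visible-image gradients.
    content = F.mse_loss(fused, ir) + lam * (F.mse_loss(gx_f, gx_v) + F.mse_loss(gy_f, gy_v))
    # Adversarial term: push the discriminator to label the fused image as "visible".
    adv = F.binary_cross_entropy_with_logits(d_fused, torch.ones_like(d_fused))
    return content + adv

def discriminator_loss(d_vis, d_fused):
    # The discriminator tries to accept real visible images and reject fused ones.
    real = F.binary_cross_entropy_with_logits(d_vis, torch.ones_like(d_vis))
    fake = F.binary_cross_entropy_with_logits(d_fused, torch.zeros_like(d_fused))
    return real + fake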
Journal ArticleDOI

Infrared and visible image fusion methods and applications: A survey

TL;DR: This survey comprehensively reviews the existing methods and applications for the fusion of infrared and visible images, and can serve as a reference for researchers in infrared and visible image fusion and related fields.
Journal ArticleDOI

DenseFuse: A Fusion Approach to Infrared and Visible Images

TL;DR: A novel deep learning architecture for the infrared and visible image fusion problem is presented, in which the encoding network is combined with convolutional layers, a fusion layer, and a dense block in which the output of each layer is connected to every other layer.
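A minimal PyTorch sketch of such a dense block, where every layer receives the concatenation of all earlier feature maps; the channel counts and depth below are assumptions for illustration, not the DenseFuse network.

import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_ch=16, growth=16, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch + i * growth, growth, 3, padding=1),
                nn.ReLU(),
            )
            for i in range(n_layers)
        )

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))  # reuse all earlier outputs
        return torch.cat(feats, dim=1)

features = DenseBlock()(torch.rand(1, 16, 64, 64))  # -> (1, 16 + 3*16, 64, 64)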
Journal ArticleDOI

IFCNN: A general image fusion framework based on convolutional neural network

TL;DR: The experimental results show that the proposed model demonstrates better generalization ability than the existing image fusion models for fusing various types of images, such as multi-focus, infrared-visual, multi-modal medical and multi-exposure images.
Journal ArticleDOI

Deep learning for pixel-level image fusion: Recent advances and future prospects

TL;DR: This survey paper presents a systematic review of the DL-based pixel-level image fusion literature, summarizes the main difficulties that exist in conventional image fusion research, and discusses the advantages that DL can offer in addressing each of these problems.
References
Proceedings ArticleDOI

Learning to compare image patches via convolutional neural networks

TL;DR: This paper shows how to learn directly from image data a general similarity function for comparing image patches, which is a task of fundamental importance for many computer vision problems, and opts for a CNN-based model that is trained to account for a wide variety of changes in image appearance.
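A minimal sketch of the patch-comparison setup: two patches pass through a shared embedding branch and a small head scores their similarity. The specific architecture here is an assumption for illustration, not the network proposed in the paper.

import torch
import torch.nn as nn

class PatchSimilarity(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, p1, p2):
        # Shared weights: both patches go through the same embedding branch.
        z = torch.cat([self.embed(p1), self.embed(p2)], dim=1)
        return self.head(z)  # higher score = more similar (trained with a pair loss)

score = PatchSimilarity()(torch.rand(4, 1, 32, 32), torch.rand(4, 1, 32, 32))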
Journal ArticleDOI

Image Fusion With Guided Filtering

TL;DR: Experimental results demonstrate that the proposed method can obtain state-of-the-art performance for fusion of multispectral, multifocus, multimodal, and multiexposure images.
Journal ArticleDOI

A general framework for image fusion based on multi-scale transform and sparse representation

TL;DR: A general image fusion framework that combines multi-scale transform (MST) and sparse representation (SR) is presented to simultaneously overcome the inherent defects of both MST- and SR-based fusion methods; experimental results demonstrate that the proposed framework achieves state-of-the-art performance.
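As a simplified sketch of the multi-scale-transform half of such a framework, the NumPy code below fuses detail layers of a difference-of-Gaussians pyramid by absolute-max selection and averages the base layers; the sparse-representation step that the MST+SR framework applies to the base layer is omitted, and the pyramid depth and sigma are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def pyramid(img, levels=3, sigma=2.0):
    """Same-resolution difference-of-Gaussians pyramid: detail layers plus a base layer."""
    layers = []
    for _ in range(levels):
        low = gaussian_filter(img, sigma)
        layers.append(img - low)   # detail layer
        img = low
    layers.append(img)             # base (low-frequency) layer
    return layers

def fuse(img_a, img_b, levels=3):
    pa, pb = pyramid(img_a, levels), pyramid(img_b, levels)
    # Absolute-max selection on detail layers, averaging on the base layer.
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    return sum(fused)              # inverse transform: sum of all layers

result = fuse(np.random.rand(64, 64), np.random.rand(64, 64))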
Journal ArticleDOI

Pixel-level image fusion

TL;DR: It is concluded that although various image fusion methods have been proposed, several future directions remain across different image fusion applications, and research in the image fusion field is expected to grow significantly in the coming years.
Journal ArticleDOI

A general framework for multiresolution image fusion: from pixels to regions

TL;DR: The aim is to reframe the multiresolution-based fusion methodology into a common formalism and to develop a new region-based approach which combines aspects of both object and pixel-level fusion.