Journal ArticleDOI

Multi-focus image fusion with a deep convolutional neural network

TLDR
A new multi-focus image fusion method is proposed that learns a direct mapping between the source images and the focus map, using a deep convolutional neural network trained on high-quality image patches and their blurred versions to encode the mapping.
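As an illustration of the patch-classification idea summarized above, the following is a minimal PyTorch sketch (not the authors' published architecture; the class name FocusNet and the layer sizes are hypothetical): a small CNN scores patches as focused vs. blurred, and sliding it over both source images yields a focus map that guides fusion.

```python
# Minimal sketch (not the paper's exact architecture): a small CNN that scores
# image patches as "focused" vs. "blurred". Training pairs are built as the
# paper describes: sharp patch -> "focused", Gaussian-blurred copy -> "blurred".
import torch
import torch.nn as nn

class FocusNet(nn.Module):  # hypothetical name
    def __init__(self, patch_size=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (patch_size // 4) ** 2, 128), nn.ReLU(),
            nn.Linear(128, 2),  # logits: [blurred, focused]
        )

    def forward(self, patch):
        # patch: (N, 1, patch_size, patch_size) grayscale patches
        return self.classifier(self.features(patch))
```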
About
This article was published in Information Fusion on 2017-07-01 and has received 826 citations to date. It focuses on the topics: Image fusion & Convolutional neural network.


Citations
Journal ArticleDOI

MFF-GAN: An unsupervised generative adversarial network with adaptive and gradient joint constraints for multi-focus image fusion

TL;DR: A new generative adversarial network with adaptive and gradient joint constraints is presented to fuse multi-focus images; experiments demonstrate its superiority over the state of the art in terms of both subjective visual effect and quantitative metrics.
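A hedged sketch of what a gradient joint constraint of this kind could look like (not MFF-GAN's published loss; the finite-difference gradient operator and L1 matching are assumptions): the fused image's gradients are pushed toward the per-pixel stronger gradient of the two source images, which tends to preserve focused detail.

```python
# Illustrative gradient joint constraint for multi-focus fusion (assumed form).
import torch
import torch.nn.functional as F

def gradient(img):
    # Simple finite-difference gradients; img: (N, 1, H, W).
    gx = img[:, :, :, 1:] - img[:, :, :, :-1]
    gy = img[:, :, 1:, :] - img[:, :, :-1, :]
    return gx, gy

def gradient_joint_loss(fused, src1, src2):
    loss = 0.0
    for f, a, b in zip(gradient(fused), gradient(src1), gradient(src2)):
        target = torch.where(a.abs() >= b.abs(), a, b)  # stronger source gradient
        loss = loss + F.l1_loss(f, target)
    return loss
```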
Book ChapterDOI

Remote Sensing Image Fusion Based on Two-Stream Fusion Network

TL;DR: Experiments on QuickBird and GaoFen-1 satellite images demonstrate that the proposed TFNet can effectively fuse PAN and MS images, producing pan-sharpened images competitive with, or even superior to, the state of the art.
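A minimal two-stream sketch under stated assumptions (the class name TwoStreamFusion and the channel widths are hypothetical, not the published TFNet): separate encoders for the PAN and upsampled MS inputs, feature-level concatenation, and a decoder that reconstructs the pan-sharpened image.

```python
# Two-stream fusion sketch for pan-sharpening (assumed layer sizes).
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):  # hypothetical name
    def __init__(self, ms_bands=4):
        super().__init__()
        self.pan_enc = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU())
        self.ms_enc = nn.Sequential(nn.Conv2d(ms_bands, 32, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, ms_bands, 3, padding=1),
        )

    def forward(self, pan, ms_up):
        # pan: (N, 1, H, W); ms_up: (N, ms_bands, H, W), upsampled to PAN resolution
        fused = torch.cat([self.pan_enc(pan), self.ms_enc(ms_up)], dim=1)
        return self.decoder(fused)
```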
Journal ArticleDOI

Deep Coupled Dense Convolutional Network With Complementary Data for Intelligent Fault Diagnosis

TL;DR: This paper proposes a deep coupled dense convolutional network (CDCN) with complementary data that integrates information fusion, feature extraction, and fault classification for intelligent fault diagnosis.
Journal ArticleDOI

SwinFusion: Cross-domain Long-range Learning for General Image Fusion via Swin Transformer

TL;DR: An attention-guided cross-domain module is devised to achieve sufficient integration of complementary information and global interaction, while an elaborate loss function, consisting of SSIM loss, texture loss, and intensity loss, drives the network to preserve abundant texture details and structural information and to present optimal apparent intensity.
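An illustrative composition of the three loss terms (the weights, Sobel gradient operator, and max-based targets are assumptions, not SwinFusion's published settings); ssim_fn stands for any differentiable SSIM implementation, e.g. from the pytorch_msssim package.

```python
# Assumed combination of SSIM, texture (gradient), and intensity losses.
import torch
import torch.nn.functional as F

def sobel_gradient(img):
    # img: (N, 1, H, W) single-channel images.
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx.to(img), padding=1)
    gy = F.conv2d(img, ky.to(img), padding=1)
    return gx.abs() + gy.abs()

def fusion_loss(fused, a, b, ssim_fn, w_ssim=1.0, w_text=1.0, w_int=1.0):
    l_ssim = (1 - ssim_fn(fused, a)) + (1 - ssim_fn(fused, b))               # structure
    l_text = F.l1_loss(sobel_gradient(fused),
                       torch.maximum(sobel_gradient(a), sobel_gradient(b)))  # texture
    l_int = F.l1_loss(fused, torch.maximum(a, b))                            # intensity
    return w_ssim * l_ssim + w_text * l_text + w_int * l_int
```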
Journal ArticleDOI

SDNet: A Versatile Squeeze-and-Decomposition Network for Real-Time Image Fusion

TL;DR: A squeeze-and-decomposition network (SDNet) is proposed to realize multi-modal and digital-photography image fusion in real time; it is much faster than state-of-the-art methods and can handle real-time fusion tasks.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art performance on ImageNet classification.
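A PyTorch sketch of the architecture as described (AlexNet-style; local response normalization, channel grouping, and dropout are omitted here for brevity):

```python
# Five conv layers, some followed by max-pooling, then three fully-connected
# layers ending in 1000-way class logits (softmax is applied in the loss).
import torch.nn as nn

alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(), nn.MaxPool2d(3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),   # assumes 227x227 RGB input
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),
)
```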
Journal ArticleDOI

Gradient-based learning applied to document recognition

TL;DR: A graph transformer network (GTN) trained with gradient-based learning is proposed for document recognition; it can synthesize a complex decision surface that classifies high-dimensional patterns such as handwritten characters.
Journal ArticleDOI

Image quality assessment: from error visibility to structural similarity

TL;DR: A structural similarity (SSIM) index is proposed for image quality assessment based on the degradation of structural information; it is validated against subjective ratings and other objective methods on a database of images compressed with JPEG and JPEG2000.
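A simplified, global version of the index for illustration (the published SSIM is computed over local windows and then averaged; the constants follow the usual K1 = 0.01, K2 = 0.03 convention):

```python
# Global SSIM between two images of equal size (simplified sketch).
import numpy as np

def ssim_global(x, y, data_range=255.0, k1=0.01, k2=0.03):
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```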
Proceedings ArticleDOI

Fully convolutional networks for semantic segmentation

TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
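A minimal sketch of the fully convolutional idea (not the paper's FCN-8s; the class name TinyFCN and the layer sizes are hypothetical): only convolution and pooling layers are used, so an input of arbitrary size yields a correspondingly-sized map of per-pixel class scores after upsampling.

```python
# Tiny fully convolutional network: 1x1 conv replaces fully-connected layers.
import torch.nn as nn

class TinyFCN(nn.Module):  # hypothetical name
    def __init__(self, num_classes=21):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.score = nn.Conv2d(64, num_classes, 1)        # per-pixel class scores
        self.upsample = nn.ConvTranspose2d(num_classes, num_classes,
                                           kernel_size=4, stride=4)  # back to input size

    def forward(self, x):
        # x: (N, 3, H, W) with H, W divisible by 4 for exact size recovery
        return self.upsample(self.score(self.backbone(x)))
```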
Proceedings Article

Rectified Linear Units Improve Restricted Boltzmann Machines

TL;DR: Replacing the binary stochastic hidden units of restricted Boltzmann machines with rectified linear units learns features that are better for object recognition on the NORB dataset and face verification on the Labeled Faces in the Wild dataset.
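A sketch of the noisy rectified linear hidden unit used as a drop-in replacement for binary stochastic units, assuming the max(0, x + noise) formulation with noise variance given by sigmoid(x); treat the exact noise model as an assumption rather than a verbatim reproduction of the paper.

```python
# Noisy rectified linear unit sampling for RBM hidden units (assumed form).
import torch

def nrelu_sample(pre_activation):
    # Noise standard deviation derived from sigmoid of the pre-activation.
    sigma = torch.sigmoid(pre_activation).sqrt()
    noise = torch.randn_like(pre_activation) * sigma
    return torch.clamp(pre_activation + noise, min=0)
```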