Journal ArticleDOI

Multi-focus image fusion with a deep convolutional neural network

TLDR
A new multi-focus image fusion method is proposed that learns a direct mapping between the source images and the focus map, using a deep convolutional neural network trained on high-quality image patches and their blurred versions to encode the mapping.
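As a rough illustration of this recipe, the sketch below (PyTorch, with hypothetical layer sizes and names; the paper's exact architecture and training details may differ) shows a two-branch patch classifier trained on sharp patches versus their Gaussian-blurred copies, whose per-patch decisions would then be assembled into a focus map.

# Hypothetical sketch of patch-based focus classification for multi-focus
# fusion: train a CNN on high-quality patches and blurred counterparts,
# then use its per-patch scores to build a focus map. Layer sizes and
# names are illustrative, not the paper's exact configuration.
import torch
import torch.nn as nn

class FocusBranch(nn.Module):
    """Feature extractor applied to a single image patch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.features(x)

class FocusNet(nn.Module):
    """Two weight-sharing branches compare patches A and B and predict
    which one is in focus (2-way softmax)."""
    def __init__(self, patch_size=16):
        super().__init__()
        self.branch = FocusBranch()
        feat_dim = 64 * (patch_size // 2) ** 2
        self.classifier = nn.Linear(2 * feat_dim, 2)

    def forward(self, patch_a, patch_b):
        fa = self.branch(patch_a).flatten(1)
        fb = self.branch(patch_b).flatten(1)
        return self.classifier(torch.cat([fa, fb], dim=1))

# Training pairs: a sharp patch and a Gaussian-blurred copy of it;
# label 0 means "patch A is focused", label 1 means patch B is focused.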
About
This article was published in Information Fusion on 2017-07-01 and has received 826 citations to date. It focuses on the topics: Image fusion & Convolutional neural network.


Citations
Journal ArticleDOI

Green Fluorescent Protein and Phase-Contrast Image Fusion via Generative Adversarial Networks

TL;DR: Experimental results demonstrate that the proposed method can outperform several representative and state-of-the-art image fusion methods in terms of both visual quality and objective evaluation.
Journal ArticleDOI

A Novel GA-Based Optimized Approach for Regional Multimodal Medical Image Fusion With Superpixel Segmentation

TL;DR: Wang et al. proposed a region-based multimodal medical image fusion framework built on superpixel segmentation and a post-processing optimization method, which effectively obtains homogeneous regions and preserves complete image detail.
Journal ArticleDOI

HID: The Hybrid Image Decomposition Model for MRI and CT Fusion

TL;DR: Wang et al. proposed an efficient hybrid image decomposition (HID) method that combines the advantages of spatial-domain and transform-domain methods and overcomes the limitations of algorithms based on a single category of features.
Journal ArticleDOI

Infrared and visible image fusion based on dilated residual attention network

TL;DR: Instead of normal convolutions, dilated convolutions are introduced in the encoders to extract multi-scale features of the IR and VIS images, and a self-attention mechanism is introduced to refine and adaptively fuse their multi-contextual features.
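As a small illustration of the dilated-convolution idea (PyTorch, with assumed channel counts and dilation rates rather than the paper's exact encoder):

# Illustrative encoder that stacks dilated convolutions to enlarge the
# receptive field without downsampling; channel counts and dilation
# rates are assumptions, not the paper's exact design.
import torch.nn as nn

class DilatedEncoder(nn.Module):
    def __init__(self, in_ch=1, ch=32):
        super().__init__()
        # Same kernel size, growing dilation -> multi-scale context.
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, padding=1, dilation=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

# An encoder of this kind would be applied to the IR and VIS inputs
# before the attention-based fusion described in the TL;DR.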
Journal ArticleDOI

Improving the Performance of Image Fusion Based on Visual Saliency Weight Map Combined With CNN

TL;DR: The proposed fusion framework, based on a visual saliency weight map (VSWM) combined with a CNN, outperforms state-of-the-art methods, scoring highest on evaluation metrics such as Q0, multiscale structural similarity (MS_SSIM), and the sum of correlations of differences (SCD).
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art classification performance on ImageNet.
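For concreteness, the layer stack described above can be sketched as follows (a compact, illustrative PyTorch version assuming 227x227 RGB input; it omits local response normalization, dropout, and the original two-GPU split):

# Compact sketch of the five-conv / three-FC architecture described in
# the TL;DR (AlexNet-style), for 227x227 RGB input.
import torch.nn as nn

alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(inplace=True),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(inplace=True),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, 1000),  # 1000-way softmax is applied in the loss
)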
Journal ArticleDOI

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) is proposed for document recognition, which can be used to synthesize a complex decision surface capable of classifying high-dimensional patterns such as handwritten characters.
Journal ArticleDOI

Image quality assessment: from error visibility to structural similarity

TL;DR: In this article, a structural similarity (SSIM) index is proposed for image quality assessment based on the degradation of structural information; it is validated against subjective ratings and objective methods on a database of images compressed with JPEG and JPEG2000.
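The index compares local luminance, contrast, and structure; in its commonly cited form, with local means \mu_x, \mu_y, variances \sigma_x^2, \sigma_y^2, covariance \sigma_{xy}, and small stabilizing constants C_1, C_2:

\mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}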
Proceedings ArticleDOI

Fully convolutional networks for semantic segmentation

TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
Proceedings Article

Rectified Linear Units Improve Restricted Boltzmann Machines

TL;DR: Restricted Boltzmann machines whose binary stochastic hidden units are replaced with rectified linear units learn features that are better for object recognition on the NORB dataset and face verification on the Labeled Faces in the Wild dataset.
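A small numpy sketch of the idea (the noise approximation and function names are illustrative, not the paper's exact procedure): during sampling, the rectified linear hidden unit can be approximated by adding Gaussian noise whose variance follows the unit's input and then rectifying, in contrast with the conventional binary stochastic unit.

# Illustrative comparison of a "noisy rectified linear" hidden unit with
# the conventional binary stochastic hidden unit of an RBM. This is a
# simplified sampling sketch, not a full RBM training loop.
import numpy as np

_rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def noisy_relu_sample(pre_activation):
    # Gaussian noise whose variance tracks the unit's activation, then rectify.
    noise = _rng.normal(0.0, np.sqrt(sigmoid(pre_activation)))
    return np.maximum(0.0, pre_activation + noise)

def binary_stochastic_sample(pre_activation):
    # Conventional RBM hidden unit, shown for comparison.
    return (_rng.random(np.shape(pre_activation)) < sigmoid(pre_activation)).astype(float)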