Journal ArticleDOI

A multi-focus color image fusion algorithm based on low vision image reconstruction and focused feature extraction

TLDR
Zhang et al. proposed a multi-focus color image fusion algorithm based on low-vision image reconstruction and focused feature extraction, which improves the accuracy of distinguishing focused and defocused regions.
Abstract
Multi-focus image fusion generates a fused image by merging multiple images of the same scene captured with different degrees of focus. In multi-focus image fusion, the accuracy of the detected focus area is critical to the quality of the fused image. Combining the structural gradient, we propose a multi-focus color image fusion algorithm based on low-vision image reconstruction and focused feature extraction. First, the source images are fed into a deep residual network (ResNet) to reconstruct low-vision images by super-resolution. Next, an end-to-end restoration model refines image details while a rolling guidance filter preserves image edges. A difference image is then computed between the reconstructed image and the source image, and the fusion decision map is generated by a structural-gradient-based focus area detection method. Finally, the source images are weighted by the fusion decision map to produce the fused image. Experimental results show that our algorithm detects the edges of the focus area accurately. Compared with other algorithms, the proposed algorithm improves the accuracy of distinguishing focused and defocused regions, and it retains the detailed texture features and edge structure of the source images.
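The decision-map stage described in the abstract (a per-pixel focus measure from structural gradients, a binary decision map, then weighted fusion) can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the sum-of-squared-gradients focus measure, the window size, and the function names are assumptions for illustration only.

```python
import numpy as np

def focus_measure(img, win=3):
    """Aggregate squared gradient magnitude over a local window.

    A simple stand-in for a structural-gradient focus measure:
    sharper (in-focus) regions have larger gradients.
    """
    gy, gx = np.gradient(img.astype(float))
    g = gx ** 2 + gy ** 2
    # Box-filter aggregation over a win x win neighborhood.
    pad = win // 2
    gp = np.pad(g, pad, mode="edge")
    out = np.zeros_like(g)
    for dy in range(win):
        for dx in range(win):
            out += gp[dy:dy + g.shape[0], dx:dx + g.shape[1]]
    return out

def fuse(src_a, src_b):
    """Weighted fusion driven by a binary decision map.

    For each pixel, keep the source whose local focus measure
    is larger; the decision map acts as the fusion weight.
    """
    fa, fb = focus_measure(src_a), focus_measure(src_b)
    decision = (fa >= fb).astype(float)
    return decision * src_a + (1.0 - decision) * src_b
```

In the paper the decision map is derived from the difference between the super-resolution-reconstructed image and the source image; here the focus measure is computed directly on the sources to keep the sketch short.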


Citations
Journal ArticleDOI

Infrared and visible image fusion based on QNSCT and Guided Filter

TL;DR: In this paper, a new fusion framework based on the Quaternion Non-Subsampled Contourlet Transform (QNSCT) and Guided Filter detail enhancement is designed to address the problems of inconspicuous infrared targets and poor background texture in infrared and visible image fusion.
Journal ArticleDOI

MFDetection: A highly generalized object detection network unified with multilevel heterogeneous image fusion

TL;DR: Zhang et al. proposed a multi-level fusion detection network (MFDetection) that fuses multi-scale feature maps extracted from visible and infrared images before applying them to detection, which greatly improves detection accuracy.
Journal ArticleDOI

Multi-focus image fusion dataset and algorithm test in real environment

TL;DR: A multi-focus image fusion dataset is presented and fusion algorithms are tested in a real environment, with promising results in terms of accuracy and efficiency.
Journal ArticleDOI

Sand dust image visibility enhancement algorithm via fusion strategy

Yazhong Si, +1 more
TL;DR: In this article, a novel image enhancement algorithm based on a fusion strategy is proposed, comprising two sequential components: sand removal via an improved Gaussian-model-based color correction algorithm, and dust elimination using a residual-based convolutional neural network (CNN).
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; an ensemble of these residual networks won 1st place in the ILSVRC 2015 classification task.
Journal ArticleDOI

Image Fusion With Guided Filtering

TL;DR: Experimental results demonstrate that the proposed method can obtain state-of-the-art performance for fusion of multispectral, multifocus, multimodal, and multiexposure images.
Journal ArticleDOI

A general framework for image fusion based on multi-scale transform and sparse representation

TL;DR: A general image fusion framework by combining MST and SR to simultaneously overcome the inherent defects of both the MST- and SR-based fusion methods is presented and experimental results demonstrate that the proposed fusion framework can obtain state-of-the-art performance.
Posted Content

ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks

TL;DR: This work thoroughly studies three key components of SRGAN – network architecture, adversarial loss, and perceptual loss – and improves each of them to derive an Enhanced SRGAN (ESRGAN), which achieves consistently better visual quality, with more realistic and natural textures, than SRGAN.
Journal ArticleDOI

Multi-focus image fusion with a deep convolutional neural network

TL;DR: A new multi-focus image fusion method is proposed that learns a direct mapping between source images and the focus map, using a deep convolutional neural network trained on high-quality image patches and their blurred versions to encode the mapping.