Author

Jian Ma

Bio: Jian Ma is an academic researcher at Hebei University whose work spans filtering (signal processing) and image fusion. The author has an h-index of 2 and has co-authored 3 publications receiving 10 citations.

Papers
Journal ArticleDOI
TL;DR: A multi-focus image fusion algorithm based on a dual convolutional neural network (DualCNN), in which the focus area is detected from super-resolved images; the method achieves better visual perception according to both subjective evaluation and objective indexes.
Abstract: Multi-focus image fusion is an image processing technique that generates an integrated image by merging multiple images of the same scene with different focus areas. For most fusion methods, detection of the focus area is a critical step. In this paper, we propose a multi-focus image fusion algorithm based on a dual convolutional neural network (DualCNN), in which the focus area is detected from super-resolved images. First, the source image is input into a DualCNN to restore the details and structure from its super-resolved image and to improve the contrast of the source image. Second, a bilateral filter is used to reduce noise in the fused image, and a guided filter is used to detect the focus area of the image and refine the decision map. Finally, the fused image is obtained by weighting the source images according to the decision map. Experimental results show that our algorithm retains image details well and maintains spatial consistency. Compared with existing methods across multiple groups of experiments, our algorithm achieves better visual perception according to both subjective evaluation and objective indexes.
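The decision-map pipeline described in the abstract (focus detection, map refinement, weighted recombination) can be sketched in a few lines of NumPy. This is a minimal stand-in, not the authors' implementation: windowed Laplacian energy replaces the DualCNN-based focus detection, and a simple box smoothing stands in for the guided-filter refinement of the decision map.

```python
import numpy as np

def box_sum(x, k):
    """Sum over a k x k neighborhood (wrap-around borders; fine for a sketch)."""
    r = k // 2
    out = np.zeros_like(x)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(x, dy, axis=0), dx, axis=1)
    return out

def laplacian_energy(img):
    """Absolute 4-neighbor Laplacian response as a simple per-pixel focus measure."""
    return np.abs(4 * img
                  - np.roll(img, 1, 0) - np.roll(img, -1, 0)
                  - np.roll(img, 1, 1) - np.roll(img, -1, 1))

def fuse(a, b, k=7):
    """Fuse two registered grayscale images of the same scene.

    1. Score per-pixel focus by windowed Laplacian energy.
    2. Build a binary decision map (which source is sharper).
    3. Smooth the map (stand-in for guided-filter refinement).
    4. Recombine the sources with the resulting per-pixel weights.
    """
    ea = box_sum(laplacian_energy(a), k)
    eb = box_sum(laplacian_energy(b), k)
    w = (ea >= eb).astype(float)      # binary decision map
    w = box_sum(w, k) / (k * k)       # refined (smoothed) decision map
    return w * a + (1.0 - w) * b

# Demo: each source is sharp (checkerboard) in one half, flat elsewhere.
checker = (np.indices((32, 32)).sum(axis=0) % 2).astype(float)
flat = np.full((32, 32), 0.5)
src_a = np.where(np.arange(32) < 16, checker, flat)   # sharp on the left
src_b = np.where(np.arange(32) < 16, flat, checker)   # sharp on the right
fused = fuse(src_a, src_b)
```

Away from the focus boundary, the fused result reproduces the sharp half of each source; near the boundary, the smoothed decision map blends the two.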

18 citations

Journal ArticleDOI
12 Jan 2021
TL;DR: The fusion algorithm using RGF and CNN-based feature mapping combined with NNM can improve fusion effects and suppress artifacts and blocking effects in the fused result.
Abstract: BACKGROUND Medical image fusion is very important for the diagnosis and treatment of diseases. In recent years, a number of multi-modal medical image fusion algorithms have emerged that present fine contexts for disease diagnosis more clearly and conveniently. Recently, nuclear norm minimization and deep learning have been used effectively in image processing. METHODS A multi-modality medical image fusion method is proposed that uses a rolling guidance filter (RGF), convolutional neural network (CNN) based feature mapping, and nuclear norm minimization (NNM). First, we decompose the medical images into base-layer and detail-layer components using RGF. Next, we obtain the basic fused image through a pretrained CNN model, which extracts the significant characteristics of the base-layer components; the activity level measurement is computed from the regional energy of the CNN-based fusion maps. Then, a detail fused image is obtained by using NNM to fuse the detail-layer components. Finally, the basic and detail fused images are integrated into the fused result. RESULTS Comparison with the most advanced fusion algorithms indicates that the proposed algorithm achieves the best performance in both visual evaluation and objective metrics. CONCLUSION The fusion algorithm using RGF and CNN-based feature mapping, combined with NNM, can improve fusion effects and suppress artifacts and blocking effects in the fused results.
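The two-scale pipeline above (decompose, fuse each scale, recompose) can be sketched with plain NumPy. This is a hypothetical stand-in, not the paper's method: iterated box smoothing replaces the rolling guidance filter, averaging replaces the CNN-guided base-layer fusion, and max-absolute selection replaces NNM for the detail layers. Singular value soft-thresholding, the primitive behind nuclear norm minimization, is shown separately.

```python
import numpy as np

def smooth(img, k=5, iters=3):
    """Iterated box blur; a crude stand-in for the rolling guidance filter."""
    r = k // 2
    for _ in range(iters):
        acc = np.zeros_like(img)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                acc += np.roll(np.roll(img, dy, 0), dx, 1)
        img = acc / (k * k)
    return img

def fuse_two_scale(a, b):
    """Decompose each source into base + detail, fuse per scale, recompose."""
    base_a, base_b = smooth(a), smooth(b)
    detail_a, detail_b = a - base_a, b - base_b
    fused_base = 0.5 * (base_a + base_b)          # stand-in for CNN-guided base fusion
    fused_detail = np.where(np.abs(detail_a) >= np.abs(detail_b),
                            detail_a, detail_b)   # stand-in for NNM detail fusion
    return fused_base + fused_detail

def singular_value_threshold(m, tau):
    """Soft-threshold singular values: the proximal operator of the nuclear norm."""
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    return u @ np.diag(np.maximum(s - tau, 0.0)) @ vt
```

Because base and detail layers sum back to the source, fusing two identical images returns the image unchanged, a useful sanity check for any such decomposition.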

14 citations

Journal ArticleDOI
TL;DR: A multi-focus color image fusion algorithm based on low-vision image reconstruction and focus feature extraction, which improves the recognition accuracy of focused and defocused areas.
Abstract: Multi-focus image fusion is the process of generating a fused image by merging multiple images of the same scene with different degrees of focus. In multi-focus image fusion, the accuracy of the detected focus area is critical for improving the quality of the fused image. Exploiting the structural gradient, we propose a multi-focus color image fusion algorithm based on low-vision image reconstruction and focus feature extraction. First, the source images are input into a deep residual network (ResNet) and reconstructed by a super-resolution method. Next, an end-to-end restoration model improves the image details and maintains the edges of the image through a rolling guidance filter. A difference image is then obtained from the reconstructed image and the source image, and the fusion decision map is generated by a structural-gradient-based focus area detection method. Finally, the source images and the fusion decision map are used for weighted fusion to generate the fused image. Experimental results show that our algorithm is quite accurate in detecting the edges of the focus area. Compared with other algorithms, the proposed algorithm improves the recognition accuracy of focused and defocused areas, and it retains the detailed texture features and edge structure of the source images well.
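The difference-image idea above can be illustrated with a minimal NumPy sketch. This is not the paper's pipeline: a box blur stands in for the ResNet reconstruction round-trip, and a hard per-pixel choice stands in for the structural-gradient decision map. The principle is the same: where a source is in focus, it differs most from its smoothed reconstruction.

```python
import numpy as np

def blur(img, k=5):
    """Box blur standing in for the low-vision reconstruction round-trip."""
    r = k // 2
    acc = np.zeros_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += np.roll(np.roll(img, dy, 0), dx, 1)
    return acc / (k * k)

def focus_map(img, k=7):
    """Windowed magnitude of the difference image: large where the source is sharp."""
    diff = np.abs(img - blur(img))
    r = k // 2
    acc = np.zeros_like(diff)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += np.roll(np.roll(diff, dy, 0), dx, 1)
    return acc

def fuse_by_difference(a, b):
    """Pick, per pixel, the source whose difference image is stronger."""
    decision = focus_map(a) >= focus_map(b)   # binary fusion decision map
    return np.where(decision, a, b)

# Demo: each source is sharp (checkerboard) in one half, flat elsewhere.
checker = (np.indices((32, 32)).sum(axis=0) % 2).astype(float)
flat = np.full((32, 32), 0.5)
src_a = np.where(np.arange(32) < 16, checker, flat)
src_b = np.where(np.arange(32) < 16, flat, checker)
fused = fuse_by_difference(src_a, src_b)
```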

10 citations


Cited by
Journal ArticleDOI
TL;DR: Based on two coupled neural P (CNP) systems with local topology, a multi-focus image fusion framework in the non-subsampled contourlet transform (NSCT) domain is developed, in which the two CNP systems control the fusion of the low-frequency coefficients.

20 citations

Journal ArticleDOI
TL;DR: A novel weighted-term multimodality anatomical medical image fusion method that eliminates distortions from the source images and then extracts two pieces of crucial information, the local contrast and the salient structure, to obtain the final weight map.
Abstract: Image modalities, such as computed tomography (CT), magnetic resonance imaging (MRI), single‐photon emission computed tomography (SPECT), and so on, reflect various levels of details about objects of interest that help medical practitioners to examine patients' diseases from different perspectives. A single medical image, at times, may not be sufficient for making a critical decision; therefore, providing detailed information from a different perspective may help in making a better decision. Image fusion techniques play a vital role in this regard by combining important details from different medical images into a single, information-enhanced image. In this article, we present a novel weighted-term multimodality anatomical medical image fusion method. The proposed method, as a first step, eliminates the distortions from the source images and afterward extracts two pieces of crucial information: the local contrast and the salient structure. Both the local contrast and salient structure are later combined to obtain the final weight map. The obtained weights are then passed through a fast guided filter to remove the discontinuities and noise. Lastly, the refined weight map is fused with the source images using pyramid decomposition to get the final fused image. The proposed method is assessed and compared both qualitatively and quantitatively with state‐of‐the‐art techniques. The results illustrate the performance superiority and efficiency of the proposed method.
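The weight-map construction described above can be sketched as follows. This is a simplified stand-in, not the paper's implementation: windowed deviation from the local mean approximates the local contrast term, deviation from the global mean approximates the saliency term, box smoothing replaces the fast guided filter, and direct per-pixel weighted averaging replaces the pyramid decomposition.

```python
import numpy as np

def box(img, k):
    """Mean over a k x k neighborhood (wrap-around borders; fine for a sketch)."""
    r = k // 2
    acc = np.zeros_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += np.roll(np.roll(img, dy, 0), dx, 1)
    return acc / (k * k)

def weight_map(img, k=7):
    """Local contrast (deviation from the local mean) plus global saliency."""
    local_mean = box(img, k)
    contrast = box(np.abs(img - local_mean), k)
    saliency = np.abs(img - img.mean())
    return contrast + saliency

def fuse_weighted(sources, k=7, eps=1e-8):
    """Normalize per-source weight maps, smooth them, and blend the sources."""
    # Box smoothing stands in for the fast-guided-filter refinement step.
    weights = [box(weight_map(s, k), k) + eps for s in sources]
    total = sum(weights)
    return sum(w / total * s for w, s in zip(weights, sources))

# Demo: fusing an image with itself must return the image unchanged.
rng = np.random.default_rng(1)
img = rng.random((24, 24))
identical = fuse_weighted([img, img])
```

Because the normalized weights form a per-pixel convex combination, the fused value always lies between the minimum and maximum of the source values at that pixel.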

16 citations

Journal ArticleDOI
01 Mar 2022-Optik
TL;DR: A novel infrared and visible image fusion method based on visibility enhancement and hybrid multi-scale decomposition, in which the pre-processed images are decomposed by an ℓ1−ℓ0 decomposition model to obtain base and detail layers.

16 citations

Journal ArticleDOI
TL;DR: An infrared and visible image fusion method based on an iterative differential thermal information filter is proposed, generating a fused image with the salient thermal targets of the infrared image and the detailed information of the visible image.

14 citations

Journal ArticleDOI
TL;DR: In this paper, an iterative differential thermal information filter is proposed to generate a fusion image with the salient thermal targets of the infrared image and the detailed information of the visible image; the fusion image is obtained by a weighted-averaging strategy.

12 citations