scispace - formally typeset
Author

Jiao Du

Bio: Jiao Du is an academic researcher at Guangzhou University. The author has contributed to research on the topics of image fusion and artificial intelligence, has an h-index of 10, and has co-authored 13 publications receiving 454 citations. Previous affiliations of Jiao Du include Chongqing Technology and Business University and Chongqing University of Posts and Telecommunications.

Papers
Journal ArticleDOI
TL;DR: In this review, methods in the field of medical image fusion are characterized in terms of image decomposition and reconstruction, image fusion rules, image quality assessment, and experiments on benchmark datasets.

238 citations

Journal ArticleDOI
TL;DR: Visual and statistical analyses show that the quality of the fused image is significantly improved, as measured by typical image quality assessment metrics: structural similarity, peak signal-to-noise ratio, standard deviation, and the tone-mapped image quality index.

157 citations

Journal ArticleDOI
TL;DR: A novel method is presented for fusing anatomical magnetic resonance imaging with functional (positron emission tomography or single-photon emission computed tomography) images; it achieves better performance than state-of-the-art fusion methods.
Abstract: A novel method for performing anatomical magnetic resonance imaging-functional (positron emission tomography or single photon emission computed tomography) image fusion is presented. The method merges specific feature information from input image signals of a single or multiple medical imaging modalities into a single fused image, while preserving more information and generating less distortion. The proposed method uses a local Laplacian filtering-based technique realized through a novel multi-scale system architecture. First, the input images are generated in a multi-scale image representation and are processed using local Laplacian filtering. Second, at each scale, the decomposed images are combined to produce fused approximate images using a local energy maximum scheme and produce the fused residual images using an information of interest-based scheme. Finally, a fused image is obtained using a reconstruction process that is analogous to that of conventional Laplacian pyramid transform. Experimental results computed using individual multi-scale analysis-based decomposition schemes or fusion rules clearly demonstrate the superiority of the proposed method through subjective observation as well as objective metrics. Furthermore, the proposed method can obtain better performance, compared with the state-of-the-art fusion methods.
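The multi-scale decompose-fuse-reconstruct pipeline described in this abstract can be sketched in plain numpy. This is a minimal illustration under stated assumptions, not the paper's implementation: 2x2 averaging stands in for local Laplacian filtering, coefficient magnitude stands in for the information-of-interest rule, and the squared value stands in for the local energy maximum scheme; all function names are invented here.

```python
import numpy as np

def downsample(img):
    # 2x2 average pooling as a stand-in for smoothing + decimation
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    # nearest-neighbour upsampling back to the target shape
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        low = downsample(cur)
        pyr.append(cur - upsample(low, cur.shape))  # residual (detail) level
        cur = low
    pyr.append(cur)  # coarsest approximation level
    return pyr

def fuse(img_a, img_b, levels=3):
    pa, pb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    fused = []
    for la, lb in zip(pa[:-1], pb[:-1]):
        # residuals: keep the coefficient with larger magnitude
        fused.append(np.where(np.abs(la) >= np.abs(lb), la, lb))
    ba, bb = pa[-1], pb[-1]
    # approximation: pick the pixel with larger energy (squared value)
    fused.append(np.where(ba ** 2 >= bb ** 2, ba, bb))
    # reconstruct coarse-to-fine, as in a conventional Laplacian pyramid
    out = fused[-1]
    for res in reversed(fused[:-1]):
        out = upsample(out, res.shape) + res
    return out
```

Because the pyramid here has the perfect-reconstruction property, fusing an image with itself returns the original image, which is a quick sanity check for any such scheme.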

117 citations

Journal ArticleDOI
TL;DR: A generative adversarial network based on a dual-stream attention mechanism (DSAGAN) for anatomical and functional image fusion; it consumes less fusion time and achieves better objective scores on the QAG, QEN, and QNIQE metrics.

35 citations

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed image fusion algorithms outperform the other fusion methods in the domain of MRI-CT and MRI-PET fusion.
Abstract: Highlights: a color saliency feature algorithm is applied to the PET image to obtain functional information; the Canny operator is applied to the MRI and CT images to obtain anatomical structural information; image entropy is selected as the weight for fusing smoothed images at different scales; the variance of the luminance image is selected as the weight for fusing detailed images at different scales. Two efficient image fusion algorithms are proposed for constructing a fused image by combining parallel features in a multi-scale local extrema scheme. First, the source image is decomposed into a series of smoothed and detailed images at different scales by the local extrema scheme. Second, the parallel features of edge and color are extracted to obtain the saliency maps: the edge saliency weighted map preserves structural information using the Canny edge detection operator, while the color saliency weighted map extracts color and luminance information using a context-aware operator. Third, the average and weighted-average schemes are used as the fusion rules for grouping the coefficients of the weighted maps obtained from the smoothed and detailed images. Finally, the fused image is reconstructed from the fused smoothed and fused detailed images. Experimental results demonstrate that the proposed algorithms outperform the other fusion methods in the domain of MRI-CT and MRI-PET fusion.
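The entropy and variance weighting named in the highlights can be sketched as follows. This is a hypothetical illustration: `fuse_smoothed` and `fuse_detailed` are names invented here, and the multi-scale decomposition is assumed to have been performed already.

```python
import numpy as np

def entropy(img, bins=64):
    # Shannon entropy of the grey-level histogram
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def fuse_smoothed(s_a, s_b):
    # entropy-weighted average of two smoothed (base) layers
    wa, wb = entropy(s_a), entropy(s_b)
    if wa + wb == 0.0:
        return (s_a + s_b) / 2.0
    return (wa * s_a + wb * s_b) / (wa + wb)

def fuse_detailed(d_a, d_b):
    # variance-weighted average of two detail layers
    wa, wb = float(np.var(d_a)), float(np.var(d_b))
    if wa + wb == 0.0:
        return (d_a + d_b) / 2.0
    return (wa * d_a + wb * d_b) / (wa + wb)
```

The intuition is that a layer with higher entropy carries more information and a detail layer with higher variance carries stronger structure, so each contributes proportionally more to the fused result.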

33 citations


Cited by
Journal ArticleDOI
TL;DR: This survey paper presents a systematic review of the DL-based pixel-level image fusion literature, summarizes the main difficulties that exist in conventional image fusion research, and discusses the advantages that DL can offer to address each of these problems.

493 citations

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed method can obtain more competitive performance in comparison to nine representative medical image fusion methods, leading to state-of-the-art results on both visual quality and objective assessment.
Abstract: As an effective way to integrate the information contained in multiple medical images with different modalities, medical image fusion has emerged as a powerful technique in various clinical applications such as disease diagnosis and treatment planning. In this paper, a new multimodal medical image fusion method in nonsubsampled shearlet transform (NSST) domain is proposed. In the proposed method, the NSST decomposition is first performed on the source images to obtain their multiscale and multidirection representations. The high-frequency bands are fused by a parameter-adaptive pulse-coupled neural network (PA-PCNN) model, in which all the PCNN parameters can be adaptively estimated by the input band. The low-frequency bands are merged by a novel strategy that simultaneously addresses two crucial issues in medical image fusion, namely, energy preservation and detail extraction. Finally, the fused image is reconstructed by performing inverse NSST on the fused high-frequency and low-frequency bands. The effectiveness of the proposed method is verified by four different categories of medical image fusion problems [computed tomography (CT) and magnetic resonance (MR), MR-T1 and MR-T2, MR and positron emission tomography, and MR and single-photon emission CT] with more than 80 pairs of source images in total. Experimental results demonstrate that the proposed method can obtain more competitive performance in comparison to nine representative medical image fusion methods, leading to state-of-the-art results on both visual quality and objective assessment.
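The role of the PCNN in the high-frequency rule can be illustrated with a deliberately simplified model. This sketch uses fixed, hand-chosen parameters rather than the paper's parameter-adaptive estimation, and `pcnn_fire_counts` / `fuse_highpass` are names invented here: each pixel is a neuron, and stronger stimuli cross the decaying threshold more often, so the coefficient whose neuron fires more is kept.

```python
import numpy as np

def pcnn_fire_counts(S, iterations=20, beta=0.5, alpha_e=0.7, v_e=20.0):
    # Simplified pulse-coupled neural network: one neuron per pixel.
    S = S.astype(float)
    Y = np.zeros_like(S)           # firing state
    E = np.full_like(S, S.max())   # dynamic threshold
    counts = np.zeros_like(S)
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    for _ in range(iterations):
        # linking input: weighted sum of neighbouring firings (3x3 window)
        padded = np.pad(Y, 1)
        L = np.zeros_like(S)
        for i in range(3):
            for j in range(3):
                L += kernel[i, j] * padded[i:i + S.shape[0], j:j + S.shape[1]]
        U = S * (1.0 + beta * L)            # internal activity
        Y = (U > E).astype(float)           # fire where activity beats the threshold
        E = np.exp(-alpha_e) * E + v_e * Y  # threshold decays, then jumps after firing
        counts += Y
    return counts

def fuse_highpass(band_a, band_b, **kw):
    # keep the coefficient whose neuron fired more often
    ca = pcnn_fire_counts(np.abs(band_a), **kw)
    cb = pcnn_fire_counts(np.abs(band_b), **kw)
    return np.where(ca >= cb, band_a, band_b)
```

In the paper this rule is applied per NSST high-frequency band, with the PCNN parameters estimated adaptively from each input band rather than fixed as above.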

381 citations

Proceedings ArticleDOI
10 Jul 2017
TL;DR: Experimental results demonstrate that the proposed convolutional neural networks method can achieve promising results in terms of both visual quality and objective assessment.
Abstract: Medical image fusion plays an increasingly critical role in many clinical applications by deriving complementary information from medical images with different modalities. In this paper, a medical image fusion method based on convolutional neural networks (CNNs) is proposed. In our method, a siamese convolutional network is adopted to generate a weight map which integrates the pixel activity information from the two source images. The fusion process is conducted in a multi-scale manner via image pyramids to be more consistent with human visual perception. In addition, a local similarity-based strategy is applied to adaptively adjust the fusion mode for the decomposed coefficients. Experimental results demonstrate that the proposed method can achieve promising results in terms of both visual quality and objective assessment.
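The weight-map fusion and the local-similarity mode switch can be sketched as below. This is an illustration, not the paper's method: the trained siamese CNN is not reproduced, so a |Laplacian|-based activity measure stands in for its weight map, and both function names are invented here.

```python
import numpy as np

def activity_weight(a, b):
    # Stand-in for the siamese CNN's weight map: relative local
    # activity measured by the Laplacian magnitude of each source.
    def lap_mag(img):
        p = np.pad(img.astype(float), 1, mode="edge")
        lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
               - 4.0 * p[1:-1, 1:-1])
        return np.abs(lap)
    ea, eb = lap_mag(a), lap_mag(b)
    return ea / (ea + eb + 1e-12)

def fuse_with_weight_map(a, b, w, sim_threshold=0.8):
    a, b = a.astype(float), b.astype(float)
    # per-pixel similarity: 1 where the sources agree, low where they differ
    sim = 2.0 * a * b / (a ** 2 + b ** 2 + 1e-12)
    weighted = w * a + (1.0 - w) * b      # similar regions: weighted average
    selected = np.where(w >= 0.5, a, b)   # dissimilar regions: winner-take-all
    return np.where(sim >= sim_threshold, weighted, selected)
```

Switching between averaging and selection based on local similarity is the key design choice: averaging preserves common content, while selection avoids washing out features that appear in only one modality.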

238 citations

Journal ArticleDOI
TL;DR: A novel multi-modality medical image fusion method based on phase congruency and local Laplacian energy that achieves competitive performance in both image quality and computational cost.
Abstract: Multi-modality image fusion provides more comprehensive and sophisticated information in modern medical diagnosis, remote sensing, video surveillance, and so on. This paper presents a novel multi-modality medical image fusion method based on phase congruency and local Laplacian energy. In the proposed method, the non-subsampled contourlet transform is performed on medical image pairs to decompose the source images into high-pass and low-pass subbands. The high-pass subbands are integrated by a phase congruency-based fusion rule that can enhance the detailed features of the fused image for medical diagnosis. A local Laplacian energy-based fusion rule is proposed for the low-pass subbands. The local Laplacian energy consists of the weighted local energy and the weighted sum of Laplacian coefficients, which describe the structured information and the detailed features of the source image pairs, respectively. Thus, the proposed fusion rule can simultaneously integrate two key components for the fusion of low-pass subbands. The fused high-pass and low-pass subbands are inversely transformed to obtain the fused image. In the comparative experiments, three categories of multi-modality medical image pairs are used to verify the effectiveness of the proposed method. The experimental results show that the proposed method achieves competitive performance in both image quality and computational cost.
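One plausible reading of the low-pass rule (weighted local energy plus a weighted sum of Laplacian magnitudes, compared between the two sources) can be sketched as follows; the window weights, kernels, and function names are assumptions of this sketch, not taken from the paper's code.

```python
import numpy as np

WINDOW = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0
LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def conv3(img, kernel):
    # 3x3 correlation with edge replication (the kernels here are symmetric,
    # so this equals convolution)
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def local_laplacian_energy(img):
    img = img.astype(float)
    wle = conv3(img ** 2, WINDOW)                       # weighted local energy
    wsl = conv3(np.abs(conv3(img, LAPLACIAN)), WINDOW)  # weighted sum of Laplacian magnitudes
    return wle + wsl

def fuse_lowpass(a, b):
    # keep the low-pass coefficient from the source with larger
    # local Laplacian energy
    return np.where(local_laplacian_energy(a) >= local_laplacian_energy(b), a, b)
```

Combining an energy term with a Laplacian term is what lets a single rule address both issues named in the abstract: the energy term favours energy preservation, the Laplacian term favours detail extraction.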

220 citations