scispace - formally typeset
Author

Bin Xiao

Bio: Bin Xiao is an academic researcher from Chongqing University of Posts and Telecommunications. The author has contributed to research in topics including image fusion and convolutional neural networks. The author has an h-index of 23 and has co-authored 83 publications receiving 1,501 citations. Previous affiliations of Bin Xiao include Chongqing University and Shaanxi Normal University.


Papers
Journal ArticleDOI
TL;DR: In this review, methods in the field of medical image fusion are characterized in terms of image decomposition and reconstruction, image fusion rules, image quality assessment, and experiments on benchmark datasets.

238 citations

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed p-CNN is competitive with or even outperforms the state-of-the-art methods in terms of both subjective visual perception and objective evaluation metrics.

213 citations

Journal ArticleDOI
TL;DR: Visual and statistical analyses show that the quality of the fused image can be significantly improved, as measured by typical image quality assessment metrics: structural similarity, peak signal-to-noise ratio, standard deviation, and the tone-mapped image quality index.

157 citations

Journal ArticleDOI
TL;DR: A new set of moments based on the Bessel function of the first kind, named Bessel-Fourier moments (BFMs), is proposed; the BFMs are more suitable than orthogonal Fourier-Mellin and Zernike moments for image analysis and rotation-invariant pattern recognition.

117 citations

Journal ArticleDOI
TL;DR: A novel method is presented for fusing anatomical (magnetic resonance imaging) and functional (positron emission tomography or single-photon emission computed tomography) images; it obtains better performance than state-of-the-art fusion methods.
Abstract: A novel method for performing anatomical magnetic resonance imaging-functional (positron emission tomography or single photon emission computed tomography) image fusion is presented. The method merges specific feature information from input image signals of a single or multiple medical imaging modalities into a single fused image, while preserving more information and generating less distortion. The proposed method uses a local Laplacian filtering-based technique realized through a novel multi-scale system architecture. First, the input images are generated in a multi-scale image representation and are processed using local Laplacian filtering. Second, at each scale, the decomposed images are combined to produce fused approximate images using a local energy maximum scheme and produce the fused residual images using an information of interest-based scheme. Finally, a fused image is obtained using a reconstruction process that is analogous to that of conventional Laplacian pyramid transform. Experimental results computed using individual multi-scale analysis-based decomposition schemes or fusion rules clearly demonstrate the superiority of the proposed method through subjective observation as well as objective metrics. Furthermore, the proposed method can obtain better performance, compared with the state-of-the-art fusion methods.

117 citations
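The multi-scale pipeline the abstract describes (decompose each input, fuse detail and approximation bands with separate rules, then reconstruct) can be sketched with an ordinary Laplacian pyramid. This is a simplified illustration only, not the paper's local-Laplacian-filtering implementation: the separable blur, level count, and one-pixel "local energy" rule are all stand-in assumptions.

```python
import numpy as np

def blur(img, k=5):
    # Separable moving-average blur: a cheap stand-in for a Gaussian kernel.
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, out)

def laplacian_pyramid(img, levels=3):
    # Each entry stores the detail (residual) removed by blurring; the final
    # entry is the remaining low-frequency approximation.
    pyr, current = [], img.astype(float)
    for _ in range(levels):
        low = blur(current)
        pyr.append(current - low)
        current = low
    pyr.append(current)
    return pyr

def fuse(img_a, img_b, levels=3):
    pyr_a = laplacian_pyramid(img_a, levels)
    pyr_b = laplacian_pyramid(img_b, levels)
    fused = []
    for a, b in zip(pyr_a[:-1], pyr_b[:-1]):
        # Detail bands: keep the coefficient with larger magnitude (a one-pixel
        # approximation of a local-energy-maximum rule).
        fused.append(np.where(np.abs(a) >= np.abs(b), a, b))
    # Approximation band: simple average as a stand-in for the paper's rule.
    fused.append((pyr_a[-1] + pyr_b[-1]) / 2)
    # Reconstruction: the bands telescope, so summing them inverts the pyramid
    # (this sketch keeps every level at full resolution, so no upsampling).
    return sum(fused)
```

Because the bands telescope, fusing an image with itself reconstructs it exactly, which is a quick sanity check on any pyramid-based fusion code.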


Cited by
Journal Article
TL;DR: In this article, the authors explore the effect of dimensionality on the nearest neighbor problem and show that under a broad set of conditions (much broader than independent and identically distributed dimensions), as dimensionality increases, the distance to the nearest data point approaches the distance of the farthest data point.
Abstract: We explore the effect of dimensionality on the nearest neighbor problem. We show that under a broad set of conditions (much broader than independent and identically distributed dimensions), as dimensionality increases, the distance to the nearest data point approaches the distance to the farthest data point. To provide a practical perspective, we present empirical results on both real and synthetic data sets that demonstrate that this effect can occur for as few as 10-15 dimensions. These results should not be interpreted to mean that high-dimensional indexing is never meaningful; we illustrate this point by identifying some high-dimensional workloads for which this effect does not occur. However, our results do emphasize that the methodology used almost universally in the database literature to evaluate high-dimensional indexing techniques is flawed, and should be modified. In particular, most such techniques proposed in the literature are not evaluated versus simple linear scan, and are evaluated over workloads for which nearest neighbor is not meaningful. Often, even the reported experiments, when analyzed carefully, show that linear scan would outperform the techniques being proposed on the workloads studied in high (10-15) dimensionality.

1,992 citations
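The distance-concentration effect the abstract describes is straightforward to reproduce numerically. The sketch below is illustrative rather than the paper's experimental setup: uniform data, Euclidean distance, the sample sizes, and the `contrast` helper name are all assumptions, chosen only to show the nearest and farthest distances converging as dimensionality grows.

```python
import numpy as np

def contrast(dim, n_points=1000, n_queries=50, seed=0):
    # Relative contrast (farthest - nearest) / nearest, averaged over random
    # query points, for uniform data in the unit hypercube. Values near 0 mean
    # the nearest and farthest neighbours are almost equally far away.
    rng = np.random.default_rng(seed)
    data = rng.random((n_points, dim))
    queries = rng.random((n_queries, dim))
    ratios = []
    for q in queries:
        d = np.linalg.norm(data - q, axis=1)  # Euclidean distances to all points
        ratios.append((d.max() - d.min()) / d.min())
    return float(np.mean(ratios))
```

In 2 dimensions the nearest neighbour is dramatically closer than the farthest point, so the contrast is large; in several hundred dimensions all pairwise distances cluster around the same value and the contrast collapses toward zero, which is exactly why nearest-neighbour queries (and indexes built for them) lose meaning on such workloads.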

Journal ArticleDOI
TL;DR: The experimental results show that the proposed model demonstrates better generalization ability than the existing image fusion models for fusing various types of images, such as multi-focus, infrared-visual, multi-modal medical and multi-exposure images.

524 citations

Journal ArticleDOI
TL;DR: This survey paper presents a systematic review of the DL-based pixel-level image fusion literature, summarizes the main difficulties that exist in conventional image fusion research, and discusses the advantages that DL can offer in addressing each of these problems.

493 citations

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed method can obtain more competitive performance in comparison to nine representative medical image fusion methods, leading to state-of-the-art results on both visual quality and objective assessment.
Abstract: As an effective way to integrate the information contained in multiple medical images with different modalities, medical image fusion has emerged as a powerful technique in various clinical applications such as disease diagnosis and treatment planning. In this paper, a new multimodal medical image fusion method in nonsubsampled shearlet transform (NSST) domain is proposed. In the proposed method, the NSST decomposition is first performed on the source images to obtain their multiscale and multidirection representations. The high-frequency bands are fused by a parameter-adaptive pulse-coupled neural network (PA-PCNN) model, in which all the PCNN parameters can be adaptively estimated by the input band. The low-frequency bands are merged by a novel strategy that simultaneously addresses two crucial issues in medical image fusion, namely, energy preservation and detail extraction. Finally, the fused image is reconstructed by performing inverse NSST on the fused high-frequency and low-frequency bands. The effectiveness of the proposed method is verified by four different categories of medical image fusion problems [computed tomography (CT) and magnetic resonance (MR), MR-T1 and MR-T2, MR and positron emission tomography, and MR and single-photon emission CT] with more than 80 pairs of source images in total. Experimental results demonstrate that the proposed method can obtain more competitive performance in comparison to nine representative medical image fusion methods, leading to state-of-the-art results on both visual quality and objective assessment.

381 citations
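The high-frequency fusion step that the abstract attributes to a parameter-adaptive PCNN can be illustrated with a simplified pulse-coupled neural network: each coefficient drives a neuron, firing neurons encourage their neighbours to fire, and the band whose neuron fires more often wins. The decomposition is omitted and the fixed parameter values and `fuse_band` helper below are illustrative stand-ins, not the paper's NSST/PA-PCNN implementation (where the parameters are estimated adaptively from the input band).

```python
import numpy as np

def pcnn_fire_counts(stim, iters=30, beta=0.2, a_l=1.0, a_e=0.5, v_l=1.0, v_e=20.0):
    # Simplified PCNN: stronger stimuli overcome the dynamic threshold sooner
    # and fire more often; firing neighbours raise a neuron's internal activity.
    s = stim / (stim.max() + 1e-12)
    L = np.zeros_like(s)   # linking input from neighbours
    E = np.ones_like(s)    # dynamic threshold
    Y = np.zeros_like(s)   # firing map
    counts = np.zeros_like(s)
    kernel = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    for _ in range(iters):
        # Weighted sum of neighbouring firings (3x3 window, zero-padded).
        pad = np.pad(Y, 1)
        link = sum(kernel[i, j] * pad[i:i + s.shape[0], j:j + s.shape[1]]
                   for i in range(3) for j in range(3))
        L = np.exp(-a_l) * L + v_l * link
        U = s * (1 + beta * L)            # internal activity
        Y = (U > E).astype(float)         # fire where activity beats threshold
        E = np.exp(-a_e) * E + v_e * Y    # firing sharply raises the threshold
        counts += Y
    return counts

def fuse_band(band_a, band_b):
    # Keep the coefficient whose neuron fired more often: a fixed-parameter
    # stand-in for the parameter-adaptive PCNN rule described in the paper.
    ca = pcnn_fire_counts(np.abs(band_a))
    cb = pcnn_fire_counts(np.abs(band_b))
    return np.where(ca >= cb, band_a, band_b)
```

Fusing two complementary bands (each strong where the other is zero) should keep the stronger coefficient everywhere, since neurons with zero stimulus never fire in this model.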