Author

Qiang Zhang

Bio: Qiang Zhang is an academic researcher from Xidian University. The author has contributed to research in topics including sparse approximation and object detection, has an h-index of 13, and has co-authored 23 publications receiving 1,051 citations.

Papers
Journal Article
TL;DR: A novel image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) is proposed to solve the fusion problem of multifocus images; it significantly outperforms the traditional discrete wavelet transform-based and discrete wavelet frame transform-based image fusion methods.

593 citations

Journal Article
TL;DR: A systematic review of the sparse representation (SR)-based multi-sensor image fusion literature, highlighting the pros and cons of each category of approaches and evaluating the impact of the three key algorithmic components on fusion performance across different applications.

297 citations

Journal Article
TL;DR: The proposed cross-modality deep feature learning framework effectively improves brain tumor segmentation performance compared with both baseline and state-of-the-art methods.

146 citations

Journal Article
TL;DR: This article revisits feature fusion for mining intrinsic RGB-T saliency patterns and proposes a novel deep feature fusion network, which consists of multi-scale, multi-modality, and multi-level feature fusion modules.
Abstract: While many RGB-based saliency detection algorithms have recently shown the capability of segmenting salient objects from an image, they still suffer from unsatisfactory performance when dealing with complex scenarios, insufficient illumination, or occluded appearances. To overcome this problem, this article studies RGB-T saliency detection, taking advantage of the thermal modality's robustness against illumination changes and occlusion. To achieve this goal, we revisit feature fusion for mining intrinsic RGB-T saliency patterns and propose a novel deep feature fusion network, which consists of multi-scale, multi-modality, and multi-level feature fusion modules. Specifically, the multi-scale feature fusion module captures rich contextual features from each modality, while the multi-modality and multi-level feature fusion modules integrate complementary features across modalities and across levels, respectively. To demonstrate the effectiveness of the proposed approach, we conduct comprehensive experiments on the RGB-T saliency detection benchmark. The experimental results demonstrate that our approach outperforms other state-of-the-art methods and conventional feature fusion modules by a large margin.

90 citations
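The abstract names three fusion modules but not their internal wiring. Below is a minimal PyTorch-style sketch of one plausible reading; the class names, channel widths, dilation rates, and concatenate-then-convolve fusion operators are all illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the multi-scale, multi-modality, and multi-level fusion
# idea; every design detail here is an assumption for illustration.
import torch
import torch.nn as nn

class MultiScaleFusion(nn.Module):
    """Capture context at several receptive fields within one modality."""
    def __init__(self, ch):
        super().__init__()
        # Parallel dilated branches stand in for "multi-scale context" (assumed).
        self.branches = nn.ModuleList([
            nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in (1, 2, 4)
        ])
        self.merge = nn.Conv2d(3 * ch, ch, 1)

    def forward(self, x):
        return self.merge(torch.cat([b(x) for b in self.branches], dim=1))

class MultiModalityFusion(nn.Module):
    """Integrate complementary RGB and thermal features at one level."""
    def __init__(self, ch):
        super().__init__()
        self.merge = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, rgb_feat, thermal_feat):
        return self.merge(torch.cat([rgb_feat, thermal_feat], dim=1))

class MultiLevelFusion(nn.Module):
    """Combine a deeper (coarse) feature with a shallower (fine) one."""
    def __init__(self, ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.merge = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, deep_feat, shallow_feat):
        # Upsample the deep feature so the two levels align spatially.
        return self.merge(torch.cat([self.up(deep_feat), shallow_feat], dim=1))
```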

Journal Article
TL;DR: A new strategy for guiding multi-level contextual information integration, in which feature maps and side outputs across layers are fully engaged, is proposed; shallower-level feature maps are guided by deeper-level side outputs to learn more accurate properties of the salient object.
Abstract: Integration of multi-level contextual information, such as feature maps and side outputs, is crucial for Convolutional Neural Network (CNN)-based salient object detection. However, most existing methods either simply concatenate multi-level feature maps or compute element-wise addition of multi-level side outputs, thus failing to take full advantage of them. In this paper, we propose a new strategy for guiding multi-level contextual information integration, where feature maps and side outputs across layers are fully engaged. Specifically, shallower-level feature maps are guided by the deeper-level side outputs to learn more accurate properties of the salient object. In turn, the deeper-level side outputs can be propagated to high-resolution versions with spatial details complemented by means of shallower-level feature maps. Moreover, a group convolution module is proposed with the aim of achieving highly discriminative feature maps, in which the backbone feature maps are divided into a number of groups and a convolution is then applied to the channels within each group. Finally, the group convolution module is incorporated into the guidance module to further promote its guidance role. Experiments on three public benchmark datasets verify the effectiveness and superiority of the proposed method over state-of-the-art methods.

73 citations
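The group convolution module described in this abstract, dividing the backbone channels into groups and convolving within each group, corresponds to standard grouped convolution, which PyTorch exposes through the `groups` argument of `nn.Conv2d`. A minimal sketch, with the channel count and group number assumed for illustration:

```python
import torch
import torch.nn as nn

# Grouped convolution: with groups=4, the 64 input channels are split into
# four groups of 16, and each group is convolved independently.
backbone_feat = torch.randn(1, 64, 56, 56)   # (N, C, H, W); sizes assumed
group_conv = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=4)
out = group_conv(backbone_feat)
print(out.shape)                             # torch.Size([1, 64, 56, 56])
```

Because the grouping is native to the convolution layer, the module itself reduces to a single layer; the paper's contribution lies in how it is wired into the guidance module.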


Cited by
Journal Article
TL;DR: Experimental results demonstrate that the proposed method obtains state-of-the-art performance for fusion of multispectral, multifocus, multimodal, and multiexposure images.
Abstract: A fast and effective image fusion method is proposed for creating a highly informative fused image by merging multiple images. The proposed method is based on a two-scale decomposition of an image into a base layer containing large-scale variations in intensity and a detail layer capturing small-scale details. A novel guided filtering-based weighted average technique is proposed to make full use of spatial consistency when fusing the base and detail layers. Experimental results demonstrate that the proposed method obtains state-of-the-art performance for fusion of multispectral, multifocus, multimodal, and multiexposure images.

1,300 citations
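As a rough illustration of the two-scale pipeline described above, here is a NumPy/SciPy sketch: a box-filter base/detail decomposition, a smoothed-Laplacian saliency map, and guided-filter refinement of the weight map. The filter sizes, saliency measure, and guided-filter parameters are assumptions loosely modeled on this style of method, not the paper's exact settings; inputs are assumed to be float grayscale images in [0, 1].

```python
import numpy as np
from scipy.ndimage import uniform_filter, laplace, gaussian_filter

def guided_filter(I, p, r=8, eps=1e-3):
    """Plain gray-guidance guided filter; r and eps are assumed settings."""
    size = 2 * r + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    var_I = uniform_filter(I * I, size) - mean_I ** 2
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)

def fuse_two_scale(img1, img2):
    """Two-scale fusion with guided-filter-refined weights (illustrative)."""
    # Two-scale decomposition: base = large-scale intensity, detail = the rest.
    base1, base2 = uniform_filter(img1, 31), uniform_filter(img2, 31)
    det1, det2 = img1 - base1, img2 - base2
    # Saliency: smoothed absolute Laplacian response (assumed measure).
    s1 = gaussian_filter(np.abs(laplace(img1)), 5)
    s2 = gaussian_filter(np.abs(laplace(img2)), 5)
    w = (s1 >= s2).astype(np.float64)        # hard weight map for img1
    # Refine weights via guided filtering for spatial consistency: a large,
    # smooth filter for the base layer, a small, sharp one for the details.
    w_base = np.clip(guided_filter(img1, w, r=45, eps=0.3), 0, 1)
    w_det = np.clip(guided_filter(img1, w, r=7, eps=1e-6), 0, 1)
    fused_base = w_base * base1 + (1 - w_base) * base2
    fused_det = w_det * det1 + (1 - w_det) * det2
    return fused_base + fused_det
```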

Journal Article
TL;DR: A general image fusion framework combining the multi-scale transform (MST) and sparse representation (SR) is presented to simultaneously overcome the inherent defects of both MST- and SR-based fusion methods; experimental results demonstrate that the proposed framework obtains state-of-the-art performance.

952 citations
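To make the MST+SR combination concrete, the sketch below uses a single-level low-pass/high-pass split as a stand-in for the multi-scale transform, fuses the low-pass band by sparse coding over a fixed DCT dictionary with a max-L1 selection rule, and fuses the high-pass band by max-absolute selection. Everything here (the dictionary, patch size, sparsity level, and one-level transform) is a simplifying assumption; the actual framework uses a full multi-scale decomposition and typically a learned dictionary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.linear_model import orthogonal_mp

def dct_dictionary(patch=8, atoms=16):
    """Overcomplete 2-D DCT dictionary (a common fixed choice for sketches)."""
    base = np.zeros((patch, atoms))
    for k in range(atoms):
        v = np.cos(np.arange(patch) * k * np.pi / atoms)
        if k > 0:
            v -= v.mean()
        base[:, k] = v / np.linalg.norm(v)
    return np.kron(base, base)               # (patch*patch, atoms*atoms)

def fuse_mst_sr(img1, img2, patch=8, k_sparse=4):
    """One-level stand-in for the MST+SR framework (illustrative only).
    Assumes grayscale images with dimensions divisible by the patch size."""
    low1, low2 = gaussian_filter(img1, 2), gaussian_filter(img2, 2)
    high1, high2 = img1 - low1, img2 - low2
    # High-pass fusion: keep the coefficient with the larger magnitude.
    fused_high = np.where(np.abs(high1) >= np.abs(high2), high1, high2)
    # Low-pass fusion: sparse-code non-overlapping patches, keep the code
    # with the larger L1 activity, then reconstruct from the dictionary.
    D = dct_dictionary(patch)
    fused_low = np.zeros_like(low1)
    H, W = img1.shape
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            p1 = low1[i:i+patch, j:j+patch].ravel()
            p2 = low2[i:i+patch, j:j+patch].ravel()
            m1, m2 = p1.mean(), p2.mean()
            a1 = orthogonal_mp(D, p1 - m1, n_nonzero_coefs=k_sparse)
            a2 = orthogonal_mp(D, p2 - m2, n_nonzero_coefs=k_sparse)
            if np.abs(a1).sum() >= np.abs(a2).sum():
                rec, m = D @ a1, m1
            else:
                rec, m = D @ a2, m2
            fused_low[i:i+patch, j:j+patch] = rec.reshape(patch, patch) + m
    return fused_low + fused_high
```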

Journal Article
TL;DR: This paper proposes a novel method, termed FusionGAN, to fuse infrared and visible information using a generative adversarial network; it establishes an adversarial game between a generator and a discriminator, where the generator aims to produce a fused image that retains the major infrared intensities together with additional visible gradients.

853 citations
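The TL;DR describes the generator's objective: retain infrared intensities while injecting visible gradients, under an adversarial game. A hedged PyTorch sketch of that loss structure follows; the finite-difference gradient operator, the weight `xi`, and the least-squares adversarial form are assumptions about the setup, not a verified reproduction of the paper's code.

```python
import torch
import torch.nn.functional as F

def gradient(img):
    """Horizontal and vertical finite-difference gradients (assumed operator)."""
    dx = img[..., :, 1:] - img[..., :, :-1]
    dy = img[..., 1:, :] - img[..., :-1, :]
    return dx, dy

def generator_content_loss(fused, ir, visible, xi=5.0):
    """Content term: keep infrared intensities and visible gradients.
    The trade-off weight xi is an assumed setting."""
    intensity = F.mse_loss(fused, ir)
    fdx, fdy = gradient(fused)
    vdx, vdy = gradient(visible)
    grad = F.mse_loss(fdx, vdx) + F.mse_loss(fdy, vdy)
    return intensity + xi * grad

def generator_adversarial_loss(disc_scores_on_fused, c=1.0):
    """Least-squares adversarial term: the generator pushes the discriminator
    to label fused images as 'visible' (target c, assumed to be 1)."""
    return torch.mean((disc_scores_on_fused - c) ** 2)
```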

Journal Article
Jiayi Ma, Yong Ma, Chang Li
TL;DR: This survey comprehensively covers the existing methods and applications for the fusion of infrared and visible images, and can serve as a reference for researchers in infrared and visible image fusion and related fields.

849 citations