Author

Qiang Guo

Bio: Qiang Guo is an academic researcher from Shandong University of Finance and Economics. The author has contributed to research in topics: Interpolation & Singular value decomposition. The author has an h-index of 9 and has co-authored 19 publications receiving 375 citations. Previous affiliations of Qiang Guo include MediaTech Institute & Shandong University.

Papers
Journal ArticleDOI
TL;DR: The experimental results demonstrate that the proposed method effectively reduces noise and is competitive with current state-of-the-art denoising algorithms in terms of both quantitative metrics and subjective visual quality.
Abstract: Nonlocal self-similarity of images has attracted considerable interest in the field of image processing and has led to several state-of-the-art image denoising algorithms, such as block-matching and 3-D filtering (BM3D), principal component analysis with local pixel grouping, patch-based locally optimal Wiener filtering, and spatially adaptive iterative singular-value thresholding. In this paper, we propose a computationally simple denoising algorithm using nonlocal self-similarity and low-rank approximation (LRA). The proposed method consists of three basic steps. First, our method groups similar image patches with the block-matching technique, so that each group of similar patches forms a low-rank matrix. Next, each group is factorized by singular value decomposition (SVD) and estimated by retaining only the few largest singular values and their corresponding singular vectors. Finally, an initial denoised image is generated by aggregating all processed patches. For low-rank matrices, SVD provides the optimal energy compaction in the least-squares sense, and the proposed method exploits this property to obtain an LRA of each group of similar patches. Unlike other SVD-based methods, performing the LRA in the SVD domain avoids learning a local basis for representing image patches, which is usually computationally expensive. The experimental results demonstrate that the proposed method effectively reduces noise and is competitive with current state-of-the-art denoising algorithms in terms of both quantitative metrics and subjective visual quality.
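The key step the abstract describes, a truncated SVD of each group of similar patches, can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the block-matching stage is omitted, and the patch size, group size, and retained rank k are arbitrary assumptions.

```python
import numpy as np

def denoise_patch_group(group, k):
    """Low-rank estimate of a group of similar patches.

    group: (n_patches, patch_dim) matrix whose rows are vectorized
           similar patches, so the matrix is approximately low rank.
    k:     number of singular values to retain.
    """
    U, s, Vt = np.linalg.svd(group, full_matrices=False)
    # Keeping the k largest singular values gives the best rank-k
    # approximation in the least-squares sense (Eckart-Young).
    return (U[:, :k] * s[:k]) @ Vt[:k]

# Toy usage: 20 noisy copies of one 8x8 patch form a near rank-1 group.
rng = np.random.default_rng(0)
clean = rng.random(64)                           # one vectorized 8x8 patch
group = np.tile(clean, (20, 1)) + 0.1 * rng.standard_normal((20, 64))
denoised = denoise_patch_group(group, k=1)
print(np.abs(denoised - clean).mean())           # well below the 0.1 noise level
```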

228 citations

Journal ArticleDOI
TL;DR: A two-stage low-rank approximation (TSLRA) scheme is designed to recover image structures and refine texture details of corrupted images; it is comparable and even superior to some state-of-the-art inpainting algorithms.
Abstract: To recover corrupted pixels, traditional inpainting methods based on low-rank priors generally solve a convex optimization problem with an iterative singular value shrinkage algorithm. In this paper, we propose a simple method for image inpainting using low-rank approximation, which avoids this time-consuming iterative shrinkage. Specifically, if similar patches of a corrupted image are identified and reshaped as vectors, a patch matrix can be constructed by collecting these patch vectors. Because its columns are highly linearly correlated, this patch matrix is low rank. Instead of an iterative singular value shrinkage scheme, the proposed method uses low-rank approximation with truncated singular values to derive a closed-form estimate for each patch matrix. Based on the observation that there is a distinct gap in the singular spectrum of a patch matrix, the rank of each patch matrix is determined empirically by a heuristic procedure. Inspired by inpainting algorithms with component decomposition, a two-stage low-rank approximation (TSLRA) scheme is designed to recover image structures and refine texture details of corrupted images. Experimental results on various inpainting tasks demonstrate that the proposed method is comparable and even superior to some state-of-the-art inpainting algorithms.
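A sketch of the closed-form estimate and the gap-based rank choice described above. The specific heuristic here (pick the rank at the largest ratio between consecutive singular values) is an illustrative stand-in for the paper's procedure, and the toy matrix sizes are arbitrary:

```python
import numpy as np

def rank_from_spectral_gap(s, eps=1e-12):
    """Pick the rank at the largest ratio between consecutive singular values."""
    ratios = s[:-1] / (s[1:] + eps)
    return int(np.argmax(ratios)) + 1

def estimate_patch_matrix(M):
    """Closed-form low-rank estimate: one truncated SVD, no iterative shrinkage."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    k = rank_from_spectral_gap(s)
    return (U[:, :k] * s[:k]) @ Vt[:k]

# Toy usage: a rank-2 patch matrix with small corruption in its entries.
rng = np.random.default_rng(1)
Q1, _ = np.linalg.qr(rng.standard_normal((40, 2)))
Q2, _ = np.linalg.qr(rng.standard_normal((64, 2)))
M = (Q1 * [10.0, 8.0]) @ Q2.T + 0.01 * rng.standard_normal((40, 64))
print(rank_from_spectral_gap(np.linalg.svd(M, compute_uv=False)))  # -> 2
```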

102 citations

Journal ArticleDOI
01 Jun 2018
TL;DR: A fast and robust weakly supervised pulmonary nodule segmentation method based on a modified self-adaptive FCM algorithm, which uses a cluster-category relation matrix to calculate the category index of every pixel via Bayesian theory and the PSOm algorithm.
Abstract: One of the key problems of computer-aided diagnosis is to segment specific anatomical structures in tomographic images as quickly and accurately as possible, an important step toward identifying pathologically changed tissues. Segmentation accuracy has a significant impact on disease diagnosis as well as on therapeutic efficacy. This paper presents a fast and robust weakly supervised pulmonary nodule segmentation method based on a modified self-adaptive FCM algorithm. To improve the traditional FCM, we first introduce an enhanced objective function that computes the membership value according to both the grayscale similarity and the spatial similarity between central pixels and their neighbors. Then, a probability relation matrix between clusters and categories is constructed using a small amount of prior knowledge learned from training samples. Based on this matrix, we realize weakly supervised pulmonary nodule segmentation for unlabeled lung CT images. More specifically, the proposed method uses the relation matrix to calculate the category index of every pixel via Bayesian theory and the PSOm algorithm. Quantitative experimental results on a test dataset of 115 2-D clinical CT images demonstrate the accuracy, efficiency, and generality of the proposed weakly supervised strategy for pulmonary nodule segmentation.
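The FCM modification the abstract describes, a membership update that mixes grayscale similarity with spatial similarity to neighbors, can be sketched as follows. The 3x3 neighborhood mean, the weight lam, and the two-cluster toy image are illustrative assumptions, and the Bayesian/PSOm labeling stage is omitted:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fcm_memberships(img, centers, m=2.0, lam=0.5, eps=1e-9):
    """Membership update for a spatially regularized FCM step.

    The distance of a pixel to each cluster center combines its own
    grayscale distance with the distance of its 3x3 neighborhood mean,
    so isolated noisy pixels are pulled toward their surroundings.
    """
    local = uniform_filter(img, size=3)                     # neighborhood mean
    d = np.stack([(img - c) ** 2 + lam * (local - c) ** 2
                  for c in centers])                        # (clusters, H, W)
    d = np.maximum(d, eps)
    inv = d ** (-1.0 / (m - 1.0))
    return inv / inv.sum(axis=0)                            # memberships sum to 1

# Toy usage on a noisy two-region image.
rng = np.random.default_rng(2)
img = np.zeros((32, 32))
img[:, 16:] = 1.0
img += 0.2 * rng.standard_normal(img.shape)
u = fcm_memberships(img, centers=[0.0, 1.0])
labels = u.argmax(axis=0)                                   # hard segmentation
```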

28 citations

Journal ArticleDOI
TL;DR: An energy optimization model for image denoising, called adaptive sparse coding on a principal component analysis dictionary (ASC-PCA), and a filter-based iterative shrinkage algorithm comprising filter-based back-projection and shrinkage stages are proposed.
Abstract: Sparse coding is a popular technique in image denoising. However, owing to the ill-posedness of denoising problems, it is difficult to obtain an accurate estimation of the true code. To improve denoising performance, we collect the sparse coding errors of a dataset on a principal component analysis dictionary, make an assumption on the probability of errors and derive an energy optimization model for image denoising, called adaptive sparse coding on a principal component analysis dictionary (ASC-PCA). The new method considers two aspects. First, with a PCA dictionary-related observation of the probability distributions of sparse coding errors on different dimensions, the regularization parameter balancing the fidelity term and the nonlocal constraint can be adaptively determined, which is critical for obtaining satisfying results. Furthermore, an intuitive interpretation of the constructed model is discussed. Second, to solve the new model effectively, a filter-based iterative shrinkage algorithm containing the filter-based back-projection and shrinkage stages is proposed. The filter in the back-projection stage plays an important role in solving the model. As demonstrated by extensive experiments, the proposed method performs optimally in terms of both quantitative and visual measurements.
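The solver described, alternating filter-based back-projection with shrinkage, follows the classic iterative shrinkage template. A minimal sketch with loudly labeled stand-ins: a Gaussian filter for the back-projection filter, a generic orthonormal dictionary D in place of the learned patch-level PCA dictionary, and arbitrary threshold and iteration settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def iterative_shrinkage(y, D, tau=0.05, sigma=1.0, n_iter=20):
    """Filter-based iterative shrinkage sketch.

    y : noisy image
    D : orthonormal dictionary whose columns span the vectorized image
        (a stand-in for the paper's learned patch-level PCA dictionary)
    Alternates (1) filter-based back-projection of the residual and
    (2) soft shrinkage of the coefficients in the transform domain.
    """
    x = y.copy()
    for _ in range(n_iter):
        # (1) back-projection: feed the filtered residual back in.
        x = x + gaussian_filter(y - x, sigma)
        # (2) shrinkage: soft-threshold the transform coefficients.
        c = D.T @ x.ravel()
        c = np.sign(c) * np.maximum(np.abs(c) - tau, 0.0)
        x = (D @ c).reshape(y.shape)
    return x

# Toy usage with the identity as dictionary (shrinkage in pixel space).
rng = np.random.default_rng(3)
y = np.zeros((16, 16))
y[4:12, 4:12] = 1.0
y += 0.3 * rng.standard_normal(y.shape)
x_hat = iterative_shrinkage(y, D=np.eye(y.size))
```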

25 citations

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed resolution enhancement method outperforms conventional interpolation methods and is competitive with current state-of-the-art methods in terms of both quantitative metrics and visual quality.
Abstract: Medical images have high information redundancy, which can be exploited to improve image analysis and visualization for healthcare purposes. In order to recover a high-resolution (HR) image from its low-resolution (LR) counterpart, this paper proposes a resolution enhancement method that uses nonlocal self-similar redundancy and a low-rank prior. The proposed method consists of three main steps. First, an initial HR image is generated by nonlocal interpolation, which is based on the self-similarity of medical images. Next, the low-rank minimum variance estimator is exploited to reconstruct the HR image. Finally, we iteratively apply the subsampling consistency constraint and perform the low-rank reconstruction to refine the reconstructed HR result. Experimental results on MR and CT images demonstrate that the proposed method outperforms conventional interpolation methods and is competitive with current state-of-the-art methods in terms of both quantitative metrics and visual quality.
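The iterative refinement loop in the third step, enforce consistency with the LR observation and then re-apply the low-rank step, can be sketched as below. The simple decimation model, bilinear upsampling of the residual, and truncated SVD in place of the paper's low-rank minimum variance estimator are all illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import zoom

def low_rank(x, k):
    """Truncated-SVD stand-in for the low-rank reconstruction step."""
    U, s, Vt = np.linalg.svd(x, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

def refine_hr(lr, hr0, scale=2, k=8, n_iter=10):
    """Alternate a subsampling-consistency correction with low-rank re-estimation."""
    hr = hr0.copy()
    for _ in range(n_iter):
        # Consistency: the decimated HR estimate should match the LR input.
        residual = lr - hr[::scale, ::scale]        # simple decimation model
        hr = hr + zoom(residual, scale, order=1)    # push residual back to HR grid
        hr = low_rank(hr, k)                        # low-rank reconstruction
    return hr

# Toy usage: refine a bilinear upscaling of a decimated smooth ramp.
ramp = np.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
lr = ramp[::2, ::2]
hr = refine_hr(lr, zoom(lr, 2, order=1), scale=2, k=4)
```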

22 citations


Cited by
Journal ArticleDOI
TL;DR: This paper gives the formulation of the image denoising problem, presents several image denoising techniques, discusses the characteristics of these techniques, and provides several promising directions for future research.
Abstract: With the explosion in the number of digital images taken every day, the demand for more accurate and visually pleasing images is increasing. However, the images captured by modern cameras are inevitably degraded by noise, which leads to deteriorated visual image quality. Therefore, work is required to reduce noise without losing image features (edges, corners, and other sharp structures). So far, researchers have already proposed various methods for decreasing noise. Each method has its own advantages and disadvantages. In this paper, we summarize some important research in the field of image denoising. First, we give the formulation of the image denoising problem, and then we present several image denoising techniques. In addition, we discuss the characteristics of these techniques. Finally, we provide several promising directions for future research.
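The formulation such surveys start from is the additive model y = x + n, with n zero-mean Gaussian noise, and denoisers are typically scored by how close their output is to x, for example via PSNR. A minimal illustration, with an arbitrary 8-bit test image and noise level:

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio between two images."""
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Additive Gaussian noise model: y = x + n.
rng = np.random.default_rng(4)
x = rng.integers(0, 256, size=(64, 64)).astype(float)  # clean 8-bit image
y = x + 15.0 * rng.standard_normal(x.shape)            # noisy observation, sigma = 15
print(psnr(x, y))   # quality of the noisy input; a denoiser should raise this
```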

267 citations

Journal ArticleDOI
TL;DR: This article focuses on classifying and comparing some of the significant works in the field of denoising and explains why some methods work optimally and others tend to create artefacts and remove fine structural details under general conditions.
Abstract: At the crossing of statistical and functional analysis, there exists a relentless quest for an efficient image denoising algorithm. For greyscale imaging, a plethora of denoising algorithms have been documented in the literature, yet their level of functionality still leaves margin before reaching the desired level of applicability. Quite often the noise affecting image pixels is Gaussian in nature and degrades information-bearing pixels uniformly across the image. All methods work optimally under some specific set of assumptions; however, under general conditions they tend to create artefacts and remove fine structural details. This article focuses on classifying and comparing some of the significant works in the field of denoising.

211 citations

Journal ArticleDOI
TL;DR: A novel single-image super-resolution procedure, which upscales a given low-resolution input image to a high-resolution image while preserving the textural and structural information, and develops a single- image SR algorithm based on the proposed model.
Abstract: This paper presents a novel single-image super-resolution (SR) procedure, which upscales a given low-resolution (LR) input image to a high-resolution image while preserving textural and structural information. First, we construct a new type of bivariate rational fractal interpolation model and investigate its analytical properties. This model takes different forms of expression for various values of the scaling factors and shape parameters; thus, it can describe image features better than current interpolation schemes. Furthermore, the model combines the advantages of rational interpolation and fractal interpolation, and its effectiveness is validated through theoretical analysis. Second, we develop a single-image SR algorithm based on the proposed model. The LR input image is divided into texture and non-texture regions, and the image is then interpolated according to the characteristics of the local structure. Specifically, in the texture region, the calculation of the scaling factors is the critical step, and we present a method to calculate them accurately based on local fractal analysis. Extensive experiments and comparisons with other state-of-the-art methods show that our algorithm achieves competitive performance, with finer details and sharper edges.
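The "local fractal analysis" step suggests measuring how texture-like a region is; a common proxy is the box-counting fractal dimension. The sketch below estimates it for a binary mask such as an edge map; the box sizes and the binarization step are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def box_count_dimension(mask, sizes=(2, 4, 8, 16)):
    """Box-counting fractal dimension of a binary mask (e.g. an edge map)."""
    counts = []
    for s in sizes:
        h, w = mask.shape[0] // s * s, mask.shape[1] // s * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())   # occupied boxes
    # Slope of log(count) vs log(1/size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Toy usage: a straight edge (~dim 1) vs. random speckle (~dim close to 2).
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True
rng = np.random.default_rng(5)
speckle = rng.random((64, 64)) > 0.5
print(box_count_dimension(line), box_count_dimension(speckle))
```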

131 citations

Journal ArticleDOI
Abstract: This publication was made possible by NPRP grant # NPRP8-140-2-065 from the Qatar National Research Fund (a member of the Qatar Foundation).

121 citations

Journal ArticleDOI
TL;DR: A fast and fully automated end-to-end system that can efficiently segment precise lung nodule contours from raw thoracic CT scans, with no human interaction or database-specific design.
Abstract: Deep learning techniques have been extensively used in computerized pulmonary nodule analysis in recent years. Many reported studies still utilize hybrid methods for diagnosis, in which convolutional neural networks (CNNs) are used as only one part of the pipeline, and the whole system still needs either traditional image processing modules or human intervention to obtain final results. In this paper, we introduce a fast and fully automated end-to-end system that can efficiently segment precise lung nodule contours from raw thoracic CT scans. Our proposed system has four major modules: candidate nodule detection with a Faster regional CNN (R-CNN), candidate merging, false positive (FP) reduction with a CNN, and nodule segmentation with a customized fully convolutional network (FCN). The entire system requires no human interaction or database-specific design. The average runtime is about 16 s per scan on a standard workstation. The nodule detection accuracy is 91.4% and 94.6% with an average of 1 and 4 false positives (FPs) per scan, respectively. The average Dice coefficient of nodule segmentation compared to the ground truth is 0.793.
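The four-module pipeline reads as a straightforward composition of stages. The skeleton below mimics only the control flow; every stage is a trivial stand-in (thresholding and connected components instead of Faster R-CNN, a size filter instead of the FP-reduction CNN, per-candidate thresholding instead of the FCN), so all names and parameters here are hypothetical:

```python
import numpy as np
from scipy import ndimage

def detect_candidates(ct, thr=0.6):
    """Stage 1 (+ stage 2): bright connected components as candidate boxes.

    Connected-component labeling also plays the role of the paper's
    candidate-merging module in this toy version.
    """
    labeled, _ = ndimage.label(ct > thr)
    return ndimage.find_objects(labeled)

def reduce_false_positives(ct, boxes, min_voxels=5):
    """Stage 3: keep only candidates large enough to be plausible nodules."""
    return [b for b in boxes if ct[b].size >= min_voxels]

def segment_nodules(ct, boxes, thr=0.6):
    """Stage 4: produce a mask for each surviving candidate."""
    mask = np.zeros_like(ct, dtype=bool)
    for b in boxes:
        mask[b] = ct[b] > thr
    return mask

def run_pipeline(ct):
    boxes = detect_candidates(ct)              # detection + merging
    boxes = reduce_false_positives(ct, boxes)  # FP reduction
    return segment_nodules(ct, boxes)          # segmentation

# Toy usage on a synthetic "scan" with one bright nodule-like blob.
rng = np.random.default_rng(6)
ct = 0.2 * rng.random((32, 32))
ct[10:14, 10:14] = 1.0
print(run_pipeline(ct).sum())   # voxels assigned to the nodule mask
```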

102 citations