Author
Hui Liu
Other affiliations: Stanford University, MediaTech Institute, Shandong Institute of Business and Technology
Bio: Hui Liu is an academic researcher from Shandong University of Finance and Economics. The author has contributed to research in topics including computer science and segmentation. The author has an h-index of 8 and has co-authored 23 publications receiving 359 citations. Previous affiliations of Hui Liu include Stanford University and MediaTech Institute.
Papers
TL;DR: The experimental results demonstrate that the proposed method can effectively reduce noise and is competitive with current state-of-the-art denoising algorithms in terms of both quantitative metrics and subjective visual quality.
Abstract: Nonlocal self-similarity of images has attracted considerable interest in the field of image processing and has led to several state-of-the-art image denoising algorithms, such as block-matching and 3-D filtering, principal component analysis with local pixel grouping, patch-based locally optimal Wiener filtering, and spatially adaptive iterative singular-value thresholding. In this paper, we propose a computationally simple denoising algorithm using nonlocal self-similarity and low-rank approximation (LRA). The proposed method consists of three basic steps. First, our method groups similar image patches by the block-matching technique, so that each group of similar patches forms a low-rank matrix. Next, each group of similar patches is factorized by singular value decomposition (SVD) and estimated by keeping only a few of the largest singular values and the corresponding singular vectors. Finally, an initial denoised image is generated by aggregating all processed patches. For low-rank matrices, SVD provides the optimal energy compaction in the least-squares sense. The proposed method exploits this optimal energy compaction property of SVD to obtain an LRA of the similar patch groups. Unlike other SVD-based methods, the LRA in the SVD domain avoids learning a local basis for representing image patches, which is usually computationally expensive. The experimental results demonstrate that the proposed method effectively reduces noise and is competitive with current state-of-the-art denoising algorithms in terms of both quantitative metrics and subjective visual quality.
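As a rough illustration of the LRA step described above, the following Python sketch groups similar patches by block matching and denoises each group with a truncated SVD. The patch size, search window, number of similar patches, and fixed rank k are illustrative assumptions, not the authors' settings, and the aggregation step is omitted.

```python
# Minimal sketch of block matching + low-rank approximation (LRA) denoising.
# Parameters and function names are illustrative, not the published configuration.
import numpy as np

def block_match(image, ref_top_left, patch=8, search=20, n_similar=60):
    """Collect the n_similar patches (as columns) most similar to the reference patch."""
    r0, c0 = ref_top_left
    ref = image[r0:r0 + patch, c0:c0 + patch]
    candidates = []
    for r in range(max(0, r0 - search), min(image.shape[0] - patch, r0 + search)):
        for c in range(max(0, c0 - search), min(image.shape[1] - patch, c0 + search)):
            blk = image[r:r + patch, c:c + patch]
            candidates.append((np.sum((blk - ref) ** 2), blk.ravel()))
    candidates.sort(key=lambda t: t[0])
    return np.stack([v for _, v in candidates[:n_similar]], axis=1)  # (patch^2, n_similar)

def lra_denoise_group(group, k=4):
    """Denoise one similar-patch group by keeping only the k largest singular values."""
    u, s, vt = np.linalg.svd(group, full_matrices=False)
    return (u[:, :k] * s[:k]) @ vt[:k, :]
```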
228 citations
TL;DR: A super resolution algorithm (SR-DCNN) for medical images is proposed that is based on a neural network and employs a deconvolution operation to effectively establish an end-to-end mapping between the low and high resolution images.
Abstract: Super resolution reconstruction can be used to recover a high resolution image from a low resolution image and is particularly beneficial for clinically significant medical images in diagnosis, treatment, and research applications. However, super resolution is a challenging inverse problem due to its ill-posed nature. In this paper, inspired by recent developments in deep learning, a super resolution algorithm (SR-DCNN) is proposed for medical images that is based on a neural network and employs a deconvolution operation. The purpose of the deconvolution is to effectively establish an end-to-end mapping between the low and high resolution images. First, training data consisting of 1500 medical images of the lung, brain, heart, and spine was collected, down-sampled, and input into the neural network. Then, patch-based image features were extracted using a set of filters, and the parametric rectified linear unit (PReLU) was subsequently applied as the activation function. Finally, these extracted image features were used to reconstruct high resolution images by minimizing the loss between the predicted output image and the original high resolution image. Various network structures and hyperparameter settings were explored to achieve a good trade-off between performance and computational efficiency, based on which a four-layer network was found to achieve the best result in terms of the peak signal-to-noise ratio (PSNR), structural similarity measure (SSIM), information entropy (IE), and execution speed. The network was then validated on test data, and it was demonstrated that the proposed SR-DCNN algorithm quantitatively and qualitatively outperformed the current state-of-the-art methods.
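The sketch below shows one plausible four-layer network of the kind described above, in PyTorch: a transposed convolution performs the upsampling ("deconvolution") and PReLU is the activation. The layer widths, kernel sizes, and 2x scale factor are illustrative assumptions, not the published SR-DCNN configuration.

```python
# A minimal four-layer super-resolution sketch with a deconvolution and PReLU.
import torch
import torch.nn as nn

class SRDCNNSketch(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),   # patch-based feature extraction
            nn.PReLU(),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),  # non-linear mapping
            nn.PReLU(),
            nn.ConvTranspose2d(32, 32, kernel_size=scale * 2, stride=scale,
                               padding=scale // 2),       # deconvolution / upsampling
            nn.PReLU(),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),   # high-resolution reconstruction
        )

    def forward(self, low_res):
        return self.net(low_res)

# Training would minimize the loss between the prediction and the original
# high-resolution image, e.g. nn.MSELoss()(SRDCNNSketch()(lr), hr).
```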
57 citations
TL;DR: This paper proposes a series of gradient-based algorithms, such as height field deformation, high slope optimization, fine detail preservation, curved surface flattening and relief mapping, and two types of shape editing tools that allow the user to interactively modify the bas-relief to exhibit a desired shape.
Abstract: In this paper, we introduce a novel approach to bas-relief generation and shape editing that uses gradient-based mesh deformation as the theoretical foundation. Our approach differs from image-based methods in that it operates directly on the triangular mesh, and ensures that the mesh topology remains unchanged during geometric processing. By implicitly deforming the input mesh through gradient manipulation, our approach is applicable to both plane surface bas-relief generation and curved surface bas-relief generation. We propose a series of gradient-based algorithms, such as height field deformation, high slope optimization, fine detail preservation, curved surface flattening and relief mapping. Additionally, we present two types of shape editing tools that allow the user to interactively modify the bas-relief to exhibit a desired shape. Experimental results indicate that the proposed approach is effective in producing plausible and impressive bas-reliefs.
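To illustrate the gradient-domain idea in a simplified setting, the Python sketch below compresses a height field for plane-surface relief generation: steep gradients are attenuated and the field is re-integrated by solving a Poisson equation on a regular grid. The attenuation rule, boundary handling, and the use of a height field instead of a triangular mesh are illustrative assumptions, not the paper's algorithms.

```python
# Gradient-domain height-field compression sketch (simplified bas-relief flattening).
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import spsolve

def compress_height_field(height, alpha=5.0):
    """Attenuate large height gradients, then recover a flattened relief."""
    gy, gx = np.gradient(height)
    mag = np.hypot(gx, gy) + 1e-8
    scale = np.log1p(alpha * mag) / (alpha * mag)      # compress steep slopes
    gx, gy = gx * scale, gy * scale
    # Divergence of the modified gradient field (right-hand side of the Poisson equation).
    div = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)
    h, w = height.shape
    idx = np.arange(h * w).reshape(h, w)
    A = lil_matrix((h * w, h * w))
    b = div.ravel().copy()
    for r in range(h):
        for c in range(w):
            i = idx[r, c]
            if r in (0, h - 1) or c in (0, w - 1):
                A[i, i] = 1.0            # clamp the boundary to zero height
                b[i] = 0.0
            else:
                A[i, i] = -4.0           # 5-point Laplacian stencil
                for j in (idx[r - 1, c], idx[r + 1, c], idx[r, c - 1], idx[r, c + 1]):
                    A[i, j] = 1.0
    return spsolve(csr_matrix(A), b).reshape(h, w)
```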
36 citations
TL;DR: An improved image segmentation schema is proposed, along with two improved clustering algorithms in which self-similarity and back projection are considered simultaneously to enhance robustness and improve adaptation to complex images in segmentation.
Abstract: Accurate image segmentation is a prerequisite to conducting an image analysis task, and the complexity stemming from semantic diversity plays a pivotal role in image segmentation. Existing algorithms employed different types of information in the process of segmentation to improve robustness. However, these algorithms were characterized by a tradeoff between noise removal and detail retention, because it is difficult to distinguish image artifacts from details. This paper proposes an improved image segmentation schema and presents two improved clustering algorithms in which self-similarity and back projection are considered simultaneously to enhance robustness. With the aid of self-similarity, non-local information is fully exploited, while the original information can be retained by back projection. Extensive experiments on various types of images demonstrate that our algorithms can balance noise suppression and detail retention, improving adaptation to complex images in segmentation.
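A very rough Python sketch of this idea follows: non-local filtering exploits self-similarity, a simple blend with the original image stands in for back projection, and plain k-means clusters the result. The filter parameters, the mixing weight beta, and the use of k-means rather than the paper's improved clustering algorithms are all illustrative assumptions.

```python
# Self-similarity filtering + back-projection blend + intensity clustering (sketch only).
import numpy as np
from skimage.restoration import denoise_nl_means
from sklearn.cluster import KMeans

def segment_with_self_similarity(image, n_clusters=3, beta=0.5, h=0.08):
    """Cluster pixel intensities of a self-similarity-filtered, back-projected image."""
    image = image.astype(np.float64)
    smoothed = denoise_nl_means(image, patch_size=7, patch_distance=11, h=h)
    # Blend the original image back in to retain fine detail ("back projection" stand-in).
    blended = (1.0 - beta) * smoothed + beta * image
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(blended.reshape(-1, 1))
    return labels.reshape(image.shape)
```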
34 citations
01 Jun 2018
TL;DR: A fast and robust weakly supervised pulmonary nodule segmentation method based on a modified self-adaptive FCM algorithm that utilizes the relation matrix to calculate the category index of every pixel using Bayesian theory and the PSOm algorithm.
Abstract: One of the key problems of computer-aided diagnosis is to segment specific anatomy structures in tomographic images as quickly and accurately as possible, which is an important step toward identifying pathologically changed tissues. The segmentation accuracy has a significant impact on disease diagnosis as well as on therapeutic efficacy. This paper presents a fast and robust weakly supervised pulmonary nodule segmentation method based on a modified self-adaptive FCM algorithm. To improve the traditional FCM, we first introduce an enhanced objective function, which computes the membership value according to both the grayscale similarity and the spatial similarity between central pixels and their neighbors. Then, a probability relation matrix between clusters and categories is constructed by using a small amount of prior knowledge learned from training samples. Based on this matrix, we realize weakly supervised pulmonary nodule segmentation for unlabeled lung CT images. More specifically, the proposed method utilizes the relation matrix to calculate the category index of every pixel using Bayesian theory and the PSOm algorithm. The quantitative experimental results on a test dataset, including 115 2-D clinical CT images, demonstrate the accuracy, efficiency and generality of the proposed weakly supervised strategy in pulmonary nodule segmentation.
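The sketch below illustrates the kind of FCM modification described above: the membership and cluster-center updates depend on both the pixel's grayscale value and its local neighborhood. The neighborhood weight alpha and the plain mean filter are illustrative assumptions, and the weak-supervision relation matrix, Bayesian labeling, and PSOm steps are not shown.

```python
# Fuzzy c-means with a simple spatial (neighborhood) term, as a sketch of the
# grayscale + spatial similarity objective; not the paper's exact formulation.
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_fcm(image, n_clusters=3, m=2.0, alpha=0.5, n_iter=50):
    x = image.astype(np.float64).ravel()
    x_local = uniform_filter(image.astype(np.float64), size=3).ravel()  # neighbor mean
    centers = np.linspace(x.min(), x.max(), n_clusters)
    for _ in range(n_iter):
        # Distance combines the pixel itself and its neighborhood average.
        d = (x[:, None] - centers[None, :]) ** 2 \
            + alpha * (x_local[:, None] - centers[None, :]) ** 2 + 1e-12
        u = 1.0 / (d ** (1.0 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)                 # fuzzy memberships
        um = u ** m
        centers = (um * (x[:, None] + alpha * x_local[:, None])).sum(axis=0) \
                  / ((1.0 + alpha) * um.sum(axis=0))      # weighted center update
    return u.argmax(axis=1).reshape(image.shape)
```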
28 citations
Cited by
TL;DR: This paper gives the formulation of the image denoising problem, presents several image denoising techniques, discusses the characteristics of these techniques, and provides several promising directions for future research.
Abstract: With the explosion in the number of digital images taken every day, the demand for more accurate and visually pleasing images is increasing. However, the images captured by modern cameras are inevitably degraded by noise, which leads to deteriorated visual image quality. Therefore, work is required to reduce noise without losing image features (edges, corners, and other sharp structures). So far, researchers have already proposed various methods for decreasing noise. Each method has its own advantages and disadvantages. In this paper, we summarize some important research in the field of image denoising. First, we give the formulation of the image denoising problem, and then we present several image denoising techniques. In addition, we discuss the characteristics of these techniques. Finally, we provide several promising directions for future research.
267 citations
TL;DR: This article focuses on classifying and comparing some of the significant works in the field of denoising and explains why methods that work optimally under specific assumptions tend to create artefacts and remove fine structural details under general conditions.
Abstract: At the crossing of statistical and functional analysis, there exists a relentless quest for an efficient image denoising algorithm. In terms of greyscale imaging, a plethora of denoising algorithms have been documented in the literature, yet these algorithms still leave margin before they reach the desired level of applicability. Quite often the noise affecting image pixels is Gaussian in nature and uniformly degrades the information-carrying pixels. All methods work optimally under some specific set of assumptions; however, they tend to create artefacts and remove fine structural details under general conditions. This article focuses on classifying and comparing some of the significant works in the field of denoising.
211 citations
TL;DR: A fast and fully-automated end-to-end system that can efficiently segment precise lung nodule contours from raw thoracic CT scans and has no human interaction or database specific design is introduced.
Abstract: Deep learning techniques have been extensively used in computerized pulmonary nodule analysis in recent years. Many reported studies still utilized hybrid methods for diagnosis, in which convolutional neural networks (CNNs) are used only as one part of the pipeline, and the whole system still needs either traditional image processing modules or human intervention to obtain final results. In this paper, we introduced a fast and fully-automated end-to-end system that can efficiently segment precise lung nodule contours from raw thoracic CT scans. Our proposed system has four major modules: candidate nodule detection with Faster regional-CNN (R-CNN), candidate merging, false positive (FP) reduction with CNN, and nodule segmentation with a customized fully convolutional neural network (FCN). The entire system has no human interaction or database-specific design. The average runtime is about 16 s per scan on a standard workstation. The nodule detection accuracy is 91.4% and 94.6% with an average of 1 and 4 false positives (FPs) per scan, respectively. The average dice coefficient of nodule segmentation compared to the ground truth is 0.793.
102 citations
TL;DR: A two-stage low rank approximation (TSLRA) scheme is designed to recover image structures and refine texture details of corrupted images, which is comparable and even superior to some state-of-the-art inpainting algorithms.
Abstract: To recover the corrupted pixels, traditional inpainting methods based on low-rank priors generally need to solve a convex optimization problem by an iterative singular value shrinkage algorithm. In this paper, we propose a simple method for image inpainting using low rank approximation, which avoids the time-consuming iterative shrinkage. Specifically, if similar patches of a corrupted image are identified and reshaped as vectors, then a patch matrix can be constructed by collecting these similar patch-vectors. Because its columns are highly linearly correlated, this patch matrix is low-rank. Instead of using an iterative singular value shrinkage scheme, the proposed method utilizes low rank approximation with truncated singular values to derive a closed-form estimate for each patch matrix. Based on the observation that there exists a distinct gap in the singular spectrum of the patch matrix, the rank of each patch matrix is empirically determined by a heuristic procedure. Inspired by inpainting algorithms with component decomposition, a two-stage low rank approximation (TSLRA) scheme is designed to recover image structures and refine texture details of corrupted images. Experimental results on various inpainting tasks demonstrate that the proposed method is comparable and even superior to some state-of-the-art inpainting algorithms.
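The short Python sketch below illustrates the closed-form patch-matrix estimate described above: the rank is chosen at the largest gap in the singular spectrum and the matrix is reconstructed with a truncated SVD. The gap heuristic shown is a simple stand-in for the paper's empirical rank-selection procedure, and the handling of missing pixels and the two-stage decomposition are not shown.

```python
# Truncated-SVD low-rank estimate with a singular-spectrum gap heuristic (sketch).
import numpy as np

def truncated_lra(patch_matrix):
    """Low-rank estimate of a similar-patch matrix via truncated SVD."""
    u, s, vt = np.linalg.svd(patch_matrix, full_matrices=False)
    gaps = s[:-1] - s[1:]                 # drops between consecutive singular values
    k = int(np.argmax(gaps)) + 1          # keep everything before the largest gap
    return (u[:, :k] * s[:k]) @ vt[:k, :], k
```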
102 citations