Author

Yifei Lou

Bio: Yifei Lou is an academic researcher at the University of Texas at Dallas. The author has contributed to research on topics including iterative reconstruction and reconstruction algorithms, has an h-index of 29, and has co-authored 93 publications receiving 2,880 citations. Previous affiliations of Yifei Lou include Washington State University and the University of Texas Southwestern Medical Center.


Papers
Journal ArticleDOI
TL;DR: A sparsity oriented simulated annealing procedure with non-Gaussian random perturbation is proposed and the almost sure convergence of the combined algorithm (DCASA) to a global minimum is proved.
Abstract: We study minimization of the difference of $\ell_1$ and $\ell_2$ norms as a nonconvex and Lipschitz continuous metric for solving constrained and unconstrained compressed sensing problems. We establish exact (stable) sparse recovery results under a restricted isometry property (RIP) condition for the constrained problem, and a full-rank theorem of the sensing matrix restricted to the support of the sparse solution. We present an iterative method for $\ell_{1-2}$ minimization based on the difference of convex functions algorithm and prove that it converges to a stationary point satisfying the first-order optimality condition. We propose a sparsity-oriented simulated annealing procedure with non-Gaussian random perturbation and prove the almost sure convergence of the combined algorithm (DCASA) to a global minimum. Computational examples on success rates of sparse solution recovery show that if the sensing matrix is ill-conditioned (non-RIP satisfying), then our method is better than existing nonconvex compressed sensing solvers.
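The sketch below illustrates the difference-of-convex iteration described above for the unconstrained model min_x λ(||x||_1 − ||x||_2) + ½||Ax − b||². It is a minimal sketch, not the authors' DCASA implementation: the inner convex subproblem is solved here with plain proximal gradient (ISTA), the simulated annealing stage is omitted, and the function names and parameter defaults are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def l1_minus_l2_dca(A, b, lam=0.1, outer_iters=20, inner_iters=200):
    """DCA-style sketch for min_x lam*(||x||_1 - ||x||_2) + 0.5*||Ax - b||^2.

    Each outer step linearizes the concave term -lam*||x||_2 and solves the
    resulting convex l1-regularized subproblem with proximal gradient (ISTA).
    """
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the quadratic term
    for _ in range(outer_iters):
        nrm = np.linalg.norm(x)
        q = x / nrm if nrm > 0 else np.zeros(n)   # subgradient of ||x||_2
        for _ in range(inner_iters):
            grad = A.T @ (A @ x - b) - lam * q    # gradient of the smooth part
            x = soft_threshold(x - grad / L, lam / L)
    return x
```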

349 citations

Journal ArticleDOI
TL;DR: Two nonlocal regularizations for image recovery that exploit the spatial interactions in images are considered, and superior results are obtained by using preprocessed data as input for the weighted functionals.
Abstract: This paper considers two nonlocal regularizations for image recovery that exploit the spatial interactions in images. We obtain superior results by using preprocessed data as input for the weighted functionals. Applications discussed include image deconvolution and tomographic reconstruction. The numerical results show that our method outperforms some previous approaches.
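As a rough illustration of the nonlocal idea, the sketch below builds patch-based weights from a preprocessed "guide" image and applies one weighted-averaging step, which is a fixed-point step of a nonlocal quadratic (NL-means-type) regularizer. The patch size, search window, filtering parameter h, and the use of a separate guide image are illustrative assumptions, not the exact weighted functionals of the paper.

```python
import numpy as np

def nonlocal_smoothing_step(u, guide, patch=3, search=5, h=0.1):
    """One weighted-averaging step of a nonlocal quadratic regularizer
    sum_{x,y} w(x,y) (u(x) - u(y))^2, with weights computed from a
    preprocessed guide image: w(x,y) = exp(-||P_x - P_y||^2 / h^2)
    over a small search window. Unoptimized, for illustration only.
    """
    pad = patch // 2
    gpad = np.pad(guide, pad, mode='reflect')
    H, W = u.shape
    out = np.zeros_like(u)
    r = search // 2
    for i in range(H):
        for j in range(W):
            Pi = gpad[i:i + patch, j:j + patch]      # reference patch
            num, den = 0.0, 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii = min(max(i + di, 0), H - 1)
                    jj = min(max(j + dj, 0), W - 1)
                    Pj = gpad[ii:ii + patch, jj:jj + patch]
                    w = np.exp(-np.sum((Pi - Pj) ** 2) / h ** 2)
                    num += w * u[ii, jj]
                    den += w
            out[i, j] = num / den
    return out
```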

333 citations

Journal ArticleDOI
TL;DR: A fast GPU-based algorithm is developed to reconstruct CBCT from undersampled and noisy projection data so as to considerably lower the imaging dose, and its high computational efficiency makes the iterative CBCT reconstruction approach applicable in real clinical environments.
Abstract: Purpose: Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. Methods: The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. Results: It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mA s/projection. Compared with the currently widely used full-fan head and neck scanning protocol of approximately 360 projections with 0.4 mA s/projection, it is estimated that an overall 36-72 times dose reduction has been achieved by our fast CBCT reconstruction algorithm. Conclusions: This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering the imaging dose considerably. The high computational efficiency of this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.
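A minimal sketch of the forward-backward splitting structure used for the TV-regularized model is given below. The CBCT projector A and back-projector At are assumed to be supplied as callables, and scikit-image's denoise_tv_chambolle stands in for the TV proximal step; the paper's GPU implementation and multigrid acceleration are not reproduced, and the step size, TV weight, and nonnegativity clip are illustrative assumptions.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def forward_backward_tv(A, At, b, shape, tau=1e-3, tv_weight=0.05, iters=50):
    """Forward-backward splitting sketch for min_x 0.5*||A x - b||^2 + mu*TV(x).

    Forward step: gradient descent on the data-fidelity term.
    Backward step: approximate TV proximal map via Chambolle's denoiser.
    A / At are user-supplied projection and back-projection callables.
    """
    x = np.zeros(shape)
    for _ in range(iters):
        x = x - tau * At(A(x) - b)                           # forward (gradient) step
        x = denoise_tv_chambolle(x, weight=tau * tv_weight)  # backward (prox) step
        x = np.clip(x, 0.0, None)                            # nonnegativity (assumed constraint)
    return x
```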

233 citations

Journal ArticleDOI
TL;DR: A fast graphics processing unit (GPU)-based algorithm reconstructs high-quality CBCT images from undersampled and noisy projection data so as to lower the imaging dose, and the reconstructed image quality is quantitatively analyzed in terms of the modulation transfer function and contrast-to-noise ratio.
Abstract: The x-ray imaging dose from serial cone-beam computed tomography (CBCT) scans raises a clinical concern in most image-guided radiation therapy procedures. The goal of this paper is to develop a fast graphics processing unit (GPU)-based algorithm to reconstruct high-quality CBCT images from undersampled and noisy projection data so as to lower the imaging dose. For this purpose, we have developed an iterative tight-frame (TF)-based CBCT reconstruction algorithm. The condition that a realistic CBCT image has a sparse representation under a TF basis is imposed during the iterations as a regularization on the solution. To speed up the computation, a multigrid method is employed. Our GPU implementation has achieved high computational efficiency, and a CBCT image of resolution 512 × 512 × 70 can be reconstructed in ~5 min. We have tested our algorithm on a digital NCAT phantom and a physical Catphan phantom. It is found that our TF-based algorithm is able to reconstruct CBCT images under undersampling and at low mAs levels. We have also quantitatively analyzed the reconstructed CBCT image quality in terms of the modulation transfer function and the contrast-to-noise ratio under various scanning conditions. The results confirm the high CBCT image quality obtained by our TF algorithm. Moreover, our algorithm has been validated in a real clinical context using a head-and-neck patient case. The developed TF algorithm has also been compared with the current state-of-the-art TV algorithm in the various cases studied, in terms of reconstructed image quality and computational efficiency.
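The sketch below mimics the iterative shrinkage structure of a tight-frame regularized reconstruction: a gradient step on the data term followed by soft-thresholding of frame coefficients. An undecimated Haar wavelet from PyWavelets stands in for the paper's tight-frame system, A/At and the parameters tau and mu are assumptions, and image dimensions are assumed even so that pywt.swt2 at level 1 applies.

```python
import numpy as np
import pywt

def soft(c, t):
    """Componentwise soft-thresholding of frame coefficients."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def tight_frame_reconstruct(A, At, b, shape, tau=1e-3, mu=0.02, iters=50):
    """Iterative shrinkage sketch in the spirit of tight-frame (TF) regularized
    reconstruction: gradient step on the data-fidelity term, then shrinkage of
    detail coefficients in an undecimated Haar transform (a stand-in for the
    paper's TF system). A / At are user-supplied projector callables.
    """
    x = np.zeros(shape)
    for _ in range(iters):
        x = x - tau * At(A(x) - b)                    # data-fidelity gradient step
        cA, (cH, cV, cD) = pywt.swt2(x, 'haar', level=1)[0]
        coeffs = [(cA, (soft(cH, mu), soft(cV, mu), soft(cD, mu)))]
        x = pywt.iswt2(coeffs, 'haar')                # reconstruct from thresholded frame
    return x
```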

194 citations

Journal ArticleDOI
TL;DR: A weighted difference of anisotropic and isotropic total variation (TV), based on the well-known TV model and natural image statistics, is proposed as a regularization for image processing tasks; it consistently improves on the classical TV model and is on par with representative state-of-the-art methods.
Abstract: We propose a weighted difference of anisotropic and isotropic total variation (TV) as a regularization for image processing tasks, based on the well-known TV model and natural image statistics. Due to the form of our model, it is natural to compute it via a difference-of-convex algorithm (DCA). We draw its connection to the Bregman iteration for convex problems and prove that the iteration generated by our algorithm converges to a stationary point with the objective function values decreasing monotonically. A stopping strategy based on the stable oscillatory pattern of the iteration error from the ground truth is introduced. In numerical experiments on image denoising, image deblurring, and magnetic resonance imaging (MRI) reconstruction, our method improves on the classical TV model consistently and is on par with representative state-of-the-art methods.
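For concreteness, the sketch below evaluates the weighted difference regularizer ||∇u||_1 − α||∇u||_2 (anisotropic TV minus α times isotropic TV) with forward differences. The discretization, boundary handling, and the value α = 0.5 are illustrative assumptions; the DCA solver itself is not reproduced here.

```python
import numpy as np

def weighted_aniso_iso_tv(u, alpha=0.5):
    """Evaluate the weighted difference regularizer
        sum_i ( |D_x u|_i + |D_y u|_i ) - alpha * sqrt((D_x u)_i^2 + (D_y u)_i^2),
    i.e. anisotropic TV minus alpha times isotropic TV, using forward
    differences with replicated boundary values. alpha is illustrative.
    """
    ux = np.diff(u, axis=1, append=u[:, -1:])   # horizontal forward differences
    uy = np.diff(u, axis=0, append=u[-1:, :])   # vertical forward differences
    aniso = np.abs(ux) + np.abs(uy)             # anisotropic TV integrand
    iso = np.sqrt(ux ** 2 + uy ** 2)            # isotropic TV integrand
    return np.sum(aniso - alpha * iso)
```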

180 citations


Cited by
Book ChapterDOI
24 Jun 2010
TL;DR: This paper addresses the single-image scale-up problem using sparse-representation modeling, assuming a local Sparse-Land model on image patches as a regularization to recover an original image from its blurred, down-scaled, and noisy version.
Abstract: This paper deals with the single-image scale-up problem using sparse-representation modeling. The goal is to recover an original image from its blurred and down-scaled noisy version. Since this problem is highly ill-posed, a prior is needed in order to regularize it. The literature offers various ways to address this problem, ranging from simple linear space-invariant interpolation schemes (e.g., bicubic interpolation) to spatially adaptive and non-linear filters of various sorts. We embark from a recently proposed successful algorithm by Yang et al. [1,2], and similarly assume a local Sparse-Land model on image patches, serving as regularization. Several important modifications to the above-mentioned solution are introduced and are shown to lead to improved results. These modifications include a major simplification of the overall process, both in terms of computational complexity and algorithm architecture, the use of a different training approach for the dictionary pair, and the ability to operate without a training set by bootstrapping the scale-up task from the given low-resolution image. We demonstrate the results on true images, showing both visual and PSNR improvements.
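The core coupled-dictionary step can be sketched as follows: the low-resolution patch is sparse-coded over a low-resolution dictionary, and the same code is applied to the paired high-resolution dictionary. The dictionaries D_low / D_high, the OMP sparse coder from scikit-learn, and the sparsity level are illustrative assumptions; patch extraction, feature preprocessing, and overlap aggregation from the actual algorithm are omitted.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def scale_up_patch(y_low, D_low, D_high, sparsity=3):
    """Sparse-representation scale-up of one patch (illustrative sketch).

    y_low  : vectorized low-resolution patch (or features extracted from it)
    D_low  : low-resolution dictionary, shape (low_dim, n_atoms)
    D_high : coupled high-resolution dictionary, shape (high_dim, n_atoms)
    The sparse code found over D_low is reused with D_high, which is the
    central assumption of the Sparse-Land scale-up approach.
    """
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False)
    omp.fit(D_low, y_low)      # sparse-code the low-res patch over D_low
    alpha = omp.coef_          # shared sparse representation
    return D_high @ alpha      # reconstruct the high-res patch
```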

2,667 citations

01 Jan 2016
Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light.

2,213 citations

Proceedings Article
01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually, and to use this conceptual structuring in order to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, a number of problems need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem can be instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts represented, or by their inability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing the use of co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
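As a simple illustration of the co-occurrence idea mentioned above, the sketch below counts how often one discrete label (e.g. a quantized gray level or a concept label) occurs next to another for a fixed spatial offset. The offset, label alphabet, and plain-Python loop are illustrative choices, not MUCKE's actual descriptor pipeline.

```python
import numpy as np

def cooccurrence_matrix(labels, levels, dx=1, dy=0):
    """Co-occurrence matrix of discrete labels for a fixed offset (dx, dy).

    C[i, j] counts how often label i occurs at a pixel while label j occurs
    at the pixel shifted by (dx, dy), which is one simple way to encode
    spatial relationships between the represented concepts.
    """
    H, W = labels.shape
    C = np.zeros((levels, levels), dtype=np.int64)
    for y in range(H - dy):
        for x in range(W - dx):
            C[labels[y, x], labels[y + dy, x + dx]] += 1
    return C
```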

2,134 citations