SciSpace - Formally Typeset
Author

Phillip Vargas

Bio: Phillip Vargas is an academic researcher from the University of Chicago. He has contributed to research on iterative reconstruction and tomographic reconstruction, has an h-index of 9, and has co-authored 26 publications receiving 447 citations. His previous affiliations include the University of Illinois at Chicago.

Papers
Journal ArticleDOI
TL;DR: It is found that at low exposure levels typical of those being considered for screening CT, the Poisson-likelihood based approaches outperform the PWLS objective as well as a standard approach based on adaptive filtering followed by deconvolution.
Abstract: We formulate computed tomography (CT) sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. CT measurement data are degraded by a number of factors, including beam hardening and off-focal radiation, that produce artifacts in reconstructed images unless properly corrected. Currently, such effects are addressed by a sequence of sinogram-preprocessing steps, including deconvolution corrections for off-focal radiation, that have the potential to amplify noise. Noise itself is generally mitigated through apodization of the reconstruction kernel, which effectively ignores the measurement statistics, although in high-noise situations adaptive filtering methods that loosely model data statistics are sometimes applied. As an alternative, we present a general imaging model relating the degraded measurements to the sinogram of ideal line integrals and propose to estimate these line integrals by iteratively optimizing a statistically based objective function. We consider three different strategies for estimating the set of ideal line integrals: one based on direct estimation of ideal "monochromatic" line integrals that have been corrected for single-material beam hardening, one based on estimation of ideal "polychromatic" line integrals that can be readily mapped to monochromatic line integrals, and one based on estimation of ideal transmitted intensities, from which ideal, monochromatic line integrals can be readily estimated. The first two approaches involve maximization of a penalized Poisson-likelihood objective function, while the third involves minimization of a quadratic penalized weighted least squares (PWLS) objective applied in the transmitted intensity domain.
We find that at low exposure levels typical of those being considered for screening CT, the Poisson-likelihood based approaches outperform the PWLS objective as well as a standard approach based on adaptive filtering followed by deconvolution. At higher exposure levels, the approaches all perform similarly.
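The two objective families compared in this abstract can be sketched in a few lines. This is a toy illustration, not the authors' implementation: `neg_poisson_loglik` and `pwls_cost` are hypothetical helper names, the forward model is a bare Beer's-law mapping from line integrals to expected counts, and the regularization penalties are omitted.

```python
import numpy as np

def neg_poisson_loglik(line_integrals, counts, blank_scan):
    # Expected transmitted counts under Beer's law: I0 * exp(-l).
    expected = blank_scan * np.exp(-line_integrals)
    # Negative Poisson log-likelihood, dropping the constant log(counts!) term.
    return np.sum(expected - counts * np.log(expected))

def pwls_cost(line_integrals, counts, blank_scan):
    # Quadratic cost in the transmitted-intensity domain, weighted by the
    # approximate inverse variance of each measurement (variance ~ counts
    # for Poisson data).
    model = blank_scan * np.exp(-line_integrals)
    weights = 1.0 / np.maximum(counts, 1.0)
    return 0.5 * np.sum(weights * (counts - model) ** 2)

# Noiseless toy sinogram: two detector bins, blank scan of 1e4 counts each.
I0 = np.array([1.0e4, 1.0e4])
true_l = np.array([1.0, 2.0])
y = I0 * np.exp(-true_l)

# Both objectives are minimized at the true line integrals.
print(pwls_cost(true_l, y, I0))  # 0.0
print(neg_poisson_loglik(true_l, y, I0) <
      neg_poisson_loglik(true_l + 0.1, y, I0))  # True
```

With real noisy counts the two minimizers differ; the abstract's finding is that at low exposure the Poisson objective models the count statistics more faithfully than the quadratic approximation.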

181 citations

Journal ArticleDOI
07 May 2019 - eLife
TL;DR: The computational and visual insights into 3D cell and tissue architecture provided by histotomography are expected to be useful for reference atlases, hypothesis generation, comprehensive organismal screens, and diagnostics.
Abstract: Diagnosing diseases, such as cancer, requires scientists and doctors to understand how cells respond to different medical conditions. A common way of studying these microscopic cell changes is an approach called histology: thin slices of centimeter-sized samples of tissue are taken from patients, stained to distinguish cellular components, and examined for abnormal features. This powerful technique has revolutionized biology and medicine. But despite its frequent use, histology comes with limitations. To allow individual cells to be distinguished, tissues are cut into slices less than 1/20th of a millimeter thick. Histology's dependence upon such thin slices makes it impossible to see the entirety of cells and structures that are thicker than the slice, or to accurately measure three-dimensional features such as shape or volume. Larger internal structures within the human body are routinely visualized using a technique known as computerized tomography (CT for short), whereby dozens of x-ray images are compiled together to generate a three-dimensional image. This technique has also been applied to image smaller structures. However, the resolution (the ability to distinguish between objects) and tissue contrast of these images have been insufficient for histology-based diagnosis across all cell types. Now, Ding et al. have developed a new method, by optimizing multiple components of CT scanning, that begins to provide the higher resolution and contrast needed to make diagnoses that require histological detail. To test their modified CT system, Ding et al. created three-dimensional images of whole zebrafish, measuring three millimeters to about a centimeter in length. Adjusting imaging parameters and views of these images made it possible to study features of larger-scale structures, such as the gills and the gut, that are normally inaccessible to histology.
As a result of this unprecedented combination of high resolution and scale, computer analysis of these images allowed Ding et al. to measure cellular features such as size and shape, and to determine which cells belong to different brain regions, all from single reconstructions. Surprisingly, visualization of how tightly the brain cells are packed revealed striking differences between the brains of sibling zebrafish that were born on the same day. This new method could be used to study changes across hundreds of cell types in any millimeter- to centimeter-sized organism or tissue sample. In the future, the accurate measurements of microscopic features made possible by this new tool may help us to make drugs safer, improve tissue diagnostics, and care for our environment.

70 citations

Journal ArticleDOI
TL;DR: A penalized-likelihood image reconstruction strategy alternates between updating the distribution of a given element and updating the attenuation map for that element's fluorescence X-rays, and is guaranteed to increase the penalized likelihood at each iteration.
Abstract: X-ray fluorescence computed tomography (XFCT) allows for the reconstruction of the distribution of elements within a sample from measurements of fluorescence X-rays produced by irradiation of the sample with monochromatic synchrotron radiation. XFCT is not a transmission tomography modality, but rather a stimulated emission tomography modality; thus correction for attenuation of the incident and fluorescence photons is essential if accurate images are to be obtained. This is challenging because the attenuation map is, in general, known only at the stimulating beam energy and not at the various fluorescence energies of interest. We make use of empirically fitted analytic expressions for X-ray attenuation coefficients to express the unknown attenuation maps as linear combinations of known quantities and the unknown elemental concentrations of interest. We then develop an iterative image reconstruction algorithm based on penalized-likelihood methods that have been developed for medical emission tomography. Studies with numerical phantoms indicate that the approach is able to produce qualitatively and quantitatively accurate reconstructed images even in the face of severe attenuation. We also apply the method to real synchrotron-acquired data and demonstrate a marked improvement in image quality relative to filtered backprojection reconstruction.
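The key modeling step in this abstract, writing the unknown attenuation maps in terms of known quantities and the unknown concentrations, can be rendered schematically as follows. The notation is assumed for illustration (the paper's exact parameterization may differ):

```latex
% Unknown attenuation map at a fluorescence energy E_f, written as a
% known background term plus a linear combination of the unknown
% elemental concentrations c_k (schematic notation):
\mu(\mathbf{r}, E_f) \;\approx\;
  \underbrace{\mu_{\mathrm{bg}}(\mathbf{r}, E_f)}_{\text{known}}
  \;+\; \sum_{k} c_k(\mathbf{r})\, \mu_k^{\mathrm{mass}}(E_f)
```

Here \(\mu_k^{\mathrm{mass}}(E_f)\) would come from the empirically fitted analytic expressions for attenuation coefficients, so the only unknowns entering the attenuation correction are the concentrations \(c_k\) already being reconstructed.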

61 citations

Journal ArticleDOI
TL;DR: A monotonic penalized-likelihood algorithm for image reconstruction in X-ray fluorescence CT (XFCT) when the attenuation maps at the energies of the fluorescence X-rays are unknown, guaranteed to increase the penalized likelihood at each iteration.
Abstract: In this paper, we derive a monotonic penalized-likelihood algorithm for image reconstruction in X-ray fluorescence computed tomography (XFCT) when the attenuation maps at the energies of the fluorescence X-rays are unknown. In XFCT, a sample is irradiated with pencil beams of monochromatic synchrotron radiation that stimulate the emission of fluorescence X-rays from atoms of elements whose K- or L-edges lie below the energy of the stimulating beam. Scanning and rotating the object through the beam allows for acquisition of a tomographic dataset that can be used to reconstruct images of the distribution of the elements in question. XFCT is a stimulated emission tomography modality, and it is thus necessary to correct for attenuation of the incident and fluorescence photons. The attenuation map is, however, generally known only at the stimulating beam energy and not at the energies of the various fluorescence X-rays of interest. We have developed a penalized-likelihood image reconstruction strategy for this problem. The approach alternates between updating the distribution of a given element and updating the attenuation map for that element's fluorescence X-rays. The approach is guaranteed to increase the penalized likelihood at each iteration. Because the joint objective function is not necessarily concave, the approach may drive the solution to a local maximum. To encourage the algorithm to seek out a reasonable local maximum, we include in the objective function a prior that encourages a relationship, based on physical considerations, between the fluorescence attenuation map and the distribution of the element being reconstructed.
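The alternating structure described in this abstract can be sketched as a generic coordinate-ascent skeleton. Everything below is a toy stand-in, not the paper's algorithm: the objective is an arbitrary concave quadratic, and `update_x`/`update_a` are exact per-block maximizers, which makes each step monotonic by construction.

```python
def alternate(x0, a0, update_x, update_a, n_iter=100):
    """Alternate between updating the elemental distribution x with the
    attenuation map a held fixed, and updating a with x held fixed. If
    each partial update never decreases the (penalized) objective, the
    overall scheme is monotonic."""
    x, a = x0, a0
    for _ in range(n_iter):
        x = update_x(x, a)
        a = update_a(x, a)
    return x, a

# Toy objective f(x, a) = -(x - 3)^2 - (a - 2x)^2, maximized by exact
# coordinate ascent.
def update_x(x, a):
    return (6.0 + 4.0 * a) / 10.0  # argmax over x with a fixed

def update_a(x, a):
    return 2.0 * x                 # argmax over a with x fixed

x, a = alternate(0.0, 0.0, update_x, update_a)
print(round(x, 6), round(a, 6))  # 3.0 6.0, the joint maximum
```

As in the paper's setting, monotonicity only guarantees convergence toward *a* stationary point; for a non-concave joint objective the result can be a local maximum, which is why the authors add a physically motivated prior to steer the solution.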

51 citations

Journal ArticleDOI
TL;DR: The authors report the development and experimental implementation of two novel imaging geometries for mapping of trace metals in biological samples with ∼50-500 μm spatial resolution and demonstrate the feasibility of these two novel approaches to XFCT imaging.
Abstract: Purpose: X-ray fluorescence computed tomography (XFCT) is an emerging imaging modality that maps the three-dimensional distribution of elements, generally metals, in ex vivo specimens and potentially in living animals and humans. At present, it is generally performed at synchrotrons, taking advantage of the high flux of monochromatic X-rays, but recent work has demonstrated the feasibility of using laboratory-based X-ray tube sources. In this paper, the authors report the development and experimental implementation of two novel imaging geometries for mapping of trace metals in biological samples with ∼50–500 μm spatial resolution. Methods: One of the new imaging approaches involves illuminating and scanning a single slice of the object and imaging each slice's X-ray fluorescent emissions using a position-sensitive detector and a pinhole collimator. The other involves illuminating a single line through the object and imaging the emissions using a position-sensitive detector and a slit collimator. They have implemented both of these using synchrotron radiation at the Advanced Photon Source. Results: The authors show that it is possible to achieve 250 eV energy resolution using an electron-multiplying CCD operating in a quasi-photon-counting mode. Doing so allowed them to generate elemental images using both of the novel geometries for imaging of phantoms and, for the second geometry, an osmium-stained zebrafish. Conclusions: The authors have demonstrated the feasibility of these two novel approaches to XFCT imaging. While they use synchrotron radiation in this demonstration, the geometries could readily be translated to laboratory systems based on tube sources.

27 citations


Cited by
Journal ArticleDOI
TL;DR: The general technical strategies that are commonly used for radiation dose management in CT are summarized, and dose-management strategies for pediatric CT, cardiac CT, dual-energy CT, CT perfusion and interventional CT are specifically discussed.
Abstract: Despite universal consensus that computed tomography (CT) overwhelmingly benefits patients when used for appropriate indications, concerns have been raised regarding the potential risk of cancer induction from CT due to the exponentially increased use of CT in medicine. Keeping radiation dose as low as reasonably achievable, consistent with the diagnostic task, remains the most important strategy for decreasing this potential risk. This article summarizes the general technical strategies that are commonly used for radiation dose management in CT. Dose-management strategies for pediatric CT, cardiac CT, dual-energy CT, CT perfusion, and interventional CT are specifically discussed, and future perspectives on CT dose reduction are presented.

356 citations

Journal ArticleDOI
TL;DR: This paper takes a concise look at the overall evolution of CT image reconstruction and its clinical implementations, finding that iterative reconstruction (IR) is essential for photon-counting CT, phase-contrast CT, and dark-field CT.
Abstract: The first CT scanners in the early 1970s already used iterative reconstruction algorithms; however, lack of computational power prevented their clinical use. In fact, it took until 2009 for the first iterative reconstruction algorithms to become commercially available and replace conventional filtered back projection. Since then, this technique has generated enormous interest in the field of radiology. Within a few years, all major CT vendors introduced iterative reconstruction algorithms for clinical routine, which evolved rapidly into increasingly advanced reconstruction algorithms. The complexity of these algorithms ranges from hybrid and model-based to fully iterative. As a result, the number of scientific publications on this topic has skyrocketed over the last decade. But what exactly has this technology brought us so far? And what can we expect from future hardware and software developments, such as photon-counting CT and artificial intelligence? This paper tries to answer those questions by taking a concise look at the overall evolution of CT image reconstruction and its clinical implementations. Subsequently, we offer an outlook on future developments in this domain. KEY POINTS: • Advanced CT reconstruction methods are indispensable in the current clinical setting. • IR is essential for photon-counting CT, phase-contrast CT, and dark-field CT. • Artificial intelligence will potentially further increase the performance of reconstruction methods.

304 citations

Journal ArticleDOI
TL;DR: The results demonstrate that bilateral filtering incorporating a CT noise model can achieve a significantly better noise-resolution trade-off than a series of commercial reconstruction kernels and can be translated into substantial dose reduction.
Abstract: Purpose: To investigate a novel locally adaptive projection space denoising algorithm for low-dose CT data. Methods: The denoising algorithm is based on bilateral filtering, which smooths values using a weighted average in a local neighborhood, with weights determined according to both spatial proximity and intensity similarity between the center pixel and the neighboring pixels. This filtering is locally adaptive and can preserve important edge information in the sinogram, thus maintaining high spatial resolution. A CT noise model that takes into account the bowtie filter and patient-specific automatic exposure control effects is also incorporated into the denoising process. The authors evaluated the noise-resolution properties of bilateral filtering incorporating such a CT noise model in phantom studies and preliminary patient studies with contrast-enhanced abdominal CT exams. Results: On a thin wire phantom, the noise-resolution properties were significantly improved with the denoising algorithm compared to commercial reconstruction kernels. The noise-resolution properties on low-dose (40 mAs) data after denoising approximated those of conventional reconstructions at twice the dose level. A separate contrast plate phantom showed improved depiction of low-contrast plates with the denoising algorithm over conventional reconstructions when noise levels were matched. Similar improvement in noise-resolution properties was found on CT colonography data and on five abdominal low-energy (80 kV) CT exams. In each abdominal case, a board-certified subspecialized radiologist rated the denoised 80 kV images markedly superior in image quality compared to the commercially available reconstructions, and denoising improved the image quality to the point where the 80 kV images alone were considered to be of diagnostic quality.
Conclusions: The results demonstrate that bilateral filtering incorporating a CT noise model can achieve a significantly better noise-resolution trade-off than a series of commercial reconstruction kernels. This improvement in noise-resolution properties can be used for improving image quality in CT and can be translated into substantial dose reduction.
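The core of the filter described in this abstract can be sketched in one dimension, as it would act along a sinogram row. This is a minimal sketch under assumptions: the function name and parameters (`sigma_d`, `sigma_r`, `radius`) are my own, and the paper's CT noise model (bowtie filter, automatic exposure control) is omitted; in the authors' scheme the intensity-similarity scale would effectively adapt to the locally estimated noise rather than being a fixed `sigma_r`.

```python
import numpy as np

def bilateral_filter_1d(signal, sigma_d, sigma_r, radius=3):
    # Each sample becomes a weighted average of its neighbors; the weight
    # combines spatial proximity (sigma_d) with intensity similarity
    # (sigma_r), so large jumps (edges) receive tiny weights and survive.
    signal = np.asarray(signal, dtype=float)
    out = np.empty_like(signal)
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        idx = np.arange(lo, hi)
        w = (np.exp(-((idx - i) ** 2) / (2.0 * sigma_d ** 2)) *
             np.exp(-((signal[idx] - signal[i]) ** 2) / (2.0 * sigma_r ** 2)))
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out

# A noiseless step edge: the filter leaves the edge essentially intact,
# which is the edge-preserving behavior the abstract describes.
step = np.concatenate([np.zeros(10), np.full(10, 10.0)])
out = bilateral_filter_1d(step, sigma_d=1.5, sigma_r=1.0)
print(out[9], out[10])  # stays very close to 0.0 and 10.0
```

A plain Gaussian smoother with the same `sigma_d` would blur this edge across several samples; the similarity term is what keeps the sinogram's high-resolution content.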

290 citations

Journal ArticleDOI
TL;DR: An adaptive-weighted TV (AwTV) minimization algorithm is presented that can yield images with several notable gains, in terms of noise-resolution tradeoff plots and full-width at half-maximum values, as compared to the corresponding conventional TV-POCS algorithm.
Abstract: Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and other constraints, piecewise-smooth x-ray computed tomography (CT) can be reconstructed from sparse-view projection data without introducing notable artifacts. However, due to the piecewise constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness on the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving the edge details. Inspired by the previously reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several notable gains, in terms of noise-resolution tradeoff plots and full-width at half-maximum values, as compared to the corresponding conventional TV-POCS algorithm.
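The weighting idea behind AwTV can be illustrated with a small sketch. Assumptions to note: this uses an axis-separable (anisotropic) form for brevity, whereas the paper's model may combine directions differently, and `delta` is a hypothetical scale parameter standing in for the adaptive adjustment by the local image-intensity gradient.

```python
import numpy as np

def plain_tv(image):
    # Anisotropic total variation: sum of absolute neighbor differences.
    return (np.sum(np.abs(np.diff(image, axis=0))) +
            np.sum(np.abs(np.diff(image, axis=1))))

def awtv(image, delta):
    # Adaptive-weighted TV: each neighbor difference is weighted by an
    # exponential function of the local intensity gradient, so strong
    # edges are penalized far less than in plain TV and are therefore
    # better preserved by the minimization.
    dx = np.diff(image, axis=0)
    dy = np.diff(image, axis=1)
    wx = np.exp(-(dx / delta) ** 2)
    wy = np.exp(-(dy / delta) ** 2)
    return np.sum(wx * np.abs(dx)) + np.sum(wy * np.abs(dy))

# A sharp 0/10 edge: plain TV charges the full jump, AwTV almost nothing,
# so minimizing AwTV has little incentive to smooth the edge away.
img = np.zeros((8, 8))
img[:, 4:] = 10.0
print(plain_tv(img))        # 80.0
print(awtv(img, delta=1.0) < 1e-6)  # True
```

In a POCS-style reconstruction the penalty gradient step would use this weighted objective in place of plain TV, with the data-consistency projections unchanged.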

274 citations

Journal ArticleDOI
TL;DR: This report offers a strategic roadmap for the CT user and research and manufacturer communities toward routinely achieving effective doses of less than 1 mSv, which is well below the average annual dose from naturally occurring sources of radiation.
Abstract: This report summarizes the advances in data acquisition, image reconstruction, and optimization processes that were identified by consensus as being necessary to achieve effective dose levels for routine CT that are well below background levels.

264 citations