
Showing papers on "Iterative reconstruction published in 2006"


Proceedings ArticleDOI
17 Jun 2006
TL;DR: This paper first surveys multi-view stereo algorithms and compares them qualitatively using a taxonomy that differentiates their key properties, then describes the process for acquiring and calibrating multi-view image datasets with high-accuracy ground truth and introduces the evaluation methodology.
Abstract: This paper presents a quantitative comparison of several multi-view stereo reconstruction algorithms. Until now, the lack of suitable calibrated multi-view image datasets with known ground truth (3D shape models) has prevented such direct comparisons. In this paper, we first survey multi-view stereo algorithms and compare them qualitatively using a taxonomy that differentiates their key properties. We then describe our process for acquiring and calibrating multiview image datasets with high-accuracy ground truth and introduce our evaluation methodology. Finally, we present the results of our quantitative comparison of state-of-the-art multi-view stereo reconstruction algorithms on six benchmark datasets. The datasets, evaluation details, and instructions for submitting new models are available online at http://vision.middlebury.edu/mview.

2,556 citations


Journal ArticleDOI
TL;DR: An overview of the rapidly expanding field of photoacoustic imaging for biomedical applications can be found in this article, where a number of imaging techniques, including depth profiling in layered media, scanning tomography with focused ultrasonic transducers, image forming with an acoustic lens, and computed tomography using unfocused transducers, are introduced.
Abstract: Photoacoustic imaging (also called optoacoustic or thermoacoustic imaging) has the potential to image animal or human organs, such as the breast and the brain, with simultaneous high contrast and high spatial resolution. This article provides an overview of the rapidly expanding field of photoacoustic imaging for biomedical applications. Imaging techniques, including depth profiling in layered media, scanning tomography with focused ultrasonic transducers, image forming with an acoustic lens, and computed tomography with unfocused transducers, are introduced. Special emphasis is placed on computed tomography, including reconstruction algorithms, spatial resolution, and related recent experiments. Promising biomedical applications are discussed throughout the text, including (1) tomographic imaging of the skin and other superficial organs by laser-induced photoacoustic microscopy, which offers the critical advantages, over current high-resolution optical imaging modalities, of deeper imaging depth and higher absorption contrasts, (2) breast cancer detection by near-infrared light or radio-frequency–wave-induced photoacoustic imaging, which has important potential for early detection, and (3) small animal imaging by laser-induced photoacoustic imaging, which measures unique optical absorption contrasts related to important biochemical information and provides better resolution in deep tissues than optical imaging.

2,343 citations


Journal ArticleDOI
TL;DR: In this paper, a tomographic particle image velocimetry (tomographic-PIV) system based on the illumination, recording and reconstruction of tracer particles within a 3D measurement volume is described.
Abstract: This paper describes the principles of a novel 3D PIV system based on the illumination, recording and reconstruction of tracer particles within a 3D measurement volume. The technique makes use of several simultaneous views of the illuminated particles and their 3D reconstruction as a light intensity distribution by means of optical tomography. The technique is therefore referred to as tomographic particle image velocimetry (tomographic-PIV). The reconstruction is performed with the MART algorithm, yielding a 3D array of light intensity discretized over voxels. The reconstructed tomogram pair is then analyzed by means of 3D cross-correlation with an iterative multigrid volume deformation technique, returning the three-component velocity vector distribution over the measurement volume. The principles and details of the tomographic algorithm are discussed and a parametric study is carried out by means of a computer-simulated tomographic-PIV procedure. The study focuses on the accuracy of the light intensity field reconstruction process. The simulation also identifies the most important parameters governing the experimental method and the tomographic algorithm parameters, showing their effect on the reconstruction accuracy. A computer-simulated experiment of a 3D particle motion field describing a vortex ring demonstrates the capability and potential of the proposed system with four cameras. The capability of the technique in real experimental conditions is assessed with the measurement of the turbulent flow in the near wake of a circular cylinder at Reynolds 2,700.

1,159 citations
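
For readers unfamiliar with MART, the short sketch below shows the multiplicative update at the heart of such tomographic reconstructions, assuming a precomputed weight matrix relating voxels to camera pixels. The dense toy weights, relaxation factor, and random test problem are illustrative assumptions, not the authors' tomographic-PIV implementation.

```python
import numpy as np

def mart(W, p, n_iter=20, mu=1.0, eps=1e-12):
    """Recover a non-negative intensity field E from projections p ~ W @ E
    using the multiplicative algebraic reconstruction technique (MART)."""
    E = np.ones(W.shape[1])                    # uniform positive initialization
    for _ in range(n_iter):
        for i in range(W.shape[0]):            # loop over line-of-sight equations
            w_i = W[i]
            proj = w_i @ E                     # current estimate for pixel i
            if proj < eps:
                continue
            # multiplicative update, damped by relaxation mu and the ray weights
            E *= (p[i] / proj) ** (mu * w_i)
    return E

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.uniform(size=(60, 25))             # toy weights: 60 rays, 25 voxels
    E_true = rng.uniform(0.0, 2.0, size=25)
    p = W @ E_true
    E_rec = mart(W, p, n_iter=50)
    print("relative error:", np.linalg.norm(E_rec - E_true) / np.linalg.norm(E_true))
```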


Journal Article
TL;DR: In this paper, the authors developed and investigated an iterative image reconstruction algorithm based on the minimization of the image total variation (TV) that applies to divergent-beam CT.
Abstract: In practical applications of tomographic imaging, there are often challenges for image reconstruction due to under-sampling and insufficient data. In computed tomography (CT), for example, image reconstruction from few views would enable rapid scanning with a reduced x-ray dose delivered to the patient. Limited-angle problems are also of practical significance in CT. In this work, we develop and investigate an iterative image reconstruction algorithm based on the minimization of the image total variation (TV) that applies to divergent-beam CT. Numerical demonstrations of our TV algorithm are performed with various insufficient data problems in fan-beam CT. The TV algorithm can be generalized to cone-beam CT as well as other tomographic imaging modalities.

1,009 citations
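
To make the TV-minimization idea above concrete, here is a rough sketch that alternates an algebraic data-consistency pass with a few steepest-descent steps on the image total variation; it is in the spirit of the approach described in the abstract, not the authors' exact procedure. The generic projector A, step sizes, and iteration counts are assumptions.

```python
import numpy as np

def art_pass(x, A, b, relax=0.2):
    """One sweep of the algebraic reconstruction technique (data consistency)."""
    for i in range(A.shape[0]):
        a = A[i]
        denom = a @ a
        if denom > 0:
            x = x + relax * (b[i] - a @ x) / denom * a
    return np.clip(x, 0, None)                 # non-negativity constraint

def tv_gradient(img, eps=1e-8):
    """Approximate gradient of the isotropic total variation of a 2D image."""
    dx = np.diff(img, axis=0, append=img[-1:, :])
    dy = np.diff(img, axis=1, append=img[:, -1:])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    div_x = np.diff(dx / mag, axis=0, prepend=(dx / mag)[:1, :])
    div_y = np.diff(dy / mag, axis=1, prepend=(dy / mag)[:, :1])
    return -(div_x + div_y)

def tv_reconstruct(A, b, shape, n_outer=30, n_tv=10, tv_step=0.02):
    """Alternate data-consistency sweeps with TV steepest-descent steps."""
    x = np.zeros(np.prod(shape))
    for _ in range(n_outer):
        x = art_pass(x, A, b)
        img = x.reshape(shape)
        for _ in range(n_tv):
            img = img - tv_step * tv_gradient(img)
        x = img.ravel()
    return x.reshape(shape)
```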


Journal ArticleDOI
TL;DR: An automated method for the segmentation of the vascular network in retinal images that outperforms other solutions and approximates the average accuracy of a human observer without a significant degradation of sensitivity and specificity is presented.
Abstract: This paper presents an automated method for the segmentation of the vascular network in retinal images. The algorithm starts with the extraction of vessel centerlines, which are used as guidelines for the subsequent vessel filling phase. For this purpose, the outputs of four directional differential operators are processed in order to select connected sets of candidate points to be further classified as centerline pixels using vessel derived features. The final segmentation is obtained using an iterative region growing method that integrates the contents of several binary images resulting from vessel width dependent morphological filters. Our approach was tested on two publicly available databases and its results are compared with recently published methods. The results demonstrate that our algorithm outperforms other solutions and approximates the average accuracy of a human observer without a significant degradation of sensitivity and specificity.

900 citations
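
The vessel-filling stage can be pictured with the short sketch below, a highly simplified stand-in for the paper's region-growing phase: growth starts from centerline seeds and only expands into pixels supported by at least one of the binary vessel maps. The precomputed boolean inputs and the 8-connectivity choice are assumptions.

```python
import numpy as np
from scipy import ndimage

def grow_vessels(centerline_seeds, binary_maps):
    """Iteratively dilate the seed region, restricted to the union of binary maps."""
    support = np.logical_or.reduce(binary_maps)        # candidate vessel pixels
    grown = centerline_seeds & support
    struct = ndimage.generate_binary_structure(2, 2)   # 8-connected growth
    while True:
        dilated = ndimage.binary_dilation(grown, structure=struct) & support
        if np.array_equal(dilated, grown):             # no new pixels added: stop
            break
        grown = dilated
    return grown
```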


Proceedings ArticleDOI
01 Jan 2006
TL;DR: This work shows that when applied to human faces, the constrained local model (CLM) algorithm is more robust and more accurate than the original AAM search method, which relies on the image reconstruction error to update the model parameters.
Abstract: We present an efficient and robust model matching method which uses a joint shape and texture appearance model to generate a set of region template detectors. The model is fitted to an unseen image in an iterative manner by generating templates using the joint model and the current parameter estimates, correlating the templates with the target image to generate response images and optimising the shape parameters so as to maximise the sum of responses. The appearance model is similar to that used in the Active Appearance Model due to Cootes et al. However in our approach the appearance model is used to generate likely feature templates, instead of trying to approximate the image pixels directly. We show that when applied to human faces, our constrained local model (CLM) algorithm is more robust and more accurate than the original AAM search method, which relies on the image reconstruction error to update the model parameters. We demonstrate improved localisation accuracy on two publicly available face data sets and improved tracking on a challenging set of in-car face sequences.

802 citations


Journal ArticleDOI
TL;DR: Experimental data were used to compare images reconstructed by the standard iterative reconstruction software and the one modeling the response function, and the results showed that the modeling of the response function improves both spatial resolution and noise properties.
Abstract: The quality of images reconstructed by statistical iterative methods depends on an accurate model of the relationship between image space and projection space through the system matrix. The elements of the system matrix for the clinical Hi-Rez scanner were derived by processing the data measured for a point source at different positions in a portion of the field of view. These measured data included axial compression and azimuthal interleaving of adjacent projections. Measured data were corrected for crystal and geometrical efficiency. Then, a whole system matrix was derived by processing the responses in projection space. Such responses included both geometrical and detection physics components of the system matrix. The response was parameterized to correct for point source location and to smooth for projection noise. The model also accounts for axial compression (span) used on the scanner. The forward projector for iterative reconstruction was constructed using the estimated response parameters. This paper extends our previous work to fully three-dimensional. Experimental data were used to compare images reconstructed by the standard iterative reconstruction software and the one modeling the response function. The results showed that the modeling of the response function improves both spatial resolution and noise properties.

520 citations


Journal ArticleDOI
TL;DR: This work investigated a penalized weighted least-squares (PWLS) approach to address this problem in two dimensions, where the WLS considers first- and second-order noise moments and the penalty models signal spatial correlations.
Abstract: Reconstructing low-dose X-ray computed tomography (CT) images is a noise problem. This work investigated a penalized weighted least-squares (PWLS) approach to address this problem in two dimensions, where the WLS considers first- and second-order noise moments and the penalty models signal spatial correlations. Three different implementations were studied for the PWLS minimization. One utilizes a Markov random field (MRF) Gibbs functional to consider spatial correlations among nearby detector bins and projection views in sinogram space and minimizes the PWLS cost function by an iterative Gauss-Seidel algorithm. Another employs the Karhunen-Loeve (KL) transform to de-correlate data signals among nearby views and minimizes the PWLS adaptively to each KL component by analytical calculation, where the spatial correlation among nearby bins is modeled by the same Gibbs functional. The third one models the spatial correlations among image pixels in the image domain, also by an MRF Gibbs functional, and minimizes the PWLS by an iterative successive over-relaxation algorithm. In these three implementations, a quadratic functional regularization was chosen for the MRF model. Phantom experiments showed a comparable performance of these three PWLS-based methods in terms of suppressing noise-induced streak artifacts and preserving resolution in the reconstructed images. Computer simulations concurred with the phantom experiments in terms of noise-resolution tradeoff and detectability in a low-contrast environment. The KL-PWLS implementation may have the advantage in terms of computation for high-resolution dynamic low-dose CT imaging.

519 citations
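
To make the PWLS objective concrete, the sketch below writes the weighted data-fidelity term plus a quadratic MRF-style neighborhood penalty and minimizes it by plain gradient descent; the paper's Gauss-Seidel, KL-domain, and successive over-relaxation solvers are not reproduced here, and the projector A, noise variances, and hyperparameters are assumptions.

```python
import numpy as np

def pwls_reconstruct(A, y, sigma2, shape, beta=0.1, step=1e-3, n_iter=200):
    """Gradient descent on (y - A x)' diag(1/sigma2) (y - A x)
    + beta * sum of squared differences between 4-connected neighbours."""
    w = 1.0 / sigma2
    x = np.zeros(np.prod(shape))
    for _ in range(n_iter):
        img = x.reshape(shape)
        # gradient of the quadratic MRF penalty (sum over 4-neighbour differences)
        pen = np.zeros_like(img)
        pen[1:, :] += img[1:, :] - img[:-1, :]
        pen[:-1, :] += img[:-1, :] - img[1:, :]
        pen[:, 1:] += img[:, 1:] - img[:, :-1]
        pen[:, :-1] += img[:, :-1] - img[:, 1:]
        grad = -2.0 * (A.T @ (w * (y - A @ x))) + 2.0 * beta * pen.ravel()
        x = np.clip(x - step * grad, 0, None)   # keep attenuation non-negative
    return x.reshape(shape)
```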


Proceedings ArticleDOI
17 Jun 2006
TL;DR: A method is described that estimates the motion of a calibrated camera and the tridimensional geometry of the environment, together with a fast and local bundle adjustment method that ensures both good accuracy and consistency of the estimated camera poses along the sequence.
Abstract: In this paper we describe a method that estimates the motion of a calibrated camera (settled on an experimental vehicle) and the tridimensional geometry of the environment. The only data used is a video input. In fact, interest points are tracked and matched between frames at video rate. Robust estimates of the camera motion are computed in real-time, key-frames are selected and permit the features 3D reconstruction. The algorithm is particularly appropriate to the reconstruction of long images sequences thanks to the introduction of a fast and local bundle adjustment method that ensures both good accuracy and consistency of the estimated camera poses along the sequence. It also largely reduces computational complexity compared to a global bundle adjustment. Experiments on real data were carried out to evaluate speed and robustness of the method for a sequence of about one kilometer long. Results are also compared to the ground truth measured with a differential GPS.

436 citations


Journal ArticleDOI
TL;DR: The performance of a new dual-source CT with a heart-rate-independent temporal resolution of 83 ms is evaluated for the visualization of the coronary arteries in 14 consecutive patients; the system constitutes a promising new concept for cardiac CT.

409 citations


Journal ArticleDOI
TL;DR: A review of recent progress in developing statistically based iterative techniques for emission computed tomography describes the different formulations of the emission image reconstruction problem and their properties, along with the numerical algorithms used for optimizing these functions.
Abstract: In emission tomography statistically based iterative methods can improve image quality relative to analytic image reconstruction through more accurate physical and statistical modelling of high-energy photon production and detection processes. Continued exponential improvements in computing power, coupled with the development of fast algorithms, have made routine use of iterative techniques practical, resulting in their increasing popularity in both clinical and research environments. Here we review recent progress in developing statistically based iterative techniques for emission computed tomography. We describe the different formulations of the emission image reconstruction problem and their properties. We then describe the numerical algorithms that are used for optimizing these functions and illustrate their behaviour using small scale simulations.
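
As a reference point for the algorithms reviewed above, here is a minimal ML-EM (maximum-likelihood expectation-maximization) sketch for Poisson projection data; the dense toy system matrix and simulated counts are assumptions used only for illustration.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Iterate x <- x / sens * A^T (y / (A x)), the standard EM update for
    Poisson-distributed projection counts y."""
    sens = A.sum(axis=0)                      # sensitivity image: column sums of A
    x = np.ones(A.shape[1])                   # strictly positive initial estimate
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        ratio = y / np.maximum(proj, eps)     # measured / estimated counts
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.uniform(size=(200, 64))           # toy projector: 200 rays, 64 voxels
    x_true = rng.uniform(size=64)
    y = rng.poisson(A @ x_true * 50) / 50.0   # noisy counts, rescaled
    x_hat = mlem(A, y, n_iter=100)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```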

Proceedings ArticleDOI
01 Dec 2006
TL;DR: This paper proposes algorithms and hardware to support a new theory of compressive imaging based on a new digital image/video camera that directly acquires random projections of the signal without first collecting the pixels/voxels.
Abstract: Compressive Sensing is an emerging field based on the revelation that a small group of non-adaptive linear projections of a compressible signal contains enough information for reconstruction and processing. In this paper, we propose algorithms and hardware to support a new theory of Compressive Imaging. Our approach is based on a new digital image/video camera that directly acquires random projections of the signal without first collecting the pixels/voxels. Our camera architecture employs a digital micromirror array to perform optical calculations of linear projections of an image onto pseudo-random binary patterns. Its hallmarks include the ability to obtain an image with a single detection element while measuring the image/video fewer times than the number of pixels; this can significantly reduce the computation required for video acquisition/encoding. Because our system relies on a single photon detector, it can also be adapted to image at wavelengths that are currently impossible with conventional CCD and CMOS imagers. We are currently testing a prototype design for the camera and include experimental results.
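
The measurement model can be emulated in a few lines: each pseudo-random pattern yields one detector reading, and a sparse scene is then recovered from far fewer readings than pixels. The ±1 patterns, the use of iterative soft-thresholding (rather than the reconstruction algorithms the authors actually use), and all sizes below are assumptions in this sketch.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(Phi, y, lam=0.01, n_iter=2000):
    """Minimize 0.5*||y - Phi x||^2 + lam*||x||_1 by proximal gradient steps (ISTA)."""
    L = np.linalg.norm(Phi, 2) ** 2            # Lipschitz constant of the smooth part
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n, m, k = 256, 96, 8                        # pixels, measurements, nonzeros
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)
    Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)   # pseudo-random patterns
    y = Phi @ x_true                            # one single-detector reading per pattern
    x_hat = ista(Phi, y)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```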

Proceedings ArticleDOI
17 Jun 2006
TL;DR: This paper compares classical shape from stereo with shape from synthetic aperture focus, and describes two variants of multi-view stereo based on color medians and entropy that increase robustness to occlusions.
Abstract: Most algorithms for 3D reconstruction from images use cost functions based on SSD, which assume that the surfaces being reconstructed are visible to all cameras. This makes it difficult to reconstruct objects which are partially occluded. Recently, researchers working with large camera arrays have shown it is possible to "see through" occlusions using a technique called synthetic aperture focusing. This suggests that we can design alternative cost functions that are robust to occlusions using synthetic apertures. Our paper explores this design space. We compare classical shape from stereo with shape from synthetic aperture focus. We also describe two variants of multi-view stereo based on color medians and entropy that increase robustness to occlusions. We present an experimental comparison of these cost functions on complex light fields, measuring their accuracy against the amount of occlusion.

Journal ArticleDOI
TL;DR: It was found that for the phantoms with homogeneous background, the BP method resulted in less noisy tomosynthesized images and higher CNR values for masses than the SART and ML-convex methods, however, the two iterative methods provided greater contrast enhancement for both masses and calcification, sharper LOSF, and reduced interplane blurring and artifacts with better ASF behaviors for masses.
Abstract: Digital tomosynthesis mammography (DTM) is a promising new modality for breast cancer detection. In DTM, projection-view images are acquired at a limited number of angles over a limited angular range and the imaged volume is reconstructed from the two-dimensional projections, thus providing three-dimensional structural information of the breast tissue. In this work, we investigated three representative reconstruction methods for this limited-angle cone-beam tomographic problem, including the backprojection (BP) method, the simultaneous algebraic reconstruction technique (SART) and the maximum likelihood method with the convex algorithm (ML-convex). The SART and ML-convex methods were both initialized with BP results to achieve efficient reconstruction. A second generation GE prototype tomosynthesis mammography system with a stationary digital detector was used for image acquisition. Projection-view images were acquired from 21 angles in 3° increments over a ±30° angular range. We used an American College of Radiology phantom and designed three additional phantoms to evaluate the image quality and reconstruction artifacts. In addition to visual comparison of the reconstructed images of different phantom sets, we employed the contrast-to-noise ratio (CNR), a line profile of features, an artifact spread function (ASF), a relative noise power spectrum (NPS), and a line object spread function (LOSF) to quantitatively evaluate the reconstruction results. It was found that for the phantoms with homogeneous background, the BP method resulted in less noisy tomosynthesized images and higher CNR values for masses than the SART and ML-convex methods. However, the two iterative methods provided greater contrast enhancement for both masses and calcification, sharper LOSF, and reduced interplane blurring and artifacts with better ASF behaviors for masses. For a contrast-detail phantom with heterogeneous tissue-mimicking background, the BP method had strong blurring artifacts along the x-ray source motion direction that obscured the contrast-detail objects, while the other two methods can remove the superimposed breast structures and significantly improve object conspicuity. With a properly selected relaxation parameter, the SART method with one iteration can provide tomosynthesized images comparable to those obtained from the ML-convex method with seven iterations, when BP results were used as initialization for both methods.
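
The SART update, shown here with a simple backprojection initialization in the spirit of the BP-initialized scheme above, can be sketched for a generic linear projector as follows; the dense toy matrix, relaxation factor, and iteration count are assumptions.

```python
import numpy as np

def sart(A, y, relax=0.3, n_iter=10):
    """Simultaneous algebraic reconstruction technique with BP initialization."""
    row_sums = A.sum(axis=1)                   # per-ray normalization
    col_sums = A.sum(axis=0)                   # per-voxel normalization
    x = A.T @ y / np.maximum(col_sums, 1e-12)  # simple backprojection starting image
    for _ in range(n_iter):
        residual = (y - A @ x) / np.maximum(row_sums, 1e-12)
        x += relax * (A.T @ residual) / np.maximum(col_sums, 1e-12)
        x = np.clip(x, 0, None)                # enforce non-negativity
    return x
```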

Journal ArticleDOI
TL;DR: A novel framework for lossless (invertible) authentication watermarking is presented, which enables zero-distortion reconstruction of the un-watermarked images upon verification, enables public(-key) authentication without granting access to the perfect original, and allows for efficient tamper localization.
Abstract: We present a novel framework for lossless (invertible) authentication watermarking, which enables zero-distortion reconstruction of the un-watermarked images upon verification. As opposed to earlier lossless authentication methods that required reconstruction of the original image prior to validation, the new framework allows validation of the watermarked images before recovery of the original image. This reduces computational requirements in situations when either the verification step fails or the zero-distortion reconstruction is not needed. For verified images, integrity of the reconstructed image is ensured by the uniqueness of the reconstruction procedure. The framework also enables public(-key) authentication without granting access to the perfect original and allows for efficient tamper localization. Effectiveness of the framework is demonstrated by implementing the framework using hierarchical image authentication along with lossless generalized-least significant bit data embedding.

Journal ArticleDOI
TL;DR: A simple iterative algorithm is described to recover the distribution of optical absorption coefficients from the image of the absorbed optical energy, which incorporates a diffusion-based finite-element model of light transport.
Abstract: Photoacoustic imaging is a noninvasive biomedical imaging modality for visualizing the internal structure and function of soft tissues. Conventionally, an image proportional to the absorbed optical energy is reconstructed from measurements of light-induced acoustic emissions. We describe a simple iterative algorithm to recover the distribution of optical absorption coefficients from the image of the absorbed optical energy. The algorithm, which incorporates a diffusion-based finite-element model of light transport, converges quickly onto an accurate estimate of the distribution of absolute absorption coefficients. Two-dimensional examples with physiologically realistic optical properties are shown. The ability to recover optical properties (which directly reflect tissue physiology) could enhance photoacoustic imaging techniques, particularly methods based on spectroscopic analysis of chromophores.
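
A toy version of that fixed-point loop is sketched below: the absorbed-energy image equals absorption times fluence, so the fluence is recomputed from the current absorption estimate and the absorption is updated by division. The light model here is a crude depth-attenuation placeholder and purely an assumption; the paper uses a diffusion-based finite-element model.

```python
import numpy as np

def compute_fluence(mu_a, phi0=1.0, depth_axis=0, dx=0.1):
    """Placeholder light model: Beer-Lambert-style decay with depth.
    A real implementation would solve the diffusion equation (e.g., with FEM)."""
    attenuation = np.cumsum(mu_a, axis=depth_axis) * dx
    return phi0 * np.exp(-attenuation)

def recover_absorption(H, n_iter=30, eps=1e-9):
    """Fixed-point iteration: mu_a <- H / fluence(mu_a)."""
    mu_a = np.full_like(H, 0.01)               # initial absorption guess (mm^-1)
    for _ in range(n_iter):
        phi = compute_fluence(mu_a)
        mu_a = H / np.maximum(phi, eps)
    return mu_a

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    mu_true = 0.01 + 0.02 * (rng.random((32, 32)) > 0.8)   # sparse absorbers
    H = mu_true * compute_fluence(mu_true)                  # simulated absorbed energy
    mu_hat = recover_absorption(H)
    print("max abs error:", np.abs(mu_hat - mu_true).max())
```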

Journal ArticleDOI
TL;DR: Two novel methods are proposed for face recognition under arbitrary unknown lighting using a spherical harmonics illumination representation, which require only one training image per subject and no 3D shape information.
Abstract: In this paper, we propose two novel methods for face recognition under arbitrary unknown lighting by using spherical harmonics illumination representation, which require only one training image per subject and no 3D shape information. Our methods are based on the result which demonstrated that the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace. We provide two methods to estimate the spherical harmonic basis images spanning this space from just one image. Our first method builds the statistical model based on a collection of 2D basis images. We demonstrate that, by using the learned statistics, we can estimate the spherical harmonic basis images from just one image taken under arbitrary illumination conditions if there is no pose variation. Compared to the first method, the second method builds the statistical models directly in 3D spaces by combining the spherical harmonic illumination representation and a 3D morphable model of human faces to recover basis images from images across both poses and illuminations. After estimating the basis images, we use the same recognition scheme for both methods: we recognize the face for which there exists a weighted combination of basis images that is the closest to the test face image. We provide a series of experiments that achieve high recognition rates, under a wide range of illumination conditions, including multiple sources of illumination. Our methods achieve comparable levels of accuracy with methods that have much more onerous training data requirements. Comparison of the two methods is also provided.

Journal ArticleDOI
TL;DR: An X-ray system with a large area detector has high scatter-to-primary ratios (SPRs), which result in severe artifacts in reconstructed computed tomography (CT) images, and a scatter correction algorithm is introduced that provides effective scatter correction but does not require additional patient exposure.
Abstract: An X-ray system with a large area detector has high scatter-to-primary ratios (SPRs), which result in severe artifacts in reconstructed computed tomography (CT) images. A scatter correction algorithm is introduced that provides effective scatter correction but does not require additional patient exposure. The key hypothesis of the algorithm is that the high-frequency components of the X-ray spatial distribution do not result in strong high-frequency signals in the scatter. A calibration sheet with a checkerboard pattern of semitransparent blockers (a "primary modulator") is inserted between the X-ray source and the object. The primary distribution is partially modulated by a high-frequency function, while the scatter distribution still has dominant low-frequency components, based on the hypothesis. Filtering and demodulation techniques suffice to extract the low-frequency components of the primary and hence obtain the scatter estimation. The hypothesis was validated using Monte Carlo (MC) simulation, and the algorithm was evaluated by both MC simulations and physical experiments. Reconstructions of a software humanoid phantom suggested system parameters in the physical implementation and showed that the proposed method reduced the relative mean square error of the reconstructed image in the central region of interest from 74.2% to below 1%. In preliminary physical experiments on the standard evaluation phantom, this error was reduced from 31.8% to 2.3%, and it was also demonstrated that the algorithm has no noticeable impact on the resolution of the reconstructed image in spite of the filter-based approach. Although the proposed scatter correction technique was implemented for X-ray CT, it can also be used in other X-ray imaging applications, as long as a primary modulator can be inserted between the X-ray source and the imaged object.
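
The core filtering/demodulation step can be illustrated in one dimension: writing the modulation as a mean level plus a ±1 carrier, low-pass filtering the raw projection leaves mostly scatter plus the attenuated primary envelope, while demodulating with the carrier isolates the primary envelope alone. The carrier period, blocker transmission alpha, and the smooth test signals below are assumptions.

```python
import numpy as np

def lowpass(sig, width=32):
    kernel = np.ones(width) / width            # simple moving-average low-pass filter
    return np.convolve(sig, kernel, mode="same")

def estimate_scatter(p, carrier, alpha):
    """p = primary * modulation + scatter, with
    modulation = (1+alpha)/2 + (1-alpha)/2 * carrier (carrier = ±1 pattern)."""
    primary_low = 2.0 / (1.0 - alpha) * lowpass(p * carrier)   # demodulated primary envelope
    scatter_est = lowpass(p) - 0.5 * (1.0 + alpha) * primary_low
    return np.clip(scatter_est, 0, None), primary_low

if __name__ == "__main__":
    n, alpha = 1024, 0.7
    x = np.linspace(0, 1, n)
    primary = 1.0 + 0.5 * np.cos(2 * np.pi * 2 * x)             # slowly varying primary
    scatter = 0.8 * np.exp(-((x - 0.5) / 0.3) ** 2)             # smooth scatter background
    carrier = np.where((np.arange(n) // 8) % 2 == 0, 1.0, -1.0) # checkerboard-like carrier
    modulation = 0.5 * (1 + alpha) + 0.5 * (1 - alpha) * carrier
    p = primary * modulation + scatter
    scatter_hat, _ = estimate_scatter(p, carrier, alpha)
    print("mean abs scatter error:", np.abs(scatter_hat - scatter)[64:-64].mean())
```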

Journal ArticleDOI
Jiang Hsieh1, John Londt1, Melissa Vass1, Jay Li1, Xiangyang Tang1, Darin R. Okerlund1 
TL;DR: The key to the proposed protocol is the large volume coverage enabled by the cone beam CT scanner, which allows the coverage of the entire heart in 3 to 4 steps, and a gated complementary reconstruction algorithm that overcomes the longitudinal truncation problem resulting from the cone beam geometry.
Abstract: Coronary artery imaging with x-ray computed tomography (CT) is one of the most recent advancements in CT clinical applications. Although existing "state-of-the-art" clinical protocols today utilize helical data acquisition, it suffers from an inability to handle irregular heart rates and from a relatively high x-ray dose to patients. In this paper, we propose a step-and-shoot data acquisition protocol that significantly overcomes these shortcomings. The key to the proposed protocol is the large volume coverage (40 mm) enabled by the cone beam CT scanner, which allows the coverage of the entire heart in 3 to 4 steps. In addition, we propose a gated complementary reconstruction algorithm that overcomes the longitudinal truncation problem resulting from the cone beam geometry. Computer simulations, phantom experiments, and clinical studies were conducted to validate our approach.

Journal ArticleDOI
Tianfang Li1, B. Thorndyke1, Eduard Schreibmann1, Yong Yang1, Lei Xing1 
TL;DR: A method to enhance the performance of 4D PET by developing a new technique of 4D PET reconstruction with incorporation of an organ motion model derived from 4D-CT images based on the well-known maximum-likelihood expectation-maximization (ML-EM) algorithm is proposed.
Abstract: Positron emission tomography (PET) is useful in diagnosis and radiation treatment planning for a variety of cancers. For patients with cancers in the thoracic or upper abdominal region, the respiratory motion produces large distortions in the tumor shape and size, affecting the accuracy in both diagnosis and treatment. Four-dimensional (4D) (gated) PET aims to reduce the motion artifacts and to provide accurate measurement of the tumor volume and the tracer concentration. A major issue in 4D PET is the lack of statistics. Since the collected photons are divided into several frames in the 4D PET scan, the quality of each reconstructed frame degrades as the number of frames increases. The increased noise in each frame heavily degrades the quantitative accuracy of the PET imaging. In this work, we propose a method to enhance the performance of 4D PET by developing a new technique of 4D PET reconstruction with incorporation of an organ motion model derived from 4D-CT images. The method is based on the well-known maximum-likelihood expectation-maximization (ML-EM) algorithm. During the processes of forward- and backward-projection in the ML-EM iterations, all projection data acquired at different phases are combined together to update the emission map with the aid of the deformable model; the statistics are therefore greatly improved. The proposed algorithm was first evaluated with computer simulations using a mathematical dynamic phantom. An experiment with a moving physical phantom was then carried out to demonstrate the accuracy of the proposed method and the increase of signal-to-noise ratio over three-dimensional PET. Finally, the 4D PET reconstruction was applied to a patient case.
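
A stripped-down version of that idea is sketched below: a single reference-phase image is updated with ML-EM using the projection data from every respiratory gate, each mapped through its own deformation operator. The linear warp matrices T[g], the generic projector A, and all sizes are illustrative assumptions; in the paper the deformations are derived from 4D-CT.

```python
import numpy as np

def gated_mlem(A, T, y, n_iter=30, eps=1e-12):
    """A: projector (rays x voxels), T: list of (voxels x voxels) warp matrices,
    y: list of gated sinograms. Returns the reference-phase emission estimate."""
    n_vox = A.shape[1]
    x = np.ones(n_vox)                                    # positive initialization
    # overall sensitivity: backproject a unit sinogram through every gate's warp
    sens = sum(Tg.T @ (A.T @ np.ones(A.shape[0])) for Tg in T)
    for _ in range(n_iter):
        update = np.zeros(n_vox)
        for Tg, yg in zip(T, y):
            proj = A @ (Tg @ x)                           # forward project warped image
            update += Tg.T @ (A.T @ (yg / np.maximum(proj, eps)))
        x *= update / np.maximum(sens, eps)               # combined EM update
    return x
```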

Journal ArticleDOI
TL;DR: In this article, a data sufficiency condition for 2D or 3D region-of-interest (ROI) reconstruction from a limited family of line integrals has been introduced using the relation between the backprojection of a derivative of the data and the Hilbert transform of the image along certain segments of lines covering the ROI.
Abstract: A data sufficiency condition for 2D or 3D region-of-interest (ROI) reconstruction from a limited family of line integrals has recently been introduced using the relation between the backprojection of a derivative of the data and the Hilbert transform of the image along certain segments of lines covering the ROI. This paper generalizes this sufficiency condition by showing that unique and stable reconstruction can be achieved from an even more restricted family of data sets, or, conversely, that even larger ROIs can be reconstructed from a given data set. The condition is derived by analysing the inversion of the truncated Hilbert transform, here defined as the problem of recovering a function of one real variable from the knowledge of its Hilbert transform along a segment which only partially covers the support of the function but has at least one end point outside that support. A proof of uniqueness and a stability estimate are given for this problem. Numerical simulations of a 2D thorax phantom are presented to illustrate the new data sufficiency condition and the good stability of the ROI reconstruction in the presence of noise.

Journal ArticleDOI
TL;DR: The image reconstruction was optimized using a noise model for diffuse correlation tomography which enabled better data selection and regularization in 3D tomography of cerebral blood flow in small animal models.
Abstract: Diffuse optical correlation methods were adapted for three-dimensional (3D) tomography of cerebral blood flow (CBF) in small animal models. The image reconstruction was optimized using a noise model for diffuse correlation tomography which enabled better data selection and regularization. The tomographic approach was demonstrated with simulated data and during in-vivo cortical spreading depression (CSD) in rat brain. Three-dimensional images of CBF were obtained through intact skull in tissues (~4 mm) deep below the cortex.

Journal ArticleDOI
TL;DR: The Medium Energy Gamma-ray Astronomy Library (MEGAlib), as discussed by the authors, is a set of software tools designed to analyze data from the next generation of Compton telescopes; it comprises all necessary data analysis steps from data acquisition or simulation via event reconstruction to image reconstruction.

Journal ArticleDOI
TL;DR: This work quantitatively studies the influence of organ motion on CBCT imaging and investigates a strategy to acquire high-quality phase-resolved [four-dimensional (4D)] CBCT images based on phase binning of the CBCT projection data.
Abstract: On-board cone-beam computed tomography (CBCT) has recently become available to provide volumetric information of a patient in the treatment position, and holds promise for improved target localization and irradiation dose verification. The design of currently available on-board CBCT, however, is far from optimal. Its quality is adversely influenced by many factors, such as scatter, beam hardening, and intra-scanning organ motion. In this work we quantitatively study the influence of organ motion on CBCT imaging and investigate a strategy to acquire high-quality phase-resolved [four-dimensional (4D)] CBCT images based on phase binning of the CBCT projection data. An efficient and robust method for binning CBCT data according to the patient's respiratory phase derived in the projection space was developed. The phase-binned projections were reconstructed using the conventional Feldkamp algorithm to yield 4D CBCT images. Both phantom and patient studies were carried out to validate the technique and to optimize the 4D CBCT data acquisition protocol. Several factors that are important to the clinical implementation of the technique, such as the image quality, scanning time, number of projections, and radiation dose, were analyzed for various scanning schemes. The general conclusions drawn from this study are: (i) reliable phase binning of CBCT projections is accomplishable with the aid of an external or internal marker and simple analysis of its trace in the projection space, and (ii) artifact-free 4D CBCT images can be obtained without increasing the patient radiation dose as compared to the current 3D CBCT scan.
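
A minimal sketch of the phase-binning step (not the full reconstruction) is given below: end-inhale peaks are detected in a 1D respiratory trace derived from the projections, phase is interpolated between peaks, and projection indices are grouped into bins; each bin would then be reconstructed separately with a Feldkamp-type algorithm. The synthetic trace, peak-detection settings, and bin count are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def bin_projections_by_phase(trace, n_bins=8, min_peak_distance=20):
    """Assign each projection index to a respiratory phase bin from a 1D trace."""
    peaks, _ = find_peaks(trace, distance=min_peak_distance)   # end-inhale positions
    phase = np.zeros_like(trace, dtype=float)
    for a, b in zip(peaks[:-1], peaks[1:]):                    # linear phase between peaks
        phase[a:b] = np.linspace(0.0, 1.0, b - a, endpoint=False)
    # note: samples before the first / after the last detected peak default to bin 0 here
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    return [np.where(bins == k)[0] for k in range(n_bins)], phase

if __name__ == "__main__":
    t = np.linspace(0, 60, 660)                    # one-minute scan, ~11 projections/s
    trace = np.cos(2 * np.pi * t / 4.0)            # 4-second breathing-cycle surrogate
    groups, phase = bin_projections_by_phase(trace)
    print([len(g) for g in groups])                # projections available per phase bin
```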

Journal ArticleDOI
TL;DR: The range of clinical applications for MV CBCT is expected to grow as imaging technology continues to improve, and the system demonstrates submillimeter localization precision and sufficient soft-tissue resolution to visualize structures such as the prostate.

Journal ArticleDOI
TL;DR: A new high-precision method for cone beam CT system calibration that uses multiple projection images acquired from rotating point-like objects and the angle information generated from the rotating gantry system is presented.
Abstract: Cone beam CT systems are being deployed in large numbers for small animal imaging, dental imaging, and other specialty applications. A new high-precision method for cone beam CT system calibration is presented in this paper. It uses multiple projection images acquired from rotating point-like objects (metal ball bearings) and the angle information generated from the rotating gantry system is also used. It is assumed that the whole system has a mechanically stable rotation center and that the detector does not have severe out-of-plane rotation (<2 degrees). Simple geometrical relationships between the orbital paths of individual BBs and five system parameters were derived. Computer simulations were employed to validate the accuracy of this method in the presence of noise. Equal or higher accuracy was achieved compared with previous methods. This method was implemented for the geometrical calibration of both a micro CT scanner and a breast CT scanner. The reconstructed tomographic images demonstrated that the proposed method is robust and easy to implement with high precision.

Journal ArticleDOI
TL;DR: This paper presents a method for fully automatic and robust estimation of two-view geometry, autocalibration, and 3D metric reconstruction from point correspondences in images taken by cameras with wide circular field of view, and shows that epipolar geometry of these cameras can be estimated from a small number of correspondences by solving a polynomial eigenvalue problem.
Abstract: This paper presents a method for fully automatic and robust estimation of two-view geometry, autocalibration, and 3D metric reconstruction from point correspondences in images taken by cameras with wide circular field of view. We focus on cameras which have more than 180° field of view and for which the standard perspective camera model is not sufficient, e.g., the cameras equipped with circular fish-eye lenses Nikon FC-E8 (183°), Sigma 8 mm-f4-EX (180°), or with curved conical mirrors. We assume a circular field of view and axially symmetric image projection to autocalibrate the cameras. Many wide field of view cameras can still be modeled by the central projection followed by a nonlinear image mapping. Examples are the above-mentioned fish-eye lenses and properly assembled catadioptric cameras with conical mirrors. We show that epipolar geometry of these cameras can be estimated from a small number of correspondences by solving a polynomial eigenvalue problem. This allows the use of efficient RANSAC robust estimation to find the image projection model, the epipolar geometry, and the selection of true point correspondences from tentative correspondences contaminated by mismatches. Real catadioptric cameras are often slightly noncentral. We show that the proposed autocalibration with approximate central models is usually good enough to get correct point correspondences which can be used with accurate noncentral models in a bundle adjustment to obtain accurate 3D scene reconstruction. Noncentral camera models are dealt with and results are shown for catadioptric cameras with parabolic and spherical mirrors.

Journal ArticleDOI
TL;DR: A new approach for improving the resolution of PET images using a super-resolution method has been developed and experimentally confirmed, employing a clinical scanner; the improvement in axial resolution requires no changes in hardware.
Abstract: This paper demonstrates a super-resolution method for improving the resolution in clinical positron emission tomography (PET) scanners. Super-resolution images were obtained by combining four data sets with spatial shifts between consecutive acquisitions and applying an iterative algorithm. Super-resolution attenuation corrected PET scans of a phantom were obtained using the two-dimensional and three-dimensional (3-D) acquisition modes of a clinical PET/computed tomography (CT) scanner (Discovery LS, GEMS). In a patient study, following a standard 18F-FDG PET/CT scan, a super-resolution scan around one small lesion was performed using axial shifts without increasing the patient radiation exposure. In the phantom study, smaller features (3 mm) could be resolved axially with the super-resolution method than without (6 mm). The super-resolution images had better resolution than the original images and provided higher contrast ratios in coronal images and in 3-D acquisition transaxial images. The coronal super-resolution images had superior resolution and contrast ratios compared to images reconstructed by merely interleaving the data to the proper axial location. In the patient study, super-resolution reconstructions displayed a more localized 18F-FDG uptake. A new approach for improving the resolution of PET images using a super-resolution method has been developed and experimentally confirmed, employing a clinical scanner. The improvement in axial resolution requires no changes in hardware.
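
The combination step can be illustrated with a 1D iterative back-projection sketch: each acquisition is modeled as a shifted, detector-averaged copy of the underlying high-resolution activity, and the high-resolution estimate is refined by back-projecting every acquisition's residual. This is a generic stand-in, not the paper's specific iterative algorithm; the 1D toy below uses two half-bin-shifted acquisitions and a factor-of-two grid, and all settings are assumptions (the clinical study combined four shifted data sets in 3D).

```python
import numpy as np

def downsample(x, factor):
    return x.reshape(-1, factor).mean(axis=1)          # detector-bin averaging

def upsample(y, factor):
    return np.repeat(y, factor)                        # crude back-projection of residuals

def super_resolve(lowres, shifts, factor, n_iter=100, step=0.5):
    """Iterative back-projection from shifted, downsampled acquisitions."""
    n = len(lowres[0]) * factor
    x = np.zeros(n)
    for _ in range(n_iter):
        correction = np.zeros(n)
        for y, s in zip(lowres, shifts):
            simulated = downsample(np.roll(x, -s), factor)   # forward model for this shift
            err = upsample(y - simulated, factor)
            correction += np.roll(err, s)                    # undo the shift when back-projecting
        x += step * correction / len(lowres)
    return x

if __name__ == "__main__":
    factor, shifts = 2, [0, 1]                         # two acquisitions, half-bin offset
    x_true = np.exp(-0.5 * ((np.arange(64) - 20) / 2.5) ** 2)   # smooth activity profile
    lowres = [downsample(np.roll(x_true, -s), factor) for s in shifts]
    x_hat = super_resolve(lowres, shifts, factor)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```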

Journal ArticleDOI
TL;DR: A more accurate model of CT signal formation is described, taking into account the energy-integrating detection process, nonuniform flux profiles, and data-conditioning processes.
Abstract: The accurate determination of x-ray signal properties is important to several computed tomography (CT) research and development areas, notably for statistical reconstruction algorithms and dose-reduction simulation. The most commonly used model of CT signal formation, assuming monoenergetic x-ray sources with quantum counting detectors obeying simple Poisson statistics, does not reflect the actual physics of CT acquisition. This paper describes a more accurate model, taking into account the energy-integrating detection process, nonuniform flux profiles, and data-conditioning processes. Methods are developed to experimentally measure and theoretically calculate statistical distributions, as well as techniques to analyze CT signal properties. Results indicate the limitations of current models and suggest improvements for the description of CT signal properties.

Journal ArticleDOI
TL;DR: A processing environment is described that integrates and automates data processing and analysis functions for imaging of proton metabolite distributions in the normal human brain, thereby allowing the formation of a database of MR‐measured human metabolite values as a function of acquisition, spatial and subject parameters.
Abstract: Image reconstruction for magnetic resonance spectroscopic imaging (MRSI) requires specialized spatial and spectral data processing methods and benefits from the use of several sources of prior information that are not commonly available, including MRI-derived tissue segmentation, morphological analysis and spectral characteristics of the observed metabolites. In addition, incorporating information obtained from MRI data can enhance the display of low-resolution metabolite images and multiparametric and regional statistical analysis methods can improve detection of altered metabolite distributions. As a result, full MRSI processing and analysis can involve multiple processing steps and several different data types. In this paper, a processing environment is described that integrates and automates these data processing and analysis functions for imaging of proton metabolite distributions in the normal human brain. The capabilities include normalization of metabolite signal intensities and transformation into a common spatial reference frame, thereby allowing the formation of a database of MR-measured human metabolite values as a function of acquisition, spatial and subject parameters. This development is carried out under the MIDAS project (Metabolite Imaging and Data Analysis System), which provides an integrated set of MRI and MRSI processing functions. It is anticipated that further development and distribution of these capabilities will facilitate more widespread use of MRSI for diagnostic imaging, encourage the development of standardized MRSI acquisition, processing and analysis methods and enable improved mapping of metabolite distributions in the human brain.