
Showing papers on "Iterative reconstruction" published in 2007


Journal ArticleDOI
TL;DR: This paper adapts and expands kernel regression ideas for use in image denoising, upscaling, interpolation, fusion, and more, establishes key relationships with some popular existing methods, and shows how several of these algorithms are special cases of the proposed framework.
Abstract: In this paper, we make contact with the field of nonparametric statistics and present a development and generalization of tools and results for use in image processing and reconstruction. In particular, we adapt and expand kernel regression ideas for use in image denoising, upscaling, interpolation, fusion, and more. Furthermore, we establish key relationships with some popular existing methods and show how several of these algorithms, including the recently popularized bilateral filter, are special cases of the proposed framework. The resulting algorithms and analyses are amply illustrated with practical examples

1,457 citations
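As a concrete illustration of the connection claimed above, the sketch below implements zeroth-order (Nadaraya-Watson) kernel regression with a data-adaptive spatial-radiometric kernel, which is the bilateral-filter special case of the framework. This is a minimal sketch, not the authors' code; the function name and smoothing parameters are my own.

```python
# Minimal sketch: order-0 kernel regression for denoising with a data-adaptive
# kernel, which reproduces the bilateral filter. Expects a float image in [0, 1].
import numpy as np

def bilateral_kernel_regression(img, radius=3, h_spatial=2.0, h_range=0.1):
    """Nadaraya-Watson (order-0 kernel regression) estimate at every pixel."""
    pad = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img)
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_spatial = np.exp(-(xx**2 + yy**2) / (2.0 * h_spatial**2))
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            w_range = np.exp(-((patch - img[i, j])**2) / (2.0 * h_range**2))
            w = w_spatial * w_range                 # data-adaptive kernel
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out

# Usage (hypothetical): denoised = bilateral_kernel_regression(noisy_image)
```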


Book
29 May 2007
TL;DR: This book presents the principles of optical imaging in biological tissue, covering light-matter interaction (absorption, scattering, polarization, and fluorescence), Monte Carlo modeling of photon transport, the radiative transfer equation and diffusion theory, spectroscopy, ballistic imaging and microscopy, optical coherence tomography, diffuse optical tomography, photoacoustic tomography, and ultrasound-modulated optical tomography.
Abstract: Preface. 1. INTRODUCTION. 1.1.Motivation for optical imaging. 1.2.General behavior of light in biological tissue. 1.3.Basic physics of light-matter interaction. 1.4.Absorption and its biological origins. 1.5.Scattering and its biological origins. 1.6.Polarization and its biological origins. 1.7.Fluorescence and its biological origins. 1.8.Image characterization. 1.9.References. 1.10.Further readings. 1.11.Problems. 2. RAYLEIGH THEORY AND MIE THEORY FOR A SINGLE SCATTERER. 2.1.Introduction. 2.2.Summary of the Rayleigh theory. 2.3.Numerical example of the Rayleigh theory. 2.4.Summary of the Mie theory. 2.5.Numerical example of the Mie theory. 2.6.Appendix 2.A. Derivation of the Rayleigh theory. 2.7.Appendix 2.B. Derivation of the Mie theory. 2.8.References. 2.9.Further readings. 2.10.Problems. 3. MONTE CARLO MODELING OF PHOTON TRANSPORT IN BIOLOGICAL TISSUE. 3.1.Introduction. 3.2.Monte Carlo method. 3.3.Definition of problem. 3.4.Propagation of photons. 3.5.Physical quantities. 3.6.Computational examples. 3.7.Appendix 3.A. Summary of MCML. 3.8.Appendix 3.B. Probability density function. 3.9.References. 3.10.Further readings. 3.11.Problems. 4. CONVOLUTION FOR BROADBEAM RESPONSES. 4.1.Introduction. 4.2.General formulation of convolution. 4.3.Convolution over a Gaussian beam. 4.4.Convolution over a top-hat beam. 4.5.Numerical solution to convolution. 4.6.Computational examples. 4.7.Appendix 4.A. Summary of CONV. 4.8.References. 4.9.Further readings. 4.10.Problems. 5. RADIATIVE TRANSFER EQUATION AND DIFFUSION THEORY. 5.1.Introduction. 5.2.Definitions of physical quantities. 5.3.Derivation of the radiative transport equation. 5.4.Diffusion theory. 5.5.Boundary conditions. 5.6.Diffuse reflectance. 5.7.Photon propagation regimes. 5.8.References. 5.9.Further readings. 5.10.Problems. 6. HYBRID MODEL OF MONTE CARLO METHOD AND DIFFUSION THEORY. 6.1.Introduction. 6.2.Definition of problem. 6.3.Diffusion theory. 6.4.Hybrid model. 6.5.Numerical computation. 6.6.Computational examples. 6.7.References. 6.8.Further readings. 6.9.Problems. 7. SENSING OF OPTICAL PROPERTIES AND SPECTROSCOPY. 7.1.Introduction. 7.2.Collimated transmission method. 7.3.Spectrophotometry. 7.4.Oblique-incidence reflectometry. 7.5.White-light spectroscopy. 7.6.Time-resolved measurement. 7.7.Fluorescence spectroscopy. 7.8.Fluorescence modeling. 7.9.References. 7.10.Further readings. 7.11.Problems. 8. BALLISTIC IMAGING AND MICROSCOPY. 8.1.Introduction. 8.2.Characteristics of ballistic light. 8.3.Time-gated imaging. 8.4.Spatial-frequency filtered imaging. 8.5.Polarization-difference imaging. 8.6.Coherence-gated holographic imaging. 8.7.Optical heterodyne imaging. 8.8.Radon transformation and computed tomography. 8.9.Confocal microscopy. 8.10.Two-photon microscopy. 8.11.Appendix 8.A. Holography. 8.12.References. 8.13.Further readings. 8.14.Problems. 9. OPTICAL COHERENCE TOMOGRAPHY. 9.1.Introduction. 9.2.Michelson interferometry. 9.3.Coherence length and coherence time. 9.4.Time-domain OCT. 9.5.Fourier-domain rapid scanning optical delay line. 9.6.Fourier-domain OCT. 9.7.Doppler OCT. 9.8.Group velocity dispersion. 9.9.Monte Carlo modeling of OCT. 9.10.References. 9.11.Further readings. 9.12.Problems. 10. MUELLER OPTICAL COHERENCE TOMOGRAPHY. 10.1.Introduction. 10.2.Mueller calculus versus Jones calculus. 10.3.Polarization state. 10.4.Stokes vector. 10.5.Mueller matrix. 10.6.Mueller matrices for a rotator, a polarizer, and a retarder. 10.7.Measurement of Mueller matrix. 10.8.Jones vector. 10.9.Jones matrix. 
10.10.Jones matrices for a rotator, a polarizer, and a retarder. 10.11.Eigenvectors and eigenvalues of Jones matrix. 10.12.Conversion from Jones calculus to Mueller calculus. 10.13.Degree of polarization in OCT. 10.14.Serial Mueller OCT. 10.15.Parallel Mueller OCT. 10.16.References. 10.17.Further readings. 10.18.Problems. 11. DIFFUSE OPTICAL TOMOGRAPHY. 11.1.Introduction. 11.2.Modes of diffuse optical tomography. 11.3.Time-domain system. 11.4.Direct-current system. 11.5.Frequency-domain system. 11.6.Frequency-domain theory: basics. 11.7.Frequency-domain theory: linear image reconstruction. 11.8.Frequency-domain theory: general image reconstruction. 11.9.Appendix 11.A. ART and SIRT. 11.10.References. 11.11.Further readings. 11.12.Problems. 12. PHOTOACOUSTIC TOMOGRAPHY. 12.1.Introduction. 12.2.Motivation for photoacoustic tomography. 12.3.Initial photoacoustic pressure. 12.4.General photoacoustic equation. 12.5.General forward solution. 12.6.Delta-pulse excitation of a slab. 12.7.Delta-pulse excitation of a sphere. 12.8.Finite-duration pulse excitation of a thin slab. 12.9.Finite-duration pulse excitation of a small sphere. 12.10.Dark-field confocal photoacoustic microscopy. 12.11.Synthetic aperture image reconstruction. 12.12.General image reconstruction. 12.13.Appendix 12.A. Derivation of acoustic wave equation. 12.14.Appendix 12.B. Green's function approach. 12.15.References. 12.16.Further readings. 12.17.Problems. 13. ULTRASOUND-MODULATED OPTICAL TOMOGRAPHY. 13.1.Introduction. 13.2.Mechanisms of ultrasonic modulation of coherent light. 13.3.Time-resolved frequency-swept UOT. 13.4.Frequency-swept UOT with parallel-speckle detection. 13.5.Ultrasonically modulated virtual optical source. 13.6.Reconstruction-based UOT. 13.7.UOT with Fabry-Perot interferometry. References. Further readings. Problems. APPENDIX A. DEFINITIONS OF OPTICAL PROPERTIES. APPENDIX B. List of Acronyms. Index.

1,117 citations
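Chapter 3's Monte Carlo treatment of photon transport follows the standard recipe of exponential free paths, fractional weight deposition, and Henyey-Greenstein scattering. The sketch below is a minimal, hypothetical version of that recipe for an infinite homogeneous medium (no boundaries, no Russian roulette); all parameter values and names are illustrative, not taken from the book.

```python
# Hypothetical minimal sketch of MCML-style photon transport in an infinite
# homogeneous medium: exponential free paths, Henyey-Greenstein scattering,
# and fractional weight deposition.
import numpy as np

rng = np.random.default_rng(0)

def henyey_greenstein_cos(g):
    """Sample the cosine of the scattering polar angle."""
    xi = rng.random()
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0
    tmp = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - tmp * tmp) / (2.0 * g)

def scatter(u, g):
    """Rotate unit direction u by the HG polar angle and a uniform azimuth."""
    cos_t = henyey_greenstein_cos(g)
    sin_t = np.sqrt(max(0.0, 1.0 - cos_t**2))
    phi = 2.0 * np.pi * rng.random()
    ux, uy, uz = u
    if abs(uz) > 0.99999:                      # near the pole: simple formula
        return np.array([sin_t * np.cos(phi), sin_t * np.sin(phi),
                         np.sign(uz) * cos_t])
    denom = np.sqrt(1.0 - uz**2)
    return np.array([
        sin_t * (ux * uz * np.cos(phi) - uy * np.sin(phi)) / denom + ux * cos_t,
        sin_t * (uy * uz * np.cos(phi) + ux * np.sin(phi)) / denom + uy * cos_t,
        -sin_t * np.cos(phi) * denom + uz * cos_t,
    ])

def run_photon(mu_a=0.1, mu_s=10.0, g=0.9, w_min=1e-2):
    """Track one photon; return total path length travelled before termination."""
    mu_t = mu_a + mu_s
    pos, u, w, path = np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0, 0.0
    while w > w_min:
        s = -np.log(rng.random()) / mu_t       # free path (Beer-Lambert)
        pos, path = pos + s * u, path + s
        w *= mu_s / mu_t                       # deposit the absorbed fraction
        u = scatter(u, g)
    return path

print("mean photon path length:", np.mean([run_photon() for _ in range(200)]))
```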


Journal ArticleDOI
TL;DR: Enhanced image resolution and lower noise have been achieved, concurrently with the reduction of helical cone-beam artifacts, as demonstrated by phantom studies; clinical results further illustrate the capabilities of the algorithm on real patient data.
Abstract: Multislice helical computed tomography scanning offers the advantages of faster acquisition and wide organ coverage for routine clinical diagnostic purposes. However, image reconstruction is faced with the challenges of three-dimensional cone-beam geometry, data completeness issues, and low dosage. Of all available reconstruction methods, statistical iterative reconstruction (IR) techniques appear particularly promising since they provide the flexibility of accurate physical noise modeling and geometric system description. In this paper, we present the application of Bayesian iterative algorithms to real 3D multislice helical data to demonstrate significant image quality improvement over conventional techniques. We also introduce a novel prior distribution designed to provide flexibility in its parameters to fine-tune image quality. Specifically, enhanced image resolution and lower noise have been achieved, concurrently with the reduction of helical cone-beam artifacts, as demonstrated by phantom studies. Clinical results also illustrate the capabilities of the algorithm on real patient data. Although computational load remains a significant challenge for practical development, superior image quality combined with advancements in computing technology make IR techniques a legitimate candidate for future clinical applications.

987 citations
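The flexibility described above comes from posing reconstruction as a penalized (Bayesian/MAP) estimation problem. The toy sketch below shows the general shape of such a formulation, a penalized weighted least-squares objective minimized by gradient descent on a small random system matrix; it is not the paper's algorithm, noise model, or prior, and every symbol (A, W, D, beta) is a stand-in.

```python
# Toy sketch: x* = argmin ||y - Ax||_W^2 + beta * ||Dx||^2, solved by plain
# gradient descent with a provably stable step size.
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_rays = 64, 96
A = rng.random((n_rays, n_pix)) * (rng.random((n_rays, n_pix)) > 0.7)  # toy "system matrix"
x_true = rng.random(n_pix)
y = A @ x_true + 0.01 * rng.standard_normal(n_rays)

W = np.diag(1.0 / (0.01 + np.abs(y)))                    # statistical weights ~ 1 / variance
D = np.eye(n_pix) - np.roll(np.eye(n_pix), 1, axis=1)    # neighbour-difference prior
beta = 0.1

x = np.zeros(n_pix)
step = 1.0 / np.linalg.norm(A.T @ W @ A + beta * D.T @ D, 2)   # 1 / Lipschitz constant
for _ in range(500):
    grad = A.T @ (W @ (A @ x - y)) + beta * (D.T @ (D @ x))
    x -= step * grad

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```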


Journal ArticleDOI
TL;DR: An iterative reconstruction method for undersampled radial MRI which is based on a nonlinear optimization, allows for the incorporation of prior knowledge with use of penalty functions, and deals with data from multiple coils is developed.
Abstract: The reconstruction of artifact-free images from radially encoded MRI acquisitions poses a difficult task for undersampled data sets, that is, for a much lower number of spokes in k-space than data samples per spoke. Here, we developed an iterative reconstruction method for undersampled radial MRI which (i) is based on a nonlinear optimization, (ii) allows for the incorporation of prior knowledge with use of penalty functions, and (iii) deals with data from multiple coils. The procedure arises as a two-step mechanism which first estimates the coil profiles and then renders a final image that complies with the actual observations. Prior knowledge is introduced by penalizing edges in coil profiles and by a total variation constraint for the final image. The latter condition leads to an effective suppression of undersampling (streaking) artifacts and further adds a certain degree of denoising. Apart from simulations, experimental results for a radial spin-echo MRI sequence are presented for phantoms and human brain in vivo at 2.9 T using 24, 48, and 96 spokes with 256 data samples. In comparison to conventional reconstructions (regridding) the proposed method yielded visually improved image quality in all cases. Magn Reson Med 57:1086–1098, 2007. © 2007 Wiley-Liss, Inc.

794 citations
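To make the total-variation idea concrete, here is a deliberately simplified single-coil sketch that replaces the radial trajectory and coil-profile estimation with a random Cartesian k-space mask and minimizes a data-consistency term plus a smoothed TV penalty by gradient descent. It illustrates why the TV constraint suppresses streaking and noise, but it is not the paper's two-step nonlinear method; all names and parameters are my own.

```python
# Simplified sketch: reconstruct x from undersampled k-space y = M F x by
# minimizing ||M F x - y||^2 + lam * TV_eps(x) with gradient descent.
import numpy as np

def grad_tv(x, eps=1e-2):
    """Gradient of a smoothed (Charbonnier) total-variation penalty."""
    dx = np.roll(x, -1, axis=1) - x
    dy = np.roll(x, -1, axis=0) - x
    mag = np.sqrt(dx**2 + dy**2 + eps**2)
    px, py = dx / mag, dy / mag
    return -(px - np.roll(px, 1, axis=1)) - (py - np.roll(py, 1, axis=0))

rng = np.random.default_rng(2)
x_true = np.zeros((64, 64))
x_true[16:48, 24:40] = 1.0                                     # piecewise-constant phantom
mask = rng.random((64, 64)) < 0.3                              # keep ~30% of k-space
y = mask * np.fft.fft2(x_true, norm="ortho")

x, lam, step = np.zeros((64, 64)), 0.02, 0.05
for _ in range(600):
    resid = mask * np.fft.fft2(x, norm="ortho") - y
    data_grad = np.fft.ifft2(mask * resid, norm="ortho").real  # adjoint of M F
    x -= step * (data_grad + lam * grad_tv(x))

print("max abs reconstruction error:", np.abs(x - x_true).max())
```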


Proceedings ArticleDOI
Lu Gan1
01 Jul 2007
TL;DR: This paper proposes and studies block compressed sensing for natural images, where image acquisition is conducted in a block-by-block manner through the same operator, and shows that the proposed scheme can sufficiently capture the complicated geometric structures of natural images.
Abstract: Compressed sensing (CS) is a new technique for simultaneous data sampling and compression. In this paper, we propose and study block compressed sensing for natural images, where image acquisition is conducted in a block-by-block manner through the same operator. While simpler and more efficient than other CS techniques, the proposed scheme can sufficiently capture the complicated geometric structures of natural images. Our image reconstruction algorithm involves both linear and nonlinear operations such as Wiener filtering, projection onto the convex set and hard thresholding in the transform domain. Several numerical experiments demonstrate that the proposed block CS compares favorably with existing schemes at a much lower implementation cost.

715 citations
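A minimal sketch of the block-based acquisition idea follows, assuming a shared Gaussian measurement operator applied block by block and a simple iterative hard-thresholding recovery in a global 2-D DCT domain. The paper's actual reconstruction combines Wiener filtering, projection onto a convex set, and transform-domain thresholding; the version below keeps only a gradient-plus-thresholding skeleton, and all parameters are illustrative.

```python
# Illustrative sketch: block-by-block sampling with one shared Gaussian operator,
# recovered by iterative hard thresholding of global 2-D DCT coefficients.
import numpy as np
from scipy.fft import dctn, idctn

B, sub = 16, 0.4                                   # block size, sampling ratio
rng = np.random.default_rng(3)
Phi = rng.standard_normal((int(sub * B * B), B * B))
Phi /= np.linalg.norm(Phi, 2)                      # keep the operator non-expansive

def measure(img):
    H, W = img.shape
    return np.array([Phi @ img[i:i + B, j:j + B].ravel()
                     for i in range(0, H, B) for j in range(0, W, B)])

def backproject(meas, shape):
    H, W = shape
    out, k = np.zeros(shape), 0
    for i in range(0, H, B):
        for j in range(0, W, B):
            out[i:i + B, j:j + B] = (Phi.T @ meas[k]).reshape(B, B)
            k += 1
    return out

def reconstruct(y, shape, n_iter=100, keep=0.05):
    x = np.zeros(shape)
    for _ in range(n_iter):
        x = x + backproject(y - measure(x), shape)             # gradient step
        c = dctn(x, norm="ortho")                              # sparsify
        thr = np.quantile(np.abs(c), 1.0 - keep)
        x = idctn(np.where(np.abs(c) >= thr, c, 0.0), norm="ortho")
    return x

# Usage (hypothetical, image sides multiples of 16):
# x_hat = reconstruct(measure(img), img.shape)
```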


Journal ArticleDOI
TL;DR: A constant azimuthal profile spacing, based on the Golden Ratio, is investigated as optimal for image reconstruction from an arbitrary number of profiles in radial MRI.
Abstract: In dynamic magnetic resonance imaging (MRI) studies, the motion kinetics or the contrast variability are often hard to predict, hampering an appropriate choice of the image update rate or the temporal resolution. A constant azimuthal profile spacing (111.246°), based on the Golden Ratio, is investigated as optimal for image reconstruction from an arbitrary number of profiles in radial MRI. The profile order is evaluated and compared with a uniform profile distribution in terms of signal-to-noise ratio (SNR) and artifact level. The favorable characteristics of such a profile order are exemplified in two applications on healthy volunteers. First, an advanced sliding window reconstruction scheme is applied to dynamic cardiac imaging, with a reconstruction window that can be flexibly adjusted according to the extent of cardiac motion that is acceptable. Second, a contrast-enhancing k-space filter is presented that permits reconstructing an arbitrary number of images at arbitrary time points from one raw data set. The filter was utilized to depict the T1-relaxation in the brain after a single inversion prepulse. While a uniform profile distribution with a constant angle increment is optimal for a fixed and predetermined number of profiles, a profile distribution based on the Golden Ratio proved to be an appropriate solution for an arbitrary number of profiles.

668 citations
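The key property claimed above is easy to verify numerically: spokes generated with the golden-angle increment of about 111.246° stay nearly uniformly spread over 180° no matter where the acquisition is stopped. A quick check (spoke counts are arbitrary):

```python
# Golden-angle profile order: angular gaps stay nearly uniform for any number
# of spokes.
import numpy as np

golden_angle = 180.0 * (np.sqrt(5.0) - 1.0) / 2.0       # 111.246...
for n in (13, 21, 55):                                   # arbitrary spoke counts
    angles = np.sort((np.arange(n) * golden_angle) % 180.0)
    gaps = np.diff(np.concatenate([angles, [angles[0] + 180.0]]))
    print(f"{n} spokes -> gap range {gaps.min():.2f} to {gaps.max():.2f} deg")
```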


Journal ArticleDOI
TL;DR: In this paper, iterative projection algorithms are successfully used as a substitute for lenses to recombine, numerically rather than optically, light scattered by illuminated objects, allowing aberration-free diffraction-limited imaging and the possibility of using radiation for which no lenses exist.
Abstract: Iterative projection algorithms are successfully being used as a substitute for lenses to recombine, numerically rather than optically, light scattered by illuminated objects. Images obtained computationally allow aberration-free diffraction-limited imaging and the possibility of using radiation for which no lenses exist. The challenge of this imaging technique is transferred from the lenses to the algorithms. We evaluate these new computational "instruments" developed for the phase-retrieval problem, and discuss acceleration strategies.

479 citations
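One representative member of this family of iterative projection algorithms is Fienup's hybrid input-output (HIO), which alternates a Fourier-magnitude projection with a support-plus-feedback update. The sketch below is a generic HIO loop, not the paper's code; the support mask, non-negativity constraint, and parameters are assumptions.

```python
# Generic hybrid input-output (HIO) phase retrieval from Fourier magnitudes
# and a known (boolean) support mask.
import numpy as np

def hio(magnitudes, support, n_iter=500, beta=0.9, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.random(magnitudes.shape) * support
    for _ in range(n_iter):
        F = np.fft.fft2(x)
        F = magnitudes * np.exp(1j * np.angle(F))        # Fourier-magnitude projection
        y = np.fft.ifft2(F).real
        inside = support & (y >= 0)                      # pixels satisfying the constraints
        x = np.where(inside, y, x - beta * y)            # HIO feedback elsewhere
    return x

# Usage (hypothetical): x_hat = hio(np.abs(np.fft.fft2(true_img)), support_mask)
```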


Journal ArticleDOI
TL;DR: A new method for recording digital holograms under incoherent illumination, which results in a complex-valued Fresnel hologram; when this hologram is reconstructed in the computer, the 3D properties of the object are revealed.
Abstract: We present a new method for recording digital holograms under incoherent illumination. Light is reflected from a 3D object, propagates through a diffractive optical element (DOE), and is recorded by a digital camera. Three holograms are recorded sequentially, each for a different phase factor of the DOE. The three holograms are superposed in the computer, such that the result is a complex-valued Fresnel hologram. When this hologram is reconstructed in the computer, the 3D properties of the object are revealed.

397 citations
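The three-exposure combination can be sketched with the standard three-step phase-shifting formula (equally spaced phase shifts), which cancels the bias and twin-image terms and leaves a complex-valued hologram that can be refocused numerically. The phase values, propagation distance, and the angular-spectrum propagator below are illustrative assumptions, not the paper's exact DOE phases or reconstruction code.

```python
# Sketch: combine three phase-shifted recordings (shifts 0, 2*pi/3, 4*pi/3)
# into a complex hologram, then refocus it with angular-spectrum propagation.
import numpy as np

def complex_hologram(I1, I2, I3):
    thetas = (0.0, 2 * np.pi / 3, 4 * np.pi / 3)
    return sum(I * np.exp(-1j * t) for I, t in zip((I1, I2, I3), thetas))

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field a distance z (meters) on a grid of pitch dx."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    prop = arg > 0                                       # drop evanescent components
    kz = 2 * np.pi * np.sqrt(np.where(prop, arg, 0.0))
    H = np.where(prop, np.exp(1j * kz * z), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Usage (hypothetical): h = complex_hologram(I1, I2, I3)
# refocused = np.abs(angular_spectrum_propagate(h, 632.8e-9, 6.45e-6, 0.25))
```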


Proceedings ArticleDOI
17 Jun 2007
TL;DR: It is shown that iterative removal of pairwise reconstructions with the largest residual and reregistration removes most non-existent epipolar geometries.
Abstract: It is known that the problem of multiview reconstruction can be solved in two steps: first estimate camera rotations and then translations using them. This paper presents new robust techniques for both of these steps: (i) given pair-wise relative rotations, global camera rotations are estimated linearly in least squares; (ii) camera translations are estimated using a standard technique based on Second Order Cone Programming. Robustness is achieved by using only a subset of points according to a new criterion that diminishes the risk of choosing a mismatch. It is shown that only four points chosen in a special way are sufficient to represent a pairwise reconstruction almost as well as all points. This leads to a significant speedup. In image sets with repetitive or similar structures, non-existent epipolar geometries may be found. Due to them, some rotations and consequently translations may be estimated incorrectly. It is shown that iterative removal of pairwise reconstructions with the largest residual and reregistration removes most non-existent epipolar geometries. The performance of the proposed method is demonstrated on difficult wide-baseline image sets.

369 citations
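Step (i), estimating global rotations linearly in least squares from pairwise relative rotations, can be sketched as follows: stack one block equation R_j - R_ij R_i = 0 per image pair, fix the gauge with R_0 = I, solve the over-determined linear system, and project each 3x3 block back onto SO(3) with an SVD. This is a generic implementation of that idea, not the authors' code; it omits their robust point selection and iterative outlier removal.

```python
# Linear least-squares rotation averaging followed by SVD projection onto SO(3).
import numpy as np

def average_rotations(n_cams, edges):
    """edges: list of (i, j, R_ij) with R_j ~ R_ij @ R_i."""
    rows, rhs = [], []
    for i, j, R_ij in edges:
        row = np.zeros((3, 3 * n_cams))
        row[:, 3 * j:3 * j + 3] = np.eye(3)
        row[:, 3 * i:3 * i + 3] = -R_ij
        rows.append(row)
        rhs.append(np.zeros((3, 3)))
    gauge = np.zeros((3, 3 * n_cams))
    gauge[:, 0:3] = np.eye(3)                              # fix R_0 = I
    rows.append(gauge)
    rhs.append(np.eye(3))
    X, *_ = np.linalg.lstsq(np.vstack(rows), np.vstack(rhs), rcond=None)
    Rs = []
    for k in range(n_cams):
        U, _, Vt = np.linalg.svd(X[3 * k:3 * k + 3])       # nearest rotation
        R = U @ Vt
        if np.linalg.det(R) < 0:
            R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
        Rs.append(R)
    return Rs

# Toy usage: three cameras with exact relative rotations about the z-axis.
def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

R_true = [np.eye(3), rot_z(0.3), rot_z(0.7)]
edges = [(0, 1, R_true[1] @ R_true[0].T),
         (1, 2, R_true[2] @ R_true[1].T),
         (0, 2, R_true[2] @ R_true[0].T)]
print(np.allclose(average_rotations(3, edges)[2], R_true[2], atol=1e-6))
```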


Journal ArticleDOI
TL;DR: The experimental results show that the reconstructed images are very realistic and that, although it is unlikely that they can fool a human expert, there is a high chance to deceive state-of-the-art commercial fingerprint recognition systems.
Abstract: A minutiae-based template is a very compact representation of a fingerprint image, and for a long time, it has been assumed that it did not contain enough information to allow the reconstruction of the original fingerprint. This work proposes a novel approach to reconstruct fingerprint images from standard templates and investigates to what extent the reconstructed images are similar to the original ones (that is, those the templates were extracted from). The efficacy of the reconstruction technique has been assessed by estimating the success chances of a masquerade attack against nine different fingerprint recognition algorithms. The experimental results show that the reconstructed images are very realistic and that, although it is unlikely that they can fool a human expert, there is a high chance to deceive state-of-the-art commercial fingerprint recognition systems.

329 citations


Proceedings ArticleDOI
17 Jun 2007
TL;DR: A closed-form expression for the motion error is developed in order to apply motion compensation on a pixel level; the resulting scanning system can capture accurate depth maps of complex dynamic scenes at 17 fps and can cope with both rigid and deformable objects.
Abstract: We present a novel 3D scanning system combining stereo and active illumination based on phase-shift for robust and accurate scene reconstruction. Stereo overcomes the traditional phase discontinuity problem and allows for the reconstruction of complex scenes containing multiple objects. Due to the sequential recording of three patterns, motion will introduce artifacts in the reconstruction. We develop a closed-form expression for the motion error in order to apply motion compensation on a pixel level. The resulting scanning system can capture accurate depth maps of complex dynamic scenes at 17 fps and can cope with both rigid and deformable objects.
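The active-illumination half of such a system rests on the classic three-step phase-shifting relation: with sinusoidal patterns shifted by ±2π/3, the wrapped phase follows in closed form per pixel, as in the sketch below. The stereo disambiguation and the paper's closed-form motion compensation are not shown; the formula and names are generic assumptions.

```python
# Wrapped phase from three phase-shifted sinusoidal patterns,
# I_k = A + B * cos(phi + k * 2*pi/3) for k = -1, 0, +1 (per pixel).
import numpy as np

def wrapped_phase(I1, I2, I3):
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)

# Usage (hypothetical): phi = wrapped_phase(img_minus, img_zero, img_plus)
```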

Journal ArticleDOI
TL;DR: A new variational method for multi-view stereovision and non-rigid three-dimensional motion estimation from multiple video sequences that minimizes the prediction error of the shape and motion estimates and results in a simpler, more flexible, and more efficient implementation than in existing methods.
Abstract: We present a new variational method for multi-view stereovision and non-rigid three-dimensional motion estimation from multiple video sequences. Our method minimizes the prediction error of the shape and motion estimates. Both problems then translate into a generic image registration task. The latter is entrusted to a global measure of image similarity, chosen depending on imaging conditions and scene properties. Rather than integrating a matching measure computed independently at each surface point, our approach computes a global image-based matching score between the input images and the predicted images. The matching process fully handles projective distortion and partial occlusions. Neighborhood as well as global intensity information can be exploited to improve the robustness to appearance changes due to non-Lambertian materials and illumination changes, without any approximation of shape, motion or visibility. Moreover, our approach results in a simpler, more flexible, and more efficient implementation than in existing methods. The computation time on large datasets does not exceed thirty minutes on a standard workstation. Finally, our method is compliant with a hardware implementation with graphics processor units. Our stereovision algorithm yields very good results on a variety of datasets including specularities and translucency. We have successfully tested our motion estimation algorithm on a very challenging multi-view video sequence of a non-rigid scene.

Journal ArticleDOI
TL;DR: New software for transmission electron tomography, TomoJ, is developed, which provides a user-friendly interface for alignment, reconstruction, and combination of multiple tomographic volumes and includes the most recent algorithms for volume reconstructions used in three-dimensional electron microscopy.
Abstract: Transmission electron tomography is an increasingly common three-dimensional electron microscopy approach that can provide new insights into the structure of subcellular components. Transmission electron tomography fills the gap between high resolution structural methods (X-ray diffraction or nuclear magnetic resonance) and optical microscopy. We developed new software for transmission electron tomography, TomoJ. TomoJ is a plug-in for the now standard image analysis and processing software for optical microscopy, ImageJ. TomoJ provides a user-friendly interface for alignment, reconstruction, and combination of multiple tomographic volumes and includes the most recent algorithms for volume reconstructions used in three-dimensional electron microscopy (the algebraic reconstruction technique and simultaneous iterative reconstruction technique) as well as the commonly used approach of weighted back-projection. The software presented in this work is specifically designed for electron tomography. It has been written in Java as a plug-in for ImageJ and is distributed as freeware.
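Of the algorithms listed, SIRT has a particularly compact form: every iteration back-projects the row-normalized residual and column-normalizes the update. The sketch below is a generic dense-matrix SIRT, not TomoJ's Java implementation; the non-negativity clamp and usage are my own choices.

```python
# Generic SIRT for a linear tomography model y = A x (A given as a dense matrix).
import numpy as np

def sirt(A, y, n_iter=200):
    row_sums = A.sum(axis=1); row_sums[row_sums == 0] = 1.0
    col_sums = A.sum(axis=0); col_sums[col_sums == 0] = 1.0
    R = 1.0 / row_sums                  # row normalisation
    C = 1.0 / col_sums                  # column normalisation
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + C * (A.T @ (R * (y - A @ x)))
        x = np.clip(x, 0.0, None)       # simple non-negativity constraint
    return x

# Usage (hypothetical): volume = sirt(projector_matrix, tilt_series.ravel())
```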

Proceedings ArticleDOI
TL;DR: A framework for compressive classification that operates directly on the compressive measurements without first reconstructing the image is proposed, and the effectiveness of the smashed filter for target classification using very few measurements is demonstrated.
Abstract: The theory of compressive sensing (CS) enables the reconstruction of a sparse or compressible image or signal from a small set of linear, non-adaptive (even random) projections. However, in many applications, including object and target recognition, we are ultimately interested in making a decision about an image rather than computing a reconstruction. We propose here a framework for compressive classification that operates directly on the compressive measurements without first reconstructing the image. We dub the resulting dimensionally reduced matched filter the smashed filter. The first part of the theory maps traditional maximum likelihood hypothesis testing into the compressive domain; we find that the number of measurements required for a given classification performance level does not depend on the sparsity or compressibility of the images but only on the noise level. The second part of the theory applies the generalized maximum likelihood method to deal with unknown transformations such as the translation, scale, or viewing angle of a target object. We exploit the fact that the set of transformed images forms a low-dimensional, nonlinear manifold in the high-dimensional image space. We find that the number of measurements required for a given classification performance level grows linearly in the dimensionality of the manifold but only logarithmically in the number of pixels/samples and image classes. Using both simulations and measurements from a new single-pixel compressive camera, we demonstrate the effectiveness of the smashed filter for target classification using very few measurements.
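The core of the smashed filter can be sketched in a few lines: apply the same random projection to the scene and to each class template, then classify by nearest neighbour directly in the measurement domain. The sketch below omits the generalized-likelihood search over unknown transformations (translation, scale, viewing angle) that the paper handles via the image manifold; all sizes and names are illustrative.

```python
# Compressive classification without reconstruction: nearest neighbour in the
# measurement domain under a shared random projection.
import numpy as np

rng = np.random.default_rng(4)
n_pixels, n_meas = 4096, 64
Phi = rng.standard_normal((n_meas, n_pixels)) / np.sqrt(n_meas)

def smashed_classify(y, templates):
    """y: compressive measurements of the scene; templates: (n_classes, n_pixels)."""
    projected = templates @ Phi.T                        # measure each template
    return int(np.argmin(np.linalg.norm(projected - y, axis=1)))

templates = rng.random((5, n_pixels))                    # toy class images
scene = templates[3] + 0.05 * rng.standard_normal(n_pixels)
print(smashed_classify(Phi @ scene, templates))          # -> 3 (with high probability)
```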

Journal ArticleDOI
TL;DR: Results demonstrate that although both correction techniques considered lead to significant improvements in accounting for respiratory motion artefacts in the lung fields, the elastic-transformation-based correction leads to a more uniform improvement across the lungs for different lesion sizes and locations.
Abstract: Respiratory motion in emission tomography leads to reduced image quality. Developed correction methodology has been concentrating on the use of respiratory synchronized acquisitions leading to gated frames. Such frames, however, are of low signal-to-noise ratio as a result of containing reduced statistics. In this work, we describe the implementation of an elastic transformation within a list-mode-based reconstruction for the correction of respiratory motion over the thorax, allowing the use of all data available throughout a respiratory motion average acquisition. The developed algorithm was evaluated using datasets of the NCAT phantom generated at different points throughout the respiratory cycle. List-mode-data-based PET-simulated frames were subsequently produced by combining the NCAT datasets with Monte Carlo simulation. A non-rigid registration algorithm based on B-spline basis functions was employed to derive transformation parameters accounting for the respiratory motion using the NCAT dynamic CT images. The displacement matrices derived were subsequently applied during the image reconstruction of the original emission list mode data. Two different implementations for the incorporation of the elastic transformations within the one-pass list mode EM (OPL-EM) algorithm were developed and evaluated. The corrected images were compared with those produced using an affine transformation of list mode data prior to reconstruction, as well as with uncorrected respiratory motion average images. Results demonstrate that although both correction techniques considered lead to significant improvements in accounting for respiratory motion artefacts in the lung fields, the elastic-transformation-based correction leads to a more uniform improvement across the lungs for different lesion sizes and locations.

Proceedings ArticleDOI
11 Oct 2007
TL;DR: A robust real-time algorithm is presented that recognizes fingertips to reconstruct the six-degree-of-freedom camera pose relative to the user's outstretched hand; a hand pose model is constructed in a one-time calibration step by measuring the fingertip positions in the presence of ground-truth scale information.
Abstract: We present markerless camera tracking and user interface methodology for readily inspecting augmented reality (AR) objects in wearable computing applications. Instead of a marker, we use the human hand as a distinctive pattern that almost all wearable computer users have readily available. We present a robust real-time algorithm that recognizes fingertips to reconstruct the six-degree-of-freedom camera pose relative to the user's outstretched hand. A hand pose model is constructed in a one-time calibration step by measuring the fingertip positions in presence of ground-truth scale information. Through frame-by-frame reconstruction of the camera pose relative to the hand, we can stabilize 3D graphics annotations on top of the hand, allowing the user to inspect such virtual objects conveniently from different viewing angles in AR. We evaluate our approach with regard to speed and accuracy, and compare it to state-of-the-art marker-based AR systems. We demonstrate the robustness and usefulness of our approach in an example AR application for selecting and inspecting world-stabilized virtual objects.

Journal ArticleDOI
TL;DR: Based on 2D in vivo data obtained with a 32‐element phased‐array coil in the heart, it is shown that the number of channels can be compressed to as few as four with only 0.3% SNR loss in an ROI encompassing the heart.
Abstract: Arrays with large numbers of independent coil elements are becoming increasingly available as they provide increased signal-to-noise ratios (SNRs) and improved parallel imaging performance. Processing of data from a large set of independent receive channels is, however, associated with an increased memory and computational load in reconstruction. This work addresses this problem by introducing coil array compression. The method allows one to reduce the number of datasets from independent channels by combining all or partial sets in the time domain prior to image reconstruction. It is demonstrated that array compression can be very effective depending on the size of the region of interest (ROI). Based on 2D in vivo data obtained with a 32-element phased-array coil in the heart, it is shown that the number of channels can be compressed to as few as four with only 0.3% SNR loss in an ROI encompassing the heart. With twofold parallel imaging, only a 2% loss in SNR occurred using the same compression factor.
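The idea of compressing many physical channels into a few virtual channels before reconstruction can be illustrated with an SVD-based combination of the raw multicoil data, as below. This is only a related stand-in: the paper derives SNR-optimal weights for a chosen ROI and accounts for noise correlation, which the sketch does not.

```python
# SVD-based coil compression: combine n_coils receive channels into n_virtual
# principal channels prior to image reconstruction.
import numpy as np

def compress_coils(kspace, n_virtual):
    """kspace: (n_coils, n_samples) complex array -> (n_virtual, n_samples)."""
    U, s, Vh = np.linalg.svd(kspace, full_matrices=False)
    W = U[:, :n_virtual].conj().T            # compression matrix (n_virtual, n_coils)
    return W @ kspace, W

# Usage (hypothetical): compressed, W = compress_coils(raw_32ch_kspace, 4)
```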

Journal ArticleDOI
TL;DR: This paper presents a joint formulation for a complex super-resolution problem in which the scenes contain multiple independently moving objects, built upon the maximum a posteriori (MAP) framework, which judiciously combines motion estimation, segmentation, and super resolution together.
Abstract: Super-resolution image reconstruction allows the recovery of a high-resolution (HR) image from several low-resolution images that are noisy, blurred, and downsampled. In this paper, we present a joint formulation for a complex super-resolution problem in which the scenes contain multiple independently moving objects. This formulation is built upon the maximum a posteriori (MAP) framework, which judiciously combines motion estimation, segmentation, and super resolution together. A cyclic coordinate descent optimization procedure is used to solve the MAP formulation, in which the motion fields, segmentation fields, and HR images are found in an alternate manner given the two others, respectively. Specifically, the gradient-based methods are employed to solve the HR image and motion fields, and an iterated conditional mode optimization method to obtain the segmentation fields. The proposed algorithm has been tested using a synthetic image sequence, the "Mobile and Calendar" sequence, and the original "Motorcycle and Car" sequence. The experiment results and error analyses verify the efficacy of this algorithm.

Journal ArticleDOI
TL;DR: In this paper, a new algorithm for determining the iteration update values in the Gauss-Newton algorithm is presented, which is based on the conjugate gradient least squares (CGLS) algorithm.
Abstract: Breast-cancer screening using microwave imaging is emerging as a new promising technique as a supplement to X-ray mammography. To create tomographic images from microwave measurements, it is necessary to solve a nonlinear inversion problem, for which an algorithm based on the iterative Gauss-Newton method has been developed at Dartmouth College. This algorithm determines the update values at each iteration by solving the set of normal equations of the problem using the Tikhonov algorithm. In this paper, a new algorithm for determining the iteration update values in the Gauss-Newton algorithm is presented which is based on the conjugate gradient least squares (CGLS) algorithm. The iterative CGLS algorithm is capable of solving the update problem by operating on just the Jacobian and the regularizing effects of the algorithm can easily be controlled by adjusting the number of iterations. The new algorithm is compared to the Gauss-Newton algorithm with Tikhonov regularization and is shown to reconstruct images of similar quality using fewer iterations.
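For reference, the CGLS inner iteration that replaces the Tikhonov-regularized normal equations can be sketched as follows; stopping after a small number of iterations plays the role of regularization, as described above. This is the textbook CGLS recurrence on a generic real Jacobian, not the Dartmouth implementation.

```python
# CGLS: approximately solve J * delta ~= r without forming J^T J explicitly.
import numpy as np

def cgls(J, r, n_iter):
    x = np.zeros(J.shape[1])
    d = r - J @ x                       # residual in data space
    s = J.T @ d                         # residual of the normal equations
    p, gamma = s.copy(), s @ s
    for _ in range(n_iter):
        q = J @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        d -= alpha * q
        s = J.T @ d
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Usage (hypothetical): delta = cgls(jacobian, residual, n_iter=10)
```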

Journal ArticleDOI
TL;DR: The problem of error propagation in the sequential procedure of sensitivity estimation followed by image reconstruction in existing methods, such as sensitivity encoding (SENSE) and simultaneous acquisition of spatial harmonics (SMASH), is considered, and the image reconstruction problem is reformulated as a joint estimation of the coil sensitivities and the desired image, solved by an iterative optimization algorithm.
Abstract: Parallel magnetic resonance imaging (pMRI) using multichannel receiver coils has emerged as an effective tool to reduce imaging time in various applications. However, the issue of accurate estimation of coil sensitivities has not been fully addressed, which limits the level of speed enhancement achievable with the technology. The self-calibrating (SC) technique for sensitivity extraction has been well accepted, especially for dynamic imaging, and complements the common calibration technique that uses a separate scan. However, the existing method to extract the sensitivity information from the SC data is not accurate enough when the number of data is small, and thus erroneous sensitivities affect the reconstruction quality when they are directly applied to the reconstruction equation. This paper considers this problem of error propagation in the sequential procedure of sensitivity estimation followed by image reconstruction in existing methods, such as sensitivity encoding (SENSE) and simultaneous acquisition of spatial harmonics (SMASH), and reformulates the image reconstruction problem as a joint estimation of the coil sensitivities and the desired image, which is solved by an iterative optimization algorithm. The proposed method was tested on various data sets. The results from a set of in vivo data are shown to demonstrate the effectiveness of the proposed method, especially when a rather large net acceleration factor is used.

Journal ArticleDOI
TL;DR: A dynamic volume imaging technique based on the principle of electrical capacitance tomography (ECT), namely electrical capacitance volume tomography (ECVT), has been developed in this study and has been successfully verified on actual objects under experimental conditions.
Abstract: A dynamic volume imaging based on the principle of electrical capacitance tomography (ECT), namely, electrical capacitance volume tomography (ECVT), has been developed in this study. The technique generates, from the measured capacitance, a whole volumetric image of the region enclosed by the geometrically three-dimensional capacitance sensor. This development enables a real-time, 3-D imaging of a moving object or a real-time volume imaging (4-D) to be realized. Moreover, it allows total interrogation of the whole volume within the domain (vessel or conduit) of an arbitrary shape or geometry. The development of the ECVT imaging technique primarily encloses the 3-D capacitance sensor design and the volume image reconstruction technique. The electrical field variation in three-dimensional space forms a basis for volume imaging through different shapes and configurations of ECT sensor electrodes. The image reconstruction scheme is established by implementing the neural-network multicriterion optimization image reconstruction (NN-MOIRT), developed earlier by the authors for the 2-D ECT. The image reconstruction technique is modified by introducing into the algorithm a 3-D sensitivity matrix to replace the 2-D sensitivity matrix in conventional 2-D ECT, and providing additional network constraints including 3-to-2-D image matching function. The additional constraints further enhance the accuracy of the image reconstruction algorithm. The technique has been successfully verified over actual objects in the experimental conditions

Journal ArticleDOI
TL;DR: Two universal reconstruction methods for photoacoustic computed tomography, applicable to an arbitrarily shaped detection surface, are derived: one based on time reversal of the acoustic pressure field and one on the far-field approximation, a concept well known in physics, in which the generated acoustic wave is approximated by an outgoing spherical wave with the reconstruction point as center.
Abstract: Two universal reconstruction methods for photoacoustic (also called optoacoustic or thermoacoustic) computed tomography are derived, applicable to an arbitrarily shaped detection surface. In photoacoustic tomography acoustic pressure waves are induced by illuminating a semitransparent sample with pulsed electromagnetic radiation and are measured on a detection surface outside the sample. The imaging problem consists in reconstructing the initial pressure sources from those measurements. The first solution to this problem is based on the time reversal of the acoustic pressure field with a second order embedded boundary method. The pressure on the arbitrarily shaped detection surface is set to coincide with the measured data in reversed temporal order. In the second approach the reconstruction problem is solved by calculating the far-field approximation, a concept well known in physics, where the generated acoustic wave is approximated by an outgoing spherical wave with the reconstruction point as center. Numerical simulations are used to compare the proposed universal reconstruction methods with existing algorithms.

Journal ArticleDOI
TL;DR: A real-time alignment and reconstruction scheme for electron microscopic tomography (EMT) has been developed and integrated within the UCSF tomography data collection software and has proven to be quite adequate to assess sample quality, or to screen for the best data set for full-resolution reconstruction.

Journal ArticleDOI
TL;DR: It is demonstrated that clinical SPECT systems with focussing pinhole gamma cameras will be able to produce images with a resolution that may become superior to that of PET for major clinical applications.
Abstract: Today the majority of clinical molecular imaging procedures are carried out with single-photon emitters and gamma cameras, in planar mode and single-photon emission computed tomography (SPECT) mode. Thanks to the development of advanced multi-pinhole collimation technologies, SPECT imaging of small experimental animals is rapidly gaining in popularity. Whereas resolutions in routine clinical SPECT are typically larger than 1 cm (corresponding to >1,000 μl), it has recently proved possible to obtain spatial resolutions of about 0.35 mm (≈0.04 μl) in the mouse. Meanwhile, SPECT systems that promise an even better performance are under construction. The new systems are able to monitor functions in even smaller structures of the mouse than was possible with dedicated small animal positron emission tomography (≈1 mm resolution, corresponding to 1 μl). This paper provides a brief history of image formation with pinholes and explains the principles of pinhole imaging and pinhole tomography and the basics of modern image reconstruction methods required for such systems. Some recently introduced ultra-high-resolution small animal SPECT instruments are discussed and new avenues for improving system performance are explored. This may lead to many completely new biomedical applications. We also demonstrate that clinical SPECT systems with focussing pinhole gamma cameras will be able to produce images with a resolution that may become superior to that of PET for major clinical applications. A design study of a cardiac pinhole SPECT system indicates that the heart can be imaged an order of magnitude faster or with much more detail than is possible with currently used parallel-hole SPECT (e.g. 3–4 mm instead of ≈8 mm system resolution).

Journal ArticleDOI
TL;DR: An effective metal artifact-suppressing algorithm is implemented to improve the quality of CBCT images; it was able to minimize the metal artifacts in phantom and patient studies.
Abstract: Purpose: Computed tomography (CT) streak artifacts caused by metallic implants remain a challenge for the automatic processing of image data. The impact of metal artifacts in the soft-tissue region is magnified in cone-beam CT (CBCT), because the soft-tissue contrast is usually lower in CBCT images. The goal of this study was to develop an effective offline processing technique to minimize the effect. Methods and Materials: The geometry calibration cue of the CBCT system was used to track the position of the metal object in projection views. The three-dimensional (3D) representation of the object can be established from only two user-selected viewing angles. The position of the shadowed region in other views can be tracked by projecting the 3D coordinates of the object. Automatic image segmentation was used followed by a Laplacian diffusion method to replace the pixels inside the metal object with the boundary pixels. The modified projection data were then used to reconstruct a new CBCT image. The procedure was tested in phantoms, prostate cancer patients with implanted gold markers and metal prosthesis, and a head-and-neck patient with dental amalgam in the teeth. Results: Both phantom and patient studies demonstrated that the procedure was able to minimize the metal artifacts. Soft-tissue visibility was improved near or away from the metal object. The processing time was 1–2 s per projection. Conclusion: We have implemented an effective metal artifact-suppressing algorithm to improve the quality of CBCT images.
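A much-simplified sketch of the projection in-painting step: pixels flagged as metal shadow in each projection row are replaced by values interpolated from the neighbouring non-shadow pixels, a 1-D stand-in for the Laplacian diffusion used in the paper. The shadow mask is assumed to come from the 3D tracking described above; all names are my own.

```python
# Replace metal-shadow pixels in a projection with row-wise linear interpolation
# from the surrounding non-shadow pixels.
import numpy as np

def inpaint_projection(proj, metal_mask):
    """proj: 2-D projection image; metal_mask: boolean array of the same shape."""
    out = proj.copy()
    cols = np.arange(proj.shape[1])
    for r in range(proj.shape[0]):
        bad = metal_mask[r]
        if bad.any() and (~bad).any():
            out[r, bad] = np.interp(cols[bad], cols[~bad], proj[r, ~bad])
    return out

# The in-painted projections would then replace the originals in the usual
# CBCT reconstruction step.
```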

Journal ArticleDOI
TL;DR: Comparisons empirically suggest that the nonnegative least squares method is the technique of choice for the multifiber reconstruction problem in the presence of intravoxel orientational heterogeneity; several deconvolution schemes are investigated towards achieving stable, sparse, and accurate solutions.
Abstract: Diffusion magnetic resonance imaging (MRI) is a relatively new imaging modality which is capable of measuring the diffusion of water molecules in biological systems noninvasively. The measurements from diffusion MRI provide unique clues for extracting orientation information of brain white matter fibers and can be potentially used to infer the brain connectivity in vivo using tractography techniques. Diffusion tensor imaging (DTI), currently the most widely used technique, fails to extract multiple fiber orientations in regions with complex microstructure. In order to overcome this limitation of DTI, a variety of reconstruction algorithms have been introduced in the recent past. One of the key ingredients in several model-based approaches is deconvolution operation which is presented in a unified deconvolution framework in this paper. Additionally, some important computational issues in solving the deconvolution problem that are not addressed adequately in previous studies are described in detail here. Further, we investigate several deconvolution schemes towards achieving stable, sparse, and accurate solutions. Experimental results on both simulations and real data are presented. The comparisons empirically suggest that nonnegative least squares method is the technique of choice for the multifiber reconstruction problem in the presence of intravoxel orientational heterogeneity.
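The deconvolution-with-NNLS idea can be sketched by building a dictionary of single-fiber signals over candidate orientations and solving for non-negative fiber weights with scipy's NNLS solver. The tensor eigenvalues, b-value, gradient directions, and orientation grid below are illustrative assumptions, not the paper's experimental settings.

```python
# Multifiber deconvolution sketch: dictionary of single-fiber responses + NNLS.
import numpy as np
from scipy.optimize import nnls

def single_fiber_signal(bvecs, u, b=1000.0, lam_par=1.7e-3, lam_perp=0.3e-3):
    """Signal of one cylindrically symmetric tensor oriented along unit vector u."""
    cos2 = (bvecs @ u) ** 2
    return np.exp(-b * (lam_perp + (lam_par - lam_perp) * cos2))

def fit_fibers(signal, bvecs, candidate_dirs):
    A = np.column_stack([single_fiber_signal(bvecs, u) for u in candidate_dirs])
    weights, _ = nnls(A, signal)
    return weights                       # sparse, non-negative fiber fractions

# Toy usage: two crossing fibers along x and y.
rng = np.random.default_rng(5)
bvecs = rng.standard_normal((64, 3)); bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
dirs = rng.standard_normal((100, 3)); dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
s = 0.5 * single_fiber_signal(bvecs, np.array([1.0, 0.0, 0.0])) \
  + 0.5 * single_fiber_signal(bvecs, np.array([0.0, 1.0, 0.0]))
w = fit_fibers(s + 0.01 * rng.standard_normal(64), bvecs, dirs)
print("largest weights at directions:\n", dirs[np.argsort(w)[-2:]])
```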

Journal ArticleDOI
TL;DR: A sparseness-prior-regularized weighted l2 norm optimization is proposed to mitigate streaking artifacts, based on the fact that most medical images are compressible; total variation is implemented as the regularizer for its simplicity.
Abstract: Recent advances in murine cardiac studies with three-dimensional (3D) cone beam micro-CT used a retrospective gating technique. However, this sampling technique results in a limited number of projections with an irregular angular distribution due to the temporal resolution requirements and radiation dose restrictions. Both angular irregularity and undersampling complicate the reconstruction process, since they cause significant streaking artifacts. This work provides an iterative reconstruction solution to address this particular challenge. A sparseness prior regularized weighted l2 norm optimization is proposed to mitigate streaking artifacts based on the fact that most medical images are compressible. Total variation is implemented in this work as the regularizer for its simplicity. Comparison studies are conducted on a 3D cardiac mouse phantom generated with experimental data. After optimization, the method is applied to in vivo cardiac micro-CT data.

Journal ArticleDOI
TL;DR: It is shown that the imaging system in its present configuration is capable of producing three-dimensional images of objects with an overall size in the range of several millimeters to centimeters, and strategies are proposed for scaling the technique to the imaging of smaller objects with higher resolution.
Abstract: A three-dimensional photoacoustic imaging method is presented that uses a Mach-Zehnder interferometer for measurement of acoustic waves generated in an object by irradiation with short laser pulses. The signals acquired with the interferometer correspond to line integrals over the acoustic wave field. An algorithm for reconstruction of a three-dimensional image from such signals measured at multiple positions around the object is shown that is a combination of a frequency-domain technique and the inverse Radon transform. From images of a small source scanning across the interferometer beam it is estimated that the spatial resolution of the imaging system is in the range of 100 to about 300 μm, depending on the interferometer beam width and the size of the aperture formed by the scan length divided by the source-detector distance. By taking an image of a phantom it could be shown that the imaging system in its present configuration is capable of producing three-dimensional images of objects with an overall size in the range of several millimeters to centimeters. Strategies are proposed for how the technique can be scaled for imaging of smaller objects with higher resolution.

Proceedings ArticleDOI
10 Apr 2007
TL;DR: This paper describes a new image-based approach to tracking the 6DOF trajectory of a stereo camera pair using corresponding reference image pairs instead of explicit 3D feature reconstruction of the scene.
Abstract: This paper describes a new image-based approach to tracking the 6DOF trajectory of a stereo camera pair using corresponding reference image pairs instead of explicit 3D feature reconstruction of the scene. A dense minimisation approach is employed which directly uses all grey-scale information available within the stereo pair (or stereo region) leading to very robust and precise results. Metric 3D structure constraints are imposed by consistently warping corresponding stereo images to generate novel viewpoints at each stereo acquisition. An iterative non-linear trajectory estimation approach is formulated based on a quadrifocal relationship between the image intensities within adjacent views of the stereo pair. A robust M-estimation technique is used to reject outliers corresponding to moving objects within the scene or other outliers such as occlusions and illumination changes. The technique is applied to recovering the trajectory of a moving vehicle in long and difficult sequences of images.

Journal ArticleDOI
TL;DR: A novel approach to demosaicing based on directional filtering and a posteriori decision is presented, which gives good performance even when compared to more demanding techniques.
Abstract: Most digital cameras use a color filter array to capture the colors of the scene. Downsampled versions of the red, green, and blue components are acquired, and an interpolation of the three colors is necessary to reconstruct a full representation of the image. This color interpolation is known as demosaicing. The most effective demosaicing techniques proposed in the literature are based on directional filtering and a posteriori decision. In this paper, we present a novel approach to this reconstruction method. A refining step is included to further improve the resulting reconstructed image. The proposed approach requires a limited computational cost and gives good performance even when compared to more demanding techniques