
Showing papers on "Iterative reconstruction published in 2001"


Journal ArticleDOI
TL;DR: Using the proposed method, SENSE becomes practical with nonstandard k‐space trajectories, enabling considerable scan time reduction with respect to mere gradient encoding, and the in vivo feasibility of non‐Cartesian SENSE imaging with iterative reconstruction is demonstrated.
Abstract: New, efficient reconstruction procedures are proposed for sensitivity encoding (SENSE) with arbitrary k-space trajectories. The presented methods combine gridding principles with so-called conjugate-gradient iteration. In this fashion, the bulk of the work of reconstruction can be performed by fast Fourier transform (FFT), reducing the complexity of data processing to the same order of magnitude as in conventional gridding reconstruction. Using the proposed method, SENSE becomes practical with nonstandard k-space trajectories, enabling considerable scan time reduction with respect to mere gradient encoding. This is illustrated by imaging simulations with spiral, radial, and random k-space patterns. Simulations were also used for investigating the convergence behavior of the proposed algorithm and its dependence on the factor by which gradient encoding is reduced. The in vivo feasibility of non-Cartesian SENSE imaging with iterative reconstruction is demonstrated by examples of brain and cardiac imaging using spiral trajectories. In brain imaging with six receiver coils, the number of spiral interleaves was reduced by factors ranging from 2 to 6. In cardiac real-time imaging with four coils, spiral SENSE permitted reducing the scan time per image from 112 ms to 56 ms, thus doubling the frame-rate. Magn Reson Med 46:638–651, 2001. © 2001 Wiley-Liss, Inc.
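The core of the paper's method is conjugate-gradient iteration on the SENSE normal equations, with the encoding operator applied via FFTs. Below is a minimal toy sketch of that idea, not the paper's non-Cartesian gridding version: it uses a simple Cartesian undersampling mask, two made-up smooth coil maps, and plain CG on E^H E x = E^H y. All phantom sizes, coil maps, and iteration counts are illustrative assumptions.

```python
import numpy as np

N = 32
x_true = np.zeros((N, N)); x_true[8:24, 8:24] = 1.0          # square phantom
# Two smooth, made-up coil sensitivity maps varying along the row axis
coils = np.stack([np.exp(-((np.arange(N)[:, None] - c) ** 2) / 400.0)
                  * np.ones((N, N)) for c in (4, 27)])
mask = np.zeros((N, N), bool)
mask[::2, :] = True                                          # 2x undersampling
mask[N // 2 - 4:N // 2 + 4, :] = True                        # fully sampled centre

def E(img):       # encoding: image -> undersampled multi-coil k-space
    return np.stack([np.fft.fft2(s * img) * mask for s in coils])

def EH(ksp):      # adjoint (up to a constant factor): k-space -> image
    return sum(np.conj(s) * np.fft.ifft2(k) for s, k in zip(coils, ksp))

y = E(x_true)                        # simulated acquisition
b = EH(y)                            # right-hand side E^H y
x = np.zeros_like(b); r = b.copy(); p = r.copy()
rs = np.vdot(r, r).real
for _ in range(30):                  # plain CG on the normal equations
    Ap = EH(E(p))
    alpha = rs / np.vdot(p, Ap).real
    x = x + alpha * p
    r = r - alpha * Ap
    rs_new = np.vdot(r, r).real
    p = r + (rs_new / rs) * p
    rs = rs_new

err = np.linalg.norm(x.real - x_true) / np.linalg.norm(x_true)
```

Each CG step costs a handful of FFTs, which is what keeps the per-iteration complexity comparable to conventional gridding reconstruction.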

1,221 citations


01 Jan 2001
TL;DR: This chapter discusses reconstruction algorithms, stability and resolution in tomography, and problems that have peculiarities in relation to nonlinear tomography.
Abstract: 1. Introduction 2. Integral geometry 3. Tomography 4. Stability and resolution 5. Reconstruction algorithms 6. Problems that have peculiarities 7. Nonlinear tomography.

848 citations




Journal ArticleDOI
TL;DR: The analytical model for CNR provides a quantitative understanding of the relationship between CNR, dose, and spatial resolution and allows knowledgeable selection of the acquisition and reconstruction parameters that, for a given SPR, are required to restore the CNR to values achieved under conditions of low x-ray scatter.
Abstract: A system for cone-beam computed tomography (CBCT) based on a flat-panel imager (FPI) is used to examine the magnitude and effects of x-ray scatter in FPI-CBCT volume reconstructions. The system is being developed for application in image-guided therapies and has previously demonstrated spatial resolution and soft-tissue visibility comparable or superior to a conventional CT scanner under conditions of low x-ray scatter. For larger objects consistent with imaging of human anatomy (e.g., the pelvis) and for increased cone angle (i.e., larger volumetric reconstructions), however, the effects of x-ray scatter become significant. The magnitude of x-ray scatter with which the FPI-CBCT system must contend is quantified in terms of the scatter-to-primary energy fluence ratio (SPR) and scatter intensity profiles in the detector plane, each measured as a function of object size and cone angle. For large objects and cone angles (e.g., a pelvis imaged with a cone angle of 6 degrees), SPR in excess of 100% is observed. Associated with such levels of x-ray scatter are cup and streak artifacts as well as reduced accuracy in reconstruction values, quantified herein across a range of SPR consistent with the clinical setting. The effect of x-ray scatter on the contrast, noise, and contrast-to-noise ratio (CNR) in FPI-CBCT reconstructions was measured as a function of SPR and compared to predictions of a simple analytical model. The results quantify the degree to which elevated SPR degrades the CNR. For example, FPI-CBCT images of a breast-equivalent insert in water were degraded in CNR by nearly a factor of 2 for SPR ranging from approximately 2% to 120%. 
The analytical model for CNR provides a quantitative understanding of the relationship between CNR, dose, and spatial resolution and allows knowledgeable selection of the acquisition and reconstruction parameters that, for a given SPR, are required to restore the CNR to values achieved under conditions of low x-ray scatter. For example, for SPR = 100%, the CNR in FPI-CBCT images can be fully restored by: (1) increasing the dose by a factor of 4 (at full spatial resolution); (2) increasing dose and slice thickness by a factor of 2; or (3) increasing slice thickness by a factor of 4 (with no increase in dose). Other reconstruction parameters, such as transaxial resolution length and reconstruction filter, can be similarly adjusted to achieve CNR equal to that obtained in the scatter-free case.
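The quoted trade-offs follow from a simple scaling: noise falls as the square root of dose times slice thickness, while scatter reduces contrast roughly as 1/(1 + SPR). The sketch below uses that hedged toy model (not the paper's exact analytical formula) to reproduce the three restoration strategies listed in the abstract.

```python
import math

# Toy model: contrast ~ 1/(1+SPR), noise ~ 1/sqrt(dose * thickness), so
# CNR ~ sqrt(dose * thickness) / (1 + SPR).  c0 is an arbitrary constant.
def cnr(dose, thickness, spr, c0=1.0):
    return c0 * math.sqrt(dose * thickness) / (1.0 + spr)

baseline = cnr(dose=1, thickness=1, spr=0.0)   # scatter-free reference
degraded = cnr(dose=1, thickness=1, spr=1.0)   # SPR = 100% halves the CNR
# The three restoration strategies quoted in the abstract:
r1 = cnr(dose=4, thickness=1, spr=1.0)         # (1) dose x4
r2 = cnr(dose=2, thickness=2, spr=1.0)         # (2) dose x2 and thickness x2
r3 = cnr(dose=1, thickness=4, spr=1.0)         # (3) thickness x4, same dose
```

All three strategies multiply sqrt(dose x thickness) by 2, exactly offsetting the factor-of-2 CNR loss at SPR = 100%.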

609 citations


Journal ArticleDOI
TL;DR: A new highly efficient super-resolution reconstruction algorithm is developed for this case, which separates the treatment into de-blurring and measurements fusion, preserving the optimality of the entire reconstruction process, in the maximum-likelihood sense.
Abstract: This paper addresses the problem of recovering a super-resolved image from a set of warped, blurred, and decimated versions thereof. Several algorithms have already been proposed for the solution of this general problem. In this paper, we concentrate on a special case where the warps are pure translations, the blur is space invariant and the same for all the images, and the noise is white. We exploit previous results to develop a new highly efficient super-resolution reconstruction algorithm for this case, which separates the treatment into de-blurring and measurement fusion. The fusion part is shown to be a very simple non-iterative algorithm, preserving the optimality of the entire reconstruction process, in the maximum-likelihood sense. Simulations demonstrate the capabilities of the proposed algorithm.
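In the special case of pure integer translations and noise-free data, the non-iterative fusion step reduces to shift-and-add: the low-resolution samples are interleaved back onto the high-resolution grid (deblurring then follows as a separate step). A minimal sketch, with a made-up 8x8 image and decimation factor 2:

```python
import numpy as np

rng = np.random.default_rng(1)
hi = rng.random((8, 8))                          # blurred high-res image z = h*x
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]        # pure translations, in pixels
lows = [hi[dy::2, dx::2] for dy, dx in shifts]   # shifted + decimated versions

fused = np.zeros_like(hi)
counts = np.zeros_like(hi)
for (dy, dx), lo in zip(shifts, lows):
    fused[dy::2, dx::2] += lo                    # drop samples onto HR grid
    counts[dy::2, dx::2] += 1
fused /= counts                                  # average overlapping samples
```

With these four shifts the samples tile the grid exactly, so the fusion recovers the blurred high-resolution image, which is then deblurred in a single pass.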

513 citations


Journal ArticleDOI
TL;DR: A computationally efficient and robust image reconstruction algorithm for breast cancer detection using an ultrawideband confocal microwave imaging system and a two-dimensional anatomically realistic MRI-derived FDTD model of the cancerous breast is presented.
Abstract: We present a computationally efficient and robust image reconstruction algorithm for breast cancer detection using an ultrawideband confocal microwave imaging system. To test the efficacy of this approach, we have developed a two-dimensional (2-D) anatomically realistic MRI-derived FDTD model of the cancerous breast. The image reconstruction algorithm is applied to FDTD-computed backscatter signals, resulting in a microwave image that clearly identifies the presence and location of the malignant lesion. These simulations demonstrate the feasibility of detecting and imaging small breast tumors using this novel approach.

479 citations


Journal ArticleDOI
TL;DR: A new iterative maximum-likelihood reconstruction algorithm for X-ray computed tomography prevents beam hardening artifacts by incorporating a polychromatic acquisition model and preliminary results indicate that metal artifact reduction is a very promising application.
Abstract: A new iterative maximum-likelihood reconstruction algorithm for X-ray computed tomography is presented. The algorithm prevents beam hardening artifacts by incorporating a polychromatic acquisition model. The continuous spectrum of the X-ray tube is modeled as a number of discrete energies. The energy dependence of the attenuation is taken into account by decomposing the linear attenuation coefficient into a photoelectric component and a Compton scatter component. The relative weight of these components is constrained based on prior material assumptions. Excellent results are obtained for simulations and for phantom measurements. Beam-hardening artifacts are effectively eliminated. The relation with existing algorithms is discussed. The results confirm that improving the acquisition model assumed by the reconstruction algorithm results in reduced artifacts. Preliminary results indicate that metal artifact reduction is a very promising application for this new algorithm.
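The beam-hardening effect the polychromatic model corrects for can be shown in a few lines: when the spectrum is a mixture of discrete energies with different attenuation coefficients, the measured -log(I/I0) is no longer linear in material thickness. Spectrum weights and mu values below are made-up illustrative numbers, not from the paper.

```python
import numpy as np

weights = np.array([0.3, 0.5, 0.2])   # relative spectrum weights (sum to 1)
mu = np.array([0.5, 0.3, 0.2])        # attenuation (1/cm) at each energy

def measured_logatt(thickness_cm):
    # Detected intensity is a weighted sum of exponentials, one per energy
    I = np.sum(weights * np.exp(-mu * thickness_cm))
    return -np.log(I)                 # I0 = 1 because the weights sum to 1

a1 = measured_logatt(1.0)
a2 = measured_logatt(2.0)
# A monochromatic model predicts a2 == 2 * a1; the polychromatic model gives
# a2 < 2 * a1, because the less-attenuated (harder) energies dominate at depth.
```

An ML reconstruction that assumes this forward model fits the nonlinearity directly instead of misattributing it to the object, which is why the cupping artifacts disappear.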

478 citations


Proceedings ArticleDOI
13 Jul 2001
TL;DR: The level set method and fast sweeping and tagging methods are used to reconstruct surfaces from a scattered data set and the reconstructed surface is smoother than piecewise linear and has a natural scaling in the regularization that allows varying flexibility according to the local sampling density.
Abstract: We describe new formulations and develop fast algorithms for implicit surface reconstruction based on variational and partial differential equation (PDE) methods. In particular we use the level set method and fast sweeping and tagging methods to reconstruct surfaces from a scattered data set. The data set might consist of points, curves and/or surface patches. A weighted minimal surface-like model is constructed and its variational level set formulation is implemented with optimal efficiency. The reconstructed surface is smoother than piecewise linear and has a natural scaling in the regularization that allows varying flexibility according to the local sampling density. As is usual with the level set method we can handle complicated topology and deformations, as well as noisy or highly nonuniform data sets easily. The method is based on a simple rectangular grid, although adaptive and triangular grids are also possible. Some consequences, such as hole filling capability, are demonstrated, as well as the viability and convergence of our new fast tagging algorithm.

456 citations


Journal ArticleDOI
TL;DR: This work proposes efficient block circulant preconditioners for solving the Tikhonov-regularized superresolution problem by the conjugate gradient method and extends to underdetermined systems the derivation of the generalized cross-validation method for automatic calculation of regularization parameters.
Abstract: Superresolution reconstruction produces a high-resolution image from a set of low-resolution images. Previous iterative methods for superresolution had not adequately addressed the computational and numerical issues for this ill-conditioned and typically underdetermined large scale problem. We propose efficient block circulant preconditioners for solving the Tikhonov-regularized superresolution problem by the conjugate gradient method. We also extend to underdetermined systems the derivation of the generalized cross-validation method for automatic calculation of regularization parameters. The effectiveness of our preconditioners and regularization techniques is demonstrated with superresolution results for a simulated sequence and a forward looking infrared (FLIR) camera image sequence.
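The reason circulant structure makes a good preconditioner is that a circulant matrix is diagonalized by the DFT, so its inverse can be applied in O(n log n). The toy example below, a stand-in for the paper's block circulant preconditioners, solves Cx = b for a small circulant C entirely with FFTs; the matrix entries are arbitrary.

```python
import numpy as np

c = np.array([2.0, -0.5, 0.0, -0.5])               # first column of circulant C
C = np.array([np.roll(c, k) for k in range(4)]).T  # C[j, k] = c[(j - k) mod 4]
b = np.array([1.0, 2.0, 3.0, 4.0])

eig = np.fft.fft(c)                    # eigenvalues of a circulant matrix
x = np.fft.ifft(np.fft.fft(b) / eig).real          # x = C^{-1} b via FFTs
```

Inside preconditioned CG, this FFT solve is exactly the "apply the preconditioner" step, performed once per iteration.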

442 citations


Journal ArticleDOI
TL;DR: A model-based iterative image reconstruction scheme that employs adjoint differentiation methods to minimize the difference between measured and predicted data is developed and reported on, the first three-dimensional, volumetric, tomographic localization of vascular reactivity in the brain.
Abstract: We report on the first three-dimensional, volumetric, tomographic localization of vascular reactivity in the brain. To this end we developed a model-based iterative image reconstruction scheme that employs adjoint differentiation methods to minimize the difference between measured and predicted data. The necessary human-head geometry and optode locations were determined with a photogrammetric method. To illustrate the performance of the technique, the three-dimensional distribution of changes in the concentration of oxyhemoglobin, deoxyhemoglobin, and total hemoglobin during a Valsalva maneuver were visualized. The observed results are consistent with previously reported effects concerning optical responses to hemodynamic perturbations.

294 citations


Proceedings ArticleDOI
01 Dec 2001
TL;DR: A novel solution for flow-based tracking and 3D reconstruction of deforming objects in monocular image sequences using a linear combination of 3D basis shapes and the rank constraint is used to achieve robust and precise low-level optical flow estimation.
Abstract: This paper presents a novel solution for flow-based tracking and 3D reconstruction of deforming objects in monocular image sequences. A non-rigid 3D object undergoing rotation and deformation can be effectively approximated using a linear combination of 3D basis shapes. This puts a bound on the rank of the tracking matrix. The rank constraint is used to achieve robust and precise low-level optical flow estimation without prior knowledge of the 3D shape of the object. The bound on the rank is also exploited to handle occlusion at the tracking level leading to the possibility of recovering the complete trajectories of occluded/disoccluded points. Following the same low-rank principle, the resulting flow matrix can be factored to get the 3D pose, configuration coefficients, and 3D basis shapes. The flow matrix is factored in an iterative manner, looping between solving for pose, configuration, and basis shapes. The flow-based tracking is applied to several video sequences and provides the input to the 3D non-rigid reconstruction task. Additional results on synthetic data and comparisons to ground truth complete the experiments.
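The rank constraint at the heart of this approach can be illustrated directly: if the deforming shape is a linear combination of K basis shapes, the stacked tracking matrix factors as a 2F x 3K motion matrix times a 3K x P shape matrix, so its rank is bounded by 3K. The sketch below builds a synthetic tracking matrix with made-up random factors and shows that a truncated SVD recovers it from lightly perturbed data.

```python
import numpy as np

rng = np.random.default_rng(2)
F, P, K = 6, 20, 2                           # frames, points, basis shapes
M = rng.standard_normal((2 * F, 3 * K))      # per-frame pose x configuration
B = rng.standard_normal((3 * K, P))          # stacked 3D basis shapes
W = M @ B                                    # ideal tracking matrix, rank <= 3K

noisy = W + 1e-6 * rng.standard_normal(W.shape)
U, s, Vt = np.linalg.svd(noisy)
W_hat = (U[:, :3 * K] * s[:3 * K]) @ Vt[:3 * K]   # rank-3K truncation
```

In the paper this same low-rank structure is exploited earlier in the pipeline, to regularize optical-flow estimation and to fill in occluded trajectories, before the final factorization into pose, configuration, and basis shapes.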

Journal ArticleDOI
TL;DR: This paper discusses a general theory and techniques for image reconstruction and creating enhanced resolution images from irregularly sampled data, and shows that with minor modification, the algebraic reconstruction technique (ART) is functionally equivalent to Grochenig's irregular sampling reconstruction algorithm.
Abstract: While high resolution, regularly gridded observations are generally preferred in remote sensing, actual observations are often not evenly sampled and have lower-than-desired resolution. Hence, there is an interest in resolution enhancement and image reconstruction. This paper discusses a general theory and techniques for image reconstruction and creating enhanced resolution images from irregularly sampled data. Using irregular sampling theory, we consider how the frequency content in aperture function-attenuated sidelobes can be recovered from oversampled data using reconstruction techniques, thus taking advantage of the high frequency content of measurements made with nonideal aperture filters. We show that with minor modification, the algebraic reconstruction technique (ART) is functionally equivalent to Grochenig's (1992) irregular sampling reconstruction algorithm. Using simple Monte Carlo simulations, we compare and contrast the performance of additive ART, multiplicative ART, and the scatterometer image reconstruction (SIR) (a derivative of multiplicative ART) algorithms with and without noise. The reconstruction theory and techniques have applications with a variety of sensors and can enable enhanced resolution image production from many nonimaging sensors. The technique is illustrated with ERS-2 and SeaWinds scatterometer data.
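The additive ART update the paper relates to Grochenig's algorithm is the classical Kaczmarz row action: each step projects the current iterate onto the hyperplane defined by one measurement. A minimal sketch on a small made-up consistent system:

```python
import numpy as np

A = np.array([[1.0,  1.0],
              [1.0, -1.0],
              [2.0,  1.0]])
b = np.array([3.0, -1.0, 4.0])       # consistent system, solution x = (1, 2)

x = np.zeros(2)
for sweep in range(50):
    for a_i, b_i in zip(A, b):
        # Project x onto the hyperplane a_i . x = b_i
        x += (b_i - a_i @ x) / (a_i @ a_i) * a_i
```

In the imaging setting each "row" is one irregular sample of the aperture-filtered scene, and the multiplicative variant (and its SIR derivative) replaces the additive correction with a ratio update.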

Journal ArticleDOI
TL;DR: In this paper, an iterative method for the extraction of velocity and angular distributions from two-dimensional (2D) ion/photoelectron imaging experiments is presented, which is based on the close relationship which exists between the initial 3D angular and velocity distribution and the measured 2d angular and radial distributions.
Abstract: We present an iterative method for the extraction of velocity and angular distributions from two-dimensional (2D) ion/photoelectron imaging experiments. This method is based on the close relationship which exists between the initial 3D angular and velocity distribution and the measured 2D angular and radial distributions, and gives significantly better results than other inversion procedures which are commonly used today. Particularly, the procedure gets rid of the center-line noise which is one of the main artifacts in many current ion/photoelectron imaging experiments.

Journal ArticleDOI
TL;DR: A novel computational algorithm is presented, which, at least in principle, yields an exact reconstruction of the absorbing structures in three-dimensional space inside the tissue based on 2D pressure distributions captured outside at different delay times.
Abstract: In medical imaging different techniques have been developed to gain information from inside a tissue. Optoacoustics is a method to generate tomography pictures of tissue using Q-switched laser pulses. Due to thermal and pressure confinement, a short light pulse generates a pressure distribution inside tissue, which mirrors absorbing structures and can be measured outside the tissue. Using a temporal back-projection method, the pressure distribution measured on the tissue surface allows us to gain a tomography picture of the absorbing structures inside tissue. This study presents a novel computational algorithm, which, at least in principle, yields an exact reconstruction of the absorbing structures in three-dimensional space inside the tissue. The reconstruction is based on 2D pressure distributions captured outside at different delay times. The algorithm is tested in a simulation and back-projection of pressure transients of a small absorber and a single point source.

Journal ArticleDOI
TL;DR: A new modified type of internal sensitivity calibration, VD‐AUTO‐SMASH, is proposed, which uses a VD k‐space sampling approach and shows the ability to improve the image quality without significantly increasing the total scan time.
Abstract: Recently a self-calibrating SMASH technique, AUTO-SMASH, was described. This technique is based on PPA with RF coil arrays using auto-calibration signals. In AUTO-SMASH, important coil sensitivity information required for successful SMASH reconstruction is obtained during the actual scan using the correlation between undersampled SMASH signal data and additionally sampled calibration signals with appropriate offsets in k-space. However, AUTO-SMASH is susceptible to noise in the acquired data and to imperfect spatial harmonic generation in the underlying coil array. In this work, a new modified type of internal sensitivity calibration, VD-AUTO-SMASH, is proposed. This method uses a VD k-space sampling approach and shows the ability to improve the image quality without significantly increasing the total scan time. This new k-space adapted calibration approach is based on a k-space-dependent density function. In this scheme, fully sampled low-spatial frequency data are acquired up to a given cutoff-spatial frequency. Above this frequency, only sparse SMASH-type sampling is performed. On top of the VD approach, advanced fitting routines, which allow an improved extraction of coil-weighting factors in the presence of noise, are proposed. It is shown in simulations and in vivo cardiac images that the VD approach significantly increases the potential and flexibility of rapid imaging with AUTO-SMASH.

Journal ArticleDOI
TL;DR: A generalized formulation for parallel MR imaging is derived, demonstrating the relationship between existing techniques such as SMASH and SENSE, and suggesting new algorithms with improved performance.
Abstract: Parallel magnetic resonance (MR) imaging uses spatial encoding from multiple radiofrequency detector coils to supplement the encoding supplied by magnetic field gradients, and thereby to accelerate MR image acquisitions beyond previous limits. A generalized formulation for parallel MR imaging is derived, demonstrating the relationship between existing techniques such as SMASH and SENSE, and suggesting new algorithms with improved performance. Hybrid approaches combining features of both SMASH-like and SENSE-like image reconstructions are constructed, and numerical conditioning techniques are described which can improve the practical robustness of parallel image reconstructions. Incorporation of numerical conditioning directly into parallel reconstructions using the generalized approach also removes a cumbersome and potentially error-prone sensitivity calibration step involving division of two distinct in vivo reference images. Hybrid approaches in combination with numerical conditioning are shown to extend the range of accelerations over which high-quality parallel images may be obtained.

Journal ArticleDOI
01 May 2001
TL;DR: In this article, component averaging (CAV) is introduced as a new iterative parallel technique suitable for large and sparse unstructured systems of linear equations, which simultaneously projects the current iterate onto all the system's hyperplanes, and is thus inherently parallel.
Abstract: Component averaging (CAV) is introduced as a new iterative parallel technique suitable for large and sparse unstructured systems of linear equations. It simultaneously projects the current iterate onto all the system's hyperplanes, and is thus inherently parallel. However, instead of orthogonal projections and scalar weights (as used, for example, in Cimmino's method), it uses oblique projections and diagonal weighting matrices, with weights related to the sparsity of the system matrix. These features provide for a practical convergence rate which approaches that of algebraic reconstruction technique (ART) (Kaczmarz's row-action algorithm) – even on a single processor. Furthermore, the new algorithm also converges in the inconsistent case. A proof of convergence is provided for unit relaxation, and the fast convergence is demonstrated on image reconstruction problems of the Herman head phantom obtained within the SNARK93 image reconstruction software package. Both reconstructed images and convergence plots are presented. The practical consequences of the new technique are far reaching for real-world problems in which iterative algorithms are used for solving large, sparse, unstructured and often inconsistent systems of linear equations.
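One CAV-style iteration can be sketched compactly: a fully simultaneous step in which every row contributes a correction, with oblique projections weighted by column sparsity (s[j] = number of nonzeros in column j) rather than Cimmino's orthogonal projections with scalar weights. The small system below is illustrative, not from the paper.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
b = np.array([1.0, 3.0, 4.0])              # consistent system, x = (1, 2)

s = (A != 0).sum(axis=0).astype(float)     # nonzeros per column of A
denom = (A ** 2) @ s                       # per row i: sum_l s_l * a_il^2

x = np.zeros(2)
for _ in range(200):                       # unit relaxation (lambda = 1)
    resid = b - A @ x
    x = x + A.T @ (resid / denom)          # simultaneous oblique projections
```

Because every row's correction is computed from the same iterate, the sweep parallelizes trivially; the sparsity weighting is what lets the simultaneous scheme approach ART's practical convergence rate.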

Journal ArticleDOI
TL;DR: In this article, numerical calculations on the field distribution in the focus of an optical system with high numerical aperture are presented, where diffraction integrals based on the Debye approximation are derived and evaluated for a radially polarized input field with a doughnut-shaped intensity distribution.
Abstract: We present numerical calculations on the field distribution in the focus of an optical system with high numerical aperture. The diffraction integrals, which are based on the Debye approximation, are derived and evaluated for a radially polarized input field with a doughnut-shaped intensity distribution. It is shown that this mode focuses to a spot size significantly smaller than in the case of linear polarization. An experimental setup to measure the three-dimensional intensity distribution in the focal region is presented, which is based on the knife-edge method and on tomographic reconstruction.

Journal ArticleDOI
TL;DR: This work estimated the scatter fractions and effects of scatter on image noise, and derived a relationship between the noise in a reconstructed image and in an x-ray intensity measurement, and estimated the image noise under relevant clinical conditions.
Abstract: Cone beam CT has a capability for the 3-dimensional imaging of large volumes with isotropic resolution, and has potential for 4-dimensional imaging (dynamic volume imaging), because cone beam CT acquires data of a large volume with one rotation of an x-ray tube-detector pair. However, one of the potential drawbacks of cone beam CT is a larger amount of scattered x-rays, which may enhance the noise in reconstructed images, and thus affect the low-contrast detectability. Our aim in this work was to estimate the scatter fractions and effects of scatter on image noise, and to seek methods of improving image quality in cone beam CT. First we derived a relationship between the noise in a reconstructed image and in an x-ray intensity measurement. Then we estimated the scatter to primary ratios in x-ray measurements using a Monte-Carlo simulation. From these we estimated the image noise under relevant clinical conditions. The results showed that the scattered radiation made a substantial contribution to the image noise. However, focused collimators could improve it by decreasing the scattered radiation drastically while keeping the primary radiation at nearly the same level. A conventional grid also improved the image noise, though the improvement was less than that of focused collimators.
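A hedged toy version of the noise argument: for a quantum-noise-limited detector, scattered quanta add to the total counts but carry no primary signal, so the relative noise referred to the primary grows as sqrt(1 + SPR). The quanta count below is an arbitrary illustrative number.

```python
import math

def relative_noise(primary_quanta, spr):
    total = primary_quanta * (1.0 + spr)   # scattered quanta inflate the total
    # Poisson noise sigma = sqrt(total), referred to the primary signal level
    return math.sqrt(total) / primary_quanta

base = relative_noise(10_000, 0.0)         # scatter-free measurement
with_scatter = relative_noise(10_000, 1.0) # SPR = 100%
ratio = with_scatter / base                # noise penalty factor, sqrt(2)
```

A focused collimator helps precisely because it cuts the scatter term while leaving the primary quanta, and hence the numerator's useful signal, nearly unchanged.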

Journal ArticleDOI
TL;DR: In this paper, a model problem in electrical impedance tomography for the identification of unknown shapes from data in a narrow strip along the boundary of the domain is investigated and the representation of the shape of the boundary and its evolution during an iterative reconstruction process is achieved by the level set method.
Abstract: A model problem in electrical impedance tomography for the identification of unknown shapes from data in a narrow strip along the boundary of the domain is investigated. The representation of the shape of the boundary and its evolution during an iterative reconstruction process is achieved by the level set method. The shape derivatives of this problem involve the normal derivative of the potential along the unknown boundary. Hence an accurate resolution of its derivatives along the unknown interface is essential. It is obtained by the immersed interface method.

Proceedings ArticleDOI
08 Dec 2001
TL;DR: Results show that this technique can produce images whose error properties are equivalent to the initial approximation used, while their contour smoothness is both visually and quantitatively improved.
Abstract: Image magnification is a common problem in imaging applications, requiring interpolation to "read between the pixels". Although many magnification/interpolation algorithms have been proposed in the literature, all methods must suffer to some degree the effects of imperfect reconstruction: false high-frequency content introduced by the underlying original sampling. Most often, these effects manifest themselves as jagged contours in the image. The paper presents a method for constrained smoothing of such artifacts that attempts to produce smooth reconstructions of the image's level curves while still maintaining image fidelity. This is similar to other iterative reconstruction algorithms and to Bayesian restoration techniques, but instead of assuming a smoothness prior for the underlying intensity function it assumes smoothness of the level curves. Results show that this technique can produce images whose error properties are equivalent to the initial approximation (interpolation) used, while their contour smoothness is both visually and quantitatively improved.

Journal ArticleDOI
TL;DR: In this paper, a near-infrared frequency-domain system designed for tomographic breast imaging is described, which utilizes five optical wavelengths, from 660 to 826 nm, and parallel detection with 16 photomultiplier tubes.
Abstract: A novel near-infrared frequency-domain system designed for tomographic breast imaging is described. The setup utilizes five optical wavelengths, from 660 to 826 nm, and parallel detection with 16 photomultiplier tubes. Direct fiberoptic coupling with the tissue is achieved with a high precision positioning device using 16 motorized actuators (0.5 μm precision) arranged radially in a circular geometry. Images of breast tissue optical absorption and reduced scattering coefficients are obtained using a Newton-type reconstruction algorithm to solve for the optimal fit between the measurement data and predicted data from a finite element solution to the frequency-domain diffusion equation. The design, calibration, and performance of the tomographic imaging system are detailed. Data acquisition from the system requires under 30 s for a single tomographic slice at one optical wavelength with a measurement repeatability for a single phantom on average of 0.5% in ac intensity and 0.4° in phase. Absorbing and scatt...

Journal ArticleDOI
TL;DR: Experimental results on real gray-level images show that it is possible to recover an image to within a specified degree of accuracy and to classify objects reliably even when a large set of descriptors is used.

Journal ArticleDOI
TL;DR: An optimized interleaved‐spiral pulse sequence, providing high spatial and temporal resolution, was developed for dynamic imaging of pulmonary ventilation with hyperpolarized 3He, and tested in healthy volunteers and patients with lung disease.
Abstract: An optimized interleaved-spiral pulse sequence, providing high spatial and temporal resolution, was developed for dynamic imaging of pulmonary ventilation with hyperpolarized (3)He, and tested in healthy volunteers and patients with lung disease. Off-resonance artifacts were minimized by using a short data-sampling period per interleaf, and gradient-fidelity errors were compensated for by using measured k-space trajectories for image reconstruction. A nonsequential acquisition order was implemented to improve image quality during periods of rapid signal change, such as early inspiration. Using a sliding-window reconstruction, cine-movies with a frame rate of 100 images per second were generated. Dynamic images demonstrating minimal susceptibility- and motion-induced artifacts were obtained in sagittal, coronal, and axial orientations. The pulse sequence had the flexibility to image multiple slices almost simultaneously. Our initial experience in healthy volunteers and subjects with lung pathology demonstrated the potential of this new tool for capturing the features of lung gas-flow dynamics.

PatentDOI
TL;DR: In this article, a new image reconstruction technique for imaging two and three-phase flows using electrical capacitance tomography (ECT) has been developed based on multi-criteria optimization using an analog neural network, hereafter referred to as Neural Network Multi-Criteria Optimization Image Reconstruction (NN-MOIRT).
Abstract: A new image reconstruction technique for imaging two- and three-phase flows using electrical capacitance tomography (ECT) has been developed based on multi-criteria optimization using an analog neural network, hereafter referred to as Neural Network Multi-criteria Optimization Image Reconstruction (NN-MOIRT). The reconstruction technique is a combination of the multi-criteria optimization image reconstruction technique for linear tomography and the so-called linear back projection (LBP) technique commonly used for capacitance tomography. The multi-criteria optimization image reconstruction problem is solved using Hopfield-model dynamic neural-network computing. For three-component imaging, the single-step sigmoid function in the Hopfield networks is replaced by a double-step sigmoid function, allowing the neural computation to converge to three distinct stable regions in the output space corresponding to the three components, enabling differentiation among the single phases.

Journal ArticleDOI
TL;DR: 3D motion effects are present even in regular, symmetric phantom geometries and the development of a 3D reconstruction algorithm capable of discerning elastic property distributions in the presence of such effects is presented.
Abstract: Accurate characterization of harmonic tissue motion for realistic tissue geometries and property distributions requires knowledge of the full three-dimensional displacement field because of the asymmetric nature of both the boundaries of the tissue domain and the location of internal mechanical heterogeneities. The implications of this for magnetic resonance elastography (MRE) are twofold. First, for MRE methods which require the measurement of a harmonic displacement field within the tissue region of interest, the presence of 3D motion effects reduces or eliminates the possibility that simpler, lower-dimensional motion field images will capture the true dynamics of the entire stimulated tissue. Second, MRE techniques that exploit model-based elastic property reconstruction methods will not be able to accurately match the observed displacements unless they are capable of accounting for 3D motion effects. These two factors are of key importance for MRE techniques based on linear elasticity models to reconstruct mechanical tissue property distributions in biological samples. This article demonstrates that 3D motion effects are present even in regular, symmetric phantom geometries and presents the development of a 3D reconstruction algorithm capable of discerning elastic property distributions in the presence of such effects. The algorithm allows for the accurate determination of tissue mechanical properties at resolutions equal to that of the MR displacement image in complex, asymmetric biological tissue geometries. Simulation studies in a realistic 3D breast geometry indicate that the process can accurately detect 1-cm diameter hard inclusions with 2.5x elasticity contrast to the surrounding tissue.

Proceedings ArticleDOI
01 Dec 2001
TL;DR: This work presents a novel method based on polarization imaging for shape recovery of specular surfaces that overcomes the limitations of the intensity based approach and recovers whole surface patches and not only single curves on the surface.
Abstract: Traditional intensity imaging does not offer a general approach to perceiving textureless, specularly reflecting surfaces. Intensity-based methods for shape reconstruction of specular surfaces rely on virtual (i.e., mirrored) features moving over the surface under viewer motion. We present a novel method based on polarization imaging for shape recovery of specular surfaces. This method overcomes the limitations of the intensity-based approach because no virtual features are required, and it recovers whole surface patches rather than only single curves on the surface. The solution is general in that it is independent of the illumination. The polarization image encodes the projection of the surface normals onto the image and therefore provides constraints on the surface geometry. Taking polarization images from multiple views produces enough constraints to infer the complete surface shape. The reconstruction problem is solved by an optimization scheme in which the surface geometry is modelled by a set of hierarchical basis functions. The optimization algorithm converges well and is accurate and noise-resistant. The work is substantiated by experiments on synthetic and real data.
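How a polarization image constrains the projected surface normal can be illustrated with the standard Stokes-parameter recovery of the polarization angle from three polarizer orientations. This is textbook polarimetry, not the paper's full multi-view reconstruction; the pixel model and its parameters below are synthetic.

```python
import numpy as np

def angle_of_polarization(i0, i45, i90):
    """Recover the polarization phase angle (mod pi) at a pixel from
    intensities behind a linear polarizer at 0, 45 and 90 degrees,
    via the linear Stokes parameters S1 = I0 - I90 and
    S2 = 2*I45 - I0 - I90."""
    s1 = i0 - i90
    s2 = 2.0 * i45 - i0 - i90
    return 0.5 * np.arctan2(s2, s1)

# Synthetic pixel: unpolarized part iu, polarized part ip, phase phi
phi, iu, ip = 0.6, 1.0, 0.5
I = lambda a: 0.5 * iu + 0.5 * ip * (1.0 + np.cos(2.0 * (a - phi)))
est = angle_of_polarization(I(0.0), I(np.pi / 4), I(np.pi / 2))
```

The recovered angle `est` equals the simulated phase `phi`; per pixel this angle constrains the image-plane direction of the surface normal, and combining such constraints across views is what allows full shape inference.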

Journal ArticleDOI
TL;DR: The discretization of the continuous image formation model is improved to explicitly allow for higher order interpolation methods to be used and the constraint sets are modified to reduce the amount of edge ringing present in the high resolution image estimate.
Abstract: In this paper, we propose to improve the POCS-based super-resolution reconstruction (SRR) methods in two ways. First, the discretization of the continuous image formation model is improved to explicitly allow for higher order interpolation methods to be used. Second, the constraint sets are modified to reduce the amount of edge ringing present in the high resolution image estimate. This effectively regularizes the inversion process.
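A minimal sketch of the kind of constraint-set projection that POCS-based super-resolution relies on is the projection onto a single bounded-residual data-consistency set { x : |h.x - y| <= delta }. This generic form is for illustration only; the paper's modified constraint sets for reducing edge ringing are more elaborate.

```python
import numpy as np

def pocs_project(x, h, y, delta):
    """Project x onto the convex set { x : |h.x - y| <= delta } for one
    linear observation (h, y). If the residual already lies inside the
    +/- delta band, x is returned unchanged; otherwise x is moved the
    minimum distance along h to reach the band."""
    r = y - h @ x
    if abs(r) <= delta:
        return x
    step = r - np.sign(r) * delta
    return x + step * h / (h @ h)
```

Cycling such projections over all observations (and over amplitude or smoothness sets) is the POCS iteration; widening `delta` relaxes the data constraint, which is one way such schemes regularize the inversion.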

Journal ArticleDOI
TL;DR: An iterative Bayesian reconstruction algorithm for limited view angle tomography, or ectomography, based on the three-dimensional total variation (TV) norm has been developed and has been shown to improve the perceived image quality.
Abstract: An iterative Bayesian reconstruction algorithm for limited view angle tomography, or ectomography, based on the three-dimensional total variation (TV) norm has been developed. The TV norm has been described in the literature as a method for reducing noise in two-dimensional images while preserving edges, without introducing ringing or edge artefacts. It has also been proposed as a 2D regularization function in Bayesian reconstruction, implemented in an expectation maximization algorithm (TV-EM). The TV-EM was developed for 2D single photon emission computed tomography imaging, and the algorithm is capable of smoothing noise while maintaining edges without introducing artefacts. The TV norm was extended from 2D to 3D and incorporated into an ordered subsets expectation maximization algorithm for limited view angle geometry. The algorithm, called TV3D-EM, was evaluated using a modelled point spread function and digital phantoms. Reconstructed images were compared with those reconstructed with the 2D filtered backprojection algorithm currently used in ectomography. Results show a substantial reduction in artefacts related to the limited view angle geometry, and noise levels were also improved. Perhaps most important, depth resolution was improved by at least 45%. In conclusion, the proposed algorithm has been shown to improve the perceived image quality.
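The 2D-to-3D extension of the TV norm can be sketched with forward differences over a volume. This is a generic isotropic discretization, not necessarily the exact one used in TV3D-EM; the small `eps` keeps the norm differentiable at zero gradient, as is common when TV is used inside a gradient-based reconstruction.

```python
import numpy as np

def tv3d(f, eps=1e-8):
    """Isotropic 3D total-variation norm of a volume f, using forward
    differences (replicated last slice in each direction, i.e. zero
    gradient past the boundary)."""
    dx = np.diff(f, axis=0, append=f[-1:, :, :])
    dy = np.diff(f, axis=1, append=f[:, -1:, :])
    dz = np.diff(f, axis=2, append=f[:, :, -1:])
    return np.sum(np.sqrt(dx**2 + dy**2 + dz**2 + eps))
```

A constant volume has (essentially) zero TV while a sharp step has TV proportional to the step's surface area, which is why TV penalties suppress noise yet leave edges intact.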

Proceedings ArticleDOI
01 Dec 2001
TL;DR: IBR techniques for non-metric reconstructions are described, which are often much easier to obtain since they do not require camera calibration and are well suited to the video stabilization problem.
Abstract: We consider the problem of video stabilization: removing unwanted image perturbations due to unstable camera motions. We approach this problem from an image-based rendering (IBR) standpoint. Given an unstabilized video sequence, the task is to synthesize a new sequence as seen from a stabilized camera trajectory. This task is relatively straightforward if one has a Euclidean reconstruction of the unstabilized camera trajectory and a suitable IBR algorithm. However, it is often not feasible to obtain a Euclidean reconstruction from an arbitrary video sequence. In light of this problem, we describe IBR techniques for non-metric reconstructions, which are often much easier to obtain since they do not require camera calibration. These rendering techniques are well suited to the video stabilization problem. The key idea behind our techniques is that all measurements are specified in the image space, rather than in the non-metric space.