
Showing papers on "Iterative reconstruction published in 1999"


PatentDOI
TL;DR: The problem of image reconstruction from sensitivity encoded data is formulated in a general fashion and solved for arbitrary coil configurations and k‐space sampling patterns, and special attention is given to the currently most practical case, namely, sampling a common Cartesian grid with reduced density.
Abstract: The invention relates to a method of parallel imaging for obtaining images by means of magnetic resonance (MR). The method includes the simultaneous measurement of sets of MR signals by an array of receiver coils, and the reconstruction of individual receiver coil images from the sets of MR signals. In order to reduce the acquisition time, the distance between adjacent phase encoding lines in k-space is increased, compared to standard Fourier imaging, by a non-integer factor smaller than the number of receiver coils. This undersampling gives rise to aliasing artifacts in the individual receiver coil images. An unaliased final image with the same field of view as in standard Fourier imaging is formed from a combination of the individual receiver coil images whereby account is taken of the mutually different spatial sensitivities of the receiver coils at the positions of voxels which in the receiver coil images become superimposed by aliasing. This requires the solution of a linear equation by means of the generalised inverse of a sensitivity matrix. The reduction of the number of phase encoding lines by a non-integer factor compared to standard Fourier imaging means that different numbers of voxels become superimposed (by aliasing) in different regions of the receiver coil images. This effect can be exploited to shift residual aliasing artifacts outside the area of interest.

6,562 citations
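The unfolding step described above reduces, per aliased pixel, to a small linear system solved with the generalised inverse of the sensitivity matrix. A minimal numerical sketch (random stand-ins for the coil sensitivities and voxel values; not the patented implementation):

```python
import numpy as np

# Toy SENSE-style unaliasing for Cartesian undersampling. With reduction
# factor R, each pixel of a coil's aliased image is the sensitivity-weighted
# sum of R superimposed voxels; the generalised inverse of the sensitivity
# matrix recovers the unaliased voxel values.

rng = np.random.default_rng(0)
n_coils, R = 4, 2
# coil sensitivities at the R voxel positions that alias onto one pixel
S = rng.standard_normal((n_coils, R)) + 1j * rng.standard_normal((n_coils, R))
v_true = rng.standard_normal(R) + 1j * rng.standard_normal(R)  # true voxel values
a = S @ v_true                      # aliased pixel value seen by each coil

v_hat = np.linalg.pinv(S) @ a       # unfolded voxel values
```

With more coils than superimposed voxels the system is overdetermined, and in the noise-free case the least-squares solution is exact.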


Journal ArticleDOI
TL;DR: Results are shown in which PROPELLER MRI is used to correct for bulk motion in head images and respiratory motion in nongated cardiac images.
Abstract: A method for motion correction, involving both data collection and reconstruction, is presented. The PROPELLER MRI method collects data in concentric rectangular strips rotated about the k-space origin. The central region of k-space is sampled for every strip, which (a) allows one to correct spatial inconsistencies in position, rotation, and phase between strips, (b) allows one to reject data based on a correlation measure indicating through-plane motion, and (c) further decreases motion artifacts through an averaging effect for low spatial frequencies. Results are shown in which PROPELLER MRI is used to correct for bulk motion in head images and respiratory motion in nongated cardiac images. Magn Reson Med 42:963-969, 1999.

917 citations


Journal ArticleDOI
TL;DR: In this article, a holographic reconstruction procedure combining images taken at different distances from the specimen was developed, which results in quantitative phase mapping and, through association with three-dimensional reconstruction, in holotomography, the complete three-dimensional mapping of the density in a sample.
Abstract: Because the refractive index for hard x rays is slightly different from unity, the optical phase of a beam is affected by transmission through an object. Phase images can be obtained with extreme instrumental simplicity by simple propagation provided the beam is coherent. But, unlike absorption, the phase is not simply related to image brightness. A holographic reconstruction procedure combining images taken at different distances from the specimen was developed. It results in quantitative phase mapping and, through association with three-dimensional reconstruction, in holotomography, the complete three-dimensional mapping of the density in a sample. This tool in the characterization of materials at the micrometer scale is uniquely suited to samples with low absorption contrast and radiation-sensitive systems.

903 citations


Journal ArticleDOI
TL;DR: A theoretical proof is given which shows that the absence of skew in the image plane is sufficient to allow for self-calibration, and a method to detect critical motion sequences is proposed.
Abstract: In this paper the theoretical and practical feasibility of self-calibration in the presence of varying intrinsic camera parameters is under investigation. The paper's main contribution is to propose a self-calibration method which efficiently deals with all kinds of constraints on the intrinsic camera parameters. Within this framework a practical method is proposed which can retrieve metric reconstruction from image sequences obtained with uncalibrated zooming/focusing cameras. The feasibility of the approach is illustrated on real and synthetic examples. Besides this a theoretical proof is given which shows that the absence of skew in the image plane is sufficient to allow for self-calibration. A counting argument is developed which, depending on the set of constraints, gives the minimum sequence length for self-calibration, and a method to detect critical motion sequences is proposed.

829 citations


PatentDOI
David O. Walsh
TL;DR: Experimental results indicate SNR performance approaching that of the optimal matched filter, and the technique enables near‐optimal reconstruction of multicoil MR imagery without a priori knowledge of the individual coil field maps or noise correlation structure.
Abstract: A method to model the NMR signal and/or noise functions as stochastic processes. Locally relevant statistics for the signal and/or noise processes are derived directly from the set of individual coil images, in the form of array correlation matrices, by averaging individual coil image cross-products over two or more pixel locations. An optimal complex weight vector is computed on the basis of the estimated signal and noise correlation statistics. The weight vector is applied to coherently combine the individual coil images at a single pixel location, at multiple pixel locations, or over the entire image field of view (FOV).

721 citations
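A sketch of the stochastic matched-filter idea above (a toy coil model, not the patented implementation): the signal correlation matrix is estimated by averaging coil-image cross-products over pixels, and with white noise the combining weights are its dominant eigenvector.

```python
import numpy as np

# Adaptive coil combination sketch. b (local coil sensitivities) and m
# (underlying signal) are illustrative stand-ins; noise is assumed white,
# so the noise correlation matrix is the identity and the weight vector is
# the dominant eigenvector of the estimated signal correlation matrix.

rng = np.random.default_rng(1)
n_coils, n_pix = 8, 64
b = rng.standard_normal(n_coils) + 1j * rng.standard_normal(n_coils)
m = rng.standard_normal(n_pix)
noise = 0.1 * (rng.standard_normal((n_coils, n_pix))
               + 1j * rng.standard_normal((n_coils, n_pix)))
coil_imgs = np.outer(b, m) + noise

Rs = coil_imgs @ coil_imgs.conj().T / n_pix  # averaged cross-products
evals, evecs = np.linalg.eigh(Rs)            # ascending eigenvalues
w = evecs[:, -1]                             # dominant eigenvector = weights
combined = w.conj() @ coil_imgs              # coherently combined pixels
```

At reasonable SNR the estimated weight vector aligns (up to a phase) with the true coil sensitivity vector, which is the matched-filter optimum for white noise.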


Journal ArticleDOI
Yoshinori Arai, E Tammisalo, K Iwai, Koji Hashimoto, Koji Shinoda
TL;DR: Ortho-CT as mentioned in this paper is a cone-beam-type CT apparatus consisting of a multifunctional maxillofacial imaging machine (Scanora, Soredex, Helsinki, Finland) in which the film is replaced with an X-ray imaging intensifier (Hamamatsu Photonics, Hamamatsu, Japan).
Abstract: OBJECTIVE To describe a compact computed tomographic apparatus (Ortho-CT) for use in dental practice. METHODS Ortho-CT is a cone-beam-type CT apparatus consisting of a multifunctional maxillofacial imaging machine (Scanora, Soredex, Helsinki, Finland) in which the film is replaced with an X-ray imaging intensifier (Hamamatsu Photonics, Hamamatsu, Japan). The region of image reconstruction is a cylinder 32 mm in height and 38 mm in diameter and the voxel is a 0.136-mm cube. Scanning is at 85 kV and 10 mA with a 1 mm Cu filter. The scan time is 17 s, comparable with that required for rotational panoramic radiography. A single scan collects 512 sets of projection data through 360 degrees and the image is reconstructed by a personal computer. The time required for image reconstruction is about 10 min. RESULTS The resolution limit was about 2.0 lp mm⁻¹ and the skin entrance dose 0.62 mGy. Excellent image quality was obtained with a tissue-equivalent skull phantom: roots, periodontal ligament space, lamina du...

721 citations


Journal ArticleDOI
TL;DR: This paper introduces a simultaneous update algorithm called separable paraboloidal surrogates (SPS) that converges much faster than the transmission EM algorithm and shows that OSTR is superior to OSEM applied to the logarithm of the transmission data.
Abstract: The ordered subsets EM (OSEM) algorithm has enjoyed considerable interest for emission image reconstruction due to its acceleration of the original EM algorithm and ease of programming. The transmission EM reconstruction algorithm converges very slowly and is not used in practice. In this paper, we introduce a simultaneous update algorithm called separable paraboloidal surrogates (SPS) that converges much faster than the transmission EM algorithm. Furthermore, unlike the 'convex algorithm' for transmission tomography, the proposed algorithm is monotonic even with nonzero background counts. We demonstrate that the ordered subsets principle can also be applied to the new SPS algorithm for transmission tomography to accelerate 'convergence', albeit with similar sacrifice of global convergence properties as for OSEM. We implemented and evaluated this ordered subsets transmission (OSTR) algorithm. The results indicate that the OSTR algorithm speeds up the increase in the objective function by roughly the number of subsets in the early iterates when compared to the ordinary SPS algorithm. We compute mean square errors and segmentation errors for different methods and show that OSTR is superior to OSEM applied to the logarithm of the transmission data. However, penalized-likelihood reconstructions yield the best quality images among all other methods tested.

616 citations
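The ordered-subsets principle above can be illustrated on a least-squares analogue (this is not the SPS/OSTR update itself; the system matrix is a random stand-in): each sub-iteration takes a gradient step using only one subset of the projection rows, cycling through all subsets.

```python
import numpy as np

# Ordered-subsets toy: cycling gradient steps over disjoint row subsets of a
# consistent linear system. Early convergence is accelerated roughly by the
# number of subsets, at the cost of the global convergence guarantees noted
# in the abstract.

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 20))   # toy "projection" matrix
x_true = rng.random(20)
y = A @ x_true                      # consistent, noise-free measurements

n_subsets = 4
subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
step = 1.0 / max(np.linalg.norm(A[idx], 2) ** 2 for idx in subsets)

x = np.zeros(20)
for _ in range(50):                 # outer iterations
    for idx in subsets:             # one gradient step per subset
        x = x + step * A[idx].T @ (y[idx] - A[idx] @ x)
```

Because the data are consistent, the true image is a fixed point of every subset update and the cyclic iteration converges to it.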


Journal ArticleDOI
Hui Hu1
TL;DR: The results show that the slice profile, image artifacts, and noise exhibit performance peaks or valleys at certain helical pitches in the multi-slice CT, whereas in the single-slice CT the image noise remains unchanged and the slice profile and image artifacts steadily deteriorate with helical pitch.
Abstract: The multi-slice CT scanner refers to a special CT system equipped with a multiple-row detector array to simultaneously collect data at different slice locations. The multi-slice CT scanner has the capability of rapidly scanning a large longitudinal (z) volume with high z-axis resolution. It also presents new challenges and new characteristics. In this paper, we study the scan and reconstruction principles of the multi-slice helical CT in general and the 4-slice helical CT in particular. The multi-slice helical computed tomography consists of the following three key components: the preferred helical pitches for efficient z sampling in data collection and better artifact control; the new helical interpolation algorithms to correct for fast simultaneous patient translation; and the z-filtering reconstruction for providing multiple tradeoffs of slice thickness, image noise, and artifacts to suit different application requirements. The concept of the preferred helical pitch is discussed with a newly proposed z sampling analysis. New helical reconstruction algorithms and z-filtering reconstruction are developed for multi-slice CT in general. Furthermore, the theoretical models of slice profile and image noise are established for multi-slice helical CT. For 4-slice helical CT in particular, preferred helical pitches are discussed. Special reconstruction algorithms are developed. Slice profiles, image noise, and artifacts of 4-slice helical CT are studied and compared with single-slice helical CT. The results show that the slice profile, image artifacts, and noise exhibit performance peaks or valleys at certain helical pitches in the multi-slice CT, whereas in the single-slice CT the image noise remains unchanged and the slice profile and image artifacts steadily deteriorate with helical pitch. The study indicates that the 4-slice helical CT can provide equivalent image quality at 2 to 3 times the volume coverage speed of the single-slice helical CT.

523 citations


Journal ArticleDOI
TL;DR: In this paper, a modified Landweber iteration method is proposed to enhance the quality of the image when two distinct phases are present, and a simple constraint is used as a regularization for computing a stabilized solution, with better immunity to noise and faster convergence.
Abstract: Electrical capacitance tomography (ECT) is a so-called "soft-field" tomography technique. The linear back-projection (LBP) method is used widely for image reconstruction in ECT systems. It is numerically simple and computationally fast because it involves only a single matrix-vector multiplication. However, the images produced by the LBP algorithm are generally qualitative rather than quantitative. This paper presents an image-reconstruction algorithm based on a modified Landweber iteration method that can greatly enhance the quality of the image when two distinct phases are present. In this algorithm a simple constraint is used as a regularization for computing a stabilized solution, with better immunity to noise and faster convergence. Experimental results are presented.

507 citations
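A minimal projected Landweber iteration of the kind described above (a sketch: the sensitivity matrix is a random stand-in, not a real ECT forward model). The clip to [0, 1] plays the role of the simple constraint regularising a two-phase image.

```python
import numpy as np

# Projected Landweber for a toy two-phase reconstruction problem.
# The step size 1/||S||^2 guarantees convergence of the unconstrained
# iteration; the clip enforces the two-phase permittivity bounds.

rng = np.random.default_rng(3)
S = rng.standard_normal((40, 100))               # toy sensitivity matrix
g_true = (rng.random(100) > 0.7).astype(float)   # two-phase "image"
c = S @ g_true                                   # capacitance measurements

g = np.full(100, 0.5)                            # initial guess
alpha = 1.0 / np.linalg.norm(S, 2) ** 2          # convergent step size
for _ in range(500):
    g = g + alpha * S.T @ (c - S @ g)            # Landweber update
    g = np.clip(g, 0.0, 1.0)                     # two-phase constraint
```

Compared with a single LBP step, the iteration drives the data residual toward zero while the constraint keeps the solution physically plausible.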


Journal ArticleDOI
TL;DR: A general tridimensional reconstruction algorithm of range and volumetric images, based on deformable simplex meshes, which can handle surfaces without any restriction on their shape or topology.
Abstract: In this paper, we propose a general tridimensional reconstruction algorithm of range and volumetric images, based on deformable simplex meshes. Simplex meshes are topologically dual of triangulations and have the advantage of permitting smooth deformations in a simple and efficient manner. Our reconstruction algorithm can handle surfaces without any restriction on their shape or topology. The different tasks performed during the reconstruction include the segmentation of given objects in the scene, the extrapolation of missing data, and the control of smoothness, density, and geometric quality of the reconstructed meshes. The reconstruction takes place in two stages. First, the initialization stage creates a simplex mesh in the vicinity of the data model either manually or using an automatic procedure. Then, after a few iterations, the mesh topology can be modified by creating holes or by increasing its genus. Finally, an iterative refinement algorithm decreases the distance of the mesh from the data while preserving high geometric and topological quality. Several reconstruction examples are provided with quantitative and qualitative results.

366 citations


Journal ArticleDOI
TL;DR: A new method is proposed to compute an attenuation map directly from the emission sinogram, eliminating the transmission scan from the acquisition protocol; the method has been tested on mathematical phantoms and on a few clinical studies.
Abstract: In order to perform attenuation correction in emission tomography an attenuation map is required. The authors propose a new method to compute this map directly from the emission sinogram, eliminating the transmission scan from the acquisition protocol. The problem is formulated as an optimization task where the objective function is a combination of the likelihood and an a priori probability. The latter uses a Gibbs prior distribution to encourage local smoothness and a multimodal distribution for the attenuation coefficients. Since the attenuation process is different in positron emission tomography (PET) and single photon emission tomography (SPECT), a separate algorithm for each case is derived. The method has been tested on mathematical phantoms and on a few clinical studies. For PET, good agreement was found between the images obtained with transmission measurements and those produced by the new algorithm in an abdominal study. For SPECT, promising simulation results have been obtained for nonhomogeneous attenuation due to the presence of the lungs.

Journal ArticleDOI
Hakan Erdogan, Jeffrey A. Fessler
TL;DR: The new algorithms are based on paraboloidal surrogate functions for the log likelihood, which lead to monotonic algorithms even for the nonconvex log likelihood that arises due to background events, such as scatter and random coincidences.
Abstract: We present a framework for designing fast and monotonic algorithms for transmission tomography penalized-likelihood image reconstruction. The new algorithms are based on paraboloidal surrogate functions for the log likelihood. Due to the form of the log-likelihood function it is possible to find low curvature surrogate functions that guarantee monotonicity. Unlike previous methods, the proposed surrogate functions lead to monotonic algorithms even for the nonconvex log likelihood that arises due to background events, such as scatter and random coincidences. The gradient and the curvature of the likelihood terms are evaluated only once per iteration. Since the problem is simplified at each iteration, the CPU time is less than that of current algorithms which directly minimize the objective, yet the convergence rate is comparable. The simplicity, monotonicity, and speed of the new algorithms are quite attractive. The convergence rates of the algorithms are demonstrated using real and simulated PET transmission scans.

Journal ArticleDOI
TL;DR: This method uses only the coordinates of the sampled data; unlike previous methods, it does not require knowledge of the trajectories and can easily handle trajectories that “cross” in k‐space.
Abstract: Data collection of MRI which is sampled nonuniformly in k-space is often interpolated onto a Cartesian grid for fast reconstruction. The collected data must be properly weighted before interpolation, for accurate reconstruction. We propose a criterion for choosing the weighting function necessary to compensate for nonuniform sampling density. A numerical iterative method to find a weighting function that meets that criterion is also given. This method uses only the coordinates of the sampled data; unlike previous methods, it does not require knowledge of the trajectories and can easily handle trajectories that "cross" in k-space. Moreover, the method can handle sampling patterns that are undersampled in some regions of k-space and does not require a post-gridding density correction. Weighting functions for various data collection strategies are shown. Synthesized and collected in vivo data also illustrate aspects of this method.
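A one-dimensional sketch of the iterative density-compensation idea above: the weights are repeatedly divided by their own convolution with the gridding kernel, evaluated at the sample coordinates, until the kernel-smoothed density is flat. The Gaussian kernel and the sample coordinates are illustrative choices, not those of the paper.

```python
import numpy as np

# Iterative density compensation (1D toy). Only the sample coordinates are
# used; no knowledge of the trajectories is required, matching the property
# highlighted in the TL;DR.

rng = np.random.default_rng(4)
k = np.sort(rng.random(200)) ** 2        # nonuniform samples, denser near 0
kernel_width = 0.02                      # assumed gridding-kernel width
C = np.exp(-((k[:, None] - k[None, :]) / kernel_width) ** 2)  # pairwise kernel

w = np.ones_like(k)
for _ in range(30):
    w = w / (C @ w)                      # w_{i+1}(k) = w_i(k) / (w_i * C)(k)

smoothed = C @ w                         # approximately constant at convergence
```

At the fixed point the weighted, kernel-smoothed sampling density is uniform, which is the stated criterion for the weighting function.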

Journal ArticleDOI
TL;DR: This paper proposes a finite element-based method for the reconstruction of three-dimensional resistivity distributions based on the so-called complete electrode model that takes into account the presence of the electrodes and the contact impedances and results from static and dynamic reconstructions with real measurement data are given.
Abstract: In electrical impedance tomography an approximation for the internal resistivity distribution is computed based on the knowledge of the injected currents and measured voltages on the surface of the body. It is often assumed that the injected currents are confined to the two-dimensional (2-D) electrode plane and the reconstruction is based on 2-D assumptions. However, the currents spread out in three dimensions and, therefore, off-plane structures have significant effect on the reconstructed images. In this paper we propose a finite element-based method for the reconstruction of three-dimensional resistivity distributions. The proposed method is based on the so-called complete electrode model that takes into account the presence of the electrodes and the contact impedances. Both the forward and the inverse problems are discussed and results from static and dynamic (difference) reconstructions with real measurement data are given. It is shown that in phantom experiments with accurate finite element computations it is possible to obtain static images that are comparable with difference images that are reconstructed from the same object with the empty (saline filled) tank as a reference.

Journal ArticleDOI
TL;DR: A variant of Tikhonov regularization is examined in which radial variation is allowed in the value of the regularization parameter, which minimizes high-frequency noise in the reconstructed image near the source-detector locations and can produce constant image resolution and contrast across the image field.
Abstract: Diffuse tomography with near-infrared light has biomedical application for imaging hemoglobin, water, lipids, cytochromes, or exogenous contrast agents and is being investigated for breast cancer diagnosis. A Newton–Raphson inversion algorithm is used for image reconstruction of tissue optical absorption and transport scattering coefficients from frequency-domain measurements of modulated phase shift and light intensity. A variant of Tikhonov regularization is examined in which radial variation is allowed in the value of the regularization parameter. This method minimizes high-frequency noise in the reconstructed image near the source–detector locations and can produce constant image resolution and contrast across the image field.
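One regularised update of the kind described above can be sketched with a radially varying Tikhonov term (the Jacobian, residual, and the quadratic radial profile of the regularisation parameter are toy stand-ins, not the paper's values):

```python
import numpy as np

# One damped Newton-Raphson-style update with a spatially varying Tikhonov
# parameter: lam is larger near the boundary (large radius), damping the
# high-frequency noise that concentrates near the source-detector locations.

rng = np.random.default_rng(5)
n = 100                                  # pixels along a line through the field
J = rng.standard_normal((40, n))         # Jacobian of the forward model (assumed)
residual = rng.standard_normal(40)       # measured minus predicted data

r = np.abs(np.linspace(-1.0, 1.0, n))    # normalised radius of each pixel
lam = 0.01 + 0.5 * r ** 2                # regularisation grows toward the boundary
dx = np.linalg.solve(J.T @ J + np.diag(lam), J.T @ residual)
```

Any positive Tikhonov term guarantees the update reduces the data residual relative to a zero step; making it radial trades resolution near the detectors for noise suppression.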

Journal ArticleDOI
TL;DR: A three-dimensional computed microtomography (microCT) system using synchrotron radiation, developed at ESRF, allows high-resolution, high signal-to-noise ratio imaging; first results on human trabecular bone samples are presented.
Abstract: X-ray computed microtomography is particularly well suited for studying trabecular bone architecture, which requires three-dimensional (3-D) images with high spatial resolution. For this purpose, we describe a three-dimensional computed microtomography (μCT) system using synchrotron radiation, developed at ESRF. Since synchrotron radiation provides a monochromatic and high photon flux x-ray beam, it allows high-resolution imaging with a high signal-to-noise ratio. The principle of the system is based on truly three-dimensional parallel tomographic acquisition. It uses a two-dimensional (2-D) CCD-based detector to record 2-D radiographs of the transmitted beam through the sample under different angles of view. The 3-D tomographic reconstruction, performed by an exact 3-D filtered backprojection algorithm, yields 3-D images with cubic voxels. The spatial resolution of the detector was experimentally measured. For the application to bone investigation, the voxel size was set to 6.65 μm, and the experimental spatial resolution was found to be 11 μm. The reconstructed linear attenuation coefficient was calibrated from hydroxyapatite phantoms. Image processing tools are being developed to extract structural parameters quantifying trabecular bone architecture from the 3-D μCT images. First results on human trabecular bone samples are presented.

Journal ArticleDOI
TL;DR: Numerical studies suggest that intraventricular hemorrhages can be detected using the GIIR technique, even in the presence of a heterogeneous background.
Abstract: Currently available tomographic image reconstruction schemes for optical tomography (OT) are mostly based on the limiting assumptions of small perturbations and a priori knowledge of the optical properties of a reference medium. Furthermore, these algorithms usually require the inversion of large, full, ill-conditioned Jacobian matrixes. In this work a gradient-based iterative image reconstruction (GIIR) method is presented that promises to overcome current limitations. The code consists of three major parts: (1) A finite-difference, time-resolved, diffusion forward model is used to predict detector readings based on the spatial distribution of optical properties; (2) An objective function that describes the difference between predicted and measured data; (3) An updating method that uses the gradient of the objective function in a line minimization scheme to provide subsequent guesses of the spatial distribution of the optical properties for the forward model. The reconstruction of these properties is completed, once a minimum of this objective function is found. After a presentation of the mathematical background, two- and three-dimensional reconstruction of simple heterogeneous media as well as the clinically relevant example of ventricular bleeding in the brain are discussed. Numerical studies suggest that intraventricular hemorrhages can be detected using the GIIR technique, even in the presence of a heterogeneous background.
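The three GIIR ingredients can be sketched with a linear toy forward model standing in for the finite-difference diffusion model (all names and sizes here are illustrative): (1) a forward prediction, (2) an objective, and (3) a gradient update with a line minimisation.

```python
import numpy as np

# Gradient-based iterative reconstruction skeleton: forward model, objective,
# and gradient descent with a backtracking (Armijo) line search standing in
# for the paper's line minimisation.

rng = np.random.default_rng(6)
A = rng.standard_normal((50, 30))        # toy forward model
mu_true = rng.random(30)                 # "optical properties"
y = A @ mu_true                          # "detector readings"

forward = lambda mu: A @ mu
objective = lambda mu: 0.5 * np.sum((forward(mu) - y) ** 2)

mu = np.zeros(30)
for _ in range(100):
    grad = A.T @ (forward(mu) - y)       # gradient of the objective
    t = 1.0
    while objective(mu - t * grad) > objective(mu) - 0.5 * t * grad @ grad:
        t *= 0.5                         # backtracking line minimisation
    mu = mu - t * grad
```

The loop stops in practice when the objective falls below a defined error; here a fixed iteration count keeps the sketch short.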

Journal ArticleDOI
TL;DR: New preconditioners that approximate more accurately the Hessian matrices of shift-variant imaging problems are described and lead to significantly faster convergence rates for the unconstrained conjugate-gradient (CG) iteration.
Abstract: Gradient-based iterative methods often converge slowly for tomographic image reconstruction and image restoration problems, but can be accelerated by suitable preconditioners. Diagonal preconditioners offer some improvement in convergence rate, but do not incorporate the structure of the Hessian matrices in imaging problems. Circulant preconditioners can provide remarkable acceleration for inverse problems that are approximately shift-invariant, i.e., for those with approximately block-Toeplitz or block-circulant Hessians. However, in applications with nonuniform noise variance, such as arises from Poisson statistics in emission tomography and in quantum-limited optical imaging, the Hessian of the weighted least-squares objective function is quite shift-variant, and circulant preconditioners perform poorly. Additional shift-variance is caused by edge-preserving regularization methods based on nonquadratic penalty functions. This paper describes new preconditioners that approximate more accurately the Hessian matrices of shift-variant imaging problems. Compared to diagonal or circulant preconditioning, the new preconditioners lead to significantly faster convergence rates for the unconstrained conjugate-gradient (CG) iteration. We also propose a new efficient method for the line-search step required by CG methods. Applications to positron emission tomography (PET) illustrate the method.
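A minimal preconditioned conjugate-gradient iteration with a diagonal preconditioner (a sketch: the paper's contribution is better approximations to the shift-variant Hessian, which plug into this same loop; the Hessian below is a toy weighted-least-squares example with nonuniform weights).

```python
import numpy as np

# Preconditioned CG for H x = b with a diagonal preconditioner.
# H = A' W A is shift-variant because of the nonuniform weighting W,
# the situation in which circulant preconditioners perform poorly.

rng = np.random.default_rng(7)
A = rng.standard_normal((80, 40))
wts = 0.5 + rng.random(80)               # nonuniform noise weights
H = A.T @ (wts[:, None] * A)             # weighted least-squares Hessian
b = rng.standard_normal(40)

M_inv = 1.0 / np.diag(H)                 # diagonal preconditioner
x = np.zeros(40)
r = b - H @ x
z = M_inv * r
p = z.copy()
for _ in range(100):
    if np.linalg.norm(r) < 1e-10 * np.linalg.norm(b):
        break                            # converged
    Hp = H @ p
    alpha = (r @ z) / (p @ Hp)
    x = x + alpha * p
    r_new = r - alpha * Hp
    z_new = M_inv * r_new
    beta = (r_new @ z_new) / (r @ z)
    p = z_new + beta * p
    r, z = r_new, z_new
```

Swapping `M_inv` for a better Hessian approximation changes only the two preconditioning lines, which is what makes preconditioner design attractive for these problems.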

Journal ArticleDOI
Volker Rasche, Roland Proksa, Ralph Sinkus, Peter Börnert, Holger Eggers
TL;DR: The authors introduce the application of the convolution interpolation for resampling of data from one arbitrary grid onto another and show that the proposed approach to derive the sampling density function is suitable even for arbitrary sampling patterns.
Abstract: For certain medical applications resampling of data is required. In magnetic resonance tomography (MRT) or computed tomography (CT), e.g., data may be sampled on nonrectilinear grids in the Fourier domain. For the image reconstruction a convolution-interpolation algorithm, often called gridding, can be applied for resampling of the data onto a rectilinear grid. Resampling of data from a rectilinear onto a nonrectilinear grid is needed, e.g., if projections of a given rectilinear data set are to be obtained. In this paper the authors introduce the application of the convolution interpolation for resampling of data from one arbitrary grid onto another. The basic algorithm can be split into two steps. First, the data are resampled from the arbitrary input grid onto a rectilinear grid and second, the rectilinear data are resampled onto the arbitrary output grid. Furthermore, the authors introduce a new technique to derive the sampling density function needed for the first step of their algorithm. For fast, sampling-pattern-independent determination of the sampling density function the Voronoi diagram of the sample distribution is calculated. The volume of the Voronoi cell around each sample is used as a measure for the sampling density. It is shown that the introduced resampling technique allows fast resampling of data between arbitrary grids. Furthermore, it is shown that the suggested approach to derive the sampling density function is suitable even for arbitrary sampling patterns. Examples are given in which the proposed technique has been applied for the reconstruction of data acquired along spiral, radial, and arbitrary trajectories and for the fast calculation of projections of a given rectilinearly sampled image.
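The Voronoi weighting idea is easiest to see in one dimension, where each sample's Voronoi cell is the interval of points closer to it than to any neighbour, i.e. half the gap on each side (a toy illustration; in 2D/3D the same idea uses cell areas or volumes):

```python
import numpy as np

# 1D Voronoi density weights: the weight of each sample is the length of
# its Voronoi cell, with the two end cells clipped to the sampled interval.

rng = np.random.default_rng(8)
k = np.sort(rng.random(50))                  # nonuniform samples on [0, 1]
mid = 0.5 * (k[1:] + k[:-1])                 # Voronoi cell boundaries
edges = np.concatenate(([0.0], mid, [1.0]))  # clip end cells to the interval
w = np.diff(edges)                           # cell length = density weight
```

Sparsely sampled regions get large cells and hence large weights, compensating for their low sampling density independently of any trajectory description.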

Journal ArticleDOI
24 Oct 1999
TL;DR: In this paper, a maximum a posteriori algorithm for reduction of metal streak artifacts in X-ray computed tomography is presented, which uses a Markov random field smoothness prior and applies increased sampling in the reconstructed image.
Abstract: A maximum a posteriori algorithm for reduction of metal streak artifacts in X-ray computed tomography is presented. The algorithm uses a Markov random field smoothness prior and applies increased sampling in the reconstructed image. Good results are obtained for simulations and phantom measurements: streak artifacts are reduced while small, line-shaped details are preserved.

Journal ArticleDOI
TL;DR: A systematic method is described for obtaining a surface representation of the geometric central layer of the human cerebral cortex using fuzzy segmentation, an isosurface algorithm, and a deformable surface model, which reconstructs the entire cortex with the correct topology.
Abstract: Reconstructing the geometry of the human cerebral cortex from MR images is an important step in both brain mapping and surgical path planning applications. Difficulties with imaging noise, partial volume averaging, image intensity inhomogeneities, convoluted cortical structures, and the requirement to preserve anatomical topology make the development of accurate automated algorithms particularly challenging. Here the authors address each of these problems and describe a systematic method for obtaining a surface representation of the geometric central layer of the human cerebral cortex. Using fuzzy segmentation, an isosurface algorithm, and a deformable surface model, the method reconstructs the entire cortex with the correct topology, including deep convoluted sulci and gyri. The method is largely automated and its results are robust to imaging noise, partial volume averaging, and image intensity inhomogeneities. The performance of this method is demonstrated, both qualitatively and quantitatively, and the results of its application to six subjects and one simulated MR brain volume are presented.

Proceedings ArticleDOI
01 Jun 1999
TL;DR: The novelty of the approach lies in the use of inter-image homographies to validate and best estimate the plane, and in the minimal initialization requirements: only a single 3D line with a textured neighbourhood is required to generate a plane hypothesis.
Abstract: A new method is described for automatically reconstructing 3D planar faces from multiple images of a scene. The novelty of the approach lies in the use of inter-image homographies to validate and best estimate the plane, and in the minimal initialization requirements: only a single 3D line with a textured neighbourhood is required to generate a plane hypothesis. The planar facets enable line grouping and also the construction of parts of the wireframe which were missed due to the inevitable shortcomings of feature detection and matching. The method allows a piecewise planar model of a scene to be built completely automatically, with no user intervention at any stage, given only the images and camera projection matrices as input. The robustness and reliability of the method are illustrated on several examples, from both aerial and interior views.

Journal ArticleDOI
TL;DR: A wavelet domain approach is proposed, which provides a valuable tool not only for DEM combination (improving accuracy) but also for data evaluation and selection, since the phase error power is estimated for each interferogram.
Abstract: Multibaseline synthetic aperture radar (SAR) interferometry can be exploited successfully for high-quality digital elevation model (DEM) reconstruction, provided that both noise and atmospheric effects are taken into account. A weighted combination of many uncorrelated topographic profiles strongly reduces the impact of phase artifacts on the final DEM. The key issue is weights selection. In the present article a wavelet domain approach is proposed. Taking advantage of the particular frequency trend of the atmospheric distortion, it is possible to estimate, directly from the data, noise and atmospheric distortion power for each interferogram. The available DEMs are then combined by means of a weighted average, carried out in the wavelet domain. This new approach provides a valuable tool not only for DEM combination (improving accuracy) but also for data evaluation and selection, since the phase error power is estimated for each interferogram. Results obtained using simulated and real data (ERS-1/2 TANDEM data of a test area around the Etna volcano, Sicily) are presented.
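The combination step reduces to an inverse-variance weighted average once a per-interferogram error power has been estimated (here the error powers are simply assumed, and the average is applied directly rather than on wavelet subbands as in the paper):

```python
import numpy as np

# Inverse-variance weighted DEM combination (toy 1D profiles). In the paper
# the same weighted average is carried out per wavelet subband so the weights
# can follow the frequency-dependent atmospheric distortion power.

rng = np.random.default_rng(9)
z_true = np.cumsum(rng.standard_normal(256))   # toy topographic profile
sigma2 = np.array([0.5, 2.0, 8.0])             # estimated error powers (assumed)
dems = [z_true + np.sqrt(s) * rng.standard_normal(256) for s in sigma2]

w = (1.0 / sigma2) / np.sum(1.0 / sigma2)      # inverse-variance weights
combined = sum(wi * d for wi, d in zip(w, dems))
```

With uncorrelated errors, the inverse-variance weights minimise the combined error power, so the fused DEM is more accurate than the individual profiles on average.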

Journal ArticleDOI
TL;DR: An iterative image reconstruction scheme for optical tomography that is based on the equation of radiative transfer that accurately describes the photon propagation in turbid media without any limiting assumptions regarding the optical properties is reported on.
Abstract: We report on the development of an iterative image reconstruction scheme for optical tomography that is based on the equation of radiative transfer. Unlike the commonly applied diffusion approximation, the equation of radiative transfer accurately describes the photon propagation in turbid media without any limiting assumptions regarding the optical properties. The reconstruction scheme consists of three major parts: (1) a forward model that predicts the detector readings based on solutions of the time-independent radiative transfer equation, (2) an objective function that provides a measure of the differences between the detected and the predicted data, and (3) an updating scheme that uses the gradient of the objective function to perform a line minimization to get new guesses of the optical properties. The gradient is obtained by employing an adjoint differentiation scheme, which makes use of the structure of the finite-difference discrete-ordinate formulation of the transport forward model. Based on the new guess of the optical properties a new forward calculation is performed to get new detector predictions. The reconstruction process is completed when the minimum of the objective function is found within a defined error. To illustrate the performance of the code we present initial reconstruction results based on simulated data.
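The three-part loop described above (forward model, objective function, gradient-driven line minimization) can be sketched with a toy linear forward model standing in for the radiative transfer solver. The matrix `F`, the true parameter vector, and the analytic gradient are illustrative assumptions; the paper instead solves the time-independent transport equation and obtains the gradient by adjoint differentiation.

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.normal(size=(8, 4))           # (1) toy linear forward model: parameters -> readings
mu_true = np.array([0.1, 0.3, 0.2, 0.4])
detected = F @ mu_true                # simulated detector readings

def objective(mu):
    # (2) squared difference between detected and predicted data
    r = F @ mu - detected
    return r @ r

def gradient(mu):
    # analytic gradient of the quadratic objective; the paper obtains this
    # by adjoint differentiation of the discrete-ordinate transport model
    return 2.0 * F.T @ (F @ mu - detected)

mu = np.zeros(4)                      # initial guess of the optical properties
for _ in range(2000):
    if objective(mu) < 1e-14:         # reconstruction complete within a defined error
        break
    g = gradient(mu)
    d = -g                            # (3) line minimization along the negative gradient
    r = F @ mu - detected
    Fd = F @ d
    alpha = -(Fd @ r) / (Fd @ Fd)     # exact step length for a quadratic objective
    mu = mu + alpha * d
```

Each pass re-runs the forward calculation implicitly via `F @ mu`, mirroring the paper's structure of a fresh forward solve per update.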

Journal ArticleDOI
TL;DR: A novel model-based method for the estimation of the three-dimensional position and orientation (pose) of both the femoral and tibial knee prosthesis components during activity is presented and is well suited for kinematics analysis on TKR patients.
Abstract: A better knowledge of the kinematic behavior of total knee replacement (TKR) during activity remains a crucial issue for validating innovative prosthesis designs and different surgical strategies. Tools for more accurate measurement of the in vivo kinematics of knee prosthesis components are therefore fundamental to improving the clinical outcome of knee replacement. In the present study, a novel model-based method for the estimation of the three-dimensional (3-D) position and orientation (pose) of both the femoral and tibial knee prosthesis components during activity is presented. Knowledge of the 3-D geometry of the components and a single-plane projection view in a fluoroscopic image are sufficient to reconstruct the absolute and relative pose of the components in space. The technique is based on the best alignment of the component designs with the corresponding projection on the image plane. The image generation process is modeled, and an iterative procedure localizes the spatial pose of the object by minimizing the Euclidean distance of the projection rays from the object surface. Computer simulation and static/dynamic in vitro tests using real knee prostheses show that the accuracy with which the relative orientation and position of the components can be estimated is better than 1.5° and 1.5 mm, respectively. In vivo tests demonstrate that the method is well suited for kinematic analysis of TKR patients and that good-quality images can be obtained with careful positioning of the fluoroscope and an appropriate dosage. With respect to previously adopted template-matching techniques, the present method avoids the need for complete segmentation of the components in the projected image and also features the simultaneous evaluation of all six degrees of freedom (DOF) of the object. The expected small difference between successive poses in in vivo sequences strongly reduces the frequency of false poses as well as operator and computation time.

Journal ArticleDOI
TL;DR: Nuclear medicine imaging techniques appear to be potentially valuable tools during radiotherapy treatment planning for patients with lung cancer, and the utilization of accurate nuclear medicine image reconstruction techniques and TCT may improve the treatment planning process.

Journal ArticleDOI
TL;DR: This paper uses a segmented brain model obtained from a magnetic resonance image as a test case to compare the performance of the two-stage reconstruction and the direct reconstruction from a flat prior, and shows that the former achieves superior results in the recovery of localized absorption and scattering hot spots embedded in the background tissue.
Abstract: In this paper we investigate the application of anatomical prior information to image reconstruction in optical tomography. We propose a two-stage reconstruction scheme. The first stage is a reconstruction into a low-dimensional region basis, obtained by segmentation of an image obtained by an independent imaging modality, into areas of distinct tissue types. The reconstruction into this basis recovers global averages of the optical tissue parameters of each region. The recovered distribution of region values provides the starting point for the second stage of the reconstruction into the spatially resolved final image basis. This second step recovers localized perturbations within the regions. The benefit of this method is the improved stability and faster convergence of the imaging process compared with a direct reconstruction into a spatially resolved basis. This is particularly important for the simultaneous reconstruction of absorption and scattering images, where ambiguities between the two parameters and the resulting problems of crosstalk require a good initial parameter distribution to ensure convergence of the reconstruction. We use a segmented brain model obtained from a magnetic resonance image as a test case to compare the performance of the two-stage reconstruction and the direct reconstruction from a flat prior, and show that the former achieves superior results in the recovery of localized absorption and scattering hot spots embedded in the background tissue.
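The hand-off between the two stages can be illustrated with a hypothetical 1-D segmentation; the labels and region values below are invented for the sketch (in the paper they would come from the MR-derived segmentation and the stage-one reconstruction, respectively).

```python
import numpy as np

# Hypothetical segmentation of a 1-D "image" into three tissue regions;
# in the paper, labels come from segmenting a magnetic resonance image.
labels = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2])

# Stage 1: reconstruction into the low-dimensional region basis recovers
# one global average optical parameter per region (values illustrative).
region_values = np.array([0.010, 0.025, 0.008])

# Expand the region values onto the spatially resolved voxel basis; this
# distribution is the starting point for the stage-2 reconstruction,
# which then recovers localized perturbations within each region.
initial_image = region_values[labels]
```

Starting stage 2 from this physiologically plausible prior, rather than a flat one, is what gives the reported stability and convergence benefit, particularly for disambiguating absorption from scattering.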

Journal ArticleDOI
TL;DR: The TV norm minimization constraint is extended to the field of SPECT image reconstruction with a Poisson noise model and the proposed iterative Bayesian reconstruction algorithm has the capacity to smooth noise and maintain sharp edges without introducing over/under shoots and ripples around the edges.
Abstract: An iterative Bayesian reconstruction algorithm based on a total variation (TV) norm constraint is proposed. The motivation for using TV regularization is that it is extremely effective at recovering the edges of images. This paper extends the TV norm minimization constraint to the field of SPECT image reconstruction with a Poisson noise model. The regularization norm is included in the OSL-EM (one-step-late expectation maximization) algorithm. Unlike many other edge-preserving regularization techniques, the TV-based method depends on only one parameter. Reconstructions of computer simulations and patient data show that the proposed algorithm has the capacity to smooth noise and maintain sharp edges without introducing over- and undershoots and ripples around the edges.
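A minimal sketch of the OSL-EM update with a smoothed TV prior, on a hypothetical 1-D problem: the system matrix `A`, the data `y`, and the hyperparameters `beta` and `eps` are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(12, 6))            # system matrix (projector)
x_true = np.array([1.0, 1.0, 5.0, 5.0, 1.0, 1.0])  # piecewise-constant activity
y = A @ x_true                                     # noiseless projection data

def tv_gradient(x, eps=1e-3):
    """Gradient of the smoothed 1-D TV norm sum_j sqrt((x[j+1]-x[j])^2 + eps)."""
    d = np.diff(x)
    s = d / np.sqrt(d * d + eps)
    g = np.zeros_like(x)
    g[:-1] -= s
    g[1:] += s
    return g

beta = 0.01                                        # prior weight (illustrative)
x = np.ones(6)                                     # flat, positive initial guess
for _ in range(2000):
    # "one step late": the prior gradient is evaluated at the current estimate,
    # so the EM update stays a simple multiplicative correction
    denom = A.sum(axis=0) + beta * tv_gradient(x)
    x = x / denom * (A.T @ (y / (A @ x)))
```

Because the prior enters only through the denominator at the previous estimate, positivity is preserved automatically, which is the practical appeal of the one-step-late formulation.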

Journal ArticleDOI
HyunWook Park1, Yung-Lyul Lee1
TL;DR: According to the comparison study of PSNR and computation complexity analysis, the proposed algorithm shows better performance than the VM postprocessing algorithm, whereas the subjective image qualities of both algorithms are similar.
Abstract: The reconstructed images from highly compressed MPEG data have noticeable degradations, such as blocking artifacts near block boundaries, corner outliers at the crosspoints of blocks, and ringing noise near image edges, because MPEG quantizes the transform coefficients of 8×8 pixel blocks. A postprocessing algorithm is proposed to reduce quantization effects, such as blocking artifacts, corner outliers, and ringing noise, in MPEG-decompressed images. The proposed postprocessing algorithm reduces the quantization effects adaptively by using both spatial frequency and temporal information extracted from the compressed data. The blocking artifacts are reduced by one-dimensional (1-D) horizontal and vertical low-pass filtering (LPF), and the ringing noise is reduced by two-dimensional (2-D) signal-adaptive filtering (SAF). A comparison study of the peak signal-to-noise ratio (PSNR) and a computation complexity analysis between the proposed algorithm and the MPEG-4 VM (verification model) postprocessing algorithm are performed by computer simulation with several image sequences. According to the PSNR comparison and the computation complexity analysis, the proposed algorithm shows better performance than the VM postprocessing algorithm, whereas the subjective image qualities of both algorithms are similar.
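The 1-D low-pass filtering across block boundaries might look as follows for the horizontal direction; the [1 2 1]/4 kernel and two-pixel support are illustrative assumptions, not the paper's exact (adaptive) filter, and the vertical pass would be the transpose of the same operation.

```python
import numpy as np

def deblock_rows(img, block=8):
    """Smooth the two pixels straddling each vertical 8x8 block boundary
    with a [1 2 1]/4 kernel (hypothetical, non-adaptive sketch)."""
    out = img.astype(float).copy()
    h, w = img.shape
    for x in range(block, w, block):  # every vertical block boundary
        out[:, x - 1] = (img[:, x - 2] + 2 * img[:, x - 1] + img[:, x]) / 4
        out[:, x] = (img[:, x - 1] + 2 * img[:, x] + img[:, x + 1]) / 4
    return out

# two 8-pixel-wide blocks with a sharp discontinuity at their boundary
img = np.full((2, 16), 100.0)
img[:, 8:] = 120.0
smoothed = deblock_rows(img)
```

Pixels away from the boundary pass through untouched, so only the artificial step introduced by independent block quantization is softened.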

Journal ArticleDOI
TL;DR: Preliminary simulations using a mesh of the human brain confirm that optimal images are produced from circularly symmetric source-detector distributions, but that quantitatively accurate images can be reconstructed even with sub-surface imaging, although spatial resolution is modest.
Abstract: Images produced in six different geometries with diffuse optical tomography simulations of tissue have been compared using a finite element-based algorithm with iterative refinement provided by the Newton-Raphson approach. The source-detector arrangements studied include (i) fan-beam tomography, (ii) full reflectance and transmittance tomography, and (iii) sub-surface imaging, where each of the three was examined in a circular and a flat slab geometry. The algorithm can provide quantitatively accurate results for all of the tomographic geometries investigated under certain circumstances. For example, quantitatively accurate results occur with sub-surface imaging only when the object to be imaged is fully contained within the diffuse projections. In general, the diffuse projections must sample all regions around the target in order for the algorithm to recover quantitatively accurate results. Not only is it important to sample the whole space, but maximal angular sampling is required for optimal image reconstruction. Geometries which do not maximize the possible sampling angles produce more noise artifacts in the reconstructed images. Preliminary simulations using a mesh of the human brain confirm that optimal images are produced from circularly symmetric source-detector distributions, but that quantitatively accurate images can be reconstructed even with sub-surface imaging, although spatial resolution is modest.