Showing papers on "Iterative reconstruction published in 1997"


Journal ArticleDOI
TL;DR: This paper proposes a deterministic strategy, based on alternate minimizations on the image and the auxiliary variable, leading to an original reconstruction algorithm, called ARTUR, which can be applied in a large number of image processing applications.
Abstract: Many image processing problems are ill-posed and must be regularized. Usually, a roughness penalty is imposed on the solution. The difficulty is to avoid the smoothing of edges, which are very important attributes of the image. In this paper, we first give conditions for the design of such an edge-preserving regularization. Under these conditions, we show that it is possible to introduce an auxiliary variable whose role is twofold. First, it marks the discontinuities and ensures their preservation from smoothing. Second, it makes the criterion half-quadratic. The optimization is then easier. We propose a deterministic strategy, based on alternate minimizations on the image and the auxiliary variable. This leads to the definition of an original reconstruction algorithm, called ARTUR. Some theoretical properties of ARTUR are discussed. Experimental results illustrate the behavior of the algorithm. These results are shown in the field of 2D single photon emission tomography, but this method can be applied in a large number of applications in image processing.
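
The half-quadratic mechanics described above admit a compact sketch. The following minimal 1-D denoising example assumes the edge-preserving potential phi(t) = sqrt(1 + t^2), for which the auxiliary-variable update has a closed form; the function name and discretization are illustrative, not the authors' code.

```python
import numpy as np

def artur_denoise(y, lam=1.0, n_iter=50):
    """Half-quadratic alternate minimization (ARTUR-style) for a 1-D signal.

    Criterion: J(f) = ||y - f||^2 + lam * sum_k phi(f[k+1] - f[k]),
    with the edge-preserving potential phi(t) = sqrt(1 + t^2).
    The auxiliary variable b marks discontinuities: it tends to zero
    across large gradients, so edges escape smoothing.
    """
    n = len(y)
    f = y.copy()
    D = np.diff(np.eye(n), axis=0)        # first-difference operator, (n-1, n)
    for _ in range(n_iter):
        t = D @ f
        # b-step: closed form b = phi'(t) / (2 t) for phi(t) = sqrt(1 + t^2)
        b = 1.0 / (2.0 * np.sqrt(1.0 + t**2))
        # f-step: minimize ||y - f||^2 + lam * sum_k b_k (Df)_k^2  (quadratic)
        A = np.eye(n) + lam * (D.T @ (b[:, None] * D))
        f = np.linalg.solve(A, y)
    return f
```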

1,360 citations


Journal ArticleDOI
TL;DR: Positron-emission tomography (PET) has inherent advantages that avoid the shortcomings of other nuclear medicine imaging methods; its image reconstruction methods with origins in signal and image processing are discussed.
Abstract: We review positron-emission tomography (PET), which has inherent advantages that avoid the shortcomings of other nuclear medicine imaging methods. PET image reconstruction methods with origins in signal and image processing are discussed, including the potential problems of these methods. A summary of statistical image reconstruction methods, which can yield improved image quality, is also presented.

1,257 citations


Journal ArticleDOI
TL;DR: This paper presents two new rebinning algorithms for the reconstruction of three-dimensional (3-D) positron emission tomography (PET) data; the second, the Fourier rebinning algorithm (FORE), is approximate but allows an efficient implementation based on taking 2-D Fourier transforms of the data.
Abstract: This paper presents two new rebinning algorithms for the reconstruction of three-dimensional (3-D) positron emission tomography (PET) data. A rebinning algorithm is one that first sorts the 3-D data into an ordinary two-dimensional (2-D) data set containing one sinogram for each transaxial slice to be reconstructed; the 3-D image is then recovered by applying to each slice a 2-D reconstruction method such as filtered-backprojection. This approach allows a significant speedup of 3-D reconstruction, which is particularly useful for applications involving dynamic acquisitions or whole-body imaging. The first new algorithm is obtained by discretizing an exact analytical inversion formula. The second algorithm, called the Fourier rebinning algorithm (FORE), is approximate but allows an efficient implementation based on taking 2-D Fourier transforms of the data. This second algorithm was implemented and applied to data acquired with the new generation of PET systems and also to simulated data for a scanner with an 18° axial aperture. The reconstructed images were compared to those obtained with the 3-D reprojection algorithm (3DRP) which is the standard "exact" 3-D filtered-backprojection method. Results demonstrate that FORE provides a reliable alternative to 3DRP, while at the same time achieving an order of magnitude reduction in processing time.
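
The frequency-distance relation behind FORE can be sketched in a few lines. This is a heavily simplified illustration, not the published algorithm: the low-frequency region (handled by single-slice rebinning in FORE), the per-slice weighting and normalization, and the exact frequency scalings are glossed over, and every interface here is an assumption.

```python
import numpy as np

def fore_rebin_one(oblique, z_mid, delta, n_slices, dz, ds):
    """Simplified sketch of the FORE principle for one oblique sinogram
    p(s, phi) with axial midpoint z_mid and axial slope delta.

    The 2-D FFT component at radial frequency omega and angular harmonic k
    receives its dominant contribution from activity at transaxial distance
    t ~ -k/omega (frequency-distance relation), so it is reassigned to the
    direct sinogram of the slice crossed by the oblique line at distance t.
    """
    n_s, n_phi = oblique.shape
    P = np.fft.fft2(oblique)                          # (s, phi) -> (omega, k)
    omega = 2 * np.pi * np.fft.fftfreq(n_s, d=ds)     # rad / length
    k = np.fft.fftfreq(n_phi) * n_phi                 # integer harmonics
    stack = np.zeros((n_slices, n_s, n_phi), dtype=complex)
    for i in range(n_s):
        for j in range(n_phi):
            if omega[i] == 0.0:
                continue                              # handled by SSRB instead
            z = z_mid - delta * k[j] / omega[i]
            iz = int(round(z / dz))
            if 0 <= iz < n_slices:
                stack[iz, i, j] += P[i, j]
    return np.fft.ifft2(stack, axes=(1, 2)).real      # one 2-D sinogram per slice
```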

760 citations


Journal ArticleDOI
TL;DR: A compact parametrized model of facial appearance which takes into account all sources of variability and can be used for tasks such as image coding, person identification, 3D pose recovery, gender recognition, and expression recognition is described.
Abstract: Face images are difficult to interpret because they are highly variable. Sources of variability include individual appearance, 3D pose, facial expression, and lighting. We describe a compact parametrized model of facial appearance which takes into account all these sources of variability. The model represents both shape and gray-level appearance, and is created by performing a statistical analysis over a training set of face images. A robust multiresolution search algorithm is used to fit the model to faces in new images. This allows the main facial features to be located, and a set of shape, and gray-level appearance parameters to be recovered. A good approximation to a given face can be reconstructed using less than 100 of these parameters. This representation can be used for tasks such as image coding, person identification, 3D pose recovery, gender recognition, and expression recognition. Experimental results are presented for a database of 690 face images obtained under widely varying conditions of 3D pose, lighting, and facial expression. The system performs well on all the tasks listed above.
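
The statistical analysis behind the model is, at its core, principal component analysis over concatenated shape and gray-level vectors. A minimal sketch, assuming the training faces have already been aligned and sampled into fixed-length vectors (all names hypothetical):

```python
import numpy as np

def build_model(X, var_keep=0.98):
    """Minimal PCA sketch of a compact appearance model. Each row of X is
    assumed to concatenate shape coordinates and gray-level samples for one
    pre-aligned, normalized training face."""
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    var = s**2 / (len(X) - 1)
    t = np.searchsorted(np.cumsum(var) / var.sum(), var_keep) + 1
    return mean, Vt[:t]                  # mean face and first t modes of variation

def encode(x, mean, modes):
    return modes @ (x - mean)            # <100 parameters suffice for a good fit

def decode(b, mean, modes):
    return mean + modes.T @ b            # reconstruct the face from parameters
```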

706 citations


Journal ArticleDOI
TL;DR: This paper considers models based on radiative transfer theory and its derivatives, which are either stochastic in nature (random walk, Monte Carlo, and Markov processes) or deterministic (partial differential equation models and their solutions).
Abstract: The desire for a diagnostic optical imaging modality has motivated the development of image reconstruction procedures involving solution of the inverse problem. This approach is based on the assumption that, given a set of measurements of transmitted light between pairs of points on the surface of an object, there exists a unique three-dimensional distribution of internal scatterers and absorbers which would yield that set. Thus imaging becomes a task of solving an inverse problem using an appropriate model of photon transport. In this paper we examine the models that have been developed for this task, and review current approaches to image reconstruction. Specifically, we consider models based on radiative transfer theory and its derivatives, which are either stochastic in nature (random walk, Monte Carlo, and Markov processes) or deterministic (partial differential equation models and their solutions). Image reconstruction algorithms are discussed which are based on either direct backprojection, perturbation methods, nonlinear optimization, or Jacobian calculation. Finally we discuss some of the fundamental problems that must be addressed before optical tomography can be considered to be an understood problem, and before its full potential can be realized.

546 citations


Proceedings ArticleDOI
17 Jun 1997
TL;DR: In this article, a novel scene reconstruction technique is presented, which avoids image correspondence problems by working in a discretized scene space whose voxels are traversed in a fixed visibility ordering.
Abstract: A novel scene reconstruction technique is presented, different from previous approaches in its ability to cope with large changes in visibility and its modeling of intrinsic scene color and texture information. The method avoids image correspondence problems by working in a discretized scene space whose voxels are traversed in a fixed visibility ordering. This strategy takes full account of occlusions and allows the input cameras to be far apart and widely distributed about the environment. The algorithm identifies a special set of invariant voxels which together form a spatial and photometric reconstruction of the scene, fully consistent with the input images. The approach is evaluated with images from both inward- and outward-facing cameras.
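
The voxel traversal and consistency test described above can be sketched schematically. Everything here (data layout, the project callback, the color-variance threshold) is an illustrative assumption rather than the authors' implementation:

```python
import numpy as np

def voxel_coloring(layers, images, project, threshold=15.0):
    """Sketch of reconstruction in a fixed visibility ordering.

    `layers` lists voxels in layers of increasing distance from the cameras,
    so occluders are always processed before the voxels they occlude.
    `project(v, i)` returns the pixel a voxel maps to in image i, or None if
    that pixel is off-image or already claimed by a nearer consistent voxel.
    A voxel is colored when its projected colors agree across the
    unoccluded images, i.e. it is photo-consistent with the inputs.
    """
    colored = []
    for layer in layers:                       # fixed visibility ordering
        for v in layer:
            samples = []
            for i, img in enumerate(images):
                px = project(v, i)             # None if occluded / off-image
                if px is not None:
                    samples.append(img[px])
            if len(samples) >= 2:
                samples = np.asarray(samples, dtype=float)
                if samples.std(axis=0).mean() < threshold:   # consistency test
                    colored.append((v, samples.mean(axis=0)))
                    # (a full implementation would mark v's image footprint
                    #  as claimed here, for voxels in later layers)
    return colored
```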

531 citations


Journal ArticleDOI
TL;DR: Experimental results with real video demonstrate that a significant increase in the image resolution can be achieved by taking the motion blurring into account, especially when there is large interframe motion.

Abstract: Printing from an NTSC source and conversion of NTSC source material to high-definition television (HDTV) format are some of the applications that motivate superresolution (SR) image and video reconstruction from low-resolution (LR) and possibly blurred sources. Existing methods for SR image reconstruction are limited by the assumptions that the input LR images are sampled progressively, and that the aperture time of the camera is zero, thus ignoring the motion blur occurring during the aperture time. Because of the observed adverse effects of these assumptions for many common video sources, this paper proposes (i) a complete model of video acquisition with an arbitrary input sampling lattice and a nonzero aperture time, and (ii) an algorithm based on this model using the theory of projections onto convex sets to reconstruct SR still images or video from an LR time sequence of images. Experimental results with real video are provided, which clearly demonstrate that a significant increase in the image resolution can be achieved by taking the motion blurring into account, especially when there is large interframe motion.
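
The POCS machinery referred to above cyclically projects the current high-resolution estimate onto data-consistency sets, one per low-resolution observation. A minimal sketch of such a projection, with a hypothetical dense observation matrix H standing in for the paper's acquisition model (motion, motion blur, downsampling):

```python
import numpy as np

def pocs_data_projection(x, y, H, delta):
    """Project the SR estimate x onto C = { x : |y_i - (H x)_i| <= delta }
    for one LR frame, one row at a time (sequential row projections).

    H is a hypothetical observation matrix combining motion, motion blur,
    and downsampling; y is the LR frame as a vector; delta bounds the
    residual allowed by the noise model.
    """
    x = x.copy()
    for i in range(H.shape[0]):
        h = H[i]
        r = y[i] - h @ x
        if r > delta:
            x += (r - delta) * h / (h @ h)
        elif r < -delta:
            x += (r + delta) * h / (h @ h)
    return x
```

Cycling such projections over all frames, interleaved with amplitude and support constraints, gives the iterative POCS reconstruction.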

519 citations


Journal ArticleDOI
TL;DR: An image enhancement method that reduces speckle noise and preserves edges is introduced that is based on a new nonlinear multiscale reconstruction scheme that is obtained by successively combining each coarser scale image with the corresponding modified interscale image.
Abstract: An image enhancement method that reduces speckle noise and preserves edges is introduced. The method is based on a new nonlinear multiscale reconstruction scheme that is obtained by successively combining each coarser scale image with the corresponding modified interscale image. Simulation results are included to demonstrate the performance of the proposed method.

356 citations


Journal ArticleDOI
TL;DR: An iterative reconstruction algorithm based on the Levenberg-Marquardt method is tested with synthetic data and two methods for choosing the regularization parameter, an empirical method and generalized cross validation method, are examined.
Abstract: This paper addresses the quantitative reconstruction of the dielectric and conductive property distributions of biological objects by means of active microwave imaging. An iterative reconstruction algorithm based on the Levenberg-Marquardt method is tested with synthetic data. The influence of the receiver geometry is investigated, and two methods for choosing the regularization parameter, an empirical method and the generalized cross-validation (GCV) method, are examined.
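
A generic sketch of the kind of Levenberg-Marquardt update such a reconstruction uses, with the forward model and its Jacobian assumed to be supplied as callables; the damping term here plays the role of the regularization whose selection (empirical or GCV) the paper studies:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, n_iter=20, lam=1e-2):
    """Generic LM iteration for a nonlinear least-squares fit of the
    property distribution x (interfaces hypothetical)."""
    x = x0.copy()
    for _ in range(n_iter):
        r = residual(x)                  # measured minus modeled field data
        J = jacobian(x)
        A = J.T @ J + lam * np.eye(len(x))
        dx = np.linalg.solve(A, J.T @ r)
        if np.linalg.norm(residual(x + dx)) < np.linalg.norm(r):
            x, lam = x + dx, lam * 0.5   # accept step, relax damping
        else:
            lam *= 2.0                   # reject step, increase damping
    return x
```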

322 citations


Journal ArticleDOI
TL;DR: Expressions are derived for the normalized root-mean-square error of an image relative to a reference image, allowing for arbitrary constant (piston) and linear (tilt) phase terms and the relation between the error metric and other quality measures is derived.
Abstract: Expressions are derived for the normalized root-mean-square error of an image relative to a reference image. Different versions of the error metric are invariant to different combinations of effects, including the image's (a) being multiplied by a real or complex-valued constant, (b) having a constant added to its phase, (c) being translated, or (d) being complex conjugated and rotated 180 degrees. Invariance to these effects is particularly important for the phase-retrieval problem. One can also estimate the parameters of those effects. Similarly, two wave fronts can be compared, allowing for arbitrary constant (piston) and linear (tilt) phase terms. One can also include a weighting function. The relation between the error metric and other quality measures is derived.
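
For the constant-multiplier invariance (effect (a)), minimizing over the complex constant gives a closed form. A short sketch; the translated and conjugate-rotated versions additionally maximize the same correlation over shifts and flips:

```python
import numpy as np

def nrmse(f, g):
    """Normalized RMS error of image f against reference g, invariant to
    multiplication of f by an arbitrary complex constant:

    E^2 = min_alpha ||g - alpha*f||^2 / ||g||^2
        = 1 - |<g, f>|^2 / (||f||^2 ||g||^2),   alpha_opt = <g, f> / ||f||^2.
    """
    f = np.asarray(f, dtype=complex).ravel()
    g = np.asarray(g, dtype=complex).ravel()
    inner = np.vdot(f, g)                 # sum conj(f) * g = <g, f>
    e2 = 1.0 - abs(inner)**2 / (np.vdot(f, f).real * np.vdot(g, g).real)
    return np.sqrt(max(e2, 0.0))
```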

270 citations


Proceedings ArticleDOI
17 Jun 1997
TL;DR: In this article, a complete, detailed classification of camera motion sequences that lead to inherent ambiguities in uncalibrated Euclidean reconstruction or self-calibration is studied.
Abstract: In this paper sequences of camera motions that lead to inherent ambiguities in uncalibrated Euclidean reconstruction or self-calibration are studied. Our main contribution is a complete, detailed classification of these critical motion sequences (CMS). The practically important classes are identified and their degrees of ambiguity are derived. We also discuss some practical issues, especially concerning the reduction of the ambiguity of a reconstruction.

Journal ArticleDOI
TL;DR: A real-time interactive MRI system capable of localizing coronary arteries and imaging arrhythmic hearts in real time is described; rapid localization in the abdomen is demonstrated with the spiral-ring acquisition, whereas peristaltic motion in the small bowel is well visualized using the circular echo-planar sequence.

Abstract: A real-time interactive MRI system capable of localizing coronary arteries and imaging arrhythmic hearts in real time is described. Non-2DFT acquisition strategies such as spiral-interleaf, spiral-ring, and circular echo-planar imaging provide short scan times on a conventional scanner. Real-time gridding reconstruction at 8-20 images/s is achieved by distributing the reconstruction on general-purpose UNIX workstations. An X-windows application provides interactive control. A six-interleaf spiral sequence is used for cardiac imaging and can acquire six images/s. A sliding window reconstruction achieves display rates of 16-20 images/s. This allows cardiac images to be acquired in real time, with minimal motion and flow artifacts, and without breath holding or cardiac gating. Abdominal images are acquired at over 2.5 images/s with spiral-ring or circular echo-planar sequences. Reconstruction rates are 8-10 images/s. Rapid localization in the abdomen is demonstrated with the spiral-ring acquisition, whereas peristaltic motion in the small bowel is well visualized using the circular echo-planar sequence.
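
The sliding-window idea is simple enough to sketch: an image is reformed from the most recent interleaves each time a new interleaf arrives, so the display rate is tied to the interleaf rate rather than the full-frame rate (all interfaces hypothetical):

```python
def sliding_window_recon(interleaves, grid_recon, n_window=6):
    """Sketch of sliding-window reconstruction for an n_window-interleaf
    spiral sequence. `interleaves` is the stream of acquired k-space
    interleaves; `grid_recon` is any gridding reconstruction that accepts
    a list of interleaves and returns an image."""
    images = []
    for t in range(len(interleaves)):
        window = interleaves[max(0, t - n_window + 1): t + 1]
        images.append(grid_recon(window))   # one display frame per interleaf
    return images
```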

Journal ArticleDOI
TL;DR: The new MRP reconstruction method was shown to produce high-quality quantitative emission images with only one parameter setting in addition to the number of iterations, and proved to be the most accurate of the tested methods.
Abstract: The aim of the present study was to investigate a new type of Bayesian one-step late reconstruction method which utilizes a median root prior (MRP). The method favours images which have locally monotonous radioactivity concentrations. The new reconstruction algorithm was applied to ideal simulated data, phantom data and some patient examinations with PET. The same projection data were reconstructed with filtered back-projection (FBP) and maximum likelihood-expectation maximization (ML-EM) methods for comparison. The MRP method provided good-quality images with a similar resolution to the FBP method with a ramp filter, and at the same time the noise properties were as good as with Hann-filtered FBP images. The typical artefacts seen in FBP reconstructed images outside of the object were completely removed, as was the grainy noise inside the object. Quantitatively, the resulting average regional radioactivity concentrations in a large region of interest in images produced by the MRP method corresponded to the FBP and ML-EM results but at the pixel by pixel level the MRP method proved to be the most accurate of the tested methods. In contrast to other iterative reconstruction methods, e.g. ML-EM, the MRP method was not sensitive to the number of iterations nor to the adjustment of reconstruction parameters. Only the Bayesian parameter beta had to be set. The proposed MRP method is much more simple to calculate than the methods described previously, both with regard to the parameter settings and in terms of general use. The new MRP reconstruction method was shown to produce high-quality quantitative emission images with only one parameter setting in addition to the number of iterations.
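
The one-step-late MRP update divides the ordinary ML-EM update by a penalty built from the local median. A minimal sketch with a dense system matrix for illustration (interfaces hypothetical); beta is the single Bayesian parameter the abstract mentions:

```python
import numpy as np
from scipy.ndimage import median_filter

def mrp_osl_update(lam, y, A, beta=0.3, size=3):
    """One iteration of ML-EM with the median root prior, one-step-late.

    lam: current image (2-D array); y: measured sinogram (vector);
    A: dense system matrix for illustration. The prior penalizes deviation
    from the local median M, favouring locally monotonic images: the EM
    update is divided by 1 + beta * (lam - M) / M.
    """
    shape = lam.shape
    lam_v = lam.ravel()
    proj = A @ lam_v
    em = lam_v * (A.T @ (y / np.maximum(proj, 1e-12))) \
         / np.maximum(A.sum(axis=0), 1e-12)
    M = median_filter(lam, size=size).ravel()
    penalty = 1.0 + beta * (lam_v - M) / np.maximum(M, 1e-12)
    return (em / penalty).reshape(shape)
```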

Journal ArticleDOI
TL;DR: An automated frequency-domain computed tomography scanner is described that is more quantitative than earlier systems used in diaphanography because of the combination of intensity-modulated signal detection and iterative image reconstruction.
Abstract: The instrument development and design of a prototype frequency-domain optical imaging device for breast cancer detection is described in detail. This device employs radio-frequency intensity modulated near-infrared light to image quantitatively both the scattering and absorption coefficients of tissue. The functioning components of the system include a laser diode and a photomultiplier tube, which are multiplexed automatically through 32 large core fiber optic bundles using high precision linear translation stages. Image reconstruction is based on a finite element solution of the diffusion equation. This tool for solving the forward problem of photon migration is coupled to an iterative optical property estimation algorithm, which uses a Levenberg-Marquardt routine with total variation minimization. The result of this development is an automated frequency-domain optical imager for computed tomography which produces quantitatively accurate images of the test phantoms used to date. This paper is a description and characterization of an automated frequency-domain computed tomography scanner, which is more quantitative than earlier systems used in diaphanography because of the combination of intensity modulated signal detection and iterative image reconstruction.

Journal ArticleDOI
TL;DR: The new grouped-coordinate ascent (GCA) algorithms in the class overcome several limitations associated with previous algorithms, and it is shown that the GCA algorithms converge faster than the SCA algorithm, even on conventional workstations.
Abstract: Presents a new class of algorithms for penalized-likelihood reconstruction of attenuation maps from low-count transmission scans. We derive the algorithms by applying to the transmission log-likelihood a version of the convexity technique developed by De Pierro for emission tomography. The new class includes the single-coordinate ascent (SCA) algorithm and Lange's convex algorithm for transmission tomography as special cases. The new grouped-coordinate ascent (GCA) algorithms in the class overcome several limitations associated with previous algorithms. (1) Fewer exponentiations are required than in the transmission maximum likelihood-expectation maximization (ML-EM) algorithm or in the SCA algorithm. (2) The algorithms intrinsically accommodate nonnegativity constraints, unlike many gradient-based methods. (3) The algorithms are easily parallelizable, unlike the SCA algorithm and perhaps line-search algorithms. We show that the GCA algorithms converge faster than the SCA algorithm, even on conventional workstations. An example from a low-count positron emission tomography (PET) transmission scan illustrates the method.

Proceedings ArticleDOI
17 Jun 1997
TL;DR: The special case of reconstruction from image sequences taken by cameras with skew equal to 0 and aspect ratio equal to 1 is treated, and it is shown that it is possible to reconstruct an unknown object from images taken by a camera with a Euclidean image plane up to similarity transformations, i.e., Euclidean transformations plus changes in the global scale.

Abstract: The special case of reconstruction from image sequences taken by cameras with skew equal to 0 and aspect ratio equal to 1 is treated. These types of cameras, here called cameras with Euclidean image planes, represent rigid projections where neither the principal point nor the focal length is known. It is shown that it is possible to reconstruct an unknown object from images taken by a camera with a Euclidean image plane up to similarity transformations, i.e., Euclidean transformations plus changes in the global scale. An algorithm, using bundle adjustment techniques, has been implemented. The performance of the algorithm is shown on simulated data.

Journal ArticleDOI
TL;DR: A new correction method, based on the existing concept of frequency-segmented correction but faster and theoretically more accurate, is introduced, yielding sharply focused images.
Abstract: Field inhomogeneities or susceptibility variations produce blurring in images acquired using non-2DFT k-space readout trajectories. This problem is more pronounced for sequences with long readout times such as spiral imaging. Theoretical and practical correction methods based on an acquired field map have been reported in the past. This paper introduces a new correction method based on the existing concept of frequency segmented correction but which is faster and theoretically more accurate. It consists of reconstructing the data at several frequencies to form a set of base images that are then added together with spatially varying linear coefficients derived from the field map. The new algorithm is applied to phantom and in vivo images acquired with projection reconstruction and spiral sequences, yielding sharply focused images.
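
The correction reconstructs base images at a handful of constant demodulation frequencies and blends them pixelwise with coefficients derived from the field map. A schematic sketch with hypothetical interfaces (`recon` stands for any gridding reconstruction; `fieldmap` is in rad/s per pixel):

```python
import numpy as np

def freq_segmented_correct(kdata, t, recon, fieldmap, n_seg=8):
    """Sketch of frequency-segmented off-resonance correction: demodulate
    the acquired samples `kdata` (with readout times `t`) at a few constant
    frequencies to form base images, then combine them pixelwise with
    linear interpolation coefficients from the field map."""
    freqs = np.linspace(fieldmap.min(), fieldmap.max(), n_seg)
    base = [recon(kdata * np.exp(-1j * f * t)) for f in freqs]
    out = np.zeros_like(base[0])
    for idx in np.ndindex(fieldmap.shape):
        f = fieldmap[idx]
        i = min(np.searchsorted(freqs, f), n_seg - 1)
        i0 = max(i - 1, 0)
        w = 0.0 if i == i0 else (f - freqs[i0]) / (freqs[i] - freqs[i0])
        out[idx] = (1 - w) * base[i0][idx] + w * base[i][idx]
    return out
```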

Journal ArticleDOI
TL;DR: Extension of the technique to handle in vivo data sets by allowing physiological criteria to be taken into account in selecting the images used for reconstruction is illustrated.

Abstract: A system is described that rapidly produces a regular 3-dimensional (3-D) data block suitable for processing by conventional image analysis and volume measurement software. The system uses electromagnetic spatial location of 2-dimensional (2-D) freehand-scanned ultrasound B-mode images, custom-built signal-conditioning hardware, UNIX-based computer processing and an efficient 3-D reconstruction algorithm. Utilisation of images from multiple angles of insonation, "compounding," reduces speckle contrast, improves structure coherence within the reconstructed grey-scale image and enhances the ability to detect structure boundaries and to segment and quantify features. Volume measurements using a series of water-filled latex and cylindrical foam rubber phantoms with volumes down to 0.7 mL show that a high degree of accuracy, precision and reproducibility can be obtained. Extension of the technique to handle in vivo data sets by allowing physiological criteria to be taken into account in selecting the images used for reconstruction is also illustrated.

Journal ArticleDOI
TL;DR: It is suggested, in particular, that both time-resolved and intensity-modulated systems can reconstruct variations in both optical absorption and scattering, but that unmodulated, non-time-resolved systems are prone to severe artefact.

Abstract: Optical tomography is a new medical imaging modality that is at the threshold of realization. A large amount of clinical work has shown the very real benefits that such a method could provide. At the same time a considerable effort has been put into theoretical studies of its probable success. At present there exist gaps between these two realms. In this paper we review some general approaches to inverse problems to set the context for optical tomography, defining both the terms forward problem and inverse problem. An essential requirement is to treat the problem in a nonlinear fashion, by using an iterative method. This in turn requires a convenient method of evaluating the forward problem, and its derivatives and variance. Photon transport models for obtaining analytical and numerical solutions are described, and the most commonly used ones are reviewed. The inverse problem is approached by classical gradient-based solution methods. In order to develop practical implementations of these methods, we discuss the important topic of photon measurement density functions, which represent the derivative of the forward problem. We show some results that represent the most complex and realistic simulations of optical tomography yet developed. We suggest, in particular, that both time-resolved and intensity-modulated systems can reconstruct variations in both optical absorption and scattering, but that unmodulated, non-time-resolved systems are prone to severe artefact. We believe that optical tomography reconstruction methods can now be reliably applied to a wide variety of real clinical data. The expected resolution of the method is poor, meaning that it is unlikely that the type of high-resolution images seen in computed tomography or magnetic resonance imaging can ever be obtained. Nevertheless we strongly expect the functional nature of these images to have a high degree of clinical significance.

Journal ArticleDOI
TL;DR: A global correction technique is presented which provides a table of correction coefficients for an image acquired at any arbitrary angle about the patient; corrections on 100 images obtained during rotation of the gantry through 200 degrees show that a fifth-order polynomial provides optimum image distortion reduction.

Abstract: X-ray image intensifiers (XRIIs) have many applications in diagnostic imaging including acquisition of near-real-time projection images of the intracranial and coronary vasculature. Recently, there has been some interest in using this projection data to generate three-dimensional (3-D) computed tomographic (CT) reconstructions. The XRII and x-ray tube are rotated around the object, acquiring sufficient data for the simultaneous reconstruction of many transverse slices. Three-dimensional reconstructions are compromised, however, if the projection data is geometrically distorted in any way. Previous studies have shown the distortion in XRIIs to be substantial and to be highly angular dependent. In this paper, we present a global correction technique which provides a table of correction coefficients for an image acquired at any arbitrary angle about the patient. The coefficients are generated using a linear least-squares fit between the detected and known locations of a grid of small steel beads which is attached to the XRII (27 cm nominal diameter). We have performed corrections on 100 images obtained during rotation of the gantry through 200 degrees and find that a fifth-order polynomial provides optimum image distortion reduction (mean residual distortion of 0.07 pixels); however, fourth-order polynomials provide sufficient distortion reduction for our application (mean residual displacement of 0.1 pixels). Using sixth-order polynomials does not provide a statistically significant reduction in image distortion. The spatial distribution of residual distortion did not demonstrate any particular pattern over the face of the XRII. Image angle and coefficient angle must be known to within +/- 2 degrees in order to keep the mean residual distortion below approximately 0.5 pixels.
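
The global correction reduces to a linear least-squares fit of a 2-D polynomial between detected and known bead positions. A minimal sketch of the fit and its application (array layouts assumed; in practice one coefficient table is stored per gantry angle):

```python
import numpy as np

def poly_terms(points, order):
    """All monomials u^i * v^j with i + j <= order, as design-matrix columns."""
    u, v = points[:, 0], points[:, 1]
    return np.column_stack([u**i * v**j
                            for i in range(order + 1)
                            for j in range(order + 1 - i)])

def fit_distortion(detected, known, order=5):
    """Least-squares fit mapping detected bead positions (u, v) on the XRII
    image to their known grid positions (x, y)."""
    B = poly_terms(detected, order)
    coeffs, *_ = np.linalg.lstsq(B, known, rcond=None)   # (n_terms, 2)
    return coeffs

def undistort(points, coeffs, order=5):
    """Apply the fitted coefficient table to arbitrary image points."""
    return poly_terms(points, order) @ coeffs
```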

Journal ArticleDOI
09 Nov 1997
TL;DR: This paper presents the results of combining high-sensitivity 3D PET whole-body acquisition with fast 2D iterative reconstruction methods based on accurate statistical models, a combination made possible by Fourier rebinning (FORE), which accurately converts a 3D data set to a set of 2D sinograms.
Abstract: This paper presents the results of combining high sensitivity 3D PET whole-body acquisition followed by fast 2D iterative reconstruction methods based on accurate statistical models. This combination is made possible by Fourier rebinning (FORE), which accurately converts a 3D data set to a set of 2D sinograms. The combination of volume imaging with statistical reconstruction allows improvement of noise-bias trade-offs when image quality is dominated by measurement statistics. The rebinning of the acquired data into a 2D data set reduces the computation time of the reconstruction. For both penalized weighted least squares (PWLS) and ordered-subset EM (OSEM) reconstruction methods, the usefulness of a realistic model of the expected measurement statistics is shown when the data are pre-corrected for attenuation and random and scattered coincidences, as required for the FORE rebinning algorithm. The results presented are based on 3D simulations of whole body scans that include the major statistical effects of PET acquisition and data correction procedures. As the PWLS method requires knowledge of the variance of the projection data, a simple model for the effect of FORE rebinning on data variance is developed.
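
After FORE rebinning, each 2-D sinogram can be reconstructed with fast statistical methods such as OSEM. A minimal OSEM sketch with a dense system matrix for illustration; the paper's PWLS and shifted-Poisson variants change the statistical model, not this basic subset structure:

```python
import numpy as np

def osem(y, A, subsets, n_iter=4):
    """Ordered-subsets EM for one rebinned 2-D sinogram.

    y: sinogram data (vector); A: system matrix; `subsets`: a partition of
    the projection row indices into ordered subsets. Each sub-iteration
    applies the EM update using only one subset of the data.
    """
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for S in subsets:
            As = A[S]                                  # rows of this subset
            ratio = y[S] / np.maximum(As @ x, 1e-12)
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
    return x
```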

Journal ArticleDOI
TL;DR: An algebraic derivation of DeMenthon and Davis' method is given and it is shown that it belongs to a larger class of methods where the perspective camera model is approximated either at zero order (weak perspective) or first order (paraperspective).
Abstract: Recently, DeMenthon and Davis (1992, 1995) proposed a method for determining the pose of a 3-D object with respect to a camera from 3-D to 2-D point correspondences. The method consists of iteratively improving the pose computed with a weak perspective camera model to converge, at the limit, to a pose estimation computed with a perspective camera model. In this paper we give an algebraic derivation of DeMenthon and Davis' method and we show that it belongs to a larger class of methods where the perspective camera model is approximated either at zero order (weak perspective) or first order (paraperspective). We describe in detail an iterative paraperspective pose computation method for both non-coplanar and coplanar object points. We analyse the convergence of these methods and we conclude that the iterative paraperspective method (proposed in this paper) has better convergence properties than the iterative weak perspective method. We introduce a simple way of taking into account the orthogonality constraint associated with the rotation matrix. We analyse the sensitivity to camera calibration errors and we define the optimal experimental setup with respect to imprecise camera calibration. We compare the results obtained with this method and with a non-linear optimization method.
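
The weak-perspective iteration analyzed here (and refined to paraperspective in the paper) follows the general POSIT scheme, which can be sketched briefly; the interfaces and the fixed iteration count are assumptions:

```python
import numpy as np

def posit(P, p, f, n_iter=10):
    """Sketch of a DeMenthon-Davis style weak-perspective pose iteration.

    P: (n, 3) object points, P[0] the reference point; p: (n, 2) image
    points with the principal point at the origin; f: focal length.
    Each pass solves a linear scaled-orthographic pose, then updates the
    perspective correction eps_i = (P0Pi . k) / Tz.
    """
    A = P[1:] - P[0]                      # vectors from the reference point
    B = np.linalg.pinv(A)
    x, y = p[1:, 0], p[1:, 1]
    eps = np.zeros(len(A))
    for _ in range(n_iter):
        I = B @ (x * (1 + eps) - p[0, 0])
        J = B @ (y * (1 + eps) - p[0, 1])
        s = (np.linalg.norm(I) + np.linalg.norm(J)) / 2.0
        i, j = I / np.linalg.norm(I), J / np.linalg.norm(J)
        k = np.cross(i, j)                # approximate orthogonality constraint
        Tz = f / s
        eps = A @ k / Tz
    R = np.vstack([i, j, k])
    T = np.array([p[0, 0], p[0, 1], f]) * Tz / f
    return R, T
```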

Proceedings ArticleDOI
03 Mar 1997
TL;DR: This work presents a method for the reconstruction of real-world objects from multiple range images based on a least-squares approach where a distance metric between the overlapping range images is minimized and a resolution hierarchy accelerates the registration substantially.
Abstract: Presents a method for the reconstruction of real-world objects from multiple range images. One major contribution of our approach is the simultaneous registration of all range images acquired from different scanner views. Thus, registration errors are not accumulated, and it is even possible to reconstruct large objects from an arbitrary number of small range images. The registration process is based on a least-squares approach where a distance metric between the overlapping range images is minimized. A resolution hierarchy accelerates the registration substantially. After registration, a volumetric model of the object is carved out. This step is based on the idea that no part of the object can lie between the measured surface and the camera of the scanner. With the marching cubes algorithm, a polygonal representation is generated. The accuracy of this polygonal mesh is improved by moving the vertices of the mesh on to the surface implicitly defined by the registered range images.

Journal ArticleDOI
TL;DR: This work investigates the limits for the detection, localization, and characterization of optical inhomogeneities by using diffusing photons as a probe and shows that positional uncertainty in the source and detector leads to significant random errors that degrade the optical information available from diffusing photons.

Abstract: Diffusing photons provide information about the optical properties of turbid media. In biological tissues these optical properties may be correlated to physiological parameters, enabling one to probe effectively the physiological states of tissue for abnormalities such as tumors and hemorrhages. We show that positional uncertainty in the source and detector leads to significant random errors that degrade the optical information available from diffusing photons. We investigate the limits for the detection, localization, and characterization of optical inhomogeneities by using diffusing photons as a probe. Although detection is sufficient for tumor screening, full characterization of the optical properties is desirable for specification of the tumor. Our findings in model breast systems with realistic signal-to-noise ratios indicate that tumors as small as 0.3 cm in diameter can be unambiguously detected; however, simultaneous determination of tumor size and optical properties is possible only if its diameter is of the order of 1.0 cm or larger. On the other hand, if a priori information about the size (optical properties) is available, then the optical properties (size) of tumors as small as 0.3 cm in diameter can be determined.

Journal ArticleDOI
TL;DR: A robust, object-based approach to high-resolution image reconstruction from video using the projections onto convex sets (POCS) framework using a validity map and/or a segmentation map to improve the quality of the reconstructed image.
Abstract: We propose a robust, object-based approach to high-resolution image reconstruction from video using the projections onto convex sets (POCS) framework. The proposed method employs a validity map and/or a segmentation map. The validity map disables projections based on observations with inaccurate motion information for robust reconstruction in the presence of motion estimation errors; while the segmentation map enables object-based processing where more accurate motion models can be utilized to improve the quality of the reconstructed image. Procedures for the computation of the validity map and segmentation map are presented. Experimental results demonstrate the improvement in image quality that can be achieved by the proposed methods.

Journal ArticleDOI
09 Nov 1997
TL;DR: In this article, a fully 3D Bayesian method is described for high resolution reconstruction of images from the Siemens/CTI ECAT EXACT HR+ whole body positron emission tomography (PET) scanner.
Abstract: A fully 3D Bayesian method is described for high resolution reconstruction of images from the Siemens/CTI ECAT EXACT HR+ whole body positron emission tomography (PET) scanner. To maximize resolution recovery from the system the authors model depth dependent geometric efficiency, intrinsic detector efficiency, photon pair non-colinearity, crystal penetration and inter-crystal scatter. They also explicitly model the effects of axial rebinning and angular mashing on the detection probability or system matrix. By fully exploiting sinogram symmetries and using a factored system matrix and automated indexing schemes, the authors are able to achieve substantial savings in both the storage size and time required to compute forward and backward projections. Reconstruction times are further reduced using multi-threaded programming on a four processor Unix server. Bayesian reconstructions are computed using a Huber prior and a shifted-Poisson likelihood model that accounts for the effects of randoms subtraction and scatter. Reconstructions of phantom data show that the 3D Bayesian method can achieve improved FWHM resolution and contrast recovery ratios at matched background noise levels compared to both the 3D reprojection method and an OSEM method based on the shifted-Poisson model.

Journal ArticleDOI
TL;DR: An analytic density compensation function (DCF) for spiral MRI, based on the Jacobian determinant for the transformation between Cartesian coordinates and the spiral sampling parameters of time and interleaf rotation angle, is derived and the reconstruction accuracy achieved using this function is compared with that obtained using several previously published expressions.
Abstract: In interleaved spiral MRI, an object's Fourier transform is sampled along a set of curved trajectories in the spatial frequency domain (k-space). An image of the object is then reconstructed, usually by interpolating the sampled Fourier data onto a Cartesian grid and applying the fast Fourier transform (FFT) algorithm. To obtain accurate results, it is necessary to account for the nonuniform density with which k-space is sampled. An analytic density compensation function (DCF) for spiral MRI, based on the Jacobian determinant for the transformation between Cartesian coordinates and the spiral sampling parameters of time and interleaf rotation angle, is derived in this paper, and the reconstruction accuracy achieved using this function is compared with that obtained using several previously published expressions. Various non-ideal conditions, including intersecting trajectories, are considered. The new DCF eliminated intensity cupping that was encountered in images reconstructed with other functions, and significantly reduced the level of artifact observed when unevenly spaced sampling trajectories, such as those achieved with trapezoidal gradient waveforms, were employed. Modified forms of this function were found to provide similar improvements when intersecting trajectories made the spiral-Cartesian transformation noninvertible, and when the shape of the spiral trajectory varied between interleaves.
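
The Jacobian-based weight can be evaluated numerically for any set of interleaves that are rotated copies of a single trajectory, which is the setting of the paper. A sketch (normalization illustrative):

```python
import numpy as np

def spiral_dcf(k_traj, dt):
    """Density compensation via the Jacobian of (t, theta) -> (kx, ky).

    With interleaves k(t, theta) = k(t) * exp(1j * theta), the Jacobian
    determinant is |Im( conj(dk/dt) * dk/dtheta )| = |Im( conj(dk/dt) * 1j*k )|,
    so each sample is weighted by that determinant.
    """
    k = np.asarray(k_traj, dtype=complex)     # kx + 1j*ky along one interleaf
    dkdt = np.gradient(k, dt)                 # numerical dk/dt
    dkdtheta = 1j * k                         # derivative w.r.t. rotation angle
    w = np.abs(np.imag(np.conj(dkdt) * dkdtheta))
    return w / w.max()                        # normalized DCF per sample
```

As a sanity check, for an Archimedean spiral k(t) = a*t*exp(1j*b*t) the determinant works out to a^2*t, so the weight grows linearly with radius and vanishes at the densely sampled k-space center, as expected.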

Journal ArticleDOI
TL;DR: It is shown that absolute activation levels are strongly dependent on the parameters of the filter used in image construction and that significance of an activation signal can be enhanced through appropriate filter selection.
Abstract: When constructing MR images from acquired spatial frequency data, it can be beneficial to apply a low-pass filter to remove high frequency noise from the resulting images. This amounts to attenuating high spatial frequency fluctuations that can affect detected MR signal. A study is presented of spatially filtering MR data and possible ramifications on detecting regionally specific activation signal. It is shown that absolute activation levels are strongly dependent on the parameters of the filter used in image construction and that significance of an activation signal can be enhanced through appropriate filter selection. A comparison is made between spatially filtering MR image data and applying a Gaussian convolution kernel to statistical parametric maps.
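
Applying such a filter during image construction is a pointwise multiplication of the acquired spatial-frequency data before the inverse FFT. A minimal sketch, assuming DC-centered Cartesian k-space and parametrizing the Gaussian by its image-domain FWHM (a hypothetical but common choice):

```python
import numpy as np

def lowpass_recon(kspace, fwhm_px):
    """Reconstruct an image with a Gaussian low-pass applied in k-space.
    Multiplying k-space by a Gaussian is equivalent to convolving the
    reconstructed image with a Gaussian kernel, which is the comparison
    the paper draws against smoothing statistical parametric maps."""
    ny, nx = kspace.shape
    sigma = fwhm_px / (2 * np.sqrt(2 * np.log(2)))       # FWHM -> sigma, pixels
    ky = np.fft.fftshift(np.fft.fftfreq(ny))[:, None]    # DC-centered, cycles/px
    kx = np.fft.fftshift(np.fft.fftfreq(nx))[None, :]
    H = np.exp(-2 * (np.pi * sigma) ** 2 * (kx**2 + ky**2))
    img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace * H)))
    return np.abs(img)
```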

Journal ArticleDOI
TL;DR: A multishot image acquisition method based upon rosette k-space trajectories has been developed and implemented for spectrally selective magnetic resonance imaging (MRI), and the spectral selectivity is demonstrated in vivo in a study in which both water and lipid images are generated from a single imaging data set.
Abstract: In nuclear magnetic resonance, different spectral components often correspond to different chemical species and as such, spectral selectivity can be a valuable tool for diagnostic imaging. In the work presented here, a multishot image acquisition method based upon rosette k-space trajectories has been developed and implemented for spectrally selective magnetic resonance imaging (MRI). Parametric forms for the gradient waveforms and design constraints are derived, and an example multishot gradient design is presented. The spectral behaviour for this imaging method is analyzed in a simulation model. For frequencies that are near to the resonant frequency, this method results in a lower intensity, but undistorted image, while for frequencies that are off-resonance by a large amount, the object is incoherently dephased into noise. A method by which acquisitions are delayed by small amounts is introduced to further reduce the residual intensity for off-resonant signals. An image reconstruction method based on convolution gridding, including a correction method for small amounts of magnetic field inhomogeneity, is implemented. Finally, the spectral selectivity is demonstrated in vivo in a study in which both water and lipid images are generated from a single imaging data set.
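
A common parametric form consistent with the description above oscillates through the k-space origin at one rate while rotating at another; the gradient waveform follows by differentiation. A sketch (parameters would be chosen against the gradient amplitude and slew-rate constraints the paper derives):

```python
import numpy as np

def rosette_trajectory(kmax, w1, w2, t):
    """Rosette k-space trajectory k(t) = kmax * sin(w1*t) * exp(1j*w2*t):
    radial oscillation through the center at rate w1 with rotation at
    rate w2. The gradient waveform is g(t) = (1/gamma) * dk/dt."""
    k = kmax * np.sin(w1 * t) * np.exp(1j * w2 * t)
    dkdt = kmax * (w1 * np.cos(w1 * t)
                   + 1j * w2 * np.sin(w1 * t)) * np.exp(1j * w2 * t)
    gamma = 2 * np.pi * 42.58e6            # rad/s/T for protons
    g = dkdt / gamma                       # complex gx + 1j*gy
    return k, g
```

Repeated passes through the center are what give the sequence its spectral selectivity: on-resonance signal adds coherently at the origin, while off-resonance signal dephases across shots.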

Journal ArticleDOI
TL;DR: A new image recovery algorithm to remove, in addition to blocking, ringing artifacts from compressed images and video, is presented, based on the theory of projections onto convex sets (POCS).
Abstract: We present a new image recovery algorithm to remove, in addition to blocking, ringing artifacts from compressed images and video. This new algorithm is based on the theory of projections onto convex sets (POCS). A new family of directional smoothness constraint sets is defined based on line processes modeling of the image edge structure. The definition of these smoothness sets also takes into account the fact that the visibility of compression artifacts in an image is spatially varying. To overcome the numerical difficulty in computing the projections onto these sets, a divide-and-conquer (DAC) strategy is introduced. According to this strategy, new smoothness sets are derived such that their projections are easier to compute. The effectiveness of the proposed algorithm is demonstrated through numerical experiments using Motion Picture Expert Group based (MPEG-based) coders-decoders (codecs).