
Showing papers on "Iterative reconstruction published in 1995"


Journal ArticleDOI
TL;DR: Compared to classic approaches making use of Newton's method, POSIT does not require starting from an initial guess, and computes the pose using an order of magnitude fewer floating point operations; it may therefore be a useful alternative for real-time operation.
Abstract: In this paper, we describe a method for finding the pose of an object from a single image. We assume that we can detect and match in the image four or more noncoplanar feature points of the object, and that we know their relative geometry on the object. The method combines two algorithms; the first algorithm, POS (Pose from Orthography and Scaling), approximates the perspective projection with a scaled orthographic projection and finds the rotation matrix and the translation vector of the object by solving a linear system; the second algorithm, POSIT (POS with ITerations), uses in its iteration loop the approximate pose found by POS in order to compute better scaled orthographic projections of the feature points, then applies POS to these projections instead of the original image projections. POSIT converges to accurate pose measurements in a few iterations. POSIT can be used with many feature points at once for added insensitivity to measurement errors and image noise. Compared to classic approaches making use of Newton's method, POSIT does not require starting from an initial guess, and computes the pose using an order of magnitude fewer floating point operations; it may therefore be a useful alternative for real-time operation. When speed is not an issue, POSIT can be written in 25 lines or less in Mathematica; the code is provided in an Appendix.
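
The POS/POSIT loop summarized above is compact enough to sketch. The following NumPy sketch assumes image coordinates centered on the optical axis and a focal length in pixel units; the variable names and the convergence test are mine, not the authors'.

```python
import numpy as np

def posit(obj_pts, img_pts, focal, n_iter=20, tol=1e-6):
    """Sketch of POS with iterations (POSIT) for >= 4 noncoplanar points.

    obj_pts: (N, 3) object points; obj_pts[0] is the reference point.
    img_pts: (N, 2) image points in pixels, centered on the image center.
    Returns a rotation matrix R (3x3) and translation vector T (3,).
    """
    A = obj_pts[1:] - obj_pts[0]          # vectors from the reference point
    B = np.linalg.pinv(A)                 # pseudoinverse used by the POS step
    x, y = img_pts[1:, 0], img_pts[1:, 1]
    x0, y0 = img_pts[0]
    eps = np.zeros(len(A))                # eps_i = (M0Mi . k) / Tz, starts at 0

    for _ in range(n_iter):
        # POS step on the current scaled-orthographic projections
        I = B @ (x * (1 + eps) - x0)
        J = B @ (y * (1 + eps) - y0)
        s1, s2 = np.linalg.norm(I), np.linalg.norm(J)
        i_hat, j_hat = I / s1, J / s2
        k_hat = np.cross(i_hat, j_hat)
        s = np.sqrt(s1 * s2)              # scale of the scaled-orthographic projection
        Tz = focal / s
        new_eps = (A @ k_hat) / Tz        # corrections used in the next iteration
        if np.max(np.abs(new_eps - eps)) < tol:
            eps = new_eps
            break
        eps = new_eps

    R = np.vstack([i_hat, j_hat, k_hat])
    T = np.array([x0 / s, y0 / s, Tz])    # translation of the reference point
    return R, T
```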

1,195 citations


Journal ArticleDOI
TL;DR: This approach is based on an auxiliary array and an extended objective function in which the original variables appear quadratically and the auxiliary variables are decoupled; minimizing over the auxiliary array alone yields the original function, so the original image estimate can be obtained by joint minimization.
Abstract: One popular method for the recovery of an ideal intensity image from corrupted or indirect measurements is regularization: minimize an objective function that enforces a roughness penalty in addition to coherence with the data. Linear estimates are relatively easy to compute but generally introduce systematic errors; for example, they are incapable of recovering discontinuities and other important image attributes. In contrast, nonlinear estimates are more accurate but are often far less accessible. This is particularly true when the objective function is nonconvex, and the distribution of each data component depends on many image components through a linear operator with broad support. Our approach is based on an auxiliary array and an extended objective function in which the original variables appear quadratically and the auxiliary variables are decoupled. Minimizing over the auxiliary array alone yields the original function so that the original image estimate can be obtained by joint minimization. This can be done efficiently by Monte Carlo methods, for example by FFT-based annealing using a Markov chain that alternates between (global) transitions from one array to the other. Experiments are reported in optical astronomy, with space telescope data, and computed tomography. >

964 citations


Journal ArticleDOI
TL;DR: A new quantitative fidelity measure, termed the peak signal-to-perceptible-noise ratio (PSPNR), is proposed to assess the quality of the compressed image by taking the perceptible part of the distortion into account.
Abstract: To represent an image of high perceptual quality with the lowest possible bit rate, an effective image compression algorithm should not only remove the redundancy due to statistical correlation but also the perceptually insignificant components from image signals. In this paper, a perceptually tuned subband image coding scheme is presented, where a just-noticeable distortion (JND) or minimally noticeable distortion (MND) profile is employed to quantify the perceptual redundancy. The JND profile provides each signal being coded with a visibility threshold of distortion, below which reconstruction errors are rendered imperceptible. Based on a perceptual model that incorporates the threshold sensitivities due to background luminance and texture masking effect, the JND profile is estimated from analyzing local properties of image signals. According to the sensitivity of human visual perception to spatial frequencies, the full-band JND/MND profile is decomposed into component JND/MND profiles of different frequency subbands. With these component profiles, perceptually insignificant signals in each subband can be screened out, and significant signals can be properly encoded to meet the visibility threshold. A new quantitative fidelity measure, termed as peak signal-to-perceptible-noise ratio (PSPNR), is proposed to assess the quality of the compressed image by taking the perceptible part of the distortion into account. Simulation results show that near-transparent image coding can be achieved at less than 0.4 b/pixel. As compared to the ISO-JPEG standard, the proposed algorithm can remove more perceptual redundancy from the original image, and the visual quality of the reconstructed image is much more acceptable at low bit rates.
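
Since the abstract only names the measure, a compact sketch may help fix the idea: only the portion of the error that exceeds the local JND threshold counts as noise. This is one plausible reading of the definition, not the paper's exact formula; the peak value of 255 and the supplied JND map are assumptions.

```python
import numpy as np

def pspnr(original, compressed, jnd, peak=255.0):
    """Peak signal-to-perceptible-noise ratio: only error above the JND counts."""
    err = np.abs(original.astype(float) - compressed.astype(float))
    perceptible = np.maximum(err - jnd, 0.0)     # error below the threshold is invisible
    mpne = np.mean(perceptible ** 2)             # mean perceptible-noise energy
    return np.inf if mpne == 0 else 10.0 * np.log10(peak ** 2 / mpne)
```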

650 citations


Journal ArticleDOI
TL;DR: The algorithm is based on a source model emphasizing the visual integrity of detected edges and incorporates a novel edge fitting operator that has been developed for this application, and produces an image of increased resolution with noticeably sharper edges and lower mean-squared reconstruction error than that produced by linear techniques.
Abstract: In this paper, we present a nonlinear interpolation scheme for still image resolution enhancement. The algorithm is based on a source model emphasizing the visual integrity of detected edges and incorporates a novel edge fitting operator that has been developed for this application. A small neighborhood about each pixel in the low-resolution image is first mapped to a best-fit continuous space step edge. The bilevel approximation serves as a local template on which the higher resolution sampling grid can then be superimposed (where disputed values in regions of local window overlap are averaged to smooth errors). The result is an image of increased resolution with noticeably sharper edges and, in all tried cases, lower mean-squared reconstruction error than that produced by linear techniques. >

492 citations


Journal ArticleDOI
TL;DR: A spatially adaptive image recovery algorithm is proposed based on the theory of projections onto convex sets that captures both the local statistical properties of the image and the human perceptual characteristics.
Abstract: At the present time, block-transform coding is probably the most popular approach for image compression. For this approach, the compressed images are decoded using only the transmitted transform data. We formulate image decoding as an image recovery problem. According to this approach, the decoded image is reconstructed using not only the transmitted data but, in addition, the prior knowledge that images before compression do not display between-block discontinuities. A spatially adaptive image recovery algorithm is proposed based on the theory of projections onto convex sets. Apart from the data constraint set, this algorithm uses another new constraint set that enforces between-block smoothness. The novelty of this set is that it captures both the local statistical properties of the image and the human perceptual characteristics. A simplified spatially adaptive recovery algorithm is also proposed, and the analysis of its computational complexity is presented. Numerical experiments are shown that demonstrate that the proposed algorithms work better than both the JPEG deblocking recommendation and our previous projection-based image decoding approach. >
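
A rough sketch of such an alternating-projection decoder is given below. It is not the paper's spatially adaptive algorithm: the between-block smoothness projection is approximated here by a mild low-pass step, and a uniform quantizer step `q_step` per coefficient is assumed.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import uniform_filter

def pocs_deblock(decoded, q_step, n_iter=10, block=8):
    """Alternate a crude smoothness step with the quantization-constraint projection.

    decoded: block-DCT decoded image (dimensions assumed multiples of `block`).
    q_step:  assumed uniform quantizer step size (scalar or 8x8 array).
    """
    x = decoded.astype(float)
    h, w = x.shape
    # The transmitted data constrain each coefficient to a cell centered on its
    # dequantized value.
    centers = np.zeros_like(x)
    for i in range(0, h, block):
        for j in range(0, w, block):
            centers[i:i + block, j:j + block] = dctn(x[i:i + block, j:j + block], norm='ortho')

    for _ in range(n_iter):
        x = uniform_filter(x, size=3)          # stand-in for the between-block smoothness projection
        for i in range(0, h, block):           # project back onto the quantization cells
            for j in range(0, w, block):
                c = dctn(x[i:i + block, j:j + block], norm='ortho')
                lo = centers[i:i + block, j:j + block] - q_step / 2
                hi = centers[i:i + block, j:j + block] + q_step / 2
                x[i:i + block, j:j + block] = idctn(np.clip(c, lo, hi), norm='ortho')
    return x
```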

384 citations


Journal ArticleDOI
TL;DR: Preliminary numerical testing of the algorithms on simulated data suggests that the convex algorithm and the ad hoc gradient algorithm are computationally superior to the EM algorithm.
Abstract: This paper reviews and compares three maximum likelihood algorithms for transmission tomography. One of these algorithms is the EM algorithm, one is based on a convexity argument devised by De Pierro (see IEEE Trans. Med. Imaging, vol.12, p.328-333, 1993) in the context of emission tomography, and one is an ad hoc gradient algorithm. The algorithms enjoy desirable local and global convergence properties and combine gracefully with Bayesian smoothing priors. Preliminary numerical testing of the algorithms on simulated data suggests that the convex algorithm and the ad hoc gradient algorithm are computationally superior to the EM algorithm. This superiority stems from the larger number of exponentiations required by the EM algorithm. The convex and gradient algorithms are well adapted to parallel computing.
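
For concreteness, the sketch below spells out the Poisson transmission model underlying all three algorithms and a plain projected-gradient ascent on its log-likelihood. It is a toy stand-in, not the paper's convex or EM update; the system matrix `A`, blank-scan counts `b`, and fixed step size are assumptions.

```python
import numpy as np

def transmission_ml_gradient(A, y, b, mu0, step=1e-3, n_iter=200):
    """Projected gradient ascent on the Poisson transmission log-likelihood.

    Model: y_i ~ Poisson(b_i * exp(-[A mu]_i)), with A the system matrix,
    b the blank-scan counts, y the measured counts, mu the attenuation image.
    """
    mu = mu0.copy()
    for _ in range(n_iter):
        ybar = b * np.exp(-(A @ mu))          # expected counts under the current image
        grad = A.T @ (ybar - y)               # gradient of the log-likelihood w.r.t. mu
        mu = np.maximum(mu + step * grad, 0)  # enforce nonnegative attenuation
    return mu
```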

368 citations


Book
01 Jan 1995
TL;DR: In this article, the authors describe a wide range of applications and practical results of the techniques detailed above, including applications to particulate flows, fluidized beds, mixing, transport and separation processes, and combustion systems.
Abstract: CONTENTS INCLUDE: Part One - Introduction to Process Tomography and assessment of industrial needs. Part Two - Description of techniques, including electrical sensing, (capacitance, resistance, inductance and triboelectric), ultrasonic sensing, optical sensing, emission tomography. Part Three - Data processing techniques required for image reconstruction, often at high speeds necessary for on-line process applications. The need for parallel processing and implications for sensor design. Finally, a description of the techniques used for quality assurance in process tomography imaging. Part Four is a major aspect of the book - featuring a wide range of applications and practical results of the techniques detailed above, including those to particulate flows, fluidized beds, mixing transport and separation processes and combustion systems.

358 citations


Journal ArticleDOI
TL;DR: Results using simulated data suggest that qualitative images can be produced that readily highlight the location of absorption and scattering heterogeneities within a circular background region of close to 4 cm in diameter over a range of contrast levels, suggesting that absolute optical imaging involving simultaneous recovery of both absorption and scatter profiles in multicentimeter tissue geometries may prove to be extremely difficult.
Abstract: A finite element reconstruction algorithm for optical data based on a diffusion equation approximation is presented. A frequency domain approach is adopted and a unified formulation for three combinations of boundary observables and conditions is described. A multidetector, multisource measurement and excitation strategy is simulated, which includes a distributed model of the light source that illustrates the flexibility of the methodology to modeling adaptations. Simultaneous reconstruction of both absorption and scattering coefficients for a tissue-like medium is achieved for all three boundary data types. The algorithm is found to be computationally practical, and can be implemented without major difficulties in a workstation computing environment. Results using simulated data suggest that qualitative images can be produced that readily highlight the location of absorption and scattering heterogeneities within a circular background region of close to 4 cm in diameter over a range of contrast levels. Absorption images appear to more closely identify the true size of the heterogeneity; however, both the absorption and scattering reconstructions have difficulty with sharp transitions at increasing depth. Quantitatively, the reconstructions are not accurate, suggesting that absolute optical imaging involving simultaneous recovery of both absorption and scattering profiles in multicentimeter tissue geometries may prove to be extremely difficult.

344 citations


Journal ArticleDOI
TL;DR: This paper presents space-alternating generalized EM (SAGE) algorithms for image reconstruction, which update the parameters sequentially using a sequence of small "hidden" data spaces, rather than simultaneously using one large complete-data space.
Abstract: Most expectation-maximization (EM) type algorithms for penalized maximum-likelihood image reconstruction converge slowly, particularly when one incorporates additive background effects such as scatter, random coincidences, dark current, or cosmic radiation. In addition, regularizing smoothness penalties (or priors) introduce parameter coupling, rendering intractable the M-steps of most EM-type algorithms. This paper presents space-alternating generalized EM (SAGE) algorithms for image reconstruction, which update the parameters sequentially using a sequence of small "hidden" data spaces, rather than simultaneously using one large complete-data space. The sequential update decouples the M-step, so the maximization can typically be performed analytically. We introduce new hidden-data spaces that are less informative than the conventional complete-data space for Poisson data and that yield significant improvements in convergence rate. This acceleration is due to statistical considerations, not numerical overrelaxation methods, so monotonic increases in the objective function are guaranteed. We provide a general global convergence proof for SAGE methods with nonnegativity constraints. >

308 citations


Journal ArticleDOI
TL;DR: In this article, the elastic modulus of soft tissue based on ultrasonic displacement and strain images is reconstructed using a hybrid reconstruction procedure based on numerical solution of the partial differential equations describing mechanical equilibrium of a deformed medium.
Abstract: A method is presented to reconstruct the elastic modulus of soft tissue based on ultrasonic displacement and strain images. Incompressible and compressible media are considered separately. Problems arising with this method, as well as applications to real measurements on gel-based, tissue equivalent phantoms, are given. Results show that artifacts present in strain images can be greatly reduced using a hybrid reconstruction procedure based on numerical solution of the partial differential equations describing mechanical equilibrium of a deformed medium. >

283 citations


Journal ArticleDOI
TL;DR: The shape of the FIS is determined by searching for a shape which maximizes a focus measure, which results in more accurate shape recovery than the traditional methods.
Abstract: A new shape-from-focus method is described which is based on a new concept, named focused image surface (FIS). FIS of an object is defined as the surface formed by the set of points at which the object points are focused by a camera lens. According to paraxial-geometric optics, there is a one-to-one correspondence between the shape of an object and the shape of its FIS. Therefore, the problem of shape recovery can be posed as the problem of determining the shape of the FIS. From the shape of FIS the shape of the object is easily obtained. In this paper the shape of the FIS is determined by searching for a shape which maximizes a focus measure. In contrast to previous literature where the focus measure is computed over the planar image detector of the camera, here the focus measure is computed over the FIS. This results in more accurate shape recovery than the traditional methods. Also, using FIS, a more accurate focused image can be reconstructed from a sequence of images than is possible with traditional methods. The new method has been implemented on an actual camera system, and the results of shape recovery and focused image reconstruction are presented. >

Journal ArticleDOI
TL;DR: A temporally and spatially nonscanning imaging spectrometer is described in terms of computed-tomography concepts, specifically the central-slice theorem, and experimental results indicate that the instrument performs well in the case of broadband and narrow-band emitters.
Abstract: A temporally and spatially nonscanning imaging spectrometer is described in terms of computed-tomography concepts, specifically the central-slice theorem. A sequence of three transmission sinusoidal-phase gratings rotated in 60° increments achieves dispersion in multiple directions and into multiple orders. The dispersed images of the system's field stop are interpreted as two-dimensional projections of a three-dimensional (x, y, λ) object cube. Because of the size of the finite focal-plane array, this imaging spectrometer is an example of a limited-view-angle tomographic system. The imaging spectrometer's point spread function is measured experimentally as a function of wavelength and position in the field of view. Reconstruction of the object cube is then achieved through the maximum-likelihood, expectation-maximization algorithm under the assumption of a Poisson likelihood law. Experimental results indicate that the instrument performs well in the case of broadband and narrow-band emitters.
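
The reconstruction step named above (maximum-likelihood expectation-maximization under a Poisson law) takes, in its generic form, the standard multiplicative ML-EM update; a minimal sketch for an arbitrary system matrix follows. The instrument-specific system model is not reproduced here.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Standard Poisson ML-EM update: x <- x / (A^T 1) * A^T ( y / (A x) )."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity (back-projection of ones)
    for _ in range(n_iter):
        proj = A @ x                          # forward projection of the current estimate
        x *= (A.T @ (y / np.maximum(proj, eps))) / np.maximum(sens, eps)
    return x
```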

Journal ArticleDOI
TL;DR: It is argued that an object-centered representation is most appropriate for this purpose because it naturally accommodates multiple sources of data, multiple images (including motion sequences of a rigid object), and self-occlusions.
Abstract: Our goal is to reconstruct both the shape and reflectance properties of surfaces from multiple images. We argue that an object-centered representation is most appropriate for this purpose because it naturally accommodates multiple sources of data, multiple images (including motion sequences of a rigid object), and self-occlusions. We then present a specific object-centered reconstruction method and its implementation. The method begins with an initial estimate of surface shape provided, for example, by triangulating the result of conventional stereo. The surface shape and reflectance properties are then iteratively adjusted to minimize an objective function that combines information from multiple input images. The objective function is a weighted sum of stereo, shading, and smoothness components, where the weight varies over the surface. For example, the stereo component is weighted more strongly where the surface projects onto highly textured areas in the images, and less strongly otherwise. Thus, each component has its greatest influence where its accuracy is likely to be greatest. Experimental results on both synthetic and real images are presented.

Journal ArticleDOI
TL;DR: A novel approach to the reconstruction of geometric models and surfaces from given sets of points using volume splines based on use of the Green's function for interpolation of scalar function values of a chosen “carrier” solid.
Abstract: This paper presents a novel approach to the reconstruction of geometric models and surfaces from given sets of points using volume splines. It results in the representation of a solid by the inequality f(x, y, z) ≥ 0. The volume spline is based on the use of the Green's function for interpolation of scalar function values of a chosen “carrier” solid. Our algorithm is capable of generating highly concave and branching objects automatically. The particular case where the surface is reconstructed from cross-sections is also discussed. Potential applications of this algorithm are in tomography, image processing, animation and CAD for bodies with complex surfaces.
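
As a rough illustration of the interpolation step, the sketch below fits a generic radial-basis "volume spline" with the kernel φ(r) = r standing in for the paper's Green's function; the off-surface constraint points mentioned in the usage comment are an assumption added so that the zero level set bounds a solid.

```python
import numpy as np

def fit_volume_spline(pts, vals):
    """Solve for weights w so that f(x) = sum_j w_j * |x - p_j| interpolates vals at pts."""
    K = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return np.linalg.solve(K, vals)

def eval_volume_spline(x, pts, w):
    """Evaluate the spline at query points x; the solid is taken as the region f(x) >= 0."""
    return np.linalg.norm(x[:, None, :] - pts[None, :, :], axis=-1) @ w

# Toy usage: surface samples carry value 0, points pushed slightly inward along
# (assumed known) normals carry positive values, and outward points negative ones,
# so that the zero level set of the fitted spline approximates the surface.
```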

Journal ArticleDOI
TL;DR: This paper presents a number of model-based interpolation schemes tailored to the problem of interpolating missing regions in image sequences; comparisons with earlier work using multilevel median filters demonstrate the higher reconstruction fidelity of the new interpolators.
Abstract: This paper presents a number of model based interpolation schemes tailored to the problem of interpolating missing regions in image sequences. These missing regions may be of arbitrary size and of random, but known, location. This problem occurs regularly with archived film material. The film is abraded or obscured in patches, giving rise to bright and dark flashes, known as "dirt and sparkle" in the motion picture industry. Both 3-D autoregressive models and 3-D Markov random fields are considered in the formulation of the different reconstruction processes. The models act along motion directions estimated using a multiresolution block matching scheme. It is possible to address this sort of impulsive noise suppression problem with median filters, and comparisons with earlier work using multilevel median filters are performed. These comparisons demonstrate the higher reconstruction fidelity of the new interpolators. >

Journal ArticleDOI
H. Schomberg, J. Timmer
TL;DR: The authors explore a computational method for reconstructing an n-dimensional signal f from a sampled version of its Fourier transform f̂, which provides a fast and accurate alternative to the filtered backprojection.
Abstract: The authors explore a computational method for reconstructing an n-dimensional signal f from a sampled version of its Fourier transform f̂. The method involves a window function ŵ and proceeds in three steps. First, the convolution ĝ = ŵ * f̂ is computed numerically on a Cartesian grid, using the available samples of f̂. Then, g = wf is computed via the inverse discrete Fourier transform, and finally f is obtained as g/w. Due to the smoothing effect of the convolution, evaluating ŵ * f̂ is much less error prone than merely interpolating f̂. The method was originally devised for image reconstruction in radio astronomy, but is actually applicable to a broad range of reconstructive imaging methods, including magnetic resonance imaging and computed tomography. In particular, it provides a fast and accurate alternative to the filtered backprojection. The basic method has several variants with other applications, such as the equidistant resampling of arbitrarily sampled signals or the fast computation of the Radon (Hough) transform.
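
The three steps can be sketched in one dimension. The Gaussian window below is a stand-in for whatever window the authors actually use, and the sketch omits the density compensation and grid oversampling that a practical gridding implementation would include.

```python
import numpy as np

def gridding_1d(k_samples, f_hat_samples, n_grid, sigma=1.0):
    """1-D sketch: convolve onto a Cartesian grid, inverse DFT, divide by the window.

    k_samples lie in [-0.5, 0.5) in units of the target grid's frequency span.
    """
    grid_k = (np.arange(n_grid) - n_grid // 2) / n_grid       # Cartesian k-grid
    # Step 1: g_hat = (w_hat * f_hat) evaluated on the grid from the available samples.
    ghat = np.zeros(n_grid, dtype=complex)
    for k, v in zip(k_samples, f_hat_samples):
        ghat += v * np.exp(-0.5 * ((grid_k - k) * n_grid / sigma) ** 2)  # Gaussian window
    # Step 2: g = w f via the inverse discrete Fourier transform.
    g = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(ghat)))
    # Step 3: f = g / w, where w is the inverse DFT of the window sampled on the grid.
    # (Practical implementations oversample the grid and crop the result so that the
    # division does not amplify noise where w rolls off near the edges.)
    what = np.exp(-0.5 * (grid_k * n_grid / sigma) ** 2)
    w = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(what))).real
    return g / w
```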

Journal ArticleDOI
21 Oct 1995
TL;DR: The authors have implemented a single-scatter simulation technique for scatter correction in positron volume imaging and present scatter correction results from human and chest phantom studies.
Abstract: Quantitative imaging of the human thorax poses one of the most difficult challenges for three-dimensional (3-D) (septaless) positron emission tomography (PET), due to the strong attenuation of the annihilation radiation and the large contribution of scattered photons to the data. In [/sup 18/F] fluorodeoxyglucose (FDG) studies of the heart with the patient's arms in the field of view, the contribution of scattered events can exceed 50% of the total detected coincidences. Accurate correction for this scatter component is necessary for meaningful quantitative image analysis and tracer kinetic modeling. For this reason, the authors have implemented a single-scatter simulation technique for scatter correction in positron volume imaging. Here, they describe this algorithm and present scatter correction results from human and chest phantom studies.

Proceedings ArticleDOI
20 Jun 1995
TL;DR: The geometry of multi image perspective projection and the matching constraints that this induces on image measurements are studied and their complex algebraic interdependency is captured by quadratic structural simplicity constraints on the Grassmannian.
Abstract: The paper studies the geometry of multi image perspective projection and the matching constraints that this induces on image measurements. The combined image projections define a 3D joint image subspace of the space of combined homogeneous image coordinates. This is a complete projective replica of the 3D world in image coordinates. Its location encodes the imaging geometry and is captured by the 4 index joint image Grassmannian tensor. Projective reconstruction in the joint image is a canonical process requiring only a simple rescaling of image coordinates. Reconstruction in world coordinates amounts to a choice of basis in the joint image. The matching constraints are multilinear tensorial equations in image coordinates that tell whether tokens in different images could be the projections of a single world token. For 2D images of 3D points there are exactly three basic types: the epipolar constraint, A. Shashua's (1995) trilinear one, and a new quadrilinear 4 image one. For images of lines, R. Hartley's (1994) trilinear constraint is the only type. The coefficients of the matching constraints are tensors built directly from the joint image Grassmannian. Their complex algebraic interdependency is captured by quadratic structural simplicity constraints on the Grassmannian. >

Journal ArticleDOI
TL;DR: The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
Abstract: The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image that best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem that can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.

Journal ArticleDOI
TL;DR: In this paper, a frequency-hopping approach is proposed to process multifrequency CW microwave measurement data so that larger dielectric bodies for microwave imaging can be reconstructed with higher fidelity compared to a single-frequency reconstruction.
Abstract: A frequency-hopping approach is proposed to process multifrequency CW microwave measurement data so that larger dielectric bodies for microwave imaging can be reconstructed with higher fidelity compared to a single-frequency reconstruction. The frequency hopping approach uses only data at a few frequencies, and hence can reduce data acquisition time in a practical system. Moreover, the frequency-hopping approach overcomes the effect of nonlinearity in the optimization procedure so that an algorithm is not being trapped in local minima. In this manner, larger objects with higher contrasts could be reconstructed without a priori information. We demonstrate the reconstruction of an object 10 wavelengths in diameter with permittivity profile contrast larger than 1:2 without using a priori information.

Proceedings ArticleDOI
20 Jun 1995
TL;DR: The trifocal tensor is shown to be essentially identical to a set of coefficients introduced by Shashua (1994) to effect point transfer in the three-view case, so the 13-line algorithm can be extended to compute the trifocal tensor given any mixture of sufficiently many line and point correspondences.
Abstract: Discusses the basic role of the trifocal tensor in scene reconstruction. This 3×3×3 tensor plays a role in the analysis of scenes from three views analogous to the role played by the fundamental matrix in the two-view case. In particular, the trifocal tensor may be computed by a linear algorithm from a set of 13 line correspondences in three views. It is further shown in this paper to be essentially identical to a set of coefficients introduced by Shashua (1994) to effect point transfer in the three-view case. This observation means that the 13-line algorithm may be extended to allow for the computation of the trifocal tensor given any mixture of sufficiently many line and point correspondences. From the trifocal tensor, the camera image matrices may be computed, and the scene may be reconstructed. For unrelated uncalibrated cameras, this reconstruction is unique up to projectivity. Thus, a projective reconstruction of a set of lines and points may be computed linearly from three views.

Proceedings ArticleDOI
23 Oct 1995
TL;DR: The low-resolution to high-resolution problem is posed as a maximum likelihood (ML) problem solved by the expectation-maximization (EM) algorithm; by exploiting the structure of the matrices involved, the problem can be solved in the discrete frequency domain.
Abstract: In this paper a solution is provided to the problem of obtaining a high resolution image from several low resolution images that have been subsampled and displaced by different amounts of sub-pixel shifts. In its most general form, this problem can be broken up into three sub-problems: registration, restoration, and interpolation. Previous work has either solved all three sub-problems independently, or more recently, solved either the first two steps (registration and restoration) or the last two steps together. However, none of the existing methods solve all three sub-problems simultaneously. This paper poses the low resolution to high resolution problem as a maximum likelihood (ML) problem which is solved by the expectation-maximization (EM) algorithm. By exploiting the structure of the matrices involved, the problem can be solved in the discrete frequency domain. The ML problem is then the estimation of the sub-pixel shifts, the noise variances of each image, the power spectra of the high resolution image, and the high resolution image itself. Experimental results are shown which demonstrate the effectiveness of this approach.

Journal ArticleDOI
TL;DR: This correspondence addresses the problem of inferring the shape of the unknown object O from the reconstructed object R, and considers two cases: R is the closest approximation of O which can be obtained from its silhouettes, i.e., its visual hull; and R is a generic reconstructed object.
Abstract: Each 2D silhouette of a 3D unknown object O constrains O inside the volume obtained by back-projecting the silhouette from the corresponding viewpoint. A set of silhouettes specifies a boundary volume R obtained by intersecting the volumes due to each silhouette. R more or less closely approximates O, depending on the viewpoints and the object itself. This approach to the reconstruction of 3D objects is usually referred to as volume intersection. This correspondence addresses the problem of inferring the shape of the unknown object O from the reconstructed object R. For doing this, the author divides the points of the surface of R into hard points, which belong to the surface of any possible object originating R, and soft points, which may or may not belong to O. The author considers two cases: In the first case R is the closest approximation of O which can be obtained from its silhouettes, i.e., its visual hull; in the second case, R is a generic reconstructed object. In both cases the author supplies necessary and sufficient conditions for a point to be hard and gives rules for computing the hard surfaces. >
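
A voxel-based sketch of the volume-intersection step (constructing R from the silhouettes) follows. The 3×4 projection matrices, binary silhouette masks, and nearest-pixel lookup are assumptions of this illustration, not details from the paper.

```python
import numpy as np

def volume_intersection(voxel_centers, proj_mats, silhouettes):
    """Keep a voxel only if it projects inside the silhouette in every view.

    voxel_centers: (M, 3) points; proj_mats: list of 3x4 camera matrices;
    silhouettes: list of binary masks (H, W), one per view.
    """
    keep = np.ones(len(voxel_centers), dtype=bool)
    hom = np.hstack([voxel_centers, np.ones((len(voxel_centers), 1))])
    for P, sil in zip(proj_mats, silhouettes):
        p = hom @ P.T                          # project all voxel centers into this view
        u = np.round(p[:, 0] / p[:, 2]).astype(int)
        v = np.round(p[:, 1] / p[:, 2]).astype(int)
        inside = (v >= 0) & (v < sil.shape[0]) & (u >= 0) & (u < sil.shape[1])
        hit = np.zeros(len(voxel_centers), dtype=bool)
        hit[inside] = sil[v[inside], u[inside]] > 0
        keep &= hit                            # intersect the back-projected cones
    return keep
```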

Journal ArticleDOI
01 Aug 1995
TL;DR: The studies confirmed that, when using the body centered cubic grid, the number of grid points can be effectively reduced, decreasing the computational and memory demands while preserving the quality of the reconstructed images.
Abstract: Incorporation of spherically-symmetric volume elements (blobs), instead of the conventional voxels, into iterative image reconstruction algorithms, has been found in our previous studies to lead to significant improvement in the quality of the reconstructed images. Furthermore, for three-dimensional (3D) positron emission tomography the 3D algebraic reconstruction technique using blobs can reach comparable or even better quality than the 3D filtered backprojection method after only one cycle through the projection data. The only shortcoming of the blob reconstruction method is an increased computational demand, because of the overlapping nature of the blobs. In our previous studies the blobs were placed on the same 3D simple cubic grid as used for voxel basis functions. For spherically-symmetric basis functions there are more advantageous arrangements of the 3D grid, enabling a more isotropic distribution of the spherical functions in the 3D space and a better packing efficiency of the image spectrum. Our studies confirmed that, when using the body centered cubic grid, the number of grid points can be effectively reduced, decreasing the computational and memory demands while preserving the quality of the reconstructed images. >
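
The body-centered cubic grid mentioned above is simply two interleaved simple cubic lattices, the second offset by half the grid spacing along every axis, which is why fewer grid points cover the same volume at comparable resolution. A tiny sketch of generating such grid points:

```python
import numpy as np

def bcc_grid(n, spacing=1.0):
    """Body-centered cubic grid: two interleaved simple cubic lattices,
    the second offset by half the spacing along each axis."""
    ax = np.arange(n) * spacing
    cubic = np.stack(np.meshgrid(ax, ax, ax, indexing='ij'), -1).reshape(-1, 3)
    return np.vstack([cubic, cubic + spacing / 2.0])
```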

Journal ArticleDOI
TL;DR: A computationally efficient technique for reconstruction of lost transform coefficients at the decoder that takes advantage of the correlation between transformed blocks of the image to minimize blocking artifacts in the image while providing visually pleasing reconstructions is proposed.
Abstract: Transmission of still images and video over lossy packet networks presents a reconstruction problem at the decoder. Specifically, in the case of block-based transform coded images, loss of one or more packets due to network congestion or transmission errors can result in errant or entirely lost blocks in the decoded image. This article proposes a computationally efficient technique for reconstruction of lost transform coefficients at the decoder that takes advantage of the correlation between transformed blocks of the image. Lost coefficients are linearly interpolated from the same coefficients in adjacent blocks subject to a squared edge error criterion, and the resulting reconstructed coefficients minimize blocking artifacts in the image while providing visually pleasing reconstructions. The required computational expense at the decoder per reconstructed block is less than 1.2 times a non-recursive DCT, and as such this technique is useful for low power, low complexity applications that require good visual performance. >
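
As an illustration of the idea (though without the squared-edge-error weighting the paper uses), a lost block's coefficients can be filled by averaging the same coefficients from the available 4-neighbor blocks:

```python
import numpy as np

def fill_lost_block(coeffs, bi, bj):
    """coeffs: (nb_y, nb_x, 8, 8) array of block-DCT coefficients, with the block at
    (bi, bj) lost.  Each lost coefficient is replaced by the mean of the same
    coefficient in the available 4-neighbor blocks (a simplification of the paper's
    edge-error-weighted interpolation)."""
    neighbors = []
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        i, j = bi + di, bj + dj
        if 0 <= i < coeffs.shape[0] and 0 <= j < coeffs.shape[1]:
            neighbors.append(coeffs[i, j])
    coeffs[bi, bj] = np.mean(neighbors, axis=0)
    return coeffs
```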

Journal ArticleDOI
TL;DR: Results from a motion phantom as well as in vivo gadolinium diethylenetriaminopentaacetic acid bolus tracking studies in a rat model demonstrate the high temporal resolution achievable using these techniques as well as the tradeoffs available with nonuniform sampling densities.
Abstract: The imaging of dynamic processes in the body is of considerable interest in interventional examinations as well as kinematic studies, and spiral imaging is a fast magnetic resonance imaging technique ideally suited for such fluoroscopic applications. In this manuscript, magnetic resonance fluoroscopy pulse sequences in which interleaved spirals are used to continuously acquire data and reconstruct one movie frame for each repetition time interval are implemented. For many applications, not all of k-space needs to be updated each frame, and nonuniform k-space sampling can be used to exploit this rapid imaging strategy by allowing variable update rates for different spatial frequencies. Using the appropriate reconstruction algorithm, the temporal updating rate for each spatial frequency is effectively proportional to the corresponding k-space sampling density. Results from a motion phantom as well as in in vivo gadolinium diethylenetriaminopentaacetic acid (Gd-DTPA) bolus tracking studies in a rat model demonstrate the high temporal resolution achievable using these techniques as well as the tradeoffs available with nonuniform sampling densities. This paper focuses on the acquisition of real-time dynamic information, and all images presented are reconstructed retrospectively. The issues of real-time data reconstruction and display are not addressed.

Journal ArticleDOI
Ping Wah Wong
TL;DR: It is shown that the kernel estimation algorithm combined with MAP projection provides the same inverse-halftoning performance as the case where the error diffusion kernel is known.
Abstract: Two different approaches in the inverse halftoning of error-diffused images are considered. The first approach uses linear filtering and statistical smoothing that reconstructs a gray-scale image from a given error-diffused image. The second approach can be viewed as a projection operation, where one assumes the error diffusion kernel is known, and finds a gray-scale image that will be halftoned into the same binary image. Two projection algorithms, viz., minimum mean square error (MMSE) projection and maximum a posteriori probability (MAP) projection, that differ on the way an inverse quantization step is performed, are developed. Among the filtering and the two projection algorithms, MAP projection provides the best performance for inverse halftoning. Using techniques from adaptive signal processing, we suggest a method for estimating the error diffusion kernel from the given halftone. This means that the projection algorithms can be applied in the inverse halftoning of any error-diffused image without requiring any a priori information on the error diffusion kernel. It is shown that the kernel estimation algorithm combined with MAP projection provide the same performance in inverse halftoning compared to the case where the error diffusion kernel is known. >
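
The first approach, in its simplest form, amounts to low-pass filtering the binary halftone. A one-function sketch of that baseline is below; the Gaussian filter and its width are assumptions, and the statistical smoothing and projection steps of the paper are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def inverse_halftone_linear(halftone, sigma=1.5):
    """Crude linear-filtering baseline: low-pass the 0/1 halftone to estimate gray levels."""
    return np.clip(gaussian_filter(halftone.astype(float), sigma) * 255.0, 0, 255)
```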

Proceedings ArticleDOI
21 Jun 1995
TL;DR: This work presents some of the authors' recent work in this area, based on the registration of multiple images (views) in a projective framework; rather than relying on special features to form a projective basis, it directly solves a least-squares estimation problem in the unknown structure and motion parameters, which leads to statistically optimal estimates.
Abstract: There has been a lot of activity recently surrounding the reconstruction of photorealistic 3D scenes and high-resolution images from video sequences. We present some of our recent work in this area, which is based on the registration of multiple images (views) in a projective framework. Unlike most other techniques, we do not rely on special features to form a projective basis. Instead, we directly solve a least-squares estimation problem in the unknown structure and motion parameters, which leads to statistically optimal estimates. We discuss algorithms for both constructing planar and panoramic mosaics, and for projective depth recovery. We also speculate about the ultimate usefulness of projective approaches to visual scene reconstruction.

Journal ArticleDOI
TL;DR: A method for estimating a dense displacement field from sparse displacement measurements based on a multidimensional stochastic model for the smoothness and divergence of the displacement field and the Fisher estimation framework for in vivo heart data is proposed.
Abstract: Magnetic resonance (MR) tagging has shown great potential for noninvasive measurement of the motion of a beating heart. In MR tagged images, the heart appears with a spatially encoded pattern that moves with the tissue. The position of the tag pattern in each frame of the image sequence can be used to obtain a measurement of the 3-D displacement field of the myocardium. The measurements are sparse, however, and interpolation is required to reconstruct a dense displacement field from which measures of local contractile performance such as strain can be computed. Here, the authors propose a method for estimating a dense displacement field from sparse displacement measurements. Their approach is based on a multidimensional stochastic model for the smoothness and divergence of the displacement field and the Fisher estimation framework. The main feature of this method is that both the displacement field model and the resulting estimate equation are defined only on the irregular domain of the myocardium. The authors' methods are validated on both simulated and in vivo heart data.
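
The core idea, estimating a dense field from sparse measurements under a smoothness model, can be illustrated with a 1-D regularized least-squares toy. This is not the authors' myocardium-domain stochastic model or Fisher estimator; the sampling locations, noise level, and smoothness weight below are made up.

```python
import numpy as np

# 1-D toy: estimate a dense displacement profile u on a regular grid from sparse,
# noisy samples d at known locations by penalizing roughness (second differences).
n = 100
idx = np.array([5, 20, 33, 47, 62, 78, 90])                 # sparse measurement locations
d = np.sin(np.linspace(0, np.pi, n))[idx] + 0.02 * np.random.randn(idx.size)

S = np.zeros((idx.size, n)); S[np.arange(idx.size), idx] = 1.0   # sampling operator
D2 = np.diff(np.eye(n), 2, axis=0)                          # second-difference operator
lam = 10.0                                                  # smoothness weight

# Regularized least squares: minimize ||S u - d||^2 + lam * ||D2 u||^2.
u = np.linalg.solve(S.T @ S + lam * (D2.T @ D2), S.T @ d)
```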

Journal ArticleDOI
TL;DR: Several techniques for template matching by means of cross-correlation are reviewed and compared on a common task: locating eyes in a database of faces and approximation networks are introduced in an attempt to improve filter design by the introduction of nonlinearity.