
Showing papers on "Iterative reconstruction published in 1992"


Journal ArticleDOI
TL;DR: In this paper, the singular value decomposition (SVD) technique is used to factor the measurement matrix into two matrices which represent object shape and camera rotation, respectively, and two of the three translation components are computed in a preprocessing stage.
Abstract: Inferring scene geometry and camera motion from a stream of images is possible in principle, but is an ill-conditioned problem when the objects are distant with respect to their size. We have developed a factorization method that can overcome this difficulty by recovering shape and motion under orthography without computing depth as an intermediate step. An image stream can be represented by the 2FxP measurement matrix of the image coordinates of P points tracked through F frames. We show that under orthographic projection this matrix is of rank 3. Based on this observation, the factorization method uses the singular-value decomposition technique to factor the measurement matrix into two matrices which represent object shape and camera rotation respectively. Two of the three translation components are computed in a preprocessing stage. The method can also handle and obtain a full solution from a partially filled-in measurement matrix that may result from occlusions or tracking failures. The method gives accurate results, and does not introduce smoothing in either shape or motion. We demonstrate this with a series of experiments on laboratory and outdoor image streams, with and without occlusions.
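
The rank-3 observation is easy to reproduce with synthetic data. The sketch below (a toy numpy illustration, not the authors' code; all values are invented) builds a 2F×P measurement matrix from an orthographic camera, removes the per-row translation, and factors it by SVD:

```python
import numpy as np

rng = np.random.default_rng(0)
F, P = 6, 20
S = rng.standard_normal((3, P))                 # true 3-D shape (P points)
W = np.zeros((2 * F, P))                        # 2F x P measurement matrix
for f in range(F):
    a = 0.1 * f
    R = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [0.0, 0.0, 1.0]])             # two rows of an orthographic camera
    W[2 * f:2 * f + 2] = R @ S + rng.standard_normal((2, 1))  # add translation

W_reg = W - W.mean(axis=1, keepdims=True)       # remove per-row translation
U, s, Vt = np.linalg.svd(W_reg, full_matrices=False)
M_hat = U[:, :3] * np.sqrt(s[:3])               # camera rotation (up to an affine ambiguity)
S_hat = np.sqrt(s[:3])[:, None] * Vt[:3]        # object shape (same ambiguity)
```

With noiseless orthographic data the fourth singular value vanishes, so truncating at rank 3 splits the registered matrix cleanly into motion and shape factors.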

2,696 citations


Journal ArticleDOI
Cameron J. Dasch1
TL;DR: It is shown that the Abel inversion, onion-peeling, and filtered backprojection methods can be intercompared without assumptions about the object being deconvolved.
Abstract: It is shown that the Abel inversion, onion-peeling, and filtered backprojection methods can be intercompared without assumptions about the object being deconvolved. If the projection data are taken at equally spaced radial positions, the deconvolved field is given by weighted sums of the projections divided by the data spacing. The weighting factors are independent of the data spacing. All the methods are remarkably similar and have Abelian behavior: the field at a radial location is primarily determined by the weighted differences of a few projections around the radial position. Onion-peeling and an Abel inversion using two-point interpolation are similar. When the Shepp-Logan filtered backprojection method is reduced to one dimension, it is essentially identical to an Abel inversion using three-point interpolation. The weighting factors directly determine the relative noise performance: the three-point Abel inversion is the best, while onion peeling is the worst with approximately twice the noise. Based on ease of calculation, robustness, and noise, the three-point Abel inversion is recommended.
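
The onion-peeling member of this family is simple enough to sketch directly (an illustrative numpy toy under the usual assumption of a field that is constant within annular shells, not Dasch's tabulated weights):

```python
import numpy as np

n, dr = 50, 0.1
A = np.zeros((n, n))                    # chord path lengths through annular shells
for i in range(n):
    y = i * dr                          # lateral position of projection ray i
    for j in range(i, n):               # ray i only crosses shells j >= i
        r_in, r_out = j * dr, (j + 1) * dr
        A[i, j] = 2.0 * (np.sqrt(r_out ** 2 - y ** 2) - np.sqrt(r_in ** 2 - y ** 2))

radii = (np.arange(n) + 0.5) * dr
f_true = np.exp(-radii ** 2)            # axisymmetric field, constant per shell
p = A @ f_true                          # forward projection
f_rec = np.linalg.solve(A, p)           # onion peeling: peel shells outside-in
```

Because the path-length matrix is upper triangular, the inversion is pure back-substitution, which is exactly the "field determined by a few nearby projections" behavior discussed above.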

702 citations


Journal ArticleDOI
TL;DR: A tomographic inversion procedure is described and applied to a synthetic three-dimensional (3-D) seismic refraction data set, demonstrating that tomography is capable of determining a densely sampled velocity model with large velocity contrasts.
Abstract: A tomographic inversion procedure is described and applied to a synthetic three-dimensional (3-D) seismic refraction data set, demonstrating that tomography is capable of determining a densely sampled velocity model with large velocity contrasts. Forward and inverse modeling procedures are chosen to minimize the computational costs of the inversion. Parameterizing the linearized inversion using functions defined along the ray paths, simple backprojection with zero pixel size is shown to exactly solve the linear problem, producing the smallest model for the slowness perturbation. For small grid cells, simple backprojection closely approximates the exact solution and is a sufficient solution for an iterative nonlinear inversion. This eliminates the need to store or solve a large system of linear equations. Accurate first arrival travel times are rapidly computed using a finite difference algorithm. Forward modeling between each simple backprojection allows the procedure to correctly account for the locations of the rays. This becomes more important as the spatial resolution of the model is improved. The computational efficiency of the entire nonlinear procedure allows the model to be densely sampled, providing a spatially well-resolved 3-D tomographic image. The synthetic refraction survey is designed to be similar to a published 3-D survey over the East Pacific Rise. Tests based on this example and others show that 3-D tomography is capable of inverting a large travel time data set for detailed earth structure with large lateral velocity variations and is stable in the presence of noisy data.
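
The "smallest model" property of the exact linear solution can be illustrated in miniature: for a linear travel-time system t = L s, the minimum-norm slowness perturbation comes from the data-space normal equations (a hedged numpy sketch with invented numbers, not the paper's ray-based implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.random((5, 12))                     # ray lengths: 5 rays through 12 cells
dt = rng.standard_normal(5)                 # travel-time residuals
# minimum-norm ("smallest model") slowness perturbation fitting the data exactly
ds = L.T @ np.linalg.solve(L @ L.T, dt)
```

The estimate fits the data exactly and coincides with the pseudoinverse solution, i.e. the smallest model among all exact fits.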

419 citations


Proceedings ArticleDOI
23 Mar 1992
TL;DR: A new two-step procedure is proposed, and it is shown that the POCS formulation presented for the high-resolution image reconstruction problem can also be used as a new method for the restoration of spatially invariant blurred images.
Abstract: The authors address the problem of reconstruction of a high-resolution image from a number of lower-resolution (possibly noisy) frames of the same scene, where the successive frames are uniformly translated versions of each other at subpixel displacements. In particular, two previously proposed methods, a frequency-domain method and a method based on projections onto convex sets (POCS), are extended to take into account the presence of both sensor blurring and observation noise. A new two-step procedure is proposed, and it is shown that the POCS formulation presented for the high-resolution image reconstruction problem can also be used as a new method for the restoration of spatially invariant blurred images. Some simulation results are provided.
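
The POCS mechanism can be sketched in one dimension. The toy below is illustrative only: blur and noise are omitted, and the "frames" are plain decimations at two integer offsets rather than true subpixel shifts. The iteration cycles through the data-consistency sets, projecting the current estimate onto each:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16
x_true = rng.standard_normal(N)             # unknown high-resolution signal
A = [np.eye(N)[k::2] for k in (0, 1)]       # two decimating "frames" (toy shifts)
y = [Ak @ x_true for Ak in A]               # observed low-resolution frames

x = np.zeros(N)
for _ in range(5):
    for Ak, yk in zip(A, y):
        # project onto the convex set {x : Ak x = yk}
        x = x + Ak.T @ np.linalg.solve(Ak @ Ak.T, yk - Ak @ x)
```

Each projection is the minimum-distance correction that makes the estimate consistent with one frame; alternating them converges to a signal consistent with all frames.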

377 citations


Journal ArticleDOI
TL;DR: A two-parameter family of spherical volume elements is described that allows control of the smoothness properties of the represented image, whereas conventional voxels are discontinuous.
Abstract: Spherically symmetric volume elements are alternatives to the more conventional voxels for the construction of volume images in the computer. The image representation, and the calculation of projections of it, are essential components of iterative algorithms for image reconstruction from projection data. A two-parameter family of spherical volume elements is described that allows control of the smoothness properties of the represented image, whereas conventional voxels are discontinuous. The rotational symmetry of the spherical elements leads to efficient calculation of projections of the represented image, as required in iterative reconstruction algorithms. For volume elements whose shape is ellipsoidal (rather than spherical) it is shown that efficient calculation of the projections is also possible by means of an image space transformation.
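
A representative member of such a family is a Kaiser-Bessel-type blob; the order-zero case can be evaluated with numpy alone (the support a and taper alpha below are illustrative values, not parameters from the paper):

```python
import numpy as np

def blob(r, a=2.0, alpha=10.4):
    """Order-0 member of a Kaiser-Bessel-type blob family (support a, taper alpha)."""
    z = np.clip(1.0 - (r / a) ** 2, 0.0, None)
    return np.i0(alpha * np.sqrt(z)) / np.i0(alpha) * (r <= a)

r = np.linspace(0.0, 3.0, 7)
vals = blob(r)                   # smooth bell inside r <= a, exactly zero outside
```

Unlike a voxel's hard edge, the profile falls off smoothly to zero at the support radius, which is what gives the represented image its controllable smoothness.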

321 citations


Journal ArticleDOI
TL;DR: Convolution backprojection (CBP) image reconstruction has been proposed as a means of producing high-resolution synthetic-aperture radar (SAR) images by processing data directly in the polar recording format which is the conventional recording format for spotlight mode SAR.
Abstract: Convolution backprojection (CBP) image reconstruction has been proposed as a means of producing high-resolution synthetic-aperture radar (SAR) images by processing data directly in the polar recording format which is the conventional recording format for spotlight mode SAR. The CBP algorithm filters each projection as it is recorded and then backprojects the ensemble of filtered projections to create the final image in a pixel-by-pixel format. CBP reconstruction produces high-quality images by handling the recorded data directly in polar format. The CBP algorithm requires only 1-D interpolation along the filtered projections to determine the precise values that must be contributed to the backprojection summation from each projection. The algorithm is thus able to produce higher quality images by eliminating the inaccuracies of 2-D interpolation, as well as using all the data recorded in the spectral domain annular sector more effectively. The computational complexity of the CBP algorithm is O(N³).
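
The structure of CBP (1-D filtering of each projection, then pixel-by-pixel accumulation with only 1-D interpolation) can be sketched for a parallel-beam toy, not polar-format SAR data; the loop over N projections, each touching N×N pixels, is the O(N³) cost:

```python
import numpy as np

N = 64
angles = np.linspace(0.0, np.pi, N, endpoint=False)
t = np.arange(N) - N // 2                        # radial sample positions
sino = np.zeros((N, N)); sino[:, N // 2] = 1.0   # projections of a point at the origin

ramp = np.abs(np.fft.fftfreq(N))                 # 1-D ramp filter
xs = np.arange(N) - N // 2
X, Y = np.meshgrid(xs, xs)
img = np.zeros((N, N))
for a, proj in zip(angles, sino):                # filter each projection, then backproject
    fproj = np.real(np.fft.ifft(np.fft.fft(proj) * ramp))
    s = X * np.cos(a) + Y * np.sin(a)            # radial coordinate of every pixel
    img += np.interp(s, t, fproj)                # only 1-D interpolation is needed
```

The point scatterer is recovered at the image center, with the familiar filtered-backprojection sidelobe pattern around it.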

319 citations


Journal ArticleDOI
TL;DR: Filtering methods are described to minimize ghosting artifact that is typical in echo planar imaging and results from computer simulation and experiments will be presented.
Abstract: Echo planar imaging is characterized by scanning the 2D k-space after a single excitation. Different sampling patterns have been proposed. A technically feasible method uses a sinusoidal readout gradient, resulting in measured data that do not sample k-space in an equidistant manner. In order to employ a conventional 2D-FFT image reconstruction, the data have to be converted to a Cartesian grid. This can be done either by interpolation or alternatively by a generalized transformation. Filtering methods are described to minimize the ghosting artifact that is typical in echo planar imaging. Results both from computer simulation and from experiments are presented. Experimental images were obtained using a 2-T whole-body research system.
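
The regridding step can be illustrated in one dimension: a sinusoidal gradient samples k-space uniformly in time but non-equidistantly in k, and the data are interpolated onto a Cartesian grid before the FFT stage (a hedged sketch with a stand-in smooth spectrum, not real scanner data):

```python
import numpy as np

# k-space position under a sinusoidal readout gradient: k(t) ~ 1 - cos(t)
T = np.linspace(0.0, np.pi, 65)              # uniform sampling in time
k_meas = 1.0 - np.cos(T)                     # non-equidistant samples in k

def spectrum(k):                             # stand-in for smooth measured data
    return np.exp(-k)

data = spectrum(k_meas)
k_grid = np.linspace(0.0, 2.0, 65)           # Cartesian grid for the 2D-FFT step
regridded = np.interp(k_grid, k_meas, data)  # simple interpolation regridding
```

For smooth data the interpolation error is small; residual regridding error is one source of the ghosting that the paper's filtering methods target.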

297 citations


Journal ArticleDOI
TL;DR: This work introduces here a method that automatically removes blur introduced by magnetic field inhomogeneity and susceptibility without using a resonant frequency map, making these imaging methods more useful.
Abstract: For several non-2D Fourier transform imaging methods, off-resonant reconstruction does not just cause geometric distortion, but changes the shape of the point spread function and causes blurring. This effect is well known for projection reconstruction and spiral k-space scanning sequences. We introduce here a method that automatically removes blur introduced by magnetic field inhomogeneity and susceptibility without using a resonant frequency map, making these imaging methods more useful. In this method, the raw data are modulated to several different frequencies and reconstructed to create a series of base images. Determination of degree of blur is done by calculating a focusing measure for each point in each base image and a composite image is then constructed using only the unblurred regions from each base image. This method has been successfully applied to phantom and in vivo images using projection-reconstruction and spiral-scan sequences.

240 citations


Journal ArticleDOI
TL;DR: An analysis of the problem of wave-front reconstruction from Shack-Hartmann measurements is presented, and the advantage of using the Karhunen-Loeve functions for computing the higher-order modes of the wave front is shown.
Abstract: An analysis of the problem of wave-front reconstruction from Shack–Hartmann measurements is presented. The wave-front aberration is assumed to result from passage of the wave front through Kolmogorov turbulence. Limitations of using Zernike polynomials as an orthogonal basis for wave-front reconstruction are highlighted, and the advantage of using the Karhunen–Loeve functions for computing the higher-order modes of the wave front is shown.
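
Modal least-squares wave-front reconstruction from slope measurements can be sketched as follows (a toy with three monomial modes, tilt-x, tilt-y, and defocus, rather than Zernike or Karhunen-Loeve functions):

```python
import numpy as np

# modal wave-front fit: measured Shack-Hartmann slopes = gradients of basis modes
x = np.linspace(-1, 1, 10)
X, Y = np.meshgrid(x, x)
# gradients of three low-order modes (tilt x, tilt y, defocus r^2), per axis
Gx = np.stack([np.ones_like(X), np.zeros_like(X), 2 * X], -1).reshape(-1, 3)
Gy = np.stack([np.zeros_like(X), np.ones_like(X), 2 * Y], -1).reshape(-1, 3)
G = np.vstack([Gx, Gy])
a_true = np.array([0.5, -0.2, 0.1])
slopes = G @ a_true                          # simulated lenslet slope measurements
a_hat = np.linalg.lstsq(G, slopes, rcond=None)[0]   # least-squares modal estimate
```

The choice of basis enters through G; the paper's point is that Karhunen-Loeve functions, matched to Kolmogorov statistics, make this fit better behaved for higher-order modes than Zernike polynomials.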

204 citations


Journal ArticleDOI
TL;DR: A fast feature-based block matching algorithm using integral projections for the motion vector estimation is proposed, which reduces the motion estimation computations by a factor of two by calculating the one-dimensional cost functions rather than the two-dimensional ones.
Abstract: Block-by-block motion compensation algorithms are studied for video-conference/video-telephone television signals. A fast feature-based block matching algorithm using integral projections for the motion vector estimation is proposed. The proposed algorithm reduces the motion estimation computations by a factor of two by calculating the one-dimensional cost functions rather than the two-dimensional ones. Also, the low sensitivity of the proposed algorithm to the presence of additive noise is shown experimentally. Simulation results based on the original and noisy image sequences are presented.
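
The idea is that a block's row and column sums (its integral projections) are cheap one-dimensional features for matching. A minimal numpy sketch with an invented pure-translation pair of frames:

```python
import numpy as np

rng = np.random.default_rng(3)
frame0 = rng.random((64, 64))
frame1 = np.roll(frame0, (3, -2), axis=(0, 1))   # pure translation between frames

B = 16
blk = frame1[24:24 + B, 24:24 + B]               # current block
pv, ph = blk.sum(axis=1), blk.sum(axis=0)        # its integral projections (1-D)

best, best_cost = None, np.inf
for my in range(-4, 5):                          # full search over the window
    for mx in range(-4, 5):
        cand = frame0[24 + my:24 + my + B, 24 + mx:24 + mx + B]
        # two 1-D SAD costs instead of one 2-D cost over all B*B pixels
        cost = (np.abs(cand.sum(axis=1) - pv).sum()
                + np.abs(cand.sum(axis=0) - ph).sum())
        if cost < best_cost:
            best, best_cost = (my, mx), cost
```

Each candidate costs 2B absolute differences instead of B², which is the source of the computational saving.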

144 citations


Proceedings ArticleDOI
11 Oct 1992
TL;DR: The authors present a unified mathematical approach that allows one to formulate both linear and nonlinear algorithms in terms of minimization problems related to the so-called K-functionals of harmonic analysis.
Abstract: The authors present a unified mathematical approach that allows one to formulate both linear and nonlinear algorithms in terms of minimization problems related to the so-called K-functionals of harmonic analysis. They then summarize the previously developed mathematics that analyzes the image compression and Gaussian noise removal algorithms.

Journal ArticleDOI
TL;DR: An efficient algorithm based on the Hilbert transform for reconstructing cross-sectional or three-dimensional images from the input images acquired by an interference microscope is described and can be easily implemented with a low-cost frame grabber.
Abstract: We describe an efficient algorithm based on the Hilbert transform for reconstructing cross-sectional or three-dimensional images from the input images acquired by an interference microscope. First the design of this filter is presented, and cross-sectional images of an integrated circuit constructed with this algorithm are demonstrated. It is shown that this Hilbert transform algorithm can be easily implemented with a low-cost frame grabber so that the computation time required for image reconstruction is drastically reduced.
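
The core operation, recovering the fringe envelope via the analytic signal, can be sketched with an FFT-domain Hilbert transform (a toy 1-D interferogram with invented parameters; the peak of the envelope marks the surface height):

```python
import numpy as np

N = 512
z = np.linspace(-5, 5, N)
env_true = np.exp(-z ** 2)              # coherence envelope of the interferogram
fringe = env_true * np.cos(20 * z)      # recorded interference signal

F = np.fft.fft(fringe)
h = np.zeros(N)
h[0] = h[N // 2] = 1.0                  # keep DC and Nyquist once
h[1:N // 2] = 2.0                       # double positive frequencies
env = np.abs(np.fft.ifft(F * h))        # |analytic signal| = envelope estimate
```

Suppressing the negative frequencies and taking the magnitude strips the carrier fringes, leaving the envelope whose maximum localizes the surface.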

Journal ArticleDOI
TL;DR: A frequency-domain method for implementing the synthetic aperture focusing technique is developed and demonstrated using computer simulation and is well suited to reconstructing ultrasonic reflectivity over a volumetric region of space using measurements made over an adjacent two-dimensional aperture.
Abstract: A frequency-domain method for implementing the synthetic aperture focusing technique is developed and demonstrated using computer simulation. As presented, the method is well suited to reconstructing ultrasonic reflectivity over a volumetric region of space using measurements made over an adjacent two-dimensional aperture. Extensive use is made of both one- and two-dimensional Fourier transformations to perform the temporal and spatial correlation required by the technique, making the method well suited to general-purpose computing hardware. Results are presented demonstrating both the lateral and axial resolution achieved by the method. The effect of limiting the reconstruction bandwidth is also demonstrated.

Journal ArticleDOI
TL;DR: An algorithm for reconstructing the surface shape of a nonrigid transparent object, such as water, from the apparent motion of the observed pattern is described, based on the optical and statistical analysis of the distortions.
Abstract: The appearance of a pattern behind a transparent, moving object is distorted by refraction at the moving object's surface. An algorithm for reconstructing the surface shape of a nonrigid transparent object, such as water, from the apparent motion of the observed pattern is described. This algorithm is based on the optical and statistical analysis of the distortions. It consists of four steps: extraction of optical flow, averaging of each point trajectory obtained from the optical flow sequence, calculation of the surface normal using optical characteristics, and reconstruction of the surface. The algorithm is applied to both synthetic and real images to demonstrate its performance.

Journal ArticleDOI
TL;DR: In this article, the Simultaneous Iterative Reconstruction Technique (SIRT) is applied to 3D estimation of near-surface velocity and attenuation distributions from 3D surface-survey field data from the Ouachita frontal thrust zone in southeastern Oklahoma.
Abstract: As a result of the similarity between velocity and attenuation imaging, we have implemented both using the same 3-D tomography software, with simple variable changes. The resulting sets of linear equations are solved by the Simultaneous Iterative Reconstruction Technique (SIRT). The algorithm is applied to 3-D estimation of near-surface velocity and attenuation distributions from 3-D surface-survey field data from the Ouachita frontal thrust zone in southeastern Oklahoma; the images obtained correlate well with the known surface geology. Resolution analysis by computation of point spread functions indicates highest resolution in the direction parallel to the densest distribution of survey points (the receiver lines).
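
SIRT itself is a few lines: instead of applying row-wise (ART/Kaczmarz) corrections one ray at a time, all corrections are computed from the current model and applied together. A hedged numpy toy with a dense, made-up ray matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
L = rng.standard_normal((30, 10))  # ray sensitivity matrix (toy, dense)
s_true = rng.random(10)
t = L @ s_true                     # observed travel times

s = np.zeros(10)
row_norm = (L ** 2).sum(axis=1)    # per-ray normalization
n_rays = L.shape[0]
for _ in range(2000):
    # SIRT: average all row-wise corrections instead of applying them serially
    s += L.T @ ((t - L @ s) / row_norm) / n_rays
```

The simultaneous update makes the result independent of ray ordering, at the price of slower convergence than serial ART.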

Journal ArticleDOI
25 Oct 1992
TL;DR: In this paper, noise properties of emission tomographic reconstructed images are compared using maximum-likelihood-expectation-maximization (ML-EM) and filtered-backprojection (FBP) algorithms.
Abstract: Noise properties of emission tomographic reconstructed images are compared using maximum-likelihood-expectation-maximization (ML-EM) and filtered-backprojection (FBP) algorithms. Noise comparisons are made in terms of the covariance matrix which gives information on the noise magnitude and noise correlations. Noise properties are studied as a function of iteration for ML-EM and as a function of noise apodization filter for FBP. It is shown that FBP reconstruction spreads noise variance from image regions containing high count densities into regions of low count densities. It is demonstrated that at lower FBP filter cutoff frequencies the noise is correlated over relatively long distances and the correlation function has deep negative sidelobes. For ML-EM reconstruction it is shown that little noise variance is spread from high count density regions into low count regions and that at lower iteration number the noise is correlated over shorter distances than for FBP. For ML-EM, the correlation function has no negative sidelobes at low iteration numbers. It is concluded from these observations that ML-EM reconstruction offers properties that may exceed FBP in terms of detectability for certain emission tomographic imaging situations.
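
The ML-EM iteration being compared with FBP has a compact multiplicative form. A toy emission model (invented system matrix and counts, far smaller than any real scanner) shows two of its characteristic properties, positivity and count preservation:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.random((40, 8))
A /= A.sum(axis=0)                  # toy system matrix; columns sum to 1
x_true = rng.random(8) * 100.0
y = rng.poisson(A @ x_true)         # Poisson-noisy projection counts

x = np.ones(8)
for _ in range(200):
    # ML-EM multiplicative update; the usual denominator A^T 1 equals 1 here
    x *= A.T @ (y / (A @ x))
```

Each iteration rescales pixels by backprojected measured/predicted count ratios, so the estimate stays nonnegative and its total matches the measured counts, in contrast to the variance-spreading behavior of FBP described above.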

Journal ArticleDOI
TL;DR: A statistical description of X-ray CT (computerized tomography) imaging, from the projection data to the reconstructed image, is presented and the Gaussianity of the pixel image generated by the convolution (image reconstruction) algorithm is justified.
Abstract: A statistical description of X-ray CT (computerized tomography) imaging, from the projection data to the reconstructed image, is presented. The Gaussianity of the pixel image generated by the convolution (image reconstruction) algorithm is justified. The conditions for two pixel images to be statistically independent (for a given probability) and the conditions for a group of pixel images to be a spatial stationary random process and ergodic in mean and autocorrelations are derived. These properties provide the basis for establishing the stochastic image model and conducting the statistical image analysis of X-ray CT images.

Journal ArticleDOI
TL;DR: Three-dimensional reconstruction from a perspective 2D image using mirrors is addressed; the mirrors form symmetrical relations between the direct image and the mirror images, from which the 3D shape is reconstructed by a plane-symmetry recovery method using the vanishing point.
Abstract: Three-dimensional reconstruction from a perspective 2D image using mirrors is addressed. The mirrors are used to form symmetrical relations between the direct image and mirror images. By finding correspondences between them, the 3D shape can be reconstructed by means of a plane-symmetry recovery method using the vanishing point. Two constraints are used in determining the correspondence. In the case where only one mirror is used, invisible parts both in the direct image and in the mirror image may still remain. Using multiple mirrors, however, occluded parts will decrease or disappear, and occlusion-free object reconstruction becomes possible.

Journal ArticleDOI
TL;DR: A three-dimensional reconstruction method for simultaneous compensation of attenuation, scatter and distance-dependent detector response for single photon emission computed tomography is described and tested by experimental studies, showing improvement in image noise, recognition of object sizes and shapes, and quantification of concentration ratios.
Abstract: A three-dimensional reconstruction method for simultaneous compensation of attenuation, scatter and distance-dependent detector response for single photon emission computed tomography is described and tested by experimental studies. The method determines the attenuation factors recursively along each projection ray starting at the intersected source voxel closest to the detector. The method subtracts the scatter energy window data from the primary energy window data for scatter compensation. The detector response is modelled to be spatially invariant at a constant distance from the detector. The method convolves the source distribution with the modelled response function to compensate for the detector response. The image estimate is smoothed by use of a non-uniform entropy prior before searching for the maximum a posteriori probability solution. The method was tested using projections acquired from a chest phantom by a three-headed detector system with parallel hole collimators. An improvement was shown in image noise, recognition of object sizes and shapes, and quantification of concentration ratios.

Journal ArticleDOI
TL;DR: In this article, a general formula for image reconstruction from cone beam data is described, and applied to various cone beam geometries results in a class of filtered backprojection algorithms.
Abstract: A general formula for image reconstruction from cone beam data is described. Applying this formula to various cone beam geometries results in a class of filtered backprojection algorithms. This formula is known to lead to exact reconstructions in cases in which the cone vertices form certain unbounded curves. An example of such a curve is an infinite straight line. In the case where the curve is a circle, this formula leads to the well-known Feldkamp algorithm, for which the reconstructions are only approximations to the true image. The authors apply this general formula to the cases where the curve is an ellipse and a spiral, and new algorithms are derived. The properties of these algorithms are investigated through studies of the point spread function and reconstructions of computer generated phantom data.

Journal ArticleDOI
25 Oct 1992
TL;DR: In this paper, the spatial resolution in a reconstructed single photon emission computed tomography (SPECT) image is influenced by the intrinsic resolution of the detector, and the photon-counting efficiency of SPECT systems is also determined by intrinsic resolution.
Abstract: The spatial resolution in a reconstructed single photon emission computed tomography (SPECT) image is influenced by the intrinsic resolution of the detector, and the photon-counting efficiency of SPECT systems is also determined by the intrinsic resolution. The authors demonstrate that improvements in detector resolution can lead to both improved spatial resolution in the image and improved counting efficiency compared to conventional systems. This paradoxical conclusion results from optimizing the geometry of a multiple-pinhole coded-aperture system when detectors of very high resolution are available. Simulation studies that demonstrate the image quality that is attainable with such detectors are reported. Reconstructions are performed using an iterative search algorithm on a custom-designed parallel computer. The imaging system is described by a calculated system matrix relating all voxels in the object space to all pixels on the detector. A resolution close to 2 mm is found on the reconstructed images obtained from these computer simulations with clinically reasonable exposure times. This resolution may be even further improved by optimization of the multiple-pinhole aperture.

Journal ArticleDOI
01 Aug 1992
TL;DR: An algorithm that avoids entrapment on nondifferentiable edges by updating connected groups of pixels formed in an intermediate segmentation step is proposed; it substantially increased the rate of convergence and the quality of the reconstruction.
Abstract: The authors present a method for nondifferentiable optimization in maximum a posteriori estimation of computed transmission tomograms. This problem arises in the application of a Markov random field image model with absolute value potential functions. Even though the required optimization is on a convex function, local optimization methods, which iteratively update pixel values, become trapped on the nondifferentiable edges of the function. An algorithm which circumvents this problem by updating connected groups of pixels formed in an intermediate segmentation step is proposed. Experimental results showed that this approach substantially increased the rate of convergence and the quality of the reconstruction.

Journal ArticleDOI
TL;DR: The use of Γ-convergence theory to approximate the functional to be minimized by elliptic functionals, which are more tractable and of relevance to vision applications, is suggested.

Journal ArticleDOI
TL;DR: A statistical method for selecting the Gibbs parameter inMAP image restoration from Poisson data using Gibbs priors is presented and a simple iterative feedback algorithm is presented to statistically select the parameter as the MAP image restoration is being performed.
Abstract: A statistical method for selecting the Gibbs parameter in MAP image restoration from Poisson data using Gibbs priors is presented. The Gibbs parameter determines the degree to which the prior influences the restoration. The presented method yields a MAP restored image, minimally influenced by the prior, for which a statistic falls within an appropriate confidence interval. The method assumes that a close approximation to the blurring function is known. A simple iterative feedback algorithm is presented to statistically select the parameter as the MAP image restoration is being performed. This algorithm is heuristically based on a model reference control formulation, but it requires only a minimal number of iterations for the parameter to settle to its statistically specified value. The performance of the statistical method for selecting the prior parameter and that of the iterative feedback algorithm are demonstrated using both 2-D and 3-D images.

Journal ArticleDOI
TL;DR: A new method of optimized efficiency for the retrospective reconstruction of tomograms is presented, achieved by segmenting the reconstruction process into discrete transformations that are specific to groups of pixels, rather than performing pixel by pixel operations.
Abstract: A new method of optimized efficiency for the retrospective reconstruction of tomograms is presented. The method has been developed for use with isocentric fluoroscopic units and is capable of performing digital tomosynthesis of anatomical planes of user selected orientation and distance from the isocenter. Optimization of efficiency has been achieved by segmenting the reconstruction process into discrete transformations that are specific to groups of pixels, rather than performing pixel by pixel operations. These involve a number of projections of the acquired image matrices as well as parallel translations and summing. Application of this method has resulted in a significant reduction of computing time. The proposed algorithm has been experimentally tested on a radiotherapy simulator unit with the use of a phantom and the obtained results are reported and discussed.
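
Retrospective tomosynthesis by shift-and-add can be sketched in one dimension (a toy with invented parallax values): projections are translated so the selected plane aligns, then summed, so in-plane structure reinforces while off-plane structure blurs out:

```python
import numpy as np

N = 32
in_plane = np.zeros(N); in_plane[16] = 1.0       # feature in the selected plane
off_plane = np.zeros(N); off_plane[8] = 1.0      # feature in another plane
shifts = [-2, -1, 0, 1, 2]                       # parallax of the selected plane
# off-plane structures move with a different parallax (here 2x)
projs = [np.roll(in_plane, s) + np.roll(off_plane, 2 * s) for s in shifts]
# realign each projection for the selected plane, then average
recon = sum(np.roll(p, -s) for p, s in zip(projs, shifts)) / len(shifts)
```

The in-plane feature adds coherently to full amplitude, while the off-plane feature is smeared over five positions at one fifth the amplitude; grouping pixels by their common shift is what the paper exploits for efficiency.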

Proceedings ArticleDOI
03 Jan 1992
TL;DR: An approach to shape-from-shading that is based on a connection with a calculus of variations/optimal control problem is proposed, leading naturally to an algorithm for shape reconstruction that is simple, fast, provably convergent, and does not require regularization.
Abstract: An approach to shape-from-shading that is based on a connection with a calculus of variations/optimal control problem is proposed. An explicit representation corresponding to a shaded image is given for the surface; uniqueness of the surface (under suitable conditions) is an immediate consequence. The approach leads naturally to an algorithm for shape reconstruction that is simple, fast, provably convergent (in many cases, provably convergent to the correct solution), and does not require regularization. Given a continuous image, the algorithm can be proved to converge to the continuous surface solution as the image sampling frequency is taken to infinity. Experimental results are presented for synthetic and real images.

Journal ArticleDOI
TL;DR: The inverse problem involving the determination of a three-dimensional biological structure from images obtained by means of optical-sectioning microscopy is ill posed, and it is shown here that the linear least-squares solution is unstable because of the inversion of small eigenvalues of the microscope's point-spread-function operator.
Abstract: The inverse problem involving the determination of a three-dimensional biological structure from images obtained by means of optical-sectioning microscopy is ill posed. Although the linear least-squares solution can be obtained rapidly by inverse filtering, we show here that it is unstable because of the inversion of small eigenvalues of the microscope's point-spread-function operator. We have regularized the problem by application of the linear-precision-gauge formalism of Joyce and Root [J. Opt. Soc. Am. A 1, 149 (1984)]. In our method the solution is regularized by being constrained to lie in a subspace spanned by the eigenvectors corresponding to a selected number of large eigenvalues. The trade-off between the variance and the regularization error determines the number of eigenvalues inverted in the estimation. The resulting linear method is a one-step algorithm that yields, in a few seconds, solutions that are optimal in the mean-square sense when the correct number of eigenvalues are inverted. Results from sensitivity studies show that the proposed method is robust to noise and to underestimation of the width of the point-spread function. The method proposed here is particularly useful for applications in which processing speed is critical, such as studies of living specimens and time-lapse analyses. For these applications existing iterative methods are impractical without expensive and/or specially designed hardware.
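
The instability of inverse filtering and the truncated-eigenvalue remedy can be reproduced in miniature (a numpy toy with an invented symmetric operator, not the microscope's actual point-spread function):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 40
U, _ = np.linalg.qr(rng.standard_normal((n, n)))   # eigenvectors of the toy operator
eig = np.logspace(0, -12, n)                       # rapidly decaying eigenvalues
H = U @ np.diag(eig) @ U.T                         # symmetric, ill-conditioned "blur"
x_true = rng.standard_normal(n)
y = H @ x_true + 1e-6 * rng.standard_normal(n)     # blurred data + small noise

x_naive = np.linalg.solve(H, y)                    # inverse filter: noise blown up
k = 10                                             # invert only k large eigenvalues
x_reg = U[:, :k] @ ((U[:, :k].T @ y) / eig[:k])    # truncated-eigenvalue estimate
```

Dividing by the small eigenvalues amplifies the noise enormously, while restricting the solution to the leading eigenvector subspace trades a bounded truncation error for stability, which is the variance/regularization-error trade-off described above.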

Proceedings ArticleDOI
29 Dec 1992
TL;DR: In this paper, a forward model is developed in terms of the Green's function of the diffusion approximation to the radiative transfer equation; given a perturbation of the image, the Jacobian of the forward model can be derived.
Abstract: The successful development and clinical use of instruments that perform real-time near-infrared spectroscopy of transilluminated tissue has led to a widespread interest in the development of an imaging modality. The most promising approach uses picosecond laser pulses input on an object Ω, and measures the development of light intensity as a function of time at points on the boundary ∂Ω. The imaging problem is to reconstruct the absorption and scattering coefficients inside Ω. We have proposed the following method for the reconstruction algorithm: A forward model is developed in terms of the Green's function of the diffusion approximation to the radiative transfer equation. Given a perturbation of the image, the Jacobian of the forward model can be derived. Inversion of the Jacobian then gives a perturbation step for a subsequent iteration. Previously we have derived an analytical expression for the Green's function in certain simple geometries, and for a homogeneous initial image. We have now developed a finite element method to extend this to more general geometries and inhomogeneous images, with the inverse of the system stiffness matrix playing the role of the Green's function. Thus it is now possible to proceed past the first iteration. The stability of the reconstruction is presented both for the time-independent case, where the data are the absolute intensity on the boundary ∂Ω, and for the time-dependent case, where the data are the mean time of arrival of light.
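
One perturbation step of the proposed scheme (evaluate the forward model, form its Jacobian, invert to get an update) can be sketched generically; the toy two-parameter forward model below stands in for the diffusion/FEM one, and a finite-difference Jacobian replaces the Green's-function derivation:

```python
import numpy as np

def forward(mu):
    # invented nonlinear forward model: 2 coefficients -> 3 boundary measurements
    return np.array([mu[0] * mu[1], mu[0] + mu[1] ** 2, np.exp(-mu[0])])

mu_true = np.array([0.5, 1.2])
y = forward(mu_true)                 # "measured" boundary data
mu = np.array([0.4, 1.0])            # homogeneous-ish initial image
eps = 1e-6
for _ in range(20):
    # finite-difference Jacobian of the forward model at the current image
    J = np.column_stack([(forward(mu + eps * np.eye(2)[j]) - forward(mu)) / eps
                         for j in range(2)])
    # invert the Jacobian (pseudoinverse) to get the perturbation step
    mu = mu + np.linalg.pinv(J) @ (y - forward(mu))
```

Each pass is one iteration of the paper's scheme; the FEM stiffness-matrix machinery replaces both the forward evaluation and the Jacobian in the real reconstruction.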

Journal ArticleDOI
TL;DR: The aim of medical imaging is to provide, in a non-invasive way, morphological information about a human patient by performing an "experiment" in which the interaction of a source of radiation and the tissue under consideration is measured.
Abstract: The aim of medical imaging is to provide in a non-invasive way morphological information about a human patient. The information is obtained by performing an "experiment" in which the interaction of a source of radiation and the tissue under consideration is measured. From the measured data the desired information has to be computed; hence we face an inverse problem. It is always ill-posed in the sense that small errors in the data can be amplified to large changes in the reconstruction. For developing efficient and stable software we have to study the mathematical model, i.e., the description of the experiment based on physical and engineering knowledge. In optimal situations it is possible to derive "inversion formulas" which relate in a constructive way the data to the searched-for information. Reconstruction algorithms can be found by discretizing these formulas. But of course we have to perform a stability analysis in order to design the software such that the influence of the data noise is reduced as much as possible. If such inversion formulas are unknown or cannot be discretized in an accurate way, direct discretization and iterative methods are used for the computation.

Journal ArticleDOI
TL;DR: An overview of radio tomography approaches to ionospheric remote sensing in the radio-wave range is provided in this article, where the results of some initial experiments that show the possibilities of radio-tomography approaches are described.
Abstract: An overview of tomographic approaches to ionospheric remote sensing in the radio-wave range is provided. Tomographic methods are divided into deterministic and statistical ones. Deterministic tomography problems can be subdivided into ray radio tomography and diffraction radio tomography. The statistical radio tomography approach is used when it is necessary to reconstruct the statistical structure of a great number of inhomogeneities, on the basis of measurements of field statistics (instead of one realization of the reconstruction of an inhomogeneity). Methods of solving radio tomography problems, and their connection with inverse scattering problems, are considered. The results of some initial experiments that show the possibilities of the radio tomography approaches are described. Future applications and problems are discussed.