
Showing papers on "Iterative reconstruction published in 1990"


Journal ArticleDOI
TL;DR: A systematic reconstruction-based method for deciding the highest-order Zernike moments required in a classification problem is developed, and the superiority of Zernike moment features over regular moments and moment invariants is experimentally verified.
Abstract: The problem of rotation-, scale-, and translation-invariant recognition of images is discussed. A set of rotation-invariant features is introduced. They are the magnitudes of a set of orthogonal complex moments of the image known as Zernike moments. Scale and translation invariance are obtained by first normalizing the image with respect to these parameters using its regular geometrical moments. A systematic reconstruction-based method for deciding the highest-order Zernike moments required in a classification problem is developed. The quality of the reconstructed image is examined through its comparison to the original one. The orthogonality property of the Zernike moments, which simplifies the process of image reconstruction, makes the suggested feature selection approach practical. Features of each order can also be weighted according to their contribution to the reconstruction process. The superiority of Zernike moment features over regular moments and moment invariants was experimentally verified.
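
The feature-computation step described above can be made concrete with a short sketch: the magnitude of each complex Zernike moment serves as a rotation-invariant feature. The code below is an illustrative NumPy implementation, not the authors' own; the grid mapping, pixel-area approximation, and function names are assumptions.

```python
# Sketch: rotation-invariant features from Zernike moment magnitudes.
# Assumes the image has already been normalized for scale and translation
# (e.g., via its regular geometric moments, as the abstract describes).
import numpy as np
from math import factorial

def radial_poly(n, m, rho):
    """Zernike radial polynomial R_{n,|m|}(rho)."""
    m = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s) * factorial((n + m) // 2 - s)
                * factorial((n - m) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    return R

def zernike_moment(img, n, m):
    """Complex Zernike moment A_{n,m} of an image mapped onto the unit disk."""
    h, w = img.shape
    y, x = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0                                  # integrate over the unit disk only
    basis = radial_poly(n, m, rho) * np.exp(-1j * m * theta)
    area = (2.0 / h) * (2.0 / w)                       # approximate pixel area on [-1, 1]^2
    return (n + 1) / np.pi * np.sum(img[mask] * basis[mask]) * area

# |A_{n,m}| is the rotation-invariant feature; e.g. for all orders up to max_order:
# feats = [abs(zernike_moment(img, n, m)) for n in range(max_order + 1)
#          for m in range(-n, n + 1) if (n - abs(m)) % 2 == 0]
```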

1,971 citations


Journal ArticleDOI
TL;DR: This method builds on the expectation-maximization approach to maximum likelihood reconstruction from emission tomography data, but aims instead at maximum posterior probability estimation, which takes account of prior belief about smoothness in the isotope concentration.
Abstract: A novel method of reconstruction from single-photon emission computerized tomography data is proposed. This method builds on the expectation-maximization (EM) approach to maximum likelihood reconstruction from emission tomography data, but aims instead at maximum posterior probability estimation, which takes account of prior belief about smoothness in the isotope concentration. A novel modification to the EM algorithm yields a practical method. The method is illustrated by an application to data from brain scans.
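
For readers unfamiliar with the EM machinery the paper starts from, a minimal sketch of the standard ML-EM update for emission tomography is given below; `A`, `y`, and `lam` are illustrative names for the system matrix, measured counts, and current activity estimate. The MAP method in the paper modifies the M-step so the smoothness prior also shapes the update; the plain ML form is shown only as a baseline.

```python
# Minimal ML-EM update sketch for emission tomography (baseline the paper builds on).
import numpy as np

def em_update(A, y, lam, eps=1e-12):
    proj = A @ lam                        # forward-project the current estimate
    ratio = y / np.maximum(proj, eps)     # compare with measured counts
    back = A.T @ ratio                    # backproject the count ratios
    sens = A.T @ np.ones_like(y)          # sensitivity (normalization) image
    return lam * back / np.maximum(sens, eps)
```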

1,289 citations


01 Jan 1990
TL;DR: In this paper, the authors focus on image reconstruction, real-time texture mapping, separable algorithms, two-pass transforms, mesh warping, and special effects, and discuss equations for spatial information, interpolation kernels, filtering problems, and fast-warping techniques based on scanline algorithms.
Abstract: This best-selling, original text focuses on image reconstruction, real-time texture mapping, separable algorithms, two-pass transforms, mesh warping, and special effects. The text, containing all original material, begins with the history of the field and continues with a review of common terminology, mathematical preliminaries, and digital image acquisition. Later chapters discuss equations for spatial information, interpolation kernels, filtering problems, and fast-warping techniques based on scanline algorithms.

1,240 citations


Journal ArticleDOI
TL;DR: An algorithm based on weighted recursive least-squares theory is developed in the wavenumber domain, which is efficient because interpolation and noise removal are performed recursively, and is highly suitable for implementation via the massively parallel computational architectures currently available.
Abstract: In several applications it is required to reconstruct a high-resolution noise-free image from multiple frames of undersampled low-resolution noisy images. Using the aliasing relationship between the undersampled frames and the reference image, an algorithm based on weighted recursive least-squares theory is developed in the wavenumber domain. This algorithm is efficient because interpolation and noise removal are performed recursively, and is highly suitable for implementation via the massively parallel computational architectures currently available. Success in the use of the algorithm is demonstrated through various simulated examples.

567 citations


Proceedings ArticleDOI
04 Nov 1990
TL;DR: A texture segmentation algorithm inspired by the multichannel filtering theory for visual information processing in the early stages of the human visual system is presented and appears to perform as predicted by human preattentive texture discrimination.
Abstract: A texture segmentation algorithm inspired by the multichannel filtering theory for visual information processing in the early stages of the human visual system is presented. The channels are characterized by a bank of Gabor filters that nearly uniformly covers the spatial-frequency domain. A systematic filter selection scheme based on reconstruction of the input image from the filtered images is proposed. Texture features are obtained by subjecting each (selected) filtered image to a nonlinear transformation and computing a measure of energy in a window around each pixel. An unsupervised square-error clustering algorithm is then used to integrate the feature images and produce a segmentation. A simple procedure to incorporate spatial adjacency information in the clustering process is proposed. Experiments on images with natural textures as well as artificial textures with identical second- and third-order statistics are reported. The algorithm appears to perform as predicted by human preattentive texture discrimination.
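
The pipeline in the abstract (filter bank, nonlinearity, local energy, clustering) can be sketched compactly. The parameter values, the even-symmetric Gabor form, and the use of k-means below are illustrative assumptions rather than the paper's exact choices.

```python
# Sketch of a multichannel Gabor texture-segmentation pipeline.
import numpy as np
from scipy.ndimage import convolve, uniform_filter
from scipy.cluster.vq import kmeans2

def gabor_kernel(freq, theta, sigma=3.0, size=21):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)          # even-symmetric Gabor filter

def texture_features(img, freqs=(0.1, 0.2, 0.4),
                     thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
                     alpha=0.25, win=17):
    feats = []
    for f in freqs:
        for t in thetas:
            resp = convolve(img, gabor_kernel(f, t), mode='reflect')
            resp = np.tanh(alpha * resp)                     # nonlinear transformation
            feats.append(uniform_filter(np.abs(resp), win))  # local energy in a window
    return np.stack(feats, axis=-1)

def segment(img, k=4):
    F = texture_features(img.astype(float))
    X = F.reshape(-1, F.shape[-1])
    X = (X - X.mean(0)) / (X.std(0) + 1e-9)                  # normalize feature images
    _, labels = kmeans2(X, k, minit='++')                    # unsupervised clustering
    return labels.reshape(img.shape)
```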

426 citations


Journal ArticleDOI
TL;DR: It is shown that the main image reconstruction methods, namely filtered backprojection and iterative reconstruction, can be directly applied to conformation therapy and first theoretical results are presented.
Abstract: The problem of optimizing the dose distribution for conformation radiotherapy with intensity modulated external beams is similar to the problem of reconstructing a 3D image from its 2D projections. In this paper we analyse the relationship between these problems. We show that the main image reconstruction methods, namely filtered backprojection and iterative reconstruction, can be directly applied to conformation therapy. We examine the features of each of these methods with regard to this new application and we present first theoretical results.
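
To make the analogy concrete, the following is a minimal parallel-beam filtered backprojection sketch of the kind of reconstruction operator being transferred to dose optimization; it is an illustration in NumPy, not the authors' planning algorithm, and the normalization and interpolation choices are assumptions.

```python
# Minimal parallel-beam filtered backprojection (ramp filter + backprojection).
import numpy as np

def fbp(sino, angles_deg):
    """sino: (n_angles, n_detectors) parallel-beam sinogram."""
    n_ang, n_det = sino.shape
    ramp = np.abs(np.fft.fftfreq(n_det))                       # Ram-Lak filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1))

    recon = np.zeros((n_det, n_det))
    centre = (n_det - 1) / 2.0
    y, x = np.mgrid[0:n_det, 0:n_det] - centre
    for proj, ang in zip(filtered, np.deg2rad(angles_deg)):
        t = x * np.cos(ang) + y * np.sin(ang) + centre         # detector coordinate per pixel
        lo = np.clip(np.floor(t).astype(int), 0, n_det - 2)    # clamp out-of-range rays
        w = np.clip(t - lo, 0.0, 1.0)
        recon += (1 - w) * proj[lo] + w * proj[lo + 1]         # linear interpolation
    return recon * np.pi / (2 * n_ang)                         # crude overall normalization
```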

411 citations


Journal ArticleDOI
TL;DR: An OSL (one-step late) algorithm is defined that retains the E-step of the EM algorithm but provides an approximate solution to the M-step, and modifications of the OSL algorithm guarantee convergence to the unique maximum of the log posterior function.
Abstract: P.J. Green has defined an OSL (one-step late) algorithm that retains the E-step of the EM algorithm (for image reconstruction in emission tomography) but provides an approximate solution to the M-step. Further modifications of the OSL algorithm guarantee convergence to the unique maximum of the log posterior function. Convergence is proved under a specific set of sufficient conditions. Several of these conditions concern the potential function of the Gibbs prior, and a number of candidate potential functions are identified. Generalization of the OSL algorithm to transmission tomography is also considered.
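
In the notation of the EM sketch given earlier, the one-step-late idea amounts to evaluating the derivative of the Gibbs energy at the current estimate so the update stays multiplicative. The snippet below is a hedged sketch of that form; `grad_U` and the example penalty are illustrative, and the paper's sufficient conditions on the potential function are not enforced here.

```python
# One-step-late (OSL) style MAP update, sketched on top of the plain EM update.
import numpy as np

def osl_update(A, y, lam, grad_U, beta, eps=1e-12):
    proj = A @ lam
    back = A.T @ (y / np.maximum(proj, eps))
    sens = A.T @ np.ones_like(y)
    denom = sens + beta * grad_U(lam)          # penalty gradient taken "one step late"
    return lam * back / np.maximum(denom, eps)

# Example of a quadratic smoothness penalty gradient on a 1-D neighbourhood
# (hypothetical; the paper analyses several candidate potential functions):
# grad_U = lambda lam: 2.0 * (2.0 * lam - np.roll(lam, 1) - np.roll(lam, -1))
```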

408 citations


Journal ArticleDOI
TL;DR: Review of preliminary results by a panel of radiologists indicates that the residual image degradation is tolerable for selected applications when it is critical to acquire more slices in a patient breathing cycle than is possible with conventional scanning.
Abstract: This paper deals with methods of reducing the total time required to acquire the projection data for a set of contiguous computed tomography (CT) images. Normally during the acquisition of a set of slices, the patient is held stationary during data collection and translated to the next axial location during an interscan delay. We demonstrate, using computer simulations and scans of volunteers on a modified scanner, how acceptable image quality is achieved if the patient translation time is overlapped with data acquisition. If the concurrent patient translation is ignored, structured artifacts significantly degrade the resulting reconstructions. We present a number of weighting schemes for use with the conventional convolution/backprojection algorithm to reduce the structured artifacts through projection modulation using the data from individual and multiple slices. We compare the methods with respect to structured artifacts, noise, resolution, and patient motion. Review of preliminary results by a panel of radiologists indicates that the residual image degradation is tolerable for selected applications when it is critical to acquire more slices in a patient breathing cycle than is possible with conventional scanning.

386 citations


Journal ArticleDOI
TL;DR: The expectation-maximization algorithm is proposed to optimize the nonlinear likelihood function in an efficient way and low-order parametric image and blur models are incorporated into the identification method.
Abstract: A maximum-likelihood approach to the blur identification problem is presented. The expectation-maximization algorithm is proposed to optimize the nonlinear likelihood function in an efficient way. In order to improve the performance of the identification algorithm, low-order parametric image and blur models are incorporated into the identification method. The resulting iterative technique simultaneously identifies and restores noisy blurred images.

264 citations


Journal ArticleDOI
22 Oct 1990
TL;DR: In this article, a three-dimensional iterative reconstruction algorithm which incorporates models of the geometric point response in the projector-backprojector is presented for parallel, fan, and cone beam geometries.
Abstract: A three-dimensional iterative reconstruction algorithm which incorporates models of the geometric point response in the projector-backprojector is presented for parallel, fan, and cone beam geometries. The algorithms have been tested on an IBM 3090-600S supercomputer. The iterative EM reconstruction takes 50 times longer with geometric response and photon attenuation models than without modeling these physical effects. An improvement in image quality in the reconstruction of projection data collected from a single-photon-emission computed tomography (SPECT) imaging system has been observed. Significant improvements in image quality are obtained when the geometric point response and attenuation are appropriately compensated. It is observed that resolution is significantly improved with attenuation correction alone. Using phantom experiments, it is observed that the modeling of the spatial system response imposes a smoothing without loss of resolution.

252 citations


Journal ArticleDOI
TL;DR: The methods and algorithms used for volumetric rendering of medical computed tomography data are described in detail and a step-by-step description of the process used to generate two types of images is included.
Abstract: The methods and algorithms used for volumetric rendering of medical computed tomography data are described in detail. Volumetric rendering allows for the use of a mixture paradigm for representation of the volume to be rendered and uses mathematical techniques to reduce or eliminate aliasing. A step-by-step description of the process used to generate two types of images (unshaded and shaded surfaces) is included. The technique generates three-dimensional images of computed tomography data with unprecedented image quality. Images generated with this technique are in routine clinical use.

Journal ArticleDOI
25 May 1990-Science
TL;DR: A method for reconstructing images from projections is described that is unique in that the reconstruction of the internal structure can be carried out for objects that diffuse the incident radiation.
Abstract: A method for reconstructing images from projections is described. The unique aspect of the procedure is that the reconstruction of the internal structure can be carried out for objects that diffuse the incident radiation. The method may be used with photons, phonons, neutrons, and many other kinds of radiation. The procedure has applications to medical imaging, industrial imaging, and geophysical imaging.

Journal ArticleDOI
TL;DR: It is shown that the combination of different surface-rendering algorithms together with cutting and transparent display allow a realistic visualization of the human anatomy.
Abstract: Multi-slice images obtained from computer tomography and magnetic resonance imaging represent a 3D image volume. For its visualization we use a raycasting algorithm working on a gray-scale voxel data model. This model is extended by additional attributes such as membership to an organ or a second imaging modality (“generalized voxel model”). It is shown that the combination of different surface-rendering algorithms together with cutting and transparent display allow a realistic visualization of the human anatomy.

Journal ArticleDOI
TL;DR: A tutorial review of the results given in the 1985 paper, "Image reconstruction from cone-beam projections: necessary and sufficient conditions and reconstruction methods," will be given here, which will include the advances that have been made since that time.
Abstract: Cone-beam tomography is the science of forming images by inverting three-dimensional divergent cone-beam ray-sum data sets. The impetus for its application is its three-dimensional data collection abilities, which result in (1) significant reduction in the time needed to collect a sufficient number of data to produce a three-dimensional image and (2) elimination of the inaccuracy due to misalignment of cross-sectional images. On the other hand, the divergence of cone-beam data has hindered its application. Because of the divergence, the theory that has been developed for fan-beam and parallel two- and three-dimensional tomography does not provide a totally adequate means for analyzing or inverting cone-beam data. Consider the following: In practice, as the data are collected, the vertex of the cone is moved along some path about the object. Which paths, if any, provide enough information to make an inversion possible? Suppose by some means enough information has been obtained. How does one derive an exact formula for inverting this data? To answer these questions a new theory that takes into account the three-dimensional divergence of cone-beam data needs to be developed. In 1985, a paper was published in which several advances in the theory of cone-beam tomography were made. A tutorial review of the results given in the 1985 paper [B. D. Smith, "Image reconstruction from cone-beam projections: necessary and sufficient conditions and reconstruction methods," IEEE Trans. Med. Imag. MI-4, 14-28 (1985)] will be given here. This review will include the advances that have been made since that time. Additionally, a brief review of the contributions made by a number of other researchers will be given.

Journal ArticleDOI
TL;DR: A number of different algorithms have recently been proposed to identify the image and blur model parameters from an image that is degraded by blur and noise.
Abstract: A number of different algorithms have recently been proposed to identify the image and blur model parameters from an image that is degraded by blur and noise. This paper gives an overview of the developments in image and blur identification under a unifying maximum likelihood framework. In fact, we show that various recently published image and blur identification algorithms are different implementations of the same maximum likelihood estimator resulting from different modeling assumptions and/or considerations about the computational complexity. The use of the maximum likelihood estimation in image and blur identification is illustrated by numerical examples.

Journal ArticleDOI
TL;DR: In this article, the authors considered a continuous idealization of the PET reconstruction problem, considered as an example of bivariate density estimation based on indirect observations, and established exact minimax rates of convergence of estimation, for all possible estimators, over suitable smoothness classes of functions.
Abstract: Several algorithms for image reconstruction in positron emission tomography (PET) have been described in the medical and statistical literature. We study a continuous idealization of the PET reconstruction problem, considered as an example of bivariate density estimation based on indirect observations. Given a large sample of indirect observations, we consider the size of the equivalent sample of observations, whose original exact positions would allow equally accurate estimation of the image of interest. Both for indirect and for direct observations, we establish exact minimax rates of convergence of estimation, for all possible estimators, over suitable smoothness classes of functions. A key technical device is a modulus of continuity appropriate to global function estimation. For indirect data and (in practice unobservable) direct data, the rates for mean integrated square error are $n^{-p/(p + 2)}$ and $(n/\log n)^{-p/(p + 1)}$, respectively, for densities in a class corresponding to bounded square-integrable $p$th derivatives. We obtain numerical values for equivalent sample sizes for minimax linear estimators using a slightly modified error criterion. Modifications of the model to incorporate attenuation and the third dimension effect do not affect the minimax rates. The approach of the paper is applicable to a wide class of linear inverse problems.

Journal ArticleDOI
TL;DR: An algorithm that can produce planes or contours through the volume without any loss of the volume resolution of the original data set is presented and is particularly well suited for three-dimensional magnetic resonance images.
Abstract: It is shown how to generate oblique slices from a set of parallel slices. An algorithm that can produce planes or contours through the volume without any loss of the volume resolution of the original data set is presented. The algorithm uses the Fourier-shift theorem and is efficient for calculating large numbers of slices. Although the algorithm is general, it is particularly well suited for three-dimensional magnetic resonance images, as demonstrated with examples.
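
The key step the abstract relies on is that a fractional-voxel translation can be applied, up to the band limit, as a linear phase ramp in the Fourier domain; an oblique plane is then assembled from lines or slices shifted by the appropriate fractional amounts. The helper below is an illustrative sketch of that shift, not the published algorithm.

```python
# Subvoxel shift of a 2-D slice via the Fourier-shift theorem.
import numpy as np

def fourier_shift_2d(slice2d, dy, dx):
    """Shift a 2-D array by (dy, dx) voxels, including fractional amounts."""
    ny, nx = slice2d.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    phase = np.exp(-2j * np.pi * (ky * dy + kx * dx))     # linear phase ramp
    return np.real(np.fft.ifft2(np.fft.fft2(slice2d) * phase))
```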

Journal ArticleDOI
TL;DR: The introduction of a damped version of the original algorithm and the study of its relationship with the generalized least squares algorithm enables the authors to explain the physical behavior of the Simultaneous Iterative Reconstruction Technique.
Abstract: The Simultaneous Iterative Reconstruction Technique is a very suitable technique for inverting large sparse linear systems, since it is iterative and does not need the whole matrix to be stored in the internal computer memory. It was designed for medical tomography, but is nowadays commonly used in seismic tomography. The authors propose to analyze this inversion technique, which nevertheless has a few drawbacks, such as inconsistency and the introduction of nonphysical, a priori information into the solution resulting from an implicit rescaling of the problem. The introduction of a damped version of the original algorithm and the study of its relationship with the generalized least squares algorithm (Tarantola and Valette, 1982) then make it possible to explain the physical behavior of the method. The damped algorithm appears to be more efficient than the original one because it allows one to control the inversion. There are two main options: (1) a fast convergence with, unfortunately, nonphysical, a priori information in the solution and (2) complete control of the a priori information, but at the expense of the convergence speed. Finally, they show how to compute the resolution matrix and an approximation of the estimated model covariance, without passing through memory-consuming matrix inversions.
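
A compact SIRT-type iteration helps fix ideas: each sweep backprojects the traveltime residuals with per-ray weights and averages over the rays crossing each cell. The damping term added to the ray weights below is only a sketch of the kind of control the authors discuss, not their exact damped formulation.

```python
# SIRT-style iteration with a simple damping term in the ray weights.
import numpy as np

def sirt(A, t_obs, n_iter=50, damping=0.0, x0=None):
    """A: (rays x cells) path-length matrix, t_obs: observed traveltimes."""
    n_rays, n_cells = A.shape
    row_w = 1.0 / (np.sum(A * A, axis=1) + damping)      # damped per-ray weights
    col_w = 1.0 / np.maximum((A != 0).sum(axis=0), 1)    # average over rays hitting a cell
    x = np.zeros(n_cells) if x0 is None else x0.astype(float).copy()
    for _ in range(n_iter):
        r = t_obs - A @ x                                # traveltime residuals
        x = x + col_w * (A.T @ (row_w * r))              # backproject and average
    return x
```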

Journal ArticleDOI
TL;DR: In this article, an efficient method for accurately calculating the center-of-rotation, or projection center, for parallel computed tomography projection data, or sinograms, is described.
Abstract: An efficient method for accurately calculating the center-of-rotation, or projection center, for parallel computed tomography projection data, or sinograms, is described. This method uses all the data in the sinogram to estimate the center by a least-squares technique and requires no previous calibration scan. The method also finds the object's center-of-mass without reconstructing its image. Since the method uses the measured data, it is sensitive to noise in the measurements, but that sensitivity is relatively small compared to other techniques. Examples of its use on simulated and actual data are included. For fan-beam data over 360 degrees, two related methods are described to find the center in the presence or absence of a midline offset.
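
The idea behind the least-squares estimate can be sketched directly: the centroid of each parallel projection traces a sinusoid c(θ) = t0 + xc·cosθ + yc·sinθ, where t0 is the centre-of-rotation offset and (xc, yc) the object's centre of mass. Fitting all projections at once recovers both, as in the illustrative code below (names and units are assumptions, not the paper's).

```python
# Centre-of-rotation and centre-of-mass from a parallel-beam sinogram.
import numpy as np

def centre_of_rotation(sino, angles_rad):
    """sino: (n_angles, n_det) sinogram with non-empty projections."""
    n_ang, n_det = sino.shape
    det = np.arange(n_det)
    centroid = (sino * det).sum(axis=1) / sino.sum(axis=1)   # centroid of each projection
    # Least-squares fit of  centroid = t0 + xc*cos(theta) + yc*sin(theta)
    G = np.column_stack([np.ones(n_ang), np.cos(angles_rad), np.sin(angles_rad)])
    (t0, xc, yc), *_ = np.linalg.lstsq(G, centroid, rcond=None)
    return t0, (xc, yc)   # t0 in detector-bin units; (xc, yc) relative to the rotation axis
```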

Journal ArticleDOI
TL;DR: The problem of identifying the image and blur parameters and restoring a noisy blurred image is addressed and two algorithms for identification/restoration, based on two different choices of complete data, are derived and compared.
Abstract: In this paper, the problem of identifying the image and blur parameters and restoring a noisy blurred image is addressed. Specifying the blurring process by its point spread function (PSF), the blur identification problem is formulated as the maximum likelihood estimation (MLE) of the PSF. Modeling the original image and the additive noise as zero-mean Gaussian processes, the MLE of their covariance matrices is also computed. An iterative approach, called the EM (expectation-maximization) algorithm, is used to find the maximum likelihood estimates of the relevant unknown parameters. In applying the EM algorithm, the original image is chosen to be part of the complete data; its estimate is computed in the E-step of the EM iterations and represents the restored image. Two algorithms for identification/restoration, based on two different choices of complete data, are derived and compared. Simultaneous blur identification and restoration is performed by the first algorithm, while the identification of the blur results from a separate minimization in the second algorithm. Experiments with simulated and photographically blurred images are shown.

Journal ArticleDOI
TL;DR: Several aspects of the application of regularization theory in image restoration are presented by extending the applicability of the stabilizing functional approach to 2-D ill-posed inverse problems, and a variety of regularizing filters and iterative regularizing algorithms are proposed.
Abstract: Several aspects of the application of regularization theory in image restoration are presented. This is accomplished by extending the applicability of the stabilizing functional approach to 2-D ill-posed inverse problems. Inverse restoration is formulated as the constrained minimization of a stabilizing functional. The choice of a particular quadratic functional to be minimized is related to the a priori knowledge regarding the original object through a formulation of image restoration as a maximum a posteriori estimation problem. This formulation is based on image representation by certain stochastic partial differential equation image models. The analytical study and computational treatment of the resulting optimization problem are subsequently presented. As a result, a variety of regularizing filters and iterative regularizing algorithms are proposed. A relationship between the regularized solutions proposed and optimal Wiener estimation is identified. The filters and algorithms proposed are evaluated through several experimental results.
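
One of the simplest regularizing filters in this family is the Tikhonov / constrained-least-squares filter, which replaces the unstable inverse filter with a stabilized frequency-domain division. The sketch below uses a discrete Laplacian as the stabilizing operator; the operator choice and the value of alpha are illustrative, not the paper's.

```python
# Tikhonov / constrained-least-squares restoration in the frequency domain.
import numpy as np

def cls_restore(blurred, psf, alpha=1e-2):
    shape = blurred.shape
    H = np.fft.fft2(psf, s=shape)                        # blur transfer function
    lap = np.zeros(shape)
    lap[0, 0] = -4.0                                     # periodic discrete Laplacian kernel
    lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = 1.0
    P = np.fft.fft2(lap)                                 # stabilizing operator
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + alpha * np.abs(P) ** 2)
    return np.real(np.fft.ifft2(F))
```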

Journal ArticleDOI
TL;DR: Both a new iterative grid-search technique and the iterative Fourier-transform algorithm are used to illuminate the relationships among the ambiguous images nearest a given object, error metric minima, and stagnation points of phase-retrieval algorithms.
Abstract: Both a new iterative grid-search technique and the iterative Fourier-transform algorithm are used to illuminate the relationships among the ambiguous images nearest a given object, error metric minima, and stagnation points of phase-retrieval algorithms. Analytic expressions for the subspace of ambiguous solutions to the phase-retrieval problem are derived for 2 × 2 and 3 × 2 objects. Monte Carlo digital experiments using a reduced-gradient search of these subspaces are used to estimate the probability that the worst-case nearest ambiguous image to a given object has a Fourier modulus error of less than a prescribed amount. Probability distributions for nearest ambiguities are estimated for different object-domain constraints.

Journal ArticleDOI
TL;DR: By additionally using a low-resolution intensity image from a telescope with a small aperture, a fine-resolution image of a general object can be reconstructed in a two-step approach with a modified algorithm that employs an expanding weighting function on the Fourier modulus.
Abstract: It is difficult to reconstruct an image of a complex-valued object from the modulus of its Fourier transform (i.e., retrieve the Fourier phase) except in some special cases. By using additionally a low-resolution intensity image from a telescope with a small aperture, a fine-resolution image of a general object can be reconstructed in a two-step approach. First the Fourier phase over the small aperture is retrieved, using the Gerchberg–Saxton algorithm. Then that phase is used, in conjunction with the Fourier modulus data over a large aperture together with a support constraint on the object, to reconstruct a fine-resolution image (retrieve the phase over the large aperture) by the iterative Fourier-transform algorithm. The second step requires a modified algorithm that employs an expanding weighting function on the Fourier modulus.
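
A bare-bones Gerchberg–Saxton loop of the kind used in the first step alternates between the image plane, where the measured low-resolution intensity fixes the modulus, and the Fourier plane, where the modulus measured over the small aperture is imposed. The code below is a generic sketch of that alternation; the random starting phase, iteration count, and absence of a support constraint are assumptions.

```python
# Generic Gerchberg-Saxton iteration between two modulus constraints.
import numpy as np

def gerchberg_saxton(image_modulus, fourier_modulus, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    g = image_modulus * np.exp(2j * np.pi * rng.random(image_modulus.shape))
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = fourier_modulus * np.exp(1j * np.angle(G))   # impose Fourier-plane modulus
        g = np.fft.ifft2(G)
        g = image_modulus * np.exp(1j * np.angle(g))     # impose image-plane modulus
    return np.angle(np.fft.fft2(g))                      # retrieved Fourier phase
```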

Journal ArticleDOI
TL;DR: It is demonstrated that optimal iterative three-dimensional reconstruction approaches can be feasibly applied to emission imaging systems that have highly complex spatial sampling patterns and that generate extremely large numbers of data values.
Abstract: A three-dimensional maximum-likelihood reconstruction method is presented for a prototype electronically collimated single-photon-emission system. The electronically collimated system uses a gamma camera fronted by an array of germanium detectors to detect gamma-ray emissions from a distributed radioisotope source. In this paper we demonstrate that optimal iterative three-dimensional reconstruction approaches can be feasibly applied to emission imaging systems that have highly complex spatial sampling patterns and that generate extremely large numbers of data values. A probabilistic factorization of the system matrix that reduces the computation by several orders of magnitude is derived. We demonstrate a dramatic increase in the convergence speed of the expectation maximization algorithm by sequentially iterating over particular subsets of the data. This result is also applicable to other emission imaging systems.
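
The acceleration mentioned in the last sentence, sequentially iterating over subsets of the data, can be sketched on top of a generic EM update: each sub-iteration uses only a block of the measurements. This pattern was later popularized as ordered-subsets EM; the sketch below is illustrative and ignores the electronically collimated system's specific matrix factorization.

```python
# EM update applied sequentially over subsets (blocks) of the measurements.
import numpy as np

def subset_em(A, y, lam, subsets, eps=1e-12):
    for idx in subsets:                    # e.g. interleaved blocks of ray indices
        As, ys = A[idx], y[idx]
        proj = As @ lam
        back = As.T @ (ys / np.maximum(proj, eps))
        sens = As.T @ np.ones(len(ys))
        lam = lam * back / np.maximum(sens, eps)
    return lam

# Example subset choice (illustrative):
# subsets = np.array_split(np.random.permutation(A.shape[0]), 8)
```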

Journal ArticleDOI
TL;DR: Subsectional and 3D volume imaging are presented as well as a novel phase‐correction method for Hermitian symmetry in “half‐Fourier” echo‐planar imaging.
Abstract: Single-shot echo-planar imaging is notoriously vulnerable to image artifacts, arising from the necessity of alternate echo time reversal during image reconstruction and from static field inhomogeneity. A technique for overcoming these problems, which further permits imaging on systems with relatively poor gradient waveforms, when data are collected always with the same gradient polarity, is presented. Subsectional and 3D volume imaging are presented as well as a novel phase-correction method for Hermitian symmetry in "half-Fourier" echo-planar imaging.

Journal ArticleDOI
TL;DR: Both iterative and one-step algorithms to implement POCS are furnished and the method of projections onto convex sets (POCS) is applied to recover a signal or image from nonuniform samples and prior knowledge.
Abstract: We apply the method of projections onto convex sets (POCS) to recover a signal or image from nonuniform samples and prior knowledge. Both iterative and one-step algorithms to implement POCS are furnished. We discuss our results in relation to the recent work by Sauer and Allebach [IEEE Trans. Circuits Syst. CAS-34, 1497 (1987)], who considered the same problem from a different point of view.
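
For this problem the two convex sets are (1) signals consistent with the known sample values and (2) band-limited signals, and POCS alternates the corresponding projections. The sketch below assumes samples taken at arbitrary integer positions of a length-n signal and a simple low-pass band; it is an illustration of the iterative variant, not the paper's one-step algorithm.

```python
# Two-constraint POCS recovery of a band-limited signal from scattered samples.
import numpy as np

def pocs_recover(samples, sample_idx, n, band, n_iter=100):
    """Keep FFT bins 0..band (and their symmetric counterparts); enforce known samples."""
    x = np.zeros(n)
    x[sample_idx] = samples
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[band + 1:n - band] = 0.0         # projection onto the band-limited set
        x = np.real(np.fft.ifft(X))
        x[sample_idx] = samples            # projection onto the data-consistency set
    return x
```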

Journal ArticleDOI
TL;DR: Two weighting schemes for the projection and backprojection operations in the EM algorithm are studied; the line-length weighting is susceptible to ring artifacts, which are reduced by using interpolated projector-backprojectors.
Abstract: Reconstruction artifacts in cone beam tomography are studied for filtered backprojection (Feldkamp) and iterative EM algorithms. The filtered backprojection algorithm uses a voxel-driven, interpolated backprojection to reconstruct the cone beam data, whereas the iterative EM algorithm performs ray-driven projection and backprojection operations for each iteration. Two weighting schemes for the projection and backprojection operations in the EM algorithm are studied. One weights each voxel by the length of the ray through the voxel and the other equates the value of a voxel to the functional value of the midpoint of the line intersecting the voxel, which is obtained by interpolating between eight neighboring voxels. Cone beam reconstruction artifacts such as rings, bright vertical extremities, and slice-to-slice cross-talk are not found with parallel beam and fan beam geometries. When using filtered backprojection and iterative EM algorithms, the line-length weighting is susceptible to ring artifacts, which are reduced by using interpolated projector-backprojectors.

Journal ArticleDOI
TL;DR: In this article, a photon-address, subplane implementation of the triple correlation (TC) algorithm is evaluated for application to near-real-time, stellar speckle imaging at low-light levels.
Abstract: The performance of a photon-address, subplane implementation of the triple-correlation (TC) algorithm is evaluated for application to near-real-time, stellar speckle imaging at low-light levels. A simple least-squares relaxation algorithm for recovering object phase from the bispectrum is proposed and found to be consistently better than the usual recursive method. Photon-address speckle data from six simulated objects of different degrees of complexity, and from the binary stars β Del and μ Ori, were used in this study. For real-time applications for which computational efficiency is critical, the relaxed two-plane TC algorithm offers excellent performance and ruggedness with respect to object complexity.

Dissertation
01 May 1990
TL;DR: The aim of this investigation is to understand why the problem is difficult and to find numerical solution methods which respect the difficulties encountered; possible routes for solving the outstanding problems are also indicated.
Abstract: This thesis is concerned with Electrical Impedance Tomography (EIT), a medical imaging technique in which pictures of the electrical conductivity distribution of the body are formed from current and voltage data taken on the body surface. The focus of the thesis is on the mathematical aspects of reconstructing the conductivity image from the measured data (the reconstruction problem). The reconstruction problem is particularly difficult and in this thesis it is investigated analytically and numerically. The aim of this investigation is to understand why the problem is difficult and to find numerical solution methods which respect the difficulties encountered. The analytical investigation of this non-linear inverse problem for an elliptic partial differential equation shows that while the forward mapping is analytic the inverse mapping is discontinuous. A rigorous treatment of the linearisation of the problem is given, including proofs of forms of linearisation assumed by previous authors. It is shown that the derivative of the forward problem is compact. Numerical calculations of the singular value decomposition (SVD) are given including plots of singular values and images of the singular functions. The SVD is used to settle a controversy concerning current drive patterns. Reconstruction algorithms are investigated and use of Regularised Newton methods is suggested. A formula for the second derivative of the forward mapping is derived which proves too computationally expensive to calculate. Use of Tychonov regularisation as well as filtered SVD and iterative methods are discussed. The similarities, and differences, between EIT and X-Ray Computed Tomography (X-Ray CT) are illuminated. This leads to an explanation of methods used by other authors for EIT reconstruction based on X-Ray CT. Details of the author's own implementation of a regularised Newton method are given. Finally the idea of adaptive current patterns is investigated. An algorithm is given for the experimental determination of optimal current patterns and the integration of this technique with regularised Newton methods is explored. Promising numerical results from this technique are given. The thesis concludes with a discussion of some outstanding problems in EIT and points to possible routes for their solution. An appendix gives brief details of the design and development of the Oxford Polytechnic Adaptive Current Tomograph.
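
The regularised Newton iteration at the heart of the thesis is usually written in a Gauss-Newton/Tikhonov form. The sketch below shows only that structure: the forward solver and its Jacobian are placeholders (a real EIT forward model needs a finite-element solver), so this is a structural illustration rather than the author's implementation.

```python
# One regularised Gauss-Newton step for an EIT-style nonlinear inverse problem.
import numpy as np

def regularised_newton_step(sigma, v_meas, forward, jacobian, alpha, L=None):
    J = jacobian(sigma)                      # sensitivity of boundary voltages to sigma
    r = v_meas - forward(sigma)              # data residual
    if L is None:
        L = np.eye(len(sigma))               # identity (Tikhonov) regulariser
    lhs = J.T @ J + alpha * (L.T @ L)
    delta = np.linalg.solve(lhs, J.T @ r)    # regularised update direction
    return sigma + delta
```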

Journal ArticleDOI
TL;DR: In this paper, Fermat's principle is used to show that any trial wave-speed model producing a ray path with traveltime smaller than the measured traveltime is not feasible, and for a given set of trial ray paths, non-feasible models can be classified by their total number of 'feasibility violations', i.e. the number of ray paths with traveltime less than that measured.
Abstract: Reconstruction of acoustic, seismic, or electromagnetic wave-speed distributions from first arrival traveltime data is the goal of traveltime tomography. The reconstruction problem is nonlinear, because the ray paths that should be used for tomographic backprojection techniques can depend strongly on the unknown wave speeds. In the author's analysis, Fermat's principle is used to show that trial wave-speed models which produce any ray paths with traveltime smaller than the measured traveltime are not feasible models. Furthermore, for a given set of trial ray paths, non-feasible models can be classified by their total number of 'feasibility violations', i.e. the number of ray paths with traveltime less than that measured. Fermat's principle is subsequently used to convexify the fully nonlinear traveltime tomography problem.
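
The feasibility test implied by Fermat's principle is simple to state in code: under a trial slowness model, any trial ray path whose computed traveltime is smaller than the measured traveltime counts as one feasibility violation. The sketch below assumes a precomputed path-length matrix and is illustrative only.

```python
# Count Fermat feasibility violations for a trial slowness model.
import numpy as np

def feasibility_violations(L, slowness, t_measured):
    """L[i, j]: length of trial path i in cell j; slowness: trial model."""
    t_trial = L @ slowness                    # traveltimes along the trial paths
    return int(np.sum(t_trial < t_measured))  # rays faster than measured violate Fermat
```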