# Papers in "Inverse Problems", 2002

••

TL;DR: In this article, the authors propose an iterative method for solving the split feasibility problem (SFP), called the CQ algorithm, which involves only the orthogonal projections onto C and Q (assumed to be easily calculated) and no matrix inverses.

Abstract: Let C and Q be nonempty closed convex sets in R^N and R^M, respectively, and A an M by N real matrix. The split feasibility problem (SFP) is to find x ∈ C with Ax ∈ Q, if such x exist. An iterative method for solving the SFP, called the CQ algorithm, has the following iterative step: x_{k+1} = P_C(x_k + γ A^T (P_Q − I) A x_k), where γ ∈ (0, 2/L) with L the largest eigenvalue of the matrix A^T A, and P_C and P_Q denote the orthogonal projections onto C and Q, respectively; that is, P_C x minimizes ||c − x|| over all c ∈ C. The CQ algorithm converges to a solution of the SFP or, more generally, to a minimizer of ||P_Q Ac − Ac|| over c in C, whenever such exist. The CQ algorithm involves only the orthogonal projections onto C and Q, which we shall assume are easily calculated, and involves no matrix inverses. If A is normalized so that each row has length one, then L does not exceed the maximum number of nonzero entries in any column of A, which provides a helpful estimate of L for sparse matrices. Particular cases of the CQ algorithm are the Landweber and projected Landweber methods for obtaining exact or approximate solutions of the linear equations Ax = b; the algebraic reconstruction technique of Gordon, Bender and Herman is a particular case of a block-iterative version of the CQ algorithm. One application of the CQ algorithm that is the subject of ongoing work is dynamic emission tomographic image reconstruction, in which the vector x is the concatenation of several images corresponding to successive discrete times. The matrix A and the set Q can then be selected to impose constraints on the behaviour over time of the intensities at fixed voxels, as well as to require consistency (or near consistency) with measured data.
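
The iterative step above is straightforward to sketch. Below is a minimal Python illustration, with C taken as the non-negative orthant and Q as a box so that both projections are simple clips; the toy matrix and sets are illustrative, not from the paper:

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, n_iter=500):
    """CQ iteration x_{k+1} = P_C(x_k + gamma * A.T @ (P_Q - I)(A @ x_k)),
    with gamma in (0, 2/L) and L the largest eigenvalue of A.T @ A."""
    L = np.linalg.norm(A, 2) ** 2          # largest eigenvalue of A^T A
    gamma = 1.0 / L                        # any value in (0, 2/L) works
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        Ax = A @ x
        x = proj_C(x + gamma * (A.T @ (proj_Q(Ax) - Ax)))
    return x

# Toy split feasibility problem: C is the non-negative orthant and Q the box
# [0, 1]^M, so both projections are simple clips ("easily calculated").
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
x = cq_algorithm(A,
                 proj_C=lambda v: np.clip(v, 0.0, None),
                 proj_Q=lambda w: np.clip(w, 0.0, 1.0),
                 x0=np.ones(4))
```

With the step size 1/L the distance from Ax to Q is nonincreasing along the iterates, which is the descent property underlying the convergence statement above.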

884 citations

••

Rice University

TL;DR: In this article, the authors review theoretical and numerical studies of the inverse problem of electrical impedance tomography, which seeks the electrical conductivity and permittivity inside a body, given simultaneous measurements of electrical currents and potentials at the boundary.

Abstract: We review theoretical and numerical studies of the inverse problem of electrical impedance tomography which seeks the electrical conductivity and permittivity inside a body, given simultaneous measurements of electrical currents and potentials at the boundary.

687 citations

••

TL;DR: In this paper, the authors present a general method for estimating the location of small, well-separated scatterers in a randomly inhomogeneous environment using an active sensor array.

Abstract: We present a general method for estimating the location of small, well-separated scatterers in a randomly inhomogeneous environment using an active sensor array. The main features of this method are (i) an arrival time analysis (ATA) of the echo received from the scatterers, (ii) a singular value decomposition of the array response matrix in the frequency domain, and (iii) the construction of an objective function in the time domain that is statistically stable and peaks on the scatterers. By statistically stable we mean here that the objective function is self-averaging over individual realizations of the medium. This is a new approach to array imaging that is motivated by time reversal in random media, analysed in detail previously. It combines features from seismic imaging, like ATA, with frequency-domain signal subspace methodology like multiple signal classification. We illustrate the theory with numerical simulations for ultrasound.

315 citations

••

TL;DR: In this paper, Monte Carlo sampling is used for nonlinear inverse problems where no analytical expression for the forward relation between data and model parameters is available, and where linearization is unsuccessful.

Abstract: Monte Carlo methods have become important in analysis of nonlinear inverse problems where no analytical expression for the forward relation between data and model parameters is available, and where linearization is unsuccessful. In such cases a direct mathematical treatment is impossible, but the forward relation materializes itself as an algorithm allowing data to be calculated for any given model. Monte Carlo methods can be divided into two categories: the sampling methods and the optimization methods. Monte Carlo sampling is useful when the space of feasible solutions is to be explored, and measures of resolution and uncertainty of solution are needed. The Metropolis algorithm and the Gibbs sampler are the most widely used Monte Carlo samplers for this purpose, but these methods can be refined and supplemented in various ways of which the neighbourhood algorithm is a notable example. Monte Carlo optimization methods are powerful tools when searching for globally optimal solutions amongst numerous local optima. Simulated annealing and genetic algorithms have shown their strength in this respect, but they suffer from the same fundamental problem as the Monte Carlo sampling methods: no provably optimal strategy for tuning these methods to a given problem has been found, only a number of approximate methods.
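
As a minimal illustration of Monte Carlo sampling when the forward relation is only available as an algorithm, here is a bare-bones Metropolis sampler applied to a toy nonlinear forward map; the model, noise level, and step size are all illustrative assumptions:

```python
import math
import random

def metropolis(misfit, m0, step, n_steps, seed=1):
    """Random-walk Metropolis: accept a proposal m' with probability
    min(1, exp(misfit(m) - misfit(m'))), misfit being -log posterior."""
    rng = random.Random(seed)
    m, s = list(m0), misfit(m0)
    samples = []
    for _ in range(n_steps):
        m_new = [mi + rng.gauss(0.0, step) for mi in m]
        s_new = misfit(m_new)
        if s_new < s or rng.random() < math.exp(s - s_new):
            m, s = m_new, s_new
        samples.append(m)
    return samples

# Toy nonlinear forward relation g(m) = m1 * exp(m2), "observed" datum
# d = 2.0 with assumed noise sigma = 0.1 (all values illustrative).
d, sigma = 2.0, 0.1
misfit = lambda m: 0.5 * (m[0] * math.exp(m[1]) - d) ** 2 / sigma ** 2
chain = metropolis(misfit, m0=[1.0, 0.0], step=0.05, n_steps=5000)
```

Note that the sampler only ever evaluates `misfit`, i.e. runs the forward algorithm; no analytical expression or linearization is needed, which is exactly the setting described above.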

311 citations

••

TL;DR: In this article, the authors determine the support of an inhomogeneous scattering medium from far field patterns at a fixed (and known) frequency, via a factorization of the far field operator that parallels the MUSIC algorithm for locating point scatterers.

Abstract: We consider the scattering of time-harmonic plane waves by an inhomogeneous medium. The far field patterns u∞ of the scattered waves depend on the index of refraction 1 + q, the frequency, and the directions of observation and incidence, respectively. The inverse problem which is studied in this paper is to determine the support of q from the knowledge of u∞ for all pairs of observation and incidence directions, where the frequency is fixed (and known). Our new approach is based on the far field operator F, which is the integral operator whose kernel is the far field pattern u∞. It depends on the data only and is therefore known (at least approximately). The MUSIC algorithm in signal processing uses the discrete version of F, i.e. the N × N matrix F of far field values for N sampled directions, and determines the locations of the point scatterers. The key idea in both cases is to factorize F and F in forms where the operator S and the matrix S are 'more explicit' than F and F, respectively, and T, T are suitable isomorphisms. In a first theoretical result we show that the ranges of S and F# coincide, where F# is some suitable combination of the real and imaginary parts of F. In the finite-dimensional case a simple argument from matrix theory yields that the ranges of S and F coincide. Since F# is known from the data we can decide for every function on the unit sphere whether it belongs to the range of S or not. We apply this test to the far field patterns of point sources and arrive at an explicit test of whether a point z belongs to the support of q or not. We will demonstrate that this method also leads to a fast visualization of the obstacle.
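
The MUSIC range test described above can be sketched numerically. The following toy example (sensor layout, Green's vector, and scatterer positions are all illustrative, and the point-source vector is simplified rather than the exact 2-D Green's function) builds the multistatic response matrix of two point scatterers and evaluates the MUSIC indicator, which peaks at the scatterer locations:

```python
import numpy as np

# 16 sensors on a line; k is the (arbitrary-units) wavenumber.
N, k = 16, 2.0 * np.pi
sensors = np.stack([np.linspace(-2.0, 2.0, N), np.zeros(N)], axis=1)

def green(z):
    """Simplified point-source vector at the array (illustrative, not the
    exact 2-D Hankel-function Green's function)."""
    r = np.linalg.norm(sensors - z, axis=1)
    return np.exp(1j * k * r) / r

# Multistatic response matrix of two point scatterers (Born approximation,
# unit reflectivities); positions are illustrative.
scatterers = [np.array([0.5, 3.0]), np.array([-0.8, 2.5])]
F = sum(np.outer(green(z), green(z)) for z in scatterers)

# MUSIC range test: g(z) lies in the range of F exactly when z is a
# scatterer, so its projection onto the noise subspace vanishes there.
U, s, _ = np.linalg.svd(F)
noise = U[:, len(scatterers):]

def music(z):
    g = green(np.asarray(z, dtype=float))
    g = g / np.linalg.norm(g)
    return 1.0 / np.linalg.norm(noise.conj().T @ g)
```

Plotting `music` over a grid of points z gives the "fast visualization" mentioned in the abstract: the indicator blows up at the scatterer locations and stays moderate elsewhere.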

277 citations

••

TL;DR: In this paper, the selection of multiple regularization parameters is considered in a generalized L-curve framework, and a minimum distance function (MDF) is developed for approximating the regularization parameter corresponding to the generalized corner of the L-hypersurface.

Abstract: The selection of multiple regularization parameters is considered in a generalized L-curve framework. Multiple-dimensional extensions of the L-curve for selecting multiple regularization parameters are introduced, and a minimum distance function (MDF) is developed for approximating the regularization parameters corresponding to the generalized corner of the L-hypersurface. For the single-parameter (i.e. L-curve) case, it is shown through a model that the regularization parameters minimizing the MDF essentially maximize the curvature of the L-curve. Furthermore, for both the single- and multiple-parameter cases the MDF approach leads to a simple fixed-point iterative algorithm for computing regularization parameters. Examples indicate that the algorithm converges rapidly, thereby making the problem of computing parameters according to the generalized corner of the L-hypersurface computationally tractable.
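
A minimal single-parameter illustration: the corner of a discrete L-curve can be located by maximizing a discrete (Menger) curvature over the sampled points, a crude stand-in for the paper's MDF fixed-point scheme; the toy problem and parameter grid are illustrative:

```python
import math
import numpy as np

def tikhonov(A, b, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def lcurve_corner(A, b, lambdas):
    """Pick the parameter at the point of maximal discrete (Menger)
    curvature of the L-curve (log residual norm vs log solution norm)."""
    pts = []
    for lam in lambdas:
        xl = tikhonov(A, b, lam)
        pts.append((math.log(np.linalg.norm(A @ xl - b)),
                    math.log(np.linalg.norm(xl))))
    best_i, best_k = 1, -1.0
    for i in range(1, len(pts) - 1):
        (x0, y0), (x1, y1), (x2, y2) = pts[i - 1], pts[i], pts[i + 1]
        area = abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))
        d = (math.hypot(x1 - x0, y1 - y0) * math.hypot(x2 - x1, y2 - y1)
             * math.hypot(x2 - x0, y2 - y0))
        kappa = 2.0 * area / d if d > 0.0 else 0.0
        if kappa > best_k:
            best_i, best_k = i, kappa
    return lambdas[best_i]

# Ill-conditioned toy problem (Hilbert-like matrix) with noisy data.
rng = np.random.default_rng(0)
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
b = A @ np.ones(n) + 1e-4 * rng.standard_normal(n)
lam_star = lcurve_corner(A, b, [10.0 ** e for e in range(-12, 1)])
```

The grid search here costs one regularized solve per candidate parameter; the point of the paper's fixed-point algorithm is precisely to avoid such exhaustive sweeps, especially in the multi-parameter case.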

253 citations

••

TL;DR: In this paper, the authors used ultrasonic Lamb wave tomography to rapidly inspect aircraft structures for structural flaws such as disbonds, corrosion, and delaminations.

Abstract: Nondestructive evaluation (NDE) of aerospace structures using traditional methods is a complex, time-consuming process critical to maintaining mission readiness and flight safety. Limited access to corrosion-prone structure and the restricted applicability of available NDE techniques for the detection of hidden corrosion or other damage often compound the challenge. In this paper we discuss our recent work using ultrasonic Lamb wave tomography to address this pressing NDE technology need. Lamb waves are ultrasonic guided waves, which allow large sections of aircraft structures to be rapidly inspected for structural flaws such as disbonds, corrosion and delaminations. Because the velocity of Lamb waves depends on thickness, for example, the travel times of the fundamental Lamb modes can be converted into a thickness map of the inspection region. However, extracting quantitative information from Lamb wave data has always involved highly trained personnel with a detailed knowledge of mechanical waveguide physics. Our work focuses on tomographic reconstruction to produce quantitative maps that can be easily interpreted by technicians or fed directly into structural integrity and lifetime prediction codes. Laboratory measurements discussed here demonstrate that Lamb wave tomography using a square perimeter array of transducers with algebraic reconstruction tomography is appropriate for detecting flaws in aircraft materials. The speed and fidelity of the reconstruction algorithms as well as practical considerations for person-portable array-based systems are discussed in this paper.

180 citations

••

TL;DR: In this article, the authors discuss inverse problems as statistical estimation and inference problems and point to the literature for a variety of techniques and results, covering standard statistical concepts such as bias, variance, mean-squared error, identifiability, consistency, efficiency, and various forms of optimality.

Abstract: What mathematicians, scientists, engineers, and statisticians mean by “inverse problem” differs. For a statistician, an inverse problem is an inference or estimation problem. The data are finite in number and contain errors, as they do in classical estimation or inference problems, and the unknown typically is infinite-dimensional, as it is in nonparametric regression. The additional complication in an inverse problem is that the data are only indirectly related to the unknown. Standard statistical concepts, questions, and considerations such as bias, variance, mean-squared error, identifiability, consistency, efficiency, and various forms of optimality apply to inverse problems. This article discusses inverse problems as statistical estimation and inference problems, and points to the literature for a variety of techniques and results.

173 citations

••

TL;DR: In this paper, a non-iterative inversion method based on the monotonicity of the resistance matrix and its numerical approximations is proposed for resistivity retrieval in electrical resistance tomography (ERT).

Abstract: In this paper, the inverse problem of resistivity retrieval is addressed in the frame of electrical resistance tomography (ERT). The ERT data is a set of measurements of the dc resistances between pairs of electrodes in contact with the conductor under investigation. This paper is focused on a non-iterative inversion method based on the monotonicity of the resistance matrix (and of its numerical approximations). The main features of the proposed inversion method are its low computational cost requiring the solution of O(n) direct problems, where n is the number of parameters used to represent the unknown resistivity, and its very simple numerical implementation.

150 citations

••

TL;DR: A general method to devise maximum likelihood penalized (regularized) algorithms with positivity constraints is proposed and it is shown that the 'prior' image is a key point in the regularization and that the best results are obtained with Tikhonov regularization with a Laplacian operator.

Abstract: In this paper, we propose a general method to devise maximum likelihood penalized (regularized) algorithms with positivity constraints. Moreover, we explain how to obtain 'product forms' of these algorithms. The algorithmic method is based on Kuhn–Tucker first-order optimality conditions. Its application domain is not restricted to the cases considered in this paper, but it can be applied to any convex objective function with linear constraints. It is specially adapted to the case of objective functions with a bounded domain, which completely encloses the domain of the (linear) constraints. The Poisson noise case typical of this last situation and the Gaussian additive noise case are considered and they are associated with various forms of regularization functions, mainly quadratic and entropy terms. The algorithms are applied to the deconvolution of synthetic images blurred by a realistic point spread function similar to that of the Hubble Space Telescope operating in the far-ultraviolet and corrupted by noise. The effect of the relaxation on the convergence speed of the algorithms is analysed. The particular behaviour of the algorithms corresponding to different forms of regularization functions is described. We show that the 'prior' image is a key point in the regularization and that the best results are obtained with Tikhonov regularization with a Laplacian operator. The analysis of the Poisson process and of a Gaussian additive noise leads to similar conclusions. We bring to the fore the close relationship between Tikhonov regularization using derivative operators, and regularization by a distance to a 'default image' introduced by Horne (Horne K 1985 Mon. Not. R. Astron. Soc. 213 129–41).
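
As a sketch of a positivity-preserving 'product form' algorithm of this family, here is the classical (unpenalized) Richardson-Lucy/EM iteration for Poisson likelihoods on a toy 1-D deconvolution problem; the blur matrix is illustrative, and the paper's penalized variants add a regularization term to this multiplicative update:

```python
import numpy as np

def richardson_lucy(A, y, n_iter=200, eps=1e-12):
    """Multiplicative EM/Richardson-Lucy update for Poisson data; assumes
    the columns of A sum to one, which makes the update flux-preserving."""
    x = np.full(A.shape[1], y.sum() / A.shape[1])   # flat positive start
    for _ in range(n_iter):
        x = x * (A.T @ (y / (A @ x + eps)))
    return x

# Toy 1-D deconvolution with a column-normalized Gaussian blur matrix.
n = 20
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
A = np.exp(-0.5 * (i - j) ** 2)
A /= A.sum(axis=0)
x_true = np.zeros(n)
x_true[5], x_true[12] = 3.0, 1.0
y = A @ x_true
x_hat = richardson_lucy(A, y)
```

Because the update multiplies a positive iterate by non-negative factors, positivity is maintained automatically, with no projection step; this is the practical appeal of the product form.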

149 citations

••

TL;DR: In this article, the authors consider synthetic aperture radar and other synthetic aperture imaging systems in which a backscattered wave is measured from a variety of locations and use the tools of microlocal analysis to develop and analyse a three-dimensional imaging algorithm that applies to measurements made on a two-dimensional surface.

Abstract: This paper considers synthetic aperture radar and other synthetic aperture imaging systems in which a backscattered wave is measured from a variety of locations. The paper begins with a (linearized) mathematical model, based on the wave equation, that includes the effects of limited bandwidth and the antenna beam pattern. The model includes antennas with poor directionality, such as are needed in the problem of foliage-penetrating radar, and can also accommodate other effects such as antenna motion and steering. For this mathematical model, we use the tools of microlocal analysis to develop and analyse a three-dimensional imaging algorithm that applies to measurements made on a two-dimensional surface. The analysis shows that simple backprojection should result in an image of the singularities in the scattering region. This image can be improved by following the backprojection with a spatially variable filter that includes not only the antenna beam pattern and source waveform but also a certain geometrical scaling factor called the Beylkin determinant. Moreover, we show how to combine the backprojection and filtering in one step. The resulting algorithm places singularities in the correct locations, with the correct orientations and strengths. The algorithm is analysed to determine which information about the scattering region is reconstructed and to determine the resolution. We introduce a notion of directional resolution to treat the reconstruction of walls and other directional elements. We also determine the fineness with which the data must be sampled in order for the theoretical analysis to apply. Finally, we relate the present analysis to previous work and discuss briefly implications for the case of a single flight track.

••

TL;DR: In this paper, the authors presented two techniques: the iterative time reversal process and the DORT (French acronym for decomposition of the time reversal operator) method for defect detection in titanium billets where the grain structure renders detection difficult.

Abstract: Time reversal techniques are adaptive methods that can be used in nondestructive evaluation to improve flaw detection through inhomogeneous and scattering media. Two techniques are presented: the iterative time reversal process and the DORT (French acronym for decomposition of the time reversal operator) method. In pulse echo mode, iterative time reversal mirrors allow one to accurately control wave propagation and focus selectively on a defect reducing the speckle noise due to the microstructure contribution. The DORT method derives from the mathematical analysis of the iterative time reversal process. Unlike time reversal mirrors, it does not require programmable generators and allows the simultaneous detection and separation of several defects. These two procedures are presented and applied to detection in titanium billets where the grain structure renders detection difficult. Then, they are combined with the simulation code PASS (phased array simulation software) to form images of the samples.

••

TL;DR: In this article, a multiplicative regularization scheme was proposed to deal with the problem of the detection and imaging of homogeneous dielectric objects (the so-called binary objects).

Abstract: In this study we propose a multiplicative regularization scheme to deal with the problem of the detection and imaging of homogeneous dielectric objects (the so-called binary objects). By considering the binary regularizer as a multiplicative constraint for the contrast source inversion (CSI) method we are able to avoid the necessity of determining the regularization parameter before the inversion process has been started. We present some numerical results for some representative two-dimensional configurations, but we also show the three-dimensional reconstruction for a full vectorial electromagnetic problem. We conclude that the binary CSI method is able to obtain reasonable reconstruction results even when a wrong estimate of the material parameter is used. Moreover, generalization of the method allows us to handle inversion of more than one homogeneous scatterer having different material parameters.

••

TL;DR: In this article, the inverse problem of determining the Lamé parameters λ(x) and μ(x) for an isotropic elastic body from its Dirichlet-to-Neumann map was studied.

Abstract: We derive three results on the inverse problem of determining the Lamé parameters λ(x) and μ(x) for an isotropic elastic body from its Dirichlet-to-Neumann map.

••

TL;DR: In this paper, the Lavrentiev regularization method was used to reconstruct the solution of nonlinear ill-posed problems, where instead of noisy data yδ X with || y − y δ|| ≤ δ are given and F : D(F) ⊂ X → X is a monotone nonlinear operator.

Abstract: In this paper we study the method of Lavrentiev regularization to reconstruct solutions x† of nonlinear ill-posed problems F(x) = y, where instead of y noisy data y^δ ∈ X with ||y − y^δ|| ≤ δ are given and F : D(F) ⊂ X → X is a monotone nonlinear operator. In this regularization method regularized solutions x_α^δ are obtained by solving the singularly perturbed nonlinear operator equation F(x) + α(x − x̄) = y^δ with some initial guess x̄. Assuming certain conditions concerning the nonlinear operator F and the smoothness of the element x̄ − x† we derive stability estimates which show that the accuracy of the regularized solutions is order optimal provided that the regularization parameter α has been chosen properly.
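
For intuition, here is a minimal scalar sketch: for a monotone forward map, the regularized equation F(x) + α(x − x̄) = y^δ has a unique solution and can be found by bisection. The forward map, noise level, and the parameter choice α ~ sqrt(δ) below are illustrative assumptions, not the paper's setting:

```python
def lavrentiev(F, y_delta, alpha, x_bar, lo=-10.0, hi=10.0, tol=1e-10):
    """Solve F(x) + alpha*(x - x_bar) = y_delta for a monotone scalar F by
    bisection; the left-hand side is strictly increasing in x."""
    g = lambda x: F(x) + alpha * (x - x_bar) - y_delta
    a, b = lo, hi
    while b - a > tol:
        mid = 0.5 * (a + b)
        if g(mid) < 0.0:
            a = mid
        else:
            b = mid
    return 0.5 * (a + b)

# Illustrative monotone forward map F(x) = x + x**3, true solution
# x_dagger = 1, noise level delta = 0.01, and the choice alpha ~ sqrt(delta).
F = lambda x: x + x ** 3
delta = 0.01
y_delta = F(1.0) + delta
x_alpha = lavrentiev(F, y_delta, alpha=delta ** 0.5, x_bar=0.0)
```

Note that, unlike Tikhonov regularization, no adjoint or normal equation appears: the monotone operator equation is perturbed and solved directly.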

••

TL;DR: In this article, a single-sided autofocusing technique is proposed and its possible use for inspecting layered materials is surveyed, based on linear acoustics and the recently developed mathematical theory of focusing.

Abstract: A new technique—'single-sided' autofocusing—is proposed and its possible use for inspecting layered materials is surveyed. Based on linear acoustics and the recently developed mathematical theory of focusing, single-sided autofocusing answers the question, 'Given single-sided access, how does one focus sound to a point inside a one-dimensional layered medium at a specified time—given that the velocity profile is unknown?'

••

TL;DR: Refinement and coarsening indicators, which are easy to compute from the gradient of the least squares misfit function, are introduced to construct iteratively the zonation and to prevent overparametrization.

Abstract: When estimating hydraulic transmissivity the question of parametrization is of great importance. The transmissivity is assumed to be a piecewise constant space-dependent function and the unknowns are both the transmissivity values and the zonation, the partition of the domain whose parts correspond to the zones where the transmissivity is constant. Refinement and coarsening indicators, which are easy to compute from the gradient of the least squares misfit function, are introduced to construct iteratively the zonation and to prevent overparametrization.

••

Fudan University

TL;DR: In this article, a new simple method for choosing regularization parameters is proposed based on the conditional stability estimate for this ill-posed problem, and it has an almost optimal convergence rate when the exact solution is in H2.

Abstract: In this paper, we discuss a classical ill-posed problem—numerical differentiation by the Tikhonov regularization. Based on the conditional stability estimate for this ill-posed problem, a new simple method for choosing regularization parameters is proposed. We show that it has an almost optimal convergence rate when the exact solution is in H2. The advantages of our method are (1) we can get similar computational results with much less computation, in comparison with other methods, and (2) we can find the discontinuous points numerically.
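
A minimal sketch of Tikhonov-regularized numerical differentiation: recast differentiation as inverting an integration operator with a smoothness penalty. The parameter below is chosen by a simple discrepancy-style rule over a coarse grid, not by the paper's conditional-stability-based rule, and the grid, noise level, and operators are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 101
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
delta = 0.05
f = np.sin(2.0 * np.pi * x) + delta * rng.standard_normal(n)

# Differentiation as an inverse problem: find u with (C u)(t) ~ f(t) - f(0),
# C a (crude rectangle-rule) integration matrix, penalized by ||u'||^2.
C = h * np.tril(np.ones((n, n)), -1)
D = (np.eye(n, k=1) - np.eye(n))[:-1] / h

def derivative(alpha):
    return np.linalg.solve(C.T @ C + alpha * (D.T @ D), C.T @ (f - f[0]))

# Discrepancy-style parameter choice over a coarse grid (illustrative; the
# paper's rule is instead based on a conditional stability estimate).
target = delta * np.sqrt(n)
for alpha in [10.0 ** e for e in range(-8, 1)]:
    u = derivative(alpha)
    if np.linalg.norm(C @ u - (f - f[0])) >= target:
        break
```

Direct finite differencing of the noisy samples amplifies the noise by a factor of order 1/h; the regularized derivative avoids that amplification at the price of some smoothing bias.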

••

TL;DR: In this article, the inverse spectral problem for matrix Dirac-type systems with locally summable potentials is solved on the interval and on the half-line, and a direct procedure to recover the potential from the Weyl–Titchmarsh function is also given.

Abstract: The direct spectral problem for matrix Dirac-type systems with locally summable potentials and the inverse spectral problem for matrix Dirac-type systems with locally bounded potentials are solved on the interval and on the half-line. A direct procedure to recover the potential from the Weyl–Titchmarsh function is also given. Some important corollaries on the high-energy asymptotics of the Weyl–Titchmarsh functions and on local uniqueness for the corresponding inverse problem follow. New results are also obtained for general-type canonical systems.

••

TL;DR: In this paper, a dynamic inverse problem with temporal a priori information is studied, where the investigated object is allowed to change during the measurement procedure, and temporal smoothness is used as a quite general, but for many applications sufficient, a priora information is required.

Abstract: In this paper dynamic inverse problems are studied, where the investigated object is allowed to change during the measurement procedure. In order to achieve reasonable results, temporal a priori information will be considered. Here, 'temporal smoothness' is used as a quite general, but for many applications sufficient, a priori information. This is justified in the case of slight movements during an x-ray scan in computerized tomography, or in the field of current density reconstruction, where one wants to infer the locations of brain activity from electrical measurements on the surface of the head. First, the notion of a dynamic inverse problem is introduced, then we describe how temporal smoothness can be incorporated in the regularization of the problem, and finally an efficient solver and some regularization properties of this solver are presented. This theory will be exploited in three practically relevant applications in a following paper.
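
The idea of coupling frames by a temporal smoothness penalty can be sketched directly (a naive dense solver, not the paper's efficient one; the sizes, forward maps, and penalty weight are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
T, m, n = 6, 3, 5                  # frames, measurements per frame, unknowns
As = [rng.standard_normal((m, n)) for _ in range(T)]
x_true = np.array([np.sin(0.3 * t) * np.ones(n) for t in range(T)])
ys = [As[t] @ x_true[t] + 0.01 * rng.standard_normal(m) for t in range(T)]

# Couple the (individually underdetermined) frames through the temporal
# first-difference penalty lam * sum_t ||x_{t+1} - x_t||^2 and solve the
# stacked normal equations directly.
lam = 1.0
AtA = np.zeros((T * n, T * n))
Aty = np.zeros(T * n)
for t in range(T):
    AtA[t * n:(t + 1) * n, t * n:(t + 1) * n] = As[t].T @ As[t]
    Aty[t * n:(t + 1) * n] = As[t].T @ ys[t]
Dt = np.kron(np.eye(T - 1, T, 1) - np.eye(T - 1, T), np.eye(n))
X = np.linalg.solve(AtA + lam * Dt.T @ Dt, Aty).reshape(T, n)
```

Each frame here has fewer measurements than unknowns, so frame-by-frame inversion is hopeless; the temporal penalty lets the frames share information, which is the point of the dynamic formulation.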

••

TL;DR: In this article, the problem of the range characterization for the two-dimensional attenuated x-ray transformation is solved (on the most essential level) by explicit integral relations.

Abstract: The problem of the range characterization for the two-dimensional attenuated x-ray transformation is solved (on the most essential level) by explicit integral relations.

••

TL;DR: In this article, the authors present a survey of current and proposed schemes and an overview and discussion of roadblocks to successful implementation of some of the more popular approaches, which is intended to serve as a survey for those physicists and applied mathematicians who are not approaching the subject of radar imaging from a formal radar background.

Abstract: The problem of all-weather noncooperative target recognition is of considerable interest to both defense and civil aviation agencies. Furthermore, the discipline of radar inverse scattering spans a set of real-world problems whose complexity can be as simple as perfectly conducting objects in uniform, isotropic, and clutter-free environments but also includes problems that are progressively more difficult. Consequently, this topic is attractive as a practical starting point for general object characterization schemes. But traditional radar target models—upon which most current radar systems are based—are nearing the end of their usefulness. Unfortunately, most traditional research programmes have emphasized instrumentation over image and model analysis and, consequently, the discipline is unnecessarily jargon-laden and device-specific. The result is that recent contributions in advanced imaging, inverse scattering and model fitting methods have often been 'excluded' from mainstream radar efforts. This topical review is intended to serve as a survey of current and proposed schemes and an overview and discussion of roadblocks to successful implementation of some of the more popular approaches. The presentation has been constructed in a manner that (it is hoped) will appeal to those physicists and applied mathematicians who are not approaching the subject of radar imaging from a formal radar background.

••

TL;DR: In this article, it was shown that the linear and quadratic nonlinear terms entering a nonlinear elliptic equation of divergence type can be uniquely identified by the Dirichlet-to-Neumann map.

Abstract: We prove that the linear and quadratic nonlinear terms entering a nonlinear elliptic equation of divergence type can be uniquely identified by the Dirichlet-to-Neumann map. The unique identifiability is proved using the complex geometrical optics solutions.

••

TL;DR: Natterer's book as mentioned in this paper is an excellent starting point for such a journey into the mathematics of computerized tomography, together with some other books from that period that withstood the 'teeth of time' such as those of Herman (New York: Academic Press 1980) and of Kak and Slaney (Piscataway, NJ: IEEE Press 1988).

Abstract: Sixty-two years passed between the publication of Radon's inversion formula in the Berichte Sachsische Akademie der Wissenschaften in Leipzig in 1917 and the 1979 Nobel prize in medicine awarded to Allen M Cormack and Godfrey N Hounsfield for their pioneering contributions to the development of computerized tomography (CT). The field of computerized tomography has since then witnessed progress and development which can be encapsulated in no other words than scientific explosion. Transmission, emission, ultrasound, optical, electrical impedance and magnetic resonance are all CT imaging modalities based on different physical models. But not only diagnostic radiology has been revolutionized by CT, many other scientific and technological areas from non-destructive material testing to seismic imaging in geophysics and from electron microscopy for biological studies to radiation therapy treatment planning have all been transformed and seen new paths being broken by the introduction of the principles of CT. The mathematical formulation of CT commonly leads to an inverse problem putting the underlying physical phenomenon and its model at the mercy of mathematics and mathematical techniques. Inadequate modelling due to insufficient understanding of the physics or due to practical limitations which result in incomplete data collection make the mathematical inversion difficult, and sometimes impossible. Two fundamentally different approaches are available. One way is to use `continuous' modelling in which quantities are represented by functions and their relations by operators between function spaces. In this approach the inversion problem at hand is solved and then the solution formula(e) are discretized for computational implementation. Another route is to first fully discretize the problem at the modelling stage and represent quantities by finite-dimensional vectors and the relations between them by functions over the vector space. 
Then a solution of the fully discretized inverse problem is reached which does not need further discretization of formulae for the computer implementation. Natterer's book handles the mathematics of CT in the `continuous' approach. In the preface to the original 1986 book the author wrote: `In this book I have made an attempt to collect some mathematics which is of possible interest both to the research mathematician who wants to understand the theory and algorithms of CT and to the practitioner who wants to apply CT in his special field of interest'. This attempt, one must say, was indeed very successful. In spite of the further tremendous progress that occurred since the original book appeared, the book is still a treasure for anyone joining or already working in the field. This proves that the choice of topics and the organization of material were very well done and are still useful and relevant. After a brief introduction (Chapter I), the book treats the following topics: the Radon transform and related transforms (Chapter II), sampling and resolution (Chapter III), ill-posedness and accuracy (Chapter IV), reconstruction algorithms (Chapter V), incomplete data (Chapter VI) and, finally, an appendix of mathematical tools (Chapter VII). Except for the addition of a table of errata, the book is an unabridged republication of the original book. Therefore, it is the reader's responsibility to bridge the knowledge and literature gaps from 1986 until today with the aid of other sources. Nonetheless this book is an excellent starting point for such a journey into the mathematics of computerized tomography, together with some other books from that period that withstood the `teeth of time' such as those of Herman (New York: Academic Press 1980) and of Kak and Slaney (Piscataway, NJ: IEEE Press 1988) (see also: Classics in Applied Mathematics, Vol. 33 (Philadelphia, PA: SIAM)). 
Yet another interesting facet of the development of the field of CT was, and still is, the continuous stream of mathematical problems it generates. Some mathematical problems aim at reaching practical solutions for either a `continuous' model or a fully discretized model of the ever newly emerging real-world CT problems. Others are more theoretical extensions to integral geometry, such as reconstruction from integrals over arbitrary manifolds, and to a variety of other fields in pure mathematics (see, e.g., Grinberg E and Quinto E T (ed) 1990 Integral Geometry and Tomography (Providence, RI: American Mathematical Society)). Natterer's book, although admittedly not handling such extensions, is an indispensable tool for anyone planning to direct his efforts in those directions.

••

TL;DR: In this article, a nonlinear anisotropic diffusion filter is proposed to sharpen edges over a wide range of slope scales and reduce noise conservatively with dissipation purely along feature boundaries.

Abstract: Nonlinear anisotropic diffusion filtering is a procedure based on nonlinear evolution partial differential equations which seeks to improve images qualitatively by removing noise while preserving details and even enhancing edges. However, well known implementations are sensitive to parameters which are necessarily tuned to sharpen a narrow range of edge slopes; otherwise, edges are either blurred or staircased. In this work, nonlinear anisotropic diffusion filters have been developed which sharpen edges over a wide range of slope scales and which reduce noise conservatively with dissipation purely along feature boundaries. Specifically, the range of sharpened edge slopes is widened as backward diffusion normal to level sets is balanced with forward diffusion tangent to level sets. Also, noise is reduced by selectively altering the balance toward diminishing normal backward diffusion and particularly toward total variation filtering. The theoretical motivation for the proposed filters is presented together with computational results comparing them with other nonlinear anisotropic diffusion filters on both synthetic images and magnetic resonance images.
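The backward/forward diffusion balance described above builds on the classical nonlinear diffusion framework of Perona and Malik. As a point of reference (a generic sketch, not the authors' proposed filter), one explicit Perona-Malik step can be written as follows; the time step `dt`, conductance width `kappa` and the periodic boundary handling via `np.roll` are illustrative assumptions:

```python
import numpy as np

def perona_malik_step(u, dt=0.1, kappa=0.1):
    """One explicit step of Perona-Malik nonlinear diffusion.

    The conductance g falls off with the local gradient magnitude, so
    smoothing is suppressed across strong edges while noise in flat
    regions is diffused away.
    """
    # neighbour differences (periodic boundary via np.roll; adequate for a sketch)
    dn = np.roll(u, -1, axis=0) - u
    ds = np.roll(u, 1, axis=0) - u
    de = np.roll(u, -1, axis=1) - u
    dw = np.roll(u, 1, axis=1) - u
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)   # Perona-Malik conductance
    return u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

# noisy step edge: diffusion reduces the noise while preserving the edge
rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[:, 16:] = 1.0
noisy = img + 0.05 * rng.standard_normal(img.shape)
out = noisy.copy()
for _ in range(20):
    out = perona_malik_step(out)
```

The paper's contribution lies in how the normal (backward) and tangential (forward) diffusion components are balanced, which this scalar conductance sketch does not capture.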

••

TL;DR: In this article, a fast non-iterative numerical algorithm for the conductivity reconstruction using the internal current vector information is proposed, which is mainly based on efficient numerical construction of equipotential lines.

Abstract: We consider magnetic resonance electrical impedance tomography, which aims to reconstruct the conductivity distribution using the internal current density furnished by magnetic resonance imaging. We show the uniqueness of the conductivity reconstruction with one measurement imposing the Dirichlet boundary condition. We also propose a fast non-iterative numerical algorithm for the conductivity reconstruction using the internal current vector information. The algorithm is mainly based on efficient numerical construction of equipotential lines. The resulting numerical method is stable in the sense that the error of the computed conductivity is linearly proportional to the input noise level and the introduction of internal current data makes the impedance tomography problem well-posed. We present various numerical examples to show the feasibility of using our method.
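The role of equipotential lines can be illustrated with a generic sketch (not the authors' algorithm): since J = -sigma * grad(u), equipotential lines of the potential u are everywhere perpendicular to the measured internal current density, so they can be traced by stepping along the 90-degree rotation of J. The step size and the uniform-current test field below are illustrative assumptions:

```python
import numpy as np

def trace_equipotential(J, x0, step=0.05, n_steps=200):
    """Trace a 2D equipotential line starting from x0 by explicit Euler steps.

    J(x) returns the internal current density vector at the point x.
    Because J is parallel to -grad(u), the tangent of an equipotential
    line is J rotated by 90 degrees.
    """
    pts = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        j = J(pts[-1])
        t = np.array([-j[1], j[0]])          # rotate J by 90 degrees
        t = t / np.linalg.norm(t)
        pts.append(pts[-1] + step * t)
    return np.array(pts)

# uniform current in the +x direction: equipotentials are vertical lines
line = trace_equipotential(lambda x: np.array([1.0, 0.0]), x0=(0.0, 0.0))
```

A practical implementation would interpolate J from measured MRI data on a grid and use a higher-order integrator; this sketch only shows the geometric idea.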

••

TL;DR: In this paper, the concept of dynamic inverse problems was introduced and two procedures, namely STR and STR-C, for the efficient spatio-temporal regularization of such problems were developed.

Abstract: In the first part of this paper the notion of dynamic inverse problems was introduced and two procedures, namely STR and STR-C, for the efficient spatio-temporal regularization of such problems were developed. In this part the application of the new methods to three practically important problems, namely dynamic computerized tomography, dynamic electrical impedance tomography and spatio-temporal current density reconstructions will be presented. Dynamic reconstructions will be carried out in simulated objects which show the quality of the methods and the efficiency of the solution process. A comparison to a Kalman smoother approach will be given for dynEIT.
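For readers unfamiliar with the Kalman-based baseline against which the STR methods are compared, a generic linear Kalman filter predict/update cycle can be sketched as follows (this is not the STR or Kalman smoother implementation of the paper; the scalar random-walk example is illustrative):

```python
import numpy as np

def kalman_step(x, P, y, A, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x, P : prior state estimate and covariance
    y    : new measurement
    A, H : state-transition and observation matrices
    Q, R : process and measurement noise covariances
    """
    # predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# track a scalar random-walk state from noisy direct observations
A = H = np.eye(1)
Q, R = np.array([[1e-4]]), np.array([[0.1]])
x, P = np.zeros(1), np.eye(1)
for y in [0.9, 1.1, 1.0, 0.95, 1.05]:
    x, P = kalman_step(x, P, np.array([y]), A, H, Q, R)
```

A Kalman smoother additionally runs a backward pass over the stored filtered estimates; in dynamic inverse problems the state x would be the (typically very large) discretized image at one time instant, which is what motivates the more efficient STR regularization.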

••

TL;DR: In this paper, the shape of an inhomogeneous scatterer was determined from a knowledge of the time harmonic incident electromagnetic wave and the far-field pattern of the scattered wave with frequency in the resonance region.

Abstract: We consider the inverse scattering problem of determining the shape of an inhomogeneous scatterer in R^3 from a knowledge of the time harmonic incident electromagnetic wave and the far-field pattern of the scattered wave with frequency in the resonance region. The approach used is the linear sampling method, which does not require a priori knowledge of the characteristics of the medium.
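Numerically, the linear sampling method is commonly implemented by solving a discretized far-field equation F g_z = rhs_z for each sampling point z with Tikhonov regularization, and using 1/||g_z|| as an indicator: it is large when z lies inside the scatterer (where the equation is approximately solvable) and small outside. The sketch below shows only this regularized solve; the random far-field matrix stands in for real scattering data, and the regularization parameter `alpha` is an illustrative choice:

```python
import numpy as np

def sampling_indicator(F, rhs, alpha=1e-3):
    """Tikhonov-regularized solve of the discrete far-field equation F g = rhs.

    Solves the normal equations (F* F + alpha I) g = F* rhs and returns
    1/||g||, the linear sampling indicator for one sampling point.
    """
    n = F.shape[1]
    g = np.linalg.solve(F.conj().T @ F + alpha * np.eye(n), F.conj().T @ rhs)
    return 1.0 / np.linalg.norm(g)

# synthetic stand-in for a measured far-field matrix and right-hand side
rng = np.random.default_rng(1)
F = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))
rhs = np.exp(-1j * rng.standard_normal(16))
ind = sampling_indicator(F, rhs)
```

In an actual reconstruction, rhs would be the far-field pattern of a point source placed at z, the indicator would be evaluated on a grid of sampling points, and alpha would be chosen by a discrepancy-type rule.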

••

TL;DR: In this paper, the results of nonlinear inversion schemes such as contrast source inversion are compared to the output of SAFT for a carefully designed ultrasonic experiment, and it is shown via synthetic as well as experimental data that SAFT can be extended to electromagnetic vector fields and to an inhomogeneous and/or anisotropic background material.

Abstract: Convenient tools for nondestructive evaluation of solids can be electromagnetic and/or elastodynamic waves; since their governing equations, including acoustics, exhibit strong structural similarities, the same inversion concepts apply. In particular, the heuristic SAFT algorithm (synthetic aperture focusing technique) can be—and has been—utilized for all kinds of waves, once a scalar approximation can be justified. Relating SAFT to inverse scattering in terms of diffraction tomography, it turns out that linearization is the most stringent inherent approximation. Hence, the results of nonlinear inversion schemes such as contrast source inversion are compared to the output of SAFT for a carefully designed ultrasonic experiment. In addition, it will be shown via synthetic as well as experimental data that SAFT can be extended to electromagnetic vector fields and to an inhomogeneous and/or anisotropic background material.
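In its simplest scalar, monostatic form, SAFT is a delay-and-sum backprojection: each image point accumulates every A-scan at the sample corresponding to the two-way travel time between transducer and image point. The 2D sketch below uses an illustrative sound speed, sampling rate, and geometry (not the experimental configuration of the paper):

```python
import numpy as np

def saft(signals, positions, grid, c=1500.0, fs=1e6):
    """Heuristic SAFT (delay-and-sum) imaging of monostatic pulse-echo A-scans.

    signals[i] : A-scan recorded at transducer position positions[i]
    grid       : (N, 2) array of image points
    Each image point sums every A-scan at the sample index matching the
    two-way travel time to that point (synthetic focusing).
    """
    img = np.zeros(len(grid))
    n = signals.shape[1]
    for s, p in zip(signals, positions):
        r = np.linalg.norm(grid - p, axis=1)            # distances to image points
        idx = np.round(2.0 * r / c * fs).astype(int)    # two-way delay in samples
        valid = idx < n
        img[valid] += s[idx[valid]]
    return img

# synthesize ideal point-scatterer echoes seen from three aperture positions
c, fs = 1500.0, 1e6
positions = np.array([[-0.005, 0.0], [0.0, 0.0], [0.005, 0.0]])
scatterer = np.array([0.0, 0.01])
signals = np.zeros((3, 32))
for i, p in enumerate(positions):
    delay = int(round(2.0 * np.linalg.norm(scatterer - p) / c * fs))
    signals[i, delay] = 1.0
grid = np.array([[0.0, 0.01], [0.0, 0.005]])            # scatterer point + off point
img = saft(signals, positions, grid, c=c, fs=fs)
```

The echoes add coherently only at the true scatterer location, which is the synthetic-focusing effect; the linearization implicit in this summation is exactly the approximation the paper examines against nonlinear contrast source inversion.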

••

TL;DR: In this paper, data assimilation has been applied in an estuarine system in order to implement operational analysis in the management of a coastal zone using a nonlinear state-space model.

Abstract: Data assimilation (DA) has been applied in an estuarine system in order to implement operational analysis in the management of a coastal zone. The dynamical evolution of the estuarine variables and corresponding observations are modelled with a nonlinear state-space model. Two DA methods are used for controlling the evolution of the model state by integrating information from observations. These are the reduced rank square root (RRSQRT) Kalman filter, which is a suboptimal implementation of the extended Kalman filter, and the ensemble Kalman filter which allows for nonlinear evolution of error statistics while still applying a linear equation in the analysis. First, these methods are applied and examined with a simple 1D ecological model. Then the RRSQRT Kalman filter is applied to the 3D hydrodynamics of the Odra lagoon using the model TRIM3D and water elevation measurements from fixed pile stations. Geostatistical modelling ideas are discussed in the application of these algorithms.
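The ensemble Kalman filter mentioned above replaces the full error covariance of the extended Kalman filter with a sample covariance over an ensemble of model states, which is what makes it tractable for large geophysical models. A minimal analysis step with perturbed observations can be sketched as follows (generic EnKF, not the paper's TRIM3D setup; the scalar example is illustrative):

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """Ensemble Kalman filter analysis step with perturbed observations.

    X : (n, m) forecast ensemble of m model states
    y : observation vector; H : observation operator
    R : observation error covariance
    The gain is built from the ensemble sample covariance, so no full
    n x n covariance matrix ever needs to be stored.
    """
    n, m = X.shape
    A = X - X.mean(axis=1, keepdims=True)       # ensemble anomalies
    P = A @ A.T / (m - 1)                       # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    # perturbing the observations keeps the analysis ensemble spread consistent
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=m).T
    return X + K @ (Y - H @ X)

# scalar example: forecast ensemble around 2 pulled toward an observation of 0
rng = np.random.default_rng(2)
X = 2.0 + rng.standard_normal((1, 50))
Xa = enkf_analysis(X, np.array([0.0]), np.eye(1), np.array([[0.1]]), rng)
```

The RRSQRT filter used for the Odra lagoon achieves a similar cost reduction differently, by propagating a low-rank square root of the covariance instead of an ensemble.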