Author

Fadil Santosa

Bio: Fadil Santosa is an academic researcher at the University of Minnesota. His research focuses on inverse problems and optimization problems. He has an h-index of 30 and has co-authored 110 publications receiving 4686 citations. His previous affiliations include the University of Delaware and the University of Florence.


Papers
Journal ArticleDOI
TL;DR: In this paper, the linear inversion (deconvolution) of band-limited reflection seismograms is studied and a new cost functional is proposed that allows robust profile reconstruction in the presence of noise.
Abstract: We present a method for the linear inversion (deconvolution) of band-limited reflection seismograms. A large convolution problem is first broken up into a sequence of smaller problems. Each small problem is then posed as an optimization problem to resolve the ambiguity presented by the band-limited nature of the data. A new cost functional is proposed that allows robust profile reconstruction in the presence of noise. An algorithm for minimizing the cost functional is described. We present numerical experiments which simulate data interpretation using this procedure.
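The optimization idea in this abstract can be sketched with a modern stand-in: an ℓ1-penalized least-squares deconvolution solved by iterative soft-thresholding (ISTA). This is not the authors' cost functional or algorithm; the wavelet, problem sizes, and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200

# Band-limited source wavelet (a Ricker wavelet, chosen for illustration).
n = 25
t = np.arange(n) - n // 2
a = (np.pi * 0.15 * t) ** 2
w = (1 - 2 * a) * np.exp(-a)

# Convolution matrix A such that A @ x == np.convolve(x, w, mode="same").
I = np.eye(N)
A = np.column_stack([np.convolve(I[:, i], w, mode="same") for i in range(N)])

# Sparse reflectivity profile and its noisy band-limited seismogram.
x_true = np.zeros(N)
x_true[[30, 80, 140]] = [1.0, -0.7, 0.5]
y = A @ x_true + 0.01 * rng.standard_normal(N)

# ISTA: gradient step on the data misfit, then soft-thresholding (l1 penalty).
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
lam = 0.05
x = np.zeros(N)
for _ in range(500):
    x = x - A.T @ (A @ x - y) / L
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
```

The soft-thresholding step is what resolves the band-limited ambiguity: among the many profiles consistent with the data, it favors a sparse spike train.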

549 citations

Journal ArticleDOI
TL;DR: This paper uses the level set method, the variational level set calculus, and the projected gradient method to construct a simple numerical approach to optimization problems involving a vibrating system whose resonant frequency or spectral gap is to be optimized subject to constraints on geometry.

517 citations

Journal ArticleDOI
TL;DR: In this paper, an approach for solving inverse problems involving obstacles is proposed, which uses a level-set method which has been shown to be effective in treating problems involving moving boundaries.
Abstract: An approach for solving inverse problems involving obstacles is proposed. The approach uses a level-set method which has been shown to be effective in treating problems involving moving boundaries. We develop two computational methods based on this idea. One method results in a nonlinear time-dependent partial differential equation for the level-set function whose evolution minimizes the residual in the data fit. The second method is an optimization that generates a sequence of level-set functions that reduces the residual. The methods are illustrated in two applications: a deconvolution problem, and a diffraction screen reconstruction problem.
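A one-dimensional toy version of the second method conveys the idea: represent an unknown "obstacle" (here an interval) by the region where a level-set function is negative, and descend on the data-fit residual through a smoothed Heaviside relaxation. This is a minimal sketch, not the authors' formulation; the blur kernel, smoothing width, and step sizes are all illustrative assumptions.

```python
import numpy as np

N = 120
grid = np.arange(N)

# Forward operator: convolution with a narrow Gaussian blur (illustrative).
k = np.exp(-0.5 * (np.arange(-6, 7) / 2.0) ** 2)
k /= k.sum()
blur = lambda u: np.convolve(u, k, mode="same")

# True obstacle: indicator of [30, 70); the data are its blurred image.
u_true = ((grid >= 30) & (grid < 70)).astype(float)
y = blur(u_true)

# Level-set representation: obstacle = {phi < 0}; sigmoid-smoothed indicator.
eps = 2.0
phi = np.abs(grid - 50.0) - 8.0          # initial guess: the interval [42, 58)
dt = 2.0
for _ in range(4000):
    u = 1.0 / (1.0 + np.exp(phi / eps))  # smoothed indicator of {phi < 0}
    r = blur(blur(u) - y)                # gradient of 0.5*||blur(u) - y||^2 in u
    phi += dt * r * u * (1.0 - u) / eps  # descent step on phi (chain rule)
```

Where the data exceed the model, the residual term drives phi downward, growing the obstacle; the boundary {phi = 0} moves until the residual is (approximately) annihilated.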

444 citations

Journal ArticleDOI
TL;DR: The purpose of this investigation is to understand situations under which an enhancement method succeeds in recovering an image from data which are noisy and blurred; the method, due to Rudin and Osher, selects from a class of feasible images the one that has the least total variation.
Abstract: The purpose of this investigation is to understand situations under which an enhancement method succeeds in recovering an image from data which are noisy and blurred. The method in question is due to Rudin and Osher. The method selects, from a class of feasible images, one that has the least total variation. Our investigation is limited to images which have small total variation. We call such images “blocky” as they are commonly piecewise constant (or nearly so) in grey-level values. The image enhancement is applied to three types of problems, each one leading to an optimization problem. The optimization problems are analyzed in order to understand the conditions under which they can be expected to succeed in reconstructing the desired blocky images. We illustrate the main findings of our work in numerical examples.
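A small numerical illustration of least-total-variation recovery: denoise a "blocky" 1D signal by gradient descent on a smoothed total-variation functional. This is a sketch of the principle only, not the Rudin-Osher method itself; the smoothing parameter beta, weight lam, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200

# Blocky (piecewise-constant) truth and its noisy observation.
clean = np.zeros(N)
clean[50:100] = 1.0
clean[120:160] = -0.5
y = clean + 0.3 * rng.standard_normal(N)

# Minimize 0.5*||u - y||^2 + lam * sum sqrt(diff(u)^2 + beta^2) by descent.
lam, beta, dt = 1.0, 0.1, 0.02
u = y.copy()
for _ in range(3000):
    d = np.diff(u)
    g = d / np.sqrt(d ** 2 + beta ** 2)          # smoothed sign of the jumps
    tv_grad = np.concatenate(([-g[0]], g[:-1] - g[1:], [g[-1]]))
    u -= dt * ((u - y) + lam * tv_grad)
```

The TV term penalizes the sum of jump magnitudes, so the minimizer flattens noise within plateaus while keeping the large jumps, which is exactly why the method favors blocky reconstructions.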

286 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present an iterative approach for the retrieval of the unknown cross section of a cylindrical obstacle embedded in a homogeneous medium and illuminated by time-harmonic electromagnetic line sources, where the dielectric parameters of the obstacle and embedding materials are known and piecewise constant.
Abstract: We are concerned with the retrieval of the unknown cross section of a homogeneous cylindrical obstacle embedded in a homogeneous medium and illuminated by time-harmonic electromagnetic line sources. The dielectric parameters of the obstacle and embedding materials are known and piecewise constant; that is, the shape (here, the contour) of the obstacle is sufficient for its full characterization. The inverse scattering problem is then to determine the contour from knowledge of the scattered field measured for several locations of the sources and/or frequencies. An iterative process is implemented: given an initial contour, the contour is progressively evolved so as to minimize the residual in the data fit. The algorithm has two key ingredients. The first is the choice of the transformation applied to the contour: we show that this amounts to designing a velocity field whose expression requires only the solution of an adjoint problem at each step. The second is the use of a level-set function to represent the obstacle; the level-set representation handles splitting or merging of obstacles along the iterative process in a natural way. The evolution of the level-set function is governed by a Hamilton-Jacobi-type equation, which is solved with an appropriate finite-difference scheme. Numerical inversion results obtained from both noiseless and noisy synthetic data illustrate the behaviour of the algorithm for a variety of obstacles.
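The Hamilton-Jacobi evolution mentioned at the end can be demonstrated in one dimension with the standard upwind finite-difference scheme for phi_t + v*|phi_x| = 0 with v = 1, under which the front {phi = 0} moves outward at unit speed. The grid, time step, and initial front below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

L_dom, N = 4.0, 801
x = np.linspace(-L_dom, L_dom, N)
h = x[1] - x[0]
phi = np.abs(x) - 0.5            # zero crossings at x = -0.5 and x = +0.5

dt = 0.5 * h                     # CFL-stable step for unit speed
T = 1.0
for _ in range(int(round(T / dt))):
    dm = (phi - np.roll(phi, 1)) / h     # backward difference
    dp = (np.roll(phi, -1) - phi) / h    # forward difference
    # Osher-Sethian upwind approximation of |phi_x| for outward motion.
    grad = np.sqrt(np.maximum(dm, 0.0) ** 2 + np.minimum(dp, 0.0) ** 2)
    phi = phi - dt * grad
```

After time T = 1 the interval {phi < 0} should have expanded from [-0.5, 0.5] to roughly [-1.5, 1.5]; the upwind choice of differences is what keeps the scheme stable at the kink of phi.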

253 citations


Cited by
Journal ArticleDOI
TL;DR: In this paper, the authors considered the model problem of reconstructing an object from incomplete frequency samples and showed that with probability at least 1 − O(N^{−M}), f can be reconstructed exactly as the solution to the ℓ1 minimization problem.
Abstract: This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes, f(t) = Σ_{τ∈T} f(τ) δ(t − τ), obeying |T| ≤ C_M · (log N)^{−1} · |Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^{−M}), f can be reconstructed exactly as the solution to the ℓ1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T| · log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1 − O(N^{−M}) would in general require a number of frequency samples at least proportional to |T| · log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples, provided that the number of jumps (discontinuities) obeys the condition above, by minimizing other convex functionals such as the total variation of f.
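The recovery-by-convex-programming claim can be tried numerically with a real-valued analogue of the setup: observe a random subset of the orthonormal DCT coefficients of a sparse spike train (standing in for partial Fourier samples), and solve the ℓ1 minimization as a linear program. The sizes and the split-variable LP formulation below are illustrative assumptions, not the paper's experiments.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
N, m, k = 64, 40, 3

# Orthonormal DCT-II matrix (real stand-in for the Fourier transform).
n = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)
C[0, :] /= np.sqrt(2.0)

# Sparse spike train with unknown locations and amplitudes.
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.choice([-1.0, 1.0], k) * (1 + rng.random(k))

# Observe m of the N transform coefficients, chosen at random.
rows = rng.choice(N, m, replace=False)
A = C[rows]
y = A @ x_true

# min ||x||_1  s.t.  A x = y,  via the split x = u - v with u, v >= 0.
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
x_rec = res.x[:N] - res.x[N:]
```

With these comfortable proportions (3 spikes, 40 of 64 coefficients) the LP typically returns the spike train exactly, illustrating the "nonlinear sampling theorem" flavor of the result.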

14,587 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest ℓ1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest ℓ1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
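A toy version of BP-style decomposition in an overcomplete dictionary: a signal that is one spike plus one sinusoidal atom is split over an identity-plus-DCT dictionary by minimizing 0.5*||D a − s||² + lam*||a||₁ with iterative soft-thresholding. This proximal solver is a stand-in for the interior-point LP machinery the paper uses; the dictionary, signal, and lam are illustrative assumptions.

```python
import numpy as np

N = 128
n = np.arange(N)

# Orthonormal DCT-II atoms plus the standard (spike) basis: a 128 x 256 dictionary.
C = np.sqrt(2.0 / N) * np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)
C[0, :] /= np.sqrt(2.0)
D = np.hstack([np.eye(N), C.T])

# Signal: one DCT atom (index 7) plus one spike (index 40).
s = C[7].copy()
s[40] += 1.5

# ISTA for the l1-penalized least-squares decomposition.
lam = 0.05
Lc = np.linalg.norm(D, 2) ** 2        # here D D^T = 2 I, so Lc = 2
a = np.zeros(2 * N)
for _ in range(2000):
    a = a - D.T @ (D @ a - s) / Lc
    a = np.sign(a) * np.maximum(np.abs(a) - lam / Lc, 0.0)
```

Neither MOF (which smears energy over all atoms) nor a single orthogonal basis can represent this signal with two coefficients; the ℓ1 objective concentrates the representation on the spike atom and the cosine atom, which is the sparsity advantage the abstract describes.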

9,950 citations

Journal ArticleDOI
TL;DR: This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition.
Abstract: Conventional approaches to sampling signals or images follow Shannon's theorem: the sampling rate must be at least twice the maximum frequency present in the signal (Nyquist rate). In the field of data conversion, standard analog-to-digital converter (ADC) technology implements the usual quantized Shannon representation - the signal is uniformly sampled at or above the Nyquist rate. This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition. CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use.

9,686 citations

Journal ArticleDOI
TL;DR: If the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program.
Abstract: Suppose we are given a vector f in a class F ⊂ R^N, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|_(n) ≤ R · n^{−1/p}, where R > 0 and p > 0. Suppose that we take measurements y_k = ⟨f, X_k⟩, k = 1, …, K, where the X_k are N-dimensional Gaussian vectors with independent standard normal entries. Then for each f obeying the decay estimate above for some 0 < p < 1.

6,342 citations

Book
01 Jan 2002
TL;DR: As discussed by the authors, the book develops high-resolution finite-volume methods for hyperbolic problems and conservation laws, including nonlinear scalar conservation laws, with the CLAWPACK software as its computational companion.
Abstract: Contents: Preface
1. Introduction
2. Conservation laws and differential equations
3. Characteristics and Riemann problems for linear hyperbolic equations
4. Finite-volume methods
5. Introduction to the CLAWPACK software
6. High resolution methods
7. Boundary conditions and ghost cells
8. Convergence, accuracy, and stability
9. Variable-coefficient linear equations
10. Other approaches to high resolution
11. Nonlinear scalar conservation laws
12. Finite-volume methods for nonlinear scalar conservation laws
13. Nonlinear systems of conservation laws
14. Gas dynamics and the Euler equations
15. Finite-volume methods for nonlinear systems
16. Some nonclassical hyperbolic problems
17. Source terms and balance laws
18. Multidimensional hyperbolic problems
19. Multidimensional numerical methods
20. Multidimensional scalar equations
21. Multidimensional systems
22. Elastic waves
23. Finite-volume methods on quadrilateral grids
Bibliography. Index.
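The simplest member of the family of methods the book develops is the first-order upwind finite-volume scheme for linear advection, q_t + v q_x = 0; a minimal sketch on a periodic grid follows (CLAWPACK itself is not used, and all grid and pulse parameters are illustrative).

```python
import numpy as np

N, v = 400, 1.0
x = (np.arange(N) + 0.5) / N            # cell centers on [0, 1]
h = 1.0 / N
dt = 0.5 * h / v                        # CFL number 0.5

# Initial condition: a square pulse of unit height on (0.1, 0.3).
q = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)
mass0 = q.sum() * h

T = 0.4
for _ in range(int(round(T / dt))):
    flux_left = v * np.roll(q, 1)       # upwind flux at left cell edges (v > 0)
    q = q - dt / h * (v * q - flux_left)
```

Because each cell update is a difference of edge fluxes, the scheme conserves total mass exactly; the pulse advects to the right at speed v (with some first-order numerical smearing), its center of mass moving from 0.2 to 0.6 by time T = 0.4.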

5,791 citations