
Showing papers by "Stanley Osher published in 2013"


Journal ArticleDOI
TL;DR: This article describes a general formalism for obtaining spatially localized solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger’s equation in quantum mechanics.
Abstract: This article describes a general formalism for obtaining spatially localized ("sparse") solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support ("compressed modes"). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size.
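A very rough sketch of the idea for a single mode (not the paper's multi-mode, orthogonality-constrained algorithm): build a small 1D discrete Hamiltonian and apply proximal-gradient steps, i.e. a gradient step on the quadratic energy followed by soft thresholding for the L1 term and re-normalization. Grid size, potential, and the parameters mu and step are illustrative assumptions.

```python
import numpy as np

# Illustrative 1D Hamiltonian H = -(1/2) d^2/dx^2 + V(x) on a periodic grid.
# Grid, potential, and the parameters mu/step are assumptions, not the paper's values.
n, dx = 200, 0.1
x = np.arange(n) * dx
V = 0.5 * (x - x.mean()) ** 2                    # harmonic potential
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / dx ** 2
lap[0, -1] = lap[-1, 0] = 1.0 / dx ** 2          # periodic boundary
H = -0.5 * lap + np.diag(V)

mu, step = 10.0, 1e-3                            # L1 weight is 1/mu
psi = np.random.randn(n)
psi /= np.linalg.norm(psi) * np.sqrt(dx)         # normalize so that sum(psi^2)*dx = 1
for _ in range(5000):
    z = psi - step * (H @ psi)                   # gradient step on <psi, H psi>
    z = np.sign(z) * np.maximum(np.abs(z) - step / mu, 0.0)   # soft threshold (L1 prox)
    nz = np.linalg.norm(z) * np.sqrt(dx)
    psi = z / nz if nz > 0 else psi              # re-impose the normalization constraint
print("fraction of grid points in the support:", np.mean(np.abs(psi) > 1e-8))
```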

192 citations


Journal ArticleDOI
TL;DR: This work investigates the approximate dynamics of several differential equations when the solutions are restricted to a sparse subset of a given basis and finds that this method successfully reduces the dynamics of convection equations, diffusion equations, weak shocks, and vorticity equations with high-frequency source terms.
Abstract: We investigate the approximate dynamics of several differential equations when the solutions are restricted to a sparse subset of a given basis. The restriction is enforced at every time step by simply applying soft thresholding to the coefficients of the basis approximation. By reducing or compressing the information needed to represent the solution at every step, only the essential dynamics are represented. In many cases, there are natural bases derived from the differential equations, which promote sparsity. We find that our method successfully reduces the dynamics of convection equations, diffusion equations, weak shocks, and vorticity equations with high-frequency source terms.
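As a toy illustration of the thresholding step (a sketch under assumed parameters, not the authors' implementation), one can evolve a 1D heat equation exactly in Fourier space and soft-threshold the Fourier coefficients after every time step:

```python
import numpy as np

n, L, dt, nu, lam = 256, 2 * np.pi, 1e-3, 0.1, 1e-3   # assumed grid and parameters
x = np.linspace(0, L, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)            # angular wavenumbers
u = np.exp(-10 * (x - np.pi) ** 2) + 0.1 * np.random.randn(n)

def soft(c, t):
    """Soft thresholding of (complex) coefficients by t."""
    mag = np.abs(c)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-30)) * c, 0.0)

for _ in range(1000):
    uhat = np.fft.fft(u)
    uhat *= np.exp(-nu * k ** 2 * dt)   # exact diffusion step in the Fourier basis
    uhat = soft(uhat, lam * n)          # keep only the essential coefficients
    u = np.real(np.fft.ifft(uhat))

print("active Fourier modes:", np.count_nonzero(np.abs(np.fft.fft(u)) > 1e-12))
```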

189 citations


Book ChapterDOI
01 Jan 2013
TL;DR: This lecture focuses on the basic analysis of total variation methods and the extension of the original ROF denoising model to various application fields, along with a brief discussion of state-of-the-art computational methods.
Abstract: Total variation methods and similar approaches based on regularizations with l1-type norms (and seminorms) have become a very popular tool in image processing and inverse problems due to peculiar features that cannot be realized with smooth regularizations. In particular, total variation techniques have had particular success due to their ability to realize cartoon-type reconstructions with sharp edges. Due to an explosion of new developments in this field within the last decade, it is a difficult task to keep an overview of the major results in analysis, the computational schemes, and the application fields. With these lectures we attempt to provide such an overview, of course biased by our major lines of research. We are focusing on the basic analysis of total variation methods and the extension of the original ROF denoising model to various application fields. Furthermore, we provide a brief discussion of state-of-the-art computational methods and give an outlook to applications in different disciplines.
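A minimal numerical sketch of the ROF idea mentioned above, using explicit gradient descent on a smoothed total variation energy (a toy baseline with assumed parameters, not one of the state-of-the-art schemes the lectures survey):

```python
import numpy as np

def rof_denoise(f, lam=10.0, eps=0.1, tau=0.02, iters=500):
    """Explicit gradient descent on a smoothed ROF energy:
    sum sqrt(|grad u|^2 + eps^2) + (lam/2) * sum (u - f)^2. Parameters are assumed."""
    u = f.copy()
    for _ in range(iters):
        ux = np.roll(u, -1, axis=1) - u                  # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u + tau * (div - lam * (u - f))              # curvature flow plus fidelity
    return u

f = np.zeros((64, 64)); f[20:44, 20:44] = 1.0            # cartoon test image
denoised = rof_denoise(f + 0.1 * np.random.randn(64, 64))
```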

137 citations


Journal ArticleDOI
TL;DR: The texture norm is defined using the nuclear norm applied to patches in the image, interpreting the texture patches as low-rank; this norm is easier to implement than many of the weak function space norms in the literature and is computationally faster than nonlocal methods since there is no explicit weight.
Abstract: We propose a novel cartoon-texture separation model using a sparse low-rank decomposition. Our texture model connects the separate ideas of robust principal component analysis (PCA) [E. J. Candes, X. Li, Y. Ma, and J. Wright, J. ACM, 58 (2011), 11], nonlocal methods [A. Buades, B. Coll, and J.-M. Morel, Multiscale Model. Simul., 4 (2005), pp. 490--530], [A. Buades, B. Coll, and J.-M. Morel, Numer. Math., 105 (2006), pp. 1--34], [G. Gilboa and S. Osher, Multiscale Model. Simul., 6 (2007), pp. 595--630], [G. Gilboa and S. Osher, Multiscale Model. Simul., 7 (2008), pp. 1005--1028], and cartoon-texture decompositions in an interesting way, taking advantage of each of these methodologies. We define our texture norm using the nuclear norm applied to patches in the image, interpreting the texture patches to be low-rank. In particular, this norm is easier to implement than many of the weak function space norms in the literature and is computationally faster than nonlocal methods since there is no explicit weight ...
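A crude sketch of the low-rank-patch idea (not the full cartoon-texture splitting of the paper): stack non-overlapping patches as rows of a matrix and apply singular value thresholding to it. Patch size and threshold are assumptions.

```python
import numpy as np

def svt(M, tau):
    """Shrink the singular values of M by tau (nuclear norm proximal operator)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def low_rank_patches(img, patch=8, tau=1.0):
    """Stack non-overlapping patches as rows and threshold the patch matrix."""
    h, w = (d - d % patch for d in img.shape)
    blocks = (img[:h, :w].reshape(h // patch, patch, w // patch, patch)
                         .swapaxes(1, 2).reshape(-1, patch * patch))
    blocks = svt(blocks, tau)
    return (blocks.reshape(h // patch, w // patch, patch, patch)
                  .swapaxes(1, 2).reshape(h, w))

img = np.random.rand(64, 64)             # stand-in for a textured image
texture_estimate = low_rank_patches(img)
```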

135 citations


Journal ArticleDOI
TL;DR: A fast algorithm for directly computing $D_{\tau}(Y)$ without using SVDs is proposed, which is much more efficient than the approach using the full SVD.

Abstract: We are interested in solving the following minimization problem: $D_{\tau}(Y) := \arg\min_{X \in \mathbb{R}^{m \times n}} \frac{1}{2}\|Y - X\|_{F}^{2} + \tau\|X\|_{*}$, where $Y \in \mathbb{R}^{m \times n}$ is a given matrix, $\|\cdot\|_{F}$ is the Frobenius norm, and $\|\cdot\|_{*}$ the nuclear norm. This problem serves as a basic subroutine in many popular numerical schemes for nuclear norm minimization problems, which arise from low rank matrix recovery such as matrix completion. As $D_{\tau}(Y)$ has an explicit expression which shrinks the singular values of $Y$ and keeps the singular vectors, $D_{\tau}$ is referred to as the singular value thresholding (SVT) operator in the literature. Conventional approaches for $D_{\tau}(Y)$ first find the singular value decomposition (SVD) of $Y$ and then shrink the singular values. However, such approaches are time consuming under some circumstances, especially when the rank of $D_{\tau}(Y)$ is not low compared to the matrix dimension or is completely unpredictable. In this paper, we propose a fast algorithm for directly computing $D_{\tau}(Y)$ without using SVDs. Numerical experiments show that the proposed algorithm is much more efficient than the approach using the full SVD.
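For reference, the conventional full-SVD route described above takes only a few lines; this is a sketch of the baseline the paper improves on, not of the proposed SVD-free algorithm:

```python
import numpy as np

def svt_full_svd(Y, tau):
    """Conventional D_tau(Y): take the full SVD of Y and shrink its singular values by tau."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

Y = np.random.randn(200, 120)             # assumed problem size
X = svt_full_svd(Y, tau=5.0)
print("rank of D_tau(Y):", np.linalg.matrix_rank(X))
```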

76 citations


Journal ArticleDOI
TL;DR: An interesting property of the Bregman iterative procedure, which is equivalent to the augmented Lagrangian method, for minimizing a convex piece-wise linear function J(x) subject to linear constraints Ax=b, is analyzed.
Abstract: This short article analyzes an interesting property of the Bregman iterative procedure, which is equivalent to the augmented Lagrangian method, for minimizing a convex piece-wise linear function J(x) subject to linear constraints Ax=b. The procedure obtains its solution by solving a sequence of unconstrained subproblems of minimizing $J(x)+\frac{1}{2}\|Ax-b^{k}\|_{2}^{2}$, where $b^{k}$ is iteratively updated. In practice, the subproblem at each iteration is solved at a relatively low accuracy. Let $w^{k}$ denote the error introduced by early stopping a subproblem solver at iteration k. We show that if all $w^{k}$ are sufficiently small so that Bregman iteration enters the optimal face, then while on the optimal face, Bregman iteration enjoys an interesting error-forgetting property: the distance between the current point $\bar{x}^{k}$ and the optimal solution set $X^{*}$ is bounded by $\|w^{k+1}-w^{k}\|$, independent of the previous errors $w^{k-1},w^{k-2},\ldots,w^{1}$. This property partially explains why the Bregman iterative procedure works well for sparse optimization and, in particular, for $\ell_{1}$-minimization. The error-forgetting property is unique to J(x) that is a piece-wise linear function (also known as a polyhedral function), and the results of this article appear to be new to the literature of the augmented Lagrangian method.
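A small sketch of the setting discussed here, with J(x) = ||x||_1 (basis pursuit) and each subproblem deliberately solved only approximately by a few ISTA steps, which is exactly how the errors w^k arise. Problem sizes and iteration counts are illustrative assumptions:

```python
import numpy as np

np.random.seed(0)
m, n, s = 40, 100, 5                          # assumed sizes: 40 measurements, 100 unknowns
A = np.random.randn(m, n) / np.sqrt(m)
x_true = np.zeros(n)
x_true[np.random.choice(n, s, replace=False)] = np.random.randn(s)
b = A @ x_true

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inexact_subproblem(x, bk, iters=30):
    """Approximately minimize ||x||_1 + 0.5*||Ax - bk||^2 with a few ISTA steps;
    stopping early is what introduces the error w^k discussed in the abstract."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2       # step size from the spectral norm of A
    for _ in range(iters):
        x = soft(x - t * A.T @ (A @ x - bk), t)
    return x

x, bk = np.zeros(n), b.copy()
for _ in range(30):                           # Bregman / augmented Lagrangian outer loop
    x = inexact_subproblem(x, bk)
    bk = bk + (b - A @ x)                     # add the residual back
print("recovery error:", np.linalg.norm(x - x_true))
```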

75 citations


BookDOI
01 Jan 2013
TL;DR: This book takes readers on a tour through modern methods in image analysis and reconstruction based on level set and PDE techniques, the major focus being on morphological and geometric structures in images.

49 citations


Proceedings ArticleDOI
TL;DR: Within a single framework, new retinex instances particularly suited for texture-preserving shadow removal, cartoon-texture decomposition, and color and hyperspectral image enhancement are defined, and entirely novel retinex formulations are derived by using more interesting non-local versions of the sparsity and fidelity priors.
Abstract: In this paper, we present a unifying framework for retinex that is able to reproduce many of the existing retinex implementations within a single model. The fundamental assumption, as shared with many retinex models, is that the observed image is a multiplication between the illumination and the true underlying reflectance of the object. Starting from Morel's 2010 PDE model for retinex, where illumination is supposed to vary smoothly and where the reflectance is thus recovered from a hard-thresholded Laplacian of the observed image in a Poisson equation, we define our retinex model in two similar but more general steps. First, we look for a filtered gradient that is the solution of an optimization problem consisting of two terms: the first term is a sparsity prior of the reflectance, such as the TV or H1 norm, while the second term is a quadratic fidelity prior of the reflectance gradient with respect to the observed image gradients. In a second step, since this filtered gradient is almost certainly not a consistent image gradient, we then look for a reflectance whose actual gradient comes close. Beyond unifying existing models, we are able to derive entirely novel retinex formulations by using more interesting non-local versions of the sparsity and fidelity priors. Hence we define within a single framework new retinex instances particularly suited for texture-preserving shadow removal, cartoon-texture decomposition, and color and hyperspectral image enhancement.
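A compact sketch of the Morel-style baseline that the paper generalizes: hard-threshold the Laplacian of the log-image and recover the log-reflectance from a Poisson equation. The periodic FFT Poisson solver and the threshold value below are simplifying assumptions, not the non-local formulations proposed in the paper:

```python
import numpy as np

def retinex_poisson(img, thresh=0.05):
    """Hard-threshold the Laplacian of log(img), then solve Delta r = thresholded
    Laplacian with periodic boundaries via the FFT. Threshold value is an assumption."""
    s = np.log(img + 1e-6)
    lap = (np.roll(s, 1, 0) + np.roll(s, -1, 0)
           + np.roll(s, 1, 1) + np.roll(s, -1, 1) - 4 * s)
    lap[np.abs(lap) < thresh] = 0.0                       # keep only strong (reflectance) edges
    h, w = s.shape
    ky, kx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    denom = 2 * np.cos(2 * np.pi * ky / h) + 2 * np.cos(2 * np.pi * kx / w) - 4
    denom[0, 0] = 1.0                                     # avoid division by zero at the mean
    r_hat = np.fft.fft2(lap) / denom
    r_hat[0, 0] = 0.0                                     # the mean of r is arbitrary
    return np.real(np.fft.ifft2(r_hat))

observed = np.random.rand(128, 128) + 0.5                 # stand-in for an observed image
log_reflectance = retinex_poisson(observed)
```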

47 citations


Journal ArticleDOI
TL;DR: It is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting, and a significant reduction in computation time is achieved with EST.
Abstract: Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp, Davis, and Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features of both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method.
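The general shape of such a real/Fourier iterative scheme can be sketched as follows, with a standard Cartesian FFT standing in for the pseudopolar FFT and simple positivity as the real-space constraint; this is purely illustrative and is not the EST implementation:

```python
import numpy as np

def fourier_iterative_recon(measured_kspace, mask, iters=100):
    """Alternate between Fourier space (re-impose measured samples where mask is True)
    and real space (non-negativity), in the spirit of the iteration described above."""
    img = np.zeros(mask.shape)
    for _ in range(iters):
        F = np.fft.fft2(img)
        F[mask] = measured_kspace[mask]      # enforce the measured Fourier data
        img = np.real(np.fft.ifft2(F))
        img[img < 0] = 0.0                   # physical constraint in real space
    return img

# Toy geometry (assumed): a rectangular phantom and a random sampling mask.
phantom = np.zeros((64, 64)); phantom[20:44, 24:40] = 1.0
mask = np.random.rand(64, 64) < 0.4
recon = fourier_iterative_recon(np.fft.fft2(phantom), mask)
```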

38 citations


Journal ArticleDOI
01 Apr 2013
TL;DR: An algorithm is proposed that is specifically designed to take advantage of shared-memory, vectorized, parallel, and many-core microprocessors such as the Cell processor, new-generation Graphics Processing Units (GPUs), and standard vectorized multi-core processors (e.g. quad-core CPUs).
Abstract: In this paper we consider the $\ell_1$ compressive sensing problem. We propose an algorithm specifically designed to take advantage of shared memory, vectorized, parallel and many-core microprocessors such as the Cell processor, new generation Graphics Processing Units (GPUs) and standard vectorized multi-core processors (e.g. quad-core CPUs). Moreover, its implementation is easy. We also give evidence of the efficiency of our approach and compare the algorithm on the three platforms, thus exhibiting pros and cons for each of them.
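This is not the paper's algorithm, which is designed around the memory model of the Cell/GPU platforms, but a generic accelerated soft-thresholding solver illustrates why l1 solvers map well onto vector units: every step is either a matrix-vector product or an elementwise operation. The matrix and parameters below are assumptions:

```python
import numpy as np

def fista_l1(A, b, lam, iters=200):
    """Accelerated proximal gradient (FISTA) for min 0.5*||Ax - b||^2 + lam*||x||_1.
    Every step is a matrix-vector product or an elementwise operation."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(iters):
        g = z - A.T @ (A @ z - b) / L
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

A = np.random.randn(64, 256) / 8.0           # assumed sensing matrix
x0 = np.zeros(256); x0[::50] = 1.0           # assumed sparse signal
x_rec = fista_l1(A, A @ x0, lam=0.01)
```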

31 citations


Proceedings Article
16 Jun 2013
TL;DR: A framework is proposed to identify data which, when augmented with the current dataset, maximally increases the Fisher information of the ranking; the addition of a small number of well-chosen pairwise comparisons can significantly increase the Fisher informativeness of the ranking.
Abstract: Given a graph where vertices represent alternatives and pairwise comparison data, $y_{ij}$, is given on the edges, the statistical ranking problem is to find a potential function, defined on the vertices, such that the gradient of the potential function agrees with the pairwise comparisons. We study the dependence of the statistical ranking problem on the available pairwise data, i.e., pairs (i,j) for which the pairwise comparison data $y_{ij}$ is known, and propose a framework to identify data which, when augmented with the current dataset, maximally increases the Fisher information of the ranking. Under certain assumptions, the data collection problem decouples, reducing to a problem of finding an edge set on the graph (with a fixed number of edges) such that the second eigenvalue of the graph Laplacian is maximal. This reduction of the data collection problem to a spectral graph-theoretic question is one of the primary contributions of this work. As an application, we study the Yahoo! Movie user rating dataset and demonstrate that the addition of a small number of well-chosen pairwise comparisons can significantly increase the Fisher informativeness of the ranking.
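A toy sketch of the graph-theoretic reduction mentioned above: greedily pick the single extra edge (pairwise comparison) that maximizes the second-smallest Laplacian eigenvalue. The small path graph is an assumption, and this is not the paper's data-collection procedure:

```python
import numpy as np
from itertools import combinations

def laplacian(n, edges):
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

def lambda2(n, edges):
    """Second-smallest eigenvalue of the graph Laplacian (algebraic connectivity)."""
    return np.linalg.eigvalsh(laplacian(n, edges))[1]

n = 8
edges = [(i, i + 1) for i in range(n - 1)]          # assumed starting graph: a path
candidates = [e for e in combinations(range(n), 2) if e not in edges]
best = max(candidates, key=lambda e: lambda2(n, edges + [e]))
print("most informative comparison to collect next:", best,
      "new lambda_2:", lambda2(n, edges + [best]))
```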

Journal ArticleDOI
TL;DR: It turns out that the proposed new solution technique is up to three times faster than the iterative algorithm currently used in domain decomposition methods for total variation minimization.
Abstract: Computational problems of large-scale data are gaining attention recently due to better hardware and, hence, higher dimensionality of images and data sets acquired in applications. In the last couple of years, non-smooth minimization problems such as total variation minimization have become increasingly important for the solution of these tasks. While being favorable due to the improved enhancement of images compared to smooth imaging approaches, non-smooth minimization problems typically scale badly with the dimension of the data. Hence, for large imaging problems solved by total variation minimization, domain decomposition algorithms have been proposed, aiming to split one large problem into N>1 smaller problems which can be solved on parallel CPUs. The N subproblems constitute constrained minimization problems, where the constraint enforces the support of the minimizer to be the respective subdomain. In this paper we discuss a fast computational algorithm to solve domain decomposition for total variation minimization. In particular, we accelerate the computation of the subproblems by nested Bregman iterations. We propose a Bregmanized Operator Splitting---Split Bregman (BOS-SB) algorithm, which enforces the restriction onto the respective subdomain by a Bregman iteration that is subsequently solved by a Split Bregman strategy. The computational performance of this new approach is discussed for its application to image inpainting and image deblurring. It turns out that the proposed new solution technique is up to three times faster than the iterative algorithm currently used in domain decomposition methods for total variation minimization.

Journal ArticleDOI
TL;DR: A fast graph-cut approach for finding $\epsilon$-optimal solutions, which has been used successfully in image processing and computer vision problems, is described and its efficacy at finding solutions with sparse residual is demonstrated.
Abstract: We consider the problem of establishing a statistical ranking for a set of alternatives from a dataset which consists of an (inconsistent and incomplete) set of quantitative pairwise comparisons of the alternatives. If we consider the directed graph where vertices represent the alternatives and the pairwise comparison data is a function on the arcs, then the statistical ranking problem is to find a potential function, defined on the vertices, such that the gradient of the potential optimally agrees with the pairwise comparisons. Potentials, optimal in the $l^{2}$-norm sense, can be found by solving a least-squares problem on the digraph and, recently, the residual has been interpreted using the Hodge decomposition (Jiang et al., 2010). In this work, we consider an $l^{1}$-norm formulation of the statistical ranking problem. We describe a fast graph-cut approach for finding $\epsilon$-optimal solutions, which has been used successfully in image processing and computer vision problems. Applying this method to several datasets, we demonstrate its efficacy at finding solutions with sparse residual.
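For context, the l2 (least-squares) version of the ranking problem described above fits in a few lines; the paper's l1 graph-cut method is not reproduced here, and the tiny comparison dataset is an assumption:

```python
import numpy as np

# Assumed toy data: pairwise comparisons y on arcs (i, j); positive y favors j over i.
arcs = [(0, 1), (1, 2), (0, 2), (2, 3)]
y = np.array([1.0, 0.5, 2.0, -0.3])
n = 4

B = np.zeros((len(arcs), n))                  # arc-vertex incidence (gradient) matrix
for a, (i, j) in enumerate(arcs):
    B[a, i], B[a, j] = -1.0, 1.0

phi, *_ = np.linalg.lstsq(B, y, rcond=None)   # potential whose gradient fits y in l2
phi -= phi.min()                              # fix the arbitrary additive constant
residual = y - B @ phi                        # inconsistent (cyclic) part of the data
print("ranking potential:", phi)
print("residual:", residual)
```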

Book ChapterDOI
01 Jan 2013
TL;DR: Both level set methods and eigenfunction optimization for representing the topography of a dielectric environment and efficient techniques for using gradient methods to solve different material design problems are discussed.
Abstract: The gradient descent/ascent method is a classical approach to find the minimum/maximum of an objective function or functional based on a first-order approximation. The method works in spaces of any number of dimensions, even in infinite-dimensional spaces. This method can converge more efficiently than methods which do not require derivative information; however, in certain circumstances the “cost function space” may become discontinuous and as a result, the derivatives may be difficult or impossible to determine. Here, we discuss both level set methods and eigenfunction optimization for representing the topography of a dielectric environment and efficient techniques for using gradient methods to solve different material design problems. Numerous results are shown to demonstrate the robustness of the gradient-based approach.
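As a bare-bones reminder of the first-order method this chapter builds on (a toy two-variable example, unrelated to the dielectric design problems themselves):

```python
import numpy as np

def grad_descent(grad, x0, step=0.1, iters=200):
    """Plain first-order gradient descent: repeatedly step against the gradient."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# Toy objective f(x, y) = (x - 1)^2 + 2*(y + 0.5)^2 with its analytic gradient.
grad_f = lambda v: np.array([2 * (v[0] - 1), 4 * (v[1] + 0.5)])
print(grad_descent(grad_f, [5.0, 5.0]))       # approaches the minimizer (1, -0.5)
```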

Patent
24 Jul 2013
TL;DR: A method receives learning configurations, each related to an acceptable arrangement of items within an environment, synthesizes a configuration of items for a defined environment from extracted item and relationship information, determines a cost of the synthesized configuration, and, based on that cost, identifies the synthesized configuration as acceptable.
Abstract: A method includes receiving one or more learning configurations, each learning configuration related to an acceptable arrangement of items within an environment. The method further includes extracting from the learning configurations representative item information and relationship information, synthesizing a configuration of items for a defined environment based at least in part on the extracted representative item information and relationship information, determining a cost of the synthesized configuration, and, based on the cost of the synthesized configuration, identifying the synthesized configuration as acceptable.

Book
28 Oct 2013
TL;DR: This book takes readers on a tour through modern methods in image analysis and reconstruction based on level set and PDE techniques, with the major focus on morphological and geometric structures in images.
Abstract: This book takes readers on a tour through modern methods in image analysis and reconstruction based on level set and PDE techniques, the major focus being on morphological and geometric structures in images. The aspects covered include edge-sharpening image reconstruction and denoising, segmentation and shape analysis in images, and image matching. For each, the lecture notes provide insights into the basic analysis of modern variational and PDE-based techniques, as well as computational aspects and applications.

ReportDOI
01 Jan 2013
TL;DR: Theory, algorithms, and software have been developed for the analysis and processing of point cloud sensor data for the representation, analysis, and visualization of complex urban terrain; these involve various parameterizations of terrain data based on implicit surface representations and adaptive multiscale methods that enable high resolution and enhance understanding of topology and geometric features.
Abstract: Theory, algorithms, and software have been developed for the analysis and processing of point cloud sensor data for representation, analysis and visualization of complex urban terrain. These involve various parameterizations of terrain data based on implicit surface representations and adaptive multiscale methods that enable high resolution and enhance understanding of topology and geometric features. The wavelet and multiscale methods enable fast computation and allow for varying local resolution of the data depending on the local density of the point cloud. The implicit representations which are developed facilitate highly accurate approximation of signed distances to the sensed terrain surface. The level sets of the signed distance provide efficiently computed fields of view from specified observation points. Collaboration among MURI focus groups has yielded hybrid methods incorporating the best features of both approaches. Simulation and field experiments have been conducted to test the MURI methodologies. These include problems of sensor assimilation for autonomous navigation of urban terrain, surveillance, secure route planning, line of sight, target acquisition, and a host of related problems.

Journal Article
TL;DR: It is demonstrated that the use of a mathematical filter could successfully reduce metallic halation, facilitating the osseointegration evaluation at the bone implant interface in the reconstructed images.
Abstract: Microcomputed tomography (MicroCT) images containing titanium implants suffer from x-ray scattering and artifacts, and the implant surface is critically affected by metallic halation. To reduce the metallic halation artifact, a nonlinear total variation denoising algorithm, such as the split Bregman algorithm, was applied to the digital data set of MicroCT images. This study demonstrated that the use of a mathematical filter could successfully reduce metallic halation, facilitating the osseointegration evaluation at the bone-implant interface in the reconstructed images.
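Filtering of this kind is available off the shelf; a minimal usage sketch, with a synthetic image standing in for a MicroCT slice and an assumed regularization weight, might look like:

```python
import numpy as np
from skimage.restoration import denoise_tv_bregman

# Synthetic stand-in for a MicroCT slice: a bright "implant" region plus noise.
slice_img = np.zeros((128, 128))
slice_img[40:90, 50:80] = 1.0
slice_img += 0.15 * np.random.randn(128, 128)

# Split Bregman total variation denoising; the weight is an assumed value and
# would need tuning against real scanner data.
denoised = denoise_tv_bregman(slice_img, weight=4.0)
```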

Book
01 Jan 2013
TL;DR: Contents include: A Guide to the TV Zoo; EM-TV methods for inverse problems with Poisson noise; Variational Methods in Image Matching and Motion Extraction; and Metrics of Curves in Shape Optimization and Analysis.

Abstract: Contents: A Guide to the TV Zoo; EM-TV methods for inverse problems with Poisson noise; Variational Methods in Image Matching and Motion Extraction; Metrics of Curves in Shape Optimization and Analysis.

ReportDOI
01 Mar 2013
TL;DR: New algorithms for hyperspectral imagery, including compressive sensing, anomaly detection, target detection, endmember detection, unmixing, and change detection, were tested on data provided by AFRL with good results, including change detection under different lighting conditions.
Abstract: We have developed and applied successfully new algorithms for hyperspectral imagery. These include compressive sensing, anomaly detection, target detection, endmember detection, unmixing, and change detection. These were tested on data provided by AFRL with good results, including change detection under different lighting conditions. Ideas involving Bregman iteration applied to L1 and total variation based optimizations were used and also successfully applied to subsampled data. A nonnegative matrix factorization and completion algorithm was introduced which allows the reconstruction of partially observed or corrupted hyperspectral data. A surprising spinoff is sparse reconstruction of offshore oil spills based on multispectral measurements.
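A minimal sketch of the nonnegative factorization idea for a fully observed hyperspectral matrix (the report's algorithm also handles partially observed or corrupted data, which sklearn's NMF does not; sizes and settings are assumptions):

```python
import numpy as np
from sklearn.decomposition import NMF

# Assumed synthetic data: 2500 pixels, 100 spectral bands, 4 endmembers.
bands, n_end = 100, 4
endmembers = np.abs(np.random.randn(n_end, bands))
abundances = np.random.dirichlet(np.ones(n_end), size=2500)
data = abundances @ endmembers + 0.01 * np.abs(np.random.randn(2500, bands))

model = NMF(n_components=n_end, init="nndsvda", max_iter=500)
est_abundances = model.fit_transform(data)    # per-pixel abundance estimates
est_endmembers = model.components_            # estimated endmember spectra
```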