
Showing papers by "Stanley Osher published in 2014"


Journal ArticleDOI
TL;DR: A splitting method based on Bregman iteration is presented to tackle optimization problems with orthogonality constraints; its robustness is demonstrated on problems including direction field correction, noisy color image restoration, and global conformal mapping construction for genus-0 surfaces.
Abstract: Orthogonality constrained problems are widely used in science and engineering. However, it is challenging to solve these problems efficiently due to the non-convex constraints. In this paper, a splitting method based on Bregman iteration is presented to tackle optimization problems with orthogonality constraints. With the proposed method, the constrained problems can be iteratively solved by computing the corresponding unconstrained problems and orthogonality constrained quadratic problems with analytic solutions. As applications, we demonstrate the robustness of our method on several problems, including direction field correction, noisy color image restoration, and global conformal mapping construction for genus-0 surfaces. Numerical comparisons with existing methods are also conducted to illustrate the efficiency of our algorithms.
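The orthogonality constrained quadratic subproblems mentioned above admit a classical closed-form (analytic) solution via the singular value decomposition. A minimal sketch of that building block, not the paper's full splitting scheme:

```python
import numpy as np

def nearest_orthogonal(Y):
    """Closed-form solution of min ||X - Y||_F subject to X^T X = I.

    Given the SVD Y = U S V^T, the minimizer is X = U V^T, i.e. the
    orthogonal factor of the polar decomposition of Y.
    """
    U, _, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ Vt

# Project an arbitrary 5x3 matrix onto the set of matrices with
# orthonormal columns.
rng = np.random.default_rng(1)
X = nearest_orthogonal(rng.standard_normal((5, 3)))
```

Having such an exact projection for each constrained subproblem is what allows the splitting iteration to alternate between unconstrained and orthogonality-constrained steps.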

220 citations


Journal ArticleDOI
TL;DR: A weighted difference of anisotropic and isotropic total variation (TV) as a regularization for image processing tasks, based on the well-known TV model and natural image statistics, improves on the classical TV model consistently and is on par with representative state-of-the-art methods.
Abstract: We propose a weighted difference of anisotropic and isotropic total variation (TV) as a regularization for image processing tasks, based on the well-known TV model and natural image statistics. Due to the form of our model, it is natural to compute via a difference of convex algorithm (DCA). We draw its connection to the Bregman iteration for convex problems and prove that the iteration generated from our algorithm converges to a stationary point with the objective function values decreasing monotonically. A stopping strategy based on the stable oscillatory pattern of the iteration error from the ground truth is introduced. In numerical experiments on image denoising, image deblurring, and magnetic resonance imaging (MRI) reconstruction, our method improves on the classical TV model consistently and is on par with representative state-of-the-art methods.
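The regularizer itself is straightforward to evaluate on a discrete image; the following sketch (not the authors' DCA solver) uses forward differences with replicated boundaries, and the weight `alpha` is an illustrative parameter:

```python
import numpy as np

def tv_weighted_difference(u, alpha=0.5):
    """Anisotropic TV minus alpha times isotropic TV for a 2D image u.

    Forward differences with replicated (Neumann) boundaries; since
    |dx| + |dy| >= sqrt(dx^2 + dy^2) pointwise, the value is nonnegative
    whenever alpha <= 1.
    """
    dx = np.diff(u, axis=1, append=u[:, -1:])  # horizontal differences
    dy = np.diff(u, axis=0, append=u[-1:, :])  # vertical differences
    tv_aniso = np.abs(dx).sum() + np.abs(dy).sum()
    tv_iso = np.sqrt(dx ** 2 + dy ** 2).sum()
    return tv_aniso - alpha * tv_iso

# A constant image has zero total variation of either kind.
print(tv_weighted_difference(np.ones((8, 8))))  # 0.0
```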

180 citations


Journal ArticleDOI
TL;DR: This paper revisits several well-known wavelet-related transforms, shows that it is possible to build their empirical counterparts, and proves that such constructions lead to adaptive frames with promising properties for image analysis and processing.
Abstract: A recently developed approach, called “empirical wavelet transform,” aims to build one-dimensional (1D) adaptive wavelet frames accordingly to the analyzed signal. In this paper, we present several extensions of this approach to two-dimensional (2D) signals (images). We revisit some well-known transforms (tensor wavelets, Littlewood--Paley wavelets, ridgelets, and curvelets) and show that it is possible to build their empirical counterparts. We prove that such constructions lead to different adaptive frames which show some promising properties for image analysis and processing.

146 citations


Journal ArticleDOI
TL;DR: It is shown that the sparse Fourier domain approximation of solutions to multiscale PDE problems by soft thresholding enjoys a number of desirable numerical and analytic properties, including convergence for linear PDEs and a modified equation resulting from the sparse approximation.
Abstract: The authors of [Proc. Natl. Acad. Sci. USA, 110 (2013), pp. 6634--6639] proposed sparse Fourier domain approximation of solutions to multiscale PDE problems by soft thresholding. We show here that the method enjoys a number of desirable numerical and analytic properties, including convergence for linear PDEs and a modified equation resulting from the sparse approximation. We also extend the method to solve elliptic equations and introduce sparse approximation of differential operators in the Fourier domain. The effectiveness of the method is demonstrated on homogenization examples, where its complexity is dependent only on the sparsity of the problem and constant in many cases.
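The core sparsification step, soft thresholding of Fourier coefficients, can be sketched as below; this is an illustrative fragment rather than the authors' PDE solver, and the threshold `lam` is an assumed parameter:

```python
import numpy as np

def fourier_soft_threshold(u, lam):
    """Sparsify a periodic 1D field by soft thresholding in the Fourier domain.

    Coefficients with magnitude below lam are set to zero and larger ones
    are shrunk by lam, so only the dominant modes of u are kept.
    """
    c = np.fft.fft(u)
    shrunk = np.maximum(np.abs(c) - lam, 0.0) * np.exp(1j * np.angle(c))
    return np.real(np.fft.ifft(shrunk))

# One large-amplitude mode survives thresholding; small perturbations vanish.
x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
rng = np.random.default_rng(0)
u = 10.0 * np.sin(3.0 * x) + 0.01 * rng.standard_normal(128)
v = fourier_soft_threshold(u, lam=64.0)
```

After thresholding, only the two conjugate coefficients of the dominant mode remain nonzero, which is what keeps the cost of the sparse method tied to the sparsity of the solution.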

61 citations


Journal ArticleDOI
TL;DR: This paper describes an L1-regularized variational framework for developing a spatially localized basis that spans the eigenspace of a differential operator, for instance the Laplace operator, and generalizes plane waves to an orthogonal real-space basis with multiresolution capabilities.
Abstract: This paper describes an L1 regularized variational framework for developing a spatially localized basis, compressed plane waves, that spans the eigenspace of a differential operator, for instance, the Laplace operator. Our approach generalizes the concept of plane waves to an orthogonal real-space basis with multiresolution capabilities.

42 citations


Journal ArticleDOI
TL;DR: An algorithm is introduced that successively adds new measurements at specially chosen locations; by comparing solutions of the inverse problem obtained from different numbers of measurements, it decides where to measure next in order to improve the reconstruction of the sparse initial data.
Abstract: We consider the inverse problem of finding sparse initial data from the sparsely sampled solutions of the heat equation. The initial data are assumed to be a sum of an unknown but finite number of Dirac delta functions at unknown locations. Point-wise values of the heat solution at only a few locations are used in an $l_1$ constrained optimization to find the initial data. A concept of domain of effective sensing is introduced to speed up the already fast Bregman iterative algorithm for $l_1$ optimization. Furthermore, an algorithm which successively adds new measurements at specially chosen locations is introduced. By comparing the solutions of the inverse problem obtained from different numbers of measurements, the algorithm decides where to add new measurements in order to improve the reconstruction of the sparse initial data.

37 citations


Journal ArticleDOI
TL;DR: In this article, a mathematical model and formalism to study coded exposure (flutter shutter) cameras is proposed, which includes the Poisson photon (shot) noise as well as any additive (readout) noise of finite variance.
Abstract: This paper proposes a mathematical model and formalism to study coded exposure (flutter shutter) cameras. The model includes the Poisson photon (shot) noise as well as any additive (readout) noise of finite variance. This is an improvement compared to our previous work that only considered the Poisson noise. Closed formulae for the mean square error and signal-to-noise ratio of the coded exposure method are given. These formulae take into account the whole imaging chain, i.e., the Poisson photon (shot) noise, any additive (readout) noise of finite variance, as well as the deconvolution, and are valid for any exposure code. Our formalism allows us to provide a curve that gives an absolute upper bound for the gain of any coded exposure camera as a function of the temporal sampling of the code. The gain is to be understood in terms of mean square error (or equivalently in terms of signal-to-noise ratio), with respect to a snapshot (a standard camera).

16 citations


Journal ArticleDOI
TL;DR: This paper studies the Yahoo! Movie user rating data set and demonstrates that the addition of a small number of well-chosen pairwise comparisons can significantly increase the Fisher informativeness of the ranking.
Abstract: Given a graph where vertices represent alternatives and arcs represent pairwise comparison data, the statistical ranking problem is to find a potential function, defined on the vertices, such that the gradient of the potential function agrees with the pairwise comparisons. Our goal in this paper is to develop a method for collecting data for which the least squares estimator for the ranking problem has maximal Fisher information. Our approach, based on experimental design, is to view data collection as a bi-level optimization problem where the inner problem is the ranking problem and the outer problem is to identify data which maximizes the informativeness of the ranking. Under certain assumptions, the data collection problem decouples, reducing to a problem of finding multigraphs with large algebraic connectivity. This reduction of the data collection problem to graph-theoretic questions is one of the primary contributions of this work. As an application, we study the Yahoo! Movie user rating data set and demonstrate that the addition of a small number of well-chosen pairwise comparisons can significantly increase the Fisher informativeness of the ranking. As another application, we study the 2011-12 NCAA football schedule and propose schedules with the same number of games that are significantly more informative. Using spectral clustering methods to identify highly connected communities within the division, we argue that the NCAA could improve its notoriously poor rankings by simply scheduling more out-of-conference games.
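The inner (ranking) problem described above is a least-squares fit on the comparison graph. A minimal sketch under assumed notation, where `comparisons` is a hypothetical list of arcs `(i, j, y)` meaning alternative `j` beats `i` by margin `y`:

```python
import numpy as np

def rank_potentials(n, comparisons):
    """Least-squares ranking: fit potentials p so that p[j] - p[i] matches y_ij.

    Builds the arc-vertex incidence matrix of the comparison graph, solves
    the least-squares problem, and centers the result to remove the
    constant-shift ambiguity of potentials.
    """
    B = np.zeros((len(comparisons), n))
    y = np.zeros(len(comparisons))
    for k, (i, j, val) in enumerate(comparisons):
        B[k, i], B[k, j], y[k] = -1.0, 1.0, val
    p, *_ = np.linalg.lstsq(B, y, rcond=None)
    return p - p.mean()

# Consistent data: 1 beats 0 by 1, 2 beats 1 by 1, 2 beats 0 by 2.
p = rank_potentials(3, [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 2.0)])
```

With consistent data the fit is exact; with noisy or cyclic comparisons the least-squares residual measures how far the data are from any gradient of a potential.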

15 citations


Journal ArticleDOI
TL;DR: In this article, sparse signals are recovered from their noisy linear measurements by solving nonlinear differential inclusions based on the notion of inverse scale space (ISS) developed in applied mathematics; under proper conditions, the solution paths of these Bregman dynamics contain an unbiased, sign-consistent (oracle) estimator, making them better regularization paths than the LASSO regularization path.
Abstract: In this paper, we recover sparse signals from their noisy linear measurements by solving nonlinear differential inclusions, based on the notion of inverse scale space (ISS) developed in applied mathematics. Our goal here is to bring this idea to address a challenging problem in statistics, i.e., finding the oracle estimator, which is unbiased and sign-consistent, using dynamics. We call our dynamics Bregman ISS and Linearized Bregman ISS. A well-known shortcoming of LASSO and of any convex regularization approach lies in the bias of estimators. However, we show that under proper conditions, there exists a bias-free and sign-consistent point on the solution paths of such dynamics, which corresponds to a signal that is the unbiased estimate of the true signal and whose entries have the same signs as those of the true signal, i.e., the oracle estimator. Therefore, their solution paths are regularization paths better than the LASSO regularization path, since the points on the latter path are biased when sign-consistency is reached. We also show how to efficiently compute their solution paths in both continuous and discretized settings: the full solution paths can be computed exactly piece by piece, and a discretization leads to Linearized Bregman iteration, which is a simple iterative thresholding rule and easy to parallelize. Theoretical guarantees such as sign-consistency and minimax optimal $l_2$-error bounds are established in both continuous and discrete settings for specific points on the paths. Early-stopping rules for identifying these points are given. The key treatment relies on the development of differential inequalities for differential inclusions and their discretizations, which extends previous results and leads to exponentially fast recovery of sparse signals before wrong ones are selected.
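The Linearized Bregman iteration mentioned above, in its standard textbook form, alternates residual accumulation with a shrinkage (soft-thresholding) step. The sketch below uses an illustrative step size and threshold, not the paper's tuned settings:

```python
import numpy as np

def linearized_bregman(A, y, lam=20.0, n_iter=20000):
    """Linearized Bregman iteration for sparse recovery from y = A x.

    Accumulates the residual gradient in a dual variable z and reads off
    the primal iterate by soft thresholding -- a simple rule that is easy
    to parallelize.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # stable step size
    z = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z += step * (A.T @ (y - A @ x))
        x = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)  # shrink(z, lam)
    return x

# Recover a 3-sparse signal from 40 random Gaussian measurements
# (synthetic example; support should be found at indices 5, 37, 80).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [3.0, -2.0, 4.0]
x_hat = linearized_bregman(A, A @ x_true)
```

Each step is one matrix-vector product plus a componentwise threshold, which is what makes the rule attractive for large-scale and parallel settings.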

11 citations


Posted Content
TL;DR: In this paper, an exact regularization of the obstacle problem in terms of an L1-like penalty on the variational problem is proposed; the free boundary inherent in the obstacle problem arises naturally in the energy minimization, without any need for problem-specific or complicated discretization.
Abstract: We construct an efficient numerical scheme for solving obstacle problems in divergence form. The numerical method is based on a reformulation of the obstacle in terms of an L1-like penalty on the variational problem. The reformulation is an exact regularizer in the sense that for large (but finite) penalty parameter, we recover the exact solution. Our formulation is applied to classical elliptic obstacle problems as well as some related free boundary problems, for example, the two-phase membrane problem and the Hele-Shaw model. One advantage of the proposed method is that the free boundary inherent in the obstacle problem arises naturally in our energy minimization without any need for problem-specific or complicated discretization. In addition, our scheme also works for nonlinear variational inequalities arising from convex minimization problems.
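A one-dimensional sketch of the penalized formulation, assuming a simple parabolic obstacle, zero right-hand side, and plain subgradient descent (the paper's actual scheme and discretization may differ):

```python
import numpy as np

def obstacle_solve(phi, f, mu=50.0, n_iter=20000):
    """Subgradient descent for the L1-penalized obstacle problem on [0, 1].

    Minimizes 1/2 * int |u'|^2 - int f u + mu * int (phi - u)_+ with
    u = 0 at both endpoints; for mu large enough the penalty is exact and
    the constraint u >= phi emerges without tracking the free boundary.
    """
    n = len(phi)
    h = 1.0 / (n - 1)
    u = np.zeros(n)
    tau = 0.4 * h ** 2  # stable step for the discrete Laplacian
    for _ in range(n_iter):
        lap = np.zeros(n)
        lap[1:-1] = (2 * u[1:-1] - u[:-2] - u[2:]) / h ** 2  # -u'' inside
        grad = lap - f - mu * (u < phi)  # subgradient of the penalty term
        u[1:-1] -= tau * grad[1:-1]      # endpoints stay fixed at zero
    return u

x = np.linspace(0.0, 1.0, 101)
phi = 0.3 - 2.0 * (x - 0.5) ** 2  # obstacle, negative near the endpoints
u = obstacle_solve(phi, f=np.zeros_like(x))
```

The contact set (where the solution touches the obstacle) appears automatically from the minimization, illustrating how the penalty formulation avoids any explicit free-boundary tracking.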

9 citations


Journal ArticleDOI
01 Nov 2014
TL;DR: This article gives self-contained introductions to l1 optimization for sparse vectors, L1 optimization for finding functions with compact support, and computing sparse solutions from measurements corrupted by unknown noise.
Abstract: A sparse signal is a signal that has very few nonzero elements or one that becomes so under a basis change or through a certain transform. Exploiting sparsity has become a common task in data sciences. Compressed sensing, regularized regression (e.g., LASSO), and regularized inverse problems (e.g., total variation image reconstruction) have made l1 optimization a central tool in data processing problems. As the name suggests, l1 optimization problems recover sparse solutions by solving an optimization problem involving an l1-norm. Today, the scope of l1 optimization is quickly expanding. The size, complexity, and diversity of instances have grown significantly. Beyond 1D signals and 2D images, high-dimensional quantities such as video, 4D CT, and multi-way tensors have become the data or unknown variables in models. New applications have motivated structured solutions to optimization problems that significantly generalize our notion of sparsity. Such applications look for low-rank matrices or tensors, sparse graphs, tree structured data representations, and sparse representations involving only a few dictionary atoms. This article gives self-contained introductions to l1 optimization for sparse vectors (Section 2), L1 optimization for finding functions with compact support (Section 3), and computing sparse solutions from measurements that are corrupted by unknown noise (Section 4).

Posted Content
TL;DR: This work proposes a method for calculating Wannier functions of periodic solids directly from a modified variational principle for the energy, subject to the requirement that the Wanniers functions are orthogonal to all their translations ("shift-orthogonality").
Abstract: We propose a method for calculating Wannier functions of periodic solids directly from a modified variational principle for the energy, subject to the requirement that the Wannier functions are orthogonal to all their translations ("shift-orthogonality"). Localization is achieved by adding an $L_1$ regularization term to the energy functional. This approach results in "compressed" Wannier modes with compact support, where one parameter $\mu$ controls the trade-off between the accuracy of the total energy and the size of the support of the Wannier modes. Efficient algorithms for shift-orthogonalization and solution of the variational minimization problem are demonstrated.

Posted Content
TL;DR: This paper presents a fast algorithm for projecting a given function onto the set of shift orthogonal functions; the algorithm can be parallelized easily, its computational complexity is bounded by $O(M\log(M))$, where $M$ is the number of coefficients used for storing the input, and its derivation introduces a particular class of basis called Shift Orthogonal Basis Functions.
Abstract: This paper presents a fast algorithm for projecting a given function to the set of shift orthogonal functions (i.e. the set containing functions with unit $L^2$ norm that are orthogonal to their prescribed shifts). The algorithm can be parallelized easily and its computational complexity is bounded by $O(M\log(M))$, where $M$ is the number of coefficients used for storing the input. To derive the algorithm, a particular class of basis functions, called Shift Orthogonal Basis Functions, is introduced and some theory regarding them is developed.


Posted Content
TL;DR: A convex variational principle is proposed to find sparse representations of the low-lying eigenspace of symmetric matrices; in the context of electronic structure calculation, this corresponds to a sparse density matrix minimization algorithm with $\ell_1$ regularization.
Abstract: We propose a convex variational principle to find sparse representation of low-lying eigenspace of symmetric matrices. In the context of electronic structure calculation, this corresponds to a sparse density matrix minimization algorithm with $\ell_1$ regularization. The minimization problem can be efficiently solved by a split Bregman iteration type algorithm. We further prove that from any initial condition, the algorithm converges to a minimizer of the variational principle.