
Showing papers by "Stanley Osher published in 2017"


BookDOI
06 Jan 2017
TL;DR: This book presents very versatile aspects of splitting methods and their applications, motivating the cross-fertilization of ideas.
Abstract: This book is about computational methods based on operator splitting. It consists of twenty-three chapters written by recognized splitting method contributors and practitioners, and covers a vast spectrum of topics and application areas, including computational mechanics, computational physics, image processing, wireless communication, nonlinear optics, and finance. Therefore, the book presents very versatile aspects of splitting methods and their applications, motivating the cross-fertilization of ideas.

161 citations


Journal ArticleDOI
TL;DR: Numerical simulations in image denoising, inpainting, and superresolution problems show that LDMM is a powerful method in image processing.
Abstract: In this paper, we propose a novel low dimensional manifold model (LDMM) and apply it to some image processing problems. LDMM is based on the fact that the patch manifolds of many natural images have low dimensional structure. Based on this fact, the dimension of the patch manifold is used as a regularization to recover the image. The key step in LDMM is to solve a Laplace-Beltrami equation over a point cloud, which is done with the point integral method. The point integral method enforces the sample point constraints correctly and gives better results than the standard graph Laplacian. Numerical simulations in image denoising, inpainting, and superresolution problems show that LDMM is a powerful method in image processing.
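The patch-manifold idea is concrete enough to sketch. The snippet below is a rough illustration, not the authors' algorithm: it extracts overlapping patches, builds a Gaussian-weighted nearest-neighbor graph on them, and applies a single graph-Laplacian averaging step, whereas the paper solves a Laplace-Beltrami equation with the point integral method. The patch size, kernel width, and neighbor count are arbitrary assumptions.

```python
# Illustrative stand-in for the patch-manifold idea (not LDMM itself):
# extract overlapping patches, build a weighted patch graph, smooth once.
import numpy as np

def extract_patches(img, s=6):
    """Collect all s-by-s overlapping patches of a 2-D image as rows."""
    H, W = img.shape
    return np.array([img[i:i + s, j:j + s].ravel()
                     for i in range(H - s + 1) for j in range(W - s + 1)])

def patch_graph_smooth(patches, k=10, sigma=0.1):
    """One explicit graph-Laplacian averaging step on the patch set."""
    d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / sigma ** 2)
    far = np.argsort(d2, axis=1)[:, k + 1:]       # drop all but k nearest neighbors
    np.put_along_axis(w, far, 0.0, axis=1)
    w /= w.sum(axis=1, keepdims=True)              # row-stochastic weights
    return w @ patches                             # averaged (smoothed) patches

rng = np.random.default_rng(0)
clean = np.outer(np.linspace(0, 1, 24), np.linspace(0, 1, 24))
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
print(patch_graph_smooth(extract_patches(noisy)).shape)
```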

136 citations


Journal ArticleDOI
TL;DR: The numerical results in semi-supervised learning and image inpainting show that the weighted nonlocal Laplacian is a reliable and efficient interpolation method and it is fast and easy to implement.
Abstract: Inspired by the nonlocal methods in image processing and the point integral method, we introduce a novel weighted nonlocal Laplacian method to compute a continuous interpolation function on a point cloud in high dimensional space. The numerical results in semi-supervised learning and image inpainting show that the weighted nonlocal Laplacian is a reliable and efficient interpolation method. In addition, it is fast and easy to implement.
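As a rough illustration of the reweighting idea (not the exact weighted nonlocal Laplacian of the paper), the sketch below performs graph-based interpolation on a point cloud while upweighting edges that touch labeled points by a factor of roughly |S|/|labeled|; the Gaussian kernel, the Jacobi iteration, and all parameter values are assumptions.

```python
# Graph-harmonic interpolation with WNLL-style upweighting of labeled edges
# (a crude sketch under assumed parameters, not the paper's exact formulation).
import numpy as np

def wnll_like_interpolate(X, labeled_idx, labeled_vals, sigma=0.5, mu=None, iters=500):
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / sigma ** 2)
    np.fill_diagonal(W, 0.0)
    if mu is None:
        mu = n / max(len(labeled_idx), 1)          # roughly |S| / |labeled|
    A = W.copy()
    A[:, labeled_idx] *= mu                        # upweight edges touching labels
    A[labeled_idx, :] *= mu
    u = np.zeros(n)
    u[labeled_idx] = labeled_vals
    free = np.setdiff1d(np.arange(n), labeled_idx)
    for _ in range(iters):                         # Jacobi iteration of the weighted
        u[free] = (A[free] @ u) / A[free].sum(axis=1)  # graph-harmonic equation
    return u

rng = np.random.default_rng(1)
X = rng.standard_normal((60, 3))
labels = np.array([0, 1, 2, 3, 4])
u = wnll_like_interpolate(X, labels, X[labels, 0])  # interpolate the first coordinate
print(np.round(u[:8], 3))
```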

66 citations


Journal ArticleDOI
TL;DR: A parallel method is developed for solving possibly non-convex time-dependent Hamilton–Jacobi equations arising from optimal control and differential game problems, and a coordinate descent method is suggested for the minimization procedure in the Hopf formula.
Abstract: In this paper we develop a parallel method for solving possibly non-convex time-dependent Hamilton–Jacobi equations arising from optimal control and differential game problems. The subproblems are independent, so they can be implemented in an embarrassingly parallel fashion, which usually has an ideal parallel speedup. The algorithm is proposed to overcome the curse of dimensionality (Bellman in Adaptive control processes: a guided tour. Princeton University Press, Princeton, 1961; Dynamic programming. Princeton University Press, Princeton, 1957) when solving HJ PDEs. We extend previous work by Chow et al. (Algorithm for overcoming the curse of dimensionality for certain non-convex Hamilton–Jacobi equations, projections and differential games, UCLA CAM report, pp 16–27, 2016) and Darbon and Osher (Algorithms for overcoming the curse of dimensionality for certain Hamilton–Jacobi equations arising in control theory and elsewhere, UCLA CAM report, pp 15–50, 2015) and apply a generalized Hopf formula to solve HJ PDEs involving time-dependent and perhaps non-convex Hamiltonians. We suggest a coordinate descent method for the minimization procedure in the Hopf formula. This method is preferable when even the evaluation of the function value itself requires some computational effort, and also when we handle higher-dimensional optimization problems. For the integral with respect to time inside the generalized Hopf formula, we suggest using a numerical quadrature rule. Together with our suggestion to perform numerical differentiation to minimize the number of calculation procedures in each iteration step, we are bound to have numerical errors in our computations. These errors can be effectively controlled by choosing an appropriate mesh size in time, and the method does not use a mesh in space. The use of multiple initial guesses is suggested to overcome possibly multiple local extrema in the case when non-convex Hamiltonians are considered. Our method is expected to have wide application in control theory and differential game problems, and elsewhere.
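A minimal sketch of the ingredients described above, under illustrative assumptions: a Hopf-type objective with the time integral approximated by a trapezoid rule, minimized by coordinate descent with central-difference derivatives. The Hamiltonian, initial data, and step sizes are chosen only for demonstration and are not from the paper.

```python
# Hopf-formula objective with time quadrature, minimized by coordinate descent
# using numerical differentiation (a sketch; parameters are assumptions).
import numpy as np

def hopf_objective(p, x, t, H, Jstar, n_quad=32):
    s = np.linspace(0.0, t, n_quad)                   # quadrature nodes in time
    vals = np.array([H(p, si) for si in s])
    integral = ((vals[:-1] + vals[1:]) / 2 * np.diff(s)).sum()  # trapezoid rule
    return Jstar(p) + integral - x @ p                # minimize; phi = -min value

def coordinate_descent(f, p0, step=0.1, h=1e-5, sweeps=200):
    p = p0.copy()
    for _ in range(sweeps):
        for i in range(p.size):                       # one coordinate at a time
            e = np.zeros_like(p); e[i] = h
            grad_i = (f(p + e) - f(p - e)) / (2 * h)  # central difference
            p[i] -= step * grad_i
    return p

# Toy data: H(p, s) = |p| (time-independent), J(x) = 0.5|x|^2 so J*(p) = 0.5|p|^2.
H = lambda p, s: np.linalg.norm(p)
Jstar = lambda p: 0.5 * p @ p
x, t = np.array([1.0, -2.0, 0.5]), 0.7
f = lambda p: hopf_objective(p, x, t, H, Jstar)
p_star = coordinate_descent(f, np.zeros_like(x))
print("phi(x,t) approx:", -f(p_star))
```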

53 citations


Journal ArticleDOI
TL;DR: In this article, a graph-based nonlocal total variation method is proposed for unsupervised classification of hyperspectral images (HSI), where the variational problem is solved by the primal-dual hybrid gradient algorithm.
Abstract: In this paper, a graph-based nonlocal total variation method is proposed for unsupervised classification of hyperspectral images (HSI). The variational problem is solved by the primal-dual hybrid gradient algorithm. By squaring the labeling function and using a stable simplex clustering routine, an unsupervised clustering method with random initialization can be implemented. The effectiveness of this proposed algorithm is illustrated on both synthetic and real-world HSI, and numerical results show that the proposed algorithm outperforms other standard unsupervised clustering methods, such as spherical $K$-means, nonnegative matrix factorization, and the graph-based Merriman–Bence–Osher scheme.
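The clustering functional in the paper (multi-class labeling functions with a simplex clustering routine) is more involved than can be reproduced here, but the primal-dual hybrid gradient iteration itself is easy to sketch on a scalar graph total-variation model; the operator, data, and step sizes below are assumptions for illustration.

```python
# Primal-dual hybrid gradient (Chambolle-Pock) sketch for a scalar graph-TV model:
# min_u ||K u||_1 + (lam/2)||u - f||^2, K = signed edge-difference operator.
import numpy as np

def pdhg_graph_tv(K, f, lam=1.0, iters=300):
    L = np.linalg.norm(K, 2)                  # operator norm; need sigma*tau*L^2 <= 1
    sigma = tau = 0.95 / L
    u = f.copy(); u_bar = f.copy(); p = np.zeros(K.shape[0])
    for _ in range(iters):
        p = np.clip(p + sigma * (K @ u_bar), -1.0, 1.0)     # prox of the l1 conjugate
        u_new = (u - tau * (K.T @ p) + tau * lam * f) / (1 + tau * lam)
        u_bar = 2 * u_new - u                                # extrapolation step
        u = u_new
    return u

# Tiny example: a path graph on 5 nodes with its edge-difference operator K.
K = np.array([[-1, 1, 0, 0, 0],
              [0, -1, 1, 0, 0],
              [0, 0, -1, 1, 0],
              [0, 0, 0, -1, 1]], float)
f = np.array([0.0, 0.1, 0.0, 1.0, 0.9])
print(pdhg_graph_tv(K, f, lam=5.0))
```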

38 citations


Posted Content
TL;DR: Algorithms are developed to overcome the curse of dimensionality in possibly non-convex, state-dependent Hamilton-Jacobi equations (HJ PDEs) arising from optimal control and differential game problems, and elsewhere.
Abstract: In this paper, we develop algorithms to overcome the curse of dimensionality in possibly non-convex state-dependent Hamilton-Jacobi equations (HJ PDEs) arising from optimal control and differential game problems. The subproblems are independent and can be implemented in an embarrassingly parallel fashion. This is an ideal setup for perfect scaling in parallel computing. The algorithm is proposed to overcome the curse of dimensionality [1, 2] when solving HJ PDEs. The major contribution of the paper is to change an optimization problem over a space of curves into an optimization problem over a single vector, which goes beyond [23]. We extend [5, 6, 8], and conjecture a (Lax-type) minimization principle when the Hamiltonian is convex, as well as a (Hopf-type) maximization principle when the Hamiltonian is non-convex. The conjectured Hopf-type maximization principle is a generalization of the well-known Hopf formula [11, 16, 30]. We validate the formula under restricted assumptions, and refer our readers to [57], which validates our conjectures in a more general setting after a previous version of our paper. We conjecture that the weakest assumption is a pseudoconvexity assumption similar to [46]. The optimization problems are of the same dimension as that of the HJ PDE. We suggest a coordinate descent method for the minimization procedure in the generalized Lax/Hopf formula, and numerical differentiation is used to compute the derivatives. This method is preferable since the evaluation of the function value itself requires some computational effort, especially when we handle high-dimensional optimization problems. The use of multiple initial guesses and a certificate of correctness are suggested to overcome possibly multiple local extrema, since the optimization process is no longer convex. Our method is expected to have application in control theory and differential game problems, and elsewhere.
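The multiple-initial-guess strategy mentioned above can be sketched generically: run a local optimizer from several random starting points and keep the best minimizer found. The toy objective, the choice of Nelder-Mead, and the sampling box are assumptions, not choices from the paper.

```python
# Multi-start local optimization sketch for non-convex minimization.
import numpy as np
from scipy.optimize import minimize

def multistart_minimize(f, dim, n_starts=20, box=5.0, seed=0):
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(-box, box, dim)       # random initial guess
        res = minimize(f, x0, method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res                          # keep the lowest value found so far
    return best

f = lambda p: np.sin(3 * p).sum() + 0.1 * p @ p   # toy non-convex objective
print(multistart_minimize(f, dim=4).fun)
```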

32 citations


Posted Content
TL;DR: In this article, a Fisher information regularization is proposed to approximate the optimal transport distance; the regularized problem is shown to be smooth and strictly convex, so many classical fast algorithms are available.
Abstract: We propose a fast algorithm to approximate the optimal transport distance. The main idea is to add a Fisher information regularization to the dynamical formulation of the problem, which originates with Benamou and Brenier. The regularized problem is shown to be smooth and strictly convex, so many classical fast algorithms are available. In this paper, we adopt Newton's method, which converges to the minimizer at a quadratic rate. Several numerical examples are provided.
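Since the regularized problem is smooth and strictly convex, Newton's method applies directly. The sketch below shows a plain Newton iteration on a toy strictly convex objective, not the discretized Benamou-Brenier functional with Fisher information regularization used in the paper.

```python
# Plain Newton iteration for a smooth, strictly convex toy objective.
import numpy as np

def newton(grad, hess, x0, tol=1e-10, max_iter=50):
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x -= np.linalg.solve(hess(x), g)   # Newton step: solve H dx = g
    return x

# Toy objective f(x) = 0.5 x^T A x - b^T x + sum(exp(x)).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad = lambda x: A @ x - b + np.exp(x)
hess = lambda x: A + np.diag(np.exp(x))
print(newton(grad, hess, np.zeros(2)))
```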

31 citations


Posted Content
TL;DR: In this article, a connection is established between non-convex optimization methods for training deep neural networks and nonlinear partial differential equations (PDEs): relaxation techniques arising in statistical physics, which have already been used successfully in this context, are reinterpreted as solutions of a viscous Hamilton-Jacobi PDE.
Abstract: In this paper we establish a connection between non-convex optimization methods for training deep neural networks and nonlinear partial differential equations (PDEs). Relaxation techniques arising in statistical physics, which have already been used successfully in this context, are reinterpreted as solutions of a viscous Hamilton-Jacobi PDE. Using a stochastic control interpretation allows us to prove that the modified algorithm performs better in expectation than stochastic gradient descent. Well-known PDE regularity results allow us to analyze the geometry of the relaxed energy landscape, confirming empirical evidence. The PDE is derived from a stochastic homogenization problem, which arises in the implementation of the algorithm. The algorithms scale well in practice and can effectively tackle the high dimensionality of modern neural networks.
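A rough sketch of the kind of relaxed update the viscous Hamilton-Jacobi interpretation describes: an inner Langevin loop estimates a local Gibbs average around the current iterate, and the outer step moves toward it. This mirrors entropy-style relaxation schemes in spirit only; the noise scale, averaging weights, and step sizes are assumptions, not the authors' algorithm.

```python
# Sketch of a relaxed (local-entropy style) gradient update on a toy objective.
import numpy as np

def relaxed_sgd(grad, x0, gamma=1.0, eta=0.1, inner_steps=20,
                inner_eta=0.05, noise=0.01, outer_steps=100, seed=0):
    """Outer loop follows an approximate gradient of a relaxed energy;
    the inner Langevin loop averages samples around the current iterate x."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float)
    for _ in range(outer_steps):
        y, y_avg = x.copy(), x.copy()
        for _ in range(inner_steps):
            # Langevin step on f(y) + (gamma/2)||y - x||^2
            y -= inner_eta * (grad(y) + gamma * (y - x))
            y += noise * np.sqrt(inner_eta) * rng.standard_normal(y.shape)
            y_avg = 0.75 * y_avg + 0.25 * y        # running average of the samples
        x -= eta * gamma * (x - y_avg)             # approximate relaxed gradient step
    return x

grad = lambda x: 4 * x ** 3 - 4 * x                # toy double-well f(x) = (x^2 - 1)^2
print(relaxed_sgd(grad, np.array([2.0])))
```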

20 citations


Journal ArticleDOI
TL;DR: A fast new numerical method is presented for redistancing objective functions based on the Hopf-Lax formula; it is expected to be generalized and widely applied in many fields, such as computational fluid dynamics, the minimal surface problem, and elsewhere.
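For the eikonal Hamiltonian, the Hopf-Lax value reduces to a minimum of the initial data over a ball, and the distance to the zero level set is the smallest radius at which that minimum reaches zero. The sketch below evaluates this by crude random sampling and bisection; the paper's method is far more efficient, and the sampling scheme and test function here are assumptions.

```python
# Hopf-Lax redistancing sketch for H(p) = |p|, via sampling and bisection.
import numpy as np

def hopf_lax_value(x, t, J, n_samples=4000, seed=0):
    """phi(x,t) = min over the ball |y - x| <= t of J(y), by random sampling."""
    rng = np.random.default_rng(seed)
    d = x.size
    y = rng.standard_normal((n_samples, d))
    y /= np.linalg.norm(y, axis=1, keepdims=True)            # directions on the sphere
    r = t * rng.uniform(0, 1, (n_samples, 1)) ** (1.0 / d)   # uniform radii in the ball
    return min(J(x), np.min([J(p) for p in x + r * y]))

def redistance(x, J, t_max=5.0, iters=40):
    """Distance from x to {J = 0}: smallest t with the Hopf-Lax value <= 0
    (assumes J(x) > 0), found by bisection."""
    lo, hi = 0.0, t_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if hopf_lax_value(x, mid, J) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Example: J vanishes on the unit circle; distance from (2, 0) should be about 1.
J = lambda y: (y @ y - 1.0) * np.exp(y[0])    # a non-signed-distance level set function
print(redistance(np.array([2.0, 0.0]), J))
```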

17 citations


Journal ArticleDOI
TL;DR: The derivation of this method is disciplined, relying on a saddle point formulation of the convex problem, and can be adapted to a wide range of other constrained convex optimization problems.
Abstract: We solve the non-linearized and linearized obstacle problems efficiently using a primal-dual hybrid gradient method involving projection and/or an $L^1$ penalty. Since this method requires no matrix inversions or explicit identification of the contact set, we find that, on a variety of test problems, it achieves the precision of previous methods with a speed-up of 1–2 orders of magnitude. The derivation of this method is disciplined, relying on a saddle point formulation of the convex problem, and can be adapted to a wide range of other constrained convex optimization problems.
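A minimal sketch of a primal-dual hybrid gradient iteration with projection for a one-dimensional discrete obstacle problem (a loaded membrane constrained to stay above an obstacle). The discretization, step sizes, and test data are assumptions for illustration, not the paper's setup.

```python
# PDHG with projection for min_u (1/(2h))||D u||^2 - h <f, u>  s.t.  u >= psi,
# where D is the forward-difference operator with zero boundary values.
import numpy as np

def pdhg_obstacle(f, psi, h, iters=5000):
    n = f.size
    D = np.eye(n + 1, n) - np.eye(n + 1, n, k=-1)
    sigma = tau = 0.45                      # sigma * tau * ||D||^2 < 1 since ||D||^2 < 4
    u = np.maximum(psi, 0.0); u_bar = u.copy(); p = np.zeros(n + 1)
    for _ in range(iters):
        p = (p + sigma * (D @ u_bar)) / (1.0 + sigma * h)            # prox of (h/2)||p||^2
        u_new = np.maximum(psi, u - tau * (D.T @ p) + tau * h * f)   # project onto u >= psi
        u_bar = 2 * u_new - u
        u = u_new
    return u

# Example: downward load f = -10 on (0, 1), constant obstacle psi = -0.3.
n = 99; h = 1.0 / (n + 1)
u = pdhg_obstacle(np.full(n, -10.0), np.full(n, -0.3), h)
print(u.min())      # the membrane is stopped by the obstacle at about -0.3
```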

13 citations


Proceedings ArticleDOI
01 Oct 2017
TL;DR: This paper establishes a connection between non-convex optimization and nonlinear partial differential equations (PDEs) and suggests new algorithms based on the non-viscous Hamilton-Jacobi PDE that can effectively tackle the high dimensionality of modern neural networks.
Abstract: This paper establishes a connection between non-convex optimization and nonlinear partial differential equations (PDEs). We interpret empirically successful relaxation techniques motivated from statistical physics for training deep neural networks as solutions of a viscous Hamilton-Jacobi (HJ) PDE. The underlying stochastic control interpretation allows us to prove that these techniques perform better than stochastic gradient descent. Our analysis provides insight into the geometry of the energy landscape and suggests new algorithms based on the non-viscous Hamilton-Jacobi PDE that can effectively tackle the high dimensionality of modern neural networks.

Journal ArticleDOI
TL;DR: In this paper, a low-dimensional manifold model (LDMM) is proposed for attenuating extremely strong noise in seismic data, particularly in low signal-to-noise ratio situations.
Abstract: We have found that seismic data can be described in a low-dimensional manifold, and then we investigated using a low-dimensional manifold model (LDMM) method for extremely strong noise attenuation. The LDMM supposes the dimension of the patch manifold of seismic data should be low. In other words, the degree of freedom of the patches should be low. Under the linear-events assumption on a patch, the patch can be parameterized by the intercept and slope of the event, if the seismic wavelet is identical everywhere. The denoising problem is formed as an optimization problem, including a fidelity term and an LDMM regularization term. We have tested LDMM on synthetic seismic data with different noise levels. LDMM achieves better denoised results in comparison with the Fourier, curvelet and nonlocal mean filtering methods, especially in the presence of strong noise or low signal-to-noise ratio situations. We have also tested LDMM on field records, indicating that LDMM is a method for handling relatively ...

Journal ArticleDOI
TL;DR: Compressed modes are solutions of the Laplace equation with a potential and a subgradient term, which comes from the addition of an $L^1$ penalty in the corresponding variational principle...
Abstract: Compressed modes are solutions of the Laplace equation with a potential and a subgradient term. The subgradient term comes from the addition of an $L^1$ penalty in the corresponding variational principle...
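The $L^1$ penalty typically enters splitting algorithms through its proximal map, soft-thresholding. A minimal sketch of that single ingredient follows; the full compressed-modes computation also enforces orthonormality of the modes, which is not shown here.

```python
# Soft-thresholding: the proximal map of the l1 penalty.
import numpy as np

def soft_threshold(x, lam):
    """Proximal map of lam*||x||_1: shrink each entry toward zero by lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

print(soft_threshold(np.array([-2.0, -0.3, 0.1, 1.5]), 0.5))
```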

Journal ArticleDOI
TL;DR: In this paper, the authors propose a new method for calculating the time-optimal guidance control for a multiple vehicle pursuit-evasion system, where a joint differential game of k pursuing vehicles relative to the evader is constructed, and a Hamilton-Jacobi-Isaacs (HJI) equation that describes the evolution of the value function is formulated.
Abstract: Presented is a new method for calculating the time-optimal guidance control for a multiple vehicle pursuit-evasion system. A joint differential game of k pursuing vehicles relative to the evader is constructed, and a Hamilton-Jacobi-Isaacs (HJI) equation that describes the evolution of the value function is formulated. The value function is built such that the terminal cost is the squared distance from the boundary of the terminal surface. Additionally, all vehicles are assumed to have bounded controls. Typically, a joint state space constructed in this way would have too large a dimension to be solved with existing grid-based approaches. The value function is computed efficiently in high-dimensional space, without a discrete grid, using the generalized Hopf formula. The optimal time-to-reach is iteratively solved, and the optimal control is inferred from the gradient of the value function.

Proceedings ArticleDOI
01 May 2017
TL;DR: This paper proposes accelerated methods for EEG source imaging based on TV regularization and its variants; they converge more rapidly than state-of-the-art methods and have the potential to achieve real-time temporal resolution.
Abstract: The electroencephalography (EEG) signal has been playing a crucial role in the clinical diagnosis and treatment of neurological diseases. However, it is very challenging to efficiently reconstruct a high-resolution brain image from very few scalp EEG measurements due to the high ill-posedness. Recently, some efforts have been devoted to developing EEG source reconstruction methods using various forms of regularization, including the $\ell_1$-norm, the total variation (TV), as well as the fractional-order TV. However, since the high-dimensional data are very large, these methods are difficult to implement. In this paper, we propose accelerated methods for EEG source imaging based on the TV regularization and its variants. Since the gradient/fractional-order gradient operators have coordinate-friendly structures, we apply the Chambolle-Pock and ARock algorithms, along with diagonal preconditioning. In our algorithms, the coordinates of the primal and dual variables are updated in an asynchronously parallel fashion. A variety of experiments show that the proposed algorithms converge more rapidly than the state-of-the-art methods and have the potential to achieve real-time temporal resolution.

Proceedings ArticleDOI
01 Mar 2017
TL;DR: Improved classification accuracy is demonstrated over data-mining techniques like k-means, unmixing techniques like Hierarchical Non-Negative Matrix Factorization, and graph-based methods like Non-Local Total Variation.
Abstract: We propose a semi-supervised algorithm for processing and classification of hyperspectral imagery. For initialization, we keep 20% of the data intact, and use Principal Component Analysis to discard voxels from noisier bands and pixels. Then, we use either an Accelerated Proximal Gradient algorithm (APGL), or a modified APGL algorithm with a penalty term for distance between inpainted pixels and endmembers (APGL Hyp), on the initialized datacube to inpaint the missing data. APGL and APGL Hyp are distinguished by performance on datasets with full pixels removed or extreme noise. This inpainting technique results in band-by-band datacube sharpening and removal of noise from individual spectral signatures. We can also classify the inpainted cube by assigning each pixel to its nearest endmember via Euclidean distance. We demonstrate improved accuracy in classification over data-mining techniques like k-means, unmixing techniques like Hierarchical Non-Negative Matrix Factorization, and graph-based methods like Non-Local Total Variation.
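The final classification step, assigning each pixel to its nearest endmember in Euclidean distance, is simple enough to sketch; the inpainting stage (APGL) is not reproduced, and the array shapes and synthetic data below are assumptions.

```python
# Nearest-endmember classification of hyperspectral pixels by Euclidean distance.
import numpy as np

def classify_by_endmember(pixels, endmembers):
    """Assign each pixel (row) to its nearest endmember in Euclidean distance."""
    d2 = ((pixels[:, None, :] - endmembers[None, :, :]) ** 2).sum(-1)
    return np.argmin(d2, axis=1)

rng = np.random.default_rng(0)
endmembers = rng.random((4, 30))                 # 4 endmembers, 30 spectral bands
pixels = endmembers[rng.integers(0, 4, 200)] + 0.01 * rng.standard_normal((200, 30))
print(np.bincount(classify_by_endmember(pixels, endmembers)))
```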

Proceedings ArticleDOI
TL;DR: In this paper, a primal-dual method for efficient numerical solution of the Hamilton-Jacobi (HJ) equation for time-optimal control problems is presented; the solution at each point is completely independent, which allows a massively parallel implementation if solutions at multiple points are desired.
Abstract: Presented is a method for efficient computation of the Hamilton-Jacobi (HJ) equation for time-optimal control problems using the generalized Hopf formula. Typically, numerical methods to solve the HJ equation rely on a discrete grid of the solution space and exhibit exponential scaling with dimension. The generalized Hopf formula avoids the use of grids and numerical gradients by formulating an unconstrained convex optimization problem. The solution at each point is completely independent, which allows a massively parallel implementation if solutions at multiple points are desired. This work presents a primal-dual method for efficient numerical solution and shows how the resulting optimal trajectory can be generated directly from the solution of the Hopf formula, without further optimization. The examples presented have execution times on the order of milliseconds, and experiments show that the computation scales approximately polynomially in dimension with very small high-order coefficients.