
Showing papers by "Stanley Osher published in 2016"


Journal ArticleDOI
TL;DR: Darbon et al. use the classical Hopf formulas to solve initial value problems for HJ PDEs without grids and obtain methods that appear to be polynomial in the dimension.
Abstract: It is well known that time-dependent Hamilton–Jacobi–Isaacs partial differential equations (HJ PDEs) play an important role in analyzing continuous dynamic games and control theory problems. An important tool for such problems when they involve geometric motion is the level set method (Osher and Sethian in J Comput Phys 79(1):12–49, 1988). This was first used for reachability problems in Mitchell et al. (IEEE Trans Autom Control 50(171):947–957, 2005) and Mitchell and Tomlin (J Sci Comput 19(1–3):323–346, 2003). The cost of these algorithms and, in fact, all PDE numerical approximations is exponential in the space dimension and time. In Darbon (SIAM J Imaging Sci 8(4):2268–2293, 2015), some connections between HJ PDE and convex optimization in many dimensions are presented. In this work, we propose and test methods for solving a large class of the HJ PDE relevant to optimal control problems without the use of grids or numerical approximations. Rather we use the classical Hopf formulas for solving initial value problems for HJ PDE (Hopf in J Math Mech 14:951–973, 1965). We have noticed that if the Hamiltonian is convex and positively homogeneous of degree one (the latter holds for all geometrically based level set motion and for control and differential game problems), then very fast methods exist to solve the resulting optimization problem. This is very much related to fast methods for solving problems in compressive sensing based on $\ell_1$ optimization (Goldstein and Osher in SIAM J Imaging Sci 2(2):323–343, 2009; Yin et al. in SIAM J Imaging Sci 1(1):143–168, 2008). We seem to obtain methods which are polynomial in the dimension. Our algorithm is very fast, requires very low memory and is totally parallelizable. We can evaluate the solution and its gradient in very high dimensions at $10^{-4}$–$10^{-8}$ s per evaluation on a laptop. We carefully explain how to compute numerically the optimal control from the numerical solution of the associated initial value HJ PDE for a class of optimal control problems. We show that our algorithms compute all the quantities we need to obtain the controller easily. In addition, as a step often needed in this procedure, we have developed a new and equally fast way to find, in very high dimensions, the closest point $y$ lying in the union of a finite number of compact convex sets $\Omega$ to any point $x$ exterior to $\Omega$. We can also compute the distance to these sets much faster than Dijkstra-type “fast methods,” e.g., Dijkstra (Numer Math 1:269–271, 1959). The term “curse of dimensionality” was coined by Bellman (Adaptive control processes, a guided tour. Princeton University Press, Princeton, 1961; Dynamic programming. Princeton University Press, Princeton, 1957) when considering problems in dynamic optimization.
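The grid-free evaluation described above can be illustrated with a tiny numerical sketch. The code below evaluates the Hopf formula u(x, t) = sup_p { <x, p> - J*(p) - t H(p) } at a single point by generic unconstrained optimization, with the illustrative choices J(x) = ||x||^2/2 (so J*(p) = ||p||^2/2) and the positively 1-homogeneous Hamiltonian H(p) = ||p||_2. The function name, the use of SciPy's L-BFGS-B, and the test point are assumptions of this sketch; it is not the paper's accelerated split Bregman-type solver.

```python
# Hedged sketch: evaluate the Hopf formula
#   u(x, t) = sup_p { <x, p> - J*(p) - t H(p) }
# at a single point by unconstrained minimization, with the illustrative
# choices J(x) = ||x||^2 / 2  (so J*(p) = ||p||^2 / 2)  and  H(p) = ||p||_2,
# which is convex and positively homogeneous of degree one.
# This is NOT the paper's accelerated solver; it only shows the grid-free idea.
import numpy as np
from scipy.optimize import minimize

def hopf_value(x, t):
    """Approximate u(x, t) via the Hopf formula for the choices above."""
    def objective(p):
        return 0.5 * p.dot(p) - x.dot(p) + t * np.linalg.norm(p)
    res = minimize(objective, x0=x.copy(), method="L-BFGS-B")
    return -res.fun, res.x   # value u(x, t); the maximizer approximates grad_x u

if __name__ == "__main__":
    d = 100                              # dimension; the evaluation is grid-free
    x = np.full(d, 2.0 / np.sqrt(d))     # ||x|| = 2
    u, p_star = hopf_value(x, t=1.0)
    # For these choices u(x, t) = (max(||x|| - t, 0))^2 / 2 = 0.5 here.
    print(u, np.linalg.norm(p_star))
```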

116 citations


Posted Content
TL;DR: This work proposes and tests methods for solving a large class of the HJ PDE relevant to optimal control problems without the use of grids or numerical approximations and develops a new and equally fast way to find the closest point y lying in the union of a finite number of compact convex sets.
Abstract: It is well known that time-dependent Hamilton-Jacobi-Isaacs partial differential equations (HJ PDE) play an important role in analyzing continuous dynamic games and control theory problems. An important tool for such problems when they involve geometric motion is the level set method. This was first used for reachability problems. The cost of these algorithms, and in fact of all PDE numerical approximations, is exponential in the space dimension and time. In this work we propose and test methods for solving a large class of the HJ PDE relevant to optimal control problems without the use of grids or numerical approximations. Rather we use the classical Hopf formulas for solving initial value problems for HJ PDE. We have noticed that if the Hamiltonian is convex and positively homogeneous of degree one, then very fast methods exist to solve the resulting optimization problem. This is very much related to fast methods for solving problems in compressive sensing, based on $\ell_1$ optimization. We seem to obtain methods which are polynomial in the dimension. Our algorithm is very fast, requires very low memory and is totally parallelizable. We can evaluate the solution and its gradient in very high dimensions at $10^{-4}$ to $10^{-8}$ seconds per evaluation on a laptop. We carefully explain how to compute numerically the optimal control from the numerical solution of the associated initial value HJ PDE for a class of optimal control problems. We show that our algorithms compute all the quantities we need to obtain the controller easily. The term "curse of dimensionality" was coined by Richard Bellman in 1957 when considering problems in dynamic optimization.

81 citations


Journal ArticleDOI
TL;DR: This work has designed a new patch selection method for DDTF seismic data recovery to accelerate the filter bank training process in DDTF, while doing less damage to the recovery quality.
Abstract: Seismic data denoising and interpolation are essential preprocessing steps in any seismic data processing chain. Sparse transforms with a fixed basis are often used in these two steps. Recently, we have developed an adaptive learning method, the data-driven tight frame (DDTF) method, for seismic data denoising and interpolation. With its adaptability to seismic data, the DDTF method achieves high-quality recovery. For 2D seismic data, the DDTF method is much more efficient than traditional dictionary learning methods. But for 3D or 5D seismic data, the DDTF method incurs a high computational expense. The motivation behind this work is to accelerate the filter bank training process in DDTF while doing as little damage as possible to the recovery quality. The most frequently used acceleration is to train on only a randomly selected subset of the training set. However, this random selection uses no prior information about the data. We have designed a new patch selection method for DDTF seismic data recovery. We suppose...
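Since the abstract is truncated before the selection criterion is stated, the sketch below only illustrates the generic idea of scoring and subsampling training patches before filter-bank (dictionary) training; the variance-based score, patch size, and function names are placeholder assumptions, not the selection rule proposed in the paper.

```python
# Hedged sketch of patch subsampling prior to filter-bank / dictionary training.
# The variance-based score below is a placeholder criterion for illustration;
# it is not the selection rule proposed in the paper (the abstract is truncated
# before that rule is stated).
import numpy as np

def extract_patches(data, patch=8, stride=4):
    """Collect overlapping 2D patches from a seismic section (2D array)."""
    rows, cols = data.shape
    patches = []
    for i in range(0, rows - patch + 1, stride):
        for j in range(0, cols - patch + 1, stride):
            patches.append(data[i:i + patch, j:j + patch].ravel())
    return np.array(patches)

def select_patches(patches, keep_ratio=0.2):
    """Keep the highest-variance patches instead of a purely random subset."""
    n_keep = max(1, int(keep_ratio * len(patches)))
    scores = patches.var(axis=1)              # placeholder 'informativeness' score
    return patches[np.argsort(scores)[-n_keep:]]

if __name__ == "__main__":
    section = np.random.randn(256, 256)       # stand-in for a 2D seismic section
    training_set = select_patches(extract_patches(section))
    print(training_set.shape)                 # reduced training set for DDTF
```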

70 citations


Journal ArticleDOI
TL;DR: This paper recovers sparse signals from noisy linear measurements by solving nonlinear differential inclusions based on the notion of inverse scale space (ISS) developed in applied mathematics, and shows how to efficiently compute the solution paths in both continuous and discretized settings.
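As the entry carries no abstract, here is a hedged illustration of one standard discretization of the inverse scale space dynamics dp/dt = A^T(y - Ax), p in the subdifferential of ||x||_1: a linearized-Bregman-type iteration for recovering a sparse x from y = Ax + noise. The parameter names, step-size rule, and test problem are assumptions of this sketch and not necessarily the discretization used in the paper.

```python
# Hedged sketch: a linearized-Bregman-type iteration, one standard way to
# discretize the inverse scale space (ISS) dynamics for sparse recovery
#   dp/dt = A^T (y - A x),   p in the subdifferential of ||x||_1.
# This is an illustration, not necessarily the paper's exact discretization.
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def linearized_bregman(A, y, kappa=5.0, n_iter=3000):
    n = A.shape[1]
    step = 1.0 / (kappa * np.linalg.norm(A, 2) ** 2)   # keep the dual ascent stable
    z = np.zeros(n)
    x = np.zeros(n)
    for _ in range(n_iter):
        z += step * A.T @ (y - A @ x)          # dual / ISS variable update
        x = kappa * soft_threshold(z, 1.0)     # primal variable from shrinkage
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 200)) / np.sqrt(50)
    x_true = np.zeros(200); x_true[[3, 50, 120]] = [2.0, -1.5, 1.0]
    y = A @ x_true + 0.01 * rng.standard_normal(50)
    x_hat = linearized_bregman(A, y)
    print(np.flatnonzero(np.abs(x_hat) > 0.5))  # expected support: 3, 50, 120
```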

51 citations


Journal ArticleDOI
TL;DR: The effectiveness of this proposed algorithm is illustrated on both synthetic and real-world HSI, and numerical results show that the proposed algorithm outperforms other standard unsupervised clustering methods, such as spherical K-means, nonnegative matrix factorization, and the graph-based Merriman–Bence–Osher scheme.
Abstract: In this paper, a graph-based nonlocal total variation method (NLTV) is proposed for unsupervised classification of hyperspectral images (HSI). The variational problem is solved by the primal-dual hybrid gradient (PDHG) algorithm. By squaring the labeling function and using a stable simplex clustering routine, an unsupervised clustering method with random initialization can be implemented. The effectiveness of this proposed algorithm is illustrated on both synthetic and real-world HSI, and numerical results show that the proposed algorithm outperforms other standard unsupervised clustering methods such as spherical K-means, nonnegative matrix factorization (NMF), and the graph-based Merriman-Bence-Osher (MBO) scheme.
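To make the primal-dual hybrid gradient (PDHG) structure concrete, the sketch below runs PDHG (Chambolle-Pock) on the much simpler local, scalar ROF denoising model min_u 0.5*||u - f||^2 + lam*||grad u||_1; the graph construction, nonlocal weights, labeling function, and simplex clustering of the paper are deliberately not reproduced, and all parameter values here are illustrative assumptions.

```python
# Hedged sketch: a minimal primal-dual hybrid gradient (PDHG / Chambolle-Pock)
# loop for the simple local ROF model  min_u 0.5*||u - f||^2 + lam*||grad u||_1
# (anisotropic TV). The paper applies PDHG to a *graph-based nonlocal* TV
# functional with a labeling function and simplex clustering; none of that is
# reproduced here, this only illustrates the alternating primal/dual updates.
import numpy as np

def grad(u):
    px = np.zeros_like(u); py = np.zeros_like(u)
    px[:-1, :] = u[1:, :] - u[:-1, :]
    py[:, :-1] = u[:, 1:] - u[:, :-1]
    return px, py

def div(px, py):
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy   # div is minus the adjoint of grad

def pdhg_rof(f, lam=0.1, n_iter=200):
    tau = sigma = 0.3                          # tau*sigma*||grad||^2 < 1 (||grad||^2 <= 8)
    u = f.copy(); u_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(u_bar)
        px = np.clip(px + sigma * gx, -lam, lam)             # dual ascent + projection
        py = np.clip(py + sigma * gy, -lam, lam)
        u_old = u
        u = (u + tau * div(px, py) + tau * f) / (1.0 + tau)  # prox of the fidelity term
        u_bar = 2.0 * u - u_old                              # over-relaxation
    return u

if __name__ == "__main__":
    f = np.random.rand(64, 64)
    print(pdhg_rof(f).shape)
```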

37 citations


Journal ArticleDOI
25 Apr 2016 - ACS Nano
TL;DR: It is found that amide-based hydrogen bonds cross molecular domain boundaries and areas of local disorder in buried hydrogen-bonding networks within self-assembled monolayers of 3-mercapto-N-nonylpropionamide.
Abstract: We map buried hydrogen-bonding networks within self-assembled monolayers of 3-mercapto-N-nonylpropionamide on Au{111}. The contributing interactions include the buried S-Au bonds at the substrate surface and the buried plane of linear networks of hydrogen bonds. Both are simultaneously mapped with submolecular resolution, in addition to the exposed interface, to determine the orientations of molecular segments and directional bonding. Two-dimensional mode-decomposition techniques are used to elucidate the directionality of these networks. We find that amide-based hydrogen bonds cross molecular domain boundaries and areas of local disorder.

23 citations


Journal ArticleDOI
TL;DR: This work investigates the extension of the recently proposed weighted Fourier burst accumulation method into the wavelet domain and suggests replacing the rigid registration step used in the original algorithm with a nonrigid registration in order to process sequences acquired through atmospheric turbulence.
Abstract: We investigate the extension of the recently proposed weighted Fourier burst accumulation (FBA) method into the wavelet domain. The purpose of FBA is to reconstruct a clean and sharp image from a sequence of blurred frames. The key idea is the construction of weights that amplify dominant frequencies in the Fourier spectrum of each frame. The reconstructed image is then obtained by taking the inverse Fourier transform of the average of all processed spectra. We first suggest replacing the rigid registration step used in the original algorithm with a nonrigid registration in order to process sequences acquired through atmospheric turbulence. Second, we propose to work in a wavelet domain instead of the Fourier one. This leads us to the construction of two types of algorithms. Finally, we propose an alternative approach that replaces the weighting idea with one promoting sparsity in the chosen transform domain. Several experiments are provided to illustrate the efficiency of the proposed methods.
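A minimal sketch of the plain Fourier-domain FBA weighting mentioned above: each frame's spectrum is weighted by |F_i|^p, normalized across frames, and the weighted average spectrum is inverted. The exponent p, the assumption that frames are already registered, and the function name are placeholders; the paper's nonrigid registration, wavelet-domain variants, and sparsity-promoting alternative are not shown.

```python
# Hedged sketch of plain Fourier burst accumulation (FBA): each frame's
# spectrum is weighted by |F_i|^p (normalized across frames) so that the
# dominant frequencies of the sharpest frames are amplified, then the weighted
# average spectrum is inverted. Frames are assumed already registered.
import numpy as np

def fourier_burst_accumulation(frames, p=7):
    """frames: array of shape (n_frames, H, W), assumed aligned.
    The exponent p controls how aggressively dominant frequencies are favored."""
    spectra = np.fft.fft2(frames, axes=(-2, -1))
    mags = np.abs(spectra) ** p
    weights = mags / (mags.sum(axis=0, keepdims=True) + 1e-12)  # per-frequency weights
    fused_spectrum = (weights * spectra).sum(axis=0)
    return np.real(np.fft.ifft2(fused_spectrum))

if __name__ == "__main__":
    burst = np.random.rand(8, 128, 128)          # stand-in for a blurry burst
    fused = fourier_burst_accumulation(burst)
    print(fused.shape)
```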

21 citations


Journal ArticleDOI
TL;DR: A novel Sparsity and SMOOthness enhanced brain TomograpHy (s-SMOOTH) method to improve the reconstruction accuracy by integrating two recently proposed regularization techniques: Total Generalized Variation (TGV) regularization and ℓ1−2 regularization.
Abstract: EEG source imaging enables us to reconstruct current density in the brain from the electrical measurements with excellent temporal resolution (~ms). The corresponding EEG inverse problem is an ill-posed one that has infinitely many solutions. This is due to the fact that the number of EEG sensors is usually much smaller than the number of potential dipole locations, and to noise contamination in the recorded signals. To obtain a unique solution, regularizations can be incorporated to impose additional constraints on the solution. An appropriate choice of regularization is critically important for the reconstruction accuracy of the brain image. In this paper, we propose a novel Sparsity and SMOOthness enhanced brain TomograpHy (s-SMOOTH) method to improve the reconstruction accuracy by integrating two recently proposed regularization techniques: Total Generalized Variation (TGV) regularization and $\ell_{1-2}$ regularization. TGV is able to preserve the source edge and recover the spatial distribution of the source intensity with high accuracy. Compared to the related total variation (TV) regularization, TGV enhances the smoothness of the image and reduces staircasing artifacts. The traditional TGV defined on a 2D image has been widely used in the image processing field. In order to handle 3D EEG source images, we propose a voxel-based TGV (vTGV) regularization that extends the definition of second-order TGV from 2D planar images to 3D irregular surfaces such as the cortical surface. In addition, the $\ell_{1-2}$ regularization is utilized to promote sparsity on the current density itself. We demonstrate that $\ell_{1-2}$ regularization enhances sparsity and accelerates computation compared to $\ell_1$ regularization. The proposed model is solved by an efficient and robust algorithm based on the difference of convex functions algorithm (DCA) and the alternating direction method of multipliers (ADMM). Numerical experiments using synthetic data demonstrate the advantages of the proposed method over other state-of-the-art methods in terms of total reconstruction accuracy, localization accuracy and focalization degree. The application to the source localization of event-related potential data further demonstrates the performance of the proposed method in a real-world scenario.
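As a small illustration of the $\ell_{1-2}$ piece, the sketch below runs the DCA outer loop on the plain regularized least-squares problem min_x 0.5*||Ax - b||^2 + lam*(||x||_1 - ||x||_2), linearizing the concave term -lam*||x||_2 at the current iterate and solving the resulting convex l1 subproblem with ISTA. This is an assumption-laden stand-in: the paper's full s-SMOOTH model additionally contains the voxel-based TGV term and is solved with ADMM, neither of which appears here.

```python
# Hedged sketch: DCA outer loop for the l1-2 regularized least-squares problem
#     min_x 0.5*||A x - b||^2 + lam*(||x||_1 - ||x||_2).
# The concave term -lam*||x||_2 is linearized at the current iterate and the
# convex l1 subproblem is solved by ISTA. The paper's full s-SMOOTH model also
# contains a voxel-based TGV term and uses ADMM; neither appears here.
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def ista_l1(A, b, lam, linear_term, n_iter=300):
    """Solve min_x 0.5*||Ax - b||^2 - <linear_term, x> + lam*||x||_1 by ISTA."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b) - linear_term
        x = soft_threshold(x - g / L, lam / L)
    return x

def l1_minus_l2_dca(A, b, lam=0.1, n_outer=10):
    x = np.zeros(A.shape[1])
    for _ in range(n_outer):
        nrm = np.linalg.norm(x)
        s = lam * x / nrm if nrm > 0 else np.zeros_like(x)   # subgradient of lam*||x||_2
        x = ista_l1(A, b, lam, s)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100); x_true[[5, 30, 77]] = [1.0, -2.0, 1.5]
    b = A @ x_true
    print(np.flatnonzero(np.abs(l1_minus_l2_dca(A, b)) > 0.3))  # expected: 5, 30, 77
```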

20 citations


Posted Content
TL;DR: A primal-dual algorithm to approximate the Earth Mover's distance is proposed, which uses very simple updates at each iteration and is shown to converge very rapidly.
Abstract: We propose a new algorithm to approximate the Earth Mover's distance (EMD). Our main idea is motivated by the theory of optimal transport, in which EMD can be reformulated as a familiar homogeneous degree 1 regularized minimization. The new minimization problem is very similar to problems which have been solved in the fields of compressed sensing and image processing, where several fast methods are available. In this paper, we adopt a primal-dual algorithm designed there, which uses very simple updates at each iteration and is shown to converge very rapidly. Several numerical examples are provided.
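A hedged 1D sketch of that reformulation: the EMD between two histograms rho0 and rho1 equals min_m ||m||_1 subject to div m = rho0 - rho1, a homogeneous degree-1 regularized problem, solved here by a Chambolle-Pock style primal-dual loop with an explicit divergence matrix. The 1D setting, step sizes, and iteration count are illustrative assumptions, not the paper's exact scheme.

```python
# Hedged sketch: 1D Earth Mover's distance via the degree-1 homogeneous
# reformulation  min_m ||m||_1  s.t.  div m = rho0 - rho1,  solved with a
# Chambolle-Pock style primal-dual loop. Discretization, step sizes, and the
# 1D setting are illustrative choices, not the paper's exact scheme.
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def emd_1d_primal_dual(rho0, rho1, n_iter=20000):
    n = len(rho0)
    # Divergence operator: (D m)[i] = m[i] - m[i-1] for fluxes m on the n-1 edges.
    D = np.zeros((n, n - 1))
    idx = np.arange(n - 1)
    D[idx, idx] = 1.0
    D[idx + 1, idx] = -1.0
    c = rho0 - rho1
    tau = sigma = 0.49                    # tau*sigma*||D||^2 < 1 since ||D||^2 <= 4
    m = np.zeros(n - 1); m_bar = m.copy(); phi = np.zeros(n)
    for _ in range(n_iter):
        phi = phi + sigma * (D @ m_bar - c)                   # dual ascent on the constraint
        m_new = soft_threshold(m - tau * (D.T @ phi), tau)    # shrinkage = prox of ||.||_1
        m_bar = 2.0 * m_new - m
        m = m_new
    return np.abs(m).sum()                # EMD estimate: transported mass times distance

if __name__ == "__main__":
    n = 32
    rho0 = np.zeros(n); rho1 = np.zeros(n)
    rho0[5] = 1.0; rho1[25] = 1.0         # unit mass moved 20 grid cells
    print(emd_1d_primal_dual(rho0, rho1)) # should approach |25 - 5| = 20
```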

15 citations


Journal ArticleDOI
TL;DR: In this article, a variational multiphase image segmentation model based on fuzzy membership functions and L1-norm fidelity is proposed, which is more robust to outliers such as impulse noise and keeps better contrast.
Abstract: In this paper, we propose a variational multiphase image segmentation model based on fuzzy membership functions and L1-norm fidelity. Then we apply the alternating direction method of multipliers to solve an equivalent problem. All the subproblems can be solved efficiently. Specifically, we propose a fast method to calculate the fuzzy median. Experimental results and comparisons show that the L1-norm based method is more robust to outliers such as impulse noise and keeps better contrast than its L2-norm counterpart. Theoretically, we prove the existence of the minimizer and analyze the convergence of the algorithm.
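The abstract mentions a fast method for the fuzzy median; assuming the quantity meant is the minimizer of sum_i u_i*|f_i - c| over c (a weighted median of the intensities f with membership weights u), the sketch below computes it by sorting and accumulating weights. This is only an illustration of that quantity, not the paper's specific fast algorithm.

```python
# Hedged sketch: the "fuzzy median" minimizing sum_i u[i] * |f[i] - c| over c,
# i.e. a weighted median of intensities f with membership weights u, computed
# by sorting and accumulating weights. This illustrates the quantity appearing
# in L1-fidelity region updates; it is not the paper's specific fast algorithm.
import numpy as np

def fuzzy_median(f, u):
    """Weighted median: argmin_c sum_i u[i] * |f[i] - c|."""
    order = np.argsort(f)
    f_sorted, w_sorted = f[order], u[order]
    cumw = np.cumsum(w_sorted)
    # first index where the cumulative weight reaches half the total weight
    k = np.searchsorted(cumw, 0.5 * cumw[-1])
    return f_sorted[k]

if __name__ == "__main__":
    f = np.array([0.1, 0.9, 0.2, 0.85, 0.8])     # pixel intensities
    u = np.array([0.9, 0.2, 0.8, 0.1, 0.3])      # memberships of one phase
    print(fuzzy_median(f, u))                     # dominated by the low-intensity pixels
```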

15 citations


Proceedings Article
22 Nov 2016
TL;DR: The statistical distributions of Jacobian maps in the logarithmic space are examined, and a new framework for constructing log-unbiased image registration methods is developed that yields both theoretically and intuitively correct deformation maps and is compatible with large-deformation models.
Abstract: In the past decade, information theory has been studied extensively in medical imaging. In particular, image matching by maximizing mutual information has been shown to yield good results in multi-modal image registration. However, there have been few rigorous studies to date that investigate the statistical aspect of the resulting deformation fields. Different regularization techniques have been proposed, sometimes generating deformations very different from one another. In this paper, we apply information theory to quantifying the magnitude of deformations. We examine the statistical distributions of Jacobian maps in the logarithmic space, and develop a new framework for constructing log-unbiased image registration methods. The proposed framework yields both theoretically and intuitively correct deformation maps, and is compatible with large-deformation models. In the results section, we tested the proposed method using pairs of synthetic binary images, two-dimensional serial MRI images, and three-dimensional serial MRI volumes. We compared our results to those computed using the viscous fluid registration method, and demonstrated that the proposed method is advantageous when recovering voxel-wise local tissue change.
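A small sketch of the basic object analyzed above: the voxel-wise log of the Jacobian determinant of a deformation phi(x) = x + d(x), computed with finite differences on a 2D grid. The synthetic dilation used as a test case is an assumption; the statistical analysis and the log-unbiased registration framework themselves are not reproduced.

```python
# Hedged sketch: voxel-wise log-Jacobian of a 2D deformation phi(x) = x + d(x),
# the quantity whose statistical distribution the paper analyzes. Only the
# finite-difference computation is shown; the log-unbiased registration
# framework itself is not reproduced here.
import numpy as np

def log_jacobian(d0, d1):
    """d0, d1: displacement components along array axes 0 and 1 (2D arrays)."""
    j00 = 1.0 + np.gradient(d0, axis=0)
    j01 = np.gradient(d0, axis=1)
    j10 = np.gradient(d1, axis=0)
    j11 = 1.0 + np.gradient(d1, axis=1)
    det = j00 * j11 - j01 * j10
    return np.log(np.maximum(det, 1e-12))   # guard against folded (det <= 0) voxels

if __name__ == "__main__":
    i, j = np.mgrid[0:64, 0:64]
    # synthetic uniform 5% dilation about the image center: Jacobian = 1.05 * I
    lj = log_jacobian(0.05 * (i - 32), 0.05 * (j - 32))
    print(lj.mean())                          # should be close to log(1.05**2)
```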

Proceedings ArticleDOI
01 Aug 2016
TL;DR: A graph Fractional-Order Total Variation (gFOTV) based method is proposed, which provides the freedom to choose the smoothness order by imposing sparsity on the spatial fractional derivatives so that source peaks are located accurately; results demonstrate the superior performance of gFOTV not only in spatial resolution but also in localization accuracy and total reconstruction accuracy.
Abstract: EEG source imaging is able to reconstruct sources in the brain from scalp measurements with high temporal resolution. Due to the limited number of sensors, it is very challenging to locate the source accurately with high spatial resolution. Recently, several total variation (TV) based methods have been proposed to exploit the sparsity of the source spatial gradients, based on the assumption that the source is constant in each subregion. However, since sources have more complex structures in practice, these methods have difficulty recovering the current density variation and locating source peaks. To overcome this limitation, we propose a graph Fractional-Order Total Variation (gFOTV) based method, which provides the freedom to choose the smoothness order by imposing sparsity on the spatial fractional derivatives so that it locates source peaks accurately. The performance of gFOTV and various state-of-the-art methods is compared using a large number of simulations and evaluated with several quantitative criteria. The results demonstrate the superior performance of gFOTV not only in spatial resolution but also in localization accuracy and total reconstruction accuracy.
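For intuition about the fractional-order derivative whose sparsity gFOTV promotes, the sketch below computes a 1D Grünwald-Letnikov fractional difference and the corresponding fractional-order TV; the 1D grid discretization is an illustrative stand-in, not the graph/cortical-surface operator or the EEG inverse solver of the paper.

```python
# Hedged sketch: 1D Gruenwald-Letnikov fractional difference of order alpha,
# illustrating the kind of fractional-order derivative whose sparsity a
# fractional-order TV penalty promotes (grid spacing h = 1). The graph /
# cortical-surface operator and the EEG inverse solver of the paper are not
# reproduced; this 1D stand-in is only for intuition.
import numpy as np
from scipy.special import binom

def fractional_diff(f, alpha):
    """Gruenwald-Letnikov fractional difference of a 1D signal (h = 1)."""
    n = len(f)
    k = np.arange(n)
    coeffs = (-1.0) ** k * binom(alpha, k)            # GL weights
    out = np.zeros(n)
    for i in range(n):
        out[i] = np.dot(coeffs[: i + 1], f[i::-1])    # sum_k c_k * f[i - k]
    return out

def fractional_tv(f, alpha):
    return np.abs(fractional_diff(f, alpha)).sum()

if __name__ == "__main__":
    t = np.linspace(0, 1, 200)
    smooth_peak = np.exp(-((t - 0.5) ** 2) / 0.01)    # smooth, peaked source profile
    piecewise = (t > 0.5).astype(float)               # piecewise-constant profile
    # compare how the two source profiles are penalized at integer vs fractional order
    for alpha in (1.0, 1.6):
        print(alpha, fractional_tv(smooth_peak, alpha), fractional_tv(piecewise, alpha))
```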

Book ChapterDOI
TL;DR: This model is based on the observation that the spatial–spectral blocks of hyperspectral images typically lie close to a collection of low dimensional manifolds and directly uses the dimension of the manifold as a regularization term in a variational functional, which can be solved efficiently by alternating direction of minimization and advanced numerical discretization.
Abstract: In this chapter, we present a low dimensional manifold model (LDMM) for hyperspectral image reconstruction. This model is based on the observation that the spatial–spectral blocks of hyperspectral images typically lie close to a collection of low dimensional manifolds. To emphasize this, we directly use the dimension of the manifold as a regularization term in a variational functional, which can be solved efficiently by alternating direction of minimization and advanced numerical discretization. Experiments on the reconstruction of hyperspectral images from sparse and noisy sampling demonstrate the superiority of LDMM in terms of both speed and accuracy.
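A sketch of the underlying observation only: spatial-spectral blocks of a hyperspectral cube, viewed as points in a high-dimensional space, concentrate near a low-dimensional set, which the code below checks crudely with global PCA on a synthetic linear-mixture cube. The block size, endmember construction, and energy threshold are assumptions; the LDMM solver itself (manifold dimension as regularizer, alternating minimization, Laplacian-type discretization) is not shown.

```python
# Hedged sketch of the observation underlying LDMM: the spatial-spectral blocks
# of a hyperspectral cube, viewed as points in a high-dimensional space, are
# approximately low dimensional (here checked crudely with global PCA). The
# LDMM solver itself -- manifold dimension as a regularizer, alternating
# minimization, Laplacian-type discretization -- is not shown.
import numpy as np

def spatial_spectral_blocks(cube, s=4):
    """cube: (H, W, B) hyperspectral image; returns s*s*B-dimensional blocks."""
    H, W, B = cube.shape
    blocks = [cube[i:i + s, j:j + s, :].ravel()
              for i in range(H - s + 1) for j in range(W - s + 1)]
    return np.array(blocks)

def effective_dimension(points, energy=0.99):
    """Number of principal components needed to capture `energy` of the variance."""
    centered = points - points.mean(axis=0)
    svals = np.linalg.svd(centered, compute_uv=False)
    ratio = np.cumsum(svals ** 2) / np.sum(svals ** 2)
    return int(np.searchsorted(ratio, energy) + 1)

if __name__ == "__main__":
    # synthetic cube whose spectra mix only a few endmembers -> low-dimensional blocks
    rng = np.random.default_rng(0)
    endmembers = rng.random((3, 32))                   # 3 endmembers, 32 bands
    abundances = rng.random((40, 40, 3))
    cube = abundances @ endmembers                     # shape (40, 40, 32)
    blocks = spatial_spectral_blocks(cube)
    print(blocks.shape, effective_dimension(blocks))   # effective dim well below ambient 512
```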

Book ChapterDOI
01 Jan 2016
TL;DR: A new and remarkably fast algorithm for solving a large class of high dimensional Hamilton-Jacobi (H-J) initial value problems arising in optimal control and elsewhere is outlined.
Abstract: In this chapter we briefly outline a new and remarkably fast algorithm for solving a large class of high dimensional Hamilton-Jacobi (H-J) initial value problems arising in optimal control and elsewhere [1]. This is done without the use of grids or numerical approximations. Moreover, by using the level set method [8] we can rapidly compute projections of a point in \(\mathbb{R}^{n}\), n large, to a fairly arbitrary compact set [2]. The method seems to generalize widely beyond what we will present here to some nonconvex Hamiltonians, new linear programming algorithms, differential games, and perhaps state dependent Hamiltonians.

Proceedings ArticleDOI
05 Jun 2016
TL;DR: In this paper, a detail-preserving image deconvolution method was proposed to improve image quality in super-resolution optical fluctuation imaging and other diffraction-limited/super-resolution imaging modalities.
Abstract: We propose a detail-preserving image deconvolution method which outperforms state-of-the-art methods, and can further improve image quality in the super-resolution optical fluctuation imaging and other diffraction-limited/superresolution imaging modalities.

Posted Content
TL;DR: In this article, the dimension of the manifold is directly used as a regularizer in a variational functional, which is solved efficiently by alternating direction of minimization and weighted nonlocal Laplacian.
Abstract: We present a scalable low dimensional manifold model for the reconstruction of noisy and incomplete hyperspectral images. The model is based on the observation that the spatial-spectral blocks of a hyperspectral image typically lie close to a collection of low dimensional manifolds. To emphasize this, the dimension of the manifold is directly used as a regularizer in a variational functional, which is solved efficiently by alternating direction of minimization and weighted nonlocal Laplacian. Unlike general 3D images, the same similarity matrix can be shared across all spectral bands for a hyperspectral image, therefore the resulting algorithm is much more scalable than that for general 3D data. Numerical experiments on the reconstruction of hyperspectral images from sparse and noisy sampling demonstrate the superiority of our proposed algorithm in terms of both speed and accuracy.