
Showing papers by "Stanley Osher published in 2012"


Journal ArticleDOI
TL;DR: This paper establishes connections between two major image restoration approaches, variational methods and wavelet frame based methods; these connections provide new interpretations and understanding of both approaches and lead to new applications for each.
Abstract: From the beginning of science, visual observations have played an important role. Advances in computer technology have made it possible to apply some of the most sophisticated developments in mathematics and the sciences to the design and implementation of fast algorithms running on a large number of processors to process image data. As a result, image processing and analysis techniques are now applied to virtually all natural sciences and technical disciplines ranging from computer science and electronic engineering to biology and the medical sciences, and digital images have come into everyone’s life. Image restoration, including image denoising, deblurring, inpainting, computed tomography, etc., is one of the most important areas in image processing and analysis. Its major purpose is to enhance the quality of a given image that has been corrupted in various ways during the process of imaging, acquisition and communication, and to enable us to see crucial but subtle objects residing in the image. Image restoration is therefore an important step towards accurate interpretation of the physical world and optimal decision making. Mathematics has played an important role in image and signal processing from the very beginning; for example, Fourier analysis is one of the main tools in signal and image analysis, processing, and restoration. In fact, mathematics has been one of the driving forces of the modern development of image analysis, processing and restoration. At the same time, the interesting and challenging problems in imaging science have also given birth to new mathematical theories, techniques and methods. The variational methods (e.g. total variation based methods) and the wavelet and wavelet frame based methods developed in the last few decades for image and signal processing are two successful recent examples among many. This paper is designed to establish connections between these two major image restoration approaches: variational methods and wavelet frame based methods. Such connections provide new interpretations and understanding of both approaches and, hence, lead to new applications for both. We start with an introduction of both the variational and wavelet frame based methods. The basic linear image restoration model used for variational methods is f = Au + η, where f is the observed image, u is the unknown true image, A is a linear operator modeling the image formation (e.g. the identity for denoising, a convolution for deblurring, or a restriction operator for inpainting), and η denotes noise.
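
For concreteness, the two families of models the paper connects can be placed side by side. This is a standard formulation (with W a wavelet frame transform and lambda a regularization weight), written here for orientation rather than quoted from the paper:

    \min_u \; \|\nabla u\|_1 + \frac{\lambda}{2}\,\|Au - f\|_2^2          (total variation / variational)
    \min_u \; \|Wu\|_1 + \frac{\lambda}{2}\,\|Au - f\|_2^2                (analysis-based wavelet frame)

The connection established in the paper is, roughly, that for suitable frames the transform W acts as a discretization of the differential operators in the variational functional, so the second model can be read as a discrete approximation of the first.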

359 citations


Journal ArticleDOI
TL;DR: This work uses l1,∞ regularization to select the dictionary from the data and shows how to relax the restriction-to-X constraint by initializing an alternating minimization approach with the solution of the convex model, obtaining a dictionary close to but not necessarily in X.
Abstract: A collaborative convex framework for factoring a data matrix X into a nonnegative product AS, with a sparse coefficient matrix S, is proposed. We restrict the columns of the dictionary matrix A to coincide with certain columns of the data matrix X, thereby guaranteeing a physically meaningful dictionary and dimensionality reduction. We use l1,∞ regularization to select the dictionary from the data and show that this leads to an exact convex relaxation of l0 in the case of distinct noise-free data. We also show how to relax the restriction-to-X constraint by initializing an alternating minimization approach with the solution of the convex model, obtaining a dictionary close to but not necessarily in X. We focus on applications of the proposed framework to hyperspectral endmember and abundance identification and also show an application to blind source separation of nuclear magnetic resonance data.
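
A minimal sketch of the self-expressive model described in the abstract, written with CVXPY; the synthetic data, the weight lam, and the selection threshold are placeholders, and the paper's exact l1,∞ formulation and solver differ in detail:

    import cvxpy as cp
    import numpy as np

    # X: d x n data matrix; each column is one observation (e.g. a hyperspectral pixel)
    rng = np.random.default_rng(0)
    X = np.abs(rng.standard_normal((20, 40)))
    n = X.shape[1]

    S = cp.Variable((n, n), nonneg=True)            # coefficients: X ~ X @ S
    lam = 1.0                                       # hypothetical weight
    row_linf = cp.sum(cp.max(S, axis=1))            # l_{1,inf}: sum of row-wise maxima
    cp.Problem(cp.Minimize(cp.sum_squares(X - X @ S) + lam * row_linf)).solve()

    # rows of S with a non-negligible maximum select the columns of X kept as the dictionary
    selected = np.where(S.value.max(axis=1) > 1e-3)[0]

The row-wise l∞ penalty drives entire rows of S to zero, so only a few columns of X participate in reconstructing all of the data, which is the "dictionary from the data" effect the abstract describes.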

168 citations


Journal ArticleDOI
TL;DR: The proposed algorithm is inspired by recent efficient l1 optimization techniques and it naturally preserves the level set function as a distance function during the evolution, which avoids the classical re-distancing problem in level set methods.
Abstract: The level set method is a popular technique for tracking moving interfaces in several disciplines, including computer vision and fluid dynamics. However, despite its high flexibility, the original level set method is limited by two important numerical issues. First, the level set method does not implicitly preserve the level set function as a distance function, which is necessary to accurately estimate geometric features such as the curvature or the contour normal. Second, the level set algorithm is slow because the time step is limited by the standard Courant-Friedrichs-Lewy (CFL) condition, which is also essential to the numerical stability of the iterative scheme. Recent advances with graph cut methods and continuous convex relaxation methods provide powerful alternatives to the level set method for image processing problems because they are fast, accurate, and guaranteed to find the global minimizer independently of the initialization. These recent techniques use binary functions to represent the contour rather than the distance functions usually considered for the level set method. However, the binary function cannot provide distance information, which can be essential for some applications, such as the surface reconstruction problem from scattered points and the cortex segmentation problem in medical imaging. In this paper, we propose a fast algorithm to preserve distance functions in level set methods. Our algorithm is inspired by recent efficient l1 optimization techniques, which yield an efficient and easy-to-implement algorithm. It is interesting to note that our algorithm is not limited by the CFL condition and it naturally preserves the level set function as a distance function during the evolution, which avoids the classical re-distancing problem in level set methods. We apply the proposed algorithm to carry out image segmentation, where our method proves to be 5-6 times faster than standard distance-preserving level set techniques. We also present two applications where preserving a distance function is essential. Nonetheless, our method remains generic and can be applied to any level set method that requires distance information.
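
A small NumPy check of why distance functions matter here: a signed distance function satisfies |∇φ| = 1 (the eikonal property), so geometric features such as curvature can be estimated stably from it. The circle and grid below are illustrative choices of ours, not from the paper:

    import numpy as np

    # signed distance function of a circle of radius 0.5 on [-1, 1]^2
    n = 201
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x, indexing='ij')
    phi = np.hypot(X, Y) - 0.5
    h = x[1] - x[0]

    gx, gy = np.gradient(phi, h)
    print(np.hypot(gx, gy).mean())        # ~1 away from the center: eikonal property

    # curvature kappa = div(grad phi / |grad phi|); ~1/0.5 = 2 on the zero level set
    norm = np.hypot(gx, gy) + 1e-12
    kxx, _ = np.gradient(gx / norm, h)
    _, kyy = np.gradient(gy / norm, h)
    kappa = kxx + kyy

Re-distancing schemes exist precisely because evolving φ with a generic speed destroys this property; the paper's contribution is to maintain it within the optimization itself.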

93 citations


Book ChapterDOI
01 Jan 2012
TL;DR: In this article, a gradient-based bound-constrained split Bregman method (GBSB) is proposed for large-scale 3D reconstruction in quantitative photoacoustic tomography.
Abstract: This chapter focuses on quantitative photoacoustic tomography to recover optical maps from the deposited optical energy. After a brief overview of models, theories and algorithms, we provide an algorithm for large-scale 3D reconstructions, so-called gradient-based bound-constrained split Bregman method (GBSB).
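
The chapter names the method without spelling it out, so purely as orientation: the "bound-constrained" ingredient is typically a gradient step followed by projection onto box constraints, as in this generic sketch (the coupling with split Bregman and the photoacoustic forward model are well beyond this snippet):

    import numpy as np

    def projected_gradient_step(u, grad, step, lower, upper):
        """Generic bound-constrained update: gradient step, then projection onto [lower, upper]."""
        return np.clip(u - step * grad, lower, upper)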

78 citations


Journal ArticleDOI
TL;DR: In this article, a total variation (TV) and nonlocal TV regularized model based on Retinex theory is proposed to solve the color constancy problem in the human visual system.
Abstract: A feature of the human visual system (HVS) is color constancy, namely, the ability to determine the color under varying illumination conditions. Retinex theory, formulated by Edwin H. Land, aimed to simulate and explain how the HVS perceives color. In this paper, we establish a total variation (TV) and nonlocal TV regularized model of Retinex theory that can be solved by a fast computational approach based on Bregman iteration. We demonstrate the performance of our method by numerical results.
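
In the usual Retinex notation, the observed image factors into reflectance and illumination and becomes additive in the log domain; a TV penalty then regularizes the recovered reflectance. The second line is one illustrative TV-regularized formulation, not necessarily the paper's exact functional:

    I(x) = R(x)\,L(x), \qquad s := \log I = r + \ell \quad (r := \log R,\ \ell := \log L)
    \min_{r} \; \|\nabla r\|_1 + \frac{\lambda}{2}\,\|\nabla r - \nabla s\|_2^2

The nonlocal TV variant replaces the gradient with a nonlocal gradient built from patch similarities, and Bregman iteration supplies the fast solver mentioned in the abstract.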

74 citations


Journal ArticleDOI
TL;DR: A novel adaptive approach for solving l1-minimization problems as frequently arising in compressed sensing, based on the recently introduced inverse scale space method, which allows minimizers to be computed efficiently by solving a sequence of low-dimensional nonnegative least-squares problems.
Abstract: In this paper we introduce a novel adaptive approach for solving l1-minimization problems as frequently arising in compressed sensing, which is based on the recently introduced inverse scale space method. The scheme allows minimizers to be computed efficiently by solving a sequence of low-dimensional nonnegative least-squares problems. We provide a detailed convergence analysis in a general setup as well as refined results under special conditions. In addition, we discuss experimental observations in several numerical examples.
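
A heavily simplified NumPy/SciPy sketch of the flavor of the scheme: a dual variable p flows driven by the residual until some component hits ±1, that index is activated, and a low-dimensional nonnegative least-squares problem is solved on the active set. The real aISS computes exact event times and also handles indices leaving the support; all names here are ours:

    import numpy as np
    from scipy.optimize import nnls

    def aiss_sketch(A, f, max_events=20, tol=1e-10):
        m, n = A.shape
        u = np.zeros(n)
        p = np.zeros(n)                        # dual variable, |p_j| <= 1
        active = np.zeros(n, dtype=bool)
        for _ in range(max_events):
            g = A.T @ (f - A @ u)              # p evolves as dp/dt = A^T (f - A u)
            g[active] = 0.0                    # flow is stationary on the support (KKT)
            if np.abs(g).max() < tol:
                break                          # residual orthogonal to all inactive atoms
            safe = np.where(np.abs(g) > tol, g, 1.0)
            dt = np.where(np.abs(g) > tol, (np.sign(g) - p) / safe, np.inf).min()
            p = np.clip(p + dt * g, -1.0, 1.0) # advance until the next index activates
            active = np.abs(p) >= 1.0 - 1e-9
            As = A[:, active] * np.sign(p[active])   # low-dimensional NNLS, signs fixed by p
            xs, _ = nnls(As, f)
            u[:] = 0.0
            u[active] = np.sign(p[active]) * xs
        return u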

69 citations


Journal ArticleDOI
TL;DR: This work proposes models to learn a circulant sensing matrix/operator for one- and higher-dimensional signals and shows that, given the dictionary of the signal(s) to be sensed, the learned circulant sensing matrix/operator is more effective than a randomly generated circulant sensing matrix and even slightly more effective than a (non-circulant) Gaussian random sensing matrix.
Abstract: In signal acquisition, Toeplitz and circulant matrices are widely used as sensing operators. They correspond to discrete convolutions and are easily or even naturally realized in various applications. For compressive sensing, recent work has used random Toeplitz and circulant sensing matrices and proved their efficiency in theory, by computer simulations, as well as through physical optical experiments. Motivated by recent work [8], we propose models to learn a circulant sensing matrix/operator for one- and higher-dimensional signals. Given the dictionary of the signal(s) to be sensed, the learned circulant sensing matrix/operator is more effective than a randomly generated circulant sensing matrix/operator, and even slightly more effective than a (non-circulant) Gaussian random sensing matrix. In addition, by exploiting the circulant structure, we improve the learning from the patch scale in [8] to the much larger image scale. Furthermore, we test learning the circulant sensing matrix/operator and the nonparametric dictionary altogether and obtain even better performance. We demonstrate these results using both synthetic sparse signals and real images.
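
The mechanics of circulant sensing are compact: applying a circulant matrix is a circular convolution, computable by FFT, and compressive measurement keeps only a few output coordinates. A sketch with a random (not learned) circulant operator; all sizes and names are illustrative:

    import numpy as np

    def circulant_sense(c, x, idx):
        """y = subsample(C x): circular convolution with first column c, kept at coordinates idx."""
        full = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real
        return full[idx]

    rng = np.random.default_rng(0)
    n, m = 256, 64
    c = rng.standard_normal(n)                        # random circulant operator
    idx = rng.choice(n, size=m, replace=False)        # m observed coordinates
    x = np.zeros(n)
    x[rng.choice(n, size=8, replace=False)] = 1.0     # sparse test signal
    y = circulant_sense(c, x, idx)                    # m compressive measurements

The learning problem in the paper replaces the random c (and the sampling pattern) with ones optimized against the signal dictionary; the FFT structure is what makes the image-scale learning mentioned in the abstract tractable.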

16 citations


Journal ArticleDOI
TL;DR: A convex speech enhancement (CSE) method based on convex optimization and pause detection of the speech sources is presented, and is found to outperform a list of existing blind speech separation approaches on both synthetic and room-recorded speech mixtures in terms of overall computational speed and separation quality.
Abstract: A convex speech enhancement (CSE) method is presented based on convex optimization and pause detection of the speech sources. Channel spatial difference is identified for enhancing each speech source individually while suppressing other interfering sources. Sparse unmixing filters indicating channel spatial differences are sought by l1 norm regularization and the split Bregman method. A subdivided split Bregman method is developed for efficiently solving the problem in severely reverberant environments. The speech pause detection is based on a binary mask source separation method. The CSE method is evaluated objectively and subjectively, and found to outperform a list of existing blind speech separation approaches on both synthetic and room recorded speech mixtures in terms of the overall computational speed and separation quality.
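
A minimal sketch of the l1-regularized estimation step named in the abstract, using a generic split Bregman iteration for min_h lam*||h||_1 + 0.5*||A h - y||^2. How A and y are assembled from the microphone channels, the subdivision for strong reverberation, and all parameter values are left out; names are ours:

    import numpy as np

    def split_bregman_l1(A, y, lam, mu=1.0, n_iter=200):
        """Split Bregman for min_h lam*||h||_1 + 0.5*||A h - y||^2 (generic sketch)."""
        n = A.shape[1]
        d = np.zeros(n)                          # auxiliary variable split off from h
        b = np.zeros(n)                          # Bregman variable
        K = A.T @ A + mu * np.eye(n)             # fixed matrix for the quadratic subproblem
        Aty = A.T @ y
        for _ in range(n_iter):
            h = np.linalg.solve(K, Aty + mu * (d - b))                       # least squares step
            d = np.sign(h + b) * np.maximum(np.abs(h + b) - lam / mu, 0.0)   # shrinkage
            b += h - d                                                       # Bregman update
        return d                                 # sparse filter (h and d agree at convergence)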

15 citations


Book ChapterDOI
16 Jul 2012
TL;DR: A novel fast method for implicit surface reconstruction from unorganized point clouds is proposed that employs a multigrid solver on a narrow band of the level set function that represents the reconstructed surface.
Abstract: In this paper we propose a novel fast method for implicit surface reconstruction from unorganized point clouds. Our algorithm employs a multigrid solver on a narrow band of the level set function that represents the reconstructed surface, which greatly improves computational efficiency of surface reconstruction. The new model can accurately reconstruct surfaces from noisy unorganized point clouds that also have missing information.
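
The narrow-band idea in one line: restrict all work to grid points within a few cells of the zero level set, so the cost scales with the surface area rather than the full 3D volume. A trivial NumPy sketch (the band width is an arbitrary choice of ours; the multigrid cycling itself is not shown):

    import numpy as np

    def narrow_band_mask(phi, width_cells, h):
        """Grid points within width_cells * h of the zero level set of phi."""
        return np.abs(phi) < width_cells * h

    # a multigrid smoother would then confine its sweeps to points where the mask is True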

14 citations


Posted Content
26 Jul 2012
TL;DR: Using spectral clustering methods to identify highly-connected communities within the division, this paper argues that the NCAA could improve its notoriously poor rankings by simply scheduling more out-of-conference games.
Abstract: Given a graph where vertices represent alternatives and arcs represent pairwise comparison data, the statistical ranking problem is to find a potential function, defined on the vertices, such that the gradient of the potential function agrees with the pairwise comparisons. Our goal in this paper is to develop a method for collecting data for which the least squares estimator for the ranking problem has maximal Fisher information. Our approach, based on experimental design, is to view data collection as a bi-level optimization problem where the inner problem is the ranking problem and the outer problem is to identify data which maximizes the informativeness of the ranking. Under certain assumptions, the data collection problem decouples, reducing to a problem of finding multigraphs with large algebraic connectivity. This reduction of the data collection problem to graph-theoretic questions is one of the primary contributions of this work. As an application, we study the Yahoo! Movie user rating dataset and demonstrate that the addition of a small number of well-chosen pairwise comparisons can significantly increase the Fisher informativeness of the ranking. As another application, we study the 2011-12 NCAA football schedule and propose schedules with the same number of games which are significantly more informative. Using spectral clustering methods to identify highly-connected communities within the division, we argue that the NCAA could improve its notoriously poor rankings by simply scheduling more out-of-conference games.
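
Both halves of the bi-level problem fit in a few lines of NumPy. In this sketch (names and data layout are ours), B is the arc-vertex incidence matrix, the inner ranking problem is a least-squares solve, and the outer design criterion is the algebraic connectivity, the second-smallest eigenvalue of the graph Laplacian L = B^T B:

    import numpy as np

    def rank_potential(n, edges, y):
        """Least-squares potential r with (grad r)_(i,j) = r_j - r_i ~ y_ij."""
        B = np.zeros((len(edges), n))
        for k, (i, j) in enumerate(edges):
            B[k, i], B[k, j] = -1.0, 1.0
        r, *_ = np.linalg.lstsq(B, np.asarray(y, dtype=float), rcond=None)
        return r - r.mean()                   # potentials are defined up to a constant

    def algebraic_connectivity(n, edges):
        """Second-smallest Laplacian eigenvalue; larger means more informative comparisons."""
        L = np.zeros((n, n))
        for i, j in edges:
            L[i, i] += 1.0; L[j, j] += 1.0; L[i, j] -= 1.0; L[j, i] -= 1.0
        return np.sort(np.linalg.eigvalsh(L))[1]

Scheduling more out-of-conference games adds arcs between weakly connected communities, which is exactly what raises this eigenvalue.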

11 citations


01 Oct 2012
TL;DR: A fast procedure that generates a new regularization path without tuning the regularization parameter is introduced and the linearized Bregman algorithm is derived, which is algebraically simple and computationally efficient.
Abstract: Sparse logistic regression is an important linear classifier in statistical learning, providing an attractive route for feature selection. A popular approach is based on minimizing an l1-regularization term with a regularization parameter λ that affects the solution sparsity. To determine an appropriate value for the regularization parameter, one can apply the grid search method or the Bayesian approach. The grid search method requires constructing a regularization path, by solving a sequence of minimization problems with varying values of the regularization parameter, which is typically time consuming. In this paper, we introduce a fast procedure that generates a new regularization path without tuning the regularization parameter. We first derive the direct Bregman method by replacing the l1-norm by Bregman divergence, and contrast it with the grid search method. For faster path computation, we further derive the linearized Bregman algorithm, which is algebraically simple and computationally efficient. Finally we demonstrate some empirical results for the linearized Bregman algorithm on benchmark data and study feature selection as an inverse problem. Compared with the grid search method, the linearized Bregman algorithm generates a different regularization path with comparable classification performance, in a much more computationally efficient manner.
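
A compact NumPy sketch in the spirit of the linearized Bregman iteration described in the abstract: a dual variable v accumulates gradients of the logistic loss, and the primal w is its soft-thresholding, so features enter one by one as |v_j| crosses mu, tracing a regularization path in the iteration count. Step size, mu, and all names are illustrative; the paper's exact scaling may differ:

    import numpy as np

    def linearized_bregman_logistic(X, y, mu=1.0, tau=0.5, n_iter=500):
        """Sketch: l1-sparse logistic regression via linearized Bregman iteration."""
        n, d = X.shape
        w, v = np.zeros(d), np.zeros(d)
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-(X @ w)))                    # predicted probabilities
            v -= tau * X.T @ (p - y) / n                          # accumulate loss gradients
            w = np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)      # shrink: w stays sparse
        return w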


Journal ArticleDOI
TL;DR: An efficient musical noise reduction method based on a convex model of time-domain sparse filters is presented; it can also be used as a post-processing tool for more general and recent versions of TF-domain BSS methods.
Abstract: Blind source separation (BSS) methods are useful tools to recover or enhance individual speech sources from their mixtures in a multi-talker environment. A class of efficient BSS methods is based on the mutual exclusion hypothesis of the source signal Fourier spectra on the time-frequency (TF) domain, and subsequent data clustering and classification. Though such methodology is simple, the discontinuous decisions in the TF domain for classification often cause defects in the recovered signals in the time domain. The defects are perceived as unpleasant ringing sounds, the so-called musical noise. Post-processing is desired for further quality enhancement. In this paper, an efficient musical noise reduction method is presented based on a convex model of time-domain sparse filters. The sparse filters are intended to cancel out the interference due to major sparse peaks in the mixing coefficients, or physically the early-arrival and high-energy portion of the room impulse responses. This strategy is efficiently carried out by l1 regularization and the split Bregman method. Evaluations by both synthetic and room-recorded speech and music data show that our method outperforms existing musical noise reduction methods in terms of objective and subjective measures. Our method can be used as a post-processing tool for more general and recent versions of TF-domain BSS methods as well.
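
The artifact itself is easy to reproduce: hard per-bin decisions in the TF domain leave isolated spectral peaks that resynthesize as short tonal bursts, heard as musical noise. A toy SciPy illustration of such a hard TF mask (not of the paper's filtering method), with an arbitrary threshold:

    import numpy as np
    from scipy.signal import stft, istft

    def hard_tf_mask(x, fs, thresh):
        """Zero out low-magnitude STFT bins; surviving isolated bins sound 'musical'."""
        f, t, Z = stft(x, fs)
        _, y = istft(np.where(np.abs(Z) > thresh, Z, 0.0), fs)
        return y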

Journal ArticleDOI
TL;DR: It is shown how the split Bregman algorithm can successfully resolve LWIR lidar data containing mixtures of bioaerosol simulants and interferents into their separate components, which can be classified as bio- or nonbioaerosol by the SVM classifier.
Abstract: For more than a decade, the U.S. government has been developing laser-based sensors for detecting, locating, and classifying aerosols in the atmosphere at safe standoff ranges. The motivation for this work is the need to discriminate aerosols of biological origin from interferent materials such as smoke and dust using the backscatter from multiple wavelengths in the long wave infrared (LWIR) spectral region. Through previous work, algorithms have been developed for estimating the aerosol spectral dependence and concentration range dependence from these data. The range dependence is required for locating and tracking the aerosol plumes, and the backscatter spectral dependence is used for discrimination by a support vector machine classifier. Substantial progress has been made in these algorithms for the case of a single aerosol present in the lidar line-of-sight (LOS). Often, however, mixtures of aerosols are present along the same LOS overlapped in range and time. Analysis of these mixtures of aerosols presents a difficult inverse problem that cannot be successfully treated by the methods used for single aerosols. Fortunately, recent advances have been made in the analysis of inverse problems using shrinkage-based L1-regularization techniques. Of the several L1-regularization methods currently known, the split Bregman algorithm is straightforward to implement, converges rapidly, and is applicable to a broad range of inverse problems including our aerosol unmixing. In this paper, we show how the split Bregman algorithm can successfully resolve LWIR lidar data containing mixtures of bioaerosol simulants and interferents into their separate components. The individual components then can be classified as bio- or nonbioaerosol by our SVM classifier. We illustrate the approach through data collected over the past several years in field tests of the U.S. Army FAL sensor at Dugway Proving Ground, UT.
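
For a sense of the unmixing step, here is a proximal-gradient (ISTA) sketch of the same kind of l1-regularized inversion, min_x 0.5*||A x - b||^2 + lam*||x||_1, where the columns of A would hold candidate aerosol backscatter signatures and b the measured multi-wavelength return. The paper uses the split Bregman algorithm, which converges faster; this simpler iteration only shows the shrinkage mechanism, and all names are ours:

    import numpy as np

    def ista_unmix(A, b, lam, n_iter=500):
        """ISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1 (illustrative stand-in)."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, L the gradient's Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x -= step * (A.T @ (A @ x - b))                           # gradient step on data fit
            x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold
        return x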