
Showing papers by "Stanley Osher published in 2010"


Journal ArticleDOI
TL;DR: The proposed general algorithm framework for inverse problem regularization with a single forward-backward operator splitting step, namely Bregmanized operator splitting (BOS), converges without fully solving the subproblems; numerical results on deconvolution and compressive sensing illustrate the performance of nonlocal total variation regularization under the proposed algorithm framework.
Abstract: Bregman methods, introduced to image processing in [S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin, Multiscale Model. Simul., 4 (2005), pp. 460-489], have been demonstrated to be an efficient optimization method for solving sparse reconstruction with convex functionals, such as the $\ell^1$ norm and total variation [W. Yin, S. Osher, D. Goldfarb, and J. Darbon, SIAM J. Imaging Sci., 1 (2008), pp. 143-168; T. Goldstein and S. Osher, SIAM J. Imaging Sci., 2 (2009), pp. 323-343]. In particular, the efficiency of this method relies on the performance of inner solvers for the resulting subproblems. In this paper, we propose a general algorithm framework for inverse problem regularization with a single forward-backward operator splitting step [P. L. Combettes and V. R. Wajs, Multiscale Model. Simul., 4 (2005), pp. 1168-1200], which is used to solve the subproblems of the Bregman iteration. We prove that the proposed algorithm, namely, Bregmanized operator splitting (BOS), converges without fully solving the subproblems. Furthermore, we apply the BOS algorithm and a preconditioned one for solving inverse problems with nonlocal functionals. Our numerical results on deconvolution and compressive sensing illustrate the performance of nonlocal total variation regularization under the proposed algorithm framework, compared to other regularization techniques such as the standard total variation method and the wavelet-based regularization method.
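As a rough illustration of the BOS idea (a toy sketch, not the authors' implementation), the code below applies it to the model problem min ∥u∥₁ subject to Au = f: each outer Bregman update of the right-hand side is followed by a single forward-backward step, i.e. one gradient step on the quadratic term and one soft-thresholding (shrinkage) step. The step size `delta`, penalty `mu`, and iteration count are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal map of t*||.||_1: componentwise shrinkage."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def bos_l1(A, f, delta, mu, iters=3000):
    """Sketch of Bregmanized operator splitting for min ||u||_1 s.t. Au = f.
    Each outer Bregman update is followed by a single forward-backward
    (gradient + shrinkage) step, rather than a fully solved subproblem."""
    m, n = A.shape
    u = np.zeros(n)
    fk = f.copy()                              # Bregman-updated right-hand side
    for _ in range(iters):
        # forward (gradient) step on (mu/2)*||Au - fk||^2 ...
        grad = mu * A.T @ (A @ u - fk)
        # ... then backward (proximal/shrinkage) step on ||u||_1
        u = soft_threshold(u - delta * grad, delta)
        # add back the residual (Bregman update)
        fk = fk + f - A @ u
    return u
```

For convergence the step size must be small relative to ∥AᵀA∥; on a small compressive sensing instance with a sparse ground truth, the iteration drives the residual toward zero while keeping the iterate sparse.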

718 citations


Journal ArticleDOI
TL;DR: The convergence of the split Bregman iterations is proved with the number of inner iterations fixed to one, which gives a set of new frame based image restoration algorithms that cover several topics in image restoration.
Abstract: Split Bregman methods introduced in [T. Goldstein and S. Osher, SIAM J. Imaging Sci., 2 (2009), pp. 323–343] have been demonstrated to be efficient tools for solving total variation norm minimization problems, which arise from partial differential equation based image restoration such as image denoising and magnetic resonance imaging reconstruction from sparse samples. In this paper, we prove the convergence of the split Bregman iterations, where the number of inner iterations is fixed to be one. Furthermore, we show that these split Bregman iterations can be used to solve minimization problems arising from the analysis based approach for image restoration in the literature. We apply these split Bregman iterations to the analysis based image restoration approach whose analysis operator is derived from tight framelets constructed in [A. Ron and Z. Shen, J. Funct. Anal., 148 (1997), pp. 408–447]. This gives a set of new frame based image restoration algorithms that cover several topics in image restorations...
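To make the "one inner iteration" setting concrete, here is a minimal split Bregman sketch for 1D total variation denoising, min_u ∥Du∥₁ + (μ/2)∥u − f∥². This is an assumption-laden toy (not the paper's frame-based algorithms): each outer pass performs exactly one quadratic u-step, one shrinkage d-step, and one Bregman update, with μ and λ chosen for illustration.

```python
import numpy as np

def shrink(x, t):
    """Componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def split_bregman_tv1d(f, mu, lam, iters=200):
    """Split Bregman sketch for 1D TV denoising:
        min_u ||D u||_1 + (mu/2) ||u - f||^2,
    with D the forward-difference operator and a single inner
    iteration per outer loop."""
    n = f.size
    D = np.diff(np.eye(n), axis=0)            # (n-1) x n forward differences
    M = mu * np.eye(n) + lam * D.T @ D        # normal matrix of the u-step
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    u = f.copy()
    for _ in range(iters):
        u = np.linalg.solve(M, mu * f + lam * D.T @ (d - b))  # quadratic u-step
        d = shrink(D @ u + b, 1.0 / lam)                      # shrinkage d-step
        b = b + D @ u - d                                     # Bregman update
    return u
```

On a noisy piecewise-constant signal this recovers the flat pieces while keeping the jump, the behavior TV regularization is designed for.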

686 citations


Journal ArticleDOI
TL;DR: The primary purpose of this paper is to examine the effectiveness of “Split Bregman” techniques for solving image segmentation problems, and to compare this scheme with more conventional methods.
Abstract: Variational models for image segmentation have many applications, but can be slow to compute. Recently, globally convex segmentation models have been introduced which are very reliable, but contain TV-regularizers, making them difficult to compute. The previously introduced Split Bregman method is a technique for fast minimization of L1 regularized functionals, and has been applied to denoising and compressed sensing problems. By applying the Split Bregman concept to image segmentation problems, we build fast solvers which can out-perform more conventional schemes, such as duality based methods and graph-cuts. The convex segmentation schemes also substantially outperform conventional level set methods, such as the Chan-Vese level set-based segmentation algorithm. We also consider the related problem of surface reconstruction from unorganized data points, which is used for constructing level set representations in 3 dimensions. The primary purpose of this paper is to examine the effectiveness of "Split Bregman" techniques for solving these problems, and to compare this scheme with more conventional methods.

476 citations


Journal ArticleDOI
TL;DR: Two nonlocal regularizations for image recovery, which exploit the spatial interactions in images, are considered; superior results are obtained by using preprocessed data as input for the weighted functionals.
Abstract: This paper considers two nonlocal regularizations for image recovery, which exploit the spatial interactions in images. We obtain superior results by using preprocessed data as input for the weighted functionals. Applications discussed include image deconvolution and tomographic reconstruction. The numerical results show that our method outperforms some previous ones.

333 citations


Journal Article
TL;DR: A novel hybrid algorithm is proposed, based on combining two types of optimization iterations: one very fast and memory friendly, the other slower but more accurate; it has global convergence at a geometric rate (a Q-linear rate in optimization terminology).
Abstract: l1-regularized logistic regression, also known as sparse logistic regression, is widely used in machine learning, computer vision, data mining, bioinformatics and neural signal processing. The use of l1 regularization attributes attractive properties to the classifier, such as feature selection, robustness to noise, and as a result, classifier generality in the context of supervised learning. When a sparse logistic regression problem has large-scale data in high dimensions, it is computationally expensive to minimize the non-differentiable l1-norm in the objective function. Motivated by recent work (Koh et al., 2007; Hale et al., 2008), we propose a novel hybrid algorithm based on combining two types of optimization iterations: one being very fast and memory friendly while the other being slower but more accurate. Called hybrid iterative shrinkage (HIS), the resulting algorithm is comprised of a fixed point continuation phase and an interior point phase. The first phase is based completely on memory efficient operations such as matrix-vector multiplications, while the second phase is based on a truncated Newton's method. Furthermore, we show that various optimization techniques, including line search and continuation, can significantly accelerate convergence. The algorithm has global convergence at a geometric rate (a Q-linear rate in optimization terminology). We present a numerical comparison with several existing algorithms, including an analysis using benchmark data from the UCI machine learning repository, and show our algorithm is the most computationally efficient without loss of accuracy.
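The fast first phase can be pictured as plain iterative shrinkage on the l1-regularized logistic loss. The sketch below is a minimal stand-in for that phase only; the interior-point second phase and the line search / continuation accelerations are omitted, and the step size and λ are illustrative assumptions.

```python
import numpy as np

def shrink(x, t):
    """Componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def logistic_grad(X, y, w):
    """Gradient of the average logistic loss, labels y in {-1, +1}."""
    z = y * (X @ w)
    return -(X.T @ (y / (1.0 + np.exp(z)))) / X.shape[0]

def shrinkage_phase(X, y, lam, step, iters=500):
    """Sketch of a fast iterative shrinkage (fixed point) phase for
    l1-regularized logistic regression:
        min_w  avg log(1 + exp(-y_i x_i^T w)) + lam * ||w||_1.
    Only matrix-vector products are needed, which is what makes this
    phase memory friendly; a slower, more accurate second phase
    (e.g. truncated Newton) is omitted here."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        # gradient step on the smooth loss, then shrinkage on the l1 term
        w = shrink(w - step * logistic_grad(X, y, w), step * lam)
    return w
```

The shrinkage step zeroes out coordinates whose gradient magnitude stays below λ·step, which is how the l1 term performs feature selection.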

79 citations


Book ChapterDOI
29 Nov 2010
TL;DR: The newly proposed method based on the region-scalable model can draw upon intensity information in local regions at a controllable scale, so that it can segment images with intensity inhomogeneity.
Abstract: In this paper, we incorporate the global convex segmentation method and the split Bregman technique into the region-scalable fitting energy model. The newly proposed method based on the region-scalable model can draw upon intensity information in local regions at a controllable scale, so that it can segment images with intensity inhomogeneity. Furthermore, with the application of the global convex segmentation method and the split Bregman technique, the method is very robust and efficient. By incorporating a non-negative edge detector function into the proposed method, the algorithm can detect the boundaries more easily and achieve results that are very similar to those obtained through the classical geodesic active contour model. Experimental results for synthetic and real images have shown the robustness and efficiency of the method and demonstrated its desirable advantages.

61 citations


Journal ArticleDOI
TL;DR: Two new algorithms for tomographic reconstruction which incorporate the technique of equally-sloped tomography (EST) and allow for the optimized and flexible implementation of regularization schemes, such as total variation constraints, and the incorporation of arbitrary physical constraints are developed.
Abstract: We develop two new algorithms for tomographic reconstruction which incorporate the technique of equally-sloped tomography (EST) and allow for the optimized and flexible implementation of regularization schemes, such as total variation constraints, and the incorporation of arbitrary physical constraints. The founding structure of the developed algorithms is EST, a technique of tomographic acquisition and reconstruction first proposed by Miao in 2005 for performing tomographic image reconstructions from a limited number of noisy projections in an accurate manner by avoiding direct interpolations. EST has recently been successfully applied to coherent diffraction microscopy, electron microscopy, and computed tomography for image enhancement and radiation dose reduction. However, the bottleneck of EST lies in its slow speed, due to its high computational requirements. In this paper, we formulate the EST approach as a constrained problem and subsequently transform it into a series of linear problems, which can be accurately solved by the operator splitting method. Based on these mathematical formulations, we develop two iterative algorithms for tomographic image reconstructions through EST, which incorporate Bregman and continuative regularization. Our numerical experiment results indicate that the new tomographic image reconstruction algorithms not only significantly reduce the computational time, but also improve the image quality. We anticipate that EST coupled with the novel iterative algorithms will find broad applications in X-ray tomography, electron microscopy, coherent diffraction microscopy, and other tomography fields.

50 citations


Proceedings ArticleDOI
03 Dec 2010
TL;DR: An alternating direction (aka split Bregman) method for solving problems of the form min_u ∥Au − f∥² + η∥u∥₁ such that u ≥ 0, which works especially well for solving large numbers of small to medium overdetermined problems.
Abstract: We will describe an alternating direction (aka split Bregman) method for solving problems of the form min_u ∥Au − f∥² + η∥u∥₁ such that u ≥ 0, where A is an m×n matrix, and η is a nonnegative parameter. The algorithm works especially well for solving large numbers of small to medium overdetermined problems (i.e. m > n) with a fixed A. We will demonstrate applications in the analysis of hyperspectral images.
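A minimal ADMM sketch of this setup (an illustration under assumed parameters, not the authors' code) might look like the following. The splitting u = v places the least-squares term on one block and the l1-plus-nonnegativity term on the other; ρ is an assumed penalty parameter.

```python
import numpy as np

def admm_nonneg_l1(A, f, eta, rho=1.0, iters=300):
    """Alternating direction (split Bregman) sketch for
        min_u ||Au - f||^2 + eta * ||u||_1   subject to  u >= 0.
    For a fixed A, the matrix M below could be factored once and
    reused across many small problems, which is the regime where
    this approach shines."""
    m, n = A.shape
    M = 2.0 * A.T @ A + rho * np.eye(n)
    v = np.zeros(n)
    b = np.zeros(n)
    for _ in range(iters):
        # quadratic step in u (least-squares term plus coupling)
        u = np.linalg.solve(M, 2.0 * A.T @ f + rho * (v - b))
        # nonnegative shrinkage: prox of eta*||.||_1 restricted to v >= 0
        v = np.maximum(u + b - eta / rho, 0.0)
        # dual (Bregman) update
        b = b + u - v
    return v
```

Because u ≥ 0 makes ∥u∥₁ linear, the v-step reduces to a one-sided shrinkage followed by clipping at zero.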

41 citations


01 Jan 2010
TL;DR: This paper provides a users' guide to a new, general finite difference method for the numerical solution of systems of convection dominated conservation laws, and includes both extensive motivation for the method design and a detailed formulation suitable for direct implementation.
Abstract: This paper provides a users' guide to a new, general finite difference method for the numerical solution of systems of convection dominated conservation laws. We include both extensive motivation for the method design and a detailed formulation suitable for direct implementation. Essentially Non-Oscillatory (ENO) methods are a class of high accuracy, shock capturing numerical methods for hyperbolic systems of conservation laws, based on upwind biased differencing in local characteristic fields. The earliest ENO methods used control volume discretizations, but subsequent work [12] has produced a simpler finite difference form of the ENO method. While this method has achieved excellent results in a great variety of compressible flow problems, there are still special situations where noticeable spurious oscillations develop. Why this occurs is not always understood, and there has been no elegant way to eliminate these problems. Based on the extensive work of Donat and Marquina [1], it appears that these difficulties arise from using a single transformation to local characteristic fields. This paper was presented at the "Solutions of PDE" conference in honour of Prof. Roe on the occasion of his 60th birthday, July 1998, Arcachon, France.

30 citations


Patent
11 Aug 2010
TL;DR: In this paper, a method and apparatus for volumetric image analysis and processing is described, which is able to obtain geometrical information from multi-dimensional (3D or more) images.
Abstract: A method and apparatus for volumetric image analysis and processing is described. Using the method and apparatus, it is possible to obtain geometrical information from multi-dimensional (3D or more) images. As long as an object can be reconstructed as a 3D object, regardless of the source of the images, the method and apparatus can be used to segment the target (in 3D) from the rest of the structure and to obtain the target's geometric information, such as volume and curvature.

15 citations


Journal ArticleDOI
01 Aug 2010
TL;DR: In this paper, a new 3D numerical code, ALE-AMR, was developed through a joint collaboration between LLNL, CEA, and UC (UCSD, UCLA, and LBL) for debris and shrapnel modelling.
Abstract: The generation of neutron/gamma radiation, electromagnetic pulses (EMP), debris and shrapnel at mega-Joule class laser facilities (NIF and LMJ) impacts experiments conducted at these facilities. The complex 3D numerical codes used to assess these impacts range from an established code that required minor modifications (MCNP - calculates neutron and gamma radiation levels in complex geometries), through a code that required significant modifications to treat new phenomena (EMSolve - calculates EMP from electrons escaping from laser targets), to a new code, ALE-AMR, that is being developed through a joint collaboration between LLNL, CEA, and UC (UCSD, UCLA, and LBL) for debris and shrapnel modelling.

ReportDOI
01 Oct 2010
TL;DR: In this paper, a collaborative convex framework for factoring a data matrix X into a non-negative product AS, with a sparse coefficient matrix S, is introduced, which restricts the columns of the dictionary matrix A to coincide with certain columns of X, thereby guaranteeing a physically meaningful dictionary and dimensionality reduction.
Abstract: : A collaborative convex framework for factoring a data matrix X into a non-negative product AS, with a sparse coefficient matrix S, is introduced. We restrict the columns of the dictionary matrix A to coincide with certain columns of X, thereby guaranteeing a physically meaningful dictionary and dimensionality reduction. As an example, we show applications of the proposed framework on hyperspectral endmember and abundances identification.
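As a toy illustration of the self-dictionary idea, consider the simplified sketch below. It is not the paper's method: the dictionary columns are picked by hand rather than by the collaborative convex selection, and plain projected gradient stands in for the paper's optimization; the function name, λ, and step size are assumptions.

```python
import numpy as np

def sparse_abundances(X, idx, lam=0.001, step=None, iters=2000):
    """Simplified self-dictionary sketch: the dictionary A is taken
    directly from columns `idx` of the data matrix X, guaranteeing a
    physically meaningful dictionary, and a nonnegative sparse
    coefficient matrix S is fit by projected gradient on
        min_S ||X - A S||_F^2 + lam * sum(S)   subject to  S >= 0."""
    A = X[:, idx]
    if step is None:
        step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)  # 1/Lipschitz constant
    S = np.zeros((A.shape[1], X.shape[1]))
    for _ in range(iters):
        # since S >= 0, the l1 penalty contributes a constant gradient lam
        grad = 2.0 * A.T @ (A @ S - X) + lam
        S = np.maximum(S - step * grad, 0.0)             # projected gradient step
    return A, S
```

On hyperspectral-style data where every column is a nonnegative mixture of a few "endmember" columns, S recovers the mixing weights.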

Book ChapterDOI
29 Nov 2010
TL;DR: The Chan-Vese model is extended to hyperspectral image segmentation with shape and signal priors with the use of the Split Bregman algorithm, which makes the method very efficient compared to other existing segmentation methods incorporating priors.
Abstract: In this paper, we extend the Chan-Vese model for image segmentation in [1] to hyperspectral image segmentation with shape and signal priors. The use of the Split Bregman algorithm makes our method very efficient compared to other existing segmentation methods incorporating priors. We demonstrate our results on aerial hyperspectral images.

Book ChapterDOI
29 Nov 2010
TL;DR: In this article, a parallel algorithm for solving the l1- compressive sensing problem is presented, which takes advantage of shared memory, vectorized, parallel and many-core microprocessors such as GPUs and standard vectorized multi-core processors (e.g. quad-core CPUs).
Abstract: This paper describes a parallel algorithm for solving the l1- compressive sensing problem. Its design takes advantage of shared memory, vectorized, parallel and many-core microprocessors such as Graphics Processing Units (GPUs) and standard vectorized multi-core processors (e.g. quad-core CPUs). Experiments are conducted on these architectures, showing evidence of the efficiency of our approach.

Patent
22 Mar 2010
TL;DR: In this article, a method and apparatus for processing image data generated by bio-analytical devices, such as DNA sequencers, is described; it removes artifacts such as noise, blur, background, non-uniform illumination, and lack of registration, and extracts pixel signals back to DNA beads in a way that de-mixes pixels that contain contributions from nearby beads.
Abstract: This invention relates to a method and apparatus for image processing, and more particularly, this invention relates to a method and apparatus for processing image data generated by bioanalytical devices, such as DNA sequencers. An object of the present invention is to remove artifacts such as noise, blur, background, non-uniform illumination, lack of registration, and extract pixel signals back to DNA-beads in a way that de-mixes pixels that contain contributions from nearby beads. In one aspect of the present invention, a system for optimizing an image comprises means for receiving an initial image which includes a plurality of microparticles with different intensities; a computing device, comprising a processor executing instructions to perform: generating an initial function denoting each microparticle's location and intensity in the initial image; determining an image processing operator adapted to determine an extent of point spread and blurriness in the initial image; computing an optimum function denoting each microparticle's location and intensity in an optimizing image; and producing the optimizing image with enhanced accuracy and density of the microparticles.

Proceedings ArticleDOI
26 Sep 2010
TL;DR: The proposed musical noise reduction method is evaluated by both synthetic and room recorded speech and music data, and found to outperform existing musical noise reduction methods in terms of the objective and subjective measures.
Abstract: Musical noise often arises in the outputs of time-frequency binary mask based blind source separation approaches. Postprocessing is desired to enhance the separation quality. An efficient musical noise reduction method by time-domain sparse filters is presented using convex optimization. The sparse filters are sought by l1 regularization and the split Bregman method. The proposed musical noise reduction method is evaluated by both synthetic and room recorded speech and music data, and found to outperform existing musical noise reduction methods in terms of the objective and subjective measures. Index Terms: Musical noise, time-frequency mask, timedomain sparse filters, split Bregman method.

Journal ArticleDOI
TL;DR: In this article, a multiscale representation (MSR) for shapes via level set motions and PDEs is introduced, and a surface inpainting algorithm is designed to recover three-dimensional geometry of blood vessels.
Abstract: In this paper, we will first introduce a novel multiscale representation (MSR) for shapes via level set motions and PDEs. Based on the MSR, we will then design a surface inpainting algorithm to recover three-dimensional geometry of blood vessels. Because of the nature of irregular morphology in vessels and organs, both phantom and real inpainting scenarios were tested using our new algorithm. Successful vessel recoveries are demonstrated with numerical estimation of the degree of arteriosclerosis and vessel occlusion.

Journal ArticleDOI
TL;DR: A level set based surface capturing algorithm to first capture the aneurysms from the vascular tree is presented and applications to medical images are presented to show the accuracy, consistency and robustness of the method in capturing brain aneurYSms and volume quantification.
Abstract: Brain aneurysm rupture has been reported to be closely related to aneurysm size. The current method used to determine aneurysm size is to measure the dimension of the aneurysm dome and the width of the aneurysm neck. Since aneurysms usually have complicated shapes, using just the size of the aneurysm dome and neck may not be accurate and may overlook important geometrical information. In this paper we present a level set based surface capturing algorithm to first capture the aneurysms from the vascular tree. Since aneurysms are described by level set functions, volumes, curvatures and other geometric quantities of the aneurysm surface can easily be computed for medical studies. Experiments and comparisons with models used for capturing illusory contours in 2D images are performed. Applications to medical images are also presented to show the accuracy, consistency and robustness of our method in capturing brain aneurysms and volume quantification.

Journal ArticleDOI
TL;DR: A new nonlinear evolution partial differential equation (PDE) is introduced for sparse deconvolution problems; it has some interesting physical and geometric interpretations and can be used as a natural and helpful plug-in to some algorithms for sparse reconstruction problems.
Abstract: In this paper, we introduce a new nonlinear evolution partial differential equation (PDE) for sparse deconvolution problems. The proposed PDE has the form of a continuity equation that arises in various research areas, e.g., fluid dynamics and optimal transportation, and thus has some interesting physical and geometric interpretations. The underlying optimization model that we consider is the standard $\ell_1$ minimization with linear equality constraints, i.e., $\min_u\{\|u\|_1:Au=f\}$, with A being an undersampled convolution operator. We show that our PDE preserves the $\ell_1$ norm while lowering the residual $\|Au-f\|_2$. More importantly the solution of the PDE becomes sparser asymptotically, which is illustrated numerically. Therefore, it can be treated as a natural and helpful plug-in to some algorithms for $\ell_1$ minimization problems, e.g., Bregman iterative methods introduced for sparse reconstruction problems in [W. Yin, S. Osher, D. Goldfarb, and J. Darbon, SIAM J. Imaging Sci., 1 (2008), p...

Proceedings Article
01 Jan 2010
TL;DR: The FSE method is evaluated and found to outperform existing blind speech separation approaches on both synthetic and room recorded data in terms of the overall computational speed and separation quality.
Abstract: A fast speech extraction (FSE) method is presented using convex optimization made possible by pause detection of the speech sources. Sparse unmixing filters are sought by l1 regularization and the split Bregman method. A subdivided split Bregman method is developed for efficiently estimating long reverberations in real room recordings. The speech pause detection is based on a binary mask source separation method. The FSE method is evaluated and found to outperform existing blind speech separation approaches on both synthetic and room recorded data in terms of the overall computational speed and separation quality. Index Terms: convexity, sparse filters, split Bregman method, fast blind speech extraction.