Author

Yin Zhang

Bio: Yin Zhang is an academic researcher from Rice University. The author has contributed to research on topics including computer science and interior-point methods, has an h-index of 47, and has co-authored 124 publications receiving 13,702 citations. Previous affiliations of Yin Zhang include Stony Brook University and The Chinese University of Hong Kong.


Papers
Journal Article
TL;DR: Proposes an alternating minimization algorithm, arising from a new half-quadratic model applicable to both the anisotropic and the isotropic forms of TV discretization, for recovering images from blurry and noisy observations with total variation (TV) regularization.
Abstract: We propose, analyze, and test an alternating minimization algorithm for recovering images from blurry and noisy observations with total variation (TV) regularization. This algorithm arises from a new half-quadratic model applicable to not only the anisotropic but also the isotropic forms of TV discretizations. The per-iteration computational complexity of the algorithm is three fast Fourier transforms. We establish strong convergence properties for the algorithm including finite convergence for some variables and relatively fast exponential (or $q$-linear in optimization terminology) convergence for the others. Furthermore, we propose a continuation scheme to accelerate the practical convergence of the algorithm. Extensive numerical results show that our algorithm performs favorably in comparison to several state-of-the-art algorithms. In particular, it runs orders of magnitude faster than the lagged diffusivity algorithm for TV-based deblurring. Some extensions of our algorithm are also discussed.

1,883 citations
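For illustration, the following is a minimal Python sketch of this kind of half-quadratic alternating minimization (not the authors' implementation): the auxiliary gradient variables are updated by two-dimensional shrinkage and the image by a single FFT-domain solve, assuming periodic boundary conditions and a blur kernel with its origin at index (0, 0); all parameter values are placeholders.

```python
import numpy as np

def tv_deblur(f, kernel, mu=1e3, beta=20.0, iters=100):
    """Sketch: half-quadratic alternating minimization for TV deblurring.

    Model: min_u sum_i ||D_i u||_2 + (mu/2)||K u - f||_2^2.  Auxiliary
    variables w ~ D u are updated by 2D shrinkage; u is recovered by one
    FFT-domain solve.  Periodic boundaries and a kernel whose origin sits
    at index (0, 0) are assumed; parameter values are placeholders.
    """
    m, n = f.shape

    # Transfer functions of the blur kernel and the forward differences.
    K_hat = np.fft.fft2(kernel, s=(m, n))
    dx = np.zeros((m, n)); dx[0, 0] = -1.0; dx[0, -1] = 1.0
    dy = np.zeros((m, n)); dy[0, 0] = -1.0; dy[-1, 0] = 1.0
    Dx_hat, Dy_hat = np.fft.fft2(dx), np.fft.fft2(dy)

    denom = (np.abs(Dx_hat)**2 + np.abs(Dy_hat)**2
             + (mu / beta) * np.abs(K_hat)**2)
    KTf_hat = np.conj(K_hat) * np.fft.fft2(f)

    u = f.copy()
    for _ in range(iters):
        # w-step: two-dimensional (isotropic) shrinkage of the gradient field.
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2)
        mag[mag == 0] = 1.0
        shrink = np.maximum(mag - 1.0 / beta, 0.0) / mag
        wx, wy = shrink * ux, shrink * uy

        # u-step: (Dx'Dx + Dy'Dy + (mu/beta)K'K) u = Dx'wx + Dy'wy + (mu/beta)K'f,
        # diagonalized by the 2D FFT under periodic boundary conditions.
        rhs_hat = (np.conj(Dx_hat) * np.fft.fft2(wx)
                   + np.conj(Dy_hat) * np.fft.fft2(wy)
                   + (mu / beta) * KTf_hat)
        u = np.real(np.fft.ifft2(rhs_hat / denom))
    return u
```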

Posted Content
TL;DR: In this article, alternating direction algorithms are proposed for solving the basis pursuit problem and the basis pursuit denoising problem in both its unconstrained and constrained forms, among others.
Abstract: In this paper, we propose and study the use of alternating direction algorithms for several $\ell_1$-norm minimization problems arising from sparse solution recovery in compressive sensing, including the basis pursuit problem, the basis-pursuit denoising problems of both unconstrained and constrained forms, as well as others. We present and investigate two classes of algorithms derived from either the primal or the dual forms of the $\ell_1$-problems. The construction of the algorithms consists of two main steps: (1) to reformulate an $\ell_1$-problem into one having partially separable objective functions by adding new variables and constraints; and (2) to apply an exact or inexact alternating direction method to the resulting problem. The derived alternating direction algorithms can be regarded as first-order primal-dual algorithms because both primal and dual variables are updated at each and every iteration. Convergence properties of these algorithms are established or restated when they already exist. Extensive numerical results in comparison with several state-of-the-art algorithms are given to demonstrate that the proposed algorithms are efficient, stable and robust. Moreover, we present numerical results to emphasize two practically important but perhaps overlooked points. One point is that algorithm speed should always be evaluated relative to appropriate solution accuracy; another is that whenever erroneous measurements possibly exist, the $\ell_1$-norm fidelity should be the fidelity of choice in compressive sensing.

1,062 citations
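As a rough illustration of the primal alternating direction idea (not the paper's exact algorithm), the sketch below applies ADMM to the unconstrained penalized model min_x lam*||x||_1 + 0.5*||Ax - b||^2: splitting x = z makes the x-update a cached linear solve and the z-update a shrinkage; lam, rho, and the iteration count are illustrative.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_bpdn(A, b, lam=0.1, rho=1.0, iters=200):
    """Sketch: ADMM on min_x lam*||x||_1 + 0.5*||A x - b||_2^2.

    The splitting x = z makes the objective separable: x is updated by a
    linear solve, z by shrinkage, and the multiplier u by a dual ascent
    step.  The Cholesky factor of (A'A + rho*I) is computed once.
    """
    m, n = A.shape
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    x = z = u = np.zeros(n)
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-step
        z = soft(x + u, lam / rho)                          # z-step
        u = u + x - z                                       # dual update
    return z
```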

Journal Article
TL;DR: This paper proposes and investigates two classes of algorithms derived from either the primal or the dual form of $\ell_1$-problems, and presents numerical results to emphasize two practically important but perhaps overlooked points: that algorithm speed should be evaluated relative to appropriate solution accuracy; and that when erroneous measurements possibly exist, the $\ell_1$-fidelity should generally be preferable to the $\ell_2$-fidelity.
Abstract: In this paper, we propose and study the use of alternating direction algorithms for several $\ell_1$-norm minimization problems arising from sparse solution recovery in compressive sensing, including the basis pursuit problem, the basis pursuit denoising problems of both unconstrained and constrained forms, and others. We present and investigate two classes of algorithms derived from either the primal or the dual form of $\ell_1$-problems. The construction of the algorithms consists of two main steps: (1) to reformulate an $\ell_1$-problem into one having blockwise separable objective functions by adding new variables and constraints; and (2) to apply an exact or inexact alternating direction method to the augmented Lagrangian function of the resulting problem. The derived alternating direction algorithms can be regarded as first-order primal-dual algorithms because both primal and dual variables are updated at every iteration. Convergence properties of these algorithms are established or restated when they already exist. Extensive numerical experiments are performed, using randomized partial Walsh-Hadamard sensing matrices, to demonstrate the versatility and effectiveness of the proposed approach. Moreover, we present numerical results to emphasize two practically important but perhaps overlooked points: (i) that algorithm speed should be evaluated relative to appropriate solution accuracy; and (ii) that when erroneous measurements possibly exist, the $\ell_1$-fidelity should generally be preferable to the $\ell_2$-fidelity.

983 citations
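A minimal sketch of a dual-based alternating direction method for the basis pursuit problem min ||x||_1 s.t. Ax = b follows, assuming A has orthonormal rows (AA' = I, as with partial Walsh-Hadamard matrices) so that the y-update is closed form; this is a simplified reading of the dual approach, with step lengths and stopping rules omitted.

```python
import numpy as np

def dual_admm_basis_pursuit(A, b, beta=1.0, iters=500):
    """Sketch: dual-based alternating direction method for basis pursuit.

    The dual of min ||x||_1 s.t. Ax = b is max b'y s.t. ||A'y||_inf <= 1.
    Splitting z = A'y, the z-step projects onto the infinity-norm ball,
    the y-step is closed form when A A' = I, and the multiplier x
    converges to a sparse primal solution.  Simplified sketch only.
    """
    m, n = A.shape
    y = np.zeros(m)
    z = np.zeros(n)
    x = np.zeros(n)   # multiplier; approaches the primal solution
    for _ in range(iters):
        # z-step: project A'y + x/beta onto {z : ||z||_inf <= 1}.
        z = np.clip(A.T @ y + x / beta, -1.0, 1.0)
        # y-step: from beta*(A A')y = b - A x + beta*A z with A A' = I.
        y = A @ z + (b - A @ x) / beta
        # multiplier update drives A'y - z toward zero.
        x = x + beta * (A.T @ y - z)
    return x
```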

Journal Article
TL;DR: The structure of optimal solution sets is studied, finite convergence for important quantities is proved, and $q$-linear convergence rates are established for the fixed-point algorithm applied to problems with $f(x)$ convex, but not necessarily strictly convex.
Abstract: We present a framework for solving the large-scale $\ell_1$-regularized convex minimization problem:\[ \min\|x\|_1+\mu f(x). \] Our approach is based on two powerful algorithmic ideas: operator-splitting and continuation. Operator-splitting results in a fixed-point algorithm for any given scalar $\mu$; continuation refers to approximately following the path traced by the optimal value of $x$ as $\mu$ increases. In this paper, we study the structure of optimal solution sets, prove finite convergence for important quantities, and establish $q$-linear convergence rates for the fixed-point algorithm applied to problems with $f(x)$ convex, but not necessarily strictly convex. The continuation framework, motivated by our convergence results, is demonstrated to facilitate the construction of practical algorithms.

912 citations
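A hedged sketch of the operator-splitting plus continuation idea for the case f(x) = 0.5*||Ax - b||^2: the fixed-point (shrinkage) iteration is run for a geometric sequence of mu values increasing to the target, warm-starting each stage. The schedule, step length, and iteration counts below are illustrative, not the authors' tuned implementation.

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fpc(A, b, mu_target, inner_iters=100, n_stages=8):
    """Sketch: fixed-point continuation for min_x ||x||_1 + mu*f(x),
    with f(x) = 0.5*||A x - b||^2.

    Operator splitting gives the shrinkage iteration
        x <- shrink(x - tau * grad f(x), tau / mu),
    and continuation approximately follows the solution path by letting
    mu increase geometrically to mu_target, warm-starting each stage.
    """
    m, n = A.shape
    tau = 1.0 / np.linalg.norm(A, 2) ** 2   # step length < 2 / lambda_max(A'A)
    x = np.zeros(n)
    mus = mu_target * np.logspace(-(n_stages - 1), 0, n_stages, base=2.0)
    for mu in mus:
        for _ in range(inner_iters):
            grad = A.T @ (A @ x - b)
            x = soft(x - tau * grad, tau / mu)
    return x
```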

Journal Article
TL;DR: A low-rank factorization model is proposed and a nonlinear successive over-relaxation (SOR) algorithm is constructed that only requires solving a linear least squares problem per iteration to improve the capacity of solving large-scale problems.
Abstract: The matrix completion problem is to recover a low-rank matrix from a subset of its entries. The main solution strategy for this problem has been based on nuclear-norm minimization which requires computing singular value decompositions—a task that is increasingly costly as matrix sizes and ranks increase. To improve the capacity of solving large-scale problems, we propose a low-rank factorization model and construct a nonlinear successive over-relaxation (SOR) algorithm that only requires solving a linear least squares problem per iteration. Extensive numerical experiments show that the algorithm can reliably solve a wide range of problems at a speed at least several times faster than many nuclear-norm minimization algorithms. In addition, convergence of this nonlinear SOR algorithm to a stationary point is analyzed.

801 citations
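A simplified sketch of the low-rank factorization model follows: the plain alternating least-squares version (relaxation weight omega = 1), where each iteration solves two small least-squares problems and re-imposes the observed entries. The paper's nonlinear SOR acceleration with an adaptively chosen omega > 1 is omitted here.

```python
import numpy as np

def lowrank_complete(M_obs, mask, k, iters=200):
    """Sketch: low-rank factorization model for matrix completion.

    Find X (m x k), Y (k x n), and Z such that Z = X Y off the observed
    set and Z = M on it.  This is the omega = 1 alternating least-squares
    variant; the nonlinear SOR acceleration is omitted.
    """
    m, n = M_obs.shape
    rng = np.random.default_rng(0)
    X = rng.standard_normal((m, k))
    Y = rng.standard_normal((k, n))
    Z = np.where(mask, M_obs, 0.0)   # start from the observed data
    for _ in range(iters):
        # Two small linear least-squares solves per iteration.
        X = np.linalg.lstsq(Y.T, Z.T, rcond=None)[0].T   # min_X ||X Y - Z||_F
        Y = np.linalg.lstsq(X, Z, rcond=None)[0]          # min_Y ||X Y - Z||_F
        # Z-step: take X Y, then re-impose the observed entries.
        Z = X @ Y
        Z[mask] = M_obs[mask]
    return Z
```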


Cited by
Book
23 May 2011
TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
Abstract: Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.

17,433 citations
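To illustrate the distributed-optimization use case described above, here is a minimal sketch of global-consensus ADMM for a lasso problem whose data are split across blocks held by separate workers; the block structure, penalty rho, and iteration count are illustrative.

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def consensus_lasso(blocks, lam=0.1, rho=1.0, iters=200):
    """Sketch: global-consensus ADMM for a distributed lasso,
        min  sum_i 0.5*||A_i x - b_i||^2 + lam*||x||_1,
    with the data split into blocks (A_i, b_i).

    Each worker keeps a local copy x_i constrained to equal a global z;
    the x_i-updates are independent ridge-type solves (parallelizable),
    the z-update averages and soft-thresholds, and u_i are scaled duals.
    """
    n = blocks[0][0].shape[1]
    N = len(blocks)
    xs = [np.zeros(n) for _ in blocks]
    us = [np.zeros(n) for _ in blocks]
    z = np.zeros(n)
    # Cache each worker's Cholesky factor of (A_i'A_i + rho*I).
    facts = [np.linalg.cholesky(A.T @ A + rho * np.eye(n)) for A, _ in blocks]
    for _ in range(iters):
        for i, (A, b) in enumerate(blocks):
            rhs = A.T @ b + rho * (z - us[i])
            L = facts[i]
            xs[i] = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        avg = sum(x + u for x, u in zip(xs, us)) / N
        z = soft(avg, lam / (N * rho))       # gather, average, shrink
        us = [u + x - z for u, x in zip(us, xs)]
    return z
```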

Book
01 Nov 2008
TL;DR: Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization, responding to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems.
Abstract: Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization. It responds to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems. For this new edition the book has been thoroughly updated throughout. There are new chapters on nonlinear interior methods and derivative-free methods for optimization, both of which are used widely in practice and the focus of much current research. Because of the emphasis on practical methods, as well as the extensive illustrations and exercises, the book is accessible to a wide audience. It can be used as a graduate text in engineering, operations research, mathematics, computer science, and business. It also serves as a handbook for researchers and practitioners in the field. The authors have strived to produce a text that is pleasant to read, informative, and rigorous - one that reveals both the beautiful nature of the discipline and its practical side.

17,420 citations

Journal Article
TL;DR: This tutorial gives an overview of the basic ideas underlying Support Vector (SV) machines for function estimation, and includes a summary of currently used algorithms for training SV machines, covering both the quadratic programming part and advanced methods for dealing with large datasets.
Abstract: In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from a SV perspective.

10,696 citations
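As a minimal usage example of the function-estimation setting the tutorial covers (epsilon-insensitive support vector regression), the snippet below uses scikit-learn rather than any code from the tutorial; the data and hyperparameters are placeholders.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 5, size=(80, 1)), axis=0)
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(80)

# C trades flatness against training error; epsilon sets the width of the
# insensitive tube around the data.
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
y_pred = model.predict(X)
print("support vectors used:", len(model.support_))
```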

Journal Article
TL;DR: A comprehensive description of the primal-dual interior-point algorithm with a filter line-search method for nonlinear programming is provided, including the feasibility restoration phase for the filter method, second-order corrections, and inertia correction of the KKT matrix.
Abstract: We present a primal-dual interior-point algorithm with a filter line-search method for nonlinear programming. Local and global convergence properties of this method were analyzed in previous work. Here we provide a comprehensive description of the algorithm, including the feasibility restoration phase for the filter method, second-order corrections, and inertia correction of the KKT matrix. Heuristics are also considered that allow faster performance. This method has been implemented in the IPOPT code, which we demonstrate in a detailed numerical study based on 954 problems from the CUTEr test set. An evaluation is made of several line-search options, and a comparison is provided with two state-of-the-art interior-point codes for nonlinear programming.

7,966 citations
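For orientation, here is a toy sketch of the primal-dual interior-point idea on a bound-constrained quadratic program: Newton steps on the perturbed KKT conditions with a fraction-to-boundary rule and a shrinking barrier parameter. It deliberately omits the filter line search, second-order corrections, and inertia correction the paper describes, and it is not based on the IPOPT code.

```python
import numpy as np

def toy_interior_point(Q, c, iters=30, sigma=0.2):
    """Toy sketch: primal-dual interior-point steps for
        min 0.5*x'Qx + c'x   s.t.  x >= 0,
    with dual variables z for the bounds (Q assumed positive semidefinite).
    """
    n = len(c)
    x = np.ones(n)
    z = np.ones(n)
    for _ in range(iters):
        mu = sigma * (x @ z) / n   # target value of the barrier parameter
        # Reduced Newton system for the perturbed KKT conditions
        #   Qx + c - z = 0,  x_i z_i = mu:
        #   (Q + diag(z/x)) dx = -(Qx + c) + mu/x,   dz = (mu - x*z - z*dx)/x.
        H = Q + np.diag(z / x)
        dx = np.linalg.solve(H, -(Q @ x + c) + mu / x)
        dz = (mu - x * z - z * dx) / x
        # Fraction-to-boundary rule keeps x and z strictly positive.
        ax = min(1.0, 0.995 * float((-x[dx < 0] / dx[dx < 0]).min(initial=np.inf)))
        az = min(1.0, 0.995 * float((-z[dz < 0] / dz[dz < 0]).min(initial=np.inf)))
        x = x + ax * dx
        z = z + az * dz
    return x
```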

Journal Article
Jos F. Sturm
TL;DR: This paper describes how to work with SeDuMi, an add-on for MATLAB that lets you solve optimization problems with linear, quadratic, and semidefiniteness constraints, handling large-scale problems efficiently by exploiting sparsity.
Abstract: SeDuMi is an add-on for MATLAB, which lets you solve optimization problems with linear, quadratic and semidefiniteness constraints. It is possible to have complex valued data and variables in SeDuMi. Moreover, large scale optimization problems are solved efficiently, by exploiting sparsity. This paper describes how to work with this toolbox.

7,655 citations
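SeDuMi itself is a MATLAB toolbox; as a stand-in illustration of the problem class it targets (conic programs with linear and semidefinite constraints), here is a tiny semidefinite program expressed with cvxpy in Python and handed to whichever conic solver is installed. The cost matrix and constraint are placeholders.

```python
import numpy as np
import cvxpy as cp

n = 4
rng = np.random.default_rng(0)
C = rng.standard_normal((n, n))
C = (C + C.T) / 2                      # symmetric cost matrix

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0,                 # X is positive semidefinite
               cp.trace(X) == 1]       # a simple linear equality constraint
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
prob.solve()                           # dispatched to an installed conic solver
print("optimal value:", prob.value)
```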