Topic

Convex optimization

About: Convex optimization is a research topic. Over the lifetime, 24,906 publications have been published within this topic, receiving 908,795 citations. The topic is also known as: convex optimisation.


Papers
Book
Sébastien Bubeck
28 Oct 2015
TL;DR: This monograph presents the main complexity theorems in convex optimization and their corresponding algorithms, and provides a gentle introduction to structural optimization with FISTA, saddle-point mirror prox (Nemirovski's alternative to Nesterov's smoothing), and a concise description of interior point methods.
Abstract: This monograph presents the main complexity theorems in convex optimization and their corresponding algorithms. Starting from the fundamental theory of black-box optimization, the material progresses towards recent advances in structural optimization and stochastic optimization. Our presentation of black-box optimization, strongly influenced by the seminal book of Nesterov, includes the analysis of cutting plane methods, as well as accelerated gradient descent schemes. We also pay special attention to non-Euclidean settings (relevant algorithms include Frank-Wolfe, mirror descent, and dual averaging) and discuss their relevance in machine learning. We provide a gentle introduction to structural optimization with FISTA (to optimize a sum of a smooth and a simple non-smooth term), saddle-point mirror prox (Nemirovski's alternative to Nesterov's smoothing), and a concise description of interior point methods. In stochastic optimization we discuss stochastic gradient descent, mini-batches, random coordinate descent, and sublinear algorithms. We also briefly touch upon convex relaxation of combinatorial problems and the use of randomness to round solutions, as well as random-walk-based methods.

1,213 citations
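
The FISTA scheme the abstract highlights pairs a proximal (shrinkage) step with Nesterov-style momentum. Below is a minimal NumPy sketch for the lasso problem, a standard instance of "a sum of a smooth and a simple non-smooth term"; the function names, step size, and iteration budget are illustrative assumptions, not taken from the monograph.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_lasso(A, b, lam, n_iter=500):
    """FISTA sketch for min_x 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                       # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)                        # gradient of the smooth term at y
        x_new = soft_threshold(y - grad / L, lam / L)   # proximal (shrinkage) step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2        # momentum sequence
        y = x_new + ((t - 1) / t_new) * (x_new - x)     # extrapolation point
        x, t = x_new, t_new
    return x
```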

Journal ArticleDOI
TL;DR: A new fast algorithm for solving one of the standard formulations of image restoration and reconstruction which consists of an unconstrained optimization problem where the objective includes an l2 data-fidelity term and a nonsmooth regularizer is proposed.
Abstract: We propose a new fast algorithm for solving one of the standard formulations of image restoration and reconstruction: an unconstrained optimization problem whose objective includes an l2 data-fidelity term and a nonsmooth regularizer. This formulation allows both wavelet-based (with orthogonal or frame-based representations) and total-variation regularization. Our approach is based on a variable splitting to obtain an equivalent constrained optimization formulation, which is then addressed with an augmented Lagrangian method. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence has been proved. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is faster than the current state-of-the-art methods.

1,211 citations
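
For the simplest instance of this formulation (an l1 regularizer, as arises with an orthogonal wavelet representation), the variable splitting x = z and the augmented Lagrangian reduce to three short updates per iteration. The sketch below is a generic ADMM loop under that assumption, not the authors' algorithm, which also covers the frame-based and total-variation cases and exploits problem structure in the quadratic subproblem; here that subproblem is solved with a plain Cholesky factorization.

```python
import numpy as np

def admm_l1(A, b, lam, rho=1.0, n_iter=200):
    """Generic ADMM sketch for min_x 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    # Factor once: the x-update solves (A^T A + rho I) x = A^T b + rho (z - u).
    M = np.linalg.cholesky(AtA + rho * np.eye(n))
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(n_iter):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(M.T, np.linalg.solve(M, rhs))              # quadratic (data-fidelity) step
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # shrinkage step
        u = u + x - z                                                  # dual (multiplier) update
    return z
```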

Book
01 Jan 1996
TL;DR: A textbook treatment of linear programming covering the simplex method, duality theory, network-type problems, interior-point methods, and extensions to integer, quadratic, and convex programming.
Abstract: Preface. Part 1: Basic Theory - The Simplex Method and Duality. 1. Introduction. 2. The Simplex Method. 3. Degeneracy. 4. Efficiency of the Simplex Method. 5. Duality Theory. 6. The Simplex Method in Matrix Notation. 7. Sensitivity and Parametric Analyses. 8. Implementation Issues. 9. Problems in General Form. 10. Convex Analysis. 11. Game Theory. 12. Regression. Part 2: Network-Type Problems. 13. Network Flow Problems. 14. Applications. 15. Structural Optimization. Part 3: Interior-Point Methods. 16. The Central Path. 17. A Path-Following Method. 18. The KKT System. 19. Implementation Issues. 20. The Affine-Scaling Method. 21. The Homogeneous Self-Dual Method. Part 4: Extensions. 22. Integer Programming. 23. Quadratic Programming. 24. Convex Programming. Appendix A: Source Listings. Answers to Selected Exercises. Bibliography. Index.

1,194 citations
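
Implementations of both families of methods developed in the book (simplex in Part 1, interior-point in Part 3) are available off the shelf. As a quick illustration on a toy LP of our own devising, SciPy's linprog with the HiGHS backend, which selects between simplex and interior-point solvers:

```python
from scipy.optimize import linprog

# Toy LP: maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
# linprog minimizes, so the objective is negated.
c = [-3, -2]
A_ub = [[1, 1], [1, 3]]
b_ub = [4, 6]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)  # optimal point (4, 0) with objective value 12
```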

Journal ArticleDOI
TL;DR: It is shown that in some instances the combinatorial phase retrieval problem can be solved by convex programming techniques, and it is proved that the methodology is robust vis-à-vis additive noise.
Abstract: Suppose we wish to recover a signal $x \in \mathbb{C}^n$ from $m$ intensity measurements of the form $|\langle x, z_i \rangle|^2$, $i = 1, \ldots, m$; that is, from data in which phase information is missing. We prove that if the vectors $z_i$ are sampled independently and uniformly at random on the unit sphere, then the signal $x$ can be recovered exactly (up to a global phase factor) by solving a convenient semidefinite program, a trace-norm minimization problem; this holds with large probability provided that $m$ is on the order of $n \log n$, and without any assumption about the signal whatsoever. This novel result demonstrates that in some instances, the combinatorial phase retrieval problem can be solved by convex programming techniques. Finally, we also prove that our methodology is robust vis-à-vis additive noise. © 2012 Wiley Periodicals, Inc.

1,190 citations
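
The trace-norm minimization is easy to prototype with an off-the-shelf modeling tool. The sketch below is a real-valued simplification (the paper works over $\mathbb{C}^n$) written with CVXPY; the problem sizes and random data are our own illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

n, m = 8, 40                                  # toy sizes; recovery needs m on the order of n log n
rng = np.random.default_rng(0)
x_true = rng.standard_normal(n)
Z = rng.standard_normal((m, n))               # sensing vectors as rows
b = (Z @ x_true) ** 2                         # phaseless (squared-magnitude) measurements

X = cp.Variable((n, n), symmetric=True)       # lifted variable standing in for x x^T
constraints = [X >> 0]                        # positive semidefinite
constraints += [cp.trace(np.outer(Z[i], Z[i]) @ X) == b[i] for i in range(m)]
cp.Problem(cp.Minimize(cp.trace(X)), constraints).solve()

# Read x back (up to a global sign) from the leading eigenvector of X.
w, V = np.linalg.eigh(X.value)
x_hat = np.sqrt(max(w[-1], 0.0)) * V[:, -1]
```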

Journal ArticleDOI
TL;DR: This work considers the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse, and presents two new algorithms for solving problems with at least a thousand nodes in the Gaussian case.
Abstract: We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l1-norm penalty term. The problem as formulated is convex, but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log-determinant relaxation of the log partition function (Wainwright and Jordan, 2006), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for the binary case. We test our algorithms on synthetic data, as well as on gene expression and Senate voting records data.

1,189 citations
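
The l1-penalized maximum-likelihood problem described above is available in standard libraries: scikit-learn's GraphicalLasso solves the same formulation, though via the closely related graphical-lasso coordinate descent rather than this paper's two algorithms. A minimal usage sketch on a toy sparse precision matrix of our own:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
true_prec = np.array([[2.0, 0.8, 0.0],
                      [0.8, 2.0, 0.0],
                      [0.0, 0.0, 1.0]])       # sparse precision: node 2 has no edges
samples = rng.multivariate_normal(np.zeros(3), np.linalg.inv(true_prec), size=2000)

model = GraphicalLasso(alpha=0.05).fit(samples)  # alpha is the l1 penalty weight
print(np.round(model.precision_, 2))             # near-zero entries recover the missing edges
```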


Network Information

Related Topics (5)
- Optimization problem: 96.4K papers, 2.1M citations (94% related)
- Robustness (computer science): 94.7K papers, 1.6M citations (89% related)
- Linear system: 59.5K papers, 1.4M citations (88% related)
- Markov chain: 51.9K papers, 1.3M citations (86% related)
- Control theory: 299.6K papers, 3.1M citations (83% related)
Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2023       392
2022       849
2021     1,461
2020     1,673
2019     1,677
2018     1,580