Topic

Convex optimization

About: Convex optimization is a research topic. Over its lifetime, 24,906 publications have been published within this topic, receiving 908,795 citations. The topic is also known as: convex optimisation.


Papers
Posted Content
TL;DR: A General Iterative Shrinkage and Thresholding (GIST) algorithm is proposed to solve the non-convex optimization problems associated with a large class of non-convex penalties, together with a detailed convergence analysis of the GIST algorithm.
Abstract: Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over their convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a significant challenge. A commonly used approach is the Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is usually impractical for large-scale problems because its computational cost is a multiple of that of solving a single convex problem. In this paper, we propose a General Iterative Shrinkage and Thresholding (GIST) algorithm to solve the non-convex optimization problem for a large class of non-convex penalties. The GIST algorithm iteratively solves a proximal operator problem, which in turn has a closed-form solution for many commonly used penalties. At each outer iteration of the algorithm, we use a line search initialized by the Barzilai-Borwein (BB) rule, which allows an appropriate step size to be found quickly. The paper also presents a detailed convergence analysis of the GIST algorithm. The efficiency of the proposed algorithm is demonstrated by extensive experiments on large-scale data sets.

314 citations
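The GIST scheme the abstract describes (proximal-gradient steps with a BB-initialized line search) can be sketched compactly. Below is a minimal, illustrative Python sketch for the special case of a least-squares loss with an l0 penalty, whose proximal operator is hard thresholding; the function name, parameter choices, and sufficient-decrease constant are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def gist_l0(A, b, lam=0.1, max_iter=200, tol=1e-8):
    """GIST-style proximal gradient for 0.5*||Ax - b||^2 + lam*||x||_0 (sketch)."""
    n = A.shape[1]
    x = np.zeros(n)
    f = lambda v: 0.5 * np.sum((A @ v - b) ** 2)
    grad = lambda v: A.T @ (A @ v - b)
    fx, g, t = f(x), grad(x), 1.0
    for _ in range(max_iter):
        # backtracking line search on the step-size parameter t
        while True:
            z = x - g / t
            # hard thresholding: closed-form prox of the l0 penalty
            x_new = np.where(np.abs(z) > np.sqrt(2.0 * lam / t), z, 0.0)
            f_new = f(x_new)
            # sufficient-decrease test on the full objective
            decrease = 0.5 * 1e-4 * t * np.sum((x_new - x) ** 2)
            if f_new + lam * np.count_nonzero(x_new) <= \
               fx + lam * np.count_nonzero(x) - decrease:
                break
            t *= 2.0
        s = x_new - x
        x = x_new
        if np.linalg.norm(s) < tol:
            break
        g_new = grad(x)
        y = g_new - g
        fx, g = f_new, g_new
        # Barzilai-Borwein initialization for the next step size, clamped
        t = min(max(s @ y / max(s @ s, 1e-12), 1e-8), 1e8)
    return x
```

For other penalties with closed-form proximal operators (e.g., capped-l1), only the thresholding line changes; the outer loop and BB initialization stay the same.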

Proceedings ArticleDOI
01 Jun 2016
TL;DR: This paper shows that a subspace clustering method based on orthogonal matching pursuit is both computationally efficient and guaranteed to give a subspace-preserving affinity under broad conditions, and that it is the first to handle 100,000 data points.
Abstract: Subspace clustering methods based on l1, l2 or nuclear norm regularization have become very popular due to their simplicity, theoretical guarantees and empirical success. However, the choice of the regularizer can greatly impact both theory and practice. For instance, l1 regularization is guaranteed to give a subspace-preserving affinity (i.e., there are no connections between points from different subspaces) under broad conditions (e.g., arbitrary subspaces and corrupted data). However, it requires solving a large-scale convex optimization problem. On the other hand, l2 and nuclear norm regularization provide efficient closed-form solutions, but require very strong assumptions to guarantee a subspace-preserving affinity, e.g., independent subspaces and uncorrupted data. In this paper we study a subspace clustering method based on orthogonal matching pursuit. We show that the method is both computationally efficient and guaranteed to give a subspace-preserving affinity under broad conditions. Experiments on synthetic data verify our theoretical analysis, and applications in handwritten digit and face clustering show that our approach achieves the best trade-off between accuracy and efficiency. Moreover, our approach is the first to handle 100,000 data points.

313 citations
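As a companion to the abstract, here is a minimal sketch of the self-expression step that subspace clustering by orthogonal matching pursuit performs: each point is approximated by a sparse combination of the other points, and the coefficients define the affinity passed to spectral clustering. The sparsity level k_max, the tolerance, and the assumption of unit-normalized columns are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def ssc_omp_affinity(X, k_max=5, tol=1e-6):
    """Affinity from OMP self-expression; assumes unit-norm columns of X (sketch)."""
    n = X.shape[1]
    C = np.zeros((n, n))
    for j in range(n):
        x = X[:, j]
        residual = x.copy()
        support = []
        for _ in range(k_max):
            corr = np.abs(X.T @ residual)
            corr[j] = 0.0            # a point may not represent itself
            corr[support] = 0.0      # do not reselect chosen atoms
            support.append(int(np.argmax(corr)))
            # least-squares refit on the enlarged support
            coef, *_ = np.linalg.lstsq(X[:, support], x, rcond=None)
            residual = x - X[:, support] @ coef
            if np.linalg.norm(residual) < tol:
                break
        C[support, j] = coef
    return np.abs(C) + np.abs(C).T   # symmetric affinity for spectral clustering
```

Because each column is solved independently with at most k_max greedy steps, the loop parallelizes trivially, which is consistent with the scalability the abstract claims.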

Journal ArticleDOI
TL;DR: A computer program, PENNON, for the solution of problems of convex Nonlinear and Semidefinite Programming (NLP-SDP); its algorithm is a generalized version of the Augmented Lagrangian method originally introduced by Ben-Tal and Zibulevsky for convex NLP problems, and a detailed convergence analysis is presented.
Abstract: We introduce a computer program PENNON for the solution of problems of convex Nonlinear and Semidefinite Programming (NLP-SDP). The algorithm used in PENNON is a generalized version of the Augmented Lagrangian method, originally introduced by Ben-Tal and Zibulevsky for convex NLP problems. We present a generalization of this algorithm to convex NLP-SDP problems, as implemented in PENNON, together with details of its implementation. The code can also solve second-order conic programming (SOCP) problems, as well as problems with a mixture of SDP, SOCP and NLP constraints. Results of extensive numerical tests and comparisons with other optimization codes are presented. The test examples show that PENNON is particularly suitable for large sparse problems.

312 citations
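For orientation, the classical Augmented Lagrangian scheme that PENNON generalizes can be sketched for the textbook special case of a smooth equality-constrained problem; PENNON itself uses a generalized penalty function to handle NLP, SDP, and SOCP constraints, which this sketch does not reproduce. The function names and parameters below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x0, rho=10.0, outer_iters=20, tol=1e-8):
    """Classical augmented Lagrangian for min f(x) s.t. h(x) = 0 (sketch)."""
    x, mu = np.asarray(x0, dtype=float), 0.0
    for _ in range(outer_iters):
        # inner unconstrained minimization of the augmented Lagrangian
        L = lambda v: f(v) + mu * h(v) + 0.5 * rho * h(v) ** 2
        x = minimize(L, x, method="BFGS").x
        if abs(h(x)) < tol:
            break
        mu += rho * h(x)   # first-order multiplier update
    return x, mu

# Example: minimize x^2 + y^2 subject to x + y = 1 (solution (0.5, 0.5)).
f = lambda v: v[0] ** 2 + v[1] ** 2
h = lambda v: v[0] + v[1] - 1.0
x_opt, mu_opt = augmented_lagrangian(f, h, np.zeros(2))
```

The generalization in PENNON replaces the quadratic penalty term with a smooth penalty/barrier function applied to matrix inequality constraints, so the same outer multiplier-update structure carries over to SDP.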

Journal ArticleDOI
TL;DR: In this paper, the phase information of an object is recovered from intensity-only measurements, a problem which arises naturally in X-ray crystallography and related disciplines, where one can modulate the signal of interest and then collect the intensity of its diffraction pattern.

310 citations
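For intuition about the underlying problem (not the paper's convex formulation), a classical alternating-projection scheme in the Gerchberg-Saxton style recovers a signal from Fourier magnitudes by repeatedly imposing the measured magnitudes in the frequency domain and the known structure in the signal domain. Everything below is an illustrative sketch under those assumptions.

```python
import numpy as np

def alternating_phase_retrieval(mag, iters=500, seed=0):
    """Recover a real signal from its Fourier magnitudes |F x| (sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(mag.shape)         # random real initialization
    for _ in range(iters):
        X = np.fft.fft(x)
        X = mag * np.exp(1j * np.angle(X))     # impose measured magnitudes
        x = np.real(np.fft.ifft(X))            # impose real-valuedness
    return x

# Example: recovery is only up to trivial ambiguities (global sign, shift).
x_true = np.sin(np.linspace(0, 4 * np.pi, 64))
mag = np.abs(np.fft.fft(x_true))
x_rec = alternating_phase_retrieval(mag)
```

Such nonconvex iterations can stagnate, which is precisely the motivation for convex (lifting-based) reformulations of phase retrieval like the one this paper pursues.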

Book
31 May 2000
TL;DR: In this book, the authors develop the theory of totally convex functions and apply it to the computation of fixed points and to infinite dimensional optimization.
Abstract: Introduction. 1. Totally Convex Functions. 2. Computation of Fixed Points. 3. Infinite Dimensional Optimization. Bibliography. Index.

310 citations
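For context, the book's central notion can be stated with a standard definition from the general literature (not quoted from the book itself): a convex function f on a Banach space X is totally convex at x when its modulus of total convexity is positive for every positive t.

```latex
% Standard definition (from the general literature, not the book's text).
% D_f(y, x) = f(y) - f(x) - f'(x; y - x) is the Bregman distance built
% from the directional derivative f'(x; .).
\[
  \nu_f(x, t) \;=\; \inf\bigl\{\, D_f(y, x) \;:\; y \in X,\ \lVert y - x \rVert = t \,\bigr\}
\]
% f is totally convex at x if \nu_f(x, t) > 0 for all t > 0; this uniform
% strictness of convexity is what drives fixed-point and infinite
% dimensional optimization results of the kind the book treats.
```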


Network Information
Related Topics (5)
Optimization problem: 96.4K papers, 2.1M citations, 94% related
Robustness (computer science): 94.7K papers, 1.6M citations, 89% related
Linear system: 59.5K papers, 1.4M citations, 88% related
Markov chain: 51.9K papers, 1.3M citations, 86% related
Control theory: 299.6K papers, 3.1M citations, 83% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    392
2022    849
2021    1,461
2020    1,673
2019    1,677
2018    1,580