Topic

Convex optimization

About: Convex optimization is a research topic. Over its lifetime, 24,906 publications have been published within this topic, receiving 908,795 citations. The topic is also known as: convex optimisation.


Papers
Journal ArticleDOI
TL;DR: Semi-Stochastic Gradient Descent (S2GD) as mentioned in this paper runs for one or several epochs, in each of which a single full gradient and a random number of stochastic gradients are computed, the number following a geometric law.
Abstract: In this paper we study the problem of minimizing the average of a large number of smooth convex loss functions. We propose a new method, S2GD (Semi-Stochastic Gradient Descent), which runs for one or several epochs, in each of which a single full gradient and a random number of stochastic gradients are computed, the number following a geometric law. For strongly convex objectives, the method converges linearly. The total work needed for the method to output an epsilon-accurate solution in expectation, measured in the number of passes over data, is proportional to the condition number of the problem and inversely proportional to the number of functions forming the average. This is achieved by running the method with the number of stochastic gradient evaluations per epoch proportional to the condition number of the problem. The SVRG method of Johnson and Zhang arises as a special case. To illustrate our theoretical results: for a problem with 10^9 functions and a condition number of 10^3, S2GD needs a workload equivalent to only about 2.1 full gradient evaluations to find a 10^-6 accurate solution.
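For readers who want the shape of the method, here is a minimal NumPy sketch of the epoch structure the abstract describes: one full gradient per epoch, an inner-loop length drawn from a geometric law, and variance-reduced stochastic steps. The function name s2gd, the gradient oracle grad_i, and all parameter defaults are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def s2gd(grad_i, n, x0, epochs=10, m=1000, h=0.1, nu=0.0, seed=0):
    """Sketch of S2GD for minimizing (1/n) * sum_i f_i(x).

    grad_i(i, x): user-supplied gradient of the i-th loss at x.
    m: cap on inner steps per epoch; h: stepsize; nu: lower bound
    on the strong convexity parameter (nu = 0 gives uniform lengths).
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(x0, dtype=float).copy()
    # Geometric law over inner-loop lengths t = 1..m:
    # P(t) is proportional to (1 - nu*h)^(m - t).
    w = (1.0 - nu * h) ** np.arange(m - 1, -1, -1)
    p = w / w.sum()
    for _ in range(epochs):
        # One full gradient per epoch: the "semi-stochastic" part.
        g = sum(grad_i(i, y) for i in range(n)) / n
        t = rng.choice(np.arange(1, m + 1), p=p)
        x = y.copy()
        for _ in range(t):
            i = rng.integers(n)
            # Variance-reduced stochastic step around the snapshot y;
            # this is the SVRG-style update the abstract refers to.
            x = x - h * (grad_i(i, x) - grad_i(i, y) + g)
        y = x
    return y
```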

196 citations

01 Jan 2004
TL;DR: It is demonstrated that the solution of the convex program frequently coincides with the solution of the original approximation problem, and comparable new results for a greedy algorithm, Orthogonal Matching Pursuit, are stated.
Abstract: Subset selection and sparse approximation problems request a good approximation of an input signal using a linear combination of elementary signals, yet they stipulate that the approximation may only involve a few of the elementary signals. This class of problems arises throughout electrical engineering, applied mathematics, and statistics, but little theoretical progress has been made over the last fifty years. Subset selection and sparse approximation both admit natural convex relaxations, but the literature contains few results on the behavior of these relaxations for general input signals. This report demonstrates that the solution of the convex program frequently coincides with the solution of the original approximation problem. The proofs depend essentially on geometric properties of the ensemble of elementary signals. The results are powerful because sparse approximation problems are combinatorial, while convex programs can be solved in polynomial time with standard software. Comparable new results for a greedy algorithm, Orthogonal Matching Pursuit, are also stated. This report should have a major practical impact because the theory applies immediately to many real-world signal processing problems.
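The abstract names Orthogonal Matching Pursuit without describing it, so a standard textbook sketch of that greedy algorithm may help; the function name omp and its arguments are illustrative, and the dictionary columns are assumed to be unit-norm.

```python
import numpy as np

def omp(D, s, k):
    """Sketch of Orthogonal Matching Pursuit.

    D: (m, n) dictionary whose columns are the elementary signals
    (assumed unit-norm); s: input signal; k: target sparsity.
    """
    residual = s.copy()
    idx, c = [], None
    for _ in range(k):
        # Greedy step: pick the atom most correlated with the residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        idx.append(j)
        # Orthogonality step: least-squares re-fit over all chosen atoms.
        c, *_ = np.linalg.lstsq(D[:, idx], s, rcond=None)
        residual = s - D[:, idx] @ c
    return idx, c
```

The convex-relaxation route the report studies instead replaces the combinatorial sparsity constraint with an l1 penalty; the sketch above is the greedy alternative whose guarantees the report compares against.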

196 citations

Journal ArticleDOI
TL;DR: In this article, the authors present an automated implementation based on operator overloading for the global optimization of a wide class of algorithms via convex/affine relaxations, where subgradient propagation relies on the recursive application of a few rules, namely, the calculation of subgradients for addition, multiplication and composition operations.
Abstract: Theory and an implementation for the global optimization of a wide class of algorithms are presented via convex/affine relaxations. The basis for the proposed relaxations is the systematic construction of subgradients for the convex relaxations of factorable functions by McCormick [Math. Prog., 10 (1976), pp. 147-175]. As with the convex relaxation itself, the subgradient propagation relies on the recursive application of a few rules, namely, the calculation of subgradients for addition, multiplication, and composition operations. Subgradients at interior points can be calculated for any factorable function for which a McCormick relaxation exists, provided that subgradients are known for the relaxations of the univariate intrinsic functions. For boundary points, additional assumptions are necessary. An automated implementation based on operator overloading is presented, and the calculation of bounds based on affine relaxation is demonstrated for illustrative examples. Two numerical examples for the global optimization of algorithms are presented. In both examples, a parameter estimation problem with embedded differential equations is considered. The solution of the differential equations is approximated by algorithms with a fixed number of iterations.
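Two of the ingredients above can be made concrete in a few lines: the classic McCormick under- and over-estimators of a bilinear term, and the operator-overloading idea of propagating a relaxation value together with a subgradient. The names mccormick_bilinear and Relax are hypothetical, only the addition rule is shown, and this is a sketch rather than the paper's implementation, which also handles multiplication, composition, and interval bounds.

```python
import numpy as np

def mccormick_bilinear(x, y, xl, xu, yl, yu):
    """McCormick convex under- and concave over-estimator of x*y
    on the box [xl, xu] x [yl, yu] (the 1976 rules cited above)."""
    under = max(yl * x + xl * y - xl * yl,
                yu * x + xu * y - xu * yu)
    over = min(yu * x + xl * y - xl * yu,
               yl * x + xu * y - xu * yl)
    return under, over

class Relax:
    """Toy pair (cv, sub): the value of a convex underestimator and
    one of its subgradients, combined by operator overloading."""
    def __init__(self, cv, sub):
        self.cv, self.sub = cv, np.asarray(sub, dtype=float)

    def __add__(self, other):
        # Addition rule: a subgradient of a sum of convex functions
        # is the sum of subgradients of the summands.
        return Relax(self.cv + other.cv, self.sub + other.sub)
```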

196 citations

Posted Content
TL;DR: This monograph introduces the basic concepts of Online Learning through a modern view of Online Convex Optimization, and presents first-order and second-order algorithms for online learning with convex losses, in Euclidean and non-Euclidean settings.
Abstract: In this monograph, I introduce the basic concepts of Online Learning through a modern view of Online Convex Optimization. Here, online learning refers to the framework of regret minimization under worst-case assumptions. I present first-order and second-order algorithms for online learning with convex losses, in Euclidean and non-Euclidean settings. All the algorithms are clearly presented as instantiations of Online Mirror Descent or Follow-The-Regularized-Leader and their variants. Particular attention is given to the issue of tuning the parameters of the algorithms and learning in unbounded domains, through adaptive and parameter-free online learning algorithms. Non-convex losses are dealt with through convex surrogate losses and through randomization. The bandit setting is also briefly discussed, touching on the problem of adversarial and stochastic multi-armed bandits. These notes do not require prior knowledge of convex analysis, and all the required mathematical tools are rigorously explained. Moreover, all the proofs have been carefully chosen to be as simple and as short as possible.
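As a concrete instance of the framework, here is a minimal sketch of projected Online Gradient Descent, the Euclidean special case of Online Mirror Descent; the function name, the L2-ball domain, and the stepsize schedule radius/sqrt(t) are illustrative assumptions rather than the monograph's notation.

```python
import numpy as np

def online_gradient_descent(grads, x0, radius=1.0):
    """Projected Online Gradient Descent on an L2 ball.

    grads: iterable of per-round gradient callables g_t(x).
    With stepsize ~ 1/sqrt(t), regret grows as O(sqrt(T)) for
    convex Lipschitz losses on a bounded domain.
    """
    x = np.asarray(x0, dtype=float).copy()
    played = []
    for t, grad in enumerate(grads, start=1):
        played.append(x.copy())  # point played in round t
        x = x - (radius / np.sqrt(t)) * grad(x)
        # Projection step: Euclidean projection back onto the ball.
        norm = np.linalg.norm(x)
        if norm > radius:
            x *= radius / norm
    return played
```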

196 citations


Network Information
Related Topics (5)
Optimization problem: 96.4K papers, 2.1M citations, 94% related
Robustness (computer science): 94.7K papers, 1.6M citations, 89% related
Linear system: 59.5K papers, 1.4M citations, 88% related
Markov chain: 51.9K papers, 1.3M citations, 86% related
Control theory: 299.6K papers, 3.1M citations, 83% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    392
2022    849
2021    1,461
2020    1,673
2019    1,677
2018    1,580