Topic

Convex optimization

About: Convex optimization is a research topic. Over its lifetime, 24,906 publications have been published within this topic, receiving 908,795 citations. The topic is also known as: convex optimisation.


Papers
Journal ArticleDOI
TL;DR: Two new proximal point algorithms for minimizing a proper, lower-semicontinuous convex function f are introduced; they converge even if f has no minimizers or is unbounded from below.
Abstract: This paper introduces two new proximal point algorithms for minimizing a proper, lower-semicontinuous convex function $f: \mathbf{R}^n \to \mathbf{R} \cup \{ \infty \}$. Under this minimal assumption on f, ...
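
The core iteration behind proximal point methods is $x_{k+1} = \operatorname{argmin}_y \, f(y) + \frac{1}{2\lambda}\|y - x_k\|^2$. Below is a minimal Python sketch of this classical iteration, not the paper's specific variants; the prox_f callback and the soft-thresholding example are illustrative assumptions.

```python
import numpy as np

def proximal_point(prox_f, x0, lam=1.0, tol=1e-8, max_iter=500):
    """Classical proximal point iteration x_{k+1} = prox_{lam*f}(x_k).

    prox_f(x, lam) must return argmin_y f(y) + ||y - x||^2 / (2*lam).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_next = prox_f(x, lam)
        if np.linalg.norm(x_next - x) < tol:
            break
        x = x_next
    return x

# Illustrative example: f(x) = ||x||_1, whose prox is soft-thresholding.
soft_threshold = lambda x, lam: np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
print(proximal_point(soft_threshold, x0=[3.0, -0.2, 1.5], lam=0.5))
```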

250 citations

Journal ArticleDOI
TL;DR: In this paper, the complexity of stochastic convex optimization is studied in an oracle model of computation. A new notion of discrepancy between functions is introduced and used to reduce problems of stochastic convex optimization to statistical parameter estimation, which can be lower bounded using information-theoretic methods.
Abstract: Relative to the large literature on upper bounds on complexity of convex optimization, lesser attention has been paid to the fundamental hardness of these problems. Given the extensive use of convex optimization in machine learning and statistics, gaining an understanding of these complexity-theoretic issues is important. In this paper, we study the complexity of stochastic convex optimization in an oracle model of computation. We introduce a new notion of discrepancy between functions, and use it to reduce problems of stochastic convex optimization to statistical parameter estimation, which can be lower bounded using information-theoretic methods. Using this approach, we improve upon known results and obtain tight minimax complexity estimates for various function classes.
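
To make the oracle model concrete, here is a hedged toy sketch: a stochastic first-order oracle that returns noisy gradients, queried by averaged SGD with a $1/\sqrt{t}$ step size, whose $O(1/\sqrt{T})$ rate is the kind of quantity such minimax lower bounds match. The quadratic objective and noise level are illustrative assumptions, not from the paper, which concerns lower bounds rather than algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_gradient_oracle(x, x_star, sigma=0.1):
    """Hypothetical stochastic first-order oracle for the toy objective
    f(x) = 0.5 * ||x - x_star||^2: true gradient plus Gaussian noise."""
    return (x - x_star) + sigma * rng.standard_normal(x.shape)

def averaged_sgd(oracle, x0, steps=2000):
    """SGD with a 1/sqrt(t) step size and iterate averaging, a standard
    scheme analyzed in the stochastic oracle model."""
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    for t in range(1, steps + 1):
        x = x - oracle(x) / np.sqrt(t)
        avg += (x - avg) / t  # running average of iterates
    return avg

x_star = np.array([1.0, -2.0])
print(averaged_sgd(lambda x: noisy_gradient_oracle(x, x_star), np.zeros(2)))
```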

249 citations

Journal ArticleDOI
TL;DR: Investigates robust filtering design problems in $H_2$ and $H_\infty$ spaces for continuous-time systems subject to parameter uncertainty belonging to a convex bounded polyhedral domain, and shows that both designs can be converted into convex programming problems written in terms of linear matrix inequalities.
Abstract: Investigates robust filtering design problems in $H_2$ and $H_\infty$ spaces for continuous-time systems subject to parameter uncertainty belonging to a convex bounded polyhedral domain. It is shown that, by a suitable change of variables, both designs can be converted into convex programming problems written in terms of linear matrix inequalities. The results generalize the ones available in the literature to date in several directions. First, all system matrices can be corrupted by parameter uncertainty and the admissible uncertainty may be structured. Then, assuming the order of the uncertain system is known, the optimal guaranteed-performance $H_2$ and $H_\infty$ filters are proven to be of the same order as the system. A numerical example illustrates the theoretical results.
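
As an illustration of casting a control-theoretic condition as a convex program with LMI constraints, the sketch below checks stability of a hypothetical system matrix via the Lyapunov LMI $A^\top P + P A \prec 0$, $P \succ 0$ in CVXPY. The paper's filter-synthesis LMIs are more elaborate, but the convex-programming mechanics are the same; the matrix A here is an assumption for the example.

```python
import numpy as np
import cvxpy as cp

# Hypothetical stable system matrix (illustrative assumption).
A = np.array([[-1.0,  2.0],
              [ 0.0, -3.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [
    P >> eps * np.eye(n),                 # P positive definite
    A.T @ P + P @ A << -eps * np.eye(n),  # Lyapunov LMI: A'P + PA < 0
]
prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve()
print(prob.status)
print(P.value)
```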

249 citations

Journal ArticleDOI
TL;DR: The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming; an overall worst-case operation count of $O(m^{5.5}L^{1.5})$ is proved.
Abstract: We describe a potential reduction method for convex optimization problems involving matrix inequalities. The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming. A worst-case analysis shows that the number of iterations grows as the square root of the problem size, but in practice it appears to grow more slowly. As in other interior-point methods, the overall computational effort is therefore dominated by the least-squares system that must be solved in each iteration. A type of conjugate-gradient algorithm can be used for this purpose, which results in important savings for two reasons. First, it allows us to take advantage of the special structure the problems often have (e.g., Lyapunov or algebraic Riccati inequalities). Second, we show that the polynomial bound on the number of iterations remains valid even if the conjugate-gradient algorithm is not run until completion, which in practice can greatly reduce the computational effort per iteration. We describe in detail how the algorithm works for optimization problems with $L$ Lyapunov inequalities, each of size $m$. We prove an overall worst-case operation count of $O(m^{5.5}L^{1.5})$. The average-case complexity appears to be closer to $O(m^4 L^{1.5})$. This estimate is justified by extensive numerical experimentation, and is consistent with other researchers' experience with the practical performance of interior-point algorithms for linear programming. This result means that the computational cost of extending current control theory based on the solution of Lyapunov or Riccati equations to a theory that is based on the solution of (multiple, coupled) Lyapunov or Riccati inequalities is modest.
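
Since the per-iteration cost is dominated by a least-squares solve handled by conjugate gradients, a plain CG sketch is included below. It is the textbook algorithm, not the paper's specialized variant; the SPD test matrix is an illustrative assumption. Capping max_iter mirrors the paper's point that the inner solve need not run to completion.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Textbook CG for A x = b with A symmetric positive definite."""
    n = b.shape[0]
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x        # initial residual
    p = r.copy()         # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Illustrative SPD test system (an assumption, not from the paper).
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
print(np.linalg.norm(A @ conjugate_gradient(A, b) - b))  # residual norm
```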

249 citations

Journal ArticleDOI
TL;DR: A classification of convex NP-optimization problems is introduced and applied to study the combinatorial structure of several optimization problems associated with well-known NP-complete sets; it is shown that structurally isomorphic problems have similar approximability properties.

249 citations


Network Information
Related Topics (5)
Optimization problem: 96.4K papers, 2.1M citations (94% related)
Robustness (computer science): 94.7K papers, 1.6M citations (89% related)
Linear system: 59.5K papers, 1.4M citations (88% related)
Markov chain: 51.9K papers, 1.3M citations (86% related)
Control theory: 299.6K papers, 3.1M citations (83% related)
Performance Metrics
No. of papers in the topic in previous years:

Year: Papers
2023: 392
2022: 849
2021: 1,461
2020: 1,673
2019: 1,677
2018: 1,580