Topic

Convex optimization

About: Convex optimization is a research topic. Over the lifetime, 24,906 publications have been published within this topic receiving 908,795 citations. The topic is also known as: convex optimisation.


Papers
Journal ArticleDOI
TL;DR: A novel algorithm for the analysis of electrodermal activity (EDA) based on convex optimization is reported; evaluation shows good performance and suggests promising future applicability, e.g., in the field of affective computing.
Abstract: Goal: This paper reports on a novel algorithm for the analysis of electrodermal activity (EDA) using methods of convex optimization. EDA can be considered as one of the most common observation channels of sympathetic nervous system activity, and manifests itself as a change in electrical properties of the skin, such as skin conductance (SC). Methods: The proposed model describes SC as the sum of three terms: the phasic component, the tonic component, and an additive white Gaussian noise term incorporating model prediction errors as well as measurement errors and artifacts. This model is physiologically inspired and fully explains EDA through a rigorous methodology based on Bayesian statistics, mathematical convex optimization, and sparsity. Results: The algorithm was evaluated in three different experimental sessions to test its robustness to noise, its ability to separate and identify stimulus inputs, and its capability of properly describing the activity of the autonomic nervous system in response to strong affective stimulation. Significance: Results are very encouraging, showing good performance of the proposed method and suggesting promising future applicability, e.g., in the field of affective computing.

319 citations
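The model described above is, at heart, a sparse-recovery convex program: skin conductance is split into a smooth tonic baseline plus a sparse, nonnegative phasic driver passed through a response kernel. Below is a minimal sketch of that idea in cvxpy; the exponential kernel, polynomial tonic basis, and regularization weight are illustrative assumptions, not the paper's exact formulation.

```python
# Simplified EDA decomposition: y ~ tonic + phasic + noise, where the phasic
# term K @ r comes from a sparse nonnegative driver r (assumed model).
import numpy as np
import cvxpy as cp
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
fs, n = 4.0, 240                                  # 4 Hz sampling, 60 s (assumed)
t = np.arange(n) / fs

# Synthetic skin conductance: slow drift + two phasic responses + noise.
kernel = np.exp(-np.arange(int(10 * fs)) / (2.0 * fs))   # exponential SCR shape
driver = np.zeros(n); driver[40], driver[150] = 1.0, 0.6
y = (2.0 + 0.01 * t) + np.convolve(driver, kernel)[:n] + 0.02 * rng.standard_normal(n)

# Lower-triangular convolution matrix K and low-order polynomial tonic basis B.
col = np.zeros(n); col[:len(kernel)] = kernel
K = toeplitz(col, np.zeros(n))
B = np.vander(t, 3, increasing=True)              # columns: 1, t, t^2

# Convex program: least-squares data fit + l1 sparsity on the nonneg. driver.
r = cp.Variable(n, nonneg=True)                   # sparse phasic driver
c = cp.Variable(3)                                # tonic coefficients
lam = 0.1                                         # sparsity weight (assumed)
cost = 0.5 * cp.sum_squares(y - K @ r - B @ c) + lam * cp.sum(r)
cp.Problem(cp.Minimize(cost)).solve()

phasic, tonic = K @ r.value, B @ c.value          # recovered components
```

Because the program is convex, any compliant solver recovers the same decomposition; since r is nonnegative, the l1 penalty reduces to a plain sum, which is what enforces the sparsity the abstract refers to.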

Journal ArticleDOI
TL;DR: Qualitative and quantitative results show that the spatio-temporal approach leads to a rotationally invariant and time-symmetric convex optimization problem with a unique minimum that can be found stably by standard algorithms such as gradient descent.
Abstract: Nonquadratic variational regularization is a well-known and powerful approach for the discontinuity-preserving computation of optic flow. In the present paper, we consider an extension of flow-driven spatial smoothness terms to spatio-temporal regularizers. Our method leads to a rotationally invariant and time symmetric convex optimization problem. It has a unique minimum that can be found in a stable way by standard algorithms such as gradient descent. Since the convexity guarantees global convergence, the result does not depend on the flow initialization. Two iterative algorithms are presented that are not difficult to implement. Qualitative and quantitative results for synthetic and real-world scenes show that our spatio-temporal approach (i) improves optic flow fields significantly, (ii) smoothes out background noise efficiently, and (iii) preserves true motion boundaries. The computational costs are only 50% higher than for a pure spatial approach applied to all subsequent image pairs of the sequence.

318 citations
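The practical payoff of convexity here is that plain gradient descent converges to the global minimum from any initialization. The following 1-D analogue minimizes a convex nonquadratic (Charbonnier-type) regularized energy by gradient descent; it denoises a signal rather than computing actual optic flow, and the weight, smoothing parameter, and step size are assumptions.

```python
# Gradient descent on a convex, nonquadratic regularized energy (1-D analogue):
#   E(u) = 0.5 * ||u - f||^2 + alpha * sum_i sqrt((u_{i+1} - u_i)^2 + eps^2)
import numpy as np

rng = np.random.default_rng(1)
n = 200
f = np.where(np.arange(n) < n // 2, 0.0, 1.0) + 0.1 * rng.standard_normal(n)

alpha, eps = 1.0, 0.1
tau = 0.02          # below 2/L with L <= 1 + 4 * alpha / eps, so descent converges

def grad_E(u):
    d = np.diff(u)                            # forward differences D u
    psi = d / np.sqrt(d * d + eps * eps)      # derivative of the Charbonnier penalty
    div = np.concatenate(([0.0], psi)) - np.concatenate((psi, [0.0]))  # D^T psi
    return (u - f) + alpha * div

u = np.zeros(n)     # any starting point works: the energy is strictly convex
for _ in range(5000):
    u -= tau * grad_E(u)
```

Starting from zeros, from f, or from random values yields the same minimizer up to numerical tolerance, which is exactly the initialization independence the abstract highlights.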

Journal ArticleDOI
TL;DR: Generalizes the disjunctive approach of Balas, Ceria, and Cornuéjols to develop a branch-and-cut method for solving 0-1 convex programming problems, and shows that cuts can be generated by solving a single convex program.
Abstract: We generalize the disjunctive approach of Balas, Ceria, and Cornuéjols [2] and develop a branch-and-cut method for solving 0-1 convex programming problems. We show that cuts can be generated by solving a single convex program. We show how to construct regions similar to those of Sherali and Adams [20] and Lovász and Schrijver [12] for the convex case. Finally, we give some preliminary computational results for our method.

317 citations
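For intuition, here is a sketch of the linear special case that the paper generalizes: given a fractional solution x* of the 0-1 LP relaxation, a single cut-generating LP over the disjunction (x_j <= 0) or (x_j >= 1) produces a valid inequality cutting off x*. The toy data and the simplex normalization below are assumptions for illustration; the paper's contribution is extending this machinery to convex 0-1 programs.

```python
# Lift-and-project cut for a toy 0-1 LP (linear special case, assumed data).
import numpy as np
import cvxpy as cp

# LP relaxation: max c^T x  s.t.  A x <= b,  0 <= x <= 1.
A = np.array([[2.0, 2.0]]); b = np.array([3.0]); c = np.array([1.0, 1.0])
n = 2

x = cp.Variable(n)
cp.Problem(cp.Maximize(c @ x), [A @ x <= b, x >= 0, x <= 1]).solve()
xstar = x.value                                       # fractional optimum
j = int(np.argmax(np.minimum(xstar, 1 - xstar)))      # most fractional variable

# Rewrite all relaxation constraints as At x >= bt.
At = np.vstack([-A, np.eye(n), -np.eye(n)])
bt = np.concatenate([-b, np.zeros(n), -np.ones(n)])
m = At.shape[0]

# Cut-generating LP: find alpha^T x >= beta valid on both branches of the
# disjunction (x_j <= 0) v (x_j >= 1), maximizing the violation at x*.
alpha, beta = cp.Variable(n), cp.Variable()
u, u0 = cp.Variable(m, nonneg=True), cp.Variable(nonneg=True)
v, v0 = cp.Variable(m, nonneg=True), cp.Variable(nonneg=True)
e_j = np.eye(n)[j]
cons = [alpha == At.T @ u - u0 * e_j, beta <= bt @ u,        # valid if x_j <= 0
        alpha == At.T @ v + v0 * e_j, beta <= bt @ v + v0,   # valid if x_j >= 1
        cp.sum(u) + u0 + cp.sum(v) + v0 == 1]                # normalization
cglp = cp.Problem(cp.Maximize(beta - alpha @ xstar), cons)
cglp.solve()
print("cut:", alpha.value, "x >=", beta.value, "| violation:", cglp.value)
```

A positive optimal value means the cut alpha^T x >= beta separates x* from the convex hull of the two branches; in a branch-and-cut loop, the cut is added to the relaxation and the process repeats.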

Journal ArticleDOI
TL;DR: The ROA can be computed by solving an infinite-dimensional convex linear programming (LP) problem over the space of measures, which can in turn be solved approximately via a classical converging hierarchy of convex finite-dimensional linear matrix inequalities (LMIs).
Abstract: We address the long-standing problem of computing the region of attraction (ROA) of a target set (e.g., a neighborhood of an equilibrium point) of a controlled nonlinear system with polynomial dynamics and semialgebraic state and input constraints. We show that the ROA can be computed by solving an infinite-dimensional convex linear programming (LP) problem over the space of measures. In turn, this problem can be solved approximately via a classical converging hierarchy of convex finite-dimensional linear matrix inequalities (LMIs). Our approach is genuinely primal in the sense that convexity of the problem of computing the ROA is an outcome of optimizing directly over system trajectories. The dual infinite-dimensional LP on nonnegative continuous functions (approximated by polynomial sum-of-squares) allows us to generate a hierarchy of semialgebraic outer approximations of the ROA at the price of solving a sequence of LMI problems with asymptotically vanishing conservatism. This sharply contrasts with the existing literature which follows an exclusively dual Lyapunov approach yielding either nonconvex bilinear matrix inequalities or conservative LMI conditions. The approach is simple and readily applicable as the outer approximations are the outcome of a single semidefinite program with no additional data required besides the problem description. The approach is demonstrated on several numerical examples.

316 citations
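To make the structure concrete, the dual LP on continuous functions can be stated schematically as below (notation simplified from the paper and reproduced from memory, so take it as a sketch: f is the polynomial vector field, X and U the state and input constraint sets, X_T the target set, λ the Lebesgue measure, and T the horizon):

```latex
% Schematic dual LP; each level of the LMI hierarchy restricts v and w to
% polynomials of bounded degree and certifies the inequalities by sums of squares.
\begin{aligned}
\inf_{v,\,w}\quad & \int_X w \,\mathrm{d}\lambda\\
\text{s.t.}\quad  & \partial_t v + \nabla_x v \cdot f \le 0
                    && \text{on } [0,T]\times X\times U,\\
                  & v(T,\cdot) \ge 0 && \text{on } X_T,\\
                  & w \ge v(0,\cdot) + 1,\quad w \ge 0 && \text{on } X.
\end{aligned}
```

Any feasible pair gives the outer approximation {x : v(0, x) >= 0} of the ROA, and the hierarchy tightens it degree by degree, which is the "asymptotically vanishing conservatism" mentioned in the abstract.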

Journal ArticleDOI
TL;DR: In this article, the authors present the principles of primal-dual approaches while providing an overview of the numerical methods that have been proposed in different contexts, including convex analysis, discrete optimization, parallel processing, and nonsmooth optimization with an emphasis on sparsity issues.
Abstract: Optimization methods are at the core of many problems in signal/image processing, computer vision, and machine learning. For a long time, it has been recognized that looking at the dual of an optimization problem may drastically simplify its solution. However, deriving efficient strategies that jointly bring into play the primal and dual problems is a more recent idea that has generated many important new contributions in recent years. These novel developments are grounded in the recent advances in convex analysis, discrete optimization, parallel processing, and nonsmooth optimization with an emphasis on sparsity issues. In this article, we aim to present the principles of primal-dual approaches while providing an overview of the numerical methods that have been proposed in different contexts. Last but not least, primal-dual methods lead to algorithms that are easily parallelizable. Today, such parallel algorithms are becoming increasingly important for efficiently handling high-dimensional problems.

316 citations
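As one concrete member of this family, consider the Chambolle-Pock primal-dual iteration applied to 1-D total-variation denoising, min_x 0.5||x - b||^2 + lam * ||D x||_1. This is a standard textbook instance, not necessarily the article's own running example, and the data and step sizes below are assumptions satisfying the usual tau * sigma * ||D||^2 < 1 condition.

```python
# Chambolle-Pock primal-dual iteration for 1-D TV denoising (illustrative data).
import numpy as np

rng = np.random.default_rng(2)
n = 200
b = np.where(np.arange(n) < n // 2, 0.0, 1.0) + 0.1 * rng.standard_normal(n)
lam = 0.5

D = lambda x: np.diff(x)                                                 # K
Dt = lambda y: np.concatenate(([0.0], y)) - np.concatenate((y, [0.0]))   # K^T
tau, sigma = 0.25, 0.9          # tau * sigma * 4 = 0.9 < 1, since ||D||^2 <= 4

x = np.zeros(n); xbar = x.copy(); y = np.zeros(n - 1)
for _ in range(500):
    # Dual step: prox of (lam * ||.||_1)^* projects onto the box [-lam, lam].
    y = np.clip(y + sigma * D(xbar), -lam, lam)
    # Primal step: prox of tau * 0.5 * ||. - b||^2 in closed form.
    x_new = (x - tau * Dt(y) + tau * b) / (1.0 + tau)
    # Over-relaxation / extrapolation step.
    xbar = 2 * x_new - x
    x = x_new
```

Both proximal steps act coordinate-wise and are therefore trivially parallelizable, which is the practical point the article closes on.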


Network Information
Related Topics (5)
Optimization problem: 96.4K papers, 2.1M citations, 94% related
Robustness (computer science): 94.7K papers, 1.6M citations, 89% related
Linear system: 59.5K papers, 1.4M citations, 88% related
Markov chain: 51.9K papers, 1.3M citations, 86% related
Control theory: 299.6K papers, 3.1M citations, 83% related
Performance
Metrics
No. of papers in the topic in previous years:
Year    Papers
2023    392
2022    849
2021    1,461
2020    1,673
2019    1,677
2018    1,580