
Convex optimization

About: Convex optimization is a research topic. Over its lifetime, 24,906 publications have been published within this topic, receiving 908,795 citations. The topic is also known as: convex optimisation.


Papers
Journal ArticleDOI
TL;DR: It is shown that by invoking the redundancy properties induced by the descriptor formulation, combined with some convexifying techniques, the existence of the desired reliable controller can be explicitly determined by the solution of a convex optimization problem.
Abstract: This article studies the robust and reliable $\mathscr {H}_{\infty }$ static output feedback (SOF) control for nonlinear systems with actuator faults in a descriptor system framework. The nonlinear plant is characterized by a discrete-time Takagi-Sugeno (T-S) fuzzy affine model with parameter uncertainties, and a Markov chain is utilized to describe the actuator-fault behaviors. Specifically, by adopting a state-output augmentation approach, the original system is first reformulated into a descriptor fuzzy affine system. Based on a novel piecewise Markovian Lyapunov function (LF), the $\mathscr {H}_{\infty }$ performance analysis condition for the underlying system is then presented, and the robust and reliable SOF controller synthesis is carried out. It is shown that, by invoking the redundancy properties induced by the descriptor formulation combined with some convexifying techniques, the existence of the desired reliable controller can be explicitly determined by solving a convex optimization problem. Finally, simulation studies are provided to confirm the effectiveness of the developed method.

316 citations
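The reduction of controller existence to a convex problem follows the standard Lyapunov/LMI route. As a hedged illustration of why such analysis conditions are convex in the Lyapunov matrix (an assumed toy system, not the paper's actual synthesis conditions), the sketch below constructs a discrete-time Lyapunov certificate P ≻ 0 satisfying AᵀPA − P = −Q:

```python
import numpy as np

# Assumed example dynamics x[k+1] = A x[k]; A is Schur stable (eigenvalues 0.6, 0.7).
A = np.array([[0.5, 0.2],
              [-0.1, 0.8]])
Q = np.eye(2)  # any Q > 0 works

# P = sum_k (A^T)^k Q A^k solves the discrete Lyapunov equation A^T P A - P = -Q.
# Feasibility of this linear matrix inequality in the variable P is what makes
# the stability/performance analysis condition a convex problem.
P = np.zeros((2, 2))
M = np.eye(2)
for _ in range(200):          # the series converges geometrically for stable A
    P += M.T @ Q @ M
    M = A @ M

print(np.linalg.eigvalsh(P))   # all positive: P > 0 certifies stability
print(A.T @ P @ A - P + Q)     # ~ 0: the Lyapunov equation holds
```

Controller synthesis (as in the paper) additionally requires convexifying changes of variables, but the feasibility problem solved at the end has the same LMI character.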

Journal ArticleDOI
TL;DR: The relation between RCPVs and chance-constrained problems (CCP) is explored, showing that the optimal objective of an RCPV with the generic constraint removal rule provides, with arbitrarily high probability, an upper bound on the optimal objective of a corresponding CCP.
Abstract: Random convex programs (RCPs) are convex optimization problems subject to a finite number $N$ of random constraints. The optimal objective value $J^*$ of an RCP is thus a random variable. We study the probability with which $J^*$ is no longer optimal if a further random constraint is added to the problem (violation probability, $V^*$). It turns out that this probability rapidly concentrates near zero as $N$ increases. We first develop a theory for RCPs, leading to explicit bounds on the upper tail probability of $V^*$. Then we extend the setup to the case of RCPs with $r$ a posteriori violated constraints (RCPVs): a paradigm that permits us to improve the optimal objective value while maintaining the violation probability under control. Explicit and nonasymptotic bounds are derived also in this case: the upper tail probability of $V^*$ is upper bounded by a multiple of a beta distribution, irrespective of the distribution on the random constraints. All results are derived under no feasibility assumptions on the problem. Further, the relation between RCPVs and chance-constrained problems (CCP) is explored, showing that the optimal objective $J^*$ of an RCPV with the generic constraint removal rule provides, with arbitrarily high probability, an upper bound on the optimal objective of a corresponding CCP. Moreover, whenever an optimal constraint removal rule is used in the RCPVs, then appropriate choices of $N$ and $r$ exist such that $J^*$ approximates arbitrarily well the objective of the CCP.

315 citations
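The beta-type tail bound described above can be evaluated numerically. The sketch below computes the classical binomial-tail bound P{V* > ε} ≤ Σ_{i<d} C(N,i) εⁱ(1−ε)^{N−i} from the scenario-approach literature, shown for illustration (the paper's RCPV bounds with r removed constraints are more general):

```python
from math import comb

def violation_tail_bound(N: int, d: int, eps: float) -> float:
    """Upper bound on P{V* > eps} for a random convex program with
    N sampled constraints and d decision variables: the tail of a
    binomial (equivalently, of a beta distribution), independent of
    the distribution of the random constraints."""
    return sum(comb(N, i) * eps**i * (1 - eps) ** (N - i) for i in range(d))

# The violation probability concentrates near zero rapidly as N grows:
for N in (50, 100, 500, 1000):
    print(N, violation_tail_bound(N, d=5, eps=0.1))
```

Inverting the bound for a target confidence gives the sample size N needed so that the RCP solution is, with high probability, feasible for the corresponding chance-constrained problem.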

Journal ArticleDOI
TL;DR: A primal-dual gradient method is derived for a special class of structured nonsmooth optimization problems, which ensures a rate of convergence of order O(1/k), where k is the iteration count.
Abstract: In this paper we introduce a new primal-dual technique for convergence analysis of gradient schemes for nonsmooth convex optimization. As an example of its application, we derive a primal-dual gradient method for a special class of structured nonsmooth optimization problems, which ensures a rate of convergence of order O(1/k), where k is the iteration count. Another example is a gradient scheme, which minimizes a nonsmooth strongly convex function with known structure with rate of convergence O(1/k2). In both cases the efficiency of the methods is higher than the corresponding black-box lower complexity bounds by an order of magnitude.

315 citations
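A key ingredient behind rates like O(1/k) is exploiting the known structure of the nonsmooth function instead of treating it as a black box. A minimal sketch of the underlying smoothing idea (an assumed 1-D example, not the paper's algorithm): replace f(x) = |x| = max(x, −x) by the smooth surrogate f_μ(x) = μ·log(e^{x/μ} + e^{−x/μ}) and run plain gradient descent.

```python
from math import tanh

MU = 0.01  # smoothing parameter: f_mu approximates f within MU*log(2)

def f(x):
    """Nonsmooth model function f(x) = |x| = max(x, -x)."""
    return abs(x)

def grad_f_mu(x):
    """Gradient of the smoothed surrogate f_mu, which is tanh(x/MU).
    Its Lipschitz constant is 1/MU, so step size MU is safe."""
    return tanh(x / MU)

x = 2.0
for _ in range(500):
    x -= MU * grad_f_mu(x)   # gradient step with step size 1/L = MU

print(x, f(x))  # x is driven into a MU-neighbourhood of the minimizer 0
```

Balancing the smoothing error (proportional to μ) against the gradient method's complexity (proportional to 1/μ) is what yields overall rates beating the black-box subgradient bound, the theme of the paper's primal-dual analysis.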

Posted Content
TL;DR: This work brings together and notably extends various types of structured monotone inclusion problems and their solution methods; the application to convex minimization problems is given special attention.
Abstract: We propose a primal-dual splitting algorithm for solving monotone inclusions involving a mixture of sums, linear compositions, and parallel sums of set-valued and Lipschitzian operators. An important feature of the algorithm is that the Lipschitzian operators present in the formulation can be processed individually via explicit steps, while the set-valued operators are processed individually via their resolvents. In addition, the algorithm is highly parallel in that most of its steps can be executed simultaneously. This work brings together and notably extends various types of structured monotone inclusion problems and their solution methods. The application to convex minimization problems is given special attention.

315 citations
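Algorithms of this family alternate explicit (forward) steps on the Lipschitzian parts with resolvent (proximal) steps on the set-valued parts. A hedged sketch in the spirit of such primal-dual splittings, applied to the illustrative convex instance min_x ½‖x − b‖² + λ‖Dx‖₁ with D a first-difference operator (this is one special case, not the paper's general inclusion):

```python
import numpy as np

n, lam = 20, 0.1
b = np.ones(n)                    # constant signal: the minimizer is x = b exactly
D = np.diff(np.eye(n), axis=0)    # first-difference operator, shape (n-1, n)

tau, sigma = 0.2, 0.5             # step sizes satisfying 1/tau - sigma*||D||^2 >= L_f/2
x, y = np.zeros(n), np.zeros(n - 1)

for _ in range(2000):
    # forward (explicit) step on the Lipschitzian part grad f(x) = x - b,
    # plus the linear coupling D^T y; the prox of g = 0 is the identity
    x_new = x - tau * ((x - b) + D.T @ y)
    # resolvent (prox) step on the conjugate of lam*||.||_1:
    # projection of the dual variable onto the box [-lam, lam]
    y = np.clip(y + sigma * D @ (2 * x_new - x), -lam, lam)
    x = x_new

print(np.max(np.abs(x - b)))  # x converges to the minimizer b
```

Note how the smooth term is only ever differentiated and the nonsmooth term is only ever touched through a cheap projection; the dual update uses D and the primal update uses Dᵀ, so all steps are explicit and parallelizable, which is the structural point of the paper.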

Journal ArticleDOI
TL;DR: A chance-constrained approach is presented that plans the future probabilistic distribution of the vehicle state so that the probability of failure stays below a specified threshold; a customized solution method returns almost-optimal solutions along with a hard bound on the level of suboptimality.
Abstract: Autonomous vehicles need to plan trajectories to a specified goal that avoid obstacles. For robust execution, we must take into account uncertainty, which arises due to uncertain localization, modeling errors, and disturbances. Prior work handled the case of set-bounded uncertainty. We present here a chance-constrained approach, which uses instead a probabilistic representation of uncertainty. The new approach plans the future probabilistic distribution of the vehicle state so that the probability of failure is below a specified threshold. Failure occurs when the vehicle collides with an obstacle or leaves an operator-specified region. The key idea behind the approach is to use bounds on the probability of collision to show that, for linear-Gaussian systems, we can approximate the nonconvex chance-constrained optimization problem as a disjunctive convex program. This can be solved to global optimality using branch-and-bound techniques. In order to improve computation time, we introduce a customized solution method that returns almost-optimal solutions along with a hard bound on the level of suboptimality. We present an empirical validation with an aircraft obstacle avoidance example.

314 citations
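For linear-Gaussian systems, the individual chance constraints used in such formulations admit an exact deterministic form: P(aᵀx ≤ b) ≥ 1 − δ for x ~ N(μ, Σ) holds iff aᵀμ + Φ⁻¹(1 − δ)·√(aᵀΣa) ≤ b. A stdlib-only sketch of this constraint tightening (the back-off computation only, not the paper's full disjunctive convex program):

```python
from math import sqrt
from statistics import NormalDist

def tightened_rhs_margin(a, Sigma, delta):
    """Back-off term Phi^{-1}(1-delta) * sqrt(a^T Sigma a): the chance
    constraint P(a^T x <= b) >= 1 - delta on x ~ N(mu, Sigma) is
    equivalent to the deterministic constraint a^T mu <= b - margin."""
    quad = sum(a[i] * sum(Sigma[i][j] * a[j] for j in range(len(a)))
               for i in range(len(a)))
    return NormalDist().inv_cdf(1 - delta) * sqrt(quad)

# Position uncertainty with unit covariance, obstacle half-space a^T x <= b:
a = [1.0, 0.0]
Sigma = [[1.0, 0.0], [0.0, 1.0]]
print(tightened_rhs_margin(a, Sigma, 0.05))  # ~1.645 standard deviations
print(tightened_rhs_margin(a, Sigma, 0.01))  # larger back-off for lower risk
```

Because the margin is a constant for a fixed covariance, each tightened constraint stays linear in the mean trajectory, which is what keeps the overall planning problem (per disjunct) convex.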


Network Information
Related Topics (5)
- Optimization problem: 96.4K papers, 2.1M citations (94% related)
- Robustness (computer science): 94.7K papers, 1.6M citations (89% related)
- Linear system: 59.5K papers, 1.4M citations (88% related)
- Markov chain: 51.9K papers, 1.3M citations (86% related)
- Control theory: 299.6K papers, 3.1M citations (83% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    392
2022    849
2021    1,461
2020    1,673
2019    1,677
2018    1,580