Topic

Convex optimization

About: Convex optimization is a research topic. Over the lifetime, 24,906 publications have been published within this topic, receiving 908,795 citations. The topic is also known as: convex optimisation.


Papers
Journal ArticleDOI
TL;DR: A new approach to the design of robust adaptive beamforming is introduced to estimate the difference between the actual and presumed steering vectors and to use this difference to correct the erroneous presumed steering vector.
Abstract: A new approach to the design of robust adaptive beamforming is introduced. The essence of the new approach is to estimate the difference between the actual and presumed steering vectors and to use this difference to correct the erroneous presumed steering vector. The estimation process is performed iteratively where a quadratic convex optimization problem is solved at each iteration. Contrary to the worst-case performance-based and the probability-constrained-based approaches, our approach does not make any assumptions on either the norm of the mismatch vector or its probability distribution. Hence, it avoids the need for estimating their values.
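
The abstract gives no formulas, so the following is only a minimal, real-valued sketch of the iterate-and-correct loop in Python/cvxpy. The quadratic objective is robust-Capon-style; the per-iteration norm bound eps and the orthogonality constraint are simplified stand-ins invented for illustration (the paper itself avoids norm and distribution assumptions on the mismatch).

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, T = 8, 200
a_true = np.ones(N) + 0.2 * rng.standard_normal(N)   # actual steering vector
snap = np.outer(a_true, rng.standard_normal(T)) + 0.3 * rng.standard_normal((N, T))
R = snap @ snap.T / T                                 # sample covariance
Rinv = np.linalg.inv(R)
Rinv = (Rinv + Rinv.T) / 2                            # symmetrize for quad_form

a = np.ones(N)                                        # presumed steering vector
eps = 0.3                                             # per-step bound (sketch only)
for _ in range(5):
    e = cp.Variable(N)
    obj = cp.Minimize(cp.quad_form(a + e, Rinv))      # quadratic convex subproblem
    cons = [cp.norm(e) <= eps, a @ e == 0]            # small, norm-preserving step
    cp.Problem(obj, cons).solve()
    a = a + e.value                                   # correct the presumed vector

w = Rinv @ a / (a @ Rinv @ a)                         # MVDR weights with corrected a
```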

186 citations

Journal ArticleDOI
TL;DR: A method for digital circuit optimization based on formulating the problem as a geometric program or generalized geometric program (GGP), which can be transformed to a convex optimization problem and then very efficiently solved.
Abstract: This paper concerns a method for digital circuit optimization based on formulating the problem as a geometric program (GP) or generalized geometric program (GGP), which can be transformed to a convex optimization problem and then very efficiently solved. We start with a basic gate scaling problem, with delay modeled as a simple resistor-capacitor (RC) time constant, and then add various layers of complexity and modeling accuracy, such as accounting for differing signal fall and rise times, and the effects of signal transition times. We then consider more complex formulations such as robust design over corners, multimode design, statistical design, and problems in which threshold and power supply voltage are also variables to be chosen. Finally, we look at the detailed design of gates and interconnect wires, again using a formulation that is compatible with GP or GGP.
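
As a toy instance of the basic gate-scaling step, the sketch below sizes a three-gate chain with a simple RC posynomial delay model using cvxpy's geometric-programming mode; the chain topology and all numbers are invented for illustration.

```python
import cvxpy as cp

# Gate scale factors (>= 1); pos=True is required for geometric programming.
x = cp.Variable(3, pos=True)

# Posynomial RC delay of a chain: gate i drives gate i+1's input capacitance
# (proportional to x[i+1]) through its output resistance (proportional to
# 1/x[i]); the last gate drives a fixed load.
C_load = 10.0
delay = (1 / x[0]) * x[1] + (1 / x[1]) * x[2] + (1 / x[2]) * C_load

area = cp.sum(x)                      # area grows linearly with scale
prob = cp.Problem(cp.Minimize(delay), [area <= 20, x >= 1])
prob.solve(gp=True)                   # solved via the log-log convex transform
print(x.value, prob.value)
```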

186 citations

Journal ArticleDOI
TL;DR: The paper addresses the design of distributed robust filters using recently developed vector dissipativity theory and proposes a gradient-descent-type algorithm that allows the nodes to compute their estimator parameters in a decentralized manner.
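
The TL;DR gives no algorithmic detail; the generic decentralized gradient-descent sketch below (consensus mixing plus local gradient steps on invented per-node least-squares costs) only illustrates what computing estimator parameters "in a decentralized manner" can look like, and is not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, d = 4, 3
# Invented local least-squares costs: node i holds (A_i, b_i).
A = [rng.standard_normal((5, d)) for _ in range(n_nodes)]
b = [rng.standard_normal(5) for _ in range(n_nodes)]

# Ring network with symmetric, doubly stochastic mixing weights.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

X = np.zeros((n_nodes, d))            # one parameter vector per node
step = 0.05
for _ in range(300):
    grads = np.stack([A[i].T @ (A[i] @ X[i] - b[i]) for i in range(n_nodes)])
    X = W @ X - step * grads          # average with neighbors, then step locally
print(X)                              # node estimates approximately agree
```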

186 citations

Journal ArticleDOI
TL;DR: It is shown that the solutions of the sequence of approximations converge to a Karush-Kuhn-Tucker (KKT) point of the JCCP under a certain asymptotic regime.
Abstract: When there is parameter uncertainty in the constraints of a convex optimization problem, it is natural to formulate the problem as a joint chance constrained program (JCCP), which requires that all constraints be satisfied simultaneously with a given large probability. In this paper, we propose to solve the JCCP by a sequence of convex approximations. We show that the solutions of the sequence of approximations converge to a Karush-Kuhn-Tucker (KKT) point of the JCCP under a certain asymptotic regime. Furthermore, we propose to use a gradient-based Monte Carlo method to solve the sequence of convex approximations.
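
For context, one standard way to build a single convex approximation of a chance constraint is the sampled CVaR reformulation sketched below in Python/cvxpy; it is a well-known conservative surrogate, not the specific sequence of approximations or the Monte Carlo gradient method proposed in the paper, and all problem data are invented.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
d, M, alpha = 3, 500, 0.05            # dimension, samples, violation level
A = rng.standard_normal((M, d))       # sampled rows of the uncertain constraint
c = np.ones(d)

x = cp.Variable(d)
t = cp.Variable()
# CVaR surrogate: t + E[(a^T x - 1 - t)_+] / alpha <= 0 implies
# P(a^T x <= 1) >= 1 - alpha over the sample (conservatively).
cvar = t + cp.sum(cp.pos(A @ x - 1 - t)) / (alpha * M)

prob = cp.Problem(cp.Maximize(c @ x), [cvar <= 0])
prob.solve()
print(x.value)
```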

186 citations

Journal ArticleDOI
TL;DR: In the general framework of infinite-dimensional convex programming, two fundamental principles are demonstrated and used to derive several basic algorithms to solve a so-called "master" (constrained optimization) problem.
Abstract: In the general framework of infinite-dimensional convex programming, two fundamental principles are demonstrated and used to derive several basic algorithms to solve a so-called "master" (constrained optimization) problem. These algorithms consist of solving an infinite sequence of "auxiliary" problems whose solutions converge to the optimal solution of the master problem. By making particular choices for the auxiliary problems, one can recover either classical algorithms (gradient, Newton-Raphson, Uzawa) or decomposition-coordination (two-level) algorithms. The advantages of the theory are that it clearly establishes the connection between classical and two-level algorithms, provides a framework for classifying the two-level algorithms, and gives a systematic way of deriving new algorithms.
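
To make the auxiliary-problem idea concrete: choosing a quadratic auxiliary core gives subproblems with a closed-form solution, and the scheme reduces to projected gradient descent, one of the classical algorithms the paper recovers. Below is a minimal sketch on an invented box-constrained quadratic.

```python
import numpy as np

def solve_auxiliary(x_k, grad, step, lo, hi):
    # argmin over the box [lo, hi]^n of <grad, x> + ||x - x_k||^2 / (2 * step),
    # i.e. a projected gradient step.
    return np.clip(x_k - step * grad, lo, hi)

# Master problem: minimize 0.5 x^T Q x + q^T x over the box [0, 1]^2.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -4.0])
f_grad = lambda x: Q @ x + q

x = np.zeros(2)
for _ in range(200):                  # infinite sequence, truncated in practice
    x = solve_auxiliary(x, f_grad(x), step=0.2, lo=0.0, hi=1.0)
print(x)                              # converges to the constrained minimizer [0, 1]
```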

186 citations


Network Information
Related Topics (5)
Optimization problem
96.4K papers, 2.1M citations
94% related
Robustness (computer science)
94.7K papers, 1.6M citations
89% related
Linear system
59.5K papers, 1.4M citations
88% related
Markov chain
51.9K papers, 1.3M citations
86% related
Control theory
299.6K papers, 3.1M citations
83% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    392
2022    849
2021    1,461
2020    1,673
2019    1,677
2018    1,580