Topic
Convex optimization
About: Convex optimization is a research topic. Over its lifetime, 24,906 publications have been published within this topic, receiving 908,795 citations. The topic is also known as: convex optimisation.
Papers published on a yearly basis
Papers
TL;DR: An optimization framework for computing optimal upper and lower bounds on functional expectations of distributions with special properties, given moment constraints is provided and generalizations of Chebyshev's inequality for symmetric and unimodal distributions are obtained.
Abstract: We provide an optimization framework for computing optimal upper and lower bounds on functional expectations of distributions with special properties, given moment constraints. Bertsimas and Popescu (Optimal inequalities in probability theory: a convex optimization approach. SIAM J. Optim. 2004. Forthcoming) have already shown how to obtain optimal moment inequalities for arbitrary distributions via semidefinite programming. These bounds are not sharp if the underlying distributions possess additional structural properties, including symmetry, unimodality, convexity, or smoothness. For convex distribution classes that are in some sense generated by an appropriate parametric family, we use conic duality to show how optimal moment bounds can be efficiently computed as semidefinite programs. In particular, we obtain generalizations of Chebyshev's inequality for symmetric and unimodal distributions and provide numerical calculations to compare these bounds, given higher-order moments. We also extend these results to multivariate distributions.
170 citations
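The classical Chebyshev inequality that the paper above generalizes can be checked numerically in a few lines of numpy; the sampled distribution below (a centered exponential) is my own illustrative choice, not one from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample from an asymmetric distribution: exponential, shifted to mean 0.
x = rng.exponential(scale=1.0, size=200_000) - 1.0
mu, sigma = x.mean(), x.std()

for k in (1.5, 2.0, 3.0):
    empirical = np.mean(np.abs(x - mu) >= k * sigma)  # P(|X - mu| >= k*sigma)
    chebyshev = 1.0 / k**2                            # classical two-sided bound
    assert empirical <= chebyshev
    print(f"k={k}: empirical tail {empirical:.4f} <= bound {chebyshev:.4f}")
```

The gap between the empirical tail and the 1/k² bound is exactly the slack that sharper, structure-aware bounds of the kind the paper derives can close.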
TL;DR: Both structural and algebraic enhancements of decentralized feedback will be considered, with convex optimization as a common mathematical framework, which leads to computationally efficient design strategies that are well suited for large-scale applications.
170 citations
TL;DR: This paper analyzes the exponential method of multipliers for convex constrained minimization problems, which operates like the usual Augmented Lagrangian method, except that it uses an exponential penalty function in place of the usual quadratic.
Abstract: In this paper, we analyze the exponential method of multipliers for convex constrained minimization problems, which operates like the usual Augmented Lagrangian method, except that it uses an exponential penalty function in place of the usual quadratic. We also analyze a dual counterpart, the entropy minimization algorithm, which operates like the proximal minimization algorithm, except that it uses a logarithmic/entropy “proximal” term in place of a quadratic. We strengthen substantially the available convergence results for these methods, and we derive the convergence rate of these methods when applied to linear programs.
170 citations
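A minimal sketch of the exponential multiplier update on a toy problem, minimize x² subject to x ≥ 1, whose KKT solution is x* = 1 with multiplier 2; the instance, penalty parameter, and bisection-based inner solve are my own illustrative choices, not details from the paper:

```python
import numpy as np

# Toy problem: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0.
# KKT optimum: x* = 1, multiplier mu* = 2.
g = lambda x: 1.0 - x

c = 1.0      # penalty parameter
mu = 1.0     # initial multiplier (stays positive by construction)
x = 0.0

for _ in range(60):
    # Inner step: minimize x^2 + (mu/c) * exp(c * g(x)) by bisection on
    # its derivative h(x) = 2x - mu * exp(c * (1 - x)), which is increasing.
    lo, hi = 0.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if 2.0 * mid - mu * np.exp(c * g(mid)) < 0.0:
            lo = mid
        else:
            hi = mid
    x = 0.5 * (lo + hi)
    # Outer step: multiplicative (exponential) multiplier update.
    mu *= np.exp(c * g(x))

print(x, mu)  # approaches the KKT pair (1, 2)
```

Note the contrast with the quadratic Augmented Lagrangian: the multiplier is updated multiplicatively via an exponential of the constraint value, so it can never become negative.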
TL;DR: A new method designed to globally minimize concave functions over linear polyhedra is described; an example problem is solved, and computational considerations are discussed.
Abstract: A new method designed to globally minimize concave functions over linear polyhedra is described. Properties of the method are discussed, an example problem is solved, and computational considerations are addressed.
169 citations
TL;DR: This paper considers problem (P) of minimizing a quadratic function q(x) = xᵀQx + cᵀx of binary variables and devises two preprocessing methods for convexifying q(x): the first computes the smallest eigenvalue of Q, while the second obtains the vector u from a classical SDP relaxation of (P).
Abstract: In this paper, we consider problem (P) of minimizing a quadratic function q(x) = xᵀQx + cᵀx of binary variables. Our main idea is to use the recent Mixed Integer Quadratic Programming (MIQP) solvers. But, for this, we have to first convexify the objective function q(x). A classical trick is to raise up the diagonal entries of Q by a vector u until (Q + diag(u)) is positive semidefinite. Then, using the fact that xᵢ² = xᵢ, we can obtain an equivalent convex objective function, which can then be handled by an MIQP solver. Hence, computing a suitable vector u constitutes a preprocessing phase in this exact solution method. We devise two different preprocessing methods. The first one is straightforward and consists in computing the smallest eigenvalue of Q. In the second method, vector u is obtained once a classical SDP relaxation of (P) is solved. We carry out computational tests using the generator of (Pardalos and Rodgers, 1990) and we compare our two solution methods to several other exact solution methods. Furthermore, we report computational results for the max-cut problem.
169 citations
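The first preprocessing method above, the eigenvalue-based diagonal shift, can be sketched in a few lines of numpy; the random instance is illustrative, and the exhaustive check over binary points is only feasible because the dimension is tiny:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
Q = (A + A.T) / 2.0               # symmetric, generally indefinite
c = rng.standard_normal(n)

# First preprocessing method: shift the diagonal by -lambda_min(Q)
# so that Q + diag(u) is positive semidefinite.
lam_min = np.linalg.eigvalsh(Q)[0]      # eigvalsh sorts ascending
u = np.full(n, -lam_min) if lam_min < 0 else np.zeros(n)
Q_conv = Q + np.diag(u)
assert np.linalg.eigvalsh(Q_conv)[0] >= -1e-9   # PSD check

# On binary points x_i^2 = x_i, so subtracting u from the linear term
# leaves the objective value unchanged: the convexified problem is exact.
q = lambda x, Q, c: x @ Q @ x + c @ x
for bits in range(2 ** n):
    x = np.array([(bits >> i) & 1 for i in range(n)], dtype=float)
    assert np.isclose(q(x, Q, c), q(x, Q_conv, c - u))
print("convexified objective agrees on all binary points")
```

The uniform shift by the smallest eigenvalue is the "straightforward" method; the SDP-based choice of u the paper also proposes can give a tighter continuous relaxation, at the cost of solving an SDP in the preprocessing phase.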