
Showing papers on "Convex optimization published in 2001"


MonographDOI
01 Jun 2001
TL;DR: The authors present the basic theory of state-of-the-art polynomial time interior point methods for linear, conic quadratic, and semidefinite programming as well as their numerous applications in engineering.
Abstract: This is a book devoted to well-structured and thus efficiently solvable convex optimization problems, with emphasis on conic quadratic and semidefinite programming. The authors present the basic theory underlying these problems as well as their numerous applications in engineering, including synthesis of filters, Lyapunov stability analysis, and structural design. The authors also discuss the complexity issues and provide an overview of the basic theory of state-of-the-art polynomial time interior point methods for linear, conic quadratic, and semidefinite programming. The book's focus on well-structured convex problems in conic form allows for unified theoretical and algorithmical treatment of a wide spectrum of important optimization problems arising in applications.

2,651 citations


Journal ArticleDOI
TL;DR: An algorithm involving convex optimization is proposed to design a controller guaranteeing a suboptimal maximal delay such that the system can be stabilized for all admissible uncertainties.
Abstract: This paper concerns a problem of robust stabilization of uncertain state-delayed systems. A new delay-dependent stabilization condition using a memoryless controller is formulated in terms of matrix inequalities. An algorithm involving convex optimization is proposed to design a controller guaranteeing a suboptimal maximal delay such that the system can be stabilized for all admissible uncertainties.

1,432 citations
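The stabilization condition above is formulated as matrix inequalities. As a hedged illustration of the Lyapunov machinery underlying such conditions (a known stable system with no delays or uncertainty, which is a deliberate simplification of the paper's setting), the sketch below solves the Lyapunov equation A'P + PA = -Q and checks that P is positive definite:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# A stable system matrix (eigenvalues -1 and -2, in the open left half-plane).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
Q = np.eye(2)

# Solve the Lyapunov equation A^T P + P A = -Q.
# solve_continuous_lyapunov(a, q) solves a X + X a^H = q,
# so pass a = A^T and q = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

# For a stable A, P is symmetric positive definite, so V(x) = x^T P x
# is a Lyapunov function certifying stability.
eigs = np.linalg.eigvalsh((P + P.T) / 2)
print(eigs)  # all positive
```

The LMI-based designs in these papers replace the equality with an inequality constraint and search over P (and controller variables) by convex programming; the equation above is the simplest feasible point of that machinery.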


Book
25 Sep 2001
TL;DR: The authors develop the fundamentals of convex analysis: convex sets, convex functions, sublinearity and support functions, subdifferentials of finite convex functions, and conjugacy.
Abstract: Introduction: Notation, Elementary Results.- Convex Sets: Generalities Convex Sets Attached to a Convex Set Projection onto Closed Convex Sets Separation and Applications Conical Approximations of Convex Sets.- Convex Functions: Basic Definitions and Examples Functional Operations Preserving Convexity Local and Global Behaviour of a Convex Function First- and Second-Order Differentiation.- Sublinearity and Support Functions: Sublinear Functions The Support Function of a Nonempty Set Correspondence Between Convex Sets and Sublinear Functions.- Subdifferentials of Finite Convex Functions: The Subdifferential: Definitions and Interpretations Local Properties of the Subdifferential First Examples Calculus Rules with Subdifferentials Further Examples The Subdifferential as a Multifunction.- Conjugacy in Convex Analysis: The Convex Conjugate of a Function Calculus Rules on the Conjugacy Operation Various Examples Differentiability of a Conjugate Function.

1,235 citations


Proceedings ArticleDOI
07 Jul 2001
TL;DR: It is proved that the set of all reflectance functions (the mapping from surface normals to intensities) produced by Lambertian objects under distant, isotropic lighting lies close to a 9D linear subspace, implying that the images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately with a low-dimensional linear subspace.
Abstract: We prove that the set of all reflectance functions (the mapping from surface normals to intensities) produced by Lambertian objects under distant, isotropic lighting lies close to a 9D linear subspace. This implies that the images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately with a low-dimensional linear subspace, explaining prior empirical results. We also provide a simple analytic characterization of this linear space. We obtain these results by representing lighting using spherical harmonics and describing the effects of Lambertian materials as the analog of a convolution. These results allow us to construct algorithms for object recognition based on linear methods as well as algorithms that use convex optimization to enforce non-negative lighting functions.

806 citations
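The 9D-subspace claim can be checked numerically. The hedged sketch below (sample sizes and the single-directional-light example are my choices, not the paper's) samples the Lambertian reflectance r(n) = max(n·L, 0) at many normals and least-squares fits it in the 9 polynomials spanning spherical harmonics of order at most 2; the fit captures the vast majority of the energy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random unit normals on the sphere.
n = rng.normal(size=(20000, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
x, y, z = n.T

# Lambertian reflectance under a single distant light L: r(n) = max(n.L, 0).
L = np.array([0.3, -0.5, 0.8]); L /= np.linalg.norm(L)
r = np.maximum(n @ L, 0.0)

# The 9 polynomials in (x, y, z) spanning spherical harmonics of order <= 2.
B = np.column_stack([np.ones_like(x), x, y, z,
                     x*y, x*z, y*z, x**2 - y**2, 3*z**2 - 1])

coef, *_ = np.linalg.lstsq(B, r, rcond=None)
resid = r - B @ coef
r2 = 1 - resid @ resid / (r @ r)
print(r2)  # close to 1: the 9D subspace captures ~99% of the energy
```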


Journal ArticleDOI
TL;DR: The convergence properties of a number of variants of incremental subgradient methods, including some that are stochastic, are established for minimizing a convex function that consists of the sum of a large number of component functions.
Abstract: We consider a class of subgradient methods for minimizing a convex function that consists of the sum of a large number of component functions. This type of minimization arises in a dual context from Lagrangian relaxation of the coupling constraints of large scale separable problems. The idea is to perform the subgradient iteration incrementally, by sequentially taking steps along the subgradients of the component functions, with intermediate adjustment of the variables after processing each component function. This incremental approach has been very successful in solving large differentiable least squares problems, such as those arising in the training of neural networks, and it has resulted in a much better practical rate of convergence than the steepest descent method. In this paper, we establish the convergence properties of a number of variants of incremental subgradient methods, including some that are stochastic. Based on the analysis and computational experiments, the methods appear very promising and effective for important classes of large problems. A particularly interesting discovery is that by randomizing the order of selection of component functions for iteration, the convergence rate is substantially improved.

611 citations
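A hedged one-dimensional sketch of the incremental idea (the toy data and the stepsize schedule are my choices): minimize f(x) = sum_i |x - a_i| by stepping along one component subgradient at a time, with a randomized processing order as the paper suggests; the minimizer is the median of the a_i.

```python
import numpy as np

a = np.array([1.0, 2.0, 4.0, 7.0, 9.0])  # component data; minimizer = median = 4
x = 0.0

for k in range(1, 2001):
    t = 1.0 / k  # classical diminishing stepsize: t_k -> 0, sum t_k = inf
    for ai in np.random.default_rng(k).permutation(a):  # randomized order
        g = np.sign(x - ai)          # subgradient of |x - a_i|
        x -= t * g                   # incremental step after each component
print(x)  # close to 4.0
```

Each inner step touches only one component function, which is what makes the scheme attractive when the sum has very many terms.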


Journal ArticleDOI
TL;DR: The main contribution of the paper is that the AWBT controller synthesis, using static compensation, is cast as a convex optimization over linear matrix inequalities.

373 citations


Journal ArticleDOI
TL;DR: Qualitative and quantitative results show that the spatio-temporal approach leads to a rotationally invariant and time symmetric convex optimization problem and has a unique minimum that can be found in a stable way by standard algorithms such as gradient descent.
Abstract: Nonquadratic variational regularization is a well-known and powerful approach for the discontinuity-preserving computation of optic flow. In the present paper, we consider an extension of flow-driven spatial smoothness terms to spatio-temporal regularizers. Our method leads to a rotationally invariant and time symmetric convex optimization problem. It has a unique minimum that can be found in a stable way by standard algorithms such as gradient descent. Since the convexity guarantees global convergence, the result does not depend on the flow initialization. Two iterative algorithms are presented that are not difficult to implement. Qualitative and quantitative results for synthetic and real-world scenes show that our spatio-temporal approach (i) improves optic flow fields significantly, (ii) smoothes out background noise efficiently, and (iii) preserves true motion boundaries. The computational costs are only 50% higher than for a pure spatial approach applied to all subsequent image pairs of the sequence.

318 citations


Book ChapterDOI
13 Jun 2001
TL;DR: The general nonlinear optimization problem in 0-1 variables is shown to admit an explicit equivalent convex positive semidefinite program in 2^n - 1 variables, with the same optimal value.
Abstract: We consider the general nonlinear optimization problem in 0-1 variables and provide an explicit equivalent convex positive semidefinite program in 2^n - 1 variables. The optimal values of both problems are identical. From every optimal solution of the former one easily finds an optimal solution of the latter, and conversely, from every solution of the latter one may construct an optimal solution of the former.

287 citations


Book ChapterDOI
01 Jan 2001
TL;DR: An incremental approach to minimizing a convex function that consists of the sum of a large number of component functions is considered, which has been very successful in solving large differentiable least squares problems, such as those arising in the training of neural networks.
Abstract: We consider a class of subgradient methods for minimizing a convex function that consists of the sum of a large number of component functions. This type of minimization arises in a dual context from Lagrangian relaxation of the coupling constraints of large scale separable problems. The idea is to perform the subgradient iteration incrementally, by sequentially taking steps along the subgradients of the component functions, with intermediate adjustment of the variables after processing each component function. This incremental approach has been very successful in solving large differentiable least squares problems, such as those arising in the training of neural networks, and it has resulted in a much better practical rate of convergence than the steepest descent method.

277 citations


Journal ArticleDOI
TL;DR: The main result is that a minimal confidence ellipsoid for the state, consistent with the measured output and the uncertainty description, may be recursively computed in polynomial time, using interior-point methods for convex optimization.
Abstract: This note presents a new approach to finite-horizon guaranteed state prediction for discrete-time systems affected by bounded noise and unknown-but-bounded parameter uncertainty. Our framework handles possibly nonlinear dependence of the state-space matrices on the uncertain parameters. The main result is that a minimal confidence ellipsoid for the state, consistent with the measured output and the uncertainty description, may be recursively computed in polynomial time, using interior-point methods for convex optimization. With n states and l uncertain parameters appearing linearly in the state-space matrices, with rank-one matrix coefficients, the worst-case complexity grows as O(l(n + l)^3.5). With unstructured uncertainty in all system matrices, the worst-case complexity reduces to O(n^3.5).

277 citations


Journal ArticleDOI
TL;DR: Investigates robust filtering design problems in H2 and H∞ spaces for continuous-time systems subjected to parameter uncertainty belonging to a convex bounded-polyhedral domain and shows that both designs can be converted into convex programming problems written in terms of linear matrix inequalities.
Abstract: Investigates robust filtering design problems in H2 and H∞ spaces for continuous-time systems subjected to parameter uncertainty belonging to a convex bounded-polyhedral domain. It is shown that, by a suitable change of variables, both designs can be converted into convex programming problems written in terms of linear matrix inequalities. The results generalize the ones available in the literature to date in several directions. First, all system matrices can be corrupted by parameter uncertainty and the admissible uncertainty may be structured. Then, assuming the order of the uncertain system is known, the optimal guaranteed performance H2 and H∞ filters are proven to be of the same order as the order of the system. A numerical example illustrates the theoretical results.

Journal ArticleDOI
TL;DR: In this paper, robust H∞ filtering for continuous-time uncertain linear systems with multiple time-varying delays in the state variables is investigated, where the uncertain parameters are supposed to belong to a given convex bounded polyhedral domain.
Abstract: The problem of robust H∞ filtering for continuous-time uncertain linear systems with multiple time-varying delays in the state variables is investigated. The uncertain parameters are supposed to belong to a given convex bounded polyhedral domain. The aim is to design a stable linear filter assuring asymptotic stability and a prescribed H∞ performance level for the filtering error system, irrespective of the uncertainties and the time delays. Sufficient conditions for the existence of such a filter are established in terms of linear matrix inequalities, which can be efficiently solved by means of powerful convex programming tools with global convergence assured. An example illustrates the proposed methodology.

Journal ArticleDOI
TL;DR: A generalized entropy criterion for solving the rational Nevanlinna-Pick problem for n+1 interpolating conditions and the degree of interpolants bounded by n is presented, which requires a selection of a monic Schur polynomial of degree n.
Abstract: We present a generalized entropy criterion for solving the rational Nevanlinna-Pick problem for n+1 interpolating conditions and the degree of interpolants bounded by n. The primal problem of maximizing this entropy gain has a very well-behaved dual problem. This dual is a convex optimization problem in a finite-dimensional space and gives rise to an algorithm for finding all interpolants which are positive real and rational of degree at most n. The criterion requires a selection of a monic Schur polynomial of degree n. It follows that this class of monic polynomials completely parameterizes all such rational interpolants, and it therefore provides a set of design parameters for specifying such interpolants. The algorithm is implemented in a state-space form and applied to several illustrative problems in systems and control, namely sensitivity minimization, maximal power transfer and spectral estimation.

Proceedings Article
03 Jan 2001
TL;DR: An equivalence is derived between AdaBoost and the dual of a convex optimization problem, showing that the only difference between minimizing the exponential loss used by AdaBoost and maximum likelihood for exponential models is that the latter requires the model to be normalized to form a conditional probability distribution over labels.
Abstract: We derive an equivalence between AdaBoost and the dual of a convex optimization problem, showing that the only difference between minimizing the exponential loss used by AdaBoost and maximum likelihood for exponential models is that the latter requires the model to be normalized to form a conditional probability distribution over labels. In addition to establishing a simple and easily understood connection between the two methods, this framework enables us to derive new regularization procedures for boosting that directly correspond to penalized maximum likelihood. Experiments on UCI datasets support our theoretical analysis and give additional insight into the relationship between boosting and logistic regression.
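A hedged toy comparison of the two criteria (the data, model, and optimizer choices are mine, not the paper's): fit a linear scorer by minimizing AdaBoost's exponential loss and the logistic loss on the same data. The losses differ, but on noisy, roughly linearly separable data they produce similar decision directions:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
ytrue = np.sign(X @ np.array([2.0, -1.0]) + 0.3 * rng.normal(size=200))

def exp_loss(w):       # AdaBoost's criterion: mean of exp(-y f(x))
    return np.exp(-ytrue * (X @ w)).mean()

def logistic_loss(w):  # normalized exponential model: log(1 + exp(-y f(x)))
    return np.log1p(np.exp(-ytrue * (X @ w))).mean()

w_exp = minimize(exp_loss, np.zeros(2)).x
w_log = minimize(logistic_loss, np.zeros(2)).x

# Compare decision directions (cosine similarity of the two weight vectors).
cos = w_exp @ w_log / (np.linalg.norm(w_exp) * np.linalg.norm(w_log))
print(cos)  # close to 1
```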

Journal ArticleDOI
TL;DR: The main result of the paper is to prove that this iterative algorithm provides a controller which quadratically stabilizes the uncertain system with probability one in a finite number of steps.

Journal ArticleDOI
TL;DR: In this paper, the authors consider topology optimization of elastic continua, where the elasticity tensor is assumed to depend linearly on the design function (density) as in the variable thickness sheet problem.

Journal ArticleDOI
TL;DR: Numerical experiments show that the solutions to the sequence of convex programs converge to the same design point for widely varying initial guesses, suggesting that the approach is capable of determining the globally optimal solution to the CMOS op-amp circuit sizing problem.
Abstract: The problem of CMOS op-amp circuit sizing is addressed here. Given a circuit and its performance specifications, the goal is to automatically determine the device sizes in order to meet the given performance specifications while minimizing a cost function, such as a weighted sum of the active area and power dissipation. The approach is based on the observation that the first order behavior of a MOS transistor in the saturation region is such that the cost and the constraint functions for this optimization problem can be modeled as posynomial in the design variables. The problem is then solved efficiently as a convex optimization problem. Second order effects are then handled by formulating the problem as one of solving a sequence of convex programs. Numerical experiments show that the solutions to the sequence of convex programs converge to the same design point for widely varying initial guesses. This strongly suggests that the approach is capable of determining the globally optimal solution to the problem. Accuracy of performance prediction in the sizing program (implemented in MATLAB) is maintained by using a newly proposed MOS transistor model and verified against detailed SPICE simulation.
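A hedged sketch of the posynomial-to-convex transformation the sizing approach relies on (the tiny one-variable objective is mine, not the paper's circuit model): under the change of variables y = log x, a posynomial becomes a log-sum-exp of affine functions, hence convex, and can be minimized reliably.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Posynomial in one variable: f(x) = 2/x + x^2 for x > 0.
# With y = log x: g(y) = log(2 e^{-y} + e^{2y}) is convex in y.
def g(y):
    return np.logaddexp(np.log(2.0) - y, 2.0 * y)

res = minimize_scalar(g, bounds=(-5, 5), method="bounded")
x_opt = np.exp(res.x)
print(x_opt)  # stationarity -2/x^2 + 2x = 0 gives x = 1
```

Minimizing g is equivalent to minimizing log f, which has the same minimizer as f; this is the standard geometric-programming trick behind posynomial circuit sizing.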

Journal ArticleDOI
TL;DR: An algorithm for computing the global minimum of the problem by means of an interior-point method for convex programs is proposed.
Abstract: We consider the problem of minimizing the sum of a convex function and of p≥1 fractions subject to convex constraints. The numerators of the fractions are positive convex functions, and the denominators are positive concave functions. Thus, each fraction is quasi-convex. We give a brief discussion of the problem and prove that in spite of its special structure, the problem is NP-complete even when only p=1 fraction is involved. We then show how the problem can be reduced to the minimization of a function of p variables where the function values are given by the solution of certain convex subproblems. Based on this reduction, we propose an algorithm for computing the global minimum of the problem by means of an interior-point method for convex programs.
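For the single-ratio case (p = 1), the classical Dinkelbach parametrization gives a feel for why each subproblem is convex: with f positive convex and h positive concave, min f(x) - λh(x) is convex for λ > 0. This hedged sketch (the example ratio is mine, not the paper's) minimizes (x² + 1)/x over [0.5, 4]:

```python
from scipy.optimize import minimize_scalar

f = lambda x: x**2 + 1   # positive convex numerator
h = lambda x: x          # positive concave denominator

lam, x = 0.0, 2.0
for _ in range(30):
    lam = f(x) / h(x)                      # current ratio value
    # Subproblem min f(x) - lam*h(x) is convex (f convex, -lam*h convex).
    x = minimize_scalar(lambda t: f(t) - lam * h(t),
                        bounds=(0.5, 4.0), method="bounded").x
print(x, lam)  # the ratio is minimized at x = 1 with value 2
```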

Journal ArticleDOI
TL;DR: The general subgradient projection method for minimizing a quasiconvex objective subject to a convex set constraint in a Hilbert space is studied, finding ε-solutions with an efficiency estimate of O(ε^-2), thus being optimal in the sense of Nemirovskii.
Abstract: We study a general subgradient projection method for minimizing a quasiconvex objective subject to a convex set constraint in a Hilbert space. Our setting is very general: the objective is only upper semicontinuous on its domain, which need not be open, and various subdifferentials may be used. We extend previous results by proving convergence in objective values and to the generalized solution set for classical stepsizes t_k → 0, Σt_k = ∞, and weak or strong convergence of the iterates to a solution for {t_k} ∈ ℓ² ∖ ℓ¹ under mild regularity conditions. For bounded constraint sets and suitable stepsizes, the method finds ε-solutions with an efficiency estimate of O(ε^-2), thus being optimal in the sense of Nemirovskii.
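A hedged sketch of the method with the classical stepsize t_k = 1/√k (the problem instance is mine): minimize the convex, hence quasiconvex, objective f(x) = ‖x − c‖ over the unit ball; the solution is the projection c/‖c‖ of c onto the ball.

```python
import numpy as np

c = np.array([3.0, 4.0])          # target lying outside the unit ball
x = np.zeros(2)

def project(z):                   # Euclidean projection onto the unit ball
    nrm = np.linalg.norm(z)
    return z if nrm <= 1 else z / nrm

for k in range(1, 20001):
    g = (x - c) / np.linalg.norm(x - c)   # subgradient of ||x - c|| (x != c)
    x = project(x - g / np.sqrt(k))       # step t_k = 1/sqrt(k), then project
print(x)  # close to c/||c|| = [0.6, 0.8]
```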

Journal ArticleDOI
TL;DR: This work considers large-scale topology optimization of elastic continua in 3D, using the regularized intermediate density control introduced in [1] within the nested approach, i.e., equilibrium is solved at each iteration.

Journal ArticleDOI
TL;DR: This paper discusses Markov random field problems in the context of a representative application, the image segmentation problem, and presents an algorithm that solves the problem in polynomial time when the deviation function is convex and the separation function is linear.
Abstract: Problems of statistical inference involve the adjustment of sample observations so they fit some a priori rank requirements, or order constraints. In such problems, the objective is to minimize a deviation cost function that depends on the distance between the observed value and the modified value. In Markov random field problems, there is also a pairwise relationship between the objects. The objective in a Markov random field problem is to minimize the sum of the deviation cost function and a penalty function that grows with the distance between the values of related pairs, the separation function. We discuss Markov random field problems in the context of a representative application, the image segmentation problem. In this problem, the goal is to modify color shades assigned to pixels of an image so that the penalty function, consisting of one term due to the deviation from the initial color shade and a second term that penalizes differences in assigned values to neighboring pixels, is minimized. We present here an algorithm that solves the problem in polynomial time when the deviation function is convex and the separation function is linear, and in strongly polynomial time when the deviation cost function is linear, quadratic, or piecewise linear convex with few pieces (where "few" means a number exponential in a polynomial function of the number of variables and constraints). The complexity of the algorithm for a problem on n pixels or variables, m adjacency relations or constraints, and range of variable values (colors) U, is O(T(n,m) + n log U), where T(n,m) is the complexity of solving the minimum s,t-cut problem on a graph with n nodes and m arcs. Furthermore, other algorithms are shown to solve the problem with convex deviation and convex separation in running time O(mn log n log nU), and the problem with nonconvex deviation and convex separation in running time O(T(nU, mU)).
The nonconvex separation problem is NP-hard even for a fixed value of U. For the family of problems with convex deviation functions and a linear separation function, the algorithm described here runs in polynomial time, which is demonstrated to be the fastest possible.
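A hedged miniature of the deviation-plus-separation structure (a 1D chain with quadratic deviation and quadratic separation, both choices mine; the paper's strongly polynomial results concern linear separation solved via minimum cuts): with both terms quadratic, the objective Σ(x_i − o_i)² + λΣ(x_{i+1} − x_i)² is minimized by solving a single tridiagonal linear system.

```python
import numpy as np

o = np.array([0.0, 0.1, 5.0, 0.2, 0.0, 4.8, 5.1, 5.0])  # noisy observations
n, lam = len(o), 2.0

# First-difference operator D: (D x)_i = x_{i+1} - x_i.
D = np.diff(np.eye(n), axis=0)

# Minimize ||x - o||^2 + lam * ||D x||^2  =>  (I + lam D^T D) x = o.
x = np.linalg.solve(np.eye(n) + lam * D.T @ D, o)
print(x)  # smoothed: neighbouring values pulled together, deviations kept small
```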

Proceedings ArticleDOI
01 Sep 2001
TL;DR: A theoretical analysis shows that the proposed method provides better or at least the same results as the methods presented in the literature, and the proposed design method is applied to the control of an inverted pendulum.
Abstract: Relaxed conditions for the stability study of nonlinear, continuous systems given by fuzzy models are presented. A theoretical analysis shows that the proposed method provides better or at least the same results as the methods presented in the literature. Digital simulations exemplify this fact. These results are also used for the design of fuzzy regulators and observers. The nonlinear systems are represented by the fuzzy models proposed by Takagi and Sugeno. The stability analysis and the design of controllers are described by LMIs (linear matrix inequalities), which can be solved efficiently by convex programming techniques. The specification of the decay rate and constraints on the control input and output are also described by LMIs. Finally, the proposed design method is applied to the control of an inverted pendulum.

Journal ArticleDOI
TL;DR: A provably good convex quadratic programming relaxation of strongly polynomial size is proposed for this problem of scheduling unrelated parallel machines subject to release dates so as to minimize the total weighted completion time of jobs.
Abstract: We consider the problem of scheduling unrelated parallel machines subject to release dates so as to minimize the total weighted completion time of jobs. The main contribution of this paper is a provably good convex quadratic programming relaxation of strongly polynomial size for this problem. The best previously known approximation algorithms are based on LP relaxations in time- or interval-indexed variables. Those LP relaxations, however, suffer from a huge number of variables. As a result of the convex quadratic programming approach we can give a very simple and easy to analyze 2-approximation algorithm which can be further improved to performance guarantee 3/2 in the absence of release dates. We also consider preemptive scheduling problems and derive approximation algorithms and results on the power of preemption which improve upon the best previously known results for these settings. Finally, for the special case of two machines we introduce a more sophisticated semidefinite programming relaxation and apply the random hyperplane technique introduced by Goemans and Williamson for the MaxCut problem; this leads to an improved 1.2752-approximation.

Journal ArticleDOI
TL;DR: This paper deals with convex half-quadratic criteria and associated minimization algorithms for the purpose of image restoration and brings a number of original elements within a unified mathematical presentation based on convex duality.
Abstract: This paper deals with convex half-quadratic criteria and associated minimization algorithms for the purpose of image restoration. It brings a number of original elements within a unified mathematical presentation based on convex duality. Firstly, the Geman and Yang (1995) and Geman and Reynolds (1992) constructions are revisited, with a view to establishing the convexity properties of the resulting half-quadratic augmented criteria, when the original nonquadratic criterion is already convex. Secondly, a family of convex Gibbsian energies that incorporate interacting auxiliary variables is revealed as a potentially fruitful extension of the Geman and Reynolds construction.
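A hedged sketch of multiplicative (Geman and Reynolds-style) half-quadratic minimization (the 1D signal, the potential φ(t) = √(1+t²), and λ are my choices): alternate between a closed-form auxiliary weight b_i = φ'(t_i)/(2t_i) and a weighted least-squares solve; the original criterion decreases at every sweep.

```python
import numpy as np

y = np.array([0.0, 0.1, 0.0, 5.0, 5.2, 4.9, 0.1, 0.0])  # edge-bearing signal
n, lam = len(y), 1.0
D = np.diff(np.eye(n), axis=0)                # first differences
phi = lambda t: np.sqrt(1.0 + t**2)           # convex, edge-preserving potential
J = lambda x: np.sum((x - y)**2) + lam * np.sum(phi(D @ x))

x = y.copy()
vals = [J(x)]
for _ in range(20):
    b = 0.5 / np.sqrt(1.0 + (D @ x)**2)       # closed-form auxiliary weights
    # Weighted quadratic step: solve (I + lam D^T diag(b) D) x = y.
    x = np.linalg.solve(np.eye(n) + lam * D.T @ (b[:, None] * D), y)
    vals.append(J(x))
print(vals[0], vals[-1])  # the criterion decreases monotonically
```

Because φ(t) = inf_b (b t² + ψ(b)) for a suitable convex ψ, each sweep minimizes an augmented quadratic criterion, which guarantees the monotone decrease observed above.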

Proceedings ArticleDOI
25 Jun 2001
TL;DR: A Model Predictive Control scheme is described here that navigates a vehicle with nonlinear dynamics through a vector of known way-points to a goal, and manages constraints, in a receding-horizon optimal control scheme for autonomous trajectory generation and flight control of an unmanned air vehicle in urban terrain.
Abstract: This paper describes a receding-horizon optimal control scheme for autonomous trajectory generation and flight control of an unmanned air vehicle in urban terrain. In such environments, the mission objective or terrain may be dynamic, and the vehicle may change dynamics mid-flight due to sensor or actuator failure; thus off-line pre-planned flight trajectories are limiting and insufficient. This technology is aimed at supporting guidance and control for future missions that will require vehicles with increased autonomy in dangerous situations and with tight maneuvering and operational capability, e.g., missions in urban environments. A Model Predictive Control (MPC) scheme is described here that navigates a vehicle with nonlinear dynamics through a vector of known way-points to a goal, and manages constraints. In this MPC-based approach to trajectory planning with constraints, a feedforward nominal trajectory is used to convert the nonconvex, nonlinear optimal control problem into a time-varying linear, convex optimization or quadratic programming problem. The nonconvex, admissible path space is converted to a sequence of overlapping, convex spaces. The feedforward control that produces the nominal trajectory is found from the vehicle's differentially flat outputs. MPC is used to determine the optimal perturbations to the nominal control that will suitably navigate the vehicle through a constrained input/output space while minimizing actuation effort. Simulation results with a non-real-time, online MPC controller for a UAV in a planar urban terrain are included to support the proposed approach.
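A hedged, drastically simplified sketch of the finite-horizon quadratic program at the core of such MPC schemes (the double-integrator model, horizon, and weights are my choices; constraints are omitted, so the QP reduces to regularized least squares): drive the state to a way-point while penalizing actuation effort.

```python
import numpy as np

# Discrete double integrator: state (position, velocity), input = acceleration.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
N = 30                                   # prediction horizon
x0 = np.array([0.0, 0.0])
goal = np.array([1.0, 0.0])              # way-point: position 1, at rest

# Condensed prediction: x_N = A^N x0 + sum_k A^{N-1-k} B u_k = F x0 + G u.
F = np.linalg.matrix_power(A, N)
G = np.hstack([np.linalg.matrix_power(A, N - 1 - k) @ B for k in range(N)])

# min ||x_N - goal||^2 + rho ||u||^2: regularized least squares over u.
rho = 1e-4
H = G.T @ G + rho * np.eye(N)
u = np.linalg.solve(H, G.T @ (goal - F @ x0))

xN = F @ x0 + G @ u
print(xN)  # close to the way-point [1, 0]
```

In a receding-horizon implementation only the first input u[0] is applied, the state is re-measured, and the QP is re-solved, with input/output constraints turning the least-squares step into a genuine QP.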

Journal ArticleDOI
TL;DR: A new algorithm, ordered subsets mirror descent, is developed and implemented, and it is demonstrated that it is well suited for solving the PET reconstruction problem.
Abstract: We describe an optimization problem arising in reconstructing three-dimensional medical images from positron emission tomography (PET). A mathematical model of the problem, based on the maximum likelihood principle, is posed as a problem of minimizing a convex function of several million variables over the standard simplex. To solve a problem of these characteristics, we develop and implement a new algorithm, ordered subsets mirror descent, and demonstrate, theoretically and computationally, that it is well suited for solving the PET reconstruction problem.
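A hedged sketch of the mirror-descent primitive underlying the method (the toy objective and stepsizes are my choices, and no ordered-subsets splitting is shown): with the entropy mirror map, minimization over the standard simplex becomes a multiplicative update followed by re-normalization.

```python
import numpy as np

# Minimize f(x) = ||x - p||^2 over the probability simplex; since p is
# itself in the simplex, the minimizer is p.
p = np.array([0.2, 0.3, 0.5])
x = np.ones(3) / 3                      # start at the simplex center

for k in range(1, 5001):
    grad = 2 * (x - p)
    x = x * np.exp(-grad / np.sqrt(k))  # entropic (multiplicative) update...
    x /= x.sum()                        # ...stays on the simplex by design
print(x)  # close to p
```

The simplex constraint is handled "for free" by the mirror map, which is what makes this geometry attractive for the million-variable PET likelihood problem.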

Journal ArticleDOI
TL;DR: In this article, the authors present sufficient optimality conditions and duality results for a class of nonlinear fractional programming problems based on the properties of sublinear functionals and generalized convex functions.
Abstract: In this paper, we present sufficient optimality conditions and duality results for a class of nonlinear fractional programming problems. Our results are based on the properties of sublinear functionals and generalized convex functions.

Journal ArticleDOI
TL;DR: The asynchronous multi-rate sampled-data H∞ synthesis problem is addressed, and the problem is shown to be equivalent to a convex optimization problem expressed in the form of linear operator inequalities.

Journal ArticleDOI
TL;DR: An algorithm for solving the covariance extension problem is obtained, as well as a constructive proof of Georgiou's existence result and of his conjecture, a generalized version of which was recently resolved using geometric methods.
Abstract: The trigonometric moment problem is a classical moment problem with numerous applications in mathematics, physics, and engineering. The rational covariance extension problem is a constrained version of this problem, with the constraints arising from the physical realizability of the corresponding solutions. Although the maximum entropy method gives one well-known solution, in several applications a wider class of solutions is desired. In a seminal paper, Georgiou derived an existence result for a broad class of models. In this paper, we review the history of this problem, going back to Carathéodory, as well as applications to stochastic systems and signal processing. In particular, we present a convex optimization problem for solving the rational covariance extension problem with degree constraint. Given a partial covariance sequence and the desired zeros of the shaping filter, the poles are uniquely determined from the unique minimum of the corresponding optimization problem. In this way we obtain an algorithm for solving the covariance extension problem, as well as a constructive proof of Georgiou's existence result and his conjecture, a generalized version of which we have recently resolved using geometric methods. We also survey recent related results on constrained Nevanlinna--Pick interpolation in the context of a variational formulation of the general moment problem.

Journal ArticleDOI
TL;DR: The construction of the bound uses a semidefinite programming representation of a basic eigenvalue bound for QAP, and appears to be competitive with existing bounds in the trade-off between bound quality and computational effort.
Abstract: We describe a new convex quadratic programming bound for the quadratic assignment problem (QAP). The construction of the bound uses a semidefinite programming representation of a basic eigenvalue bound for QAP. The new bound dominates the well-known projected eigenvalue bound, and appears to be competitive with existing bounds in the trade-off between bound quality and computational effort.