
Showing papers on "Convex optimization published in 1995"


Journal ArticleDOI
TL;DR: Extensions of H∞ synthesis techniques that allow the controller to depend on time-varying but measured parameters are discussed, and simple heuristics are proposed to compute robust time-invariant controllers.
Abstract: An important class of linear time-varying systems consists of plants where the state-space matrices are fixed functions of some time-varying physical parameters θ. Small-gain techniques can be applied to such systems to derive robust time-invariant controllers. Yet this approach is often overly conservative when the parameters θ undergo large variations during system operation. In general, higher performance can be achieved by control laws that incorporate available measurements of θ and therefore "adjust" to the current plant dynamics. This paper discusses extensions of H∞ synthesis techniques to allow for controller dependence on time-varying but measured parameters. When this dependence is linear fractional, the existence of such gain-scheduled H∞ controllers is fully characterized in terms of linear matrix inequalities. The underlying synthesis problem is therefore a convex program for which efficient optimization techniques are available. The formalism and derivation techniques developed here apply to both the continuous- and discrete-time problems. Existence conditions for robust time-invariant controllers are recovered as a special case, and extensions to gain scheduling in the face of parametric uncertainty are discussed. In particular, simple heuristics are proposed to compute such controllers.

1,229 citations


Journal ArticleDOI
TL;DR: The proposed branch and bound type algorithm attains finite ε-convergence to the global minimum through the successive subdivision of the original region and the subsequent solution of a series of nonlinear convex minimization problems.
Abstract: A branch and bound global optimization method, αBB, for general continuous optimization problems involving nonconvexities in the objective function and/or constraints is presented. The nonconvexities are categorized as being either of special structure or generic. A convex relaxation of the original nonconvex problem is obtained by (i) replacing all nonconvex terms of special structure (i.e. bilinear, fractional, signomial) with customized tight convex lower bounding functions and (ii) utilizing the α parameter as defined in [17] to underestimate nonconvex terms of generic structure. The proposed branch and bound type algorithm attains finite ε-convergence to the global minimum through the successive subdivision of the original region and the subsequent solution of a series of nonlinear convex minimization problems. The global optimization method, αBB, is implemented in C and tested on a variety of example problems.
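The α underestimator is easiest to see in one dimension. The sketch below applies the idea to an illustrative test function, sin(x) + 0.1x², with α = 0.5 (any α ≥ −min f''/2 on the box makes the underestimator convex); the function, the α value, and the sampling-based lower bound are assumptions for illustration, since the paper solves the convex subproblems exactly and implements αBB in C.

```python
import math

def alpha_bb_minimize(f, lo, hi, alpha, tol=1e-4):
    """Toy 1-D aBB-style branch and bound (illustrative sketch only).

    Underestimator on [l, u]: L(x) = f(x) + alpha*(l - x)*(u - x),
    convex when alpha >= max(0, -min f''/2) on the interval.  The
    lower bound is taken by dense sampling, a crude stand-in for
    solving the convex subproblem exactly."""
    def lower_bound(l, u):
        xs = [l + (u - l) * i / 200 for i in range(201)]
        return min(f(x) + alpha * (l - x) * (u - x) for x in xs)

    best_x, best_f = lo, f(lo)
    stack = [(lo, hi)]
    while stack:
        l, u = stack.pop()
        m = 0.5 * (l + u)
        if f(m) < best_f:                      # update incumbent upper bound
            best_x, best_f = m, f(m)
        # subdivide only if the convex lower bound leaves room to improve
        if lower_bound(l, u) < best_f - tol and u - l > tol:
            stack += [(l, m), (m, u)]
    return best_x, best_f

f = lambda x: math.sin(x) + 0.1 * x * x        # nonconvex; global min near x = -1.306
x_star, f_star = alpha_bb_minimize(f, -5.0, 5.0, alpha=0.5)
```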

442 citations


Journal ArticleDOI
TL;DR: A number of new variants of bundle methods for nonsmooth unconstrained and constrained convex optimization, convex-concave games and variational inequalities are described.
Abstract: In this paper we describe a number of new variants of bundle methods for nonsmooth unconstrained and constrained convex optimization, convex-concave games and variational inequalities. We outline the ideas underlying these methods and present rate-of-convergence estimates.

419 citations


Journal ArticleDOI
TL;DR: The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming, with a proven overall worst-case operation count of O(m^5.5 L^1.5).
Abstract: We describe a potential reduction method for convex optimization problems involving matrix inequalities. The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming. A worst-case analysis shows that the number of iterations grows as the square root of the problem size, but in practice it appears to grow more slowly. As in other interior-point methods the overall computational effort is therefore dominated by the least-squares system that must be solved in each iteration. A type of conjugate-gradient algorithm can be used for this purpose, which results in important savings for two reasons. First, it allows us to take advantage of the special structure the problems often have (e.g., Lyapunov or algebraic Riccati inequalities). Second, we show that the polynomial bound on the number of iterations remains valid even if the conjugate-gradient algorithm is not run until completion, which in practice can greatly reduce the computational effort per iteration. We describe in detail how the algorithm works for optimization problems with L Lyapunov inequalities, each of size m. We prove an overall worst-case operation count of O(m^5.5 L^1.5). The average-case complexity appears to be closer to O(m^4 L^1.5). This estimate is justified by extensive numerical experimentation, and is consistent with other researchers' experience with the practical performance of interior-point algorithms for linear programming. This result means that the computational cost of extending current control theory based on the solution of Lyapunov or Riccati equations to a theory that is based on the solution of (multiple, coupled) Lyapunov or Riccati inequalities is modest.
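The remark about truncating the conjugate-gradient solve refers to a standard CG iteration on the inner least-squares system. A generic dense-matrix CG for a symmetric positive definite system (a textbook version, not the authors' structured implementation) can be sketched as:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Plain conjugate gradient for a symmetric positive definite
    system A x = b, with A a list of rows and b a list.  Stopping
    after max_iter steps mimics the truncated inner solves the
    paper shows are sufficient for the outer polynomial bound."""
    n = len(b)
    mv = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = b[:]                       # residual b - A x with x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter or n):
        Ap = mv(p)
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)       # exact solution is [1/11, 7/11]
```

In exact arithmetic CG terminates in at most n steps; the paper's point is that far fewer steps already preserve the interior-point complexity bound.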

249 citations


Journal ArticleDOI
TL;DR: A new approach is proposed for finding all ε-feasible solutions for certain classes of nonlinearly constrained systems of equations by introducing slack variables and taking advantage of the properties of products of univariate functions.
Abstract: A new approach is proposed for finding all ε-feasible solutions for certain classes of nonlinearly constrained systems of equations. By introducing slack variables, the initial problem is transformed into a global optimization problem (P) whose multiple global minimum solutions with a zero objective value (if any) correspond to all solutions of the initial constrained system of equalities. All ε-globally optimal points of (P) are then localized within a set of arbitrarily small disjoint rectangles. This is based on a branch and bound type global optimization algorithm which attains finite ε-convergence to each of the multiple global minima of (P) through the successive refinement of a convex relaxation of the feasible region and the subsequent solution of a series of nonlinear convex optimization problems. Based on the form of the participating functions, a number of techniques for constructing this convex relaxation are proposed. By taking advantage of the properties of products of univariate functions, customized convex lower bounding functions are introduced for a large number of expressions that are or can be transformed into products of univariate functions. Alternative convex relaxation procedures involve either the difference of two convex functions employed in αBB [23] or the exponential variable transformation based underestimators employed for generalized geometric programming problems [24]. The proposed approach is illustrated with several test problems. For some of these problems additional solutions are identified that existing methods failed to locate.

210 citations


Journal ArticleDOI
TL;DR: This paper extends the theory of trust region subproblems in two ways: (i) it allows indefinite inner products in the quadratic constraint, and (ii) it uses a two-sided (upper and lower bound) quadratic constraint.
Abstract: This paper extends the theory of trust region subproblems in two ways: (i) it allows indefinite inner products in the quadratic constraint, and (ii) it uses a two-sided (upper and lower bound) quadratic constraint. Characterizations of optimality are presented that have no gap between necessity and sufficiency. Conditions for the existence of solutions are given in terms of the definiteness of a matrix pencil. A simple dual program is introduced that involves the maximization of a strictly concave function on an interval. This dual program simplifies the theory and algorithms for trust region subproblems. It also illustrates that the trust region subproblems are implicit convex programming problems, and thus explains why they are so tractable.The duality theory also provides connections to eigenvalue perturbation theory. Trust region subproblems with zero linear term in the objective function correspond to eigenvalue problems, and adding a linear term in the objective function is seen to correspond to a p...
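The "maximization of a strictly concave function on an interval" that the dual reduces to can be demonstrated on a diagonal instance: for min g·x + ½x·diag(a)·x subject to ‖x‖ ≤ Δ, the optimal multiplier solves a monotone one-dimensional equation. The sketch below solves it by bisection; the diagonal Hessian, the bisection (rather than the paper's dual formulation), and the assumption that the constraint is active are all illustrative simplifications, and the hard case is deliberately ignored.

```python
import math

def trust_region_diag(a, g, delta, iters=100):
    """Solve min g.x + 0.5 x.diag(a).x  s.t. ||x|| <= delta for a
    diagonal (possibly indefinite) Hessian, assuming the solution
    lies on the boundary ||x|| = delta.  Bisect for
    lambda > max(0, -min a) so that ||(diag(a)+lambda I)^{-1} g|| = delta;
    this norm is strictly decreasing in lambda, which is the
    one-dimensional concave-dual picture in miniature."""
    lam_lo = max(0.0, -min(a)) + 1e-12
    norm = lambda lam: math.sqrt(sum((gi / (ai + lam)) ** 2
                                     for ai, gi in zip(a, g)))
    lam_hi = lam_lo + 1.0
    while norm(lam_hi) > delta:        # grow bracket until the norm drops below delta
        lam_hi *= 2.0
    for _ in range(iters):
        lam = 0.5 * (lam_lo + lam_hi)
        if norm(lam) > delta:
            lam_lo = lam
        else:
            lam_hi = lam
    lam = 0.5 * (lam_lo + lam_hi)
    return [-gi / (ai + lam) for ai, gi in zip(a, g)], lam

# indefinite Hessian diag(-2, 1): the subproblem is nonconvex,
# yet the dual search stays a 1-D root-find
x, lam = trust_region_diag([-2.0, 1.0], [1.0, 1.0], 1.0)
```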

201 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider LTI systems perturbed by parametric uncertainties, modeled as white noise disturbances, and show how to maximize, via state-feedback control, the smallest norm of the noise intensity vector producing instability in the mean square sense, using convex optimization over linear matrix inequalities.

190 citations


Journal ArticleDOI
TL;DR: For a linear time-invariant system with several disturbance inputs and controlled outputs, the authors show how to minimize the nominal H₂-norm performance in one channel while keeping bounds on the H₂-norm or H∞-norm performance (implying robust stability) in the other channels.
Abstract: For a linear time-invariant system with several disturbance inputs and controlled outputs, we show how to minimize the nominal H₂-norm performance in one channel while keeping bounds on the H₂-norm or H∞-norm performance (implying robust stability) in the other channels. This multiobjective H₂/H∞ problem in an infinite-dimensional space is reduced to sequences of finite-dimensional convex optimization problems. We show how to compute the optimal value and how to numerically detect the existence of a rational optimal controller. If it exists, we reveal how the novel trick of optimizing the trace norm of the Youla parameter over certain convex constraints allows one to design a nearly optimal controller whose Youla parameter is of the same order as the optimal one.

168 citations


Journal ArticleDOI
TL;DR: This work gives a new algorithm that finds a feasible point in S in cases where an oracle is available; it uses the analytic center of a polytope as test point and successively modifies the polytope with the separating hyperplanes returned by the oracle.
Abstract: An oracle for a convex set S ⊂ ℝ^n accepts as input any point z in ℝ^n; if z ∈ S, it returns 'yes', while if z ∉ S, it returns 'no' along with a separating hyperplane. We give a new algorithm that finds a feasible point in S in cases where an oracle is available. Our algorithm uses the analytic center of a polytope as test point, and successively modifies the polytope with the separating hyperplanes returned by the oracle. The key to establishing convergence is that hyperplanes judged to be 'unimportant' are pruned from the polytope. If a ball of radius 2^-L is contained in S, and S is contained in a cube of side 2^(L+1), then we can show our algorithm converges after O(nL^2) iterations and performs a total of O(n^4 L^3 + TnL^2) arithmetic operations, where T is the number of arithmetic operations required for a call to the oracle. The bound is independent of the number of hyperplanes generated in the algorithm. An important application in which an oracle is available is minimizing a convex function over S.
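The oracle model is easiest to demonstrate in one dimension, where the polytope is an interval and each separating hyperplane discards half of it. The sketch below uses the interval midpoint as the test point (a simple stand-in for the analytic center) and a hypothetical hidden set S = [3.25, 3.5]; both choices are illustrative, not the paper's algorithm.

```python
def find_feasible(oracle, lo=-2.0**20, hi=2.0**20, max_iter=200):
    """Locate a point of a hidden convex set S in R^1 using only a
    separation oracle.  Each 'no' answer comes with a separating
    direction (+1: S lies to the right of z, -1: to the left),
    which shrinks the localization interval."""
    for _ in range(max_iter):
        z = 0.5 * (lo + hi)          # test point: interval midpoint
        ans = oracle(z)
        if ans == 'yes':
            return z
        if ans == +1:
            lo = z                   # cut away everything left of z
        else:
            hi = z                   # cut away everything right of z
    return None

# hypothetical hidden set S = [3.25, 3.5]; the oracle only answers queries
def oracle(z):
    if 3.25 <= z <= 3.5:
        return 'yes'
    return +1 if z < 3.25 else -1

z = find_feasible(oracle)
```

The iteration count depends only on the ratio of the initial box to the ball inside S, mirroring the O(nL^2) bound's independence of how many cuts accumulate.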

161 citations


Journal ArticleDOI
TL;DR: It is shown that convergence properties of the decomposition method are heavily dependent on sparsity of the linking constraints and application to large-scale linear programming and stochastic programming is discussed.
Abstract: A decomposition method for large-scale convex optimization problems with block-angular structure and many linking constraints is analysed. The method is based on a separable approximation of the augmented Lagrangian function. Weak global convergence of the method is proved and speed of convergence analysed. It is shown that convergence properties of the method are heavily dependent on sparsity of the linking constraints. Application to large-scale linear programming and stochastic programming is discussed.

150 citations


Journal ArticleDOI
TL;DR: A deterministic method is proposed for the global optimization of mathematical programs that involve the sum of linear fractional and/or bilinear terms and it is shown that additional estimators can be obtained through projections of the feasible region.
Abstract: In this paper a deterministic method is proposed for the global optimization of mathematical programs that involve the sum of linear fractional and/or bilinear terms. Linear and nonlinear convex estimator functions are developed for the linear fractional and bilinear terms. Conditions under which these functions are nonredundant are established. It is shown that additional estimators can be obtained through projections of the feasible region that can also be incorporated in a convex nonlinear underestimator problem for predicting lower bounds for the global optimum. The proposed algorithm consists of a spatial branch and bound search for which several branching rules are discussed. Illustrative examples and computational results are presented to demonstrate the efficiency of the proposed algorithm.
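For the bilinear terms, the classical McCormick envelope gives linear underestimators of the kind such estimators start from; the construction below is the standard textbook one and is not claimed to be the paper's own (more refined) estimator for bilinear or fractional terms.

```python
def mccormick_lower(xl, xu, yl, yu, x, y):
    """Standard McCormick lower envelope for the bilinear term x*y
    over the box [xl, xu] x [yl, yu]: the maximum of two linear
    underestimators, derived from (x - xl)*(y - yl) >= 0 and
    (xu - x)*(yu - y) >= 0.  It never exceeds x*y on the box and
    is tight at the corners."""
    return max(yl * x + xl * y - xl * yl,
               yu * x + xu * y - xu * yu)

# sample the box [0,2] x [-1,3]: the gap x*y - envelope is never negative
vals = [(x, y) for x in (0.0, 0.5, 1.0, 2.0) for y in (-1.0, 0.0, 1.0, 3.0)]
gap = max(x * y - mccormick_lower(0.0, 2.0, -1.0, 3.0, x, y) for x, y in vals)
```

Replacing each bilinear term by such an envelope (plus the mirror-image overestimators) yields the convex underestimator problem whose minimum supplies a valid lower bound for the spatial branch and bound search.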

Journal ArticleDOI
TL;DR: The authors consider two finite procedures for determining the step size of the steepest descent method for unconstrained optimization, without exact one-dimensional minimizations, and prove, for a convex objective, convergence of the whole sequence to a minimizer without any level-set boundedness assumption and, for one of the procedures, without any Lipschitz condition.
Abstract: Several finite procedures for determining the step size of the steepest descent method for unconstrained optimization, without performing exact one-dimensional minimizations, have been considered in the literature. The convergence analysis of these methods requires that the objective function have bounded level sets and that its gradient satisfy a Lipschitz condition, in order to establish just stationarity of all cluster points. We consider two of such procedures and prove, for a convex objective, convergence of the whole sequence to a minimizer without any level set boundedness assumption and, for one of them, without any Lipschitz condition.
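Armijo backtracking is one widely used finite step-size procedure of this kind; it is sketched below for concreteness on an assumed quadratic test function, and it is not necessarily either of the two procedures the paper analyses.

```python
def steepest_descent(f, grad, x0, s=1.0, beta=0.5, sigma=1e-4, iters=500):
    """Steepest descent with Armijo backtracking: starting from trial
    step s, halve (factor beta) until a sufficient-decrease test
    holds.  Only finitely many function evaluations per iteration,
    no exact line minimization."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        gg = sum(gi * gi for gi in g)
        if gg < 1e-18:
            break
        t = s
        # Armijo test: f(x - t g) <= f(x) - sigma * t * ||g||^2
        while f([xi - t * gi for xi, gi in zip(x, g)]) > f(x) - sigma * t * gg:
            t *= beta
        x = [xi - t * gi for xi, gi in zip(x, g)]
    return x

# illustrative convex quadratic with minimizer (1, -0.5)
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2
grad = lambda x: [2.0 * (x[0] - 1.0), 4.0 * (x[1] + 0.5)]
x = steepest_descent(f, grad, [5.0, 5.0])
```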

Journal Article
TL;DR: Based on a recent convex programming algorithm for simultaneous stabilization by linear state feedback, two types of control law for stabilizing a family of systems are proposed, when either a simultaneously stabilizing state feedback gain or a simultaneously stabilizing output injection matrix exists and complete state information is not available.
Abstract: Based on a recent convex programming algorithm for simultaneous stabilization by linear state feedback, we propose two types of control law for stabilizing a family of systems, when either a simultaneously stabilizing state feedback gain or a simultaneously stabilizing output injection matrix exists, and complete state information is not available. The proposed control laws are illustrated by a numerical example.

Journal ArticleDOI
TL;DR: Proximal level methods requiring bounded storage, in contrast to the original level methods of Lemaréchal, Nemirovskii and Nesterov, are presented, with extensions for solving convex constrained problems, convex-concave saddle-point problems and variational inequalities with monotone operators.
Abstract: We study proximal level methods for convex optimization that use projections onto successive approximations of level sets of the objective corresponding to estimates of the optimal value. We show that they enjoy almost optimal efficiency estimates. We give extensions for solving convex constrained problems, convex-concave saddle-point problems and variational inequalities with monotone operators. We present several variants, establish their efficiency estimates, and discuss possible implementations. In particular, our methods require bounded storage in contrast to the original level methods of Lemaréchal, Nemirovskii and Nesterov.

Journal ArticleDOI
TL;DR: It is shown that most robust performance analysis and synthesis problems under this strongly robust H ∞ performance criterion can be transformed into linear matrix inequality problems, and can be solved through finite-dimensional convex programming.

Journal ArticleDOI
TL;DR: In this article, a parametrization of all finite-dimensional, linear-time-invariant controllers which asymptotically stabilize a given finite dimensional, linear time invariant system is presented.
Abstract: This paper presents a parametrization of all finite-dimensional, linear time-invariant controllers which asymptotically stabilize a given finite-dimensional, linear time-invariant system. Both continuous-time and discrete-time systems are considered. A potential advantage over existing parametrization schemes in the frequency domain is that the controller order can be fixed. Consequently, necessary and sufficient conditions for stabilizability via static output feedback controller are obtained and stated by the existence of a quadratic Lyapunov functionV(x):=x T Px such thatP satisfies a linear matrix inequality (LMI), whileP −1 satisfies another LMI. If the controller order is not fixed a priori, then the resulting computational problem can be made convex, and a controller of order less than or equal to the plant order may always be constructed.

Proceedings ArticleDOI
01 Dec 1995
TL;DR: This work transforms the gate and multilayer wire sizing problem into a convex programming problem for the Elmore delay approximation, and describes an approach for efficient computation of the RC circuit delay sensitivities.
Abstract: With an ever-increasing portion of the delay in high-speed CMOS chips attributable to the interconnect, interconnect-circuit design automation continues to grow in importance. By transforming the gate and multilayer wire sizing problem into a convex programming problem for the Elmore delay approximation, we demonstrate the efficacy of a sequential quadratic programming (SQP) solution method. For cases where accuracy greater than that provided by the Elmore delay approximation is required, we apply SQP to the gate and wire sizing problem with more accurate delay models. Since efficient calculation of sensitivities is of paramount importance during SQP, we describe an approach for efficient computation of the RC circuit delay sensitivities.
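The Elmore delay the sizing formulation builds on is just a sum, over the segments of an RC ladder, of each segment's resistance times its total downstream capacitance. The sketch below uses a simple lumped model (each segment's full capacitance at its far node) with made-up segment values; the SQP sizing loop itself and the paper's more accurate delay models are omitted.

```python
def elmore_delay(r, c, c_load):
    """Elmore delay of an RC ladder driven at node 0.  r[i] and c[i]
    are per-segment resistance and capacitance (lumped at the far
    node of each segment); c_load is the receiver capacitance.
    In a sizing tool r_i ~ 1/w_i and c_i ~ w_i in the wire width
    w_i, so this delay is a posynomial in the widths, which is why
    the sizing problem can be transformed into a convex program."""
    n = len(r)
    delay = 0.0
    for i in range(n):
        downstream = sum(c[i:]) + c_load   # all capacitance past segment i
        delay += r[i] * downstream
    return delay

# two unit segments into a 0.5 load: 1*(1+1+0.5) + 1*(1+0.5) = 4.0
d = elmore_delay([1.0, 1.0], [1.0, 1.0], c_load=0.5)
```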

Journal ArticleDOI
TL;DR: In this paper, a unified approach for deriving many old and new error estimates for linear programs, linear complementarity problems, convex quadratic programs, and affine variational inequality problems is presented.
Abstract: In this paper, we establish a local error estimate for feasible solutions of a piecewise convex quadratic program and a global error estimate for feasible solutions of a convex piecewise quadratic program. These error estimates provide a unified approach for deriving many old and new error estimates for linear programs, linear complementarity problems, convex quadratic programs, and affine variational inequality problems. The approach reveals the fact that each error estimate is a consequence of some reformulation of the original problem as a piecewise convex quadratic program or a convex piecewise quadratic program. In a sense, even Robinson's result on the upper Lipschitz continuity of a polyhedral mapping can be considered as a special case of error estimates for approximate solutions of a piecewise convex quadratic program. As an application, we derive new (global) error estimates for iterates of the proximal point algorithm for solving a convex piecewise quadratic program.

Journal ArticleDOI
TL;DR: A proximal bundle method is presented for minimizing a nonsmooth convex function; when applied to the Lagrangian decomposition of convex programs, it allows for inexact solutions of decomposed subproblems, yet, by automatically increasing their required accuracy, it asymptotically finds both the primal and dual solutions.
Abstract: A proximal bundle method is presented for minimizing a nonsmooth convex function f. At each iteration, it requires only one approximate evaluation of f and its ε-subgradient, and it finds a search direction via quadratic programming. When applied to the Lagrangian decomposition of convex programs, it allows for inexact solutions of decomposed subproblems; yet, by increasing their required accuracy automatically, it asymptotically finds both the primal and dual solutions. It is an implementable approximate version of the proximal point algorithm. Some encouraging numerical experience is reported.
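The proximal point skeleton that the bundle method approximates can be seen on the simplest nonsmooth convex function, f(x) = |x|, where each prox subproblem has a closed form (soft-thresholding). This is only the skeleton: the bundle machinery (approximate evaluations, ε-subgradients, the QP direction finder) is deliberately omitted.

```python
def prox_point_l1(x0, t=1.0, iters=50):
    """Proximal point algorithm on f(x) = |x| in one dimension.
    Each step solves argmin_y |y| + (1/(2t))*(y - x)^2 exactly via
    soft-thresholding; a bundle method would replace this exact
    prox by a QP over accumulated subgradients."""
    def soft(x, t):
        # soft-threshold: shrink toward 0 by t, clamp at 0
        return max(abs(x) - t, 0.0) * (1.0 if x > 0 else -1.0)

    x = x0
    trace = [x]
    for _ in range(iters):
        x = soft(x, t)
        trace.append(x)
    return trace

# iterates march toward the minimizer 0 and then stay there
trace = prox_point_l1(5.3, t=1.0, iters=8)
```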

Proceedings ArticleDOI
21 Jun 1995
TL;DR: A linear, finite-dimensional plant with state-space parameter dependence is controlled using a parameter-dependent controller; the parameters, whose values lie in a compact set, are known in real time.
Abstract: A linear, finite-dimensional plant, with state-space parameter dependence, is controlled using a parameter-dependent controller. The parameters, whose values lie in a compact set, are known in real time. Their rates of variation are bounded and known in real time as well. The goal of control is to stabilize the parameter-dependent closed-loop system and provide disturbance/error attenuation as measured in induced L₂ norm. The authors' approach uses a parameter-dependent Lyapunov function and solves the control synthesis problem by reformulating the existence conditions as a semi-infinite-dimensional convex optimization. The authors propose finite-dimensional approximations to get sufficient conditions for successful controller design.

Book ChapterDOI
01 Jan 1995
TL;DR: This chapter reviews recent algorithmic developments in multiplicative programming and discusses the types of multiplicative problems that can be solved in a practical sense.
Abstract: This chapter reviews recent algorithmic developments in multiplicative programming. The multiplicative programming problem is a class of minimization problems containing a product of several convex functions either in its objective or in its constraints. It has various practical applications in such areas as microeconomics, geometric optimization, multicriteria optimization and so on. A product of convex functions is in general not (quasi)convex, and hence the problem can have multiple local minima. However, some types of multiplicative problems can be solved in a practical sense. The types to be discussed in this chapter are minimization of a product of p convex functions over a convex set, minimization of a sum of p convex multiplicative functions, and minimization of a convex function subject to a constraint on a product of p convex functions. If p is less than four or five, it is shown that parametric simplex algorithms or global optimization algorithms work very well for these problems.

Journal ArticleDOI
TL;DR: An entropy-like proximal method for the minimization of a convex function subject to positivity constraints is extended to an interior algorithm in two directions, to general linearly constrained convex minimization problems and to variational inequalities on polyhedra.
Abstract: In this paper, an entropy-like proximal method for the minimization of a convex function subject to positivity constraints is extended to an interior algorithm in two directions. First, to general linearly constrained convex minimization problems and second, to variational inequalities on polyhedra. For linear programming, numerical results are presented and quadratic convergence is established.
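The attraction of the entropy kernel is that the prox step keeps iterates automatically positive and, in favorable cases, has a closed multiplicative form. The sketch below shows this for minimizing a linear function over the simplex, a toy instance chosen because the entropic prox step is exact there; it is not the paper's interior algorithm for general polyhedra or variational inequalities.

```python
import math

def entropic_prox_simplex(c, x0, t=0.5, iters=200):
    """Proximal steps with an entropy (KL) kernel for minimizing the
    linear function c.x over the probability simplex.  The step
    argmin_x c.x + (1/t)*KL(x || x_k) over the simplex has the
    closed form x_i <- x_i * exp(-t * c_i), renormalized; no
    projection is ever needed to keep x positive."""
    x = list(x0)
    for _ in range(iters):
        x = [xi * math.exp(-t * ci) for xi, ci in zip(x, c)]
        s = sum(x)
        x = [xi / s for xi in x]       # renormalize onto the simplex
    return x

# mass concentrates on the coordinate with the smallest cost (index 1)
x = entropic_prox_simplex([3.0, 1.0, 2.0], [1/3, 1/3, 1/3])
```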

Journal ArticleDOI
TL;DR: In this article, a globally convergent and locally superlinearly convergent method for solving a convex minimization problem whose objective function has a semismooth but non-differentiable gradient is presented.
Abstract: This paper presents a globally convergent and locally superlinearly convergent method for solving a convex minimization problem whose objective function has a semismooth but nondifferentiable gradient. Applications to nonlinear minimax problems, stochastic programs with recourse, and their extensions are discussed.

Proceedings ArticleDOI
21 Jun 1995
TL;DR: In this paper, the stability analysis of a negative feedback interconnection of a multivariable linear time-invariant system and a structured time invariant incrementally sector bounded nonlinearity is addressed.
Abstract: This paper addresses the stability analysis of a negative feedback interconnection of a multivariable linear time-invariant system and a structured time-invariant incrementally sector bounded nonlinearity. The classic Zames-Falb multiplier (1968) is extended to the multivariable case and is approximated arbitrarily closely by linear matrix inequalities. The problem of finding the multiplier that provides the largest stability bound then becomes a convex optimization problem over state space parameters. The method is also applied to symmetric incrementally sector bounded structured nonlinearities and provides an upper bound for the generalized structured singular value. Numerical examples are provided to demonstrate the effectiveness of this method.

Journal ArticleDOI
TL;DR: In this paper, a number of standard robustness tests can be reinterpreted as special cases of the application of the passivity theorem with the appropriate choice of multipliers, using convex optimization over linear matrix inequalities.

Journal ArticleDOI
TL;DR: Two numerical methods are developed to design a variable structure control that satisfies the reaching condition using static output feedback and it is shown how the resulting control law can be modified to be robust in the presence of parameter uncertainty or a disturbance.

Journal ArticleDOI
TL;DR: In this article, partial extensions of Attouch's Theorem to functions more general than convex, called primal-lower-nice functions, are presented; generalized derivatives are defined in terms of the epi-convergence or graph convergence of certain difference quotient mappings.
Abstract: Attouch's Theorem, which gives on a reflexive Banach space the equivalence between the Mosco epi-convergence of a sequence of convex functions and the graph convergence of the associated sequence of subgradients, has many important applications in convex optimization. In particular, generalized derivatives have been defined in terms of the epi-convergence or graph convergence of certain difference quotient mappings, and Attouch's Theorem has been used to relate these various generalized derivatives. These relations can then be used to study the stability of the solution mapping associated with a parameterized family of optimization problems. We prove in a Hilbert space several "partial extensions" of Attouch's Theorem to functions more general than convex; these functions are called primal-lower-nice. Furthermore, we use our extensions to derive a relationship between the second-order epi-derivatives of primal-lower-nice functions and the proto-derivative of their associated subgradient mappings.

Journal ArticleDOI
TL;DR: This paper identifies a class of φ-divergences for which superlinear convergence is attained both for optimization problems with strongly convex objectives at the optimum and linear programming problems, when the regularization parameters tend to zero.
Abstract: The φ-divergence proximal method is an extension of the proximal minimization algorithm, where the usual quadratic proximal term is substituted by a class of convex statistical distances, called φ-divergences. In this paper, we study the convergence rate of this nonquadratic proximal method for convex and particularly linear programming. We identify a class of φ-divergences for which superlinear convergence is attained both for optimization problems with strongly convex objectives at the optimum and linear programming problems, when the regularization parameters tend to zero. We prove also that, with regularization parameters bounded away from zero, convergence is at least linear for a wider class of φ-divergences, when the method is applied to the same kinds of problems. We further analyze the associated class of augmented Lagrangian methods for convex programming with nonquadratic penalty terms, and prove convergence of the dual generated by these methods for linear programming problems under a weak nondegeneracy assumption.

Journal ArticleDOI
TL;DR: In this paper, a worst-case, robust-control-oriented identification problem is studied in the framework of H∞ identification, where the available experimental information consists of a corrupt finite output time series obtained in response to a known nonzero but otherwise arbitrary input.
Abstract: In this paper we study a worst-case, robust control oriented identification problem. This problem is in the framework of H∞ identification, but the formulation here is more general. The available a priori information in our problem consists of a lower bound on the relative stability of the plant, an upper bound on a certain gain associated with the plant, and an upper bound on the noise level. The plant to be identified is assumed to lie in a certain subset of the space H∞, characterized by the assumed a priori information. The available experimental information consists of a corrupt finite output time series obtained in response to a known nonzero but otherwise arbitrary input. The proposed algorithm is in the class of interpolatory algorithms, which are known to possess desirable optimality properties in reducing the identification error. This algorithm is obtained by solving an extended Carathéodory-Fejér problem via standard convex programming methods. Both the algorithm and the error bounds can be applied to l₁ identification problems as well.

Proceedings ArticleDOI
13 Dec 1995
TL;DR: In this article, the authors apply the passivity theorem with appropriate choice of multipliers to develop sufficient conditions for stability of the AWBT framework presented in Kothare et al. They show that these tests can be performed using convex optimization over linear matrix inequalities.
Abstract: Applies the passivity theorem with appropriate choice of multipliers to develop sufficient conditions for stability of the anti-windup bumpless transfer (AWBT) framework presented in Kothare et al. (1994). For particular choices of the multipliers, the authors show that these tests can be performed using convex optimization over linear matrix inequalities (LMIs). The sufficient conditions are complemented by necessary conditions for internal stability of the AWBT compensated system.