
Showing papers on "Convex optimization" published in 1990


Book
01 Jan 1990
TL;DR: This book develops convex (set-theoretic) models of uncertainty in applied mechanics as an alternative to probabilistic modelling, with applications to uncertain excitations (including the massless damped spring and vehicle vibration) and to geometric imperfections such as the buckling of thin shells.
Abstract: 1. Probabilistic Modelling: Pros and Cons. Preliminary considerations. Probabilistic modelling in mechanics. Reliability of structures. Sensitivity of failure probability. Some quotations on the limitations of probabilistic methods. 2. Mathematics of Convexity. Convexity and Uncertainty. What is convexity? Geometric convexity in the Euclidean plane. Algebraic convexity in Euclidean space. Convexity in function spaces. Set-convexity and function-convexity. The structure of convex sets. Extreme points and convex hulls. Extrema of linear functions on convex sets. Hyperplane separation of convex sets. Convex models. 3. Uncertain Excitations. Introductory examples. The massless damped spring. Excitation sets. Maximum responses. Measurement optimization. Vehicle vibration. Introduction. The vehicle model. Uniformly bounded substrate profiles. Extremal responses on uniformly bounded substrates. Duration of acceleration excursions on uniformly bounded substrates. Substrate profiles with bounded slopes. Isochronous obstacles. Solution of the Euler-Lagrange equations. Seismic excitation. Vibration measurements. Introduction. Damped vibrations: full measurement. Example: 2-dimensional measurement. Damped vibrations: partial measurement. Transient vibrational acceleration. 4. Geometric Imperfections. Dynamics of thin bars. Introduction. Analytical formulation. Maximum deflection. Duration above a threshold. Maximum integral displacements. Impact loading of thin shells. Introduction. Basic equations. Extremal displacement. Numerical example. Buckling of thin shells. Introduction. Bounded Fourier coefficients: first-order analysis. Bounded Fourier coefficients: second-order analysis. Uniform bounds on imperfections. Envelope bounds on imperfections. Estimates of the knockdown factor. First and second-order analyses. 5. Concluding Remarks. Bibliography. Index.

801 citations
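
The chapter on extrema of linear functions on convex sets is the computational core of convex modelling: worst-case responses are extrema of response functionals over convex uncertainty sets. As a minimal numerical illustration (not from the book), the maximum of a linear response over an ellipsoidal excitation set has the closed form max over ||u|| <= alpha of c.u = alpha*||c||; the vector c and radius alpha below are arbitrary illustrative data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear response functional r(u) = c . u of an uncertain excitation u,
# with u known only to lie in the ellipsoidal (convex) set ||u|| <= alpha.
c = rng.normal(size=5)
alpha = 0.3

# Worst-case response over the convex model: max_{||u||<=alpha} c.u = alpha*||c||,
# attained at u* = alpha * c / ||c|| (an extreme point of the set).
worst_case = alpha * np.linalg.norm(c)
u_star = alpha * c / np.linalg.norm(c)
print(worst_case, c @ u_star)  # the two values agree

# Monte Carlo check: no sampled excitation in the set exceeds the bound.
U = rng.normal(size=(10000, 5))
U = alpha * U / np.linalg.norm(U, axis=1, keepdims=True)  # boundary points
assert (U @ c).max() <= worst_case + 1e-12
```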


Journal ArticleDOI
TL;DR: A technique is given for choosing {u^k} adaptively that eliminates sensitivity to objective scaling and some encouraging numerical experience is reported.
Abstract: Proximal bundle methods for minimizing a convex function f generate a sequence {x^k} by taking x^{k+1} to be the minimizer of $\hat f^k(x) + u^k|x - x^k|^2/2$, where $\hat f^k$ is a sufficiently accurate polyhedral approximation to f and u^k > 0. The usual choice of u^k = 1 may yield very slow convergence. A technique is given for choosing {u^k} adaptively that eliminates sensitivity to objective scaling. Some encouraging numerical experience is reported.

454 citations
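
A minimal Python sketch of the proximal bundle iteration described above, on a polyhedral test function where subgradients are easy to compute. The prox subproblem is solved in epigraph form with scipy's SLSQP; for simplicity the prox weight u is kept fixed here, whereas the adaptive choice of u^k is precisely the paper's contribution, and the serious/null step test with parameter m_descent is one standard variant.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal proximal bundle sketch on a polyhedral test function
# f(x) = max_i (A[i] @ x + b[i]).
rng = np.random.default_rng(1)
n, m = 3, 8
A, b = rng.normal(size=(m, n)), rng.normal(size=m)

def f_and_subgrad(x):
    vals = A @ x + b
    i = int(np.argmax(vals))
    return vals[i], A[i]          # function value and one subgradient

u, m_descent = 1.0, 0.1           # fixed prox weight, serious-step parameter
xc = np.zeros(n)                  # prox center x^k
fc, g = f_and_subgrad(xc)
cuts = [(fc, g.copy(), xc.copy())]  # bundle of linearizations

for _ in range(40):
    def obj(z):                   # z = (x, t): minimize t + (u/2)|x - xc|^2
        x, t = z[:n], z[n]
        return t + 0.5 * u * np.sum((x - xc) ** 2)
    cons = [{'type': 'ineq',      # t >= f_i + g_i.(x - y_i) for each cut
             'fun': (lambda z, fi=fi, gi=gi, yi=yi:
                     z[n] - (fi + gi @ (z[:n] - yi)))}
            for fi, gi, yi in cuts]
    z0 = np.concatenate([xc, [fc]])
    res = minimize(obj, z0, constraints=cons, method='SLSQP')
    x_new, model_val = res.x[:n], res.x[n]
    f_new, g_new = f_and_subgrad(x_new)
    cuts.append((f_new, g_new.copy(), x_new.copy()))
    if fc - f_new >= m_descent * (fc - model_val):   # serious step
        xc, fc = x_new, f_new
print('approx. minimum value:', fc)
```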


Journal ArticleDOI
TL;DR: In this article, a localization property of convex viscosity solutions to the Monge-Ampère inequality $0 < \lambda \le \det D^2u \le \Lambda$ is shown, together with conditions under which such solutions are strictly convex.
Abstract: The purpose of this note is to show a localization property of convex viscosity solutions to the Monge-Ampère inequality $0 < \lambda \le \det D^2u \le \Lambda$. As a consequence, such solutions (of class $C^{1,\beta}$ with $\beta > 1-(2/n)$) are strictly convex.

348 citations


Journal ArticleDOI
TL;DR: In this article, pure first-order characterizations of various types of generalized convex functions are obtained by relating, for gradient maps, the generalized monotonicity properties of the map to generalized convexity properties of the underlying function.
Abstract: Known as well as new types of monotone and generalized monotone maps are considered. For gradient maps, these generalized monotonicity properties can be related to generalized convexity properties of the underlying function. In this way, pure first-order characterizations of various types of generalized convex functions are obtained.

320 citations
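
The bridge between the two properties can be checked numerically. The following sketch (illustrative, using the smooth convex function log-sum-exp) verifies the first-order facts the paper builds on: the gradient map of a convex function is monotone, and the gradient map of a pseudoconvex function is pseudomonotone.

```python
import numpy as np

rng = np.random.default_rng(2)

# First-order characterization in the smooth case: f is convex iff its
# gradient map is monotone, i.e. (grad f(x) - grad f(y)) . (x - y) >= 0.
# Checked here for f(x) = log(sum(exp(x))), a smooth convex function.
def grad_f(x):
    e = np.exp(x - x.max())       # stabilized softmax gradient
    return e / e.sum()

for _ in range(1000):
    x, y = rng.normal(size=4), rng.normal(size=4)
    assert (grad_f(x) - grad_f(y)) @ (x - y) >= -1e-12

# Pseudomonotonicity, one of the generalized notions: if grad f(y).(x-y) >= 0
# then grad f(x).(x-y) >= 0; it characterizes pseudoconvexity of f.
for _ in range(1000):
    x, y = rng.normal(size=4), rng.normal(size=4)
    if grad_f(y) @ (x - y) >= 0:
        assert grad_f(x) @ (x - y) >= -1e-12
```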


Journal ArticleDOI
TL;DR: A general-purpose algorithm is given for converting procedures that solve linear programming problems into procedures for nonlinear separable convex problems; the conversion is polynomial for constraint matrices with polynomially bounded subdeterminants, and an algorithm is given for finding an ε-accurate optimal continuous solution to the nonlinear problem.
Abstract: The polynomiality of nonlinear separable convex (concave) optimization problems, on linear constraints with a matrix with "small" subdeterminants, is proven, as is the polynomiality of such integer problems, provided the integer linear version of such problems is polynomial. This paper presents a general-purpose algorithm for converting procedures that solve linear programming problems. The conversion is polynomial for constraint matrices with polynomially bounded subdeterminants. Among the important corollaries of the algorithm is the extension of the polynomial solvability of integer linear programming problems with totally unimodular constraint matrix to integer separable convex programming. An algorithm for finding an ε-accurate optimal continuous solution to the nonlinear problem that is polynomial in log(1/ε), the input size, and the largest subdeterminant of the constraint matrix is also presented. These developments are based on proximity results between the continuous and integral optimal solutions for problems with any nonlinear separable convex objective function. The practical feature of our algorithm is that it does not demand an explicit representation of the nonlinear function, only a polynomial number of function evaluations on a prespecified grid.

256 citations
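
The conversion can be sketched concretely for the continuous case: on a prespecified grid, each separable term is replaced by a convex combination of grid evaluations, turning the problem into a linear program; for convex terms the optimal weights automatically concentrate on adjacent grid points. The following Python sketch (an illustrative toy problem, not the paper's algorithm) uses scipy's linprog for the resulting LP.

```python
import numpy as np
from scipy.optimize import linprog

# Grid-based conversion sketch: a separable convex objective
# sum_j f_j(x_j) over linear constraints becomes an LP by expressing
# each x_j as a convex combination of grid points; only grid
# evaluations of f_j are needed, not an explicit representation.
f = [lambda v: (v - 0.8) ** 2, lambda v: (v - 0.1) ** 2]   # convex pieces
grid = np.linspace(0.0, 1.0, 21)                            # prespecified grid
K = len(grid)

# Variables: w[j,k] >= 0 with sum_k w[j,k] = 1 and x_j = sum_k w[j,k]*grid[k].
# The constraint x_1 + x_2 >= 1 is linear in the weights.
c = np.concatenate([[f[j](g) for g in grid] for j in range(2)])
A_eq = np.zeros((2, 2 * K)); A_eq[0, :K] = 1; A_eq[1, K:] = 1
A_ub = -np.concatenate([grid, grid])[None, :]               # -(x1 + x2) <= -1
res = linprog(c, A_ub=A_ub, b_ub=[-1.0], A_eq=A_eq, b_eq=[1, 1],
              bounds=[(0, None)] * (2 * K))
x = [grid @ res.x[:K], grid @ res.x[K:]]
print('grid solution:', x)   # exact optimum is x = (0.85, 0.15)
```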


Journal ArticleDOI
TL;DR: This work shows that existing convergence results for the projection algorithm for variational inequalities follow from one given by Gabay for a splitting algorithm for finding a zero of the sum of two maximal monotone operators, and obtains a decomposition method that can simultaneously dualize the linear constraints and diagonalize the cost function.
Abstract: A classical method for solving the variational inequality problem is the projection algorithm. We show that existing convergence results for this algorithm follow from one given by Gabay for a splitting algorithm for finding a zero of the sum of two maximal monotone operators. Moreover, we extend the projection algorithm to solve any monotone affine variational inequality problem. When applied to linear complementarity problems, we obtain a matrix splitting algorithm that is simple and, for linear/quadratic programs, massively parallelizable. Unlike existing matrix splitting algorithms, this algorithm converges under no additional assumption on the problem. When applied to generalized linear/quadratic programs, we obtain a decomposition method that, unlike existing decomposition methods, can simultaneously dualize the linear constraints and diagonalize the cost function. This method gives rise to highly parallelizable algorithms for solving a problem of deterministic control in discrete time and for computing the orthogonal projection onto the intersection of convex sets.

139 citations
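
For the linear complementarity special case mentioned above, the projection iteration is a one-line update. The sketch below (illustrative data, with M symmetric positive definite so the iteration is projected gradient descent on a convex quadratic) shows the form of the method; the step-size bound gamma < 2/lambda_max(M) is the standard sufficient condition in this symmetric case, not the paper's weaker assumptions.

```python
import numpy as np

# Projection-iteration sketch for the monotone affine variational
# inequality / LCP: find x >= 0 with Mx + q >= 0 and x.(Mx + q) = 0.
# Iterate x <- max(0, x - gamma*(Mx + q)); for symmetric positive
# definite M this is projected gradient descent on 0.5 x'Mx + q'x
# over the nonnegative orthant.
rng = np.random.default_rng(3)
B = rng.normal(size=(5, 5))
M = B @ B.T + np.eye(5)             # symmetric positive definite
q = rng.normal(size=5)

gamma = 1.0 / np.linalg.norm(M, 2)  # safe step size
x = np.zeros(5)
for _ in range(2000):
    x = np.maximum(0.0, x - gamma * (M @ x + q))

w = M @ x + q
print('complementarity residual:', abs(x @ w), ' min(w):', w.min())
```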


Journal ArticleDOI
01 Mar 1990
TL;DR: An approach to the analysis and design of linear control systems based on numerical convex optimization over closed-loop maps is presented, and it is shown that many performance specifications have natural and useful geometric interpretations, and the notion of a closed- loop convex design specification is defined.
Abstract: In this tutorial, an approach to the analysis and design of linear control systems based on numerical convex optimization over closed-loop maps is presented. Convexity makes numerical solution effective: it is possible to determine whether or not there is a controller that achieves a given set of specifications. Thus, the limit of achievable performance can be computed. To provide a context for the material presented, a brief overview of control engineering is given. A broad outline of various approaches to control design for linear and time-invariant systems is presented, including their advantages and disadvantages, for purposes of comparison with the approach presented. It is shown that many performance specifications have natural and useful geometric interpretations, and the notion of a closed-loop convex design specification is defined. The performance requirement that the closed-loop system be stable is discussed. It is shown that many performance specifications can be expressed as convex constraints on closed-loop performance specifications, and how some of these can be expressed as convex constraints on closed-loop transfer matrices is examined.

101 citations


Journal ArticleDOI
TL;DR: A block coordinate ascent method is presented for solving (P) that contains as special cases both dual coordinate ascent methods and dual gradient methods.
Abstract: Consider problems of the form \[(\mathrm{P})\qquad \min \{ f(x) \mid Ex \ge b\},\] where f is a strictly convex (possibly nondifferentiable) function and E and b are, respectively, a matrix and a vector. A popular method for solving special cases of (P) (e.g., network flow, entropy maximization, quadratic program) is to dualize the constraints $Ex \ge b$ to obtain a differentiable maximization problem and then apply an iterative ascent method to solve it. This method is simple and can exploit sparsity, thus making it ideal for large-scale optimization and, in certain cases, for parallel computation. Despite its simplicity, however, convergence of this method has been shown only under certain very restrictive conditions and only for certain special cases of (P). In this paper a block coordinate ascent method is presented for solving (P) that contains as special cases both dual coordinate ascent methods and dual gradient methods. It is shown, under certain mild assumptions on f and (P), that this method...

90 citations
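
For the simplest special case f(x) = |x|^2/2, the dual coordinate ascent method in question can be written in a few lines, since each coordinate maximization of the dual has a closed form. The following numpy sketch is illustrative of the class of methods the paper analyzes, not of its general block framework.

```python
import numpy as np

# Dual coordinate ascent sketch for the special case f(x) = 0.5*|x|^2 of
# (P): min { 0.5*|x|^2 | Ex >= b }.  Dualizing Ex >= b gives the concave
# differentiable dual g(lam) = b'lam - 0.5*|E'lam|^2 over lam >= 0, and
# each coordinate maximization has a closed form.
rng = np.random.default_rng(4)
E = rng.normal(size=(6, 4))
b = rng.normal(size=6)

lam = np.zeros(6)
x = E.T @ lam                       # primal iterate x = E'lam
for _ in range(500):
    for i in range(6):              # exact maximization in coordinate i
        step = (b[i] - E[i] @ x) / (E[i] @ E[i])
        new = max(0.0, lam[i] + step)
        x += (new - lam[i]) * E[i]  # cheap incremental primal update
        lam[i] = new

print('max violation of Ex >= b:', (b - E @ x).max())
print('complementarity:', lam @ (E @ x - b))
```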


Journal ArticleDOI
TL;DR: Multiplicative iterative algorithms for the minimization of a differentiable convex function defined on the positive orthant of $R^N$ are studied; their convergence is nearly monotone in the sense of Kullback-Leibler divergence.

87 citations
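
A representative member of this family (chosen here for illustration; the paper treats a general class) is the multiplicative update for minimizing the Kullback-Leibler divergence KL(b, Ax) over the positive orthant, in which positivity of the iterates is preserved automatically.

```python
import numpy as np

# Sketch of a multiplicative iteration on the positive orthant:
# minimize the Kullback-Leibler divergence KL(b, Ax) over x >= 0 for
# entrywise-nonnegative A and b.  The update is multiplicative, so
# iterates stay in the positive orthant without any projection.
rng = np.random.default_rng(5)
A = rng.uniform(0.1, 1.0, size=(8, 5))
x_true = rng.uniform(0.5, 2.0, size=5)
b = A @ x_true

x = np.ones(5)                      # strictly positive start
col = A.sum(axis=0)                 # A' 1
for _ in range(5000):
    x *= (A.T @ (b / (A @ x))) / col

print('relative error:', np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```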


Journal ArticleDOI
TL;DR: In this paper, the lower limit of the images by a convex set-valued mapping of a family of convex sets is studied, and applications to variational convergence are presented.
Abstract: The lower limit of the images by a convex set-valued mapping of a family of convex sets is studied. Topological and metric results are given. Applications to variational convergence, in particular to epi-convergence of convex functions, are presented.

83 citations


Proceedings ArticleDOI
01 Jan 1990
TL;DR: In this paper, an efficient algorithm is presented for Waterman's problem, an on-line two-dimensional dynamic programming problem used for the prediction of RNA secondary structure, improving on previously known algorithms.
Abstract: An on-line problem is a problem where each input is available only after certain outputs have been calculated. The usual kind of problem, where all inputs are available at all times, is referred to as an off-line problem. We present an efficient algorithm for Waterman's problem, an on-line two-dimensional dynamic programming problem that is used for the prediction of RNA secondary structure. Our algorithm uses as a module an algorithm for solving a certain on-line one-dimensional dynamic programming problem. The time complexity of our algorithm is n times the complexity of the on-line one-dimensional dynamic programming problem. For the concave case, we present a linear time algorithm for on-line searching in totally monotone matrices, which is a generalization of the on-line one-dimensional problem. This yields an optimal O(n²) time algorithm for the on-line two-dimensional concave problem. The constants in the time complexity of this algorithm are fairly small, which makes it practical. For the convex case, we use an O(nα(n)) time algorithm for the on-line one-dimensional problem, where α(·) is the functional inverse of Ackermann's function. This yields an O(n²α(n)) time algorithm for the on-line two-dimensional convex problem. Our techniques can be extended to solve the sparse version of Waterman's problem. We obtain an O(n + h log min{h, n²/h}) time algorithm for the sparse concave case, and an O(n + hα(h) log min{h, n²/h}) time algorithm for the sparse convex case, where h is the number of possible base pairs in the RNA structure. All our algorithms improve on previously known algorithms.
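
For orientation, the one-dimensional recurrence underlying Waterman's problem has the form E[j] = min over 0 <= i < j of (D[i] + w(i, j)), where D[j] becomes available only after E[j] is computed; the paper's contribution is evaluating it in far better than quadratic time when w is concave or convex in the gap. A naive O(n²) Python baseline (with an illustrative weight function and a trivial D-E coupling) makes the on-line structure explicit.

```python
import numpy as np

# Naive quadratic baseline for the on-line one-dimensional dynamic
# program the paper accelerates: E[j] = min_{0 <= i < j} (D[i] + w(i, j)),
# where D[j] is computed from E[j] before E[j+1] is needed (the on-line
# aspect).  Here w is concave in the gap length j - i, the case the
# paper solves via totally-monotone-matrix techniques.
n = 200
w = lambda i, j: np.sqrt(j - i)     # concave illustrative gap weight

E = np.zeros(n + 1)
D = np.zeros(n + 1)
for j in range(1, n + 1):
    E[j] = min(D[i] + w(i, j) for i in range(j))
    D[j] = E[j]                     # trivial coupling, for illustration
print(E[n])
```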

Journal ArticleDOI
TL;DR: In this paper, the relationships between various constraint qualifications for infinite-dimensional convex programs are investigated; using Robinson's refinement of the duality result of Rockafellar, it is demonstrated that the constraint qualification proposed by Rockafellar provides a systematic mechanism for comparing many constraint qualifications as well as establishing new results in different topological environments.
Abstract: In this paper the relationships between various constraint qualifications for infinite-dimensional convex programs are investigated. Using Robinson’s refinement of the duality result of Rockafellar, it is demonstrated that the constraint qualification proposed by Rockafellar provides a systematic mechanism for comparing many constraint qualifications as well as establishing new results in different topological environments.

01 Jan 1990
TL;DR: This paper presents a new, simple, massively parallel algorithm for linear programming, called the alternating step method, which derives from an extension of the alternating direction method of multipliers for convex programming, giving a new algorithm for monotropic programming in the course of the development.
Abstract: This paper presents a new, simple, massively parallel algorithm for linear programming, called the alternating step method. The algorithm is unusual in that it does not maintain primal feasibility, dual feasibility, or complementary slackness; rather, all these conditions are gradually met as the method proceeds. We derive the algorithm from an extension of the alternating direction method of multipliers for convex programming, giving a new algorithm for monotropic programming in the course of the development. Concentrating on the linear programming case, we give a proof that, under a simple condition on the algorithm parameters, the method converges at a globally linear rate. Finally, we give some preliminary computational results.
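
The parent scheme is easy to exhibit. The sketch below is plain ADMM applied to a random equality-form LP (min c'x s.t. Ax = b, x >= 0 with the splitting x = z, z >= 0), not the alternating step method itself; as in the paper, feasibility and complementarity are only met in the limit. The problem data are synthetic, with b chosen so the feasible set is nonempty and c positive so the LP is bounded below.

```python
import numpy as np

# Plain ADMM on an equality-form LP -- the method-of-multipliers family
# from which the alternating step method is derived (this sketch is NOT
# the alternating step method itself).
rng = np.random.default_rng(6)
m, n = 3, 6
A = rng.normal(size=(m, n))
b = A @ rng.uniform(1.0, 2.0, size=n)   # some positive point is feasible
c = rng.uniform(0.5, 1.5, size=n)       # c > 0 keeps the LP bounded below

rho = 1.0
z = np.zeros(n); u = np.zeros(n)
K = np.block([[rho * np.eye(n), A.T],
              [A, np.zeros((m, m))]])   # KKT matrix of the x-subproblem
for _ in range(5000):
    rhs = np.concatenate([rho * (z - u) - c, b])
    x = np.linalg.solve(K, rhs)[:n]     # x-step: min c'x + rho/2|x-z+u|^2, Ax=b
    z = np.maximum(0.0, x + u)          # z-step: projection onto the orthant
    u += x - z                          # multiplier (dual) update

print('|Ax - b| =', np.linalg.norm(A @ x - b), ' min x =', x.min())
print('objective =', c @ x)
```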

Journal ArticleDOI
TL;DR: In this article, the second-order epi-derivatives of extended-real-valued functions are applied to convex functions on $R^n$ and shown to be closely tied to proto-differentiation of the corresponding subgradient multifunctions, as well as to second-order epi-differentiation of conjugate functions.
Abstract: The theory of second-order epi-derivatives of extended-real-valued functions is applied to convex functions on $R^n$ and shown to be closely tied to proto-differentiation of the corresponding subgradient multifunctions, as well as to second-order epi-differentiation of conjugate functions. An extension is then made to saddle functions, which by definition are convex in one argument and concave in another. For this case a concept of epi-hypo-differentiability is introduced. The saddle function results provide a foundation for the sensitivity analysis of primal and dual optimal solutions to general finite-dimensional problems in convex optimization, since such solutions are characterized as saddlepoints of a convex-concave Lagrangian function, or equivalently as subgradients of the saddle function conjugate to the Lagrangian.

Journal ArticleDOI
TL;DR: In this article, the application of proximal point methods to the linear programming problem is described; convergence results are given, and numerical experience with a two-metric gradient-projection approach is reported.
Abstract: We describe the application of proximal point methods to the linear programming problem. Two basic methods are discussed. The first, which has been investigated by Mangasarian and others, is essentially the well-known method of multipliers. This approach gives rise at each iteration to a weakly convex quadratic program which may be solved inexactly using a point-SOR technique. The second approach is based on the proximal method of multipliers, originally proposed by Rockafellar, for which the quadratic program at each iteration is strongly convex. A number of techniques are used to solve this subproblem, the most promising of which appears to be a two-metric gradient-projection approach. Convergence results are given, and some numerical experience is reported.

Journal ArticleDOI
Reiner Horst
TL;DR: A brief survey of some of the most promising methods and new fields of application in deterministic global optimization; the methods comprise branch and bound and outer approximation, as well as combinations of branch and bound with outer approximation.
Abstract: Recent developments in deterministic global optimization methods have considerably enlarged the fields of optimization where those methods can be successfully applied. It is the purpose of the present article to give a brief survey of both some of the most promising methods and new fields of application. The methods considered comprise branch and bound and outer approximation, as well as combinations of branch and bound with outer approximation. The fields of application to be discussed include concave minimization, reverse convex programming, d.c. programming, Lipschitzian optimization, systems of equations and (or) inequalities, and global integer programming.
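
Of the fields listed, Lipschitzian optimization admits the shortest self-contained illustration: the Piyavskii-Shubert scheme is a one-dimensional branch and bound in which a Lipschitz constant L yields a piecewise-linear lower envelope. The following Python sketch (illustrative f and L, not from the survey) branches where the envelope is lowest and reports a rigorous gap bound.

```python
import numpy as np

# Piyavskii-Shubert sketch for Lipschitzian global optimization:
# minimize f on [a, b] using only a Lipschitz bound L, via branch and
# bound on the lower envelope f(x_i) - L|x - x_i|.
f = lambda x: np.sin(3 * x) + 0.5 * x
a, b, L = 0.0, 4.0, 3.5             # |f'| <= 3|cos(3x)| + 0.5 <= 3.5

pts = [a, b]; vals = [f(a), f(b)]
for _ in range(200):
    order = np.argsort(pts)
    xs = np.array(pts)[order]; fs = np.array(vals)[order]
    # On each interval the envelope minimum is at the "tent" intersection.
    mid = 0.5 * (xs[:-1] + xs[1:]) + (fs[:-1] - fs[1:]) / (2 * L)
    low = 0.5 * (fs[:-1] + fs[1:]) - 0.5 * L * (xs[1:] - xs[:-1])
    i = int(np.argmin(low))          # branch where the lower bound is least
    pts.append(float(mid[i])); vals.append(f(mid[i]))

order = np.argsort(pts)
xs = np.array(pts)[order]; fs = np.array(vals)[order]
low = 0.5 * (fs[:-1] + fs[1:]) - 0.5 * L * (xs[1:] - xs[:-1])
best = min(vals)
print('global minimum estimate:', best, ' rigorous gap <=', best - low.min())
```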

Journal ArticleDOI
TL;DR: It is shown that the duality gap is reduced at each iteration by a factor of 1 − δ/√n, where δ is positive and depends on some parameters associated with the objective function.
Abstract: We describe a primal-dual interior point algorithm for a class of convex separable programming problems subject to linear constraints. Each iteration updates a penalty parameter and finds a Newton step associated with the Karush-Kuhn-Tucker system of equations which characterizes a solution of the logarithmic barrier function problem for that parameter. It is shown that the duality gap is reduced at each iteration by a factor of 1 − δ/√n, where δ is positive and depends on some parameters associated with the objective function.
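
The shape of such an iteration is easiest to see on a tiny linear program (the paper's setting is the broader separable convex class; this sketch simplifies to a feasible-start method with a fixed centering parameter sigma): each step solves the Newton system for the perturbed KKT conditions and damps the step to keep the iterates strictly positive, shrinking the duality gap geometrically.

```python
import numpy as np

# Primal-dual path-following sketch on a tiny LP.
A = np.array([[1.0, 1.0, 1.0]])            # min c'x  s.t. Ax = b, x >= 0
b = np.array([3.0])
c = np.array([1.0, 2.0, 3.0])
x = np.ones(3)                              # strictly feasible primal start
y = np.zeros(1)
s = c - A.T @ y                             # strictly positive dual slack
sigma = 0.5                                 # centering parameter
n, m = len(x), len(b)

for _ in range(60):
    mu = (x @ s) / n
    # Newton system for  A dx = 0,  A'dy + ds = 0,  S dx + X ds = sigma*mu*e - XSe
    J = np.zeros((2 * n + m, 2 * n + m))
    J[:m, :n] = A
    J[m:m + n, n:n + m] = A.T
    J[m:m + n, n + m:] = np.eye(n)
    J[m + n:, :n] = np.diag(s)
    J[m + n:, n + m:] = np.diag(x)
    rhs = np.concatenate([np.zeros(m + n), sigma * mu - x * s])
    d = np.linalg.solve(J, rhs)
    dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
    ratios = np.concatenate([-x[dx < 0] / dx[dx < 0],
                             -s[ds < 0] / ds[ds < 0], [1.0]])
    alpha = 0.9 * ratios.min()              # damped step keeps iterates interior
    x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds

print('duality gap:', x @ s, ' x ~', x.round(4))   # optimum is x = (3, 0, 0)
```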


Journal ArticleDOI
TL;DR: This paper extends the Auxiliary Problem Principle, formerly introduced in a deterministic setting as a general framework for decomposition/coordination optimization algorithms, to the situation of stochastic programming.
Abstract: This paper considers an extension to the situation of stochastic programming of the Auxiliary Problem Principle formerly introduced in a deterministic setting to serve as a general framework for decomposition/coordination optimization algorithms. The idea is based upon that of the stochastic gradient; that is, independent noise realizations are considered successively along the iterations. As a consequence, deterministic subproblems are solved at each iteration, whereas the iterations fulfill the two tasks of coordination and stochastic approximation at the same time. A coupling cost function (the expectation of some performance index) and deterministic coupling constraints are considered. Price (dual) decomposition (encompassing extensions of the Uzawa and Arrow-Hurwicz algorithms to this stochastic case) is studied, as well as resource allocation (primal decomposition).
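
The mechanism, stripped of decomposition, is a projected stochastic gradient iteration: each step uses a single independent noise realization, so the subproblem solved per iteration is deterministic, while the coupling constraint is enforced along the iterations. Here is a minimal Python sketch on an illustrative problem (minimizing E[|x − ξ|²/2] over the plane sum x = 1, whose solution is the projection of E[ξ]).

```python
import numpy as np

# Stochastic-gradient sketch: one independent noise realization per
# iteration, so each step is a deterministic subproblem, with the
# coupling constraint sum x = 1 enforced along the iterations.
rng = np.random.default_rng(7)
mean = np.array([0.3, 0.8, 0.4])

def project(x):                      # projection onto {x : sum x = 1}
    return x - (x.sum() - 1.0) / len(x)

x = project(np.zeros(3))
for k in range(1, 20001):
    xi = mean + 0.5 * rng.normal(size=3)   # one noise realization
    x = project(x - (1.0 / k) * (x - xi))  # grad of 0.5|x - xi|^2 is x - xi

print('iterate:', x.round(3), '  target:', project(mean).round(3))
```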

Journal ArticleDOI
TL;DR: In this paper, it was shown that in the case of m convex quadratic constraints, a two-sided ellipsoidal approximation for the feasible set (the intersection of m ellipsoids) whose tightness depends only on m is obtained.
Abstract: This paper deals with some problems of algorithmic complexity arising when solving convex programming problems by following the path of analytic centers (i.e., the trajectory formed by the minimizers of the logarithmic barrier function). We prove that in the case of m convex quadratic constraints we can obtain in a simple constructive way a two-sided ellipsoidal approximation for the feasible set (intersection of m ellipsoids), whose tightness depends only on m. This can be used for the early identification of those constraints which are active at the optimum, and it also explains the efficiency of Newton's method used as a corrector when following the central path. Various parametrizations of the central path are studied. This also leads to an extrapolation (predictor) algorithm which can be regarded as a generalization of the method of conjugate gradients.

Journal ArticleDOI
01 Apr 1990
TL;DR: In this paper, the authors dealt with the inequalities involving logarithmically convex functions of several variables and provided generalizations of inequalities for univariate functions obtained by Dragomir and Mond.
Abstract: This paper deals with the inequalities involving logarithmically convex functions of several variables. The results here provide generalizations of inequalities for univariate functions obtained by Dragomir and by Dragomir and Mond. Mathematics subject classification (2010): 26D15, 26B25.

Journal ArticleDOI
TL;DR: The convergence is established by showing that the approximate MVA equations are the gradient vector of a convex function, and by using results from convex programming and the convex duality theory.
Abstract: This paper is concerned with the properties of nonlinear equations associated with the Schweitzer-Bard (S-B) approximate mean value analysis (MVA) heuristic for closed product-form queuing networks. Three forms of nonlinear S-B approximate MVA equations in multiclass networks are distinguished: Schweitzer, minimal, and the nearly decoupled forms. The approximate MVA equations have enabled us to: (a) derive bounds on the approximate throughput; (b) prove the existence and uniqueness of the S-B throughput solution, and the convergence of the S-B approximation algorithm for a wide class of monotonic, single-class networks; (c) establish the existence of the S-B solution for multiclass, monotonic networks; (d) prove the asymptotic (i.e., as the number of customers of each class tends to ∞) uniqueness of the S-B throughput solution; and (e) prove the convergence of the gradient projection and the primal-dual algorithms to solve the asymptotic versions of the minimal, the Schweitzer, and the nearly decoupled forms of MVA equations for multiclass networks with single server and infinite server nodes. The convergence is established by showing that the approximate MVA equations are the gradient vector of a convex function, and by using results from convex programming and the convex duality theory.
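
For the single-class case, the Schweitzer form reduces to a short fixed-point iteration, sketched below with illustrative service demands (queueing stations only, no think time); the paper's results interpret such iterations through the convex function whose gradient the equations form.

```python
import numpy as np

# Fixed-point iteration for the Schweitzer form of the approximate MVA
# equations, single-class case with queueing stations.
N = 20                               # customers in the closed network
d = np.array([0.10, 0.15, 0.05])     # service demands at each station

Q = np.full(3, N / 3.0)              # initial queue-length guess
for _ in range(1000):
    R = d * (1.0 + (N - 1) / N * Q)  # Schweitzer's approximation step
    X = N / R.sum()                  # throughput from Little's law
    Q = X * R                        # queue lengths, closing the fixed point

print('throughput:', X, ' queue lengths:', Q.round(3))
```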

Journal ArticleDOI
TL;DR: In this article, the problem of generic Gateaux-differentiability of convex functions on small sets, i.e., convex sets that may have no interior points, is studied, in contrast with the classical setting in which the convex function is defined on a convex set with nonempty interior.


Journal ArticleDOI
TL;DR: This paper presents parallel bundle-based decomposition algorithms to solve a class of structured large-scale convex optimization problems, and presents computational experience with block-angular linear programming problems.
Abstract: In this paper, we present parallel bundle-based decomposition algorithms to solve a class of structured large-scale convex optimization problems. An example in this class of problems is the block-angular linear programming problem. By dualizing, we transform the original problem to an unconstrained nonsmooth concave optimization problem which is in turn solved by using a modified bundle method. Further, this dual problem consists of a collection of smaller independent subproblems which give rise to the parallel algorithms. We discuss the implementation on the CRYSTAL multi-computer. Finally, we present computational experience with block-angular linear programming problems and observe that more than 70% efficiency can be obtained using up to eleven processors for one group of test problems, and more than 60% efficiency can be obtained for relatively smaller problems using up to five processors for another group of problems.

Journal ArticleDOI
TL;DR: It is shown that optimization problems used by Bacharach (1970), Bachem and Korte (1979), Eaves et al. (1985), Marshall and Olkin (1968) and Rothblum and Schneider (1989) to study scaling problems can be derived as special cases of the dual problem for truncated scaling.
Abstract: We present a matrix scaling problem called truncated scaling and describe applications arising in economics, urban planning, and statistics. We associate a dual pair of convex optimization problems to the scaling problem and prove that the existence of a solution for the truncated scaling problem is characterized by the attainment of the infimum in the dual optimization problem. We show that optimization problems used by Bacharach (1970), Bachem and Korte (1979), Eaves et al. (1985), Marshall and Olkin (1968), and Rothblum and Schneider (1989) to study scaling problems can be derived as special cases of the dual problem for truncated scaling. We present computational results for solving truncated scaling problems using dual coordinate descent, thereby showing that truncated scaling provides a framework for modeling and solving large-scale matrix scaling problems.
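
In the plain (untruncated) case, dual coordinate descent on the scaling problem is the classical RAS/Sinkhorn iteration: alternately rescaling rows and columns to hit prescribed sums, each sweep being an exact coordinate minimization of the convex dual. Here is a small numpy sketch with synthetic data (illustrative; the truncated variant the paper studies additionally bounds the scaled entries).

```python
import numpy as np

# RAS / Sinkhorn sketch of matrix scaling by dual coordinate descent:
# find diagonal scalings so the scaled matrix has prescribed row sums r
# and column sums s.  Alternating row/column rescaling is coordinate
# descent on the associated convex dual problem.
rng = np.random.default_rng(8)
A = rng.uniform(0.5, 1.5, size=(4, 5))
r = np.full(4, 5.0 / 4.0)            # target row sums
s = np.ones(5)                       # target column sums (totals match)

B = A.copy()
for _ in range(500):
    B *= (r / B.sum(axis=1))[:, None]   # row-scaling step
    B *= (s / B.sum(axis=0))[None, :]   # column-scaling step

print('row sums:', B.sum(axis=1).round(6))
print('col sums:', B.sum(axis=0).round(6))
```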

Journal ArticleDOI
TL;DR: A new potential function and a sequence of ellipsoids in the path-following algorithm for convex quadratic programming are described; each ellipsoid contains all of the optimal primal and dual slack vectors.
Abstract: We describe a new potential function and a sequence of ellipsoids in the path-following algorithm for convex quadratic programming. Each ellipsoid in the sequence contains all of the optimal primal and dual slack vectors. Furthermore, the volumes of the ellipsoids shrink at the ratio $2^{-\Omega(\sqrt n)}$, in comparison to $2^{-\Omega(1)}$ in Karmarkar's algorithm and $2^{-\Omega(1/n)}$ in the ellipsoid method. We also show how to use these ellipsoids to identify the optimal basis in the course of the algorithm for linear programming.

Journal ArticleDOI
TL;DR: A Fortran package for convex quadratic semi-infinite programming is described, in which a first coarse grid is successively refined so that solutions on the foregoing grids serve as starting points for the subsequent grids and considerably reduce the number of constraints to be considered in the subsequent problems.
Abstract: For convex quadratic semi-infinite programming problems a Fortran package is described. A first coarse grid is successively refined in such a way that the solution on the foregoing grids can be used on the one hand as starting points for the subsequent grids and on the other hand to considerably reduce the number of constraints which have to be considered in the subsequent problems. This enables an efficient treatment of large problems with moderate storage requirements. Powell's (1983) numerically stable convex quadratic programming implementation is used to solve the QP subproblems.
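
The grid-refinement strategy can be sketched on a toy convex quadratic semi-infinite program whose exact solution is known: projecting a point p onto the unit disk, described by infinitely many linear constraints indexed by t. The Python sketch below (using scipy's SLSQP in place of Powell's QP code, and an illustrative problem rather than the package itself) doubles the grid at each level and warm-starts from the previous solution; the printout shows how few constraints are near-active, which is what the package exploits to drop constraints.

```python
import numpy as np
from scipy.optimize import minimize

# Grid-refinement sketch for a convex quadratic semi-infinite program:
# project p onto {x : x1*cos t + x2*sin t <= 1 for all t}, i.e. onto
# the unit disk; the exact solution is p/|p|.
p = np.array([2.0, 0.0])
obj = lambda x: 0.5 * np.sum((x - p) ** 2)
x0 = np.zeros(2)                      # feasible starting point

for level in range(4):
    ts = np.linspace(0.0, 2 * np.pi, 8 * 2 ** level, endpoint=False)
    cons = [{'type': 'ineq',
             'fun': (lambda x, t=t: 1.0 - x[0] * np.cos(t) - x[1] * np.sin(t))}
            for t in ts]
    res = minimize(obj, x0, constraints=cons, method='SLSQP')
    x0 = res.x                        # warm start for the refined grid
    # Only grid points nearly active at x0 matter on the next level.
    slack = 1.0 - x0[0] * np.cos(ts) - x0[1] * np.sin(ts)
    print('level', level, 'x =', x0.round(5),
          'near-active pts:', int((slack < 1e-5).sum()))

print('exact solution:', p / np.linalg.norm(p))
```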

Book ChapterDOI
01 Feb 1990
TL;DR: The existence of such pairs in which the sides of the outer rectangle have length at most double that of the corresponding sides of the inner rectangle is shown, thereby solving the problem of approximating a convex figure in the plane by a pair of homothetic rectangles.
Abstract: We consider the problem of approximating a convex figure C in the plane by a pair (r, R) of homothetic (i.e., similar and parallel) rectangles with r ⊂ C ⊂ R. We show the existence of such pairs where the sides of the outer rectangle have length at most double the length of the corresponding sides of the inner rectangle, thereby solving a problem posed by Pólya and Szegő.

Journal ArticleDOI
TL;DR: In this paper, a necessary and sufficient condition for the stability of a linearly constrained convex quadratic program under perturbations of the linear part of the data, including the constraint matrix, is established.
Abstract: This paper establishes a simple necessary and sufficient condition for the stability of a linearly constrained convex quadratic program under perturbations of the linear part of the data, including the constraint matrix. It also establishes results on the continuity and differentiability of the optimal objective value of the program as a function of a parameter specifying the magnitude of the perturbation. The results established herein directly generalize well-known results on the stability of linear programs.