
Showing papers on "Convex optimization published in 1989"


Book
01 Jan 1989
TL;DR: This book develops convex analysis in the scalar case and polyconvex, quasiconvex and rank one convex analysis in the vectorial case, establishing lower semicontinuity and existence theorems, relaxation results for non-convex problems, and the existence of minima for non-quasiconvex integrands.
Abstract: Introduction.- Convex Analysis and the Scalar Case.- Convex Sets and Convex Functions.- Lower Semicontinuity and Existence Theorems.- The one Dimensional Case.- Quasiconvex Analysis and the Vectorial Case.- Polyconvex, Quasiconvex and Rank one Convex Functions.- Polyconvex, Quasiconvex and Rank one Convex Envelopes.- Polyconvex, Quasiconvex and Rank one Convex Sets.- Lower Semi Continuity and Existence Theorems in the Vectorial Case.- Relaxation and Non Convex Problems.- Relaxation Theorems.- Implicit Partial Differential Equations.- Existence of Minima for Non Quasiconvex Integrands.- Miscellaneous.- Function Spaces.- Singular Values.- Some Underdetermined Partial Differential Equations.- Extension of Lipschitz Functions on Banach Spaces.- Bibliography.- Index.- Notations.

2,250 citations


Journal ArticleDOI
TL;DR: A primal-dual interior point algorithm for convex quadratic programming problems which requires a total of O(√n L) iterations, where L is the input size.
Abstract: We describe a primal-dual interior point algorithm for convex quadratic programming problems which requires a total of O(√n L) iterations, where L is the input size. Each iteration updates a penalty parameter and finds an approximate Newton direction associated with the Karush-Kuhn-Tucker system of equations which characterizes a solution of the logarithmic barrier function problem. The algorithm is based on the path following idea. The total number of arithmetic operations is shown to be of the order of O(n³L).
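
To make the Newton direction concrete, here is a minimal NumPy sketch of one damped step on the perturbed Karush-Kuhn-Tucker system, assuming the standard-form convex QP min ½xᵀQx + cᵀx subject to Ax = b, x ≥ 0. It is an illustration under those assumptions, not the authors' O(√n L) method with its specific penalty-parameter schedule.

```python
import numpy as np

def pd_newton_step(Q, A, b, c, x, y, s, mu):
    """One Newton step on the perturbed KKT system of a standard-form convex QP."""
    n, m = len(x), A.shape[0]
    # Residuals: dual feasibility, primal feasibility, perturbed complementarity.
    r_d = Q @ x + c - A.T @ y - s
    r_p = A @ x - b
    r_c = x * s - mu
    # Assemble and solve the full (unsymmetric) Newton system.
    K = np.block([
        [Q,          -A.T,             -np.eye(n)],
        [A,          np.zeros((m, m)), np.zeros((m, n))],
        [np.diag(s), np.zeros((n, m)), np.diag(x)],
    ])
    d = np.linalg.solve(K, -np.concatenate([r_d, r_p, r_c]))
    dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
    # Fraction-to-boundary damping keeps x and s strictly positive.
    alpha = 1.0
    for v, dv in ((x, dx), (s, ds)):
        neg = dv < 0
        if neg.any():
            alpha = min(alpha, 0.995 * np.min(-v[neg] / dv[neg]))
    return x + alpha * dx, y + alpha * dy, s + alpha * ds
```

Driving μ toward zero across such steps traces the central path of the logarithmic barrier problem described in the abstract.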

322 citations


Book ChapterDOI
TL;DR: Boyd and Yang, as mentioned in this paper, showed that many system stability and robustness problems can be reduced to the question of when there is a quadratic Lyapunov function of a certain structure which establishes stability of ẋ = Ax for some appropriate A.
Abstract: It is shown that many system stability and robustness problems can be reduced to the question of when there is a quadratic Lyapunov function of a certain structure which establishes stability of ẋ = Ax for some appropriate A. The existence of such a Lyapunov function can be determined by solving a convex program. We present several numerical methods for these optimization problems. A simple numerical example is given. Proofs can be found in Boyd and Yang [BY88].
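
As a hedged modern illustration of that reduction: for the linear system ẋ = Ax, finding P ≻ 0 with AᵀP + PA ≺ 0 is a convex (semidefinite) feasibility problem. The sketch below poses it with the CVXPY library, which long postdates the paper; the example matrix is an assumption.

```python
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # assumed example system x' = Ax
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
# Strict inequalities are modeled with a fixed margin (here the identity),
# a standard convex-programming device.
constraints = [P >> np.eye(n), A.T @ P + P @ A << -np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)

if prob.status == cp.OPTIMAL:
    print("Quadratic Lyapunov function V(x) = x'Px found:\n", P.value)
else:
    print("No certificate found; stability not established this way.")
```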

259 citations


Journal ArticleDOI
TL;DR: In this article, a special purpose dual optimizer is proposed to solve the explicit subproblem generated by the convex linearization strategy, where the maximum of the dual function is sought in a sequence of dual subspaces of variable dimensionality.
Abstract: The Convex Linearization method (CONLIN) exhibits many interesting features and it is applicable to a broad class of structural optimization problems. The method employs mixed design variables (either direct or reciprocal) in order to get first order, conservative approximations to the objective function and to the constraints. The primary optimization problem is therefore replaced with a sequence of explicit approximate problems having a simple algebraic structure. The explicit subproblems are convex and separable, and they can be solved efficiently by using a dual method approach. In this paper, a special purpose dual optimizer is proposed to solve the explicit subproblem generated by the CONLIN strategy. The maximum of the dual function is sought in a sequence of dual subspaces of variable dimensionality. The primary dual problem is itself replaced with a sequence of approximate quadratic subproblems with non-negativity constraints on the dual variables. Because each quadratic subproblem is restricted to the current subspace of non zero dual variables, its dimensionality is usually reasonably small. Clearly, the Hessian matrix does not need to be inverted (it can in fact be singular), and no line search process is necessary. An important advantage of the proposed maximization method lies in the fact that most of the computational effort in the iterative process is performed with reduced sets of primal variables and dual variables. Furthermore, an appropriate active set strategy has been devised, that yields a highly reliable dual optimizer.
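
For readers unfamiliar with CONLIN's mixed variables, the sketch below is an illustration of the approximation itself, not of the paper's special purpose dual optimizer: around a positive design point x0, components with positive first derivative stay linear in the direct variables, while those with negative derivative are linearized in the reciprocal variables, which is what makes each subproblem convex and separable. The test function is an assumed example.

```python
import numpy as np

def conlin_approx(f, grad, x0):
    """Return CONLIN's convex, separable approximation of f around x0 > 0."""
    x0 = np.asarray(x0, dtype=float)
    f0, g0 = f(x0), grad(x0)

    def f_tilde(x):
        x = np.asarray(x, dtype=float)
        pos = g0 > 0                 # kept linear in the direct variables
        neg = ~pos                   # linearized in the reciprocal variables
        direct = np.sum(g0[pos] * (x[pos] - x0[pos]))
        recip = np.sum(-g0[neg] * x0[neg] ** 2 * (1.0 / x[neg] - 1.0 / x0[neg]))
        return f0 + direct + recip

    return f_tilde

# Example: f(x) = x1 + 1/x2 around x0 = (1, 1); the approximation is exact here
# because f is itself linear in x1 and linear in the reciprocal of x2.
f = lambda x: x[0] + 1.0 / x[1]
grad = lambda x: np.array([1.0, -1.0 / x[1] ** 2])
f_tilde = conlin_approx(f, grad, np.array([1.0, 1.0]))
print(f_tilde(np.array([2.0, 2.0])), f(np.array([2.0, 2.0])))  # 2.5 2.5
```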

235 citations


Journal ArticleDOI
TL;DR: An extension of Karmarkar's linear programming algorithm for solving a more general group of optimization problems: convex quadratic programs, based on the iterated application of the objective augmentation and the projective transformation, followed by optimization over an inscribing ellipsoid centered at the current solution.
Abstract: We present an extension of Karmarkar's linear programming algorithm for solving a more general group of optimization problems: convex quadratic programs. This extension is based on the iterated application of the objective augmentation and the projective transformation, followed by optimization over an inscribing ellipsoid centered at the current solution. It creates a sequence of interior feasible points that converge to the optimal feasible solution in O(Ln) iterations; each iteration can be computed in O(Ln³) arithmetic operations, where n is the number of variables and L is the number of bits in the input. In this paper, we emphasize its convergence property, practical efficiency, and relation to the ellipsoid method.

211 citations


Journal ArticleDOI
TL;DR: In this article, a block-iterative version of the Agmon-Motzkin-Schoenberg relaxation method for solving systems of linear inequalities is derived, and it is shown that any sequence of iterations generated by the algorithm converges if the intersection of the given family of convex sets is nonempty and that the limit point of the sequence belongs to this intersection under mild conditions on the sequence of weight functions.
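
A hedged sketch of the underlying iteration for a system Ax ≤ b: each step moves toward a weighted average of the relaxed projections onto the currently violated half-spaces, processing one block of rows at a time. The weights, relaxation parameter, and data below are illustrative assumptions, not the paper's.

```python
import numpy as np

def block_relaxation(A, b, x0, blocks, lam=1.0, iters=200):
    """Block-iterative Agmon-Motzkin-Schoenberg-type relaxation for Ax <= b."""
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        rows = blocks[k % len(blocks)]           # current block of inequalities
        step = np.zeros_like(x)
        for i in rows:
            viol = A[i] @ x - b[i]
            if viol > 0:                         # project onto violated half-space
                step += (viol / (A[i] @ A[i])) * A[i]
        x -= lam * step / len(rows)              # equal weights within the block
    return x

# Toy usage: find a point in {x : x1 + x2 <= 1, x1 >= 0, x2 >= 0}.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])
print(block_relaxation(A, b, x0=[3.0, 3.0], blocks=[[0, 1], [2]]))
```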

178 citations


Book ChapterDOI
01 Jan 1989
TL;DR: In this paper, the main incentive comes from modelling in Applied Mathematics and Operations Research, where one may be faced with optimization problems like: minimizing (globally) a difference of convex functions, maximizing a convex function over convex sets, minimizing an indefinite quadratic form over a polyhedral convex set, etc.
Abstract: Nonconvex minimization problems form an old subject which has received growing interest in recent years. The main incentive comes from modelling in Applied Mathematics and Operations Research, where one may be faced with optimization problems like: minimizing (globally) a difference of convex functions, maximizing a convex function over a convex set, minimizing an indefinite quadratic form over a polyhedral convex set, etc.

164 citations


Journal ArticleDOI
TL;DR: It is assumed that the robot and workspace solid geometry are represented as a collection of convex polyhedrons, and an efficient numerical algorithm for determining the minimum distance between two such polyhedrons is presented.
Abstract: Computational methods used for the automatic generation of robot paths must be fully developed if truly automated manufacturing systems are to become a reality. An important requirement for determining feasible robot paths is the ability to compute the distance between the various elements of the robot and the workspace fixtures, jigs, and machinery. In this research, it is assumed that the robot and workspace solid geometry are represented as a collection of convex polyhedrons, and an efficient numerical algorithm for determining the minimum distance between two such polyhedrons is presented. In addition to determining the minimum distance between solids, the algorithm can also be used to efficiently ascertain whether a collision has occurred. The numerical technique presented uses a sequence of constrained minimizations to obtain the closest three-dimensional points on any two solid objects. Computational efficiency is achieved with the algorithm presented because only the collection of planes (point...
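
The paper's sequence-of-constrained-minimizations algorithm is not reproduced here; as a hedged stand-in computing the same quantity, the sketch below minimizes the squared distance between convex combinations of the two vertex sets with SciPy's SLSQP solver. The vertex data are assumed examples, and a near-zero result signals a collision, matching the abstract's collision test.

```python
import numpy as np
from scipy.optimize import minimize

def polytope_distance(V, W):
    """Min distance between conv(V) and conv(W); V, W are (k, 3) vertex arrays."""
    m, n = len(V), len(W)

    def objective(z):
        lam, mu = z[:m], z[m:]
        d = lam @ V - mu @ W            # difference of the two convex combinations
        return d @ d

    cons = [
        {"type": "eq", "fun": lambda z: z[:m].sum() - 1.0},
        {"type": "eq", "fun": lambda z: z[m:].sum() - 1.0},
    ]
    z0 = np.concatenate([np.full(m, 1.0 / m), np.full(n, 1.0 / n)])
    res = minimize(objective, z0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * (m + n), constraints=cons)
    return np.sqrt(res.fun)

# Two unit cubes, one shifted by 3 along x: the distance should be 2.
cube = np.array([[i, j, k] for i in (0, 1) for j in (0, 1) for k in (0, 1)], float)
print(polytope_distance(cube, cube + [3.0, 0.0, 0.0]))  # ~2.0
```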

114 citations


Journal ArticleDOI
TL;DR: This paper considers multiproduct manufacturing systems modeled by open networks of queues and formulates the targeting (TP) and balancing (BP) problems as nonlinear programs based on parametric decomposition methods for estimating performance measures in open queueing networks.
Abstract: In this paper, we introduce the notions of tradeoff curves, targeting and balancing in manufacturing systems to describe the relationship between variables such as work-in-process, lead-time and capacity. We consider multiproduct manufacturing systems modeled by open networks of queues and formulate the targeting (TP) and balancing (BP) problems as nonlinear programs. These formulations are based primarily on parametric decomposition methods for estimating performance measures in open queueing networks. Since TP and BP typically are hard to solve, we show that under fairly realistic conditions they can be approximated by easily solvable convex programs. We present heuristics to obtain approximate solutions to these problems and to derive tradeoff curves. We also provide bounds on the performance of the heuristics, relative to the approximation problems, and show that they are asymptotically optimal under mild conditions.

111 citations


Journal ArticleDOI
TL;DR: In this paper, the concept of Pareto optimum is used to formulate duality for non-linear programs involving convex, pseudoconvex, and weakly convex (or strongly convex) functions.

85 citations


Journal ArticleDOI
TL;DR: The problem of scaling a matrix so that it has given row and column sums is transformed into a convex minimization problem, and this transformation is used to characterize the existence of such a scaling or of corresponding approximations.
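
The paper's variational characterization is not reproduced here, but the classical Sinkhorn/RAS iteration below illustrates the scaling problem itself: it alternately matches the row and column sums, and the convex-minimization viewpoint is what explains when it succeeds. The data and iteration count are assumptions; a strictly positive matrix makes the scaling exist.

```python
import numpy as np

def sinkhorn_scale(M, r, c, iters=500):
    """Find positive d1, d2 so diag(d1) @ M @ diag(d2) has row sums r, col sums c."""
    d1 = np.ones(M.shape[0])
    d2 = np.ones(M.shape[1])
    for _ in range(iters):
        d1 = r / (M @ d2)                 # match the row sums
        d2 = c / (M.T @ d1)               # match the column sums
    return d1, d2

M = np.array([[1.0, 2.0], [3.0, 4.0]])
d1, d2 = sinkhorn_scale(M, r=np.array([1.0, 1.0]), c=np.array([1.0, 1.0]))
print(np.diag(d1) @ M @ np.diag(d2))      # approximately doubly stochastic
```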

Journal ArticleDOI
TL;DR: It is shown that for convex programs the number of iterations required to achieve a given accuracy of solution increases at most linearly in the dimension of the problem.
Abstract: Pure adaptive search constructs a sequence of points uniformly distributed within a corresponding sequence of nested regions of the feasible space. At any stage, the next point in the sequence is chosen uniformly distributed over the region of feasible space containing all points that are equal or superior in value to the previous points in the sequence. We show that for convex programs the number of iterations required to achieve a given accuracy of solution increases at most linearly in the dimension of the problem. This compares to exponential growth in iterations required for pure random search.
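
A hedged toy realization on the unit box: each iterate is drawn uniformly from the current improving region, here by plain rejection sampling. The convex objective is an assumed example, and the rejection loop's cost grows roughly geometrically with the iteration count, which is why the linear-iterations result does not by itself make the method practical.

```python
import numpy as np

rng = np.random.default_rng(0)

def pure_adaptive_search(f, dim, iters=12):
    """Each iterate is uniform on the improving region {x in [0,1]^dim : f(x) < best}."""
    best_x = rng.random(dim)
    best_f = f(best_x)
    for _ in range(iters):
        # Rejection sampling: draw uniformly on the box, keep the first improving
        # point. The expected number of rejections roughly doubles each iteration.
        while True:
            x = rng.random(dim)
            if f(x) < best_f:
                best_x, best_f = x, f(x)
                break
    return best_x, best_f

# Convex objective: squared distance to an interior point of the box.
f = lambda x: np.sum((x - 0.3) ** 2)
print(pure_adaptive_search(f, dim=2))
```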

Journal ArticleDOI
TL;DR: In this article, an alternative to the usual dynamic programming approach to the optimal control of Markov processes is considered, which is based on duality of convex analysis, and the control problem is embedded in a convex mathematical programming problem on a space of measures.
Abstract: An alternative to the usual dynamic programming approach to the optimal control of Markov processes is considered. It is based on duality of convex analysis. The control problem is embedded in a convex mathematical programming problem on a space of measures. The dual problem is to find the supremum of smooth subsolutions to the Hamilton–Jacobi–Bellman equation.

Journal ArticleDOI
TL;DR: The primary goal of this paper is to show how second derivative information can be used in an effective way in structural optimization problems, and a primal–dual approach is employed, that can be interpreted as a sequential quadratic programming method.
Abstract: The primary goal of this paper is to show how second derivative information can be used in an effective way in structural optimization problems. The basic idea is to generate this information at the expense of only one more ‘virtual load case’ in the sensitivity analysis part of the finite element code. To achieve this goal a primal–dual approach is employed, which can also be interpreted as a sequential quadratic programming method. Another objective is to relate the proposed method to the well known family of approximation concepts techniques, where the primary optimization problem is transformed into a sequence of non-linear explicit subproblems. When restricted to diagonal second derivatives, the new approach can be viewed as a recursive convex programming method, similar to the ‘Convex Linearization’ method (CONLIN), and to its recent generalization, the ‘Method of Moving Asymptotes’ (MMA). This new method has been successfully tested on simple problems that can be solved in closed form, as well as on sizing optimization of trusses. In all cases the method converges faster than CONLIN, MMA or other approximation techniques based on reciprocal variables.

Journal ArticleDOI
TL;DR: This paper investigates the combinatorial and computational aspects of certain extremal geometric problems in two and three dimensions and is able to prove sharp bounds on the asymptotic behavior of the corresponding extremal functions.
Abstract: This paper investigates the combinatorial and computational aspects of certain extremal geometric problems in two and three dimensions. Specifically, we examine the problem of intersecting a convex subdivision with a line in order to maximize the number of intersections. A similar problem is to maximize the number of intersected facets in a cross-section of a three-dimensional convex polytope. Related problems concern maximum chains in certain families of posets defined over the regions of a convex subdivision. In most cases we are able to prove sharp bounds on the asymptotic behavior of the corresponding extremal functions. We also describe polynomial algorithms for all the problems discussed.

Journal ArticleDOI
TL;DR: In this paper, a primal-dual conjugate subgradient algorithm for solving convex programming problems is presented, which coordinates a primal penalty function and a Lagrangian dual function, in order to generate a (geometrically) convergent sequence of primal and dual iterates.
Abstract: This paper presents a primal-dual conjugate subgradient algorithm for solving convex programming problems. The motivation, however, is to employ it for solving specially structured or decomposable linear programming problems. The algorithm coordinates a primal penalty function and a Lagrangian dual function, in order to generate a (geometrically) convergent sequence of primal and dual iterates. Several refinements are discussed to improve the performance of the algorithm. These are tested on some network problems, with side constraints and variables, faced by the Freight Equipment Management Program of the Association of American Railroads, and suggestions are made for implementation.

Journal ArticleDOI
TL;DR: In this article, it was shown that globally convex functions behave "convexly" on triples of collinear points that are widely dispersed, i.e., they have nice properties with respect to both minimization and maximization.
Abstract: Convex functions have nice properties with respect to both minimization and maximization. Similar properties are established here for functions that are permitted to have bad local behavior but are globally convex in the sense that they behave “convexly” on triples of collinear points that are widely dispersed. The results illustrate a development that seems desirable in the interest of more realistic mathematical modeling: the “globalization” of important function properties. In connection with the maximization of globally convex functions over convex bodies in a given finite-dimensional normed space E, there is interest in estimating the maximum, for points c of bodies $C \subset E$, of the ratio between two measures of how close c comes to being an extreme point of C. Good estimates are obtained for the cases in which E is Euclidean or has the “max” norm.

Journal ArticleDOI
TL;DR: Convergence of the continuous version of this projective SUMT method is proved and an acceleration procedure based on the nonvanishing of the Jacobian of the Karush-Kuhn-Tucker system at a minimizer is shown to converge quadratically.
Abstract: An algorithm for solving convex programming problems is derived from the differential equation characterizing the trajectory of unconstrained minimizers of the classical logarithmic barrier function. Convergence of the continuous version of this projective SUMT method is proved under minimal assumptions. Extension of the algorithm to a form which handles linear equality constraints produces a differential equation analogue of Karmarkar's method for linear programming. The discrete version uses the same method of search and finds the step size by minimizing the logarithmic method of centers function. An acceleration procedure based on the nonvanishing of the Jacobian of the Karush-Kuhn-Tucker system at a minimizer is shown to converge quadratically. When the problem variables are bounded, dual feasible points are available and the algorithm produces at each iteration lower and upper bounds on the global minimum. A matrix approximation is given which greatly reduces the traditional problems in inverting the...

Journal ArticleDOI
TL;DR: This paper presents a dual active set method for minimizing a sum of piecewise linear functions and a strictly convex quadratic function, subject to linear constraints, and an efficient implementation is described extending the Goldfarb and Idnani algorithm, which includes Powell's refinements.
Abstract: This paper presents a dual active set method for minimizing a sum of piecewise linear functions and a strictly convex quadratic function, subject to linear constraints. It may be used for direction finding in nondifferentiable optimization algorithms and for solving exact penalty formulations of (possibly inconsistent) strictly convex quadratic programming problems. An efficient implementation is described extending the Goldfarb and Idnani algorithm, which includes Powell's refinements. Numerical results indicate excellent accuracy of the implementation.

Book ChapterDOI
01 Jan 1989
TL;DR: In this paper, the authors give a complexity analysis concerning the global convergence of the method of analytic centers for solving generalized smooth convex programs, and prove that the analytic center of the feasible set provides a two-sided ellipsoidal approximation of this set, whose tightness, as well as the global rate of convergence, only depends on the number of constraints and on a relative Lipschitz constant of the Hessian matrices of the constraint functions.
Abstract: We give a complexity analysis concerning the global convergence of the method of analytic centers for solving generalized smooth convex programs. We prove that the analytic center of the feasible set provides a two-sided ellipsoidal approximation of this set, whose tightness, as well as the global rate of convergence of the algorithm, only depends on the number of constraints and on a relative Lipschitz constant of the Hessian matrices of the constraint functions, but not on the data of the constraint functions. This work extends the results in [5] where the solution of problems with convex quadratic constraint functions has been discussed.
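
For intuition about the central object, here is a minimal damped-Newton sketch that computes the analytic center of a polytope {x : Ax ≤ b}, i.e. the minimizer of the barrier -Σᵢ log(bᵢ - aᵢᵀx). It assumes a strictly feasible starting point and linear constraints only; the paper's generalized smooth convex setting and complexity analysis are not reproduced.

```python
import numpy as np

def analytic_center(A, b, x0, iters=50):
    """Damped Newton method on the log barrier of {x : Ax <= b}."""
    x = np.asarray(x0, dtype=float)       # must be strictly feasible
    for _ in range(iters):
        r = b - A @ x                     # slacks, all > 0
        grad = A.T @ (1.0 / r)
        hess = A.T @ np.diag(1.0 / r**2) @ A
        dx = np.linalg.solve(hess, -grad)
        t = 1.0
        while np.any(b - A @ (x + t * dx) <= 0):   # stay strictly feasible
            t *= 0.5
        x = x + t * dx
    return x

# Analytic center of the unit square [0,1]^2: should be (0.5, 0.5).
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])
print(analytic_center(A, b, x0=[0.25, 0.25]))
```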

Journal ArticleDOI
TL;DR: In this article, a method of successive approximations for finding critical points of a function which can be written as the difference of two convex functions is presented, based on using a non-convex duality theory.
Abstract: This paper describes, and analyzes, a method of successive approximations for finding critical points of a function which can be written as the difference of two convex functions. The method is based on using a non-convex duality theory. At each iteration one solves a convex optimization problem. This alternates between the primal and the dual variables. Under very general structural conditions on the problem, we prove that the resulting sequence is a descent sequence, which converges to a critical point of the problem. To illustrate the method, it is applied to some weighted eigenvalue problems, to a problem from astrophysics, and to some semilinear elliptic equations.
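
A hedged sketch of one such successive approximation on a toy DC decomposition f = g - h, with g(x) = ½‖x - a‖² and h(x) = λ‖x‖₁ (an assumed example, not one from the paper): each iteration linearizes h at the current point by a subgradient and solves the remaining convex problem, here in closed form.

```python
import numpy as np

def dca(a, lam=0.5, x0=None, iters=20):
    """Minimize f = g - h with g(x) = 0.5*||x - a||^2 and h(x) = lam*||x||_1."""
    x = np.zeros_like(a) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(iters):
        s = lam * np.sign(x)      # subgradient of the convex function h at x
        # Convex subproblem: argmin_x g(x) - <s, x>, here in closed form: x = a + s.
        x = a + s
    return x

a = np.array([1.0, -0.2, 0.0])
print(dca(a))                      # converges to a critical point of f = g - h
```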

Journal ArticleDOI
TL;DR: A parallel, two-level optimization algorithm is presented, where the high-level problem is solved by Newton's method and the low-level subproblems are solved by the Differential Dynamic Programming technique.

Journal ArticleDOI
TL;DR: This paper considers the reconstruction of support-limited signals from noisy samples of their Fourier transform, and iterative algorithms are devised to solve these problems using a convex programming method.
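
The paper's algorithms are not reproduced here; as a hedged illustration of the same two convex constraint sets, the sketch below alternates projections between signals that match the known Fourier samples and signals that vanish off the given support (a Papoulis-Gerchberg-style iteration). All data are assumed, and no noise handling is included.

```python
import numpy as np

def pocs_reconstruct(samples, sample_idx, support, iters=500):
    """Alternate projections onto two convex sets: signals matching the known
    Fourier samples, and signals vanishing off the given support."""
    x = np.zeros(support.shape[0])
    for _ in range(iters):
        X = np.fft.fft(x)
        X[sample_idx] = samples           # enforce the measured Fourier values
        x = np.fft.ifft(X).real
        x[~support] = 0.0                 # enforce the support limitation
    return x

# Toy demo: a signal supported on its first 8 samples, observed at even frequencies.
rng = np.random.default_rng(1)
n = 32
support = np.zeros(n, dtype=bool)
support[:8] = True
true = np.where(support, rng.standard_normal(n), 0.0)
idx = np.arange(0, n, 2)
rec = pocs_reconstruct(np.fft.fft(true)[idx], idx, support)
print(np.max(np.abs(rec - true)))         # small reconstruction error
```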

Journal ArticleDOI
TL;DR: The optimization algorithm for the circuit routing problem is obtained as a limiting case of the sequence of optimal routing strategies for the corresponding smooth optimization problems and is capable of efficiently handling networks with a large number of commodities.
Abstract: Consideration is given to the optimal circuit routing problem in an existing circuit-switched network. The objective is to find circuit routing which accommodates a given circuit demand while maximizing the residual capacity of the network. In addition, the cost of accommodating the circuit demand should not exceed a given amount. Practical considerations require that a solution be robust to the variations in circuit demand and cost. The objective function for the optimal circuit routing problem is not a smooth one. In order to overcome the difficulties of nonsmooth optimization, the objective function is approximated by smooth concave functions. The optimization algorithm for the circuit routing problem is obtained as a limiting case of the sequence of optimal routing strategies for the corresponding smooth convex optimization problems, and the proof of its convergence to the optimal solution is given. An approach to calculating the optimal multicommodity flow is presented. The optimization algorithm efficiently handles networks with a large number of commodities, satisfies the robustness requirements, and can be used to solve circuit routing problems for large networks.
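
One standard way to carry out such smoothing (a hedged illustration; the paper's specific concave approximants are not given here) is the soft-min -μ log Σᵢ exp(-zᵢ/μ), a smooth concave lower bound on the minimum residual capacity that tightens as μ → 0, mirroring the limiting procedure in the abstract.

```python
import numpy as np

def soft_min(z, mu):
    """Smooth concave approximation of min(z); exact in the limit mu -> 0."""
    z = np.asarray(z, dtype=float)
    # Shift by the minimum for numerical stability before exponentiating.
    zmin = z.min()
    return zmin - mu * np.log(np.sum(np.exp(-(z - zmin) / mu)))

residuals = np.array([4.0, 2.5, 7.0])     # assumed residual link capacities
for mu in (1.0, 0.1, 0.01):
    print(mu, soft_min(residuals, mu))    # approaches 2.5 as mu shrinks
```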

Journal ArticleDOI
TL;DR: A maximum a posteriori probability (MAP) approach is considered for reconstructing an image from noisy projections, and an algorithm to solve the optimization problem based on Bregman's convex programming scheme is derived.

Book
Manfred Walk
01 Feb 1989
TL;DR: This book treats convex sets and convex functions, the theory of duality, and dual problems of optimization.
Abstract: Contents: Convex sets and convex functions.- Theory of duality.- Dual problems of optimization.- Literature.- Notation Index.- Subject Index.

Journal ArticleDOI
TL;DR: In this paper, the authors present a bundle method of descent for minimizing a convex (possibly nonsmooth) function f of several variables, which finds a trial point by minimizing a polyhedral model of f subject to an ellipsoid trust region constraint.
Abstract: This paper presents a bundle method of descent for minimizing a convex (possibly nonsmooth) function f of several variables. At each iteration the algorithm finds a trial point by minimizing a polyhedral model of f subject to an ellipsoid trust region constraint. The quadratic matrix of the constraint, which is updated as in the ellipsoid method, is intended to serve as a generalized “Hessian” to account for “second-order” effects, thus enabling faster convergence. The interpretation of generalized Hessians is largely heuristic, since so far this notion has been made precise by J. L. Goffin only in the solution of linear inequalities. Global convergence of the method is established and numerical results are given.
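
A hedged sketch of the trial-point subproblem only, posed with the modern CVXPY library: minimize the polyhedral (cutting-plane) model of f over an ellipsoid trust region. The ellipsoid update rule and the descent logic of the full method are omitted, and all bundle data are assumed examples.

```python
import numpy as np
import cvxpy as cp

def trial_point(points, f_vals, subgrads, center, B, radius):
    """Minimize the cutting-plane model of f over an ellipsoid trust region."""
    n = len(center)
    x, t = cp.Variable(n), cp.Variable()
    cuts = [t >= fv + g @ (x - p)                       # one cut per bundle element
            for p, fv, g in zip(points, f_vals, subgrads)]
    trust = [cp.quad_form(x - center, B) <= radius**2]  # generalized-Hessian ellipsoid
    cp.Problem(cp.Minimize(t), cuts + trust).solve()
    return x.value

# Toy bundle for f(x) = |x1| + |x2| built from two evaluation points.
pts = [np.array([1.0, 1.0]), np.array([-1.0, 1.0])]
vals = [2.0, 2.0]
grads = [np.array([1.0, 1.0]), np.array([-1.0, 1.0])]
print(trial_point(pts, vals, grads, center=np.zeros(2), B=np.eye(2), radius=1.0))
```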

Book ChapterDOI
01 Jan 1989
TL;DR: Exact penalty functions for nonlinear programming problems that are neither convex nor smooth are considered, dealing with locally Lipschitz problems; as pointed out in [22], virtually all the published literature on exact penalty functions treats one of two cases: either the nonlinear programming problem is a convex problem or it is a smooth problem.
Abstract: In recent years increasing attention has been devoted to the use of nondifferentiable exact penalty functions for the solution of nonlinear programming problems. However, as pointed out in [22], virtually all the published literature on exact penalty functions treats one of two cases: either the nonlinear programming problem is a convex problem (see, e.g., [2], [18], [23]), or it is a smooth problem (see, e.g., [1], [3–5], [10–13], [16], [18–20]). Exact penalty functions for nonlinear programming problems neither convex nor smooth have been considered in [6], [21], [22], where locally Lipschitz problems are dealt with.


Journal ArticleDOI
TL;DR: In this article, the authors present a maximum theorem under convex structures but with weaker continuity requirements, and illustrate the usefulness of their results by an application to a problem encountered in the theory of optimal intertemporal allocation.