
Showing papers on "Convex optimization published in 1981"


Journal ArticleDOI
TL;DR: The iterative primal-dual method of Bregman for solving linearly constrained convex programming problems, which utilizes nonorthogonal projections onto hyperplanes, is represented in a compact form, and a complete proof of convergence is given for an almost cyclic control of the method.
Abstract: The iterative primal-dual method of Bregman for solving linearly constrained convex programming problems, which utilizes nonorthogonal projections onto hyperplanes, is represented in a compact form, and a complete proof of convergence is given for an almost cyclic control of the method. Based on this, a new algorithm for solving interval convex programming problems, i.e., problems of the form min f(x), subject to γ ≤ Ax ≤ δ, is proposed. For a certain family of functions f(x), which includes the norm ∥x∥ and the x log x entropy function, convergence is proved. The present row-action method is particularly suitable for handling problems in which the matrix A is large (or huge) and sparse.
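As a rough illustration of the row-action principle (each step touches a single row of A), here is a sketch of cyclic orthogonal projections onto the interval constraints γ ≤ Ax ≤ δ, for the feasibility question only. This is not Bregman's primal-dual method, which uses nonorthogonal projections and dual updates; all function names below are ours.

```python
import numpy as np

def project_slab(x, a, lo, hi):
    """Orthogonal projection of x onto the slab {z : lo <= a.z <= hi}."""
    t = a @ x
    if t > hi:
        return x - (t - hi) / (a @ a) * a
    if t < lo:
        return x + (lo - t) / (a @ a) * a
    return x

def cyclic_slab_projections(A, lo, hi, x0, sweeps=200):
    """Row-action feasibility sketch: one row of A is touched per step."""
    x = x0.astype(float)
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            x = project_slab(x, A[i], lo[i], hi[i])
    return x

A = np.array([[1.0, 1.0], [1.0, -1.0]])
lo = np.array([1.0, -0.5])
hi = np.array([2.0, 0.5])
x = cyclic_slab_projections(A, lo, hi, np.zeros(2))
print(np.all(A @ x >= lo - 1e-8) and np.all(A @ x <= hi + 1e-8))
```

Because each step reads one row of A and needs no factorization, the iteration suits the very large sparse systems the abstract has in mind.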

482 citations


Journal ArticleDOI
TL;DR: The main feature of row-action methods is that they are iterative procedures which, without making any changes to the original matrix A, use the rows of A, one row at a time as discussed by the authors.
Abstract: This paper brings together and discusses theory and applications of methods, identified and labelled as row-action methods, for linear feasibility problems (find $x \in {\bf R}^n $, such that $Ax \leqq b$), linearly constrained optimization problems (minimize $f(x)$, subject to $Ax \leqq b$), and some interval convex programming problems (minimize $f(x)$, subject to $c \leqq Ax \leqq b$). The main feature of row-action methods is that they are iterative procedures which, without making any changes to the original matrix A, use the rows of A, one row at a time. Such methods are important and have demonstrated effectiveness for problems with large or huge matrices which do not enjoy any detectable or usable structural pattern, apart from a high degree of sparseness. Fields of application where row-action methods are used in various ways include image reconstruction from projections, operations research and game theory, learning theory, pattern recognition, and transportation theory. A row-action method for the ...

472 citations


Journal ArticleDOI
TL;DR: In this article, a class of probability density estimates can be obtained by penalizing the likelihood by a functional which depends on the roughness of the logarithm of the density.
Abstract: A class of probability density estimates can be obtained by penalizing the likelihood by a functional which depends on the roughness of the logarithm of the density. The limiting case of the estimates as the amount of smoothing increases has a natural form which makes the method attractive for data analysis and which provides a rationale for a particular choice of roughness penalty. The estimates are shown to be the solution of an unconstrained convex optimization problem, and mild natural conditions are given for them to exist. Rates of consistency in various norms and conditions for asymptotic normality and approximation by a Gaussian process are given, thus breaking new ground in the theory of maximum penalized likelihood density estimation.

306 citations



Journal ArticleDOI
TL;DR: This paper provides a recursive procedure to solve knapsack problems; the method differs from classical optimization algorithms of convex programming in that it determines at each iteration the optimal value of at least one variable.
Abstract: The allocation of a specific amount of a given resource among competitive alternatives can often be modelled as a knapsack problem. This model formulation is extremely efficient because it allows convex cost representations with bounded variables to be solved without great computational effort. Practical applications of this problem abound in the fields of operations management, finance, manpower planning, marketing, etc. In particular, knapsack problems emerge in hierarchical planning systems when a first level of decisions needs to be further allocated among specific activities which have been previously treated in an aggregate way. In this paper we provide a recursive procedure to solve such problems. The method differs from classical optimization algorithms of convex programming in that it determines at each iteration the optimal value of at least one variable. Applications and computational results are presented.
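The model in this abstract, allocating a resource among activities with convex costs, can be illustrated with the classical greedy incremental algorithm for the separable integer case: repeatedly grant one unit to the activity with the smallest marginal cost, which is optimal when every cost function is convex. This is a sketch for orientation, not the paper's recursive procedure; the names are ours.

```python
import heapq

def allocate(marginal_costs, total):
    """Minimize sum_i f_i(x_i) s.t. sum_i x_i = total, x_i integer >= 0,
    each f_i convex. marginal_costs[i](k) must return f_i(k+1) - f_i(k),
    which is nondecreasing in k by convexity."""
    n = len(marginal_costs)
    alloc = [0] * n
    heap = [(marginal_costs[i](0), i) for i in range(n)]
    heapq.heapify(heap)
    for _ in range(total):
        cost, i = heapq.heappop(heap)   # cheapest next unit anywhere
        alloc[i] += 1
        heapq.heappush(heap, (marginal_costs[i](alloc[i]), i))
    return alloc

# Example: f_i(x) = c_i * x**2, so the k-th marginal cost is c_i * (2k + 1).
c = [1.0, 2.0, 4.0]
alloc = allocate([lambda k, ci=ci: ci * (2 * k + 1) for ci in c], 7)
print(alloc, sum(alloc))  # → [4, 2, 1] 7
```

Each of the 7 units goes to the currently cheapest marginal increment, so the cheap activity (c = 1) absorbs most of the budget.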

177 citations


Journal ArticleDOI
TL;DR: A method is described for globally minimizing concave functions over convex sets whose defining constraints may be nonlinear that allows the objective function to be lower semicontinuous and nonseparable, and is guaranteed to converge to the global solution.
Abstract: A method is described for globally minimizing concave functions over convex sets whose defining constraints may be nonlinear. The algorithm generates linear programs whose solutions minimize the convex envelope of the original function over successively tighter polytopes enclosing the feasible region. The algorithm does not involve cuts of the feasible region, requires only simplex pivot operations and univariate search computations to be performed, allows the objective function to be lower semicontinuous and nonseparable, and is guaranteed to converge to the global solution. Computational aspects of the algorithm are discussed.

121 citations


Book
01 Jan 1981
TL;DR: First-order optimality conditions for convex programming are developed using a feasible directions approach and prove useful also in studying the stability of perturbed convex programs.
Abstract: First-order optimality conditions for convex programming are developed using a feasible directions approach. Numerical implementations and applications are discussed. The concepts of constancy directions and minimal index set of binding constraints, central to our theory, prove useful also in studying the stability of perturbed convex programs.

83 citations


Journal ArticleDOI
TL;DR: In this paper, a method for finding the minimum for a class of nonconvex and non-differentiable functions consisting of the sum of a convex function and a continuously differentiable function is presented.
Abstract: This paper presents a method for finding the minimum for a class of nonconvex and nondifferentiable functions consisting of the sum of a convex function and a continuously differentiable function. The algorithm is a descent method which generates successive search directions by solving successive convex subproblems. The algorithm is shown to converge to a critical point.
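For one modern special case of this problem class, where the smooth part is 0.5‖Ax−b‖² and the convex nondifferentiable part is a multiple of the ℓ1 norm, the convex subproblem of each descent step is solvable in closed form by soft-thresholding, giving the proximal-gradient (ISTA) iteration. The sketch below is that special case, not the paper's algorithm.

```python
import numpy as np

def ista(A, b, lam, steps=500):
    """Proximal gradient for 0.5*||Ax-b||^2 + lam*||x||_1.
    Each step: a gradient step on the smooth part, then the l1 prox
    (soft-thresholding), i.e. an exactly solvable convex subproblem."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - b)                # gradient of the smooth part
        v = x - g / L
        x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = A @ np.array([1.0, 0.0, -2.0, 0.0, 0.0])  # sparse ground truth
x = ista(A, b, lam=0.1)
print(np.round(x, 2))
```

The iterate decreases the composite objective at every step, matching the descent property claimed in the abstract for the general method.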

74 citations


Journal ArticleDOI
TL;DR: In this paper, the authors formulate and prove various separation principles for convex relations taking values in an order complete vector space; these principles subsume the standard ones.
Abstract: We formulate and prove various separation principles for convex relations taking values in an order complete vector space. These principles subsume the standard ones.

69 citations


Journal ArticleDOI
TL;DR: A duality theory for the problem of finding the properly efficient values of a multiple objective convex program is obtained using arguments and concepts analogous to those used by Rockafellar for scalar valued problems.
Abstract: A duality theory for the problem of finding the properly efficient values of a multiple objective convex program is obtained using arguments and concepts analogous to those used by Rockafellar for scalar valued problems. A key result is that x̄ minimizes a convex, vector valued function f if and only if there is a “zero-like” subgradient of f at x̄. Consequently, if a vector valued convex program with a given perturbation is represented by a convex bifunction, then the properly efficient values of the program are equal to the perturbation function evaluated at zero-like perturbations. A dual program is defined in terms of the adjoint bifunction. Certain closure conditions insure the absence of a duality gap. The dual variables are matrices with the interpretation that the i, j-th element is the value of the ith resource in terms of the jth objective. A concept of optimality called Isermann efficiency is introduced and compared with efficiency and proper efficiency. Isermann efficiency generalizes a concept from Isermann's work on multiple objective linear programs.

60 citations


Journal ArticleDOI
Layne T. Watson
TL;DR: The Chow-Yorke algorithm as discussed by the authors is a scheme for developing homotopy methods that are globally convergent with probability one, which has been successfully applied to a wide range of engineering problems, particularly those for which quasi-Newton and locally convergent iterative techniques are inadequate.

Journal ArticleDOI
TL;DR: In this paper, the convergence of sequences of sets, functions and subdifferentials is studied in finite-dimensional settings, and conditions are established under which this convergence is preserved under various basic operations, including addition and infimal convolution in the case of functions.
Abstract: We study a convergence notion which has particular relevance for convex analysis and lends itself quite naturally to successive approximation schemes in a variety of areas. Motivated particularly by problems in optimization subject to constraints, we develop technical tools necessary for systematic use of this convergence in finite-dimensional settings. Simple conditions are established under which this convergence for sequences of sets, functions and subdifferentials is preserved under various basic operations, including, for example, those of addition and infimal convolution in the case of functions.

Journal ArticleDOI
TL;DR: In this article, it was shown that the level curves of the first eigenfunction in the clamped membrane problem for a convex region Ω are convex, and an alternate proof of the above convexity theorem which makes use of the maximum principle was given.
Abstract: It follows from a result of Brascamp and Lieb that the level curves of the first eigenfunction in the clamped membrane problem for a convex regionΩ are convex. This paper gives an alternate proof of the above convexity theorem which makes use of the maximum principle — a method which also yields pointwise bounds for the curvature of the level curve through an arbitrary point inΩ. The convexity theorem is then used to establish the existence of convex solutions to a related free boundary problem.

Book ChapterDOI
TL;DR: In this article, a branch and bound method for solving non-separable non-convex programming problems where the nonlinearities are piecewise linearly approximated using the standard simplicial subdivision of the hypercube is proposed.
Abstract: This paper suggests a branch and bound method for solving non-separable non-convex programming problems where the nonlinearities are piecewise linearly approximated using the standard simplicial subdivision of the hypercube. The method is based on the algorithm for Special Ordered Sets, used with separable problems, but involves using two different types of branches to achieve valid approximations.

Journal ArticleDOI
TL;DR: In this article, a method is presented which attempts to minimize the weight of a 3-dimensional truss structure subject to displacement, stress, and buckling constraints under multiple load conditions; both the cross section areas of the bars and the geometry (but not the topology) of the structure are permitted to vary during the optimization.

Journal ArticleDOI
TL;DR: In this article, a connection of these results to the theorem of Hardy, Littlewood and Polya on rearrangement of functions is discussed, and by means of the results on the ordering of probability measures a generalization of a theorem on doubly stochastic linear operators due to Ryff is proved.
Abstract: Some characterizations of semiorders defined on the set of all probability measures on $R^n$ by the set of Schur-convex functions and by some subsets of all convex functions are proved. A connection of these results to the theorem of Hardy, Littlewood and Polya on the rearrangement of functions is discussed. Furthermore, by means of the results on the ordering of probability measures a generalization of a theorem on doubly stochastic linear operators due to Ryff is proved.

Journal ArticleDOI
TL;DR: It is shown that a semi-infinite quasi-convex program with certain regularity conditions possesses finitely constrained subprograms with the same optimal value.
Abstract: We show that a semi-infinite quasi-convex program with certain regularity conditions possesses finitely constrained subprograms with the same optimal value. This result is applied to various problems.

Book ChapterDOI
01 Jan 1981
TL;DR: In this article, the ellipsoid algorithm of N. Z. Shor and L. G. Khachiyan can be applied to solve convex quadratic programming problems with integer data in polynomially bounded time.
Abstract: We show that the ellipsoid algorithm of N. Z. Shor and L. G. Khachiyan can be applied to solve convex quadratic programming problems with integer data in polynomially bounded time.
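For orientation, here is a minimal sketch of the ellipsoid machinery itself on a convex quadratic: maintain an ellipsoid {x : (x−c)ᵀP⁻¹(x−c) ≤ 1} containing a minimizer and cut it in half with a subgradient at the center each step. The polynomial-time result cited above additionally requires the rational-data and bit-size analysis of Shor and Khachiyan, which this sketch omits; the names are ours.

```python
import numpy as np

def ellipsoid_min(f, subgrad, x0, R, iters=300):
    """Ellipsoid method sketch (requires dimension n >= 2): minimize convex f,
    starting from the ball ||x - x0|| <= R assumed to contain a minimizer."""
    n = len(x0)
    c = np.array(x0, dtype=float)
    P = (R ** 2) * np.eye(n)                 # ellipsoid shape matrix
    best_x, best_f = c.copy(), f(c)
    for _ in range(iters):
        g = subgrad(c)
        denom = np.sqrt(g @ P @ g)
        if denom < 1e-12:                    # cut too shallow to be useful
            break
        gt = g / denom                       # normalized cut direction
        c = c - (P @ gt) / (n + 1)           # shift center away from the cut
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(P @ gt, gt @ P))
        fc = f(c)
        if fc < best_f:
            best_x, best_f = c.copy(), fc
    return best_x, best_f

# Convex quadratic: f(x) = (x-1)' Q (x-1) with Q positive definite.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
f = lambda x: (x - 1) @ Q @ (x - 1)
grad = lambda x: 2 * Q @ (x - 1)
x, val = ellipsoid_min(f, grad, np.zeros(2), R=10.0)
print(np.round(x, 3), val)
```

The ellipsoid volume shrinks by a fixed factor per cut, which is what yields the polynomial iteration bound in the cited analysis.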

Journal ArticleDOI
TL;DR: In this paper, geometric aspects of linear model theory for variance covariance component estimation are surveyed, and a characterization of the non-negative definite analogue of the MINQUE procedure is presented.
Abstract: Geometric aspects of linear model theory are surveyed as they bear on mean estimation or variance covariance component estimation. It is outlined that notions associated with linear subspaces suffice for those of the customary procedures which are solely based on linear or multilinear algebra. While conceptually simple, these methods do not always respect convexity constraints which naturally arise in variance component estimation. Previous work on negative estimates of variance is reviewed, followed by a more detailed study of the non-negative definite analogue of the MINQUE procedure. Some characterizations are proposed which are based on convex duality theory. Optimal estimators now correspond to (non-linear) projections onto closed convex cones; they are easy to visualise but hard to compute. No ultimate solution can be recommended; instead, the paper concludes with a list of open problems.

Journal ArticleDOI
TL;DR: In this article, the problem of finding the minimum value of the upper hull of n convex functionals on a Hilbert space, subject to convex constraints, is reformulated as the minimum of the "worst" convex combination of these functionals, which eventually yields a saddle-point problem.
Abstract: We consider the problem of finding the minimum value of the upper hull of n convex functionals on a Hilbert space, subject to convex constraints. The problem is reformulated as that of finding the minimum of the "worst" convex combination of these functionals, which eventually yields a saddle-point problem. We propose a new algorithm to solve this problem that simplifies the task of updating the dual variables. Simultaneously, the constraints can be dualized by introducing other dual multipliers. Convergence proofs are given, and a concrete example shows the practical and computational advantages of the proposed algorithm and approach.
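The reformulation described, minimizing the "worst" convex combination, is the standard simplex identity (written here for orientation; Δ_n denotes the probability simplex, notation ours):

```latex
\min_{x \in C}\ \max_{1 \le i \le n} f_i(x)
  \;=\; \min_{x \in C}\ \max_{\lambda \in \Delta_n}\ \sum_{i=1}^{n} \lambda_i f_i(x),
\qquad
\Delta_n = \Bigl\{\, \lambda \in \mathbf{R}^n : \lambda_i \ge 0,\ \sum_{i=1}^{n} \lambda_i = 1 \,\Bigr\},
```

since the inner maximum of a function linear in λ over the simplex is attained at a vertex. Exchanging min and max, when a saddle point exists, yields the saddle-point problem to which the dual-variable updates apply.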

Journal ArticleDOI
TL;DR: The perturbational Lagrangian equation established by Jeroslow in convex semi-infinite programming is derived from Helly's theorem and some prior results on one-dimensional perturbations of convex programs as mentioned in this paper.
Abstract: The perturbational Lagrangian equation established by Jeroslow in convex semi-infinite programming is derived from Helly's theorem and some prior results on one-dimensional perturbations of convex programs.

Book ChapterDOI
01 Jan 1981
TL;DR: The methods discussed are based on local piecewise-linear secant approximations to continuous convex objective functions, which are easily constructed and require only function evaluations rather than derivatives.
Abstract: The methods discussed are based on local piecewise-linear secant approximations to continuous convex objective functions. Such approximations are easily constructed and require only function evaluations rather than derivatives. Several related iterative procedures are considered for the minimization of separable objectives over bounded closed convex sets. Computationally, the piecewise-linear approximation of the objective is helpful in the case that the original problem has only linear constraints, since the subproblems in this case will be linear programs. At each iteration, upper and lower bounds on the optimal value are derived from the piecewise-linear approximations. Convergence to the optimal value of the given problem is established under mild hypotheses. The method has been successfully tested on a variety of problems, including a water supply problem with more than 900 variables and 600 constraints.
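A one-dimensional sketch of the sandwich idea, assuming only function evaluations of a convex f at sorted breakpoints: the best breakpoint value gives an upper bound on the minimum, while extending each interior piece's neighbouring secant chords until they meet gives a lower bound, because secant slopes of a convex function are nondecreasing. The function names and this refinement-free setup are ours, not the paper's procedure.

```python
import numpy as np

def sandwich_bounds(f, t):
    """Secant-based bounds for min of convex f over [t[0], t[-1]], using only
    values at the sorted breakpoints t. Assumes the minimizer lies strictly
    inside the grid and slopes strictly increase around each interior piece."""
    y = np.array([f(ti) for ti in t])
    s = np.diff(y) / np.diff(t)              # secant slopes, nondecreasing
    upper = y.min()                          # best sampled value
    lower = upper
    for i in range(1, len(t) - 2):
        # Left chord extended right from (t[i], y[i]) with slope s[i-1] and
        # right chord extended left from (t[i+1], y[i+1]) with slope s[i+1]
        # both underestimate f on [t[i], t[i+1]]; their crossing bounds it.
        xs = (y[i + 1] - y[i] + s[i - 1] * t[i] - s[i + 1] * t[i + 1]) / (s[i - 1] - s[i + 1])
        lower = min(lower, y[i] + s[i - 1] * (xs - t[i]))
    return lower, upper

t = np.linspace(-2.0, 3.0, 11)
lo, up = sandwich_bounds(lambda x: x * x, t)
print(lo, up)  # brackets the true minimum 0 of x**2
```

Refining the grid around the incumbent breakpoint tightens both bounds, which is the iterative scheme the abstract describes.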

Journal ArticleDOI
Eric Rosenberg
TL;DR: Solving a minimization convex program by sequentially solving a minimization convex approximating subproblem and then executing a line search on an exact penalty function generates a sequence of estimates that converges to a solution of the given problem.
Abstract: We consider solving a minimization convex program by sequentially solving a minimization convex approximating subproblem and then executing a line search on an exact penalty function. Each subproblem is constructed from the current estimate of a solution of the given problem, possibly together with other information. Under mild conditions, solving the current subproblem generates a descent direction for the exact penalty function. Minimizing the exact penalty function along the current descent direction provides a new estimate of a solution, and a new subproblem is formed. For any arbitrary starting estimate, this scheme generates a sequence of estimates that converges to a solution of the given problem. Moreover, the functions defining the given problem and each subproblem need not be differentiable.

Journal ArticleDOI
TL;DR: In this paper, a minor modification of the usual Lagrangian function (unlike that of the augmented Lagrangians), plus a limiting operation, allows one to close duality gaps even in the absence of a Kuhn-Tucker vector.
Abstract: For convex optimization in R^n, we show how a minor modification of the usual Lagrangian function (unlike that of the augmented Lagrangians), plus a limiting operation, allows one to close duality gaps even in the absence of a Kuhn-Tucker vector [see the introductory discussion, and see the discussion in Section 4 regarding Eq. (2)]. The cardinality of the convex constraining functions can be arbitrary (finite, countable, or uncountable).


Journal ArticleDOI
TL;DR: In this article, the authors considered the problem of minimizing a linear functional p on a closed, convex subset of Euclidean m-space, and investigated upper semicontinuity of the set of optimal solutions and the continuous dependence of the optimal value on p.
Abstract: In this paper, we treat optimization problems of the following type: minimize a linear functional p on a closed, convex subset of Euclidean m-space. (Semi-infinite linear optimization problems are of this type.) We investigate upper semicontinuity of the set of optimal solutions and the continuous dependence of the optimal value on p. Thereby we essentially work with the concept of characteristic cones of closed, convex sets.



Journal ArticleDOI
TL;DR: In this article, conditions for the admissibility of spectral synthesis are formulated in terms of the original convex domains, which are allowed to be unbounded.
Abstract: The problem of approximating solutions of systems of homogeneous convolution equations and the more general problem of spectral synthesis are considered in the situation where the original convex domains are unbounded. Conditions for admissibility of spectral synthesis are formulated in terms of the domains. Bibliography: 8 titles.

Journal ArticleDOI
TL;DR: A duality approach is developed, based on objective function parametrizations, to characterize this difference under rather general circumstances, and it is shown that nonstandard polynomial Kuhn-Tucker vectors exist for any convex program having finite value.
Abstract: Unlike elementary finite linear programming, the optimal program value of a convex optimization problem is generally different from the vector product of the marginal price vector and the resource right-hand side vector. In this paper, a duality approach is developed, based on objective function parametrizations, to characterize this difference under rather general circumstances. The approach generalizes the concept of Kuhn-Tucker vectors of a convex program. It is shown that nonstandard polynomial Kuhn-Tucker vectors exist for any convex program having finite value. Two examples illustrate the procedure.