
Showing papers on "Convex optimization published in 1973"



Journal ArticleDOI
TL;DR: For nonlinear programming problems with equality constraints, Hestenes and Powell independently proposed a dual multiplier method, and Powell showed that its rate of convergence is linear if one starts with a sufficiently high penalty factor and sufficiently near a local solution satisfying the usual second-order sufficient conditions for optimality; this paper furnishes the corresponding method for inequality-constrained problems.
Abstract: For nonlinear programming problems with equality constraints, Hestenes and Powell have independently proposed a dual method of solution in which squares of the constraint functions are added as penalties to the Lagrangian, and a certain simple rule is used for updating the Lagrange multipliers after each cycle. Powell has essentially shown that the rate of convergence is linear if one starts with a sufficiently high penalty factor and sufficiently near to a local solution satisfying the usual second-order sufficient conditions for optimality. This paper furnishes the corresponding method for inequality-constrained problems. Global convergence to an optimal solution is established in the convex case for an arbitrary penalty factor and without the requirement that an exact minimum be calculated at each cycle. Furthermore, the Lagrange multipliers are shown to converge, even though the optimal multipliers may not be unique.
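The multiplier update described in the abstract can be sketched as follows; the quadratic objective, the single linear constraint, the penalty factor c, and the grid-search inner solver are all illustrative choices, not the paper's:

```python
# Sketch of a multiplier-method cycle for an inequality-constrained convex
# problem: minimize f(x) subject to g(x) <= 0. The data below (a 1-D
# quadratic with one linear constraint) and the grid-search inner solver
# are toy choices for illustration.

def f(x):
    return (x - 3.0) ** 2        # convex objective, unconstrained min at x = 3

def g(x):
    return x - 1.0               # constraint x <= 1, written as g(x) <= 0

def minimize_augmented(lam, c):
    # inner step: minimize the augmented Lagrangian in x (grid search here)
    xs = [i / 1000.0 for i in range(-2000, 4001)]
    def L(x):
        # augmented penalty term for inequalities:
        # (max(0, lam + c*g(x))**2 - lam**2) / (2*c)
        return f(x) + (max(0.0, lam + c * g(x)) ** 2 - lam ** 2) / (2.0 * c)
    return min(xs, key=L)

lam, c = 0.0, 10.0
for _ in range(20):
    x = minimize_augmented(lam, c)
    lam = max(0.0, lam + c * g(x))   # simple multiplier update after each cycle

print(round(x, 2), round(lam, 1))    # prints: 1.0 4.0  (KKT multiplier is 4)
```

Note that the multipliers converge without requiring an exact inner minimum, which is the behavior the paper establishes in the convex case.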

674 citations


01 Jan 1973
TL;DR: Several convex mappings of linear operators on a Hilbert space into the real numbers are derived, with applications to the Wigner-Yanase-Dyson conjecture, proved here, and to the strong subadditivity of quantum mechanical entropy.
Abstract: Several convex mappings of linear operators on a Hilbert space into the real numbers are derived, an example being A → −Tr exp(L + ln A). Some of these have applications to physics, specifically to the Wigner-Yanase-Dyson conjecture, which is proved here, and to the strong subadditivity of quantum mechanical entropy, which will be proved elsewhere.

176 citations


Journal ArticleDOI
TL;DR: In this article, duality conditions necessary and sufficient for infinite horizon optimality are derived under a set of general axioms, and dual prices with the required properties are inductively constructed in each period as supports to the state evaluation function.
Abstract: Often it is desirable to formulate certain decision problems without specifying a cut-off date and terminal conditions which are sometimes felt to be arbitrary. This paper examines the duality theory that goes along with the kind of open-ended convex programming models frequently encountered in mathematical economics and operations research. Under a set of general axioms, duality conditions necessary and sufficient for infinite horizon optimality are derived. The proof emphasizes the close connection between duality theory for infinite horizon convex models and dynamic programming. Dual prices with the required properties are inductively constructed in each period as supports to the state evaluation function.

145 citations


Journal ArticleDOI
TL;DR: This study generalizes the formulation of symmetric duality to include the case where the constraints of the inequality type are defined via closed convex cones and their polars, and shows that every strongly convex function achieves a minimum value over any closed convex cone at a unique point.
Abstract: In this study we generalize the formulation of symmetric duality introduced by Dantzig, Eisenberg, and Cottle to include the case where the constraints of the inequality type are defined via closed convex cones and their polars. The new formulation retains the symmetric properties of the original programs. Under suitable convexity/concavity assumptions we generalize the known results about symmetric duality. The case where the function involved is strongly convex/strongly concave is also treated and Karamardian's result in this case is generalized. As a result, we show that every strongly convex function achieves a minimum value over any closed convex cone at a unique point. Some special cases of symmetric programs are then considered, leading to generalizations of Wolfe's duality as well as generalizations of quadratic and linear programming formulations.

135 citations


Journal ArticleDOI
TL;DR: A new algorithm is proposed, the $\varepsilon$-subgradient method, a large-step, double iterative algorithm which converges rapidly under very general assumptions and contains as a special case a minimax algorithm due to Pshenichnyi.
Abstract: In this paper we consider the numerical solution of convex optimization problems with nondifferentiable cost functionals. We propose a new algorithm, the $\varepsilon$-subgradient method, a large-step, double iterative algorithm which converges rapidly under very general assumptions. We discuss the application of the algorithm in some problems of nonlinear programming and optimal control and we show that the $\varepsilon$-subgradient method contains as a special case a minimax algorithm due to Pshenichnyi [5].
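For readers unfamiliar with subgradient schemes, a plain (not ε-) subgradient iteration on a nondifferentiable convex cost illustrates the setting; the cost |x − 2| and the 1/k step rule are illustrative choices, not the paper's method:

```python
# A plain subgradient iteration on the nondifferentiable convex cost
# f(x) = |x - 2| (toy example; the paper's epsilon-subgradient method is a
# more elaborate large-step scheme in this family).

def f(x):
    return abs(x - 2.0)

def subgrad(x):
    # any element of the subdifferential of f at x
    if x > 2.0:
        return 1.0
    if x < 2.0:
        return -1.0
    return 0.0               # at the kink, 0 is a valid subgradient

x = -5.0
best = f(x)
for k in range(1, 2001):
    x -= (1.0 / k) * subgrad(x)   # diminishing steps guarantee convergence
    best = min(best, f(x))

print(round(best, 2))             # prints: 0.0
```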

117 citations


Journal ArticleDOI
TL;DR: The characterization of directional derivatives for three major types of extremal-value functions is reviewed and the characterization for the completely convex case is used to construct a robust and convergent feasible direction algorithm.
Abstract: Several techniques in mathematical programming involve the constrained optimization of an extremal-value function. Such functions are defined as the extremal value of a related parameterized optimization problem. This paper reviews and extends the characterization of directional derivatives for three major types of extremal-value functions. The characterization for the completely convex case is then used to construct a robust and convergent feasible direction algorithm. Such an algorithm has applications to the optimization of large-scale nonlinear decomposable systems.

86 citations


Journal ArticleDOI
TL;DR: The GDDS method can be used to solve minimax problems in convex programming, either by reducing them to the solution of a sequence of convex inequalities or by using the method of penalty functions with expansion schemes.
Abstract: As we see from Theorem 2, if the GDDS process has the given properties, it ensures convergence to the minimum of the functional like that of a geometrical progression. In particular, the necessary properties of the GDDS process are ensured by the conditions of Theorem 1 if m = m* (see Theorems 2, 3 of [1]). When we reduce a problem of solving a system of equations or inequalities to a minimization problem, m* is unknown. Thus, the GDDS method must be effective in solving systems of nonlinear equations and inequalities. The essential feature of this method is that the guaranteed denominator of the geometrical progression $$q = 1/\sqrt[n]{\alpha}$$ is independent of the degree of pittedness of f(x), which makes it possible to use it successfully to solve systems of equations which are nearly degenerate. As α increases the effectiveness of the method in general decreases, both because q becomes close to unity and because a large memory is required to store the matrix B_k. The modification proposed for the case when m* is unknown can be used to solve minimax problems in the general problem of convex programming, either by reducing them to the solution of a sequence of convex inequalities or by using the method of penalty functions with expansion schemes.
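Convergence "like that of a geometrical progression" means the error contracts by a fixed factor q < 1 at each step. A small sketch (not the GDDS process itself; the quadratic and step size are illustrative) makes the notion concrete:

```python
# Not the GDDS process itself: a toy demonstration that "convergence like a
# geometrical progression" means err_{k+1} <= q * err_k for a fixed q < 1.
# Exact-gradient descent on f(x, y) = x^2 + a*y^2 contracts by q = 9/11 here.

a = 10.0                          # illustrative conditioning parameter
step = 2.0 / (2.0 + 2.0 * a)      # classical 2/(mu + L) step for this quadratic

x, y = 1.0, 1.0
errs = []
for _ in range(50):
    x -= step * 2.0 * x           # gradient of x^2 is 2x
    y -= step * 2.0 * a * y       # gradient of a*y^2 is 2ay
    errs.append((x * x + y * y) ** 0.5)

ratios = [errs[i + 1] / errs[i] for i in range(30, 40)]
print(all(r < 1.0 for r in ratios))   # prints: True (each ratio equals 9/11)
```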

72 citations


Journal ArticleDOI
TL;DR: A saddle point theory in terms of extended Lagrangian functions is presented for nonconvex programs, and the results parallel those for convex programs conjoined with the usual Lagrangian formulation.
Abstract: A saddle point theory in terms of extended Lagrangian functions is presented for nonconvex programs. The results parallel those for convex programs conjoined with the usual Lagrangian formulation.

70 citations


Journal ArticleDOI
TL;DR: In this paper, a branch and bound technique is used to identify the global minimum extreme point of a convex polyhedron, and a linear underestimator for the constrained concave objective function is developed.
Abstract: A general algorithm is developed for minimizing a well defined concave function over a convex polyhedron. The algorithm is basically a branch and bound technique which utilizes a special cutting plane procedure to identify the global minimum extreme point of the convex polyhedron. The indicated cutting plane method is based on Glover's general theory for constructing legitimate cuts to identify certain points in a given convex polyhedron. It is shown that the crux of the algorithm is the development of a linear underestimator for the constrained concave objective function. Applications of the algorithm to the fixed-charge problem, the separable concave programming problem, the quadratic problem, and the 0-1 mixed integer problem are discussed. Computer results for the fixed-charge problem are also presented.
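The linear-underestimator idea is easy to illustrate in one dimension: a concave function lies on or above every chord, so the chord through the endpoints of an interval is a valid linear underestimator there (the function and interval below are illustrative, not from the paper):

```python
# A concave function lies on or above each of its chords, so the chord
# through the interval endpoints is a linear underestimator on that interval.
# f = sqrt and the interval [0, 4] are illustrative choices.

import math

def chord_underestimator(f, a, b):
    # linear function agreeing with f at a and b
    slope = (f(b) - f(a)) / (b - a)
    return lambda x: f(a) + slope * (x - a)

f = math.sqrt                      # concave on [0, 4]
under = chord_underestimator(f, 0.0, 4.0)

ok = all(under(x) <= f(x) + 1e-12 for x in [i / 10.0 for i in range(41)])
print(ok)                          # prints: True
```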

46 citations


Proceedings ArticleDOI
01 Dec 1973
TL;DR: A combined primal-dual and penalty method is given for solving the nonlinear programming problem and the method is shown to be superior to ordinary penalty methods.
Abstract: A combined primal-dual and penalty method is given for solving the nonlinear programming problem. The algorithm generalizes the "method of multipliers" and is applicable to problems with both equality and inequality constraints. The algorithm is defined for a broad class of "penalized Lagrangians," and is shown to be globally convergent when applied to the convex programming problem. The duality aspects are explored, leading to geometrical interpretations of the method and its relationship to generalized Lagrange multipliers. The rate of convergence is given and the method is shown to be superior to ordinary penalty methods.

Journal ArticleDOI
TL;DR: In this article, Kuhn-Tucker necessary and sufficient conditions for the nonlinear programming problem are applied to the project cost-duration analysis problem for project networks with convex costs, which gives an optimality curve for the problem.
Abstract: Kuhn-Tucker necessary and sufficient conditions for the nonlinear programming problem are applied to the project cost-duration analysis problem for project networks with convex costs. These conditions give an optimality curve for the problem. A solution is optimal if and only if when the values for activities are plotted on their optimality diagram, the values lie on the optimality curve. An algorithm is given here when the cost is convex and quadratic. The algorithm is also generalized to the case when the cost is convex and piecewise quadratic. The algorithm can be used to solve problems with convex cost functions by approximating them by piecewise quadratic functions.

Journal ArticleDOI
TL;DR: Some transposition theorems for real convex functions on real finite-dimensional spaces, with inequality ordering, are extended to convex functions mapping real Banach spaces into Banach spaces, with partial orderings and convexity defined by closed convex cones.
Abstract: Some transposition theorems for real convex functions on real finite-dimensional spaces, with inequality ordering, are extended to convex functions mapping real Banach spaces into Banach spaces, with partial orderings and convexity defined by closed convex cones. Applications to optimization and optimal control are discussed.

Journal ArticleDOI
TL;DR: The present approach uses the support planes of the constraint region to transform the standard convex program into an equivalent linear program, and the duality theory of infinite linear programming shows how to construct a new dual program of bilinear type.
Abstract: The theme of this paper is the application of linear analysis to simplify and extend convex analysis. The central problem treated is the standard convex program: minimize a convex function subject to inequality constraints on other convex functions. The present approach uses the support planes of the constraint region to transform the convex program into an equivalent linear program. Then the duality theory of infinite linear programming shows how to construct a new dual program of bilinear type. When this dual program is transformed back into the convex function formulation it concerns the minimax of an unconstrained Lagrange function. This result is somewhat similar to the Kuhn-Tucker theorem. However, no constraint qualifications are needed and yet perfect duality holds between the primal and dual programs.

Journal ArticleDOI
TL;DR: In this paper, an algorithm consisting of a sequence of approximating convex programs which converges to a Kuhn-Tucker point is described, and the solution of nonconvex nonlinear programs with sums of r-convex functions is considered.
Abstract: The solution of nonconvex nonlinear programs with sums of r-convex functions is considered. An algorithm consisting of a sequence of approximating convex programs which converges to a Kuhn-Tucker point is described.

Journal ArticleDOI
TL;DR: In this paper, the authors present saddle value optimality and stationary optimality criteria for convex programs under suitable constraint qualification and obtain a generalized form of the Kuhn-Tucker conditions.

01 Nov 1973
TL;DR: In this article, it was shown that concave-convex fractional programs can also be represented by a single convex program, and basic duality theorems of convex programming can be extended to concave-convex fractional programs.
Abstract: Recently concave-convex fractional programs were related to parametric convex programs by Jagannathan, Dinkelbach and Geoffrion. It will be shown that these problems can also be represented by a single convex program. Thus basic duality theorems of convex programming can be extended to concave-convex fractional programs. In a more particular case an extension of a converse duality theorem of quadratic programming can be proved. Finally, for Dinkelbach's algorithm solving the equivalent parametric program, the rate of convergence as well as error estimates are determined. Some modifications using duality are also proposed.
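Dinkelbach's algorithm, mentioned in the abstract, alternates between solving the parametric problem max f(x) − λg(x) and resetting λ to the current ratio. A hedged sketch on a toy one-dimensional fractional program (the data, interval, and grid-search subsolver are illustrative):

```python
# Sketch of Dinkelbach's parametric scheme for maximizing a ratio f(x)/g(x)
# with concave f and convex positive g. The toy data, interval [0, 2], and
# the grid-search subproblem solver are illustrative, not from the paper.

def f(x):
    return x + 1.0                 # concave numerator

def g(x):
    return x * x + 1.0             # convex, positive denominator

def solve_parametric(lam):
    # max_x f(x) - lam * g(x) over [0, 2], by grid search for simplicity
    xs = [i / 10000.0 for i in range(0, 20001)]
    return max(xs, key=lambda x: f(x) - lam * g(x))

lam = 0.0
for _ in range(12):
    x = solve_parametric(lam)
    lam = f(x) / g(x)              # Dinkelbach update: set lam to current ratio

print(round(lam, 3))               # prints: 1.207, i.e. (1 + sqrt(2)) / 2
```

The rapid settling of λ reflects the convergence-rate results the abstract refers to.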

Journal ArticleDOI
TL;DR: In this article, the Radon measures on the locally compact Hausdorff space T were studied and the subfamilies of measures of compact support and of positive measures were studied.
Abstract: F(s, t) = w_n(s) w_0(t) ∫ dt_1 w_1(t_1) ∫ dt_2 w_2(t_2) · · ·

Journal ArticleDOI
Norman Zadeh1
01 Jan 1973-Networks
TL;DR: It is observed that sample cost functions for the pipeline problem are discretely convex, i.e., their graphs are discrete subsets of graphs of convex functions, and it is shown that in certain instances, only a subset of the data for each cost function is relevant.
Abstract: It is observed that sample cost functions for the pipeline problem [1] are discretely convex, i.e., their graphs are discrete subsets of graphs of convex functions. As a result, it is shown that the pipeline problem may be attacked by using either a minimum cost flow approach, or a combination of dynamic programming and sorting. Problems with concave cost functions are shown to be relatively easy. Even for problems which are neither convex nor concave, it is shown that in certain instances, only a subset of the data for each cost function is relevant. Error bounds are presented when approximations to the original cost functions are used. Results obtained in [4] for a related problem are summarized.
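Discrete convexity of equally spaced sample costs can be checked by verifying that the first differences are nondecreasing; a small sketch with made-up data:

```python
# Costs sampled at equally spaced capacities are "discretely convex" when
# they lie on the graph of some convex function, i.e. the first differences
# are nondecreasing. The data below is made up for illustration.

def discretely_convex(costs):
    diffs = [costs[i + 1] - costs[i] for i in range(len(costs) - 1)]
    return all(diffs[i] <= diffs[i + 1] for i in range(len(diffs) - 1))

print(discretely_convex([10, 6, 4, 4, 6, 10]))   # prints: True
print(discretely_convex([10, 4, 6, 3, 6, 10]))   # prints: False
```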

Journal ArticleDOI
01 Feb 1973
TL;DR: In this paper, it was shown that the sum of two starlike functions may have an infinite number of zeros, and that the sum of two convex functions may be at least three-valent.
Abstract: In this paper it is shown that the sum of two starlike functions may have an infinite number of zeros and that the sum of two convex functions may be at least three-valent. Furthermore, the convex sum of two odd convex functions is studied.

Journal ArticleDOI
TL;DR: The calculation of linear one-sided approximations is considered, using the discrete Lp norm; for p = 1 and p = ∞ this gives rise to a linear programming problem, and for 1 < p < ∞ to a convex programming problem.
Abstract: The calculation of linear one-sided approximations is considered, using the discrete Lp norm. For p = 1 and p = ∞, this gives rise to a linear programming problem, and for 1 < p < ∞, to a convex programming problem. Numerical results are presented, including some applications to the approximate numerical solution of ordinary differential equations, with error bounds.



Journal ArticleDOI
TL;DR: In this paper, it was shown that if a quasi-convex function defined on an open convex set satisfies some mild restrictions, then it is a convex function on a convex compact subset provided C is large enough.
Abstract: In the paper we show that if a quasi-convex function defined on an open convex set satisfies some mild restrictions, then it is a convex function on a convex compact subset provided C is large enough. We investigate specially the case when the function is separable. Our results make possible the application of the methods of Fiacco and McCormick to non-linear programming problems where the constraints are quasi-convex.

Journal ArticleDOI
TL;DR: In this paper, the authors developed regularity conditions for a class of convex programming problems with convex objective functions and linear constraints and applied them to nonlinear stochastic programs with recourse.
Abstract: This paper develops regularity conditions for a class of convex programming problems with convex objective functions and linear constraints. The objective functions considered are lower semicontinuous and have bounded level sets. The constraint set may be unbounded. Results pertaining to the solvability, stability and dualizability of such programs are obtained. The results are then applied to a class of nonlinear stochastic programs with recourse.



Journal ArticleDOI
TL;DR: A convex programming model is formulated that allocates submarine-launched ballistic missiles (SLBMs) to launch areas and simultaneously provides an optimal targeting pattern against a specified set of bomber bases.
Abstract: This paper formulates a convex programming model allocating submarine-launched ballistic missiles (SLBMs) to launch areas and providing simultaneously an optimal targeting pattern against a specified set of bomber bases. Flight times of missiles from launch areas to bases vary and targets decrease in value over time. A nonseparable concave objective function is given for expected destruction of bombers. An example is presented.

Proceedings ArticleDOI
27 Aug 1973
TL;DR: This presentation reports some preliminary computational experience with an experimental code of the Central Cutting Plane Algorithm, an algorithm for solving the convex programming problem.
Abstract: The Central Cutting Plane Algorithm is an algorithm for solving the convex programming problem. Its convergence and other properties have been established elsewhere [2,6]; here, we will state, but not prove, the major theoretical results concerning the algorithm. The purpose of this presentation is to report some preliminary computational experience with an experimental code of the algorithm.
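The Central Cutting Plane Algorithm itself is specified in the cited references; the following sketch shows only the generic (Kelley-style) cutting-plane idea it refines: minimize a convex function by repeatedly minimizing a piecewise-linear model built from tangent cuts. The one-dimensional problem and the grid-search model minimization are illustrative:

```python
# Generic (Kelley-style) cutting-plane sketch, a simpler relative of central
# cutting plane methods: minimize convex f on [lo, hi] by refining a
# piecewise-linear lower model built from tangent cuts. Toy 1-D problem;
# the model is minimized by grid search for simplicity.

def f(x):
    return (x - 1.5) ** 2 + 1.0    # convex, minimum value 1.0 at x = 1.5

def df(x):
    return 2.0 * (x - 1.5)         # derivative supplies the cut slope

lo, hi = -4.0, 4.0
grid = [lo + i * (hi - lo) / 2000.0 for i in range(2001)]
cuts = []                          # (xk, f(xk), f'(xk)) triples
x = lo
for _ in range(25):
    cuts.append((x, f(x), df(x)))
    model = lambda y: max(fx + gx * (y - xk) for xk, fx, gx in cuts)
    x = min(grid, key=model)       # minimize the current lower model

print(abs(x - 1.5) < 0.05, abs(f(x) - 1.0) < 0.01)   # prints: True True
```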