Showing papers on "Convex optimization published in 1968"


Journal ArticleDOI
TL;DR: In this paper, formulas are derived for the conjugates of convex integral functionals on Banach spaces of measurable or continuous vector-valued functions; these formulas imply the weak compactness of certain convex sets of summable functions and thus have applications in the existence theory and duality theory for various optimization problems.
Abstract: Formulas are derived in this paper for the conjugates of convex integral functionals on Banach spaces of measurable or continuous vector-valued functions. These formulas imply the weak compactness of certain convex sets of summable functions, and they thus have applications in the existence theory and duality theory for various optimization problems. They also yield formulas for the subdifferentials of integral functionals, as well as characterizations of supporting hyperplanes and normal cones.

548 citations



Journal ArticleDOI
TL;DR: In this paper, duality theorems for conjugate convex functions on real linear topological spaces (proved in finite dimensions by Fenchel and in the general case by Moreau) are established and applied to mathematical programming, the calculus of variations, saddle point theorems, and optimal control problems.
Abstract: Let X be a real linear topological space and X* its conjugate. We denote by ⟨x*, x⟩ the value of the linear functional x* on the element x. For real functions f on X we introduce two operations: the ordinary sum f₁ + f₂ and the convolution (f₁ ⊕ f₂)(x) = inf{f₁(x₁) + f₂(x₂) : x₁ + x₂ = x}, and also the transformation associating with f its dual function f* on X*, which is obtained from f by the formula f*(x*) = sup{⟨x*, x⟩ − f(x) : x ∈ X}. The following propositions hold. 1) The operation * is involutory: f** = f if and only if f is a convex function and lower semicontinuous on X. 2) (f₁ + f₂)* = f₁* ⊕ f₂*. 3) Under certain additional assumptions, (f₁ ⊕ f₂)* = f₁* + f₂*. These theorems were proved for a finite-dimensional space by Fenchel [93] and in the general case by Moreau [60]. Chapter I is concerned with proving these theorems and generalizations of them. Chapter II is concerned with their application to mathematical programming and the calculus of variations. Proofs are given of very general duality theorems of mathematical programming and saddle point theorems. Constructions are then given which lead to extensions of optimal control problems, and an existence theorem is proved for these problems. Chapter III contains an investigation of problems of approximating a given element and a given set by an approximating set, using methods of the theory of duality of convex functions. Duality theorems for some geometric characteristics of sets in X are derived at the end of the chapter.
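The conjugacy operation in these propositions can be illustrated numerically. Below is a minimal sketch (my own, not from the paper) that discretizes f*(y) = sup_x(yx − f(x)) on a grid and checks proposition 1, f** = f, for the convex lower-semicontinuous function f(x) = x²/2:

```python
import numpy as np

# Grid discretization of the Fenchel conjugate f*(y) = sup_x (y*x - f(x)).
xs = np.linspace(-5.0, 5.0, 2001)

def conjugate(f_vals, grid, ys):
    # For each y, take the sup over grid points x of y*x - f(x).
    return np.array([np.max(y * grid - f_vals) for y in ys])

f = 0.5 * xs**2                       # f(x) = x^2/2, convex and lsc
f_star = conjugate(f, xs, xs)         # classical result: f*(y) = y^2/2
f_star_star = conjugate(f_star, xs, xs)

# Proposition 1: f** = f for convex lower-semicontinuous f.
inner = slice(400, 1601)              # restrict to [-3, 3] to avoid grid truncation
assert np.allclose(f_star[inner], 0.5 * xs[inner]**2, atol=1e-2)
assert np.allclose(f_star_star[inner], f[inner], atol=1e-2)
print("biconjugate matches f on the interior grid")
```

The grid sup only approximates the true supremum, so the check is restricted to the interior of the domain, where the maximizer lies inside the grid.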

87 citations


Journal ArticleDOI
TL;DR: In this article, necessary and sufficient conditions for optimality of convex programs in abstract spaces are given, together with results on optimality for linear problems in the calculus of variations and optimal control theory.

72 citations



Journal ArticleDOI
TL;DR: The main result proved is that the ratio of the square of a nonnegative convex function to a strictly positive concave function is convex over a convex domain.
Abstract: The main result proved in this paper is that the ratio of the square of a nonnegative convex function to a strictly positive concave function is convex over a convex domain. Some particular cases of this result and a few applications to mathematical programming are also considered.
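The stated result can be spot-checked numerically. A small sketch (the choices of f, g, and domain are mine) testing midpoint convexity of h = f²/g for a nonnegative convex f and a strictly positive concave g on a convex domain:

```python
import numpy as np

# f(x) = |x| is nonnegative and convex; g(x) = 2 - x^2 is strictly
# positive and concave on the convex domain (-1, 1).
f = lambda x: np.abs(x)
g = lambda x: 2.0 - x**2
h = lambda x: f(x)**2 / g(x)          # the theorem asserts h is convex

rng = np.random.default_rng(0)
x = rng.uniform(-0.99, 0.99, size=10000)
y = rng.uniform(-0.99, 0.99, size=10000)

# Midpoint convexity: h((x+y)/2) <= (h(x) + h(y)) / 2 (up to roundoff).
lhs = h((x + y) / 2)
rhs = 0.5 * (h(x) + h(y))
assert np.all(lhs <= rhs + 1e-12)
print("midpoint convexity holds on all sampled pairs")
```

For continuous functions, midpoint convexity on a dense sample is a reasonable numerical proxy for convexity, though of course not a proof.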

37 citations


Journal ArticleDOI
TL;DR: Classical convex programming is extended to problems formulated on an abstract normed space, and the utility of the normed-space formulation is illustrated by some simple applications.
Abstract: This paper describes some extensions of classical convex programming to problems formulated on an abstract normed space. The utility of the normed-space formulation is illustrated by some simple applications.


Journal ArticleDOI
TL;DR: In this article, a simple iterative procedure is described for approximating one convex function, relative to a given constraint set, by another convex function with the same constraint set plus an appropriate linear function.
Abstract: The report describes a simple iterative procedure for approximating one convex function, relative to a given constraint set, by another convex function, having the same constraint set, plus an appropriate linear function. This procedure is particularly useful when efficient digital computer programs are already available for minimizing functions that differ from some other convex function by a linear function. A theorem is presented that gives sufficient conditions for such a procedure to succeed. (Author)
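One way such an iteration can look (a toy reconstruction of my own, not the report's exact procedure): given a target convex f and a base convex g that is cheap to minimize, repeatedly minimize g plus the linear term that matches the slope of f − g at the current iterate.

```python
# Toy sketch: minimize convex f(x) = x^4 + x^2 by repeatedly minimizing the
# base convex g(x) = x^2 plus a linear correction. argmin_x (x^2 + l*x) = -l/2
# in closed form, so each step is cheap.
f = lambda x: x**4 + x**2
f_prime = lambda x: 4 * x**3 + 2 * x
g_prime = lambda x: 2 * x

x = 0.5                               # starting point
for _ in range(60):
    l = f_prime(x) - g_prime(x)       # slope of f - g at the current iterate
    x = -l / 2                        # exact minimizer of g(x) + l*x
assert abs(x) < 1e-9                  # converged to the minimizer of f at x = 0
print("converged to x ≈ 0, the minimizer of f")
```

At a fixed point the linear correction makes the slopes of f and g + l·x agree, so the iterate is stationary for f; whether such an iteration converges in general is exactly the kind of sufficient condition the report's theorem addresses.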


01 Jul 1968
TL;DR: A duality theorem is proved, along with the convergence of a solution algorithm modeled on the duality theorem and on the simplex method of linear programming.
Abstract: Let K be a closed convex set in E^(m+1) and let L = {P = (P₀, ..., P_m) : P₁ = P₂ = ... = P_m = 0}. Then for the simple problem: minimize P₀ subject to P = (P₀, P₁, ..., P_m) ∈ K ∩ L, we prove a duality theorem and the convergence of a solution algorithm modeled on the duality theorem and the simplex method of linear programming respectively. Specialization of this general model to linear programming, convex programming, generalized programming, control theory, and the decomposition approach to mathematical programming yields the appropriate duality theorems and solution algorithms in each case.
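The linear-programming specialization can be illustrated with a tiny primal-dual pair (the example data are mine). Strong duality says the primal minimum equals the dual maximum:

```python
import numpy as np

# Primal: min c.x  subject to  A x = b, x >= 0  (one equality, two variables).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])

# Enumerate the basic feasible solutions (one basic variable per constraint).
primal_vals = []
for j in range(2):
    x = np.zeros(2)
    x[j] = b[0] / A[0, j]
    if x[j] >= 0:
        primal_vals.append(c @ x)
primal_opt = min(primal_vals)         # optimum at x = (1, 0)

# Dual: max b.y  subject to  A^T y <= c, i.e. y <= min(c) here.
dual_opt = b[0] * min(c)

assert abs(primal_opt - dual_opt) < 1e-12
print("primal optimum = dual optimum =", primal_opt)
```

Enumerating basic solutions only works at this toy scale; it stands in for the simplex-style pivoting that the paper's general algorithm models.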



02 May 1968
TL;DR: In this article, a parametric programming algorithm is developed for solving a sequence of closely related convex programming problems, as an extension of the general convex programming algorithm of Hartley and Hocking (1963).
Abstract: A parametric programming algorithm is developed which allows one to solve a sequence of closely related convex programming problems sequentially. The algorithm is developed as an extension of the general convex programming algorithm of Hartley and Hocking (1963). In terms of statistical response surface analysis, convex parametric problems arise when it is desirable to optimize a number of response surface functions simultaneously within a given experimental region. In general this is not possible; however, one may choose to optimize a specified one of these response surface functions subject to the condition that the others achieve certain desired activity levels. In many situations these activity levels are not clearly defined, and a parametric study over several combinations of values of meaningful activity levels is indicated. Extensions of the parametric procedure allow for the solution of non-convex quadratic programming problems with quadratic restrictions, in the sense that at least one and usually several local optima are obtained. Combining the parametric study with the ability to handle non-convex functions allows a thorough analysis of the multiple response problem. (Author)
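The parametric idea can be sketched with a toy two-response problem (the response functions, region, and grid resolution are my own): minimize one response subject to a second response meeting an activity level t, then sweep t.

```python
import numpy as np

# Two quadratic "response surfaces" on the experimental region [-2, 2].
f0 = lambda x: (x - 1.0)**2           # response to minimize
f1 = lambda x: x**2                   # response constrained to f1(x) <= t

xs = np.linspace(-2.0, 2.0, 4001)

def solve(t):
    # Grid search over the feasible set {x : f1(x) <= t}.
    feas = xs[f1(xs) <= t]
    return feas[np.argmin(f0(feas))]

# Parametric sweep over activity levels t.
for t in [0.25, 1.0, 4.0]:
    x_t = solve(t)
    print(f"t = {t}: x* = {x_t:.3f}, f0 = {f0(x_t):.3f}")

# Once the constraint is slack (t >= 1), the unconstrained optimum x = 1 is reached.
assert abs(solve(4.0) - 1.0) < 1e-3
```

For tight levels the optimum sits on the constraint boundary (x* = √t), and past t = 1 the solution stops changing; tracking how the optimum moves with t is the point of a parametric algorithm, which does so far more efficiently than re-solving from scratch at each level.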

Book ChapterDOI
01 Jan 1968
TL;DR: In this paper, the author presents some recently completed joint work on inequalities with Professor D. E. Daykin of the University of Malaya.
Abstract: In this paper I would like to present some work on inequalities, which Professor D. E. Daykin of the University of Malaya and I have recently completed.


Journal ArticleDOI
Abstract: By supplying regularity conditions, two new connection theorems relating constrained maximization problems are proven.