Showing papers on "Convex optimization" published in 1983

Journal ArticleDOI
TL;DR: This paper studies two classes of sets, strongly and weakly convex sets, and a class of functions, denoted ρ-convex, which satisfy for arbitrary points x1 and x2 and any value λ ∈ [0, 1] the classical inequality of convex functions up to a term ρ(1 − λ)λ‖x1 − x2‖².
Abstract: In this paper we study two classes of sets, strongly and weakly convex sets. For each class we derive a series of properties which involve either the concept of supporting ball, an obvious extension of the concept of supporting hyperplane, or the normal cone to the set. We also study a class of functions, denoted ρ-convex, which satisfy for arbitrary points x1 and x2 and any value λ ∈ [0, 1] the classical inequality of convex functions up to a term ρ(1 − λ)λ‖x1 − x2‖². Depending on the sign of the constant ρ the function is said to be strongly or weakly convex. We provide characteristic properties of this class of functions and we relate it to strongly and weakly convex sets via the epigraph and the level sets. Finally, we give three applications: a separation theorem, a sufficient condition for global optimum of a nonconvex programming problem, and a sufficient geometrical condition for a set to be a manifold.

416 citations
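A quick numerical illustration of the ρ-convexity inequality above may be useful. The sketch below assumes the sign convention in which strong convexity subtracts ρλ(1 − λ)‖x1 − x2‖² with ρ > 0 (the paper's exact convention for strong versus weak may differ), and the helper name is mine:

```python
import numpy as np

def is_rho_convex(f, x1, x2, rho, n_lambda=99):
    """Check f(l*x1 + (1-l)*x2) <= l*f(x1) + (1-l)*f(x2) - rho*l*(1-l)*||x1-x2||^2
    on a grid of l in (0, 1), for one pair of points."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    d2 = np.linalg.norm(x1 - x2) ** 2
    for lam in np.linspace(0.01, 0.99, n_lambda):
        lhs = f(lam * x1 + (1 - lam) * x2)
        rhs = lam * f(x1) + (1 - lam) * f(x2) - rho * lam * (1 - lam) * d2
        if lhs > rhs + 1e-12:
            return False
    return True

f = lambda x: float(np.dot(x, x))  # ||x||^2 satisfies the inequality with rho = 1 exactly
print(is_rho_convex(f, [0.0, 0.0], [3.0, 4.0], rho=1.0))   # True
print(is_rho_convex(f, [0.0, 0.0], [3.0, 4.0], rho=1.01))  # False: modulus too strong
```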


Book ChapterDOI
01 Dec 1983
TL;DR: This chapter presents new methods for solving algebraic problems with high accuracy, in which error control is performed automatically by the computer without any requirement on the part of the user, such as estimating spectral radii.
Abstract: This chapter presents new methods for solving algebraic problems with high accuracy. Examples of such problems are the solving of linear systems, eigenvalue/eigenvector determination, computing zeros of polynomials, sparse matrix problems, computation of the value of an arbitrary arithmetic expression, in particular, the value of a polynomial at a point, nonlinear systems, and linear, quadratic, and convex programming over the field of real or complex numbers as well as over the corresponding interval spaces. All the algorithms based on the new methods have some key properties in common: (1) every result is automatically verified to be correct by the algorithm; (2) the results are of high accuracy, that is, the error of every component of the result is of the magnitude of the relative rounding error unit; (3) the solution of the given problem is automatically shown to exist and to be unique within the given error bounds; and (4) the computing time is of the same order as that of a comparable floating-point algorithm. The key property of the algorithms is that error control is performed automatically by the computer without any requirement on the part of the user, such as estimating spectral radii. The error bounds for all components of the inverse of the Hilbert 15 × 15 matrix are as small as possible, that is, left and right bounds differ only by one in the 12th place of the mantissa of each component. This is called least-significant-bit accuracy.

220 citations
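The self-validating flavor described here can be caricatured in plain floating point: if an approximate inverse R of A satisfies ‖I − RA‖ < 1, then A is provably nonsingular and a residual bound encloses the error of the approximate solution. This is only a sketch of the idea (a rigorous implementation, in the spirit of the chapter, would evaluate the bounds with directed rounding or interval arithmetic); the function name is mine:

```python
import numpy as np

def verified_solve_sketch(A, b):
    """Sketch of a self-validating linear solve: if ||I - R A|| < 1 for an
    approximate inverse R, then A is nonsingular and the approximate solution
    x satisfies ||x - A^{-1} b|| <= ||R r|| / (1 - ||I - R A||), r = b - A x.
    (A rigorous version would compute these bounds with directed rounding.)"""
    R = np.linalg.inv(A)               # approximate inverse, ordinary floating point
    x = R @ b                          # approximate solution
    G = np.eye(len(b)) - R @ A         # contraction matrix
    alpha = np.linalg.norm(G, np.inf)
    if alpha >= 1.0:
        raise RuntimeError("verification failed: ||I - RA|| >= 1")
    r = b - A @ x                      # residual
    err_bound = np.linalg.norm(R @ r, np.inf) / (1.0 - alpha)
    return x, err_bound

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, eps = verified_solve_sketch(A, b)
print(x, eps)  # solution enclosed (up to rounding in this sketch) in x +/- eps
```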


Journal ArticleDOI
TL;DR: In this article, the authors study uniformly convex functions at a point, derive their properties and characterizations, and give examples and applications of these types of functions.

172 citations


Journal ArticleDOI
TL;DR: A class of implementable algorithms is described for minimizing any convex, not necessarily differentiable, function f of several variables; the methods have flexible storage requirements and computational effort per iteration that can be controlled by a user.
Abstract: A class of implementable algorithms is described for minimizing any convex, not necessarily differentiable, function f of several variables. The methods require only the calculation of f and one subgradient of f at designated points. They generalize Lemarechal's bundle method. More specifically, instead of using all previously computed subgradients in search direction finding subproblems that are quadratic programming problems, the methods use an aggregate subgradient which is recursively updated as the algorithms proceed. Each algorithm yields a minimizing sequence of points, and if f has any minimizers, then this sequence converges to a solution of the problem. Particular members of this algorithm class terminate when f is piecewise linear. The methods are easy to implement and have flexible storage requirements and computational effort per iteration that can be controlled by a user.

172 citations
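The aggregation idea, replacing the full bundle of stored subgradients by a single recursively updated aggregate, can be caricatured as follows. This toy sketch uses a fixed aggregation weight and a diminishing step instead of the paper's quadratic programming direction-finding subproblem, so it illustrates only the constant-storage argument, not the actual algorithm:

```python
import numpy as np

def aggregate_subgradient_descent(f, subgrad, x0, steps=500, mu=0.5):
    """Toy illustration of subgradient aggregation: keep a single 'aggregate'
    subgradient p (a convex combination of the current subgradient and the
    previous aggregate) instead of storing the whole bundle."""
    x = np.asarray(x0, dtype=float)
    p = subgrad(x)
    best_x, best_f = x.copy(), f(x)
    for k in range(1, steps + 1):
        g = subgrad(x)
        p = mu * g + (1 - mu) * p          # recursive aggregation, O(n) storage
        x = x - (1.0 / k) * p              # diminishing step along -p
        if f(x) < best_f:
            best_x, best_f = x.copy(), f(x)
    return best_x, best_f

# minimize the nonsmooth convex function f(x) = |x1| + 2|x2|
f = lambda x: abs(x[0]) + 2 * abs(x[1])
subgrad = lambda x: np.array([np.sign(x[0]), 2 * np.sign(x[1])])
print(aggregate_subgradient_descent(f, subgrad, [3.0, -2.0]))
```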


Journal ArticleDOI
TL;DR: It is shown how the ε-optimality conditions given in this paper can be mechanized into a bundle algorithm for solving nondifferentiable convex programming problems with linear inequality constraints.
Abstract: In this paper we present ε-optimality conditions of the Kuhn-Tucker type for points which are within ε of being optimal to the problem of minimizing a nondifferentiable convex objective function subject to nondifferentiable convex inequality constraints, linear equality constraints and abstract constraints. Such ε-optimality conditions are of interest for theoretical consideration as well as from the computational point of view. Some illustrative applications are made. Thus we derive an expression for the ε-subdifferential of a general convex ‘max function’. We also show how the ε-optimality conditions given in this paper can be mechanized into a bundle algorithm for solving nondifferentiable convex programming problems with linear inequality constraints.

98 citations
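For reference, the ε-subdifferential behind such ε-optimality conditions is the standard one, and ε-minimality is exactly the condition 0 ∈ ∂εf(x̄):

```latex
\partial_\varepsilon f(\bar x) \;=\; \{\, s \;:\; f(y) \,\ge\, f(\bar x) + \langle s,\, y - \bar x \rangle - \varepsilon \quad \forall y \,\},
\qquad
0 \in \partial_\varepsilon f(\bar x) \;\Longleftrightarrow\; f(\bar x) \,\le\, \inf_y f(y) + \varepsilon .
```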


Journal ArticleDOI
TL;DR: In this paper, the necessary conditions of Fritz John and Kuhn-Tucker type for Pareto optimality are derived by first reducing a vector minimization problem (multiobjective programming) to a system of scalar minimization problems and then using known results in convex programming.
Abstract: Necessary conditions of Fritz John and Kuhn-Tucker type for Pareto optimality are derived by first reducing a vector minimization problem (multiobjective programming) to a system of scalar minimization problems and then using known results in convex programming.

79 citations
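The reduction used here is the standard one: a point x̄ is Pareto optimal for objectives (f1, ..., fm) exactly when, for every index k, it solves the scalar problem of minimizing fk without worsening the other objectives (stated below in the usual form; the paper's precise system may differ in details):

```latex
\bar x \ \text{Pareto optimal} \iff \forall k \in \{1,\dots,m\}:\quad
\bar x \ \text{solves}\ \min_{x \in X}\ f_k(x)\ \ \text{s.t.}\ \ f_i(x) \le f_i(\bar x)\ \ (i \ne k).
```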


Journal ArticleDOI
TL;DR: In this article, the authors apply the method of alternating projections onto convex sets to the problem of restoring a signal from the phase of its Fourier transform and describe a method of improving convergence by adaptively varying a set of relaxation parameters in the restoration algorithm.
Abstract: We apply the method of alternating projections onto convex sets to the problem of restoring a signal from the phase of its Fourier transform. A method of improving convergence by adaptively varying a set of relaxation parameters in the restoration algorithm is described. The advantages of using the method of convex projections over other iterative restoration algorithms are discussed and illustrated.

73 citations
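A minimal sketch of relaxed alternating projections onto two convex sets (here a hyperplane and a box; the signal-from-Fourier-phase constraint sets of the paper are more elaborate). The relaxation parameter lam plays the role of the adaptively varied parameters described above, though here it is simply held fixed:

```python
import numpy as np

def project_hyperplane(x, a, b):
    """Projection onto {x : <a, x> = b}."""
    return x - (np.dot(a, x) - b) / np.dot(a, a) * a

def project_box(x, lo, hi):
    """Projection onto {x : lo <= x <= hi} componentwise."""
    return np.clip(x, lo, hi)

def relaxed_pocs(x0, a, b, lo, hi, lam=1.5, iters=100):
    """Relaxed projections x <- x + lam * (P(x) - x), alternating the two sets.
    lam in (0, 2) keeps each step nonexpansive; lam = 1 is plain POCS."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x + lam * (project_hyperplane(x, a, b) - x)
        x = x + lam * (project_box(x, lo, hi) - x)
    return x

a, b = np.array([1.0, 1.0]), 1.0          # hyperplane x1 + x2 = 1
x = relaxed_pocs([5.0, -3.0], a, b, lo=0.0, hi=1.0)
print(x)  # a point (near) the intersection of the two sets
```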


Journal ArticleDOI
TL;DR: In this paper, it was shown that a minimum solution of an exact penalty function, for a single value of the penalty parameter exceeding a certain threshold, is also a solution of the convex program associated with the penalty function.
Abstract: By employing a recently obtained error bound for differentiable convex inequalities, it is shown that, under appropriate constraint qualifications, a minimum solution of an exact penalty function for a single value of the penalty parameter which exceeds a certain threshold, is also a solution of the convex program associated with the penalty function. No a priori assumption is made regarding the solvability of the convex program. If such a solvability assumption is made, then we show that a threshold value of the penalty parameter can be used which is smaller than both the above-mentioned value and that of Zangwill. These various threshold values of the penalty parameter also apply to the well-known big-M method of linear programming.

62 citations
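The threshold phenomenon is easy to see on a one-dimensional example (mine, not the paper's): minimize f(x) = x² subject to x ≥ 1. The exact penalty P(x) = x² + μ·max(0, 1 − x) is minimized at the constrained solution x* = 1 precisely when μ ≥ 2 = |f′(x*)|:

```python
import numpy as np

def exact_penalty_minimizer(mu, grid=np.linspace(-2, 3, 200001)):
    """Minimize P(x) = x^2 + mu * max(0, 1 - x) by brute force on a grid."""
    P = grid**2 + mu * np.maximum(0.0, 1.0 - grid)
    return grid[np.argmin(P)]

for mu in (0.5, 1.0, 2.0, 5.0):
    print(mu, exact_penalty_minimizer(mu))
# mu below the threshold 2 gives the infeasible minimizer x = mu/2;
# any mu >= 2 recovers the constrained solution x = 1 exactly.
```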


Journal ArticleDOI
TL;DR: Local and global constraint qualifications, based on the Farkas-Minkowski property, are established to give necessary conditions for optimality, and it is proved that Slater's qualification implies those qualifications.
Abstract: This paper gives characterizations of optimal solutions to the nondifferentiable convex semi-infinite programming problem, which involve the notion of Lagrangian saddlepoint. With the aim of giving the necessary conditions for optimality, local and global constraint qualifications are established. These constraint qualifications are based on the property of Farkas-Minkowski, which plays an important role in relation to certain systems obtained by linearizing the feasible set. It is proved that Slater's qualification implies those qualifications.

Journal ArticleDOI
TL;DR: The concept of convex dependence between two attributes, where the change of shapes of conditional utility functions is considered, is introduced, and theorems are established which show how to decompose a two-attribute utility function using the concept of convex dependence.
Abstract: We describe a method of assessing von Neumann-Morgenstern utility functions on a two-attribute space and its extension to n-attribute spaces. First, we introduce the concept of convex dependence between two attributes, where we consider the change of shapes of conditional utility functions. Then, we establish theorems which show how to decompose a two-attribute utility function using the concept of convex dependence. This concept covers a wide range of situations involving trade-offs. The convex decomposition includes as special cases Keeney's additive/multiplicative decompositions, Fishburn's bilateral decomposition, and Bell's decomposition under the interpolation independence. Moreover, the convex decomposition is an exact grid model which was axiomatized by Fishburn and Farquhar. Finally, we extend the convex decomposition theorem from two attributes to an arbitrary number of attributes.

Proceedings ArticleDOI
01 Apr 1983
TL;DR: The method of alternating projections onto convex sets is applied to the problem of restoring a signal from the phase of its Fourier transform, and convergence is improved by adaptively varying a set of relaxation parameters in the restoration algorithm.
Abstract: We apply the method of alternating projections onto convex sets to the problem of restoring a signal from the phase of its Fourier transform. A method of improving convergence by adaptively varying a set of relaxation parameters in the restoration algorithm is described. The advantages of using the method of convex projections are discussed.

Journal ArticleDOI
TL;DR: In this article, an extension of Beckenbach's classical generalized convexity notion, called F-convexity, is proposed, broad enough to include many of the “modern” convexities of functions on the Euclidean space Rn (such as log-, ω-, [explicit] quasi-, or K-convexity).
Abstract: This paper deals with an extension of Beckenbach's classical generalized convexity notion, called F-convexity, in such a manner that many of the “modern” convexities of functions on the Euclidean space Rn (such as log-, ω-, [explicit] quasi-, or K-convexity) can be included. General theorems concerning connections between several kinds of convexity, convexity properties of composite functions, and first- and second-order characterizations of differentiable F-convex functions are presented.

Book ChapterDOI
01 Jul 1983
TL;DR: In this paper, it was shown that the reliability of a (k+1)-out-of-n system is convex with respect to the reliability of a k-out-of-n system.
Abstract: Hardy, Littlewood and Polya (1934) introduced the notion of one function being convex with respect to a second function and developed some inequalities concerning the means of the functions. We use this notion to establish a partial order called convex ordering among functions. In particular, the distribution functions encountered in many parametric families in reliability theory are convex-ordered. We have formulated some inequalities which can be used for testing whether a sample comes from F or G, when F and G are within the same convex family. Performance characteristics of different coherent structures can also be compared with respect to this partial ordering. For example, we show that the reliability of a (k+1)-out-of-n system is convex with respect to the reliability of a k-out-of-n system. When F is convex with respect to G, the tail of the distribution F is heavier than that of G; therefore, convex ordering implies stochastic ordering. The ordering is also related to total positivity and monotone likelihood ratio families. This provides us with a tool to obtain some useful results in reliability and mathematical statistics.
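One standard formalization of "F is convex with respect to G" asks that the composition G⁻¹∘F be convex (conventions vary, and under the chapter's convention the roles of F and G may be swapped). A quick numeric check with my own example, F a Weibull cdf with shape 2 and G a unit exponential cdf, where G⁻¹(F(x)) = x² exactly:

```python
import numpy as np

F = lambda x: 1.0 - np.exp(-x**2)   # Weibull(shape=2) cdf
G_inv = lambda u: -np.log1p(-u)     # inverse of the unit exponential cdf

x = np.linspace(0.1, 3.0, 300)
h = G_inv(F(x))                     # h(x) = x^2 here, so F is convex w.r.t. G
second_diff = h[:-2] - 2 * h[1:-1] + h[2:]
print(np.all(second_diff >= -1e-9))  # True: h is (numerically) convex
```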

Journal ArticleDOI
TL;DR: It is shown in this paper that the minimum distance between two finite planar sets of n points can be computed in O(n log n) worst-case running time and that this is optimal to within a constant factor.
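For flavor, here is the classic divide-and-conquer closest-pair sketch for a single planar set; the paper's bichromatic two-set version needs extra machinery (e.g., Voronoi diagrams), so this only conveys the style of O(n log n) proximity algorithms:

```python
import math

def closest_pair(points):
    """Divide-and-conquer closest pair for one planar set (>= 2 points).
    Re-sorting the strip makes this O(n log^2 n); the standard refinement
    merges y-sorted lists to reach O(n log n)."""
    pts = sorted(points)                       # sort once by x
    def rec(P):
        n = len(P)
        if n <= 3:
            return min(math.dist(a, b) for i, a in enumerate(P) for b in P[i+1:])
        mid = n // 2
        x_mid = P[mid][0]
        d = min(rec(P[:mid]), rec(P[mid:]))
        # check the vertical strip of width 2d around the split line
        strip = sorted((p for p in P if abs(p[0] - x_mid) < d), key=lambda p: p[1])
        for i, a in enumerate(strip):
            for b in strip[i+1:i+8]:           # at most 7 neighbors matter
                d = min(d, math.dist(a, b))
        return d
    return rec(pts)

print(closest_pair([(0, 0), (3, 4), (1, 1), (5, 5), (1.2, 1.1)]))  # ~0.2236
```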

Journal ArticleDOI
TL;DR: In this paper, the problem of optimal experimental design for estimating parameters in linear regression models is placed in a general convex analysis setting and duality results are obtained using two approaches, one based on subgradients and the other on Lagrangian theory.
Abstract: The problem of optimal experimental design for estimating parameters in linear regression models is placed in a general convex analysis setting. Duality results are obtained using two approaches, one based on subgradients and the other on Lagrangian theory. The subgradient concept is also used to derive a potentially useful equivalence theorem for establishing the optimality of a singular design and, finally, general versions of the original equivalence theorems of Kiefer and Wolfowitz (1960) are obtained.

Journal ArticleDOI
TL;DR: Conditions are established under which a sequence of finite horizon convex programs monotonically increases in value to the value of the infinite program, giving a constructive way to solve the infinite (long range) problem by solving a finite (short range) problem.
Abstract: We establish conditions under which a sequence of finite horizon convex programs monotonically increases in value to the value of the infinite program; a subsequence of optimal solutions converges to the optimal solution of the infinite problem. If the conditions we impose fail, then (roughly) the optimal value of the infinite horizon problem is an improper convex function. Under more restrictive conditions we establish necessary and sufficient conditions for optimality. This constructive procedure gives us a way to solve the infinite (long range) problem by solving a finite (short range) problem. It appears to work well in practice.

Journal ArticleDOI
TL;DR: The connections with the classical concept based on the Taylor development of a differentiable f are exhibited and some hints are given which could lead to an implementable version of this approximation for convex functions without differentiability assumptions.
Abstract: Consider the problem of minimizing a real functional f. A Newton-like method requires first an approximation D(d) of f(x + d) − f(x) at the current iterate x, valid for small d to an order higher than 1, and consists in minimizing D. In this paper, we will introduce a new concept of such an approximation for convex functions without differentiability assumptions. The connections with the classical concept based on the Taylor development of a differentiable f are exhibited. The material is used to study a conceptual (nonimplementable) algorithm. Some hints are given which could lead to an implementable version.
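In the differentiable case, the classical approximation referred to is the second-order Taylor model, whose minimization yields the Newton step; the paper's contribution is a substitute for D when f is merely convex:

```latex
D(d) \;=\; \nabla f(x)^{\top} d \;+\; \tfrac{1}{2}\, d^{\top} \nabla^2 f(x)\, d
\;\approx\; f(x+d) - f(x),
\qquad
\arg\min_{d} D(d) \;=\; -\,\nabla^2 f(x)^{-1} \nabla f(x).
```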

Book ChapterDOI
01 Jan 1983
TL;DR: The approximate subdifferential of a convex function has proved useful in convex optimization, from the theoretical viewpoint as well as for the purposes of devising algorithms, as mentioned in this paper.
Abstract: The introduction of the approximate subdifferential of a convex function has proved to be useful in convex optimization, from the theoretical viewpoint as well as for the purposes of devising algorithms. Within the context of necessary and sufficient conditions for almost optimality, it was known from the beginning that to claim that a point x° is an ε-minimum of f is equivalent to declaring that 0 belongs to the ε-subdifferential of f at x°. From the point of view of minimization procedures, it is widely recognized that ε-subgradient methods can be more usable than methods using exact subgradients. That is mainly due to the fact that it is often easier to have access to an ε-subgradient than to a subgradient. Concerning the study and the use of ε-subdifferentials, the past sixteen years can roughly be divided into three periods of time:

Journal ArticleDOI
Sanjo Zlobec
TL;DR: This paper describes optimal inputs, i.e., optimal selections of data, for convex programs, using perturbed saddle points.
Abstract: When every feasible stable perturbation of data results in a non-improvement of the optimal value function, then we talk about an ‘optimal input’ or an ‘optimal selection of data’. In this paper we describe such data for convex programs using perturbed saddle points.


Journal ArticleDOI
TL;DR: A class of mixed finite element discretizations of the problem of limit analysis for plastic plates is analysed, and a linear programming approach based on linearization of the yield condition and a convex programming approach based on the exact Mises condition are discussed.
Abstract: A class of mixed finite element discretizations of the problem of limit analysis for plastic plates is analysed. For the discrete problem we discuss a linear programming approach based on linearization of the yield condition and a convex programming approach based on the exact Mises condition. Applying the method to rectangular plates with uniform load and concentrated loads we get improved and new results partly in disagreement with one of our references.

Journal ArticleDOI
TL;DR: The ellipsoid method is applied to the unconstrained minimization of a general convex function; if the ellipsoid entirely contains the optimal set, equating the Steiner polynomial associated with the optimal set to the volume of the ellipsoid at a given iteration gives an upper bound on the minimum recorded function value.
Abstract: The ellipsoid method is applied to the unconstrained minimization of a general convex function. The method converges at a geometric rate, which depends only upon the dimension of the space but not on the actual function. This rate can be improved somewhat if the function satisfies some Lipschitz-type condition, or if the minimum set has dimension greater than zero. If the ellipsoid entirely contains the optimal set, equating the Steiner polynomial associated with the optimal set to the volume of the ellipsoid at a given iteration will give an upper bound on the minimum recorded function value.
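A minimal central-cut ellipsoid sketch for unconstrained minimization of a convex f via subgradient cuts; the volume-shrink factor per step depends only on the dimension n, matching the dimension-only geometric rate stated above. The example function and parameter names are mine:

```python
import numpy as np

def ellipsoid_minimize(f, subgrad, x0, P0, iters=200):
    """Central-cut ellipsoid method sketch. The ellipsoid
    {x : (x-c)^T P^{-1} (x-c) <= 1} keeps containing a minimizer if the
    initial one does; its volume shrinks by a dimension-only factor."""
    n = len(x0)
    c, P = np.asarray(x0, float), np.asarray(P0, float)
    best_x, best_f = c.copy(), f(c)
    for _ in range(iters):
        g = subgrad(c)
        if np.linalg.norm(g) < 1e-12:      # hit an exact minimizer
            break
        gn = g / np.sqrt(g @ P @ g)        # normalized cut direction
        c = c - (1.0 / (n + 1)) * (P @ gn)
        P = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(P @ gn, P @ gn))
        if f(c) < best_f:
            best_x, best_f = c.copy(), f(c)
    return best_x, best_f

f = lambda x: (x[0] - 1)**2 + 4 * (x[1] + 2)**2
subgrad = lambda x: np.array([2 * (x[0] - 1), 8 * (x[1] + 2)])
print(ellipsoid_minimize(f, subgrad, x0=[0.0, 0.0], P0=100 * np.eye(2)))
# converges geometrically toward the minimizer (1, -2)
```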

Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of finding an element that optimizes a real function φ 0, subject to inequality constraints φ 1⩽ 0, φ p+1=0, ε, ϵ+q+1.
Abstract: We consider the following abstract mathematical programming problem: in a setD, find an element that optimizes a real function φ0, subject to inequality constraints φ1⩽0, ..., φp⩽0 and equality constraints φp+1=0, ..., φp+q=0. Necessary conditions for this problem, like the Karush-Kuhn-Tucker theorem, can be seen as a consequence of separating with a hyperplane two convex sets inRp+q+1, the image space of the map Φ=(φ0, φ1, ..., φp+q). This paper reviews this approach and organizes it into a coherent way of looking at necessary conditions in optimization theory.
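One standard way the separation argument produces multipliers, stated here in Fritz John form for minimization of φ0 (the paper's setting is more general, and its exact sign conventions may differ):

```latex
% Separating the image of D under \Phi from a suitable convex cone in R^{p+q+1}
% yields multipliers, not all zero, with
\exists\,(\theta, \lambda, \mu) \ne 0,\ \theta \ge 0,\ \lambda \ge 0:\qquad
\theta\,\varphi_0(x) + \sum_{i=1}^{p} \lambda_i\,\varphi_i(x) + \sum_{j=p+1}^{p+q} \mu_j\,\varphi_j(x)
\ \ge\ \theta\,\varphi_0(\bar x) \qquad \forall x \in D .
```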

Journal ArticleDOI
TL;DR: In this paper, it was shown that the dual of the problem of minimizing the 2-norm of the primal and dual optimal variables and slacks of a linear program can be transformed into an unconstrained minimization of a convex parameter-free globally differentiable piecewise quadratic function with a Lipschitz continuous gradient.

Journal ArticleDOI
TL;DR: A globally and superlinearly convergent algorithm for solving one-dimensional constrained minimization problems involving (not necessarily smooth) convex functions and the constraint is handled by what can be interpreted as a new type of penalty method.
Abstract: This paper presents a globally and superlinearly convergent algorithm for solving one-dimensional constrained minimization problems involving (not necessarily smooth) convex functions. The constraint is handled by what can be interpreted as a new type of penalty method. The algorithm does not require the objective function to be evaluated at infeasible points and it does not use constraint function values at feasible points. The penalty parameter is automatically generated by the algorithm via linear approximation of the constraint function. As in the unconstrained case developed by Lemarechal and the author, the algorithm uses a step that is the shorter of a quadratic approximation step and a polyhedral approximation step. Here the latter is actually a “penalized” polyhedral step whose computation is well conditioned if the constraint satisfies a nondegeneracy assumption.

Journal ArticleDOI
TL;DR: An algorithm is given for the sequential selection of N nodes (i.e., measurement points) for the uniform approximation (recovery) of convex functions over [0, 1]^2, which has almost optimal order global error (≦ c1·N^(−1)·lg N) over a naturally defined class of convex functions.
Abstract: In this paper an algorithm is given for the sequential selection of N nodes (i.e., measurement points) for the uniform approximation (recovery) of convex functions over [0, 1]^2, which has almost optimal order global error (≦ c1·N^(−1)·lg N) over a naturally defined class of convex functions. This shows the essential superiority of sequential algorithms for this class of approximation problems, because any simultaneous choice of N nodes leads to a global error > c0·N^(−1/2). New construction and estimation methods are presented, with possible (e.g., multidimensional) generalizations.

Journal ArticleDOI
TL;DR: In this article, the problem of approximating 0 by elements of a closed nonempty convex subset K which is contained in a finite-dimensional subspace and which does not contain 0 was considered.