
Showing papers on "Convex optimization published in 1993"


Book
21 Oct 1993
TL;DR: This book develops conjugacy and approximate subdifferentials of convex functions, and presents bundle methods in dual form, via the dynamic construction of approximate subdifferentials, and in primal form, via acceleration of the cutting-plane algorithm.
Abstract: IX. Inner Construction of the Subdifferential.- X. Conjugacy in Convex Analysis.- XI. Approximate Subdifferentials of Convex Functions.- XII. Abstract Duality for Practitioners.- XIII. Methods of ε-Descent.- XIV. Dynamic Construction of Approximate Subdifferentials: Dual Form of Bundle Methods.- XV. Acceleration of the Cutting-Plane Algorithm: Primal Forms of Bundle Methods.- Bibliographical Comments.- References.

3,043 citations


Journal ArticleDOI
TL;DR: An alternative convergence proof of a proximal-like minimization algorithm using Bregman functions, recently proposed by Censor and Zenios, is presented; the analysis establishes a global convergence rate for the algorithm expressed in terms of function values.
Abstract: An alternative convergence proof of a proximal-like minimization algorithm using Bregman functions, recently proposed by Censor and Zenios, is presented. The analysis allows the establishment of a global convergence rate of the algorithm expressed in terms of function values.

481 citations
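
To make the iteration concrete, here is a minimal Python sketch of a proximal-like step with a Bregman D-function, assuming the entropy kernel h(x) = Σ x_i log x_i (so the D-function is the Kullback-Leibler divergence) and a toy quadratic objective; the solver, step size, and problem data are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def kl_divergence(x, y):
    """D_h(x, y) = h(x) - h(y) - <grad h(y), x - y> for h(x) = sum x_i log x_i."""
    return np.sum(x * np.log(x / y) - x + y)

def f(x):
    """Toy convex objective on the positive orthant (illustrative)."""
    return np.sum((x - np.array([2.0, 0.5])) ** 2)

def bregman_prox_step(f, xk, lam):
    """One proximal-like step: x_{k+1} = argmin_x f(x) + (1/lam) * D(x, xk)."""
    obj = lambda x: f(x) + kl_divergence(x, xk) / lam
    return minimize(obj, xk, bounds=[(1e-9, None)] * len(xk)).x

x = np.array([1.0, 1.0])
for k in range(50):
    x = bregman_prox_step(f, x, lam=1.0)
print(x)  # approaches the minimizer [2.0, 0.5]; f(x_k) decreases monotonically
```

The global rate referred to in the abstract is measured exactly on the function values f(x_k) produced by such a loop.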


Journal ArticleDOI
01 Jan 1993
TL;DR: In this paper, the authors investigate starlike functions $f(z) = z + \sum_{k=2}^{\infty} a_k z^k$ with the property that $zf'(z)/f(z)$ lies inside a certain parabola, and give some particular examples of functions having the required properties.
Abstract: We investigate starlike functions $f(z) = z + \sum_{k=2}^{\infty} a_k z^k$ with the property that $zf'(z)/f(z)$ lies inside a certain parabola. These functions are closely related to a class of functions called uniformly convex and recently introduced by Goodman. We give some particular examples of functions having the required properties, and we give upper bounds on the coefficients and on the modulus $|f(z)|$ of the functions in the class.

471 citations
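
The parabolic region can be made explicit. One standard way to state the condition for this class (our notation, not a quotation from the paper) is that $zf'(z)/f(z)$ lies in $\Omega = \{ w : |w-1| < \operatorname{Re} w \}$, the interior of a parabola:

```latex
\left| \frac{z f'(z)}{f(z)} - 1 \right| \;<\; \operatorname{Re}\,\frac{z f'(z)}{f(z)},
\qquad z \in \mathbb{D}.
```

Writing $w = u + iv$, the inequality $|w-1| < u$ is equivalent to $u > (1 + v^2)/2$, which exhibits the bounding parabola.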


Journal ArticleDOI
TL;DR: It is shown that by appropriate grouping of terms, feedforward neural networks with sigmoidal activation functions can be viewed as architectures which implement affine wavelet decompositions of mappings.
Abstract: A representation of a class of feedforward neural networks in terms of discrete affine wavelet transforms is developed. It is shown that by appropriate grouping of terms, feedforward neural networks with sigmoidal activation functions can be viewed as architectures which implement affine wavelet decompositions of mappings. It is shown that the wavelet transform formalism provides a mathematical framework within which it is possible to perform both analysis and synthesis of feedforward networks. For the purpose of analysis, the wavelet formulation characterizes a class of mappings which can be implemented by feedforward networks as well as reveals an exact implementation of a given mapping in this class. Spatio-spectral localization properties of wavelets can be exploited in synthesizing a feedforward network to perform a given approximation task. Two synthesis procedures based on spatio-spectral localization that reduce the training problem to one of convex optimization are outlined.

434 citations
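
A minimal sketch of the underlying idea: with hidden units fixed as dilated and translated "difference of sigmoids" (a wavelet-like bump), fitting the output weights is a convex problem, here ordinary linear least squares. The unit shape, the grid of scales and shifts, and the target function are all illustrative assumptions, not the paper's construction.

```python
import numpy as np

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
psi = lambda t: sigmoid(t + 1.0) - sigmoid(t - 1.0)   # wavelet-like bump unit

x = np.linspace(-4, 4, 400)
target = np.sin(x) * np.exp(-x * x / 4)               # function to approximate

# Hidden layer: units psi(a*x - b) on a dyadic-style grid of scales/shifts.
units = [psi(a * x - b) for a in (1.0, 2.0, 4.0) for b in np.arange(-8, 9)]
H = np.stack(units, axis=1)

w, *_ = np.linalg.lstsq(H, target, rcond=None)        # convex fit of output weights
print(np.max(np.abs(H @ w - target)))                 # residual should be small
```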


Journal ArticleDOI
TL;DR: Applying this generalization of the proximal point algorithm to convex programming, one obtains the D-function proximal minimization algorithm of Censor and Zenios, and a wide variety of new multiplier methods.
Abstract: A Bregman function is a strictly convex, differentiable function that induces a well-behaved distance measure or D-function on Euclidean space. This paper shows that, for every Bregman function, there exists a "nonlinear" version of the proximal point algorithm, and presents an accompanying convergence theory. Applying this generalization of the proximal point algorithm to convex programming, one obtains the D-function proximal minimization algorithm of Censor and Zenios, and a wide variety of new multiplier methods. These multiplier methods are different from those studied by Kort and Bertsekas, and include nonquadratic variations on the proximal method of multipliers.

340 citations
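
For reference, the two objects the abstract relies on, written in our notation (h is the Bregman function, c_k > 0 are step sizes):

```latex
% D-function induced by a strictly convex, differentiable h:
D_h(x, y) \;=\; h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle ,
% and the resulting "nonlinear" proximal point iteration for minimizing f:
x^{k+1} \;=\; \arg\min_{x} \Bigl\{ f(x) + \tfrac{1}{c_k}\, D_h(x, x^{k}) \Bigr\} .
```

Choosing h(x) = ½||x||² recovers the classical proximal point algorithm; entropy-like choices of h yield the D-function method of Censor and Zenios and the nonquadratic multiplier methods mentioned above.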


Journal ArticleDOI
TL;DR: The notion of a sharp minimum is extended to allow nonunique solution sets; the resulting weak sharp minima are characterized for linear and quadratic convex programs and for the linear complementarity problem, with consequences for the finite termination of algorithms.
Abstract: The notion of a sharp, or strongly unique, minimum is extended to include the possibility of a nonunique solution set. These minima will be called weak sharp minima. Conditions necessary for the solution set of a minimization problem to be a set of weak sharp minima are developed in both the unconstrained and constrained cases. These conditions are also shown to be sufficient under the appropriate convexity hypotheses. The existence of weak sharp minima is characterized in the cases of linear and quadratic convex programming and for the linear complementarity problem. In particular, a result of Mangasarian and Meyer is reproduced that shows that the solution set of a linear program is always a set of weak sharp minima whenever it is nonempty. Consequences for the convergence theory of algorithms are also examined, especially conditions yielding finite termination.

337 citations
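
The defining inequality is worth recording. In our notation, with solution set S, optimal value f*, and feasible set X, S is a set of weak sharp minima when there exists a modulus α > 0 with

```latex
f(x) \;\ge\; f^{*} \;+\; \alpha \,\operatorname{dist}(x, S)
\qquad \text{for all } x \in X .
```

When S is a singleton this reduces to the usual sharp (strongly unique) minimum condition.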


Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of minimizing the largest generalized eigenvalue of a pair of symmetric matrices, each of which depends affinely on the decision variables.

306 citations
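
In our notation, with A(x) and B(x) symmetric matrices depending affinely on x, the problem admits the standard quasiconvex reformulation

```latex
\min_{x,\; \lambda} \;\; \lambda
\quad \text{subject to} \quad
\lambda\, B(x) - A(x) \succeq 0, \qquad B(x) \succ 0 ,
```

where λ upper-bounds the largest generalized eigenvalue of the pencil (A(x), B(x)). For fixed λ the constraint is a linear matrix inequality in x, which is what bisection-type and interior-point methods exploit.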


Journal ArticleDOI
TL;DR: An efficient convex optimization algorithm, guaranteed to find the exact solution of the convex programming problem, is used, and existing methods for computing the circuit delay as an Elmore time constant are improved to achieve higher accuracy.
Abstract: A general sequential circuit consists of a number of combinational stages that lie between latches. For the circuit to meet a given clocking specification, it is necessary for each combinational stage to satisfy a certain delay requirement. Roughly speaking, increasing the sizes of some transistors in a stage reduces the delay, with the penalty of increased area. The problem of transistor sizing is to minimize the area of a combinational stage, subject to its delay being less than a given specification. Although this problem has been recognized as a convex programming problem, most existing approaches do not take full advantage of this fact, and often give nonoptimal results. An efficient convex optimization algorithm has been used here. This algorithm is guaranteed to find the exact solution to the convex programming problem. We have also improved upon existing methods for computing the circuit delay as an Elmore time constant, to achieve higher accuracy. CMOS circuit examples, including a combinational circuit with 832 transistors, are presented to demonstrate the efficacy of the new algorithm.

301 citations
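
The convex structure is easy to exhibit on a toy model. A minimal sketch, assuming a chain of n stages where stage i has drive resistance R0/w_i, gate load proportional to its width, and a fixed output load CL; the RC model, all constants, and the solver are illustrative, not the paper's circuit model. In log-variables y = log(w) both the area and the Elmore delay are sums of exponentials of linear functions (posynomials), hence convex.

```python
import numpy as np
from scipy.optimize import minimize

n, R0, C0, CL, T = 4, 1.0, 1.0, 5.0, 12.0

def delay(y):
    w = np.exp(y)
    loads = np.append(C0 * w[1:], CL)      # each stage drives the next gate
    return np.sum((R0 / w) * loads)        # Elmore delay of the chain

def area(y):
    return np.sum(np.exp(y))               # total transistor width

res = minimize(area, np.zeros(n), method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda y: T - delay(y)}])
print(np.exp(res.x), delay(res.x))         # optimized widths, achieved delay
```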


Journal ArticleDOI
TL;DR: A simple and unified technique is presented to establish convergence of various minimization methods: the conceptual proximal point method as well as implementable forms such as bundle algorithms, including the classical subgradient relaxation algorithm with divergent-series step sizes.
Abstract: We present a simple and unified technique to establish convergence of various minimization methods. These include the (conceptual) proximal point method, as well as implementable forms such as bundle algorithms, including the classical subgradient relaxation algorithm with divergent series.

238 citations
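
As a concrete member of the family covered by the analysis, here is the classical subgradient method with divergent-series step sizes (Σ t_k = ∞, Σ t_k² < ∞) on an illustrative nonsmooth convex objective; the problem data are ours, not the paper's.

```python
import numpy as np

a = np.array([3.0, -1.0, 2.0])

def f(x):
    return np.sum(np.abs(x - a))   # nonsmooth convex objective, min value 0

def subgradient(x):
    return np.sign(x - a)          # a valid subgradient of f at x

x, best = np.zeros(3), np.inf
for k in range(1, 5001):
    x = x - (1.0 / k) * subgradient(x)   # t_k = 1/k: sum diverges, sum of squares converges
    best = min(best, f(x))
print(best)                        # best value approaches the optimum 0
```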


Journal ArticleDOI
Mario A. Rotea

219 citations


Book ChapterDOI
01 Jan 1993
TL;DR: Approximation of convex bodies is frequently encountered in geometric convexity, discrete geometry, the theory of finite-dimensional normed spaces, in geometric algorithms and optimization, and in the realm of engineering as discussed by the authors.
Abstract: This chapter reviews various aspects of approximation of convex bodies. Approximation of convex bodies is frequently encountered in geometric convexity, discrete geometry, the theory of finite-dimensional normed spaces, geometric algorithms and optimization, and the realm of engineering. Approximation problems in optimization also often arise from more practical problems in operations research and pattern recognition. Several effective approximation algorithms formulated for convex functions or convex bodies are described in the chapter: in the former case the approximation is considered with respect to the maximum norm, in the latter with respect to the Hausdorff metric. The chapter presents more recent developments in approximation, but many older results are also described.
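
As a small illustration of one approximation scheme in the Hausdorff/area sense: inscribe a polytope by sampling boundary points of the body in m directions and take their convex hull. The body (an ellipse) and the O(1/m²) remark reflect the classical planar theory and are illustrative, not a quotation from the chapter.

```python
import numpy as np
from scipy.spatial import ConvexHull

def boundary_point(theta, axes=(2.0, 1.0)):
    """Point of the ellipse x^2/a^2 + y^2/b^2 = 1 in direction theta."""
    a, b = axes
    return np.array([a * np.cos(theta), b * np.sin(theta)])

for m in (8, 16, 32, 64):
    pts = np.array([boundary_point(t)
                    for t in np.linspace(0, 2 * np.pi, m, endpoint=False)])
    hull = ConvexHull(pts)
    # For smooth planar bodies, the area deficit of an inscribed m-gon
    # decays like O(1/m^2); hull.volume is the polygon's area in 2-D.
    print(m, np.pi * 2.0 * 1.0 - hull.volume)
```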

Proceedings ArticleDOI
02 Jun 1993
TL;DR: In this article, two parameter-dependent control problems for linear, parametrically varying (LPV) systems are presented and sufficient conditions for exponential stability and an induced L 2 -norm performance objective are given.
Abstract: In this paper two parameter-dependent control problems for linear, parametrically varying (LPV) systems are presented. Sufficient conditions for exponential stability and an induced L 2 -norm performance objective are given. The resulting synthesis problems are reformulated into convex optimization problems which can be solved with efficient new algorithms.

Journal ArticleDOI
TL;DR: This paper analyzes the exponential method of multipliers for convex constrained minimization problems, which operates like the usual Augmented Lagrangian method, except that it uses an exponential penalty function in place of the usual quadratic.
Abstract: In this paper, we analyze the exponential method of multipliers for convex constrained minimization problems, which operates like the usual Augmented Lagrangian method, except that it uses an exponential penalty function in place of the usual quadratic. We also analyze a dual counterpart, the entropy minimization algorithm, which operates like the proximal minimization algorithm, except that it uses a logarithmic/entropy "proximal" term in place of a quadratic. We strengthen substantially the available convergence results for these methods, and we derive the convergence rate of these methods when applied to linear programs.
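
A minimal sketch of the exponential multiplier iteration on a toy inequality-constrained problem; the penalty form (μ/c)·exp(c·g(x)) and the multiplicative update μ ← μ·exp(c·g(x)) follow the scheme described above, while the data and inner solver are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def g(x):
    return x[0] + x[1] - 2.0            # constraint: x0 + x1 <= 2

mu, c = 1.0, 2.0
x = np.zeros(2)
for k in range(30):
    # exponential augmented Lagrangian in place of the quadratic one
    lag = lambda x: f(x) + (mu / c) * np.exp(c * g(x))
    x = minimize(lag, x).x
    mu = mu * np.exp(c * g(x))          # exponential multiplier update
print(x, mu)                            # x -> (1.5, 0.5), the constrained optimum
```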

Journal ArticleDOI
TL;DR: A new equivalent formulation is presented: an unconstrained, convex minimization problem in displacements only, where the function to be minimized is a sum of terms, each of which is the maximum of two convex quadratic functions.
Abstract: Truss topology optimization formulated in terms of displacements and bar volumes results in a large, nonconvex optimization problem. For the case of maximization of stiffness for a prescribed volume, this paper presents a new equivalent formulation: an unconstrained, convex minimization problem in displacements only, where the function to be minimized is the sum of terms, each of which is the maximum of two convex, quadratic functions. Existence of solutions is proved, as is the convergence of a nonsmooth steepest descent-type algorithm for solving the topology optimization problem. The algorithm is computationally attractive and has been tested on a large number of examples, some of which are presented.
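
In our notation, the equivalent problem has the form

```latex
\min_{u \in \mathbb{R}^n} \;\; \sum_{j=1}^{m} \max\bigl( q_j^{1}(u),\; q_j^{2}(u) \bigr) ,
```

where u collects the displacements and each pair of convex quadratics q_j^1, q_j^2 is associated with one potential bar (this indexing is our reading of the abstract). The objective is convex but nonsmooth, which is why a nonsmooth steepest descent-type algorithm is used.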

Journal ArticleDOI
TL;DR: A mixed H2/H∞ control problem for discrete-time systems is considered for both state-feedback and output-feedback cases, and it is shown that these problems can be effectively solved by reducing them to convex programming problems.

BookDOI
01 Jul 1993
TL;DR: A collection spanning topics from the average performance of a self-dual interior point algorithm for linear programming to complexity results for a class of min-max problems with robust optimization applications.
Abstract: Average performance of self-dual interior point algorithm for linear programming, K.M. Anstreicher et al; the complexity of approximating a nonlinear program, M. Bellare and P. Rogaway; algorithms for the least distance problem, P. Berman et al; translational cuts for convex minimization, J.V. Burke et al; maximizing concave functions in fixed dimension, E. Cohen and N. Megiddo; the complexity of allocating resources in parallel - upper and lower bounds, E.J. Friedman; complexity issues in nonconvex network flow problems, G. Guisewite and P.M. Pardalos; a classification of static scheduling problems, J.W. Herrmann et al; complexity of single machine dual criteria and hierarchical scheduling - a survey, C.-Y. Lee and G. Vairaktarakis; performance driven graph enhancement problems, D. Paik and S. Sahni; weighted means of cuts, parametric flows and fractional combinatorial optimization, T. Radzik; some complexity issues involved in the construction of test cases for NP-hard problems, L. Sanchis; a note on the complexity of fixed-point computation for noncontractive maps, C.W. Tsai and K. Sikorski; maximizing non-linear concave functions in fixed dimension, S. Toledo; polynomial time weak approximation algorithms for quadratic programming, S. Vavasis; complexity results for a class of min-max problems with robust optimization applications, G. Yu and P. Kouvelis. (Partial contents.)

Journal ArticleDOI
TL;DR: In this article, a necessary and sufficient condition of optimality for nonlinear optimal control is derived, generalizing the well-known sufficient conditions known as verification theorems in dynamic programming; as a byproduct, a representation of the minimum cost in terms of the upper envelope of subsolutions to the Hamilton-Jacobi equation is obtained.
Abstract: Problems in nonlinear optimal control can be reformulated as convex optimization problems over a vector space of linear functionals. In this way, methods of convex analysis can be brought to bear on the task of characterizing solutions to such problems. The result is a necessary and sufficient condition of optimality that generalizes well-known sufficient conditions, referred to as verification theorems, in dynamic programming; as a byproduct, we obtain a representation of the minimum cost in terms of the upper envelope of subsolutions to the Hamilton–Jacobi equation. It is a striking illustration of the wide range of problems to which convex analysis, and, in particular, convex duality, is applicable. The approach, applied to parametric problems in the calculus of variations, was pioneered by L. C. Young [Lectures on the Calculus of Variations and Optimal Control Theory, W. B. Saunders, Philadelphia, PA, 1969]. As recent work has shown, however, it is equally fruitful when applied in optimal control.
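
To see why the envelope representation generalizes verification theorems, consider a Mayer problem as a simplified illustration (our notation, not the paper's setting): minimize g(x(T)) subject to ẋ = f(x,u). If a smooth φ satisfies

```latex
\varphi_t(t,x) \;+\; \min_{u}\, \nabla_x \varphi(t,x)^{\top} f(x,u) \;\ge\; 0,
\qquad \varphi(T,\cdot) \;\le\; g ,
```

then φ is nondecreasing along every admissible trajectory, so φ(t₀,x₀) ≤ g(x(T)) for every control, i.e., φ(t₀,x₀) lower-bounds the minimum cost V(t₀,x₀). The representation cited above says that, modulo regularity, V is exactly the upper envelope (supremum) of such subsolutions.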

Book ChapterDOI
01 Jan 1993
TL;DR: Results concerning separately convex functions are discussed, most of them obtained some time ago but previously mentioned only to a few specialists and never published.
Abstract: I want to discuss here some results concerning separately convex functions. Most of these results were obtained some time ago but only mentioned to a few specialists, and I had not taken the time to publish them before, for obvious reasons. The motivation for these studies was nonlinear elasticity, but once I had solved an academic example where quasiconvexity had been replaced by separate convexity, it was not clear to me how to get further. I find it useful to choose this subject now in order to describe the evolution of some ideas during the last fifteen years.

Proceedings ArticleDOI
27 Apr 1993
TL;DR: The problem of image decompression is cast as an ill-posed inverse problem, and a stochastic regularization technique is used to form a well-posed reconstruction algorithm that greatly reduces the noticeable artifacts produced by standard techniques.
Abstract: The problem of image decompression is cast as an ill-posed inverse problem, and a stochastic regularization technique is used to form a well-posed reconstruction algorithm. A statistical model for the image which incorporates the convex Huber minimax function is proposed. The use of the Huber minimax function $\rho_T(\cdot)$ helps to maintain the discontinuities from the original image, which produces high-resolution edge boundaries. Since $\rho_T(\cdot)$ is convex, the resulting multidimensional minimization problem is a constrained convex optimization problem. The maximum a posteriori (MAP) estimation technique that is proposed results in the constrained optimization of a convex functional. The proposed image decompression algorithm produces reconstructed images with greatly reduced noticeable artifacts relative to standard techniques.
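
A minimal 1-D sketch of the convex-MAP idea: a quadratic data-fit term plus a Huber penalty ρ_T on neighboring differences, minimized as one smooth convex program. The signal, the identity observation operator, and the weights are illustrative; the paper's imaging model and constraints are more elaborate.

```python
import numpy as np
from scipy.optimize import minimize

def huber(t, T=0.5):
    """rho_T(t): quadratic for |t| <= T, linear beyond -- convex and C^1."""
    return np.where(np.abs(t) <= T, t * t, 2 * T * np.abs(t) - T * T)

rng = np.random.default_rng(0)
x_true = np.concatenate([np.zeros(20), np.ones(20)])   # a sharp edge
y = x_true + 0.1 * rng.standard_normal(x_true.size)    # noisy observation

def objective(x, lam=1.0):
    data_fit = np.sum((x - y) ** 2)
    smoothness = np.sum(huber(np.diff(x)))              # edge-preserving prior
    return data_fit + lam * smoothness

x_map = minimize(objective, y).x
print(np.round(x_map[18:22], 2))    # the edge at index 20 is preserved
```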

Journal ArticleDOI
TL;DR: In this paper, the regularization of Fredholm integral equations of the first kind with positive solutions by means of maximum entropy is considered; the regularized solution is the minimizer of a functional analogous to that of Phillips-Tikhonov regularization.
Abstract: The regularization of Fredholm integral equations of the first kind with positive solutions by means of maximum entropy is considered. The regularized solution is the minimizer of a functional analogous to that of Phillips–Tikhonov regularization. The regularized solution is shown to converge to the solution of the maximum entropy least squares problem, assuming it exists. Under additional regularity conditions akin to those for Phillips–Tikhonov regularization, error estimates are obtained as well; it is also shown that the regularity conditions are necessary for these estimates to hold. Approximations from finite-dimensional subspaces are also considered, as well as exact and approximate moment problems for the integral equations. The basic tools in the analysis are the weak compactness of subsets of $L_1$ consisting of functions of bounded entropy, and an inequality for convex optimization problems with Bregman functionals.
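
In our notation, for the equation Kx = y with x > 0, the regularized functional has the shape

```latex
J_\lambda(x) \;=\; \tfrac{1}{2}\, \| Kx - y \|_{2}^{2}
\;+\; \lambda \int x(s)\, \log x(s)\, ds ,
```

where the entropy term replaces the quadratic penalty λ‖x‖² of Phillips–Tikhonov regularization; this is a schematic rendering of the setup, not the paper's exact functional.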

Journal ArticleDOI
TL;DR: The central cutting plane algorithm for linear semi-infinite programming is extended to nonlinear convex SIP, and a linear convergence rate is established under weak differentiability assumptions, together with efficient cut-generation, constraint-dropping, and grid-management schemes for testing feasibility to a high degree of accuracy.
Abstract: The central cutting plane algorithm for linear semi-infinite programming (SIP) is extended to nonlinear convex SIP of the form $\min \{ f(x) \mid x \in H,\; g(x,t) \le 0 \text{ for all } t \in S \}$. Under differentiability assumptions that are weaker than those employed in superlinearly convergent algorithms, a linear convergence rate is established that has additional important features. These features are the ability to (i) generate a cut from any violated constraint; (ii) invoke efficient constraint-dropping rules for management of linear programming (LP) subproblem size; (iii) provide an efficient grid management scheme to generate cuts and ultimately to test feasibility to a high degree of accuracy, as well as to provide an automatic grid refinement for use in obtaining admissible starting solutions for the nonlinear system of first-order conditions; and (iv) provide primal and dual (Lagrangian) SIP feasible solutions in a finite number of iterations. Numerical tests are provided on a collection of problems.
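
A minimal sketch of a cutting-plane loop on a toy semi-infinite program; here g(x,t) is affine in x, so each cut is exact, and the "grid" is a fixed discretization of S. All problem data are illustrative and the scheme is far simpler than the paper's central cutting plane method.

```python
import numpy as np
from scipy.optimize import linprog

# min c^T x  s.t.  g(x,t) = x1*cos(t) + x2*sin(t) - 1 <= 0 for all t in S.
# The feasible set is the unit disk; the optimum is x = -c/||c||.
c = np.array([1.0, 2.0])
S = np.linspace(0, 2 * np.pi, 721)           # discretization grid of S
A, b = [], []                                # accumulated cuts A x <= b

x = np.zeros(2)
for it in range(50):
    viol = x[0] * np.cos(S) + x[1] * np.sin(S) - 1.0
    t = S[np.argmax(viol)]                   # most violated (or least satisfied) t
    A.append([np.cos(t), np.sin(t)]); b.append(1.0)
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(-2, 2), (-2, 2)])
    if np.allclose(res.x, x, atol=1e-9):
        break                                # no progress: current x is optimal
    x = res.x
print(x, -c / np.linalg.norm(c))             # compare with the exact optimum
```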

Journal ArticleDOI
TL;DR: A dual method for non-strictly convex programs with separable structure is considered; the decomposition methods of Han and the method of multipliers are shown to be special cases, and a convergence result is proved that sharpens the available convergence results for Han's methods.
Abstract: We consider a dual method for solving non-strictly convex programs possessing a certain separable structure. This method may be viewed as a dual version of a block coordinate ascent method studied by Auslender [1, Section 6]. We show that the decomposition methods of Han [6, 7] and the method of multipliers may be viewed as special cases of this method. We also prove a convergence result for this method which can be applied to sharpen the available convergence results for Han's methods.

Journal ArticleDOI
TL;DR: This study analyzes the rate of convergence of certain dual ascent methods for the problem of minimizing a strictly convex essentially smooth function subject to linear constraints and shows that, under mild assumptions on the problem, these methods attain a linear rate of convergence.
Abstract: We analyze the rate of convergence of certain dual ascent methods for the problem of minimizing a strictly convex essentially smooth function subject to linear constraints. Included in our study are dual coordinate ascent methods and dual gradient methods. We show that, under mild assumptions on the problem, these methods attain a linear rate of convergence. Our proof is based on estimating the distance from a feasible dual solution to the optimal dual solution set by the norm of a certain residual.
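
A minimal sketch of a dual gradient method on the simplest instance of this problem class, minimizing ½‖x‖² subject to Ax = b: the dual gradient is the primal residual, and the iterates exhibit the kind of linear rate the paper establishes. Data and step size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 6))
b = rng.standard_normal(3)

lam = np.zeros(3)
for k in range(3000):
    x = -A.T @ lam                    # x(lam) = argmin_x (1/2)||x||^2 + lam^T (A x - b)
    lam = lam + 0.05 * (A @ x - b)    # ascent step along the dual gradient
print(np.linalg.norm(A @ x - b))      # residual -> 0; x is the least-norm solution
```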

Book ChapterDOI
C. Fleury
01 Jan 1993
TL;DR: Several recent methods based on convex approximation schemes are discussed that have demonstrated strong potential for the efficient solution of structural optimization problems.
Abstract: In this lecture, several recent methods based on convex approximation schemes are discussed that have demonstrated strong potential for the efficient solution of structural optimization problems.
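
A well-known instance of such a scheme is convex linearization (CONLIN), which Fleury helped develop; in our notation, it linearizes in x_i where the gradient component is nonnegative and in the reciprocal 1/x_i elsewhere, giving a convex, separable approximation:

```latex
\tilde f(x) \;=\; f(x^{0})
\;+\; \sum_{i:\ \partial_i f(x^0) \ge 0} \partial_i f(x^{0})\,\bigl( x_i - x_i^{0} \bigr)
\;+\; \sum_{i:\ \partial_i f(x^0) < 0} \partial_i f(x^{0})\, x_i^{0} \Bigl( 1 - \frac{x_i^{0}}{x_i} \Bigr) .
```

Each reciprocal term is convex for x_i > 0 precisely because its coefficient is negative, and the approximation matches f and its gradient at x⁰.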

Journal ArticleDOI
TL;DR: In this article, the authors consider the H∞ guaranteed cost control problem for continuous-time uncertain systems, which consists of the determination of a stabilizing state feedback gain that imposes on all possible closed-loop models an upper bound γ > 0 on the H∞ cost.

Journal ArticleDOI
TL;DR: A synthesis procedure for robust control laws for uncertain linear systems, based on a sufficient condition for quadratic stabilization with root clustering, is given using an auxiliary convex problem.
Abstract: The problem of designing robust control laws, in performance and in stability, for uncertain linear systems is considered. Performances are taken into account using root clustering of the closed-loop dynamic matrix in a sector of the complex plane. A synthesis procedure, based on a sufficient condition for quadratic stabilization and root clustering under assumptions such as stabilizability, is given using an auxiliary convex problem. The results are illustrated by a significant example from the literature.

Journal ArticleDOI
TL;DR: This paper examines nonsmooth constrained multi-objective optimization problems where the objective function and the constraints are compositions of convex functions, and locally Lipschitz and Gâteaux differentiable functions.
Abstract: This paper examines nonsmooth constrained multi-objective optimization problems where the objective function and the constraints are compositions of convex functions, and locally Lipschitz and Gâteaux differentiable functions. Lagrangian necessary conditions, and new sufficient optimality conditions for efficient and properly efficient solutions are presented. Multi-objective duality results are given for convex composite problems which are not necessarily convex programming problems. Applications of the results to new and some special classes of nonlinear programming problems are discussed. A scalarization result and a characterization of the set of all properly efficient solutions for convex composite problems are also discussed under appropriate conditions.
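
In our notation, and showing a scalarized form (the paper treats vector-valued objectives), the problems considered have the convex composite structure

```latex
\min_{x} \; h_0\bigl( F_0(x) \bigr)
\quad \text{s.t.} \quad h_i\bigl( F_i(x) \bigr) \le 0, \qquad i = 1, \dots, m ,
```

with each h_i convex and each F_i locally Lipschitz and Gâteaux differentiable. Such programs need not be convex overall, which is why separate sufficient optimality conditions and duality results are required.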

Proceedings ArticleDOI
15 Dec 1993
TL;DR: This paper presents the software package LMI-LAB for the manipulation and solution of linear matrix inequalities (LMIs) and confirms that LMI formulations constitute a computationally viable and reasonable approach to control system design.
Abstract: This paper presents the software package LMI-LAB for the manipulation and solution of linear matrix inequalities (LMIs). Fairly general systems of LMIs can be handled, as well as two important optimization problems under LMI constraints. The polynomial-time projective method of Nesterov and Nemirovsky is used to solve the underlying convex optimization programs. Several benchmark examples demonstrate that the complexity and running time of these algorithms are by no means prohibitive. This confirms that LMI formulations constitute a computationally viable and reasonable approach to control system design.
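
LMI-LAB itself is a MATLAB package; as a rough modern stand-in, here is the same kind of feasibility problem (a Lyapunov LMI certifying stability of dx/dt = Ax) posed with Python's CVXPY, whose SDP solver plays the role of the projective method. The matrix A and the tolerance are illustrative.

```python
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])        # a stable test matrix (eigenvalues -1, -2)

# Find P = P^T > 0 with A^T P + P A < 0, i.e., a quadratic Lyapunov function.
P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2),
               A.T @ P + P @ A << -eps * np.eye(2)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()                        # typically dispatched to an SDP solver such as SCS
print(prob.status, np.round(P.value, 3))   # 'optimal' means the LMI is feasible
```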

Journal ArticleDOI
TL;DR: It is shown that this nonconvex problem can be converted to a concave minimization problem with p variables, whose objective function value is determined by solving a convex minimization problem, and an outer approximation method is proposed for obtaining a global minimum of the resulting problem.
Abstract: This paper addresses the minimization of the product of p convex functions on a convex set. It is shown that this nonconvex problem can be converted to a concave minimization problem with p variables, whose objective function value is determined by solving a convex minimization problem. An outer approximation method is proposed for obtaining a global minimum of the resulting problem. Computational experiments indicate that this algorithm is reasonably efficient when p is less than 4.
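
One way to see such a conversion (our derivation via the AM-GM inequality, not necessarily the paper's exact construction): for positive convex f_i on a convex set X,

```latex
\min_{x \in X} \; \prod_{i=1}^{p} f_i(x)
\;=\;
\min_{\substack{\xi > 0,\; \prod_i \xi_i = 1}}
\Bigl[ \tfrac{1}{p} \min_{x \in X} \sum_{i=1}^{p} \xi_i\, f_i(x) \Bigr]^{p} ,
```

since AM-GM gives equality for the optimal weights ξ_i ∝ 1/f_i(x). The inner problem is convex in x for each fixed ξ, while the outer objective, a minimum of functions linear in ξ, is concave: hence a concave minimization in p variables whose value is computed by solving a convex program.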

Journal ArticleDOI
TL;DR: In this paper, a parametrized convex vector optimization problem with a parameter vector u is considered, and sufficient conditions for Min DW = Min DY and DW = W-Min DY are obtained.
Abstract: We consider a parametrized convex vector optimization problem with a parameter vector u. Let Y(u) be the objective space image of the parametrized feasible region. The perturbation map W(u) is defined as the set of all minimal points of the set Y(u) with respect to an ordering cone in the objective space. The purpose of this paper is to investigate the relationship between the contingent derivative DW of W and the contingent derivative DY of Y. Sufficient conditions for Min DW = Min DY and DW = W-Min DY are obtained, respectively. Therefore, quantitative information on the behavior of the perturbation map is provided.