
Showing papers on "Convex optimization published in 1992"


Journal ArticleDOI
TL;DR: This paper shows, by means of an operator called a splitting operator, that the Douglas–Rachford splitting method for finding a zero of the sum of two monotone operators is a special case of the proximal point algorithm, which allows the unification and generalization of a variety of convex programming algorithms.
Abstract: This paper shows, by means of an operator called a splitting operator, that the Douglas–Rachford splitting method for finding a zero of the sum of two monotone operators is a special case of the proximal point algorithm. Therefore, applications of Douglas–Rachford splitting, such as the alternating direction method of multipliers for convex programming decomposition, are also special cases of the proximal point algorithm. This observation allows the unification and generalization of a variety of convex programming algorithms. By introducing a modified version of the proximal point algorithm, we derive a new, generalized alternating direction method of multipliers for convex programming. Advances of this sort illustrate the power and generality gained by adopting monotone operator theory as a conceptual framework.
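The splitting iteration described above can be sketched on a toy scalar problem. This is a generic illustration of Douglas–Rachford splitting, not code from the paper: it minimizes f(x) + g(x) with f(x) = |x| and g(x) = ½(x − 3)², both of which have closed-form proximal maps, and the minimizer is x* = 2.

```python
# Douglas-Rachford splitting for min f(x) + g(x), where
# f(x) = |x|          -> prox is soft-thresholding
# g(x) = 0.5*(x-3)^2  -> prox is a simple weighted average
# The subgradient condition x - 3 + sign(x) = 0 gives the minimizer x* = 2.

def prox_f(z, lam):
    # soft-thresholding: the proximal map of lam*|x|
    return max(abs(z) - lam, 0.0) * (1 if z > 0 else -1)

def prox_g(z, lam):
    # prox of lam*0.5*(x-3)^2: argmin_x lam*0.5*(x-3)^2 + 0.5*(x-z)^2
    return (z + 3.0 * lam) / (1.0 + lam)

def douglas_rachford(lam=1.0, iters=200):
    z = 0.0
    for _ in range(iters):
        x = prox_f(z, lam)            # first resolvent step
        y = prox_g(2.0 * x - z, lam)  # resolvent applied to the reflection
        z = z + y - x                 # update the governing sequence
    return prox_f(z, lam)             # x_k converges to argmin f + g

print(douglas_rachford())  # -> 2.0
```

Here the governing sequence z_k contracts linearly toward 3, and the shadow sequence x_k = prox_f(z_k) tends to the minimizer 2.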

2,913 citations


Journal ArticleDOI
TL;DR: In this paper, the authors apply the techniques developed in [C1] to the problem of mappings with a convex potential between domains, studying the map ∇ψ for a Lipschitz convex ψ such that ∇ψ maps Ω1 onto Ω2 in the a.e. sense and in some (weak) sense.
Abstract: In this work, we apply the techniques developed in [C1] to the problem of mappings with a convex potential between domains. That is, given two bounded domains Ω1, Ω2 of R^n and two nonnegative real functions f_i defined in Ω_i that are bounded away from zero and infinity, we want to study the map ∇ψ for a Lipschitz convex ψ, such that ∇ψ maps Ω1 onto Ω2 in the a.e. sense and, in some (weak) sense, (1) f2(∇ψ) det D_ij ψ = f1(x). In recent work Y. Brenier showed existence and uniqueness of such a map (provided that |∂Ω_i| = 0) under the obvious compatibility condition

445 citations


Journal ArticleDOI
TL;DR: Two new proximal point algorithms are introduced for minimizing a proper, lower-semicontinuous convex function f; they converge even if f has no minimizers or is unbounded from below.
Abstract: This paper introduces two new proximal point algorithms for minimizing a proper, lower-semicontinuous convex function $f: \mathbf{R}^n \to \mathbf{R} \cup \{ \infty \}$. Under this minimal assumption on f, ...
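The basic proximal point iteration x_{k+1} = argmin_x f(x) + (1/(2λ))(x − x_k)² can be sketched as follows. This is an illustrative toy, not either of the paper's two algorithms: with f(x) = |x − 3| the prox step is a shifted soft-threshold, and with the minimizer-free f(x) = x the iterates still form a minimizing sequence, matching the convergence claim above.

```python
# Proximal point iteration: x_{k+1} = argmin_x f(x) + (1/(2*lam))*(x - x_k)^2.

def prox_abs_shift(x, lam, c=3.0):
    # prox of f(x) = |x - c|: move toward c by lam, stopping at c
    d = x - c
    return c + max(abs(d) - lam, 0.0) * (1 if d > 0 else -1)

x = 10.0
for _ in range(50):
    x = prox_abs_shift(x, lam=0.5)
print(x)  # -> 3.0, the minimizer of |x - 3|

# For f(x) = x (no minimizer, unbounded below) the prox step is
# x_{k+1} = x_k - lam, so x_k -> -inf: still a minimizing sequence.
y = 0.0
for _ in range(10):
    y = y - 0.5   # prox of f(x) = x with lam = 0.5
print(y)  # -> -5.0
```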

250 citations


Journal ArticleDOI
Roman A. Polyak1
TL;DR: The excellent MBF properties allow us to discover that for any nondegenerate constrained optimization problem, there exists a “hot” start, from which the NMBM has a better rate of convergence, a better complexity bound, and is more stable than the interior point methods, which are based on the classical barrier functions.
Abstract: The nonlinear rescaling principle employs monotone and sufficiently smooth functions to transform the constraints and/or the objective function into an equivalent problem whose classical Lagrangian has important properties on the primal and the dual spaces. The application of the nonlinear rescaling principle to constrained optimization problems leads to a class of modified barrier functions (MBFs) and MBF methods (MBFMs). Being classical Lagrangians (CLs) for an equivalent problem, the MBFs combine the best properties of the CLs and classical barrier functions (CBFs) but at the same time are free of their most essential deficiencies. Due to the excellent MBF properties, new characteristics of the dual pair of convex programming problems have been found and the duality theory for nonconvex constrained optimization has been developed. The MBFMs have up to a superlinear rate of convergence and are to the CBF method as the multipliers method for augmented Lagrangians is to the classical penalty function method. Based on the duality theory associated with MBFs, a method for the simultaneous solution of the dual pair of convex programming problems with up to quadratic rates of convergence has been developed. The application of the MBF to linear (LP) and quadratic (QP) programming leads to a new type of multipliers method which has a much better rate of convergence under lower computational complexity at each step as compared to the CBF methods. The numerical realization of the MBFM leads to the Newton Modified Barrier Method (NMBM). The excellent MBF properties allow us to discover that for any nondegenerate constrained optimization problem, there exists a "hot" start, from which the NMBM has a better rate of convergence, a better complexity bound, and is more stable than the interior point methods that are based on the classical barrier functions.

244 citations


Journal ArticleDOI
Masao Fukushima1
TL;DR: A decomposition algorithm for solving convex programming problems with separable structure that reduces to the ordinary method of multipliers when the problem is regarded as nonseparable.
Abstract: This paper presents a decomposition algorithm for solving convex programming problems with separable structure. The algorithm is obtained through application of the alternating direction method of multipliers to the dual of the convex programming problem to be solved. In particular, the algorithm reduces to the ordinary method of multipliers when the problem is regarded as nonseparable. Under the assumption that both primal and dual problems have at least one solution and the solution set of the primal problem is bounded, global convergence of the algorithm is established.
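As a hedged sketch of the alternating direction method of multipliers the abstract builds on (a generic two-block ADMM template on a toy problem, not the paper's dual application): minimize f1(x) + f2(z) subject to x = z, with f1(x) = ½(x − 1)² and f2(z) = ½(z − 3)², alternating two closed-form minimizations with a multiplier update. The solution is x = z = 2.

```python
# ADMM for min f1(x) + f2(z)  s.t.  x = z   (scaled dual form)
# with f1(x) = 0.5*(x-1)^2, f2(z) = 0.5*(z-3)^2; optimum is x = z = 2.

def admm(rho=1.0, iters=100):
    x = z = u = 0.0
    for _ in range(iters):
        # x-update: argmin_x f1(x) + (rho/2)*(x - z + u)^2
        x = (1.0 + rho * (z - u)) / (1.0 + rho)
        # z-update: argmin_z f2(z) + (rho/2)*(x - z + u)^2
        z = (3.0 + rho * (x + u)) / (1.0 + rho)
        # scaled multiplier update on the residual x - z
        u = u + x - z
    return x, z

x, z = admm()
print(x, z)  # both -> 2.0
```

With ρ = 1 the dual variable u contracts geometrically to −1, so the primal iterates agree to machine precision after a few dozen sweeps.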

216 citations


Journal ArticleDOI
TL;DR: This paper deals with an application of a variant of Karmarkar's projective algorithm for linear programming to the solution of a generic nondifferentiable minimization problem, based on a column generation technique defining a sequence of primal linear programming maximization problems.
Abstract: This paper deals with an application of a variant of Karmarkar's projective algorithm for linear programming to the solution of a generic nondifferentiable minimization problem. This problem is closely related to the Dantzig-Wolfe decomposition technique used in large-scale convex programming. The proposed method is based on a column generation technique defining a sequence of primal linear programming maximization problems. Associated with each problem one defines a weighted potential function which is minimized using a variant of the projective algorithm. When a point close to the minimum of the potential function is reached, a corresponding point in the dual space is constructed, which is close to the analytic center of a polytope containing the solution set of the nondifferentiable optimization problem. An admissible cut of the polytope, corresponding to a new supporting hyperplane of the epigraph of the function to minimize, is then generated at this approximate analytic center. In the primal space this new cut translates into a new column for the associated linear programming problem. The algorithm has performed well on a set of convex nondifferentiable programming problems.

203 citations



Journal ArticleDOI
TL;DR: In this paper, the authors considered the optimal constant scaling problem for the full-information H∞ control problem, and obtained the solution by transforming the original problem into a convex feasibility problem, specifically, a structured, linear matrix inequality.

135 citations


Journal ArticleDOI
TL;DR: It is shown that LMP can be solved efficiently by the combination of the parametric simplex method and any standard convex minimization procedure, and can be extended to a convex multiplicative programming problem (CMP), which minimizes the product of two convex functions under convex constraints.
Abstract: An algorithm for solving a linear multiplicative programming problem (referred to as LMP) is proposed. LMP minimizes the product of two linear functions subject to general linear constraints. The product of two linear functions is a typical non-convex function, so that it can have multiple local minima. It is shown, however, that LMP can be solved efficiently by the combination of the parametric simplex method and any standard convex minimization procedure. The computational results indicate that the amount of computation is not much different from that of solving linear programs of the same size. In addition, the method proposed for LMP can be extended to a convex multiplicative programming problem (CMP), which minimizes the product of two convex functions under convex constraints.
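The parametric idea can be sketched as follows. This is an illustrative reduction using the identity pq = min over ξ > 0 of ((ξp + q/ξ)/2)² for p, q > 0, with the inner linear program solved by vertex enumeration on a toy polytope; it hedges to a plain grid search over ξ rather than the paper's parametric simplex method.

```python
# Minimize (x1 + 0.5)*(x2 + 0.5) over a polytope, both factors positive.
# Using p*q = min_{xi>0} ((xi*p + q/xi)/2)^2, the problem reduces to a
# one-dimensional search over xi with a linear program (in x) inside.
import numpy as np

# Toy polytope: triangle with vertices (0,0), (1,0), (0,1).
V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
p = V[:, 0] + 0.5   # p(x) = x1 + 0.5 evaluated at each vertex
q = V[:, 1] + 0.5   # q(x) = x2 + 0.5 evaluated at each vertex

def g(xi):
    # inner LP: min over the polytope of xi*p(x) + q(x)/xi; the objective
    # is linear in x, so it is attained at a vertex
    return np.min(xi * p + q / xi)

xis = np.geomspace(0.1, 10.0, 10001)   # crude grid search over xi
best = min(g(xi) for xi in xis)
print((best / 2.0) ** 2)               # ~0.25, attained at x = (0, 0)
```

For this instance the minimum product is 0.25 at the vertex (0, 0), reached at ξ = 1; in general the outer search over ξ is what lets a one-parameter family of linear programs capture a nonconvex product objective.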

119 citations


Journal ArticleDOI
TL;DR: New results for manipulating and searching semi-dynamic planar convex hulls are obtained, and logarithmic time bounds for set splitting and for finding a tangent when the two convex hulls are not linearly separated are derived.
Abstract: We obtain new results for manipulating and searching semi-dynamic planar convex hulls (subject to deletions only), and apply them to derive improved bounds for two problems in geometry and scheduling. The new convex hull results are logarithmic time bounds for set splitting and for finding a tangent when the two convex hulls are not linearly separated. Using these results, we solve the following two problems optimally in O(n log n) time: (1) [matching] given n red points and n blue points in the plane, find a matching of red and blue points (by line segments) in which no two edges cross, and (2) [scheduling] given n jobs with due dates, linear penalties for late completion, and a single machine on which to process them, find a schedule of jobs that minimizes the maximum penalty.

96 citations


Journal ArticleDOI
TL;DR: In this paper, a new iterative method for solving linear complementarity problems is proposed, which makes two matrix-vector multiplications and a trivial projection onto the nonnegative orthant at each iteration, and the Euclidean distance of the iterates to the solution set monotonically converges to zero.
Abstract: In this paper we propose a new iterative method for solving a class of linear complementarity problems: u ≥ 0, Mu + q ≥ 0, u^T(Mu + q) = 0, where M is a given l × l positive semidefinite matrix (not necessarily symmetric) and q is a given l-vector. The method makes two matrix-vector multiplications and a trivial projection onto the nonnegative orthant at each iteration, and the Euclidean distance of the iterates to the solution set monotonically converges to zero. The main advantages of the method presented are its simplicity, robustness, and ability to handle large problems with any starting point. It is pointed out that the method may be used to solve general convex quadratic programming problems. Preliminary numerical experiments indicate that this method may be very efficient for large sparse problems.
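A hedged illustration of a projection-type iteration for the LCP (a plain projected-gradient fixed point, simpler than the paper's two-multiplication scheme, and assuming M symmetric positive definite so the fixed-point map is a contraction):

```python
# Linear complementarity problem: u >= 0, Mu + q >= 0, u^T (Mu + q) = 0.
# Fixed-point sketch: u <- max(0, u - beta*(M u + q)); this converges for
# symmetric positive definite M when beta < 2 / lambda_max(M).
import numpy as np

M = np.array([[2.0, 0.0], [0.0, 1.0]])
q = np.array([-2.0, 1.0])
# Solution u = (1, 0): then Mu + q = (0, 1) and complementarity holds.

u = np.zeros(2)
beta = 0.3
for _ in range(200):
    u = np.maximum(0.0, u - beta * (M @ u + q))   # project onto u >= 0

w = M @ u + q
print(u, u @ w)   # u ~ (1, 0), complementarity product ~ 0
```

Each iteration here costs one matrix-vector product plus a componentwise max, which is what makes projection-type LCP methods attractive for large sparse problems.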

Journal ArticleDOI
TL;DR: This paper shows that a recently introduced simple constraint qualification, and the so-called quasi relative interior constraint qualification, both extend to (P) from the special case that g = g2 is affine and S = S2 is polyhedral in a finite dimensional space (the so-called partially finite program).
Abstract: In this paper we study constraint qualifications and duality results for infinite convex programs (P) μ = inf{f(x): g(x) ∈ −S, x ∈ C}, where g = (g1, g2) and S = S1 × S2, the Si are convex cones, i = 1, 2, C is a convex subset of a vector space X, and f and gi are, respectively, convex and Si-convex, i = 1, 2. In particular, we consider the special case when S2 is in a finite dimensional space, g2 is affine and S2 is polyhedral. We show that a recently introduced simple constraint qualification, and the so-called quasi relative interior constraint qualification, both extend to (P) from the special case that g = g2 is affine and S = S2 is polyhedral in a finite dimensional space (the so-called partially finite program). This provides generalized Slater type conditions for (P) which are much weaker than the standard Slater condition. We exhibit the relationship between these two constraint qualifications and show how to replace the affine assumption on g2 and the finite dimensionality assumption on S2 by a local compactness assumption. We then introduce the notion of strong quasi relative interior to get parallel results for more general infinite dimensional programs without the local compactness assumption. Our basic tool reduces to guaranteeing the closure of the sum of two closed convex cones.

Journal ArticleDOI
TL;DR: This work derived a duality theorem for partially finite convex programs, problems for which the standard Slater condition fails almost invariably, and applies its results to a number of more concrete problems, including variants of semi-infinite linear programming, L1 approximation, constrained approximation and interpolation, spectral estimation, semi-infinite transportation problems and the generalized market area problem of Lowe and Hurter (1976).
Abstract: In Part I of this work we derived a duality theorem for partially finite convex programs, problems for which the standard Slater condition fails almost invariably. Our result depended on a constraint qualification involving the notion of quasi relative interior. The derivation of the primal solution from a dual solution depended on the differentiability of the dual objective function: the differentiability of various convex functions in lattices was considered at the end of Part I. In Part II we shall apply our results to a number of more concrete problems, including variants of semi-infinite linear programming, L1 approximation, constrained approximation and interpolation, spectral estimation, semi-infinite transportation problems and the generalized market area problem of Lowe and Hurter (1976). As in Part I, we shall use lattice notation extensively, but, as we illustrated there, in concrete examples lattice-theoretic ideas can be avoided, if preferred, by direct calculation.

Journal ArticleDOI
TL;DR: In this paper, the authors examined the relationship of two basic conditions used in interior-point methods for generalized convex programming, self-concordance and a relative Lipschitz condition.
Abstract: This work is concerned with generalized convex programming problems, where the objective function and also the constraints belong to a certain class of convex functions. It examines the relationship of two basic conditions used in interior-point methods for generalized convex programming—self-concordance and a relative Lipschitz condition—and gives a short and simple complexity analysis of an interior-point method for generalized convex programming. In generalizing ellipsoidal approximations for the feasible set, it also allows a geometrical interpretation of the analysis.

Journal ArticleDOI
TL;DR: A Bahadur-type strong approximation and bounds on the rate of convergence are established for M-estimators defined by minimizing a convex criterion function, generalizing results for the LAD estimators.
Abstract: We consider $M$-estimators defined by minimization of a convex criterion function, not necessarily smooth. Our asymptotic results generalize some of those concerning the LAD estimators. We establish a Bahadur-type strong approximation and bounds on the rate of convergence.

Journal ArticleDOI
TL;DR: This paper presents a nonadjacent extreme-point search algorithm for finding a globally optimal solution for problem (P), and finds an exact extreme- point optimal solution after a finite number of iterations.
Abstract: The problem (P) of optimizing a linear function over the efficient set of a multiple-objective linear program serves many useful purposes in multiple-criteria decision making. Mathematically, problem (P) can be classified as a global optimization problem. Such problems are much more difficult to solve than convex programming problems. In this paper, a nonadjacent extreme-point search algorithm is presented for finding a globally optimal solution for problem (P). The algorithm finds an exact extreme-point optimal solution for the problem after a finite number of iterations. It can be implemented using only linear programming methods. Convergence of the algorithm is proven, and a discussion is included of its main advantages and disadvantages.

Journal ArticleDOI
TL;DR: In this paper, the concept of well-posedness was introduced for convex minimization problems on metric spaces, generalizing the notion due to Tykhonov to situations in which there is no uniqueness of solutions.
Abstract: A concept of well-posedness, or more exactly of stability in a metric sense, is introduced for minimization problems on metric spaces generalizing the notion due to Tykhonov to situations in which there is no uniqueness of solutions. It is compared with other concepts, in particular to a variant of the notion after Hadamard reformulated via a metric semicontinuity approach. Concrete criteria of well-posedness are presented, e.g., for convex minimization problems.

Journal ArticleDOI
TL;DR: The rationally constrained rational programming (RCRP) problem is shown, for the first time, to be equivalent to a quadratically constrained quadratic programming problem with convex objective function and constraints that are all convex except for one that is concave and separable.
Abstract: The rationally constrained rational programming (RCRP) problem is shown, for the first time, to be equivalent to the quadratically constrained quadratic programming problem with convex objective function and constraints that are all convex except for one that is concave and separable. This equivalence is then used in developing a novel implementation of the Generalized Benders Decomposition (GBD) which, unlike all earlier implementations, is guaranteed to identify the global optimum of the RCRP problem. It is also shown that the critical step in the proposed GBD implementation is the solution of the master problem, which is a quadratically constrained, separable, reverse convex programming problem that must be solved globally. Algorithmic approaches to the solution of such problems are discussed and illustrative examples are presented.

Journal ArticleDOI
TL;DR: A technique for reducing the infinite valued case to the finite valued one is presented and used to extend the results in Burke (1987) to the case in which the convex function may take infinite values.
Abstract: Burke (1987) has recently developed second-order necessary and sufficient conditions for convex composite optimization in the case where the convex function is finite valued. In this note we present a technique for reducing the infinite valued case to the finite valued one. We then use this technique to extend the results in Burke (1987) to the case in which the convex function may take infinite values. We conclude by comparing these results with those established by Rockafellar (1989) for the piecewise linear-quadratic case.

Journal ArticleDOI
TL;DR: It is proved that the number of iterations required by the algorithm to converge to an ε-optimal solution is O((1+M²)n|log ε|), depending on the updating scheme for the lower bound.
Abstract: In this paper, we describe a natural implementation of the classical logarithmic barrier function method for smooth convex programming. It is assumed that the objective and constraint functions fulfill the so-called relative Lipschitz condition, with Lipschitz constant M > 0.
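The classical logarithmic barrier scheme the abstract refers to can be sketched in one dimension (a generic illustration, not the paper's implementation): minimize x² subject to x ≥ 1 by Newton steps on the barrier function x² − μ log(x − 1) while shrinking μ. The central-path minimizer x(μ) = (1 + √(1+2μ))/2 tends to the constrained optimum x* = 1 as μ → 0.

```python
# Log-barrier method for  min x^2  s.t.  x >= 1  (optimum x* = 1).
# Barrier subproblem: minimize B(x) = x^2 - mu*log(x - 1), mu decreasing.
import math

def newton_barrier(x, mu, iters=30):
    for _ in range(iters):
        grad = 2.0 * x - mu / (x - 1.0)
        hess = 2.0 + mu / (x - 1.0) ** 2
        step = grad / hess
        t = 1.0
        while x - t * step <= 1.0:   # backtrack to stay strictly feasible
            t *= 0.5
        x -= t * step
    return x

x, mu = 2.0, 1.0
while mu > 1e-8:
    x = newton_barrier(x, mu)   # follow the central path as mu shrinks
    mu *= 0.1
print(x)   # within O(mu) of the constrained optimum x* = 1
```

The warm start from the previous barrier minimizer is what keeps the Newton phase cheap at each value of μ, which is the mechanism behind the iteration bound quoted in the TL;DR.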

Proceedings ArticleDOI
01 Jul 1992
TL;DR: A general framework for obtaining algorithms for solving convex programs in a small number of variables but a large number of constraints, where all but a small number of the constraints are linear.
Abstract: We consider the solution of convex programs in a small number of variables but large number of constraints, where all but a small number of the constraints are linear. We develop a general framework for obtaining algorithms for these problems which run in time linear in the number of constraints. We give an application to computing minimum spanning ellipsoids in fixed dimension.

Journal ArticleDOI
TL;DR: It is shown, subject to some standard constraint qualifications, that the operations of addition and restriction are continuous; the results are applied to convex well-posed optimization problems, as well as to convergence of approximated solutions in infinite dimensional convex programming.
Abstract: Let Γ(X) denote the proper, lower semicontinuous, convex functions on a Banach space X, equipped with the completely metrizable topology of uniform convergence of distance functions on bounded sets. We show, subject to some standard constraint qualifications, that the operations of addition and restriction are continuous. These results are applied to convex well-posed optimization problems, as well as to convergence of approximated solutions in infinite dimensional convex programming, to linear functionals exposing convex sets, and to metric projections. Also, for any given function in Γ(X), we obtain results regarding the convergence of its inf-convolution with smoothing kernels.

Proceedings ArticleDOI
24 Jun 1992
TL;DR: A method is presented for developing confidence that the available a priori information is correct, and an essentially optimal identification algorithm is given for this problem, which is (worst-case strongly) optimal to within a factor of two.
Abstract: This paper is concerned with a particular control-oriented system identification problem recently considered by several authors. This problem has been referred to in the literature as the problem of worst-case system identification in H∞. The formulation of this problem is worst-case/deterministic in nature. The available a priori information consists of a lower bound on the relative stability of the plant, an upper bound on a certain gain associated with the plant, and an upper bound on the noise level. The available a posteriori information consists of a finite number of noisy plant point frequency response samples. The objective is to identify the plant transfer function in H∞ using the available a priori and a posteriori information. In this paper we resolve several important open issues pertaining to this problem. First, a method is presented for developing confidence that the available a priori information is correct. This method requires the solution of a certain nondifferentiable convex programming problem. Second, an essentially optimal identification algorithm is given for this problem. This algorithm is (worst-case strongly) optimal to within a factor of two. Finally, new upper and lower bounds on the optimal identification error for this problem are derived and used to estimate the identification error associated with the algorithm presented here. Interestingly, the development of each of the results described above draws heavily upon the classical Nevanlinna-Pick optimal interpolation theory. As such, the results of this paper establish a clear link between the areas of system identification and optimal interpolation theory.

Proceedings ArticleDOI
16 Dec 1992
TL;DR: In this paper, a strongly robust H∞ performance criterion is introduced, and its applications in robust performance analysis and synthesis for nominally linear systems with time-varying uncertainties are discussed and compared with the constant scaled small gain criterion.
Abstract: Robust performance analysis and state feedback design are considered for systems with time-varying parameter uncertainties. The notion of a strongly robust H∞ performance criterion is introduced, and its applications in robust performance analysis and synthesis for nominally linear systems with time-varying uncertainties are discussed and compared with the constant scaled small gain criterion. It is shown that most robust performance analysis and synthesis problems under this strongly robust H∞ performance criterion can be transformed into linear matrix inequality problems, and can be solved through finite-dimensional convex programming. The results are in general less conservative than those obtained using small-gain-type criteria.

Journal ArticleDOI
TL;DR: In this paper, the stochastic version of a theorem on J-convex functions majorized by J-concave functions is given, and some results on the differentiability of convex stochastic processes are presented.
Abstract: We present some results on the differentiability of convex stochastic processes. Furthermore, the stochastic version of a theorem on J-convex functions majorized by J-concave functions is given.


Journal ArticleDOI
01 Jul 1992
TL;DR: This work presents a primal-dual path following interior algorithm for a class of linearly constrained convex programming problems with non-negative decision variables that reduces the duality gap by at least a factor of (1−δ/√n) at each iteration.
Abstract: We present a primal-dual path following interior algorithm for a class of linearly constrained convex programming problems with non-negative decision variables. We introduce the definition of a Scaled Lipschitz Condition and show that if the objective function satisfies the Scaled Lipschitz Condition then, at each iteration, our algorithm reduces the duality gap by at least a factor of (1−δ/√n), where δ is positive and depends on the curvature of the objective function, by means of solving a system of linear equations which requires no more than O(n³) arithmetic operations. The class of functions having the Scaled Lipschitz Condition includes linear, convex quadratic and entropy functions.

Book ChapterDOI
01 Jan 1992
TL;DR: In this paper, the convergence of the proximal method of Martinet–Rockafellar, in exact or approximate form, is revisited in connection with the asymptotic behaviour of the solutions to differential inclusions associated with maximal monotone operators.
Abstract: The convergence of the proximal method of Martinet–Rockafellar, in exact or approximate form, is revisited in connection with the asymptotic behaviour of the solutions to differential inclusions associated with maximal monotone operators. Actually, in the context of convex optimization the generated sequence is shown to be minimizing without any boundedness assumption. In the more general context of monotone inclusions, if the set of solutions has a nonempty interior, then the generated sequence is strongly convergent.

Proceedings ArticleDOI
01 Jun 1992
TL;DR: An interior point algorithm for the solution of these convex programs with a finite number of variables is presented and its application with the standard LQR design is illustrated.
Abstract: Recent results have shown that several H2 and H2-related problems can be formulated as convex programs with a finite number of variables. We present an interior point algorithm for the solution of these convex programs and illustrate its application with the standard LQR design.