
Showing papers on "Convex optimization published in 1991"


Journal ArticleDOI
TL;DR: It is shown that in the state-feedback case one can come arbitrarily close to the optimal (even over full-information controllers) mixed H2/H∞ performance measure using constant-gain state feedback.
Abstract: The problem of finding an internally stabilizing controller that minimizes a mixed H2/H∞ performance measure subject to an inequality constraint on the H∞ norm of another closed-loop transfer function is considered. This problem can be interpreted and motivated as a problem of optimal nominal performance subject to a robust stability constraint. Both the state-feedback and output-feedback problems are considered. It is shown that in the state-feedback case one can come arbitrarily close to the optimal (even over full-information controllers) mixed H2/H∞ performance measure using constant-gain state feedback. Moreover, the state-feedback problem can be converted into a convex optimization problem over a bounded subset of real matrices of dimensions n×n and n×q, where n and q are, respectively, the state and input dimensions. Using the central H∞ estimator, it is shown that the output-feedback problem can be reduced to a state-feedback problem. In this case, the dimension of the resulting controller does not exceed the dimension of the generalized plant.

762 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that the algorithm of Han and Lou is a special case of a splitting algorithm analyzed by Gabay for finding a zero of the sum of two maximal monotone operators, and new applications of this algorithm to variational inequalities, convex programming, and the solution of linear complementarity problems are proposed.
Abstract: Recently Han and Lou proposed a highly parallelizable decomposition algorithm for minimizing a strongly convex cost over the intersection of closed convex sets. It is shown that their algorithm is in fact a special case of a splitting algorithm analyzed by Gabay for finding a zero of the sum of two maximal monotone operators. Gabay’s convergence analysis for the splitting algorithm is sharpened, and new applications of this algorithm to variational inequalities, convex programming, and the solution of linear complementarity problems are proposed. For convex programs with a certain separable structure, a multiplier method that is closely related to the alternating direction method of multipliers of Gabay–Mercier and of Glowinski–Marrocco, but which uses both ordinary and augmented Lagrangians, is obtained.
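The splitting idea can be sketched on a one-variable toy problem (a hedged illustration, not the Han–Lou or Gabay method itself; the function and step size below are chosen for simplicity). Finding a zero of the sum of the monotone operators x ↦ x − 3 and ∂|·| amounts to minimizing (1/2)(x − 3)² + |x|, which forward-backward splitting solves by alternating a gradient step on the smooth part with a proximal step on the nonsmooth part:

```python
def prox_abs(v, t):
    # proximal map of t*|x|: soft-thresholding
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def forward_backward(x=0.0, step=0.5, iters=100):
    # forward (gradient) step on the smooth part (1/2)(x - 3)^2,
    # backward (proximal) step on the nonsmooth part |x|
    for _ in range(iters):
        grad = x - 3.0
        x = prox_abs(x - step * grad, step)
    return x
```

The iterates converge to x = 2, the unique point where 0 ∈ (x − 3) + ∂|x|; with this step size the map is a contraction with factor 1/2.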

523 citations


Journal ArticleDOI
TL;DR: In this article, a new class of normalized functions, regular and univalent in the unit disk, is introduced. These functions, called uniformly convex functions, are defined by a purely geometric property. A few theorems about this new class are obtained and a number of open problems are pointed out.
Abstract: We introduce a new class of normalized functions regular and univalent in the unit disk. These functions, called uniformly convex functions, are defined by a purely geometric property. We obtain a few theorems about this new class and we point out a number of open problems.

519 citations


Book
01 Jan 1991
TL;DR: This book develops the theory of submodular functions and base polyhedra, from matroids and polymatroids through submodular flows, the Lovász extension, separable convex optimization, and resource allocation problems.
Abstract: Introduction. 1. Mathematical Preliminaries. Submodular Systems and Base Polyhedra. 2. From Matroids to Submodular Systems. Matroids. Polymatroids. Submodular Systems. 3. Submodular Systems and Base Polyhedra. Fundamental Operations on Submodular Systems. Greedy Algorithm. Structures of Base Polyhedra. Intersecting- and Crossing-Submodular Functions. Related Polyhedra. Submodular Systems of Network Type. Neoflows. 4. The Intersection Problem. The Intersection Theorem. The Discrete Separation Theorem. The Common Base Problem. 5. Neoflows. The Equivalence of the Neoflow Problems. Feasibility for Submodular Flows. Optimality for Submodular Flows. Algorithms for Neoflows. Matroid Optimization. Submodular Analysis. 6. Submodular Functions and Convexity. Conjugate Functions and a Fenchel-Type Min-Max Theorem for Submodular and Supermodular Functions. Subgradients of Submodular Functions. The Lovasz Extensions of Submodular Functions. 7. Submodular Programs. Submodular Programs - Unconstrained Optimization. Submodular Programs - Constrained Optimization. Nonlinear Optimization with Submodular Constraints. 8. Separable Convex Optimization. Optimality Conditions. A Decomposition Algorithm. Discrete Optimization. 9. The Lexicographically Optimal Base Problem. Nonlinear Weight Functions. Linear Weight Functions. 10. The Weighted Max-Min and Min-Max Problems. Continuous Variables. Discrete Variables. 11. The Fair Resource Allocation Problem. Continuous Variables. Discrete Variables. 12. The Neoflow Problem with a Separable Convex Cost Function. References. Index.

505 citations


Journal ArticleDOI
TL;DR: Optimization and convexity. Complexity theory. Convex quadratic programming. Non-convex quadratic programming. Local optimization. Complexity in the black-box model.
Abstract: Optimization and convexity. Complexity theory. Convex quadratic programming. Non-convex quadratic programming. Local optimization. Complexity in the black-box model.

352 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a new procedure for continuous and discrete-time linear control systems design, which consists of the definition of a convex programming problem in the parameter space that, when solved, provides the feedback gain.
Abstract: This paper presents a new procedure for continuous and discrete-time linear control systems design. It consists of the definition of a convex programming problem in the parameter space that, when solved, provides the feedback gain. One of the most important features of the procedure is that additional design constraints are easily incorporated in the original formulation, yielding solutions to problems that have raised a great deal of interest within the last few years. This is precisely the case of the decentralized control problem and the quadratic stabilizability problem of uncertain systems with both dynamic and input uncertain matrices. In this last case, necessary and sufficient conditions for the existence of a linear stabilizing gain are provided and, to the authors’ knowledge, this is one of the first numerical procedures able to handle and solve this interesting design problem for high-order, continuous-time or discrete-time linear models. The theory is illustrated by examples.

348 citations


Journal ArticleDOI
TL;DR: In this article, the minimization of a convex integral functional over the positive cone of an $L_p $ space, subject to a finite number of linear equality constraints, is considered.
Abstract: This paper considers the minimization of a convex integral functional over the positive cone of an $L_p $ space, subject to a finite number of linear equality constraints. Such problems arise in spectral estimation, where the objective function is often entropy-like, and in constrained approximation. The Lagrangian dual problem is finite-dimensional and unconstrained. Under a quasi-interior constraint qualification, the primal and dual values are equal, with dual attainment. Examples show the primal value may not be attained. Conditions are given that ensure that the primal optimal solution can be calculated directly from a dual optimum. These conditions are satisfied in many examples.

230 citations


Journal ArticleDOI
TL;DR: In this article, the Eremin-Zangwill exact penalty functions have been used to develop the foundations of the theory of constrained optimization for finite dimensions in an elementary and straightforward way.
Abstract: In their seminal papers Eremin [Soviet Mathematics Doklady, 8 (1966), pp. 459–462] and Zangwill [Management Science, 13 (1967), pp. 344–358] introduce a notion of exact penalization for use in the development of algorithms for constrained optimization. Since that time, exact penalty functions have continued to play a key role in the theory of mathematical programming. In the present paper, this theory is unified by showing how the Eremin–Zangwill exact penalty functions can be used to develop the foundations of the theory of constrained optimization for finite dimensions in an elementary and straightforward way. Regularity conditions, multiplier rules, second-order optimality conditions, and convex programming are all given interpretations relative to the Eremin–Zangwill exact penalty functions. In conclusion, a historical review of those results associated with the existence of an exact penalty parameter is provided.

213 citations


ReportDOI
01 Apr 1991
TL;DR: Analytical techniques are developed that provide an accurate approximation of the absolute time at which each event in an ER system occurs and, using the techniques of convex programming, optimal transistor widths can be determined.
Abstract: Analytical techniques are developed to determine the performance of asynchronous digital circuits. These techniques can be used to guide the designer during the synthesis of such a circuit, leading to a high-performance, efficient implementation. Optimization techniques are also developed that further improve this implementation by determining the optimal sizes of the low-level devices (CMOS transistors) that compose the circuit. In order to determine the performance of an asynchronous circuit, it is first translated into an event-rule (ER) system, an abstract representation of the time dependencies (rules) between the primitive actions (events) of the circuit. This translation can be done from any of several different intermediate representations including: (i) a communicating sequential processes (CSP) program, (ii) a handshaking expansion, a refinement of the original CSP program in which all communication actions are replaced by explicit manipulations of boolean variables, (iii) a production rule set, a refinement of the handshaking expansion in which all sequencing is implemented by restricting concurrency, and (iv) a CMOS transistor network, a final representation from which the circuit can be fabricated. The analysis techniques are based on linear programming and provide an accurate approximation of the absolute time at which each event in an ER system occurs. Efficient algorithms for performing this approximation are developed and proven correct. Numerous examples are provided. This approximation can be represented as a formula expressing the performance of the circuit in terms of certain design variables, such as the widths of the transistors in the final CMOS network. This formula can be evaluated at particular width values and thus can be used to determine the performance of a particular realization of the circuit. Furthermore, using the techniques of convex programming, optimal transistor widths can be determined. 
The analysis techniques are applied to several large examples. Several implementations of first-in-first-out buffers are compared. A handshaking-expansion-level analysis of a simplified version of the Caltech Asynchronous Microprocessor is provided.
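The sizing step can be pictured on a one-variable toy model (the delay expression and constants below are illustrative, not taken from the report): delay(w) = a/w + b·w is convex for w > 0, so any one-dimensional convex search locates the optimal transistor width.

```python
def size_transistor(a=2.0, b=0.5, lo=1e-3, hi=100.0, iters=200):
    # toy convex delay model: a/w models driver resistance falling with
    # width, b*w models self-loading growing with width
    def delay(w):
        return a / w + b * w
    # ternary search: valid because delay() is convex (unimodal) on w > 0
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if delay(m1) < delay(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0
```

For this model the optimum has the closed form w* = √(a/b), which the search recovers; the report's actual formulation handles many widths jointly via convex programming.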

208 citations


01 Jan 1991
TL;DR: The problem of computing the volume of a convex body K in ℝn is discussed; worst-case results are reviewed, and randomised approximation algorithms show that with randomisation one can approximate the volume very nicely.
Abstract: We discuss the problem of computing the volume of a convex body K in ℝn. We review worst-case results which show that it is hard to deterministically approximate vol_n K, and randomised approximation algorithms which show that with randomisation one can approximate it very nicely. We then provide some applications of this latter result.
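A minimal sketch of the randomised idea, using naive rejection sampling (which works only in low dimension; the point of the randomised algorithms surveyed here is precisely that cleverer random walks are needed as the dimension grows, since the hit rate below decays exponentially in the dimension):

```python
import random

def ball_volume_estimate(dim=2, samples=200_000, seed=1):
    # estimate the volume of the unit ball as the fraction of uniform
    # points from the cube [-1,1]^dim that land inside it, scaled by
    # the cube's volume 2^dim
    rng = random.Random(seed)
    hits = sum(
        1
        for _ in range(samples)
        if sum(rng.uniform(-1.0, 1.0) ** 2 for _ in range(dim)) <= 1.0
    )
    return (2.0 ** dim) * hits / samples
```

For dim=2 the estimate approaches π, the area of the unit disk.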

150 citations


Journal ArticleDOI
TL;DR: This paper presents consistency results for sequences of optimal solutions to convex stochastic optimization problems constructed from empirical data, by applying the strong law of large numbers to these problems.
Abstract: This paper presents consistency results for sequences of optimal solutions to convex stochastic optimization problems constructed from empirical data, by applying the strong law of large numbers to these problems.

Journal ArticleDOI
TL;DR: Several equivalent definitions of the property of a sharp minimum on a set are given and the notion is used to prove finite termination of the proximal point algorithm.
Abstract: This paper concerns the notion of a sharp minimum on a set and its relationship to the proximal point algorithm. We give several equivalent definitions of the property and use the notion to prove finite termination of the proximal point algorithm.

Journal ArticleDOI
TL;DR: A primal interior point method for convex quadratic programming which is based upon a logarithmic barrier function approach, which generates a sequence of problems, each of which is approximately solved by taking a single Newton step.
Abstract: We present a primal interior point method for convex quadratic programming which is based upon a logarithmic barrier function approach. This approach generates a sequence of problems, each of which is approximately solved by taking a single Newton step. It is shown that the method requires O(√n L) iterations and O(n^3.5 L) arithmetic operations. By using modified Newton steps the number of arithmetic operations required by the algorithm can be reduced to O(n^3 L).

Journal ArticleDOI
TL;DR: In this paper, the properties of geodesic convex functions defined on a connected Riemannian C^2k-manifold are investigated in order to extend some results of convex optimization problems to nonlinear ones, whose feasible region is given by equalities and by inequalities.
Abstract: The properties of geodesic convex functions defined on a connected Riemannian C^2k-manifold are investigated in order to extend some results of convex optimization problems to nonlinear ones, whose feasible region is given by equalities and by inequalities and is a subset of a nonlinear space.

Book ChapterDOI
01 Mar 1991
TL;DR: In this paper, the algorithms known in the domain of P/T nets for computing minimal p-semiflows, the non-negative left annullers of a net's flow matrix, are shown to be essentially rediscoveries of the basic Fourier-Motzkin method.
Abstract: P-semiflows are non-negative left annullers of a net's flow matrix. The importance of these vectors lies in their usefulness for analyzing net properties. The concept of minimal p-semiflow is known in the context of Mathematical Programming under the name "extremal direction of a cone". This connection highlights a parallelism between properties found in the domains of P/T nets and Mathematical Programming. The algorithms known in the domain of P/T nets for computing minimal p-semiflows are essentially a rediscovery, with technical improvements for the type of problems involved, of the basic Fourier-Motzkin method. One of the fundamental problems of these algorithms is their complexity. Various methods and rules for mitigating this problem are examined. As a result, this paper presents two improved algorithms which are more efficient and robust when handling "real-life" nets.

Journal ArticleDOI
TL;DR: An iterative method for minimizing strictly convex quadratic functions over the intersection of a finite number of convex sets is presented and convergence proofs are given even for the inconsistent case, i.e. when the intersections of the sets is empty.
Abstract: We present an iterative method for minimizing strictly convex quadratic functions over the intersection of a finite number of convex sets. The method consists in computing projections onto the individual sets simultaneously and the new iterate is a convex combination of those projections. We give convergence proofs even for the inconsistent case, i.e. when the intersection of the sets is empty.
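The simultaneous-projection step can be sketched on a toy feasibility problem (this shows only the averaging of projections; the paper's full method minimizes a general strictly convex quadratic and also covers the inconsistent case). The two sets are the halfplanes x₁ ≤ 1 and x₂ ≤ 1, and the new iterate is the equal-weight convex combination of the two projections:

```python
def averaged_projections(x, iters=200):
    # project simultaneously onto {x : x[0] <= 1} and {x : x[1] <= 1},
    # then take the midpoint of the two projections as the new iterate
    for _ in range(iters):
        p0 = (min(x[0], 1.0), x[1])
        p1 = (x[0], min(x[1], 1.0))
        x = ((p0[0] + p1[0]) / 2.0, (p0[1] + p1[1]) / 2.0)
    return x
```

Starting from (3, 2), the iterates converge to (1, 1), a point in the intersection; each projection here is cheap and the two can be computed in parallel, which is the appeal of the simultaneous scheme.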

Journal ArticleDOI
TL;DR: This paper gives several characterizations of the solution set of convex programs, and the subgradients attaining the minimum principle are explicitly characterized, and this characterization is shown to be independent of any solution.

Journal ArticleDOI
TL;DR: This paper addresses itself to the algorithm for minimizing the sum of a convex function and a product of two linear functions over a polytope and shows that this nonconvex minimization problem can be solved by solving a sequence of convex programming problems.
Abstract: This paper addresses itself to the algorithm for minimizing the sum of a convex function and a product of two linear functions over a polytope. It is shown that this nonconvex minimization problem can be solved by solving a sequence of convex programming problems. The basic idea of this algorithm is to embed the original problem into a problem in higher dimension and apply a parametric programming (path following) approach. Also it is shown that the same idea can be applied to a generalized linear fractional programming problem whose objective function is the sum of a convex function and a linear fractional function.

Journal ArticleDOI
TL;DR: This algorithm combines a new prismatic branch and bound technique with polyhedral outer approximation in such a way that only linear programming problems have to be solved.
Abstract: We are dealing with a numerical method for solving the problem of minimizing a difference of two convex functions (a d.c. function) over a closed convex set in ℝ n . This algorithm combines a new prismatic branch and bound technique with polyhedral outer approximation in such a way that only linear programming problems have to be solved.

Journal ArticleDOI
TL;DR: In this paper, a convex multiplicative programming problem of the form min{f1(x)·f2(x) : x ∈ X} is considered.
Abstract: We consider a convex multiplicative programming problem of the form min{f1(x)·f2(x) : x ∈ X}, where X is a compact convex set of ℝn and f1, f2 are convex functions which have nonnegative values over X. Using two additional variables we transform this problem into a problem with a special structure in which the objective function depends only on two of the (n+2) variables. Following a decomposition concept in global optimization we then reduce this problem to a master problem of minimizing a quasi-concave function over a convex set in ℝ2. This master problem can be solved by an outer approximation method which requires performing a sequence of simplex tableau pivoting operations. The proposed algorithm is finite when the functions fi (i=1, 2) are affine-linear and X is a polytope, and it is convergent for the general convex case.


Journal ArticleDOI
TL;DR: An efficient algorithm for Waterman's problem, an on-line two-dimensional dynamic programming problem used for the prediction of RNA secondary structure, and an O(n + h log min{h, n²/h})-time algorithm for the sparse convex case, where h is the number of possible base pairs in the RNA structure.

Journal ArticleDOI
TL;DR: It is shown that a simple (zero-order) algorithm starting from an initial center of the feasible set generates a sequence of strictly feasible points whose objective function values converge to the optimal value, and that an upper bound for the gap between the objective function value and the optimal value is reduced by a factor of ε within O(√m |ln ε|) iterations.
Abstract: This work examines the method of analytic centers of Sonnevend when applied to solve generalized convex quadratic programs, where the constraints are also given by convex quadratic functions. We establish the existence of a two-sided ellipsoidal approximation for the set of feasible points around its center and show that a simple (zero-order) algorithm starting from an initial center of the feasible set generates a sequence of strictly feasible points whose objective function values converge to the optimal value. Concerning the speed of convergence, it is shown that an upper bound for the gap between the objective function value and the optimal value is reduced by a factor of ε within O(√m |ln ε|) iterations, where m is the number of inequality constraints. Here, each iteration involves the computation of one Newton step. The bound of O(√m |ln ε|) Newton iterations to guarantee an error reduction by a factor of ε in the objective function is as good as the one currently given for linear programs. However, the algorithm considered here is of theoretical interest only; full efficiency of the method can only be obtained by accelerating it with some (higher-order) extrapolation scheme, see e.g. the work of Jarre, Sonnevend and Stoer.

Journal ArticleDOI
TL;DR: New versions of proximal bundle methods for solving convex constrained nondifferentiable minimization problems are presented; global convergence of the methods is established, as well as finite termination for polyhedral problems.
Abstract: This paper presents new versions of proximal bundle methods for solving convex constrained nondifferentiable minimization problems. The methods employ ℓ1 or ℓ∞ exact penalty functions with new penalty updates that limit unnecessary penalty growth. In contrast to other methods, some of them are insensitive to problem function scaling. Global convergence of the methods is established, as well as finite termination for polyhedral problems. Some encouraging numerical experience is reported. The ideas presented may also be used in variable metric methods for smooth nonlinear programming.

Journal ArticleDOI
TL;DR: The concept of quasiconjugate was introduced in this paper for functions defined on ℝn whose values lie in the extended real line. This duality relationship allows us to establish a primal-dual pair in a class of nonconvex optimization problems without a duality gap.

Journal ArticleDOI
TL;DR: If sufficiently good approximations to the solutions of the nonlinear systems can be found, then the primal-dual gap becomes less than ε in O(√n |ln ε|) steps, where n is the number of variables.

Journal ArticleDOI
01 Jan 1991
TL;DR: A theory of discrete-time optimal filtering and smoothing based on convex sets of probability distributions is presented and the resulting estimator is an exact solution to the problem of running an infinity of Kalman filters and fixed-interval smoothers.
Abstract: A theory of discrete-time optimal filtering and smoothing based on convex sets of probability distributions is presented. Rather than propagating a single conditional distribution as does conventional Bayesian estimation, a convex set of conditional distributions is evolved. For linear Gaussian systems, the convex set can be generated by a set of Gaussian distributions with equal covariance with means in a convex region of state space. The conventional point-valued Kalman filter is generalized to a set-valued Kalman filter consisting of equations of evolution of a convex set of conditional means and a conditional covariance. The resulting estimator is an exact solution to the problem of running an infinity of Kalman filters and fixed-interval smoothers, each with different initial conditions. An application is presented to illustrate and interpret the estimator results.
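A hedged scalar sketch of the set-valued idea (the system, noise levels, and interval bounds below are illustrative): because the Kalman update is affine in the conditional mean and every member of the set shares the same covariance, a convex set of means, here an interval [lo, hi], can be propagated by running the ordinary filter on its two endpoints.

```python
def set_valued_kalman(zs, lo=-1.0, hi=1.0, var=1.0, q=0.1, r=0.5):
    # scalar random walk x_{k+1} = x_k + w, w ~ N(0, q),
    # with measurements z_k = x_k + v, v ~ N(0, r)
    for z in zs:
        var_pred = var + q                 # same covariance for all members
        k = var_pred / (var_pred + r)      # shared Kalman gain
        lo = lo + k * (z - lo)             # ordinary filter on each endpoint
        hi = hi + k * (z - hi)
        var = (1.0 - k) * var_pred
    return lo, hi, var
```

Since the mean update has slope 1 − k ∈ (0, 1) in the mean, the interval of conditional means contracts with each measurement while remaining an interval, mirroring the paper's convex set of conditional means with a single shared covariance.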

Proceedings ArticleDOI
11 Nov 1991
TL;DR: The authors use a convex programming algorithm to compute a set of upper bounds on the net wire lengths and a modified min-cut algorithm is used to generate a placement with the objective of minimizing the number of nets, the wire lengths of which exceed their corresponding upper bounds.
Abstract: The authors present a novel performance driven placement algorithm. They use a convex programming algorithm to compute a set of upper bounds on the net wire lengths. A modified min-cut algorithm is then used to generate a placement with the objective of minimizing the number of nets, the wire lengths of which exceed their corresponding upper bounds. The situation in which the modified min-cut algorithm fails to generate a placement that satisfies the timing requirements is addressed, and an iterative approach is used to modify the set of upper bounds making use of information from previous placements. The algorithm was implemented in C and tested on eight problems on a Sparc 2 workstation.

Journal ArticleDOI
TL;DR: A convergent cutting plane algorithm is presented to solve the equivalent nonlinear program, which takes advantage of the characteristics of the problem and compares favorably with a general nonlinear code and other approaches proposed for solving this problem.
Abstract: One approach for solving linear programs with random coefficients is chance constrained programming. For the case where the technical coefficients are normally distributed, we present a convergent cutting plane algorithm to solve the equivalent nonlinear program, which takes advantage of the characteristics of the problem. The algorithm requires a moderate computational effort and compares favorably with a general nonlinear code and other approaches proposed for solving this problem.
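The cutting-plane principle such an algorithm builds on can be shown on a tiny deterministic instance (the chance-constrained specifics, e.g. the normal-quantile terms in the cuts, are omitted; the problem below is purely illustrative): maximize x subject to the convex constraint x² − 1 ≤ 0, where each iteration adds the linearization of the constraint at the current point and re-solves the relaxed, here one-dimensional, linear program.

```python
def kelley(x=2.0, iters=30):
    # maximize x subject to x^2 - 1 <= 0, x in [-2, 2].
    # the cut at point xk is (xk^2 - 1) + 2*xk*(x - xk) <= 0,
    # which for xk > 0 bounds x above by (1 + xk^2) / (2*xk)
    ub = 2.0
    for _ in range(iters):
        ub = min(ub, (1.0 + x * x) / (2.0 * x))  # add cut, re-solve LP
        x = ub                                    # LP optimum under cuts
    return x
```

Here the cut recursion reduces to x ← (x + 1/x)/2, which converges quadratically to the feasible optimum x = 1; in the chance-constrained setting each cut comes instead from linearizing the nonlinear deterministic equivalent.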

Journal ArticleDOI
TL;DR: It is shown that the global minimum of this nonconvex problem can be obtained by solving a sequence of convex programming problems; the basic idea of the algorithm is to embed the original problem into a problem in a higher dimensional space and to apply a branch-and-bound algorithm using an underestimating function.
Abstract: This paper addresses itself to the algorithm for minimizing the product of two nonnegative convex functions over a convex set. It is shown that the global minimum of this nonconvex problem can be obtained by solving a sequence of convex programming problems. The basic idea of this algorithm is to embed the original problem into a problem in a higher dimensional space and to apply a branch-and-bound algorithm using an underestimating function. Computational results indicate that our algorithm is efficient when the objective function is the product of a linear and a quadratic function and the constraints are linear. An extension of our algorithm for minimizing the sum of a convex function and a product of two convex functions is also discussed.