Journal ArticleDOI

Penalty and Smoothing Methods for Convex Semi-Infinite Programming

TL;DR: This paper introduces a unified framework, combining Remez-type algorithms and integral methods with penalty and smoothing methods, that not only subsumes well-known classical algorithms but also provides some new methods with interesting properties.
Abstract: In this paper we consider min-max convex semi-infinite programming. To solve these problems we introduce a unified framework concerning Remez-type algorithms and integral methods coupled with penalty and smoothing methods. This framework subsumes well-known classical algorithms, but also provides some new methods with interesting properties. Convergence of the primal and dual sequences is proved under minimal assumptions.
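The coupling the abstract describes can be illustrated with a minimal sketch (not the paper's actual algorithm): the semi-infinite constraint sup over t of g(x, t) ≤ 0 is replaced by a log-sum-exp smoothing over a discretized index set, and the smoothed sup enters a quadratic exterior penalty. The toy instance (minimize x subject to sin(t) − x ≤ 0 on [0, 2π], with optimum x* = 1) and all function names here are mine, chosen only for illustration.

```python
import numpy as np

# Toy CSIP instance (hypothetical, for illustration only):
#   minimize f(x) = x   subject to   g(x, t) = sin(t) - x <= 0  for all t in [0, 2*pi]
# The optimal value is x* = 1, attained where sup_t g(x, t) = 1 - x hits zero.

ts = np.linspace(0.0, 2 * np.pi, 2001)   # discretized index set T
sin_ts = np.sin(ts)                      # precomputed constraint data

def smoothed_sup(x, mu=1e-3):
    """Log-sum-exp smoothing of sup_t g(x, t); using the mean (rather than the
    sum) keeps the smoothed value a lower bound that tends to the sup as mu -> 0."""
    g = sin_ts - x
    m = g.max()                          # shift for numerical stability
    return m + mu * np.log(np.mean(np.exp((g - m) / mu)))

def penalty(x, c=1e3):
    """Exterior quadratic penalty on the smoothed constraint value."""
    return x + 0.5 * c * max(0.0, smoothed_sup(x)) ** 2

# Crude 1-D minimization of the penalized objective over a grid.
xs = np.linspace(-2.0, 3.0, 5001)
x_star = xs[np.argmin([penalty(x) for x in xs])]
```

With the penalty weight `c` and smoothing parameter `mu` driven to their limits, the minimizer of the penalized problem approaches the constrained optimum; with the fixed values above it already lands close to 1.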


Citations
Journal ArticleDOI
TL;DR: The convergence of primal and dual central paths associated with entropy and exponential functions, respectively, for semidefinite programming problems is studied, and the convergence of a particular weighted dual proximal sequence to a point in the dual optimal set is obtained.
Abstract: The convergence of primal and dual central paths associated with entropy and exponential functions, respectively, for semidefinite programming problems is studied in this paper. It is proved that the primal path converges to the analytic center of the primal optimal set with respect to the entropy function, the dual path converges to a point in the dual optimal set, and the primal-dual path associated with these paths converges to a point in the primal-dual optimal set. As an application, the generalized proximal point method with the Kullback-Leibler distance applied to semidefinite programming problems is considered. The convergence of the primal proximal sequence to the analytic center of the primal optimal set with respect to the entropy function is established, and the convergence of a particular weighted dual proximal sequence to a point in the dual optimal set is obtained.

2 citations

Journal ArticleDOI
TL;DR: This work proposes an explicit exchange method, proves that the algorithm terminates in a finite number of iterations, and shows that the obtained output is an approximate optimum of SOCCSIP.
Abstract: We focus on the convex semi-infinite program with second-order cone constraints (for short, SOCCSIP), which has wide applications such as filter design, robust optimization, and so on. For solving the SOCCSIP, we propose an explicit exchange method, and prove that the algorithm terminates in a finite number of iterations. In the convergence analysis, we do not need to use the special structure of the second-order cone (SOC) when the objective or constraint function is strictly convex. However, if both of them are non-strictly convex and the constraint function is affine or quadratic, then we have to utilize the SOC complementarity conditions and the spectral factorization techniques associated with Euclidean Jordan algebra. We also show that the obtained output is an approximate optimum of SOCCSIP. We report some numerical results involving the application to robust optimization in the classical convex semi-infinite program.
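The generic exchange idea behind this class of methods can be sketched in a few lines (a simplified scalar illustration, not the SOCCSIP algorithm of the paper): solve a finite subproblem over the current index set, find the most violated constraint index, add it, and repeat until no violation exceeds a tolerance. The instance below (minimize x subject to sin(t) − x ≤ 0) is hypothetical and chosen so the finite subproblem has a closed form.

```python
import numpy as np

# Hypothetical 1-D instance: minimize x subject to sin(t) - x <= 0 for all
# t in [0, 2*pi]. The finite subproblem over an active index set T_k has the
# closed-form solution x_k = max_{t in T_k} sin(t).
grid = np.linspace(0.0, 2 * np.pi, 10001)   # fine discretization used as the oracle
active = [0.0]                              # initial finite index set T_0
eps = 1e-8
for _ in range(50):
    x = max(np.sin(t) for t in active)      # solve the finite subproblem exactly
    viol = np.sin(grid) - x                 # constraint values at the current x
    j = int(np.argmax(viol))                # most violated index on the grid
    if viol[j] <= eps:                      # no violation beyond eps: terminate
        break
    active.append(float(grid[j]))           # exchange step: add the violated index
```

On this instance the loop terminates after adding a single index near t = π/2, returning x close to the true optimum 1; finite termination under convexity assumptions is exactly what the cited paper establishes for its (far more general) setting.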

2 citations

Book ChapterDOI
01 Jan 2014
TL;DR: Ordinary (or finite) linear optimization, linear infinite optimization, and linear semi-infinite optimization deal with linear optimization problems where the dimension of the decision space and the number of constraints are both finite, both infinite, and exactly one of them finite, respectively.
Abstract: Ordinary (or finite) linear optimization, linear infinite optimization, and linear semi-infinite optimization (LO, LIO, and LSIO in short) deal with linear optimization problems, where the dimension of the decision space and the number of constraints are both finite, both infinite, and exactly one of them finite, respectively.

2 citations

References
01 Feb 1977

5,933 citations


"Penalty and Smoothing Methods for C..." refers background in this paper

  • ...We recall here some basic notions about asymptotic cones and functions (for more details see, for instance, the books of Auslender and Teboulle [4] and of Rockafellar [24])....

Journal ArticleDOI
TL;DR: A new approach for constructing efficient schemes for non-smooth convex optimization is proposed, based on a special smoothing technique, which can be applied to functions with explicit max-structure, and can be considered as an alternative to black-box minimization.
Abstract: In this paper we propose a new approach for constructing efficient schemes for non-smooth convex optimization. It is based on a special smoothing technique, which can be applied to functions with explicit max-structure. Our approach can be considered as an alternative to black-box minimization. From the viewpoint of efficiency estimates, we manage to improve the traditional bounds on the number of iterations of the gradient schemes from $O(1/\epsilon^2)$ to $O(1/\epsilon)$, keeping basically the complexity of each iteration unchanged.
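For functions with explicit max-structure, the smoothing idea can be sketched with the classical log-sum-exp surrogate (one standard choice of smoothing, used here only as an illustration): f_mu(v) = mu·log Σ_i exp(v_i/mu) is smooth and convex, and over-approximates max_i v_i by at most mu·log n, so driving mu to zero trades smoothness for accuracy.

```python
import numpy as np

def smooth_max(v, mu):
    """Log-sum-exp smoothing of max(v): smooth and convex, satisfying
    max(v) <= smooth_max(v, mu) <= max(v) + mu * log(len(v))."""
    m = v.max()                                      # shift for numerical stability
    return m + mu * np.log(np.sum(np.exp((v - m) / mu)))

v = np.array([1.0, 2.0, 3.0])
for mu in (1.0, 0.1, 0.01):
    s = smooth_max(v, mu)
    # the uniform approximation error shrinks linearly in mu
    assert v.max() <= s <= v.max() + mu * np.log(v.size)
```

The explicit, dimension-dependent error bound mu·log n is what makes this kind of smoothing compatible with the improved iteration bounds the abstract describes: one picks mu proportional to the target accuracy and runs a fast gradient scheme on the smoothed function.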

2,948 citations

Book
11 May 2000
Abstract: Basic notation.- Introduction.- Background material.- Optimality conditions.- Basic perturbation theory.- Second order analysis of the optimal value and optimal solutions.- Optimal Control.- References.

2,067 citations

Journal ArticleDOI

1,560 citations


"Penalty and Smoothing Methods for C..." refers methods in this paper

  • ...Applied to LSIP, the methods of Cheney and Goldstein [10] and Kelley [15] in particular turn out to be identical to, or mere modifications of, the dual simplex method discussed above, so that they have similar properties and drawbacks....

  • ...Supposing that F is 1 (as is generally the case in ordinary CSIP), we can use cutting-plane methods of Cheney and Goldstein [10], Kelley [15], Veinott [31], or Elzinga and Moore [11], and their variants (see, e.g., Reemtsen and Görner [22] for more references)....

  • ...To avoid slow convergence, constraint dropping rules are again given, under conditions such as strict convexity of F, for Cheney and Goldstein [10] and Kelley [15]....

Journal ArticleDOI
TL;DR: A class of parametric smooth functions is proposed that approximates the fundamental plus function, (x)+=max{0, x}, by twice integrating a probability density function, which leads to classes of smooth parametric nonlinear equation approximations of nonlinear and mixed complementarity problems (NCPs and MCPs).
Abstract: We propose a class of parametric smooth functions that approximate the fundamental plus function, (x)+=max{0, x}, by twice integrating a probability density function. This leads to classes of smooth parametric nonlinear equation approximations of nonlinear and mixed complementarity problems (NCPs and MCPs). For any solvable NCP or MCP, existence of an arbitrarily accurate solution to the smooth nonlinear equations, as well as to the NCP or MCP, is established for sufficiently large values of a smoothing parameter α. Newton-based algorithms are proposed for the smooth problem. For strongly monotone NCPs, global convergence and local quadratic convergence are established. For solvable monotone NCPs, each accumulation point of the proposed algorithms solves the smooth problem. Exact solutions of our smooth nonlinear equation for various values of the parameter α generate an interior path, which is different from the central path for interior point methods. Computational results for 52 test problems compare favorably with those for another Newton-based method. The smoothing technique is capable of solving efficiently the test problems solved by Dirkse and Ferris [6], Harker and Xiao [11] and Pang and Gabriel [28].

465 citations


"Penalty and Smoothing Methods for C..." refers background in this paper

  • ...In [9], Chen and Mangasarian provided a systematic way to generate elements of 1....