
Showing papers by "José Mario Martínez published in 2010"


Journal ArticleDOI
TL;DR: A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems, and global convergence to an $${\varepsilon}$$-global minimizer of the original problem is proved.
Abstract: A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems. At each outer iteration k the method requires the $${\varepsilon_{k}}$$-global minimization of the Augmented Lagrangian with simple constraints, where $${\varepsilon_k \to \varepsilon}$$. Global convergence to an $${\varepsilon}$$-global minimizer of the original problem is proved. The subproblems are solved using the αBB method. Numerical experiments are presented.

128 citations
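As a rough illustration of the framework described in the abstract above, here is a minimal Python sketch of a PHR-style Augmented Lagrangian outer loop for equality-constrained problems with box constraints. The paper requires $${\varepsilon_k}$$-global minimization of each subproblem via αBB; SciPy's local L-BFGS-B solver merely stands in below, and the function names, penalty update rule, and tolerances are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of a PHR Augmented Lagrangian outer loop for
#   min f(x)  s.t.  h(x) = 0,  subject to box constraints on x.
# The paper minimizes each subproblem to eps_k-global optimality with alphaBB;
# SciPy's local L-BFGS-B stands in here, so this is only an illustration.

def augmented_lagrangian(f, h, x0, bounds, rho=10.0, gamma=10.0,
                         tol=1e-8, max_outer=50):
    x = np.asarray(x0, dtype=float)
    lam = np.zeros_like(h(x))                    # multiplier estimates
    prev_infeas = np.inf
    for _ in range(max_outer):
        def L(z):                                # PHR Augmented Lagrangian
            hz = h(z)
            return f(z) + lam @ hz + 0.5 * rho * hz @ hz
        x = minimize(L, x, bounds=bounds, method="L-BFGS-B").x
        hx = h(x)
        infeas = np.linalg.norm(hx, np.inf)
        if infeas <= tol:
            break
        lam = lam + rho * hx                     # first-order multiplier update
        if infeas > 0.5 * prev_infeas:           # insufficient progress:
            rho *= gamma                         # increase the penalty
        prev_infeas = infeas
    return x, lam

# Toy usage: minimize x0^2 + x1^2 subject to x0 + x1 = 1 on the box [-2, 2]^2.
x, lam = augmented_lagrangian(
    f=lambda x: x[0]**2 + x[1]**2,
    h=lambda x: np.array([x[0] + x[1] - 1.0]),
    x0=[0.0, 0.0], bounds=[(-2, 2), (-2, 2)])
print(x)  # approx [0.5, 0.5]
```

The multiplier update lam + rho * h(x) is the standard first-order PHR update; here the penalty parameter grows only when the infeasibility fails to shrink sufficiently, which is one common safeguard rule.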


Journal ArticleDOI
TL;DR: A new sequential optimality condition is introduced, and it is proved that a well-established augmented Lagrangian algorithm produces sequences whose limits satisfy it; practical consequences are discussed.
Abstract: Necessary first-order sequential optimality conditions provide adequate theoretical tools to justify stopping criteria for nonlinear programming solvers. Sequential optimality conditions are satisfied by local minimizers of optimization problems independently of the fulfillment of constraint qualifications. A new condition of this type is introduced in the present paper. It is proved that a well-established augmented Lagrangian algorithm produces sequences whose limits satisfy the new condition. Practical consequences are discussed.

89 citations
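To make the role of a sequential optimality condition as a stopping criterion concrete, the sketch below implements a generic approximate-KKT test: accept an iterate when the gradient of the Lagrangian and the infeasibility are both within tolerance. The condition introduced in the paper is a different, stronger one; this test, its tolerance, and the helper name are assumptions made for illustration.

```python
import numpy as np

# Sketch of a generic approximate-KKT stopping test of the kind that
# sequential optimality conditions justify: stop when the gradient of the
# Lagrangian and the infeasibility are both within tolerance. The condition
# introduced in the paper is stronger; this test is only illustrative.

def approx_kkt_stop(grad_f, jac_h, h, x, lam, eps=1e-6):
    kkt_residual = grad_f(x) + jac_h(x).T @ lam  # gradient of the Lagrangian
    return (np.linalg.norm(kkt_residual, np.inf) <= eps and
            np.linalg.norm(h(x), np.inf) <= eps)

# Toy usage: min x0^2 + x1^2 s.t. x0 + x1 = 1. The solution is (0.5, 0.5)
# with multiplier lam = -1, so the test accepts this pair.
ok = approx_kkt_stop(
    grad_f=lambda x: 2 * x,
    jac_h=lambda x: np.array([[1.0, 1.0]]),
    h=lambda x: np.array([x[0] + x[1] - 1.0]),
    x=np.array([0.5, 0.5]), lam=np.array([-1.0]))
print(ok)  # True
```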


Journal ArticleDOI
TL;DR: A Nonlinear Programming algorithm that converges to second-order stationary points is introduced and used to define an Augmented Lagrangian algorithm of PHR (Powell-Hestenes-Rockafellar) type.
Abstract: A Nonlinear Programming algorithm that converges to second-order stationary points is introduced in this paper. The main tool is a second-order negative-curvature method for box-constrained minimization of a certain class of functions that do not possess continuous second derivatives. This method is used to define an Augmented Lagrangian algorithm of PHR (Powell-Hestenes-Rockafellar) type. Convergence proofs under weak constraint qualifications are given. Numerical examples showing that the new method converges to second-order stationary points in situations in which first-order methods fail are exhibited.

50 citations
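The negative-curvature mechanism behind such second-order methods can be sketched briefly: if the Hessian has a sufficiently negative eigenvalue, the corresponding eigenvector gives a direction along which a first-order stationary point can be escaped. The paper's method additionally handles box constraints and functions without continuous second derivatives, none of which is modeled below; the function name and tolerance are illustrative.

```python
import numpy as np

# Sketch of the negative-curvature test: if the Hessian at x has an
# eigenvalue below -tol, the corresponding eigenvector is a direction along
# which the objective curves downward, allowing escape from a first-order
# stationary point. Box constraints and discontinuous second derivatives,
# which the paper handles, are not modeled here.

def negative_curvature_direction(hess, tol=1e-8):
    eigvals, eigvecs = np.linalg.eigh(hess)      # eigenvalues in ascending order
    if eigvals[0] < -tol:
        return eigvecs[:, 0]                     # unit negative-curvature direction
    return None                                  # second-order stationarity holds

# Toy usage: f(x) = x0^2 - x1^2 has a saddle at the origin, where the gradient
# vanishes and first-order methods stall; the Hessian diag(2, -2) reveals
# the escape direction along the x1 axis.
d = negative_curvature_direction(np.diag([2.0, -2.0]))
print(d)  # [0, 1] or its negative
```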


Journal ArticleDOI
TL;DR: A method for linearly constrained optimization which modifies and generalizes recent box-constraint optimization algorithms is introduced, based on a relaxed form of Spectral Projected Gradient iterations.
Abstract: A method for linearly constrained optimization which modifies and generalizes recent box-constraint optimization algorithms is introduced. The new algorithm is based on a relaxed form of Spectral Projected Gradient iterations. Intercalated with these projected steps, internal iterations restricted to faces of the polytope are performed, which enhance the efficiency of the algorithm. Convergence proofs are given, and numerical experiments are presented and discussed. Software supporting this paper is available through the Tango Project web page: http://www.ime.usp.br/~egbirgin/tango/.

15 citations
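For orientation, here is a minimal sketch of a plain (monotone) Spectral Projected Gradient iteration with the Barzilai-Borwein spectral steplength, over a convex set given by a projection operator. The paper's algorithm uses a relaxed form of these iterations intercalated with internal iterations on the faces of the polytope, which this sketch does not model; all names and constants are assumptions.

```python
import numpy as np

# Sketch of a basic SPG iteration for min f(x) over a convex set represented
# by a projection operator `proj`: take the projected-gradient direction with
# the spectral (Barzilai-Borwein) steplength, then backtrack with an Armijo
# condition. The paper's relaxed, face-exploiting variant is not modeled.

def spg(f, grad, proj, x0, max_iter=200, tol=1e-8,
        lam_min=1e-10, lam_max=1e10):
    x = proj(np.asarray(x0, dtype=float))
    g = grad(x)
    lam = 1.0                                    # initial spectral steplength
    for _ in range(max_iter):
        d = proj(x - lam * g) - x                # projected-gradient direction
        if np.linalg.norm(d, np.inf) <= tol:
            break                                # approximately stationary
        alpha, fx, gd = 1.0, f(x), g @ d
        while f(x + alpha * d) > fx + 1e-4 * alpha * gd:
            alpha *= 0.5                         # Armijo backtracking
        s = alpha * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        lam = np.clip(s @ s / sy, lam_min, lam_max) if sy > 0 else lam_max
        x, g = x_new, g_new
    return x

# Toy usage: minimize ||x - c||^2 over the box [0, 1]^2 with c outside it;
# the solution is the projection of c onto the box.
c = np.array([2.0, -1.0])
x = spg(f=lambda x: np.sum((x - c)**2),
        grad=lambda x: 2 * (x - c),
        proj=lambda x: np.clip(x, 0.0, 1.0),
        x0=[0.5, 0.5])
print(x)  # approx [1, 0]
```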


Journal ArticleDOI
TL;DR: An Augmented Lagrangian method is defined with the addition of a regularization term that prevents the iterates from straying far from a reference point.
Abstract: When one solves Nonlinear Programming problems by means of algorithms that use merit criteria combining the objective function and penalty feasibility terms, a phenomenon called greediness may occur: unconstrained minimizers attract the iterates at early stages of the calculations, so the penalty parameter is forced to grow excessively, and the resulting ill-conditioning harms overall convergence. In this paper a regularization approach is suggested to overcome this difficulty. An Augmented Lagrangian method is defined with the addition of a regularization term that inhibits the iterates from moving far from a reference point. Convergence proofs and numerical examples are given.

10 citations
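The regularization idea can be sketched as adding a proximal term 0.5 * mu * ||x - x_ref||^2 to the Augmented Lagrangian, so that early subproblem minimizers cannot run far from the reference point. The example and all parameter names below are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the regularization idea: add a proximal term
# 0.5 * mu * ||x - x_ref||^2 to the Augmented Lagrangian so that early
# subproblem minimizers cannot run far from the reference point x_ref
# (e.g. the previous outer iterate). Parameter names and values are
# illustrative, not the paper's.

def regularized_al_step(f, h, x_ref, lam, rho, mu, bounds):
    def L_reg(x):
        hx = h(x)
        return (f(x) + lam @ hx + 0.5 * rho * hx @ hx
                + 0.5 * mu * np.sum((x - x_ref)**2))  # proximal regularizer
    return minimize(L_reg, x_ref, bounds=bounds, method="L-BFGS-B").x

# Toy usage: min -x0 s.t. x0^2 + x1 - 1 = 0 on the box [-5, 5]^2. With mu = 0
# this subproblem is "greedy": its minimizer runs to the box boundary because
# the objective dominates the penalty. With mu = 1 the iterate stays moderate.
x = regularized_al_step(
    f=lambda x: -x[0],
    h=lambda x: np.array([x[0]**2 + x[1] - 1.0]),
    x_ref=np.zeros(2), lam=np.zeros(1), rho=1.0, mu=1.0,
    bounds=[(-5, 5), (-5, 5)])
print(x)  # approx [1.0, 0.0]
```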