Author

Jean Charles Gilbert

Bio: Jean Charles Gilbert is an academic researcher at the French Institute for Research in Computer Science and Automation (INRIA). He has contributed to research topics including the augmented Lagrangian method and the linear complementarity problem. He has an h-index of 12 and has co-authored 32 publications receiving 3,803 citations. Previous affiliations of Jean Charles Gilbert include the International Institute for Applied Systems Analysis.

Papers
Journal ArticleDOI
TL;DR: An algorithm for minimizing a nonlinear function subject to nonlinear inequality constraints is presented; it applies sequential quadratic programming techniques to a sequence of barrier problems, and the paper focuses on the primal version of the new algorithm.
Abstract: An algorithm for minimizing a nonlinear function subject to nonlinear inequality constraints is described. It applies sequential quadratic programming techniques to a sequence of barrier problems, and uses trust regions to ensure the robustness of the iteration and to allow the direct use of second-order derivatives. This framework permits primal and primal-dual steps, but the paper focuses on the primal version of the new algorithm. An analysis of the convergence properties of this method is presented.

1,514 citations
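
A minimal sketch of the barrier idea this abstract describes: replace the inequality constraints with a logarithmic barrier and solve a sequence of subproblems as the barrier parameter shrinks. The problem data, the mu schedule, and the inner solver below are illustrative assumptions; the paper's actual method applies trust-region SQP steps to each barrier problem rather than a generic unconstrained solver.

```python
import numpy as np
from scipy.optimize import minimize

# Log-barrier illustration: minimize f(x) subject to c(x) >= 0 by
# solving a sequence of barrier problems
#     min  f(x) - mu * sum(log(c_i(x)))
# with mu driven toward zero.

def f(x):                        # hypothetical objective
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def c(x):                        # hypothetical constraint, c(x) >= 0
    return np.array([1.0 - x[0] ** 2 - x[1] ** 2])

def barrier(x, mu):
    cx = c(x)
    if np.any(cx <= 0.0):        # barrier is +inf outside the strict interior
        return np.inf
    return f(x) - mu * np.sum(np.log(cx))

x = np.array([0.0, 0.0])         # strictly feasible starting point
for mu in (1.0, 1e-1, 1e-2, 1e-3):
    x = minimize(lambda z: barrier(z, mu), x, method="Nelder-Mead").x

print(x)  # approaches the constrained minimizer as mu shrinks
```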

Journal ArticleDOI
TL;DR: This paper explores the convergence of nonlinear conjugate gradient methods without restarts, and with practical line searches, covering two classes of methods that are globally convergent on smooth, nonconvex functions.
Abstract: This paper explores the convergence of nonlinear conjugate gradient methods without restarts, and with practical line searches. The analysis covers two classes of methods that are globally convergent on smooth, nonconvex functions. Some properties of the Fletcher–Reeves method play an important role in the first family, whereas the second family shares an important property with the Polak–Ribière method. Numerical experiments are presented.

947 citations
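
For concreteness, a sketch of nonlinear conjugate gradients with the two beta formulas this abstract contrasts, Fletcher–Reeves (FR) and Polak–Ribière (PR); the PR+ truncation max(beta, 0) is one of the globally convergent modifications studied in this line of work. The tolerances, line-search fallback, and Rosenbrock test problem are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def nonlinear_cg(f, grad, x0, variant="PR+", tol=1e-6, max_iter=500):
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d)[0]   # Wolfe line search
        if alpha is None:                       # crude fallback if it fails
            alpha = 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        if variant == "FR":
            beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves
        else:                                   # Polak-Ribiere
            beta = g_new @ (g_new - g) / (g @ g)
            if variant == "PR+":
                beta = max(beta, 0.0)           # truncation studied for global convergence
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Illustrative run on the Rosenbrock function
print(nonlinear_cg(rosen, rosen_der, np.array([-1.2, 1.0])))
```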

Journal ArticleDOI
TL;DR: The authors find it appropriate to use a diagonal matrix, generated by an update of the identity matrix, so as to fit the Rayleigh ellipsoid of the local Hessian in the direction of the change in the gradient.
Abstract: This paper describes some numerical experiments with variable-storage quasi-Newton methods for the optimization of some large-scale models (coming from fluid mechanics and molecular biology). In addition to assessing these kinds of methods in real-life situations, we compare an algorithm of A. Buckley with a proposal by J. Nocedal. The latter seems generally superior, provided that careful attention is given to some nontrivial implementation aspects, which concern the general question of properly initializing a quasi-Newton matrix. In this context, we find it appropriate to use a diagonal matrix, generated by an update of the identity matrix, so as to fit the Rayleigh ellipsoid of the local Hessian in the direction of the change in the gradient. Also, a variational derivation of some rank-one and rank-two updates in Hilbert spaces is given.

719 citations
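
The initialization this abstract alludes to is, in its simplest scalar form, the familiar gamma = (s^T y) / (y^T y) scaling of the identity, chosen so the initial matrix matches the inverse Hessian's Rayleigh quotient along the change in gradient; a minimal sketch with hypothetical vectors:

```python
import numpy as np

# H0 = gamma * I with gamma = (s^T y) / (y^T y), where s is the step
# and y the change in gradient. The vectors below are hypothetical.

def initial_scaling(s, y):
    return float(s @ y) / float(y @ y)

s = np.array([0.10, -0.05])              # hypothetical step x_{k+1} - x_k
y = np.array([0.80, -0.30])              # hypothetical gradient change
H0 = initial_scaling(s, y) * np.eye(2)   # diagonal initial quasi-Newton matrix
print(H0)
```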

Book
31 Aug 2008

120 citations


Cited by
Book
01 Nov 2008
TL;DR: Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization, responding to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems.
Abstract: Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization. It responds to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems. For this new edition the book has been thoroughly updated throughout. There are new chapters on nonlinear interior methods and derivative-free methods for optimization, both of which are used widely in practice and the focus of much current research. Because of the emphasis on practical methods, as well as the extensive illustrations and exercises, the book is accessible to a wide audience. It can be used as a graduate text in engineering, operations research, mathematics, computer science, and business. It also serves as a handbook for researchers and practitioners in the field. The authors have strived to produce a text that is pleasant to read, informative, and rigorous: one that reveals both the beautiful nature of the discipline and its practical side.

17,420 citations

Journal ArticleDOI
TL;DR: A comprehensive description of the primal-dual interior-point algorithm with a filter line-search method for nonlinear programming is provided, including the feasibility restoration phase for the filter method, second-order corrections, and inertia correction of the KKT matrix.
Abstract: We present a primal-dual interior-point algorithm with a filter line-search method for nonlinear programming. Local and global convergence properties of this method were analyzed in previous work. Here we provide a comprehensive description of the algorithm, including the feasibility restoration phase for the filter method, second-order corrections, and inertia correction of the KKT matrix. Heuristics are also considered that allow faster performance. This method has been implemented in the IPOPT code, which we demonstrate in a detailed numerical study based on 954 problems from the CUTEr test set. An evaluation is made of several line-search options, and a comparison is provided with two state-of-the-art interior-point codes for nonlinear programming.

7,966 citations
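
The IPOPT code this abstract describes is open source and can be exercised from Python through the cyipopt bindings, which provide a SciPy-style wrapper; a minimal sketch, assuming cyipopt is installed (the objective, constraint, and starting point are made up for illustration):

```python
import numpy as np
from cyipopt import minimize_ipopt   # assumes the cyipopt bindings are installed

# Hypothetical smooth problem: minimize a quadratic subject to one
# nonlinear inequality constraint g(x) >= 0, handed to IPOPT's
# primal-dual interior-point method with filter line search.

def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

def gradient(x):
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.5)])

constraints = [{
    "type": "ineq",                                   # g(x) >= 0
    "fun": lambda x: 4.0 - x[0] ** 2 - x[1] ** 2,
    "jac": lambda x: np.array([-2.0 * x[0], -2.0 * x[1]]),
}]

res = minimize_ipopt(objective, x0=np.zeros(2), jac=gradient,
                     constraints=constraints)
print(res.x)
```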

Journal ArticleDOI
TL;DR: Numerical tests indicate that the L-BFGS method is faster than the method of Buckley and LeNir and is better able to use additional storage to accelerate convergence; its convergence properties are studied, and global convergence is proved on uniformly convex problems.
Abstract: We study the numerical performance of a limited-memory quasi-Newton method for large-scale optimization, which we call the L-BFGS method. We compare its performance with that of the method developed by Buckley and LeNir (1985), which combines cycles of BFGS steps and conjugate direction steps. Our numerical tests indicate that the L-BFGS method is faster than the method of Buckley and LeNir, and is better able to use additional storage to accelerate convergence. We show that the L-BFGS method can be greatly accelerated by means of a simple scaling. We then compare the L-BFGS method with the partitioned quasi-Newton method of Griewank and Toint (1982a). The results show that, for some problems, the partitioned quasi-Newton method is clearly superior to the L-BFGS method. However, we find that for other problems the L-BFGS method is very competitive due to its low iteration cost. We also study the convergence properties of the L-BFGS method, and prove global convergence on uniformly convex problems.

7,004 citations
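
At the core of the L-BFGS method is the two-loop recursion, which applies the inverse-Hessian approximation built from the most recent correction pairs without ever forming a matrix; a sketch in the standard textbook form, including the simple gamma = (s^T y) / (y^T y) scaling the abstract credits with a large speedup (variable names are conventional, not the paper's):

```python
import numpy as np

# Two-loop recursion: compute the L-BFGS search direction -H_k g
# from stored pairs s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i,
# without forming H_k explicitly.

def lbfgs_direction(g, s_list, y_list):
    q = g.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * (s @ q)
        alphas.append(a)
        q = q - a * y
    s, y = s_list[-1], y_list[-1]        # "simple scaling": H0 = gamma * I
    gamma = (s @ y) / (y @ y)
    r = gamma * q
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * (y @ r)
        r = r + (a - b) * s
    return -r                            # search direction

# Tiny usage with one stored (hypothetical) correction pair:
g = np.array([1.0, -2.0])
print(lbfgs_direction(g, [np.array([0.1, 0.0])], [np.array([0.2, 0.1])]))
```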

Journal ArticleDOI
TL;DR: An algorithm for solving large nonlinear optimization problems with simple bounds is described; it is based on the gradient projection method and uses a limited-memory BFGS matrix to approximate the Hessian of the objective function.
Abstract: An algorithm for solving large nonlinear optimization problems with simple bounds is described. It is based on the gradient projection method and uses a limited-memory BFGS matrix to approximate the Hessian of the objective function. It is shown how to take advantage of the form of the limited-memory approximation to implement the algorithm efficiently. The results of numerical tests on a set of large problems are reported.

5,079 citations
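
This gradient-projection plus limited-memory BFGS algorithm is the basis of the well-known L-BFGS-B code, which SciPy exposes directly; a minimal bound-constrained run (the test problem and bounds are illustrative, not from the paper's experiments):

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# Bound-constrained Rosenbrock problem solved with L-BFGS-B.
# maxcor sets the number of stored limited-memory correction pairs.
res = minimize(rosen, np.array([0.5, 0.5]), jac=rosen_der,
               method="L-BFGS-B",
               bounds=[(0.0, 2.0), (0.0, 2.0)],
               options={"maxcor": 10})
print(res.x, res.fun)
```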