On mutual impact of numerical linear algebra and large-scale optimization with focus on interior point methods
Citations
Interior point methods 25 years later
Matrix-free interior point method
Convergence Analysis of an Inexact Feasible Interior Point Method for Convex Quadratic Programming
A comparison of reduced and unreduced KKT systems arising from interior point methods
References
Numerical Optimization
GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems
On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming
Frequently Asked Questions (16)
Q2. What is the basic idea of such strategies?
The basic idea of such strategies is to reuse the CP until its effectiveness deteriorates in terms of inner iterations required to solve the system.
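Such a reuse strategy can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: the system builder, the refresh threshold `max_inner`, and the use of GMRES with a sparse LU preconditioner are all assumptions made for the example; the paper instead reuses a constraint preconditioner (CP) across IP iterations.

```python
import numpy as np
from scipy.sparse import eye, random as sprandom
from scipy.sparse.linalg import LinearOperator, gmres, splu

rng = np.random.default_rng(0)
n, max_inner, outer_iters = 200, 30, 5

def make_system(k):
    # Stand-in for the sequence of KKT systems: an SPD matrix whose
    # diagonal shift drifts slightly between outer iterations.
    A = sprandom(n, n, density=0.02, random_state=rng)
    return (A @ A.T + (10.0 + k) * eye(n)).tocsc()

prec = None
for k in range(outer_iters):
    K = make_system(k)
    if prec is None:
        prec = splu(K)                 # (re)factorize the preconditioner
    M = LinearOperator((n, n), matvec=prec.solve)
    counter = []
    x, info = gmres(K, np.ones(n), M=M,
                    callback=lambda res: counter.append(res),
                    callback_type="pr_norm")
    iters = len(counter)               # inner iterations actually used
    if iters > max_inner:
        prec = None                    # effectiveness deteriorated: refresh next time
    print(f"outer {k}: inner iterations = {iters}, converged = {info == 0}")
```

The key point is that the (expensive) factorization is computed only when the inner iteration count signals that the frozen preconditioner no longer captures the current system.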
Q3. How is the fill-in problem addressed?
Since the matrices to be factorized are often sparse, suitable reordering strategies are exploited to deal with the fill-in problem.
Q4. Why can SQMR be applied to the preconditioned KKT system?
Note that, due to the symmetry of K and P, SQMR can be applied to the preconditioned KKT system; SQMR is transpose-free and hence computationally more efficient than QMR.
Q5. What are the main issues that are related to the iterative linear algebra solvers?
The authors focused on large-scale problems and on iterative linear algebra solvers, addressing three fundamental issues that arise from specific needs of IP methods and have a significant impact on their effectiveness: preconditioning of the KKT system, with special attention to CPs; adaptive stopping criteria for the inner iterations; and controlling the inertia of the KKT matrix.
Q6. What are the main shortcomings of the SQP methods?
The SQP methods also have several critical shortcomings, such as the possibility that the subproblem is nonconvex, that the linearized constraints are inconsistent, or that the iterates fail to converge.
Q7. How has the PRQP solver been compiled?
The PR code, written in Fortran 77 with a C driver that manages dynamic memory allocation, has been compiled using the g77 3.4.6 and gcc 4.1.3 compilers.
Q8. What is the definition of inertia control?
The ability of a solver to reveal and modify the inertia of K is referred to as inertia control; a solver with this capability is called an inertia-controlling solver.
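The inertia of a symmetric matrix is the triple of counts of its positive, negative, and zero eigenvalues. As a small illustrative check (computed here via dense eigenvalues, not the factorization-based detection a real solver would use): for a saddle-point KKT matrix with a positive definite Hessian block H and a full-row-rank constraint Jacobian A, the inertia is (n, m, 0).

```python
import numpy as np

def inertia(K, tol=1e-10):
    # Count positive, negative, and (numerically) zero eigenvalues.
    w = np.linalg.eigvalsh(K)
    return (int(np.sum(w > tol)),
            int(np.sum(w < -tol)),
            int(np.sum(np.abs(w) <= tol)))

n, m = 4, 2
rng = np.random.default_rng(1)
H = np.eye(n)                      # positive definite Hessian block
A = rng.standard_normal((m, n))    # full-row-rank constraint Jacobian
K = np.block([[H, A.T], [A, np.zeros((m, m))]])

print(inertia(K))                  # expected (n, m, 0) = (4, 2, 0)
```

An inertia-controlling solver detects a deviation from (n, m, 0) during factorization and modifies the matrix (e.g. by regularization) to restore the correct inertia.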
Q9. How can the authors obtain an approximate solution of (6)?
An approximate solution of (6) can be obtained by applying a Newton step to the KKT conditions of the BP, starting from a previous approximation.
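A minimal illustration of a Newton step on KKT conditions, using an equality-constrained QP rather than the paper's barrier problem (the barrier setting is analogous but carries extra log-barrier terms). Here the KKT conditions are linear, so a single Newton step from any previous approximation solves them exactly.

```python
import numpy as np

# Equality-constrained QP:  min 1/2 x^T Q x - c^T x  s.t.  A x = b
rng = np.random.default_rng(2)
n, m = 5, 2
Q = 2.0 * np.eye(n)
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x, lam = np.zeros(n), np.zeros(m)            # previous approximation
r_d = Q @ x + A.T @ lam - c                  # dual (stationarity) residual
r_p = A @ x - b                              # primal (feasibility) residual

# One Newton step on the KKT conditions: solve the KKT system for the step.
K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
step = np.linalg.solve(K, -np.concatenate([r_d, r_p]))
x += step[:n]
lam += step[n:]

print(np.linalg.norm(Q @ x + A.T @ lam - c), np.linalg.norm(A @ x - b))
```

In the IP setting the same linear system is solved only approximately by a Krylov method, which is where the inner stopping criteria and preconditioners discussed above come in.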
Q10. What is the default choice for the preconditioner?
The default choice for the preconditioner is the exact CP; it is applied through the sparse LBLT factorization provided by the MA27 suite of routines [27] from the Harwell Subroutine Library.
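To illustrate how such a factorization is applied as a preconditioner, here is a hedged sketch in which SciPy's dense `scipy.linalg.ldl` stands in for MA27's sparse LBLT routines: the same three-step application (forward solve, block-diagonal solve, backward solve) on a small symmetric indefinite KKT matrix.

```python
import numpy as np
from scipy.linalg import ldl, solve_triangular

n, m = 4, 2
rng = np.random.default_rng(3)
H = np.eye(n)
A = rng.standard_normal((m, n))
K = np.block([[H, A.T], [A, np.zeros((m, m))]])  # symmetric indefinite KKT matrix
r = rng.standard_normal(n + m)

lu, d, perm = ldl(K)          # K = lu @ d @ lu.T, with lu[perm] lower triangular
L = lu[perm]
y = solve_triangular(L, r[perm], lower=True)     # forward substitution
w = np.linalg.solve(d, y)                        # 1x1 / 2x2 block-diagonal solve
u = solve_triangular(L.T, w, lower=False)        # backward substitution
z = np.empty_like(u)
z[perm] = u                                      # undo the row permutation

print(np.linalg.norm(K @ z - r))                 # residual of the applied solve
```

In the paper's setting the factorization acts on the CP rather than on K itself, and a sparse factorization such as MA27's is essential for large-scale problems.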
Q11. What can be used to decide when to update the preconditioner?
Other criteria can be chosen to decide when to update the preconditioner; furthermore, CG and SQMR can also be applied alternately, i.e. CG when the CP is used for the first time and SQMR in all the remaining cases (see [16] for details).
Q12. When does the cost of factorizations become prohibitive?
When the problem is large-scale, the cost of the factorizations may be prohibitive in terms of memory and time, thus limiting the effective use of optimization codes.
Q13. How can one obtain an approximate CP?
CP approximations are obtained by reusing for multiple IP iterations the CP that has been computed at a certain iteration.
Q14. What observation motivates reusing the CP as the IP method progresses?
This idea is motivated by the observation that, when the IP method progresses toward the solution, the entries in D generally get smaller.
Q15. How do systems (24), (25) and (20) relate to reduced quadratic optimization problems?
The authors observe that systems (24), (25) and (20) are the first order optimality conditions of reduced quadratic optimization problems, obtained by eliminating all the constraints from the original quadratic problems.
Q16. What is the difference between QMR and GMRES?
Unlike GMRES, which uses a long-term recurrence for generating an orthogonal basis for the corresponding Krylov subspace, QMR is based on a short-term recurrence, but generates a nonorthogonal basis.
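The practical consequence can be seen with SciPy's implementations of both solvers (an illustration, not from the paper): GMRES's long recurrence means its storage grows with the iteration count unless restarted, while QMR's coupled short-term recurrences keep memory constant at the price of a nonorthogonal basis.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, qmr

# A nonsymmetric, diagonally dominant tridiagonal test system.
n = 500
A = diags([-1.0, 2.0, -0.5], [-1, 0, 1], shape=(n, n)).tocsr()
b = np.ones(n)

x_g, info_g = gmres(A, b)   # long recurrence, orthogonal Krylov basis
x_q, info_q = qmr(A, b)     # short-term recurrences, nonorthogonal basis

print(info_g, info_q, np.linalg.norm(x_g - x_q))
```

Both solvers converge on this well-conditioned system; for symmetric systems such as the preconditioned KKT system, SQMR further simplifies QMR by removing the multiplications with the transpose.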