Q2. Why are reorderings applied before computing a preconditioner?
Reorderings have been used to reduce fill-in (as with sparse direct solvers), to introduce parallelism in the construction and application of ILU preconditioners, and to improve the stability of the incomplete factorization.
Q3. What is the common method of reducing the cost of applying the preconditioner?
Postfiltration (i.e., a posteriori removal of small entries) is used to reduce the cost of applying the preconditioner, usually without adversely affecting the rate of convergence.
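As an illustration (not code from the survey), post-filtration can be sketched on a preconditioner factor stored as a coordinate map; the `postfilter` helper and its drop rule (a threshold relative to the largest entry in magnitude) are assumptions made for this example.

```python
def postfilter(M, tol):
    """A posteriori removal of small entries: keep only entries of the sparse
    factor M (a dict mapping (i, j) -> value) whose magnitude is at least
    tol times the largest magnitude present. Hypothetical helper."""
    ref = max(abs(v) for v in M.values())
    return {ij: v for ij, v in M.items() if abs(v) >= tol * ref}

# Example: a small factor with two tiny fill entries at (2, 0) and (2, 1)
M = {(0, 0): 1.0, (1, 0): 0.5, (1, 1): 1.0,
     (2, 0): 1e-4, (2, 1): 2e-4, (2, 2): 1.0}
Mf = postfilter(M, tol=1e-2)
```

Dropping the two tiny entries shrinks the factor from six stored entries to four while perturbing it only slightly, which is why the preconditioner becomes cheaper to apply with little effect on convergence.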
Q4. What is the main reason iterative methods have not been widely used in industrial applications?
The lack of robustness of incomplete factorizations is one of the main reasons iterative methods have not been widely used in industrial applications, where reliability is paramount.
Q5. Which iterative methods were the first to be widely applied to large linear systems?
Early on, stationary iterative methods, such as successive overrelaxation (SOR) and its variants, were perfected and widely applied to the solution of large linear systems arising from the discretization of PDEs of elliptic type.
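SOR updates one unknown at a time, overcorrecting the Gauss-Seidel step by a relaxation factor omega. The following is a minimal dense NumPy sketch (the model problem and the choice omega = 1.5 are illustrative assumptions, not values from the survey):

```python
import numpy as np

def sor(A, b, omega=1.5, tol=1e-10, maxit=2000):
    """Successive overrelaxation: x[i] <- (1 - omega) * x[i] + omega * (GS update).
    Convergence for SPD A requires 0 < omega < 2."""
    n = len(b)
    x = np.zeros(n)
    for it in range(maxit):
        for i in range(n):
            s = b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]
            x[i] = (1 - omega) * x[i] + omega * s / A[i, i]
        if np.linalg.norm(b - A @ x) < tol * np.linalg.norm(b):
            return x, it + 1
    return x, maxit

# Model problem: 1-D Poisson (tridiagonal), a classic elliptic test case
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, its = sor(A, b)
```

The dependence on omega illustrates the drawback mentioned below: a good relaxation parameter generally requires spectral information about A.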
Q6. What motivated proceeding with the factorization despite a few unstable pivots?
The approach was motivated by the hope that, if only a few of the pivots are unstable (i.e., nonpositive), the resulting factorization might still yield a satisfactory preconditioner.
Q7. Why are the z- and w-vectors initially sparse, and do they stay sparse?
Because of the initialization chosen (step (1) in Algorithm 5.2), the z- and w-vectors are initially very sparse; however, the updates in step (8) cause them to fill in rapidly (see [40, 43, 64] for graph-theoretical characterizations of fill-in in Algorithm 5.2).
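The process in question is a biconjugation of the unit basis vectors. The following dense NumPy sketch (not the sparse Algorithm 5.2 itself; the function name and the drop rule are chosen here for illustration) shows the initialization as unit vectors and the rank-one updates that create fill-in; a positive `drop` threshold turns it into an incomplete, AINV-like process:

```python
import numpy as np

def biconjugation(A, drop=0.0):
    """Two-sided conjugation: build Z, W with W^T A Z = D diagonal, so that
    inv(A) ~= Z inv(D) W^T. With drop > 0, small z/w entries are discarded,
    giving an incomplete (AINV-like) process."""
    n = A.shape[0]
    Z = np.eye(n)    # z-vectors start as unit basis vectors: very sparse
    W = np.eye(n)    # likewise for the w-vectors
    d = np.zeros(n)
    for i in range(n):
        p = A[i, :] @ Z[:, i:]    # row i of A against the remaining z-vectors
        q = A[:, i] @ W[:, i:]    # column i of A against the remaining w-vectors
        d[i] = p[0]               # pivot; p[0] == 0 would be a breakdown
        # rank-one updates: this is where the vectors fill in rapidly
        Z[:, i+1:] -= Z[:, [i]] * (p[1:] / p[0])
        W[:, i+1:] -= W[:, [i]] * (q[1:] / q[0])
        if drop > 0:              # incomplete variant: discard small entries
            Z[np.abs(Z) < drop] = 0.0
            W[np.abs(W) < drop] = 0.0
    return Z, W, d

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)) + 10.0 * np.eye(8)  # diagonally dominant demo matrix
Z, W, d = biconjugation(A)
```

With drop = 0 the process is exact and Z diag(1/d) W^T reproduces inv(A); with drop > 0 it yields a sparse approximate inverse at the cost of some accuracy.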
Q8. What are the limitations of stationary iterations?
In spite of their mathematical elegance, stationary iterations suffer from serious limitations, such as lack of sufficient generality and dependence on convergence parameters that might be difficult to estimate without a priori information, for example on the spectrum of the coefficient matrix.
Q9. How many iterations does unpreconditioned Bi-CGSTAB take to reduce the initial residual?
Bi-CGSTAB with no preconditioning requires 4033 iterations to reduce the initial residual by eight orders of magnitude; this takes 10.1 s on one processor of a Sun Starfire server.
Q10. Why is parallelism not readily apparent in applying ILU preconditioners?
The forward and backward triangular solves that form the preconditioning operation are highly sequential in nature, and parallelism in these operations is not readily apparent.
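The sequential nature is visible in plain forward substitution: computing x[i] requires every earlier x[j] with j < i, a loop-carried dependence that serializes the solve across rows (a minimal NumPy sketch assuming dense storage):

```python
import numpy as np

def forward_solve(L, b):
    """Forward substitution for lower-triangular L. Each x[i] needs all
    earlier entries x[:i]; this dependence chain is what makes the
    triangular solves in ILU preconditioning hard to parallelize."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):      # loop-carried dependence: x[:i] must already be known
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

rng = np.random.default_rng(1)
L = np.tril(rng.standard_normal((6, 6))) + 5.0 * np.eye(6)
b = rng.standard_normal(6)
x = forward_solve(L, b)
```

The backward solve with U has the mirror-image dependence, from the last unknown upward.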
Q11. How do block algorithms affect the cost of the solve?
While the number of iterations, and even the time per iteration, may go down with a block algorithm, using blocks that are not completely dense introduces additional arithmetic overhead that tends to offset the gains in convergence rate and flop rate.
Q12. Which preconditioner is well defined for a general SPD matrix?
Because the pivots d_j are computed as d_j = ⟨Az_j, z_j⟩ with z_j ≠ 0 (the jth entry of z_j is equal to 1), this preconditioner, which is usually referred to as SAINV, is well defined for a general SPD matrix.
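Why these pivots cannot vanish can be seen in a small dense sketch of the underlying A-orthogonalization (the function name and test matrix are illustrative assumptions): for SPD A, each pivot is the A-norm ⟨Az_j, z_j⟩ of a nonzero vector and hence strictly positive.

```python
import numpy as np

def sainv_pivots_demo(A):
    """A-orthogonalize the unit basis vectors against an SPD matrix A.
    Each pivot d[j] = <A z_j, z_j> is an A-norm of a nonzero vector
    (the jth entry of z_j stays 1), so no breakdown can occur."""
    n = A.shape[0]
    Z = np.eye(n)                        # z_j starts as e_j, so z_j != 0
    d = np.zeros(n)
    for j in range(n):
        d[j] = Z[:, j] @ A @ Z[:, j]     # SAINV-style pivot: positive for SPD A
        # A-orthogonalize the remaining columns against z_j
        coeffs = (A @ Z[:, j]) @ Z[:, j+1:] / d[j]
        Z[:, j+1:] -= np.outer(Z[:, j], coeffs)
    return Z, d

rng = np.random.default_rng(2)
B = rng.standard_normal((8, 8))
A = B @ B.T + 8.0 * np.eye(8)            # random SPD test matrix
Z, d = sainv_pivots_demo(A)
```

In exact arithmetic Z^T A Z = diag(d) with all d_j > 0; the incomplete version additionally drops small entries of the z-vectors but keeps the same positive-pivot guarantee.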
Q13. Why may an incomplete factorization fail even when a full LU factorization exists?
Notice that the incomplete factorization may fail due to division by zero (this is usually referred to as a breakdown), even if A admits an LU factorization without pivoting.
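A concrete instance (a dense-storage ILU(0) sketch written for this example, not code from the survey): the 3×3 matrix below admits an LU factorization without pivoting (det A = 1, all exact pivots nonzero), yet ILU(0) discards the fill at position (3, 2) and produces a zero pivot in U, so the triangular solve with U would divide by zero.

```python
import numpy as np

def ilu0(A):
    """ILU(0), IKJ variant: updates are kept only where A itself is nonzero.
    Returns L (strict lower part, unit diagonal implied) and U stored in
    one array. Raises on a zero pivot encountered during elimination."""
    n = A.shape[0]
    F = A.astype(float).copy()
    pattern = A != 0
    for i in range(1, n):
        for k in range(i):
            if not pattern[i, k]:
                continue
            if F[k, k] == 0.0:
                raise ZeroDivisionError(f"ILU(0) breakdown: zero pivot at step {k}")
            F[i, k] /= F[k, k]
            for j in range(k + 1, n):
                if pattern[i, j]:        # drop fill outside the pattern of A
                    F[i, j] -= F[i, k] * F[k, j]
    return F

A = np.array([[1., 1., 1.],
              [0., 1., 1.],
              [1., 0., 1.]])
F = ilu0(A)
# Full LU gives u_33 = 1, but here F[2, 2] (the ILU pivot u_33) is exactly 0:
# the dropped fill at (3, 2) removed the contribution that kept it nonzero.
```

Applying this "preconditioner" requires a solve with the singular U factor, which fails, even though A itself is perfectly well behaved.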
Q14. What has improved the reliability of preconditioned iterative solvers for general sparse matrices?
Advances in preconditioning have improved the reliability of preconditioned iterative solvers applied to general sparse matrices, thus opening the door to the use of iterative methods in application areas that were previously the exclusive domain of direct solution methods.
Q15. What is the difference between modified incomplete factorizations and unmodified ones?
In spite of the large improvements that are possible for model problems through the idea of modification, modified incomplete factorizations are not as widely used as unmodified ones, possibly due to the fact that modified methods are more likely to break down on nonmodel problems.