
Iterative Solution of Augmented Systems Arising in Interior Methods

01 May 2007 – SIAM Journal on Optimization (Society for Industrial and Applied Mathematics) – Vol. 18, Iss. 2, pp. 666–690
TL;DR: A family of constraint preconditioners is proposed that provably eliminates the inherent ill-conditioning in the augmented system of linear equations that arise in interior methods for general nonlinear optimization.

Summary (1 min read)

1.3. Notation and assumptions.

  • It is often the case in practice that the equations and variables corresponding to unit rows of A are eliminated directly from the KKT system.
  • This elimination creates no additional nonzero elements and provides a smaller "partially condensed" system with an Ω(1/μ) diagonal term added to H.
  • It will be shown that preconditioners for both the full and partially condensed KKT systems depend on the eigenvalues of the same matrix (see Lemmas 3.4 and 3.5).
  • It follows that their analysis also applies to preconditioners defined for the partially condensed system.

By successively replacing H by

  • Fixing μ defines a regularization of the problem, which allows the formulation of methods that do not require an assumption on the rank of the equality constraint Jacobian.
  • With an appropriate choice of constraints, this feature can be used to guarantee that the nonlinear functions and their derivatives are well defined at all points generated by the interior method.
  • Note that the authors cannot compute the condensed or doubly augmented system for these equations because of the zero block.
  • Then, provided that the constraint preconditioner is applied exactly at every PCG step, the right-hand side of (5.3) will remain zero for all subsequent iterations.

6. Some numerical examples.

  • To illustrate the numerical performance of the proposed preconditioners, a PCG method was applied to a collection of illustrative large sparse KKT systems.
  • The test matrices were generated from a number of realistic KKT systems arising in the context of primal-dual interior methods.
  • The authors conclude with some randomly generated problems that illustrate some of the properties of the preconditioned matrices.

6.2. Results from randomly generated problems.

  • The authors expect that the proposed preconditioners would asymptotically give a cluster of 700 unit eigenvalues (a small-scale illustration of this clustering follows this list).
  • Table 6.3 was generated with the same data used for Table 6.2, with the one exception that strict complementarity was assumed not to hold.
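This kind of clustering can be reproduced at small scale. The sketch below is our illustration, not the paper's experiment: it assembles the doubly augmented matrix B(1) and the constraint preconditioner P of (1.5) with an assumed diagonal approximation M = diag(H), gives D small, medium, and big diagonal entries as in assumption (A2), and counts the eigenvalues of P⁻¹B(1) near one. The theory summarized above predicts m eigenvalues exactly equal to one, with rank(A_S) more within O(μ^{1/2}) of one.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
n, mS, mM, mB = 10, 4, 3, 3            # |S|, |M|, |B| as in assumption (A2)
m, mu = mS + mM + mB, 1e-4
Hr = rng.standard_normal((n, n))
H = Hr @ Hr.T + np.eye(n)              # symmetric H with correct inertia
A = rng.standard_normal((m, n))        # A_S = A[:mS] has rank mS generically
d = np.concatenate([mu * np.ones(mS),  # d_ii = O(mu)       for i in S
                    np.ones(mM),       # d_ii = Theta(1)    for i in M
                    np.ones(mB) / mu]) # d_ii = Omega(1/mu) for i in B
Dinv = np.diag(1.0 / d)
M0 = np.diag(np.diag(H))               # assumed approximation M of H

B1 = np.block([[H  + 2 * A.T @ Dinv @ A, A.T], [A, np.diag(d)]])
P  = np.block([[M0 + 2 * A.T @ Dinv @ A, A.T], [A, np.diag(d)]])

# eigenvalues of P^{-1} B(1) via the symmetric-definite generalized problem
w = eigh(B1, P, eigvals_only=True)
print(np.sum(np.abs(w - 1.0) < 1e-6))  # expect m = 10 eigenvalues exactly one
print(np.sum(np.abs(w - 1.0) < 0.05))  # expect m + rank(A_S) = 14 in the cluster
```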


SIAM J. Optim., Vol. 18, No. 2, pp. 666–690. © 2007 Society for Industrial and Applied Mathematics.
ITERATIVE SOLUTION OF AUGMENTED SYSTEMS ARISING IN INTERIOR METHODS
ANDERS FORSGREN, PHILIP E. GILL, AND JOSHUA D. GRIFFIN
Abstract. Iterative methods are proposed for certain augmented systems of linear equations that arise in interior methods for general nonlinear optimization. Interior methods define a sequence of KKT equations that represent the symmetrized (but indefinite) equations associated with Newton's method for a point satisfying the perturbed optimality conditions. These equations involve both the primal and dual variables and become increasingly ill-conditioned as the optimization proceeds. In this context, an iterative linear solver must not only handle the ill-conditioning but also detect the occurrence of KKT matrices with the wrong matrix inertia. A one-parameter family of equivalent linear equations is formulated that includes the KKT system as a special case. The discussion focuses on a particular system from this family, known as the "doubly augmented system," that is positive definite with respect to both the primal and dual variables. This property means that a standard preconditioned conjugate-gradient method involving both primal and dual variables will either terminate successfully or detect if the KKT matrix has the wrong inertia. Constraint preconditioning is a well-known technique for preconditioning the conjugate-gradient method on augmented systems. A family of constraint preconditioners is proposed that provably eliminates the inherent ill-conditioning in the augmented system. A considerable benefit of combining constraint preconditioning with the doubly augmented system is that the preconditioner need not be applied exactly. Two particular "active-set" constraint preconditioners are formulated that involve only a subset of the rows of the augmented system and thereby may be applied with considerably less work. Finally, some numerical experiments illustrate the numerical performance of the proposed preconditioners and highlight some theoretical properties of the preconditioned matrices.

Key words. large-scale nonlinear programming, nonconvex optimization, interior methods, augmented systems, KKT systems, iterative methods, conjugate-gradient method, constraint preconditioning

AMS subject classifications. 49J20, 49J15, 49M37, 49D37, 65F05, 65K05, 90C30
DOI. 10.1137/060650210
1. Introduction. This paper concerns the formulation and analysis of preconditioned iterative methods for the solution of augmented systems of the form

(1.1)   $\begin{pmatrix} H & A^T \\ A & -G \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}$,

with A an m × n matrix, H symmetric, and G symmetric positive semidefinite. These equations arise in a wide variety of scientific and engineering applications, where they are known by a number of different names, including "augmented systems," "saddle-point systems," "KKT systems," and "equilibrium systems." (The bibliography of the survey by Benzi, Golub, and Liesen [3] contains 513 related articles.) The main focus of this paper will be on the solution of augmented systems arising in interior methods for general constrained optimization, in which case (1.1) is the system associated with Newton's method for finding values of the primal and dual variables that satisfy the perturbed KKT optimality conditions (see, e.g., Wright [49] and Forsgren, Gill, and Wright [15]). In this context H is the Hessian of the Lagrangian, A is the constraint Jacobian, and G is diagonal.

Received by the editors January 17, 2006; accepted for publication (in revised form) March 5, 2007; published electronically August 22, 2007. The research of the second and third authors was supported by National Science Foundation grants DMS-9973276, CCF-0082100, and DMS-0511766. http://www.siam.org/journals/siopt/18-2/65021.html
(A. Forsgren) Optimization and Systems Theory, Department of Mathematics, Royal Institute of Technology, SE-100 44 Stockholm, Sweden (andersf@kth.se). The research of this author was supported by the Swedish Research Council (VR).
(P. E. Gill) Department of Mathematics, University of California, San Diego, La Jolla, CA 92093-0112 (pgill@ucsd.edu).
(J. D. Griffin) Sandia National Laboratories, Livermore, CA 94551-9217 (jgriffi@sandia.gov). Part of this work was carried out during the Spring of 2003 while this author was visiting KTH with financial support from the Göran Gustafsson Foundation.
Many of the benefits associated with the methods discussed in this paper derive from formulating the interior method so that the diagonal G is positive definite. We begin by presenting results for G positive definite and consider the treatment of systems with positive semidefinite and singular G in section 5. Throughout, for the case where G is positive definite, we denote G by D and rewrite (1.1) as an equivalent symmetric system Bx = b, where

(1.2)   $B = \begin{pmatrix} H & -A^T \\ -A & -D \end{pmatrix}$ and $b = \begin{pmatrix} b_1 \\ -b_2 \end{pmatrix}$,

with D positive definite and diagonal. We will refer to this symmetric system as the KKT system. (It is possible to symmetrize (1.1) in a number of different ways. The format (1.2) will simplify the linear algebra in later sections.) When D is nonsingular, it is well known that the augmented system is equivalent to the two smaller systems
(1.3)   $(H + A^T D^{-1} A)\,x_1 = b_1 + A^T D^{-1} b_2$ and $x_2 = D^{-1}(b_2 - Ax_1)$,

where the system for x_1 is known as the condensed system. It is less well known that another equivalent system is the doubly augmented system
(1.4)   $\begin{pmatrix} H + 2A^T D^{-1} A & A^T \\ A & D \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} b_1 + 2A^T D^{-1} b_2 \\ b_2 \end{pmatrix}$,

which has been proposed for use with direct factorization methods by Forsgren and Gill [16]. In this paper we investigate the properties of preconditioned iterative methods applied to system (1.2) directly or to the equivalent systems (1.3) and (1.4).
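The equivalence of (1.2)–(1.4) is easy to verify numerically. The following sketch is our illustration, not code from the paper: it builds the three systems for small random data, using the sign conventions reconstructed in (1.2)–(1.4) above, and checks that they return the same pair (x_1, x_2). H is shifted to be positive definite only so that all three matrices are safely nonsingular.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 5
Hr = rng.standard_normal((n, n))
H = Hr @ Hr.T + np.eye(n)                 # symmetric (here positive definite)
A = rng.standard_normal((m, n))
D = np.diag(rng.uniform(0.1, 2.0, m))     # positive definite and diagonal
Dinv = np.linalg.inv(D)
b1, b2 = rng.standard_normal(n), rng.standard_normal(m)

# (1.2): the KKT system B x = b.
B = np.block([[H, -A.T], [-A, -D]])
x = np.linalg.solve(B, np.concatenate([b1, -b2]))
x1, x2 = x[:n], x[n:]

# (1.3): the condensed system, with x2 recovered afterwards.
x1c = np.linalg.solve(H + A.T @ Dinv @ A, b1 + A.T @ Dinv @ b2)
x2c = Dinv @ (b2 - A @ x1c)

# (1.4): the doubly augmented system.
Bda = np.block([[H + 2 * A.T @ Dinv @ A, A.T], [A, D]])
xda = np.linalg.solve(Bda, np.concatenate([b1 + 2 * A.T @ Dinv @ b2, b2]))

print(np.allclose(x1, x1c), np.allclose(x2, x2c))   # True True
print(np.allclose(x, xda))                          # True
```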
If the underlying optimization problem is not convex, the matrix H may be indefinite. The KKT matrix B of (1.2) is said to have correct inertia if the matrix H + A^T D^{-1} A is positive definite. This definition is based on the properties of the underlying optimization problem. Broadly speaking, the KKT system has correct inertia if the problem is locally convex (for further details see, e.g., Forsgren and Gill [16], Forsgren [18], and Griffin [32]). If the KKT matrix has correct inertia, then systems (1.2)–(1.4) have a common unique solution (see section 2).
1.1. Properties of the KKT system. The main issues associated with using
iterative methods to solve KKT systems are (i) termination control, (ii) inertia con-
trol, and (iii) inherent ill-conditioning. The first of these issues is common to other
applications where the linear system represents a linearization of some underlying non-
linear system of equations. Issues (ii) and (iii), however, are unique to optimization
and will be the principal topics of this paper.
In the context of interior methods, the KKT system (1.2) is solved as part of
a two-level iterative scheme. At the outer level, nonlinear equations that define the
first-order optimality conditions are parameterized by a small positive quantity μ.
The idea is that the solution of the parameterized equations should approach the
solution of the optimization problem as μ → 0. At the inner level, equations (1.2)
represent the symmetrized Newton equations associated with finding a zero of the
perturbed optimality conditions for a given value of μ. Although systems (1.2)–(1.4) have identical solutions, an iterative method will generally produce a different sequence of iterates in each case (see section 3 for a discussion of the equivalence of iterative solvers in this context). An iterative method applied to the augmented system (1.2) or the doubly augmented system (1.4) treats x_1 and x_2 as independent variables, which is appropriate in the optimization context because x_1 and x_2 are associated with independent quantities in the perturbed optimality conditions (i.e., the primal and dual variables). In contrast, an iterative solver for the condensed system (1.3) will generate approximations to x_1 only, with the variables x_2 being defined as x_2 = D^{-1}(b_2 - Ax_1). This becomes an important issue when an approximate solution
is obtained by truncating the iterations of the linear solver. During the early outer
iterations, it is usually inefficient to solve the KKT system accurately, and it is better
to accept an inexact solution that gives a residual norm that is less than some factor
of the norm of the right-hand side (see, e.g., Dembo, Eisenstat, and Steihaug [7]).
For the condensed system, the residual for the second block of equations will be zero regardless of the accuracy of x_1, which implies that termination must be based on the accuracy of x_1 alone. It is particularly important for the solver to place equal weight on x_1 and x_2 when system (1.2) is being solved in conjunction with a primal-dual trust-region method (see Gertz and Gill [20] and Griffin [32]). The conjugate-gradient version of this method exploits the property that the norms of the (x_1, x_2) iterates increase monotonically (see Steihaug [44]). This property does not hold for (x_1, x_2) iterates generated for the condensed system.
If the KKT matrix does not have the correct inertia, the solution of (1.2) is not
useful, and the optimization continues with an alternative technique based on either
implicitly or explicitly modifying the matrix H (see, e.g., Toint [45], Steihaug [44],
Gould et al. [30], Hager [33], and Griffin [32]). It is therefore important that the
iterative solver is able to detect if B does not have correct inertia.
As the perturbation parameter μ is reduced, the KKT systems become increasingly ill-conditioned. The precise form of this ill-conditioning depends on the formulation of the interior method, but a common feature is that some diagonal elements of D are big and some are small. (It is almost always possible to formulate an interior method that requires the solution of an unsymmetric system that does not exhibit inevitable ill-conditioning as μ → 0. This unsymmetric system could be solved using an unsymmetric solver such as GMRES or QMR. Unfortunately, this approach is unsuitable for general KKT systems because an unsymmetric solver is unable to determine if the KKT matrix has correct inertia.) In section 3 we consider a preconditioned conjugate-gradient (PCG) method that provably removes the inherent ill-conditioning. In particular, we define a one-parameter family of preconditioners related to the class of so-called constraint preconditioners proposed by Keller, Gould, and Wathen [34]. Several authors have used constraint preconditioners in conjunction with the conjugate-gradient method to solve the indefinite KKT system (1.2) with b_2 = 0 and D = 0 (see, e.g., Lukšan and Vlček [36], Gould, Hribar, and Nocedal [29], Perugia and Simoncini [40], and Bergamaschi, Gondzio, and Zilli [4]). Recently, Dollar [12] and Dollar et al. [11] have proposed constraint preconditioners for system (1.2) with no explicit inertial or diagonal condition on D, but a full row-rank requirement on A and the assumption that b_2 = 0.
Methods that require b_2 = 0 must perform an initial projection step that effectively shifts the right-hand side to zero. The constraint preconditioner then forces the x_1 iterates to lie in the null space of A. A disadvantage with this approach is that the
constraint preconditioner must be applied exactly if subsequent iterates are to lie in
the null space. This limits the ability to perform approximate solves with the precon-
ditioner, as is often required when the matrix A has a PDE-like structure that also
must be handled using an iterative solver (see, e.g., Saad [41], Notay [37], Simoncini
and Szyld [43], and Elman et al. [14]). In section 3 we consider preconditioners that
do not require the assumption that b_2 = 0, and hence do not require an accurate solve with the preconditioner.
1.2. A PCG method for the KKT system. The goal of this paper is to formulate iterative methods that not only provide termination control and inertia control, but also eliminate the inevitable ill-conditioning associated with interior methods. All these features are present in an algorithm based on applying a PCG method to the doubly augmented system (1.4). This system is positive definite if the KKT matrix has correct inertia, and gives equal weight to x_1 and x_2 for early terminations. As preconditioner we use the constraint preconditioner

(1.5)   $P = \begin{pmatrix} M + 2A^T D^{-1} A & A^T \\ A & D \end{pmatrix}$,

where M is an approximation of H such that M + A^T D^{-1} A is positive definite. The equations Pv = r used to apply the preconditioner are solved by exploiting the equivalence of the systems
(1.6a)   $\begin{pmatrix} M + 2A^T D^{-1} A & A^T \\ A & D \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} r_1 \\ r_2 \end{pmatrix}$,

(1.6b)   $\begin{pmatrix} M & -A^T \\ -A & -D \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} r_1 - 2A^T D^{-1} r_2 \\ -r_2 \end{pmatrix}$, and

(1.6c)   $(M + A^T D^{-1} A)\,v_1 = r_1 - A^T D^{-1} r_2, \quad v_2 = D^{-1}(r_2 - Av_1)$
(see section 3). This allows us to compute the solution of (1.6a) by solving either
(1.6b) or (1.6c). (The particular choice will depend on the relative efficiency of the
methods available to solve the condensed and augmented systems.)
We emphasize that the doubly augmented systems are never formed or factored
explicitly. The matrix associated with the doubly augmented equations (1.4) is used
only as an operator to define products of the form v = Bu. As mentioned above, the
equations (1.6a) that apply the preconditioner are solved using either (1.6b) or (1.6c).
An important property of the method is that these equations also may be solved using
an iterative method. (It is safe to use the augmented or condensed system for the
preconditioner equations Pv = r because the inertia of P is guaranteed by the choice
of M (see section 3).)
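To make the scheme concrete, here is a minimal PCG sketch. It is our illustration, not the authors' implementation: the doubly augmented matrix (1.4) is used only as an operator, and the preconditioner equations Pv = r are solved through the condensed equivalent form (1.6c). The choice M = diag(H) is an assumption made purely for illustration, as is the random data.

```python
import numpy as np

def pcg(apply_B, apply_Pinv, b, tol=1e-10, maxiter=500):
    """Textbook preconditioned conjugate gradients for B x = b."""
    x = np.zeros_like(b)
    r = b.copy()                        # residual b - B x for x = 0
    z = apply_Pinv(r)
    p, rz = z.copy(), r @ z
    for _ in range(maxiter):
        Bp = apply_B(p)
        pBp = p @ Bp
        if pBp <= 0.0:                  # curvature test: wrong inertia detected
            raise ValueError("operator is not positive definite")
        alpha = rz / pBp
        x += alpha * p
        r -= alpha * Bp
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = apply_Pinv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(1)
n, m = 40, 15
Hr = rng.standard_normal((n, n))
H = Hr @ Hr.T + np.eye(n)               # H chosen SPD here, so (1.4) is SPD
A = rng.standard_normal((m, n))
d = rng.uniform(0.05, 5.0, m)           # diagonal of D
b1, b2 = rng.standard_normal(n), rng.standard_normal(m)

def apply_B(v):
    """Product with the doubly augmented matrix (1.4); never formed explicitly."""
    v1, v2 = v[:n], v[n:]
    return np.concatenate([H @ v1 + 2 * A.T @ ((A @ v1) / d) + A.T @ v2,
                           A @ v1 + d * v2])

M = np.diag(np.diag(H))                 # assumed approximation M of H
L = np.linalg.cholesky(M + A.T @ (A / d[:, None]))   # M + A^T D^{-1} A

def apply_Pinv(r):
    """Solve P v = r through the condensed equivalent form (1.6c)."""
    r1, r2 = r[:n], r[n:]
    v1 = np.linalg.solve(L.T, np.linalg.solve(L, r1 - A.T @ (r2 / d)))
    v2 = (r2 - A @ v1) / d
    return np.concatenate([v1, v2])

x = pcg(apply_B, apply_Pinv, np.concatenate([b1 + 2 * A.T @ (b2 / d), b2]))
# the computed pair (x1, x2) also satisfies the original KKT system (1.2)
Bkkt = np.block([[H, -A.T], [-A, -np.diag(d)]])
print(np.linalg.norm(Bkkt @ x - np.concatenate([b1, -b2])))
```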
In section 4 we formulate and analyze two variants of the preconditioner (1.5)
that exploit the asymptotic behavior of the elements of D. The use of these so-called
active-set preconditioners may require significantly less work when the underlying
optimization problem has more constraints than variables. In section 5, we consider
the case where G is positive semidefinite and singular. Finally, in section 6, we present
some numerical examples illustrating the properties of the proposed preconditioners.
1.3. Notation and assumptions. Unless explicitly indicated otherwise, ‖·‖ denotes the vector two-norm or its subordinate matrix norm. The inertia of a real symmetric matrix A, denoted by In(A), is the integer triple (a₊, a₋, a₀) giving the number of positive, negative, and zero eigenvalues of A. The spectrum of a (possibly unsymmetric) matrix A is denoted by eig(A). As the analysis concerns matrices with only real eigenvalues, eig(A) is regarded as an ordered set, with the least (i.e., "leftmost") eigenvalue, denoted by eig_min(A), appearing first. The quantity σ_k(A) denotes the kth largest singular value of A. Given a positive-definite A, the unique positive-definite X such that X² = A is denoted by A^{1/2}. Given vectors x_1 and x_2, the column vector consisting of the elements of x_1 augmented by the elements of x_2 is denoted by (x_1, x_2).

When μ is a positive scalar such that μ → 0, the notation p = O(μ) means that there exists a constant K such that |p| ≤ Kμ for all μ sufficiently small. For a positive p, p = Ω(1/μ) implies that there exists a constant K such that 1/p ≤ Kμ for all μ sufficiently small. In particular, p = O(1) means that |p| is bounded, and, for a positive p, p = Ω(1) means that p is bounded away from zero. For a positive p, the notation p = Θ(1) is used for the case where both p = O(1) and p = Ω(1), so that p remains bounded and is bounded away from zero as μ → 0.
As discussed in section 1.1, we are concerned with solving a sequence of systems of the form (1.2), where the matrices A, H, and D depend implicitly on μ. In particular, A and H are first and second derivatives evaluated at a point depending on μ, and D is an explicit function of μ. The notation defined above allows us to characterize the properties of H, A, and D in terms of their behavior as μ → 0. Throughout the analysis, it is assumed that the following properties hold:

(A1) H and A are both O(1).

(A2) The row indices of A may be partitioned into disjoint subsets S, M, and B such that d_ii = O(μ) for i ∈ S, d_ii = Θ(1) for i ∈ M, and d_ii = Ω(1/μ) for i ∈ B.

(A3) If A_S is the matrix of rows of A with indices in S and r = rank(A_S), then r remains constant as μ → 0 and σ_r(A_S) = Θ(1).

The second assumption reflects the fact that for μ sufficiently small, some diagonal elements of D are "small," some are "medium," and some are "big."
It is often the case in practice that the equations and variables corresponding to unit rows of A are eliminated directly from the KKT system. This elimination creates no additional nonzero elements and provides a smaller "partially condensed" system with an Ω(1/μ) diagonal term added to H. It will be shown that preconditioners for both the full and partially condensed KKT systems depend on the eigenvalues of the same matrix (see Lemmas 3.4 and 3.5). It follows that our analysis also applies to preconditioners defined for the partially condensed system.
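A small sketch of this elimination, under the sign conventions of (1.2) above and with randomly generated data (our illustration, not from the paper): rows of A that are unit vectors are eliminated, each one adding 1/d_ii to a single diagonal entry of H, and the resulting partially condensed system reproduces the solution of the full KKT system.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k, mrest = 7, 3, 4                  # the first k rows of A are unit rows
m = k + mrest
Hr = rng.standard_normal((n, n))
H = Hr @ Hr.T + np.eye(n)
A = np.vstack([np.eye(n)[:k], rng.standard_normal((mrest, n))])
d = rng.uniform(0.01, 2.0, m)
b1, b2 = rng.standard_normal(n), rng.standard_normal(m)

# Full KKT system (1.2).
B = np.block([[H, -A.T], [-A, -np.diag(d)]])
x = np.linalg.solve(B, np.concatenate([b1, -b2]))

# Partially condensed system: eliminate x2[i] for each unit row i < k.
# Each elimination adds 1/d_ii to one diagonal entry of H -- no new nonzeros
# (and the added term is Omega(1/mu) when d_ii = O(mu), i.e., i in S).
Ht, b1t = H.copy(), b1.copy()
Ht[np.arange(k), np.arange(k)] += 1.0 / d[:k]
b1t[:k] += b2[:k] / d[:k]
Ar, dr = A[k:], d[k:]
Bt = np.block([[Ht, -Ar.T], [-Ar, -np.diag(dr)]])
xt = np.linalg.solve(Bt, np.concatenate([b1t, -b2[k:]]))

print(np.allclose(x[:n], xt[:n]))                       # same x1
print(np.allclose(x[n:n+k], (b2[:k] - x[:k]) / d[:k]))  # eliminated x2 entries
print(np.allclose(x[n+k:], xt[n:]))                     # remaining x2 entries
```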
2. A parameterized system of linear equations. In this section, it is shown how the indefinite KKT system (1.2) may be embedded in a family of equivalent linear systems, parameterized by a scalar ν. This parameterization facilitates the simultaneous analysis of the three systems (1.2)–(1.4).

Definition 2.1 (the parameterized system). Let ν denote a scalar. Associated with the KKT equations Bx = b of (1.2), we define the parameterized equations B(ν)x = b(ν), with

$B(\nu) = \begin{pmatrix} H + (1+\nu)A^T D^{-1} A & \nu A^T \\ \nu A & \nu D \end{pmatrix}$ and $b(\nu) = \begin{pmatrix} b_1 + (1+\nu)A^T D^{-1} b_2 \\ \nu b_2 \end{pmatrix}$,

where H is symmetric and D is positive definite and diagonal.

The following proposition states the equivalence of the KKT system (1.2) and the parameterized system of Definition 2.1.

Proposition 2.2 (equivalence of the parameterized systems). Let ν denote a scalar parameter. If ν ≠ 0, then the system Bx = b of (1.2) and the system B(ν)x = b(ν) of Definition 2.1 have the same solution.
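Proposition 2.2 is easy to check numerically. The sketch below is ours, not from the paper: it forms B(ν)x = b(ν) for several values of ν and confirms that each system reproduces the solution of the KKT system (1.2), which is recovered at ν = −1; ν = 1 gives the doubly augmented system (1.4).

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 4
Hr = rng.standard_normal((n, n))
H = Hr @ Hr.T + np.eye(n)
A = rng.standard_normal((m, n))
D = np.diag(rng.uniform(0.1, 2.0, m))
Dinv = np.linalg.inv(D)
b1, b2 = rng.standard_normal(n), rng.standard_normal(m)

def system(nu):
    """The parameterized system B(nu) x = b(nu) of Definition 2.1."""
    Bnu = np.block([[H + (1 + nu) * A.T @ Dinv @ A, nu * A.T],
                    [nu * A,                        nu * D ]])
    bnu = np.concatenate([b1 + (1 + nu) * A.T @ Dinv @ b2, nu * b2])
    return Bnu, bnu

x_kkt = np.linalg.solve(*system(-1.0))   # nu = -1 recovers the KKT system (1.2)
for nu in (-0.5, 0.5, 1.0, 3.0):         # nu = 1 is the doubly augmented (1.4)
    Bnu, bnu = system(nu)
    assert np.allclose(np.linalg.solve(Bnu, bnu), x_kkt)
print("systems B(nu) x = b(nu) share the KKT solution for all tested nu")
```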

Citations
Journal ArticleDOI
TL;DR: In this article, an iterative solution of sequences of KKT linear systems arising in interior point methods applied to large convex quadratic programming problems is considered, where an efficient preconditioning strategy is crucial for the efficiency of the overall method.
Abstract: This work focuses on the iterative solution of sequences of KKT linear systems arising in interior point methods applied to large convex quadratic programming problems. This task is the computational core of the interior point procedure, and an efficient preconditioning strategy is crucial for the efficiency of the overall method. Constraint preconditioners are very effective in this context; nevertheless, their computation may be very expensive for large-scale problems, and resorting to approximations of them may be convenient. Here we propose a procedure for building inexact constraint preconditioners by updating a seed constraint preconditioner computed for a KKT matrix at a previous interior point iteration. These updates are obtained through low-rank corrections of the Schur complement of the (1,1) block of the seed preconditioner. The updated preconditioners are analyzed both theoretically and computationally. The results obtained show that our updating procedure, coupled with an adaptive strategy f...

18 citations

Book ChapterDOI
01 Jan 2008
TL;DR: The paper discusses how to efficiently implement incomplete Cholesky preconditioners and how to eliminate ill-conditioning caused by the barrier approach and an evaluation of methods that use quasi-Newton approximations to the Hessian of the Lagrangian.
Abstract: This paper studies the performance of several interior-point and active-set methods on bound constrained optimization problems. The numerical tests show that the sequential linear-quadratic programming (SLQP) method is robust, but is not as effective as gradient projection at identifying the optimal active set. Interior-point methods are robust and require a small number of iterations and function evaluations to converge. An analysis of computing times reveals that it is essential to develop improved preconditioners for the conjugate gradient iterations used in SLQP and interior-point methods. The paper discusses how to efficiently implement incomplete Cholesky preconditioners and how to eliminate ill-conditioning caused by the barrier approach. The paper concludes with an evaluation of methods that use quasi-Newton approximations to the Hessian of the Lagrangian.

17 citations

Journal ArticleDOI
TL;DR: It is proved that, under mild assumptions on the underlying problem, a class of block preconditioners can be chosen in a way which guarantees that the convergence rate of the preconditioned conjugate residuals method is independent of the discretization mesh parameter.
Abstract: We discuss a class of preconditioning methods for the iterative solution of symmetric algebraic saddle point problems, where the (1, 1) block matrix may be indefinite or singular. Such problems may arise, e.g. from discrete approximations of certain partial differential equations, such as the Maxwell time harmonic equations. We prove that, under mild assumptions on the underlying problem, a class of block preconditioners (including block diagonal, triangular and symmetric indefinite preconditioners) can be chosen in a way which guarantees that the convergence rate of the preconditioned conjugate residuals method is independent of the discretization mesh parameter. We provide examples of such preconditioners that do not require additional scaling. Copyright © 2010 John Wiley & Sons, Ltd.

17 citations


Cites background from "Iterative Solution of Augmented Sys..."

  • ...On the other hand, interior point optimization methods lead to systems with a positive-semidefinite matrix A, which is singular, or almost singular [3]....


  • ...Doubly constrained [3]: $\begin{pmatrix} \tilde{A}_0 & B^T \\ B & 2B \tilde{A}_0^{-1} B^T \end{pmatrix}$...


Journal ArticleDOI
TL;DR: This work proposes an updating procedure that performs a low-rank correction of the Schur complement of the (1,1) block of the CP for the seed matrix of a regularized KKT sequence and presents a technique for updating the factorization and building inexact CPs for subsequent matrices of the sequence.
Abstract: We address the problem of preconditioning sequences of regularized KKT systems, such as those arising in interior point methods for convex quadratic programming. In this case, constraint preconditioners (CPs) are very effective and widely used; however, when solving large-scale problems, the computational cost for their factorization may be high, and techniques for approximating them appear as a convenient alternative. Here, given a block $LDL^T$ factorization of the CP associated with a KKT matrix of the sequence, called seed matrix, we present a technique for updating the factorization and building inexact CPs for subsequent matrices of the sequence. We have recently proposed an updating procedure that performs a low-rank correction of the Schur complement of the (1,1) block of the CP for the seed matrix. Now we focus on KKT sequences with nonzero (2,2) blocks and make a step further, by enriching the low-rank correction of the Schur complement by an additional cheap update. The latter update takes into account information not included in the former one and expressed as a diagonal modification of the low-rank correction. Theoretical results and numerical experiments show that the new strategy can be more effective than the procedure based on the low-rank modification alone.

11 citations


Additional excerpts

  • ..., [10,11,15,19,20,24,28,31])....


Journal ArticleDOI
TL;DR: This paper explores the block triangular preconditioning techniques applied to the iterative solution of the saddle point linear systems arising from the discretized Maxwell equations and shows that all the eigenvalues of the preconditionsed matrix are strongly clustered.
Abstract: In this paper, we explore the block triangular preconditioning techniques applied to the iterative solution of the saddle point linear systems arising from the discretized Maxwell equations. Theoretical analysis shows that all the eigenvalues of the preconditioned matrix are strongly clustered. Numerical experiments are given to demonstrate the efficiency of the presented preconditioner. Mathematical subject classification: 65F10.

11 citations


Cites background from "Iterative Solution of Augmented Sys..."

  • ...2): block diagonal preconditioner [22, 23, 24, 25], block triangular preconditioner [15, 16, 26, 27, 28, 37], constraint preconditioner [29, 30, 31, 32, 33] and Hermitian and skew-Hermitian splitting (HSS) preconditioner [34]....


References
Book
01 Jan 1983

34,729 citations

Book
01 Nov 1996

8,608 citations

Journal ArticleDOI
TL;DR: It is shown that performance profiles combine the best features of other tools for performance evaluation to create a single tool for benchmarking and comparing optimization software.
Abstract: We propose performance profiles — distribution functions for a performance metric — as a tool for benchmarking and comparing optimization software. We show that performance profiles combine the best features of other tools for performance evaluation.

3,729 citations

Book
01 Jan 1993
TL;DR: An efficient translator is implemented that takes as input a linear AMPL model and associated data, and produces output suitable for standard linear programming optimizers.
Abstract: Practical large-scale mathematical programming involves more than just the application of an algorithm to minimize or maximize an objective function. Before any optimizing routine can be invoked, considerable effort must be expended to formulate the underlying model and to generate the requisite computational data structures. AMPL is a new language designed to make these steps easier and less error-prone. AMPL closely resembles the symbolic algebraic notation that many modelers use to describe mathematical programs, yet it is regular and formal enough to be processed by a computer system; it is particularly notable for the generality of its syntax and for the variety of its indexing operations. We have implemented an efficient translator that takes as input a linear AMPL model and associated data, and produces output suitable for standard linear programming optimizers. Both the language and the translator admit straightforward extensions to more general mathematical programs that incorporate nonlinear expressions or discrete variables.

3,176 citations

Journal ArticleDOI
TL;DR: An SQP algorithm that uses a smooth augmented Lagrangian merit function and makes explicit provision for infeasibility in the original problem and the QP subproblems is discussed.
Abstract: Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available and that the constraint gradients are sparse. We discuss an SQP algorithm that uses a smooth augmented Lagrangian merit function and makes explicit provision for infeasibility in the original problem and the QP subproblems. SNOPT is a particular implementation that makes use of a semidefinite QP solver. It is based on a limited-memory quasi-Newton approximation to the Hessian of the Lagrangian and uses a reduced-Hessian algorithm (SQOPT) for solving the QP subproblems. It is designed for problems with many thousands of constraints and variables but a moderate number of degrees of freedom (say, up to 2000). An important application is to trajectory optimization in the aerospace industry. Numerical results are given for most problems in the CUTE and COPS test collections (about 900 examples).

2,831 citations

Frequently Asked Questions (16)
Q1. What have the authors contributed in "Iterative solution of augmented systems arising in interior methods"?

In this paper, a family of constraint preconditioners is proposed to eliminate the inherent ill-conditioning in the augmented system. 

zero elements of G are associated with linearized equality constraints, where the corresponding subset of equations (5.1) are the Newton equations for a zero of the constraint residual. 

The interior-point method requires the solution of systems with a KKT matrix of the form

(6.1)   $\begin{pmatrix} H & -J^T \\ -J & -\Gamma \end{pmatrix}$,

where H is the n × n Hessian of the Lagrangian, J is the m × n Jacobian matrix of constraint gradients, and Γ is a positive-definite diagonal with some large and small elements.

If the zero elements of G are associated with linear constraints, and the system (5.3) is solved exactly, it suffices to compute the special step y only once, when solving the first system. 

The authors prefer to do the analysis in terms of the doubly augmented system because it provides the parameterization based on the scalar parameter ν.

provided that the constraint preconditioner is applied exactly at every PCG step, the right-hand side of (5.3) will remain zero for all subsequent iterations. 

It should be emphasized that the choice of C and B affects only the efficiency of the active-set constraint preconditioners and not the definition of the linear equations that need to be solved. 

An advantage of using preconditioning in conjunction with the doubly augmented system is that the linear equations used to apply the preconditioner need not be solved exactly. 

The preconditioner (4.5) has the factorization $P_P^1(\nu) = R_P\, P_P^2(\nu)\, R_P^T$, where $R_P$ is the upper-triangular matrix (4.2a) and $P_P^2(\nu)$ is given by

(4.6)   $P_P^2(\nu) = \begin{pmatrix} M + (1+\nu)A_C^T D_C^{-1} A_C & \nu A_C^T & 0 \\ \nu A_C & \nu D_C & 0 \\ 0 & 0 & \nu D_B \end{pmatrix}.$
In this strict-complementarity case, the authors expect that the proposed preconditioners would asymptotically give a cluster of 700 unit eigenvalues. 

Consider the PCG method applied to a generic symmetric system Ax = b with symmetric positive-definite preconditioner P and initial iterate x0 = 0. 

The data for the test matrices was generated using a primaldual trust-region method (see, e.g., [16, 20, 32]) applied to eight problems, Camshape, Channel, Gasoil, Marine, Methanol, Pinene, Polygon, and Tetra, from the COPS 3.0 test collection [6, 8, 9, 10]. 

Theorems 3.6 and 4.1 predict that for the preconditioners $P(1)$ and $P_P^1(1)$, 700 (= m + rank($A_S$)) eigenvalues of the preconditioned matrix will cluster close to unity, with 600 of these eigenvalues exactly equal to one.

Since $S_C$ is independent of ν, the spectrum of $P_P^1(\nu)^{-1}B(\nu)$ is also independent of ν. Lemma 3.5 implies that $S_C$ has at least rank($A_S$) eigenvalues that are $1 + O(\mu^{1/2})$, which establishes part (b). To establish part (c), the authors need to estimate the difference between the eigenvalues of $S_C$ and $S$, where $S$ is given by (3.3a).

For their example with 100 variables and 100,000 inequality constraints, a matrix of dimension 150 would need to be factored instead of a matrix of dimension 100,100. 

Without loss of generality it may be assumed that the rows of A are ordered so that the m_1 row indices in S corresponding to linearly independent rows appear first.