
Continuation-Conjugate Gradient Methods for the Least Squares Solution of Nonlinear Boundary Value Problems

Roland Glowinski, H. B. Keller, and L. Reinhart
SIAM Journal on Scientific and Statistical Computing, Vol. 6, No. 4, pp. 793–832, October 1985


SIAM J. SCI. STAT. COMPUT.
Vol. 6, No. 4, October 1985
© 1985 Society for Industrial and Applied Mathematics

CONTINUATION-CONJUGATE GRADIENT METHODS FOR THE LEAST SQUARES SOLUTION OF NONLINEAR BOUNDARY VALUE PROBLEMS*

R. GLOWINSKI†, H. B. KELLER‡ AND L. REINHART§
Abstract. We discuss in this paper a new combination of methods for solving nonlinear boundary value problems containing a parameter. Methods of the continuation type are combined with least squares formulations, preconditioned conjugate gradient algorithms and finite element approximations. We can compute branches of solutions with limit points, bifurcation points, etc. Several numerical tests illustrate the possibilities of the methods discussed in the present paper; these include the Bratu problem in one and two dimensions, one-dimensional bifurcation and perturbed bifurcation problems, the driven cavity problem for the Navier–Stokes equations.

Key words. nonlinear boundary value problems, bifurcation, continuation methods, nonlinear least squares, conjugate gradient, finite elements, Navier–Stokes equations, biharmonic solvers
1. Introduction. We present in this paper a powerful combination of techniques that are used to solve a variety of nonlinear boundary value problems containing a parameter. Indeed the resulting method can be employed to study a large class of nonlinear eigenvalue problems. The individual techniques include: arclength or pseudo-arclength continuation, least squares formulation in an appropriate Hilbert space setting, a conjugate gradient iterative method for solving the least squares problem and finite element approximations to yield a finite dimensional problem for computation.

In § 2 the solution techniques are described in some detail. Specifically, in § 2.1 the least squares formulations of a broad class of nonlinear problems, say in the form

(1.1)    AU = T(U),

are formulated in an appropriate Hilbert space setting. Then in § 2.2 a conjugate gradient iterative solution technique for solving such least squares problems is presented. In § 2.3 a pseudo-arclength continuation method for nonlinear eigenvalue problems in the form

(1.2)    Lu = G(u, λ)

is discussed. This involves adjoining a scalar linear constraint, say

(1.3)    l(u, λ, s) = 0,

and with U = {u, λ} the previous least squares and conjugate gradient techniques can be applied to the system (1.2), (1.3). One big advantage of our specific continuation method is that simple limit or fold points of the original problem (1.2) are just regular points for our reformulation in the form (1.1). The entire procedure thus enables us to determine large arcs of branches of solutions of (1.2) with no special precautions or change of methods near limit points.
* Received by the editors November 22, 1982, and in revised form June 12, 1984.
† Paris VI University, LA 189, Tour 55.65, 75230 Paris Cedex 05, France, and INRIA, Domaine de Voluceau, Rocquencourt, 78153 Le Chesnay Cedex, France.
‡ Applied Mathematics, California Institute of Technology, Pasadena, California 91125. The research of this author was supported in part by the U.S. Department of Energy under contract EY-76-S-03-0767, Project Agreement 12, and by the Army Research Office under contract DAAG 29-78-C-0011.
§ INRIA, Domaine de Voluceau, Rocquencourt, 78153 Le Chesnay Cedex, France.
These techniques, as described in § 2, apply to the analytical problem. However they go over extremely well when various discrete approximations are applied to yield computational methods of great power and practicality. We illustrate this by considering several nonlinear boundary value problems of some difficulty and current interest. In each of these problems the discretization is obtained by some finite element formulation. The well-known Bratu problem on a square domain is treated in § 3. Several ordinary differential equation examples displaying bifurcation and the effects of perturbed bifurcation are treated in § 4. We show how to use perturbed bifurcation and continuation to obtain the bifurcating solutions. Finally in § 5 the Navier–Stokes equations are solved for the driven cavity problem.

Actually the techniques described in this paper have also been applied to the solution of nonlinear boundary value problems more complicated than those considered in the following sections. Among these problems, we shall mention the Von Karman equations for nonlinear plates and the computation of the multiple solutions of the full potential equation modelling transonic flows for compressible inviscid fluids.
2. Solution techniques. We introduce in this section the methods we shall apply in §§ 3, 4, 5 to the solution of quite general nonlinear boundary value problems. They include least squares, conjugate gradient and arc length continuation methods.

Let V be a Hilbert space (real for simplicity) equipped with the scalar product (·,·) and the corresponding norm ‖·‖. We denote by V′ the dual space of V, by ⟨·,·⟩ the duality pairing between V and V′, and by ‖·‖* the corresponding dual norm, i.e.

(2.1)    ‖f‖* = sup_{v ∈ V, v ≠ 0} ⟨f, v⟩/‖v‖,    f ∈ V′.

The problem that we consider is to find u ∈ V such that

(2.2)    S(u) = 0,

where S is a nonlinear operator from V to V′.
2.1. Least squares formulation of problem (2.2). A least squares formulation of (2.2) is obtained by observing that any solution of (2.2) is also a global minimizer over V of the functional J: V → ℝ defined by

(2.3)    J(v) = ½ ‖S(v)‖*².

Hence a least squares formulation of (2.2) is: Find u ∈ V such that

(2.4)    J(u) ≤ J(v)    ∀ v ∈ V.

In practice we proceed as follows. Let A be the duality isomorphism corresponding to (·,·) and ⟨·,·⟩. That is, for every v ∈ V, Av ∈ V′ satisfies

(2.5)    ⟨Av, w⟩ = (v, w)    ∀ w ∈ V,

(2.6)    ‖Av‖* = ‖v‖.

It follows that

(2.7)    J(v) = ½ ⟨Aξ, ξ⟩,

where ξ is a (nonlinear) function of v obtained via the solution of the well-posed linear problem

(2.8)    Aξ = S(v).

CONTINUATION-CONJUGATE
GRADIENT
METHODS
795
We observe that (2.4) has the structure of an optimal control problem, where
(i) v is the control vector,
(ii) ξ is the state vector,
(iii) (2.8) is the state equation,
(iv) J is the cost function.
As a final remark we observe that any solution of the minimization problem (2.4) for which J vanishes is also a solution of the original problem (2.2).
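This optimal control structure is straightforward to mimic in finite dimensions. The sketch below is our illustration, not code from the paper: we take V = ℝⁿ with a discrete Dirichlet Laplacian standing in for the duality map A of (2.5), and a hypothetical Bratu-type residual S(v) = Av − λeᵛ. Evaluating J(v) then follows (2.7)–(2.8) literally: solve the state equation Aξ = S(v), and form ½⟨Aξ, ξ⟩.

```python
import numpy as np

def laplacian_1d(n, h):
    """Stiffness matrix of -d^2/dx^2 on (0, 1) with zero Dirichlet data;
    it plays the role of the duality map A of (2.5) on V = R^n."""
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def S(v, lam, A):
    """Hypothetical discrete Bratu-type residual: S(v) = Av - lam * e^v."""
    return A @ v - lam * np.exp(v)

def J(v, lam, A):
    """Least-squares functional (2.7): J(v) = 1/2 <A xi, xi>, with xi from (2.8)."""
    xi = np.linalg.solve(A, S(v, lam, A))   # state equation: A xi = S(v)
    return 0.5 * xi @ (A @ xi)

n = 20
A = laplacian_1d(n, 1.0 / (n + 1))
```

Note that J(v) = ½ S(v)ᵀA⁻¹S(v), i.e. half the squared dual norm of the residual, so J vanishes exactly at the solutions of S(v) = 0.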
2.2. Solution by a conjugate gradient algorithm. We suppose from now on that S is differentiable, implying in turn the differentiability of J over V. We denote by S′ and J′ the Fréchet derivatives of S and J respectively. From the differentiability of J it is quite natural to solve the minimization problem (2.4) by a conjugate gradient algorithm; among the possible conjugate gradient algorithms we have selected the Polak–Ribière variant (cf. Polak [1]) whose very good performance has been discussed by Powell [2] (see also Shanno [28]). The Polak–Ribière method applied to the solution of (2.4) provides the following algorithm.
Step 0: Initialization. For some given

(2.9)    u⁰ ∈ V,

compute g⁰ ∈ V as the solution of

(2.10)    Ag⁰ = J′(u⁰),

and set

(2.11)    z⁰ = g⁰.

Then, for n ≥ 0, with uⁿ, gⁿ, zⁿ known, compute uⁿ⁺¹, gⁿ⁺¹, zⁿ⁺¹ by:

Step 1: Descent. Compute

(2.12)    ρₙ = arg min_{ρ ∈ ℝ} J(uⁿ − ρzⁿ),

and then set

(2.13)    uⁿ⁺¹ = uⁿ − ρₙzⁿ.

Step 2: New descent direction. Define gⁿ⁺¹ ∈ V as the solution of

(2.14)    Agⁿ⁺¹ = J′(uⁿ⁺¹);

then compute

(2.15)    γₙ = (gⁿ⁺¹ − gⁿ, gⁿ⁺¹)/(gⁿ, gⁿ),

and set

(2.16)    zⁿ⁺¹ = gⁿ⁺¹ + γₙzⁿ.

Set n = n + 1 and return to Step 1.
The two costly steps (because they need some auxiliary computations) of algorithm (2.9)–(2.16) are:
(i) The solution of the one-dimensional minimization problem (2.12) to obtain ρₙ. We have done the corresponding line search by dichotomy and parabolic interpolation,¹ using ρₙ₋₁ as starting value (see [3] for more details). We recall that each evaluation of J(v), for a given argument v, requires the solution of the linear problem (2.8) to obtain the corresponding ξ.
(ii) The calculation of gⁿ⁺¹ from uⁿ⁺¹, which requires the solution of two linear problems associated with A (namely (2.8) with v = uⁿ⁺¹ and (2.14)).

¹ If the nonlinearity is polynomial we can use faster methods.
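In a finite-dimensional (Galerkin) setting the whole loop (2.9)–(2.16) fits in a few lines. Everything below is our sketch, not the paper's code: the problem is a hypothetical discrete Bratu-type residual S(v) = Av − λeᵛ on V = ℝⁿ with scalar product (v, w) = vᵀAw, the gradient solve implements (2.22) as Ag = S′(v)ᵀξ, and the factor-of-two dichotomy is a crude stand-in for the dichotomy-plus-parabolic-interpolation line search mentioned in (i).

```python
import numpy as np

def cg_least_squares(v0, lam, A, n_iter=50):
    """Polak-Ribiere least-squares CG, algorithm (2.9)-(2.16), on the
    hypothetical discrete problem S(v) = Av - lam * exp(v)."""
    def S(v):     return A @ v - lam * np.exp(v)
    def Sp(v):    return A - lam * np.diag(np.exp(v))   # Jacobian S'(v)
    def state(v): return np.linalg.solve(A, S(v))       # (2.8): A xi = S(v)
    def J(v):
        xi = state(v)
        return 0.5 * xi @ (A @ xi)
    def grad(v):                                        # (2.22): A g = S'(v)^T xi
        return np.linalg.solve(A, Sp(v).T @ state(v))

    def line_search(v, z, rho):
        # crude dichotomy in rho (the paper refines this by parabolic interpolation)
        f = J(v - rho * z)
        while J(v - 2.0 * rho * z) < f:
            rho *= 2.0
            f = J(v - rho * z)
        while rho > 1e-12 and J(v - 0.5 * rho * z) < f:
            rho *= 0.5
            f = J(v - rho * z)
        return rho

    v, rho = v0.copy(), 1.0
    g = grad(v)                                         # (2.10)
    z = g.copy()                                        # (2.11)
    for _ in range(n_iter):
        rho = line_search(v, z, rho)                    # (2.12), warm-started at rho_{n-1}
        v = v - rho * z                                 # (2.13)
        g_new = grad(v)                                 # (2.14)
        gAg = g @ (A @ g)                               # (g, g) in the A-inner product
        if gAg < 1e-30:                                 # gradient ~ 0: converged
            break
        gamma = ((g_new - g) @ (A @ g_new)) / gAg       # (2.15), Polak-Ribiere
        z = g_new + gamma * z                           # (2.16)
        g = g_new
    return v, J(v)

# tiny demo: 20 interior points on (0, 1), lam = 1
n = 20
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
v_sol, J_sol = cg_least_squares(np.zeros(n), 1.0, A)
```

Each sweep pays exactly the costs itemized above: one linear solve per J evaluation inside the line search, and two solves per gradient update.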
Calculation of J′(uⁿ) and gⁿ: Owing to the importance of Step 2, let us detail the calculation of J′(uⁿ) and gⁿ. Let v ∈ V; then J′(v) may be defined by

(2.17)    ⟨J′(v), w⟩ = lim_{t→0, t≠0} [J(v + tw) − J(v)]/t    ∀ w ∈ V.

We obtain from (2.7), (2.8), (2.17) that

(2.18)    ⟨J′(v), w⟩ = ⟨Aξ, η⟩,

where ξ and η are the solutions of (2.8) and

(2.19)    Aη = S′(v)·w,

respectively. Since A is self-adjoint (from (2.5)) we also have from (2.18), (2.19) that

(2.20)    ⟨J′(v), w⟩ = ⟨Aξ, η⟩ = ⟨Aη, ξ⟩ = ⟨S′(v)·w, ξ⟩.

Therefore J′(v) ∈ V′ may be identified with the linear functional

(2.21)    w → ⟨S′(v)·w, ξ⟩.

It follows then from (2.14), (2.20), (2.21) that gⁿ is the solution of the following linear variational problem: Find gⁿ ∈ V such that

(2.22)    ⟨Agⁿ, w⟩ = ⟨S′(uⁿ)·w, ξⁿ⟩    ∀ w ∈ V,

where ξⁿ is the solution of (2.8) corresponding to v = uⁿ.
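In finite dimensions, identity (2.20) is easy to sanity-check against the difference quotient (2.17). The snippet below is our illustration on a hypothetical small problem S(v) = Av − eᵛ, with a random symmetric positive definite matrix standing in for the duality map A; it compares ⟨S′(v)·w, ξ⟩ with [J(v + tw) − J(v)]/t.

```python
import numpy as np

# hypothetical discrete problem: S(v) = Av - exp(v), A a small SPD matrix
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6.0 * np.eye(6)              # SPD stand-in for the duality map A

def S(v):  return A @ v - np.exp(v)
def Sp(v): return A - np.diag(np.exp(v))   # Jacobian S'(v)

def J(v):
    xi = np.linalg.solve(A, S(v))          # state equation (2.8)
    return 0.5 * xi @ (A @ xi)

v = rng.standard_normal(6)
w = rng.standard_normal(6)
xi = np.linalg.solve(A, S(v))

exact = (Sp(v) @ w) @ xi                   # <S'(v).w, xi>, eqs. (2.20)-(2.21)
t = 1e-6
fd = (J(v + t * w) - J(v)) / t             # difference quotient (2.17)
```

Agreement to first order in t confirms that the adjoint shortcut (2.20) returns the same number as differentiating J directly, at the cost of one extra linear solve rather than one per basis direction w.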
Remark 2.1. It is clear from the above observations that an efficient solver for linear problems related to the operator A (in fact to a finite-dimensional approximation of A) will be a fundamental tool for the solution of problem (2.2) by the conjugate gradient algorithm (2.9)–(2.16).

Remark 2.2. The fact that J′(v) is known through (2.20) is not at all a drawback if a Galerkin or a finite element method is used to approximate (2.2). Indeed we only need to know the value of ⟨J′(v), w⟩ for w belonging to a basis of the finite-dimensional subspace of V corresponding to the Galerkin or finite element approximation under consideration.
Convergence of algorithm (2.9)–(2.16): We introduce the concept of regular solution of problem (2.2) by the following definition.

DEFINITION 2.1. A solution u of (2.2) is said to be regular if the operator S′(u) (∈ ℒ(V, V′)) is an isomorphism from V onto V′.

Using a modification of the finite-dimensional techniques of Polak [1], it has been proved in Reinhart [3] that if problem (2.2) has a finite number of solutions and if these solutions are regular, then the conjugate gradient algorithm (2.9)–(2.16) converges to a solution of (2.2), depending upon the initial iterate u⁰ in (2.9). This convergence result requires that u⁰ is well chosen, as in Newton's method. Hence the role that continuation methods may play is apparent.

CONTINUATION-CONJUGATE
GRADIENT
METHODS
797
2.3. Arc length continuation methods. Consider now the solution of nonlinear problems depending upon a real parameter λ; we would like to follow in the space V × ℝ branches of solutions {u(λ), λ} when λ belongs to a compact interval of ℝ. These nonlinear eigenvalue problems can be written as follows:

(2.23)    S(u, λ) = 0,    λ ∈ ℝ,    u ∈ V.

Equation (2.23) reduces quite often to

(2.24)    Lu = G(u, λ),    λ ∈ ℝ,    u ∈ V,

where L: V → V′ is a linear elliptic operator, and where G is a nonlinear Fredholm operator (see e.g. Berger [4] for the definition of Fredholm operators).

A classical approach is to use λ as the parameter defining arcs of solutions. If for λ = λ₀ problem (2.23) has a unique solution u = u₀ and if that solution is isolated, that is

(2.25)    S⁰ᵤ = (∂S/∂u)(u₀, λ₀)

is an isomorphism from V onto V′, and if {u, λ} → S(u, λ) is C¹ in some ball around {u₀, λ₀}, then the implicit function theorem implies the existence of a smooth arc of regular solutions u = u(λ) for |λ − λ₀| < ρ. Therefore, for λ given sufficiently close to λ₀ we may solve problem (2.23) just as problem (2.2). These procedures, however, may fail or encounter difficulties (slow convergence for example) close to a nonisolated solution.

To overcome these difficulties we replace problem (2.23) by the following system:

(2.26)    S(u, λ) = 0,

(2.27)    l(u, λ, s) = 0,

where l: V × ℝ × ℝ → ℝ is chosen such that s is some arc length (or a convenient approximation to it) on the solution branch. We look then for a solution {u(s), λ(s)}, s being given (but not λ). If in addition to {u₀, λ₀} we know a tangent vector to the path {u̇(s₀), λ̇(s₀)} (where v̇ denotes the derivative of v with respect to s) satisfying

(2.28a)    Sᵤ(u₀, λ₀)u̇(s₀) + S_λ(u₀, λ₀)λ̇(s₀) = 0,

(2.28b)    ‖u̇(s₀)‖² + |λ̇(s₀)|² = 1,

then we can use

(2.29)    l(u, λ, s) = (u̇(s₀), u(s) − u(s₀)) + λ̇(s₀)(λ(s) − λ(s₀)) − (s − s₀) = 0,

for |s − s₀| sufficiently small.
Let us define U ∈ V × ℝ by U = {u, λ}; then problem (2.26), (2.27) can be written as

(2.30)    T(U) = 0,

where

(2.31)    T(U) = {T₁(U), T₂(U)},

with

(2.32)    T₁(U) = S(u, λ),    T₂(U) = l(u, λ, s).
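The augmented formulation (2.26)–(2.27) is what lets continuation walk straight through folds. As a sketch (our toy example, not one from the paper), take the scalar problem S(u, λ) = u² + λ, whose branch u = ±√(−λ) has a simple limit point at (0, 0): natural continuation in λ breaks down there, while Newton's method on the augmented system, with l given by (2.29) and the tangent refreshed from (2.28a)–(2.28b) after each step, traverses the fold without incident.

```python
import numpy as np

def pseudo_arclength_steps(u, lam, du, dlam, ds, n_steps):
    """Trace a branch of S(u, lam) = 0 through a fold via (2.26)-(2.29).

    Toy scalar problem (ours, not the paper's): S(u, lam) = u**2 + lam,
    whose branch u = +-sqrt(-lam) has a simple limit point at (0, 0)."""
    def S(u, lam):    return u * u + lam
    def Su(u, lam):   return 2.0 * u     # S_u
    def Slam(u, lam): return 1.0         # S_lambda

    path = [(u, lam)]
    for _ in range(n_steps):
        u0, lam0 = u, lam
        u, lam = u0 + ds * du, lam0 + ds * dlam            # tangent predictor
        for _ in range(20):                                # Newton on (2.26)-(2.27)
            r1 = S(u, lam)
            r2 = du * (u - u0) + dlam * (lam - lam0) - ds  # constraint (2.29)
            Jac = np.array([[Su(u, lam), Slam(u, lam)],
                            [du,         dlam]])
            step = np.linalg.solve(Jac, np.array([-r1, -r2]))
            u, lam = u + step[0], lam + step[1]
            if abs(r1) + abs(r2) < 1e-12:
                break
        # new unit tangent from (2.28a)-(2.28b), oriented along the branch
        t = np.array([-Slam(u, lam), Su(u, lam)])
        t /= np.linalg.norm(t)
        if t[0] * du + t[1] * dlam < 0.0:
            t = -t
        du, dlam = t
        path.append((u, lam))
    return path

# start on the upper branch at (u, lam) = (1, -1), heading toward the fold
path = pseudo_arclength_steps(1.0, -1.0, -1.0 / 5**0.5, 2.0 / 5**0.5, 0.2, 20)
```

Note that the augmented Jacobian here has determinant √(1 + 4u²) ≥ 1 along the branch, so the fold point, singular for continuation in λ alone, is a perfectly regular point of the augmented system, exactly the advantage claimed in the introduction.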