Journal ArticleDOI

A Newton Method for Shape-Preserving Spline Interpolation

01 Jun 2002-Siam Journal on Optimization (Society for Industrial and Applied Mathematics)-Vol. 13, Iss: 2, pp 588-602
TL;DR: This paper proves local quadratic convergence of the Newton-type method proposed by Irvine, Marin, and Smith for shape-preserving interpolation by viewing it as a semismooth Newton method, and presents a modification of the method which has global quadratic convergence.
Abstract: In 1986, Irvine, Marin, and Smith proposed a Newton-type method for shape-preserving interpolation and, based on numerical experience, conjectured its quadratic convergence. In this paper, we prove local quadratic convergence of their method by viewing it as a semismooth Newton method. We also present a modification of the method which has global quadratic convergence. Numerical examples illustrate the results.

Summary (1 min read)

  • If the integral over [a, b] of the piecewise linear function (

  • Furthermore, in this case quadratic convergence of the Newton method would follow directly from [13].
  • The following example of dimension 2 shows that such an argument does not work.
  • Hence the function above is not piecewise smooth.
  • Then the positive semidefiniteness of the elements of ∂F(λ) follows from the fact that any matrix in the generalized Jacobian of the gradient of a convex function must be symmetric and positive semidefinite.
  • By combining the above lemmas and applying Theorem 1.1, the authors obtain the main result of this paper, which settles the question posed in [11].

4. Numerical results.

  • In Figures 1–5, the dashed line is for the resulting shape-preserving cubic spline (using the data obtained with the starting point λ^0 = sign(d)); the solid line is for the natural spline (using the MATLAB SPLINE function); and "o" stands for the original given data.
  • In Table 1, which reports the results of the numerical experiments, the authors observe that Algorithm 3.2 converges rapidly to the solution from both starting points for all problems except Example 4.4 (y_9 = 4), for which the algorithm failed within 30 iterations to produce an approximate solution meeting the required accuracy.
  • These observations indicate that how far each divided difference is from zero may make a big difference in the numerical performance of the algorithm.
  • The problem is, however, nonsmooth, and here the authors are entering new territory.


A NEWTON METHOD FOR SHAPE-PRESERVING SPLINE INTERPOLATION
ASEN L. DONTCHEV, HOU-DUO QI, LIQUN QI, AND HONGXIA YIN
SIAM J. OPTIM. © 2002 Society for Industrial and Applied Mathematics
Vol. 13, No. 2, pp. 588–602
This work is dedicated to Professor Jochem Zowe
Abstract. In 1986, Irvine, Marin, and Smith proposed a Newton-type method for shape-preserving interpolation and, based on numerical experience, conjectured its quadratic convergence. In this paper, we prove local quadratic convergence of their method by viewing it as a semismooth Newton method. We also present a modification of the method which has global quadratic convergence. Numerical examples illustrate the results.
Key words. shape-preserving interpolation, splines, semismooth equation, Newton's method, quadratic convergence
AMS subject classifications. 41A29, 65D15, 49J52, 90C25
PII. S1052623401393128
1. Introduction. Given nodes a = t_1 < t_2 < ··· < t_{N+2} = b and values y_i = f(t_i), i = 1, ..., N+2, N ≥ 3, of an unknown function f : [a, b] → R, the standard interpolation problem consists of finding a function s from a given set S of interpolants such that s(t_i) = y_i, i = 1, ..., N+2. When S is the set of twice continuously differentiable piecewise cubic polynomials across t_i, we deal with cubic spline interpolation. The problem of cubic spline interpolation can be viewed in various ways; the closest to this paper is the classical Holladay variational characterization, according to which the natural cubic interpolating spline can be defined as the unique solution of the following optimization problem:

    min ‖f''‖²  subject to  f(t_i) = y_i,  i = 1, ..., N+2,    (1)

where ‖·‖ denotes the norm of L²[a, b]. With a simple transformation, this problem can be written as a nearest point problem in L²[a, b]: find the projection of the origin on the intersection of the hyperplanes

    { u ∈ L²[a, b] | ∫_a^b u(t) B_i(t) dt = d_i },  i = 1, ..., N,

where B_i are the piecewise linear normalized B-splines with support [t_i, t_{i+2}] and d_i are the second divided differences.
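As a small illustration (our own sketch, not from the paper; note that, depending on the B-spline normalization used, the paper's d_i may carry an extra spacing factor t_{i+2} − t_i), the standard second divided differences of the data can be computed as:

```python
def second_divided_differences(t, y):
    """Second divided differences f[t_i, t_{i+1}, t_{i+2}] of the data (t_i, y_i)."""
    # first divided differences f[t_i, t_{i+1}]
    first = [(y[i + 1] - y[i]) / (t[i + 1] - t[i]) for i in range(len(t) - 1)]
    # second divided differences f[t_i, t_{i+1}, t_{i+2}]
    return [(first[i + 1] - first[i]) / (t[i + 2] - t[i]) for i in range(len(t) - 2)]

# For samples of f(t) = t^2 every second divided difference equals 1:
# second_divided_differences([0, 1, 3, 4], [0, 1, 9, 16]) -> [1.0, 1.0]
```

For "convex" data all of these are positive and for "concave" data all are negative; it is the sign pattern of the d_i that drives the shape constraints in the rest of the paper.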
Received by the editors July 29, 2001; accepted for publication (in revised form) March 26, 2002; published electronically October 1, 2002.
http://www.siam.org/journals/siopt/13-2/39312.html
Mathematical Reviews, Ann Arbor, MI 48107 (ald@ams.org).
School of Mathematics, University of New South Wales, Sydney 2052, NSW, Australia (hdqi@maths.unsw.edu.au). The research of this author was supported by the Australian Research Council.
Department of Applied Mathematics, Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong (maqilq@polyu.edu.hk). This author's work is supported by the Hong Kong Research Grant Council, grant PolyU 5296/02P.
Department of Mathematics, Graduate School, Chinese Academy of Sciences, P.O. Box 3908, Beijing 100039, People's Republic of China (hxyin@maths.unsw.edu.au). This author's work was done while the author was visiting the University of New South Wales, Australia, and was supported by the Australian Research Council.

Since the mid '80s, after the ground-breaking paper of Micchelli et al. [15], the attention of a number of researchers has been attracted to spline interpolation problems with constraints. For example, if we add to problem (1) the additional constraint f'' ≥ 0, we obtain a convex interpolation problem; provided that the data are "convex," a convex interpolant "preserves the shape" of the data. If we add the constraint f' ≥ 0, we obtain a monotone interpolation problem. Central to our analysis here is a subsequent paper by Irvine, Marin, and Smith [11], who rigorously defined the problem of shape-preserving spline interpolation and laid the groundwork for its numerical analysis. In particular, they proposed a Newton-type method and, based on numerical examples, conjectured its fast (quadratic) theoretical convergence. In the present paper we prove this conjecture.

We approach the problem of Irvine, Marin, and Smith [11] in a new way, by using recent advances in optimization. It is now well understood that, in general, the traditional methods based on standard calculus may not work for optimization problems with constraints; however, such problems can be reformulated as nonsmooth problems that need special treatment. The corresponding theory emerged already in the '70s, championed by the works of R. T. Rockafellar and his collaborators, and is now becoming a standard tool for more and more theoretical and practical problems. The present paper is an example of how nonsmooth analysis can be applied to solve a problem from numerical analysis that has remained open for quite a while.

Before stating the problem of shape-preserving interpolation that we consider in this paper, we briefly review the result of nonsmooth analysis which provides the basis for this work.
For a locally Lipschitz continuous function G : R^n → R^n, the generalized Jacobian ∂G(x) of G at x in the sense of Clarke [2] is the convex hull of all limits obtained along sequences on which G is differentiable:

    ∂G(x) = co { lim_{x^j → x} ∇G(x^j) | G is differentiable at x^j ∈ R^n }.

The generalized Newton method for the (nonsmooth) equation G(x) = 0 has the following form:

    x^{k+1} = x^k − V_k^{−1} G(x^k),   V_k ∈ ∂G(x^k).    (2)

A function G : R^n → R^m is strongly semismooth at x if it is locally Lipschitz and directionally differentiable at x, and for all h → 0 and V ∈ ∂G(x + h) one has

    G(x + h) − G(x) − Vh = O(‖h‖²).
The local convergence of the generalized Newton method for strongly semismooth equations is summarized in the following fundamental result, which is a direct generalization of the classical theorem of quadratic convergence of the Newton method.

Theorem 1.1 (see [16, Theorem 3.2]). Let G : R^n → R^n be strongly semismooth at x* and let G(x*) = 0. Assume that all elements V of the generalized Jacobian ∂G(x*) are nonsingular matrices. Then every sequence generated by the method (2) is q-quadratically convergent to x*, provided that the starting point x^0 is sufficiently close to x*.
In the remaining part of the introduction we review the method of Irvine, Marin, and Smith [11] for shape-preserving cubic spline interpolation and also briefly discuss the contents of this paper. Let {(t_i, y_i)}_{i=1}^{N+2} be given interpolation data and let d_i, i = 1, 2, ..., N, be the associated second divided differences. Throughout the paper we assume that d_i ≠ 0 for all i = 1, ..., N; we will discuss this assumption later. Define the following subsets Ω_i, i = 1, 2, 3, of [a, b]:

    Ω_1 := {[t_i, t_{i+1}] | d_{i−1} > 0 and d_i > 0},
    Ω_2 := {[t_i, t_{i+1}] | d_{i−1} < 0 and d_i < 0},
    Ω_3 := {[t_i, t_{i+1}] | d_{i−1} d_i < 0}.

Also, let

    [t_1, t_2] ⊂ Ω_1 if d_1 > 0,   [t_1, t_2] ⊂ Ω_2 if d_1 < 0,
    [t_{N+1}, t_{N+2}] ⊂ Ω_1 if d_N > 0,   [t_{N+1}, t_{N+2}] ⊂ Ω_2 if d_N < 0.
The problem of shape-preserving interpolation as stated by Micchelli et al. [15] is as follows:

    minimize ‖f''‖²    (3)
    subject to f(t_i) = y_i, i = 1, 2, ..., N+2,
               f''(t) ≥ 0, t ∈ Ω_1,   f''(t) ≤ 0, t ∈ Ω_2,
               f ∈ W^{2,2}[a, b].

Here W^{2,2}[a, b] denotes the Sobolev space of functions with absolutely continuous first derivatives and second derivatives in L²[a, b]. The inequality constraint on the set Ω_1 (resp., Ω_2) means that the interpolant preserves the convexity (resp., concavity) of the data; for more details, see [11, p. 137].
Micchelli et al. [15, Theorem 4.3] showed that the solution of the problem (3) exists and is unique, and its second derivative has the following form:

    f''(t) = ( Σ_{i=1}^N λ_i B_i(t) )_+ X_{Ω_1}(t) − ( Σ_{i=1}^N λ_i B_i(t) )_− X_{Ω_2}(t) + ( Σ_{i=1}^N λ_i B_i(t) ) X_{Ω_3}(t),    (4)

where λ = (λ_1, ..., λ_N)^T is a vector in R^N, (a)_+ = max{0, a}, (a)_− = (−a)_+, and X_Ω is the characteristic function of the set Ω. This result can also be deduced, as shown first in [4], from duality in optimization; specifically, here λ is the vector of the Lagrange multipliers associated with the equality (interpolation) constraints. For more on duality in this context, see the discussion in our previous paper [5]. In short, the optimality condition of the problem dual to (3) has the form of the nonlinear equation

    F(λ) = d,    (5)

where d = (d_1, ..., d_N)^T and the vector function F : R^N → R^N has components

    F_i(λ) = ∫_{[t_i, t_{i+2}] ∩ Ω_1} ( Σ_{l=1}^N λ_l B_l(t) )_+ B_i(t) dt
           − ∫_{[t_i, t_{i+2}] ∩ Ω_2} ( Σ_{l=1}^N λ_l B_l(t) )_− B_i(t) dt
           + ∫_{[t_i, t_{i+2}] ∩ Ω_3} ( Σ_{l=1}^N λ_l B_l(t) ) B_i(t) dt,   i = 1, 2, ..., N.    (6)
Irvine, Marin, and Smith [11] proposed the following method for solving equation (5): Given λ^0 ∈ R^N, λ^{k+1} is a solution of the linear system

    M(λ^k)(λ^{k+1} − λ^k) = −F(λ^k) + d,    (7)

where M(λ) ∈ R^{N×N} is the tridiagonal symmetric matrix with components

    (M(λ))_{ij} = ∫_a^b P(λ, t) B_i(t) B_j(t) dt.

Here

    P(λ, t) := ( Σ_{l=1}^N λ_l B_l(t) )_+^0 X_{Ω_1}(t) + ( Σ_{l=1}^N λ_l B_l(t) )_−^0 X_{Ω_2}(t) + X_{Ω_3}(t),    (8)

where

    (τ)_+^0 := 1 if τ > 0, 0 otherwise,   (τ)_−^0 := (−τ)_+^0.

Since the matrix M resembles the Jacobian of F (which may not exist for some λ, and then M is a kind of "directional Jacobian"; more precisely, as we will see later, an element of the generalized Jacobian), the method (7) has been named the Newton method. It was also observed in [11] that the Newton-type iteration (7) reduces to M(λ^k)λ^{k+1} = d; that is, no evaluations of the function F are needed during the iterations.
In our previous paper [5], we considered the problem of convex spline interpolation, that is, with Ω_1 = [a, b], and proved local superlinear convergence of the corresponding version of the Newton method (7). In a subsequent paper [6], by a more detailed analysis of the geometry of the dual problem, we obtained local quadratic convergence of the Newton method, again for convex interpolation. In this paper, we consider the shape-preserving interpolation problem originally stated in Irvine, Marin, and Smith [11] and prove their conjecture that the method is locally quadratically convergent. As a side result, we observe that the solution of the problem considered is Lipschitz continuous with respect to the interpolation values. In section 3 we give a modification of the method which has global quadratic convergence. Results of extensive numerical experiments are presented in section 4.

As for related results, the conjecture of Irvine, Marin, and Smith [11] was proved in [1] under an additional condition which turned out to be equivalent to smoothness of the function F in (5). Also, a positive answer to this conjecture without additional assumptions was announced in [10], but a proof was never made available to us.
2. Local quadratic convergence. For notational convenience, we introduce a "dummy" node t_0 with corresponding λ_0 = 0 and B_0(t) ≡ 0; then, for every i, the sum Σ_{l=1}^N λ_l B_l(t) restricted to [t_i, t_{i+1}] has the form λ_{i−1} B_{i−1}(t) + λ_i B_i(t). Our first result concerns continuity and differentiability properties of the function F defined in (6).

Lemma 2.1. The function F with components defined in (6) is strongly semismooth.

Proof. The claim is merely an extension of [6, Proposition 2.4], where it is proved that the functions

    ∫_{t_i}^{t_{i+1}} ( Σ_{l=1}^N λ_l B_l(t) )_+ B_i(t) dt,   ∫_{t_{i+1}}^{t_{i+2}} ( Σ_{l=1}^N λ_l B_l(t) )_+ B_i(t) dt,

and

    ∫_{t_i}^{t_{i+2}} ( Σ_{l=1}^N λ_l B_l(t) )_+ B_i(t) dt

are strongly semismooth. Hence the function

    ∫_{[t_i, t_{i+2}] ∩ Ω_1} ( Σ_{l=1}^N λ_l B_l(t) )_+ B_i(t) dt

is strongly semismooth by noticing that

    [t_i, t_{i+2}] ∩ Ω_1 ∈ { [t_i, t_{i+1}], [t_{i+1}, t_{i+2}], [t_i, t_{i+2}], ∅ }.

We note that the function

    ∫_{[t_i, t_{i+2}] ∩ Ω_3} ( Σ_{l=1}^N λ_l B_l(t) ) B_i(t) dt

is linear and therefore is strongly semismooth. Since either [t_i, t_{i+2}] ∩ Ω_1 = ∅ or [t_i, t_{i+2}] ∩ Ω_2 = ∅, F_i is given either by

    F_i(λ) = ∫_{[t_i, t_{i+2}] ∩ Ω_1} ( Σ_{l=1}^N λ_l B_l(t) )_+ B_i(t) dt + ∫_{[t_i, t_{i+2}] ∩ Ω_3} ( Σ_{l=1}^N λ_l B_l(t) ) B_i(t) dt    (9)

or by

    F_i(λ) = −∫_{[t_i, t_{i+2}] ∩ Ω_2} ( Σ_{l=1}^N λ_l B_l(t) )_− B_i(t) dt + ∫_{[t_i, t_{i+2}] ∩ Ω_3} ( Σ_{l=1}^N λ_l B_l(t) ) B_i(t) dt.    (10)

A composite of strongly semismooth functions is strongly semismooth [8, Theorem 19]. Hence the function F_i given by (9) is strongly semismooth. If F_i is given by (10), then

    F_i(λ) = −∫_{[t_i, t_{i+2}] ∩ Ω_2} ( −Σ_{l=1}^N λ_l B_l(t) )_+ B_i(t) dt + ∫_{[t_i, t_{i+2}] ∩ Ω_3} ( Σ_{l=1}^N λ_l B_l(t) ) B_i(t) dt.

Again from [8, Theorem 19], the first part of F_i is strongly semismooth, which in turn implies the strong semismoothness of F_i. We conclude that F is strongly semismooth since each component of F is strongly semismooth.

Citations
Journal ArticleDOI
TL;DR: A realistic generalization of the Markov–Dubins problem, which is concerned with finding the shortest planar curve of constrained curvature joining two points with prescribed tangents, is the requirement that the curve passes through a number of prescribed intermediate points/nodes.
Abstract: A realistic generalization of the Markov–Dubins problem, which is concerned with finding the shortest planar curve of constrained curvature joining two points with prescribed tangents, is the requirement that the curve passes through a number of prescribed intermediate points/nodes. We refer to this generalization as the Markov–Dubins interpolation problem. We formulate this interpolation problem as an optimal control problem and obtain results about the structure of its solution using optimal control theory. The Markov–Dubins interpolants consist of a concatenation of circular (C) and straight-line (S) segments. Abnormal interpolating curves are shown to exist and characterized; however, if the interpolating curve contains a straight-line segment then it cannot be abnormal. We derive results about the stationarity, or criticality, of the feasible solutions of certain structure. In particular, any feasible interpolant with arc types of CSC in each stage is proved to be stationary, i.e., critical. We propose a numerical method for computing Markov–Dubins interpolating paths. We illustrate the theory and the numerical approach by four qualitatively different examples.

10 citations

01 Jan 2016
TL;DR: A nonnegative derivative constrained B-spline estimator is proposed, and it is demonstrated that this estimator achieves a critical uniform Lipschitz property, which is exploited to establish asymptotic bounds on the B- Splines estimator bias, stochastic error, and risk in the supremum norm.
Abstract: Title of dissertation: Constrained Estimation and Approximation Using Control, Optimization, and Spline Theory. Teresa M. Lebair, Doctor of Philosophy, 2016. Dissertation directed by: Professor Jinglai Shen, Department of Mathematics and Statistics. There has been an increasing interest in shape constrained estimation and approximation in the fields of applied mathematics and statistics. Applications from various areas of research such as biology, engineering, and economics have fueled this soaring attention. Due to the natural constrained optimization and optimal control formulations achieved by inequality constrained estimation problems, optimization and optimal control play an invaluable part in resolving computational and statistical performance matters in shape constrained estimation. Additionally, the favorable statistical, numerical, and analytical properties of spline functions grant splines an influential place in resolving these issues. Hence, the purpose of this research is to develop numerical and analytical techniques for general shape constrained estimation problems using optimization, optimal control, spline theory, and statistical tools. A number of topics in shape constrained estimation are examined. We first consider the computation and numerical analysis of smoothing splines subject to general dynamics and control constraints. Optimal control formulations and nonsmooth algorithms for computing such splines are established; we then verify the convergence of these algorithms. Second, we consider the asymptotic analysis of the nonparametric estimation of functions subject to general nonnegative derivative constraints in the supremum norm. A nonnegative derivative constrained B-spline estimator is proposed, and we demonstrate that this estimator achieves a critical uniform Lipschitz property. This property is then exploited to establish asymptotic bounds on the B-spline estimator bias, stochastic error, and risk in the supremum norm. Minimax lower bounds are then established for a variety of nonnegative derivative constrained function classes, using the same norm. For the first, second, and third order derivative constraints, these asymptotic lower bounds match the upper bounds on the constrained B-spline estimator risk, demonstrating that the nonnegative derivative constrained B-spline estimator performs optimally over suitable constrained Hölder classes, with respect to the supremum norm.

5 citations


Cites background from "A Newton Method for Shape-Preservin..."

  • ...inite program that attains efficient algorithms [19, 22]....


  • ...For instance, in [19, 20], the authors consider a cubic smoothing spline interpolation problem, where the nonnegative constrained control is given by the second derivative of the smoothing spline....


Journal ArticleDOI
TL;DR: In this article, a smoothing spline approximation under inequality constraints expressed in terms of the spline coefficients is proposed, and a simulated annealing algorithm is used to solve the optimization problem.

4 citations

Journal ArticleDOI
TL;DR: A modified convex interpolation method (with and without smoothing) is developed to approximate the option price function while explicitly incorporating the shape restrictions, optimal for minimizing the distance between the implied risk-neutral density function and a prior density function.
Abstract: The interpolation of the market implied volatility function from several observations of option prices is often required in financial practice and empirical study. However, the results from existing interpolation methods may not satisfy the property that the European call option price function is monotonically decreasing and convex with respect to the strike price. In this paper, a modified convex interpolation method (with and without smoothing) is developed to approximate the option price function while explicitly incorporating the shape restrictions. The method is optimal for minimizing the distance between the implied risk-neutral density function and a prior density function, which allows us to benefit from nonparametric methodology and empirical experience. Numerical performance shows that the method is accurate and robust. Whether or not the sample satisfies the convexity and decreasing constraints, the method always works.

4 citations

Journal ArticleDOI
TL;DR: This work studies regularity and well-posedness of the dual program, two important issues that have not yet been well addressed in the literature, and characterizes the case when the generalized Hessian of the objective function is positive definite.
Abstract: An efficient approach to computing the convex best C¹-spline interpolant to a given set of data is to solve an associated dual program by standard numerical methods (e.g., Newton's method). We study regularity and well-posedness of the dual program: two important issues that have not yet been well addressed in the literature. Our regularity results characterize the case when the generalized Hessian of the objective function is positive definite. We also give sufficient conditions for the coerciveness of the objective function. These results together specify conditions when the dual program is well-posed and hence justify why Newton's method is likely to be successful in practice. Examples are given to illustrate the obtained results.

4 citations


Cites background from "A Newton Method for Shape-Preservin..."

  • ...The proof technique for the following result is motivated by [3, 4] where a more difficult problem than (2) is considered (i....


References
More filters
Book
01 Jan 1983
TL;DR: Clarke's monograph develops the tools of nonsmooth analysis, including generalized gradients, differential inclusions, the calculus of variations, optimal control, and mathematical programming.
Abstract: 1. Introduction and Preview 2. Generalized Gradients 3. Differential Inclusions 4. The Calculus of Variations 5. Optimal Control 6. Mathematical Programming 7. Topics in Analysis.

9,498 citations


"A Newton Method for Shape-Preservin..." refers background in this paper

  • ...As a side result, from Lemma 2.3 and the Clarke inverse function theorem [2, Theorem 7.1.1], we obtain that the solution of the problem (3) is a Lipschitz continuous function of the interpolation values y_i....


  • ...For a locally Lipschitz continuous function G : R^n → R^n, the generalized Jacobian ∂G(x) of G at x in the sense of Clarke [2] is the convex hull of all limits obtained along sequences on which G is differentiable: ∂G(x) = co { lim_{x^j → x} ∇G(x^j) | G is differentiable at x^j ∈ R^n }....



Journal ArticleDOI
TL;DR: It is shown that the gradient function of the augmented Lagrangian for C²-nonlinear programming is semismooth, and the extended Newton's method can be used in the augmented Lagrangian method for solving nonlinear programs.
Abstract: Newton's method for solving a nonlinear equation of several variables is extended to a nonsmooth case by using the generalized Jacobian instead of the derivative. This extension includes the B-derivative version of Newton's method as a special case. Convergence theorems are proved under the condition of semismoothness. It is shown that the gradient function of the augmented Lagrangian for C²-nonlinear programming is semismooth. Thus, the extended Newton's method can be used in the augmented Lagrangian method for solving nonlinear programs.

1,464 citations

Journal ArticleDOI
TL;DR: If F is monotone in a neighbourhood of x, it is proved that 0 ∈ ∂θ(x) is necessary and sufficient for x to be a solution of CP(F), and the generalized Newton method is shown to be locally well defined and superlinearly convergent with the order of 1 + p.
Abstract: The paper deals with complementarity problems CP(F), where the underlying function F is assumed to be locally Lipschitzian. Based on a special equivalent reformulation of CP(F) as a system of equations Φ(x) = 0 or as the problem of minimizing the merit function θ = ½‖Φ‖₂², we extend results which hold for sufficiently smooth functions F to the nonsmooth case. In particular, if F is monotone in a neighbourhood of x, it is proved that 0 ∈ ∂θ(x) is necessary and sufficient for x to be a solution of CP(F). Moreover, for monotone functions F, a simple derivative-free algorithm that reduces θ is shown to possess global convergence properties. Finally, the local behaviour of a generalized Newton method is analyzed. To this end, the result by Mifflin that the composition of semismooth functions is again semismooth is extended to p-order semismooth functions. Under a suitable regularity condition and if F is p-order semismooth, the generalized Newton method is shown to be locally well defined and superlinearly convergent with the order of 1 + p.

296 citations

Journal ArticleDOI
TL;DR: In this article, the authors extended Newton and quasi-Newton methods to systems of PC¹ equations and established the quadratic convergence property of the extended Newton method and the Q-superlinear convergence property of the extended quasi-Newton method.
Abstract: This paper extends Newton and quasi-Newton methods to systems of PC¹ equations and establishes the quadratic convergence property of the extended Newton method and the Q-superlinear convergence property of the extended quasi-Newton method.

136 citations


"A Newton Method for Shape-Preservin..." refers methods in this paper

  • ...Furthermore, in this case quadratic convergence of the Newton method would follow directly from [13]....


Journal ArticleDOI
TL;DR: In this article, inexact Newton methods for solving systems of nonsmooth equations are investigated, and a globally convergent inexact method based on iteration functions is introduced for locally Lipschitz functions.

130 citations


"A Newton Method for Shape-Preservin..." refers methods in this paper

  • ...For more discussion of the inexact Newton method, we refer to [3, 7, 14]....
