Topic

Local convergence

About: Local convergence is a research topic. Over its lifetime, 5,135 publications have been published on this topic, receiving 126,547 citations.


Papers
Book
01 Jun 1970
TL;DR: In this book, the authors develop the theory of iterative methods for nonlinear equations in several variables, covering rates of convergence, one-step and multistep methods, and semilocal and global convergence, including convergence under partial ordering and convergence of minimization methods.
Abstract: Preface to the Classics Edition. Preface. Acknowledgments. Glossary of Symbols. Introduction.
Part I. Background Material: 1. Sample Problems. 2. Linear Algebra. 3. Analysis.
Part II. Nonconstructive Existence Theorems: 4. Gradient Mappings and Minimization. 5. Contractions and the Continuation Property. 6. The Degree of a Mapping.
Part III. Iterative Methods: 7. General Iterative Methods. 8. Minimization Methods.
Part IV. Local Convergence: 9. Rates of Convergence - General. 10. One-Step Stationary Methods. 11. Multistep Methods and Additional One-Step Methods.
Part V. Semilocal and Global Convergence: 12. Contractions and Nonlinear Majorants. 13. Convergence under Partial Ordering. 14. Convergence of Minimization Methods.
An Annotated List of Basic Reference Books. Bibliography. Author Index. Subject Index.

7,669 citations
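Since Part IV of the book above opens with "Rates of Convergence - General", it may help to recall the standard rate definitions used throughout this topic. The following is a brief reminder in standard notation, not text quoted from the book. A sequence x_k converging to x* is said to converge

\[
\text{Q-linearly if }\ \|x_{k+1}-x^*\| \le c\,\|x_k-x^*\|\ \text{ for some } c\in(0,1) \text{ and all sufficiently large } k,
\]
\[
\text{Q-superlinearly if }\ \lim_{k\to\infty}\frac{\|x_{k+1}-x^*\|}{\|x_k-x^*\|}=0,
\qquad
\text{Q-quadratically if }\ \limsup_{k\to\infty}\frac{\|x_{k+1}-x^*\|}{\|x_k-x^*\|^{2}}<\infty .
\]

In a local convergence statement these rates are guaranteed only for starting points sufficiently close to x*; that restriction is what distinguishes local from semilocal and global convergence results.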

Book
01 Mar 1983
TL;DR: This book treats Newton's method for nonlinear equations and unconstrained minimization, its local convergence theory, globally convergent modifications, secant (quasi-Newton) methods, nonlinear least-squares problems, and methods for problems with special structure.
Abstract: Preface.
1. Introduction: Problems to be considered. Characteristics of 'real-world' problems. Finite-precision arithmetic and measurement of error. Exercises.
2. Nonlinear Problems in One Variable: What is not possible. Newton's method for solving one equation in one unknown. Convergence of sequences of real numbers. Convergence of Newton's method. Globally convergent methods for solving one equation in one unknown. Methods when derivatives are unavailable. Minimization of a function of one variable. Exercises.
3. Numerical Linear Algebra Background: Vector and matrix norms and orthogonality. Solving systems of linear equations - matrix factorizations. Errors in solving linear systems. Updating matrix factorizations. Eigenvalues and positive definiteness. Linear least squares. Exercises.
4. Multivariable Calculus Background: Derivatives and multivariable models. Multivariable finite-difference derivatives. Necessary and sufficient conditions for unconstrained minimization. Exercises.
5. Newton's Method for Nonlinear Equations and Unconstrained Minimization: Newton's method for systems of nonlinear equations. Local convergence of Newton's method. The Kantorovich and contractive mapping theorems. Finite-difference derivative methods for systems of nonlinear equations. Newton's method for unconstrained minimization. Finite-difference derivative methods for unconstrained minimization. Exercises.
6. Globally Convergent Modifications of Newton's Method: The quasi-Newton framework. Descent directions. Line searches. The model-trust region approach. Global methods for systems of nonlinear equations. Exercises.
7. Stopping, Scaling, and Testing: Scaling. Stopping criteria. Testing. Exercises.
8. Secant Methods for Systems of Nonlinear Equations: Broyden's method. Local convergence analysis of Broyden's method. Implementation of quasi-Newton algorithms using Broyden's update. Other secant updates for nonlinear equations. Exercises.
9. Secant Methods for Unconstrained Minimization: The symmetric secant update of Powell. Symmetric positive definite secant updates. Local convergence of positive definite secant methods. Implementation of quasi-Newton algorithms using the positive definite secant update. Another convergence result for the positive definite secant method. Other secant updates for unconstrained minimization. Exercises.
10. Nonlinear Least Squares: The nonlinear least-squares problem. Gauss-Newton-type methods. Full Newton-type methods. Other considerations in solving nonlinear least-squares problems. Exercises.
11. Methods for Problems with Special Structure: The sparse finite-difference Newton method. Sparse secant methods. Deriving least-change secant updates. Analyzing least-change secant methods. Exercises.
Appendix A. A Modular System of Algorithms for Unconstrained Minimization and Nonlinear Equations (by Robert Schnabel). Appendix B. Test Problems (by Robert Schnabel). References. Author Index. Subject Index.

6,217 citations
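As a concrete illustration of the "local convergence of Newton's method" material listed in the table of contents above, here is a minimal sketch of Newton's method for a system F(x) = 0. It is not code from the book; the function name and the toy test problem are illustrative. Started close enough to a root at which the Jacobian is nonsingular, the error is roughly squared at each step; started far away, it may diverge, which is what the book's globally convergent modifications address.

    import numpy as np

    def newton_system(F, J, x0, tol=1e-12, max_iter=20):
        """Minimal Newton iteration for F(x) = 0 (illustrative sketch only).

        F : callable returning the residual vector at x
        J : callable returning the Jacobian matrix at x
        Quadratic convergence holds only if x0 is close enough to a root
        with nonsingular Jacobian -- the 'local' in local convergence.
        """
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            Fx = F(x)
            if np.linalg.norm(Fx) < tol:
                break
            # Solve J(x) s = -F(x) and take the full Newton step.
            s = np.linalg.solve(J(x), -Fx)
            x = x + s
        return x

    # Toy example: intersect the circle x^2 + y^2 = 4 with the parabola y = x^2.
    F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[1] - v[0]**2])
    J = lambda v: np.array([[2*v[0], 2*v[1]], [-2*v[0], 1.0]])
    root = newton_system(F, J, x0=[1.0, 1.5])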

Book
01 Jan 1987
TL;DR: This book presents iterative methods for linear equations (stationary methods, conjugate gradient, GMRES) and for nonlinear equations (fixed point iteration, Newton's method, inexact Newton methods, Broyden's method, and global convergence), with accompanying software.
Abstract: Preface. How to Get the Software.
Part I. Linear Equations: 1. Basic Concepts and Stationary Iterative Methods. 2. Conjugate Gradient Iteration. 3. GMRES Iteration.
Part II. Nonlinear Equations: 4. Basic Concepts and Fixed Point Iteration. 5. Newton's Method. 6. Inexact Newton Methods. 7. Broyden's Method. 8. Global Convergence.
Bibliography. Index.

2,531 citations
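The chapter on Broyden's method listed above concerns a quasi-Newton iteration whose local convergence is superlinear rather than quadratic. The sketch below shows the basic "good" Broyden update; it is an illustrative sketch, not code from the book, and the function name and finite-difference initialization of B0 are choices made here for the example.

    import numpy as np

    def broyden(F, x0, B0=None, tol=1e-10, max_iter=50):
        """Broyden's 'good' secant update for F(x) = 0 (illustrative sketch).

        Locally superlinearly convergent when x0 and B0 are close enough
        to a root and to its Jacobian, respectively.
        """
        x = np.asarray(x0, dtype=float)
        B = np.eye(x.size) if B0 is None else np.asarray(B0, dtype=float)
        Fx = F(x)
        for _ in range(max_iter):
            if np.linalg.norm(Fx) < tol:
                break
            s = np.linalg.solve(B, -Fx)        # quasi-Newton step
            x_new = x + s
            F_new = F(x_new)
            y = F_new - Fx
            # Rank-one secant update enforcing B_new s = y.
            B = B + np.outer(y - B @ s, s) / (s @ s)
            x, Fx = x_new, F_new
        return x

    # Same circle/parabola system as before; a crude finite-difference Jacobian
    # at x0 keeps the iteration inside the local convergence regime.
    F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[1] - v[0]**2])
    x0 = np.array([1.2, 1.5])
    h = 1e-6
    B0 = np.column_stack([(F(x0 + h*e) - F(x0)) / h for e in np.eye(2)])
    root = broyden(F, x0, B0)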

Book
01 Jan 1971
TL;DR: The ASM preconditioner B is characterized by three parameters: C₀, ρ(E), and ω, which enter via assumptions on the subspaces V_i and the bilinear forms a_i(·, ·) (the approximate local problems).
Abstract: theory for ASM. In the following we characterize the ASM preconditioner B by three parameters: C₀, ρ(E), and ω, which enter via assumptions on the subspaces V_i and the bilinear forms a_i(·, ·) (the approximate local problems).

Assumption 14.6 (stable decomposition). There exists a constant C₀ > 0 such that every u ∈ V admits a decomposition u = ∑_{i=0}^{N} u_i with u_i ∈ V_i such that
∑_{i=0}^{N} a_i(u_i, u_i) ≤ C₀² a(u, u).  (14.29)

Assumption 14.7 (strengthened Cauchy-Schwarz inequality). For i, j = 1 … N, let E_{i,j} = E_{j,i} ∈ [0, 1] be defined by the inequalities
|a(u_i, u_j)| ≤ E_{i,j} a(u_i, u_i)^{1/2} a(u_j, u_j)^{1/2}  for all u_i ∈ V_i, u_j ∈ V_j.  (14.30)
By ρ(E) we denote the spectral radius of the symmetric matrix E = (E_{i,j}) ∈ R^{N×N}. The particular assumption is that we have a nontrivial bound for ρ(E) at our disposal. Note that due to E_{i,j} ≤ 1 (Cauchy-Schwarz inequality), the trivial bound ρ(E) = ‖E‖₂ ≤ √(‖E‖₁ ‖E‖_∞) ≤ N always holds; for particular Schwarz methods one usually aims at bounds for ρ(E) which are independent of N.

Assumption 14.8 (local stability). There exists ω > 0 such that for all i = 1 … N:
a(u_i, u_i) ≤ ω a_i(u_i, u_i)  for all u_i ∈ V_i.  (14.31)

Remark 14.9. The space V₀ is not included in the definition of E; as we will see below, this space is allowed to play a special role. E_{i,j} = 0 implies that the spaces V_i and V_j are orthogonal (in the a(·, ·) inner product). We will see below that a small ρ(E) is desirable. We will also see below that a small C₀ is desirable. The parameter ω represents a one-sided measure of the approximation properties of the approximate solvers a_i. If the local solver is of (exact) Galerkin type, i.e., a_i(u, v) ≡ a(u, v) for u, v ∈ V_i, then ω = 1. However, this does not necessarily imply that Assumptions 14.6 and 14.7 are satisfied.

Lemma 14.10 (P. L. Lions). Let P_ASM be defined by (14.23) resp. (14.24). Then, under Assumption 14.6:
(i) P_ASM : V → V is a bijection, and
a(u, u) ≤ C₀² a(P_ASM u, u)  for all u ∈ V.  (14.32)
(ii) Characterization of b(u, u):
b(u, u) = a(P_ASM⁻¹ u, u) = min { ∑_{i=0}^{N} a_i(u_i, u_i) : u = ∑_{i=0}^{N} u_i, u_i ∈ V_i }.  (14.33)

Proof. We make use of the fundamental identity (14.27) and Cauchy-Schwarz inequalities.
Proof of (i): Let u ∈ V and u = ∑_i u_i be a decomposition of the type guaranteed by Assumption 14.6. Then
a(u, u) = a(u, ∑_i u_i) = ∑_i a(u, u_i) = ∑_i a_i(P_i u, u_i)
≤ ∑_i (a_i(P_i u, P_i u) a_i(u_i, u_i))^{1/2} = ∑_i (a(u, P_i u) a_i(u_i, u_i))^{1/2}
≤ (∑_i a(u, P_i u))^{1/2} (∑_i a_i(u_i, u_i))^{1/2} = (a(u, P_ASM u))^{1/2} (∑_i a_i(u_i, u_i))^{1/2}
≤ (a(u, P_ASM u))^{1/2} C₀ (a(u, u))^{1/2}.
This implies the estimate (14.32). In particular, it follows that P_ASM is injective, because with (14.32), P_ASM u = 0 implies a(u, u) = 0, hence u = 0. Due to finite dimension, we conclude that P_ASM is bijective.
Proof of (ii): We first show that the minimum on the right-hand side of (14.33) cannot be smaller than a(P_ASM⁻¹ u, u). To this end, we consider an arbitrary decomposition u = ∑_i u_i with u_i ∈ V_i and estimate
a(P_ASM⁻¹ u, u) = ∑_i a(P_ASM⁻¹ u, u_i) = ∑_i a_i(P_i P_ASM⁻¹ u, u_i)
≤ (∑_i a_i(P_i P_ASM⁻¹ u, P_i P_ASM⁻¹ u))^{1/2} (∑_i a_i(u_i, u_i))^{1/2}
= (∑_i a(P_ASM⁻¹ u, P_i P_ASM⁻¹ u))^{1/2} (∑_i a_i(u_i, u_i))^{1/2}
= (a(P_ASM⁻¹ u, u))^{1/2} (∑_i a_i(u_i, u_i))^{1/2}.
In order to see that a(P_ASM⁻¹ u, u) is indeed the minimum of the right-hand side of (14.33), we define u_i = P_i P_ASM⁻¹ u. Obviously, u_i ∈ V_i and ∑_i u_i = u. Furthermore,
∑_i a_i(u_i, u_i) = ∑_i a_i(P_i P_ASM⁻¹ u, P_i P_ASM⁻¹ u) = ∑_i a(P_ASM⁻¹ u, P_i P_ASM⁻¹ u) = a(P_ASM⁻¹ u, ∑_i P_i P_ASM⁻¹ u) = a(P_ASM⁻¹ u, u).
This concludes the proof.

The matrix P′_ASM = B⁻¹A from (14.23) is the matrix representation of the operator P_ASM. Since P_ASM is self-adjoint in the A-inner product (see Lemma 14.2), we can estimate the smallest and the largest eigenvalue of B⁻¹A by
λ_min(B⁻¹A) = inf_{0 ≠ u ∈ V} a(P_ASM u, u) / a(u, u),  λ_max(B⁻¹A) = sup_{0 ≠ u ∈ V} a(P_ASM u, u) / a(u, u).  (14.34)
Lemma 14.10 (i), in conjunction with Assumption 14.6, readily yields λ_min(B⁻¹A) ≥ 1/C₀². An upper bound for λ_max(B⁻¹A) is obtained with the help of the following lemma.

Lemma 14.11. Under Assumptions 14.7 and 14.8 we have
‖P_i‖_A ≤ ω,  i = 0 … N,  (14.35)
a(P_ASM u, u) ≤ ω (1 + ρ(E)) a(u, u)  for all u ∈ V.  (14.36)
Proof. Again we make use of identity (14.27). We start with the proof of (14.35): from Assumption 14.8, (14.31), we infer for all u ∈ V
‖P_i u‖_A² = a(P_i u, P_i u) ≤ ω a_i(P_i u, P_i u) = ω a(u, P_i u) ≤ ω ‖u‖_A ‖P_i u‖_A,
which implies (14.35). For the proof of (14.36), we observe that the space V₀ is assumed to play a special role. We define …

2,527 citations
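Read together, the two lemmas in the excerpt above bound both ends of the spectrum of the preconditioned operator. The following restatement in the excerpt's notation is added here for orientation and is not additional text from the source:

\[
\lambda_{\min}(B^{-1}A) \;\ge\; \frac{1}{C_0^{2}},
\qquad
\lambda_{\max}(B^{-1}A) \;\le\; \omega\,\bigl(1+\rho(E)\bigr),
\qquad\text{hence}\qquad
\kappa(B^{-1}A) \;\le\; C_0^{2}\,\omega\,\bigl(1+\rho(E)\bigr).
\]

This is why one aims for C₀, ω, and ρ(E) that do not grow with the number of subspaces N.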

Journal ArticleDOI
TL;DR: The algorithm is presented as a novel optimization method that combines desirable characteristics of both classical optimization and learning-based algorithms, and mathematical results on conditions for uniqueness of sparse solutions are also given.
Abstract: We present a nonparametric algorithm for finding localized energy solutions from limited data. The problem we address is underdetermined, and no prior knowledge of the shape of the region on which the solution is nonzero is assumed. Termed the FOcal Underdetermined System Solver (FOCUSS), the algorithm has two integral parts: a low-resolution initial estimate of the real signal and the iteration process that refines the initial estimate to the final localized energy solution. The iterations are based on weighted norm minimization of the dependent variable with the weights being a function of the preceding iterative solutions. The algorithm is presented as a general estimation tool usable across different applications. A detailed analysis laying the theoretical foundation for the algorithm is given and includes proofs of global and local convergence and a derivation of the rate of convergence. A view of the algorithm as a novel optimization method which combines desirable characteristics of both classical optimization and learning-based algorithms is provided. Mathematical results on conditions for uniqueness of sparse solutions are also given. Applications of the algorithm are illustrated on problems in direction-of-arrival (DOA) estimation and neuromagnetic imaging.

1,864 citations
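The abstract above describes the core of FOCUSS as repeated weighted minimum-norm solves, with the weights built from the preceding iterate, started from a low-resolution initial estimate. The sketch below shows one common form of that re-weighted iteration under the assumption W_k = diag(x_{k-1}); it is a minimal illustration, not the authors' reference implementation, and the function name and toy data are chosen here for the example.

    import numpy as np

    def focuss_like(A, b, x0, n_iter=30, eps=1e-12):
        """Re-weighted minimum-norm iteration in the spirit of FOCUSS (sketch).

        At each step the weights come from the previous solution, so entries
        that shrink are driven toward zero and the iterate is pulled toward a
        localized (sparse) solution. A is m x n with m < n (underdetermined).
        """
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iter):
            W = np.diag(x)                     # assumed weighting W_k = diag(x_{k-1})
            q = np.linalg.pinv(A @ W) @ b      # minimum-norm solution of (A W) q = b
            x_new = W @ q
            if np.linalg.norm(x_new - x) < eps:
                x = x_new
                break
            x = x_new
        return x

    # Tiny underdetermined example: 3 measurements, 8 unknowns, 1 active source.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 8))
    x_true = np.zeros(8); x_true[2] = 1.0
    b = A @ x_true
    x0 = np.linalg.pinv(A) @ b                 # low-resolution initial estimate
    x_hat = focuss_like(A, b, x0)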


Network Information
Related Topics (5)
Partial differential equation: 70.8K papers, 1.6M citations, 86% related
Iterative method: 48.8K papers, 1.2M citations, 86% related
Differential equation: 88K papers, 2M citations, 85% related
Linear system: 59.5K papers, 1.4M citations, 84% related
Optimal control: 68K papers, 1.2M citations, 84% related
Performance
Metrics
No. of papers in the topic in previous years
Year | Papers
2023 | 40
2022 | 90
2021 | 160
2020 | 158
2019 | 153
2018 | 183