Journal ArticleDOI

Algorithmes de relaxation chaotique à retards

01 Jan 1975-Vol. 9, pp 55-82
TL;DR: Any commercial use or systematic printing constitutes a criminal offence; any copy or printout of this file must include this copyright notice.
Abstract: © AFCET, 1975, all rights reserved. Access to the archives of the journal « Revue française d’automatique, informatique, recherche opérationnelle. Analyse numérique » implies agreement with its general terms of use (http://www.numdam.org/legal.php). Any commercial use or systematic printing constitutes a criminal offence. Any copy or printout of this file must include this copyright notice.


Citations
Journal ArticleDOI
TL;DR: In this article, the main research developments in the area of iterative methods for solving linear systems during the 20th century are described, and the most significant contributions of the past century are compared with one another.

467 citations


Cites background from "Algorithmes de relaxation chaotique..."

  • ...A few of the main contributions were by Chazan and Miranker [41], Miellou [128], Robert [147] and Robert et al. [148]....


Journal ArticleDOI
TL;DR: This work considers iterative algorithms of the form x := f(x), executed by a parallel or distributed computing system, and studies synchronous executions of such iterations, their communication requirements, and issues related to processor synchronization.

169 citations


Cites background from "Algorithmes de relaxation chaotique..."

  • ...Miellou (1975a)....

  • ...Miellou (1975b) and Bertsekas (1982) for monotone iterations, and Bertsekas (1983) for general iterations....
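As a concrete illustration of the iteration x := f(x) discussed in the abstract above, here is a minimal sketch (an assumption-laden example, not taken from the cited papers) of a chaotic, delayed relaxation applied to a small diagonally dominant linear system, in which each component update may read stale values of the other components.

```python
import numpy as np

# Illustrative sketch only: "chaotic relaxation with delays" applied to A x = b.
# Each component update may read stale (delayed) values of the other components.
# The matrix, the delay bound and the cyclic update order are assumptions for the demo.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])

rng = np.random.default_rng(0)
max_delay = 3                  # bound on how stale a read value may be
history = [np.zeros(3)]        # past iterates, so that delayed reads are possible

for k in range(300):
    i = k % 3                  # which component is relaxed at this step (cyclic here)
    delays = rng.integers(0, min(max_delay, len(history)), size=3)
    stale = np.array([history[-1 - delays[j]][j] for j in range(3)])
    x_new = history[-1].copy()
    # Jacobi-type update of component i from the (possibly) delayed data
    x_new[i] = (b[i] - A[i, :] @ stale + A[i, i] * stale[i]) / A[i, i]
    history.append(x_new)

print(history[-1])             # close to np.linalg.solve(A, b) despite the delays
```

Bounded delays together with a matrix whose Jacobi iteration matrix has spectral radius of its absolute value below one (as holds for a strictly diagonally dominant matrix like the one assumed here) are, roughly, the classical conditions under which such chaotic iterations converge.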

Dissertation
21 Mar 1978
TL;DR: Fixed-point theorems in complete lattices, a study of the behaviour of a discrete dynamical system, exact semantic analysis of programs and its applications, and constructive methods for approximating fixed points of monotone operators on a complete lattice.
Abstract: Fixed-point theorems in complete lattices; study of the behaviour of a discrete dynamical system; exact semantic analysis of programs and applications. Constructive methods for approximating fixed points of monotone operators on a complete lattice. Approximate semantic analysis of programs and applications. Semantic analysis of recursive procedures.

144 citations
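To illustrate the constructive approximation of fixed points of monotone operators on a complete lattice mentioned above, here is a minimal Kleene-style iteration on a small, assumed finite lattice (subsets of {0, 1, 2, 3} ordered by inclusion); the operator F is a hypothetical example, not one from the thesis.

```python
# Illustrative Kleene-style iteration: computing the least fixed point of a monotone
# operator on a small, assumed complete lattice (subsets of {0, 1, 2, 3} ordered by
# inclusion).  Both the lattice and the operator F are hypothetical examples.
UNIVERSE = frozenset({0, 1, 2, 3})

def F(s: frozenset) -> frozenset:
    """A monotone operator: always contains 0, and adds n + 1 whenever n is present."""
    out = {0}
    out.update(n + 1 for n in s if n + 1 in UNIVERSE)
    return frozenset(out)

# Ascending chain bottom, F(bottom), F(F(bottom)), ... stabilizes at the least fixed point.
x = frozenset()
while True:
    nxt = F(x)
    if nxt == x:
        break
    x = nxt

print(sorted(x))   # [0, 1, 2, 3]
```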

Journal ArticleDOI
TL;DR: In this article, the authors present convergence results for asynchronous algorithms based essentially on the classical notion of contraction, generalizing all convergence results for such algorithms that rest on the vectorial-norm hypothesis.
Abstract: In this paper we present convergence results for asynchronous algorithms based essentially on the classical notion of contraction. We generalize, in particular, all convergence results for those algorithms which are based on the vectorial-norm contraction hypothesis, in widespread use recently. Moreover, certain problems for which the vectorial-norm hypothesis can be difficult or even impossible to verify can nonetheless be tackled within the scope of the classical contraction that we adopt.

144 citations


Cites background from "Algorithmes de relaxation chaotique..."

  • ...3.8. Proposition (cf. Miellou, J.C. [13]). Let G be a mapping from D(G) ⊂ E into E. If u, v ∈ D(G) satisfy (37) and (38), then for every β ∈ ]ρ(T), 1[ there exist strictly positive numbers γ₁, γ₂, … such that:...

  • ...[12, 13]. Moreover, the T-contraction hypothesis may prove difficult, or even impossible, to verify for certain problems that can nevertheless be handled within the framework of classical contraction....

  • ...[12, 13]. We will also give a local convergence result for asynchronous algorithms (Theorem 3.11) generalizing the classical theorem of Ostrowski (cf....
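For orientation, the classical contraction hypothesis that these excerpts refer to can be written, in generic notation not taken from the paper itself, as follows.

```latex
% Generic classical contraction on a domain D (illustrative notation, not the paper's):
\exists\, \alpha \in [0,1) : \qquad
\lVert F(x) - F(y) \rVert \;\le\; \alpha \,\lVert x - y \rVert
\qquad \text{for all } x, y \in D .
```

Hypotheses of this type, combined with the usual conditions on the delays and on how often each component is updated, are what the convergence results summarized above rest on, in contrast with the vectorial-norm (T-contraction) hypothesis mentioned in the excerpts.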

Journal ArticleDOI
TL;DR: In this article, the authors study the convergence of iterative methods for algebraic linear systems of equations and present conditions on the splittings corresponding to the iterative method to guarantee convergence for any number of inner iterations.
Abstract: Classical iterative methods for the solution of algebraic linear systems of equations proceed by solving at each step a simpler system of equations. When this system is itself solved by an (inner) iterative method, the global method is called a two-stage iterative method. If this process is repeated, then the resulting method is called a nested iterative method. We study the convergence of such methods and present conditions on the splittings corresponding to the iterative methods to guarantee convergence for any number of inner iterations. We also show that under the conditions presented, the spectral radii of the global iteration matrices decrease when the number of inner iterations increases. The proof uses a new comparison theorem for weak regular splittings. We extend our results to larger classes of iterative methods, which include iterative block Gauss-Seidel. We develop a theory for the concatenation of such iterative methods. This concatenation appears when different numbers of inner iterations are performed at each outer step. We also analyze block methods, where different numbers of inner iterations are performed for different diagonal blocks.

107 citations
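As a rough sketch of the two-stage idea described above, the example below runs an outer iteration based on a splitting A = M - N and solves each inner system with M only approximately, using a fixed number of inner Jacobi sweeps; the matrix, the splittings and the iteration counts are assumptions chosen for illustration, not the paper's construction.

```python
import numpy as np

# Two-stage iteration for A x = b: outer splitting A = M - N; at each outer step the
# inner system M y = N x + b is solved only approximately by a few Jacobi sweeps on M.
# All concrete choices below are illustrative.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 1.0])

M = np.tril(A)            # outer splitting: lower-triangular part (Gauss-Seidel-like)
N = M - A
D = np.diag(np.diag(M))   # inner splitting of M used by the Jacobi sweeps

def inner_solve(rhs, sweeps, y0):
    """Approximate solution of M y = rhs by `sweeps` Jacobi iterations on M."""
    y = y0.copy()
    for _ in range(sweeps):
        y = np.linalg.solve(D, rhs - (M - D) @ y)
    return y

x = np.zeros(3)
for _ in range(50):                        # outer iterations
    x = inner_solve(N @ x + b, sweeps=2, y0=x)

print(x)                                   # close to np.linalg.solve(A, b)
```

The conditions studied in the paper concern exactly this situation: which splittings guarantee convergence for any number of inner sweeps, and how the spectral radius of the global iteration matrix behaves as that number grows.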

References
Journal ArticleDOI

5,562 citations


"Algorithmes de relaxation chaotique..." refers background in this paper

  • ...On this point the basic references appear to be Rosenfeld J. [21], Chazan D. - Miranker W. [5], and Donnelly J. D. P. [6], whose studies deal with approximating the fixed point of an affine mapping on R^α....

  • ...In § IV.2 we propose a class of free-steering multiprocessor algorithms, which contains a "multiprocessor" generalization of the Gauss-Seidel algorithm, proposed and studied in the case of an affine operator on R^α by Chazan D. - Miranker W. [5] and Donnelly [6], and experimentally by Rosenfeld [21]....

  • ...It is therefore likely that the hardware used by Rosenfeld [27] to experiment with chaotic relaxation includes a special hardware device to provide semaphore-type functions....

Book
30 Nov 1961
TL;DR: This book covers matrix properties and iterative methods for linear systems, including successive overrelaxation and alternating-direction implicit methods, the derivation and solution of elliptic difference equations, matrix methods for parabolic partial differential equations, and the estimation of acceleration parameters.
Abstract: Matrix Properties and Concepts. Nonnegative Matrices. Basic Iterative Methods and Comparison Theorems. Successive Overrelaxation Iterative Methods. Semi-Iterative Methods. Derivation and Solution of Elliptic Difference Equations. Alternating-Direction Implicit Iterative Methods. Matrix Methods for Parabolic Partial Differential Equations. Estimation of Acceleration Parameters.

5,317 citations

Book
01 Jan 1971
TL;DR: The ASM preconditioner $B$ is characterized by three parameters, $C_0$, $\rho(E)$ and $\omega$, which enter via assumptions on the subspaces $V_i$ and the bilinear forms $a_i(\cdot,\cdot)$ (the approximate local problems).
Abstract: theory for ASM. In the following we characterize the ASM preconditioner $B$ by three parameters, $C_0$, $\rho(E)$ and $\omega$, which enter via assumptions on the subspaces $V_i$ and the bilinear forms $a_i(\cdot,\cdot)$ (the approximate local problems).

Assumption 14.6 (stable decomposition). There exists a constant $C_0 > 0$ such that every $u \in V$ admits a decomposition $u = \sum_{i=0}^{N} u_i$ with $u_i \in V_i$ such that $\sum_{i=0}^{N} a_i(u_i, u_i) \le C_0^{2}\, a(u,u)$. (14.29)

Assumption 14.7 (strengthened Cauchy-Schwarz inequality). For $i, j = 1, \dots, N$, let $E_{i,j} = E_{j,i} \in [0,1]$ be defined by the inequalities $|a(u_i, u_j)| \le E_{i,j}\, \sqrt{a(u_i,u_i)}\, \sqrt{a(u_j,u_j)}$ for all $u_i \in V_i$, $u_j \in V_j$. (14.30) By $\rho(E)$ we denote the spectral radius of the symmetric matrix $E = (E_{i,j}) \in \mathbb{R}^{N \times N}$. The particular assumption is that we have a nontrivial bound for $\rho(E)$ at our disposal. Note that due to $E_{i,j} \le 1$ (Cauchy-Schwarz inequality), the trivial bound $\rho(E) = \|E\|_2 \le \sqrt{\|E\|_1 \|E\|_\infty} \le N$ always holds; for particular Schwarz methods one usually aims at bounds for $\rho(E)$ which are independent of $N$.

Assumption 14.8 (local stability). There exists $\omega > 0$ such that for all $i = 1, \dots, N$: $a(u_i, u_i) \le \omega\, a_i(u_i, u_i)$ for all $u_i \in V_i$. (14.31)

Remark 14.9. The space $V_0$ is not included in the definition of $E$; as we will see below, this space is allowed to play a special role. $E_{i,j} = 0$ implies that the spaces $V_i$ and $V_j$ are orthogonal (in the $a(\cdot,\cdot)$ inner product). We will see below that a small $\rho(E)$ is desirable, and likewise a small $C_0$. The parameter $\omega$ represents a one-sided measure of the approximation properties of the approximate solvers $a_i$. If the local solver is of (exact) Galerkin type, i.e., $a_i(u,v) \equiv a(u,v)$ for $u, v \in V_i$, then $\omega = 1$. However, this does not necessarily imply that Assumptions 14.6 and 14.7 are satisfied.

Lemma 14.10 (P. L. Lions). Let $P_{\mathrm{ASM}}$ be defined by (14.23) resp. (14.24). Then, under Assumption 14.6, (i) $P_{\mathrm{ASM}}: V \to V$ is a bijection, and $a(u,u) \le C_0^{2}\, a(P_{\mathrm{ASM}} u, u)$ for all $u \in V$ (14.32); (ii) characterization of $b(u,u)$: $b(u,u) = a(P_{\mathrm{ASM}}^{-1} u, u) = \min\bigl\{\sum_{i=0}^{N} a_i(u_i,u_i) : u = \sum_{i=0}^{N} u_i,\ u_i \in V_i\bigr\}$ (14.33).

Proof: We make use of the fundamental identity (14.27) and Cauchy-Schwarz inequalities. Proof of (i): let $u \in V$ and let $u = \sum_i u_i$ be a decomposition of the type guaranteed by Assumption 14.6. Then $a(u,u) = a(u, \sum_i u_i) = \sum_i a(u, u_i) = \sum_i a_i(P_i u, u_i) \le \sum_i \sqrt{a_i(P_i u, P_i u)}\, \sqrt{a_i(u_i,u_i)} = \sum_i \sqrt{a(u, P_i u)}\, \sqrt{a_i(u_i,u_i)} \le \sqrt{\sum_i a(u, P_i u)}\, \sqrt{\sum_i a_i(u_i,u_i)} = \sqrt{a(u, P_{\mathrm{ASM}} u)}\, \sqrt{\sum_i a_i(u_i,u_i)} \le \sqrt{a(u, P_{\mathrm{ASM}} u)}\; C_0\, \sqrt{a(u,u)}$. This implies the estimate (14.32). In particular, it follows that $P_{\mathrm{ASM}}$ is injective, because with (14.32), $P_{\mathrm{ASM}} u = 0$ implies $a(u,u) = 0$, hence $u = 0$. Due to finite dimension, we conclude that $P_{\mathrm{ASM}}$ is bijective. Proof of (ii): we first show that the minimum on the right-hand side of (14.33) cannot be smaller than $a(P_{\mathrm{ASM}}^{-1} u, u)$. To this end, we consider an arbitrary decomposition $u = \sum_i u_i$ with $u_i \in V_i$ and estimate $a(P_{\mathrm{ASM}}^{-1} u, u) = \sum_i a(P_{\mathrm{ASM}}^{-1} u, u_i) = \sum_i a_i(P_i P_{\mathrm{ASM}}^{-1} u, u_i) \le \sqrt{\sum_i a_i(P_i P_{\mathrm{ASM}}^{-1} u, P_i P_{\mathrm{ASM}}^{-1} u)}\, \sqrt{\sum_i a_i(u_i,u_i)} = \sqrt{\sum_i a(P_{\mathrm{ASM}}^{-1} u, P_i P_{\mathrm{ASM}}^{-1} u)}\, \sqrt{\sum_i a_i(u_i,u_i)} = \sqrt{a(P_{\mathrm{ASM}}^{-1} u, u)}\, \sqrt{\sum_i a_i(u_i,u_i)}$. In order to see that $a(P_{\mathrm{ASM}}^{-1} u, u)$ is indeed the minimum of the right-hand side of (14.33), we define $u_i = P_i P_{\mathrm{ASM}}^{-1} u$. Obviously, $u_i \in V_i$ and $\sum_i u_i = u$. Furthermore, $\sum_i a_i(u_i,u_i) = \sum_i a_i(P_i P_{\mathrm{ASM}}^{-1} u, P_i P_{\mathrm{ASM}}^{-1} u) = \sum_i a(P_{\mathrm{ASM}}^{-1} u, P_i P_{\mathrm{ASM}}^{-1} u) = a(P_{\mathrm{ASM}}^{-1} u, \sum_i P_i P_{\mathrm{ASM}}^{-1} u) = a(P_{\mathrm{ASM}}^{-1} u, u)$. This concludes the proof.

The matrix $P'_{\mathrm{ASM}} = B^{-1} A$ from (14.23) is the matrix representation of the operator $P_{\mathrm{ASM}}$. Since $P_{\mathrm{ASM}}$ is self-adjoint in the $A$-inner product (see Lemma 14.2), we can estimate the smallest and the largest eigenvalue of $B^{-1}A$ by $\lambda_{\min}(B^{-1}A) = \inf_{0 \ne u \in V} a(P_{\mathrm{ASM}} u, u)/a(u,u)$ and $\lambda_{\max}(B^{-1}A) = \sup_{0 \ne u \in V} a(P_{\mathrm{ASM}} u, u)/a(u,u)$ (14.34). Lemma 14.10 (i), in conjunction with Assumption 14.6, readily yields $\lambda_{\min}(B^{-1}A) \ge 1/C_0^{2}$. An upper bound for $\lambda_{\max}(B^{-1}A)$ is obtained with the help of the following lemma.

Lemma 14.11. Under Assumptions 14.7 and 14.8 we have $\|P_i\|_A \le \omega$ for $i = 0, \dots, N$ (14.35), and $a(P_{\mathrm{ASM}} u, u) \le \omega\,(1 + \rho(E))\, a(u,u)$ for all $u \in V$ (14.36).

Proof: Again we make use of identity (14.27). We start with the proof of (14.35): from Assumption 14.8, (14.31), we infer for all $u \in V$: $\|P_i u\|_A^2 = a(P_i u, P_i u) \le \omega\, a_i(P_i u, P_i u) = \omega\, a(u, P_i u) \le \omega\, \|u\|_A\, \|P_i u\|_A$, which implies (14.35). For the proof of (14.36), we observe that the space $V_0$ is assumed to play a special role. We define
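Combining the two quoted lemmas through (14.34), i.e. $\lambda_{\min}(B^{-1}A) \ge 1/C_0^{2}$ from Lemma 14.10 and $\lambda_{\max}(B^{-1}A) \le \omega\,(1+\rho(E))$ from Lemma 14.11, yields the standard condition-number estimate for the ASM-preconditioned operator; the line below only records this consequence and is not part of the quoted text.

```latex
% Consequence of (14.32), (14.34) and (14.36):
\kappa\bigl(B^{-1}A\bigr)
  \;=\; \frac{\lambda_{\max}(B^{-1}A)}{\lambda_{\min}(B^{-1}A)}
  \;\le\; C_0^{2}\,\omega\,\bigl(1 + \rho(E)\bigr).
```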

2,527 citations