Author

Mouhamed Nabih El Tarazi

Bio: Mouhamed Nabih El Tarazi is an academic researcher from Kuwait University. The author has contributed to research on the topic of the Fundamental Resolution Equation, has an h-index of 3, and has co-authored 4 publications receiving 157 citations.

Papers
Journal ArticleDOI
TL;DR: In this article, the authors present convergence results for asynchronous algorithms based essentially on the classical notion of contraction, generalizing all convergence results for those algorithms that rest on the vectorial norm contraction hypothesis.
Abstract: In this paper we present convergence results for asynchronous algorithms based essentially on the classical notion of contraction. In particular, we generalize all convergence results for those algorithms that are based on the vectorial norm contraction hypothesis, which has recently been in widespread use. Moreover, the vectorial norm hypothesis can be difficult or even impossible to verify for certain problems that can nonetheless be tackled within the scope of the classical contraction framework that we adopt.

144 citations
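The classical-contraction setting of this paper can be illustrated with a short simulation. The sketch below is my own construction, not the paper's scheme: it iterates a linear map F(x) = Ax + b that contracts in the max norm, updating one component at a time, in arbitrary order, from possibly stale values, and the iteration still converges to the fixed point.

```python
# A minimal sketch, assuming a linear contraction F(x) = A x + b in the max
# norm (this test problem is an assumption, not the paper's). Components are
# updated one at a time, in arbitrary order, from possibly stale values; under
# the classical contraction hypothesis the iteration still converges.
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.uniform(-1, 1, (n, n))
A *= 0.9 / np.abs(A).sum(axis=1, keepdims=True)  # enforce ||A||_inf = 0.9 < 1
b = rng.uniform(-1, 1, n)
x_star = np.linalg.solve(np.eye(n) - A, b)       # the unique fixed point

history = [np.zeros(n)]                          # past iterates, to model staleness
x = history[-1].copy()
max_delay = 3                                    # bounded communication delay
for k in range(600):
    i = rng.integers(n)                          # arbitrary component to update
    d_max = min(max_delay, len(history) - 1)
    delays = rng.integers(0, d_max + 1, size=n)  # per-coordinate read delays
    stale = np.array([history[-1 - d][j] for j, d in enumerate(delays)])
    x[i] = A[i] @ stale + b[i]                   # update component i only
    history.append(x.copy())

print("max-norm error:", np.abs(x - x_star).max())
```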

Journal ArticleDOI
TL;DR: In this article, the authors introduce two classes of iterative algorithms, which they call "asynchronous mixed algorithms," and study their convergence under a partial ordering; together with their convergence study, these algorithms generalize the classical mixed "Newton-relaxation" algorithms.
Abstract: In this paper we introduce two classes of iterative algorithms, which we call "asynchronous mixed algorithms," and we study their convergence under a partial ordering. These algorithms can be implemented on monoprocessors as well as on multiprocessors and, together with their convergence study, constitute a generalization of the classical mixed "Newton-relaxation" algorithms.

13 citations
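For context, the classical mixed "Newton-relaxation" structure that these asynchronous variants generalize combines an outer Newton step with an inner relaxation solve. The toy sketch below illustrates only that structure; the test system, the Jacobi inner solver, and the iteration counts are my assumptions, not the paper's algorithm.

```python
# A rough illustration of a classical mixed "Newton-relaxation" iteration
# (test system and solver choices assumed): each outer Newton step solves its
# linear system only approximately, with a few relaxation sweeps.
import numpy as np

def F(x):  # a small diagonally dominant nonlinear system, F(x) = 0
    return x + 0.1 * np.tanh(x) - 1.0

def J(x):  # its Jacobian (diagonal here, kept matrix-shaped for generality)
    return np.diag(1.0 + 0.1 / np.cosh(x) ** 2)

x = np.zeros(4)
for outer in range(20):              # Newton (outer) iterations
    A, r = J(x), -F(x)               # solve A d = r approximately
    D = np.diag(A)                   # diagonal of the Jacobian
    d = np.zeros_like(x)
    for inner in range(3):           # a few Jacobi (relaxation) sweeps
        d = (r - (A @ d - D * d)) / D
    # an asynchronous mixed variant would let these inner updates proceed
    # per component, in arbitrary order and with stale data
    x = x + d

print("residual:", np.abs(F(x)).max())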

Journal ArticleDOI
TL;DR: The applicability of asynchronous iterative algorithms to systems of first-order differential equations is studied.
Abstract: We study the applicability of asynchronous iterative algorithms to systems of first-order differential equations.

3 citations
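One plausible reading of this setup is a Picard-style fixed-point iteration on the integral form of the system, which an asynchronous implementation would let each component carry out independently. The paper's exact scheme is not reproduced here; the matrix, grid, and iteration count below are all assumptions.

```python
# A hedged sketch: Picard iteration x_{k+1}(t) = x0 + integral_0^t A x_k(s) ds
# for the first-order system x'(t) = A x(t), on a uniform grid.
import numpy as np

A = np.array([[-2.0, 1.0], [1.0, -2.0]])
x0 = np.array([1.0, 0.0])
t = np.linspace(0.0, 1.0, 201)
dt = t[1] - t[0]

X = np.tile(x0, (len(t), 1))                 # initial guess: constant trajectory
for k in range(30):                          # Picard sweeps over the interval
    rhs = X @ A.T                            # A x_k(s) at every grid point
    steps = 0.5 * (rhs[1:] + rhs[:-1]) * dt  # trapezoidal quadrature panels
    X = x0 + np.vstack([np.zeros((1, 2)), np.cumsum(steps, axis=0)])
    # an asynchronous version would update each component's trajectory
    # independently, possibly from out-of-date neighbor trajectories

w, V = np.linalg.eigh(A)                     # exact solution via eigendecomposition
exact = np.array([V @ (np.exp(w * ti) * (V.T @ x0)) for ti in t])
print("max error vs. exact solution:", np.abs(X - exact).max())
```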

Journal ArticleDOI
TL;DR: A theoretical and numerical study of the process of accelerating the convergence of the method of successive approximations applied to linear systems in the case of monotone convergence, as discussed by the authors.
Abstract: A theoretical and numerical study of the process of accelerating the convergence of the method of successive approximations applied to linear systems in the case of monotone convergence.

1 citation
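The paper's specific acceleration process is not reproduced here. As a stand-in, the sketch below shows the general flavor: successive approximations x_{k+1} = B x_k + c converging monotonically (B nonnegative, x_0 = 0), accelerated componentwise with Aitken's delta-squared extrapolation, which is one classical choice rather than necessarily the authors'.

```python
# A stand-in sketch (acceleration technique assumed): monotone successive
# approximations for a linear system, sped up by Aitken's delta-squared.
import numpy as np

rng = np.random.default_rng(1)
n = 5
B = rng.uniform(0.0, 1.0, (n, n))
B *= 0.95 / B.sum(axis=1, keepdims=True)  # nonnegative, ||B||_inf = 0.95
c = rng.uniform(0.0, 1.0, n)
x_star = np.linalg.solve(np.eye(n) - B, c)

x = np.zeros(n)                           # from x0 = 0 the iterates increase
iters = [x]                               # monotonically toward x_star
for k in range(12):
    x = B @ x + c                         # plain successive approximations
    iters.append(x)

x0, x1, x2 = iters[-3], iters[-2], iters[-1]
accel = x2 - (x2 - x1) ** 2 / (x2 - 2 * x1 + x0)  # Aitken delta-squared

print("plain error:      ", np.abs(x2 - x_star).max())
print("accelerated error:", np.abs(accel - x_star).max())
```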


Cited by
Journal ArticleDOI
TL;DR: Several models of asynchronous iterations are reviewed within a common theoretical framework, covering nonsingular linear systems, nonlinear systems, and initial value problems that arise naturally on parallel computers.

262 citations

Journal ArticleDOI
TL;DR: Theoretically, it is shown that if the nonexpansive operator $T$ has a fixed point, then with probability one, ARock generates a sequence that converges to a fixed point of $T$.
Abstract: Finding a fixed point to a nonexpansive operator, i.e., $x^*=Tx^*$, abstracts many problems in numerical linear algebra, optimization, and other areas of scientific computing. To solve fixed-point problems, we propose ARock, an algorithmic framework in which multiple agents (machines, processors, or cores) update $x$ in an asynchronous parallel fashion. Asynchrony is crucial to parallel computing since it reduces synchronization wait, relaxes communication bottlenecks, and thus speeds up computing significantly. At each step of ARock, an agent updates a randomly selected coordinate $x_i$ based on possibly out-of-date information on $x$. The agents share $x$ through either global memory or communication. If writing $x_i$ is atomic, the agents can read and write $x$ without memory locks. Theoretically, we show that if the nonexpansive operator $T$ has a fixed point, then with probability one, ARock generates a sequence that converges to a fixed point of $T$. Our conditions on $T$ and the step sizes are weaker than in comparable work. Linear convergence is also obtained. We propose special cases of ARock for linear systems, convex optimization, machine learning, as well as distributed and decentralized consensus problems. Numerical experiments on solving sparse logistic regression problems are presented.

205 citations
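The coordinate update described in the abstract can be simulated serially. In the sketch below, the particular operator T, the step size, and the delay model are my assumptions; the structure follows the abstract: an agent repeatedly picks a random coordinate i and applies a damped step using a possibly out-of-date copy of x.

```python
# A serial simulation of an ARock-style update: pick a random coordinate i and
# apply x_i <- x_i - eta * (x_hat - T(x_hat))_i, where x_hat is a possibly
# stale copy of x, mimicking asynchronous reads. T here is assumed.
import numpy as np

rng = np.random.default_rng(2)
n = 8
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]  # orthogonal, hence nonexpansive
T = lambda x: 0.5 * (Q @ x) + 0.5 * x             # averaged map; generically its
                                                  # only fixed point is x* = 0
history = [rng.standard_normal(n)]
x = history[-1].copy()
eta, max_delay = 0.5, 4
for k in range(8000):
    d = rng.integers(0, min(max_delay, len(history) - 1) + 1)
    x_hat = history[-1 - d]                       # possibly out-of-date read
    i = rng.integers(n)                           # uniformly random coordinate
    x[i] -= eta * (x_hat - T(x_hat))[i]           # coordinate KM-style step
    history.append(x.copy())

print("distance to the fixed point:", np.linalg.norm(x))
```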

Journal ArticleDOI
TL;DR: This work considers iterative algorithms of the form x := f(x), executed by a parallel or distributed computing system; it studies synchronous executions of such iterations and their communication requirements, as well as issues related to processor synchronization.

169 citations
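A toy rendering of this setting, with the block partition and the test map f chosen by me: the iteration x := f(x) is split across "processors", each owning a block of components, and in a synchronous execution every processor waits at a barrier for all blocks before starting the next sweep.

```python
# A sketch of a synchronous, block-partitioned execution of x := f(x),
# with f(x) = A x + b a max-norm contraction (an assumed test problem).
import numpy as np

rng = np.random.default_rng(3)
n, p = 8, 4                                      # 8 components over 4 processors
A = rng.uniform(-1, 1, (n, n))
A *= 0.8 / np.abs(A).sum(axis=1, keepdims=True)  # make f a max-norm contraction
b = rng.uniform(-1, 1, n)
blocks = np.array_split(np.arange(n), p)         # each processor's components

x = np.zeros(n)
for k in range(60):
    # each processor computes its block from the current shared iterate ...
    new_blocks = [A[idx] @ x + b[idx] for idx in blocks]
    # ... then all blocks are exchanged at the synchronization barrier
    for idx, xb in zip(blocks, new_blocks):
        x[idx] = xb

print("fixed-point residual:", np.abs(x - (A @ x + b)).max())
```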

Journal ArticleDOI
TL;DR: In this paper, the convergence rates of distributed augmented Lagrangian (AL) methods are studied, and the explicit dependence of these rates on the underlying network parameters is established.
Abstract: We study distributed optimization where nodes cooperatively minimize the sum of their individual, locally known, convex costs $f_i(x)$'s, where $x \in \mathbb{R}^d$ is global. Distributed augmented Lagrangian (AL) methods have good empirical performance on several signal processing and learning applications, but there is limited understanding of their convergence rates and of how these depend on the underlying network. This paper establishes globally linear (geometric) convergence rates for a class of deterministic and randomized distributed AL methods, when the $f_i$'s are twice continuously differentiable and have a bounded Hessian. We give the explicit dependence of the convergence rates on the underlying network parameters. Simulations illustrate our analytical findings.

121 citations
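As a rough companion to this abstract, the sketch below runs a minimal consensus ADMM, one standard distributed augmented Lagrangian method, on quadratic local costs f_i(x) = q_i/2 * (x - a_i)^2 held by n nodes. The costs, the penalty rho, and the simple averaging step are my assumptions, not the paper's exact algorithm or network model.

```python
# Not the paper's algorithm verbatim: minimal consensus ADMM (an augmented
# Lagrangian method) for min_x sum_i f_i(x) with quadratic local costs.
import numpy as np

rng = np.random.default_rng(4)
n = 5
q = rng.uniform(0.5, 2.0, n)                 # local curvatures (bounded Hessians)
a = rng.uniform(-1.0, 1.0, n)                # local minimizers
x_star = (q * a).sum() / q.sum()             # centralized solution

rho = 1.0                                    # augmented Lagrangian penalty
x, u, z = np.zeros(n), np.zeros(n), 0.0
for k in range(100):
    x = (q * a + rho * (z - u)) / (q + rho)  # local proximal (AL) steps
    z = (x + u).mean()                       # consensus / averaging step
    u = u + x - z                            # dual (multiplier) updates

print("error vs. centralized solution:", abs(z - x_star))
```

For strongly convex quadratics like these, the iteration converges linearly, matching the geometric rates the paper establishes in its more general smooth, bounded-Hessian setting.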
