
Showing papers by "Irad Yavneh" published in 2019


Posted Content
TL;DR: In this article, the authors propose a framework for learning multigrid solvers for large-scale partial differential equations (PDEs): a single mapping from a family of parameterized PDEs to prolongation operators is learned using an efficient and unsupervised loss function.
Abstract: Constructing fast numerical solvers for partial differential equations (PDEs) is crucial for many scientific disciplines. A leading technique for solving large-scale PDEs is the multigrid method. At the core of a multigrid solver is the prolongation matrix, which relates the different scales of the problem. This matrix is strongly problem-dependent, and its optimal construction is critical to the efficiency of the solver. In practice, however, devising multigrid algorithms for new problems often poses formidable challenges. In this paper we propose a framework for learning multigrid solvers. Our method learns a (single) mapping from a family of parameterized PDEs to prolongation operators. We train a neural network once for the entire class of PDEs, using an efficient and unsupervised loss function. Experiments on a broad class of 2D diffusion problems demonstrate improved convergence rates compared to the widely used Black-Box multigrid scheme, suggesting that our method has successfully learned rules for constructing prolongation matrices.
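
To make the prolongation matrix's role concrete, here is a minimal NumPy sketch of one two-grid correction cycle. The damped-Jacobi smoother, the Galerkin coarse operator P.T @ A @ P, and all names are illustrative assumptions, not the paper's implementation (the paper's contribution is the network that constructs P).

```python
import numpy as np

def two_grid_cycle(A, b, x, P, omega=0.8, smoothing_steps=1):
    """One two-grid correction cycle for the linear system A x = b.

    A : (n, n) matrix from the discretized PDE
    P : (n, m) prolongation matrix, m < n; the operator whose
        construction the paper learns (here assumed given).
    """
    D = np.diag(np.diag(A))                    # Jacobi uses the diagonal of A
    for _ in range(smoothing_steps):           # pre-smoothing
        x = x + omega * np.linalg.solve(D, b - A @ x)
    A_c = P.T @ A @ P                          # Galerkin coarse-grid operator
    r_c = P.T @ (b - A @ x)                    # restrict residual with R = P^T
    x = x + P @ np.linalg.solve(A_c, r_c)      # coarse-grid correction
    for _ in range(smoothing_steps):           # post-smoothing
        x = x + omega * np.linalg.solve(D, b - A @ x)
    return x
```

In a full multigrid solver this cycle is applied recursively on the coarse level; the point here is only that the quality of P determines how much error the coarse-grid correction removes.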

48 citations


Proceedings Article
24 May 2019
TL;DR: This paper proposes a framework for learning multigrid solvers; it learns a (single) mapping from a family of parameterized PDEs to prolongation operators using an efficient and unsupervised loss function.
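
As a hedged illustration of what an unsupervised loss for this task might look like, the sketch below scores a candidate prolongation P by the Frobenius norm of the two-grid error-propagation matrix. The exact loss, smoother, and relaxation used in the paper may differ; every name here is an assumption.

```python
import numpy as np

def two_grid_loss(A, P, omega=0.8):
    """Frobenius norm of the two-grid error-propagation matrix M = S C S,
    where S is a damped-Jacobi smoother and C is the coarse-grid
    correction induced by P. Smaller values mean the cycle contracts
    error faster, so no labeled 'correct' P is required: the signal
    is unsupervised in that sense.
    """
    n = A.shape[0]
    I = np.eye(n)
    S = I - omega * np.linalg.solve(np.diag(np.diag(A)), A)  # smoother
    C = I - P @ np.linalg.solve(P.T @ A @ P, P.T @ A)        # correction
    return np.linalg.norm(S @ C @ S, "fro")
```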

33 citations


Journal ArticleDOI
TL;DR: In this article, the authors apply a general framework called weighted proximal methods (WPMs) to solve RED efficiently, and show that slightly more sophisticated variants of WPM can reduce run times for RED by requiring significantly fewer calls to the denoiser.
Abstract: REgularization by Denoising (RED) is an attractive framework for solving inverse problems by incorporating state-of-the-art denoising algorithms as priors. A drawback of this approach is the high computational complexity of denoisers, which dominate the computation time. In this paper, we apply a general framework called weighted proximal methods (WPMs) to solve RED efficiently. We first show that two recently introduced RED solvers (based on the fixed-point and accelerated proximal gradient methods) are particular cases of WPMs. We then show by numerical experiments that slightly more sophisticated variants of WPM can reduce run times for RED by requiring significantly fewer calls to the denoiser.
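
For intuition, here is a minimal sketch of a single weighted (preconditioned) gradient step on the RED objective f(x) + lam * 0.5 * x^T (x - D(x)). It uses the fact that, under RED's assumptions, the gradient of the prior term is lam * (x - D(x)), so each iteration costs exactly one denoiser call. The names and the diagonal weighting are assumptions; the WPM variants studied in the paper solve a weighted proximal subproblem rather than taking this simplified step.

```python
import numpy as np

def red_weighted_step(x, grad_f, denoiser, w_inv, lam):
    """One weighted gradient step on f(x) + lam * 0.5 * x.T @ (x - D(x)).

    grad_f   : callable returning the gradient of the data-fidelity term f
    denoiser : callable D(x); the single (expensive) denoiser call per step
    w_inv    : diagonal of the inverse weighting matrix W^{-1};
               w_inv = np.ones_like(x) recovers plain gradient descent
    """
    g = grad_f(x) + lam * (x - denoiser(x))  # RED gradient (one denoiser call)
    return x - w_inv * g                     # weighted update
```

A better-chosen weighting (for example, one approximating the curvature of f) is what lets the WPM variants reach a given accuracy with fewer denoiser calls.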

1 citation