Posted Content (Open Access)

Algorithms for Solving High Dimensional PDEs: From Nonlinear Monte Carlo to Machine Learning.

TLDR
The authors argue that studying PDEs as well as control and variational problems in very high dimensions might very well be among the most promising new directions in mathematics and scientific computing in the near future.
Abstract
In recent years, tremendous progress has been made on numerical algorithms for solving partial differential equations (PDEs) in very high dimensions, using ideas from either nonlinear (multilevel) Monte Carlo or deep learning. They are potentially free of the curse of dimensionality for many different applications and have been proven to be so in the case of some nonlinear Monte Carlo methods for nonlinear parabolic PDEs. In this paper, we review these numerical and theoretical advances. In addition to algorithms based on stochastic reformulations of the original problem, such as the multilevel Picard iteration and the Deep BSDE method, we also discuss algorithms based on the more traditional Ritz, Galerkin, and least-squares formulations. We hope to demonstrate to the reader that studying PDEs as well as control and variational problems in very high dimensions might very well be among the most promising new directions in mathematics and scientific computing in the near future.
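For context, the stochastic reformulation exploited by the multilevel Picard iteration and the Deep BSDE method is the nonlinear Feynman-Kac correspondence between semilinear parabolic PDEs and backward stochastic differential equations (BSDEs). The notation below is a standard textbook statement of that correspondence, not taken from this page:

```latex
% Semilinear parabolic terminal-value problem
\frac{\partial u}{\partial t}(t,x)
  + \tfrac{1}{2}\,\mathrm{Tr}\!\big(\sigma\sigma^{\top}(t,x)\,\mathrm{Hess}_x u(t,x)\big)
  + \mu(t,x)\cdot\nabla_x u(t,x)
  + f\big(t,x,u(t,x),\sigma^{\top}(t,x)\nabla_x u(t,x)\big) = 0,
\qquad u(T,x) = g(x).

% Equivalent BSDE along the diffusion dX_t = \mu(t,X_t)\,dt + \sigma(t,X_t)\,dW_t:
Y_t = g(X_T) + \int_t^T f(s,X_s,Y_s,Z_s)\,ds - \int_t^T Z_s^{\top}\,dW_s,
\qquad Y_t = u(t,X_t), \quad Z_t = \sigma^{\top}(t,X_t)\,\nabla_x u(t,X_t).
```

The Deep BSDE method approximates the unknowns in this reformulation with neural networks (the initial value Y_0 and the maps giving Z along simulated paths of X) and trains them so that the simulated terminal value Y_T matches g(X_T).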


Citations
Journal Article

Approximating quantum many-body wave-functions using artificial neural networks

TL;DR: In this article, the authors demonstrate the expressibility of ANNs in quantum many-body physics by showing that a feed-forward neural network with a small number of hidden layers can be trained to approximate with high precision the ground states of some notable quantum many-body systems.
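A minimal sketch of the kind of ansatz described above: a small feed-forward network mapping a spin configuration to an unnormalized wave-function amplitude. This illustrates only the general idea; the architecture, layer sizes, and function names are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(n_spins, hidden=16):
    """Random parameters for a two-layer feed-forward network (illustrative sizes)."""
    return {
        "W1": rng.normal(scale=0.1, size=(hidden, n_spins)),
        "b1": np.zeros(hidden),
        "W2": rng.normal(scale=0.1, size=hidden),
        "b2": 0.0,
    }

def amplitude(params, spins):
    """Unnormalized wave-function amplitude psi(s) for a spin configuration s in {-1,+1}^n."""
    h = np.tanh(params["W1"] @ spins + params["b1"])
    return np.exp(params["W2"] @ h + params["b2"])

# Example: amplitude of one configuration of a 10-spin system.
params = init_mlp(n_spins=10)
s = rng.choice([-1.0, 1.0], size=10)
print(amplitude(params, s))
```

In practice such an ansatz is trained variationally, e.g. by minimizing the energy expectation over samples of configurations; that training loop is omitted here.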
Posted Content

Asymptotic Expansion as Prior Knowledge in Deep Learning Method for high dimensional BSDEs (Forthcoming in Asia-Pacific Financial Markets)

TL;DR: In this article, the authors demonstrate that using asymptotic expansion as prior knowledge in the "deep BSDE solver", a deep learning method for high dimensional BSDEs proposed by Weinan E, Han & Jentzen (2017), drastically reduces the loss function and accelerates convergence.
Journal Article

Machine Learning and Computational Mathematics

Weinan E
TL;DR: This article describes some of the most important progress that has been made on issues in computational mathematics and aims to put it in a perspective that will help to integrate machine learning with computational mathematics.
Journal Article

Learning nonlocal constitutive models with neural networks

TL;DR: A neural network representing a region-to-point mapping is used to describe a nonlocal constitutive model; thanks to its interpretable mathematical structure, it learns the embedded submodel without using data from that level, which makes it a promising alternative to traditional nonlocal constitutive models.
Posted Content

Deep PPDEs for rough local stochastic volatility

TL;DR: The pricing function is the solution to a path-dependent PDE, for which a numerical scheme based on deep learning techniques is developed; numerical simulations suggest that the scheme is extremely efficient and provides a good alternative to classical Monte Carlo simulations.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
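For reference, the Adam update maintains exponentially decaying averages of the gradient and of its elementwise square, applies bias correction to both, and scales the step accordingly. Below is a minimal NumPy sketch of the published update rule; the toy quadratic objective used for the usage example is ours, not from the paper.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: moment estimates, bias correction, parameter step."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad**2     # second moment (uncentered variance)
    m_hat = m / (1 - beta1**t)                # bias-corrected first moment
    v_hat = v / (1 - beta2**t)                # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(theta) = ||theta||^2 (illustrative objective only).
theta = np.array([1.0, -2.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 501):
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)
print(theta)  # close to [0, 0]
```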
Journal Article

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
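The adversarial process described above is the two-player minimax game from the paper: D is trained to distinguish training data from samples of G, while G is trained to fool D.

```latex
\min_G \max_D \; V(D,G) \;=\;
\mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
\;+\;
\mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```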
Book

Reinforcement Learning: An Introduction

TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, ranging from the history of the field's intellectual foundations to the most recent developments and applications.
Book

Dynamic Programming

TL;DR: The more the authors study the information processing aspects of the mind, the more perplexed and impressed they become, and it will be a very long time before they understand these processes sufficiently to reproduce them.
Book

Brownian Motion and Stochastic Calculus

TL;DR: In this book, the authors present a characterization of continuous local martingales with respect to Brownian motion in terms of Markov properties, including the strong Markov property, and a generalized version of the Itô rule.
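The "Itô rule" mentioned above is Itô's change-of-variable formula. In its basic one-dimensional form, for an Itô process dX_t = mu_t dt + sigma_t dW_t and a C^{1,2} function f, it reads (standard statement, not quoted from the book):

```latex
df(t, X_t) =
\left( \frac{\partial f}{\partial t}(t,X_t)
     + \mu_t \,\frac{\partial f}{\partial x}(t,X_t)
     + \tfrac{1}{2}\,\sigma_t^{2}\,\frac{\partial^2 f}{\partial x^2}(t,X_t) \right) dt
+ \sigma_t \,\frac{\partial f}{\partial x}(t,X_t)\, dW_t
```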