Author

von Wurstemberger

Bio: von Wurstemberger is an academic researcher. The author has contributed to research on the topics of curse of dimensionality and nonlinear systems. The author has an h-index of 1 and has co-authored 1 publication, which has received 31 citations.

Papers
Journal ArticleDOI
TL;DR: An MLP algorithm is introduced for the approximation of solutions of semilinear Black-Scholes equations, and it is proved that the computational effort of the method grows at most polynomially in both the dimension and the reciprocal of the prescribed approximation accuracy.
Abstract: Parabolic partial differential equations (PDEs) are widely used in the mathematical modeling of natural phenomena and man-made complex systems. In particular, parabolic PDEs are a fundamental tool to approximately determine fair prices of financial derivatives in the financial engineering industry. The PDEs appearing in financial engineering applications are often nonlinear (e.g., in PDE models which take into account the possibility of a defaulting counterparty) and high-dimensional, since the dimension typically corresponds to the number of considered financial assets. A major issue in the scientific literature is that most approximation methods for nonlinear PDEs suffer from the so-called curse of dimensionality, in the sense that the computational effort to compute an approximation with a prescribed accuracy grows exponentially in the dimension of the PDE or in the reciprocal of the prescribed approximation accuracy; moreover, nearly all approximation methods for nonlinear PDEs in the scientific literature have not been proven to be free of the curse of dimensionality. Recently, a new class of approximation schemes for semilinear parabolic PDEs, termed full history recursive multilevel Picard (MLP) algorithms, was introduced, and it was proven that MLP algorithms do overcome the curse of dimensionality for semilinear heat equations. In this paper we extend and generalize those findings to a more general class of semilinear PDEs, which includes as special cases the important examples of semilinear Black-Scholes equations used in pricing models for financial derivatives with default risks. In particular, we introduce an MLP algorithm for the approximation of solutions of semilinear Black-Scholes equations and prove, under the assumption that the nonlinearity in the PDE is globally Lipschitz continuous, that the computational effort of the proposed method grows at most polynomially in both the dimension and the reciprocal of the prescribed approximation accuracy. We thereby establish, for the first time, that the numerical approximation of solutions of semilinear Black-Scholes equations is a polynomially tractable approximation problem.

31 citations
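To make the recursion concrete, the following is a minimal numpy sketch of a full history recursive multilevel Picard (MLP) scheme, written for a semilinear heat equation with a gradient-independent nonlinearity rather than the semilinear Black-Scholes setting of the paper. The nonlinearity f, terminal condition g, and the level/sample parameters n and M are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(t, x, n, M, T, f, g):
    """Sketch of a full history recursive multilevel Picard approximation of
    u(t, x) for  u_t + (1/2) Laplacian(u) + f(u) = 0,  u(T, .) = g,
    where f is gradient-independent. n is the Picard level and M controls
    the number of Monte Carlo samples spent on each level."""
    if n == 0:
        return 0.0
    d = x.shape[0]
    # Monte Carlo estimate of the terminal term E[g(x + W_{T-t})]
    a = sum(g(x + np.sqrt(T - t) * rng.standard_normal(d))
            for _ in range(M ** n)) / M ** n
    # telescoping Picard corrections, one group of samples per lower level l
    for l in range(n):
        b = 0.0
        for _ in range(M ** (n - l)):
            r = t + (T - t) * rng.uniform()              # random quadrature time
            y = x + np.sqrt(r - t) * rng.standard_normal(d)
            b += f(mlp(r, y, l, M, T, f, g))
            if l > 0:
                b -= f(mlp(r, y, l - 1, M, T, f, g))
        a += (T - t) * b / M ** (n - l)
    return a

# toy example in d = 10 dimensions (illustrative, not from the paper)
d, T = 10, 0.5
print(mlp(0.0, np.zeros(d), n=3, M=3, T=T,
          f=lambda u: u - u ** 3,
          g=lambda x: 1.0 / (1.0 + x @ x)))
```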


Cited by
Journal ArticleDOI
01 Apr 2020
TL;DR: In this paper, it was shown that the number of parameters of the employed deep neural networks grows at most polynomially in both the PDE dimension and the reciprocal of the prescribed approximation accuracy.
Abstract: Deep neural networks and other deep learning methods have very successfully been applied to the numerical approximation of high-dimensional nonlinear parabolic partial differential equations (PDEs), which are widely used in finance, engineering, and natural sciences. In particular, simulations indicate that algorithms based on deep learning overcome the curse of dimensionality in the numerical approximation of solutions of semilinear PDEs. For certain linear PDEs it has also been proved mathematically that deep neural networks overcome the curse of dimensionality in the numerical approximation of solutions of such linear PDEs. The key contribution of this article is to rigorously prove this for the first time for a class of nonlinear PDEs. More precisely, we prove in the case of semilinear heat equations with gradient-independent nonlinearities that the numbers of parameters of the employed deep neural networks grow at most polynomially in both the PDE dimension and the reciprocal of the prescribed approximation accuracy. Our proof relies on recently introduced full history recursive multilevel Picard approximations for semilinear PDEs.

108 citations
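The theorem is a statement about the number of real parameters (weights and biases) of the approximating networks. The small counting helper below is ours, not from the paper; it only makes the bookkeeping explicit: if width and depth are bounded by polynomials in the dimension d and the reciprocal accuracy 1/ε, the total parameter count is itself polynomial in d and 1/ε.

```python
def dnn_param_count(d, width, depth):
    """Weights and biases of a fully connected network R^d -> R
    with `depth` hidden layers of size `width`."""
    count = (d + 1) * width                     # input layer
    count += (depth - 1) * (width + 1) * width  # hidden-to-hidden layers
    count += width + 1                          # output layer
    return count

# Illustrative (hypothetical) polynomial choices: width ~ d/eps, depth fixed.
eps = 0.1
for d in (10, 100, 1000):
    print(d, dnn_param_count(d, width=round(d / eps), depth=4))
```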

Posted Content
TL;DR: The review argues that studying PDEs, as well as control and variational problems, in very high dimensions might well be among the most promising new directions in mathematics and scientific computing in the near future.
Abstract: In recent years, tremendous progress has been made on numerical algorithms for solving partial differential equations (PDEs) in very high dimensions, using ideas from either nonlinear (multilevel) Monte Carlo or deep learning. They are potentially free of the curse of dimensionality for many different applications and have been proven to be so in the case of some nonlinear Monte Carlo methods for nonlinear parabolic PDEs. In this paper, we review these numerical and theoretical advances. In addition to algorithms based on stochastic reformulations of the original problem, such as the multilevel Picard iteration and the Deep BSDE method, we also discuss algorithms based on the more traditional Ritz, Galerkin, and least-squares formulations. We hope to demonstrate to the reader that studying PDEs as well as control and variational problems in very high dimensions might very well be among the most promising new directions in mathematics and scientific computing in the near future.

99 citations
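Among the stochastic reformulations the review discusses, the Deep BSDE method lends itself to a compact sketch. The PyTorch fragment below is our simplified illustration, assuming Brownian forward dynamics, a toy gradient-independent nonlinearity, and small per-step networks; it is not the reference implementation of the method's authors.

```python
import torch

# Deep BSDE sketch for  u_t + (1/2) Laplacian(u) + f(u) = 0,  u(T, .) = g,
# via the BSDE  dY = -f(Y) dt + Z . dW  with X a Brownian motion:
# learn u(0, x0) directly and one small network per time step for Z.
d, N, T, batch = 10, 20, 0.5, 256
dt = T / N
f = lambda y: y - y ** 3                                  # toy nonlinearity (ours)
g = lambda x: 1.0 / (1.0 + (x ** 2).sum(-1, keepdim=True))

y0 = torch.nn.Parameter(torch.zeros(1))                   # u(0, x0), learned
znets = torch.nn.ModuleList(
    torch.nn.Sequential(torch.nn.Linear(d, 32), torch.nn.Tanh(),
                        torch.nn.Linear(32, d))
    for _ in range(N))
opt = torch.optim.Adam([y0, *znets.parameters()], lr=1e-2)

for step in range(500):
    x = torch.zeros(batch, d)                             # X_0 = x0 = 0
    y = y0.expand(batch, 1)
    for n in range(N):
        dw = torch.randn(batch, d) * dt ** 0.5
        y = y - f(y) * dt + (znets[n](x) * dw).sum(-1, keepdim=True)
        x = x + dw                                        # Euler step for X
    loss = ((y - g(x)) ** 2).mean()                       # match terminal condition
    opt.zero_grad(); loss.backward(); opt.step()

print(float(y0))                                          # approximation of u(0, x0)
```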

Journal ArticleDOI
TL;DR: A numerical method for nonlinear parabolic PDEs that combines operator splitting with deep learning, divides the approximation problem into a sequence of separate learning problems, and can handle extremely high-dimensional PDEs.
Abstract: In this paper we introduce a numerical method for nonlinear parabolic PDEs that combines operator splitting with deep learning. It divides the PDE approximation problem into a sequence of separate learning problems. Since the computational graph for each of the subproblems is comparatively small, the approach can handle extremely high-dimensional PDEs. We test the method on different examples from physics, stochastic control and mathematical finance. In all cases, it yields very good results in up to 10,000 dimensions with short run times.

94 citations
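The structure of the method, one small regression problem per time step marching backward from the terminal condition, can be conveyed in a short sketch. The PyTorch fragment below is our simplification: Brownian forward dynamics, a toy nonlinearity, a standard normal sampling distribution for the space points, and explicit treatment of the nonlinearity all stand in for the paper's more general setup.

```python
import torch

# Deep-splitting-style sketch for  u_t + (1/2) Laplacian(u) + f(u) = 0,
# u(T, .) = g: step n learns
#   V_n(x) ~ E[ V_{n+1}(X_{n+1}) + dt * f(V_{n+1}(X_{n+1})) | X_n = x ],
# so each time step is a separate small learning problem.
d, N, T, batch = 10, 10, 0.5, 512
dt = T / N
f = lambda y: y - y ** 3                                  # toy nonlinearity (ours)
g = lambda x: 1.0 / (1.0 + (x ** 2).sum(-1, keepdim=True))

v_next = g                                                # V_N is the terminal condition
for n in reversed(range(N)):
    net = torch.nn.Sequential(torch.nn.Linear(d, 32), torch.nn.Tanh(),
                              torch.nn.Linear(32, 1))     # fresh net for V_n
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for step in range(300):
        x = torch.randn(batch, d)                         # sample X_n
        x_next = x + torch.randn(batch, d) * dt ** 0.5    # Brownian transition
        with torch.no_grad():
            v = v_next(x_next)
            target = v + dt * f(v)
        loss = ((net(x) - target) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    v_next = net                                          # V_n feeds the next step

print(float(v_next(torch.zeros(1, d))))                   # approximation of u(0, 0)
```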

Journal ArticleDOI
TL;DR: This expository review introduces and contrasts three important recent approaches attractive in their simplicity and their suitability for high-dimensional problems: physics-informed neural networks, methods based on the Feynman-Kac formula, and methods based on the solution of backward stochastic differential equations.
Abstract: Neural networks are increasingly used to construct numerical solution methods for partial differential equations. In this expository review, we introduce and contrast three important recent approaches attractive in their simplicity and their suitability for high-dimensional problems: physics-informed neural networks, methods based on the Feynman-Kac formula and methods based on the solution of backward stochastic differential equations. The article is accompanied by a suite of expository software in the form of Jupyter notebooks in which each basic methodology is explained step by step, allowing for a quick assimilation and experimentation. An extensive bibliography summarizes the state of the art.

92 citations
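As a taste of the first of the three approaches, the fragment below is a minimal physics-informed neural network in PyTorch for the 1D heat equation u_t = u_xx: the loss penalizes the PDE residual at random interior points together with initial- and boundary-condition mismatches. The problem, architecture, and hyperparameters are illustrative choices of ours, not taken from the review's notebooks.

```python
import torch

# Minimal PINN for u_t = u_xx on t in [0, 1], x in [-1, 1],
# with u(0, x) = sin(pi x) and u(t, -1) = u(t, 1) = 0
# (exact solution: u = exp(-pi^2 t) sin(pi x)).
net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def residual(t, x):
    """PDE residual u_t - u_xx via automatic differentiation."""
    u = net(torch.cat([t, x], dim=1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t - u_xx

for step in range(2000):
    t = torch.rand(256, 1, requires_grad=True)
    x = (torch.rand(256, 1) * 2 - 1).requires_grad_()
    pde = residual(t, x).pow(2).mean()
    # initial condition at t = 0 and homogeneous boundary at x = +/- 1
    x0 = torch.rand(256, 1) * 2 - 1
    ic = (net(torch.cat([torch.zeros_like(x0), x0], 1))
          - torch.sin(torch.pi * x0)).pow(2).mean()
    tb = torch.rand(256, 1)
    xb = torch.randint(0, 2, (256, 1)).float() * 2 - 1    # random +/- 1 endpoints
    bc = net(torch.cat([tb, xb], 1)).pow(2).mean()
    loss = pde + ic + bc
    opt.zero_grad(); loss.backward(); opt.step()
```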

Journal ArticleDOI
TL;DR: A three-hidden-layer neural network with super approximation power is introduced; it overcomes the curse of dimensionality in approximation power when the variation of the modulus of continuity ω_f(r) as r → 0 is moderate, and the construction is extended to general bounded continuous functions on a bounded set E ⊆ ℝ^d.

74 citations