Open Access · Posted Content

Low-rank tensor approximation for Chebyshev interpolation in parametric option pricing

TL;DR
A tensor-train extension of Chebyshev interpolation is proposed for parametric option pricing in high dimensions, with the interpolation coefficients approximated efficiently via tensor completion.
Abstract
Treating high dimensionality is one of the main challenges in the development of computational methods for solving problems arising in finance, where tasks such as pricing, calibration, and risk assessment need to be performed accurately and in real time. Within the growing literature addressing this problem, Gass et al. [14] propose a complexity reduction technique for parametric option pricing based on Chebyshev interpolation. As the number of parameters increases, however, this method is affected by the curse of dimensionality. In this article, we extend the approach to high-dimensional problems: by additionally exploiting low-rank structures, we can treat parameter spaces of high dimension. The core of our method is to express the tensorized interpolation in tensor train (TT) format and to develop an efficient way, based on tensor completion, to approximate the interpolation coefficients. We apply the new method to two model problems: American option pricing in the Heston model and European basket option pricing in the multi-dimensional Black-Scholes model. In these examples we treat parameter spaces of dimension up to 25. The numerical results confirm the low-rank structure of these problems and the effectiveness of our method compared to advanced techniques.
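To make the core idea concrete, here is a minimal two-dimensional sketch of tensorized Chebyshev interpolation of a parametric pricer, ending with a singular-value check that hints at the low-rank structure the method exploits. It is not the authors' implementation: the toy bs_call pricer, the parameter box, and the degree N are illustrative assumptions, and the paper works in up to 25 dimensions with the coefficient tensor held in TT format and recovered by tensor completion rather than formed densely as below.

```python
import numpy as np
from math import erf, exp, log, sqrt
from numpy.polynomial import chebyshev as Cheb

# Toy "expensive" pricer (illustrative assumption): Black-Scholes call,
# viewed as a function of two parameters (volatility sigma, strike K).
def bs_call(sigma, K, S0=1.0, r=0.02, T=1.0):
    Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * Phi(d1) - K * exp(-r * T) * Phi(d2)

N = 16                                   # interpolation degree per parameter
k = np.arange(N + 1)
x = np.cos(np.pi * k / N)                # Chebyshev-Lobatto nodes on [-1, 1]

sig_lo, sig_hi = 0.1, 0.5                # parameter box (illustrative)
K_lo, K_hi = 0.8, 1.2
sig = 0.5 * ((sig_hi + sig_lo) + (sig_hi - sig_lo) * x)
Ks = 0.5 * ((K_hi + K_lo) + (K_hi - K_lo) * x)

# Offline phase: evaluate the pricer on the tensorized Chebyshev grid.
F = np.array([[bs_call(s, Kv) for Kv in Ks] for s in sig])

# 1D Chebyshev transform (node values -> coefficients), applied along each
# axis; the coefficient C[i, j] multiplies T_i(sigma) * T_j(K).
c = np.ones(N + 1); c[0] = c[-1] = 2.0
M = 2.0 * np.cos(np.pi * np.outer(k, k) / N) / (N * np.outer(c, c))
C = M @ F @ M.T

# Online phase: cheap evaluation at an off-grid parameter point.
to_unit = lambda v, lo, hi: (2.0 * v - (hi + lo)) / (hi - lo)
s_t, K_t = 0.27, 1.05
approx = Cheb.chebval2d(to_unit(s_t, sig_lo, sig_hi),
                        to_unit(K_t, K_lo, K_hi), C)
print(approx, bs_call(s_t, K_t))         # interpolant vs. direct price

# The singular values of C decay rapidly: the low-rank structure that the
# TT representation exploits in higher dimensions.
print(np.linalg.svd(C, compute_uv=False)[:6])
```

For d parameters the dense coefficient tensor has (N+1)^d entries; representing it in TT format and fitting the cores by completion from a small subset of grid evaluations is precisely what makes the high-dimensional regime tractable.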


Citations
Posted Content

Deep calibration of rough stochastic volatility models

TL;DR: This work showcases a direct comparison of different potential approaches to the learning stage, presents algorithms that provide sufficient accuracy for practical use, and provides the first neural-network-based calibration method for rough volatility models, for which calibration can be done on the fly.
Posted Content

Variance Reduction Applied to Machine Learning for Pricing Bermudan/American Options in High Dimension

TL;DR: An efficient method to compute the price of multi-asset American options, based on machine learning, Monte Carlo simulation, and variance reduction; the European option price is employed as a control variate, which makes it possible to treat very large baskets and to reduce the variance of the price estimators.
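The control-variate idea named here fits in a few lines. The following is a generic sketch of the principle (plain European call under Black-Scholes, with the terminal asset price as control, whose mean S0*exp(r*T) is known in closed form), not the paper's Bermudan/American algorithm, where the European option price plays the role of the control.

```python
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.2, 1.0
n = 100_000

Z = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
Y = np.exp(-r * T) * np.maximum(ST - K, 0.0)    # discounted payoff

X = ST                                          # control variate
EX = S0 * np.exp(r * T)                         # known mean of the control
b = np.cov(Y, X)[0, 1] / np.var(X)              # estimated optimal coefficient
Y_cv = Y - b * (X - EX)                         # same mean, smaller variance

print("plain MC   :", Y.mean(), "+/-", Y.std(ddof=1) / np.sqrt(n))
print("control var:", Y_cv.mean(), "+/-", Y_cv.std(ddof=1) / np.sqrt(n))
```

The variance reduction is driven by the correlation between payoff and control; in the cited paper the American price and its European counterpart are strongly correlated, which is what makes the European price an effective control.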
Posted Content

Improved error bound for multivariate Chebyshev polynomial interpolation

TL;DR: For tensorized Chebyshev interpolation, this work presents an error bound that improves significantly on existing results, which is essential for efficiency in high-dimensional applications.
Posted Content

The Deep Parametric PDE Method: Application to Option Pricing

TL;DR: A single neural network approximates the solution of a whole family of PDEs after being trained without the need for sample solutions; a comparison with alternative machine learning approaches confirms the effectiveness of the method.
Journal Article

A Block-Sparse Tensor Train Format for Sample-Efficient High-Dimensional Polynomial Regression

TL;DR: This work proposes to extend the low-rank tensor framework with the concept of block-sparsity in the context of polynomial regression, adapting the ansatz space to align better with known sample-complexity results.
References
Journal Article

Tensor Decompositions and Applications

TL;DR: This survey provides an overview of higher-order tensor decompositions, their applications, and available software.
Journal Article

A Closed-Form Solution for Options with Stochastic Volatility with Applications to Bond and Currency Options

TL;DR: In this paper, a closed-form solution for the price of a European call option on an asset with stochastic volatility is derived; the solution is based on characteristic functions, and the technique can be applied to other problems.
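The characteristic-function route to option prices can be sketched compactly. The snippet below uses the Gil-Pelaez-style inversion underlying Heston's formula, but with the Black-Scholes (log-normal) characteristic function as a stand-in so the result can be checked against the closed form; swapping Heston's characteristic function into phi yields his semi-closed-form price. Grid length and truncation point are illustrative choices.

```python
import numpy as np

S0, K, r, T, sigma = 100.0, 95.0, 0.03, 1.0, 0.25

def phi(u):
    # Characteristic function of ln S_T; Black-Scholes stand-in here,
    # replace with Heston's characteristic function for his model.
    m = np.log(S0) + (r - 0.5 * sigma**2) * T
    return np.exp(1j * u * m - 0.5 * sigma**2 * T * u**2)

trapz = lambda y, u: float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(u)))
u = np.linspace(1e-8, 200.0, 4000)       # truncated inversion grid
lk = np.log(K)

# Gil-Pelaez inversion: the two "exercise probabilities" in the formula
# call = S0 * P1 - K * exp(-r * T) * P2.
P2 = 0.5 + trapz(np.real(np.exp(-1j * u * lk) * phi(u) / (1j * u)), u) / np.pi
P1 = 0.5 + trapz(np.real(np.exp(-1j * u * lk) * phi(u - 1j)
                         / (1j * u * phi(-1j))), u) / np.pi

call = S0 * P1 - K * np.exp(-r * T) * P2
print(call)                              # agrees with the Black-Scholes formula
```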
Book

Monte Carlo Methods in Financial Engineering

TL;DR: This book provides a comprehensive treatment of Monte Carlo methods in financial engineering, covering the generation of random numbers and random variables, variance reduction techniques, and applications to derivative pricing and risk management.
Journal Article

Tensor-Train Decomposition

TL;DR: The new form gives a clear and convenient way to implement all basic operations efficiently, and the efficiency is demonstrated by the computation of the smallest eigenvalue of a 19-dimensional operator.
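For readers new to the format, here is a compact sketch of the TT-SVD idea described in this reference: decompose a full tensor into three-way cores by sequential truncated SVDs of its unfoldings. This is the didactic full-tensor variant (the parametric-pricing paper above instead avoids ever forming the full tensor, fitting the cores by completion from few entries); the test tensor and tolerance are illustrative.

```python
import numpy as np

def tt_svd(A, eps=1e-10):
    """TT cores of a full tensor via sequential truncated SVDs."""
    d, shape = A.ndim, A.shape
    cores, r = [], 1
    C = A.reshape(shape[0], -1)          # first unfolding (leading rank = 1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        rk = int(np.sum(s > eps * s[0])) or 1      # truncation rank
        cores.append(U[:, :rk].reshape(r, shape[k], rk))
        C = s[:rk, None] * Vt[:rk]       # carry the rest to the next step
        r = rk
        if k < d - 2:
            C = C.reshape(r * shape[k + 1], -1)
    cores.append(C.reshape(r, shape[-1], 1))
    return cores

def tt_full(cores):
    """Contract TT cores back into the full tensor (for checking)."""
    ns = [G.shape[1] for G in cores]
    out = cores[0].reshape(ns[0], -1)
    for G in cores[1:]:
        out = out.reshape(-1, G.shape[0]) @ G.reshape(G.shape[0], -1)
    return out.reshape(ns)

# Test on a tensor with exact TT ranks 2: sin(x_i + x_j + x_k + x_l).
x = np.linspace(0.0, 1.0, 10)
A = np.sin(x[:, None, None, None] + x[None, :, None, None]
           + x[None, None, :, None] + x[None, None, None, :])
cores = tt_svd(A)
print([G.shape for G in cores])          # core shapes reveal the TT ranks
print(np.max(np.abs(A - tt_full(cores))))  # reconstruction error ~ 1e-15
```

Storage drops from n^d entries to O(d * n * r^2) for maximal TT rank r, which is what makes the format attractive for the high-dimensional coefficient tensors in the paper above.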
Journal Article

A Practical Introduction to Tensor Networks: Matrix Product States and Projected Entangled Pair States

TL;DR: This is a partly non-technical introduction to selected topics on tensor network methods, based on several lectures and introductory seminars given on the subject; it should be a good place for newcomers to become familiar with some of the key ideas in the field.