Open Access · Posted Content

Pricing high-dimensional Bermudan options with hierarchical tensor formats.

Abstract: An efficient compression technique based on hierarchical tensors for popular option pricing methods is presented. It is shown that the "curse of dimensionality" can be alleviated for the computation of Bermudan option prices with the Monte Carlo least-squares approach as well as the dual martingale method, both using high-dimensional tensorized polynomial expansions. This discretization allows for a simple and computationally cheap evaluation of conditional expectations. Complexity estimates are provided as well as a description of the optimization procedures in the tensor train format. Numerical experiments illustrate the favourable accuracy of the proposed methods. The dynamic programming method yields results comparable to recent neural-network-based methods.
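The least-squares Monte Carlo approach mentioned here estimates continuation values by backward induction over simulated paths. The sketch below shows this dynamic-programming loop for a single-asset Bermudan put with a plain polynomial basis; it is a minimal illustration only, since the paper's contribution is to replace such a basis by high-dimensional tensorized expansions compressed in the tensor train format. All names, parameters, and step sizes are illustrative.

```python
import numpy as np

def bermudan_put_lsmc(spot=100.0, strike=100.0, r=0.05, sigma=0.2,
                      T=1.0, n_ex=10, n_paths=50_000, degree=3, seed=0):
    """Least-squares Monte Carlo sketch (Longstaff-Schwartz style)."""
    rng = np.random.default_rng(seed)
    dt = T / n_ex
    # Simulate geometric Brownian motion at the exercise dates.
    z = rng.standard_normal((n_paths, n_ex))
    S = spot * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                                + sigma * np.sqrt(dt) * z, axis=1))
    payoff = lambda s: np.maximum(strike - s, 0.0)
    V = payoff(S[:, -1])                      # value at the last exercise date
    for t in range(n_ex - 2, -1, -1):
        V *= np.exp(-r * dt)                  # discount one step back
        itm = payoff(S[:, t]) > 0             # regress on in-the-money paths
        basis = np.vander(S[itm, t] / strike, degree + 1)
        beta = np.linalg.lstsq(basis, V[itm], rcond=None)[0]
        cont = basis @ beta                   # estimated continuation value
        exercise = payoff(S[itm, t]) > cont
        V[np.flatnonzero(itm)[exercise]] = payoff(S[itm, t])[exercise]
    return np.exp(-r * dt) * V.mean()
```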


Citations

5 results found


Open Access · Posted Content
Abstract: High-dimensional partial differential equations (PDEs) are ubiquitous in economics, science and engineering. However, their numerical treatment poses formidable challenges since traditional grid-based methods tend to be frustrated by the curse of dimensionality. In this paper, we argue that tensor trains provide an appealing approximation framework for parabolic PDEs: the combination of reformulations in terms of backward stochastic differential equations and regression-type methods in the tensor format holds the promise of leveraging latent low-rank structures, enabling both compression and efficient computation. Following this paradigm, we develop novel iterative schemes, involving updates that are either explicit and fast or implicit and accurate. We demonstrate in a number of examples that our methods achieve a favorable trade-off between accuracy and computational efficiency in comparison with state-of-the-art neural-network-based approaches.
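A regression ansatz of the kind described here represents the unknown function as a tensor train contracted with univariate feature maps, so that evaluation is a sequence of small matrix-vector products rather than a sum over exponentially many basis terms. A hedged sketch of such an evaluation (names and shapes are illustrative, not the paper's implementation):

```python
import numpy as np

def tt_eval(cores, features, x):
    """Evaluate f(x) from a tensor train contracted with feature maps.

    cores[k] has shape (r_{k-1}, m, r_k) with r_0 = r_d = 1, and
    features(x_k) returns the m univariate basis values at x_k.
    """
    v = np.ones(1)
    for k, G in enumerate(cores):
        phi = features(x[k])                    # basis values, shape (m,)
        v = v @ np.einsum('imj,m->ij', G, phi)  # contract one core
    return v.item()

# Example: d = 5 variables, monomial basis (1, x, x^2), random rank-3 cores.
d, m, r = 5, 3, 3
ranks = [1] + [r] * (d - 1) + [1]
cores = [np.random.rand(ranks[k], m, ranks[k + 1]) for k in range(d)]
monomials = lambda t: np.array([1.0, t, t * t])
print(tt_eval(cores, monomials, np.random.rand(d)))
```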


Topics: Tensor (intrinsic definition), Tensor, Partial differential equation

5 Citations


Open Access · Posted Content
Philipp Trunschke (1 institution)
Abstract: We consider the problem of approximating a function in general nonlinear subsets of $L^2$ when only a weighted Monte Carlo estimate of the $L^2$-norm can be computed. Of particular interest in this setting is the sample complexity, the number of samples necessary to recover the best approximation. Bounds for this quantity were derived in a previous work; they depend primarily on the model class and do not benefit from the regularity of the sought function. This, however, is only a worst-case bound and cannot explain the remarkable performance of iterative hard thresholding algorithms observed in practice. We reexamine the results of the previous paper and derive a new bound that exploits the regularity of the sought function. A critical analysis of our results allows us to derive a sample-efficient algorithm for the model set of low-rank tensors. The viability of this algorithm is demonstrated by recovering quantities of interest for a classical high-dimensional random partial differential equation.
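For context, iterative hard thresholding alternates a gradient step on the least-squares objective with a projection onto the model set; for low-rank matrices the projection is a truncated SVD. A minimal matrix-case sketch (the paper's model set is low-rank tensors; the step size and iteration count here are illustrative assumptions):

```python
import numpy as np

def iht_lowrank(A, b, shape, rank, step=1.0, iters=200):
    # Iterative hard thresholding: gradient step on ||A vec(X) - b||^2,
    # then projection onto rank-<=r matrices via truncated SVD.
    X = np.zeros(shape)
    for _ in range(iters):
        grad = (A.T @ (A @ X.ravel() - b)).reshape(shape)
        Y = X - step * grad
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return X
```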


Topics: Function (mathematics), Non-linear least squares, Nonlinear system

1 Citation


Open Access · Posted Content
Sebastian Krämer (1 institution)
Abstract: Affine sum-of-ranks minimization (ASRM) generalizes the affine rank minimization (ARM) problem from matrices to tensors: here the interest lies in the ranks of a family $\mathcal{K}$ of different matricizations. Transferring our previously discussed results on asymptotic log-det rank minimization, we show that iteratively reweighted least squares with weight strength $p = 0$ remains a theoretically and practically viable method, denoted $\mathrm{IRLS}$-$0\mathcal{K}$. As in the matrix case, we prove global convergence of asymptotic minimizers of the log-det sum-of-ranks function to desired solutions. Further, we show local convergence of $\mathrm{IRLS}$-$0\mathcal{K}$ depending on the rate of decline of the regularization parameter $\gamma \searrow 0$ appearing therein. For hierarchical families $\mathcal{K}$, we show how an alternating version ($\mathrm{AIRLS}$-$0\mathcal{K}$, related to prior work under the name $\mathrm{SALSA}$) can be evaluated solely through tensor tree network based operations, so that the method avoids exponential computational complexity and can be applied in high dimensions. Moreover, the otherwise crucial rank adaption process becomes essentially superfluous, even for completion problems. In numerical experiments, we show that the required subspace restrictions and the relaxation of the affine constraint cause only a marginal loss of approximation quality. On the other hand, we demonstrate that $\mathrm{IRLS}$-$0\mathcal{K}$ allows one to observe the theoretical phase transition for generic tensor recoverability in practice. Finally, we apply $\mathrm{AIRLS}$-$0\mathcal{K}$ to larger-scale problems.
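In the matrix case, one IRLS-0 sweep reweights a least-squares problem with $W = (XX^\top + \gamma I)^{-1}$, the weight arising from the smoothed log-det rank surrogate, and then shrinks $\gamma$. The penalised sketch below is a hypothetical simplification of that idea: the paper treats the affine constraint directly and uses sum-of-ranks weights over a family of matricizations, neither of which is shown here.

```python
import numpy as np

def irls0(A, b, m, n, lam=1e-4, gamma=1.0, sweeps=40):
    """Penalised IRLS-0 sketch for recovering low-rank X from A @ vec(X) = b.

    W = (X X^T + gamma*I)^{-1} weights the smoothed log-det rank
    surrogate; gamma is driven towards zero across the sweeps.
    """
    X = np.zeros((m, n))
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(sweeps):
        W = np.linalg.inv(X @ X.T + gamma * np.eye(m))
        # tr(W X X^T) = vec(X)^T (W kron I_n) vec(X) for row-major vec
        x = np.linalg.solve(AtA + lam * np.kron(W, np.eye(n)), Atb)
        X = x.reshape(m, n)
        gamma = max(0.7 * gamma, 1e-8)   # the rate of decline matters
    return X
```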


1 Citation


Open Access · Posted Content
Abstract: We present a novel method to approximate optimal feedback laws for nonlinear optimal control based on low-rank tensor train (TT) decompositions. The approach is based on the Dirac-Frenkel variational principle with the modification that the optimisation uses an empirical risk. Compared to current state-of-the-art TT methods, our approach exhibits a greatly reduced computational burden while achieving comparable results. A rigorous description of the numerical scheme and demonstrations of its performance are provided.
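For orientation, the Dirac-Frenkel variational principle constrains the time derivative of the approximation to the tangent space of the low-rank manifold. In a standard formulation (not quoted from the paper), a value function $v_\theta$ evolving on the tensor train manifold $\mathcal{M}$ satisfies

$$\Big\langle \partial_t v_\theta - \mathcal{F}(v_\theta),\, u \Big\rangle = 0 \qquad \text{for all } u \in T_{v_\theta}\mathcal{M},$$

where $\mathcal{F}$ denotes the dynamics of the underlying evolution equation; the modification described in the abstract replaces the exact inner product by an empirical risk over sampled states.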



Open Access · Posted Content
Abstract: Low-rank tensors are an established framework for high-dimensional least-squares problems. We propose to extend this framework by including the concept of block-sparsity. In the context of polynomial regression each sparsity pattern corresponds to some subspace of homogeneous multivariate polynomials. This allows us to adapt the ansatz space to align better with known sample complexity results. The resulting method is tested in numerical experiments and demonstrates improved computational resource utilization and sample efficiency.
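The correspondence between sparsity patterns and polynomial subspaces can be seen in plain dense form: restricting to monomials of bounded total degree selects a subspace of multivariate polynomials, and least squares is solved over only those coefficients. A small illustrative sketch (the paper instead encodes the corresponding block-sparsity inside the tensor cores rather than materialising the features):

```python
import numpy as np
from itertools import product

def degree_restricted_lstsq(X, y, degree):
    # Multivariate polynomial regression restricted to total degree <= degree.
    n, d = X.shape
    alphas = [a for a in product(range(degree + 1), repeat=d)
              if sum(a) <= degree]                      # admissible exponents
    Phi = np.stack([np.prod(X ** np.array(a), axis=1) for a in alphas], axis=1)
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return coef, alphas

# Example: recover f(x) = x0*x1 + x2^2 from noisy samples in d = 3 variables.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (500, 3))
y = X[:, 0] * X[:, 1] + X[:, 2] ** 2 + 0.01 * rng.standard_normal(500)
coef, alphas = degree_restricted_lstsq(X, y, degree=2)
```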


References

41 results found


Open Access · Book
07 Aug 2003
Abstract: Contents: Foundations; Generating Random Numbers and Random Variables; Generating Sample Paths; Variance Reduction Techniques; Quasi-Monte Carlo Methods; Discretization Methods; Estimating Sensitivities; Pricing American Options; Applications in Risk Management; Appendices.


3,414 Citations


Open Access · Book
23 Dec 2007
Abstract: Many problems in the sciences and engineering can be rephrased as optimization problems on matrix search spaces endowed with a so-called manifold structure. This book shows how to exploit the special structure of such problems to develop efficient numerical algorithms. It places careful emphasis on both the numerical formulation of the algorithm and its differential geometric abstraction, illustrating how good algorithms draw equally from the insights of differential geometry, optimization, and numerical analysis. Two more theoretical chapters provide readers with the background in differential geometry necessary for algorithmic development. In the other chapters, several well-known optimization methods such as steepest descent and conjugate gradients are generalized to abstract manifolds. The book provides a generic development of each of these methods, building upon the material of the geometric chapters. It then guides readers through the calculations that turn these geometrically formulated methods into concrete numerical algorithms. The state-of-the-art algorithms given as examples are competitive with the best existing algorithms for a selection of eigenspace problems in numerical linear algebra. Optimization Algorithms on Matrix Manifolds offers techniques with broad applications in linear algebra, signal processing, data mining, computer vision, and statistical analysis. It can serve as a graduate-level textbook and will be of interest to applied mathematicians, engineers, and computer scientists.
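The simplest instance of the book's recipe, projecting the gradient onto the tangent space and retracting back to the manifold, is steepest descent on the unit sphere, which already solves an eigenvalue problem via the Rayleigh quotient. A minimal sketch, with step size and iteration count chosen arbitrarily:

```python
import numpy as np

def sphere_descent(grad_f, x0, step=0.05, iters=500):
    # Riemannian steepest descent on the sphere S^{n-1}:
    # project the Euclidean gradient onto the tangent space at x,
    # take a step, then retract to the manifold by normalisation.
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        g = grad_f(x)
        g_tan = g - (x @ g) * x          # tangent-space projection
        x = x - step * g_tan
        x /= np.linalg.norm(x)           # retraction
    return x

# Minimising the Rayleigh quotient x^T A x over the sphere yields the
# eigenvector of the smallest eigenvalue of a symmetric A.
A = np.diag([1.0, 2.0, 5.0])
x = sphere_descent(lambda v: 2 * A @ v, np.ones(3))
```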


Topics: Numerical linear algebra, Optimization problem, Linear algebra

2,492 Citations


Open Access · Journal Article · DOI: 10.1093/RFS/14.1.113
Abstract: This article presents a simple yet powerful new approach for approximating the value of American options by simulation. The key to this approach is the use of least squares to estimate the conditional expected payoff to the optionholder from continuation. This makes this approach readily applicable in path-dependent and multifactor situations where traditional finite difference techniques cannot be used. We illustrate this technique with several realistic examples including valuing an option when the underlying asset follows a jump-diffusion process and valuing an American swaption in a 20-factor string model of the term structure.
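The least-squares step can be stated compactly: the continuation value at exercise date $t$, a conditional expectation of the discounted future value, is projected onto a finite set of basis functions of the state, with coefficients fitted by ordinary least squares across the simulated paths (standard notation, not verbatim from the article):

$$C_t(x) = \mathbb{E}\big[\,e^{-r\Delta t}\, V_{t+1} \mid X_t = x\,\big] \approx \sum_{k=1}^{K} \beta_k \,\phi_k(x), \qquad \hat\beta = \arg\min_{\beta} \sum_{i=1}^{N} \Big( e^{-r\Delta t} V_{t+1}^{(i)} - \sum_{k=1}^{K} \beta_k\, \phi_k\big(X_t^{(i)}\big) \Big)^{2},$$

and exercise is chosen wherever the immediate payoff exceeds $C_t$.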


Topics: Swaption, Binomial options pricing model, Real options valuation

2,409 Citations


Open Access · Journal Article · DOI: 10.1103/PHYSREVLETT.91.147902
Guifre Vidal (1 institution)
Abstract: We present a classical protocol to efficiently simulate any pure-state quantum computation that involves only a restricted amount of entanglement. More generally, we show how to classically simulate pure-state quantum computations on n qubits by using computational resources that grow linearly in n and exponentially in the amount of entanglement in the quantum computer. Our results imply that a necessary condition for an exponential computational speedup (with respect to classical computations) is that the amount of entanglement increases with the size n of the computation, and provide an explicit lower bound on the required growth.
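The resource count follows from expressing the state as a matrix product state: if every bipartition has Schmidt rank at most $\chi$, the $2^n$ amplitudes are generated by $n$ families of small matrices, using $O(n\chi^2)$ parameters (standard form; this format coincides with the tensor train decomposition appearing elsewhere on this page):

$$|\psi\rangle \;=\; \sum_{s_1,\dots,s_n \in \{0,1\}} A_1^{s_1} A_2^{s_2} \cdots A_n^{s_n}\, |s_1 s_2 \cdots s_n\rangle, \qquad A_k^{s_k} \in \mathbb{C}^{\chi_{k-1} \times \chi_k},\ \chi_0 = \chi_n = 1.$$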


Topics: Quantum algorithm, Amplitude damping channel, Quantum capacity

1,704 Citations


Journal Article · DOI: 10.1137/090752286
Abstract: A simple nonrecursive form of the tensor decomposition in $d$ dimensions is presented. It does not inherently suffer from the curse of dimensionality, it has asymptotically the same number of parameters as the canonical decomposition, but it is stable and its computation is based on low-rank approximation of auxiliary unfolding matrices. The new form gives a clear and convenient way to implement all basic operations efficiently. A fast rounding procedure is presented, as well as basic linear algebra operations. Examples showing the benefits of the decomposition are given, and the efficiency is demonstrated by the computation of the smallest eigenvalue of a 19-dimensional operator.
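The stable computation via low-rank approximation of unfolding matrices is a sequential SVD sweep: unfold, truncate, and absorb the remainder into the next mode. A compact sketch of this TT-SVD procedure (tolerance handling simplified; illustrative, not the paper's reference implementation):

```python
import numpy as np

def tt_svd(a, eps=1e-12):
    # Sequential SVD sweep: unfold, truncate, carry the remainder rightwards.
    n, d = a.shape, a.ndim
    cores, r = [], 1
    c = a.reshape(n[0], -1)
    for k in range(d - 1):
        c = c.reshape(r * n[k], -1)
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        rk = max(1, int(np.sum(s > eps * s[0])))   # truncated rank
        cores.append(u[:, :rk].reshape(r, n[k], rk))
        c = s[:rk, None] * vt[:rk]                 # remainder for next mode
        r = rk
    cores.append(c.reshape(r, n[-1], 1))
    return cores                                   # cores[k]: (r_{k-1}, n_k, r_k)

# Reconstruction check on a random 4-way tensor:
a = np.random.rand(3, 4, 5, 6)
cores = tt_svd(a)
full = cores[0]
for g in cores[1:]:
    full = np.tensordot(full, g, axes=(full.ndim - 1, 0))
assert np.allclose(full.reshape(a.shape), a)
```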


Topics: Singular value decomposition, Tucker decomposition, Matricization

1,516 Citations