Open Access · Posted Content

Hyperbolic cross approximation in infinite dimensions

TLDR
In this article, the authors give tight upper and lower bounds on the cardinality of the index sets of certain hyperbolic crosses which reflect mixed Sobolev-Korobov-type smoothness and mixed Sobolev-analytic-type smoothness in the infinite-dimensional case.
Abstract
We give tight upper and lower bounds on the cardinality of the index sets of certain hyperbolic crosses which reflect mixed Sobolev-Korobov-type smoothness and mixed Sobolev-analytic-type smoothness in the infinite-dimensional case where specific summability properties of the smoothness indices are fulfilled. These estimates are then applied to the linear approximation of functions from the associated spaces in terms of the $\varepsilon$-dimension of their unit balls. Here, the approximation is based on linear information. Such function spaces appear, for example, in the solution of parametric and stochastic PDEs. The obtained upper and lower bounds of the approximation error, as well as of the associated $\varepsilon$-complexities, are completely independent of any dimension. Moreover, the rates are independent of the parameters which define the smoothness properties of the infinite-variate parametric or stochastic part of the solution; these parameters enter only the order constants. In this way, linear approximation theory becomes possible in the infinite-dimensional case, and the corresponding infinite-dimensional problems become tractable.
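A concrete illustration of the central object: the Python sketch below is a minimal finite-dimensional stand-in that counts the cardinality of the classical unweighted hyperbolic cross $\{k \in \mathbb{N}_0^d : \prod_j \max(1, k_j) \le T\}$. The paper's Sobolev-Korobov- and Sobolev-analytic-type crosses additionally carry smoothness weights, which are omitted here.

```python
from functools import lru_cache

def hyperbolic_cross_cardinality(d: int, T: int) -> int:
    """Count multi-indices k in N_0^d with prod_j max(1, k_j) <= T."""
    @lru_cache(maxsize=None)
    def count(dim: int, budget: int) -> int:
        if dim == 0:
            return 1  # the empty product equals 1 <= budget
        total, k = 0, 0
        while max(1, k) <= budget:
            # remaining coordinates must fit into the reduced budget
            total += count(dim - 1, budget // max(1, k))
            k += 1
        return total
    return count(d, T)

# The cross grows roughly like T (log T)^(d-1), far slower than the
# (T+1)^d points of the full tensor grid:
for d in (2, 4, 8):
    print(d, hyperbolic_cross_cardinality(d, 32), (32 + 1) ** d)
```

This slow growth in $d$ is the combinatorial effect that the cardinality bounds of the paper quantify, pushed there to the infinite-dimensional weighted setting.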


Citations
Book · DOI

Hyperbolic Cross Approximation

TL;DR: A survey of classical methods developed in multivariate approximation theory, which are known to work very well in moderate dimensions and have potential for applications in very high dimensions.
Journal Article · DOI

Polynomial approximation via compressed sensing of high-dimensional functions on lower sets

TL;DR: In this paper, a compressed sensing approach to polynomial approximation of complex-valued functions in high dimensions is proposed and analyzed. The target function is assumed to be smooth, characterized by a rapidly decaying orthonormal expansion whose most important terms are captured by a lower (i.e., downward closed) set.
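As a rough illustration of the idea (not the paper's estimator), the sketch below recovers a sparse coefficient vector on a hypothetical lower set of monomial exponents via basis pursuit, posed as a linear program; the index set, coefficients, and sample count are invented for the example.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical lower (downward closed) set of 2D monomial exponents.
lower_set = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2), (3, 0)]

# Sparse ground truth: only the terms x and y^2 are active.
c_true = np.zeros(len(lower_set))
c_true[1], c_true[5] = 2.0, -0.5

def dictionary(pts):
    # Evaluate each monomial x^a * y^b at the sample points.
    return np.column_stack([pts[:, 0]**a * pts[:, 1]**b
                            for (a, b) in lower_set])

# Fewer random samples than unknowns: an underdetermined system.
pts = rng.uniform(-1.0, 1.0, size=(6, 2))
A = dictionary(pts)
y = A @ c_true

# Basis pursuit min ||c||_1 s.t. A c = y as an LP via c = c_pos - c_neg.
n = A.shape[1]
res = linprog(c=np.ones(2 * n),
              A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=[(0, None)] * (2 * n))
c_rec = res.x[:n] - res.x[n:]
print(np.round(c_rec, 6))  # typically recovers the two active terms
```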
Journal Article · DOI

Multi-Index Stochastic Collocation for random PDEs

TL;DR: This work proposes an optimization procedure to select the most effective mixed differences to include in the Multi-Index Stochastic Collocation (MISC) estimator, showing that in the optimal case the convergence rate of MISC is dictated only by the convergence of the deterministic solver applied to a one-dimensional problem.
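A common way to realize such a selection is a knapsack-style greedy strategy ranked by profit, i.e. estimated error reduction per unit work. The sketch below uses hypothetical error and work models (`error_est` and `work_est` are assumptions for illustration, not the paper's estimates) to fill a work budget with mixed differences.

```python
import itertools

# Hypothetical models for the mixed difference with index (alpha, beta):
# alpha = deterministic discretization level, beta = stochastic level.
# Both the decay and cost rates below are assumptions for illustration.
def error_est(alpha, beta):
    return 2.0 ** -(2 * alpha + 3 * beta)

def work_est(alpha, beta):
    return 2.0 ** (alpha + beta)

# Knapsack-style greedy selection: rank candidates by profit
# (error reduction per unit work) and fill a work budget.
candidates = sorted(itertools.product(range(6), range(6)),
                    key=lambda ab: error_est(*ab) / work_est(*ab),
                    reverse=True)

budget, used, index_set = 100.0, 0.0, []
for alpha, beta in candidates:
    w = work_est(alpha, beta)
    if used + w <= budget:
        index_set.append((alpha, beta))
        used += w

print(sorted(index_set))  # an anisotropic, downward-closed set here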
Journal Article · DOI

On tensor product approximation of analytic functions

TL;DR: In this article, two-sided bounds were proved on sums of the form $\sum_{k \in \mathbb{N}_0^d \setminus D_a(T)} \exp(-\sum_{j=1}^d a_j k_j)$.
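Assuming the anisotropic-simplex reading $D_a(T) = \{k \in \mathbb{N}_0^d : \sum_j a_j k_j \le T\}$ (an assumption; the paper's exact definition may differ), the tail sum can be evaluated numerically: the full sum over $\mathbb{N}_0^d$ factorizes into geometric series, and the part inside $D_a(T)$ can be enumerated directly.

```python
import itertools
import math

a = [1.0, 1.5, 2.0]   # assumed anisotropy weights a_j > 0
T = 6.0               # threshold defining D_a(T)

# The full sum over all of N_0^d factorizes into geometric series:
full = math.prod(1.0 / (1.0 - math.exp(-aj)) for aj in a)

# Enumerate D_a(T) = {k : sum_j a_j k_j <= T} directly; k_j <= T / a_j
# suffices because all weights and indices are nonnegative.
inside = 0.0
for k in itertools.product(*(range(int(T / aj) + 1) for aj in a)):
    s = sum(aj * kj for aj, kj in zip(a, k))
    if s <= T:
        inside += math.exp(-s)

tail = full - inside  # the quantity bounded two-sidedly in the paper
print(f"full={full:.6f}  inside={inside:.6f}  tail={tail:.3e}")
```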
Journal Article · DOI

Multi-index Stochastic Collocation Convergence Rates for Random PDEs with Parametric Regularity

TL;DR: This work analyzes the recent Multi-index Stochastic Collocation method for computing statistics of the solution of a partial differential equation (PDE) with random data, where the random coefficient is parametrized by means of a countable sequence of terms in a suitable expansion.
References
Journal Article · DOI

A Stochastic Collocation Method for Elliptic Partial Differential Equations with Random Input Data

TL;DR: A rigorous convergence analysis is provided, and exponential convergence of the “probability error” with respect to the number of Gauss points in each direction in the probability space is demonstrated, under some regularity assumptions on the random input data.
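A minimal one-parameter sketch of the method: a 1D diffusion problem whose coefficient depends on a single uniform random variable is solved at Gauss-Legendre collocation nodes, and the mean of a quantity of interest is obtained by quadrature. The model problem and finite-difference solver are stand-ins chosen for brevity, not the paper's setting.

```python
import numpy as np

# Stand-in model problem: -(a(y) u'(x))' = 1 on (0,1), u(0) = u(1) = 0,
# with a(y) = 2 + y and y ~ U[-1, 1]; quantity of interest Q(y) = u(1/2).
def solve_pde(y, n=199):
    h = 1.0 / (n + 1)
    c = (2.0 + y) / h**2
    A = (np.diag(2.0 * c * np.ones(n))
         + np.diag(-c * np.ones(n - 1), 1)
         + np.diag(-c * np.ones(n - 1), -1))
    u = np.linalg.solve(A, np.ones(n))
    return u[(n - 1) // 2]  # grid node at x = 1/2

# Stochastic collocation: solve at Gauss-Legendre nodes, then integrate;
# for the uniform density on [-1, 1] the mean is 0.5 * sum(w_i Q(y_i)).
for m in (2, 4, 8):
    nodes, weights = np.polynomial.legendre.leggauss(m)
    mean = 0.5 * sum(w * solve_pde(y) for y, w in zip(nodes, weights))
    print(m, mean)
# Exact value: E[u(1/2)] = E[1 / (8 (2 + y))] = log(3) / 16 ≈ 0.0686677
```

Because the solution is analytic in $y$, the quadrature error decays exponentially in the number of Gauss points, mirroring the convergence result summarized above.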
Journal Article · DOI

A Sparse Grid Stochastic Collocation Method for Partial Differential Equations with Random Input Data

TL;DR: This work demonstrates algebraic convergence with respect to the total number of collocation points and quantifies the effect of the dimension of the problem (number of input random variables) in the final estimates, indicating for which problems the sparse grid stochastic collocation method is more efficient than Monte Carlo.
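The point-count advantage behind this result shows up already at the level of index sets: a Smolyak-type simplex has $\binom{L+d}{d}$ members versus $(L+1)^d$ for the full tensor set. The sketch below compares the two counts (a simplified count; actual collocation point numbers depend on the one-dimensional rules used).

```python
from math import comb

# Smolyak-type simplex {i in N_0^d : sum_j i_j <= L} versus the full
# tensor index set {i : max_j i_j <= L}.
def sparse_count(d: int, L: int) -> int:
    return comb(L + d, d)  # stars-and-bars count of the simplex

L = 5
for d in (2, 5, 10, 20):
    print(d, sparse_count(d, L), (L + 1) ** d)
```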
Journal Article · DOI

When Are Quasi-Monte Carlo Algorithms Efficient for High Dimensional Integrals?

TL;DR: It is proved that the minimal worst-case error of quasi-Monte Carlo algorithms does not depend on the dimension $d$ iff the sum of the weights is finite, and that the minimal number of function values needed in the worst-case setting to reduce the initial error by $\varepsilon$ is bounded by $C\varepsilon^{-p}$, where the exponent $p \in [1, 2]$ and $C$ depends exponentially on the sum of the weights.
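The role of the weights can be illustrated numerically: for a product integrand with rapidly decaying coordinate weights, scrambled Sobol' points (via scipy.stats.qmc) beat plain Monte Carlo even in moderate dimension. The integrand and weight sequence below are invented for the illustration; the exact integral is 1.

```python
import numpy as np
from scipy.stats import qmc

d = 8
gamma = 1.0 / np.arange(1, d + 1) ** 2  # rapidly decaying weights

def f(x):
    # Product integrand with weight gamma_j on coordinate j;
    # its exact integral over [0, 1]^d equals 1.
    return np.prod(1.0 + gamma * (x - 0.5), axis=1)

rng = np.random.default_rng(0)
for m in (8, 10, 12):  # n = 2^m points
    n = 2 ** m
    mc_err = abs(f(rng.random((n, d))).mean() - 1.0)
    pts = qmc.Sobol(d=d, scramble=True, seed=42).random_base2(m)
    qmc_err = abs(f(pts).mean() - 1.0)
    print(f"n=2^{m}: MC error {mc_err:.1e}, QMC error {qmc_err:.1e}")
```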