Topic

Sparse grid

About: Sparse grid is a research topic. Over its lifetime, 1,013 publications have been published within this topic, receiving 20,664 citations.


Papers
Dissertation
01 Jan 2010
TL;DR: The curse of dimensionality, i.e. the exponential dependency of the overall computational effort on the number of dimensions, is still a roadblock for the numerical treatment of high-dimensional problems; sparse grids are a means to mitigate it.
Abstract: Disclaimer: This pdf version differs slightly from the printed version; a few typos have been corrected. Acknowledgments: This thesis would not have been possible without the direct and indirect contributions of several colleagues and friends, to whom I owe my greatest gratitude. Foremost, I am heartily thankful to my supervisor Hans-Joachim Bungartz for "panem et circenses", enabling me to work on the topic of sparse grids in a positive and collaborative atmosphere at his chair. I have much appreciated his unconditional support, the open atmosphere, and the full confidence he always offered. His encouragement to look at the bigger picture led to exciting collaborations with other groups and to the challenging applications I was able to deal with. During my work I have learned a lot beyond the mere scientific scope. I would like to thank my reviewers: Markus Hegland, for inspiring discussions and his warm welcome and hospitality in Down Under (even enriched by a funny hunt for a fluffy Australian spider), and Christoph Zenger, the "grand seigneur of sparse grids", for references to previous works and related problems. I am very grateful to all colleagues and collaborators. Special thanks go to Stefan Zimmer for all his support, providing everything from a constant source of ideas and excellent fast (and, if necessary, last-minute) feedback, up to jelly beans and cappuccino. I would like to mention especially the sparse grid group at our chair with Janos Benk, Gerrit Buse, Daniel Butnaru, and Stefanie Schraufstetter. I am very pleased to thank Mona Frommert and Torsten Ensslin from the Max Planck Institute for Astrophysics for providing challenging applications and an excellent collaboration, and Stefan Dirnstorfer and Andreas Grau for input and feedback regarding financial applications. Furthermore, I would like to thank all students who contributed to the SG++ toolbox and to applications. Finally, and most importantly, I would like to express my deepest gratitude to my beloved wife Doris and to my parents and sisters for all their constant support and patience throughout the last years. Without them, this thesis would hardly have been possible. Furthermore, I am very grateful to Martin, who has been best friend, colleague, housemate, support, and much more, all in one. Abstract: The curse of dimensionality, i.e. the exponential dependency of the overall computational effort on the number of dimensions, is still a roadblock for the numerical treatment …

164 citations
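To make the curse of dimensionality mentioned in the abstract above concrete, the short Python sketch below compares the number of points of a full tensor grid with mesh width 2^-n on the unit cube against a regular sparse grid of the same level (interior points only, i.e. the standard construction without boundary points). The function names and the enumeration-based count are illustrative choices, not taken from the thesis.

```python
from itertools import product

def full_grid_size(d, n):
    # Full tensor grid with mesh width 2**-n on [0, 1]^d (interior points only):
    # grows exponentially in d -- the curse of dimensionality.
    return (2**n - 1) ** d

def sparse_grid_size(d, n):
    # Regular sparse grid: sum of 2**(|l|_1 - d) over level multi-indices l >= 1
    # with |l|_1 <= n + d - 1 (each 1-D level l_i contributes 2**(l_i - 1) points).
    return sum(2 ** (sum(l) - d)
               for l in product(range(1, n + 1), repeat=d)
               if sum(l) <= n + d - 1)

for d in (2, 4, 6, 8):
    print(f"d={d}: full grid {full_grid_size(d, 5):,} points, "
          f"sparse grid {sparse_grid_size(d, 5):,} points")
```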

Journal ArticleDOI
TL;DR: This work uses the stochastic collocation method, together with the previous estimates, to introduce a new class of sparse grids based on selecting a priori the most profitable hierarchical surpluses; these grids feature better convergence properties than standard Smolyak or tensor-product grids.
Abstract: In this work we focus on the numerical approximation of the solution u of a linear elliptic PDE with stochastic coefficients. The problem is rewritten as a parametric PDE and the functional dependence of the solution on the parameters is approximated by multivariate polynomials. We first consider the stochastic Galerkin method, and rely on sharp estimates for the decay of the Fourier coefficients of the spectral expansion of u on an orthogonal polynomial basis to build a sequence of polynomial subspaces that features better convergence properties, in terms of error versus number of degrees of freedom, than standard choices such as Total Degree or Tensor Product subspaces. We consider then the Stochastic Collocation method, and use the previous estimates to introduce a new class of Sparse Grids, based on the idea of selecting a priori the most profitable hierarchical surpluses, that, again, features better convergence properties compared to standard Smolyak or tensor product grids. Numerical results show the effectiveness of the newly introduced polynomial spaces and sparse grids.

160 citations
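The core idea of the paper above, choosing the multi-index set of the sparse grid a priori according to an estimated profit instead of the standard isotropic Smolyak rule, can be sketched in a few lines. The sketch is only illustrative: the weight vector, the budget w, and the function names are assumptions, not the authors' actual profit estimates.

```python
from itertools import product

def smolyak_index_set(d, w):
    """Standard isotropic Smolyak set: multi-indices l >= 1 with |l|_1 <= w + d."""
    return [l for l in product(range(1, w + d + 1), repeat=d) if sum(l) <= w + d]

def weighted_index_set(d, w, weights):
    """Anisotropic set: keep indices whose weighted level sum stays within the budget,
    mimicking an a-priori profit criterion that favours the important dimensions."""
    return [l for l in product(range(1, w + d + 1), repeat=d)
            if sum(g * (li - 1) for g, li in zip(weights, l)) <= w]

if __name__ == "__main__":
    d, w = 3, 4
    iso = smolyak_index_set(d, w)
    aniso = weighted_index_set(d, w, weights=[1.0, 2.0, 4.0])  # dimension 3 least important
    print(len(iso), "isotropic indices vs.", len(aniso), "anisotropic indices")
```

With strongly varying weights, the anisotropic set concentrates its indices in the important dimensions, which is what yields improved convergence per degree of freedom.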

Journal ArticleDOI
TL;DR: A computationally efficient machine learning framework is developed, based on multi-level recursive co-kriging with sparse precision matrices of Gaussian–Markov random fields, that constructs response surfaces of complex dynamical systems by blending multiple information sources via auto-regressive stochastic modelling.
Abstract: We propose a new framework for design under uncertainty based on stochastic computer simulations and multi-level recursive co-kriging. The proposed methodology simultaneously takes into account multi-fidelity in models, such as direct numerical simulations versus empirical formulae, as well as multi-fidelity in the probability space (e.g. sparse grids versus tensor product multi-element probabilistic collocation). We are able to construct response surfaces of complex dynamical systems by blending multiple information sources via auto-regressive stochastic modelling. A computationally efficient machine learning framework is developed based on multi-level recursive co-kriging with sparse precision matrices of Gaussian–Markov random fields. The effectiveness of the new algorithms is demonstrated in numerical examples involving a prototype problem in risk-averse design, regression of random functions, as well as uncertainty quantification in fluid mechanics involving the evolution of a Burgers equation from a random initial state, and random laminar wakes behind circular cylinders.

158 citations
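A minimal two-level version of the auto-regressive co-kriging idea (high fidelity ≈ scale factor times low fidelity plus a discrepancy term) can be sketched with off-the-shelf Gaussian process regression. This is a simplification: the paper's multi-level recursion and the sparse precision matrices of Gaussian–Markov random fields are not reproduced here, and the toy functions, kernels, and variable names are assumptions for illustration only.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Synthetic 1-D example: a cheap and an expensive model of the same quantity.
f_low  = lambda x: np.sin(8 * x)                     # low fidelity (e.g. empirical formula)
f_high = lambda x: 1.2 * np.sin(8 * x) + 0.3 * x     # high fidelity (e.g. direct simulation)

x_low  = np.linspace(0, 1, 30)[:, None]              # many cheap samples
x_high = np.linspace(0, 1, 6)[:, None]               # few expensive samples
y_low, y_high = f_low(x_low).ravel(), f_high(x_high).ravel()

# Level 1: GP on the low-fidelity data.
gp_low = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True).fit(x_low, y_low)

# Level 2: auto-regressive link  y_high(x) ~ rho * gp_low(x) + delta(x).
low_at_high = gp_low.predict(x_high)
rho = np.dot(low_at_high, y_high) / np.dot(low_at_high, low_at_high)  # least-squares scale
gp_delta = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True).fit(
    x_high, y_high - rho * low_at_high)

# Multi-fidelity prediction on new points.
x_test = np.linspace(0, 1, 200)[:, None]
y_pred = rho * gp_low.predict(x_test) + gp_delta.predict(x_test)
print("max error:", np.abs(y_pred - f_high(x_test).ravel()).max())
```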

Journal ArticleDOI
TL;DR: A multilevel approach for the solution of partial differential equations, based on a multiscale basis constructed from a one-dimensional multiscale basis by the tensor product approach, which makes it well suited for higher-dimensional problems.
Abstract: We present a multilevel approach for the solution of partial differential equations. It is based on a multiscale basis which is constructed from a one-dimensional multiscale basis by the tensor product approach. Together with the use of hash tables as data structure, this allows in a simple way for adaptive refinement and is, due to the tensor product approach, well suited for higher dimensional problems. Also, the adaptive treatment of partial differential equations, the discretization (involving finite differences) and the solution (here by preconditioned BiCG) can be programmed easily. We describe the basic features of the method, discuss the discretization, the solution and the refinement procedures and report on the results of different numerical experiments. — Author's Abstract

153 citations
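The combination described in the abstract above, a tensor product of one-dimensional hierarchical basis functions stored in a hash table keyed by level-index pairs, can be sketched as follows. The helper names, the piecewise-linear hat functions, and the level-sum criterion for the regular grid are illustrative assumptions rather than the paper's actual implementation (which also covers the finite-difference discretization and a preconditioned BiCG solver).

```python
from itertools import product

def hat(l, i, x):
    """1-D hierarchical hat function of level l and (odd) index i on [0, 1]."""
    return max(0.0, 1.0 - abs(2**l * x - i))

def regular_sparse_grid(d, n):
    """Hash map {(levels, indices): coordinates} of a level-n sparse grid
    without boundary points: |levels|_1 <= n + d - 1, odd indices per level."""
    grid = {}
    for levels in product(range(1, n + 1), repeat=d):
        if sum(levels) > n + d - 1:
            continue
        for indices in product(*(range(1, 2**l, 2) for l in levels)):
            grid[(levels, indices)] = tuple(i / 2**l for l, i in zip(levels, indices))
    return grid

def basis(levels, indices, x):
    """d-dimensional basis function: tensor product of 1-D hats."""
    v = 1.0
    for l, i, xj in zip(levels, indices, x):
        v *= hat(l, i, xj)
    return v

grid = regular_sparse_grid(d=3, n=4)
print(len(grid), "grid points")          # far fewer than the (2**4 - 1)**3 full-grid points

# Example: evaluate one tensor-product basis function at a point.
key = next(iter(grid))
print(basis(*key, x=(0.3, 0.5, 0.7)))

# Because storage is a plain hash map, adaptive refinement amounts to inserting
# additional (levels, indices) keys without reorganizing any global data structure.
```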

Journal ArticleDOI
TL;DR: In this paper, kernel ridge regression is used to approximate the kinetic energy of non-interacting fermions in a one-dimensional box as a functional of their density, and a projected gradient descent algorithm is derived using local principal component analysis.
Abstract: Machine learning (ML) is an increasingly popular statistical tool for analyzing either measured or calculated data sets. Here, we explore its application to a well-defined physics problem, investigating issues of how the underlying physics is handled by ML, and how self-consistent solutions can be found by limiting the domain in which ML is applied. The particular problem is how to find accurate approximate density functionals for the kinetic energy (KE) of noninteracting electrons. Kernel ridge regression is used to approximate the KE of non-interacting fermions in a one dimensional box as a functional of their density. The properties of different kernels and methods of cross-validation are explored, reproducing the physics faithfully in some cases, but not others. We also address how self-consistency can be achieved with information on only a limited electronic density domain. Accurate constrained optimal densities are found via a modified Euler-Lagrange constrained minimization of the machine-learned total energy, despite the poor quality of its functional derivative. A projected gradient descent algorithm is derived using local principal component analysis. Additionally, a sparse grid representation of the density can be used without degrading the performance of the methods. The implications for machine-learned density functional approximations are discussed. © 2015 Wiley Periodicals, Inc.

143 citations
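A minimal kernel ridge regression setup in the spirit of the paper above, learning a functional of a density sampled on a grid with cross-validated kernel width and regularization, might look as follows. The synthetic densities and the surrogate target functional are invented for illustration only and are not the kinetic-energy data used by the authors.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)

# Toy data set: smooth normalized "densities" sampled on a grid, and a
# nonlinear functional of them standing in for the kinetic-energy target.
def random_density():
    n = sum(rng.uniform(0.5, 1.5) * np.sin(np.pi * k * x)**2 for k in (1, 2, 3))
    return n / np.trapz(n, x)

densities = np.array([random_density() for _ in range(200)])
targets = np.array([np.trapz(n**3, x) for n in densities])   # surrogate functional

X_train, X_test, y_train, y_test = train_test_split(densities, targets, random_state=0)

# Kernel ridge regression with cross-validated kernel width and regularization.
model = GridSearchCV(
    KernelRidge(kernel="rbf"),
    {"alpha": [1e-8, 1e-6, 1e-4], "gamma": [0.1, 1.0, 10.0]},
    cv=5,
).fit(X_train, y_train)

print("test RMSE:", np.sqrt(np.mean((model.predict(X_test) - y_test) ** 2)))
```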

Network Information
Related Topics (5)
Discretization: 53K papers, 1M citations (89% related)
Iterative method: 48.8K papers, 1.2M citations (83% related)
Numerical analysis: 52.2K papers, 1.2M citations (83% related)
Partial differential equation: 70.8K papers, 1.6M citations (82% related)
Differential equation: 88K papers, 2M citations (78% related)
Performance Metrics
Number of papers in this topic in previous years:
2023: 14
2022: 42
2021: 57
2020: 40
2019: 60
2018: 72