
Showing papers on "Sparse grid published in 2018"


Journal ArticleDOI
TL;DR: A novel Ensemble Kalman Filter data assimilation method based on a parameterised non-intrusive reduced order model (P-NIROM), which is independent of the original computational code and whose computational cost is reduced by several orders of magnitude in comparison to the full EnKF.

34 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed an anisotropic sparse grid quadrature for functions which are analytically extendable into a tensor product domain and provided a dimension-independent error versus cost estimate.

30 citations


Journal ArticleDOI
TL;DR: This work addresses the propagation of sizable errors from the use of approximate Density Functional Theory to the catalytic turnover frequency (TOF) by global sensitivity and uncertainty analysis and opens the way for a global sensitivity analysis of more complex models, for instance, models based on kinetic Monte Carlo simulations.
Abstract: In the last decade, first-principles-based microkinetic modeling has been developed into an important tool for a mechanistic understanding of heterogeneous catalysis. A commonly known, but hitherto barely analyzed issue in this kind of modeling is the presence of sizable errors from the use of approximate Density Functional Theory (DFT). We here address the propagation of these errors to the catalytic turnover frequency (TOF) by global sensitivity and uncertainty analysis. Both analyses require the numerical quadrature of high-dimensional integrals. To achieve this efficiently, we utilize and extend an adaptive sparse grid approach and exploit the confinement of the strongly non-linear behavior of the TOF to local regions of the parameter space. We demonstrate the methodology on a model of the oxygen evolution reaction at the Co3O4 (110)-A surface, using a maximum entropy error model that imposes nothing but reasonable bounds on the errors. For this setting, the DFT errors lead to an absolute uncertainty of several orders of magnitude in the TOF. We nevertheless find that it is still possible to draw conclusions from such uncertain models about the atomistic aspects controlling the reactivity. A comparison with derivative-based local sensitivity analysis instead reveals that this more established approach provides incomplete information. Since the adaptive sparse grids allow for the evaluation of the integrals with only a modest number of function evaluations, this approach opens the way for a global sensitivity analysis of more complex models, for instance, models based on kinetic Monte Carlo simulations.

26 citations


Journal ArticleDOI
TL;DR: It is shown that the proposed algorithm has a consistently reliable performance for the vast majority of test problems, and this is attributed to the use of Chebyshev-based Sparse Grids and polynomial interpolants, which have not gained significant attention in surrogate-based optimization thus far.
Abstract: A surrogate-based optimization method is presented, which aims to locate the global optimum of box-constrained problems using input–output data. The method starts with a global search of the n-dimensional space, using a Smolyak (Sparse) grid which is constructed using Chebyshev extrema in the one-dimensional space. The collected samples are used to fit polynomial interpolants, which are used as surrogates towards the search for the global optimum. The proposed algorithm adaptively refines the grid by collecting new points in promising regions, and iteratively refines the search space around the incumbent sample until the search domain reaches a minimum hyper-volume and convergence has been attained. The algorithm is tested on a large set of benchmark problems with up to thirty dimensions and its performance is compared to a recent algorithm for global optimization of grey-box problems using quadratic, kriging and radial basis functions. It is shown that the proposed algorithm has a consistently reliable performance for the vast majority of test problems, and this is attributed to the use of Chebyshev-based Sparse Grids and polynomial interpolants, which have not gained significant attention in surrogate-based optimization thus far.
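The basic construction, nested Chebyshev extrema combined through a Smolyak index set, can be sketched in a few lines of Python. This is a minimal illustration rather than the authors' algorithm; in particular, the point-growth rule 2**(l-1)+1 and the index-set threshold are common conventions assumed here.

```python
import itertools
import numpy as np

def chebyshev_extrema(level):
    """1D Chebyshev extrema (Clenshaw-Curtis points) on [-1, 1].
    Level 1 is the single point 0; level l has 2**(l-1) + 1 nested points."""
    if level == 1:
        return np.array([0.0])
    m = 2 ** (level - 1) + 1
    return np.cos(np.pi * np.arange(m) / (m - 1))

def smolyak_grid(dim, max_level):
    """Union of anisotropic tensor grids over multi-indices l with |l|_1 <= max_level + dim - 1."""
    points = set()
    for levels in itertools.product(range(1, max_level + 1), repeat=dim):
        if sum(levels) <= max_level + dim - 1:
            axes = [chebyshev_extrema(l) for l in levels]
            for pt in itertools.product(*axes):
                points.add(tuple(np.round(pt, 12)))  # rounding merges the nested points
    return np.array(sorted(points))

grid = smolyak_grid(dim=2, max_level=4)
print(grid.shape)  # far fewer points than the 81-point full tensor grid of the same resolution
```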

24 citations


Journal ArticleDOI
TL;DR: In this article, the authors analyzed the dimension-independent convergence property of an abstract sparse quadrature scheme for numerical integration of functions of high-dimensional parameters with Gaussian measure.
Abstract: In this work we analyze the dimension-independent convergence property of an abstract sparse quadrature scheme for numerical integration of functions of high-dimensional parameters with Gaussian measure. Under certain assumptions on the exactness and boundedness of univariate quadrature rules as well as on the regularity assumptions on the parametric functions with respect to the parameters, we prove that the convergence of the sparse quadrature error is independent of the number of the parameter dimensions. Moreover, we propose both an a priori and an a posteriori schemes for the construction of a practical sparse quadrature rule and perform numerical experiments to demonstrate their dimension-independent convergence rates.
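For concreteness, a minimal sparse quadrature of this type for a Gaussian measure can be assembled from univariate Gauss-Hermite rules via the classical Smolyak combination formula; the linear growth rule and the total-degree index set below are illustrative assumptions, not the a priori/a posteriori constructions proposed in the paper.

```python
import itertools
from math import comb, pi, sqrt
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def gauss_hermite(n):
    """n-point Gauss-Hermite rule for the standard normal density."""
    x, w = hermegauss(n)           # weight function exp(-x**2 / 2)
    return x, w / sqrt(2.0 * pi)   # normalize so the weights sum to 1

def sparse_quadrature(f, dim, level):
    """Smolyak combination of tensorized Gauss-Hermite rules: approximates E[f(X)], X ~ N(0, I)."""
    total = 0.0
    for idx in itertools.product(range(1, level + 1), repeat=dim):
        q = sum(idx)
        if level <= q <= level + dim - 1:
            coeff = (-1) ** (level + dim - 1 - q) * comb(dim - 1, level + dim - 1 - q)
            rules = [gauss_hermite(n) for n in idx]
            for point in itertools.product(*(zip(x, w) for x, w in rules)):
                nodes = np.array([p[0] for p in point])
                weight = np.prod([p[1] for p in point])
                total += coeff * weight * f(nodes)
    return total

# E[x1**2 + x1*x2] = 1 for independent standard normals; the sparse rule is exact here.
print(sparse_quadrature(lambda x: x[0] ** 2 + x[0] * x[1], dim=2, level=3))
```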

22 citations


Journal ArticleDOI
TL;DR: A new dual interval-and-fuzzy response analysis method is proposed for thermal engineering systems, using interval variables and fuzzy variables to characterize hybrid uncertainties with only boundary information and membership functions.

22 citations



Journal ArticleDOI
TL;DR: The time-splitting Fourier pseudospectral method on the generalized sparse grids is applied to solve the space-fractional Schrödinger equation, and a containment relation between different level-index sets is given, which can be used in designing the reference generalized sparse grids which are finer than the other considered grids.
Abstract: In this paper, the time-splitting Fourier pseudospectral method on the generalized sparse grids is applied to solve the space-fractional Schrödinger equation. We give a containment relation between different level-index sets of the generalized sparse grids, and it can be used in designing the reference generalized sparse grids which are finer than the other considered grids. Thus the numerical solution on the reference generalized sparse grids can be used as the reference true solution of the equation. Then, the fully discrete algorithm is obtained. In the numerical experiments, we compare the numerical results on the generalized sparse grids with those on the full grids. For the interpolation of the Gaussian multiplied by a factor and for the computation of the Schrödinger equation with two kinds of non-smooth potentials, the advantages of the Fourier pseudospectral method on the generalized sparse grids with the level-index set of parameter K = 1, 2, 3 are manifest in the approximation with high resolution. Here the sparsity of the generalized sparse grids becomes weaker as the parameter K becomes larger. Moreover, the advantage of the generalized sparse grids is more pronounced in solving the Schrödinger equation in higher dimensions, with the square well potential, or with the fractional Laplacian.

21 citations


Journal ArticleDOI
TL;DR: In this paper, an Uncertainty Quantification methodology is proposed for sedimentary basins evolution under mechanical and geochemical compaction processes, which is modeled as a coupled, time-dependent, non-linear, monodimensional (depth-only) system of PDEs with uncertain parameters.

20 citations


Journal ArticleDOI
TL;DR: A convergence proof is given for the approximation by sparse collocation of Hilbert-space-valued functions depending on countably many Gaussian random variables, based on previous work on general $L^2$-convergence theory.
Abstract: We give a convergence proof for the approximation by sparse collocation of Hilbert-space-valued functions depending on countably many Gaussian random variables. Such functions appear as solutions of elliptic PDEs with lognormal diffusion coefficients. We outline a general $L^2$-convergence theory based on previous work by Bachmayr et al. [ESAIM Math. Model. Numer. Anal., 51 (2017), pp. 341--363] and Chen [ESAIM Math. Model. Numer. Anal., in press, 2018, https://doi.org/10.1051/m2an/2018012] and establish an algebraic convergence rate for sufficiently smooth functions assuming a mild growth bound for the univariate hierarchical surpluses of the interpolation scheme applied to Hermite polynomials. We specifically verify for Gauss--Hermite nodes that this assumption holds and also show algebraic convergence with respect to the resulting number of sparse grid points for this case. Numerical experiments illustrate the dimension-independent convergence rate.

20 citations


Journal ArticleDOI
TL;DR: The proposed methodology, building on existing work on an adaptive hierarchical sparse grid collocation algorithm, is able to track localized behavior while avoiding unnecessary function evaluations in smoother regions of the stochastic space, using a finite-difference-based one-dimensional derivative evaluation technique in all dimensions.

Journal ArticleDOI
TL;DR: The numerical results show that the proposed method yields smooth and clearly defined structural boundaries and produces structural designs that satisfy a trade-off between conflicting objectives in the RTO problem.

Book ChapterDOI
01 Jan 2018
TL;DR: This work presents an alternative to the classical surplus refinement techniques, where the more flexible refinement strategy improves stability and reduces the total number of expensive simulations, resulting in significant computational saving.
Abstract: We consider a general strategy for hierarchical multidimensional interpolation based on sparse grids, where the interpolation nodes and locally supported basis functions are constructed from tensors of a one-dimensional hierarchical rule. We consider four different hierarchies that are tailored towards general functions, high or low order polynomial approximation, or functions that satisfy homogeneous boundary conditions. The main advantage of the locally supported basis is the ability to choose a set of functions based on the observed behavior of the target function. We present an alternative to the classical surplus refinement techniques, where we exploit local anisotropy and refine using functions with not strictly decreasing support. The more flexible refinement strategy improves stability and reduces the total number of expensive simulations, resulting in significant computational savings. We demonstrate the advantages of the different hierarchies and refinement techniques by application to a series of simple functions as well as a system of ordinary differential equations given by the Kermack-McKendrick SIR model.
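As a minimal illustration of the hierarchical-surplus machinery that such refinement strategies build on, the following one-dimensional sketch uses standard hat functions without boundary nodes; the paper's alternative hierarchies and locally anisotropic refinement are not reproduced here.

```python
import numpy as np

def hat(x, center, width):
    """Hat function with support (center - width, center + width)."""
    return np.maximum(0.0, 1.0 - np.abs(x - center) / width)

def hierarchical_surpluses(f, max_level):
    """Interpolate f on [0, 1] with hierarchical hats (no boundary nodes);
    the surplus of a node is f(node) minus the value of the coarser interpolant."""
    nodes, widths, surpluses = [], [], []
    for level in range(1, max_level + 1):
        h = 2.0 ** (-level)
        for i in range(1, 2 ** level, 2):      # odd indices are the new nodes of this level
            x = i * h
            coarse = sum(s * hat(x, c, w) for s, c, w in zip(surpluses, nodes, widths))
            nodes.append(x)
            widths.append(h)
            surpluses.append(f(x) - coarse)
    return nodes, widths, surpluses

# Large surpluses flag regions where further (possibly anisotropic) refinement pays off.
nodes, widths, surpluses = hierarchical_surpluses(lambda x: np.exp(-50.0 * (x - 0.3) ** 2), 5)
for x, s in zip(nodes, surpluses):
    if abs(s) > 1e-2:
        print(f"refine near x = {x:.4f} (surplus {s:+.4f})")
```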


Journal ArticleDOI
TL;DR: The proposed framework is the first activation‐driven musculoskeletal system model, in which the exerted skeletal muscle forces are computed using 3‐dimensional, continuum‐mechanical skeletal muscle models and in which muscle activations are determined based on a constraint optimization problem.
Abstract: Investigating the interplay between muscular activity and motion is the basis to improve our understanding of healthy or diseased musculoskeletal systems. To be able to analyze the musculoskeletal systems, computational models are used. Albeit some severe modeling assumptions, almost all existing musculoskeletal system simulations appeal to multibody simulation frameworks. Although continuum-mechanical musculoskeletal system models can compensate for some of these limitations, they are essentially not considered because of their computational complexity and cost. The proposed framework is the first activation-driven musculoskeletal system model, in which the exerted skeletal muscle forces are computed using 3-dimensional, continuum-mechanical skeletal muscle models and in which muscle activations are determined based on a constraint optimization problem. Numerical feasibility is achieved by computing sparse grid surrogates with hierarchical B-splines, and adaptive sparse grid refinement further reduces the computational effort. The choice of B-splines allows the use of all existing gradient-based optimization techniques without further numerical approximation. This paper demonstrates that the resulting surrogates have low relative errors (less than 0.76%) and can be used within forward simulations that are subject to constraint optimization. To demonstrate this, we set up several different test scenarios in which an upper limb model consisting of the elbow joint, the biceps and triceps brachii, and an external load is subjected to different optimization criteria. Even though this novel method has only been demonstrated for a 2-muscle system, it can easily be extended to musculoskeletal systems with 3 or more muscles.

Journal ArticleDOI
TL;DR: A computationally efficient sparse grid approach is presented to allow for multiscale simulations of non-Newtonian polymeric fluids, which otherwise lead to computing times on the order of months even on massively parallel computers.

Journal ArticleDOI
TL;DR: This paper applies the introduced hierarchical basis WENO interpolation within a non-intrusive collocation method and presents first results on 2- and 3-dimensional sparse grids.
Abstract: In this paper, we introduce a third order hierarchical basis WENO interpolation, which possesses similar accuracy and stability properties as usual WENO interpolations. The main motivation for the hierarchical approach is the direct applicability on sparse grids. This is for instance of large practical interest in the numerical solution of conservation laws with uncertain data, where discontinuities in the physical domain often carry over to the (potentially high-dimensional) stochastic domain. For this, we apply the introduced hierarchical basis WENO interpolation within a non-intrusive collocation method and present first results on 2- and 3-dimensional sparse grids.

Journal ArticleDOI
TL;DR: This article makes an indirect ansatz based on the thermal single layer potential, which yields a first-kind integral equation that is discretized by Galerkin’s method with respect to the sparse tensor product of the spatial and temporal ansatz spaces.
Abstract: This article presents a fast sparse grid based space–time boundary element method for the solution of the nonstationary heat equation. We make an indirect ansatz based on the thermal single layer potential which yields a first-kind integral equation. This integral equation is discretized by Galerkin’s method with respect to the sparse tensor product of the spatial and temporal ansatz spaces. By employing the $\mathcal{H}$-matrix and Toeplitz structure of the resulting discretized operators, we arrive at an algorithm which computes the approximate solution in a complexity that essentially corresponds to that of the spatial discretization. Nevertheless, the convergence rate is nearly the same as in the case of a traditional discretization in full tensor product spaces.

Journal ArticleDOI
TL;DR: A fast, low complexity, high-dimensional positive-weight quadrature formula based on Q-MuSIK approximation of the integrand is proposed, which is generally superior to the MuSIK methods in terms of run time.
Abstract: Motivated by the recent multilevel sparse kernel-based interpolation (MuSIK) algorithm proposed in Georgoulis et al. (SIAM J. Sci. Comput. 35, 815–832, 2013), we introduce the new quasi-multilevel sparse interpolation with kernels (Q-MuSIK) via the combination technique. The Q-MuSIK scheme achieves better convergence and run time when compared with classical quasi-interpolation. Also, the Q-MuSIK algorithm is generally superior to the MuSIK methods in terms of run time, in particular in high-dimensional interpolation problems, since there is no need to solve large algebraic systems. We subsequently propose a fast, low complexity, high-dimensional positive-weight quadrature formula based on Q-MuSIK approximation of the integrand. We present the results of numerical experimentation for both quasi-interpolation and quadrature in high dimensions.
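The multilevel structure behind MuSIK-type schemes, interpolate on a coarse grid and then add interpolants of the residual on successively finer grids, can be sketched generically; here plain 1D linear interpolation stands in for the anisotropic Gaussian kernel interpolation used by the authors.

```python
import numpy as np

def multilevel_interpolant(f, max_level):
    """Interpolate f on the coarsest grid, then interpolate the residual on each finer
    grid; plain linear interpolation stands in for the kernel-based interpolation."""
    corrections = []  # list of (nodes, residual values at those nodes)

    def evaluate(x):
        y = np.zeros_like(x, dtype=float)
        for nodes, values in corrections:
            y += np.interp(x, nodes, values)
        return y

    for level in range(2, max_level + 1):
        nodes = np.linspace(0.0, 1.0, 2 ** level + 1)
        corrections.append((nodes, f(nodes) - evaluate(nodes)))
    return evaluate

f = lambda x: np.sin(2.0 * np.pi * x) + x
approx = multilevel_interpolant(f, max_level=8)
x = np.linspace(0.0, 1.0, 1001)
print(np.max(np.abs(approx(x) - f(x))))  # max error of the multilevel interpolant
```

With exact (linear) interpolation on nested grids the corrections telescope to the single finest-level interpolant; the multilevel structure brings genuine gains when each level uses a quasi-interpolant, such as Gaussian kernel schemes, that is not exact at its nodes.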

Book ChapterDOI
Bastian Bohn
01 Jan 2018
TL;DR: This paper presents a framework that allows for a thorough theoretical analysis of stability properties, error decay behavior and appropriate couplings between the dataset size and the grid size, and rigorously derives upper bounds on the expected error for sparse grid least squares regression.
Abstract: While sparse grid least squares regression algorithms have been frequently used to tackle Big Data problems with a huge number of input data in the last 15 years, a thorough theoretical analysis of stability properties, error decay behavior and appropriate couplings between the dataset size and the grid size has not been provided yet. In this paper, we will present a framework which will allow us to close this gap and rigorously derive upper bounds on the expected error for sparse grid least squares regression. Furthermore, we will verify that our theoretical convergence results also match the observed rates in numerical experiments.
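A minimal sketch of least squares regression in a hierarchical hat-function basis (one-dimensional for brevity) illustrates the coupling between dataset size and grid size discussed above; the basis, the absence of regularization, and the sample sizes are illustrative choices, not the setting of the paper's error bounds.

```python
import numpy as np

def hat(x, center, width):
    return np.maximum(0.0, 1.0 - np.abs(x - center) / width)

def sparse_grid_least_squares(x_data, y_data, max_level):
    """Ordinary least squares fit in a 1D hierarchical hat basis up to max_level.
    Stability requires the dataset to be sufficiently large relative to the grid size."""
    centers, widths = [], []
    for level in range(1, max_level + 1):
        h = 2.0 ** (-level)
        for i in range(1, 2 ** level, 2):
            centers.append(i * h)
            widths.append(h)
    design = np.column_stack([hat(x_data, c, w) for c, w in zip(centers, widths)])
    coeffs, *_ = np.linalg.lstsq(design, y_data, rcond=None)

    def predict(x):
        return np.column_stack([hat(x, c, w) for c, w in zip(centers, widths)]) @ coeffs

    return predict

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 500)
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(500)
model = sparse_grid_least_squares(x, y, max_level=4)   # 15 basis functions, 500 samples
print(float(np.mean((model(x) - y) ** 2)))             # training MSE close to the noise variance
```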

Book ChapterDOI
TL;DR: A general framework is described that summarizes fundamental results and assumptions on Smolyak's algorithm for the acceleration of scientific computations in a concise, application-independent manner.
Abstract: We provide a general discussion of Smolyak’s algorithm for the acceleration of scientific computations. The algorithm first appeared in Smolyak’s work on multidimensional integration and interpolation. Since then, it has been generalized in multiple directions and has been associated with the keywords: sparse grids, hyperbolic cross approximation, combination technique, and multilevel methods. Variants of Smolyak’s algorithm have been employed in the computation of high-dimensional integrals in finance, chemistry, and physics, in the numerical solution of partial and stochastic differential equations, and in uncertainty quantification. Motivated by this broad and ever-increasing range of applications, we describe a general framework that summarizes fundamental results and assumptions in a concise application-independent manner.
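For reference, the classical form of Smolyak's algorithm that such frameworks generalize can be stated as follows, with univariate operators $U^{i}$ (and $U^{0}=0$) and difference operators $\Delta^{i}=U^{i}-U^{i-1}$; the abstract setting of the chapter is more general.

```latex
% Classical Smolyak construction from univariate operators U^{i}, with U^{0} = 0
\Delta^{i} := U^{i} - U^{i-1}, \qquad
A(q,d) := \sum_{\lvert \mathbf{i} \rvert_1 \le q}
          \Delta^{i_1} \otimes \cdots \otimes \Delta^{i_d}
        = \sum_{q-d+1 \le \lvert \mathbf{i} \rvert_1 \le q}
          (-1)^{\,q-\lvert \mathbf{i} \rvert_1}
          \binom{d-1}{q-\lvert \mathbf{i} \rvert_1}\,
          U^{i_1} \otimes \cdots \otimes U^{i_d}.
```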

Journal ArticleDOI
TL;DR: This letter first analyzes the phase unwrapping error of residue- and cut-based local methods when the cuts separate the sparse grids into several isolated regions, and the spatial distribution of the cuts is converted into constraints on optimizing the Delaunay triangulation network.
Abstract: Persistent scatterer interferometry (PSI) techniques exploit irregularly spaced permanent scatterers (PSs) to extract the ground deformation. Sparse 2-D phase unwrapping is a significant procedure in PSI methods to reconstruct the phase function defined on a sparse data set given its value modulo $2\pi$. This letter first analyzes the phase unwrapping error of residue- and cut-based local methods when the cuts separate the sparse grids into several isolated regions. Then the spatial distribution of the cuts is converted into constraints on optimizing the Delaunay triangulation network. Finally, the phase jumps introduced by PSs of low quality are removed thanks to the more reasonable flows obtained by the constrained $L^1$-norm method. Two experiments performed on real data sets are presented to show the effectiveness and robustness of our algorithm, especially in long-span cable-stayed bridge applications.

Journal ArticleDOI
TL;DR: The proposed algorithm provides a computationally cheap alternative to previously introduced stochastic optimization methods based on Monte Carlo sampling by using an adaptive sparse grid method, and it is applied to the design of minimum compliance structures.
Abstract: The aim of this paper is to study the topology optimization for mechanical systems with hybrid material and geometric uncertainties. The random variations are modeled by a memory-less transformation of random fields which ensures their physical admissibility. The stochastic collocation method combined with the proposed material and geometry uncertainty models provides robust designs by utilizing already developed deterministic solvers. The computational cost is decreased by the use of sparse grids and discretization refinement, which are proposed and demonstrated as well. The method is applied to the design of minimum compliance structures. The proposed algorithm provides a computationally cheap alternative to previously introduced stochastic optimization methods based on Monte Carlo sampling by using an adaptive sparse grid method.
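The non-intrusive character of the stochastic collocation step, calling the already developed deterministic solver at each collocation node and combining the results with quadrature weights, can be sketched as follows; `deterministic_compliance` is a hypothetical stand-in for the deterministic solver, and the tensorized Gauss-Legendre rule is an illustrative (non-sparse) choice of nodes.

```python
import numpy as np

def deterministic_compliance(xi):
    """Hypothetical stand-in for the existing deterministic solver: given one realization
    xi of the random material/geometry parameters, return the compliance."""
    return 1.0 + 0.3 * xi[0] ** 2 + 0.1 * xi[0] * xi[1]

def collocation_statistics(solver, nodes, weights):
    """Non-intrusive stochastic collocation: call the deterministic solver at every
    collocation node and combine the results with the quadrature weights."""
    values = np.array([solver(xi) for xi in nodes])
    mean = np.dot(weights, values)
    variance = np.dot(weights, (values - mean) ** 2)
    return mean, variance

# Illustrative nodes/weights for two uniform inputs on [-1, 1]: a tensorized 3x3
# Gauss-Legendre rule; a sparse grid rule would be plugged in exactly the same way.
x1d, w1d = np.polynomial.legendre.leggauss(3)
w1d = w1d / 2.0                                     # normalize to the uniform density
nodes = np.array([[a, b] for a in x1d for b in x1d])
weights = np.array([wa * wb for wa in w1d for wb in w1d])
print(collocation_statistics(deterministic_compliance, nodes, weights))
```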

Journal ArticleDOI
TL;DR: By introducing a new concept of layers for sparse grid points, the sparse grid construction becomes much more efficient, and a much simpler data structure, the array, can be used to store the sparse grids.

Journal ArticleDOI
TL;DR: It is proved that MuSIK is interpolatory at these nodes, and, therefore, can be naturally used to define a quadrature scheme.
Abstract: A new stochastic collocation finite element method is proposed for the numerical solution of elliptic boundary value problems (BVP) with random coefficients, assuming that the randomness is well-approximated by a finite number of random variables with given probability distributions. The proposed method consists of a finite element approximation in physical space, along with a stochastic collocation quadrature approach utilizing the recent Multilevel Sparse Kernel-Based Interpolation (MuSIK) technique (Georgoulis et al., 2013). MuSIK is based on a multilevel sparse grid-type algorithm with the basis functions consisting of directionally anisotropic Gaussian radial basis functions (kernels) placed at directionally-uniform grid-points. We prove that MuSIK is interpolatory at these nodes, and, therefore, can be naturally used to define a quadrature scheme. Numerical examples are also presented, assessing the performance of the new algorithm in the context of high-dimensional stochastic collocation finite element methods.
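A single-level sketch of interpolation with directionally anisotropic Gaussian kernels placed at directionally-uniform grid points is given below; the multilevel and sparse grid aspects of MuSIK, as well as its quadrature use, are omitted, and the grid sizes and shape parameters are illustrative.

```python
import numpy as np

def aniso_gauss(x, centers, eps):
    """Anisotropic Gaussian kernels: exp(-sum_d ((x_d - c_d) / eps_d)**2)."""
    diff = (x[:, None, :] - centers[None, :, :]) / eps[None, None, :]
    return np.exp(-np.sum(diff ** 2, axis=-1))

def kernel_interpolant(centers, values, eps):
    """Single-level kernel interpolation: solve A c = f with A_ij = phi_j(x_i)."""
    coeffs = np.linalg.solve(aniso_gauss(centers, centers, eps), values)
    return lambda x: aniso_gauss(np.atleast_2d(x), centers, eps) @ coeffs

# Directionally-uniform grid points with direction-dependent shape parameters.
g1, g2 = np.meshgrid(np.linspace(0.0, 1.0, 9), np.linspace(0.0, 1.0, 5), indexing="ij")
centers = np.column_stack([g1.ravel(), g2.ravel()])
eps = np.array([0.15, 0.30])                       # wider kernels along the coarser direction
f = lambda x: np.sin(np.pi * x[:, 0]) * np.cos(np.pi * x[:, 1])
interp = kernel_interpolant(centers, f(centers), eps)
x_test = np.random.default_rng(1).uniform(0.0, 1.0, (200, 2))
print(np.max(np.abs(interp(x_test) - f(x_test))))   # interpolation error at random test points
```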

Journal ArticleDOI
TL;DR: In this article, a hierarchical approach based on adaptive sparse grids quadrature (ASGQ) and quasi-Monte Carlo (QMC) is proposed for the rough Bergomi model.
Abstract: The rough Bergomi (rBergomi) model, introduced recently in [5], is a promising rough volatility model in quantitative finance. It is a parsimonious model depending on only three parameters, and yet remarkably fits with empirical implied volatility surfaces. In the absence of analytical European option pricing methods for the model, and due to the non-Markovian nature of the fractional driver, the prevalent option is to use the Monte Carlo (MC) simulation for pricing. Despite recent advances in the MC method in this context, pricing under the rBergomi model is still a time-consuming task. To overcome this issue, we have designed a novel, hierarchical approach, based on i) adaptive sparse grids quadrature (ASGQ), and ii) quasi-Monte Carlo (QMC). Both techniques are coupled with a Brownian bridge construction and a Richardson extrapolation on the weak error. By uncovering the available regularity, our hierarchical methods demonstrate substantial computational gains with respect to the standard MC method, when reaching a sufficiently small relative error tolerance in the price estimates across different parameter constellations, even for very small values of the Hurst parameter. Our work opens a new research direction in this field, i.e., to investigate the performance of methods other than Monte Carlo for pricing and calibrating under the rBergomi model.

Posted Content
TL;DR: It is shown that a 12D parameter space can be scanned very efficiently, gaining more than an order of magnitude in computational cost over the standard adaptive approach, which allows for the uncertainty propagation and sensitivity analysis in higher-dimensional plasma microturbulence problems, which would be almost impossible to tackle with standard screening approaches.
Abstract: Quantifying uncertainty in predictive simulations for real-world problems is of paramount importance - and far from trivial, mainly due to the large number of stochastic parameters and significant computational requirements. Adaptive sparse grid approximations are an established approach to overcome these challenges. However, standard adaptivity is based on global information, thus properties such as lower intrinsic stochastic dimensionality or anisotropic coupling of the input directions, which are common in practical applications, are not fully exploited. We propose a novel structure-exploiting dimension-adaptive sparse grid approximation methodology using Sobol' decompositions in each subspace to introduce a sensitivity scoring system to drive the adaptive process. By employing local sensitivity information, we explore and exploit the anisotropic coupling of the stochastic inputs as well as the lower intrinsic stochastic dimensionality. The proposed approach is generic, i.e., it can be formulated in terms of arbitrary approximation operators and point sets. In particular, we consider sparse grid interpolation and pseudo-spectral projection constructed on (L)-Leja sequences. The power and usefulness of the proposed method is demonstrated by applying it to the analysis of gyrokinetic microinstabilities in fusion plasmas, one of the key scientific problems in plasma physics and fusion research. In this context, it is shown that a 12D parameter space can be scanned very efficiently, gaining more than an order of magnitude in computational cost over the standard adaptive approach. Moreover, it allows for the uncertainty propagation and sensitivity analysis in higher-dimensional plasma microturbulence problems, which would be almost impossible to tackle with standard screening approaches.
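The idea of letting sensitivity information steer the adaptivity can be illustrated with a standard pick-freeze estimator of first-order Sobol' indices applied to a cheap surrogate; this generic sketch is not the subspace-wise Sobol' decomposition and scoring system proposed in the paper.

```python
import numpy as np

def first_order_sobol(model, dim, n=200_000, seed=0):
    """First-order Sobol' indices via the pick-freeze (Saltelli) estimator,
    assuming independent inputs uniform on [0, 1]^dim."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(size=(n, dim))
    b = rng.uniform(size=(n, dim))
    ya, yb = model(a), model(b)
    var = np.var(np.concatenate([ya, yb]))
    s = np.empty(dim)
    for i in range(dim):
        abi = a.copy()
        abi[:, i] = b[:, i]                       # resample only coordinate i
        s[i] = np.mean(yb * (model(abi) - ya)) / var
    return s

# Cheap stand-in surrogate: x0 dominates, x2 is inert, so refinement would focus on x0.
surrogate = lambda x: np.sin(np.pi * x[:, 0]) + 0.3 * x[:, 1] ** 2 + 0.0 * x[:, 2]
print(first_order_sobol(surrogate, dim=3))
```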

Book ChapterDOI
01 Jan 2018
TL;DR: This paper designs an MLSC approach in terms of adaptive sparse grids for stochastic discretization and compares two sparse grid variants, one with spatial and the other with dimension adaptivity, and tests the approach on two problems; the dimension-adaptive interpolants proved superior in terms of accuracy and required computational cost.
Abstract: We present a multilevel stochastic collocation (MLSC) with a dimensionality reduction approach to quantify the uncertainty in computationally intensive applications. Standard MLSC typically employs grids with predetermined resolutions. Even more, stochastic dimensionality reduction has not been considered in previous MLSC formulations. In this paper, we design an MLSC approach in terms of adaptive sparse grids for stochastic discretization and compare two sparse grid variants, one with spatial and the other with dimension adaptivity. In addition, while performing the uncertainty propagation, we analyze, based on sensitivity information, whether the stochastic dimensionality can be reduced. We test our approach in two problems. The first one is a linear oscillator with five or six stochastic inputs. The dimensionality is reduced from five to two and from six to three. Furthermore, the dimension-adaptive interpolants proved superior in terms of accuracy and required computational cost. The second test case is a fluid-structure interaction problem with five stochastic inputs, in which we quantify the uncertainty at two instances in the time domain. The dimensionality is reduced from five to two and from five to four.

Journal ArticleDOI
TL;DR: This paper proposes a novel methodology to compute the expansion's coefficients using spatially adaptive sparse grids and products of one-dimensional integrals, which exploits the tensor structure of both sparse grid and probabilistic space.
Abstract: The propagation of uncertainty in physical parameters of fluid-structure interaction problems is a challenging task---both mathematically and in terms of computational workload. In this paper, we employ nonintrusive polynomial chaos expansion and model the uncertainty in five independent input parameters that characterize both fluid and structure. We propose a novel methodology to compute the expansion's coefficients using spatially adaptive sparse grids and products of one-dimensional integrals, which exploits the tensor structure of both sparse grids and probabilistic space. Furthermore, with spatial adaptivity and modified basis functions, we keep the number of sparse grid points small. We test our approach in two test cases: (i) an elastic vertical flap in a channel flow and (ii) a computationally challenging, well-established benchmark. The outputs of interest are the x-deflection and total force on the structure. In the first test case, we consider six implementations of our methodology and two esta...
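The non-intrusive computation of polynomial chaos coefficients, projecting the model output onto each orthogonal basis polynomial by numerical quadrature, can be sketched as follows for uniform inputs and Legendre polynomials; a full tensor Gauss rule is used here for brevity, whereas the paper replaces it with spatially adaptive sparse grids and products of one-dimensional integrals.

```python
import itertools
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def pce_coefficients(model, dim, poly_degree, quad_points):
    """Coefficients c_k = E[model(X) Psi_k(X)] / E[Psi_k(X)^2] for X uniform on [-1, 1]^dim,
    with Psi_k a product of 1D Legendre polynomials and a full tensor Gauss-Legendre rule."""
    x1d, w1d = leggauss(quad_points)
    w1d = w1d / 2.0                                    # uniform density 1/2 per direction
    coeffs = {}
    for k in itertools.product(range(poly_degree + 1), repeat=dim):
        if sum(k) > poly_degree:
            continue
        norm = np.prod([1.0 / (2 * ki + 1) for ki in k])   # product of the 1D norms
        num = 0.0
        for j in itertools.product(range(quad_points), repeat=dim):
            x = np.array([x1d[ji] for ji in j])
            w = np.prod([w1d[ji] for ji in j])
            psi = np.prod([Legendre.basis(ki)(xi) for ki, xi in zip(k, x)])
            num += w * model(x) * psi
        coeffs[k] = num / norm
    return coeffs

model = lambda x: np.exp(0.3 * x[0]) * (1.0 + 0.5 * x[1])
print(pce_coefficients(model, dim=2, poly_degree=2, quad_points=5)[(0, 0)])  # mean of the output
```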

Book ChapterDOI
TL;DR: This paper focuses on Bellman equations used in finance, specifically to model dynamic portfolio choice over the life cycle, employing local linear basis functions in a spatially adaptive sparse grid approximation scheme for the value function.
Abstract: In this paper, I propose a dynamic programming approach with value function iteration to solve Bellman equations in discrete time using spatially adaptive sparse grids. In doing so, I focus on Bellman equations used in finance, specifically to model dynamic portfolio choice over the life cycle. Since the complexity of the dynamic programming approach—and other approaches—grows exponentially in the dimension of the (continuous) state space, it suffers from the so called curse of dimensionality. Approximation on a spatially adaptive sparse grid can break this curse to some extent. Extending recent approaches proposed in the economics and computer science literature, I employ local linear basis functions to a spatially adaptive sparse grid approximation scheme on the value function. As economists are interested in the optimal choices rather than the value function itself, I discuss how to obtain these optimal choices given a solution to the optimization problem on a sparse grid. I study the numerical properties of the proposed scheme by computing Euler equation errors to an exemplary dynamic portfolio choice model with varying state space dimensionality.
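A stripped-down sketch of value function iteration with interpolation of the value function between grid points is given below, using a one-dimensional cake-eating problem on a regular grid; the spatially adaptive sparse grid, the hierarchical local linear basis, and the life-cycle portfolio choice model of the chapter are not reproduced.

```python
import numpy as np

def value_function_iteration(beta=0.9, n_grid=200, n_choice=200, tol=1e-6):
    """Solve the cake-eating problem V(w) = max_{0<c<w} log(c) + beta * V(w - c) by value
    function iteration, with linear interpolation of V between the wealth grid points."""
    grid = np.linspace(1e-3, 1.0, n_grid)                    # wealth grid
    v = np.log(grid)                                         # initial guess
    while True:
        v_new = np.empty_like(v)
        for j, w in enumerate(grid):
            c = np.linspace(1e-3 * w, 0.999 * w, n_choice)   # candidate consumption choices
            v_new[j] = np.max(np.log(c) + beta * np.interp(w - c, grid, v))
        if np.max(np.abs(v_new - v)) < tol:
            return grid, v_new
        v = v_new

grid, v = value_function_iteration()
# For log utility the optimal policy is c = (1 - beta) * w; the interpolated solution
# recovers it up to grid, truncation, and interpolation error.
```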