
Showing papers on "Sparse grid published in 2017"


Journal ArticleDOI
TL;DR: The results show that this P-NIROM captures almost all of the details of the flow with a CPU speedup of three orders of magnitude.

85 citations


Journal ArticleDOI
TL;DR: A new computational method for finding feedback optimal control and solving HJB equations which is able to mitigate the curse of dimensionality is presented and an upper bound for the approximation error is proved.
Abstract: We address finding the semi-global solutions to optimal feedback control and the Hamilton–Jacobi–Bellman (HJB) equation. Using the solution of an HJB equation, a feedback optimal control law can be implemented in real time with minimum computational load. However, except for systems with two or three state variables, using traditional techniques to numerically find a semi-global solution to an HJB equation for general nonlinear systems is infeasible due to the curse of dimensionality. Here we present a new computational method for finding feedback optimal control and solving HJB equations which is able to mitigate the curse of dimensionality. We do not discretize the HJB equation directly; instead, we introduce a sparse grid in the state space and use Pontryagin's maximum principle to derive a set of necessary conditions in the form of a boundary value problem, also known as the characteristic equations, for each grid point. With this approach, the method is causality-free in space and therefore enjoys perfect parallelism on a sparse grid. Compared with a dense grid, a sparse grid has a significantly reduced size, which makes the method feasible for systems of relatively high dimension, such as the 6-D system shown in the examples. Once the solution is obtained at each grid point, high-order accurate polynomial interpolation is used to approximate the feedback control at arbitrary points. We prove an upper bound for the approximation error and approximate it numerically. This sparse grid characteristics method is demonstrated with three examples of rigid body attitude control using momentum wheels.
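The size advantage of a sparse grid over a dense grid, on which this method rests, can be illustrated with a minimal sketch. This is a generic hierarchical construction on [0,1]^d with interior points only, not the paper's code; function and level names are ours:

```python
from itertools import product

def hierarchical_increment_1d(level):
    """Interior points added at `level` on [0,1]: odd multiples of 2**-level."""
    h = 2.0 ** -level
    return [j * h for j in range(1, 2 ** level) if j % 2 == 1]

def sparse_grid(dim, n):
    """Level-n sparse grid: union of tensor increments with |l|_1 <= n + dim - 1."""
    pts = set()
    for levels in product(range(1, n + 1), repeat=dim):
        if sum(levels) <= n + dim - 1:
            for p in product(*(hierarchical_increment_1d(l) for l in levels)):
                pts.add(p)
    return pts

dim, n = 2, 3
sparse = len(sparse_grid(dim, n))   # 17 points on the sparse grid
full = (2 ** n - 1) ** dim          # 49 interior points on the dense tensor grid
print(sparse, full)
```

The gap widens rapidly with dimension, which is what makes grid-point-wise BVP solves affordable in 6-D.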

80 citations


Journal ArticleDOI
TL;DR: An approach to solve finite time horizon suboptimal feedback control problems for partial differential equations is proposed by solving dynamic programming equations on adaptive sparse grids with semi-discrete optimal control problem and the feedback control is derived from the corresponding value function.
Abstract: An approach to solving finite time horizon suboptimal feedback control problems for partial differential equations is proposed by solving dynamic programming equations on adaptive sparse grids. A semi-discrete optimal control problem is introduced and the feedback control is derived from the corresponding value function. The value function can be characterized as the solution of an evolutionary Hamilton–Jacobi–Bellman (HJB) equation which is defined over a state space whose dimension is equal to the dimension of the underlying semi-discrete system. Besides a low-dimensional semi-discretization, it is important to solve the HJB equation efficiently to address the curse of dimensionality. We propose to apply a semi-Lagrangian scheme using spatially adaptive sparse grids. Sparse grids allow the discretization of the value functions in (higher) space dimensions since the curse of dimensionality of full grid methods arises to a much smaller extent. For additional efficiency an adaptive grid refinement procedure is explored. The approach is illustrated for the wave equation and an extension to equations of Schrödinger type is indicated. We present several numerical examples studying the effect the parameters characterizing the sparse grid have on the accuracy of the value function and the optimal trajectory.

77 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a scalable method for computing global solutions of high-dimensional stochastic dynamic models using an adaptive sparse grid algorithm, where grid points are added only where they are most needed.
Abstract: We present a flexible and scalable method for computing global solutions of high-dimensional stochastic dynamic models. Within a time iteration or value function iteration setup, we interpolate functions using an adaptive sparse grid algorithm. With increasing dimensions, sparse grids grow much more slowly than standard tensor product grids. Moreover, adaptivity adds a second layer of sparsity, as grid points are added only where they are most needed, for instance in regions with steep gradients or at non-differentiabilities. To further speed up the solution process, our implementation is fully hybrid parallel, combining distributed and shared memory parallelization paradigms, and thus permits an efficient use of high-performance computing architectures. To demonstrate the broad applicability of our method, we solve two very different types of dynamic models: first, high-dimensional international real business cycle models with capital adjustment costs and irreversible investment; second, multiproduct menu-cost models with temporary sales and economies of scope in price setting.
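The adaptivity layer described here can be sketched in one dimension: a point's hierarchical surplus is the gap between the function value and the linear interpolant through its coarser neighbours, and children are added only where that surplus is large. This is an illustrative sketch of the general idea, not the authors' implementation:

```python
import math

def adapt(f, tol=1e-2, max_level=12):
    """1D adaptive sparse grid on [0,1] with hat-function hierarchy.

    Refinement is driven by the hierarchical surplus
    w(x) = f(x) - (f(x - h) + f(x + h)) / 2,  h = 2**-level.
    """
    surplus = {}
    active = [(1, 1)]            # (level, odd index): point x = index * 2**-level
    while active:
        level, idx = active.pop()
        h = 2.0 ** -level
        x = idx * h
        w = f(x) - 0.5 * (f(x - h) + f(x + h))
        surplus[(level, idx)] = w
        if abs(w) > tol and level < max_level:   # refine only where needed
            active += [(level + 1, 2 * idx - 1), (level + 1, 2 * idx + 1)]
    return surplus

def interpolate(surplus, f, x):
    """Evaluate the hierarchical interpolant (boundary handled linearly)."""
    u = f(0.0) * (1 - x) + f(1.0) * x
    for (level, idx), w in surplus.items():
        h = 2.0 ** -level
        u += w * max(0.0, 1.0 - abs(x - idx * h) / h)
    return u

f = lambda x: math.sin(math.pi * x)
surplus = adapt(f, tol=1e-2)
print(len(surplus), interpolate(surplus, f, 0.5))
```

For a function with a kink or steep gradient, the refinement concentrates points near the feature while smooth regions stay coarse.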

47 citations


Journal ArticleDOI
TL;DR: Applied Modelling and Computation Group, Department of Earth Science and Engineering, Imperial College London, Prince Consort Road, London, SW7 2BP, URL: http://amcg.imperial.ac.uk
Abstract: Applied Modelling and Computation Group, Department of Earth Science and Engineering, Imperial College London, Prince Consort Road, London, SW7 2BP, URL: http://amcg.ese.imperial.ac.uk; China University of Geosciences, Wuhan, 430074, China; Zhejiang University; Department of Scientific Computing, Florida State University, Tallahassee, FL 32306-4120, USA; Department of Earth Science and Engineering, Imperial College London

41 citations


Journal ArticleDOI
TL;DR: In this article, a hybrid method of polynomial chaos expansion and dimension-wise analysis (PCE-DW) is proposed for structural-acoustic analysis with hybrid uncertainties, where the sparse grid collocation strategy is utilized to handle the random uncertainty while the DW procedure is employed to evaluate the interval bounds of statistical characteristics of the system response.

38 citations


Journal ArticleDOI
TL;DR: Zhao et al. as discussed by the authors proposed a sparse grid spatial discretization in which a sparse grid Gauss–Hermite quadrature rule is used to approximate the conditional expectations; for the associated high-dimensional interpolations, they adopt a spectral expansion of functions in polynomial spaces with respect to the spatial variables and use sparse grid approximations to recover the expansion coefficients.
Abstract: This is the second part of a series of papers on multi-step schemes for solving coupled forward backward stochastic differential equations (FBSDEs). We extend the basic idea in our former paper [W. Zhao, Y. Fu and T. Zhou, SIAM J. Sci. Comput., 36 (2014), pp. A1731-A1751] to solve high-dimensional FBSDEs by using spectral sparse grid approximations. The main issue in solving high-dimensional FBSDEs is to build an efficient spatial discretization and to deal with the related high-dimensional conditional expectations and interpolations. In this work, we propose a sparse grid spatial discretization. A sparse grid Gauss–Hermite quadrature rule is used to approximate the conditional expectations. For the associated high-dimensional interpolations, we adopt a spectral expansion of functions in polynomial spaces with respect to the spatial variables, and use sparse grid approximations to recover the expansion coefficients. The FFT algorithm is used to speed up the recovery procedure, and the entire algorithm admits efficient and highly accurate approximations in high dimensions. Several numerical examples are presented to demonstrate the efficiency of the proposed methods.
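A sparse Gauss–Hermite rule of the kind described can be sketched with the standard Smolyak combination formula (our own generic construction, not the authors' code): for level q, the rule is Q = Σ over q-d+1 ≤ |l| ≤ q of (-1)^(q-|l|) · C(d-1, q-|l|) times the tensor product of 1D rules with l_i nodes.

```python
import numpy as np
from itertools import product
from math import comb

def smolyak_gauss_hermite(f, dim, q):
    """Smolyak sparse Gauss-Hermite rule for integrals against exp(-|x|^2)."""
    total = 0.0
    for levels in product(range(1, q + 1), repeat=dim):
        k = q - sum(levels)
        if 0 <= k <= dim - 1:                      # combination-formula window
            coeff = (-1) ** k * comb(dim - 1, k)
            rules = [np.polynomial.hermite.hermgauss(l) for l in levels]
            for pt in product(*[zip(x, w) for x, w in rules]):
                xs = np.array([p[0] for p in pt])
                wt = np.prod([p[1] for p in pt])
                total += coeff * wt * f(xs)
    return total

# integral of (x^2 + y^2) * exp(-x^2 - y^2) over R^2 equals pi
approx = smolyak_gauss_hermite(lambda x: x[0] ** 2 + x[1] ** 2, dim=2, q=3)
print(approx)
```

Because each 1D Gauss–Hermite rule with l nodes is exact to degree 2l-1, low levels already integrate low-order polynomials exactly while using far fewer nodes than the full tensor rule.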

33 citations


Journal ArticleDOI
TL;DR: In cases where polynomials are suitable approximations to the true function, the new all-at-once approach is found to reduce error in the surrogate faster than the method of weighted combinations.

33 citations


Journal ArticleDOI
TL;DR: An adaptive multiresolution discontinuous Galerkin (DG) scheme for time-dependent transport equations in multidimensions that can automatically capture fine local structures when the solution is no longer smooth, and is very suitable for deterministic kinetic simulations.
Abstract: In this paper, we develop an adaptive multiresolution discontinuous Galerkin (DG) scheme for time-dependent transport equations in multidimensions. The method is constructed using multiwavelets on tensorized nested grids. Adaptivity is realized by error thresholding based on the hierarchical surplus, and the Runge–Kutta DG scheme is employed as the reference time evolution algorithm. We show that the scheme performs similarly to a sparse grid DG method when the solution is smooth, reducing computational cost in multidimensions. When the solution is no longer smooth, the adaptive algorithm can automatically capture fine local structures. The method is therefore very suitable for deterministic kinetic simulations. Numerical results including several benchmark tests, the Vlasov–Poisson (VP), and oscillatory VP systems are provided.

32 citations


Journal ArticleDOI
TL;DR: A non‐intrusive reduced order model for general, dynamic partial differential equations based upon proper orthogonal decomposition (POD) and Smolyak sparse grid collocation that captures the dominant features of the high‐fidelity models with reasonable accuracy while the computation complexity is reduced by several orders of magnitude.
Abstract: This article presents a non-intrusive reduced order model (NIROM) for general, dynamic partial differential equations. Based upon proper orthogonal decomposition (POD) and Smolyak sparse grid collocation, the method first projects the unknowns with full space and time coordinates onto a reduced POD basis. Then we introduce a new least squares fitting procedure to approximate the dynamical transition of the POD coefficients between subsequent time steps, taking only a set of full model solution snapshots as the training data during the construction. Thus, neither the physical details nor further numerical simulations of the original PDE model are required by this methodology, and the level of non-intrusiveness is improved compared to existing ROMs. Furthermore, we take adaptive measures to address the instability issue arising from reduced order iterations of the POD coefficients. This model can be applied to a wide range of physical and engineering scenarios, and we test it on a couple of problems in fluid dynamics. It is demonstrated that this reduced order approach captures the dominant features of the high fidelity models with reasonable accuracy while the computational complexity is reduced by several orders of magnitude.

31 citations


Journal ArticleDOI
TL;DR: Compared with traditional Monte Carlo simulation and the parameter perturbation method, two numerical examples demonstrate the remarkable accuracy and effectiveness of the proposed methods for fuzzy temperature field prediction in engineering.

Journal ArticleDOI
TL;DR: An additional stage is developed to address the low regularity of the saturation: the pressure and velocity samples are first generated by sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples.

Journal ArticleDOI
TL;DR: This article combines high-order (HO) finite difference discretisations with alternating direction implicit (ADI) schemes for parabolic partial differential equations with mixed derivatives in a sparse grid setting to construct the so-called sparse grid solution.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed the use of sparse grids to accelerate particle-in-cell (PIC) schemes by using the so-called "combination technique" from the sparse grids literature, which can dramatically increase the size of the spatial cells in multi-dimensional PIC schemes while paying only a slight penalty in grid-based error.
Abstract: We propose the use of sparse grids to accelerate particle-in-cell (PIC) schemes. By using the so-called 'combination technique' from the sparse grids literature, we are able to dramatically increase the size of the spatial cells in multi-dimensional PIC schemes while paying only a slight penalty in grid-based error. The resulting increase in cell size allows us to reduce the statistical noise in the simulation without increasing total particle number. We present initial proof-of-principle results from test cases in two and three dimensions that demonstrate the new scheme's efficiency, both in terms of computation time and memory usage.
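The "combination technique" invoked here can be illustrated with 2-D trapezoidal quadrature standing in for a PIC field solve (a toy example of ours, not the authors' scheme): results on anisotropic grids with l₁ + l₂ = n are added, and those with l₁ + l₂ = n - 1 subtracted.

```python
import numpy as np

def trap_weights(m):
    """Composite trapezoidal weights for m+1 equispaced points on [0,1]."""
    w = np.full(m + 1, 1.0 / m)
    w[0] = w[-1] = 0.5 / m
    return w

def trap2d(f, l1, l2):
    """Tensor trapezoidal rule on [0,1]^2 with (2**l1+1) x (2**l2+1) points."""
    x = np.linspace(0.0, 1.0, 2 ** l1 + 1)
    y = np.linspace(0.0, 1.0, 2 ** l2 + 1)
    wx, wy = trap_weights(2 ** l1), trap_weights(2 ** l2)
    X, Y = np.meshgrid(x, y, indexing="ij")
    return wx @ f(X, Y) @ wy

def combination(f, n):
    """Sparse grid combination: sum over |l| = n minus sum over |l| = n - 1."""
    pos = sum(trap2d(f, l, n - l) for l in range(0, n + 1))
    neg = sum(trap2d(f, l, n - 1 - l) for l in range(0, n))
    return pos - neg

f = lambda x, y: np.exp(x + y)
exact = (np.e - 1.0) ** 2
print(abs(combination(f, 10) - exact))
```

Each component grid is cheap because it is fine in at most one direction; the ± combination cancels the leading error terms, which is the same mechanism that lets the PIC scheme enlarge its cells.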

Posted Content
TL;DR: This article shows that the weights of a kernel quadrature rule can be computed efficiently and exactly for up to tens of millions of nodes if the kernel, integration domain, and measure are fully symmetric and the node set is a union of fully symmetric sets.
Abstract: Kernel quadratures and other kernel-based approximation methods typically suffer from prohibitive cubic time and quadratic space complexity in the number of function evaluations. The problem arises because a system of linear equations needs to be solved. In this article we show that the weights of a kernel quadrature rule can be computed efficiently and exactly for up to tens of millions of nodes if the kernel, integration domain, and measure are fully symmetric and the node set is a union of fully symmetric sets. This is based on the observations that in such a setting there are only as many distinct weights as there are fully symmetric sets and that these weights can be solved from a linear system of equations constructed out of row sums of certain submatrices of the full kernel matrix. We present several numerical examples that show feasibility, both for a large number of nodes and in high dimensions, of the developed fully symmetric kernel quadrature rules. Most prominent of the fully symmetric kernel quadrature rules we propose are those that use sparse grids.

Journal ArticleDOI
TL;DR: A second generation wavelet-based adaptive finite-difference Lattice Boltzmann method (FD-LBM) is developed, and good agreement between the present results and data in the previous literature is obtained, which demonstrates the accuracy and effectiveness of the present AWCM-IB-LBM.

Journal ArticleDOI
TL;DR: A computationally inexpensive k·p-based interpolation scheme is developed that can extend the eigenvalues and momentum matrix elements of a sparsely sampled k-point grid into a densely sampled one, and can be used to generate accurate band structures, density of states, and dielectric functions.


Posted Content
TL;DR: A theorem concerning the approximation of multivariate functions by deep ReLU networks is proved and new error estimates for which the curse of dimensionality is lessened by establishing a connection with sparse grids are presented.
Abstract: We prove a theorem concerning the approximation of multivariate functions by deep ReLU networks. We present new error estimates for which the curse of dimensionality is lessened by establishing a connection with sparse grids.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a novel strategy that exploits smoothness (regularity) in parameter space to improve computational efficiency of Markov Chain Monte Carlo (MCMC) algorithms.
Abstract: Markov Chain Monte Carlo (MCMC) algorithms play an important role in statistical inference problems dealing with intractable probability distributions. Recently, many MCMC algorithms such as Hamiltonian Monte Carlo (HMC) and Riemannian Manifold HMC have been proposed to provide distant proposals with high acceptance rates. These algorithms, however, tend to be computationally intensive, which could limit their usefulness, especially for big data problems, due to repetitive evaluations of functions and statistical quantities that depend on the data. This issue occurs in many statistical computing problems. In this paper, we propose a novel strategy that exploits smoothness (regularity) in parameter space to improve the computational efficiency of MCMC algorithms. When evaluations of functions or statistical quantities are needed at a point in parameter space, interpolation from precomputed or previously computed values is used. More specifically, we focus on HMC algorithms that use geometric information for faster exploration of probability distributions. Our proposed method is based on precomputing the required geometric information on a set of grids before running the sampling algorithm, and approximating the geometric information for the current location of the sampler using the precomputed information at nearby grids at each iteration of HMC. A sparse grid interpolation method is used for high-dimensional problems. Tests on computational examples are shown to illustrate the advantages of our method.
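The precompute-then-interpolate idea can be sketched for a 1-D target (illustrative only: the potential, grid, and step sizes below are invented; the paper applies this to the geometric quantities of Riemannian-manifold HMC):

```python
import numpy as np

U_grad = lambda t: t + 0.4 * t ** 3        # gradient of an invented potential

# Precompute the gradient on a grid once, before sampling starts
grid = np.linspace(-5.0, 5.0, 2001)
grad_table = U_grad(grid)

def leapfrog(theta, p, eps, steps, grad):
    """Standard leapfrog integrator with a pluggable gradient evaluator."""
    p = p - 0.5 * eps * grad(theta)
    for _ in range(steps):
        theta = theta + eps * p
        p = p - eps * grad(theta)
    p = p + 0.5 * eps * grad(theta)        # undo half of the last momentum kick
    return theta, p

exact = leapfrog(1.0, 0.0, 0.01, 100, U_grad)
interp = leapfrog(1.0, 0.0, 0.01, 100, lambda t: np.interp(t, grid, grad_table))
print(abs(exact[0] - interp[0]))
```

On a sufficiently fine table the interpolated trajectory is indistinguishable from the exact-gradient one, while each gradient call becomes a cheap lookup; in higher dimensions the lookup table is replaced by sparse grid interpolation.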

Journal ArticleDOI
TL;DR: This article considers an optimal control problem for an elliptic partial differential equation with random inputs and proves the existence of optimal states, adjoint states and optimality conditions for each case.
Abstract: In this article, we consider an optimal control problem for an elliptic partial differential equation with random inputs. To determine an applicable deterministic control f(x), we consider four cases, which we compare for efficiency and feasibility. We prove the existence of optimal states, adjoint states and optimality conditions for each case. We also derive the optimality systems for the four cases. The optimality system is then discretized by a standard finite element method and a sparse grid collocation method for physical space and probability space, respectively. Numerical experiments are performed to assess their efficiency and feasibility.

Journal ArticleDOI
TL;DR: A novel self-calibration approach for underdetermined DOA estimation with nested and coprime arrays, that makes extensive use of the redundancies (or repeated elements) in the virtual array, and reduces the bi-affine problem to a linear underdetermined problem in source powers, from which the DOAs can be exactly recovered under suitable conditions.

Journal ArticleDOI
TL;DR: The reduced order model developed here is the first implementation of a non-intrusive reduced order method in ocean modelling, distinguishing it from other existing reduced order ocean models, and the inclusion of 3D dynamics with a free surface means that the change of the computational domain with the free surface movement is taken into account in reduced order modelling.

Journal ArticleDOI
TL;DR: This paper poses original PDE formulations associated with the SABR/LMMs proposed by Hagan and Lesniewski, Mercurio and Morini, and Rebonato and White, and proposes an alternative pricing approach based on partial differential equations (PDEs).
Abstract: SABR models have been used to incorporate stochastic volatility into LIBOR market models (LMM) in order to describe interest rate dynamics and price interest rate derivatives. From the numerical point of view, the pricing of derivatives with SABR/LIBOR market models (SABR/LMMs) is mainly carried out with Monte Carlo simulation. However, this approach could involve excessively long computational times. For the first time in the literature, in the present paper we propose an alternative pricing approach based on partial differential equations (PDEs). Thus, we pose original PDE formulations associated with the SABR/LMMs proposed by Hagan and Lesniewski (2008), Mercurio and Morini (2009) and Rebonato and White (2008). Moreover, as the PDEs associated with these SABR/LMMs are high dimensional in space, traditional full grid methods (like standard finite differences or finite elements) are not able to price derivatives over more than three or four underlying interest rates. In order to overcome this curse of dimensionality, a sparse grid combination technique is proposed. A comparison between Monte Carlo simulation results and the ones obtained with the sparse grid technique illustrates the performance of the method.

Journal ArticleDOI
TL;DR: Results of this study show that the sparse grid method is capable of constructing efficient representations to an accuracy that is considered acceptable in practical applications, if the anisotropy feature is used.

Journal ArticleDOI
TL;DR: This paper concentrates on a special construction for sparse grids, namely Smolyak's method, and provides sampling inequalities for mixed regularity functions on such sparse grids in terms of the number of points in the sparse grid.
Abstract: Sampling inequalities play an important role in deriving error estimates for various reconstruction processes. They provide quantitative estimates on a Sobolev norm of a function, defined on a bounded domain, in terms of a discrete norm of the function's sampled values and a smoothness term which vanishes if the sampling points become dense. The density measure, which is typically used to express these estimates, is the mesh norm or Hausdorff distance of the discrete points to the bounded domain. Such a density measure intrinsically suffers from the curse of dimension. The curse of dimension can be circumvented, at least to a certain extent, by considering additional structures. Here, we will focus on bounded mixed regularity. In this situation sparse grid constructions have been proven to overcome the curse of dimension to a certain extent. In this paper, we will concentrate on a special construction for such sparse grids, namely Smolyak's method, and provide sampling inequalities for mixed regularity functions on such sparse grids in terms of the number of points in the sparse grid. Finally, we will give some applications of these sampling inequalities.
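For reference, Smolyak's construction mentioned here combines univariate operators through their increments; a standard way to write it (with the usual combination-formula identity) is:

```latex
% Smolyak's construction from univariate operators U^i, with increments
% \Delta^i = U^i - U^{i-1} and U^0 = 0:
A(q, d) = \sum_{|\mathbf{i}|_1 \le q} \Delta^{i_1} \otimes \cdots \otimes \Delta^{i_d}
        = \sum_{q-d+1 \le |\mathbf{i}|_1 \le q} (-1)^{\,q - |\mathbf{i}|_1}
          \binom{d-1}{q - |\mathbf{i}|_1} \, U^{i_1} \otimes \cdots \otimes U^{i_d}
```

The point set underlying the right-hand side is exactly the sparse grid on which the paper's sampling inequalities are stated.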

Proceedings ArticleDOI
26 Jun 2017
TL;DR: The results demonstrate how an LSA approach within a PC framework can be an effective method for UQ, with a significant reduction in computational cost with respect to full tensor and sparse grid quadratures.
Abstract: Non-intrusive Polynomial Chaos (NIPC) methods have become popular for uncertainty quantification, as they have the potential to achieve a significant reduction in computational cost (number of evaluations) with respect to traditional techniques such as the Monte Carlo approach, while allowing the model to still be treated as a black box. This work makes use of Least Squares Approximations (LSA) in the context of appropriately selected PC bases. An efficient technique based on QR column pivoting has been employed to reduce the number of evaluations required to construct the approximation, demonstrating the superiority of the method with respect to sparse grid quadratures and to LSA with randomly selected quadrature points. Orthogonal (or orthonormal) polynomials used for the PC expansion are calculated numerically based on the given uncertainty distribution, making the approach optimal for any type of input uncertainty. The benefits of the proposed techniques are verified on a number of analytical test functions of increasing complexity and on two engineering test problems (uncertainty quantification of the deflection of a 3- and a 10-bar structure with up to 15 uncertain parameters). The results demonstrate how an LSA approach within a PC framework can be an effective method for UQ, with a significant reduction in computational cost with respect to full tensor and sparse grid quadratures. Copyright © 2017 by Rolls-Royce plc
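The QR column-pivoting subsampling used in such LSA/PC schemes can be sketched generically (a Legendre basis and invented sizes; this is not the paper's code):

```python
import numpy as np
from scipy.linalg import qr

# Candidate evaluation points and a degree-5 Legendre PC basis
candidates = np.linspace(-1.0, 1.0, 201)
A = np.polynomial.legendre.legvander(candidates, 5)    # 201 x 6 design matrix

# Pivoted QR on A^T ranks candidate points by how much they help conditioning
_, _, piv = qr(A.T, pivoting=True)
selected = np.sort(piv[:12])                           # keep the 12 top-ranked points

# Least squares PC fit from the selected evaluations only
model = lambda x: 1.0 + 0.5 * x - 2.0 * x ** 3         # lies in the basis span
coeffs, *_ = np.linalg.lstsq(A[selected], model(candidates[selected]), rcond=None)
print(coeffs)
```

With 12 well-placed points instead of all 201 (or a full tensor quadrature), the fit still reproduces any model in the span of the basis, which is the cost reduction the abstract refers to.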

Proceedings ArticleDOI
26 Jun 2017
TL;DR: A generic, highly scalable computational method to solve high-dimensional dynamic stochastic economic models on high-performance computing platforms using an adaptive sparse grid algorithm with d-linear basis functions that is combined with a dimensional decomposition scheme.
Abstract: We introduce and deploy a generic, highly scalable computational method to solve high-dimensional dynamic stochastic economic models on high-performance computing platforms. Within an MPI/TBB-parallel nonlinear time iteration framework, we approximate economic policy functions using an adaptive sparse grid algorithm with d-linear basis functions that is combined with a dimensional decomposition scheme. Numerical experiments on "Piz Daint" (Cray XC30) at the Swiss National Supercomputing Centre show that our framework scales nicely to at least 1,000 compute nodes. As an economic application, we compute global solutions to international real business cycle models up to 200 continuous dimensions with significant speedup values over state-of-the-art techniques.

Journal ArticleDOI
TL;DR: A sparse grid collocation method which adopts nested quadrature rules with delay and transformation to quantify the uncertainty of model solutions, and is found to be much more efficient than Monte Carlo simulations.

Journal ArticleDOI
TL;DR: The discretization error for the regression setting is discussed and error bounds are derived relying on the approximation properties of the discretized space and two examples based on tensor product spaces are presented which provide a suitable approach in the case of large sample sets in moderate dimensions.
Abstract: In this paper, we will discuss the discretization error for the regression setting and derive error bounds relying on the approximation properties of the discretized space. Furthermore, we will point out how the sampling error and the discretization error interact and how they can be balanced appropriately. We will present two examples based on tensor product spaces (sparse grids, hyperbolic crosses) which provide a suitable approach in the case of large sample sets in moderate dimensions.