
Showing papers on "Sparse grid published in 2020"


Journal ArticleDOI
TL;DR: A new computational method is proposed to comprehensively evaluate the positional accuracy reliability of industrial robots for single-coordinate, single-point, multipoint, and trajectory accuracy, using the sparse grid numerical integration method and the saddlepoint approximation method.

51 citations


Journal ArticleDOI
TL;DR: In this article, the authors studied the convergence rate of Smolyak integration for parametric maps and showed that dimension-independent convergence rates superior to N-term approximation rates under the same sparsity are achievable.
Abstract: We analyse convergence rates of Smolyak integration for parametric maps u : U → X taking values in a Banach space X, defined on the parameter domain U = [−1,1]^ℕ. For parametric maps which are sparse, as quantified by summability of their Taylor polynomial chaos coefficients, dimension-independent convergence rates superior to N-term approximation rates under the same sparsity are achievable. We propose a concrete Smolyak algorithm to a priori identify integrand-adapted sets of active multi-indices (and thereby unisolvent sparse grids of quadrature points) via upper bounds for the integrands' Taylor gpc coefficients. For so-called "(b, e)-holomorphic" integrands u with b ∈ ℓ^p(ℕ) for some p ∈ (0, 1), we prove the dimension-independent convergence rate 2/p − 1 in terms of the number of quadrature points. The proposed Smolyak algorithm is proved to yield (essentially) the same rate in terms of the total computational cost for both nested and non-nested univariate quadrature points. Numerical experiments and a mathematical sparsity analysis accounting for cancellations in quadratures and in the combination formula demonstrate that the asymptotic rate 2/p − 1 is realized computationally for a moderate number of quadrature points under certain circumstances. By a refined analysis of model integrand classes we show that a generally large preasymptotic range otherwise precludes reaching the asymptotic rate 2/p − 1 for practically relevant numbers of quadrature points.
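The Smolyak construction discussed above can be illustrated with a small sketch. The Python snippet below is our own toy illustration, not the paper's algorithm: the mapping from level l to an (l + 1)-point Gauss-Legendre rule is an assumed, illustrative choice, and the test integrand is a smooth separable function on [−1,1]² with a known exact integral.

```python
import numpy as np
from itertools import product
from math import comb, e

def gauss_1d(level):
    # Level-l univariate rule: (l + 1)-point Gauss-Legendre on [-1, 1]
    # (an illustrative choice; any nested or non-nested family works)
    return np.polynomial.legendre.leggauss(level + 1)

def smolyak_quad(f, d, q):
    """Smolyak quadrature of total level q in d dimensions via the combination formula."""
    total = 0.0
    for levels in product(range(q + 1), repeat=d):
        s = sum(levels)
        if not (q - d + 1 <= s <= q):
            continue
        coeff = (-1) ** (q - s) * comb(d - 1, q - s)
        rules = [gauss_1d(l) for l in levels]
        for idx in product(*[range(len(r[0])) for r in rules]):
            x = [rules[k][0][i] for k, i in enumerate(idx)]
            w = np.prod([rules[k][1][i] for k, i in enumerate(idx)])
            total += coeff * w * f(x)
    return total

# Smooth separable integrand on [-1, 1]^2 with exact integral (e - 1/e)^2
approx = smolyak_quad(lambda x: np.exp(x[0] + x[1]), d=2, q=6)
exact = (e - 1.0 / e) ** 2
```

For smooth integrands like this one, the combination of small anisotropic tensor rules already matches the exact value to several digits while using far fewer points than the full tensor grid of the same maximum level.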

30 citations


Journal ArticleDOI
TL;DR: In this paper, a hierarchical sequence of adaptive mesh refinements for the spatial approximation is combined with adaptive anisotropic sparse Smolyak grids in the stochastic space in such a way as to minimize the computational cost.

17 citations


Journal ArticleDOI
TL;DR: In this article, the authors construct deep neural networks with rectified power units (RePU), which can give better approximations for smooth functions, and use fast spectral transforms for tensor-product grid and sparse grid discretization to get polynomial approximations.
Abstract: Deep neural networks with rectified linear units (ReLU) have recently become more and more popular. However, the derivatives of the function represented by a ReLU network are not continuous, which limits the use of ReLU networks to situations where smoothness is not required. In this paper, we construct deep neural networks with rectified power units (RePU), which can give better approximations for smooth functions. Optimal algorithms are proposed to explicitly build neural networks with sparsely connected RePUs, which we call PowerNets, to represent polynomials with no approximation error. For general smooth functions, we first project the function onto its polynomial approximation, then use the proposed algorithms to construct the corresponding PowerNet. Thus, the error of the best polynomial approximation provides an upper bound on the best RePU network approximation error. For smooth functions in higher-dimensional Sobolev spaces, we use fast spectral transforms for tensor-product grid and sparse grid discretization to get polynomial approximations. Our constructive algorithms show clearly a close connection between spectral methods and deep neural networks: a PowerNet with $n$ layers can exactly represent polynomials up to degree $s^n$, where $s$ is the power of the RePUs. The proposed PowerNets have potential applications in situations where high accuracy is desired or smoothness is required.
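The exactness property underlying such constructions is easy to verify numerically for s = 2: a pair of RePU units reproduces x² exactly, and the polarization identity then yields exact products. The check below is our own toy illustration of that fact, not the paper's PowerNet construction.

```python
import numpy as np

def repu(x, s=2):
    # Rectified power unit: max(0, x)^s
    return np.maximum(0.0, x) ** s

rng = np.random.default_rng(0)
x, y = rng.standard_normal(1000), rng.standard_normal(1000)

# sigma_2(x) + sigma_2(-x) = x^2 exactly: one RePU layer squares its input
sq = repu(x) + repu(-x)

# Polarization identity x*y = ((x+y)^2 - (x-y)^2) / 4 built from the same gadget,
# so products (and hence monomials) are represented with no approximation error
prod = (repu(x + y) + repu(-(x + y)) - repu(x - y) - repu(-(x - y))) / 4.0
```

Since squaring and multiplication are exact, any polynomial can be assembled from RePU layers without approximation error, which is the building block behind the degree-s^n representation result quoted above.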

12 citations


Journal ArticleDOI
TL;DR: In this paper, a structure-exploiting dimension-adaptive sparse grid approximation methodology using Sobol' decompositions in each subspace is proposed to drive the adaptive process.

12 citations


Journal ArticleDOI
TL;DR: To quantify waveguide dispersion uncertainty, a stochastic analysis technique based on the polynomial chaos method is proposed, where Smolyak sparse grids are used as sample spaces for the recently developed nonintrusive stochastic testing (ST) technique, allowing for a smaller set of candidate collocation nodes in the ST algorithm.
Abstract: To quantify waveguide dispersion uncertainty, we propose a stochastic analysis technique based on the polynomial chaos method, where Smolyak sparse grids are used as sample spaces for the recently developed nonintrusive stochastic testing (ST) technique. This strategy allows for a smaller set of candidate collocation nodes in the ST algorithm compared with the computationally expensive tensor grid approach. The method is tailored to the analysis of waveguide structures with a statistically spatially varying dielectric-property profile, which is then captured by a discrete set of uncorrelated random variables by means of Karhunen–Loeve transforms. In this way, the propagation properties of realistic stochastic waveguides can be predicted, which is of major importance in the electronic design process. The presented method, which is constructed around a second-order full-wave deterministic finite element solver, is illustrated through the uncertainty quantification of two waveguide dispersion problems and is validated using commercial electromagnetic field solvers and the Monte Carlo method.

11 citations


Posted Content
TL;DR: The proposed model order reduction approach for non-intrusive surrogate modeling of parametric dynamical systems can be applied even in a high-dimensional setting, by employing locally-refined sparse grids to weaken the curse of dimensionality.
Abstract: We propose a model order reduction approach for non-intrusive surrogate modeling of parametric dynamical systems. The reduced model over the whole parameter space is built by combining surrogates in frequency only, built at few selected values of the parameters. This, in particular, requires matching the respective poles by solving an optimization problem. If the frequency surrogates are constructed by a suitable rational interpolation strategy, frequency and parameters can both be sampled in an adaptive fashion. This, in general, yields frequency surrogates with different numbers of poles, a situation addressed by our proposed algorithm. Moreover, we explain how our method can be applied even in high-dimensional settings, by employing locally-refined sparse grids in parameter space to weaken the curse of dimensionality. Numerical examples are used to showcase the effectiveness of the method, and to highlight some of its limitations in dealing with unbalanced pole matching, as well as with a large number of parameters.

9 citations


Journal ArticleDOI
TL;DR: This work approximates economic policy functions using an adaptive, high-dimensional model representation scheme, combined with adaptive sparse grids, within an MPI–TBB parallel time-iteration framework, and introduces a performant vectorization scheme for the interpolation compute kernel.
Abstract: We propose a generic and scalable method for computing global solutions of nonlinear, high-dimensional dynamic stochastic economic models. First, within an MPI–TBB parallel time-iteration framework, we approximate economic policy functions using an adaptive, high-dimensional model representation scheme, combined with adaptive sparse grids. With increasing dimensions, the number of points in this efficiently-chosen combination of low-dimensional grids grows much more slowly than standard tensor product grids, sparse grids, or even adaptive sparse grids. Moreover, the adaptivity within the individual component functions adds an additional layer of sparsity, since grid points are added only where they are most needed, that is to say, in regions of the computational domain with steep gradients or at non-differentiabilities. Second, we introduce a performant vectorization scheme for the interpolation compute kernel. Third, we validate our claims with numerical experiments conducted on "Piz Daint" (Cray XC50) at the Swiss National Supercomputing Centre. We observe significant speedups over state-of-the-art techniques, and almost ideal strong scaling up to at least 1,000 compute nodes. Fourth, to demonstrate the broad applicability of our method, we compute global solutions to two different versions of a dynamic stochastic economic model: a high-dimensional international real business cycle model with capital adjustment costs, and with or without irreversible investment. We solve these models with up to 300 continuous state variables globally.
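The core adaptivity idea, adding grid points only where hierarchical surpluses are large, can be sketched in one dimension. The snippet below is our own minimal illustration, not the paper's MPI–TBB framework: the tolerance, the test function, and the restriction to functions vanishing at the boundary are all assumptions made for simplicity.

```python
import numpy as np
from collections import deque

def hat(x, level, index):
    # Hierarchical hat basis function centered at index * 2^-level, support width 2^(1-level)
    h = 2.0 ** (-level)
    return np.maximum(0.0, 1.0 - abs(x - index * h) / h)

def adaptive_interpolant(f, tol=1e-3, max_level=12):
    """Adaptive hierarchical interpolation on [0, 1]; assumes f(0) = f(1) = 0."""
    surpluses = []                      # committed (level, index, surplus) triples
    queue = deque([(1, 1)])             # root point x = 0.5; FIFO processing gives level order
    while queue:
        level, index = queue.popleft()
        xc = index * 2.0 ** (-level)
        w = f(xc) - sum(s * hat(xc, l, i) for l, i, s in surpluses)   # hierarchical surplus
        surpluses.append((level, index, w))
        if abs(w) > tol and level < max_level:
            queue.append((level + 1, 2 * index - 1))    # refine only where the surplus is large
            queue.append((level + 1, 2 * index + 1))
    return lambda x: sum(s * hat(x, l, i) for l, i, s in surpluses), len(surpluses)

f = lambda x: np.sin(np.pi * x)
u, n_points = adaptive_interpolant(f, tol=1e-3)
err = max(abs(f(x) - u(x)) for x in np.linspace(0.0, 1.0, 1001))
```

For smooth functions the surpluses shrink by roughly a factor of four per level, so refinement stops after a few levels; for kinked functions, points would cluster around the non-differentiability exactly as the abstract describes.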

9 citations


Journal ArticleDOI
TL;DR: In this paper, the Biot problem with uncertain poroelastic coefficients is modeled using a finite set of parameters with prescribed probability distribution, and a deterministic solver is based on a Hybrid High-Order discretization supporting general polyhedral meshes and arbitrary approximation orders.

8 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a class of adaptive multiresolution (also called adaptive sparse grid) discontinuous Galerkin (DG) methods for simulating scalar wave equations in second order form in space.
Abstract: In this paper, we propose a class of adaptive multiresolution (also called adaptive sparse grid) discontinuous Galerkin (DG) methods for simulating scalar wave equations in second order form in space. The two key ingredients of the schemes include an interior penalty DG formulation in the adaptive function space and two classes of multiwavelets for achieving multiresolution. In particular, the orthonormal Alpert’s multiwavelets are used to express the DG solution in terms of a hierarchical structure, and the interpolatory multiwavelets are further introduced to enhance computational efficiency in the presence of variable wave speed or nonlinear source. Some theoretical results on stability and accuracy of the proposed method are presented. Benchmark numerical tests in 2D and 3D are provided to validate the performance of the method.

6 citations


Book ChapterDOI
01 Jan 2020
TL;DR: A sparse grid approach based on the sparse grid combination technique is proposed, which splits the simulation grid into multiple smaller grids of varying resolution to increase the maximum resolution as well as the parallel efficiency of the current solvers.
Abstract: Plasma fusion is one of the promising candidates for an emission-free energy source and is heavily investigated with high-resolution numerical simulations. Unfortunately, these simulations suffer from the curse of dimensionality due to the five-plus-one-dimensional nature of the equations. Hence, we propose a sparse grid approach based on the sparse grid combination technique which splits the simulation grid into multiple smaller grids of varying resolution. This enables us to increase the maximum resolution as well as the parallel efficiency of the current solvers. At the same time we introduce fault tolerance within the algorithmic design and increase the resilience of the application code. We base our implementation on a manager-worker approach which computes multiple solver runs in parallel by distributing tasks to different process groups. Our results demonstrate good convergence for linear fusion runs and show high parallel efficiency up to 180k cores. In addition, our framework achieves accurate results with low overhead in faulty environments. Moreover, for nonlinear fusion runs, we show the effectiveness of the combination technique and discuss existing shortcomings that are still under investigation.
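The point-count savings that motivate splitting one high-resolution grid into many coarser component grids can be quantified in a toy 2-D setting (the paper's fusion application is five-plus-one-dimensional; the 2-D count below is only an illustration of the mechanism).

```python
def full_grid_points(n):
    # Full 2-D tensor grid with 2^n + 1 points per dimension
    return (2 ** n + 1) ** 2

def combination_points(n):
    # The 2-D combination technique replaces the full level-n grid by component grids
    # with l1 + l2 = n (coefficient +1) and l1 + l2 = n - 1 (coefficient -1).
    # Count the points of every component grid, i.e. the total work of solving them all.
    total = 0
    for s in (n, n - 1):
        total += sum((2 ** l1 + 1) * (2 ** (s - l1) + 1) for l1 in range(s + 1))
    return total

n = 10
sparse_total, full_total = combination_points(n), full_grid_points(n)
```

At target level 10 the component grids together hold roughly 22.5 thousand points versus over a million for the full grid, a factor of about 45; in higher dimensions the gap widens further, and each component grid is an independent task that a manager-worker scheme can distribute.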

Journal ArticleDOI
TL;DR: This research presents a novel approach to solve the challenge of uncertainty propagation of the frequency response in the context of viscoelastic damping structures with real-time requirements.
Abstract: Uncertainty propagation (UP) of the frequency response is essential for the robust design of viscoelastic damping structures One challenge in solving this problem is enormous computation cost from

Book ChapterDOI
01 Jan 2020
TL;DR: The ME-PC method for efficiently constructing a surrogate model of a rapidly varying QoI is presented, and the iterative HDMR technique for EM problems involving large numbers of random variables is detailed.
Abstract: In this chapter, efficient collocation methods for EM analysis are reviewed. Traditional SC methods leveraging tensor-product, sparse grid, and Stroud cubature rules are described first. These methods are rather straightforward to implement and suitable for EM problems involving smoothly varying QoI. Then, the ME-PC method for efficiently constructing a surrogate model of a rapidly varying QoI is presented. Also detailed is the iterative HDMR technique for EM problems involving large numbers of random variables. Finally, an approximation technique based on the spectral quantic TT (QTT) (SQTT) for constructing a surrogate model in a high-dimensional random domain is briefly reviewed, before the chapter is concluded by numerical examples demonstrating applications of cutting-edge UQ methods to various EM problems.

Journal ArticleDOI
TL;DR: This method uses statistical information extracted from an initial fit on a sparse grid to select optimal grid points in an iterative manner and is, therefore, called the iterative variance minimizing grid approach.
Abstract: We present a method for the generation of points in space needed to create training data for fitting of nonlinear parametric models. This method uses statistical information extracted from an initial fit on a sparse grid to select optimal grid points in an iterative manner and is, therefore, called the iterative variance minimizing grid approach. We demonstrate the method in the case of six-dimensional intermolecular potential energy surfaces (PESs) fitted to ab initio computed interaction energies. The number of required grid points is reduced by roughly a factor of two in comparison to alternative systematic sampling methods. The method is not limited to fitting PESs and can be applied to any cases of fitting parametric models where data points may be chosen freely but are expensive to obtain.

Posted Content
TL;DR: This paper proposes a fast matrix-vector multiplication, the grouped Fourier transform, that finds theoretical foundation in the context of the analysis of variance (ANOVA) decomposition where there is a one-to-one correspondence from the ANOVA terms to the proposed groups.
Abstract: Many applications are based on the use of efficient Fourier algorithms such as the fast Fourier transform and the nonequispaced fast Fourier transform. In a high-dimensional setting it is also already possible to deal with special sampling sets such as sparse grids or rank-1 lattices. In this paper we propose fast algorithms for high-dimensional scattered data points with corresponding frequency sets that consist of groups along the dimensions in the frequency domain. From there we propose a fast matrix-vector multiplication, the grouped Fourier transform, that finds its theoretical foundation in the context of the analysis of variance (ANOVA) decomposition, where there is a one-to-one correspondence from the ANOVA terms to our proposed groups. An application can be found in function approximation for high-dimensional functions where the number of variable interactions is limited. We tested two different approximation approaches: classical Tikhonov regularization, namely regularized least squares, and the technique of group lasso, which promotes sparsity in the groups. As for the latter, there are no explicit solution formulas, which is why we applied the fast iterative shrinkage-thresholding algorithm (FISTA) to obtain the minimizer. Numerical experiments in under-, overdetermined, and noisy settings indicate the applicability of our algorithms.
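The FISTA iteration the authors rely on can be sketched for the plain (ungrouped) lasso; the group-lasso variant only replaces the soft-threshold with a group-wise shrinkage. The problem sizes, seed, and regularization weight below are our own illustrative choices.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, steps=1000):
    """FISTA for min_x 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(steps):
        x_new = soft_threshold(y - A.T @ (A @ y - b) / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + (t - 1.0) / t_new * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40.0)   # underdetermined system
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]               # sparse ground truth
b = A @ x_true
x_hat = fista(A, b, lam=0.01)
objective = lambda v: 0.5 * np.sum((A @ v - b) ** 2) + 0.01 * np.sum(np.abs(v))
```

Despite the system being underdetermined, the l1 penalty steers the iteration toward the sparse solution, which is the same mechanism the grouped variant uses to switch off whole ANOVA terms.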

Journal ArticleDOI
TL;DR: A new spectral procedure, called the implied spectral equation (ISE), is used in this paper; it allows some collocation points to use any high-order finite difference scheme while the time derivatives of the other spectral coefficients are implied.
Abstract: This paper investigates sparse grids on a hexagonal cell structure using a Local-Galerkin method (LGM) or generalized spectral element method (SEM). Such methods allow sparse grids to be used, known as serendipity grids in square cells. This means that not all points of the full grid are used. Using a high-order polynomial, some points of each cell are eliminated in the discretization, thus saving Central Processing Unit (CPU) time. Here a sparse SEM scheme is proposed for hexagonal cells. It uses a representation of fields by second-order polynomials and achieves third-order accuracy. Like SEM, LGM is strictly local for explicit time integration. This makes LGM more suitable for multiprocessing computers compared with classical Galerkin methods. The computer time depends on the possible timestep and program implementation. Assuming that these do not change when going to a sparse grid, the potential saving of computer time due to sparseness is 1:2. The projected CPU saving in 3-D from sparseness is by a factor of 3:8. A new spectral procedure is used in this paper, called the implied spectral equation (ISE). This procedure allows some collocation points to use any high-order finite difference scheme, while the time derivatives of the other spectral coefficients are implied.

Posted Content
TL;DR: The main novelties of this effort are the integration of the sparse-grid method into the probabilistic numerical scheme for computing escape probability, as well as the demonstration in computing RE probabilities.
Abstract: Runaway electrons (RE) generated during magnetic disruptions present a major threat to the safe operation of plasma nuclear fusion reactors. A critical aspect of understanding RE dynamics is to calculate the runaway probability, i.e., the probability that an electron in the phase space will runaway on, or before, a prescribed time. Such probability can be obtained by solving the adjoint equation of the underlying Fokker-Planck equation that controls the electron dynamics. In this effort, we present a sparse-grid probabilistic scheme for computing the runaway probability. The key ingredient of our approach is to represent the solution of the adjoint equation as a conditional expectation, such that discretizing the differential operator reduces to the approximation of a set of integrals. Adaptive sparse grid interpolation is utilized to approximate the map from the phase space to the runaway probability. The main novelties of this effort are the integration of the sparse-grid method into the probabilistic numerical scheme for computing escape probability, as well as the demonstration in computing RE probabilities. Two numerical examples are given to illustrate that the proposed method can achieve $\mathcal{O}(\Delta t)$ convergence and that the adaptive refinement strategy can effectively handle the sharp transition layer between the runaway and non-runaway regions.

Journal ArticleDOI
TL;DR: A time-explicit sparse grid discontinuous Galerkin method for solving the three-dimensional time-domain Maxwell equations and the conservation properties and convergence rates are established for different choices of numerical fluxes.

Proceedings ArticleDOI
05 Jul 2020
TL;DR: In this article, the singular integration problem is recast into a multidimensional interpolation problem, where the quadruple singular integral corresponds to a six-dimensional function, which is interpolated by a set of pre-computed universal (frequency, materials independent) look-up tables.
Abstract: An efficient approach for populating the near weakly-singular and near singular integral interactions in the Boundary Element Method (BEM) solution of the Electric Field Integral Equation (EFIE) and the Magnetic Field Integral Equation (MFIE) on flat triangular meshes is outlined. The singular integration problem is recast into a multidimensional interpolation problem, where the quadruple singular integral corresponds to a six-dimensional function, which is interpolated by a set of pre-computed universal (frequency- and material-independent) look-up tables. In a previous version of SIBI this interpolation was performed via multidimensional sparse grids. In this paper, a low-rank tensor decomposition, the Tensor Train (TT) decomposition, is used. The proposed method is compared to singularity subtraction, as well as to sparse-grid SIBI, and offers improved accuracy and computational cost.

Journal ArticleDOI
TL;DR: The randomized setting is studied, i.e., Smolyak's method is randomized, and the spaces for which randomized algorithms for infinite-dimensional integration are superior to deterministic ones are characterized.

Proceedings ArticleDOI
18 Mar 2020
TL;DR: A strong tracking reduced-dimension sparse grid Gauss Hermite filtering method is presented, which reduces the nonlinear integration dimension through separation of the linear and nonlinear terms, further reduces the number of Gauss Hermite integration points by the sparse grid method, and ensures the stability of the system by the strong tracking method.
Abstract: The large misalignment angle error model of the strapdown inertial navigation system (SINS) is a kind of nonlinear Gaussian error model. The Gauss Hermite method can approach the Gauss integral with any precision and achieve a high-precision filtering effect. The number of Gauss Hermite integral points is exponentially related to the dimension of the nonlinear system. In the process of real-time filtering, the large number of integral points of a multidimensional system may lead to the curse of dimensionality. Based on the initial alignment error model of SINS with large misalignment angle, this paper presents a strong tracking reduced-dimension sparse grid Gauss Hermite filtering method, which reduces the nonlinear integration dimension through separation of the linear and nonlinear terms, further reduces the number of Gauss Hermite integration points by the sparse grid method, and ensures the stability of the system by the strong tracking method.
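The building block of such filters, Gauss-Hermite quadrature for Gaussian expectations, and the exponential point growth it suffers in a tensor construction, can be shown in a few lines. This is a generic quadrature sketch, not the paper's reduced-dimension filter.

```python
import numpy as np

def gauss_hermite_expectation(f, n):
    # E[f(X)] for X ~ N(0, 1) via an n-point Gauss-Hermite rule
    # (physicists' convention: nodes and weights for the weight exp(-x^2),
    # hence the change of variables x -> sqrt(2) x and the 1/sqrt(pi) factor)
    nodes, weights = np.polynomial.hermite.hermgauss(n)
    return np.sum(weights * f(np.sqrt(2.0) * nodes)) / np.sqrt(np.pi)

# An n-point rule is exact for polynomials up to degree 2n - 1,
# so 3 points already recover the Gaussian moments E[X^2] = 1 and E[X^4] = 3
m2 = gauss_hermite_expectation(lambda x: x ** 2, n=3)
m4 = gauss_hermite_expectation(lambda x: x ** 4, n=3)

# A full tensor rule in d dimensions needs n^d points (about 3.5e9 for n = 3, d = 20),
# which is exactly the growth that sparse grid and dimension-reduction tricks avoid
tensor_points = 3 ** 20
```

Separating linear from nonlinear states, as the abstract describes, shrinks the dimension d entering this n^d count, and the sparse grid then thins out the remaining tensor rule.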

Journal ArticleDOI
TL;DR: Numerical tests are presented to illustrate the performance of the three algorithms for uncertainty quantification in modeling coupled Stokes and Darcy flows, with the stochastic multiscale flux basis showing significant savings in computational cost.

Journal ArticleDOI
TL;DR: A numerical framework for computing nested quadrature rules for various weight functions is presented; it develops a bi-level optimization scheme to solve the moment-matching conditions for the two levels of the main and nested rules and uses a penalty method to enforce the limits on the nodes and weights.

Journal ArticleDOI
TL;DR: Structural optimization searches for the optimal shape and topology of components such that specific physical quantities are optimized, for instance, the stability of the resulting structure.
Abstract: Structural optimization searches for the optimal shape and topology of components such that specific physical quantities are optimized, for instance, the stability of the resulting structure. Probl...

Proceedings ArticleDOI
19 May 2020
TL;DR: A novel conditional density driven grid (CDDG) design is proposed, which improves the point-mass approximation of the conditional densities and offers better estimation performance compared to the standard equidistant grid with the same number of points and, thus, the same computational complexity.
Abstract: The paper is devoted to the state estimation of nonlinear stochastic dynamic systems. The stress is laid on a grid-based numerical solution to the Bayesian recursive relations using the point-mass filter (PMF). In the paper, a novel conditional density driven grid (CDDG) design is proposed. The CDDG design takes advantage of non-equidistant grid points by combination of two grids; dense and sparse. The dense grid is designed to cover the state space region, where the significant mass of one or both conditional (i.e., predictive and filtering) densities is anticipated. The sparse grid covers the support of the conditional distribution tails only. As a consequence, the CDDG design improves the point-mass approximation of the conditional densities and offers better estimation performance compared to the standard equidistant grid with the same number of points and, thus, with the same computational complexity. Performance of the CDDG-based PMF is illustrated in a terrain-aided navigation scenario.

Journal ArticleDOI
06 Aug 2020-Energies
TL;DR: The grid design is discussed and a novel density difference grid is proposed, which covers the regions of the state space where the conditional density is significantly spatially varying with a dense grid.
Abstract: The paper deals with the Bayesian state estimation of nonlinear stochastic dynamic systems. The stress is laid on the point-mass filter, solving the Bayesian recursive relations for the state estimate conditional density computation using the deterministic grid-based numerical integration method. In particular, the grid design is discussed and the novel density difference grid is proposed. The proposed grid design covers such regions of the state-space where the conditional density is significantly spatially varying, by the dense grid. In other regions, a sparse grid is used to keep the computational complexity low. The proposed grid design is thoroughly discussed, analyzed, and illustrated in a numerical study.
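A single predict-update cycle of the point-mass filter that this grid design accelerates can be written out on an equidistant grid and checked against the exact Kalman update for a linear Gaussian model. The model, grid bounds, and noise variances below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gauss(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

# Equidistant point-mass grid (the design the proposed method refines non-equidistantly)
grid = np.linspace(-8.0, 8.0, 1601)
dx = grid[1] - grid[0]

# Linear Gaussian toy model: prior x ~ N(0, 1), dynamics x' = x + w with w ~ N(0, q),
# measurement z = x + v with v ~ N(0, r); all values are illustrative
q, r, z = 0.25, 0.25, 1.0
p = gauss(grid, 0.0, 1.0)

# Prediction step: discrete Chapman-Kolmogorov convolution with the transition kernel
p_pred = np.array([np.sum(p * gauss(xj, grid, q)) * dx for xj in grid])

# Measurement update: multiply by the likelihood and renormalize
p_post = p_pred * gauss(z, grid, r)
p_post /= np.sum(p_post) * dx

pmf_mean = np.sum(grid * p_post) * dx

# Exact Kalman reference for this linear Gaussian case:
# predicted variance 1 + q, gain (1 + q) / (1 + q + r)
kalman_mean = (1.0 + q) / (1.0 + q + r) * z
```

On a fine equidistant grid the point-mass mean matches the Kalman mean closely; the density-difference design in the paper keeps this accuracy while spending most of the points only where the conditional density actually varies.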

Journal ArticleDOI
TL;DR: In this article, the authors introduce concepts from uncertainty quantification and numerical analysis for the efficient evaluation of stochastic high dimensional Newton iterates, and develop complex analytic regularity theory of the solution with respect to the random variables.
Abstract: In this paper we introduce concepts from uncertainty quantification (UQ) and numerical analysis for the efficient evaluation of stochastic high-dimensional Newton iterates. In particular, we develop complex analytic regularity theory of the solution with respect to the random variables. This justifies the application of sparse grids for the computation of statistical measures. Convergence rates are derived and are shown to be subexponential or algebraic with respect to the number of realizations of random perturbations. Due to the accuracy of the method, sparse grids are well suited for computing low-probability events with high confidence. We apply our method to the power flow problem. Numerical experiments on the non-trivial 39-bus New England power system model with large stochastic loads are consistent with the theoretical convergence rates. Moreover, compared to the Monte Carlo method our approach is at least 10^11 times faster for the same accuracy.

Journal ArticleDOI
TL;DR: For rank-one (separable) functions, which are the product of two univariate functions, it is shown that the hyperbolic cross truncation is indeed the best if the functions have weak singularities on the domain or boundaries so that the spectral series has a finite order of power-law convergence.
Abstract: The key to most successful applications of Chebyshev and Fourier spectral methods in high space dimension is a combination of a Smolyak sparse grid together with so-called "hyperbolic cross" truncation. It is easy to find counterexamples for which the hyperbolic cross truncation is far from optimal. An important question is: what characteristics of a function make it "crossy", that is, suitable for the hyperbolic cross truncation? We have not been able to find a complete answer to this question. However, by combining low-rank SVD approximation, Poisson summation and imbricate series, hyperbolic coordinates and numerical experimentation, we are, to borrow from Fermi, "confused at a higher level". For rank-one (separable) functions, which are the product of two univariate functions, we show that the hyperbolic cross truncation is indeed the best if the functions have weak singularities on the domain or boundaries so that the spectral series has a finite order of power-law convergence. For functions smooth on the domain, and therefore blessed with exponentially convergent spectral series, we have failed to find any reasonable examples where the hyperbolic cross truncation is best.
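The rank-one, power-law case can be illustrated numerically: if the coefficient magnitudes decay like a power of the product (k1+1)(k2+1), then keeping the largest coefficients is by construction a hyperbolic cross, and it captures at least as much spectral mass as a square truncation of equal size. The decay exponent and index range below are our own illustrative choices.

```python
import numpy as np

K = 64                                   # index range per dimension (illustrative)
k = np.arange(K)
# Rank-one coefficients with power-law decay: a[k1, k2] = ((k1+1) * (k2+1))^-3,
# a toy model of a separable function with finite-order spectral convergence
a = np.outer((k + 1.0) ** -3.0, (k + 1.0) ** -3.0)

m = 8
budget = m * m                           # both truncations keep the same number of terms

# Square truncation: keep indices k1, k2 < m
square_mass = a[:m, :m].sum()

# Hyperbolic cross of equal cardinality: since |a| decreases in the product
# (k1+1)(k2+1), the largest `budget` coefficients form a hyperbolic cross
cross_mass = np.sort(a.ravel())[::-1][:budget].sum()
```

The square truncation spends terms on the low-magnitude corner (k1, k2 both near m) while omitting larger coefficients of the form (0, k2); the hyperbolic cross reallocates exactly that budget, which is the intuition behind the rank-one result in the abstract.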

Posted Content
TL;DR: This work develops a novel methodology that dramatically alleviates the curse of dimensionality, and demonstrates via extensive numerical experiments that the methodology can handle problems with a design space of hundreds of dimensions, improving both prediction accuracy and computational efficiency by orders of magnitude relative to typical alternative methods in practice.
Abstract: Stochastic kriging has been widely employed for simulation metamodeling to predict the response surface of a complex simulation model. However, its use is limited to cases where the design space is low-dimensional, because the number of design points required for stochastic kriging to produce accurate prediction, in general, grows exponentially in the dimension of the design space. The large sample size results in both a prohibitive sample cost for running the simulation model and a severe computational challenge due to the need of inverting large covariance matrices. Based on tensor Markov kernels and sparse grid experimental designs, we develop a novel methodology that dramatically alleviates the curse of dimensionality. We show that the sample complexity of the proposed methodology grows very mildly in the dimension, even under model misspecification. We also develop fast algorithms that compute stochastic kriging in its exact form without any approximation schemes. We demonstrate via extensive numerical experiments that our methodology can handle problems with a design space of more than 10,000 dimensions, improving both prediction accuracy and computational efficiency by orders of magnitude relative to typical alternative methods in practice.

Posted Content
TL;DR: The fifth order sparse grid WENO method is applied to kinetic problems modeled by high dimensional Vlasov based PDEs to further demonstrate large savings of computational costs by comparing with simulations on regular single grids.
Abstract: The weighted essentially non-oscillatory (WENO) schemes are a popular class of high order accurate numerical methods for solving hyperbolic partial differential equations (PDEs). However, when the spatial dimension is high, the number of spatial grid points increases significantly, which leads to a large number of operations and high computational cost in numerical simulations with nonlinear high order accurate WENO schemes such as a fifth order WENO scheme. How to achieve fast simulations by high order WENO methods for high spatial dimension hyperbolic PDEs is a challenging and important question. In the literature, the sparse-grid technique has been developed as a very efficient approximation tool for high dimensional problems. In a recent work [Lu, Chen and Zhang, Pure and Applied Mathematics Quarterly, 14 (2018) 57-86], a third order finite difference WENO method with sparse-grid combination technique was designed to solve multidimensional hyperbolic equations including both linear advection equations and nonlinear Burgers' equations. In application problems, higher than third order WENO schemes are often preferred in order to efficiently resolve the complex solution structures. In this paper, we extend the approach to higher order WENO simulations, specifically the fifth order WENO scheme. A fifth order WENO interpolation is applied in the prolongation part of the sparse-grid combination technique to deal with discontinuous solutions. Benchmark problems are first solved to show that significant CPU times are saved while both fifth order accuracy and stability of the WENO scheme are preserved for simulations on sparse grids. The fifth order sparse grid WENO method is then applied to kinetic problems modeled by high dimensional Vlasov based PDEs to further demonstrate large savings of computational costs by comparing with simulations on regular single grids.