
Showing papers on "Sparse grid" published in 2014


Journal ArticleDOI
TL;DR: In this article, an adaptive sparse grid stochastic collocation approach based upon Leja interpolation sequences is proposed for approximating parameterized functions with high-dimensional parameters, with the weights in the Leja construction determined by the probability densities of the random variables.
Abstract: We propose an adaptive sparse grid stochastic collocation approach based upon Leja interpolation sequences for approximation of parameterized functions with high-dimensional parameters. Leja sequences are arbitrarily granular (any number of nodes may be added to a current sequence, producing a new sequence) and thus are a good choice for the univariate composite rule used to construct adaptive sparse grids in high dimensions. When undertaking stochastic collocation one is often interested in constructing a weighted approximation where the weights are determined by the probability densities of the random variables. This paper establishes that a certain weighted formulation of one-dimensional Leja sequences produces a sequence of nodes whose empirical distribution converges to the corresponding limiting distribution of the Gauss quadrature nodes associated with the weight function. This property is true even for unbounded domains. We apply the Leja sparse grid approach to several high-dimensional problems and demonstrate that Leja sequences are often superior to more standard sparse grid constructions (e.g. Clenshaw-Curtis), at least for interpolatory metrics.
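The greedy rule behind weighted Leja sequences is compact enough to sketch. The following Python snippet is illustrative only and not the authors' implementation: the function name weighted_leja, the candidate-grid search, and the square-root weighting of the density are assumptions chosen for clarity.

```python
import numpy as np

def weighted_leja(num_nodes, weight, candidates):
    """Greedily grow a weighted Leja sequence from a set of candidate points.

    Illustrative sketch: each new node maximizes sqrt(w(x)) * prod_j |x - x_j|
    over the candidates (one common weighted-Leja variant; the paper's exact
    weighting may differ). Log-products would be used in practice to avoid
    overflow for long sequences.
    """
    w = np.sqrt(weight(candidates))
    nodes = [candidates[np.argmax(w)]]  # start at the weight's peak
    for _ in range(num_nodes - 1):
        dist = np.abs(candidates[:, None] - np.array(nodes)[None, :])
        objective = w * np.prod(dist, axis=1)
        nodes.append(candidates[np.argmax(objective)])
    return np.array(nodes)

# Example: Leja nodes weighted by a standard Gaussian density on [-10, 10].
cands = np.linspace(-10.0, 10.0, 20001)
print(weighted_leja(8, lambda x: np.exp(-0.5 * x**2), cands))
```

Because new nodes are appended without discarding the existing ones, such a rule nests naturally inside a dimension-adaptive sparse grid construction.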

113 citations


Journal ArticleDOI
19 Nov 2014
TL;DR: A new method for fluid simulation on high-resolution adaptive grids which rivals the throughput and parallelism potential of methods based on uniform grids is introduced and an adaptive multigrid-preconditioned Conjugate Gradient solver is demonstrated that achieves resolution-independent convergence rates while admitting a lightweight implementation with a modest memory footprint.
Abstract: We introduce a new method for fluid simulation on high-resolution adaptive grids which rivals the throughput and parallelism potential of methods based on uniform grids. Our enabling contribution is SPGrid, a new data structure for compact storage and efficient stream processing of sparsely populated uniform Cartesian grids. SPGrid leverages the extensive hardware acceleration mechanisms inherent in the x86 Virtual Memory Management system to deliver sequential and stencil access bandwidth comparable to dense uniform grids. Second, we eschew tree-based adaptive data structures in favor of storing simulation variables in a pyramid of sparsely populated uniform grids, thus avoiding the cost of indirect memory access associated with pointer-based representations. We show how the costliest algorithmic kernels of fluid simulation can be implemented as a composition of two kernel types: (a) stencil operations on a single sparse uniform grid, and (b) structured data transfers between adjacent levels of resolution, even when modeling non-graded octrees. Finally, we demonstrate an adaptive multigrid-preconditioned Conjugate Gradient solver that achieves resolution-independent convergence rates while admitting a lightweight implementation with a modest memory footprint. Our method is complemented by a new interpolation scheme that reduces dissipative effects and simplifies dynamic grid adaptation. We demonstrate the efficacy of our method in end-to-end simulations of smoke flow.
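To make the storage idea concrete, here is a toy Python sketch of a sparsely populated uniform grid that allocates only the blocks that are touched. It is purely illustrative: the real SPGrid uses hardware page-sized blocks addressed through Morton-coded offsets and the x86 virtual memory system rather than a Python dictionary, and the class and method names below are invented for the example.

```python
import numpy as np

BLOCK = 4  # 4x4x4 cells per block (SPGrid itself uses hardware page-sized blocks)

class SparseUniformGrid:
    """Toy sparsely populated uniform grid: only touched blocks are allocated."""

    def __init__(self):
        self.blocks = {}  # (block index triple) -> dense BLOCK^3 array

    def _locate(self, i, j, k):
        key = (i // BLOCK, j // BLOCK, k // BLOCK)
        return key, (i % BLOCK, j % BLOCK, k % BLOCK)

    def set(self, i, j, k, value):
        key, local = self._locate(i, j, k)
        block = self.blocks.setdefault(key, np.zeros((BLOCK, BLOCK, BLOCK)))
        block[local] = value

    def get(self, i, j, k):
        key, local = self._locate(i, j, k)
        block = self.blocks.get(key)
        return 0.0 if block is None else block[local]

grid = SparseUniformGrid()
grid.set(1000, 2000, 3000, 1.5)  # only one 4x4x4 block is allocated
print(grid.get(1000, 2000, 3000), len(grid.blocks))
```

A pyramid of such grids, one per resolution level, gives the adaptive storage layout described in the abstract without pointer-based tree traversal.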

105 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a more efficient implementation of the Smolyak method for interpolation: they show how to avoid costly evaluations of repeated basis functions in the conventional Smolyak formula, and they extend the Smolyak method to anisotropic constructions that allow targeting a higher quality of approximation in some dimensions than in others.
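For orientation, the conventional Smolyak interpolant that such implementations start from can be written, in commonly used notation (not necessarily the authors'), as

$$
\mathcal{A}(q,d)f \;=\; \sum_{q-d+1 \,\le\, |\mathbf i|_1 \,\le\, q} (-1)^{\,q-|\mathbf i|_1}\binom{d-1}{q-|\mathbf i|_1}\,\bigl(U^{i_1}\otimes\cdots\otimes U^{i_d}\bigr)f,
$$

where $U^{i}$ is a univariate interpolation operator at level $i$. An anisotropic construction replaces the constraint $|\mathbf i|_1\le q$ by a weighted one such as $\sum_k \alpha_k i_k \le q$, so that more interpolation nodes are spent in the dimensions where higher accuracy is required.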

98 citations


Journal ArticleDOI
TL;DR: In this article, a sparse grid stochastic collocation method is developed for the reliability analysis of structures with uncertain parameters and loads. The method needs neither the first- nor the second-order partial derivatives of the limit state function considered and does not suffer from the problem of multiple design points.

65 citations


Journal ArticleDOI
TL;DR: The algorithm and convergence theory are extended to allow the use of low-fidelity adaptive sparse-grid models in objective function evaluations by extending conditions on inexact function evaluations used in previous trust-region frameworks.
Abstract: This paper improves the trust-region algorithm with adaptive sparse grids introduced in [SIAM J. Sci. Comput., 35 (2013), pp. A1847--A1879] for the solution of optimization problems governed by partial differential equations (PDEs) with uncertain coefficients. The previous algorithm used adaptive sparse-grid discretizations to generate models that are applied in a trust-region framework to generate a trial step. The decision whether to accept this trial step as the new iterate, however, required relatively high-fidelity adaptive discretizations of the objective function. In this paper, we extend the algorithm and convergence theory to allow the use of low-fidelity adaptive sparse-grid models in objective function evaluations. This is accomplished by extending conditions on inexact function evaluations used in previous trust-region frameworks. Our algorithm adaptively builds two separate sparse grids: one to generate optimization models for the step computation and one to approximate the objective function...

64 citations


Journal ArticleDOI
TL;DR: This work develops an extension of the standard Smolyak scheme that combines it with multidimensional grids, such as cubatures, to obtain new sparse grids for the study of the torsional energy levels of methanol in full dimensionality (12D).

64 citations


Journal ArticleDOI
TL;DR: A graph-based framework for concurrent brain tumor segmentation and atlas-to-diseased-patient registration is presented, modeled as a unified pairwise discrete Markov Random Field on a sparse grid superimposed on the image domain.

50 citations


Journal ArticleDOI
TL;DR: New developments of the research done at TU Delft on Polynomial Chaos (PC) techniques are presented, including a new adaptive sparse grid algorithm designed for estimating the PC coefficients and two techniques for constructing the sparse PCE of responses of interest.

46 citations


Journal ArticleDOI
TL;DR: The sparse grids produced by the WAMR method exhibit an impressive compression of the solution, reducing the number of collocation points used by factors of many orders of magnitude when compared to uniform grids of equivalent resolution.

43 citations


Book ChapterDOI
01 Jan 2014
TL;DR: An algorithm for trigonometric interpolation of multivariate functions on generalized sparse grids and its application for the approximation of functions in periodic Sobolev spaces of dominating mixed smoothness is studied.
Abstract: In this paper, we present an algorithm for trigonometric interpolation of multivariate functions on generalized sparse grids and study its application for the approximation of functions in periodic Sobolev spaces of dominating mixed smoothness. In particular, we derive estimates for the error and the cost. We construct interpolants with a computational cost complexity which is substantially lower than for the standard full grid case. The associated generalized sparse grid interpolants have the same approximation order as the standard full grid interpolants, provided that certain additional regularity assumptions on the considered functions are fulfilled. Numerical results validate our theoretical findings.
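As a point of reference (standard notation, not necessarily the chapter's), the frequency sets behind such constructions can be contrasted as follows: a full grid of level $n$ uses all frequencies $\mathbf k$ with $\max_j |k_j|\le 2^{n}$, on the order of $2^{nd}$ of them, while the sparse grid interpolant works with a hyperbolic cross

$$
\mathcal H_n^d \;=\; \Bigl\{\mathbf k\in\mathbb Z^d :\ \prod_{j=1}^{d}\max(1,|k_j|)\le 2^{n}\Bigr\},\qquad |\mathcal H_n^d| = \mathcal O\!\bigl(2^{n} n^{d-1}\bigr),
$$

which explains the substantially lower cost at essentially the same approximation order for functions of dominating mixed smoothness; generalized sparse grids interpolate between these two extremes.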

42 citations


Journal ArticleDOI
TL;DR: A hybrid finite difference algorithm for the Zakai equation is constructed that combines the splitting-up finite difference scheme and hierarchical sparse grid method to solve moderately high-dimensional nonlinear filtering problems.
Abstract: A hybrid finite difference algorithm for the Zakai equation is constructed to solve nonlinear filtering problems. The algorithm combines the splitting-up finite difference scheme and hierarchical sparse grid method to solve moderately high-dimensional nonlinear filtering problems. When applying hierarchical sparse-grid methods to approximate bell-shaped solutions in most applications of nonlinear filtering problems, we introduce a logarithmic approximation to reduce the approximation errors. Some space adaptive methods are also introduced to make the algorithm more efficient. Numerical experiments are carried out to demonstrate the performance and efficiency of our algorithm.

Book ChapterDOI
01 Jan 2014
TL;DR: This work proposes an explicit a-priori/a-posteriori procedure for the construction of a quasi-optimal sparse grids method using an estimate of the decay of the Hermite coefficients of the solution and an efficient nested quadrature rule with respect to the Gaussian weight.
Abstract: In this work we explore the extension of the quasi-optimal sparse grids method proposed in our previous work “On the optimal polynomial approximation of stochastic PDEs by Galerkin and Collocation methods” to a Darcy problem where the permeability is modeled as a lognormal random field. We propose an explicit a-priori/a-posteriori procedure for the construction of such a quasi-optimal grid and show its effectiveness on a numerical example. In this approach, the two main ingredients are an estimate of the decay of the Hermite coefficients of the solution and an efficient nested quadrature rule with respect to the Gaussian weight.

Proceedings ArticleDOI
01 Jan 2014
TL;DR: This work presents an adaptive sparse-grid-based density estimation method which discretizes the estimated density function on basis functions centered at grid points rather than on kernels centered at the data points, so that the cost of evaluating the estimated density function is independent of the number of data points.
Abstract: Nonparametric density estimation is a fundamental problem of statistics and data mining. Even though kernel density estimation is the most widely used method, its performance highly depends on the choice of the kernel bandwidth, and it can become computationally expensive for large data sets. We present an adaptive sparse-grid-based density estimation method which discretizes the estimated density function on basis functions centered at grid points rather than on kernels centered at the data points. Thus, the cost of evaluating the estimated density function is independent of the number of data points. We give details on how to estimate density functions on sparse grids and develop a cross validation technique for the parameter selection. We show numerical results to confirm that our sparse-grid-based method is well-suited for large data sets, and, finally, employ our method for the classification of astronomical objects to demonstrate that it is competitive with current kernel-based density estimation approaches with respect to classification accuracy and runtime.
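The principle of trading kernels centered at data points for basis functions centered at grid points can be illustrated in one dimension. The sketch below is a toy stand-in under stated assumptions: a regular (not adaptive sparse) grid, hat basis functions, an identity regularization term, and the helper names hat and grid_density_1d are all choices made for the example, not the paper's method.

```python
import numpy as np

def hat(x, centers, h):
    """Piecewise-linear hat functions of width h centered at `centers`."""
    return np.maximum(0.0, 1.0 - np.abs(x - centers) / h)

def grid_density_1d(samples, num_grid=33, lam=1e-3, domain=(0.0, 1.0)):
    """Toy grid-based density estimate: solve (R + lam*I) alpha = b, where R is
    the Gram matrix of the hat functions and b_i is the sample mean of phi_i.
    No positivity or normalization constraints are enforced here."""
    lo, hi = domain
    centers = np.linspace(lo, hi, num_grid)
    h = centers[1] - centers[0]
    xq = np.linspace(lo, hi, 2001)                       # quadrature points
    Phi = hat(xq[:, None], centers[None, :], h)          # (quad points, basis)
    R = (Phi.T @ Phi) * (xq[1] - xq[0])                  # Gram matrix by quadrature
    b = hat(samples[:, None], centers[None, :], h).mean(axis=0)
    alpha = np.linalg.solve(R + lam * np.eye(num_grid), b)
    return lambda x: hat(np.asarray(x)[:, None], centers[None, :], h) @ alpha

rng = np.random.default_rng(0)
data = rng.beta(2.0, 5.0, size=10_000)
pdf = grid_density_1d(data)
print(pdf(np.array([0.1, 0.3, 0.8])))  # evaluation cost does not grow with the data size
```

Once the linear system is solved, evaluating the estimate touches only the grid coefficients, which is exactly the property the abstract emphasizes; the paper combines this idea with adaptive sparse grids and cross validation for the parameter selection.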

Posted Content
TL;DR: In this article, kernel ridge regression is used to approximate the kinetic energy of non-interacting fermions in a one-dimensional box as a functional of their density, and the properties of different kernels and methods of cross-validation are explored.
Abstract: Kernel ridge regression is used to approximate the kinetic energy of non-interacting fermions in a one-dimensional box as a functional of their density. The properties of different kernels and methods of cross-validation are explored, and highly accurate energies are achieved. Accurate {\em constrained optimal densities} are found via a modified Euler-Lagrange constrained minimization of the total energy. A projected gradient descent algorithm is derived using local principal component analysis. Additionally, a sparse grid representation of the density can be used without degrading the performance of the methods. The implications for machine-learned density functional approximations are discussed.
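Kernel ridge regression itself is compact enough to write down. The following sketch shows the generic machinery only (Gaussian kernel, closed-form solve); the function names, the toy target, and the hyperparameters are assumptions for illustration, and the paper's density-functional application is not reproduced.

```python
import numpy as np

def rbf_kernel(A, B, length_scale):
    """Gaussian (RBF) kernel matrix between row-wise sample sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def krr_fit(X, y, length_scale=1.0, reg=1e-8):
    """Kernel ridge regression: solve (K + reg*I) alpha = y, predict via k(x, X) @ alpha."""
    K = rbf_kernel(X, X, length_scale)
    alpha = np.linalg.solve(K + reg * np.eye(len(X)), y)
    return lambda Xnew: rbf_kernel(Xnew, X, length_scale) @ alpha

# Toy usage: learn a smooth scalar function of a 5-dimensional input
rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 5))
y = np.sin(X.sum(axis=1))
model = krr_fit(X, y, length_scale=1.0, reg=1e-6)
print(np.max(np.abs(model(X) - y)))  # training error; should be small
```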

Book ChapterDOI
25 Aug 2014
TL;DR: This project tackles high-dimensional problems by a hierarchical extrapolation approach, the sparse grid combination technique, and finds novel ways to deal with central problems in high-performance computing such as scalability and resilience.
Abstract: High-dimensional problems pose a challenge for tomorrow’s supercomputing. Problems that require the joint discretization of more dimensions than space and time are among the most compute-hungry ones and thus standard candidates for exascale computing and even beyond. This project tackles such problems by a hierarchical extrapolation approach, the sparse grid combination technique. The method not only makes their treatment feasible in the first place; the hierarchical approach also provides novel ways to deal with central problems in high-performance computing such as scalability and resilience: global communication can be avoided and reduced to a small subset, and faults can be compensated for without the need for recomputations or checkpoint-restart. As an exemplary prototype for high-dimensional problems, turbulence simulations in plasma physics are studied.
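In its classical form (one common indexing convention; details vary between implementations), the combination technique assembles a sparse grid solution from independent anisotropic full-grid solutions $f_{\boldsymbol\ell}$:

$$
f_n^{c}(\mathbf x) \;=\; \sum_{q=0}^{d-1} (-1)^{q}\binom{d-1}{q} \sum_{|\boldsymbol\ell|_1 \,=\, n+(d-1)-q} f_{\boldsymbol\ell}(\mathbf x),
$$

where the component grid with level vector $\boldsymbol\ell$ has roughly $2^{\ell_j}$ points in dimension $j$. Because the component solutions are computed independently and only combined at the end, global communication is confined to the combination step, and a failed component can be dropped or replaced without recomputing the rest, which is the basis of the scalability and fault-tolerance claims above.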

Journal ArticleDOI
TL;DR: Simulation results show that with PCM only a small number of sparse grid points need to be sampled, even when dealing with systems with a relatively large number of uncertain parameters; PCM is therefore computationally more efficient than the MC method.

Abstract: This paper proposes a probabilistic collocation method (PCM) to quantify the uncertainties in dynamic simulations of power systems. The approach was tested on a single-machine-infinite-bus system and on the over-15,000-bus Western Electricity Coordinating Council (WECC) system. Compared to the classic Monte Carlo (MC) method, the proposed PCM applies the Smolyak algorithm to reduce the number of simulations that have to be performed, so the computational cost can be greatly reduced. The algorithm and procedures are described in the paper, and comparisons with the MC method are made on the single-machine system as well as on the WECC system. The simulation results show that with PCM only a small number of sparse grid points need to be sampled, even when dealing with systems with a relatively large number of uncertain parameters. PCM is, therefore, computationally more efficient than the MC method.

Journal ArticleDOI
TL;DR: In this paper, it is shown that one cannot improve the accuracy of sparse grid methods with $\asymp 2^n n^{d-1}$ points in the grid by adding $2^n$ arbitrary points.
Abstract: Our main interest in this paper is to study some approximation problems for classes of functions with mixed smoothness. We use a technique based on a combination of results from hyperbolic cross approximation, obtained in the 1980s and 1990s, and recent results on greedy approximation to obtain sharp estimates for best $m$-term approximation with respect to the trigonometric system. We give some observations on numerical integration and approximate recovery of functions with mixed smoothness. We prove lower bounds which show that one cannot improve the accuracy of sparse grid methods with $\asymp 2^n n^{d-1}$ points in the grid by adding $2^n$ arbitrary points. In the case of numerical integration these lower bounds provide the best known lower bounds for optimal cubature formulas and for sparse-grid-based cubature formulas.

Posted Content
TL;DR: This paper develops a generalization of the combination technique in which arbitrary collections of coarse approximations may be combined to obtain an accurate approximation, and provides bounds on the expected error for interpolation with the resulting fault tolerant algorithm.
Abstract: This paper continues to develop a fault tolerant extension of the sparse grid combination technique recently proposed in [B. Harding and M. Hegland, ANZIAM J., 54 (CTAC2012), pp. C394-C411]. The approach is novel for two reasons: first, it provides several levels at which one can exploit parallelism, leading towards massively parallel implementations, and second, it provides algorithm-based fault tolerance so that solutions can still be recovered if failures occur during computation. We present a generalisation of the combination technique from which the fault tolerant algorithm follows as a consequence. Using a model for the time between faults on each node of a high performance computer, we provide bounds on the expected error for interpolation with this algorithm. Numerical experiments on the scalar advection PDE demonstrate that the algorithm is resilient to faults in a real application. It is observed that the trade-off of recovery time against decreased accuracy of the solution is suitably small. A comparison with traditional checkpoint-restart methods applied to the combination technique shows that our approach is highly scalable with respect to the number of faults.

Journal ArticleDOI
TL;DR: An algorithm is presented that, for a local bilinear form, evaluates the application of the stiffness matrix with respect to a collection of tensor product multiscale basis functions in linear complexity, assuming that this collection has a multi-tree structure.

Journal ArticleDOI
TL;DR: This work concerns the numerical comparison between different kinds of design points in the least squares (LS) approach on polynomial spaces; the QMC points, being deterministic, seem to be a good choice for higher-dimensional problems, both for their convergence properties and from the stability point of view.
Abstract: In this work, we are concerned with the numerical comparison between different kinds of design points in the least squares (LS) approach on polynomial spaces. Such a topic is motivated by uncertainty quantification (UQ). Three kinds of design points are considered: the Sparse Grid (SG) points, the Monte Carlo (MC) points and the Quasi Monte Carlo (QMC) points. We focus on three aspects during the comparison: (i) the convergence properties; (ii) the stability, i.e. the properties of the resulting condition number of the design matrix; (iii) the robustness when numerical noise is present in the function values. Several classical high-dimensional functions together with a random ODE model are tested. It is shown numerically that (i) neither the MC sampling nor the QMC sampling introduces a low convergence rate, namely, the approach achieves a high-order convergence rate for all cases provided that the underlying functions admit certain regularity and enough design points are used; (ii) the use of SG points admits better convergence properties only for very low dimensional problems (say d ≤ 2); (iii) the QMC points, being deterministic, seem to be a good choice for higher dimensional problems not only for better convergence properties but also from the stability point of view.
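A minimal experiment in the spirit of comparison (ii) can be set up in a few lines. The sketch below is not the paper's benchmark: the Halton points as the QMC rule, the plain monomial total-degree basis, and the test function are all assumptions made for illustration.

```python
import numpy as np
from itertools import product

def halton(n, dim):
    """First n points of the Halton sequence in [0,1]^dim (a simple QMC rule)."""
    primes = [2, 3, 5, 7, 11, 13][:dim]
    def vdc(i, base):
        x, f = 0.0, 1.0 / base
        while i > 0:
            x += f * (i % base)
            i //= base
            f /= base
        return x
    return np.array([[vdc(i, p) for p in primes] for i in range(1, n + 1)])

def design_matrix(points, degree):
    """Monomial basis of total degree <= degree (orthogonal polynomials would be
    the more stable choice; monomials keep the sketch short)."""
    d = points.shape[1]
    exps = [e for e in product(range(degree + 1), repeat=d) if sum(e) <= degree]
    return np.column_stack([np.prod(points ** np.array(e), axis=1) for e in exps])

rng = np.random.default_rng(2)
n, degree = 400, 6
for name, pts in [("MC ", rng.uniform(size=(n, 2))), ("QMC", halton(n, 2))]:
    A = design_matrix(pts, degree)
    y = np.exp(-pts.sum(axis=1))  # a smooth test function
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print(name, "cond =", f"{np.linalg.cond(A):.2e}",
          "residual =", f"{np.linalg.norm(A @ coef - y):.2e}")
```

Swapping sparse grid points in for pts would add the third family of designs compared in the paper.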

Journal ArticleDOI
TL;DR: This paper presents a method that isolates d < 3N - 6 molecular coordinates and continuously follows reaction paths on d-dimensional potential energy surfaces approximated by a Smolyak sparse grid interpolation algorithm, and presents simulation results for the isomerization of 2-butene with two, three, and six degrees of freedom.
Abstract: Computing the potential energy of an N-atom molecule is an expensive optimization process of 3N - 6 molecular coordinates, so following reaction pathways as a function of all 3N - 6 coordinates is unfeasible for large molecules. In this paper, we present a method that isolates d < 3N - 6 molecular coordinates and continuously follows reaction paths on d-dimensional potential energy surfaces approximated by a Smolyak sparse grid interpolation algorithm. Compared to dense grids, sparse grids efficiently improve the ratio of invested storage and computing time to approximation accuracy and thus allow one to increase the number of coordinates d in molecular reaction path following simulations. Furthermore, evaluation of the interpolant is much less expensive than evaluation of the actual energy function, so our technique offers a computationally efficient way to simulate reaction paths on ground and excited state potential energy surfaces. To demonstrate the capabilities of our method, we present simulation results for the isomerization of 2-butene with two, three, and six degrees of freedom.



Journal ArticleDOI
TL;DR: This work presents an alternative method to approximate Lagrangian features for 2D unsteady flow fields that achieves subgrid accuracy without additional particle sampling and works directly on a set of given particle trajectories, without additional flow map derivatives.
Abstract: Lagrangian coherent structures (LCSs) have become a widespread and powerful method to describe dynamic motion patterns in time-dependent flow fields. The standard way to extract LCS is to compute height ridges in the finite-time Lyapunov exponent field. In this work, we present an alternative method to approximate Lagrangian features for 2D unsteady flow fields that achieves subgrid accuracy without additional particle sampling. We obtain this by a geometric reconstruction of the flow map using additional material constraints for the available samples. In comparison to the standard method, this allows for a more accurate global approximation of LCS on sparse grids and for long integration intervals. The proposed algorithm works directly on a set of given particle trajectories and without additional flow map derivatives. We demonstrate its application for a set of computational fluid dynamics examples, as well as trajectories acquired by Lagrangian methods, and discuss its benefits and limitations.

Journal ArticleDOI
TL;DR: This work shows how to generate scenarios based on a quadrature rule for the expected value of an arbitrary economic objective function and how the use of sparse grids for the quadrature of the high-dimensional stochastic integrals yields a drastically smaller number of scenarios than the tensor grid approaches used so far.

Posted Content
TL;DR: The multilevel sampling algorithm is extended to sparse grid stochastic collocation methods; its numerical implementation is discussed and its efficiency is demonstrated both theoretically and by means of numerical examples.
Abstract: Stochastic sampling methods are arguably the most direct and least intrusive means of incorporating parametric uncertainty into numerical simulations of partial differential equations with random inputs. However, to achieve an overall error that is within a desired tolerance, a large number of sample simulations may be required (to control the sampling error), each of which may need to be run at high levels of spatial fidelity (to control the spatial error). Multilevel sampling methods aim to achieve the same accuracy as traditional sampling methods, but at a reduced computational cost, through the use of a hierarchy of spatial discretization models. Multilevel algorithms coordinate the number of samples needed at each discretization level by minimizing the computational cost, subject to a given error tolerance. They can be applied to a variety of sampling schemes, exploit nesting when available, can be implemented in parallel and can be used to inform adaptive spatial refinement strategies. We extend the multilevel sampling algorithm to sparse grid stochastic collocation methods, discuss its numerical implementation and demonstrate its efficiency both theoretically and by means of numerical examples.
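The bookkeeping behind any such multilevel estimator is the standard telescoping identity, written here in generic notation independent of the sampling or collocation rule used on each level:

$$
\mathbb E[Q_L] \;=\; \mathbb E[Q_0] \;+\; \sum_{\ell=1}^{L}\mathbb E\bigl[Q_\ell - Q_{\ell-1}\bigr],
$$

where $Q_\ell$ denotes the quantity of interest computed on spatial discretization level $\ell$. Each term is approximated by its own rule; since the corrections $Q_\ell - Q_{\ell-1}$ have small variance on fine levels, most of the work can be placed on the coarse, cheap levels, with the effort per level chosen by minimizing total cost subject to the error tolerance, as described in the abstract.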

Journal ArticleDOI
TL;DR: In this paper, a sparse grid collocation method combined with a time discretization of the differential equations is proposed for computing expectations of functionals of solutions to differential equations perturbed by time-dependent white noise.
Abstract: We consider a sparse grid collocation method in conjunction with a time discretization of the differential equations for computing expectations of functionals of solutions to differential equations perturbed by time-dependent white noise. We first analyze the error of Smolyak's sparse grid collocation used to evaluate expectations of functionals of solutions to stochastic differential equations discretized by the Euler scheme. We show theoretically and numerically that this algorithm can have satisfactory accuracy for small noise magnitude or small integration time; however, it does not converge either with decrease of the Euler scheme's time step size or with increase of Smolyak's sparse grid level. Subsequently, we use this method as a building block for proposing a new algorithm by combining sparse grid collocation with a recursive procedure. This approach allows us to numerically integrate linear stochastic partial differential equations over longer times, which is illustrated in numerical tests on a stochastic advection-diffusion equation.

BookDOI
01 Jan 2014
TL;DR: This edited volume collects contributions ranging from adaptive low-rank approximation techniques in the hierarchical tensor format and adaptive sparse grids in reinforcement learning to nonlinear eigenproblems in data analysis.
Abstract: D. Belomestny, C. Bender, F. Dickmann, and N. Schweizer: Solving Stochastic Dynamic Programs by Convex Optimization and Simulation.- W. Dahmen, C. Huang, G. Kutyniok, W.-Q Lim, C. Schwab, and G. Welper: Efficient Resolution of Anisotropic Structures.- R. Ressel, P. Dulk, S. Dahlke, K. S. Kazimierski, and P. Maass: Regularity of the Parameter-to-state Map of a Parabolic Partial Differential Equation.- N. Chegini, S. Dahlke, U. Friedrich, and R. Stevenson: Piecewise Tensor Product Wavelet Bases by Extensions and Approximation Rates.- P. A. Cioica, S. Dahlke, N. Dohring, S. Kinzel, F. Lindner, T. Raasch, K. Ritter, and R. Schilling: Adaptive Wavelet Methods for SPDEs.- M. Altmayer, S. Dereich, S. Li, T. Muller-Gronbach, A. Neuenkirch, K. Ritter and L. Yaroslavtseva: Constructive Quantization and Multilevel Algorithms for Quadrature of Stochastic Differential Equations.- O. G. Ernst, B. Sprungk, and H.-J. Starkloff: Bayesian Inverse Problems and Kalman Filters.- J. Diehl, P. Friz, H. Mai, H. Oberhauser, S. Riedel, and W. Stannat: Robustness in Stochastic Filtering and Maximum Likelihood Estimation for SDEs.- J. Garcke and I. Klompmaker: Adaptive Sparse Grids in Reinforcement Learning.- J. Ballani, L. Grasedyck, and M. Kluge: A Review on Adaptive Low-Rank Approximation Techniques in the Hierarchical Tensor Format.- M. Griebel, J. Hamaekers, and F. Heber: A Bond Order Dissection ANOVA Approach for Efficient Electronic Structure Calculations.- W. Hackbusch and R. Schneider: Tensor Spaces and Hierarchical Tensor Representations.- L. Jost, S. Setzer, and M. Hein: Nonlinear Eigenproblems in Data Analysis - Balanced Graph Cuts and the Ratio DCA-Prox.- M. Guillemard, D. Heinen, A. Iske, S. Krause-Solberg, and G. Plonka: Adaptive Approximation Algorithms for Sparse Data Representation.- T. Jahnke and V. Sunkara: Error Bound for Hybrid Models of Two-scaled Stochastic Reaction Systems.- R. Kiesel, A. Rupp, and K. Urban: Valuation of Structured Financial Products by Adaptive Multi wavelet Methods in High Dimensions.- L Kammerer, S. Kunis, I. Melzer, D. Potts, and T. Volkmer: Computational Methods for the Fourier Analysis of Sparse High-Dimensional Functions.- E. Herrholz, D. Lorenz, G. Teschke, and D. Trede: Sparsity and Compressed Sensing in Inverse Problems.- C. Lubich: Low-Rank Dynamics.- E. Novak and D. Rudolf: Computation of Expectations by Markov Chain Monte Carlo Methods.- H. Yserentant: Regularity, Complexity, and Approximability of Electronic Wave functions.- Index.

Proceedings ArticleDOI
18 Sep 2014
TL;DR: In this paper, a fully automated procedure to estimate the uncertainty of compressor stage performance, due to impeller manufacturing variability, is presented, where 3D sample geometries are generated and 1D/2D aerodynamic models are used to predict the performance of each sample geometry.
Abstract: This paper presents a fully automated procedure to estimate the uncertainty of compressor stage performance due to impeller manufacturing variability. The methodology was originally developed for 2D stages, i.e., stages for which the impeller blade angle and thickness distribution are only defined at the hub end-wall. Here, we extend the procedure to general 3D stages, for which blade angle and thickness distributions can be prescribed independently at the shroud and hub endwalls. Starting from the probability distribution of the impeller geometrical parameters, 3D sample geometries are generated and 1D/2D aerodynamic models are created, which are used to predict the performance of each sample geometry. The original procedure used the Monte Carlo method to propagate uncertainty. However, this requires a large number of samples to compute accurate performance statistics. Here we compare the results from Monte Carlo with those obtained using Sparse Grid Polynomial Chaos Expansion (PCE) and a Multidimensional Cubature Rule for uncertainty propagation. PCE has exponential convergence in the stochastic space for smooth functions, and the use of sparse grids mitigates the increase of sample points due to the increase in the number of uncertain parameters. The cubature rule has accuracy limitations, but its sample points increase only linearly with the number of parameters. For a 3D stage, the probability distributions of the performance characteristics are computed, as well as the sensitivity to the design parameters. The results show that PCE and Multidimensional Cubature give similar results to MC computations, with a much lower computational effort.
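The statistics extracted from such a PCE follow directly from its coefficients; in generic notation (a standard identity, not specific to this paper's setup),

$$
f(\boldsymbol\xi) \,\approx\, \sum_{k=0}^{P} c_k\,\Psi_k(\boldsymbol\xi),\qquad \mathbb E[f]\,\approx\, c_0,\qquad \operatorname{Var}[f]\,\approx\,\sum_{k=1}^{P} c_k^{2}\,\bigl\langle \Psi_k^{2}\bigr\rangle,
$$

where the $\Psi_k$ are polynomials orthogonal with respect to the distribution of the uncertain impeller parameters (with $\Psi_0\equiv 1$) and the coefficients $c_k=\langle f,\Psi_k\rangle/\langle \Psi_k^2\rangle$ are evaluated by sparse grid quadrature, so the performance statistics and parameter sensitivities come at the cost of the comparatively few quadrature samples.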