Journal ArticleDOI

A Sparse Grid Stochastic Collocation Method for Partial Differential Equations with Random Input Data

01 May 2008-SIAM Journal on Numerical Analysis (Society for Industrial and Applied Mathematics)-Vol. 46, Iss: 5, pp 2309-2345
TL;DR: This work demonstrates algebraic convergence with respect to the total number of collocation points and quantifies the effect of the dimension of the problem (number of input random variables) in the final estimates, indicating for which problems the sparse grid stochastic collocation method is more efficient than Monte Carlo.
Abstract: This work proposes and analyzes a Smolyak-type sparse grid stochastic collocation method for the approximation of statistical quantities related to the solution of partial differential equations with random coefficients and forcing terms (input data of the model). To compute solution statistics, the sparse grid stochastic collocation method uses approximate solutions, produced here by finite elements, corresponding to a deterministic set of points in the random input space. This naturally requires solving uncoupled deterministic problems as in the Monte Carlo method. If the number of random variables needed to describe the input data is moderately large, full tensor product spaces are computationally expensive to use due to the curse of dimensionality. In this case the sparse grid approach is still expected to be competitive with the classical Monte Carlo method. Therefore, it is of major practical relevance to understand in which situations the sparse grid stochastic collocation method is more efficient than Monte Carlo. This work provides error estimates for the fully discrete solution using $L^q$ norms and analyzes the computational efficiency of the proposed method. In particular, it demonstrates algebraic convergence with respect to the total number of collocation points and quantifies the effect of the dimension of the problem (number of input random variables) in the final estimates. The derived estimates are then used to compare the method with Monte Carlo, indicating for which problems the former is more efficient than the latter. Computational evidence complements the present theory and shows the effectiveness of the sparse grid stochastic collocation method compared to full tensor and Monte Carlo approaches.
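For orientation, here is a minimal, self-contained sketch (not taken from the paper) of the isotropic Smolyak sparse grid collocation idea described in the abstract: the quadrature is assembled in combination-technique form from one-dimensional Gauss-Legendre rules, and each node corresponds to one uncoupled deterministic solve. The `solve_pde` placeholder, the uniform inputs on [-1,1]^N, and the linear growth rule m(i) = i are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np
from itertools import product
from math import comb

def gl_rule(m):
    # 1D Gauss-Legendre nodes/weights on [-1, 1]; weights rescaled so they
    # integrate against the uniform density (total weight 1)
    x, w = np.polynomial.legendre.leggauss(m)
    return x, w / 2.0

def smolyak_rule(N, level, m=lambda i: i):
    """Nodes/weights of the isotropic Smolyak quadrature of the given level in N
    dimensions, combination-technique form (assumed m(i) points at 1D level i)."""
    q = level + N
    nodes, weights = [], []
    for i in product(range(1, level + 2), repeat=N):      # multi-index, each i_k >= 1
        s = sum(i)
        if s < q - N + 1 or s > q:                        # only |i| in [q-N+1, q] contribute
            continue
        coeff = (-1) ** (q - s) * comb(N - 1, q - s)      # combination coefficient
        rules = [gl_rule(m(ik)) for ik in i]
        for idx in product(*(range(len(r[0])) for r in rules)):
            nodes.append([rules[d][0][idx[d]] for d in range(N)])
            weights.append(coeff * np.prod([rules[d][1][idx[d]] for d in range(N)]))
    return np.array(nodes), np.array(weights)

def solve_pde(y):
    # placeholder for one deterministic finite element solve at input realization y
    return np.exp(0.3 * y[0] + 0.2 * y[1])

nodes, weights = smolyak_rule(N=2, level=3)
mean = sum(w * solve_pde(y) for y, w in zip(nodes, weights))   # estimate of E[u]
print(len(weights), mean)
```

Nested Clenshaw-Curtis abscissas, as analyzed in the paper, let the sparse grids reuse nodes across levels; non-nested Gauss-Legendre points are used here only to keep the sketch short.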


Citations
Journal ArticleDOI
TL;DR: The new form gives a clear and convenient way to implement all basic operations efficiently, and the efficiency is demonstrated by the computation of the smallest eigenvalue of a 19-dimensional operator.
Abstract: A simple nonrecursive form of the tensor decomposition in $d$ dimensions is presented. It does not inherently suffer from the curse of dimensionality, it has asymptotically the same number of parameters as the canonical decomposition, but it is stable and its computation is based on low-rank approximation of auxiliary unfolding matrices. The new form gives a clear and convenient way to implement all basic operations efficiently. A fast rounding procedure is presented, as well as basic linear algebra operations. Examples showing the benefits of the decomposition are given, and the efficiency is demonstrated by the computation of the smallest eigenvalue of a 19-dimensional operator.
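As a rough illustration of the nonrecursive decomposition the abstract refers to (the tensor-train format), here is a hedged numpy sketch of the standard TT-SVD construction via truncated SVDs of the unfolding matrices; the tolerance splitting eps/√(d−1) and the test tensor are assumptions for the example, not details from the paper.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Decompose a d-way numpy array into tensor-train cores G_k of shape
    (r_{k-1}, n_k, r_k) by sequential truncated SVDs of unfolding matrices."""
    shape, d = tensor.shape, tensor.ndim
    delta = eps * np.linalg.norm(tensor) / np.sqrt(max(d - 1, 1))  # per-SVD tolerance
    cores, r_prev, C = [], 1, tensor
    for k in range(d - 1):
        C = C.reshape(r_prev * shape[k], -1)               # k-th unfolding matrix
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        tail = np.cumsum(S[::-1] ** 2)                     # energy of a discarded tail
        r = max(1, len(S) - int(np.searchsorted(tail, delta ** 2, side="right")))
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        C = np.diag(S[:r]) @ Vt[:r]                        # carry the remainder to the right
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

def tt_to_full(cores):
    # contract the train back to a full tensor (only for small checks)
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=(full.ndim - 1, 0))
    return full.reshape(full.shape[1:-1])

# small 4-way test tensor with low TT ranks
A = np.fromfunction(lambda i, j, k, l: np.sin(i + j) + 0.1 * (k - l), (4, 5, 6, 7))
cores = tt_svd(A, eps=1e-12)
print([c.shape for c in cores],
      np.linalg.norm(tt_to_full(cores) - A) / np.linalg.norm(A))
```

The rounding procedure and the basic linear algebra operations mentioned in the abstract act on the cores directly, so the full tensor is never formed except in small checks like this one.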

2,127 citations

Journal ArticleDOI
TL;DR: A rigorous convergence analysis is provided and exponential convergence of the “probability error” with respect to the number of Gauss points in each direction in the probability space is demonstrated, under some regularity assumptions on the random input data.
Abstract: In this paper we propose and analyze a stochastic collocation method to solve elliptic partial differential equations with random coefficients and forcing terms (input data of the model). The input data are assumed to depend on a finite number of random variables. The method consists in a Galerkin approximation in space and a collocation in the zeros of suitable tensor product orthogonal polynomials (Gauss points) in the probability space and naturally leads to the solution of uncoupled deterministic problems as in the Monte Carlo approach. It can be seen as a generalization of the stochastic Galerkin method proposed in [I. Babuscka, R. Tempone, and G. E. Zouraris, SIAM J. Numer. Anal., 42 (2004), pp. 800-825] and allows one to treat easily a wider range of situations, such as input data that depend nonlinearly on the random variables, diffusivity coefficients with unbounded second moments, and random variables that are correlated or even unbounded. We provide a rigorous convergence analysis and demonstrate exponential convergence of the “probability error” with respect to the number of Gauss points in each direction in the probability space, under some regularity assumptions on the random input data. Numerical examples show the effectiveness of the method.
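The following is a small sketch of the tensor-product Gauss collocation described above, under illustrative assumptions (a scalar quantity of interest, inputs uniform on [-1,1]^N, and a toy solver standing in for the Galerkin/finite element solve): the model is evaluated at every node of a full tensor grid of Gauss-Legendre abscissas, and the moments are recovered by quadrature.

```python
import numpy as np
from itertools import product

def tensor_gauss_moments(solver, n_1d, N):
    """Mean and variance of solver(y), y uniform on [-1,1]^N, from a full
    tensor product of n_1d Gauss-Legendre abscissas per random dimension."""
    x, w = np.polynomial.legendre.leggauss(n_1d)
    w = w / 2.0                                   # weights of the uniform density on [-1,1]
    mean = second = 0.0
    for idx in product(range(n_1d), repeat=N):    # n_1d**N uncoupled deterministic solves
        y = x[list(idx)]
        wt = np.prod(w[list(idx)])
        q = solver(y)
        mean += wt * q
        second += wt * q ** 2
    return mean, second - mean ** 2

# placeholder for the deterministic PDE solve at one realization of the input data
toy_solver = lambda y: np.exp(y.sum())
print(tensor_gauss_moments(toy_solver, n_1d=5, N=3))
```

The cost grows as n_1d^N, which is exactly the curse of dimensionality that the sparse grid construction of the main paper is designed to mitigate.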

1,597 citations


Cites background or methods from "A Sparse Grid Stochastic Collocation Method for Partial Differential Equations with Random Input Data"

  • ...The first result, from [46], estimates the convergence rate of the isotropic Smolyak method as defined in Table 1....


  • ...The work [46] analyzed a sparse grid stochastic collocation method for solving PDEs whose coefficients and forcing terms depend on a finite number of random variables....


  • ...In particular, the estimates derived in [46] demonstrate at least algebraic convergence with respect to the total number of collocation points, of the type err ≤ C·η^(−r/(1+log(N))), thus proving a highly reduced curse of dimensionality with respect to full tensor collocation....


  • ...2 states error estimates derived first in [46, 45] for the fully discrete solution, analyzing the computational efficiency of the sparse grid stochastic collocation method in terms of the number of collocation points (deterministic problems to solve)....


  • ...In the framework of PDEs with random input data, the sparse grid stochastic collocation method has been proposed in [61] and analyzed in [46] (see also [29, 26])....


ReportDOI
01 May 2010
TL;DR: This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
Abstract: The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications. DAKOTA Version 5.0 Reference Manual generated on May 7, 2010

757 citations

Journal ArticleDOI
TL;DR: This review describes the use of PC expansions for the representation of random variables/fields and discusses their utility for the propagation of uncertainty in computational models, focusing on CFD models.
Abstract: The quantification of uncertainty in computational fluid dynamics (CFD) predictions is both a significant challenge and an important goal. Probabilistic uncertainty quantification (UQ) methods have been used to propagate uncertainty from model inputs to outputs when input uncertainties are large and have been characterized probabilistically. Polynomial chaos (PC) methods have found increased use in probabilistic UQ over the past decade. This review describes the use of PC expansions for the representation of random variables/fields and discusses their utility for the propagation of uncertainty in computational models, focusing on CFD models. Many CFD applications are considered, including flow in porous media, incompressible and compressible flows, and thermofluid and reacting flows. The review examines each application area, focusing on the demonstrated use of PC UQ and the associated challenges. Cross-cutting challenges with time unsteadiness and long time horizons are also discussed.
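To make the PC representation concrete, here is a hedged one-dimensional sketch of non-intrusive spectral projection (a single standard normal input, probabilists' Hermite polynomials, and an illustrative model u(ξ) = exp(ξ); none of these choices come from the review): each PC coefficient is ⟨u, He_k⟩ / ⟨He_k, He_k⟩, approximated by Gauss-Hermite quadrature.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

def pce_coefficients(model, order, n_quad=40):
    """Coefficients c_k of u(xi) ≈ sum_k c_k He_k(xi) for xi ~ N(0,1), by
    non-intrusive spectral projection with Gauss-Hermite (probabilists') quadrature."""
    x, w = He.hermegauss(n_quad)
    w = w / np.sqrt(2.0 * np.pi)                  # normalize weights to the N(0,1) density
    u = model(x)
    coeffs = []
    for k in range(order + 1):
        Hk = He.hermeval(x, [0.0] * k + [1.0])    # He_k evaluated at the quadrature nodes
        coeffs.append(np.sum(w * u * Hk) / factorial(k))   # <He_k, He_k> = k! under N(0,1)
    return np.array(coeffs)

# example: PC surrogate of u(xi) = exp(xi); its mean is the zeroth coefficient
c = pce_coefficients(np.exp, order=6)
print("PC mean:", c[0], "exact mean:", np.exp(0.5))
```

The variance of u is then the sum over k ≥ 1 of c_k² k!, and multivariate inputs are handled the same way with tensorized Hermite polynomials.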

731 citations

Journal ArticleDOI
TL;DR: In many situations across computational science and engineering, multiple computational models are available that describe a system of interest, as discussed by the authors, and these different models have varying evaluation costs.
Abstract: In many situations across computational science and engineering, multiple computational models are available that describe a system of interest. These different models have varying evaluation costs...

678 citations

References
Book
01 Jan 1978
TL;DR: The finite element method is developed for elliptic boundary value problems, with conforming and other methods for second-order problems, plate problems, shells, and some nonlinear applications, as discussed by the authors.
Abstract: Preface 1. Elliptic boundary value problems 2. Introduction to the finite element method 3. Conforming finite element methods for second-order problems 4. Other finite element methods for second-order problems 5. Application of the finite element method to some nonlinear problems 6. Finite element methods for the plate problem 7. A mixed finite element method 8. Finite element methods for shells Epilogue Bibliography Glossary of symbols Index.

8,407 citations

Book
01 Apr 2002
TL;DR: In this book, Ciarlet presents a mathematically self-contained treatment of finite element methods; the reader is assumed to know analysis and functional analysis, particularly Hilbert spaces, Sobolev spaces, and differential calculus in normed vector spaces.
Abstract: From the Publisher: This book is particularly useful to graduate students, researchers, and engineers using finite element methods. The reader should have knowledge of analysis and functional analysis, particularly Hilbert spaces, Sobolev spaces, and differential calculus in normed vector spaces. Other than these basics, the book is mathematically self-contained. About the Author Philippe G. Ciarlet is a Professor at the Laboratoire d'Analyse Numerique at the Universite Pierre et Marie Curie in Paris. He is also a member of the French Academy of Sciences. He is the author of more than a dozen books on a variety of topics and is a frequent invited lecturer at meetings and universities throughout the world. Professor Ciarlet has served approximately 75 visiting professorships since 1973, and he is a member of the editorial boards of more than 20 journals.

8,052 citations


Additional excerpts

  • ...[8,9])....


Book
14 Feb 2013
TL;DR: In this book, the construction of finite element spaces and polynomial approximation theory in Sobolev spaces are developed for n-dimensional variational problems, together with applications of operator-interpolation theory.
Abstract: Preface (2nd ed.).- Preface (1st ed.).- Basic Concepts.- Sobolev Spaces.- Variational Formulation of Elliptic Boundary Value Problems.- The Construction of a Finite Element Space.- Polynomial Approximation Theory in Sobolev Spaces.- n-Dimensional Variational Problems.- Finite Element Multigrid Methods.- Additive Schwarz Preconditioners.- Max-norm Estimates.- Adaptive Meshes.- Variational Crimes.- Applications to Planar Elasticity.- Mixed Methods.- Iterative Techniques for Mixed Methods.- Applications of Operator-Interpolation Theory.- References.- Index.

7,158 citations


Additional excerpts

  • ...That is, s = 1 and C(s;φ) = c‖φ‖_{H²(D)}, see for example [7]....


Book
01 Jan 1963
TL;DR: These notes cover the basic definitions of discrete probability theory, and then present some results including Bayes' rule, inclusion-exclusion formula, Chebyshev's inequality, and the weak law of large numbers.
Abstract: These notes cover the basic definitions of discrete probability theory, and then present some results including Bayes' rule, the inclusion-exclusion formula, Chebyshev's inequality, and the weak law of large numbers. 1 Sample spaces and events. To treat probability rigorously, we define a sample space S whose elements are the possible outcomes of some process or experiment. For example, the sample space might be the outcomes of the roll of a die, or flips of a coin. To each element x of the sample space, we assign a probability, a non-negative number between 0 and 1, denoted by p(x). We require that ∑_{x∈S} p(x) = 1, so the total probability of the elements of our sample space is 1. Intuitively, this means that when we perform our process, exactly one of the things in our sample space will happen. Example: the sample space could be S = {a, b, c}, with probabilities p(a) = 1/2, p(b) = 1/3, p(c) = 1/6. If all elements of our sample space have equal probabilities, we call this the uniform probability distribution on our sample space. For example, if our sample space were the outcomes of a die roll, it could be denoted S = {x_1, x_2, ..., x_6}, where the outcome x_i corresponds to rolling i. The uniform distribution, in which every outcome x_i has probability 1/6, describes a fair die. Similarly, for a fair coin the outcomes are H (heads) and T (tails), each with probability 1/2, giving the uniform distribution on the sample space S = {H, T}. We define an event A to be a subset of the sample space. For example, in the roll of a die, if the event A is rolling an even number, then A = {x_2, x_4, x_6}. The probability of an event A, denoted by P(A), is the sum of the probabilities of the corresponding elements of the sample space. For rolling an even number, P(A) = p(x_2) + p(x_4) + p(x_6) = 1/2. Given an event A of our sample space, there is a complementary event which consists of all points in our sample space that are not …
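A tiny sketch of these definitions (the fair-die sample space, an event, and its complement; purely illustrative):

```python
from fractions import Fraction

# uniform distribution on the sample space of a fair die
sample_space = {x: Fraction(1, 6) for x in range(1, 7)}
assert sum(sample_space.values()) == 1            # total probability is 1

def prob(event):
    """P(A): the sum of the probabilities of the outcomes in A."""
    return sum(sample_space[x] for x in event)

A = {x for x in sample_space if x % 2 == 0}       # event: roll an even number
complement = set(sample_space) - A
print(prob(A), prob(complement))                  # 1/2 and 1/2
```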

6,236 citations

Book
29 Mar 1977

6,171 citations