Journal ArticleDOI

Adaptive sparse polynomial chaos expansion based on least angle regression

01 Mar 2011-Journal of Computational Physics (Academic Press Professional, Inc.)-Vol. 230, Iss: 6, pp 2345-2367
TL;DR: A non-intrusive method that builds a sparse PC expansion, which may be obtained at a reduced computational cost compared to the classical "full" PC approximation.
About: This article was published in the Journal of Computational Physics on 2011-03-01 and has received 1112 citations to date. The article focuses on the topics: Polynomial chaos & Multivariate random variable.
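As a rough illustration of the idea in the title — selecting a sparse subset of orthonormal polynomials from a large candidate basis — the sketch below uses probabilists' Hermite polynomials and a greedy forward selection (orthogonal-matching-pursuit style) as a simplified stand-in for the paper's LAR-based procedure. All function names, the basis size, and the test model are illustrative, not the authors' implementation:

```python
import numpy as np

def hermite_basis(x, p):
    # Probabilists' Hermite polynomials He_0..He_p, orthogonal under N(0, 1),
    # via the recurrence He_{n+1}(x) = x * He_n(x) - n * He_{n-1}(x).
    H = np.zeros((len(x), p + 1))
    H[:, 0] = 1.0
    if p >= 1:
        H[:, 1] = x
    for n in range(1, p):
        H[:, n + 1] = x * H[:, n] - n * H[:, n - 1]
    return H

def sparse_pce(x, y, p=6, n_terms=3):
    # Greedy basis selection, a simplified stand-in for least angle regression:
    # repeatedly add the candidate most correlated with the current residual,
    # then refit the active set by least squares.
    Psi = hermite_basis(x, p)
    Psi_n = Psi / np.linalg.norm(Psi, axis=0)  # normalized columns for screening
    active, residual = [], y.astype(float).copy()
    for _ in range(n_terms):
        corr = np.abs(Psi_n.T @ residual)
        corr[active] = -np.inf                 # never re-select an active term
        active.append(int(np.argmax(corr)))
        coeffs, *_ = np.linalg.lstsq(Psi[:, active], y, rcond=None)
        residual = y - Psi[:, active] @ coeffs
    return active, coeffs

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
y = 1.0 + 2.0 * x + 0.5 * (x**2 - 1.0)   # sparse: only He_0, He_1, He_2 active
active, coeffs = sparse_pce(x, y)
```

On this toy model the selection recovers exactly the three active Hermite terms and their coefficients, while the remaining candidates stay out of the expansion.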
Citations
More filters

Proceedings ArticleDOI
07 Jul 2014
TL;DR: The modular platform comprises a highly optimized core probabilistic modelling engine and a simple programming interface that provides unified access to heterogeneous high performance computing resources and provides a content-management system that allows users to easily develop additional custom modules within the framework.
Abstract: Uncertainty quantification is a rapidly growing field in computer simulation-based scientific applications. The UQLAB project aims at the development of a MATLAB-based software framework for uncertainty quantification. It is designed to encourage both academic researchers and field engineers to use and develop advanced and innovative algorithms for uncertainty quantification, possibly exploiting modern distributed computing facilities. Ease of use, extendibility and handling of non-intrusive stochastic methods are core elements of its development philosophy. The modular platform comprises a highly optimized core probabilistic modelling engine and a simple programming interface that provides unified access to heterogeneous high performance computing resources. Finally, it provides a content-management system that allows users to easily develop additional custom modules within the framework. In this contribution, we intend to demonstrate the features of the platform at its current development stage.

475 citations


Cites methods from "Adaptive sparse polynomial chaos ex..."

  • ...In order to make the previous problem solvable, we decided to use an adaptive sparse Polynomial Chaos Expansion (PCE) surrogate model (Blatman and Sudret 2011) evaluated on an experimental design limited to 200 model evaluations....

    [...]

Journal ArticleDOI
TL;DR: Sparse polynomial chaos (PC) expansions are introduced in order to compute sensitivity indices, and a bootstrap technique is developed which eventually yields confidence intervals on the results.

332 citations

Journal ArticleDOI
TL;DR: PC-Kriging is derived as a new non-intrusive meta-modeling approach combining PCE and Kriging: PCE approximates the global behavior of the computational model, whereas Kriging manages the local variability of the model output.
Abstract: Computer simulation has become the standard tool in many engineering fields for designing and optimizing systems, as well as for assessing their reliability. Optimization and uncertainty quantification problems typically require a large number of runs of the computational model at hand, which may not be feasible with high-fidelity models directly. Thus surrogate models (a.k.a. metamodels) have been increasingly investigated in the last decade. Polynomial Chaos Expansions (PCE) and Kriging are two popular non-intrusive metamodelling techniques. PCE surrogates the computational model with a series of orthonormal polynomials in the input variables where polynomials are chosen in coherency with the probability distributions of those input variables. A least-square minimization technique may be used to determine the coefficients of the PCE. On the other hand, Kriging assumes that the computer model behaves as a realization of a Gaussian random process whose parameters are estimated from the available computer runs, i.e. input vectors and response values. These two techniques have been developed more or less in parallel so far with little interaction between the researchers in the two fields. In this paper, PC-Kriging is derived as a new non-intrusive meta-modeling approach combining PCE and Kriging. A sparse set of orthonormal polynomials (PCE) approximates the global behavior of the computational model whereas Kriging manages the local variability of the model output. An adaptive algorithm similar to the least angle regression algorithm determines the optimal sparse set of polynomials. PC-Kriging is validated on various benchmark analytical functions which are easy to sample for reference results. From the numerical investigations it is concluded that PC-Kriging performs better than, or at least as well as, the two distinct meta-modeling techniques.
A larger gain in accuracy is obtained when the experimental design has a limited size, which is an asset when dealing with demanding computational models.
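The construction described in this abstract — a polynomial trend capturing the global behavior plus a Gaussian-process model interpolating the local residual — can be sketched in a minimal one-dimensional form. This is not the authors' implementation: a plain polynomial least-squares trend stands in for a sparse PCE, and the RBF length scale, nugget, and test function are illustrative assumptions:

```python
import numpy as np

def rbf(a, b, ell=0.5):
    # Squared-exponential correlation between two point sets.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def pc_kriging_fit(x, y, degree=3, ell=0.5, nugget=1e-6):
    # Global trend: ordinary polynomial least squares (stand-in for a sparse PCE).
    V = np.vander(x, degree + 1)
    beta, *_ = np.linalg.lstsq(V, y, rcond=None)
    r = y - V @ beta
    # Local part: zero-mean Gaussian process interpolating the trend residual.
    K = rbf(x, x, ell) + nugget * np.eye(len(x))
    alpha = np.linalg.solve(K, r)
    return beta, alpha, x, ell, degree

def pc_kriging_predict(model, xs):
    beta, alpha, xtr, ell, degree = model
    # Prediction = polynomial trend + GP correction.
    return np.vander(xs, degree + 1) @ beta + rbf(xs, xtr, ell) @ alpha

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-1, 1, 30))
y = np.sin(4 * x) + 0.3 * x**3
model = pc_kriging_fit(x, y)
```

The combined surrogate interpolates the experimental design (up to the nugget) and remains accurate between design points, which is the behavior the abstract attributes to PC-Kriging.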

220 citations


Cites background or methods from "Adaptive sparse polynomial chaos ex..."

  • ...The set of polynomials is determined using the least-angle-regression (LAR) algorithm as in [32] together with hyperbolic index sets to obtain sparse sets of polynomials....

    [...]

  • ...In a first step the optimal set of polynomials is determined using the PCE framework: A is found by applying the LAR procedure as in [32]....

    [...]

  • ...…projection Ghiocel and Ghanem (2002); Le Maître et al. (2002); Keese and Matthies (2005), stochastic collocation Xiu and Hesthaven (2005); Xiu (2009) and least-square minimization methods Chkifa et al. (2013); Migliorati et al. (2014); Berveiller et al. (2006b); Blatman and Sudret (2010a, 2011)....

    [...]

  • ...Further developments which combine spectral expansions and compressive sensing ideas have led to so-called sparse polynomial chaos expansions Blatman and Sudret (2008, 2010a,b); Doostan and Owhadi (2011); Blatman and Sudret (2011); Doostan et al. (2013); Jakeman et al. (2014)....

    [...]

  • ...Different nonintrusive methods have been proposed in the last decade to calibrate PC meta-models, namely projection [20–22], stochastic collocation [23, 24], and least-square minimization methods [26, 29, 32, 54, 55]....

    [...]
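One of the quoted excerpts mentions hyperbolic index sets as the device for obtaining sparse candidate bases. A minimal sketch of that truncation rule follows; the function name and the quasi-norm parameter value are illustrative. Multi-indices α are kept when their q-quasi-norm ‖α‖_q = (Σ α_i^q)^(1/q) does not exceed the total degree p, which for q < 1 prunes high-order interaction terms while keeping main effects:

```python
from itertools import product

def hyperbolic_index_set(dim, p, q=0.75):
    # Keep multi-indices alpha with ||alpha||_q <= p (hyperbolic truncation):
    # q = 1 recovers the usual total-degree set; q < 1 favors main effects
    # and low-order interactions over high-order interactions.
    keep = []
    for alpha in product(range(p + 1), repeat=dim):
        if sum(a**q for a in alpha) ** (1.0 / q) <= p + 1e-12:
            keep.append(alpha)
    return keep

full = hyperbolic_index_set(3, 3, q=1.0)   # total-degree basis in 3 variables
hyp = hyperbolic_index_set(3, 3, q=0.5)    # hyperbolic set: interactions pruned
```

For dimension 3 and degree 3, the total-degree set has 20 multi-indices, while the q = 0.5 hyperbolic set drops mixed terms such as (1, 1, 1) but retains all univariate terms such as (3, 0, 0).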

Journal ArticleDOI
TL;DR: The coherence-optimal sampling scheme is proposed: a Markov chain Monte Carlo sampling method that directly uses the basis functions under consideration to achieve statistical optimality among all sampling schemes with identical support.

217 citations


Cites background from "Adaptive sparse polynomial chaos ex..."

  • ..., [10, 8, 11, 12], and more recently in UQ, [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]....

    [...]

  • ...Here, ‖c‖₀ = #(c_k ≠ 0) is the number of nonzero entries of c. Solutions to these problems are of great practical interest for sparse approximation and have received significant study in the field of Compressive Sampling/Compressed Sensing, see, e.g., [10, 8, 11, 12], and more recently in UQ, [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]....

    [...]

References
More filters
Book
Vladimir Vapnik1
01 Jan 1995
TL;DR: Setting of the learning problem; consistency of learning processes; bounds on the rate of convergence of learning processes; controlling the generalization ability of learning processes; constructing learning algorithms; what is important in learning theory?
Abstract: Setting of the learning problem; consistency of learning processes; bounds on the rate of convergence of learning processes; controlling the generalization ability of learning processes; constructing learning algorithms; what is important in learning theory?

40,147 citations


"Adaptive sparse polynomial chaos ex..." refers background in this paper

  • ...In this work, we focus on the estimation of the approximation error in the L²-norm: Err ≡ E[(M(X) − M̂_A(X))²] (31). The quantity Err is sometimes referred to as the generalization error in statistical learning [41]....

    [...]
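The quantity Err in the excerpt above is typically estimated by Monte Carlo on a validation sample drawn independently of the experimental design. A minimal sketch, with all names and the toy model/surrogate pair purely illustrative:

```python
import numpy as np

def generalization_error(model, surrogate, sampler, n=10_000, seed=0):
    # Monte Carlo estimate of Err = E[(M(X) - M_hat(X))^2]
    # on a fresh validation sample, independent of the training design.
    rng = np.random.default_rng(seed)
    x = sampler(rng, n)
    return np.mean((model(x) - surrogate(x)) ** 2)

model = lambda x: x**2                       # "true" model M
surrogate = lambda x: x**2 + 0.1             # surrogate with a constant bias of 0.1
sampler = lambda rng, n: rng.standard_normal(n)
err = generalization_error(model, surrogate, sampler)
```

Here the pointwise error is the constant 0.1, so the estimate recovers Err = 0.01 exactly, illustrating what the L²-norm criterion measures.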

Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations


"Adaptive sparse polynomial chaos ex..." refers methods in this paper

  • ...This favors the main effects and low-order interactions, which are more likely to be significant than the high-order interactions in the governing equations of the model according to the sparsity-of-effects principle [39]....

    [...]

Journal ArticleDOI
TL;DR: In this paper, two sampling plans are examined as alternatives to simple random sampling in Monte Carlo studies, and they are shown to be improvements over simple random sampling with respect to variance for a class of estimators which includes the sample mean and the empirical distribution function.
Abstract: Two types of sampling plans are examined as alternatives to simple random sampling in Monte Carlo studies. These plans are shown to be improvements over simple random sampling with respect to variance for a class of estimators which includes the sample mean and the empirical distribution function.

8,328 citations
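The variance-reduction claim of this reference can be checked numerically in one dimension. The sketch below compares the variance of the sample mean under simple random sampling and under Latin hypercube sampling, one of the plans the paper examines; the test integrand and sample sizes are illustrative assumptions:

```python
import numpy as np

def lhs(n, rng):
    # One-dimensional Latin hypercube sample: exactly one point
    # drawn uniformly inside each stratum [i/n, (i+1)/n).
    return (rng.permutation(n) + rng.uniform(size=n)) / n

def estimator_variance(sample_fn, f, n=50, reps=2000, seed=0):
    # Empirical variance of the sample-mean estimator over many replications.
    rng = np.random.default_rng(seed)
    means = [f(sample_fn(n, rng)).mean() for _ in range(reps)]
    return float(np.var(means))

f = lambda u: u**2                        # integral over [0, 1] is 1/3
srs = lambda n, rng: rng.uniform(size=n)  # simple random sampling baseline
v_srs = estimator_variance(srs, f)
v_lhs = estimator_variance(lhs, f)
```

For this smooth integrand the stratification reduces the estimator variance by orders of magnitude relative to simple random sampling, consistent with the abstract's claim.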

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
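The Forward Stagewise procedure that this abstract connects to LARS is simple enough to sketch directly. The incremental version below is an illustration of the idea (nudge the coefficient most correlated with the residual by a small step), not Efron et al.'s exact LARS implementation, and the data-generating model is an assumption for the demo:

```python
import numpy as np

def forward_stagewise(X, y, eps=0.005, n_steps=10_000):
    # Incremental Forward Stagewise regression: at each step, move the
    # coefficient whose predictor is most correlated with the current
    # residual by a small amount eps in the direction of that correlation.
    beta = np.zeros(X.shape[1])
    r = y.astype(float).copy()
    for _ in range(n_steps):
        corr = X.T @ r
        j = int(np.argmax(np.abs(corr)))
        delta = eps * np.sign(corr[j])
        beta[j] += delta
        r -= delta * X[:, j]
    return beta

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 10))
beta_true = np.zeros(10)
beta_true[[0, 3]] = [2.0, -1.5]          # sparse ground truth
y = X @ beta_true
beta = forward_stagewise(X, y)
```

With a noiseless sparse target, the stagewise path settles near the least-squares solution, which here is the sparse ground truth; this mirrors the Lasso/Stagewise similarity the abstract explains via LARS.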

7,828 citations


"Adaptive sparse polynomial chaos ex..." refers background or methods in this paper

  • ...It is shown in [22] that LAR is noticeably efficient since it only requires O(NP² + P³) computations (i....

    [...]

  • ...Both quantities may be derived algebraically as shown in [22]....

    [...]

  • ...It is shown in [22] that hybrid LAR always increases the usual empirical measure of fit R² compared to the original LAR....

    [...]

  • ...Least Angle Regression (LAR) [22] is an efficient procedure for variable selection....

    [...]

  • ...The so-called hybrid LAR procedure is a variant of the original LAR [22]....

    [...]

Book
01 Jan 1963
TL;DR: These notes cover the basic definitions of discrete probability theory, and then present some results including Bayes' rule, inclusion-exclusion formula, Chebyshev's inequality, and the weak law of large numbers.
Abstract: These notes cover the basic definitions of discrete probability theory, and then present some results including Bayes' rule, the inclusion-exclusion formula, Chebyshev's inequality, and the weak law of large numbers. 1 Sample spaces and events. To treat probability rigorously, we define a sample space S whose elements are the possible outcomes of some process or experiment. For example, the sample space might be the outcomes of the roll of a die, or flips of a coin. To each element x of the sample space, we assign a probability, which will be a non-negative number between 0 and 1, which we will denote by p(x). We require that ∑_{x∈S} p(x) = 1, so the total probability of the elements of our sample space is 1. What this means intuitively is that when we perform our process, exactly one of the things in our sample space will happen. Example. The sample space could be S = {a, b, c}, and the probabilities could be p(a) = 1/2, p(b) = 1/3, p(c) = 1/6. If all elements of our sample space have equal probabilities, we call this the uniform probability distribution on our sample space. For example, if our sample space were the outcomes of a die roll, it could be denoted S = {x_1, x_2, …, x_6}, where the outcome x_i corresponds to rolling i. The uniform distribution, in which every outcome x_i has probability 1/6, describes the situation for a fair die. Similarly, if we consider tossing a fair coin, the outcomes would be H (heads) and T (tails), each with probability 1/2. In this situation we have the uniform probability distribution on the sample space S = {H, T}. We define an event A to be a subset of the sample space. For example, in the roll of a die, if the event A were rolling an even number, then A = {x_2, x_4, x_6}. The probability of an event A, denoted by P(A), is the sum of the probabilities of the corresponding elements in the sample space.
For rolling an even number, we have P(A) = p(x_2) + p(x_4) + p(x_6) = 1/2. Given an event A of our sample space, there is a complementary event which consists of all points in our sample space that are not …
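The die-roll example in these notes can be transcribed directly, using exact rational arithmetic so the probabilities come out as fractions rather than floats:

```python
from fractions import Fraction

# Uniform distribution on a fair die: S = {1, ..., 6}, p(x) = 1/6 for each x.
p = {x: Fraction(1, 6) for x in range(1, 7)}
assert sum(p.values()) == 1                  # total probability of S is 1

def prob(event):
    # P(A) is the sum of the probabilities of the outcomes in A.
    return sum(p[x] for x in event)

even = {2, 4, 6}                             # event A: rolling an even number
print(prob(even))                            # prints 1/2
complement = set(p) - even                   # complementary event: the odd rolls
```

The event and its complement partition the sample space, so their probabilities sum to 1, matching the complementary-event remark the abstract breaks off on.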

6,236 citations


"Adaptive sparse polynomial chaos ex..." refers methods in this paper

  • ...The Gaussian field N(x, ω) is discretized using the Karhunen-Loève (KL) decomposition [30]....

    [...]

  • ...Then it may be discretized using the Karhunen-Loève expansion [30]:...

    [...]
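The Karhunen-Loève discretization referenced in these excerpts can be sketched on a grid: eigendecompose the covariance matrix and expand the field in the leading eigenpairs weighted by independent standard normal variables. The exponential covariance kernel, correlation length, and truncation order below are illustrative assumptions, not the setup of the cited paper:

```python
import numpy as np

def kl_expansion(x, cov, n_terms, rng):
    # Discrete Karhunen-Loeve expansion of a zero-mean Gaussian field:
    # N(x) ≈ sum_i sqrt(lambda_i) * phi_i(x) * xi_i, with xi_i ~ N(0, 1).
    C = cov(x[:, None], x[None, :])           # covariance matrix on the grid
    lam, phi = np.linalg.eigh(C)              # eigh returns ascending order
    lam, phi = lam[::-1], phi[:, ::-1]        # sort eigenpairs, largest first
    xi = rng.standard_normal(n_terms)
    field = phi[:, :n_terms] @ (np.sqrt(np.maximum(lam[:n_terms], 0)) * xi)
    return field, lam

cov = lambda s, t: np.exp(-np.abs(s - t) / 0.5)   # exponential covariance
x = np.linspace(0, 1, 100)
rng = np.random.default_rng(3)
field, lam = kl_expansion(x, cov, n_terms=10, rng=rng)
```

Truncating after a few terms is justified by the eigenvalue decay: for this kernel the first ten eigenvalues already capture most of the field's variance (the trace of the covariance matrix).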