
Showing papers on "Sparse grid published in 2008"


Journal ArticleDOI
TL;DR: This work demonstrates algebraic convergence with respect to the total number of collocation points and quantifies the effect of the dimension of the problem (number of input random variables) in the final estimates, indicating for which problems the sparse grid stochastic collocation method is more efficient than Monte Carlo.
Abstract: This work proposes and analyzes a Smolyak-type sparse grid stochastic collocation method for the approximation of statistical quantities related to the solution of partial differential equations with random coefficients and forcing terms (input data of the model). To compute solution statistics, the sparse grid stochastic collocation method uses approximate solutions, produced here by finite elements, corresponding to a deterministic set of points in the random input space. This naturally requires solving uncoupled deterministic problems as in the Monte Carlo method. If the number of random variables needed to describe the input data is moderately large, full tensor product spaces are computationally expensive to use due to the curse of dimensionality. In this case the sparse grid approach is still expected to be competitive with the classical Monte Carlo method. Therefore, it is of major practical relevance to understand in which situations the sparse grid stochastic collocation method is more efficient than Monte Carlo. This work provides error estimates for the fully discrete solution using $L^q$ norms and analyzes the computational efficiency of the proposed method. In particular, it demonstrates algebraic convergence with respect to the total number of collocation points and quantifies the effect of the dimension of the problem (number of input random variables) in the final estimates. The derived estimates are then used to compare the method with Monte Carlo, indicating for which problems the former is more efficient than the latter. Computational evidence complements the present theory and shows the effectiveness of the sparse grid stochastic collocation method compared to full tensor and Monte Carlo approaches.

1,257 citations
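The Smolyak construction at the heart of this method can be illustrated with a short sketch (our own toy code, not the authors' implementation): nested Clenshaw-Curtis node sets are combined over all multi-indices whose total level stays within a budget w, and the resulting point set is far smaller than the corresponding full tensor grid.

```python
# Minimal Smolyak-type sparse grid sketch (illustrative only): collect the
# union of tensor grids of nested Clenshaw-Curtis nodes over all multi-index
# levels l with |l| <= w, and compare its size with a full tensor grid.
import math
from itertools import product

def cc_nodes(level):
    """Nested Clenshaw-Curtis nodes on [-1, 1]: one point at level 0, 2^level + 1 thereafter."""
    if level == 0:
        return (0.0,)
    m = 2 ** level + 1
    return tuple(round(-math.cos(math.pi * j / (m - 1)), 12) for j in range(m))

def sparse_grid(d, w):
    """Point set of the Smolyak sparse grid of total level w in d dimensions."""
    def levels(prefix, budget):
        if len(prefix) == d:
            yield tuple(prefix)
            return
        for l in range(budget + 1):
            yield from levels(prefix + [l], budget - l)
    points = set()
    for lvl in levels([], w):
        points.update(product(*(cc_nodes(l) for l in lvl)))
    return points

if __name__ == "__main__":
    d, w = 4, 3
    sg = sparse_grid(d, w)
    full = (2 ** w + 1) ** d   # full tensor grid at the same 1D resolution
    print(len(sg), full)       # the sparse grid uses far fewer points
```

In one dimension the construction reduces to the level-w Clenshaw-Curtis rule itself; the savings appear as d grows.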


Journal ArticleDOI
TL;DR: This work proposes and analyzes an anisotropic sparse grid stochastic collocation method for solving partial differential equations with random coefficients and forcing terms (input data of the model) and provides a rigorous convergence analysis of the fully discrete problem.
Abstract: This work proposes and analyzes an anisotropic sparse grid stochastic collocation method for solving partial differential equations with random coefficients and forcing terms (input data of the model). The method consists of a Galerkin approximation in the space variables and a collocation, in probability space, on sparse tensor product grids utilizing either Clenshaw-Curtis or Gaussian knots. Even in the presence of nonlinearities, the collocation approach leads to the solution of uncoupled deterministic problems, just as in the Monte Carlo method. This work includes a priori and a posteriori procedures to adapt the anisotropy of the sparse grids to each given problem. These procedures seem to be very effective for the problems under study. The proposed method combines the advantages of isotropic sparse collocation with those of anisotropic full tensor product collocation: the first approach is effective for problems depending on random variables which weigh approximately equally in the solution, while the benefits of the latter approach become apparent when solving highly anisotropic problems depending on a relatively small number of random variables, as in the case where input random variables are Karhunen-Loeve truncations of “smooth” random fields. This work also provides a rigorous convergence analysis of the fully discrete problem and demonstrates (sub)exponential convergence in the asymptotic regime and algebraic convergence in the preasymptotic regime, with respect to the total number of collocation points. It also shows that the anisotropic approximation breaks the curse of dimensionality for a wide set of problems. Numerical examples illustrate the theoretical results and are used to compare this approach with several others, including the standard Monte Carlo. In particular, for moderately large-dimensional problems, the sparse grid approach with a properly chosen anisotropy seems to be very efficient and superior to all examined methods.

552 citations
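The anisotropy described above amounts to a weighted index set; a minimal sketch (our own notation, with weight vector a and level budget w, not the paper's a priori/a posteriori procedures) shows how unequal weights thin out the less important directions.

```python
# Anisotropic Smolyak index set sketch (illustrative only): a multi-index l
# is kept when the weighted sum a . l <= w, so directions with large weight
# a_i are refined less.
from itertools import product

def anisotropic_indices(weights, w):
    d = len(weights)
    max_l = [int(w // a) for a in weights]
    keep = []
    for l in product(*(range(m + 1) for m in max_l)):
        if sum(a * li for a, li in zip(weights, l)) <= w:
            keep.append(l)
    return keep

if __name__ == "__main__":
    iso = anisotropic_indices((1.0, 1.0, 1.0), 4)
    aniso = anisotropic_indices((1.0, 2.0, 4.0), 4)   # third variable matters least
    print(len(iso), len(aniso))   # the anisotropic set is much smaller
```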


Journal ArticleDOI
TL;DR: Monte Carlo experiments for the mixed logit model indicate the superior performance of the proposed Gaussian quadrature extension over simulation techniques.

360 citations


Proceedings ArticleDOI
01 Apr 2008
TL;DR: Experience with non-intrusive PCE methods for algebraic and PDE-based benchmark test problems is presented, demonstrating the need for accurate, efficient coefficient estimation approaches that can be used for problems with significant numbers of random variables.
Abstract: Polynomial chaos expansions (PCE) are an attractive technique for uncertainty quantification (UQ) due to their strong mathematical basis and ability to produce functional representations of stochastic variability. When tailoring the orthogonal polynomial bases to match the forms of the input uncertainties in a Wiener-Askey scheme, excellent convergence properties can be achieved for general probabilistic analysis problems. Non-intrusive PCE methods allow the use of simulations as black boxes within UQ studies, and involve the calculation of chaos expansion coefficients based on a set of response function evaluations. These methods may be characterized as being either Galerkin projection methods, using sampling or numerical integration, or regression approaches (also known as point collocation or stochastic response surfaces), using linear least squares. Numerical integration methods may be further categorized as either tensor product quadrature or sparse grid Smolyak cubature and as either isotropic or anisotropic. Experience with these approaches is presented for algebraic and PDE-based benchmark test problems, demonstrating the need for accurate, efficient coefficient estimation approaches that scale for problems with significant numbers of random variables.

185 citations
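The non-intrusive projection step can be sketched in one dimension (a toy of ours, using a uniform input and Legendre polynomials from the Wiener-Askey family; the quadrature here is a plain midpoint rule rather than the tensor or sparse rules discussed in the paper):

```python
# Non-intrusive PCE sketch (illustrative only): project a black-box response
# f(x) of one uniform input on [-1, 1] onto Legendre polynomials, estimating
# the coefficients c_k = (2k+1)/2 * int_{-1}^{1} f(x) P_k(x) dx numerically.
def legendre(k, x):
    """Evaluate P_k(x) by the three-term recurrence."""
    p0, p1 = 1.0, x
    if k == 0:
        return p0
    for n in range(1, k):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

def pce_coeffs(f, degree, n=4000):
    """Chaos coefficients via a midpoint rule with n points (non-intrusive: f is a black box)."""
    h = 2.0 / n
    xs = [-1.0 + (i + 0.5) * h for i in range(n)]
    return [(2 * k + 1) / 2.0 * h * sum(f(x) * legendre(k, x) for x in xs)
            for k in range(degree + 1)]

if __name__ == "__main__":
    f = lambda x: 1.0 + 0.5 * x + x * x   # stand-in for an expensive simulation
    c = pce_coeffs(f, 3)
    # the 0th coefficient is the mean of f under the uniform input: 1 + 1/3
    print(round(c[0], 4))   # prints 1.3333
```

The same projection generalizes to many inputs, where the choice of quadrature (tensor product versus sparse grid) is exactly the trade-off the paper studies.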


Journal ArticleDOI
TL;DR: The design framework relies entirely on deterministic direct and sensitivity analysis of the continuum systems, thereby significantly enhancing its range of applicability for design under uncertainty of many other systems usually analyzed with legacy codes.

102 citations


Journal ArticleDOI
TL;DR: An a priori error analysis shows that the sparse tensor product method is clearly superior to a discrete ordinates method, as it converges with essentially optimal asymptotic rates while its complexity grows essentially only as that for a linear transport problem in R^n.

55 citations


Journal ArticleDOI
TL;DR: In this paper, a Fourier-based sparse grid method for pricing multi-asset options is presented and evaluated by solving pricing equations for options dependent on up to seven underlying assets.
Abstract: In this paper we present and evaluate a Fourier-based sparse grid method for pricing multi-asset options. This involves computing multidimensional integrals efficiently, which we do by the Fast Fourier Transform. We also propose and evaluate ways to deal with the curse of dimensionality by means of parallel partitioning of the Fourier transform and by incorporating a parallel sparse grid method. Finally, we test the presented method by solving pricing equations for options dependent on up to seven underlying assets.

47 citations
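The FFT building block behind such Fourier methods is the O(N log N) evaluation of discrete convolutions, the operation that turns an expectation over a transition density into a pointwise product in frequency space. A generic radix-2 sketch (our own, not the paper's parallel algorithm):

```python
# FFT-convolution sketch (illustrative only): a circular convolution is
# computed as ifft(fft(f) * fft(g)), the core primitive of Fourier pricing.
import cmath

def fft(a):
    n = len(a)                      # n must be a power of two
    if n == 1:
        return list(a)
    even, odd = fft(a[0::2]), fft(a[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def ifft(a):
    n = len(a)
    conj = fft([x.conjugate() for x in a])
    return [x.conjugate() / n for x in conj]

def circular_convolution(f, g):
    F, G = fft(f), fft(g)
    return [x.real for x in ifft([u * v for u, v in zip(F, G)])]

if __name__ == "__main__":
    f = [1.0, 2.0, 0.0, 0.0]
    g = [0.5, 0.5, 0.0, 0.0]
    # + 0.0 normalises a possible -0.0 before printing
    print([round(v, 6) + 0.0 for v in circular_convolution(f, g)])  # → [0.5, 1.5, 1.0, 0.0]
```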


Journal ArticleDOI
TL;DR: Applying piecewise smooth wavelets, this work verifies the compressibility that yields dimension independent approximation rates for general, non-separable elliptic PDEs in tensor domains.
Abstract: With standard isotropic approximation by (piecewise) polynomials of fixed order in a domain D subset of R^d, the convergence rate in terms of the number N of degrees of freedom is inversely proportional to the space dimension d. This so-called curse of dimensionality can be circumvented by applying sparse tensor product approximation, when certain high order mixed derivatives of the approximated function happen to be bounded in L^2. It was shown by Nitsche (2006) that this regularity constraint can be dramatically reduced by considering best N-term approximation from tensor product wavelet bases. When the function is the solution of some well-posed operator equation, dimension independent approximation rates can be practically realized in linear complexity by adaptive wavelet algorithms, assuming that the infinite stiffness matrix of the operator with respect to such a basis is highly compressible. Applying piecewise smooth wavelets, we verify this compressibility for general, non-separable elliptic PDEs in tensor domains. Applications of the general theory developed include adaptive Galerkin discretizations of multiple scale homogenization problems and of anisotropic equations which are robust, i.e., independent of the scale parameters and of the size of the anisotropy, respectively.

45 citations


Journal ArticleDOI
TL;DR: Sparse grids, implemented as a combination of aggregated grids, are used to address the curse of dimensionality of the CME; two alternatives are described here, one using sparse grids and one using a hybrid method.
Abstract: The direct numerical solution of the chemical master equation (CME) is usually impossible due to the high dimension of the computational domain. The standard method for solution of the equation is to generate realizations of the chemical system by the stochastic simulation algorithm (SSA) of Gillespie and then to take averages over the trajectories. Two alternatives are described here, using sparse grids and a hybrid method. Sparse grids, implemented as a combination of aggregated grids, are used to address the curse of dimensionality of the CME. The aggregated components are selected using an adaptive procedure. In the hybrid method, some of the chemical species are represented macroscopically while the remaining species are simulated with the SSA. The convergence of variants of the method is investigated for a growing number of trajectories. Two signaling cascades in molecular biology are simulated with the methods and compared to SSA results.

41 citations
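For contrast with the sparse grid and hybrid methods, the SSA baseline mentioned above is easy to sketch; the single-species birth-death model and its rates below are our own illustrative choice, not taken from the paper.

```python
# Gillespie SSA sketch (illustrative only) for a birth-death process:
# birth at constant rate, death proportional to the population. Averaging
# many such trajectories is the baseline the paper's methods aim to replace.
import random

def ssa_birth_death(birth=1.0, death=0.1, x0=0, t_end=50.0, rng=None):
    rng = rng or random.Random(0)
    t, x = 0.0, x0
    while True:
        a_birth, a_death = birth, death * x
        a_total = a_birth + a_death
        t += rng.expovariate(a_total)        # waiting time to the next reaction
        if t > t_end:
            return x
        x += 1 if rng.random() * a_total < a_birth else -1

if __name__ == "__main__":
    rng = random.Random(42)
    mean = sum(ssa_birth_death(rng=rng) for _ in range(200)) / 200
    print(mean)   # the stationary mean is birth/death = 10
```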


Journal ArticleDOI
TL;DR: In this paper, coordinate transformation techniques combined with grid stretching are evaluated for pricing basket options in a sparse grid setting, on multi-asset examples with up to five underlying assets in the basket.

40 citations


Journal ArticleDOI
TL;DR: A new numerical integration procedure for exchange-correlation energies and potentials is proposed, with “proof of principle” results; it produces a “whole molecule” grid, in contrast to conventional integration methods in density-functional theory, which use atom-in-molecule grids.
Abstract: A new numerical integration procedure for exchange-correlation energies and potentials is proposed and “proof of principle” results are presented. The numerical integration grids are built from sparse-tensor product grids (constructed according to Smolyak’s prescription [Dokl. Akad. Nauk. 4, 240 (1963)] ) on the unit cube. The grid on the unit cube is then transformed to a grid over real space with respect to a weight function, which we choose to be the promolecular density. This produces a “whole molecule” grid, in contrast to conventional integration methods in density-functional theory, which use atom-in-molecule grids. The integration scheme was implemented in a modified version of the DEMON2K density-functional theory program, where it is used to evaluate integrals of the exchange-correlation energy density and the exchange-correlation potential. Ground-state energies and molecular geometries are accurately computed. The biggest advantages of the grid are its flexibility (it is easy to change the num...

13 Jun 2008
TL;DR: A complete parallel algorithm which does not require communication between the sub-problems is developed, which subdivides the problem in a sophisticated way and in combination with the sparse grid technique, the numerical results have a satisfactory accuracy.
Abstract: Multi-asset options are based on more than one underlying asset, in contrast to standard vanilla options. A very significant problem within the pricing techniques for multi-asset options is the curse of dimensionality. This curse of dimensionality is the exponential growth of the complexity of the problem when the dimensionality increases, because the number of unknowns to solve simultaneously grows exponentially. Modern computer systems cannot handle this huge amount of data. In order to handle the multi-dimensional option pricing problem, the curse of dimensionality has to be dealt with. The sparse grid solution technique is one of the key techniques to do this. The sparse grid technique divides the original problem into many smaller sized sub-problems, which can be handled efficiently on a modern computer system. Because every sub-problem is independent of all others, this technique is parallelisable at a high efficiency rate. This means that every sub-problem can be solved simultaneously. However, because of the dimensionality, the size of the sub-problems may remain too large to solve and should be parallelised further. The main restriction to the application of the sparse grid method is that the mixed derivative of the solution of a multi-dimensional option pricing problem has to be bounded. Because of the typical non-differentiability of the final condition of the option pricing problem, this restriction has to be taken seriously. In the first part of this thesis, it is shown, experimentally, that indeed the sparse grid technique does not lead to a satisfactory accuracy without the use of advanced techniques. If a coordinate transformation is used, the accuracy increases significantly. This transformation aligns the non-differentiability along a grid line. Coordinate transformations are not applicable to every type of multi-asset option, which seriously restricts the sparse grid solution technique for real-life financial applications.
Sometimes, however, it is not necessary to use one, because the non-differentiability is already aligned with a grid line. These types of options are the options based on the best or worst performing underlying asset. The boundary conditions of these contracts are unknown, and hence these options are computed and analysed with a second, alternative method in this thesis. This method arises from the risk-neutral expectation valuation of the final condition, which can be written as a multi-dimensional integral over the transition density. By use of a discrete Fourier transform, we can solve this integral efficiently. The fast Fourier transform is a fast algorithm to compute the discrete Fourier transform. This algorithm serves as the basis for a sophisticated algorithm to parallelise the computation of the discrete Fourier transform, by dividing the transform into several parts. In this thesis, a complete parallel algorithm which does not require communication between the sub-problems is developed, which subdivides the problem in a sophisticated way. In combination with the sparse grid technique, the numerical results have a satisfactory accuracy.

Book ChapterDOI
01 Jan 2008
TL;DR: It was shown in [GaGTO1] that the task of classification in data mining can be tackled by employing ansatz functions associated to grid points in the (often high dimensional) feature-space rather than using data-centered ansatz functions.
Abstract: It was shown in [GaGTO1] that the task of classification in data mining can be tackled by employing ansatz functions associated to grid points in the (often high dimensional) feature-space rather than using data-centered ansatz functions. To cope with the curse of dimensionality, sparse grids have been used.

Journal ArticleDOI
TL;DR: In this article, the authors apply probabilistic approaches to electromagnetic numerical dosimetry problems in order to take into account the variability of the input parameters, such as spectral expansion and nodal expansion.
Abstract: Purpose – The aim is to apply probabilistic approaches to electromagnetic numerical dosimetry problems in order to take into account the variability of the input parameters. Design/methodology/approach – A classic finite element method is coupled with probabilistic methods. These probabilistic methods are based on the expansion of the random parameters in two different ways: a spectral expansion and a nodal expansion. Findings – The computation of the mean and the variance on a simple scattering problem shows that only a few hundred calculations are required when applying these methods, while the Monte Carlo method uses several thousand samples in order to obtain a comparable accuracy. Originality/value – The number of calculations is reduced using several techniques: a regression technique, sparse grids computed from the Smolyak algorithm, or a suited coordinate system.

BookDOI
05 Apr 2008
TL;DR: This paper proposes an efficient scenario generation method based on sparse grids, proves that it is epi-convergent, and shows numerically that the proposed method converges quickly to the true optimal value in comparison with Monte Carlo and quasi-Monte Carlo methods.
Abstract: One central problem in solving stochastic programming problems is to generate moderate-sized scenario trees which represent well the risk faced by a decision maker. In this paper we propose an efficient scenario generation method based on sparse grids and prove that it is epi-convergent. Furthermore, we show numerically that the proposed method converges quickly to the true optimal value in comparison with Monte Carlo and quasi-Monte Carlo methods.

Journal ArticleDOI
TL;DR: In this paper, a method for performing multi-dimensional integration of arbitrary functions is presented, which starts with Smolyak-type sparse grids as cubature formulae on the unit cube and uses a transformation of coordinates based on the conditional distribution method to adapt those formulae to real space.
Abstract: We present a novel approach for performing multi-dimensional integration of arbitrary functions. The method starts with Smolyak-type sparse grids as cubature formulae on the unit cube and uses a transformation of coordinates based on the conditional distribution method to adapt those formulae to real space. Our method is tested on integrals in one, two, three and six dimensions. The three dimensional integration formulae are used to evaluate atomic interaction energies via the Gordon–Kim model. The six dimensional integration formulae are tested in conjunction with the nonlocal exchange-correlation energy functional proposed by Lee and Parr. This methodology is versatile and powerful; we contemplate application to frozen-density embedding, next-generation molecular-mechanics force fields, 'kernel-type' exchange-correlation energy functionals and pair-density functional theory.
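The transformation idea can be sketched in one dimension (our own toy, with a standard normal target in place of a molecular density): nodes on the unit interval are pushed through the inverse CDF so that a uniform-weight rule integrates against the real-space weight.

```python
# Inverse-CDF transformation sketch (illustrative only): map unit-interval
# cubature nodes to real space so the rule integrates against a 1D standard
# normal weight; the weights carry over unchanged.
from statistics import NormalDist

def transform_rule(nodes01, weights):
    """Map nodes in (0, 1) through the normal inverse CDF; weights are preserved."""
    nd = NormalDist()
    return [nd.inv_cdf(u) for u in nodes01], list(weights)

if __name__ == "__main__":
    n = 2000
    nodes01 = [(i + 0.5) / n for i in range(n)]   # midpoint rule on (0, 1)
    x, w = transform_rule(nodes01, [1.0 / n] * n)
    # second moment of a standard normal is 1; the transformed rule recovers it
    print(round(sum(wi * xi * xi for wi, xi in zip(w, x)), 2))
```

In the paper's multi-dimensional setting the same idea is applied coordinate by coordinate via the conditional distribution method.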

Journal ArticleDOI
Radu-Alexandru Todor1
TL;DR: It is shown that the logarithmic factor in the standard error estimate for sparse finite element (FE) spaces in arbitrary dimension d is removable in the energy (H^1) norm.
Abstract: We show that the logarithmic factor in the standard error estimate for sparse finite element (FE) spaces in arbitrary dimension d is removable in the energy (H^1) norm. Via a penalized sparse grid condition, we then propose and analyse a new version of the energy-based sparse FE spaces introduced first in Bungartz (1992, Dünne Gitter und deren Anwendung bei der adaptiven Lösung der dreidimensionalen Poisson-Gleichung. Dissertation. Munich, Germany: TU München) and known to satisfy an optimal approximation property in the energy norm.

Dissertation
01 May 2008
TL;DR: The conventional formulation for the sparse grid, a collocation algorithm, is modified to yield improved performance, and a dimension-adaptive collocation algorithm is implemented in an unscented Kalman filter, where improvement over extended and unscented Kalman filters is seen in two examples.
Abstract: The next-generation all-electric ship represents a class of design and control problems in which the system is too large to approach analytically, and even with many conventional computational techniques. Additionally, numerous environmental interactions and inaccurate system model information make uncertainty a necessary consideration. Characterizing systems under uncertainty is essentially a problem of representing the system as a function over a random space. This can be accomplished by sampling the function, where in the case of the electric ship a sample is a simulation with uncertain parameters set according to the location of the sample. For systems on the scale of the electric ship, simulation is expensive, so we seek an accurate representation of the system from a minimal number of simulations. To this end, collocation is employed to compute statistical moments, from which sensitivity can be inferred, and to construct surrogate models with which interpolation can be used to propagate PDFs. These techniques are applied to three large-scale electric ship models. The conventional formulation for the sparse grid, a collocation algorithm, is modified to yield improved performance. Theoretical bounds and computational examples are given to support the modification. A dimension-adaptive collocation algorithm is implemented in an unscented Kalman filter, and improvement over extended and unscented Kalman filters is seen in two examples.

Proceedings ArticleDOI
26 May 2008
TL;DR: It is pointed out that the bulk of the computations associated with the “prediction” step can be done off-line, and that the transition probability tensor and the conditional probability density are effectively sparse and so can be efficiently stored and manipulated using sparse tensors.
Abstract: In many applications it is desired that the discrete-discrete filtering problem can be solved in a reliable and computationally efficient manner. In particular, the signal and measurement models often include nonlinearity and/or non-Gaussian characteristics. In this paper, it is pointed out that this can be done efficiently by noting two key observations. Firstly, the bulk of the computations associated with the “prediction” step can be done off-line. The second key point is that the transition probability tensor and the conditional probability density are effectively sparse and so can be efficiently stored and manipulated using sparse tensors. These ideas are crucial for efficiently solving the higher dimensional filtering problems. The resulting technique, termed sparse grid filtering, is demonstrated by some examples, where it is shown that it works very well.

Dissertation
01 Jan 2008
TL;DR: A posteriori error estimates in the energy norm for the numerical solutions of parabolic obstacle problems allowing space/time mesh adaptive refinement are presented, based on a posteriori error indicators which can be computed from the solution of the discrete problem.
Abstract: In this work, we present some numerical methods to approximate Partial Differential Equations (PDEs) or Partial Integro-Differential Equations (PIDEs) commonly arising in finance. This thesis is split into three parts. The first one deals with the study of Sparse Grid techniques. In an introductory chapter, we present the construction of Sparse Grid spaces and give some approximation properties. The second chapter is devoted to the presentation of a numerical algorithm to solve PDEs on these spaces. This chapter gives us the opportunity to clarify the finite difference method on Sparse Grids by looking at it as a collocation method. We make a few remarks on the practical implementation. The second part of the thesis is devoted to the application of Sparse Grid techniques to mathematical finance. We consider two practical problems. In the first one, we consider a European vanilla contract with a multivariate generalisation of the one-dimensional Ornstein-Uhlenbeck-based stochastic volatility model. A relevant generalisation is to assume that the underlying asset is driven by a jump process, which leads to a PIDE. Due to the curse of dimensionality, standard deterministic methods are not competitive with Monte Carlo methods. We discuss sparse grid finite difference methods for solving the PIDE arising in this model up to dimension 4. In the second problem, we consider a basket option on several assets (five in our example) in the Black & Scholes model. We discuss Galerkin methods in a sparse tensor product space constructed with wavelets. The last part of the thesis is concerned with a posteriori error estimates in the energy norm for the numerical solutions of parabolic obstacle problems allowing space/time mesh adaptive refinement. These estimates are based on a posteriori error indicators which can be computed from the solution of the discrete problem.
We present the indicators for the variational inequality obtained in the context of the pricing of an American option on a two dimensional basket using the Black & Scholes model. All these techniques are illustrated by numerical examples.

Journal ArticleDOI
TL;DR: A novel adaptive method is proposed to perform SSTA with delays of gates and interconnects modeled by quadratic polynomials based on Homogeneous Chaos expansion; it achieves a 10x improvement in accuracy while using the same order of computation time.
Abstract: In this paper, we propose an Adaptive Stochastic Collocation Method for block-based Statistical Static Timing Analysis (SSTA). A novel adaptive method is proposed to perform SSTA with delays of gates and interconnects modeled by quadratic polynomials based on Homogeneous Chaos expansion. In order to approximate the key atomic operator MAX in the full random space during timing analysis, the proposed method adaptively chooses the optimal algorithm from a set of stochastic collocation methods by considering different input conditions. Compared with the existing stochastic collocation methods, including the one using a dimension reduction technique and the one using the Sparse Grid technique, the proposed method achieves a 10x improvement in accuracy while using the same order of computation time. The proposed algorithm also shows great improvement in accuracy compared with a moment matching method. Compared with the 10,000 Monte Carlo simulations on ISCAS85 benchmark circuits, the results of the proposed method show less than 1% error in the mean and variance, and a nearly 100x speed-up.

01 Jan 2008
TL;DR: This talk proposes and analyzes an anisotropic sparse grid stochastic collocation method for the approximation of statistical quantities related to the solution of partial differential equations with random coefficients and forcing terms (input data of the model).
Abstract: This talk proposes and analyzes an anisotropic sparse grid stochastic collocation method for the approximation of statistical quantities related to the solution of partial differential equations with random coefficients and forcing terms (input data of the model). This method is an extension of the Sparse Grid Stochastic Collocation method analyzed in [7], which consists of a Galerkin approximation in the space variables and a collocation, in probability space, on sparse tensor product grids utilizing either Clenshaw-Curtis or Gaussian knots. Even in the presence of nonlinearities, the collocation approach leads to the solution of uncoupled deterministic problems, just as in the Monte Carlo method. Our previous sparse collocation procedure is very effective for problems whose input data depend on a moderate number of random variables, which “weigh equally” in the solution. For such an isotropic situation the displayed convergence is faster than standard collocation techniques built upon full tensor product spaces.



Posted Content
TL;DR: A numerical technique, based on ideas for the treatment of higher dimensional partial differential equations using sparse grids, is developed which makes Ulam's approach applicable to systems with higher dimensional long term dynamics.
Abstract: The global macroscopic behaviour of a dynamical system is encoded in the eigenfunctions of a certain transfer operator associated to it. For systems with low dimensional long term dynamics, efficient techniques exist for a numerical approximation of the most important eigenfunctions, cf. DeJu99a. They are based on a projection of the operator onto a space of piecewise constant functions supported on a neighborhood of the attractor - Ulam's method. In this paper we develop a numerical technique which makes Ulam's approach applicable to systems with higher dimensional long term dynamics. It is based on ideas for the treatment of higher dimensional partial differential equations using sparse grids. We develop the technique, establish statements about its complexity and convergence and present two numerical examples.
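Ulam's method itself (before any sparse grid machinery) is compact to sketch; the doubling map and the box counts below are our own illustrative choices.

```python
# Ulam's method sketch (illustrative only): approximate the transfer operator
# of the doubling map x -> 2x mod 1 on [0, 1) by a row-stochastic transition
# matrix over n equal boxes, estimated from sample points in each box.
def ulam_matrix(f, n, samples=50):
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for s in range(samples):
            x = (i + (s + 0.5) / samples) / n       # sample point inside box i
            j = min(int(f(x) * n), n - 1)           # box that f(x) lands in
            P[i][j] += 1.0 / samples
    return P

if __name__ == "__main__":
    f = lambda x: (2 * x) % 1.0
    P = ulam_matrix(f, 8)
    # the uniform density is invariant for the doubling map, so the uniform
    # vector is (numerically) a fixed point of the transition matrix
    u = [1.0 / 8] * 8
    Pu = [sum(u[i] * P[i][j] for i in range(8)) for j in range(8)]
    print([round(v, 3) for v in Pu])   # every entry equals 1/8
```

The eigenfunctions mentioned in the abstract correspond to the leading eigenvectors of this matrix; the paper's contribution is making such matrices tractable in higher dimensions via sparse grids.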

Book ChapterDOI
01 Jan 2008
TL;DR: This investigation shows how overfitting arises when the mesh size goes to zero, and how modified “optimal” combination coefficients provide an advantage over the ones used originally for the numerical solution of PDEs, which in this case simply amplify the sampling noise.
Abstract: Sparse grids, combined with gradient penalties, provide an attractive tool for regularised least squares fitting. It has earlier been found that the combination technique, which allows the approximation of the sparse grid fit with a linear combination of fits on partial grids, is here not as effective as it is in the case of elliptic partial differential equations. We argue that this is due to the irregular and random data distribution, as well as the proportion of the number of data points to the grid resolution. These effects are investigated both in theory and experiments. The application of modified “optimal” combination coefficients provides an advantage over the ones used originally for the numerical solution of PDEs, which in this case simply amplify the sampling noise. As part of this investigation we also show how overfitting arises when the mesh size goes to zero.
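For reference, the classical 2D combination technique that the modified “optimal” coefficients replace uses coefficients +1 on the partial grids of total level w and -1 on level w - 1; a minimal sketch of those classical coefficients (our own, schematic only):

```python
# Classical 2D combination technique sketch (illustrative only): the sparse
# grid solution is approximated by sum_{i+j=w} u_{i,j} - sum_{i+j=w-1} u_{i,j}.
# The "optimal" coefficients discussed in the paper are chosen differently.
def combination_coefficients(w):
    coeffs = {}
    for i in range(w + 1):
        coeffs[(i, w - i)] = 1        # diagonal of total level w
    for i in range(w):
        coeffs[(i, w - 1 - i)] = -1   # diagonal of total level w - 1
    return coeffs

if __name__ == "__main__":
    c = combination_coefficients(3)
    print(sorted(c.items()))
    print(sum(c.values()))   # → 1 (the coefficients always sum to 1)
```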

Journal Article
Zeng Xuan1
TL;DR: The Sparse Grid Based Stochastic Collocation Method is proposed to model the analog circuit with process variations; its exponential convergence rate reduces the computational complexity significantly.
Abstract: The Sparse Grid Based Stochastic Collocation Method is proposed to model the analog circuit with process variations. First, compared with the traditional Monte Carlo method, which needs a large number of sampling points, the Stochastic Collocation Method has an exponential convergence rate, which reduces the computational complexity significantly. Second, compared with a direct tensor product in multi-dimensional space, which suffers from an exponential increase in the number of sampling points with the number of variables, the Sparse Grid technique decreases the number of collocation points dramatically while the accuracy is guaranteed, further reducing the computational complexity.


Proceedings ArticleDOI
08 Sep 2008
TL;DR: This study reveals that, compared to traditional Monte Carlo simulations, the collocation-based stochastic approaches can accurately quantify uncertainty in petroleum reservoirs and greatly reduce the computational cost.
Abstract: This paper presents non-intrusive, efficient stochastic approaches for predicting uncertainties associated with petroleum reservoir simulations. The Monte Carlo simulation method, which is the most common and straightforward approach for uncertainty quantification in the industry, requires performing a large number of reservoir simulations and is thus computationally expensive, especially for large-scale problems. We propose an efficient and accurate alternative through collocation-based stochastic approaches. The reservoirs are considered to exhibit randomly heterogeneous flow properties. The underlying random permeability field can be represented by the Karhunen-Loeve expansion (or principal component analysis), which reduces the dimensionality of the random space. Two different collocation-based methods are introduced to propagate uncertainty of the reservoir response. The first one is the probabilistic collocation method, which deals with the random reservoir responses by employing orthogonal polynomial functions as the bases of the random space and utilizing the collocation technique in the random space. The second one is the sparse grid collocation method, which is based on multi-dimensional interpolation and high-dimensional quadrature techniques. They are non-intrusive in that the resulting equations have exactly the same form as the original equations and can thus be solved with existing reservoir simulators. These methods are efficient since only a small number of simulations are required, and the statistical moments and probability density functions of the quantities of interest in the oil reservoirs can be accurately estimated. The proposed approaches are demonstrated with a 3D reservoir model originating from the 9th SPE comparative project. The accuracy, efficiency, and compatibility are compared against Monte Carlo simulations.
This study reveals that, compared to traditional Monte Carlo simulations, the collocation-based stochastic approaches can accurately quantify uncertainty in petroleum reservoirs and greatly reduce the computational cost.
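The Karhunen-Loeve reduction step can be sketched with a power iteration for the leading mode of an exponential covariance (our own toy; the grid size, correlation length, and pure-Python eigensolver are illustrative choices, not the paper's setup):

```python
# Karhunen-Loeve sketch (illustrative only): discretize an exponential
# covariance on a 1D grid and extract its leading eigenmode by power
# iteration; the leading modes carry most of the field's variance.
import math

def leading_mode(cov, iters=500):
    """Largest eigenpair of a symmetric nonnegative matrix via power iteration."""
    n = len(cov)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(x * x for x in w))   # converges to the top eigenvalue
        v = [x / lam for x in w]
    return lam, v

if __name__ == "__main__":
    n, corr_len = 16, 0.5
    xs = [i / (n - 1) for i in range(n)]
    cov = [[math.exp(-abs(a - b) / corr_len) for b in xs] for a in xs]
    lam, v = leading_mode(cov)
    trace = sum(cov[i][i] for i in range(n))   # trace = sum of all eigenvalues
    print(round(lam / trace, 2))               # fraction of variance in the first mode
```

Truncating after the few modes that dominate this ratio is what shrinks the random space before collocation.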

Posted Content
TL;DR: In this paper, a comprehensive framework for Bayesian estimation of structural nonlinear dynamic economic models on sparse grids is presented, and the posterior of the structural parameters is estimated by a new Metropolis-Hastings algorithm with mixing parallel sequences.
Abstract: We present a comprehensive framework for Bayesian estimation of structural nonlinear dynamic economic models on sparse grids. The Smolyak operator underlying the sparse grids approach frees global approximation from the curse of dimensionality, and we apply it to a Chebyshev approximation of the model solution. The operator also eliminates the curse from Gaussian quadrature, and we use it for the integrals arising from rational expectations and in three new nonlinear state space filters. The filters substantially decrease the computational burden compared to the sequential importance resampling particle filter. The posterior of the structural parameters is estimated by a new Metropolis-Hastings algorithm with mixing parallel sequences. The parallel extension improves the global maximization property of the algorithm, simplifies the choice of the innovation variances, and allows for unbiased convergence diagnostics and for a simple implementation of the estimation on parallel computers. Finally, we provide all algorithms in the open source software JBendge for the solution and estimation of a general class of models.