
Showing papers on "Sparse grid published in 2010"


Journal ArticleDOI
TL;DR: The proposed method provides accurate results for stochastic dimensionality as high as 500 even with large input variability; its efficiency is examined by comparison with Monte Carlo (MC) simulation.

221 citations


Dissertation
01 Jan 2010
TL;DR: The curse of dimensionality, i.e. the exponential dependency of the overall computational effort on the number of dimensions, is still a roadblock for the numerical treatment of high-dimensional problems; sparse grids offer a way to mitigate it.
Abstract: Disclaimer: This pdf version differs slightly from the printed version. A few typos have been corrected! Acknowledgments This thesis would not have been possible without the direct and indirect contributions of several colleagues and friends, to whom I owe my greatest gratitude. Foremost, I am heartily thankful to my supervisor Hans-Joachim Bungartz for "panem et circenses", enabling me to work on the topic of sparse grids in a positive and collaborative atmosphere at his chair. I have much appreciated his unconditional support, the open atmosphere, and the full confidence he always offered. His encouragement to look at the bigger picture led to exciting collaborations with other groups and to the challenging applications I was able to deal with. During my work I have learned a lot beyond the mere scientific scope. I would like to thank my reviewers: Markus Hegland, for inspiring discussions and his warm welcome and hospitality in Down Under (even enriched by a funny hunt for a fluffy Australian spider), and Christoph Zenger, the "grand seigneur of sparse grids", for references to previous works and related problems. I am very grateful to all colleagues and collaborators. Special thanks go to Stefan Zimmer for all his support, providing everything from a constant source of ideas and excellent fast (and, if necessary, last-minute) feedback, up to jelly beans and cappuccino. I would like to mention especially the sparse grid group at our chair with Janos Benk, Gerrit Buse, Daniel Butnaru, and Stefanie Schraufstetter. I am very pleased to thank Mona Frommert and Torsten Ensslin from the Max Planck Institute for Astrophysics for providing challenging applications and an excellent collaboration, and Stefan Dirnstorfer and Andreas Grau for input and feedback regarding financial applications.
Furthermore, I would like to thank all students who contributed to the SG++ toolbox and to applications. Finally, and most importantly, I would like to express my deepest gratitude to my beloved wife Doris and my parents and sisters for all their constant support and patience throughout the last years. Without them, this thesis would have hardly been possible. Furthermore, I am very grateful to Martin, who has been best friend, colleague, housemate, support, and much more, all in one. Abstract The curse of dimensionality, i.e. the exponential dependency of the overall computational effort on the number of dimensions, is still a roadblock for the numerical treatment …

164 citations


Book
22 Oct 2010
TL;DR: Numerical experiments show that the approaches presented in this book can be faster and more accurate than (quasi-) Monte Carlo methods, even for integrands with hundreds of dimensions.
Abstract: This book deals with the numerical analysis and efficient numerical treatment of high-dimensional integrals using sparse grids and other dimension-wise integration techniques with applications to finance and insurance. The book focuses on providing insights into the interplay between coordinate transformations, effective dimensions and the convergence behaviour of sparse grid methods. The techniques, derivations and algorithms are illustrated by many examples, figures and code segments. Numerical experiments with applications from finance and insurance show that the approaches presented in this book can be faster and more accurate than (quasi-) Monte Carlo methods, even for integrands with hundreds of dimensions.
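As a concrete illustration of the kind of construction such books cover, here is a minimal sketch (not code from the book) of Smolyak sparse grid quadrature on [0, 1]^d, built from 1D composite trapezoidal rules via the standard combination-technique coefficients:

```python
from itertools import product
from math import comb

def trap_rule(level):
    """Composite trapezoidal rule with 2**level + 1 nodes on [0, 1]."""
    n = 2 ** level + 1
    h = 1.0 / (n - 1)
    nodes = [i * h for i in range(n)]
    weights = [h] * n
    weights[0] = weights[-1] = 0.5 * h
    return nodes, weights

def smolyak_quad(f, d, q):
    """Smolyak combination of tensor trapezoidal rules: sum over level
    multi-indices l with q - d + 1 <= |l| <= q (each l_i >= 1),
    weighted by (-1)**(q - |l|) * C(d - 1, q - |l|)."""
    total = 0.0
    for l in product(range(1, q + 1), repeat=d):
        s = sum(l)
        if not (q - d + 1 <= s <= q):
            continue
        coef = (-1) ** (q - s) * comb(d - 1, q - s)
        rules = [trap_rule(li) for li in l]
        for idx in product(*(range(len(r[0])) for r in rules)):
            x = [rules[k][0][i] for k, i in enumerate(idx)]
            w = 1.0
            for k, i in enumerate(idx):
                w *= rules[k][1][i]
            total += coef * w * f(x)
    return total
```

Because each constituent tensor rule is exact for bilinear integrands and the combination coefficients sum to one, `smolyak_quad(lambda x: x[0] * x[1], 2, 4)` recovers the exact integral 1/4, while the number of quadrature points grows far more slowly with d than on a full tensor grid.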

131 citations


Journal ArticleDOI
TL;DR: A new general class of methods for the computation of high-dimensional integrals, designed to exploit low effective dimensions and include sparse grid methods as special case, is presented.

131 citations


Journal ArticleDOI
TL;DR: Results show that sparse grid methods outperform popular counterparts; the automatic sampling, special interpolation process, and dimension-adaptivity make SGI more flexible and efficient than uniform-sample-based metamodeling techniques.
Abstract: Current methods for uncertainty propagation suffer from their limitations in providing accurate and efficient solutions to high-dimensional problems with interactions of random variables. The sparse grid technique, originally invented for numerical integration and interpolation, is extended to uncertainty propagation in this work to overcome this difficulty. The concept of Sparse Grid Numerical Integration (SGNI) is extended for estimating the first two moments of performance in robust design, while Sparse Grid Interpolation (SGI) is employed to determine failure probability by interpolating the limit-state function at the Most Probable Point (MPP) in reliability analysis. The proposed methods are demonstrated on high-dimensional mathematical examples with notable variate interactions and on one multidisciplinary rocket design problem. Results show that the sparse grid methods outperform popular counterparts. Furthermore, the automatic sampling, special interpolation process, and dimension-adaptivity feature make SGI more flexible and efficient than uniform-sample-based metamodeling techniques.

111 citations


Journal ArticleDOI
TL;DR: An extension of the classical sparse grid approach that allows us to tackle high-dimensional problems by spatially adaptive refinement, modified ansatz functions, and efficient regularization techniques is presented.

102 citations


Journal ArticleDOI
Jie Shen1, Haijun Yu
TL;DR: A fast algorithm for the discrete transform between the values at the sparse grid and the coefficients of expansion in a hierarchical basis is developed; and by using the aforementioned fast transform, two very efficient sparse spectral-Galerkin methods for a model elliptic equation are constructed.
Abstract: We develop in this paper some efficient algorithms which are essential to implementations of spectral methods on the sparse grid by Smolyak's construction based on a nested quadrature. More precisely, we develop a fast algorithm for the discrete transform between the values at the sparse grid and the coefficients of expansion in a hierarchical basis; and by using the aforementioned fast transform, we construct two very efficient sparse spectral-Galerkin methods for a model elliptic equation. In particular, the Chebyshev-Legendre-Galerkin method leads to a sparse matrix with a much lower number of nonzero elements than that of low-order sparse grid methods based on finite elements or wavelets, and can be efficiently solved by a suitable sparse solver. Ample numerical results are presented to demonstrate the efficiency and accuracy of our algorithms.
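To make the nodal-to-hierarchical transform concrete: in one dimension with the piecewise-linear hat basis, each hierarchical coefficient ("surplus") is the nodal value minus the mean of its two parent values. The following toy direct version illustrates the idea (the paper's contribution is a fast transform for spectral bases, not this hat-basis sketch):

```python
def hierarchize(v, L):
    """Convert nodal values v[i] at x = i / 2**L (i = 1 .. 2**L - 1,
    zero boundary values) into hierarchical surpluses of the
    piecewise-linear hat basis:
        surplus = value - 0.5 * (left parent + right parent)."""
    a = dict(v)
    for l in range(L, 0, -1):                  # level-l points are the
        step = 2 ** (L - l)                    # odd multiples of step
        for i in range(step, 2 ** L, 2 * step):
            a[i] = v.get(i, 0.0) - 0.5 * (v.get(i - step, 0.0)
                                          + v.get(i + step, 0.0))
    return a
```

Sampling the level-1 hat function u(x) = 2 min(x, 1 - x) yields a single nonzero surplus at x = 1/2, since u already lies in the coarsest subspace.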

88 citations


Journal ArticleDOI
TL;DR: A comprehensive framework for Bayesian estimation of structural nonlinear dynamic economic models on sparse grids to overcome the curse of dimensionality for approximations and provides all algorithms in the open source software JBendge for the solution and estimation of a general class of models.
Abstract: We present a comprehensive framework for Bayesian estimation of structural nonlinear dynamic economic models on sparse grids. The Smolyak operator underlying the sparse grids approach frees global approximation from the curse of dimensionality, and we apply it to a Chebyshev approximation of the model solution. The operator also eliminates the curse from Gaussian quadrature, and we use it for the integrals arising from rational expectations and in three new nonlinear state space filters. The filters substantially decrease the computational burden compared to the sequential importance resampling particle filter. The posterior of the structural parameters is estimated by a new Metropolis-Hastings algorithm with mixing parallel sequences. The parallel extension improves the global maximization property of the algorithm, simplifies the choice of the innovation variances, and allows for unbiased convergence diagnostics and for a simple implementation of the estimation on parallel computers. Finally, we provide all algorithms in the open source software JBendge for the solution and estimation of a general class of models.
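For orientation, a single-chain random-walk Metropolis-Hastings step looks as follows (a generic textbook sketch, not the paper's algorithm, whose contribution is the mixing of parallel sequences):

```python
import math
import random

def metropolis_hastings(log_post, x0, steps, scale, seed=0):
    """Random-walk Metropolis-Hastings for a 1D log-posterior.
    Proposes x' = x + N(0, scale**2) and accepts with probability
    min(1, exp(log_post(x') - log_post(x)))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = [x]
    for _ in range(steps):
        prop = x + rng.gauss(0.0, scale)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    return chain
```

Run on a standard normal log-posterior (`lambda x: -0.5 * x * x`), the sample mean of a long chain approaches 0.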

66 citations


Journal ArticleDOI
TL;DR: A two-scale framework combining a random domain decomposition (RDD) and a probabilistic collocation method (PCM) on sparse grids is proposed to quantify two distinct sources of uncertainty, and yields efficient, robust and non-intrusive approximations for the statistics of diffusion in random composites.

60 citations



Book
23 Dec 2010
TL;DR: Efficient numerical algorithms are developed in order to analyze the long-term behavior of dynamical systems by transfer operator methods, and mean field theory is used to approximate the marginal dynamics on low-dimensional subsystems.
Abstract: In this thesis we develop efficient numerical algorithms to analyze the long-term behavior of dynamical systems by transfer operator methods. Three new approaches are presented. First, a discretization via sparse grids is introduced for general systems, aiming to overcome the curse of dimension. Second, for continuous-time systems the infinitesimal generator of the associated transfer operator semigroup is treated numerically; a robust cell-to-cell approach and a spectral method approach for smooth problems are derived. Third, focusing on the detection of conformation changes in molecules, mean field theory is used to approximate the marginal dynamics on low-dimensional subsystems. Also, conditions are given under which the Galerkin projection of a transfer operator can be related to the small random perturbation of the underlying system.

Proceedings ArticleDOI
17 Feb 2010
TL;DR: This paper focuses on the design of Krylov subspace based iterative solvers that take advantage of the massive parallelism of general-purpose Graphics Processing Units (GPUs), and discusses data structures and efficient implementation of these solvers on NVIDIA's CUDA platform.
Abstract: In many numerical applications arising from computational science and engineering problems, the solution of sparse linear systems is the most prohibitively compute-intensive task. Consequently, the linear solvers need to be carefully chosen and efficiently implemented in order to harness the available computing resources. Krylov subspace based iterative solvers have been widely used for solving large systems of linear equations. In this paper, we focus on the design of such iterative solvers to take advantage of the massive parallelism of general-purpose Graphics Processing Units (GPUs). We consider the Stabilized BiConjugate Gradient (BiCGStab) and Conjugate Gradient Squared (CGS) methods for the solution of sparse linear systems with unsymmetric coefficient matrices. We discuss data structures and efficient implementation of these solvers on NVIDIA's CUDA platform. We evaluate the scalability and performance of our implementations in the context of a financial engineering problem of solving multidimensional option pricing PDEs using the sparse grid combination technique.
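As a reference point for the method itself (not the authors' CUDA implementation), an unpreconditioned BiCGStab iteration can be sketched with NumPy standing in for the GPU kernels:

```python
import numpy as np

def bicgstab(A, b, tol=1e-10, max_iter=200):
    """Minimal unpreconditioned BiCGStab for a square (possibly
    unsymmetric) system A x = b. A sketch for illustration only."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    r_hat = r.copy()                 # fixed shadow residual
    rho = alpha = omega = 1.0
    v = np.zeros_like(b, dtype=float)
    p = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        rho_new = r_hat @ r
        beta = (rho_new / rho) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho_new / (r_hat @ v)
        s = r - alpha * v
        if np.linalg.norm(s) < tol:  # converged at the half step
            return x + alpha * p
        t = A @ s
        omega = (t @ s) / (t @ t)    # stabilization step
        x = x + alpha * p + omega * s
        r = s - omega * t
        rho = rho_new
        if np.linalg.norm(r) < tol:
            return x
    return x
```

On a small unsymmetric system such as A = [[4, 1], [2, 3]], the iteration converges to the solution of A x = b in a couple of steps; the GPU work in the paper lies in parallelizing the sparse matrix-vector products and reductions that dominate each iteration.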

Journal ArticleDOI
TL;DR: In this article, a data-driven stochastic collocation approach to include the effect of uncertain design parameters during complex multi-physics simulation of Micro-ElectroMechanical Systems (MEMS) is presented.
Abstract: This work presents a data-driven stochastic collocation approach to include the effect of uncertain design parameters during complex multi-physics simulation of Micro-ElectroMechanical Systems (MEMS). The proposed framework comprises two key steps: first, probabilistic characterization of the input uncertain parameters based on available experimental information, and second, propagation of these uncertainties through the predictive model to relevant quantities of interest. The uncertain input parameters are modeled as independent random variables, for which the distributions are estimated based on available experimental observations, using a nonparametric diffusion-mixing-based estimator, Botev (Nonparametric density estimation via diffusion mixing. Technical Report, 2007). The diffusion-based estimator derives from the analogy between the kernel density estimation (KDE) procedure and the heat dissipation equation and constructs density estimates that are smooth and asymptotically consistent. The diffusion model allows for the incorporation of the prior density and leads to an improved density estimate, in comparison with the standard KDE approach, as demonstrated through several numerical examples. Following the characterization step, the uncertainties are propagated to the output variables using the stochastic collocation approach, based on sparse grid interpolation, Smolyak (Soviet Math. Dokl. 1963; 4:240–243). The developed framework is used to study the effect of variations in Young's modulus, induced as a result of variations in manufacturing process parameters or heterogeneous measurements, on the performance of a MEMS switch. Copyright © 2010 John Wiley & Sons, Ltd.
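For contrast with the diffusion-based estimator, the standard Gaussian KDE baseline is simply an average of normal densities centred at the samples; a minimal sketch, with the bandwidth h chosen by the user rather than by the diffusion approach:

```python
import math

def gaussian_kde(samples, h):
    """Standard Gaussian kernel density estimate: the average of
    normal densities of width h centred at each sample point."""
    n = len(samples)
    norm = n * h * math.sqrt(2.0 * math.pi)
    def pdf(x):
        return sum(math.exp(-0.5 * ((x - s) / h) ** 2)
                   for s in samples) / norm
    return pdf
```

For a symmetric sample set such as [-1, 1], the resulting density is symmetric about 0 and peaks between the samples, which is the smooth behaviour the diffusion estimator refines with a better-chosen smoothing operator.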

Journal ArticleDOI
Alfio Borzì1
TL;DR: A multigrid and sparse-grid computational approach to solving nonlinear elliptic optimal control problems with random coefficients is presented, and the influence of the randomness of the problem's coefficients on the control provided by optimal control theory is investigated.
Abstract: A multigrid and sparse-grid computational approach to solving nonlinear elliptic optimal control problems with random coefficients is presented. The proposed scheme combines multigrid methods with sparse-grid collocation techniques. Within this framework, the influence of the randomness of the problem's coefficients on the control provided by optimal control theory is investigated. Numerical results on the computation of stochastic optimal control solutions and the formulation of mean control functions are presented.

Journal ArticleDOI
TL;DR: This sparse grid-based experiment design process provides a systematic and computationally efficient exploration over the entire uncertain parameter space of potential model structures to resolve the uncertainty in the non-linear systems biology model dynamics.
Abstract: The sparse grid-based experiment design algorithm sequentially selects an experimental design point to discriminate between hypotheses for given experimental conditions. Sparse grids efficiently screen the global uncertain parameter space to identify acceptable parameter subspaces. Clustering the located acceptable parameter vectors by the similarity of the simulated model trajectories characterises the data-compatible model dynamics. The experiment design algorithm capitalises on the diversity of the experimentally distinguishable system output dynamics to select the design point that best discerns between competing model-structure and parameter-encoded hypotheses. As opposed to designing the experiments to explicitly reduce uncertainty in the model parameters, this approach selects design points to differentiate between dynamical behaviours. This approach further differs from other experimental design methods in that it simultaneously addresses both parameter- and structural-based uncertainty that is applicable to some ill-posed problems where the number of uncertain parameters exceeds the amount of data, places very few requirements on the model type, available data and a priori parameter estimates, and is performed over the global uncertain parameter space. The experiment design algorithm is demonstrated on a mitogen-activated protein kinase cascade model. The results show that system dynamics are highly uncertain with limited experimental data. Nevertheless, the algorithm requires only three additional experimental data points to simultaneously discriminate between possible model structures and acceptable parameter values. This sparse grid-based experiment design process provides a systematic and computationally efficient exploration over the entire uncertain parameter space of potential model structures to resolve the uncertainty in the non-linear systems biology model dynamics.

Proceedings ArticleDOI
23 May 2010
TL;DR: In this paper, the authors proposed a model order reduction method for finite element approximations of passive electromagnetic devices under random input conditions, where the reduced order system matrices are represented in terms of their convergent orthogonal polynomial expansions of input random variables.
Abstract: A methodology is proposed for the model order reduction of finite element approximations of passive electromagnetic devices under random input conditions. In this approach, the reduced order system matrices are represented in terms of their convergent orthogonal polynomial expansions of input random variables. The coefficients of these polynomials, which are matrices, are obtained by repeated, deterministic model order reduction of finite element models generated for specific values of the input random variables. These values are chosen efficiently in a multi-dimensional grid using a Smolyak algorithm. The stochastic reduced order model is represented in the form of an augmented system which can be used for generating the desired statistics of the specific system response. The proposed method provides for significant improvement in computational efficiency over standard Monte Carlo.

Book ChapterDOI
21 Dec 2010
TL;DR: This chapter suggests an algorithm that is based on GHK but uses an adaptive version of sparse-grids integration (SGI) instead of simulation, which generalizes Gaussian quadrature in a way such that the computational costs do not grow exponentially with the number of dimensions.
Abstract: In empirical research, panel (and multinomial) probit models are leading examples for the use of maximum simulated likelihood estimators. The Geweke–Hajivassiliou–Keane (GHK) simulator is the most widely used technique for this type of problem. This chapter suggests an algorithm that is based on GHK but uses an adaptive version of sparse-grids integration (SGI) instead of simulation. It is adaptive in the sense that it uses an automated change-of-variables to make the integration problem numerically better behaved along the lines of efficient importance sampling (EIS) and adaptive univariate quadrature. The resulting integral is approximated using SGI that generalizes Gaussian quadrature in a way such that the computational costs do not grow exponentially with the number of dimensions. Monte Carlo experiments show an impressive performance compared to the original GHK algorithm, especially in difficult cases such as models with high intertemporal correlations.

Journal ArticleDOI
TL;DR: The key to this method is a dimension reduction technique based on a Karhunen–Loeve expansion, also known as proper orthogonal decomposition: using the eigenvectors of a covariance operator, the differential equation is projected onto a low-dimensional problem.
Abstract: Hilbert space-valued jump-diffusion models are employed for various markets and derivatives. Examples include swaptions, which depend on continuous forward curves, and basket options on stocks. Usually, no analytical pricing formulas are available for such products. Numerical methods, on the other hand, suffer from exponentially increasing computational effort with increasing dimension of the problem, the "curse of dimension." In this paper, we present an efficient approach using partial integro-differential equations. The key to this method is a dimension reduction technique based on a Karhunen–Loeve expansion, which is also known as proper orthogonal decomposition. Using the eigenvectors of a covariance operator, the differential equation is projected to a low-dimensional problem. Convergence results for the projection are given, and the numerical aspects of the implementation are discussed. An approximate solution is computed using a sparse grid combination technique and a discontinuous Galerkin discretization.

Journal ArticleDOI
Nils Reich1
TL;DR: A sparse tensor product wavelet compression scheme for the Galerkin finite element discretization of the corresponding integrodifferential equations Bu = f on (0, 1)^n with possibly large n.
Abstract: For a class of anisotropic integrodifferential operators B arising as semigroup generators of Markov processes, we present a sparse tensor product wavelet compression scheme for the Galerkin finite element discretization of the corresponding integrodifferential equations Bu = f on (0, 1)^n with possibly large n. Under certain conditions on B, the scheme is of essentially optimal and dimension-independent complexity O(h^{-1} |log h|^{2(n-1)}) without corrupting the convergence or smoothness requirements of the original sparse tensor finite element scheme. If the conditions on B are not satisfied, the complexity can be bounded by O(h^{-(1+e)}), where e > 0 tends to zero with an increasing number of the wavelets' vanishing moments. Here h denotes the width of the corresponding finite element mesh. The operators under consideration are assumed to be of non-negative (anisotropic) order and admit a non-standard kernel κ(·, ·) that can be singular on all secondary diagonals. Practical examples of such operators from mathematical finance are given and some numerical results are presented.

Journal ArticleDOI
TL;DR: In this paper, the dimension-wise expansion model was used for cross-section parameterization and the components of the model were approximated with tensor products of orthogonal polynomials.

DOI
01 Jan 2010
TL;DR: This work solves the stationary monochromatic radiative transfer equation with a multi-level Galerkin FEM in physical space and a spectral discretization with harmonics in solid angle and shows that the benefits of the concept of sparse tensor products, known from the context of sparse grids, can also be leveraged in combination with a spectralDiscretization.
Abstract: The stationary monochromatic radiative transfer equation is a partial differential transport equation stated on a five-dimensional phase space. To obtain a well-posed problem, boundary conditions have to be prescribed on the inflow part of the domain boundary. We solve the equation with a multi-level Galerkin FEM in physical space and a spectral discretization with harmonics in solid angle, and show that the benefits of the concept of sparse tensor products, known from the context of sparse grids, can also be leveraged in combination with a spectral discretization. Our method allows us to include high spectral orders without incurring the "curse of dimension" of a five-dimensional computational domain. Neglecting boundary conditions, we find analytically that for smooth solutions, the convergence rate of the full tensor product method is retained in our method up to a logarithmic factor, while the number of degrees of freedom grows essentially only as fast as for the purely spatial problem. For the case with boundary conditions, we propose a splitting of the physical function space and a conforming tensorization. Numerical experiments in two physical and one angular dimension show evidence for the theoretical convergence rates to hold in the latter case as well.

Proceedings ArticleDOI
13 Sep 2010
TL;DR: This work explores the use of polynomial order refinement (p-refinement) approaches, both uniform and adaptive, in order to automate the assessment of UQ convergence and improve computational efficiency.
Abstract: Non-intrusive polynomial chaos expansion (NIPCE) methods based on orthogonal polynomials and stochastic collocation (SC) methods based on Lagrange interpolation polynomials are attractive techniques for uncertainty quantification (UQ) due to their strong mathematical basis and ability to produce functional representations of stochastic dependence. Both techniques reside in the collocation family, in that they sample the response metrics of interest at selected locations within the random domain without intrusion to simulation software. In this work, we explore the use of polynomial order refinement (p-refinement) approaches, both uniform and adaptive, in order to automate the assessment of UQ convergence and improve computational efficiency. In the first class of p-refinement approaches, we employ a general-purpose metric of response covariance to control the uniform and adaptive refinement processes. For the adaptive case, we detect anisotropy in the importance of the random variables as determined through variance-based decomposition and exploit this decomposition through anisotropic tensor-product and anisotropic sparse grid constructions. In the second class of p-refinement approaches, we move from anisotropic sparse grids to generalized sparse grids and employ a goal-oriented refinement process using statistical quantities of interest. Since these refinement goals can frequently involve metrics that are not analytic functions of the expansions (i.e., beyond low-order response moments), we additionally explore mechanisms for accurately and efficiently estimating tail probabilities from the expansions based on importance sampling.

Journal ArticleDOI
Gisela Widmer1
TL;DR: An efficient solver based on the conjugate gradient method with a subspace correction preconditioner is presented and Numerical experiments show that the linear system can be solved at computational costs that are nearly proportional to the number of degrees of freedom N in the discretization.
Abstract: The stationary monochromatic radiative transfer equation is posed in five dimensions, with the intensity depending on both a position in a three-dimensional domain as well as a direction. For nonscattering radiative transfer, sparse finite elements [2007, "Sparse Finite Elements for Non-Scattering Radiative Transfer in Diffuse Regimes," ICHMT Fifth International Symposium of Radiative Transfer, Bodrum, Turkey; 2008, "Sparse Adaptive Finite Elements for Radiative Transfer," J. Comput. Phys., 227(12), pp. 6071-6105] have been shown to be an efficient discretization strategy if the intensity function is sufficiently smooth. Compared with the discrete ordinates method, they make it possible to significantly reduce the number of degrees of freedom N in the discretization with almost no loss of accuracy. However, using a direct solver to solve the resulting linear system requires O(N^3) operations. In this paper, an efficient solver based on the conjugate gradient method with a subspace correction preconditioner is presented. Numerical experiments show that the linear system can be solved at computational costs that are nearly proportional to the number of degrees of freedom N in the discretization.

Book ChapterDOI
01 Jan 2010
TL;DR: This paper combines the idea of Theta-calculus with an approach based on partial differential equations (PDE) to get a higher accuracy and deduce the resulting pricing algorithm that is general and independent from the type of product.
Abstract: In [An Introduction to Theta-calculus (2005)], Dirnstorfer introduced the Theta-notation for modeling financial contracts consistently by a sequence of operators. This easy-to-use modeling for financial engineers together with Monte Carlo methods is already applied successfully for option pricing. We combined the idea of Theta-calculus with an approach based on partial differential equations (PDE) to get a higher accuracy. In this paper, we give a short introduction to Theta-calculus and deduce the resulting pricing algorithm that is – in contrast to common PDE based pricing techniques – general and independent from the type of product. With the use of sparse grids, this method also works for higher dimensional problems. Thus, the approach allows an easy access to the numerical pricing of various types of multi-dimensional problems.

Proceedings ArticleDOI
19 Apr 2010
TL;DR: The algorithmical structure of efficient algorithms operating on sparse grids are introduced, and it is demonstrated how they can be used to derive an efficient parallelization with OpenMP of the Black-Scholes solver.
Abstract: We present the parallelization of a sparse grid finite element discretization of the Black-Scholes equation, which is commonly used for option pricing. Sparse grids allow to handle higher dimensional options than classical approaches on full grids, and can be extended to a fully adaptive discretization method. We introduce the algorithmical structure of efficient algorithms operating on sparse grids, and demonstrate how they can be used to derive an efficient parallelization with OpenMP of the Black-Scholes solver. We show results on different commodity hardware systems based on multi-core architectures with up to 8 cores, and discuss the parallel performance using Intel and AMD CPUs.

Journal ArticleDOI
TL;DR: A novel method to significantly speed up cosmological parameter sampling based on constructing an interpolation of the cosmic microwave background log-likelihood based on sparse grids, which is used as a shortcut for the likelihood evaluation.
Abstract: We present a novel method to significantly speed up cosmological parameter sampling. The method relies on constructing an interpolation of the cosmic microwave background log-likelihood based on sparse grids, which is used as a shortcut for the likelihood evaluation. We obtain excellent results over a large region in parameter space, comprising about 25 log-likelihoods around the peak, and we reproduce the one-dimensional projections of the likelihood almost perfectly. In speed and accuracy, our technique is competitive to existing approaches to accelerate parameter estimation based on polynomial interpolation or neural networks, while having some advantages over them. In our method, there is no danger of creating unphysical wiggles as it can be the case for polynomial fits of a high degree. Furthermore, we do not require a long training time as for neural networks, but the construction of the interpolation is determined by the time it takes to evaluate the likelihood at the sampling points, which can be parallelized to an arbitrary degree. Our approach is completely general, and it can adaptively exploit the properties of the underlying function. We can thus apply it to any problem where an accurate interpolation of a function is needed.

Book ChapterDOI
N. Zabaras1
12 Oct 2010
TL;DR: To solve large-scale problems involving high-dimensional stochastic spaces (in a scalable way) and to allow non-smooth variations of the solution in the random space, there have been recent efforts to couple the fast convergence of the Galerkin methods with the decoupled nature of Monte-Carlo sampling.
Abstract: In recent years there has been significant progress in quantifying and modeling the effect of input uncertainties in the response of PDEs using non-statistical methods. The presence of uncertainties is incorporated by transforming the PDEs representing the system into a set of stochastic PDEs (SPDEs). The spectral representation of the stochastic space resulted in the development of the Generalized Polynomial Chaos Expansion (GPCE) methods (Ghanem 1991; Ghanem and Spanos 1991; Ghanem and Sharkar 2002; Xiu and Karniadakis 2002, 2003). This approach requires extensive revamping of a deterministic simulator to convert it into its stochastic counterpart and has issues with scalability when applied to large scale problems. To solve large-scale problems involving high-dimensional stochastic spaces (in a scalable way) and to allow non-smooth variations of the solution in the random space, there have been recent efforts to couple the fast convergence of the Galerkin methods with the decoupled nature of Monte-Carlo sampling (Velamur Asokan and Zabaras 2005; Babuska et al. 2005a,b). The Smolyak algorithm has been used recently to build sparse grid interpolants in high-dimensional space (Nobile et al. 2008; Xiu

Journal ArticleDOI
TL;DR: A Sparse Grid Distance Transform is presented, an algorithm for computing and storing large distance fields, which can be recovered from the distance fields of sub-block cluster boundaries and the binary information of the clusters through a one-time distance transform.
Abstract: We present the Sparse Grid Distance Transform (SGDT), an algorithm for computing and storing large distance fields. Although the SGDT is based on a divide-and-conquer algorithm for distance transforms, its data structure is much simpler. Our observations revealed that a distance field can be recovered from the distance fields of sub-block cluster boundaries and the binary information of the clusters through a one-time distance transform. This means that it is sufficient to consider only the cluster boundaries and to represent the clusters as binary volumes. As a result, memory usage is less than 0.5% of the size of the raw files, and the method works in-core.
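The recovery principle this abstract describes can be illustrated in one dimension (a simplified sketch, not the paper's algorithm): seeding a two-pass L1 distance transform with stored boundary distances reproduces a sub-block's interior exactly from the boundary values plus the binary content. The function name and the `left`/`right` seed parameters are hypothetical:

```python
def distance_transform_1d(binary, left=float('inf'), right=float('inf')):
    # Two-pass L1 distance transform to the nearest True cell; the optional
    # left/right seeds play the role of stored sub-block boundary distances.
    dist = [0 if b else float('inf') for b in binary]
    prev = left
    for i in range(len(dist)):           # forward sweep
        dist[i] = min(dist[i], prev + 1)
        prev = dist[i]
    nxt = right
    for i in range(len(dist) - 1, -1, -1):  # backward sweep
        dist[i] = min(dist[i], nxt + 1)
        nxt = dist[i]
    return dist

# Recover the second half of a field from its boundary distance and binary content
field = [False, True, False, False, False, False, True, False]
full = distance_transform_1d(field)
block = distance_transform_1d(field[4:], left=full[3])  # equals full[4:]
```

Only the boundary value `full[3]` and the binary occupancy of the block are needed, which is the observation that lets the SGDT discard interior distance values.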

Proceedings ArticleDOI
02 Aug 2010
TL;DR: In this paper, a new nonlinear filter based on Sparse Gauss-Hermite Quadrature (SGHQ) is proposed for orbit estimation, which uses the sparse grid method based on Smolyak's product rule to alleviate the curse-of-dimensionality problem.
Abstract: In this paper, a new nonlinear filter based on Sparse Gauss-Hermite Quadrature (SGHQ) is proposed for orbit estimation. Although Gauss-Hermite Quadrature (GHQ) has been widely used in numerical integration, its usage in nonlinear filtering is relatively new, with a few successful applications to one-dimensional problems. It is difficult to apply to higher-dimensional nonlinear filtering problems because the conventional GHQ-based filter relies on tensor-product operations, so the number of points increases exponentially with dimension. In this work, we use the sparse grid method based on Smolyak's product rule to design a new sparse GHQ filter that alleviates the curse-of-dimensionality problem. The number of SGHQ points needed for higher-dimensional problems is dramatically smaller than that of the original method. The performance of this new filter is demonstrated on the orbit estimation problem, yielding better results than the Extended Kalman Filter (EKF).
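As a rough sketch of the idea (under the assumption of the standard Smolyak combination formula, with the function names `gh_rule` and `sparse_gh` invented here), one-dimensional Gauss-Hermite rules can be combined into a sparse quadrature for Gaussian expectations, the building block of a Gauss-Hermite filter's moment updates:

```python
import numpy as np
from itertools import product
from math import comb

def gh_rule(n):
    # n-point Gauss-Hermite rule for the standard normal (probabilists' weight)
    x, w = np.polynomial.hermite_e.hermegauss(n)
    return x, w / np.sqrt(2.0 * np.pi)   # normalize so the weights sum to 1

def sparse_gh(f, dim, level):
    # Smolyak combination of 1D Gauss-Hermite rules approximating E[f(X)],
    # X ~ N(0, I), with far fewer points than the full tensor product.
    q = dim + level - 1
    total = 0.0
    for idx in product(range(1, q - dim + 2), repeat=dim):
        s = sum(idx)
        if not (q - dim + 1 <= s <= q):
            continue
        coeff = (-1) ** (q - s) * comb(dim - 1, q - s)  # combination coefficient
        rules = [gh_rule(i) for i in idx]
        for nodes in product(*(zip(r[0], r[1]) for r in rules)):
            x = [p for p, _ in nodes]
            w = 1.0
            for _, wk in nodes:
                w *= wk
            total += coeff * w * f(x)
    return total
```

For example, `sparse_gh(lambda x: x[0]**2 + x[1]**2, 2, 2)` recovers the exact second moment of a 2D standard normal while using only the small tensor grids with index sum at most 3.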

Journal ArticleDOI
TL;DR: In this article, an interpolation of the CMB log-likelihood based on sparse grids is proposed, which is used as a shortcut for the likelihood evaluation.
Abstract: We present a novel method to significantly speed up cosmological parameter sampling. The method relies on constructing an interpolation of the CMB log-likelihood based on sparse grids, which is used as a shortcut for the likelihood evaluation. We obtain excellent results over a large region in parameter space, comprising about 25 log-likelihoods around the peak, and we reproduce the one-dimensional projections of the likelihood almost perfectly. In speed and accuracy, our technique is competitive with existing approaches to accelerate parameter estimation based on polynomial interpolation or neural networks, while having some advantages over them. In our method, there is no danger of creating unphysical wiggles, as can be the case for polynomial fits of a high degree. Furthermore, we do not require a long training time as for neural networks; instead, the construction of the interpolation is determined by the time it takes to evaluate the likelihood at the sampling points, which can be parallelised to an arbitrary degree. Our approach is completely general, and it can adaptively exploit the properties of the underlying function. We can thus apply it to any problem where an accurate interpolation of a function is needed.
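The surrogate idea in this abstract can be sketched in one dimension (the paper works with adaptive sparse grids in several parameters; the function name `hierarchical_interpolant` and the toy target are invented here). Each expensive function evaluation contributes a hierarchical surplus, and the resulting interpolant is then cheap to evaluate:

```python
def hierarchical_interpolant(f, max_level):
    # 1D hierarchical hat-function interpolant on [0, 1] (zero boundary values);
    # in the sparse-grid method, f would be the expensive log-likelihood.
    surplus = {}  # (level, index) -> hierarchical surplus coefficient

    def hat(level, index, x):
        # Standard hat basis function centered at index * 2^-level
        h = 2.0 ** -level
        return max(0.0, 1.0 - abs(x - index * h) / h)

    def interp(x):
        return sum(v * hat(l, i, x) for (l, i), v in surplus.items())

    for level in range(1, max_level + 1):
        for index in range(1, 2 ** level, 2):   # odd indices are the new points
            x = index * 2.0 ** -level
            # Surplus: what the coarser levels missed at the new point
            surplus[(level, index)] = f(x) - interp(x)
    return interp
```

The surpluses also drive adaptivity: refinement can be restricted to points whose surplus is large, which is how the interpolation "adaptively exploits the properties of the underlying function". Evaluating `f` at the sampling points of one level is embarrassingly parallel, matching the parallelisation claim above.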