
Showing papers on "Sparse grid published in 2011"


Book ChapterDOI
01 Jan 2011
TL;DR: Numerical results for linear elliptic SPDEs indicate a slight computational work advantage of isotropic SC over SG, with SC-SM and SG-TD being the best choices of approximation spaces for each method.
Abstract: Much attention has recently been devoted to the development of Stochastic Galerkin (SG) and Stochastic Collocation (SC) methods for uncertainty quantification. An open and relevant research topic is the comparison of these two methods. By introducing a suitable generalization of the classical sparse grid SC method, we are able to compare SG and SC on the same underlying multivariate polynomial space in terms of accuracy vs. computational work. The approximation spaces considered here include isotropic and anisotropic versions of Tensor Product (TP), Total Degree (TD), Hyperbolic Cross (HC) and Smolyak (SM) polynomials. Numerical results for linear elliptic SPDEs indicate a slight computational work advantage of isotropic SC over SG, with SC-SM and SG-TD being the best choices of approximation spaces for each method. Finally, numerical results corroborate the optimality of the theoretical estimate of anisotropy ratios introduced by the authors in a previous work for the construction of anisotropic approximation spaces.
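For orientation, the four approximation spaces compared above are typically defined through constraints on the polynomial multi-degree p = (p_1, ..., p_N) at approximation level w. A common formulation (a sketch only; conventions, especially for the Smolyak space, vary between papers) is

    \begin{aligned}
    \mathrm{TP}(w)&:\ \max_{1\le n\le N} p_n \le w, &\qquad
    \mathrm{TD}(w)&:\ \sum_{n=1}^{N} p_n \le w, \\
    \mathrm{HC}(w)&:\ \prod_{n=1}^{N} (p_n+1) \le w+1, &\qquad
    \mathrm{SM}(w)&:\ \sum_{n=1}^{N} f(p_n) \le f(w),
    \end{aligned}
    \qquad f(0)=0,\quad f(1)=1,\quad f(p)=\lceil \log_2 p\rceil \ \text{for } p\ge 2.

Anisotropic variants replace each constraint by a weighted version with per-dimension weights.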

211 citations


Journal ArticleDOI
TL;DR: Two approaches for moment design sensitivities are presented, one involving response function expansions over both design and uncertain variables and one involving response derivative expansions over only the uncertain variables.
Abstract: Non-intrusive polynomial chaos expansion (PCE) and stochastic collocation (SC) methods are attractive techniques for uncertainty quantification (UQ) due to their strong mathematical basis and ability to produce functional representations of stochastic variability. PCE estimates coefficients for known orthogonal polynomial basis functions based on a set of response function evaluations, using sampling, linear regression, tensor-product quadrature, or Smolyak sparse grid approaches. SC, on the other hand, forms interpolation functions for known coefficients, and requires the use of structured collocation point sets derived from tensor product or sparse grids. When tailoring the basis functions or interpolation grids to match the forms of the input uncertainties, exponential convergence rates can be achieved with both techniques for general probabilistic analysis problems. Once PCE or SC representations have been obtained for a response metric of interest, analytic expressions can be derived for the moments of the expansion and for the design derivatives of these moments, allowing for efficient design under uncertainty formulations involving moment control (e.g., robust design). This paper presents two approaches for moment design sensitivities, one involving response function expansions over both design and uncertain variables and one involving response derivative expansions over only the uncertain variables. These approaches present a trade-off: increased dimensionality in the expansions (and therefore more simulation runs required to construct them) with global expansion validity, versus increased data requirements per simulation with local expansion validity. Given this capability for analytic moments and their sensitivities, we explore bilevel, sequential, and multifidelity formulations for optimization under uncertainty (OUU). Initial experiences with these approaches are presented for a number of benchmark test problems.
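Once an expansion of the response is available, the analytic moments mentioned above follow from orthogonality; a hedged sketch in standard PCE notation (which may differ from the paper's) is

    R(\xi) \approx \sum_{k=0}^{P} \alpha_k \Psi_k(\xi), \qquad
    \mu = \alpha_0, \qquad
    \sigma^2 = \sum_{k=1}^{P} \alpha_k^2\, \langle \Psi_k^2 \rangle, \qquad
    \frac{d\mu}{ds} = \frac{d\alpha_0}{ds}, \qquad
    \frac{d\sigma^2}{ds} = 2 \sum_{k=1}^{P} \alpha_k\, \frac{d\alpha_k}{ds}\, \langle \Psi_k^2 \rangle,

where s is a design variable and the coefficient derivatives d\alpha_k/ds come from either of the two expansion approaches described in the abstract.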

104 citations


Journal ArticleDOI
TL;DR: This work solves the stationary monochromatic radiative transfer equation with a multi-level Galerkin FEM in physical space and a spectral discretization with harmonics in solid angle and shows that the benefits of the concept of sparse tensor products, known from the context of sparse grids, can also be leveraged in combination with a spectral discretization.

68 citations


ReportDOI
TL;DR: It is demonstrated that polynomial-based rules out-perform number-theoretic quadrature (Monte Carlo) rules both in terms of efficiency and accuracy.
Abstract: Efficient, accurate, multi-dimensional, numerical integration has become an important tool for approximating the integrals which arise in modern economic models built on unobserved heterogeneity, incomplete information, and uncertainty. This paper demonstrates that polynomial-based rules out-perform number-theoretic quadrature (Monte Carlo) rules both in terms of efficiency and accuracy. To show the impact a quadrature method can have on results, we examine the performance of these rules in the context of Berry, Levinsohn, and Pakes (1995)’s model of product differentiation, where Monte Carlo methods introduce considerable numerical error and instability into the computations. These problems include inaccurate point estimates, excessively tight standard errors, instability of the inner loop ‘contraction’ mapping for inverting market shares, and poor convergence of several state-of-the-art solvers when computing point estimates. Both monomial rules and sparse grid methods lack these problems and provide a more accurate, cheaper method for quadrature. Finally, we demonstrate how researchers can easily utilize high quality, high dimensional quadrature rules in their own work.
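The paper's central comparison can be illustrated with a minimal sketch: a toy smooth logit-style integrand (hypothetical, standing in for the BLP market-share integrals) integrated against a bivariate standard normal, once with Monte Carlo draws and once with a tensor-product Gauss-Hermite rule.

    import numpy as np
    from numpy.polynomial.hermite_e import hermegauss

    # Toy smooth integrand (hypothetical), integrated against N(0, I_2).
    def f(x, y):
        return 1.0 / (1.0 + np.exp(-(1.5 * x + 0.5 * y)))

    # Monte Carlo: 10,000 pseudo-random draws.
    rng = np.random.default_rng(0)
    draws = rng.standard_normal((10_000, 2))
    mc = f(draws[:, 0], draws[:, 1]).mean()

    # Tensor-product Gauss-Hermite rule (probabilists' weight exp(-x^2/2)):
    # 10 nodes per dimension, i.e. 100 deterministic points in total.
    x, w = hermegauss(10)
    w = w / np.sqrt(2 * np.pi)            # normalize to the N(0,1) density
    X, Y = np.meshgrid(x, x)
    quad = np.sum(np.outer(w, w) * f(X, Y))

    print(mc, quad)                       # two estimates of E[f]

With 100 deterministic nodes, the quadrature value for a smooth integrand like this is typically accurate to far more digits than the 10,000-draw Monte Carlo estimate, which is the efficiency/accuracy gap the abstract describes.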

59 citations


Journal ArticleDOI
TL;DR: It is demonstrated that in the commonly encountered cases where a discontinuity resides on a small subset of the dimensions, the present method becomes "optimal", in the sense that the total number of points required for function evaluations depends linearly on the dimensionality of the space.

57 citations


Book ChapterDOI
Markus Holtz1
01 Jan 2011
TL;DR: In this chapter, various numerical experiments with different applications from finance are presented and the performance of the different numerical quadrature methods from Chapter 3 and Chapter 4 are studied.
Abstract: In this chapter, we present various numerical experiments with different applications from finance. We study the performance of the different numerical quadrature methods from Chapter 3 and Chapter 4 and investigate the impact of the different approaches for dimension reduction and smoothing from Chapter 5. Parts of this chapter are taken from [58].

54 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the convergence rate of interpolating splines with respect to sparse grids for Besov spaces of dominating mixed smoothness (Tensor Product Besov Spaces).
Abstract: We investigate the rate of convergence of interpolating splines with respect to sparse grids for Besov spaces of dominating mixed smoothness (tensor product Besov spaces). Main emphasis is given to the approximation by piecewise linear functions.

53 citations


Journal ArticleDOI
TL;DR: An explicit formula for evaluating a Lagrange basis interpolating polynomial associated with the Chebyshev extrema is recovered, which allows one to manipulate the sparse grid collocation results in a highly efficient manner.
Abstract: The stochastic collocation method using sparse grids has become a popular choice for performing stochastic computations in high dimensional (random) parameter space. In addition to providing highly accurate stochastic solutions, the sparse grid collocation results naturally contain sensitivity information with respect to the input random parameters. In this paper, we use the sparse grid interpolation and cubature methods of Smolyak together with combinatorial analysis to give a computationally efficient method for computing the global sensitivity values of Sobol'. This method allows for approximation of all main effect and total effect values from evaluation of f on a single set of sparse grids. We discuss convergence of this method, apply it to several test cases and compare to existing methods. As a result which may be of independent interest, we recover an explicit formula for evaluating a Lagrange basis interpolating polynomial associated with the Chebyshev extrema. This allows one to manipulate the sparse grid collocation results in a highly efficient manner.
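The paper derives its own explicit formula; as a point of comparison, the classical barycentric form of the Lagrange interpolant at the Chebyshev extrema (Berrut & Trefethen, 2004) can be sketched as follows.

    import numpy as np

    def cheb_extrema(n):
        """Chebyshev extrema (Gauss-Lobatto points) x_j = cos(j*pi/n), j = 0..n."""
        return np.cos(np.pi * np.arange(n + 1) / n)

    def barycentric_eval(xj, fj, x):
        # Classical barycentric weights for the Chebyshev extrema:
        # w_j = (-1)^j, halved at the two endpoints.
        w = (-1.0) ** np.arange(len(xj))
        w[0] *= 0.5
        w[-1] *= 0.5
        d = x[:, None] - xj[None, :]
        d[d == 0.0] = 1e-300      # tiny denominator makes that node dominate
        c = w / d
        return (c @ fj) / c.sum(axis=1)

    # Usage: interpolate cos(3x) through 17 Chebyshev extrema.
    xj = cheb_extrema(16)
    vals = barycentric_eval(xj, np.cos(3 * xj), np.linspace(-1.0, 1.0, 7))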

53 citations


Journal ArticleDOI
TL;DR: In this article, a new Bayesian computational approach is developed to estimate spatially varying parameters; based on a hierarchically structured sparse grid, a multiscale representation of the spatial field is constructed.
Abstract: A new Bayesian computational approach is developed to estimate spatially varying parameters. The sparse grid collocation method is adopted to parameterize the spatial field. Based on a hierarchically structured sparse grid, a multiscale representation of the spatial field is constructed. An adaptive refinement strategy is then used for computing the spatially varying parameter. A sequential Monte Carlo (SMC) sampler is used to explore the posterior distributions defined on multiple scales. The SMC sampling is directly parallelizable and is superior to conventional Markov chain Monte Carlo methods for multi-modal target distributions. The samples obtained at coarser levels of resolution are used to provide prior information for the estimation at finer levels. This Bayesian computational approach is rather general and applicable to various spatially varying parameter estimation problems. The method is demonstrated with the estimation of permeability in flows through porous media.

41 citations


Journal ArticleDOI
TL;DR: A practical nonintrusive multiscale solver that permits consideration of uncertainties in heterogeneous materials without exhausting the available computational resources is described and verified against the Latin Hypercube Monte Carlo method.
Abstract: In this paper, we describe a practical nonintrusive multiscale solver that permits consideration of uncertainties in heterogeneous materials without exhausting the available computational resources. The computational complexity of analyzing heterogeneous material systems is governed by the physical and probability spaces at multiple scales. To deal with these large spaces, we employ a reduced order homogenization approach in combination with the Karhunen–Loeve expansion and the stochastic collocation method based on sparse grids. The resulting nonintrusive multiscale solver, which is aimed at providing practical solutions for complex multiscale stochastic problems, has been verified against the Latin Hypercube Monte Carlo method.
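A minimal sketch of the Karhunen-Loeve ingredient, assuming a 1D exponential covariance and a simple Nystrom-type discretization (the reduced order homogenization and collocation layers of the paper are not reproduced here):

    import numpy as np

    # 1D grid and an assumed exponential covariance C(x, y) = exp(-|x - y| / L).
    x = np.linspace(0.0, 1.0, 200)
    h = x[1] - x[0]
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.3)

    # Nystrom-type discrete KL: eigendecomposition of the weighted covariance.
    vals, vecs = np.linalg.eigh(C * h)
    order = np.argsort(vals)[::-1]        # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]

    m = 5                                 # truncation: keep the m largest modes
    def kl_realization(xi):
        """Field realization for a sample xi of m i.i.d. N(0,1) variables."""
        return (vecs[:, :m] / np.sqrt(h)) @ (np.sqrt(vals[:m]) * xi)

    field = kl_realization(np.random.default_rng(1).standard_normal(m))

Each collocation point of the sparse grid then supplies one such vector xi, and the solver is run on the corresponding field realization.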

37 citations


Journal ArticleDOI
TL;DR: An adaptive approach for the computation of problems with discrete random variables is introduced and its efficiency for scattering problems with a random number of holes is demonstrated.

Journal ArticleDOI
TL;DR: This article extends the idea of the linear scaling algorithm to more general sparse grid spaces by abstracting the algorithm given in (Balder and Zenger, SIAM J. Sci. Comput. 17:631, 1996) from specific bases, thereby identifying the prerequisites for performing the algorithm.
Abstract: Sparse grid discretization of higher dimensional partial differential equations is a means to break the curse of dimensionality. For classical sparse grids based on the one-dimensional hierarchical basis, a sophisticated algorithm has been devised to calculate the application of a vector to the Galerkin matrix in linear complexity, despite the fact that the matrix is not sparse. However more general sparse grid constructions have been recently introduced, e.g. based on multilevel finite elements, where the specified algorithms only have a log-linear scaling. This article extends the idea of the linear scaling algorithm to more general sparse grid spaces. This is achieved by abstracting the algorithm given in (Balder and Zenger, SIAM J. Sci. Comput. 17:631, 1996) from specific bases, thereby identifying the prerequisites for performing the algorithm. In this way one can easily adapt the algorithm to specific discretizations, leading for example to an optimal linear scaling algorithm in the case of multilevel finite element frames.
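The dimension-by-dimension (unidirectional) structure underlying such linear-complexity sparse grid algorithms rests on 1D passes like the following sketch, which converts nodal values into hierarchical surpluses for the classical piecewise-linear hierarchical basis with zero boundary values (illustrative only; the paper abstracts away from this specific basis):

    import numpy as np

    def hierarchize_1d(u):
        """Nodal values at x_i = i/n (i = 1..n-1, n = 2**L, zero boundary)
        -> hierarchical surpluses of the piecewise-linear basis, in O(n)."""
        v = np.asarray(u, dtype=float).copy()
        n = len(v) + 1
        L = n.bit_length() - 1               # n = 2**L
        for level in range(L, 0, -1):        # finest level first
            step = n >> level                # 2**(L - level)
            for i in range(step, n, 2 * step):   # grid points of this level
                left = v[i - step - 1] if i - step > 0 else 0.0
                right = v[i + step - 1] if i + step < n else 0.0
                v[i - 1] -= 0.5 * (left + right)  # subtract coarse interpolant
        return v

    # Example: surpluses of f(x) = x*(1 - x) on a level-4 grid.
    xs = np.arange(1, 16) / 16.0
    surpluses = hierarchize_1d(xs * (1 - xs))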

Proceedings ArticleDOI
03 May 2011
TL;DR: This paper presents the parallelization on several current task- and data-parallel platforms, covering multi-core CPUs with vector units, GPUs, and hybrid systems, and analyzes the suitability of parallel programming languages for the implementation.
Abstract: Gaining knowledge out of vast datasets is a main challenge in data-driven applications nowadays. Sparse grids provide a numerical method for both classification and regression in data mining which scales only linearly in the number of data points and is thus well-suited for huge amounts of data. Due to the recursive nature of sparse grid algorithms, they impose a challenge for the parallelization on modern hardware architectures such as accelerators. In this paper, we present the parallelization on several current task- and data-parallel platforms, covering multi-core CPUs with vector units, GPUs, and hybrid systems. Furthermore, we analyze the suitability of parallel programming languages for the implementation. Considering hardware, we restrict ourselves to the x86 platform with SSE and AVX vector extensions and to NVIDIA's Fermi architecture for GPUs. We consider both multi-core CPU and GPU architectures independently, as well as hybrid systems with up to 12 cores and 2 Fermi GPUs. With respect to parallel programming, we examine both the open standard OpenCL and Intel Array Building Blocks, a recently introduced high-level programming approach. As the baseline, we use the best results obtained with classically parallelized sparse grid algorithms and their OpenMP-parallelized intrinsics counterpart (SSE and AVX instructions), reporting both single and double precision measurements. The huge data sets we use are a real-life dataset stemming from astrophysics and an artificial one which exhibits challenging properties. In all settings, we achieve excellent results, obtaining speedups of more than 60 using single precision on a hybrid system.

Proceedings ArticleDOI
12 Feb 2011
TL;DR: Optimizations that enable us to implement compression and decompression, the crucial sparse grid algorithms for the authors' application, on Nvidia GPUs are described and it is shown that the optimizations are also applicable to multicore CPUs.
Abstract: The sparse grid discretization technique enables a compressed representation of higher-dimensional functions. In its original form, it relies heavily on recursion and complex data structures, thus being far from well-suited for GPUs. In this paper, we describe optimizations that enable us to implement compression and decompression, the crucial sparse grid algorithms for our application, on Nvidia GPUs. The main idea consists of a bijective mapping between the set of points in a multi-dimensional sparse grid and a set of consecutive natural numbers. The resulting data structure consumes a minimum amount of memory. For a 10-dimensional sparse grid with approximately 127 million points, it consumes up to 30 times less memory than trees or hash tables which are typically used. Compared to a sequential CPU implementation, the speedups achieved on GPU are up to 17 for compression and up to 70 for decompression, respectively. We show that the optimizations are also applicable to multicore CPUs.
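The paper's mapping is closed-form; the sketch below only illustrates the counting idea for a regular sparse grid without boundary points, where the enumeration order itself defines a bijection to consecutive integers (function names are ours):

    from itertools import product
    from math import comb

    def points_at_levelsum(d, s):
        """Points with level sum |l| = s (levels >= 1, no boundary points):
        C(s-1, d-1) level vectors, each carrying 2**(s-d) points."""
        return comb(s - 1, d - 1) * 2 ** (s - d)

    def enumerate_sparse_grid(d, n):
        """Yield (rank, levels, indices); point coordinates are index_j / 2**level_j.
        The enumeration order defines the bijection onto 0, 1, 2, ... that a
        closed-form ranking function would compute directly."""
        rank = 0
        for s in range(d, n + 1):                            # level-sum blocks
            for levels in product(range(1, s + 1), repeat=d):
                if sum(levels) != s:
                    continue
                odd = [range(1, 2 ** l, 2) for l in levels]  # odd 1D indices
                for indices in product(*odd):
                    yield rank, levels, indices
                    rank += 1

    # Consistency check for d = 2, n = 4: 1 + 4 + 12 = 17 points.
    total = sum(points_at_levelsum(2, s) for s in range(2, 5))
    assert total == sum(1 for _ in enumerate_sparse_grid(2, 4))

Because the rank can be computed from (levels, indices) by counting, the grid values can live in one flat array of exactly that many entries, which is the memory saving over trees and hash tables the abstract reports.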

Proceedings ArticleDOI
17 Jun 2011
TL;DR: It is shown that the robustly optimized designs of the rocket are significantly less sensitive to the input variations compared to the deterministic ones, which demonstrates the effectiveness of the proposed PCE based robust design procedure in the designs involving varying random dimensions.
Abstract: In this paper, we present the polynomial chaos expansion (PCE) approach as a rigorous method for uncertainty propagation and further extend its use to robust design optimization. Thus a PCE-based robust design optimization approach is developed. The mathematical background of PCE is introduced, where techniques of full factorial numerical integration (FFNI) and sparse grid numerical integration (SGNI) are proposed to estimate the PCE coefficients for low- and high-dimensional cases, respectively. Through a rocket design example, it is shown that the robustly optimized designs of the rocket are significantly less sensitive to the input variations compared to the deterministic ones, which demonstrates the effectiveness of the proposed PCE-based robust design procedure in designs involving varying random dimensions. Specifically, the “curse of dimensionality” is significantly alleviated for high-dimensional problems by SGNI, which indicates the high efficiency of our approach.
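A minimal sketch of the coefficient-estimation step for a single standard normal input, using spectral projection with a Gauss-Hermite rule (the 1D special case of FFNI; the response function here is a hypothetical stand-in):

    import numpy as np
    from math import factorial
    from numpy.polynomial.hermite_e import hermegauss, HermiteE

    # Hypothetical response of one standard normal input.
    def response(x):
        return np.exp(0.3 * x) + 0.1 * x ** 2

    # Gauss-Hermite rule (probabilists'), normalized to the N(0,1) measure.
    x, w = hermegauss(20)
    w = w / w.sum()

    # Spectral projection: alpha_k = E[R * He_k] / E[He_k^2], with E[He_k^2] = k!.
    P = 6
    alpha = [np.sum(w * response(x) * HermiteE.basis(k)(x)) / factorial(k)
             for k in range(P + 1)]

    # Analytic moments of the expansion, as used in the robust design loop.
    mean = alpha[0]
    variance = sum(factorial(k) * a ** 2 for k, a in enumerate(alpha[1:], 1))

For several inputs, FFNI tensorizes this rule over all dimensions, while SGNI replaces the tensor product with a Smolyak sparse grid to keep the point count manageable.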

Book ChapterDOI
Markus Holtz1
01 Jan 2011
TL;DR: This chapter is concerned with sparse grid (SG) quadrature methods, which can exploit the smoothness of f, overcome the curse of dimension to a certain extent and profit from low effective dimensions.
Abstract: This chapter is concerned with sparse grid (SG) quadrature methods. These methods are constructed using certain combinations of tensor products of one-dimensional quadrature rules. They can exploit the smoothness of f, overcome the curse of dimension to a certain extent and profit from low effective dimensions, see, e.g., [16, 44, 45, 57, 116, 146].
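In one standard notation (the chapter's own may differ), the SG rule combines tensor products of 1D rules Q_k^{(1)} as

    Q^{(d)}_q f \;=\; \sum_{q-d+1 \,\le\, |\mathbf{k}|_1 \,\le\, q}
    (-1)^{\,q-|\mathbf{k}|_1} \binom{d-1}{\,q-|\mathbf{k}|_1\,}
    \bigl(Q^{(1)}_{k_1} \otimes \cdots \otimes Q^{(1)}_{k_d}\bigr) f,

where |k|_1 = k_1 + ... + k_d, so only tensor products with small total level enter the rule.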

Journal ArticleDOI
TL;DR: This work proposes a weighted Smolyak algorithm based on piecewise linear basis functions, which incorporates information regarding non‐uniform probability measures, during the construction of sparse grids, which results in sparse grids with higher number of support nodes in regions of the random domain with higher probability density.
Abstract: This paper deals with the numerical solution of differential equations with random inputs, defined on a bounded random domain with non-uniform probability measures. Recently, there has been a growing interest in the stochastic collocation approach, which seeks to approximate the unknown stochastic solution using polynomial interpolation in the multi-dimensional random domain. Existing approaches employ sparse grid interpolation based on the Smolyak algorithm, which leads to orders of magnitude reduction in the number of support nodes as compared with the usual tensor product. However, such sparse grid interpolation approaches based on piecewise linear interpolation employ uniformly sampled nodes from the random domain and do not take into account the probability measures during the construction of the sparse grids. Such a construction based on uniform sparse grids may not be ideal, especially for highly skewed or localized probability measures. To this end, this work proposes a weighted Smolyak algorithm based on piecewise linear basis functions, which incorporates information regarding non-uniform probability measures during the construction of sparse grids. The basic idea is to construct piecewise linear univariate interpolation formulas, where the support nodes are specially chosen based on the marginal probability distribution. These weighted univariate interpolation formulas are then used to construct weighted sparse grid interpolants, using the standard Smolyak algorithm. This algorithm results in sparse grids with a higher number of support nodes in regions of the random domain with higher probability density. Several numerical examples are presented to demonstrate that the proposed approach results in a more efficient algorithm for the purpose of computing moments of the stochastic solution, while maintaining the accuracy of the approximation of the solution.
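A minimal reading of the weighted construction: place the nodes of each univariate level at quantiles of the marginal distribution rather than uniformly (the paper's exact node rule may differ; names here are ours).

    import numpy as np

    def weighted_nodes_1d(level, inv_cdf):
        """Interior nodes of a dyadic level placed at quantiles of the
        marginal distribution instead of uniformly."""
        u = np.arange(1, 2 ** level) / 2 ** level   # uniform dyadic points
        return inv_cdf(u)                            # mapped through F^{-1}

    # Skewed marginal with density p(t) = 2t on [0, 1]: F(t) = t^2, so
    # F^{-1}(u) = sqrt(u) and the nodes cluster where the density is high.
    nodes = weighted_nodes_1d(3, np.sqrt)

The standard Smolyak construction is then applied to these univariate formulas unchanged, which is what yields more support nodes in high-density regions.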

Posted Content
TL;DR: The proposed algorithm combines the strengths of the generalised sparse grid algorithm and hierarchical surplus-guided local adaptivity to obtain a high-order method which, given sufficient smoothness, performs significantly better than the piecewise-linear basis.
Abstract: In this paper we present a locally and dimension-adaptive sparse grid method for interpolation and integration of high-dimensional functions with discontinuities. The proposed algorithm combines the strengths of the generalised sparse grid algorithm and hierarchical surplus-guided local adaptivity. A high-degree basis is used to obtain a high-order method which, given sufficient smoothness, performs significantly better than the piecewise-linear basis. The underlying generalised sparse grid algorithm greedily selects the dimensions and variable interactions that contribute most to the variability of a function. The hierarchical surplus of points within the sparse grid is used as an error criterion for local refinement with the aim of concentrating computational effort within rapidly varying or discontinuous regions. This approach limits the number of points that are invested in 'unimportant' dimensions and regions within the high-dimensional domain. We show the utility of the proposed method for non-smooth functions with hundreds of variables.

Journal ArticleDOI
TL;DR: A recently developed modification of this method that precomputes a multiscale flux basis to avoid the need for subdomain solves on each iteration is employed, leading to collocation algorithms that are more efficient than the traditional implementation by orders of magnitude.
Abstract: This paper presents an efficient multiscale stochastic framework for uncertainty quantification in modeling of flow through porous media with multiple rock types. The governing equations are based on Darcy's law with nonstationary stochastic permeability represented as a sum of local Karhunen-Loeve expansions. The approximation uses stochastic collocation on either a tensor product or a sparse grid, coupled with a domain decomposition algorithm known as the multiscale mortar mixed finite element method. The latter method requires solving a coarse scale mortar interface problem via an iterative procedure. The traditional implementation requires the solution of local fine scale linear systems on each iteration. We employ a recently developed modification of this method that precomputes a multiscale flux basis to avoid the need for subdomain solves on each iteration. In the stochastic setting, the basis is further reused over multiple realizations, leading to collocation algorithms that are more efficient than the traditional implementation by orders of magnitude. Error analysis and numerical experiments are presented.

Journal ArticleDOI
TL;DR: This work discusses how the combination of ANOVA expansions, different sparse grid techniques, and the total sensitivity index (TSI) as a pre-selective mechanism enables the modeling of problems with hundreds of parameters.
Abstract: The important task of evaluating the impact of random parameters on the output of stochastic ordinary differential equations (SODE) can be computationally very demanding, in particular for problems with a high-dimensional parameter space. In this work we consider this problem in some detail and demonstrate that by combining several techniques one can dramatically reduce the overall cost without impacting the predictive accuracy of the outputs of interest. We discuss how the combination of ANOVA expansions, different sparse grid techniques, and the total sensitivity index (TSI) as a pre-selective mechanism enables the modeling of problems with hundreds of parameters. We demonstrate the accuracy and efficiency of this approach on a number of challenging test cases drawn from engineering and science.
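For reference, the pre-selection criterion is the standard Sobol' total sensitivity index: with output u and parameters \xi,

    S_i \;=\; \frac{\operatorname{Var}\bigl(\mathbb{E}[u \mid \xi_i]\bigr)}{\operatorname{Var}(u)},
    \qquad
    TS_i \;=\; 1 - \frac{\operatorname{Var}\bigl(\mathbb{E}[u \mid \boldsymbol{\xi}_{\sim i}]\bigr)}{\operatorname{Var}(u)},

where \xi_{\sim i} collects all parameters except \xi_i. Dimensions with small TS_i can be excluded from higher-order ANOVA terms before the sparse grid construction, which is what keeps hundreds of parameters tractable.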

Journal ArticleDOI
TL;DR: This work adapts the sparse grid combination technique to the solution space of the RTE and shows that this yields a sparse DOM which uses essentially only as many degrees of freedom as required for a purely spatial transport problem.
Abstract: The stationary monochromatic radiative transfer equation (RTE) is a partial differential transport equation stated on a five-dimensional phase space, the Cartesian product of physical and angular domain. We solve the RTE with a Galerkin FEM in physical space and collocation in angle, corresponding to a discrete ordinates method (DOM). To reduce the complexity of the problem and to avoid the “curse of dimension”, we adapt the sparse grid combination technique to the solution space of the RTE and show that we obtain a sparse DOM which uses essentially only as many degrees of freedom as required for a purely spatial transport problem. For smooth solutions, the convergence rates deteriorate only by a logarithmic factor. We compare the sparse DOM to the standard full DOM and a sparse tensor product approach developed earlier with Galerkin FEM in physical space and a spectral method in angle. Numerical experiments confirm our findings.
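The classical combination technique (Griebel, Schneider & Zenger) that the paper adapts forms the sparse solution from anisotropic full-grid solutions f_l:

    f^{c}_{n} \;=\; \sum_{q=0}^{d-1} (-1)^{q} \binom{d-1}{q} \sum_{|\mathbf{l}|_1 = n-q} f_{\mathbf{l}},

so in two dimensions it reduces to the familiar \sum_{|l|_1 = n} f_l - \sum_{|l|_1 = n-1} f_l. Here the two "dimensions" being combined are the physical and angular resolution levels of the DOM.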

Proceedings ArticleDOI
20 Mar 2011
TL;DR: The simulation results show that the proposed probabilistic collocation method can achieve the same accuracy as the Monte Carlo method with a smaller ensemble size and is thus computationally more efficient.
Abstract: This paper proposes a probabilistic collocation method (PCM) to quantify the uncertainties in state estimation. Compared to the classic Monte Carlo (MC) method, the proposed PCM is based on sparse grid points and uses a smaller number of points to quantify the uncertainty. Thus, the proposed PCM can quantify a large number of uncertain power system variables with relatively lower computational cost. The algorithm and procedure are outlined. The proposed PCM is applied to the IEEE 14-bus system to quantify the uncertainty of power system state estimation. Comparison is made with the MC method. The simulation results show that the proposed PCM can achieve the same accuracy as the MC method with a smaller ensemble size and is thus computationally more efficient.

Journal ArticleDOI
TL;DR: This work focuses on the Stochastic Galerkin approximation of the solution u of an elliptic stochastic PDE, and uses the previous estimates to introduce a new effective class of Sparse Grids that features better convergence properties compared to standard Smolyak or tensor product grids.
Abstract: In this work we first focus on the Stochastic Galerkin approximation of the solution u of an elliptic stochastic PDE. We rely on sharp estimates for the decay of the coefficients of the spectral expansion of u on orthogonal polynomials to build a sequence of polynomial subspaces that features better convergence properties compared to standard polynomial subspaces such as Total Degree or Tensor Product. We consider then the Stochastic Collocation method, and use the previous estimates to introduce a new effective class of Sparse Grids, based on the idea of selecting a priori the most profitable hierarchical surpluses, that, again, features better convergence properties compared to standard Smolyak or tensor product grids.

Journal ArticleDOI
01 Jan 2011
TL;DR: This paper presents a method for computational steering with pre-computed data as a particular form of visual scientific exploration, which considers a parametrized simulation as a multi-variate function in several parameters and uses the technique of sparse grids to sample and compress potentially high-dimensional parameter spaces.
Abstract: With the ever-increasing complexity, accuracy, dimensionality, and size of simulations, a step in the direction of data-intensive scientific discovery becomes necessary. Parameter-dependent simulations are an example of such data-intensive tasks: the researcher, who is interested in the dependency of the simulation's result on a set of input parameters, changes essential parameters and wants to immediately see the effect of the changes in a visual environment. In this scenario, an interactive exploration is not possible due to the long execution time needed by even a single simulation corresponding to one parameter combination and the overall large number of parameter combinations which could be of interest. In this paper, we present a method for computational steering with pre-computed data as a particular form of visual scientific exploration. We consider a parametrized simulation as a multi-variate function in several parameters. Using the technique of sparse grids, this makes it possible to sample and compress potentially high-dimensional parameter spaces and to efficiently deliver a combination of simulated and precomputed data to the steering process, thus enabling the user to interactively explore high-dimensional simulation results.

Book ChapterDOI
05 Dec 2011
TL;DR: This work proposes to use sparse grid functions to approximate the eigenfunctions of the Laplace-Beltrami operator, yielding an explicit mapping between ambient and latent space so that out-of-sample points can be mapped as well.
Abstract: Spectral graph theoretic methods such as Laplacian Eigenmaps are among the most popular algorithms for manifold learning and clustering. One drawback of these methods is, however, that they do not provide a natural out-of-sample extension. They only provide an embedding for the given training data. We propose to use sparse grid functions to approximate the eigenfunctions of the Laplace-Beltrami operator. We then have an explicit mapping between ambient and latent space. Thus, out-of-sample points can be mapped as well. We present results for synthetic and real-world examples to support the effectiveness of the sparse-grid-based explicit mapping.

Journal ArticleDOI
TL;DR: It is proved that both of these operations require only O(n log^{d-1} n) multiplications, where n is the number of univariate B-spline basis functions used in each coordinate direction and d is the number of variables of the functions.

Proceedings Article
10 Oct 2011
TL;DR: This paper introduces a sparse grid representation, where grid cells are only stored where necessary, and builds an abstract graph which represents possible movement in the world at a high level of granularity; the sparse representation also allows three-dimensional worlds to be represented.
Abstract: Grid representations offer many advantages for path planning. Lookups in grids are fast, due to the uniform memory layout, and it is easy to modify grids. But, grids often have significant memory requirements, they cannot directly represent more complex surfaces, and path planning is slower due to their high granularity representation of the world. The speed of path planning on grids has been addressed using abstract representations, such as has been documented in work on Dragon Age: Origins. The abstract representation used in this game was compact, preventing permanent changes to the grid. In this paper we introduce a sparse grid representation, where grid cells are only stored where necessary. From this sparse representation we incrementally build an abstract graph which represents possible movement in the world at a high-level of granularity. This sparse representation also allows the representation of three-dimensional worlds. This representation allows the world to be incrementally changed in under a millisecond, reducing the maximum memory required to store a map and abstraction from Dragon Age: Origins by nearly one megabyte. Fundamentally, the representation allows previously allocated but unused memory to be used in ways that result in higher-quality planning and more intelligent agents.
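A minimal sketch of the storage idea, assuming a mostly-open world where only cells deviating from a default are stored (class and method names are ours, not the paper's):

    # Cells are stored only where the world deviates from a default
    # (e.g., blocked cells in mostly-open terrain).
    class SparseGridMap:
        def __init__(self, default_passable=True):
            self.default = default_passable
            self.cells = {}                    # (x, y, z) -> passable flag

        def set_cell(self, pos, passable):
            if passable == self.default:
                self.cells.pop(pos, None)      # stay sparse: drop defaults
            else:
                self.cells[pos] = passable

        def passable(self, pos):
            return self.cells.get(pos, self.default)

    world = SparseGridMap()
    world.set_cell((3, 4, 0), False)           # incremental change is cheap
    assert not world.passable((3, 4, 0)) and world.passable((0, 0, 0))

The abstract graph described in the paper would then be rebuilt incrementally only for the regions whose cells changed.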

Book ChapterDOI
29 Aug 2011
TL;DR: In this paper, the authors propose static and dynamic load balancing in the context of an application for visualizing high-dimensional simulation data, which relies on the sparse grid technique for data compression.
Abstract: Multi-core parallelism and accelerators are becoming common features of today's computer systems, as they allow for computational power without sacrificing energy efficiency. Due to heterogeneity, tuning for each type of compute unit and adequate load balancing is essential. This paper proposes static and dynamic solutions for load balancing in the context of an application for visualizing high-dimensional simulation data. The application relies on the sparse grid technique for data compression. Its performance critical part is the interpolation routine used for decompression. Results show that our load balancing scheme allows for an efficient acceleration of interpolation on heterogeneous systems containing multi-core CPUs and GPUs.

Journal ArticleDOI
TL;DR: In this paper, a stochastic collocation method is proposed to investigate the secondary bifurcation of a two-dimensional aeroelastic system with structural nonlinearity represented by cubic restoring forces, and uncertainties expressed by random parameters in the cubic stiffness coefficient and in the initial pitch angle.