
Showing papers on "Sparse grid published in 2001"


Journal ArticleDOI
TL;DR: It turns out that the new method achieves correctness rates which are competitive with those of the best existing methods and scales only linearly with the number of instances, i.e. the amount of data to be classified.
Abstract: Only O(h_n^{-1} n^{d-1}) instead of O(h_n^{-d}) grid points and unknowns are involved. Here d denotes the dimension of the feature space and h_n = 2^{-n} gives the mesh size. To be precise, we suggest using the sparse grid combination technique [42], where the classification problem is discretized and solved on a certain sequence of conventional grids with uniform mesh sizes in each coordinate direction. The sparse grid solution is then obtained from the solutions on these different grids by linear combination. In contrast to other sparse grid techniques, the combination method is simpler to use and can be parallelized in a natural and straightforward way. We describe the sparse grid combination technique for the classification problem in terms of the regularization network approach. We then give implementational details and discuss the complexity of the algorithm. It turns out that the method scales only linearly with the number of instances, i.e. the amount of data to be classified. Finally we report on the quality of the classifier built by our new method. Here we consider standard test problems from the UCI repository and problems with huge synthetic data sets in up to 9 dimensions. It turns out that our new method achieves correctness rates which are competitive with those of the best existing methods.
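The combination idea in the abstract can be sketched concretely. The following toy example (a hedged illustration, not the paper's classification code: the test function, the evaluation point, and all names are ours) combines bilinear interpolants of a smooth function on the 2D grids with level sums n and n - 1:

```python
def f(x, y):
    # smooth test function on the unit square, zero on the boundary
    return x * y * (1 - x) * (1 - y)

def bilinear_interp(g, i, j, x, y):
    """Evaluate the bilinear interpolant of g on a uniform grid
    with mesh sizes 2^-i (x-direction) and 2^-j (y-direction)."""
    hx, hy = 2.0 ** -i, 2.0 ** -j
    ix = min(int(x / hx), 2 ** i - 1)   # cell containing (x, y)
    iy = min(int(y / hy), 2 ** j - 1)
    x0, y0 = ix * hx, iy * hy
    tx, ty = (x - x0) / hx, (y - y0) / hy
    return ((1 - tx) * (1 - ty) * g(x0, y0)
            + tx * (1 - ty) * g(x0 + hx, y0)
            + (1 - tx) * ty * g(x0, y0 + hy)
            + tx * ty * g(x0 + hx, y0 + hy))

def combination_technique(g, n, x, y):
    """2D combination technique: add the solutions on the anisotropic
    grids with level sum i + j = n, subtract those with i + j = n - 1."""
    total = 0.0
    for i in range(1, n):        # grids with i + j = n, weight +1
        total += bilinear_interp(g, i, n - i, x, y)
    for i in range(1, n - 1):    # grids with i + j = n - 1, weight -1
        total -= bilinear_interp(g, i, n - 1 - i, x, y)
    return total
```

In a quick check, the level-6 combination reproduces f(0.3, 0.7) = 0.0441 to about three decimal places while never touching a full uniform level-6 grid.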

135 citations


Journal ArticleDOI
TL;DR: A regularized version of the hierarchical cover construction algorithm is presented which reduces the number of integration cells even further and subsequently improves the computational efficiency.
Abstract: In this paper we present a meshfree discretization technique based only on a set of irregularly spaced points $x_i \in \mathbb R^d$ and the partition of unity approach. In this sequel to [M. Griebel and M. A. Schweitzer, SIAM J. Sci. Comput., 22 (2000), pp. 853--890] we focus on the cover construction and its interplay with the integration problem arising in a Galerkin discretization. We present a hierarchical cover construction algorithm and a reliable decomposition quadrature scheme. Here, we decompose the integration domains into disjoint cells on which we employ local sparse grid quadrature rules to improve computational efficiency. The use of these two schemes already reduces the operation count for the assembly of the stiffness matrix significantly. Now the overall computational costs are dominated by the number of the integration cells. We present a regularized version of the hierarchical cover construction algorithm which reduces the number of integration cells even further and subsequently improves the computational efficiency. In fact, the computational costs during the integration of the nonzeros of the stiffness matrix are comparable to that of a finite element method, yet the presented method is completely independent of a mesh. Moreover, our method is applicable to general domains and allows for the construction of approximations of any order and regularity.
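To make the partition of unity idea tangible, here is a minimal 1D Shepard-type sketch (our own illustration; it shows only how normalized local weight functions over scattered points form a partition of unity, not the paper's hierarchical cover construction or sparse grid quadrature):

```python
def shepard_pu_approx(points, values, x, radius):
    """Partition-of-unity approximation at x from scattered 1D points:
    each point carries a compactly supported weight W_i; normalizing
    the W_i gives PU functions phi_i, and the approximant is
    sum_i phi_i(x) * u_i."""
    def bump(r):
        # compactly supported weight function of one patch
        t = abs(r) / radius
        return (1 - t) ** 2 if t < 1 else 0.0

    w = [bump(x - xi) for xi in points]
    s = sum(w)
    if s == 0.0:
        raise ValueError("x is not covered by any patch")
    return sum(wi / s * ui for wi, ui in zip(w, values))
```

Because the normalized weights sum to one, constants are reproduced exactly wherever the patches cover the evaluation point.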

101 citations


Journal ArticleDOI
TL;DR: An algorithm is presented for the integration problem that reduces the time for the calculation and exposition of the coefficients in such a way that for increasing dimension, this time is small compared to dn, where n is the number of involved function values.
Abstract: For many numerical problems involving smooth multivariate functions on d-cubes, the so-called Smolyak algorithm (or Boolean method, sparse grid method, etc.) has proved to be very useful. The final form of the algorithm (see equation (12) below) requires functional evaluation as well as the computation of coefficients. The latter can be done in different ways that may have considerable influence on the total cost of the algorithm. In this paper, we try to diminish this influence as far as possible. For example, we present an algorithm for the integration problem that reduces the time for the calculation and exposition of the coefficients in such a way that for increasing dimension, this time is small compared to dn, where n is the number of involved function values.
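For orientation, here is a naive sketch of the algorithm's combination form (assuming 1D trapezoidal rules on [0, 1]; all names are ours, and this deliberately ignores the coefficient-handling optimizations the paper is about):

```python
from itertools import product
from math import comb

def trapezoid_rule(level):
    """1D trapezoidal rule on [0, 1] with 2^level + 1 points."""
    m = 2 ** level
    pts = [k / m for k in range(m + 1)]
    wts = [1.0 / m] * (m + 1)
    wts[0] = wts[-1] = 0.5 / m
    return pts, wts

def smolyak_quadrature(f, d, n):
    """Smolyak quadrature in combination form: only multi-indices l
    with n - d + 1 <= |l| <= n contribute, each tensor product rule
    weighted by the coefficient (-1)^(n-|l|) * C(d-1, n-|l|)."""
    total = 0.0
    for l in product(range(1, n + 1), repeat=d):
        s = sum(l)
        if not (n - d + 1 <= s <= n):
            continue
        coeff = (-1) ** (n - s) * comb(d - 1, n - s)
        rules = [trapezoid_rule(li) for li in l]
        # tensor product quadrature on this anisotropic grid
        for idx in product(*[range(len(r[0])) for r in rules]):
            x = [rules[k][0][idx[k]] for k in range(d)]
            w = 1.0
            for k in range(d):
                w *= rules[k][1][idx[k]]
            total += coeff * w * f(x)
    return total
```

Since the trapezoidal rule is exact for functions that are linear in each variable, the sparse rule integrates multilinear functions exactly; e.g. smolyak_quadrature(lambda x: x[0]*x[1]*x[2], 3, 5) returns the exact integral 0.125 over the unit cube.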

64 citations


Patent
05 Dec 2001
TL;DR: A method and device for valuation of financial derivatives is proposed, wherein a value of a derivative is computed by a determination of an expectation, and one or more expectation parameters are computed by combining the integrand values and the integration weights.
Abstract: A method and device for valuation of financial derivatives, wherein a value of a derivative is computed by a determination of an expectation. Input parameters are communicated by an input unit to a computer, such as at least one processor, to establish an integrand as a function of the input parameters. A multivariate integration domain is computed. A sparse grid method is used to determine integration points and integration weights as a function of the input parameters. The integrand is integrated with an integration domain at the integration points to determine integrand values. One or more expectation parameters are computed by combining the integrand values and the integration weights. A value of the derivative is communicated through an output unit, for example to a display monitor or another display device.

39 citations


Proceedings ArticleDOI
26 Aug 2001
TL;DR: It turns out that the new method achieves correctness rates which are competitive with those of the best existing methods and scales linearly with the number of given data points.
Abstract: Recently we presented a new approach [18] to the classification problem arising in data mining. It is based on the regularization network approach but, in contrast to other methods which employ ansatz functions associated with data points, we use a grid in the usually high-dimensional feature space for the minimization process. To cope with the curse of dimensionality, we employ sparse grids [49]. Thus, only O(h_n^{-1} n^{d-1}) instead of O(h_n^{-d}) grid points and unknowns are involved. Here d denotes the dimension of the feature space and h_n = 2^{-n} gives the mesh size. We use the sparse grid combination technique [28], where the classification problem is discretized and solved on a sequence of conventional grids with uniform mesh sizes in each dimension. The sparse grid solution is then obtained by linear combination. In contrast to our former work, where d-linear functions were used, we now apply linear basis functions based on a simplicial discretization. This allows us to handle more dimensions and the algorithm needs fewer operations per data point. We describe the sparse grid combination technique for the classification problem, give implementational details and discuss the complexity of the algorithm. It turns out that the method scales linearly with the number of given data points. Finally we report on the quality of the classifier built by our new method on data sets with up to 10 dimensions. It turns out that our new method achieves correctness rates which are competitive with those of the best existing methods.

20 citations


Book ChapterDOI
06 Jun 2001
TL;DR: The sparse grid combination technique for the classification problem is described, the two ways of parallelisation are discussed, and the results on a 10 dimensional data set are reported.
Abstract: Recently we presented a new approach [5, 6] to the classification problem arising in data mining. It is based on the regularization network approach, but in contrast to other methods which employ ansatz functions associated with data points, we use basis functions coming from a grid in the usually high-dimensional feature space for the minimization process. Here, to cope with the curse of dimensionality, we employ so-called sparse grids. To be precise, we use the sparse grid combination technique [11], where the classification problem is discretized and solved on a sequence of conventional grids with uniform mesh sizes in each dimension. The sparse grid solution is then obtained by linear combination. The method scales only linearly with the number of data points and is well suited for data mining applications where the amount of data is very large, but where the dimension of the feature space is moderately high. The computations on the different grids of the sequence are independent of each other and can therefore already be done in parallel on a coarse grain level. A second level of parallelization on a fine grain level can be introduced on each grid through the use of threading on shared-memory multi-processor computers. We describe the sparse grid combination technique for the classification problem, we discuss the two ways of parallelisation, and we report on the results on a 10-dimensional data set.
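The coarse grain parallelism described above is easy to sketch: every grid of the combination sequence is an independent job. A hedged toy version (with a stand-in midpoint-rule "solve" instead of the actual classification solve; all names are ours) using shared-memory threading:

```python
from concurrent.futures import ThreadPoolExecutor

def solve_on_grid(level_x, level_y):
    """Stand-in for the per-grid solve (hypothetical): integrate
    x^2 + y^2 over the unit square with the midpoint rule on a grid
    with mesh sizes 2^-level_x by 2^-level_y."""
    nx, ny = 2 ** level_x, 2 ** level_y
    hx, hy = 1.0 / nx, 1.0 / ny
    total = 0.0
    for a in range(nx):
        for b in range(ny):
            x, y = (a + 0.5) * hx, (b + 0.5) * hy
            total += hx * hy * (x * x + y * y)
    return total

def parallel_combination(n):
    """Coarse grain parallelism: each grid of the combination sequence
    is solved in its own thread, then the results are linearly combined."""
    plus = [(i, n - i) for i in range(1, n)]           # |l| = n, weight +1
    minus = [(i, n - 1 - i) for i in range(1, n - 1)]  # |l| = n-1, weight -1
    with ThreadPoolExecutor() as pool:
        fplus = [pool.submit(solve_on_grid, *g) for g in plus]
        fminus = [pool.submit(solve_on_grid, *g) for g in minus]
        return (sum(fut.result() for fut in fplus)
                - sum(fut.result() for fut in fminus))
```

For n = 6 this processes nine anisotropic grids concurrently and approximates the exact integral 2/3 to a few times 1e-4.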

20 citations


Journal ArticleDOI
TL;DR: In this article, time-stepping is performed on a set of semi-coarsened space grids, and the solutions on the different grids are combined to obtain an approximation with the asymptotic convergence of a single, fine uniform grid.

19 citations


Journal ArticleDOI
TL;DR: The combination technique proved more efficient than a single grid approach for the simpler linear problem and for the Burgers' equations this gain in efficiency was only observed if one of the two solution components was set to zero, which makes the problem more grid-aligned.
Abstract: In the current paper the efficiency of the sparse-grid combination technique applied to time-dependent advection-diffusion problems is investigated. For the time-integration we employ a third-order Rosenbrock scheme implemented with adaptive step-size control and approximate matrix factorization. Two model problems are considered, a scalar 2D linear, constant-coefficient problem and a system of 2D nonlinear Burgers' equations. In short, the combination technique proved more efficient than a single grid approach for the simpler linear problem. For the Burgers' equations this gain in efficiency was only observed if one of the two solution components was set to zero, which makes the problem more grid-aligned. 2000 Mathematics Subject Classification: 65G99, 65M20, 65M55, 65L06, 76R99.

18 citations


Journal ArticleDOI
TL;DR: A lower bound on the exponent of tractability for Sparse Grid Quadratures for multivariate integration of functions from a certain class of weighted tensor product spaces is provided.

9 citations


Journal ArticleDOI
TL;DR: This paper is concerned with the construction and use of wavelet approximation spaces, based on biorthogonal anisotropic tensor product wavelets, for the fast evaluation of integral expressions, and introduces sparse grid (hyperbolic cross) approximation spaces which are adapted to the smoothness of the kernel.
Abstract: In this paper we are concerned with the construction and use of wavelet approximation spaces for the fast evaluation of integral expressions. The spaces are based on biorthogonal anisotropic tensor product wavelets. We introduce sparse grid (hyperbolic cross) approximation spaces which are adapted not only to the smoothness of the kernel but also to the norm in which the error is measured. Furthermore, we introduce compression schemes for the corresponding discretizations. Numerical examples for the Laplace equation with Dirichlet boundary conditions and an additional integral term with a smooth kernel demonstrate the validity of our theoretical results.

8 citations


Journal ArticleDOI
TL;DR: Some algorithms to solve the system of linear equations arising from the finite difference discretization on sparse grids using the multilevel structure of the sparse grid space or its full grid subspaces are proposed.
Abstract: We propose some algorithms to solve the system of linear equations arising from the finite difference discretization on sparse grids. For this, we will use the multilevel structure of the sparse grid space or its full grid subspaces, respectively.
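The multilevel structure such solvers exploit is the hierarchical basis. A minimal 1D hierarchization sketch (our illustration, not the paper's algorithms): nodal values on a dyadic grid are converted into hierarchical surpluses by subtracting, level by level, the mean of the two parent values:

```python
def hierarchize_1d(u):
    """Transform nodal values at the interior points of a 1D dyadic
    grid of level n (len(u) == 2^n - 1, zero boundary values) into
    hierarchical surplus coefficients -- the multilevel representation
    used by sparse grid solvers."""
    v = list(u)
    n = (len(u) + 1).bit_length() - 1     # recover n from len(u) = 2^n - 1
    for level in range(n, 1, -1):         # finest level first
        step = 2 ** (n - level)           # distance to this level's parents
        for k in range(step - 1, len(v), 2 * step):
            left = v[k - step] if k - step >= 0 else 0.0   # boundary -> 0
            right = v[k + step] if k + step < len(v) else 0.0
            v[k] -= 0.5 * (left + right)  # surplus = value - parent mean
    return v
```

For the nodal values of the level-1 hat function, all surpluses vanish except the single level-1 coefficient, which equals the peak value.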

Journal ArticleDOI
TL;DR: In this paper, the authors describe methods to approximate functions and differential operators on adaptive sparse (dyadic) grids, distinguish between several representations of a function on the sparse grid, and describe how finite difference operators can be applied to these representations for general variable coefficient equations on sparse grids.
Abstract: In this paper we describe methods to approximate functions and differential operators on adaptive sparse (dyadic) grids. We distinguish between several representations of a function on the sparse grid and we describe how finite difference (FD) operators can be applied to these representations. For general variable coefficient equations on sparse grids, genuine finite element (FE) discretizations are not feasible and FD operators allow an easier operator evaluation than the adapted FE operators. However, the structure of the FD operators is complex. With the aim to construct an efficient multigrid procedure, we analyze the structure of the discrete Laplacian in its hierarchical representation and show the relation between the full and the sparse grid case. The rather complex relations, which are expressed by scaling matrices for each separate coordinate direction, make us doubt the possibility of constructing efficient preconditioners that show spectral equivalence. Hence, we question the possibility of constructing a natural multigrid algorithm with optimal O(N) efficiency. We conjecture that for the efficient solution of a general class of adaptive grid problems it is better to accept an additional condition for the dyadic grids (condition L) and to apply adaptive hp-discretization.