
Showing papers on "Sparse grid published in 2002"


Journal ArticleDOI
01 Dec 2002
TL;DR: It turns out that the method scales linearly with the number of given data points and is well suited for data mining applications where the amount of data is very large, but where the dimension of the feature space is moderately high.
Abstract: Recently we presented a new approach [20] to the classification problem arising in data mining. It is based on the regularization network approach, but in contrast to other methods, which employ ansatz functions associated with data points, we use a grid in the usually high-dimensional feature space for the minimization process. To cope with the curse of dimensionality, we employ sparse grids [52]. Thus, only O(h_n^{-1} n^{d-1}) instead of O(h_n^{-d}) grid points and unknowns are involved. Here d denotes the dimension of the feature space and h_n = 2^{-n} gives the mesh size. We use the sparse grid combination technique [30], where the classification problem is discretized and solved on a sequence of conventional grids with uniform mesh sizes in each dimension. The sparse grid solution is then obtained by linear combination. The method computes a nonlinear classifier but scales only linearly with the number of data points, and it is well suited for data mining applications where the amount of data is very large but the dimension of the feature space is moderately high. In contrast to our former work, where d-linear functions were used, we now apply linear basis functions based on a simplicial discretization. This makes it possible to handle more dimensions, and the algorithm needs fewer operations per data point. We further extend the method to so-called anisotropic sparse grids, where different a priori chosen mesh sizes can be used for the discretization of each attribute. This can improve both the run time of the method and the approximation quality for data sets whose attributes differ in importance. We describe the sparse grid combination technique for the classification problem, give implementation details, and discuss the complexity of the algorithm, which turns out to scale linearly with the number of given data points. Finally, we report on the quality of the classifier built by our new method on data sets with up to 14 dimensions, and show that it achieves correctness rates competitive with those of the best existing methods.
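
To make the combination step concrete, here is a minimal sketch of the two-dimensional combination technique applied to plain function interpolation rather than to the paper's regularized classification solve; the target function f and all identifiers are illustrative assumptions. Partial solutions on all anisotropic full grids whose levels sum to n are added, and those whose levels sum to n-1 are subtracted:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Illustrative stand-in for the quantity represented on each grid; in the
# paper this would be the regularized classifier, not a known function.
def f(x, y):
    return np.sin(np.pi * x) * np.cos(np.pi * y)

def full_grid_interpolant(li, lj):
    """d-linear interpolant on a uniform grid with mesh widths 2**-li, 2**-lj."""
    x = np.linspace(0.0, 1.0, 2**li + 1)
    y = np.linspace(0.0, 1.0, 2**lj + 1)
    vals = f(*np.meshgrid(x, y, indexing="ij"))
    return RegularGridInterpolator((x, y), vals)

def combination(n, pts):
    """2-D combination technique: add the level-(i, n-i) solutions and
    subtract the level-(i, n-1-i) solutions."""
    out = np.zeros(len(pts))
    for i in range(n + 1):
        out += full_grid_interpolant(i, n - i)(pts)
    for i in range(n):
        out -= full_grid_interpolant(i, n - 1 - i)(pts)
    return out

pts = np.random.default_rng(0).random((5, 2))
print(np.abs(combination(5, pts) - f(pts[:, 0], pts[:, 1])))  # small errors
```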

45 citations


Proceedings ArticleDOI
24 Jun 2002
TL;DR: This work investigates the utility of the scaling information, valid for atmospheric phase screen data, in the process of unwrapping a set of sparse measurements, and shows how the power-law behaviour of the data variogram can be used as an a priori constraint for optimization through techniques such as simulated annealing.
Abstract: Scaling information is an important tool for the description of natural processes. Many applications of SAR (differential) interferometry lead to a set of sparse phase measurements, e.g. the monitoring of permanent scatterers (PS). In this case, the atmospheric phase screen component of a given SAR image can be estimated over the sparse PS grid. Usually such data have to be unwrapped and then interpolated on a regular grid. We investigate the utility of the scaling information, valid for atmospheric phase screen data, in the process of unwrapping a set of sparse measurements. We show how the power-law behaviour of the data variogram can be used as an a priori constraint for optimization through techniques such as simulated annealing. The results are interpreted in view of operational applications to real data.
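
As a rough sketch of how a power-law variogram can act as an a priori constraint during unwrapping, the snippet below anneals the integer 2π-cycle ambiguities of synthetic sparse phases so that the empirical variogram of the candidate unwrapped field fits a fixed power law; the synthetic screen, the exponent, the cooling schedule, and all names are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sparse "atmospheric phase screen": a smooth surface sampled at
# scattered points, then wrapped to (-pi, pi]. Purely illustrative.
npts = 80
xy = rng.random((npts, 2)) * 10.0
true_phase = 1.5 * np.sin(0.4 * xy[:, 0]) + 1.2 * np.cos(0.3 * xy[:, 1])
wrapped = np.angle(np.exp(1j * true_phase))

dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
iu = np.triu_indices(npts, 1)
d_alpha = dist[iu] ** (5.0 / 3.0)   # power-law exponent fixed a priori

def variogram_misfit(phase):
    """Squared residual of the empirical variogram against the best-fitting
    power law c * d**alpha; small misfit means 'atmosphere-like' scaling."""
    g = 0.5 * (phase[iu[0]] - phase[iu[1]]) ** 2
    c = max((g * d_alpha).sum() / (d_alpha * d_alpha).sum(), 0.0)
    return ((g - c * d_alpha) ** 2).mean()

# Simulated annealing over the integer 2*pi cycle ambiguities.
phase = wrapped.copy()
cost, T = variogram_misfit(phase), 1.0
for _ in range(15000):
    i = rng.integers(npts)
    cand = phase.copy()
    cand[i] += 2.0 * np.pi * rng.choice([-1, 1])
    c_new = variogram_misfit(cand)
    if c_new < cost or rng.random() < np.exp((cost - c_new) / T):
        phase, cost = cand, c_new
    T *= 0.9996   # geometric cooling

print("final variogram misfit:", cost)
```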

4 citations


Proceedings ArticleDOI
29 Oct 2002
TL;DR: An iterative procedure is described that objectively determines a sparse, non-uniform grid maximizing transmission loss accuracy as a function of computation time while requiring no a priori knowledge of the environment.
Abstract: Many tactical decision aids available to the Navy require acoustic sensor performance predictions over large ocean areas. Unfortunately, accurate computation of such acoustic fields requires significant computation time, which often renders the advanced technology tactically useless. The use of low-density uniform grids usually leads to unacceptable field uncertainty. Attempts to optimize grid density by analyzing the environment prior to acoustic calculation have made only modest gains, but tactical needs dictate an order-of-magnitude reduction. Further, in many cases, the relations between the desired accuracy, the choice of propagation model input parameters, and the required computation times are simply unknown. In this work, an iterative procedure is described to objectively determine a sparse, non-uniform grid that maximizes transmission loss accuracy as a function of computation time. The premise is that the physical environmental complexity controls the need for dense sampling in space and azimuth, and that the transmission loss curves already calculated for nearby coordinates on previous iterations can be used to predict that complexity. Although the grid grows unevenly, it is built around bookkeeping schemes commonly used for interpolation, so the sparse grid can always be transformed rapidly to a uniform grid. Benchmarking against uniform grids shows that the total sparse grid field uncertainty is equal to or significantly lower than the uncertainty of uniform grids requiring similar computation time, and the method requires no a priori knowledge of the environment. Each iteration produces an acoustic field for the entire area of interest with ever-increasing accuracy, so the process can, in principle, be terminated whenever the gain in accuracy is overshadowed by the danger of further delay.
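
The sketch below mimics such a refinement loop in one dimension (range only): an interval is split further wherever its midpoint deviates from the straight-line prediction of already-computed values, a stand-in for the complexity prediction described above. Because all samples land on dyadic fractions, they coincide with nodes of a uniform grid, which is the bookkeeping property the abstract exploits. The tl function is a toy assumption standing in for an expensive propagation model.

```python
import numpy as np

# Toy stand-in for one expensive transmission-loss run at range r (km); a
# real run would invoke a full acoustic propagation model.
def tl(r):
    return 20.0 * np.log10(r) + 5.0 * np.sin(0.8 * r) * np.exp(-0.05 * r)

pts = {}

def refine(a, b, fa, fb, tol, depth):
    """Bisect [a, b]; keep refining wherever the midpoint deviates from the
    straight-line prediction of existing values, i.e. where the field looks
    locally complex."""
    m = 0.5 * (a + b)
    fm = tl(m)
    pts[m] = fm
    if depth > 0 and abs(fm - 0.5 * (fa + fb)) > tol:
        refine(a, m, fa, fm, tol, depth - 1)
        refine(m, b, fm, fb, tol, depth - 1)

r0, r1 = 1.0, 50.0
pts[r0], pts[r1] = tl(r0), tl(r1)
refine(r0, r1, pts[r0], pts[r1], tol=0.25, depth=10)

# Every midpoint is a dyadic fraction of [r0, r1], so each sample coincides
# with a node of the uniform grid of 2**10 + 1 points: mapping the sparse
# grid onto a uniform one is pure bookkeeping.
print(f"{len(pts)} model runs instead of {2**10 + 1} on a uniform grid")
```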

2 citations


Posted Content
18 Dec 2002
TL;DR: A novel lattice method of option valuation, especially suitable for American-style options whose values depend on multiple factors, is proposed; it allows the use of sparse lattices and thus mitigates the curse of dimensionality.
Abstract: This note proposes a method for pricing high-dimensional American options based on modern methods of multidimensional interpolation. The method allows the use of sparse grids and thus mitigates the curse of dimensionality. A framework for the pricing algorithm and the corresponding interpolation methods is discussed, and a theorem is demonstrated which suggests that the pricing method is less vulnerable to the curse of dimensionality. The method is illustrated by an application to rainbow options and compared to Least Squares Monte Carlo and other benchmarks.
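
A small sketch of the dimensionality argument: counting the distinct nodes used by a combination-technique sparse grid against a full tensor grid of the same mesh width. The counting scheme is a standard construction, not taken from the paper.

```python
from itertools import product

def full_grid_points(d, n):
    """Uniform tensor grid on [0, 1]**d with mesh width 2**-n per direction."""
    return (2**n + 1) ** d

def sparse_grid_points(d, n):
    """Distinct nodes in the union of the combination-technique component
    grids, i.e. all tensor grids whose levels satisfy l_1 + ... + l_d <= n."""
    nodes = set()
    for levels in product(range(n + 1), repeat=d):
        if sum(levels) <= n:
            axes = [[i / 2**l for i in range(2**l + 1)] for l in levels]
            nodes.update(product(*axes))
    return len(nodes)

# The gap widens rapidly with the number of factors d.
for d in (2, 3, 5):
    print(f"d={d}: sparse {sparse_grid_points(d, 5)} vs full {full_grid_points(d, 5)}")
```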

1 citation


Proceedings ArticleDOI
04 Aug 2002
TL;DR: In this paper, a deterministic maximum likelihood (ML) direction-of-arrival (DOA) estimator for unknown spatially correlated noise fields, using sensor arrays composed of multiple subarrays on a sparse grid, is presented.
Abstract: We address the problem of maximum likelihood (ML) direction-of-arrival (DOA) estimation in unknown spatially correlated noise fields using sensor arrays composed of multiple subarrays on a sparse grid. In such arrays, the noise covariance matrix has a block-diagonal structure which enables the number of nuisance noise parameters to be reduced substantially and the identifiability of the underlying DOA estimation problem to be ensured. A new deterministic ML DOA estimator is derived for the considered class of sparse sensor arrays. The proposed approach concentrates the estimation problem with respect to all nuisance parameters. In contrast to the analytic concentration used in conventional ML techniques, the proposed estimator is based on an iterative procedure, which includes a stepwise concentration of the LL (log-likelihood) function. Our algorithm is free of any further structural constraints or parametric model restrictions which are usually imposed on the noise covariance matrix and received signals in most existing ML-based approaches to DOA estimation in spatially correlated noise.
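
The snippet below is a much-simplified illustration of such a stepwise concentration for one source and two subarrays: it alternates a whitened deterministic-ML grid search over the DOA with a block-diagonal ML update of the noise covariance computed from signal-removed residuals. The geometry, signal model, and iteration count are assumptions; this is a sketch in the spirit of the abstract, not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Eight-element half-wavelength ULA split into two 4-element subarrays;
# sizes and spacing are assumptions for the illustration.
pos = np.arange(8) * 0.5
blocks = [slice(0, 4), slice(4, 8)]

def steer(theta):
    return np.exp(2j * np.pi * pos * np.sin(theta))[:, None]

# One source at 20 degrees; noise is correlated within each subarray but
# independent across subarrays, so its true covariance is block-diagonal.
N, theta0 = 200, np.deg2rad(20.0)
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
noise = np.zeros((8, N), dtype=complex)
for b in blocks:
    Lb = np.tril(0.5 * rng.standard_normal((4, 4))) + np.eye(4)
    w = (rng.standard_normal((4, N)) + 1j * rng.standard_normal((4, N))) / np.sqrt(2)
    noise[b] = Lb @ w
X = steer(theta0) @ s[None, :] + noise
R = X @ X.conj().T / N

grid = np.deg2rad(np.linspace(-60.0, 60.0, 241))
Q = np.eye(8, dtype=complex)              # initial noise covariance guess
for _ in range(5):                        # stepwise concentration
    Qi = np.linalg.inv(Q)
    # DOA step: concentrated (whitened) deterministic-ML criterion.
    def crit(theta):
        a = steer(theta)
        num = (a.conj().T @ Qi @ R @ Qi @ a).real.item()
        den = (a.conj().T @ Qi @ a).real.item()
        return num / den
    th = grid[int(np.argmax([crit(t) for t in grid]))]
    # Waveform step: least squares in the whitened domain.
    a = steer(th)
    S = (a.conj().T @ Qi @ X) / (a.conj().T @ Qi @ a)
    # Noise step: block-diagonal ML covariance of the residuals.
    E = X - a @ S
    Q = np.zeros_like(Q)
    for b in blocks:
        Q[b, b] = E[b] @ E[b].conj().T / N

print("estimated DOA (deg):", np.rad2deg(th))
```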

1 citation


01 Jan 2002
TL;DR: Detailed error analyses are given for sparse-grid function representations through the combination technique, and explicit pointwise error expressions for the representation error are given, rather than order estimates.
Abstract: Detailed error analyses are given for sparse-grid function representations through the combination technique. Two- and three-dimensional, and smooth and discontinuous functions are considered, as well as piecewise-constant and piecewise-linear interpolation techniques. Where appropriate, the results of the analyses are verified in numerical experiments. Instead of the common vertex-based function representation, cell-centered function representation is considered. Explicit, pointwise error expressions for the representation error are given, rather than order estimates. The paper contributes to the theory of sparse-grid techniques.
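
As a one-dimensional illustration of pointwise (rather than order-of) error behaviour, the snippet below measures the maximum pointwise error of a cell-centered, piecewise-constant representation of a smooth function; the observed error halves with the mesh width, consistent with a first-order bound. The test function is an assumption; the paper itself treats two and three dimensions and piecewise-linear interpolation as well.

```python
import numpy as np

# Cell-centered, piecewise-constant representation of a smooth 1-D function:
# each point x is represented by the function value at its cell's center.
f = lambda x: np.sin(2 * np.pi * x)
xs = np.linspace(0.0, 1.0, 10_001)
for n in range(3, 8):
    h = 2.0 ** -n
    centers = (np.floor(xs / h) * h + h / 2).clip(h / 2, 1 - h / 2)
    err = np.max(np.abs(f(xs) - f(centers)))
    print(f"h = 2^-{n}: max pointwise error = {err:.3e}")  # halves per level
```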
