# Variable kernel density estimation

About: Variable kernel density estimation is a research topic. Over its lifetime, 5,534 publications on this topic have been published, receiving a total of 178,612 citations.

##### Papers

[...]

01 Jan 1986

TL;DR: A monograph on density estimation, covering the kernel method for univariate and multivariate data, three important methods, and density estimation in action.

Abstract: Introduction. Survey of Existing Methods. The Kernel Method for Univariate Data. The Kernel Method for Multivariate Data. Three Important Methods. Density Estimation in Action.

15,308 citations
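The univariate kernel method treated in this monograph can be illustrated with a short sketch (illustrative code, not from the book; the `gaussian_kde` helper and the sample data are assumptions, and the bandwidth uses Silverman's well-known rule of thumb for Gaussian kernels):

```python
import numpy as np

def gaussian_kde(data, x, bandwidth):
    """Evaluate a fixed-bandwidth Gaussian kernel density estimate at points x."""
    # f_hat(x) = (1 / (n*h)) * sum_i K((x - X_i) / h), with K the standard normal pdf
    u = (x[:, None] - data[None, :]) / bandwidth
    kernels = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return kernels.sum(axis=1) / (len(data) * bandwidth)

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=500)
# Silverman's rule-of-thumb bandwidth: h = 1.06 * sigma_hat * n^(-1/5)
h = 1.06 * data.std() * len(data) ** (-1 / 5)
grid = np.linspace(-4, 4, 81)
density = gaussian_kde(data, grid, h)
```

The estimate is an average of bumps centered on the observations; the bandwidth `h` controls the trade-off between bias (oversmoothing) and variance (undersmoothing).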

[...]

TL;DR: Kernel methods have a flexible form, can be used where simple parametric models are inappropriate or difficult to specify, and offer an alternative to the Anderson (1982) Fourier transform methods.

Abstract: In this paper kernel methods for the nonparametric estimation of the utilization distribution from a random sample of locational observations made on an animal in its home range are described. They have a flexible form and so can be used where simple parametric models are found to be inappropriate or difficult to specify. Two examples are given to illustrate the fixed and adaptive kernel approaches in data analysis and to compare the methods. Various choices for the smoothing parameter used in kernel methods are discussed. Since kernel methods give an alternative to the Anderson (1982) Fourier transform methods, some comparisons are made.

3,713 citations
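The fixed-versus-adaptive distinction made above can be sketched in one dimension (an illustrative sketch, not the paper's method: the function names are assumptions, and the local bandwidths follow the standard sample-point adaptive scheme, with a pilot estimate and the geometric-mean normalizer):

```python
import numpy as np

def fixed_kde(data, x, h):
    """Fixed-bandwidth Gaussian KDE: one h shared by every observation."""
    u = (x[:, None] - data[None, :]) / h
    return (np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)).sum(axis=1) / (len(data) * h)

def adaptive_kde(data, x, h, alpha=0.5):
    """Adaptive KDE: each observation gets its own bandwidth,
    wider where a pilot estimate says the data are sparse."""
    pilot = fixed_kde(data, data, h)          # pilot density at each data point
    g = np.exp(np.mean(np.log(pilot)))        # geometric mean of pilot values
    h_i = h * (pilot / g) ** (-alpha)         # local bandwidth per observation
    u = (x[:, None] - data[None, :]) / h_i[None, :]
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return (k / h_i[None, :]).mean(axis=1)    # average of per-point kernels

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=400)
grid = np.linspace(-8, 8, 161)
dens = adaptive_kde(data, grid, h=0.3)
```

The adaptive variant smooths more in low-density regions (the sparse edges of a home range) and less near the mode, which is why it is attractive for utilization distributions.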

[...]

TL;DR: Applications of gradient estimation to pattern recognition are presented using clustering and intrinsic dimensionality problems, with the ultimate goal of providing further understanding of these problems in terms of density gradients.

Abstract: Nonparametric density gradient estimation using a generalized kernel approach is investigated. Conditions on the kernel functions are derived to guarantee asymptotic unbiasedness, consistency, and uniform consistency of the estimates. The results are generalized to obtain a simple mean-shift estimate that can be extended in a k-nearest-neighbor approach. Applications of gradient estimation to pattern recognition are presented using clustering and intrinsic dimensionality problems, with the ultimate goal of providing further understanding of these problems in terms of density gradients.

2,900 citations
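The mean-shift idea, following the density gradient uphill by repeatedly moving a point to the kernel-weighted mean of the data around it, can be sketched in one dimension (an illustrative sketch under the paper's general idea, not its exact estimator; the function name and data are assumptions):

```python
import numpy as np

def mean_shift(data, x, h, n_iter=50):
    """Move a point x uphill on the Gaussian-kernel density estimate.

    Each update replaces x with the kernel-weighted mean of the data,
    which is proportional to a step along the estimated density gradient.
    """
    for _ in range(n_iter):
        w = np.exp(-0.5 * ((x - data) / h) ** 2)  # Gaussian kernel weights
        x = (w * data).sum() / w.sum()            # shifted point = weighted mean
    return x

# Two well-separated clusters; mean-shift should find the nearest mode.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-3.0, 0.3, 200), rng.normal(3.0, 0.3, 200)])
mode = mean_shift(data, x=2.0, h=0.5)
```

Running the update from many starting points and grouping those that converge to the same mode yields the clustering application mentioned in the abstract.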

[...]

03 Dec 2007

TL;DR: Two sets of random features are explored, convergence bounds are provided on their ability to approximate various radial basis kernels, and it is shown that in large-scale classification and regression tasks, linear machine learning algorithms applied to these features outperform state-of-the-art large-scale kernel machines.

Abstract: To accelerate the training of kernel machines, we propose to map the input data to a randomized low-dimensional feature space and then apply existing fast linear methods. The features are designed so that the inner products of the transformed data are approximately equal to those in the feature space of a user specified shift-invariant kernel. We explore two sets of random features, provide convergence bounds on their ability to approximate various radial basis kernels, and show that in large-scale classification and regression tasks linear machine learning algorithms applied to these features outperform state-of-the-art large-scale kernel machines.

2,755 citations
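The random-feature map for the Gaussian RBF kernel can be sketched as follows (an illustrative sketch of the random Fourier feature construction for one specific shift-invariant kernel; the function name is an assumption): frequencies are sampled from the kernel's Fourier transform, here a normal distribution, so that inner products of the features approximate the kernel.

```python
import numpy as np

def random_fourier_features(X, D, gamma, rng):
    """Map X of shape (n, d) to D features so that z(x) . z(y) approximates
    the RBF kernel exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    # For this kernel, frequencies are drawn from N(0, 2*gamma * I).
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))
    b = rng.uniform(0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
gamma = 0.5
Z = random_fourier_features(X, D=4000, gamma=gamma, rng=rng)
K_approx = Z @ Z.T  # approximates the exact RBF Gram matrix
```

After the mapping, any fast linear method (linear regression, linear SVM) trained on `Z` behaves approximately like the corresponding kernel machine, at a fraction of the cost.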

[...]

TL;DR: In this paper, the authors abandon the normality assumption and use statistical methods for nonparametric density estimation in a naive Bayesian classifier, comparing two approaches: modeling each conditional distribution with a single Gaussian, and using nonparametric kernel density estimation.

Abstract: When modeling a probability distribution with a Bayesian network, we are faced with the problem of how to handle continuous variables. Most previous work has either solved the problem by discretizing, or assumed that the data are generated by a single Gaussian. In this paper we abandon the normality assumption and instead use statistical methods for nonparametric density estimation. For a naive Bayesian classifier, we present experimental results on a variety of natural and artificial domains, comparing two methods of density estimation: assuming normality and modeling each conditional distribution with a single Gaussian; and using nonparametric kernel density estimation. We observe large reductions in error on several natural and artificial data sets, which suggests that kernel estimation is a useful tool for learning Bayesian models.

2,523 citations
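The kernel-estimation variant of the naive Bayesian classifier compared above can be sketched as follows (an illustrative sketch, not the paper's implementation; the function names, the shared bandwidth `h`, and the synthetic data are all assumptions). Each class-conditional feature density is a one-dimensional kernel estimate rather than a single Gaussian:

```python
import numpy as np

def kde_log_likelihood(train_col, x_col, h):
    """Log of a 1-D Gaussian KDE fit on train_col, evaluated at x_col."""
    u = (x_col[:, None] - train_col[None, :]) / h
    dens = (np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)).mean(axis=1) / h
    return np.log(dens + 1e-300)  # guard against log(0)

def kde_naive_bayes_predict(X_train, y_train, X_test, h=0.5):
    """Naive Bayes where each class-conditional feature density is a KDE."""
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        log_prior = np.log(len(Xc) / len(X_train))
        # Naive independence: sum per-feature log-likelihoods.
        ll = sum(kde_log_likelihood(Xc[:, j], X_test[:, j], h)
                 for j in range(X_train.shape[1]))
        scores.append(log_prior + ll)
    return classes[np.argmax(scores, axis=0)]

rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0.0, 1.0, size=(200, 2)),
                     rng.normal(3.0, 1.0, size=(200, 2))])
y_train = np.array([0] * 200 + [1] * 200)
X_test = np.vstack([rng.normal(0.0, 1.0, size=(100, 2)),
                    rng.normal(3.0, 1.0, size=(100, 2))])
y_true = np.array([0] * 100 + [1] * 100)
pred = kde_naive_bayes_predict(X_train, y_train, X_test)
```

Replacing the single-Gaussian class-conditionals with kernel estimates is what lets the classifier track skewed or multimodal feature distributions, which is the source of the error reductions the abstract reports.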